Hello everyone,
I stumbled upon a strange problem yesterday.
Here's the situation:
On a C6678, we have a master-slave application (a single image running on all cores) with SYS/BIOS, where the Master notifies the slaves (via IPC/Notify) to start working on data located in MSMC memory. Each slave works on a non-overlapping part of the data (directly in MSMC) and then notifies the Master that its work is done.
The L1P and L1D caches are enabled; L2 is configured entirely as SRAM (no cache).
When all the slaves have finished and notified the Master, the Master reads ONE sample of the data, e.g. a = array[120];
We also use the CCS Expressions view at this point to check the values.
a) Without the read, with cache enabled: the whole output array has the correct values. 100% right.
b) With the read, with cache enabled: the whole array has correct values EXCEPT 4 samples starting at the read: array[120] to array[123] are 0. In total, 128 bits are wrong.
c) With the read, with L1D cache disabled on the Master only (still enabled on the slaves): everything is fine again.
I don't get how a single READ can corrupt 128 bits of data in MSMC RAM.
Any ideas? Am I missing cache coherency operations, or is it something else?
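For reference, we currently issue no explicit coherency calls around the shared buffer. If that is the problem, I assume the fix would look roughly like the sketch below, using the SYS/BIOS Cache module (`sharedBuf`, `slaveSlice`, and the byte counts are placeholder names for illustration, not our actual symbols):

```c
/* Hedged sketch only -- placeholder names, not our real code. */
#include <ti/sysbios/hal/Cache.h>

/* On each slave, after it has written its slice of the MSMC buffer:
 * write back and invalidate the slave's L1D lines covering that slice. */
Cache_wbInv((Ptr)slaveSlice, sliceBytes, Cache_Type_L1D, TRUE);

/* On the Master, before reading any results: invalidate possibly
 * stale L1D lines so the read fetches fresh data from MSMC. */
Cache_inv((Ptr)sharedBuf, bufBytes, Cache_Type_L1D, TRUE);
a = sharedBuf[120];
```

Is that the kind of sequence that is expected here, or does MSMC on the C6678 need something more (e.g. prefetch/XMC considerations)?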
Thank you
Clément