Hi all,
I've got a program that runs an RMS algorithm on a TM4C123GH6PGE (formerly LM4F232H5QD) to perform an equipment-protection function. We take a full line cycle of current samples at 60 Hz, square each one, sum them, divide by the number of samples, and then take the square root of the result.
Obviously a fairly math-intensive operation.
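In case it helps, here is a minimal sketch of the calculation I described (this is not my actual code; the function name, sample count, and buffer are placeholders):

```c
#include <math.h>
#include <stdint.h>

/* Placeholder sketch of the RMS calculation: square each sample,
 * average the squares, then take the square root. */
double compute_rms(const double *samples, uint32_t count)
{
    double sum_sq = 0.0;

    for (uint32_t i = 0; i < count; i++) {
        sum_sq += samples[i] * samples[i];   /* square and accumulate */
    }

    return sqrt(sum_sq / (double)count);     /* mean of squares, then sqrt */
}
```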
My problem is that our numbers are coming out wrong: they are much higher than the true value. I took the data in our input buffer, imported it into Excel, ran the same calculation there, and ended up with a lower number. I've checked the code and it appears to implement the algorithm correctly, so now I suspect I'm losing precision somewhere. I'm using double-precision floating point, and I'm wondering whether the multiply, divide, or sqrt operations might be giving me lower-precision answers.
For reference, I already know about the CMSIS DSP library for the LM4/TM4C line, but I have not yet integrated it. I am currently using the standard C multiply and divide operators and the standard sqrt() function. Do I need CMSIS to avoid precision loss?
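For what it's worth, I believe the CMSIS-DSP function I'd be looking at is arm_rms_f32, which (if I'm reading the documentation right) operates on single-precision float32_t data rather than doubles. A rough, untested sketch of how I'd expect to call it (names are placeholders):

```c
#include "arm_math.h"   /* CMSIS-DSP header */

/* Rough sketch only -- not integrated or tested yet.
 * arm_rms_f32 computes the RMS of a block of single-precision samples. */
float32_t rms_via_cmsis(float32_t *samples, uint32_t count)
{
    float32_t result;
    arm_rms_f32(samples, count, &result);   /* writes the RMS into result */
    return result;
}
```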