You’ve bought some standards and made a multi-point curve. You might think your work to improve laboratory accuracy is done, but then doubt starts to creep into your mind. What if something’s gone wrong?! What if you run a sample expecting one answer and your instrument tells you another?! Is the sample bad, or is there a problem with your calibration curve? Has something gone wrong somewhere in your plant’s process, or do you have a bad batch of calibration standards?!

The first step in ensuring that your calibration curve is up to par is checking the **Correlation Coefficient**. I know, I know….more math….but stick with me here because it’s important! The R² value of your calibration curve carries a lot of statistical power. No one wants to talk about statistics, but it has everything to do with how well your instrument is calibrated.

In a nutshell, the correlation coefficient gives you an idea of how well your standards relate to each other. Strictly speaking, the correlation coefficient R can range from -1.0 to +1.0, and its square, R², ranges from 0 to 1.0. On your instruments, you should look for an R² value as close to 1.0 as possible! Values like R² = 0.9987, R² = 0.99943, or R² = 0.9963 are examples of what you’re looking for. The closer your R² value is to 1.0, the stronger the relationship between your standards. A strong relationship between standards leads to more confidence in the reported results for your samples. Remember that your standard value points hug your samples, and no one likes crappy, wet-noodle-armed hugs! The R² value is listed somewhere in your instrument’s software; you might just have to look around for it.
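If you’re curious what your instrument’s software is doing behind the scenes, here’s a minimal sketch in plain Python: fit a least-squares line through some calibration points and compute R² from how well the line fits. The concentration and response values here are made-up example data, not real standards.

```python
# Hypothetical calibration data: standard concentrations and the
# instrument signal measured for each (values are illustrative only).
concentrations = [1.0, 2.0, 5.0, 10.0, 20.0]
responses = [0.11, 0.20, 0.52, 0.99, 2.03]

n = len(concentrations)
mean_x = sum(concentrations) / n
mean_y = sum(responses) / n

# Least-squares slope and intercept of response vs. concentration
sxx = sum((x - mean_x) ** 2 for x in concentrations)
sxy = sum((x - mean_x) * (y - mean_y)
          for x, y in zip(concentrations, responses))
slope = sxy / sxx
intercept = mean_y - slope * mean_x

# R^2 = 1 - (residual sum of squares / total sum of squares)
ss_res = sum((y - (slope * x + intercept)) ** 2
             for x, y in zip(concentrations, responses))
ss_tot = sum((y - mean_y) ** 2 for y in responses)
r_squared = 1 - ss_res / ss_tot
print(f"R^2 = {r_squared:.4f}")
```

Because these example points lie almost perfectly on a line, the sketch reports an R² very close to 1.0, exactly the kind of tight hug you want from your own curve.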

So you’ve got a strong curve, but calibration curves can be strong….but wrong! How can you check the accuracy of the standards you used to make the curve?

There is a way to prove that your standards and instrument are operating correctly. We are going to employ the Dr. Sheldon Cooper of standards….the validation standard! Just like Dr. Cooper, the validation standard knows more than you, and it isn’t afraid to tell you when your calibration is wrong!

The validation standard should be a standard **SEPARATE** from your curve! Buy it from an alternative supplier, buy a different lot of a standard you use in your curve, or buy four standards and pick one to use only for validation….just don’t use your validation standard as part of your calibration curve.

If you just analyze one of your calibration standards again as a sample, you will get the right answer 100% of the time….because somewhere in the software you’ve entered those values as the answer. This might look promising, but it tells you ZERO useful information about the accuracy and correctness of your calibration curve. The validation standard’s “job” is essentially to be an unknown sample that you secretly know the answer to.

A good basic rule of thumb is that the difference between the known value of your validation standard and the instrument’s reported result based on your curve should be no more than 10%. We’ll call this “Validation Recovery” for ease of terminology. For most applications, 10% is much looser than the recovery should be, but if you’ve just started down the path of improving your laboratory accuracy….it’s a good basic place to start.
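The rule of thumb above boils down to one line of arithmetic: the percent difference between the certified value and the reported value. Here’s a minimal sketch of that check; the function name and the example values (a 50 ppm validation standard reading back as 48.7 ppm) are assumptions for illustration.

```python
def validation_recovery(known_value, reported_value):
    """Percent difference between the certified value of the
    validation standard and what the calibration curve reports."""
    return abs(reported_value - known_value) / known_value * 100

# Hypothetical example: a 50.0 ppm validation standard that the
# instrument reports as 48.7 ppm using the current curve.
known = 50.0
reported = 48.7

recovery = validation_recovery(known, reported)
print(f"Validation recovery: {recovery:.1f}% off the known value")
if recovery <= 10:
    print("Within the 10% rule of thumb -- curve looks OK")
else:
    print("Outside 10% -- investigate your curve or standards")
```

In this made-up case the result is 2.6% off the known value, comfortably inside the 10% starting threshold.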

From here we are going to have to dive into a bit more math, and I think we can all agree that one math topic per post is frankly too much math!

Next week I’ll cover the basic statistics of calculating a more precise validation recovery range for your instrument, tracking validation recovery, and how you can use that information to assess the performance of your instrument!
