Lab Queries: How Low Can You Go??

Welcome back from Thanksgiving Break!! 

This week we will crack open the analytical textbook and cover more math!

We are going to figure out how to go low.  I’m not talking about dropping it low on the dance floor….or limbo, although if anyone is interested in some friendly competition I’m sure we can arrange for that to happen at next year’s FELC.  Any takers!? 😊

How Low Can You Go??

Sure, I know those instrument companies are big on posting low detection limits for all of their instruments, but you can’t take that answer as the gospel truth.  Those posted detection limits are created under the most ideal operating conditions.  It’s the same as car companies who post that their brand new, super shiny, fresh off the production line vehicle gets an average 48 MPG HIGHWAY…..we all know that’s not true! Maybe it’ll happen once if the wind is blowing in the right direction….but it really isn’t a fair indicator of how your vehicle is going to operate day in and day out.  Standard detection limits on instruments are the same way: it might happen, but it probably won’t.

Setting detection limits is a journey you and your instruments are going to have to go on together.


It’s after Thanksgiving….I have no Christmas shame!

You may or may not have noticed, but several of the tests we run on ethanol, at least finished product ethanol, are searching for an answer very near to 0.0…..or as close as we can realistically get.

  • Methanol
  • Copper
  • Chloride

These are all test results that you’d typically expect to see very low levels on.  How do you know that your instrument is capable of seeing levels that low effectively?  Low-level detection is one of the most difficult things we ask of our instruments, so it’s important that we know exactly what we can and cannot expect.

Calculating your Detection Limit, or DL, is a pretty easy and straightforward process.  First things first, you’ll need to run a good calibration curve, but let’s assume you’ve read all the blogs and you’ve already got that step done!  Find a standard with low values, similar to where you hypothesize your DL might be.  You’re going to analyze that standard 7-10 times.  I don’t recommend doing this over time, just run them back to back.  Unlike control limits, which are calculated based on instrument shift over time, DL can be calculated based on a snapshot of the instrument.

For the math portion, start by taking the standard deviation across all injections for all the components.  My table also shows the average, which you won’t directly need for the DL calculation, but it is good practice to check your repeatability recovery….the average over the known value*100 will get you there.  If your repeatability recovery isn’t inside of the control levels we discussed in Common Calibration Conundrums and Other Laboratory Queries Part 4, you’ll want to rerun the study with a higher standard level.  It could be that you’re too close to your instrument’s detection limit.

The Detection Limit is then calculated as the Standard Deviation value times 3.143.  (That 3.143 isn’t a magic number, it’s the Student’s t-value for 7 replicates at the 99% confidence level.)

Statistically, there are several ways of calculating a DL, but this is the easiest and for most laboratory purposes will work just fine.  In my above example, my IC can see sulfate peaks down to 0.0026mg/L and chloride peaks down to 0.0084mg/L.  Now, that’s the absolute, rock-bottom, as-low-as-you-can-go level on my instrument.  Do I routinely analyze samples at that level….no.  There is a practical level for using your instrument.  In my case, I don’t consider my instrument practically capable of analyzing samples below 0.25mg/L, and I wouldn’t report any levels lower than that.
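The DL math above can be sketched in a few lines of Python.  The replicate results and the known standard value below are made up for illustration (they are not the values from my table):

```python
# A minimal sketch of the DL calculation: 7 back-to-back injections of a
# low-level standard, then DL = standard deviation x 3.143.
import statistics

known_value = 0.10  # mg/L -- hypothetical certified value of the low standard
replicates = [0.102, 0.098, 0.101, 0.097, 0.103, 0.099, 0.100]  # 7 injections

avg = statistics.mean(replicates)
sd = statistics.stdev(replicates)  # sample standard deviation (n-1)

# Detection Limit: standard deviation times 3.143 (Student's t for 7 reps, 99%)
dl = sd * 3.143

# Repeatability recovery: average over the known value * 100
recovery = avg / known_value * 100

print(f"Average:  {avg:.4f} mg/L")
print(f"Std Dev:  {sd:.4f} mg/L")
print(f"DL:       {dl:.4f} mg/L")
print(f"Repeatability recovery: {recovery:.1f}%")
```

If the repeatability recovery falls outside your control levels, rerun the study with a higher standard, just like the post describes.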

Hopefully this will help you dial in the lower limits of your instrument systems, and guide you toward some practical levels of analysis and reporting.

Part 3: How to Know if Your Calibration Curve is Correct

You’ve bought some standards and you made a multi-point curve.  You might think your work to improve laboratory accuracy is over, but then doubt starts to creep into your mind.  What if something’s gone wrong!?!  What if you run a sample and you expect one answer and your instrument tells you another?!  Is the sample bad, or is there a problem with your calibration curve?  Has something gone wrong somewhere in your plant’s process, or do you have a bad batch of calibration standards!?

The first step in ensuring that your calibration curve is up to par is checking the Correlation Coefficient.  I know, I know….more math….but stick with me here because it’s important!  The R2 value of your calibration curve has a large amount of statistical power.  No one wants to talk about statistics, but it has a lot to do with how well your instrument is calibrated.

In a nutshell, the correlation coefficient gives you an idea of how well the standards relate to each other.  The correlation coefficient r can statistically range from -1.0 to +1.0, and since R2 is its square, R2 ranges from 0 to 1.0.  On your instruments, you should look for a value as close to 1.0 as possible!  Values of R2=0.9987, R2=0.99943, or R2=0.9963 are examples of what you are looking for.  The closer to 1.0 your R2 value is, the stronger the relationship between your standards.  A strong relationship between standards leads to more confidence in the reported results for your samples.  Remember that your standard value points hug your samples, and no one likes crappy, wet noodle armed hugs! It’s listed somewhere in your instrument’s software, you might just have to look around for it.
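If you want to see where R2 actually comes from (or double-check what your software reports), here’s a minimal sketch using made-up standard concentrations and instrument responses:

```python
# Compute R^2 for a linear calibration curve: square of the Pearson
# correlation coefficient between concentration and response.
concentrations = [1.0, 2.5, 5.0, 7.5, 10.0]  # ppm -- hypothetical standards
responses = [10.2, 25.1, 49.8, 75.3, 99.9]   # peak areas -- hypothetical

n = len(concentrations)
mean_x = sum(concentrations) / n
mean_y = sum(responses) / n

# Pearson r = covariance / (spread in x * spread in y)
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(concentrations, responses))
sxx = sum((x - mean_x) ** 2 for x in concentrations)
syy = sum((y - mean_y) ** 2 for y in responses)

r = sxy / (sxx * syy) ** 0.5
r_squared = r ** 2
print(f"R^2 = {r_squared:.5f}")  # a well-behaved curve should be close to 1.0
```

A nearly linear set of standards like this one lands very close to R2=1.0; scattered or drifting standards pull it down.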

So you’ve got a strong curve, but calibration curves can be strong….but wrong!  How can you check the accuracy of the standards you used to make the curve?

There is a way to prove that your standards and instrument are operating correctly.  We are going to employ the Dr. Sheldon Cooper of standards….the validation standard!  Just like Dr. Cooper, the validation standard knows more than you, and it isn’t afraid to tell you when your calibration is wrong!

The validation standard should be a standard SEPARATE from your curve!  Buy it from an alternative supplier, buy a different lot of a standard you use in your curve, or buy 4 standards and just pick one to use only as a validation….just don’t use your validation standard as part of your calibration curve.

If you just analyze one of your calibration standards again as a sample, you will get the right answer 100% of the time….because somewhere in the software you’ve entered those values in as the answer.  This might look promising, but it tells you ZERO useful information about the accuracy and correctness of your calibration curve.  The validation standard’s “job” is essentially to be an unknown sample that you secretly know the answer to.

A good basic rule of thumb is that the difference between the known value of your validation standard and the instrument’s reported result based on your curve should vary no more than 10%.  We’ll call this “Validation Recovery” for ease of terminology.  For most applications, 10% is much larger than the recovery should be, but if you’re just starting down the path of improving your laboratory accuracy….it’s a good basic place to start.

From here we are going to have to dive into a bit more math, and I think we can all agree that one math topic per post is frankly too much math!

Next week I’ll cover the basic statistics of calculating a more precise validation recovery range for your instrument, tracking validation recovery, and how you can use that information to assess the performance of your instrument!

Part 2: What Creates an Accurate Calibration Curve

Welcome back for the second week of Common Calibration Conundrums and Other Laboratory Queries!

 Before we get back into the science fun, I have an exciting announcement.  Bion Sciences now has an Instagram account!  Come follow along with us!  There will be blog notifications, access to our website, and photos of the Bion Sciences team in our natural, laboratory habitat!

@bionsciences605

Now that we’ve covered how often you should calibrate an instrument, we should talk about how you pick standards to use for your curve and how many standards you should be using.

Think back to algebra when we first learned the equation for a line.  If your math teacher was anything like my math teacher, they drilled into your brain that 3 points are needed to accurately verify a line and the corresponding slope.  One point is simply a dot in space, and two points allow for too much swing in the correlation of the line, but three points……three points is really where the magic starts to happen!

Don’t get overwhelmed with the math, I’ll simplify this for you!

The accuracy of your line, and therefore your results, improves dramatically as more data points are added to the line.  Think back to our puppy from last week.  If you only tell the puppy once a day, every day to potty outside, your results might be somewhat….questionable.  Now imagine you tell that puppy 3 times a day, every day, to potty outside.  The chances that puppy is going to catch on greatly improve!  The same goes for your instruments.

When it comes to calibration points, more is always better.  Again, there is no such thing as over calibrating an instrument!

The more data points you provide the instrument, the more accurate the answers the instrument provides you will be.  The only issue that arises from more calibration standards is the time it takes to analyze each standard.  It’s important to find a balance between the most accurate curve you can produce and managing the time restrictions inside your lab.

To deal with time restrictions and busy schedules, sometimes improving your accuracy might involve decreasing the number of calibration points.  (Cue massive shock and awe as I appear to contradict everything I’ve said up to this point! 😊)

Let me give you an example.  Suppose you run 10 calibration standards….but because it takes several hours to build that impressive calibration curve you only calibrate your instrument every other month.  By the end of the second month, how true do you think those 10 data points are….I would argue they probably aren’t accurate anymore.  Instead, what if you ran 5 points once a week, or 3 points every day??  A smaller, more current curve will almost certainly produce more accurate data.  In the world of the laboratory, accuracy is the name of the game!

So, you’re staring at a list of calibration standards and you find yourself wondering, “How do I figure out what to order?!”

A good rule of thumb is that your calibration standards should “hug” your expected value.  For example, if your anticipated answer is 3ppm, you wouldn’t want to make a curve using points 0.5ppm, 1.0 ppm, and 1.5 ppm.  You also wouldn’t want to use standards that all have values above your expected answer….meaning that 5.0ppm, 8.5ppm, and 12.25ppm also wouldn’t make a good curve for a value of 3ppm.

For an expected answer of 3ppm, I might select standards with values of 1.0 ppm, 2.5ppm, and 5.0ppm.  Typically, the wider the range of expected answers, the larger the range of standards should be.  Remember, your values need to be hugged by calibration points, not left to fend for themselves on the outskirts of your curve.  Every once in a while you might have a stray value that exceeds the boundaries of your calibration points, and that’s fine….every once in a while!  On the day-to-day and with your typical samples though, your results will be most correct if they are contained within the confines of your calibration data points!
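The hugging rule boils down to a simple bracket check.  Here’s a tiny sketch using the example values from above (the function name is just for illustration):

```python
# Does the expected value fall inside the range of the calibration standards?
def standards_hug(expected, standards):
    """Return True if the expected value sits inside the calibration range."""
    return min(standards) <= expected <= max(standards)

print(standards_hug(3.0, [1.0, 2.5, 5.0]))    # bracketed -- good curve
print(standards_hug(3.0, [0.5, 1.0, 1.5]))    # all standards too low
print(standards_hug(3.0, [5.0, 8.5, 12.25]))  # all standards too high
```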

Next week will be Part 3: How to Know if Your Calibration Curve is Correct.  It’s Big Bang Theory-themed, so be sure to keep your eyes peeled for that!  In the meantime, if you have any questions about this week’s post or any ideas for future posts, be sure to leave a comment and let me know!