Thursday, August 6, 2015

Meter vs Meter, or a quick shot at some Internet and marketing diabetes memes.

What about the BG meter issues?

What do we actually measure?

Well, in principle, a BG meter test measures capillary blood glucose. It is constantly changing, sometimes very quickly. It differs from interstitial glucose, venous blood glucose, arterial blood glucose, etc. On top of that, the differences aren't static. Think about it in terms of shifted waves going up and down. Complex? Yes. But even that is a simplification: think about them as shifted waves going up and down where the shift itself is not constant. Going that deep isn't very useful. What is useful is a reasonably representative snapshot of some value you want to keep in some range. There is no need to hunt for the perfectly accurate glucose value: it doesn't exist.

Back to BG meters

What do I want from a BG meter? Within limits, I don't care that much about accuracy: if the meter tells me I am at 90 mg/dL when it should have measured 100 mg/dL, that is fine, and 110 mg/dL is also fine. I am measuring a fleeting local reality that does not exist as an absolute truth. What I do care a lot about is precision, or consistency. If my fleeting reality was at 100 mg/dL and just dropped to 80 mg/dL, my precise but inaccurate BG meter would tell me that I fell from 90 mg/dL to 72 mg/dL, while an accurate but imprecise meter could have given me a stable value.

Of course, ideally, you would want a BG meter that is both precise and accurate. But a device that is consistently biased 10 mg/dL low with a +/- 5% spread is obviously less dangerous than a perfectly calibrated device that works at +/- 15%.
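To make that concrete, here is a tiny Python sketch of the two hypothetical devices described above: one biased 10 mg/dL low but tight at +/- 5%, one unbiased but spread over +/- 15%, both reading a true drop from 100 to 80 mg/dL. The noise models are made up for illustration only.

```python
# Toy sketch: a consistently biased but precise meter vs an unbiased but
# imprecise one. The noise models are assumptions, not real meter specs.
import random

def biased_precise(true_bg):
    # reads about 10 mg/dL low, with a tight +/- 5% spread
    return (true_bg - 10) * random.uniform(0.95, 1.05)

def accurate_imprecise(true_bg):
    # no bias at all, but a wide +/- 15% spread
    return true_bg * random.uniform(0.85, 1.15)

for true_bg in (100, 80):
    print(true_bg,
          round(biased_precise(true_bg)),
          round(accurate_imprecise(true_bg)))
```

Run it a few times: the biased meter shows the drop every single time, while the unbiased but imprecise one will occasionally show a flat or even rising pair.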

Internet meme 1: "Your reader is only accurate +/- 20%"

Where does it come from? A misunderstanding, in the coverage by most diabetic sites, of the ISO BG meter criteria and tests, which basically state that 95% of the time, the results should fall within 20% (or 15%) of the "correct" value.

How is it typically interpreted? As "The result you got is +/- 20% anyway..." 

I am sorry to say that it is total bull****. Anyone with a basic high school statistical education has been given a free hint with the 95%.
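Here is that free hint, spelled out in a few lines of Python. Assuming the errors are roughly normally distributed (an assumption, but as we will see below not an unreasonable one), "95% within +/- 20%" pins the standard deviation of the error at about 10%, which means roughly two thirds of the results should land within +/- 10% of the reference.

```python
# A minimal sketch of what "95% of results within +/- 20%" implies,
# under the assumption of roughly normal errors.
from scipy.stats import norm

# 95% within +/- 20% -> standard deviation ~ 20% / 1.96 ~ 10.2%
sigma = 20 / norm.ppf(0.975)

# Expected fraction of results within +/- 10% under that assumption
within_10 = norm.cdf(10 / sigma) - norm.cdf(-10 / sigma)
print(f"sigma ~ {sigma:.1f}%, expected within +/- 10%: {within_10:.0%}")
```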

Let's look at a real example. Here is the data from the 38 double BG meter tests we have done, within a 120-second interval, since January 1st, 2015. I am actually cheating a bit here: we did 40 double BG meter tests, but we'll get to that later. These tests are the ones we did for the initial calibration of the Dexcom sensor, plus the random double checks we did when we just wanted to be sure.

[70, 90, 95, 65, 84, 242, 81, 69, 119, 88, 110, 109, 85, 66, 182, 162, 245, 53, 55, 111, 140, 119, 170, 56, 77, 80, 234, 78, 55, 129, 79, 93, 77, 88, 135, 77, 77, 124]

[64, 85, 104, 60, 91, 268, 82, 78, 108, 74, 102, 108, 86, 64, 189, 147, 240, 47, 58, 100, 134, 109, 160, 54, 81, 89, 206, 75, 55, 128, 93, 89, 75, 91, 146, 76, 69, 130]

Here are the differences:

[6, 5, -9, 5, -7, -26, -1, -9, 11, 14, 8, 1, -1, 2, -7, 15, 5, 6, -3, 11, 6, 10, 10, 2, -4, -9, 28, 3, 0, 1, -14, 4, 2, -3, -11, 1, 8, -6]

Let's plot that data in terms of error percentage. Does that ring a bell?
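For those who want to reproduce the plot, here is a minimal Python sketch. One assumption on my side: I take the error percentage of each pair as the difference relative to the pair's mean; another reference value would change the numbers slightly, but not the shape of the distribution.

```python
# Sketch: paired differences expressed as a percentage of the pair's mean,
# then plotted as a histogram (assumed reconstruction of the plot above).
import matplotlib.pyplot as plt

meter_a = [70, 90, 95, 65, 84, 242, 81, 69, 119, 88, 110, 109, 85, 66, 182,
           162, 245, 53, 55, 111, 140, 119, 170, 56, 77, 80, 234, 78, 55,
           129, 79, 93, 77, 88, 135, 77, 77, 124]
meter_b = [64, 85, 104, 60, 91, 268, 82, 78, 108, 74, 102, 108, 86, 64, 189,
           147, 240, 47, 58, 100, 134, 109, 160, 54, 81, 89, 206, 75, 55,
           128, 93, 89, 75, 91, 146, 76, 69, 130]

errors_pct = [100 * (a - b) / ((a + b) / 2) for a, b in zip(meter_a, meter_b)]

plt.hist(errors_pct, bins=10)
plt.xlabel("paired difference (% of pair mean)")
plt.ylabel("count")
plt.show()
```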



Even if you know nothing about statistics and don't recognize the curve, you can't fail to notice that most of the results fall in the +/- 10% range. Strictly speaking, we can't say on the basis of that sample alone that the distribution is purely normal, but it is certainly much closer to normal than a random +/- 20% error would be. My hunch is that it is essentially normal, plus a time drift (BG can change in 2 minutes), plus an "accidental" component.
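If you want to go one step beyond eyeballing the histogram, a quick (and by no means definitive) normality check can be run on the errors_pct list computed in the sketch above.

```python
# Quick sanity check of the "roughly normal" hunch, reusing errors_pct
# from the previous sketch. Not a proof, just an indication.
from scipy.stats import shapiro

stat, p_value = shapiro(errors_pct)
print(f"Shapiro-Wilk p-value: {p_value:.3f}")
# A large p-value means the sample is compatible with a normal
# distribution; a very small one would argue against it.
```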

I said above that I removed two data points. One of the tests we did was, in fact, a triple test. Why? Because it was very visible that there wasn't enough blood in the well. That test gave us 101 mg/dL while the two controls with enough blood gave us 124 and 130 mg/dL.

The other removed test is more interesting. What would have happened if we had included it?


The values returned by the two BG meter tests were 283 mg/dL and 24 mg/dL. Something was wrong. And that something was a failing battery in the BG meter.

That illustrates the fact that while BG meters will generally deliver results around the fleeting "correct" value, they aren't immune to extraordinary errors. Dextrose powder on the fingers, water, or a lack of blood are typical factors that will lead to inconsistent results. The list is long but, in practice, most of them are avoidable (the topic of another blog post, maybe).

At this point, I hope I have put the "anything +/- 20%" Internet meme to rest. It should be rephrased into something like "very often quite close, sometimes 20% off, potentially anywhere if not used properly".

The emerging marketing meme

Now, let's have a look at the currently emerging meme: "CGMs are now more accurate than BG Meters".

Before I start, I'd like to stress that I am totally convinced that CGMs are the best tool to manage your diabetes. I can't stress that enough. But the reason why they are the best tool is not that they are more accurate. The reason is that they allow patients to understand how their diabetes works, how their body reacts to meals and exercise.

But that is not necessarily how they are marketed. Dexcom said in one of their conference calls (I summarize) that CGMs were now more accurate than BGMs, pitting their best MARD (around 10-11%), which most people don't get in real life, against the above +/- 20% Internet BG meter meme.

The problem is that the current G4 needs BG meter calibrations. You can't logically claim that a measuring instrument B calibrated with a measuring instrument A will be more accurate than instrument A.

Any unavoidable systematic error in instrument A will be added to the error of instrument B in a complex way (error propagation analysis). Even if you are using "optimal" calibrations, you will still introduce a bit of additional error, as Abbott has apparently shown in its Libre papers.

[Note: can be skipped if you don't want to nitpick... You could actually calibrate a device B with an inaccurate device A and get device B to perform much better than device A, provided you base the calibration on a large number of tests with device A. If you do 2, 4, 8, 16, 32, 64... inaccurate tests, you reduce the random error by a factor of about 1.4 at each step. Unfortunately, unless you want to do a lot of simultaneous blood tests, you are unlikely to approach perfection.]
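Here is a small simulation of that nitpick, with made-up numbers (a hypothetical true value of 120 mg/dL and a meter noise of 12 mg/dL, roughly 10%): averaging more calibration tests shrinks the random error passed on to the calibrated device by roughly 1.4 each time the number of tests doubles.

```python
# Simulation sketch: the random part of the calibration error shrinks
# roughly as 1/sqrt(n) when n blood tests are averaged.
# TRUE_BG and NOISE_SD are assumptions chosen for illustration.
import random
import statistics

TRUE_BG = 120      # hypothetical true value at calibration time
NOISE_SD = 12      # hypothetical meter noise, about 10%

def calibration_error_sd(n_tests, trials=5000):
    errors = []
    for _ in range(trials):
        readings = [random.gauss(TRUE_BG, NOISE_SD) for _ in range(n_tests)]
        errors.append(statistics.fmean(readings) - TRUE_BG)
    return statistics.pstdev(errors)

for n in (1, 2, 4, 8, 16, 32, 64):
    print(n, round(calibration_error_sd(n), 2))
```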

And lastly

How did the BG meter work vs. itself in a more conventional medical view? In other words, how consistent was it?


In other words, excluding any bias, our BG meter works within the latest ISO spec in terms of result consistency, and the ISO spec does not mean that its results are randomly distributed in the +/- 20% (or 15%) range.
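For completeness, here is how such a consistency figure can be computed from the errors_pct list of the earlier sketch. Again, this takes the difference relative to the mean of each pair, which is a simplification and not the exact ISO procedure.

```python
# Rough consistency check, reusing errors_pct from the histogram sketch:
# what fraction of the paired readings agree within 15% and within 20%?
within_15 = sum(abs(e) <= 15 for e in errors_pct) / len(errors_pct)
within_20 = sum(abs(e) <= 20 for e in errors_pct) / len(errors_pct)
print(f"within 15%: {within_15:.0%}, within 20%: {within_20:.0%}")
```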
