It almost immediately seemed a bit "fishy". Surely, the Libre wasn't immune to the usual BG-ISIG delay. Could it be that it was actually wrong but lucky? But this kind of freak occurrence repeated itself throughout our Libre phase. The Libre was systematically ahead of the Dexcom G4. After the first few hours of lowish startup readings (a behavior we saw in all our sensors except the ones we pre-inserted), it caught up and started to run ahead.
However, spot checks remained peculiar: correct most of the time, but with a distinct overshoot when BG was rising fast. I couldn't shake the feeling that something strange was going on. This "paranoia" was also fueled by the observation that the Libre historical "average" value wasn't written at the end of the period, but after a certain delay. The Libre became, at least in my mind, the first "revisionist" CGM. Some of the spot checks did not materialize in the period averages, but did fit quite nicely with what a predictor would have calculated.
At that point, I became convinced the Libre spot checks were, in times of changing BG, predictive.
As an amateur observer of the tools we use to manage our diabetes, I was also a bit shocked by the use of prediction. I guess my expectation was that a CGM would try to match its current reality as closely as it could, and that was it. None of Abbott's material mentioned "predictive" as far as I could tell. Interestingly enough, the Libre RAW data that I had been able to mostly understand in December wasn't always showing these spikes either. Approximate averages of the RAW data closely matched the historical values, more than they matched the overshooting spot checks that never ultimately materialized in BG. Still, I thought I was wrong, that I had missed something, and kept looking.
I then stumbled upon a paper that described how Abbott compensated for slopes in its Navigator II calibration algorithm. (FreeStyle Navigator Continuous Glucose Monitoring System with TRUstart Algorithm, a 1-Hour Warm-Up Time)
If it was kosher to compensate for lag in a calibration algorithm, the next natural step was to display projected values to patients... As it turns out, using a predictive model allowed my raw data interpretation to closely match what the Libre displayed, and to understand odd, potentially dangerous behaviors such as the one described in the "meal and bath" incident. As an informed patient, that incident annoyed me profoundly: it is one thing to rely on actual measured data and another to rely on projected data that is equivalent 80% of the time, better 19% of the time and outrageously wrong 1% of the time. Your mileage may vary, and this probably doesn't matter much in the grand scheme of things for the general T1D population, but still.
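The lag-compensation idea behind such a calibration is simple to sketch: if the interstitial signal trails blood glucose by some lag, a first-order correction projects the current reading forward along its measured slope. A minimal illustration follows; the 10-minute lag and the function name are my own assumptions for the sketch, not Abbott's actual algorithm or parameters.

```python
def lag_compensate(readings, lag_minutes=10, interval_minutes=5):
    """First-order lag compensation (illustrative, not Abbott's algorithm).

    readings: sensor glucose values (mg/dL), oldest first, sampled
    every `interval_minutes`. Projects the latest reading forward
    along the slope of the last two samples to offset sensor lag.
    """
    if len(readings) < 2:
        return readings[-1]  # not enough history to estimate a slope
    slope = (readings[-1] - readings[-2]) / interval_minutes  # mg/dL per minute
    return readings[-1] + slope * lag_minutes

# Rising BG: 100 -> 110 mg/dL over 5 minutes (2 mg/dL/min),
# projected 10 minutes ahead gives 130 mg/dL.
```

The same mechanism explains the overshoot on fast rises: when the slope estimate is stale (say, a meal spike that has already peaked), the projection keeps climbing past where BG actually goes.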
Back to Dexcom

This may have been one of the reasons we went back to the non-AP G4 Dexcom, along with the obvious Abbott sensor availability issues. But then, I missed the quick reaction time of the Libre. Using xdrip partially solved the issue, as it allowed me to get rid of the delay induced by the non-AP G4 algorithm. But it also incited me to hunt for possible improvements in the G4 reaction time by using, you guessed it, predictive algorithms...
Now, before one gets too excited about the results, I'd like to insist that my "work" has been of the dirty, inconvenient, impractical kitchen-sink type. The first constraint is, of course, to have access to the Dexcom secondary raw data in real time through xdrip. The dirty part involves artificially tampering with the 5-minute data frequency of the Dexcom. I needed a value every minute and decided to interpolate minute-by-minute data between Dexcom readings. That is not very clean but, based on the resampling done when comparing the Libre and Dexcom 14-day run, it doesn't seem to have any impact on the big picture. Then, based on that minute-by-minute data, I started issuing "predictions" of what the value 9 minutes later would look like and checking how they matched BG meter readings.
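The two steps above can be sketched in a few lines. This is a deliberately naive version, under my own assumptions (linear interpolation between 5-minute samples, and a two-point slope projection for the 9-minute-ahead "prediction"); the function names are mine, not anything from xdrip or Dexcom.

```python
def interpolate_minutes(times, values):
    """Resample 5-minute CGM readings onto a 1-minute grid.

    times:  minutes since start (ascending integers, e.g. 0, 5, 10, ...)
    values: glucose values (mg/dL) at those times
    Returns (grid, interpolated_values) with one point per minute,
    using simple linear interpolation between surrounding samples.
    """
    grid, out = [], []
    for t in range(times[0], times[-1] + 1):
        i = max(j for j in range(len(times)) if times[j] <= t)
        if times[i] == t:
            out.append(values[i])
        else:
            frac = (t - times[i]) / (times[i + 1] - times[i])
            out.append(values[i] + frac * (values[i + 1] - values[i]))
        grid.append(t)
    return grid, out

def predict_ahead(minute_values, horizon=9):
    """Naive 9-minute-ahead projection from the last two 1-minute points."""
    slope = minute_values[-1] - minute_values[-2]  # mg/dL per minute
    return minute_values[-1] + slope * horizon
```

The prediction here is just a straight-line extrapolation; anything smarter (smoothed slopes, regression over more points) would fit the same skeleton.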
Here are the results: in both cases, my kitchen sink approach was able to drive the non AP G4 MARD from above 10% to below 10%.
Two points worth noting:
- the resampling adds or removes a couple of points. This happens because my BG meter clock doesn't have second resolution and drifts a bit. Since I use the closest previous CGM data point, provided it is within two minutes of the BG test, and since the granularity of the prediction is 1 minute, a few points drift in and out of the window. This has no impact on the results in one case and actually worsens them a bit in the other.
- the predictive algorithm usually improves accuracy but worsens it in some cases (not unlike the Libre, in fact)
But things start to get interesting with the Roche sensor. The Roche sensor is a bit of a "Loch Ness" sensor. Roche published material talking about Artificial Pancreas development in 2003, supported by a microdialysis CGM sensor. Twelve years later, they are still talking about it but, outside of a few clinical tests, I don't think many people have seen the beast. Microdialysis has a few advantages over glucose oxidase based sensors, but also a few inconveniences. It relies on a flow of Ringer's solution through a double-lumen catheter, and the solution, as I understand it, isn't recycled and must be discarded. Even at a few microliters per minute, that puts some limits on what is achievable in the form factor diabetics now expect from their CGMs. Plus there is the issue of generating the flow.
The Roche sensor claimed extremely good results in a 2013 paper (so much so that I was expecting it instead of the Libre in 2014) and was used in AP tests in 2014. Come 2015, new sightings of the monster have been reported, for example in a paper where it is reported to track rapid changes much better than the Dexcom G4: "Rate-of-Change Dependence of the Performance of Two CGM Systems During Induced Glucose Swings" (Pleus S, Schoemaker M, Morgenstern K, Schmelzeisen-Redeker G, Haug C, Link M, Zschornack E, Freckmann G).
Hmmmmm, tracking rapid changes much better than the Dexcom? Where have we seen this before?
And where does the magic come from? Can you guess? A second paper by some of the same authors provides the answer: "Time Delay of CGM Sensors" (Günther Schmelzeisen-Redeker, Michael Schoemaker, Harald Kirchsteiger, Guido Freckmann, Lutz Heinemann, Luigi del Re).
And the answer is: predictive algorithms.
In an environment where the vast majority of endocrinologists are still somewhat uncomfortable with basic CGMs, devices that often do not actually display a measured value but show a predicted one could be a tough sell. Fortunately, the vast majority of practicing endocrinologists will neither have the time nor the desire to explore the darkest recesses of the technology behind their tools and they won't know.
PS: and yes, I am aware that yet another player based on another technology (Senseonics) has also published significant results. But at this point, I have mixed feelings about the potential scarring.