Monday, January 9, 2017

It’s always compression, duh!

Let’s be honest, I’ve been waiting for a moment like this one, where algorithms trump my quick visual assessment.

Here’s the situation on a relatively standard scale: can you spot anything? Don't cheat and look below.


21:00 Max starts climbing slowly: the light evening meal, mostly protein (salmon), that he took after a small hypo earlier is starting to show up.
21:30 recalibration: the Dexcom was a tad too high; the Libre (on the other arm) was spot on at 108 mg/dL. The decision is made to take no action because we know the evening injection (20:15 in this case) will start pushing BG back down a bit roughly 3-4 hours after injection.
Sure enough, we seem to be on a small downtrend starting around 23:15, for a 155 mg/dL high. Yes, that is not ideal, but at some point we have to consider the trade-off between undisturbed sleep and perfect BG. Today, undisturbed sleep was the intent.
At first sight, this slope still looks like a mild downtrend, with a bit of noise.
However, this is what I get in another view: my compression detection algorithm has triggered!


Interesting… Time to look at the decision parameters.

Parenthesis: my compression algorithm isn’t wildly different from what has been published in the literature. I developed it independently in 2015, as a toy project. In a nutshell, the algorithm examines the last few hours of data available (at least an hour, though I can fine-tune the parameters conveniently), assesses noise and overall trend, and builds “confidence” on those values. It’s a bit of a cookbook of hacks and rules. For example, the SD of the detrended signal gives a good indicator of the current “meta” noise level in the signal: a drop caused by a compression should, obviously, be larger than the SD of that detrended signal by some factor (one that I tuned based on experience and visual assessment). It also goes without saying that the delta must be negative. On top of that, a few rules have been added here and there for experimentally observed special cases.
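To make the core idea concrete, here is a minimal sketch of that kind of detector. This is an illustration, not the author's actual code: the window length, the 5-minute sample spacing, and the threshold factor `k` are all assumptions.

```python
import statistics

def detect_compression(readings, k=3.0):
    """Flag a possible compression low in a window of CGM readings.

    readings: glucose values (mg/dL), oldest first, e.g. one every
    5 minutes over the last hour. k is an assumed tuning factor; the
    post says the real one was tuned empirically.
    """
    n = len(readings)
    if n < 6:
        return False  # not enough history to judge

    # Least-squares linear trend over the window
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(readings) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, readings))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x

    # Residuals after removing the trend (the "detrended" signal)
    residuals = [y - (intercept + slope * x) for x, y in zip(xs, readings)]

    # Noise level estimated on the older points, excluding the newest one
    noise_sd = statistics.stdev(residuals[:-1])

    # The newest point's departure from the trend must be negative
    # (a drop) and clearly larger than the ambient noise.
    delta = residuals[-1]
    return delta < 0 and abs(delta) > k * noise_sd
```

A steady hour of readings followed by a sudden drop trips the detector, while an upward jump of the same size does not, since the delta must be negative.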

At 00:53, the new value enters the “hour buffer”, which happens to have an extremely high level of confidence. Note that the algorithm did not have that level of confidence at 23:23, post peak, where the hourly trend was less clear and a bit of noise (maybe a transient compression) pushed the detrended SD a bit higher.

That being said, the case isn’t settled at this point, so I zoom in on the chart and go up to check. Max is indeed leaning on his Dexcom, and not on his Libre. The Dexcom, which had been tracking the Libre since recalibration, is actually 10 points below it.

The acid test is, of course, to move Max a bit so that he is no longer leaning on his Dexcom, or on either device. Here is a zoom on what happened: the very mild compression recovered almost immediately.


  • Instead of being slightly down, the trend is actually either stable or very slightly up. Knowing this allows me to push a 1 U correction (being extremely conservative here in order to avoid any hypo risk).
  • The scale at which we look at our CGM signal impacts our perception and our assessment of a situation (which is one of the reasons I developed my own “in-the-cloud” visualization, which I can tweak and zoom to my liking).
At this point, I can already hear the dissenters saying “How in the world can you tell it is a compression on such a small variation? My Dex can be off by xx points or jump around.”

Good question: let’s answer this methodically.
  • the custom “artificially intelligent” algorithm says so ;-)
  • the Libre says there was no drop.
  • I have confirmed visually that Max was sleeping on his Dexcom side.
  • I have confirmed that, by relieving the compression, the sensor recovers as expected and resumes cruising.
  • yes, I have no idea whether the “real” level is actually 137, 127 or 147 mg/dL, but that does not matter: the relative change does.
  • yes, there are situations where the Dexcom is too noisy, the trend is unclear and the decision is ambiguous, if possible at all.
But when the Dexcom (or Libre) is tracking smoothly, there is very little variation in the signal (or in the detrended signal, if in a clear trend). It is that consistency, when the signal is good, that allows Dexcom executives to claim their technology is already much better than BG meters. That is a statement I can totally agree with… until real life interferes (micro traumas, compressions, failing sensors, encapsulation…) and, of course, except for the fact that the baseline Dexcom values depend to a large extent on the performance of your BG meter.

Anyway, this blog post is almost live because this must be the first time an algorithm tells me something I might not have noticed. Three years ago, I had a quick look at neural networks and AI but, while I got them to tell me interesting things and issue decent predictions, they never told me anything I wouldn’t have noticed or predicted by myself. That one is a first!

Ah, and one more thing – let me reassure all the hydrophiles out there, no glass of water was harmed in this experiment.
