Wednesday, May 11, 2016

A Libre summary, some disappointment on the third party side and possibly new information

As I have said before, we have been off the Libre and back to the Dexcom G4 and now G4AP. That decision was mostly motivated by availability issues, not product issues. I learned a lot during my Libre investigations and used the knowledge I had gained to improve the results we got from the G4 (non AP) running a custom algorithm on "raw" data provided by xDrip.
 
This being said, most of the traffic, the mail and contacts I get from this blog are still Libre related. That is why I have decided to post a summary of the Libre information I posted here, take a look at what the current landscape offers, rant a bit and... maybe provide a bit of new information for those who can read between the lines.

Quick Summary


I could not resist comparing the Libre and the Dexcom as soon as I got it (first days - full 14 days). We were lucky and the Libre lived up to the hype, especially in terms of speed. I also started looking at the Libre technical details as soon as I got it. Reading the NFC tag and interpreting its structure led me to post this in December 2014, which was soon picked up by others and led to the first Libre data interpretation attempts on GitHub. Meanwhile, I worked on my own interpretation.

As great as it is, the Libre had its dark side as well, uploading your data to a remote server while explicitly telling you it did not. That issue is now "solved", but solved in a way only big companies can get away with: the Libre still grabs your data, but its license has changed...

Anyway, the Libre still outperformed the Dexcom in terms of speed and accuracy, which was extremely useful for sports. Speed matters in other practical circumstances too, such as alerting us to a bad hypo before it could do harm while the Dexcom was on strike. That hyper-reactivity unfortunately has downsides as well, which brings us back to data interpretation.

By then, I had a fairly decent data interpretation going. But working from scratch, with no outside help and with an extremely limited supply of sensors, was becoming a bit tiring. It involved a lot of fiddling, reading and toying with the custom TI chip Abbott uses (did you know the FRAM also contains part of the code? I taught myself a bit of MSP430 assembler in the process). The Libre behaved oddly at times, and simple data interpretation failed... The reasons behind this became clear to me: the interpretation algorithm was not a direct mapping, temperature also played a role (two thermistors are used), and so did algorithms. Abbott had used predictive algorithms in the past, Roche was using them in its yet-to-be-released sensor, and that is what I used to improve our Dexcom G4 non-AP results. Could it be a factor?

I had, as I have shown above, a decent interpretation in January 2015, but certainly did not have all the answers. This is why I decided not to release anything in public: I explained my reasons here. Looking back at them, the custom micro-controller remains an issue, the interpretative algorithm remains an issue, Abbott has not been as aggressive as I thought it could be, and I still don't care much about a smartphone application. Point 3 was essentially "it would be in the hands of real people" and, with that, comes a certain amount of responsibility. Which brings me to...

The current state of third party applications.


When I started looking at the Libre, despite enjoying the process, I had hoped to get some assistance. Nightscout was a team effort, thriving on the talents and skills of many different people. I can't remotely hope to emulate a whole team of dedicated, competent and generous people on my own. And I did make a lot of contacts and virtual friends in the Libre world. Some of you have been extremely generous, offering to send me sensors or the very impressive hardware you have developed. But an awful lot of contacts were simply "gimme, gimme your formula". Some offered fair commercial deals (not what I was looking for); some were either a bit delusional or too optimistic about my abilities, as in "I will pay you to develop an artificial pancreas on my current insulin pump, controlled by the Libre"... Yeah, just a small afternoon project...

But what I expected (possible spoilers ahead) was some level of technical assistance. Something like "I happen to be an engineer with embedded TI MSP430 experience and this is how you look at the SRAM" or "These are obviously CRCs, that's how data corruption is detected" or "Have you looked at the difference between thermistors?" or "Which sub-algorithm do you use? Some algorithms are well suited to smoothing signals while keeping the actual peaks"...

Looking back, maybe I was the delusional one. :)

Third party apps


Third party apps were released, one of them, I am told, fully open source. They were welcomed with enthusiasm (people really love to see numbers on cell phones). That was great, and I was actually happy that maybe this blog had helped a tiny little bit in their genesis. But, to be honest, I have not tried any of them, simply because I had no sensors. Then, readers of this blog contacted me and asked questions or even expressed some level of disappointment. That piqued my curiosity and I had a look: from what I remembered, there have been some improvements in the way the sensors are read; that's a plus.

However, glucose level computations seem to be lost in the dark ages!

One of the applications I looked at simply divides some value by 10, another interprets the values with an approximate formula derived from what was posted on GitHub more than a year ago. A third one is a basic copy-paste of that initial GitHub formula. The problem with that formula, which was recognized at once by its authors, is that it does not work very well. To be honest, I was floored. I haven't seen any attempt at addressing the thermistor and real algorithmic issues. Somewhat hilariously, some of those formulas are kept "secret" behind... well... very weak doors.
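To make the criticism concrete, here is a minimal sketch of the kinds of conversions described above. The coefficients are entirely made up for illustration; they are not the values any particular app uses, and this is exactly the sort of "magic" that should be replaced by a measured calibration:

```python
def naive_divide_by_ten(raw_value: int) -> float:
    """The crudest interpretation: treat the raw sensor value
    as glucose in mg/dL times ten. Ignores temperature, trend
    and sensor state entirely."""
    return raw_value / 10.0


def naive_affine(raw_value: int, slope: float = 0.1, intercept: float = 0.0) -> float:
    """A slightly more general affine guess (hypothetical coefficients):
    still a direct translation of a single value, still uncalibrated."""
    return slope * raw_value + intercept


# A plausible-looking number pops out either way -- which is the danger:
print(naive_divide_by_ten(1234))  # a value that looks like mg/dL, but isn't validated
```

Both functions break down outside the range where their constants happen to roughly fit, which is the "does not work very well" failure mode described above.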

So, basically, those apps display approximate values (how approximate depends on the range in which they are applied) that are considered "good enough anyway". They are a direct (incorrect) translation of a single value in the Libre and disregard all of the things that make the Libre a great CGM/FGM.

And, as usual, people start using those values. When the results were too bad, the idea of "calibrating" them was floated (and possibly implemented, I wouldn't know). Rich idea: calibrating a device whose main advantage is not needing calibration. As a bonus, it will allow people to whine about their BG meters again...

Let's compare this to xDrip/NS, for example:

  1. xDrip works on pre-processed data (not pure raw).
  2. used/uses an interpretation of that data that has been extensively tested and does not break down in some ranges (and whose initial version has been, incidentally, clearly detailed in Dexcom patents)
  3. sticks closely to the Dexcom data.
  4. uses proven mathematical techniques to generate its calibration, techniques that have also appeared in the newer Dexcom versions
  5. adds to the Dexcom functionality, does not remove anything.
whereas Libre Applications

  1. work on non pre-processed/adjusted data (pure raw)
  2. use a mix of approximate formulas that are known to break down in certain ranges and situations
  3. are hit and miss as far as matching the Libre
  4. do not generally bother with any recognized techniques
  5. remove reactivity, potentially add calibrations, lose accuracy


I can see the moment when someone will run a home made artificial pancreas based on divide by ten.  I love the decimal system, I really do. But that scares the shit out of me.

Turning responsibilities around.


Let's go back to the responsibilities issues. Whenever you release something that can have an impact on people's health, you are assuming some responsibility whether you think so or not. Authorities, even if they are at times heavy handed, think so as well. Regardless of your intentions, honorable or not, driven by profit or by a genuine desire to help, responsibilities remain.

This is why I decided not to release my interpretations: even if they work for me most of the time, there is still a risk they won't in some cases and I will feel guilty. But I will not be delusional and believe that other people will agree with me.  That is why I have decided to release some more information on what I did, now and possibly more in the future. 

IMPORTANT NOTE
I WILL OUTLINE THE PROCESS BY WHICH I REACHED ACCEPTABLE RESULTS FOR MYSELF. THERE IS NO GUARANTEE THAT IT WILL WORK FOR YOU OR EVEN WORK WITH CURRENT SENSORS. THIS IS A PROCESS, NOT A MAGICAL SOLUTION.

The first topic I will address is reaching an approximate formula that works a bit better than the "magic" ones, the way I did it in January 2015.

Fact: the Libre is a CGM. It uses a technology called "wired enzyme", which basically means that it works at a lower voltage (fewer or no interactions with other substances such as paracetamol), with a different electrode configuration from the Dexcom's and a different chemical process (mostly a different electron acceptor). However, it still remains an amperometric sensor. That means that it does not escape calibration, baseline signal, etc... Since Abbott's sensors are more stable and consistent than Dexcom's (something Dexcom will have to address at some point, though I am not sure they can in the sea of patents in the field), they can be factory calibrated. It is not visible to the user, but they do have a standard linear calibration! That calibration is what eventually converts the nanoamps provided by the system into glucose values (if the system does not provide nanoamps directly, calibrations can be chained).
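The linear calibration and the chaining idea can be sketched in a few lines. Every constant below is invented for illustration; the actual factory slope, intercept and chip gain are exactly what the process described next is meant to recover:

```python
def glucose_from_current(current_na: float, slope: float, intercept: float) -> float:
    """Standard linear calibration: glucose [mg/dL] = slope * current [nA] + intercept.
    The slope/intercept here are placeholders, not Abbott's values."""
    return slope * current_na + intercept


def current_from_counts(counts: int, gain: float, offset: float) -> float:
    """If the chip reports raw ADC counts rather than nanoamps, a second
    linear map converts counts to nA -- calibrations can be chained."""
    return gain * counts + offset


# Chained example with made-up numbers:
counts = 100                                      # raw ADC reading (illustrative)
na = current_from_counts(counts, 0.5, -2.0)       # counts -> nanoamps
bg = glucose_from_current(na, 2.5, 10.0)          # nanoamps -> mg/dL
```

Composing the two linear maps is itself linear, which is why a single measured slope/intercept pair over the raw values is enough in practice.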

Problem: how do you get access to that calibration? Well, there is the non politically correct method, which could eventually yield the exact intercept and slope of the factory calibration curve, but there is no need for it if you sweat a bit. You simply measure the system. In the case of the Libre, that means taking a reading with the Abbott reader and dumping the NFC data as close in time as possible.

This is the result of a series of such tests, run in early 2015 on my 2014 data. That process is all you need to replace any magic or copy-pasted formula.
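The measurement process above boils down to collecting (raw NFC value, reader mg/dL) pairs and fitting a line through them. A minimal sketch with ordinary least squares, on purely illustrative data points (derive your own pairs from your own sensors, as stressed below):

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept


# Each pair = (raw NFC value, Abbott reader mg/dL taken at the same moment).
# These numbers are made up to show the mechanics, not real sensor data:
pairs = [(1000, 80), (1400, 120), (1800, 160), (2200, 200)]
slope, intercept = fit_line([p[0] for p in pairs], [p[1] for p in pairs])
# On this fabricated data the fit is essentially exact: slope ≈ 0.1, intercept ≈ -20.
```

In reality the pairs will scatter around the line (thermistor effects, the reader's own algorithm, timing mismatches), which is exactly the dispersion discussed in the key points below; the least-squares baseline is the starting platform, not the finished interpretation.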



Key points


  • that is a process, a method: by all means derive your own values on current sensors. My values worked very well for us (no catastrophic breakdown in any range) from November 2014 to the beginning of May 2015. All it takes is Abbott changing the tiniest thing (membrane permeability, chip, chip gain, whatever...) to invalidate those values. This is really what struck me when I looked at interpretations in recent weeks: it seemed that no one had moved on from basic, known-to-break-down formulas (apologies if you did, maybe I haven't seen it). Do not copy-paste my parameters. Don't be lazy. Derive your own.
  • you will notice some dispersion for some values. It is caused either by the algorithm, by the thermistors and relative thermistor differences, or by errors (cases where your app would display a value when the Libre reader would not). I may or may not look at those issues in the future. There is a valid baseline (calibration slope and intercept) around which those values gravitate.
  • that baseline (well, the one you have established for yourself) is a better platform to begin investigating the other issues.
  • don't mail me if you don't know what the above chart means.

Conclusion

When you release an app, you are putting glucose values into people's hands. While I tend to agree that, for a lot of applications, rough zones can be OK, drift in the high ranges can be a lot of trouble if those values are used to calculate corrections or drive pumps. Think about it FFS!

Magic is not acceptable...









