Friday, January 30, 2015

The Magical Tattoo - yeah, sure.

The "magic tattoo" strikes again...  

For a couple of weeks now, diabetes forums all over the world have been buzzing with excitement over that amazing new "glucose sensing tattoo". Some people seem ready to buy it right here, right now. Apparently, they don't read further than the headlines... (Universities love to make headlines with glucose sensing tattoos - see here for a 2009 release, that one based on a different technology)

While the full article isn't available on-line anymore, the abstract is here

Let me give you a hint of what is to come by showing you the functionally equivalent part of the Libre FGM sensor. See those tiny metallic circles in the center? Yes, that's it. Add a very small subcutaneous wire and you have also replaced the "gel" under the tattoo...

How does the tattoo measure glucose?

The tattoo attempts to measure glucose by using the well known glucose oxidase reaction. This reaction has been used to measure blood glucose levels since at least 1957 (here) and is what your blood glucose meter probably uses although, for the sake of completeness, I should mention that the glucose dehydrogenase reaction has its supporters (here). This is also the reaction at the core of our current CGM systems. It generates a very small electrical current - a flow of electrons - proportional to the amount of glucose it oxidizes. That flow of electrons is then measured by an amperometric sensor and finally correlated with glucose concentrations. So far, so good: the technology is proven and has slowly matured over the last 50-60 years. Proven, but hardly innovative.
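To make the amperometric idea concrete, here is a tiny sketch of the current-to-glucose step. The slope and offset are made-up constants for illustration, not values from any real device:

```python
# Illustrative only: convert a measured sensor current to a glucose
# concentration through a linear calibration, the basic idea behind
# amperometric glucose sensing. Constants are invented.

def current_to_glucose(current_na, slope_na_per_mgdl=0.05, offset_na=1.2):
    """Map a sensor current (nA) to a glucose estimate (mg/dL)."""
    return (current_na - offset_na) / slope_na_per_mgdl

# With these assumed constants, a 6.2 nA reading maps to 100 mg/dL.
print(current_to_glucose(6.2))
```

Real sensors are of course calibrated per batch or per sensor, and the response is only approximately linear over a limited range.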

Where does the glucose come from?

When you measure your blood glucose level with a glucometer, the answer is obvious: glucose oxidase reacts with the glucose present in the blood drop sample. A sub-cutaneous CGM measures the glucose present in the interstitial fluid, the liquid that makes humans soft and squishy. The glucose concentration in the interstitial fluid depends, with a few minutes' delay, on the glucose concentration in blood. The extremely complex transfer process is summarized here. It is possibly one of the hottest current research topics, since a full understanding of that system is essential to the development of effective artificial pancreases.

But how does the tattoo get its glucose? Unfortunately, humans aren't renowned for their glucose sweating abilities. The molecule has to be forcefully extracted to the surface of the skin from the interstitial tissue. The skin is a biological membrane whose goal is to keep essential molecules and liquids inside our bodies. The technique used by the tattoo is called reverse iontophoresis. It is a more recent innovation than the glucose oxidase reaction, but it still dates back to 1995 at least (here). In fact, products attempting to measure glucose non-invasively through reverse iontophoresis have already been developed, already been approved by the FDA and already been withdrawn. They had annoying cutaneous side effects and were slow and unreliable, which is probably why they failed commercially (you can read part of the ill-fated Glucowatch story here).

Forcing a glucose molecule through the skin is never going to be an easy task. While the University commentary mentioned the Glucowatch and noted that the new method uses less intense currents, this was of course ignored in the press coverage.

So, no progress here either. Transcutaneous measurement as implemented here is definitely a giant step back if one aims for a closed-loop system.

What did the tattoo achieve?

Connected to a lab power supply, the tattoo took ten minutes to extract glucose from under the skin. That glucose reacted with a glucose oxidase prepared tattoo (as it was expected to) and did not react with a blank control tattoo. The amperometric sensor showed a current increase that was correlated with the absorption of glucose (a soft drink) after some delay (as it was expected to).

That's it. As the author noted, some work remains. (insert favorite emoticon here)

But I still want one!!!

Yes, the tattoo looks so cool. So you might want one anyway. What else do you need to get up and running, in addition to a glucose oxidase gel and the tattoo?
  • a laboratory power supply
  • an amperometer
  • wires
  • a computer
  • a thermometer
  • a Mathematica license or the ability to develop your own methods in NumPy
  • a keen sense of observation to correlate measured values with glucose values.
Please note that current CGMs include all of this in small boxes that are a bit bigger than a couple of coins.
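The "keen sense of observation" step in the list above is, at its simplest, an ordinary least-squares calibration between what the amperometer reads and what a reference meter says. A minimal sketch, with invented data points:

```python
# Fit a linear calibration between measured currents and reference
# meter values with ordinary least squares. All data is hypothetical.

def linear_fit(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx          # slope, intercept

currents = [2.0, 3.5, 5.0, 6.5]            # nA, invented
meter    = [60.0, 97.5, 135.0, 172.5]      # mg/dL, invented

slope, intercept = linear_fit(currents, meter)
estimate = slope * 4.0 + intercept         # glucose guess for a 4.0 nA reading
```

With a lab power supply, an amperometer, a thermometer and this kind of fit, you have roughly reproduced what a CGM transmitter does in a coin-sized box.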

You'll be facing the following additional challenges (incomplete list):
  • skin is not a constant
  • you'll be adding a 10-minute reverse iontophoresis delay to the already present 5-6 min (best case) blood glucose - interstitial glucose delay
  • your blood glucose concentration will change while you measure
  • your interstitial glucose concentration will change because you are actively driving molecules to and through the skin
  • you'll be dealing with concentrations two orders of magnitude lower than what can be observed invasively.
  • consequently, you will suffer from an accuracy that will be intrinsically limited by the signal to noise ratio of the whole system
What will you gain?
  • you will avoid a CGM insertion every two weeks and a couple of finger pricks per day.
While you can't expect much from the Internet mainstream press, I was really surprised by the very superficial coverage of the tattoo on many diabetes-centric sites. I would have expected better. Maybe their role isn't to inform in depth but rather to deliver a bit of optimistic news ("within five years....") now and then to keep us going. Maybe it is better not to know...

Sunday, January 25, 2015

Abbott - it's amazing...

Apparently, Abbott keeps denying it uploads anything to its servers, even when talking to professional audiences. Either this perpetuates intentional deception on a large scale or Abbott's executives are themselves very poorly informed.

They could be counting on the fact that medical audiences are usually not very comfortable with IT security mechanisms. Let me give you another view of the upload.

Here is a traffic dump of what happens on my own computer when the Abbott reader is connected. Again, let me stress that no fancy "hacking" of any kind is involved. I just look at the packets transiting through my computer, just as you would look at people coming in and out of your house.
It consists of fairly normal traffic: I have a connection open to Google's services, some packets going to the Amazon cloud services (serving lots of third parties), my Dropbox is checking if it is synchronized and my anti-virus queries its cloud database from time to time.
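If you want to look for yourself, you don't even need to read a live capture: you can export the capture to CSV and sift it for a suspect address. A tiny sketch, in which the field layout and the IP address (a TEST-NET placeholder, not Abbott's) are hypothetical:

```python
# Not a capture tool: just a sketch of filtering a Wireshark CSV
# export for packets to or from one suspect server. The column
# layout (no., source, destination, protocol) and the address
# are made up for illustration.

import csv
import io

SUSPECT_IP = "203.0.113.10"   # placeholder address

dump = io.StringIO(
    '"1","10.0.0.5","8.8.8.8","TCP"\n'
    '"2","10.0.0.5","203.0.113.10","TCP"\n'
    '"3","203.0.113.10","10.0.0.5","TCP"\n'
)

# Keep every row where the suspect address is source or destination.
hits = [row for row in csv.reader(dump) if SUSPECT_IP in (row[1], row[2])]
print(len(hits))
```

Counting and timestamping those rows is enough to see whether a device phones home every time it is connected.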

On the left pane, you can clearly see that the Abbott software connects to two distinct servers, one called "" and one called "". Which company would call one of its servers "" if its purpose was not to upload something from its product...

On the right pane, you can see what is actually uploaded (a small part of it actually) and that clearly includes unique identifiers and, among other things, real time glucose data.

Frankly, I am a really amazed by their denial. Maybe this should be more widely publicized...

There are tons of possible usage scenarios for that amount of data.

  • Abbott could be using it to improve its products. That's the reasonable scenario.
  • In the nightmare scenario (which I think is very unlikely, at least for now), Abbott could for example sell the data to insurance companies.
  • Since the behavior doesn't seem to change when multiple meters/patients are connected to the Abbott software (as they would be in an endocrinology practice), Abbott could use the data to optimize the appointments of its sales force or even to evaluate the levels of control the practices achieve with their patient base. "You should go there: they aren't using the Libre much" or "Jeez, this practice gets awful results"
  • If you are running a clinical trial and have asked your patients to track their treatments and habits on the Libre during the trial, Abbott could be in a position to analyze the progress of your trial before you even get to see the data. Think about it for a minute: if you are running an artificial pancreas project and equip your patients with a Libre to have additional glucose tracking, Abbott could very well be in a position to evaluate your results before you do... and, why not, make a business move towards a promising project while ignoring others. Without your informed consent of course.
The possibilities are endless...

I like the Libre a lot. I would share most of the data if asked.

Thursday, January 22, 2015

Some Libre raw data experiments

There has been a lot of interest in the diabetes on-line technical communities in addressing some perceived Libre shortcomings. The first improvement the community is after is "Wouldn't it be great not to have to carry an additional device and be able to scan the data with a phone?" The second one is "Can't we turn this thing into a real CGM?" While replacing the Libre reader with a phone doesn't really interest me (for example, Max would not be allowed into school exams with a phone but can keep his Dexcom), the second improvement does, as it could allow us to drop the Dexcom, which we mostly use for night monitoring.

Obtaining the data from the NFC tag is easy. Understanding its organization was easy as well. Understanding it completely, in order to reproduce exactly the results provided by the Abbott meter, is going, I think, to be hard if one limits oneself to a black-box analysis approach.

Even in the simple case of the historical data, as it is reported by the Libre and as it is stored on the chip, which is easy to study at leisure, there are many catches...

All roads lead to Rome but... all cities are Rome to somebody.

It is easy to find a fitting function that links the raw data to the reported data for some period of time. Why is it "easy"? Unfortunately, not because we are "genius hackers". The real reason is less glorious: there is an infinity of possible fitting functions. We'd have to be extremely dumb not to find one! The problem with those almost random functions is that they are good until they break.
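Here is the "infinitely many fitting functions" problem in miniature: two conversions built so they agree exactly on two (invented) calibration points, yet disagree badly away from them. Neither function is the real one; both numbers and forms are made up:

```python
# Two raw-to-display conversions that are indistinguishable on the
# calibration points but diverge far from them. All values invented.

f1 = lambda raw: (raw - 100) / 8.5                      # a simple affine guess
f2 = lambda raw: f1(raw) + 1e-4 * (raw - 1000) * (raw - 2000)

for raw in (1000, 2000):       # the two "calibration" points
    assert f1(raw) == f2(raw)  # identical where we have data

print(f1(3000), f2(3000))      # far from calibration: they disagree badly
```

Any finite data set admits infinitely many such functions, which is why a fit that looks perfect today is "good until it breaks".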

Another issue is data quality: Abbott will reject values when the temperature is out of range, there are a few flags that are raised on some occasions, a quality assessment, etc. But let's get back to the "simple" historical data.

We know a few things and have noticed some peculiarities:
  • that the data is displayed with a certain delay. It is not simply the immediate average of the last 15 minutes. The Abbott algorithm takes its time before producing the value. 
  • that the same historical raw data isn't always displayed as the same official value.
Based on this, here's the result of a few interpretative tests

The wide red line is Abbott's view of the data. The dotted blue line is the result of a simple fitting function that has been made public. Decent, but not perfect. The dotted purple function is the result of a similar fitting function I have been using for a while. While it looks better on this data set, there is no guarantee it will always look better. Technically, it is just another fitting function in the sea of infinite fitting functions. (For the technically minded reader, the "purple" function takes a more functional approach: it basically subtracts a bias from the raw value, works with powers of 2 compatible with the likely device architecture and then scales the result - it is, in a way, inspired by the TI documentation.) If such a function were adequate (I believe it is not), that way of generating it is more natural for the system.
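For illustration, the general shape of such a bias / power-of-2 / scale conversion could look like the sketch below. The constants are placeholders, not the ones I use, and this is a guess at a style of function, not Abbott's algorithm:

```python
# A sketch of the "purple" style of conversion: subtract a bias,
# divide by a power of two (a cheap shift on embedded hardware),
# then scale. All constants are hypothetical placeholders.

BIAS, SHIFT, SCALE = 32, 3, 1.0

def raw_to_mgdl(raw):
    # (raw - BIAS) / 2**SHIFT, then scaled
    return ((raw - BIAS) >> SHIFT) * SCALE

print(raw_to_mgdl(1000))
```

The appeal of this form is that shifts and integer biases match what fixed-point firmware on a small chip would naturally compute.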

The wide green line is essentially the same function, plus a small algorithmic correction (also just a guess) that I implemented after noticing that the Abbott reader was late in generating its data, that the fitting function always had problems with sharp increases and decreases and, finally, that two equivalent raw values did not necessarily lead to the same historical values. (For techies: detect a condition that is causing issues for your guessed fitting function and smooth your way out of it.) Just like the Abbott official data, this method would, of course, trail by one value. It does however give consistently nice results on my data sets. But its interest is limited: if you want the historical data, look at it on the meter or download it. I saw it as a stepping stone to a better interpretation of real time data.


What happens when that guessed function plus another algorithm is applied to the real time spot value? Testing that part is not easy, as one needs to collect both a raw reading and an official reading every minute. The chart below shows the result of such an attempt. The correlation is quite good and might actually not be too far from the real interpretation of the data... most of the time and in that range. But it also breaks, possibly because something triggers an Abbott algorithm under some circumstances. On a very small scale, we find the same kind of previously observed behavior - see earlier posts - some kind of "trigger happiness" of the official Abbott data compared to the state of the raw data. A small trigger happy increase and a small trigger happy decrease towards the end of the run. And, unfortunately, what I would describe as an "incident": while the raw data shows an increase, Abbott decides to report that period as flat... (For the technically minded: the algorithmic part added on top of the conversion part takes into account what the immediate data is projected to do based on the previous trend. The historical interpretation looks at what happened before and after, for smoothing. The immediate interpretation just projects what the next point should be and averages with the observed data.)
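As I read that last parenthetical, the immediate-value idea can be sketched as follows. This is purely an illustration of "project from the trend, then average with the observation", not the actual algorithm:

```python
# Project the next value from the recent trend, then average the
# projection with the raw observation. Values are invented.

def immediate_estimate(history, observed):
    trend = history[-1] - history[-2]   # last per-minute slope
    projected = history[-1] + trend     # naive linear projection
    return (projected + observed) / 2   # split the difference

# Rising at 4 mg/dL per minute, raw observation jumps to 116.
print(immediate_estimate([100, 104, 108], 116))
```

A scheme like this would explain both the smoothing of jumpy raw values and the occasional "trigger happy" overshoot when the trend flips.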

Please note that I have absolutely no idea about how the interpretation behaves in the very high ranges. We don't have too many, at least when I am around, and I don't see myself sending the kid high on purpose for an experiment. And of course, the limited sensor supply doesn't help much...

OK, where's the source code?

Sorry, I am not releasing anything at this point. Let me reassure you: should I hit the magic formula, I do not plan to exploit it commercially in any way. If that happened, since I love my quiet life, I would probably lose it somewhere to be found ;-). My reasons are as follows:

  • I am sure my formulas will break at some point. Possibly catastrophically. It would be irresponsible to release them in the wild.
  • they are easy to replicate: pick your bias, your power of 2, your scaling factor, a smoothing method for historical data and an extrapolation method for the immediate data, and off you go. If you try, maybe you'll hit the gold that has eluded me so far. And I'll be happy!
  • there's the thorny issue of calibration. My function guesses are possibly the approximation of a calibration function (I know there is calibration data, important enough that it requires special care): if it approximates a sigmoid calibration curve in its near linear part, big problems are guaranteed on both ends.
  • there's the thorny issue of errors: we know that Abbott refuses to give a value in some situations (some unknown, some known, such as a temperature outside a well defined range - see the TI docs for related examples). Even a perfect fitting function will fail if it does not recognize all flagged conditions and discard the bad data.

I'll probably try to run a couple of experiments when our last sensor has expired. But don't hold your breath for a breakthrough.

Tuesday, January 20, 2015

Some Libre peculiar behaviour patterns

The standard wisdom about the Libre is to consider that it takes a measure every minute (spot measures) and then averages them every 15 minutes into values that are stored in a 32-entry circular table (8 hours), as I previously described here. Some progress has been made connecting the raw values with actual data, by Vicktor and friends for example.
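The storage scheme, as described, can be sketched with simple index arithmetic. This illustrates a 32-entry circular table only; the real record layout is Abbott's and more involved:

```python
# Per-minute readings averaged every 15 minutes into a 32-entry
# circular table (8 hours of history). Illustration only.

class History:
    SLOTS = 32

    def __init__(self):
        self.table = [None] * self.SLOTS
        self.count = 0                    # total 15-minute records written

    def store(self, minute_values):
        avg = sum(minute_values) / len(minute_values)
        # the oldest slot is silently overwritten once the table is full
        self.table[self.count % self.SLOTS] = avg
        self.count += 1

h = History()
for block in range(40):                   # 40 blocks = 10 hours of wear
    h.store([100 + block] * 15)
# Only the 32 most recent averages survive; blocks 0-7 were overwritten.
```

The monotonically growing `count` is also why the reader can tell how far the sensor has run even though the table itself wraps around.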

It should be noted, however, that a straightforward interpretation will not always work, as the sensor varies or encounters one or more of its many possible error conditions.

In addition, even when the sensor is working smoothly, peculiar behaviors can be observed.

Consider the first example below.

Around 17:20, we did both a blood check and a Libre spot check. The Libre indicated 154 mg/dL and the meter 137 mg/dL, a still acceptable discrepancy, if a bit on the high side compared to how the sensor had run until then. Twenty minutes later, the Libre spot check was at 175 mg/dL. We skipped the blood test as Max was about to eat and we were going to inject the pre-meal insulin anyway.

Now, if we look at the average historical values, there is really no way we can end up averaging higher values into lower values, unless we also had an equivalent number of spot checks well below the average, in the 60-70 range. But that never happens... While we did not monitor both the raw and the official values every minute (there's more to life than living by the ticks and tocks of your CGM), we have seen that pattern quite a few times already. Interesting facts here:
  • it is hard to reconcile the high spot value with the historical average
  • the high spot value is almost perfectly what would have been predicted by linear extrapolation of the three previous historical data points.
This begs the question: how raw is the displayed spot check value? Is what we are seeing simply the reflection of jumps in the raw measurements, or the result of a clever algorithm predicting the future and hopefully displaying what the meter will show? The repeated uncanny accuracy of the Libre vs observed meter values is still a bit surprising as far as I am concerned. Interstitial glucose values should, to some extent, trail blood glucose values (the latest research I have seen estimates a 6 minute delay). I have the impression that the Libre does better than that in many cases... (but of course analyzing 3 sensors on a single individual isn't enough)
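For reference, the linear extrapolation mentioned above is straightforward: fit a line through the three previous historical points and read off the next one. The numbers below are invented:

```python
# Least-squares line through the last three points (taken at x = 0, 1, 2),
# evaluated at x = 3 to predict the next value. Data is illustrative.

def extrapolate(points):
    n = len(points)
    mx, my = (n - 1) / 2, sum(points) / n
    slope = sum((i - mx) * (p - my) for i, p in enumerate(points)) / \
            sum((i - mx) ** 2 for i in range(n))
    return my + slope * (n - mx)        # value at x = n

# Three points climbing 15 mg/dL per step predict 165 next.
print(extrapolate([120, 135, 150]))
```

If the reader does something like this, a spot value sitting exactly on the extrapolated line is what you would expect, and what we keep observing.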

And there is more: the historical value, which we would have expected at 18:53, is missing. The sensor is operating normally and delivering accurate and consistent spot checks at 19:03.

178055    2015/01/20 18:38    0    84                                                           
178056    2015/01/20 19:02    1        85                                                       
178128    2015/01/20 19:03    1        85

Does that mean that something malfunctioned? Not at all. The historical data just appears later than a naive interpretation of the process would have expected.

178056    2015/01/20 19:02    1        85                                                       
178128    2015/01/20 19:03    1        85                                                       
178178    2015/01/20 18:53    0    87                                                           
178179    2015/01/20 19:08    0    71                                                           
178180    2015/01/20 19:30    1        59

And we now have an explanation for the relatively strange record numbering scheme Abbott uses, and the answer to some related questions about raw data and its processing.

Monday, January 19, 2015

Speed matters: the meal and bath incident

We had a typical small incident today, a frequent occurrence in the life of every T1D kid.

Max takes a relatively standard meal around 18:30 - the meter check (red dot as usual) is on the Libre Trend (red line). The Libre spot check (orange dot) matches the Dexcom spot value (small green dot). It is a draw at this point. We inject Novorapid and eat. Then, real life interferes: Max doesn't feel like eating more and takes his warm bath. Taking a warm bath soon after a Novorapid injection is a mistake. We know it and yet keep making it....

Warm baths cause vasodilation. Vasodilation causes the insulin to be picked up faster. Faster means that the timing is probably off. Vasodilation also slows digestion because blood is diverted from the guts to the periphery. A hypo is on the way.

The bath starts at 19:00. The Dex loses the signal; the Libre keeps recording to its sensor memory. At 19:20 Max feels low, the Libre spot value is 56 mg/dL and so is the meter. The Dex hasn't picked up the signal yet, but when it does 10 minutes later, it still indicates a healthy 90 mg/dL. We aren't extremely aggressive in treating lows and decide to double check the trend 10 minutes later (we also know that food is being digested, even if the vasodilation slowed that down). The next spot check is below 50 mg/dL and the Dex has begun to notice the fall; time to treat with 15 grams of fast carbs. The Dex will beep exactly twenty-five minutes after the fall in BG was felt by Max and confirmed by both the meter and the Libre. In this case, since the Dex missed an important data point, it is probably a worst case.

In the subsequent rise, we confirm once more that the Libre runs roughly 10 minutes ahead of the Dexcom. I should have the raw Dexcom data in my Nightscout mongo database, as it was connected throughout the episode, but I am less and less motivated to jump through the hoops to see what it would have told us.

Tuesday, January 13, 2015

Speed matters: tennis and the Libre

The Dexcom delay issue in sports

The chart below, of a typical tennis training session, shows that the Dexcom G4 can be both utterly wrong (in terms of reality) and perfectly correct within its well defined limits. Dexcom values are blue dots, glucometer values are red dots. We start carb loading for the training at 15:15. At this point, the Dexcom and the meter seem to agree. The Dexcom will match the 15:15 meter value at 15:25, a decent delay. But in fact, the Dexcom is still averaging the drop it saw earlier, when the additional carbs are already acting. At 15:45, it gets ugly: the Dex is around 60, the meter is at 140. The Dex, averaging the climbing values, almost catches up at 16:10, but the reality is that BG is already in the danger zone. At 16:40, the fast carbs we took at 16:20 are already working.

In practice, the Dexcom consistently gave incorrect information. It saw a low when we were high, and a high when we were actually crashing. But if you shift the Dexcom's curve 15 minutes back, it almost perfectly fits the BG meter. Sometimes, it is very very hard to wrap our heads around what we see. If you don't have a mental image of the curves, at 15:45, when you see 60 vs 140, you simply want to throw the Dex at the wall. To add insult to injury, at 16:25, it wasn't the Dexcom that prompted us to do a meter test and a carb reload: it was just the fact that Max missed 3-4 easy plays in a row.

We have a problem...

The case above is almost a textbook example of CGM delay. However, it is not always that simple. The training may be harder than expected, we may have underestimated the required carb amounts. The match may be easier, we may be overshooting. If the match has been delayed, we'll be at another point on our insulin action curve... In practice, the Dex helped very little during sports, and we returned to lots and lots of finger pricks.

But, here's the good news, or at least a reason for hope. When we started using both the Freestyle Libre and the Dexcom, we noticed the Libre reacted much faster to BG level changes. Further analysis showed that, in general and on average, the Libre ran 9-10 mins ahead of the Dexcom.
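For the technically minded, that 9-10 minute figure can be estimated by sliding one series against the other and keeping the lag with the best correlation. Below is a sketch with synthetic one-minute data, not our actual traces:

```python
# Estimate the lag between a "fast" and a "slow" glucose trace by
# maximizing the Pearson correlation over candidate lags (minutes).
# The two sinusoidal series are synthetic, built 10 minutes apart.

import math

def best_lag(fast, slow, max_lag=20):
    def corr(a, b):
        ma, mb = sum(a) / len(a), sum(b) / len(b)
        num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        den = (sum((x - ma) ** 2 for x in a) *
               sum((y - mb) ** 2 for y in b)) ** 0.5
        return num / den if den else 0.0
    scores = {lag: corr(fast[:-lag], slow[lag:]) for lag in range(1, max_lag)}
    return max(scores, key=scores.get)

fast = [100 + 40 * math.sin(t / 10) for t in range(120)]
slow = [100 + 40 * math.sin((t - 10) / 10) for t in range(120)]  # 10 min behind
print(best_lag(fast, slow))
```

On real data the peak is of course much less sharp, which is why I only quote the lag "in general and on average".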

Is that 10 mins bonus valuable? Incredibly.

Real life test.

Could we rely exclusively on the Libre and forget finger pricks for a whole real game? We had the opportunity to test that during the Christmas Holiday. The third round opponent had literally destroyed Max in the Summer. If we crashed based on wrong information, that wouldn't be a problem. The match turned out to be intense: Max lost in two hours and three sets. His opponent went on to win the tournament without losing another set - we wanted a tough game, we had one. The video below shows a few of the second set exchanges so you can judge the intensity of the effort by yourself (and also because dad enjoys making videos of his kids - Max is the one in white).

And here are the traces of the day: the Libre (red - historical / orange - spot checks) and the Dexcom (green - calibrations in red). I won't go into a longish detailed analysis, but that chart again shows that the Libre is significantly faster than the Dexcom when conditions are changing rapidly. It picked up the carb pre-loading more quickly than the Dexcom before the game began at 13:30 and also picked up the transient 14:30 intense carb-reload increase that the Dexcom averaged out. As soon as the game was over, at 15:30, the Libre picked up the small post game increase. The rest of the day also included a double at 20:00 that Max couldn't really handle very well as he was still low from the afternoon. And, unfortunately, we saw a pattern of strong delayed hypos, to be expected after a long intense game.

But the good news was that we could maintain good glucose levels through most of the game, sliding under 100 mg/dL in the third set only. Every Libre spot check (big orange dots) led to a decision and Max was able to maintain an ideal range for an extended period. The Dexcom (green) would have been totally useless for that purpose: its curve looks decent here because it retrospectively averaged a decent evolution. It would unfortunately not have been helpful, as shown in the initial chart, in obtaining that relative stability.

Both devices tracked the post match situation correctly, the Libre being slightly more accurate than the G4 compared to the nightly BG meter tests, but not significantly so.

Thanks to the Libre, we almost had a normal game. 

Friday, January 9, 2015

Exercise, Lantus and Levemir comparison

Let's go back to the initial purpose of this blog, which was to document our adventures in sports and diabetes. Allow me to stress once again that what we are sharing here is our personal, limited experience. We all must remain constantly aware that with type 1 diabetes, one size does not fit all!

I will also skip our usual monthly report. Max's HbA1c was tested on the 24th of December (you can see our latest data on the right) and I was of course pleased to see a good result. The question "what is a good HbA1c?" is a complex one. Personally, I would like to obtain a slightly higher result - between 5.5 and 5.9 - but that is a personal impression rather than a firm opinion.

The Lantus issue


I have talked earlier about the issues we had with our Lantus injections. In order to keep our dawn phenomenon under control, we had to increase our Lantus dose. But in doing that, we were unfortunately increasing the frequency and intensity of our early night lows. I also began to suspect that the Lantus was responsible for the tremendous amounts of carbs we had to consume during some intensive sport sessions. In one case, we were even forced to quit the tennis training despite having taken 60 grams of uncovered carbs an hour or so before the training began. It looked as if the sugar was just evaporating into thin air. Since we were already using very little Novorapid, it couldn't be the reason, and we had very little room for tweaks in either its dose or its timing.

We took advantage of the second week of the Holiday period to attempt a switch from one 17 U Lantus injection to two unequal Levemir injections. Let's see what the results were in a typical example. What I am sharing below are the traces of two extremely similar days in terms of activity, meal content, timing and exercise.

The Lantus day

The Lantus day begins with a slightly reduced Lantus dose (16U instead of 17U) in order to avoid the early night low. We are quite successful, hovering for a while in the 90 to 100 mg/dL range, but a little before 3AM the Dexcom high alarm flags a steep rise (confirmed by the Libre) and we are forced to correct with a significant dose of 4U Novorapid (based on our own algorithm, taking into account the projected increase during the time the Novo will act). If we were on a flat 150, in a pattern that would not be predicted to rise, 1 or 2 units would have been enough. Around 5 AM, most of the Novorapid activity has occurred and we are back where we aimed, around 100 mg/dL. The dawn phenomenon restarts, so much so that the morning meal, taken at 7:50 AM, doesn't even show a spike. Max may have eaten a tiny bit less than optimal or moved a bit too much and has to correct a small low at the school recess. Good timing for a snack anyway. We then continue with a small noon meal because we know we'll have to preload carbs for the tennis anyway. We pre-load generously, tolerating a transient high that we know will disappear, and go to the training. You don't need to be a specialist to see that, by the end of the session, Max can barely run. More annoyingly, despite constant corrections and reduced evening doses, he'll keep crashing repeatedly until very late in the evening. We end up with a nice looking average of 101 mg/dL which is in fact very bad. SD is at 41, at the limit of the acceptable. But we have 17% of lows. That's definitely not acceptable.

The Levemir day

The overall profile of the Levemir day is visually very similar to the Lantus day, but there are actually a ton of differences. After 13 U of Levemir, we start smoothly sailing around 80 mg/dL and end up rising smoothly to 120 mg/dL at wake up time. By the way, can you spot the 3-4 sensor compression events in our nightly curve? NOTE: we correctly ignored the 1:30 LOW alert as a compression event. If we had panicked and shot "a juice", we would have been above 250 at 3 AM and we would have entered a new approximate correction cycle. The dawn phenomenon follows its course, the morning meal spikes a bit. Our second Levemir injection consists of 9U. The noon meal is similar, again with a possible slight overshoot or under-eat situation. We load fewer carbs pre-tennis, don't hit any significant low, see a muted evening meal spike and again hit a small actual low that is easily corrected with 4.5 grams of carbs (3 dextro-energy tablets), and sail smoothly into the night.

Ultimately, the Levemir day ends up with a 102 mg/dL average, compared to the Lantus 101 mg/dL average, but we had 2.5x fewer highs and 8x fewer lows. The SD of 26 is outstanding for a "tennis" day.
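For reference, the day-summary numbers (average, SD, share of lows and highs) come from nothing more exotic than the computation sketched below. The readings and the 70/160 mg/dL thresholds are illustrative, not our actual data:

```python
# Summarize a day of readings: mean, population standard deviation,
# and the percentage of readings below/above thresholds. Data invented.

def day_stats(readings, low=70, high=160):
    n = len(readings)
    mean = sum(readings) / n
    sd = (sum((r - mean) ** 2 for r in readings) / n) ** 0.5
    pct_low = 100 * sum(r < low for r in readings) / n
    pct_high = 100 * sum(r > high for r in readings) / n
    return round(mean), round(sd), pct_low, pct_high

print(day_stats([60, 80, 100, 100, 120, 140]))
```

The Lantus/Levemir comparison above is exactly this kind of summary: two nearly identical averages hiding very different SDs and low percentages.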

We'll stick with Levemir for the next few months and see how the story unfolds.

Thursday, January 8, 2015

Freestyle Libre : firewalling the upload

Since there seems to be some interest in blocking the data upload to the Abbott server, I'll provide a couple of options below.

Be aware that, in this type of situation, it is not possible (or at least extremely time consuming) to be 100% sure that one reliably blocks all accesses at all times. For example, a program that sees it is blocked from accessing its main server can decide to fall back to another location or protocol, either immediately or after a random delay. That is what makes detecting real life sophisticated data ex-filtration extremely hard and what made blocking services with multiple fallback options (think MS Messenger in its days) such a chore for system administrators.

A software update, or a server name or IP address change, could of course also bypass any measure you have implemented at a given moment in time. If Abbott decided to implement steganography to get at your data, there is very little you could do to prevent them from doing so, unless you want to reverse-engineer their software and spend a lot of time on it.

Lastly, if Abbott wanted to force you to connect before starting or using their reporting software, you would have no choice but to patch their program.

This being said, here is what I have confirmed to work with the initial release.

Firewall block at the local level on Windows 8.x


You'll want to go to Windows Firewall with Advanced Security (for example by typing "firewall" in the search box and then selecting the advanced option).

You'll create a TCP rule without bothering about specific ports, as you probably don't want to be in touch with that server anyway.

You'll add the current IP address of the Abbott Research server. (Note: this may change at any time, should Abbott decide to move its server or use proxies.)
And you will finally choose to name the rule and block the outbound connection (you can block everything, but I tend to create specific rules to avoid unexpected side effects). The screenshot on the left shows the filter applied, the Libre software started with a connected reader, and no packets flowing to that address in Wireshark. The screenshot below shows what you should see once the block rule has been properly applied.
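For those who prefer the command line to the Firewall GUI, an equivalent outbound rule can be created with netsh from an elevated prompt. Here is a small Python sketch that assembles the command; the IP address is a deliberate placeholder (TEST-NET-3 documentation range), so substitute whatever address your own Wireshark capture shows, since Abbott may move the server at any time:

```python
import subprocess

# Placeholder address: replace with the server IP observed in your own
# Wireshark capture. Abbott may change it at any time.
ABBOTT_SERVER_IP = "203.0.113.10"  # TEST-NET-3, deliberately unroutable

def build_block_rule(ip, name="Block Libre upload"):
    """Assemble netsh arguments for an outbound TCP block rule."""
    return [
        "netsh", "advfirewall", "firewall", "add", "rule",
        f"name={name}", "dir=out", "action=block",
        "protocol=TCP", f"remoteip={ip}",
    ]

if __name__ == "__main__":
    cmd = build_block_rule(ABBOTT_SERVER_IP)
    print(" ".join(cmd))
    # Uncomment on Windows, from an elevated prompt:
    # subprocess.run(cmd, check=True)
```

This creates the same rule the GUI steps above produce; remember to delete or update it if the server address changes.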

Firewall block at the router level


The above block will of course only work on the machine where it is active; it will not protect other computers you might use. If you use multiple computers, you may want to block the address at your router's firewall level, as in the example below. Again, this does not prevent the reader from connecting to the computer, or the local download/export of results; it only blocks the upload attempts. This time, Wireshark will show repeated failed connection attempts from your computer to the Abbott Research Center server.
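A quick way to verify that a block (local or router-level) actually works is to probe the server directly: a plain TCP connection attempt should now fail instead of succeeding. A small Python sketch; the address is again a placeholder for the server IP observed in your own capture:

```python
import socket

def can_connect(host: str, port: int = 443, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, unreachable...
        return False

# Placeholder: substitute the Abbott server IP from your own capture.
print("upload server reachable:", can_connect("203.0.113.10", timeout=2.0))
```

If the rule is applied correctly, the probe reports the server as unreachable while the rest of your Internet access keeps working.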

Other options

It is also possible to use whatever anti-virus security suite you happen to run, or even the advanced options of the Windows firewall, to prevent a specific application from having Internet access. This is solution specific and can't be covered in detail here.

At the moment, the process to monitor is the service Abbott installs on your computer. Note: when your reader stops being detected, it is often because that service has stopped; restarting it will usually restore the connection to your reader.

Other options such as DNS intervention, hosts file entries or routing modifications are of course possible, but I have found some of them somewhat unreliable once the reader had already connected. That could simply be due to caching issues or to the logic of the Abbott connection process. There are so many possibilities that they aren't necessarily worth investigating.

Wednesday, January 7, 2015

Reactions to the Libre data uploads and a few clarifications

I've received a lot of reactions to my posts about the Libre data dumps to Abbott's US servers. Some of them were highly critical of Abbott's attitude - "Let's sue them" "Let's report them". Others were more relaxed - "Who cares, everyone knows I am diabetic anyway". And non-technical users were a bit confused about what the different images and examples really meant. In this post, I will try to clarify some points.

Q: are you sure it really uploads that data?

A: yes, no doubt about it. Analyzing network traffic is not something I invented at my kitchen sink; it is as factual as anything can be.

Q: are you sure Abbott can identify the upload?

A: yes, there is no "technical" doubt that it can be done: the upload can be linked to the person who purchased the device, and it shows where the device is actually used. Whether Abbott actually does this right now depends on what they do with the data on their servers and how they link their databases.

Q: isn't it only for update purposes?

A: no, for several reasons:
  • Abbott could use the other, non-encrypted requests it sends to the other server for that purpose. They have two communication channels; how they use them is their choice. The unencrypted open channel could perfectly well be used for updates (and may be the one they will use - it seems they will update the computer software through that channel).
  • Abbott uploads everything that has been entered or collected in the reader. In terms of size and bandwidth, depending on how much you enter in the reader, that is 99.9% to 99.9999% overkill for software update or hardware performance monitoring purposes.
Q: is it intentionally hidden?

A: most likely.
  • If I were to find software behaving like this in an IT security context, it would definitely be classified as spyware (imagine a CAD program that covertly uploaded the designs you are working on to its developer).
  • There is no interface feedback that it is done, and the configuration files are intentionally placed in a hidden directory. Typically, if a program gives visual feedback (options, status), its config files may be hidden to protect them from user error or deletion; if an option is not reachable through the interface, one usually ships a user-editable config file with some explanations. Here, everything is hidden, and the peculiar design of the program forces you onto that hidden path.
  • Abbott uses two connection channels, one insecure and one "secure"; the exfiltrated data flows on the "secure" channel. To an average user, both channels are invisible. To an advanced user, the "secure" channel might be visible but its content inaccessible. Abbott may even have thought that the "secure" channel was completely out of users' reach. In any case, there is either an attempt to hide the program's behavior, or an attempt to secure it because someone knew potentially sensitive data would flow on it.

Q: assuming Abbott can but doesn't link data to individuals, is it still a problem ?

A: yes.
  • Even if Abbott isn't doing any linking yet, it could very well decide to do it later. Once uploaded, that data is unlikely to ever disappear. That is a typical issue with all cloud services, but at least they usually give you some control over being forgotten (erasing your Google history, for example, or asking Facebook to forget about you and delete your files). Whether they actually do it, and how quickly, remains an issue; however, the option is at least nominally present.
  • Even if Abbott doesn't ever link anything, they are, just like Adobe or more recently Sony, Luxleaks and countless others, at the mercy of external hackers or disgruntled employees who could dump tons of data and allow others to do the linking.
  • Even if Abbott is actually unable to do the linking, that anonymous data can be mined to provide information about endocrinology practices and clinics, research projects, competitor clinical trials, etc... Some of it will be detailed in a "scenario" post.

Saturday, January 3, 2015

Freestyle Libre board image

I've accumulated so much data and information over the holiday period that I think I will dump some of it without too many comments.

Ever wondered what's inside your Freestyle Libre reader? Here's the 43-megapixel answer (click here to download the file from a possibly temporary link). The shell of the reader is glued shut, but it is relatively easy to glue it back together.

It's hard not to be impressed - this is a seriously beautiful board - with a properly attached USB port (something Dexcom G4 users will envy), plenty of test points and the expected chips. 

Abbott went as far as putting a nice O-ring around its button, even if the presence of a strip tester makes waterproofing a bit moot.

Compared to some other medical devices I have taken apart, costing 20 times more, I can confidently state that Abbott put a lot of effort into doing things correctly.

And, just in case you wonder if a Freestyle Libre Reader was hurt during this experiment, let me reassure you: the reader is still 100% functional.

Thursday, January 1, 2015

Freestyle Libre RAW data (TI update)

More info from TI now officially available

Very quick update

It turns out that most of what I found out and explained in my Dec 4 and Nov 27 posts about the Libre RAW data format, plus some of the information I shared in private with readers who contacted me directly, actually describes how the RF430FRL152H - or, more precisely, the RF430TAL152H that Abbott uses - operates. I had decided to take a small analysis vacation and, sure enough, Texas Instruments released a ton of documentation as soon as I stopped looking.

The information is now available here:

That line of products is actually quite interesting as an enabler for different kinds of bio-sensors.

Having access to that in November would have saved me a couple of evenings and an arm cramp. On one hand, I am a bit sad to have lost time; on the other hand, I am quite happy to see my assumptions, including the ones about the DAC, confirmed. However, some of the things I have confirmed on my three sensors are strikingly similar to, but slightly different from, what the TI manual describes. The TAL could be either a custom variation of the FRL developed specifically for Abbott (is this what the A stands for? Not necessarily, as TI typically uses a technology-based naming convention) or a product that is in the pipeline for everyone but to which Abbott got early access. Time will tell, I guess.

As a side note, our third sensor suddenly stopped working (in interesting circumstances...) after ten days and was flagged as failed by the reader. This led to interesting insights into its error modes and into how the flags changed in the device's FRAM.

A bit of data analysis should, at some point, tell us if the "magic" of the Libre sensor is due to the quality of its signal or sophisticated algorithms.