**provide some data, including full raw scans**, for you to download [download]. This is the data set used for the charts at the bottom of this post.

The source code below contains the data I used for this regression. There may be some overlap with the files above, and some values may be missing. I could have used more than 100 values, but that would not have helped much and would have added to the tedium.

I am sorry that I am unable to provide a ton of nicely arranged dumps, but remember that this data was acquired in 2014 and early 2015 with various tools, including pencil and paper. I actually shared that data on GitHub previously, but removed it when it became clear I was getting zero data in return and tons of requests for the "formula".

```
import matplotlib.pyplot as plt
import numpy as np
from scipy import stats
mono = {'family' : 'monospace'}
# scanned values
xlist = [101, 101, 72, 161, 75, 69, 112, 203, 163, 154, 168, 93, 99, 66, 80, 105, 137, 124, 156, 277, 141, 135, 67]
# observed counts
ylist = [906, 892, 689, 1291, 755, 664, 954, 1605, 1360, 1340, 1485, 805, 865, 632, 818, 971, 1206, 1064, 1368, 2150, 1300, 1167, 641]
# some outliers (kept for reference, not plotted below)
x_outlier = [101, 153]
y_outlier = [1084, 1160]
# do the regression
gradient, intercept, r_value, p_value, std_err = stats.linregress(xlist, ylist)
print("Gradient and intercept, r, p, std", gradient, intercept, r_value, p_value, std_err)
# plot it
fig = plt.figure()
ax = fig.add_subplot(111)
ax.set_title('Libre: correlation reported values / observed counts')
ax.text(350, 2500, '{:_<10}'.format('intercept: ') + '{:06.2f}'.format(intercept), fontdict=mono)
ax.text(350, 2390, '{:_<10}'.format('gradient : ') + '{:06.2f}'.format(gradient), fontdict=mono)
ax.text(350, 2280, '{:_<10}'.format('r : ') + '{:06.4f}'.format(r_value), fontdict=mono)
ax.text(350, 2170, '{:_<10}'.format('p : ') + '{:.2e}'.format(p_value), fontdict=mono)
ax.text(350, 2060, '{:_<10}'.format('std : ') + '{:06.4f}'.format(std_err), fontdict=mono)
ax.set_ylim(0, 3000)
ax.set_xlim(0, 520)
ax.set_xlabel('Reported Value')
ax.set_ylabel('Observed Counts')
line = plt.plot([0, 300], [intercept, intercept + 300 * gradient], color='r', linestyle='-', linewidth=1, label="04/2015")
plt.legend()
plt.show()
```

Which gave:

```
Gradient and intercept, r, p, std 7.26235656465 181.083590507 0.99295809065 6.0805538057e-21 0.189073777001
```
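As a sanity check, the same slope and intercept can be recovered with a plain least-squares computation, using only the data lists from the script above. This is my own sketch, not part of the original script; it mirrors what `scipy.stats.linregress` computes for the slope and intercept:

```python
# Plain ordinary-least-squares fit of observed counts against scanned values,
# using the same data lists as the regression script above.
xlist = [101, 101, 72, 161, 75, 69, 112, 203, 163, 154, 168, 93, 99, 66,
         80, 105, 137, 124, 156, 277, 141, 135, 67]
ylist = [906, 892, 689, 1291, 755, 664, 954, 1605, 1360, 1340, 1485, 805,
         865, 632, 818, 971, 1206, 1064, 1368, 2150, 1300, 1167, 641]

n = len(xlist)
mx = sum(xlist) / n
my = sum(ylist) / n
# slope = covariance(x, y) / variance(x); intercept from the means
sxy = sum((x - mx) * (y - my) for x, y in zip(xlist, ylist))
sxx = sum((x - mx) ** 2 for x in xlist)
gradient = sxy / sxx
intercept = my - gradient * mx
print(gradient, intercept)  # expect values near 7.26 and 181.08
```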

The conversion applied is as follows, based on the previously derived parameters. (I kept the habit of masking to 14 bits because, in theory, that is what the TI chip should deliver...)

```
def LibreConvert(r):
    bitmask = 0x3FFF
    return ((r & bitmask) - 181.08) / 7.26
```
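To double-check the conversion, here is a runnable version applied to the first data pair from the regression above (raw count 906 was scanned as 101 mg/dL). This check is my own addition, not part of the original post:

```python
def LibreConvert(r):
    # mask to 14 bits, which is what the TI chip should deliver in theory
    bitmask = 0x3FFF
    return ((r & bitmask) - 181.08) / 7.26

# Raw count 906 corresponds to a scanned value of 101 in the data set above.
print(round(LibreConvert(906)))  # -> 100, i.e. within ~1 mg/dL of the scan
```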

Here are the results on the data set above.

(Captions for the individual comparison charts:)

- Small deviation; please note the scanned value is "in trend".
- Small deviation, trend uncertain.
- Small deviation, in trend.
- Small deviation, in trend, possible noise.
- Small deviation, in trend.
- Bigger deviation, but in trend from the 145 going down; the Libre doesn't know we corrected the fall.
- Flattish, very small error.
- Again a small deviation, fully in trend.
- Nice regular trend, spot on.
- Outlier, but in trend; the Libre doesn't know there's some exercise.
- Stablish, in trend, possible noise or trend change.
- Stable conditions, matching the scan.
- Deviation, but again in trend.
- Nice trend, nice match.
- Very large mismatch, but explainable if the trend is based on the previous minutes.
- In trend.

Now, to recap...

- I don't use the Libre anymore; this is based on 2014/2015 data. Some of you sent data, and I will definitely have a look at it, thanks.
- The match is near perfect, especially when the trend isn't changing abruptly.
- The mismatches (I could devote an entire post to mismatched graphs) __are always well in line with the previous trends and never against the trend__, which led me to examine the possibly predictive nature of the Libre's algorithm.
- In many cases, **direct interpretation of the raw data led to better accuracy against the BG meter!**
- Those results are quite close to the actual factory calibration slope of the Libre (edit: the Libre we had; again, I haven't tried any recent ones). I am confident this will be confirmed now that the official apk is in wide circulation.
- **I still consider those results to be insufficient for release.** That will remain my opinion until the full Libre algorithm is documented.