Friday, January 22, 2016

Article reviews 1: DIABETES TECHNOLOGY & THERAPEUTICS Volume 18


‘DTT’ has made its full special CGM issue available for free here. If you are a typical CGM user, this is a great opportunity to see how professionals think about your favorite diabetes management tool. I plan to review, summarize and comment on the articles I have read in full.

The Future of Glucose Monitoring by Satish K. Garg, MD – is an introduction in which the author states that he expects CGM to replace BG meters at some point during the next decade. That is obviously my wish as well, but a couple of major obstacles remain: cost, of course – which is why we need more competition in the field sooner rather than later – and the remaining odd behavior/calibration issues I have covered on this blog.

The meaty part begins with Continuous Glucose Monitoring: A Review of Successes, Challenges, and Opportunities, in which David Rodbard, MD addresses those issues and more. A few words on some of the points he raises:

Lack of approval by the FDA for dosing – is definitely an issue for the “establishment”, less so for most users, who usually develop their own reasonable strategies. On that point, I have mixed feelings. On one hand, the recommendation NOT to dose insulin based on CGM readings is baseless, as a well-functioning CGM trend provides more information than a single blood glucose measurement. But, on the other hand, a blanket dosing approval remains dangerous if it is followed blindly, as long as we have calibration, compression or temperature compensation issues.

Cost is a no-brainer. The technology has to get cheaper before every T1D or social security system can afford it. Cost will go down if there is real competition in the market. Real competition will come with more players, a more open architecture, non-captive data, less patent obstruction and fewer agreements not to step on other people’s toes… Many conditions. I am not too hopeful in the short term on that side…

Need for recalibrations: absolutely! Calibration MUST DIE (even if that means I have nothing left to write about). Chained measurements are as strong as their weakest link – although oversampling can solve some of those issues – and the user himself and his environment are a great source of errors.
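Oversampling, in this context, is just averaging: independent meter errors shrink roughly as 1/√n when n readings are averaged, so taking a few fingersticks per calibration instead of one already tames the weakest link. A minimal sketch (the 7% meter coefficient of variation is an illustrative assumption, not any device’s spec):

```python
import random
import statistics

def calibration_spread(n_readings, true_bg=120.0, meter_cv=0.07,
                       n_runs=10_000, seed=42):
    """Standard deviation of the calibration value obtained by
    averaging n_readings fingerstick results.

    Meter errors are assumed independent and Gaussian with a 7%
    coefficient of variation (an illustrative figure, not a spec).
    """
    rng = random.Random(seed)
    means = []
    for _ in range(n_runs):
        readings = [rng.gauss(true_bg, true_bg * meter_cv)
                    for _ in range(n_readings)]
        means.append(statistics.fmean(readings))
    return statistics.stdev(means)

single = calibration_spread(1)
averaged = calibration_spread(4)
# Averaging four readings cuts the calibration error roughly in half
# (standard error scales as 1/sqrt(n)).
```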

Periodic replacement of sensors: again, yes. Cost, of course. And not so much for the inconvenience itself – T1Ds are tough – but mostly because the hours following sensor insertion remain a large source of inaccuracies… which can take on huge proportions (essentially, pick a random slope) when combined with user calibrations.

Day-to-day variability in glycemic patterns is cited as an obstacle because it limits the predictability of findings in masked professional use. This is the only weak/sub-optimally argued point in the article, I think: day-to-day variability may be an obstacle to the up-sell of services by professionals or companies, but day-to-day variability and unpredictability of patterns is the strongest argument for CGM use as far as patients are concerned. Patients (and let me apologize on their behalf) do not live in the fairy-land world of nice meal curves, slow/fast carbs and optimal basals where non-CGM-aware endos imagine they are.

Time, implicit costs and inconvenience of uploads: a small issue in my opinion, but one that can become very annoying for practices that have to upload data from multiple devices into multiple proprietary software packages. That’s a VERY easy issue to solve though, and here is my recipe:
  • regulation agencies: please do not allow proprietary and closed formats!
  • do not turn a blind eye or let covert data theft fly under your nose.
It is so simple that it should not need to be restated, but let me do it once more. Commercial entities are all in love with Apple’s business strategy: the closed and walled ecosystem in which subscriptions, planned obsolescence and mandatory upgrades bring in tons of dollars. That model is absolutely outrageous in terms of healthcare, but they want to replicate it. Your own interstitial glucose data stream is now licensed back to you (as I feared earlier) and you become a captive subscriber to your own data. Your doctor then also soon becomes a captive of the system and, unless he wants to spend significant time developing his often sorely lacking IT skills, becomes the one who chooses your prison.

Granted, the walled garden has already been somewhat present in the field of drug prescription, where the MD’s choice is often oriented by marketing and by deals with social security or insurances. But if the drug does not work, or if there is another, better option, a good MD will move his ass to make sure his patients receive the best care. In this case, moving data around when the “owner”, who is not the patient, does not collaborate or actively prevents it is going to be almost impossible.

Reimbursements for physician time, inexperience and lack of training go hand in hand. However, I do not see these as huge obstacles. An obstacle to the acceptance of the technology by the medical establishment, yes. But, even though I was trained as an MD and I fully understand that MDs have to take a central (and possibly authoritarian) role in some circumstances, the management of a chronic condition such as Type 1 Diabetes is not one of them. You are the one in charge, you are the one who needs to develop skills, and the occasional quarterly MD advice should be seen as coaching. This being said, the author makes a very good point about how automated and standardized analysis of data would solve a lot of issues. Once again, open access and open standards are sorely needed. Rodbard uses the ECG analogy but does not push it far enough... A good question could be:
Where would ECG technology be today if we had started the field by “licensing real-time feeds of patient heart rates”, patenting infarction and arrhythmia detection algorithms, or “methods to transcribe electrical heart signals on paper by means of a moving pen”?
Lack of standardization of software methods for analysis of CGM data is mildly annoying. Again, it is an issue that can be solved by open standards, and an issue that will certainly not be solved if proprietary standards and systems are allowed to proliferate.

Clinical guidelines won’t hurt but don’t matter much from a patient point of view. A caring and concerned physician will remain a caring and concerned physician. The indirect beneficial impact of clinical guidelines might be to push technophobic doctors in the right direction.

A few additional comments on the article content.
 

In “Lag time of interstitial fluid glucose relative to blood glucose”, the author rightly notes that the situation is improving through better algorithms (a possible allusion to the Libre’s improved response time). That is, by the way, an area where Dexcom is sorely in need of improvements and probably lagging the competition. The real-life benefit of a higher sampling frequency (as long as the signal is good enough), combined with possibly predictive algorithms, is tremendous.

In “Confusion regarding interpretation of glycemic variability”, Rodbard again hits the nail on the head. The circus of different variability indexes serves no fundamental purpose. It is basically a sterile, bread-and-butter publication-generating machine. Of course, the author makes what I interpret to be the same statement in a much more polite way: “Nearly every measure of glycemic variability is very highly correlated with the overall or total SD”.
    Facetious parenthesis:
    • “very highly correlated with SD” is a nice way of saying that much of the work done in this field has led to absolutely no significant result.
    • “nearly every measure”  avoids vexing the 50 or so colleagues who published on the topic as each of them is allowed to think his measure is the one that is meaningful.
Rodbard mentions that CGM accuracy has now reached the 10% MARD threshold that has been demonstrated (on simulated patients) to be sufficient by Kovatchev. That is essentially restating the manufacturers’ claims, claims that have partially been replicated in clinical studies… and that should, in my opinion, be taken with a grain of salt. Rodbard partially addresses this issue in “Confusion regarding reporting of accuracy and precision of CGM sensors” and suggests some standardization here as well. This is indeed necessary but will not, in my opinion, have a major impact as long as external user calibration is required.
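For reference, MARD is nothing more than the mean of the absolute relative differences between paired CGM readings and reference blood glucose values; a 10% MARD means the sensor sits, on average, 10% away from the comparison method. A minimal sketch:

```python
def mard(cgm_values, ref_values):
    """Mean Absolute Relative Difference (%) between paired CGM
    readings and reference blood glucose values."""
    if len(cgm_values) != len(ref_values):
        raise ValueError("readings must be paired")
    rel = [abs(c - r) / r for c, r in zip(cgm_values, ref_values)]
    return 100.0 * sum(rel) / len(rel)

# A sensor reading 10% high on every sample has a MARD of exactly 10%.
print(round(mard([110, 220, 55], [100, 200, 50]), 6))  # → 10.0
```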
• I saw a lot of “horrible” things in the few years of G4 (non-AP) CGM data users kindly sent me. While that analysis suffers from many limitations, it was quite clear that during the first 10 to 24 hours, the CGM results were a calibration timing lottery at best, and a hopeless mess if the sensor/sensor wound did not cooperate. The G4 (non-AP) was clearly not performing up to specs in the standard scenario.
• Calibration during compressions could cause huge discrepancies as well. Even if that problem can be partially solved by black-box “silent rejection” of calibrations in those circumstances, it clearly was not implemented at that time.
• Users don’t have access to the very accurate BG meters used in clinical studies. Calibrating a perfect CGM with a couple of readings taken by an ISO 15197:2003 compliant meter is a very different situation. Here’s a somewhat theoretical view of the relative performance of BG meters according to the standards (and excluding user errors):
[Figure: simulated relative performance of BG meters under the ISO standards]

Here is what 100,000 simulated perfect CGM runs would look like after an initial double calibration with an ISO 15197:2003 compliant meter (red) vs. an ISO 15197:2013 compliant one. The central dark line is the perfect measure of the ISIG.

[Figure: 100,000 simulated perfect CGM runs after an initial double calibration]
• Failing sensors (which at one point were estimated to be 1 out of 5 on a significant sample of early Libre adopters) never seem to make their mark in studies. It could be either that sensors used in clinical tests are cherry-picked by the manufacturer, or that the failing ones are somehow masked in the “people who left the study” category.
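The kind of simulation described above can be sketched in a few lines: a “perfect” linear sensor is two-point calibrated against a meter whose relative error is drawn uniformly inside the ISO band (a simplification of the actual 95%-of-readings criterion; the calibration glucose levels are illustrative assumptions):

```python
import random

def median_display_error(band, n_runs=20_000, seed=1):
    """Median absolute display error (mg/dL) at a true BG of
    100 mg/dL, for a 'perfect' linear sensor two-point calibrated
    against a meter whose relative error is uniform within +/- band."""
    rng = random.Random(seed)
    cal_bg = (120.0, 180.0)  # assumed true BG at the two calibrations
    errors = []
    for _ in range(n_runs):
        # Meter readings used for the two calibrations.
        m1 = cal_bg[0] * (1 + rng.uniform(-band, band))
        m2 = cal_bg[1] * (1 + rng.uniform(-band, band))
        # A perfect sensor's raw signal is proportional to true BG, so
        # fit the line through the two (true BG, meter reading) pairs.
        slope = (m2 - m1) / (cal_bg[1] - cal_bg[0])
        intercept = m1 - slope * cal_bg[0]
        displayed = slope * 100.0 + intercept  # extrapolated below range
        errors.append(abs(displayed - 100.0))
    errors.sort()
    return errors[n_runs // 2]

err_2003 = median_display_error(0.20)  # ISO 15197:2003: ±20% above 75 mg/dL
err_2013 = median_display_error(0.15)  # ISO 15197:2013: ±15% above 100 mg/dL
```

Because 100 mg/dL lies below both calibration points, the fit is extrapolated and the meter errors are amplified: the “pick a random slope” lottery described above.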
NightScout and its influence in pushing the industry and regulating agencies is acknowledged. This is great!

In the software table, the most excellent Dexcom Studio is still listed as the analysis tool for Dexcom data. Unfortunately, it has now been replaced by “Dexcom’s Jail” for the G5, which might one day evolve into a decent tool, but is essentially Dexcom’s trump card in the establishment of a proprietary walled garden.

In the “Controversy regarding clinical benefits” section, Rodbard perfectly articulates the essential question that should be asked from a medical point of view:

‘‘Did the introduction of CGM result in a change in the relationship between risk of hypoglycemia and HbA1c achieved?’’

From a patient point of view, the question “How does it improve my everyday life?” is even more important. From a purely clinical point of view, our HbA1c actually increased from 5.4 to 5.6 when we started to use a CGM, but the number of activities we attempted, or felt freer to attempt, increased.

Our quality of life improved: that is what ultimately matters in a disease like T1D.

This article is, on the whole, a very good overview of the CGM landscape, the remaining challenges and future opportunities. I strongly recommend that you read the full paper if you have the time.

But let me insist on one point:

Please note that many of the problems Rodbard identifies would be solved by open data formats and open access to that data.

     
