Thursday, October 15, 2015

ResearchKit is Official at Duke: Autism & Beyond

Over the past 6 months there's been something brewing at Duke ... something that we're now incredibly excited to share with the world.

But first, a bit of background:

  • 1 in 68 children will be diagnosed with Autism
  • Autism can be diagnosed as young as 18 months old
  • The average age of Autism diagnosis in the US is over 5 years old
  • A child's brain will grow at a rate of 700 synapses/sec in the first years of life
  • 70 counties in NC have no access to a childhood mental health specialist
  • In Africa, where there are approximately 500 million children, there are only about 50 childhood mental health specialists

Do you detect a need?

We do, too.

Thus, Autism & Beyond was born. Autism & Beyond is an incredible new ResearchKit study that builds upon the groundbreaking work of Geri Dawson, Helen Egger and Guillermo Sapiro, who over the past several years have been working to refine novel video algorithms that can analyze and detect a child's emotion in real-time. They've been conducting studies at Duke clinics using an iPad prototype app for almost 2 years.

In early 2014 I had the pleasure of working with Kathleen Campbell, a wonderful 2nd-year medical student on her inpatient Pediatrics clerkship. I was her attending physician at the time. Shortly thereafter, and knowing I had an interest in mobile technology, she gave me a demo of an app that the aforementioned team had put together. The idea and technology were amazing. I thought it was great work, although I really had nothing to add at the time. I looked forward to seeing the results of that research.

Fast forward to March 2015, and the announcement of ResearchKit. It was clear that we needed to put this technology to the test, and quickly. We cast a wide net looking for "shovel-ready" projects, and of course, the autism app Kathleen showed me bubbled to the top. The project was already underway (with an iOS app no less!), and had a great team that had already made significant strides in this area. It was a natural fit.

It was also a remarkable coincidence that at this exact time within the Duke Institute for Health Innovation (DIHI) we had just hired two talented mobile developers, Mike Revoir and Jamie Daniel, who were (are) passionate about mobile technology, health, and research. They were so excited about the project that they were eager to dive in even before their first official day of work! As the scope of the project grew, so did the number of individuals and teams involved. In the end, it was a peerless example of cross-institutional collaboration across Duke University and the Health System. This project couldn't have happened without each of them.

Autism & Beyond

So what about the actual app? I could explain it in detail here, but seeing is believing, so go download it now! And if you want more info, check out the website. Even if you don't meet the eligibility criteria, we've made it simple to get a taste of the cool technology that's gone into it without ever signing up.


The app comes down to our need to know one thing: could we one day use a mobile phone to automate screening for conditions such as autism or anxiety? To start, we first need to know whether it's even feasible to analyze facial expressions on such a small device. That's what this study is intended to determine: feasibility.

As you can see from the screenshot above, the software algorithms developed by Guillermo and his team not only detect facial features, but also expressions, and can do so in real time. The data from this study will help to refine those algorithms.

In addition to the facial recognition pieces, we've also included several critical questionnaires with the help of Helen and Geri. For example, we ask about temper tantrums and provide feedback to users regarding where their child falls compared to his/her peers.

Additionally, study participants will be able to see how many other families have enrolled as well as a few additional aggregate data points:

Check out the number of enrolled
participants in near real-time!

While the release of the app marks the end of one chapter (and a whole lotta work by our incredible team!), it's clearly just the beginning. We hope that the app will have an impact in the US, but also plan to roll it out in China and South Africa soon. The more children we can reach, the more we can help. While ResearchKit allows us to reach millions, we still need to take care of one child at a time, and ensure that those children have access to the resources they'll ultimately need for full diagnosis and treatment. That's going to take even more teamwork!

Tuesday, October 13, 2015

Duke's on FHIR (for real this time)!

In June I described our work to date on integrating the SMART and FHIR APIs into our Epic-based EHR.

As a recap, we started this process in 2014 with a custom Android app that pulled patient problems, medications and demographics directly from the EHR via simple REST-based APIs. As we became more familiar with SMART and FHIR, we realized the value in joining forces with a common standard. It was never our intention to create something that would only be useful to us, and it was clear that the momentum around FHIR was building.

Fast-forward to January 2015, when we had our first SMART apps running in our proof-of-concept environment. It could be done! We followed this with integration of several additional apps and a demo at HIMSS in Chicago. You've seen this before:

This was all fine and dandy, but it was still all in our proof-of-concept system with "fake" patient data.

Since that time our amazing development team, led by Felipe Polo-Wood, has been diligently working to move this infrastructure into our production environment, an important milestone in order to show that SMART and FHIR can do more than just play in the sandbox.

I'm happy to report that as of August 26, 2015, the infrastructure has been live for some Duke-specific internal use-cases. In fact, Felipe was so excited to share the news that a screenshot was waiting in my inbox that morning, demonstrating that the systems were indeed calling the FHIR APIs, and that the transition from the outdated infrastructure FHIR was intended to replace was seamless:

Duke was officially on FHIR!

But not ones to rest on their laurels, the team immediately got to work to enable a more visible example of what FHIR can do: a true SMART-compatible app, Pediatric Growth Chart.

Drumroll ...

On October 9, 2015 I successfully logged into our production system for the first time to view real patient data in a FHIR app! I'd love to share screenshots with you, but they contain real patient data, so I can't! Let me say that again: real patient data, via FHIR, within Maestro Care, our Epic-based EHR.

And, of course, the best is yet to come!

Kudos to Felipe and the rest of the incredible CATS development team for making this happen. It has truly been a team effort, and I know they share the same enthusiasm for this project as I do, because they're always smiling!

The incredible CATS team:
  • Felipe Polo-Wood (Señor Manager)
  • Vince Guaglione
  • Lusia Li
  • Luiz Omori
  • Carrie Porterfield

Wednesday, July 8, 2015

White House Champions of Change in Precision Medicine: Duke's Commitment

Today I had an opportunity to attend a White House event highlighting work in the field of precision medicine. I was joined by my colleague Geoff Ginsburg who directs the Duke Center for Personalized and Precision Medicine (video replay here, with Geoff's comments at 2:08:30). As part of the event, Duke was highlighted for our commitment to precision medicine. This was for two reasons:

  • The innovative MeTree platform created by Geoff, Lori Orlando and the rest of the MeTree team. This platform provides a way for patients to work together with their doctors to improve the recording of their detailed family health history - information that is critical to the success of precision medicine.
  • The fact that the MeTree platform will be integrated into our Epic-based EHR using SMART on FHIR. Duke is an "Implementer" of the Argonaut Project, and was the first health system with an Epic-based EHR to run unmodified SMART apps directly (see our HIMSS demonstration video here).

As I've said elsewhere, we're on the verge of a renaissance in healthcare technology modularity and interoperability, led by open and familiar standards. The same principles that have led to the successful mobile app ecosystem are being applied to healthcare, with the result that many more innovators will have access to tools that allow them to build EHR-compatible apps. The problems in healthcare are enormous, and the more brilliant minds we have focused on these problems, the more likely it is that we'll find compelling solutions, and soon.

Geoff shared a few comments with the group at the event today, which I've shared here with his permission:
Good afternoon. My name is Geoff Ginsburg and I direct the Duke University Center for Applied Genomics and Precision Medicine. 
It is an honor for our work to be recognized at this Champions for Change for Precision Medicine event. I want to acknowledge up front the support of the Duke Health System, of my colleague Ricky Bloomfield (Director of Mobile Technology Strategy and hospitalist at Duke) who is in the audience and of Lori Orlando (a health services researcher and internist at Duke) who could not be here today but who has been key to the development of this idea and making it real. 
Today we are announcing the development of a platform that will make it easier for the patient to provide information about their family history to their provider and for providers to access family history and risk information to better care for their patients -- information that is critical to precision medicine. 
Several years ago we recognized that family history is fundamental to optimizing effective clinical approaches to personalized and precision medicine. However, our research showed that seldom, if at all, was a patient’s family history captured and adequately documented by providers. Furthermore, when histories were taken, there were challenges in interpreting the risk information in a multigenerational family history.

How many of you in the audience have had a truly detailed family history taken by your doctor and learned something from the results?

To address this challenge we created a patient-facing, web-based, evidence-based software platform to capture family health history – called MeTree. Patients talk to their families and loved ones about what illnesses family members have had and their age of onset and enter the information via the web into our software platform. The information is used to calculate risks for developing disease, and the results are reported back both to the provider and to the patient, creating an effective provider-patient interaction about their hereditary health risks and what to do about them. 
Two years ago we were fortunate to be funded by National Human Genome Research Institute to expand the reach of this platform to five different health systems across the country representing a variety of care environments and demographic groups. 
Now thousands of patients are learning about their family history of disease and using that information with their providers to get appropriate screening, genetic counseling and testing and taking actions to enhance disease prevention. 
Duke is pioneering the use of open, vendor-neutral standards espoused by the Argonaut Project. These standards will be in place by year’s end to enable this platform to be integrated into multiple patient portals and EHRs, which will allow near-universal accessibility of family history information to patients and providers seamlessly -- improving shared decision making related to prevention of inherited disease. 
With commitment comes responsibility. We will now get to work to honor this commitment to the President and to our patients. It is our hope that this work will help facilitate many more innovations from health systems and technology companies in the future so that we can realize our shared goal of higher quality and more cost-effective healthcare in our country.

Wednesday, June 24, 2015

Duke's on FHIR (but it's ok)!

Current EHRs are among the most complex pieces of software ever written. They serve a critical role to help standardize clinical workflow, facilitate billing, and integrate simplistic forms of clinical decision support, yet tremendous effort is required to customize EHRs to meet the needs of diverse hospital systems.

We felt there had to be a better way.

Starting in 2012, we investigated ways to create a framework built on top of our Epic-based EHR that would allow us to access the EHR in a standard way from any device or platform. By the summer of 2013 we had a functional proof of concept that allowed us to access patient demographics, problem lists, medications and more from a simple Android app. Duke Apps Supporting Healthcare, referred to internally as DASH, was born!

Around this same time, we started learning more about a similar effort underway at Boston Children’s Hospital called SMART (Substitutable Medical Applications, Reusable Technologies) that incorporated a new REST-based open API called FHIR (Fast Healthcare Interoperability Resources, supported by HL7). Since our goal was general ease of use and interoperability, it made sense to join this effort, originally funded by the ONC via the SHARP program. There were already several proof-of-concept apps written to be SMART on FHIR compliant, so from a practical standpoint, this made sense.
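To give a flavor of what "REST-based open API" means in practice: FHIR exposes clinical data as plain JSON (or XML) resources that any client can fetch and parse. Here's a minimal sketch in Python, using a hypothetical DSTU 2-style Patient resource as a server might return it from a GET request -- the payload is invented for illustration, not Duke's actual data:

```python
import json

# A minimal, hypothetical FHIR (DSTU 2-style) Patient resource,
# as a server might return it from GET /Patient/123.
patient_json = """
{
  "resourceType": "Patient",
  "id": "123",
  "name": [{"family": ["Doe"], "given": ["Jane"]}],
  "gender": "female",
  "birthDate": "1980-04-02"
}
"""

patient = json.loads(patient_json)

# FHIR allows multiple names per patient; take the first for display.
name = patient["name"][0]
display = f'{name["given"][0]} {name["family"][0]}'

print(display)               # Jane Doe
print(patient["gender"])     # female
print(patient["birthDate"])  # 1980-04-02
```

That's the whole appeal: no proprietary client libraries, no arcane message formats -- just HTTP and JSON, which every modern developer already knows.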

We updated our code to be compliant with both SMART and FHIR and as of January 2015 we became the first Epic-based hospital system to run unmodified SMART apps within our EHR (in our proof of concept environment).

We were thrilled to present this work at the HIMSS 2015 national conference, and you can view a sample of the demonstration here (UI bits blurred at Epic's request):

In order to have a compelling demo, we wanted to show several types of apps running in multiple environments. We chose:

  • Growth Chart, an open-source pediatric growth chart app with an award-winning interface
  • Meducation RS, a closed-source app by Polyglot that presents a patient's medication list in simple language translated into 21 languages
  • Duke PillBox, a skill-based interactive learning tool to help patients teach themselves how to take their medications as part of the discharge process. This was developed by MedAppTech for a Duke research initiative.

We demonstrated each of these apps running in both the Epic desktop environment as well as the Epic mobile apps for iOS. Once the infrastructure was in place, it took less than 5 minutes to add these SMART apps to our mobile EHR. Playing with FHIR has never been so fun (or easy)!

Finally, we also demonstrated a native iOS app, Pediatric Growth Charts, that we launched with the patient context from the Epic iOS app.

So, we demonstrated an open source app, a closed source app, and an internally-developed app all functional within desktop and mobile EHR environments, plus a native iOS app to boot. And we're just getting warmed up!

Probably the most frequent question I get is: How did you do it?

Answer: Very carefully.

Ok, so the real answer is that this was a development effort in the truest sense of the phrase. Prior to our switch to Epic, Duke had a home-grown EHR, which means we have some seriously talented developers here who know how to write production-grade EHR code. In this case, we were able to use many of the web services already provided by Epic and simply add a FHIR wrapper. Some data elements required a bit more work. To tie it all together, we wrote a rate-limiting and authorization server in Node.js and installed MITREid Connect to handle authentication.
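Conceptually, a "FHIR wrapper" is just a translation layer: take the proprietary payload an existing web service already returns, and re-shape it into the corresponding FHIR resource. A toy sketch in Python -- the field names on the proprietary side are invented for illustration, and a real mapping (ours included) also has to handle coded terminologies, dates, and status fields:

```python
# Hypothetical proprietary payload from an existing EHR web service.
legacy_allergy = {
    "PAT_ID": "123",
    "ALLERGEN_NAME": "Penicillin",
    "REACTION": "Hives",
}

def to_fhir_allergy(raw):
    """Re-shape a proprietary record into a simplified FHIR
    AllergyIntolerance-style resource (illustrative subset only)."""
    return {
        "resourceType": "AllergyIntolerance",
        "patient": {"reference": f"Patient/{raw['PAT_ID']}"},
        "substance": {"text": raw["ALLERGEN_NAME"]},
        "reaction": [{"description": raw["REACTION"]}],
    }

resource = to_fhir_allergy(legacy_allergy)
print(resource["resourceType"])  # AllergyIntolerance
```

The wrapper approach meant we didn't have to rebuild the underlying services -- just teach them to speak a common language at the edge.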

Is this the path we'd recommend for everyone? No, probably not. Fortunately, most hospital systems won't have to write their own SMART on FHIR implementations. Through the Argonaut Project, many major EHR vendors and academic medical centers have committed to FHIR as well as the OAuth profiles needed to make it all work. This includes Epic. So hang tight, and your EHR vendor of choice will likely soon provide you the tools you need to kindle the flame.

Duke is involved in Argonaut as an "Implementer," which means we have real-world experience implementing apps that use the standard, and have provided feedback to improve both SMART and FHIR.

As I've said before, there probably hasn't been a more exciting time to work in healthcare technology. Between novel patient engagement tools such as HealthKit and Google Fit, cutting-edge research platforms such as ResearchKit, and now truly modern APIs that will usher in a new generation of substitutable EHR apps, there's plenty to keep us busy, and most importantly, to help patients take better care of themselves and providers take better care of patients.

And the best part? Things are just starting to heat up ...

Monday, May 25, 2015

The Apple Watch: Counting Calories Counts

I've now spent a few weeks with Apple Watch, and while there's a lot to talk about, one feature in particular has "surprised and delighted" me, and it's not the one I was expecting.

I exercise regularly, but I wasn't expecting Apple Watch to help me much with respect to my routine. That's not because I didn't think it would be accurate or easy to use, but simply because Apple Watch wasn't made for me: I'm a swimmer.

Granted, it's been shown that Apple Watch can withstand a good swim, but it's still not designed to track swimming (I use my Pebble with the app for that), and Apple specifically discourages it.

So rather than lament Apple Watch's neglect of swimmers the world over, I went for a walk. A few of them, actually (dog needs exercise, too). And what I saw surprised me. Check out the following two workouts that Apple Watch saved to the Health app. Notice anything interesting?

I did a double-take when I saw these results, but then it dawned on me what was going on, and it's super cool. If you noticed, the first workout was a 0.93 mile walk that lasted 17 minutes, during which I burned 66 calories. The second workout was only 0.91 miles, lasted only 14 minutes, yet I burned 72 calories!

So how could a shorter walk (both in distance and duration) lead to more calories burned? While you mull that over, take a few minutes to head over to ABC News to view a 5-minute video about Apple's secret fitness lab. I'll wait.

Wasn't that cool? Apple is amassing what is probably the world's largest and most complete set of physiologic data during exercise (over 18,000 hours and counting), all while volunteers wear Apple Watch. This testing takes multiple factors into account, including activity level (accelerometer), heart rate, temperature, and probably also factors in height and weight when available, although I don't have access to their algorithms to know for sure. The volunteers in the fitness lab also wear specialized (and very expensive) gear designed to accurately measure caloric expenditure. These data can then be used to create accurate data models to fit almost every profile, leading to an extremely accurate estimate of calories burned.

In other words, Apple is validating the watch to be the most accurate consumer-oriented calorie-counting machine ever created.

Of course, this explains the "discrepancy" in my workout numbers. In the second workout, I had to walk much quicker in order to cover 0.91 miles three minutes faster than the 0.93 miles I walked the day before. Apple Watch knew I was working harder because it was tracking my heart rate every 5 seconds for the duration of the workout, and my heart rate reflected the increased activity.
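The arithmetic bears this out. Plugging in the numbers from the two workouts above:

```python
# Walk 1: 0.93 miles in 17 minutes; Walk 2: 0.91 miles in 14 minutes.
pace1 = 0.93 / (17 / 60)  # average speed in mph
pace2 = 0.91 / (14 / 60)

print(f"Walk 1: {pace1:.2f} mph")  # Walk 1: 3.28 mph
print(f"Walk 2: {pace2:.2f} mph")  # Walk 2: 3.90 mph
```

Nearly 20% faster on the second walk -- exactly the kind of difference a step counter alone would miss, but a heart rate sensor sampling every 5 seconds picks up easily.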

Lesson: to burn calories, nothing can replace good old-fashioned, heart-pumpin' aerobic exercise!

Want to know the coolest part? Measuring heart rate is just the beginning! As I've mentioned before, now that we have a computer in constant contact with our skin, the market for transcutaneous sensors is going to explode, and the more data we collect, the more accurate these measurements will be.

Just hurry it up with the swim tracking, ok, Apple?

Tuesday, April 14, 2015

ResearchKit is upon us!

Today, Apple has unveiled the open source code to their ResearchKit framework. You can find it here:

I was pleased to see the code hosted on GitHub, which I've found to be exceedingly user-friendly and which we already use here at Duke Medicine.

Also, not only has Apple released the code for ResearchKit itself, but they've also released the code for all 5 of the ResearchKit apps developed to date, as well as the back end code used for those apps, called AppCore. That's the power of open source, folks!

We've been discussing ResearchKit internally since it was announced in March. Anyone waiting for the code to be released before starting on their ResearchKit apps is already behind. Why? I'd estimate that over 90% of the effort required to create a ResearchKit app comes before a single line of code is written. The code itself is simple and straightforward, making it easy to create a consent workflow and make use of active tasks.

But the real work comes in designing the study itself. What do you want to study? Why? What is your target population? Who gets access to the data? How many versions of the consent do you need? Do you want it localized? US? International? Has it been approved by the IRB? The list goes on.

In my opinion, this is the future of research for a certain category of studies that require access to large numbers of individuals for more refined data, or that need access to a very specific and hard-to-reach patient population, such as one with a rare disease.

I've already spent too long on this blog post. Our developers are jumping into the framework as we speak, ready to begin our study implementation. Time to dig in!

Saturday, March 21, 2015

Meaningful Use of Patient-Generated Health Data

This week the NPRM for Meaningful Use 3 was made available in "unpublished" form on the Federal Register site. It seems that one of the primary aims for MU 3 is to streamline the set of objectives applicable to eligible providers (EPs), eligible hospitals (EHs) and critical access hospitals (CAHs).

The new item most interesting to me is Objective 6: Coordination of Care through Patient Engagement (starts on page 103 of the linked document).

This proposed objective aims to "Use communications functions of certified EHR technology to engage with patients or their authorized representatives about the patient's care" and employs three measures:

Measure 1: >25% of all unique patients "actively engage with the electronic health record made accessible by the provider" either by 1) viewing, downloading or transmitting to a third party their health information; or 2) "access[ing] their health information through the use of an ONC-certified API that can be used by third-party applications or devices."

I've previously discussed the Argonaut Project Implementation Program and its relation to the SMART on FHIR project. The FHIR APIs and added functionality of the SMART project (OAuth, OpenID) will dramatically lower the barrier for third-parties to easily add functionality and significant value to current EHRs. While these APIs are already enjoying broad support even before they are complete, seeing this emphasized in the MU 3 NPRM is a testament to their importance.
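For the curious, the OAuth piece of that stack is refreshingly ordinary at the HTTP level: a SMART app kicks off authorization with a standard OAuth 2.0 authorize request. Here's a sketch of constructing one in Python -- the endpoint URL, client ID, redirect URI, and scopes are all placeholders, not a real deployment:

```python
from urllib.parse import urlencode

# Placeholder values -- a real app gets these from client registration
# and from the FHIR server's conformance/metadata endpoint.
authorize_endpoint = "https://ehr.example.org/oauth/authorize"
params = {
    "response_type": "code",                        # authorization code flow
    "client_id": "my-app-id",
    "redirect_uri": "https://app.example.org/callback",
    "scope": "launch patient/*.read openid",        # SMART-style scopes
    "state": "abc123",                              # anti-CSRF token
    "aud": "https://ehr.example.org/fhir",          # the FHIR server in play
}

authorize_url = f"{authorize_endpoint}?{urlencode(params)}"
print(authorize_url)
```

The user signs in and approves, the app exchanges the returned code for an access token, and from then on it's plain FHIR REST calls. No bespoke integration contracts required -- that's the point.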

Measure 2: For >35% of all unique patients, a secure message should be sent using electronic messaging function of CEHRT to the patient, or in response to a secure message sent by the patient.

It's critically important that we encourage direct engagement and interaction between patients and providers, and this measure intends to do just that.

Measure 3: "Patient-generated health data or data from a non-clinical setting is incorporated into the certified EHR technology for more than 15 percent of all unique patients."

This is exciting. While patient-generated data can come in many forms, including manual entry by patients, this measure will only be achievable if we employ technologies that reduce or remove such barriers. Apple's HealthKit is by far the easiest-to-deploy tool to facilitate this data handoff currently, and it's available right now. We're hopeful an Android-equivalent will be available soon for patients with those devices (Google Fit doesn't yet ... fit that purpose).

Neither SMART on FHIR nor HealthKit are yet widely deployed or adopted, but these technologies will undoubtedly be critical to ushering in the learning health system, and it's great to see APIs and patient-generated data being emphasized in the latest NPRM.

MU 3, welcome to the 21st century!

2015-03-24 UPDATE: Just to make sure this is clear, MU 3 is still draft at this stage, and the content is subject to change. Also, attestation for objective 6 will require meeting only 2 of 3 of the measures listed above.

Monday, March 9, 2015

ResearchKit - More Details

ResearchKit is exciting, as I've already noted. Apple has posted some additional information in the form of a Technical Overview document. This document publicly sheds a little more light on what ResearchKit will enable.

Apple describes three "modules" available within ResearchKit, including Surveys, Informed Consent and Active Tasks.


Surveys

ResearchKit provides a standardized interface to quickly build surveys. These modules are already localized. This would work for the majority of current research use cases.

Informed Consent

One of the most significant inclusions is native informed consent capability, as you may have seen in the keynote:
Credit: Apple
This is something many researchers have been scrambling to buy, create or produce since it's not currently a common feature of modern EHRs. The fact that this is now available out of the box and in open source form is compelling. The informed consent mechanism is flexible and takes into account the use of waivers, use of institution-specific ethics language, and also provides the ability to insert comprehension tests to ensure the patient has capacity to sign. The consent framework then generates a PDF for upload or email.

Active Tasks

As demonstrated in the video during the keynote today, ResearchKit provides the ability to capture patient interactions, which Apple calls "Active Tasks." The Active Task modules currently available in ResearchKit include:
  • Motor Activities
    • Gait, tapping
  • Fitness
    • 6-minute walk
  • Cognition
    • Spatial memory
  • Voice
    • Phonation
All of these are accomplished using hardware built into the phone itself.

Open Source

Probably the most significant innovation here, however, is the fact that this is all (or will be soon) open source. For this to succeed and expand, it's vitally important that academic medical centers keep this in mind when developing their apps, and that they choose to share the source of their own apps so other researchers can build on their foundation. This collaboration will be the key to accelerating the transition to a Learning Health System.

ResearchKit and the Future of Healthcare

Apple dropped a pretty big bombshell today with the announcement of ResearchKit. And probably the biggest part of the announcement was this:

Yes, ResearchKit is fully open source. Why is that important? Because it means that this technology will eventually be available on any platform. Yup, including Android. And Windows. And whatever else comes in the future.

And it also means that integration between ResearchKit and other emerging healthcare technologies will be possible, including the SMART on FHIR platform, which is leading the way to modernize healthcare interoperability.

Given Apple's commitment to privacy, I have no doubt that the healthcare industry will quickly adopt this new platform, as evidenced by the high-quality institutions already participating.

As I've mentioned previously, the Learning Health System represents the vision of a world in which we are constantly learning from a stream of high-quality data, using that data to quickly make even better decisions, even at the point of care. ResearchKit will help us reach that goal even more quickly, possibly even before the 10-year goal as set forth by the ONC.

Let's get started!

The Why of Wearables

Ten years from now, 2015 may well go down as the Year of the Wearable. Activity trackers are plentiful and accurate (including those built-in to your phone), Android Wear devices are now becoming more refined, the Pebble will see its first substantial upgrade (including 10-day battery life), and, of course, the Watch will launch next month after additional details are revealed later today.

So why have these technologies recently become interesting and relevant? Did some Silicon Valley innovator simply decide that we needed more technology on our wrists (and *POOF*! VC funding suddenly made it happen)?

Of course not. The truth is actually far more interesting, and, when properly understood, is the lens through which we can peer into the future.

Time for a history lesson ...

My grandfather worked on the UNIVAC computer during his time in the Army at Ft. Meade, MD in the 1950s. It was large (and the battery life was really bad...). This is an example of a UNIVAC system that was used by the Navy:
Credit: Wikipedia
At this nascent point in computing history, could anyone have envisioned a device that's orders of magnitude more powerful, yet small enough to fit on your wrist? Perhaps (see #11). But truth is stranger than (science) fiction, and the reality is that in the realm of personal computing we've easily surpassed even the most fantastic futuristic visions of the 1950s.

The invention and subsequent miniaturization of the transistor have accounted for this success, which has followed a trajectory known as Moore's Law, which I won't rehash here. Some have warned that Moore's Law is coming to an end, at least via silicon. I wouldn't be so quick to throw in the towel. The progress is staggering (note a version of the UNIVAC at the bottom):
Credit: AMD via Technology Review
This dramatic miniaturization allowed my family to purchase its first desktop computer in the early 1990s - a Packard Bell with a 66MHz AMD chip inside. I have such fond memories playing Myst with my dad. I had a desktop in college as well (this time a Dell with a 450MHz Pentium II), and it wasn't until I started medical school in 2004 that I owned my first laptop, a sturdy IBM Thinkpad.

This miniaturization continued and I bought my first iPod touch in 2009 to use as a test device while I was teaching myself to write iOS apps in residency. This was followed by an iPad in 2010 (the day they were released, although I should note that the iPad was most certainly not the first tablet computer on the market), and then pretty much every iPhone since then (I now use an iPhone 6 Plus - the first iPhone that actually fits my hands). My first "smart watch" came in the form of the Pebble in 2013, and I've since used a plethora of other gizmos and gadgets that would fall in the "wearables" category.

Notice a trend here?

Computers have continued to progress towards smaller and faster devices, and as they've done so, new applications for that technology have inevitably been the result. "Wearables" are simply the next step in that logical progression.

So don't be surprised as these devices start to pop up in ever smaller and more discreet places, such as the buttons on your shirt, woven into your socks (you gotta know how much your feet are sweating!), mixed with your food, in your medicine ...

If we've learned anything from the past, it's that this progression is inevitable, and that it will only exceed our expectations and our wildest futurist fantasies.

With respect to his latest creation, Jony Ive seems to agree: "It’s technology worn on the wrist. I sensed there was an inevitability to it."

Despite this predictable inevitability, the future will still hold plenty of surprises. And I suppose that's the best prediction of them all.

Wednesday, February 25, 2015

The Argonaut Project Kickoff

Jason returning with the Golden Fleece
I just got off a kickoff call for the Argonaut Project Implementation Program.

This is, simply stated (and to paraphrase John Halamka on the call today), the most promising path towards meaningful healthcare interoperability we've ever known. Finally, we have modern protocols such as REST, OAuth and OpenID that are being applied to healthcare in a scalable way.
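To make that concrete, here's a minimal sketch of the OAuth 2.0 authorization request a SMART-style app makes before it can touch any clinical data. Everything here is a hypothetical placeholder (the endpoint, client ID, redirect URI, and scope are illustrative, not any real EHR's values):

```python
from urllib.parse import urlencode

# Hypothetical EHR authorization endpoint - a real app discovers
# this from the server's configuration rather than hard-coding it.
AUTHORIZE_ENDPOINT = "https://ehr.example.org/oauth2/authorize"

def authorization_url(client_id, redirect_uri, scope, state):
    """Build the OAuth 2.0 authorization-code request URL. The app
    redirects the user here; the EHR authenticates them and sends
    an authorization code back to redirect_uri."""
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": scope,
        "state": state,  # echoed back to guard against CSRF
    }
    return f"{AUTHORIZE_ENDPOINT}?{urlencode(params)}"

url = authorization_url(
    client_id="my-demo-app",
    redirect_uri="https://app.example.org/callback",
    scope="patient/*.read",
    state="xyz123",
)
print(url)
```

The point is that this is plain, well-understood web plumbing - the same authorization-code dance that powers "Sign in with ..." buttons all over the consumer web - now carrying clinical scopes.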

The SMART on FHIR platform epitomizes the work done to date in this effort, and the Argonaut Project is intended as a sprint to get the FHIR DSTU 2 deliverables ready in time for a May ballot. To be clear, this is still an alpha/beta product and not quite ready for public consumption, but the promise is already quite evident.
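A taste of why FHIR is so appealing: every resource lives at a predictable RESTful URL and comes back as plain JSON. The sketch below uses a hypothetical server base URL and a trimmed, hand-written sample response just to show the shape of a DSTU 2 Patient read:

```python
import json

# Hypothetical FHIR server base URL - substitute your own endpoint.
# A FHIR read is just: GET {base}/{ResourceType}/{id}
FHIR_BASE = "https://fhir.example.org/dstu2"

def patient_read_url(base, patient_id):
    """Build the RESTful URL for reading one Patient resource."""
    return f"{base}/Patient/{patient_id}"

# A trimmed example of the JSON a DSTU 2 server might return
# (in DSTU 2, name.family and name.given are arrays of strings):
sample_response = """
{
  "resourceType": "Patient",
  "id": "1234",
  "name": [{"family": ["Smith"], "given": ["Jan"]}],
  "birthDate": "1970-01-01"
}
"""

patient = json.loads(sample_response)
print(patient_read_url(FHIR_BASE, patient["id"]))

# Pull a display name out of the structured name element
name = patient["name"][0]
print(f'{name["given"][0]} {name["family"][0]}')
```

Compare that to parsing an HL7 v2 pipe-delimited message and you can see why developers outside healthcare can finally get productive quickly.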

When I arrived at Duke in the summer of 2013 (and before the SMART project had been converted to work with FHIR), I had a goal to create a standardized platform for interoperability for web and native mobile apps that would work with our Epic implementation.

By 2014 we already had a functional implementation of our own framework, but we realized that we shouldn't reinvent the wheel: the SMART on FHIR project was being developed to accomplish the same goals. With that in mind we realigned our efforts to push forward with integration of the SMART platform into our Epic EHR.

As of January 2015 we have a functional implementation of the SMART platform here at Duke in our proof-of-concept environment. We currently have an iOS app and a SMART-enabled web app working against this environment with plans for several more current and future app integrations, including apps developed here at Duke.

We're looking forward to being involved in this Pilot Implementation process (see graphic below), and we'll have more to share in the coming months, including demos of these integrations at HIMSS in April.

Never before have we had such a golden opportunity for robust healthcare interoperability!

Anyone can get involved in this open process.

Thursday, February 12, 2015

FDA Updates & Finalizes Mobile App & Device Guidance

This week the FDA released the final version of their guidance on Medical Device Data Systems (MDDS - original draft was released on June 20, 2014) as well as an update to their guidance on Mobile Medical Applications (last updated on September 25, 2013) to make it consistent with the MDDS document.

I've shared my thoughts on this guidance previously.

For those who aren't familiar with this guidance, the FDA divides mobile medical applications into three categories, only the first of which will be regulated:

Apps that are medical devices

These apps will be regulated, and fall into one of two categories, depending on whether they are intended:
  1. to be used as an accessory to a regulated medical device; or
  2. to transform a mobile platform into a regulated medical device.
The updated Mobile Medical Applications document provides a number of helpful examples of hypothetical apps as well as a list of approved applications that have gone through the 510(k) process.  Most of these fall into one of the following categories:
  • Apps that use an integrated or attached sensor to obtain data used in medical decision making
  • Apps that use the camera to analyze images and present data used in medical decision making
  • Apps that connect to and alter the behavior of other medical devices (physical connection or remote)
  • Apps used to calibrate medical devices
Apps for which FDA intends to exercise enforcement discretion

These are apps that MAY meet the definition of a medical device but likely pose lower risk to the public, so they will not be regulated unless the FDA decides an app poses a significant patient safety risk.

This is a big deal because all clinical decision support apps will fall into this category. Clinical decision support apps represent one of the most promising categories of medical software, including those that can be embedded in a provider's workflow via EHR integration technologies such as the SMART on FHIR framework.

The FDA realizes that the volume of these apps will likely exceed their capacity to regulate them, but the bottom line is that purchasers and consumers of these applications need to be ever vigilant that what they are using is safe, and immediately report any concerns to the FDA.

Apps that are not medical devices

These are apps the FDA has decided it will not regulate. Types of apps include:
  • Apps that provide medical reference material
  • Apps used as educational tools for medical training
  • Apps used for patient education
  • Apps that automate office operations in a health care setting
  • Apps that are generic aids or general purpose products (e.g., apps that magnify images, record audio, facilitate communication, provide turn-by-turn directions)
For further examples of any of these categories of apps, especially if you're planning to develop a medical app, be sure to check out the full document.

Once again I applaud the FDA for helping clear up some of the confusion regarding regulation of mobile medical applications.  Expect, though, that these guidelines will continue to evolve as technology matures.

Sunday, February 1, 2015

The Death of Meaningful Use: ONC's Interoperability Roadmap

ONC's vision of the health IT ecosystem as a Learning Health System

This past week the ONC released its vision for health IT interoperability in a draft 1.0 document entitled "A Shared Nationwide Interoperability Roadmap."  A few items caught my attention in the 166-page document, most notably the following words, nestled deep in the middle of page 50:
"This Roadmap shifts the nation’s focus from meaningfully using specific technologies with specific features to working together as a nation to achieve the outcomes desired from interoperability and a learning health system."
Granted, this is a draft document, but that sure sounds like an official death knell to meaningful use (MU) as we know it.

From my perspective, MU has done quite a bit of good by raising awareness of the importance of health IT in helping us speak the same language, so we can get the right information to the right person at the right time to make the right decision (sound familiar?).

However, the accompanying MU certification process has bogged down health systems and stifled innovation through timed incentives and disincentives that have resulted in hospitals scurrying to claim their entitlements at the expense of thoughtful and measured health IT progress.

So it's with great excitement that I read the ONC's draft document that focuses this effort on public-private collaboration to start solving the thorniest issues we've faced, including policies and technologies to promote streamlined and robust interoperability.

The document also touched on a subject near and dear to my heart (p. 10):
"Given the increasing volume of mobile technology usage among consumers and across the care delivery system, approaches to enable "send, receive, find and use" in the near-term must support the flow of electronic health information across both institutional and mobile-based technologies. This means traditional approaches to health IT interoperability will need to become more agile and leverage the experience of modular consumer applications, such as those created by Facebook, Amazon and Apple. These secure, but simple architectures have enabled an ecosystem of applications that allow users to engage with electronic health information across a variety of different platforms and devices and open opportunities for entrepreneurial third parties to thrive."
Through the SMART on FHIR framework, this will soon be a reality in health care.  I'm excited to be leading the initiative at Duke to be the first Epic-based hospital with a functional implementation of SMART.

Furthermore, the document highlights the importance of facilitating the incorporation of patient-generated health data into our EHRs (p. 46):
"There needs to be a greater focus on incorporating patient-generated health data and ensuring the availability of tools for individuals to use this information to manage their health and make more informed health-related decisions."
Our experience with Apple's HealthKit here at Duke has shown us that this idea is a reality, today.  It's never been easier to get high-quality patient data integrated directly into our clinical systems so that providers can quickly act to improve patient outcomes.
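The data itself is refreshingly simple once it lands in a standards-based system. As a sketch (not our actual Duke pipeline), here's roughly what a single patient-generated reading - say, a daily step count from a phone - looks like when packaged as a FHIR Observation. The patient ID and values are made up, and the LOINC code shown is the one commonly used for a 24-hour step count:

```python
from datetime import date

def steps_observation(patient_id, steps, when):
    """Build a minimal FHIR Observation for a daily step count.
    LOINC 41950-7 ('Number of steps in 24 hour Measured') is used
    here for illustration; a production system would validate the
    coding against its own terminology services."""
    return {
        "resourceType": "Observation",
        "status": "final",
        "code": {"coding": [{
            "system": "http://loinc.org",
            "code": "41950-7",
            "display": "Number of steps in 24 hour Measured",
        }]},
        "subject": {"reference": f"Patient/{patient_id}"},
        "effectiveDateTime": when.isoformat(),
        "valueQuantity": {"value": steps, "unit": "steps"},
    }

# A hypothetical reading for a hypothetical patient
obs = steps_observation("1234", 8421, date(2015, 2, 1))
print(obs["subject"]["reference"], obs["valueQuantity"]["value"])
```

A device reading becomes an ordinary resource that any downstream clinical system can store, query, and trend alongside traditional vitals - which is exactly the "incorporation of patient-generated health data" the roadmap calls for.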

This type of technology is the essence of a Learning Health System, which is the emphasis of the ONC's draft document.  2015 is going to be an exciting year for healthcare technology and interoperability.

RIP, Meaningful Use.

Tuesday, January 20, 2015

Obama's Precision Medicine Initiative

President Obama spent the last week spilling the beans on the content of his State of the Union address, but he still had one surprising tidbit for those of us who are passionate about Health IT: a Precision Medicine Initiative!  In his own words:
"21st century businesses will rely on American science, technology, research and development. I want the country that eliminated polio and mapped the human genome to lead a new era of medicine — one that delivers the right treatment at the right time. In some patients with cystic fibrosis, this approach has reversed a disease once thought unstoppable. Tonight, I’m launching a new Precision Medicine Initiative to bring us closer to curing diseases like cancer and diabetes — and to give all of us access to the personalized information we need to keep ourselves and our families healthier."
All I can say is, "Amen to that!"

We're at a critical juncture where the need for personalized, precision medicine meets the availability of technology to support exactly that.  Patients can - right now - share their own health information in real time with their physicians via novel technologies such as Apple's HealthKit, which we've implemented here at Duke.

Plus, technologies such as the SMART Platform (about which I'll have much more to say very soon ...) will enable a new generation of standardized medical apps that can scale to every EHR and every hospital in the country ... and eventually the world.

I couldn't be more excited to be surrounded by such talented and creative folks here at Duke working to make all of these things (and more!) a reality.  Apparently for the past year we've been working to make the President's vision a reality ... we just didn't know it!

FDA Draft Guidance for General Wellness Devices

Today the FDA released draft guidance on regulation of low-risk "general wellness" devices.  Draft guidance is nonbinding and subject to change, but it does provide a window into the FDA's thinking on a topic of increasing relevance.  This guidance will be part of a compliance policy of the Center for Devices and Radiological Health (CDRH).

This draft guidance continues the FDA's recent trend to state explicitly what they will not be regulating.  In September 2013 the FDA issued final guidance on mobile medical applications where they stated that "The agency intends to exercise enforcement discretion (meaning it will not enforce requirements under the Federal Drug & Cosmetic Act) for the majority of mobile apps as they pose minimal risk to consumers. The FDA intend to focus its regulatory oversight on a subset of mobile medical apps that present a greater risk to patients if they do not work as intended."  My thoughts on that report can be found here.

Then, in August 2014 the FDA issued a draft document detailing their "Intent to Exempt Certain Class II and Class I Reserved Medical Devices from Premarket Notification Requirements" - my thoughts are here.

So it comes as no surprise that this latest draft proposes that "CDRH does not intend to examine low risk general wellness products to determine whether they are devices within the meaning of the FD&C Act."

The document goes on to define "general wellness products" as products that:
  1. are intended for only general wellness use, as defined in this guidance, and
  2. present a very low risk to users' safety.
Examples of device categories covered by this guidance include:
  • weight measurement
  • physical fitness, including products intended for recreational use
  • relaxation or stress management
  • mental acuity
  • self-esteem (e.g., devices with a cosmetic function that make claims related only to self-esteem)
  • sleep management
  • sexual function
However, any devices falling into the above categories that are not low risk would not be covered by this guidance.  The risk is determined by whether or not the product:
  1. is invasive
  2. involves an intervention or technology that may pose a risk to a user's safety if device controls are not applied, such as risks from lasers, radiation exposure, or implants
  3. raises novel questions of usability
  4. raises questions of biocompatibility
Finally, to be very clear, the document gives several examples of low risk products, such as a mobile app that plays soothing music to manage stress; a mobile app that monitors daily energy expenditure to increase self-awareness to maintain good cardiovascular health; a mobile app that records food consumption to help the user manage dietary activity; a device that monitors pulse during exercise or hiking; and a product that mechanically exfoliates the skin (to increase self-esteem).

I continue to applaud the FDA for giving clear guidelines for mobile apps and devices to help foster an environment ripe for innovation while keeping patients safe.