PHR and Microsoft HealthVault Insights

This year, Microsoft HealthVault awoke from a fitful slumber and released “HealthVault Insights.”

Insights is a major step that includes hooks into Microsoft Cortana to provide a degree of analytics and access to a diary, among other things. The most significant component of Insights is the ability of participating health care professionals (HCPs) to build a Care Plan in their part of the application and push it to the user (the patient).

That is a pretty major step, because instead of relying on a dim and clouded memory of the encounter, the patient now has an electronic record to work with. The record might include exercise targets, dietary objectives, or prescription schedules.

Great so far, and Microsoft deserves applause for taking personal health records (PHR) to the next level.

However, HealthVault developers and I have argued about who should have the ability to initiate a Care Plan. Understandably, Microsoft is physician-centric, and sees the Care Plan as beginning its existence as the culmination of a medical encounter.

I see this as a workflow mistake.

To my mind, the vast majority of healthcare encounters begin with a patient seeking an appointment for an injury, illness, or medication change. While it is certainly true that the physician sometimes initiates an encounter for a routine checkup or medication review, such encounters are by far the minority.

What HealthVault Insights doesn’t allow is for the patient to initiate a Care Plan and push it to the provider. I believe this is a major gap, given how most encounters originate, and the need to enable activated patients to be self-managing.

Allowing the patient to create the Care Plan makes the process more patient-centered, encourages better encounter planning, and makes the patient more responsible for their health management.

The key elements should include:

  • Templates to guide the patient in listing signs and symptoms
  • Guides to help the patient select a tentative chief complaint
  • Diary and PHR hooks to develop a recent medical history of the complaint as a timeline
  • Prompts to help them develop a statement of health goals
  • Listing of their current primary HCP (PHCP), or the HCPs available under their current insurance
  • Means to push the resulting Care Plan to the HCP they selected, as a care request with preferred dates and times
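Purely as an illustration, the elements above could be modeled as a simple patient-initiated care request structure. Nothing here reflects the actual HealthVault SDK; every name and field is my own invention.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class CareRequest:
    """Hypothetical patient-initiated care plan request (not a HealthVault type)."""
    symptoms: list[str]                            # from the signs-and-symptoms template
    chief_complaint: str                           # tentative, selected with guidance
    history_timeline: list[tuple[datetime, str]]   # (when, what) from diary/PHR hooks
    health_goals: list[str]                        # the patient's stated goals
    preferred_provider: str                        # chosen from the insurance-covered HCP list
    preferred_times: list[datetime] = field(default_factory=list)

    def summary(self) -> str:
        """One-line view of the request, as an HCP's inbox might show it."""
        return (f"{self.chief_complaint}: {len(self.symptoms)} symptom(s), "
                f"{len(self.preferred_times)} preferred time(s) for {self.preferred_provider}")
```

The point of the sketch is only that every bullet above maps to a concrete field, so a patient-side app could assemble and push the whole thing as one object.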

On receipt of the care request, the HCP could respond with an encounter at the most medically appropriate level of care, whether that is a Care Plan for self-care at home, an encounter with a nurse, an encounter with a physician, a referral to a specialist, etc.

Currently, Microsoft agrees with the concept as I have described it here, but is relying on the HCP or other interested stakeholders to develop the functionality using the HealthVault software development kit. I think this is an unlikely scenario, and that the ability for patients to initiate, build, and push a Care Plan should be part of the core Insights functionality.

That’s my story, and I’m sticking to it.

Electronic Health Records: Where they should be Going but aren’t

Over the past few years, mostly because of the Affordable Care Act, the adoption of Electronic Health Record (EHR) systems in the USA has grown dramatically. Because of that rapid climb, EHR vendors have been trousering some pretty large amounts of revenue; billions of dollars, in fact. This is not a bad thing per se, but as Congress suddenly realized this year, all that cash didn’t translate into the giant leaps in innovation that had been predicted. Some of this is the result of a captive market, some stems from the psychosocial artifacts of clinicians, and some from the fact that captive markets aren’t necessarily innovative.

One of the ways in which one can see the lack of innovation, or even basic maturity, is the degree to which clinicians have to type the same data over and over in different electronic forms. Not only do the EHR systems not interoperate very well between vendors, some don’t even interoperate with themselves! So it is a common sight to see a nurse type in records from a sheet of paper, then if they are lucky, copy and paste them into another form. If they are unlucky, they get to retype the same data multiple times in different EHR screens. If they are doubly unlucky, the system is also somewhat fragile, which isn’t unusual, and it aborts the session before the data is saved. In that case, they get to retype it all again when the system comes back to life. Sometimes this happens several times a day – in one case that I encountered, the clinician had to try fourteen times before the system recorded the data!

This is obviously a pretty abominable situation, and to get even the most basic degree of workflow into this is going to take a lot of effort and money. Luckily, the EHR vendors are flush and positively glowing pink with all that Meaningful Use cash in their fists.

The Goal

What I want to see isn’t beyond current technology or in the realm of science fiction, and it isn’t even where we ultimately want to be, but it shows where the thinking needs to head (in my opinion, that is).

What I want to see is the removal of the human from any data capture that doesn’t actually require their expertise.
Not really a big ask, given that we can put intelligence in spectacles and the average smartphone has more brains than it knows what to do with.

So let’s say a patient arrives for a consultation.

When they enter the waiting room, I want them to get a transponder sticker. These are dirt cheap, pretty reliable, and can be scanned without actual contact. At the reception desk, the clerk reads the sticker and associates it with the patient record. Now I can tally who left without being registered (elopement), how long it took (primary wait time), and at which stage of the encounter all the patients are (census).

When the patient is called, they are read leaving the waiting room, and again when they enter the examination room. The nurse or nurse practitioner scans them, and the patient record is already onscreen in the room when the nurse scans their ID on the workstation. Each vital sign collected goes directly into the patient record because the instruments are vaguely intelligent. Blood pressure, pulse-oximetry, weight, height, respirations, temperature, etc. are all directed from the device to the EHR simply by using them on the patient. These are all time-stamped, have the ID of who was using them, the ID of the device, and are shown as machine entries in the patient record.
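A device-originated entry of the kind described above might look like the following. The field names are my own sketch under the assumptions in the paragraph, not any EHR vendor's actual schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class MachineVitalEntry:
    """One device-captured vital sign, stamped with who, what, and when."""
    patient_id: str       # resolved from the transponder sticker
    operator_id: str      # the clinician whose ID was scanned at the workstation
    device_id: str        # the instrument that produced the reading
    measurement: str      # e.g. "blood_pressure", "spo2", "temperature"
    value: str            # the reading, kept as plain text for this sketch
    recorded_at: datetime
    source: str = "machine"   # distinguishes device entries from hand-typed ones

def capture(patient_id: str, operator_id: str, device_id: str,
            measurement: str, value: str) -> MachineVitalEntry:
    """Simulate an instrument pushing a reading straight into the record."""
    return MachineVitalEntry(patient_id, operator_id, device_id,
                             measurement, value,
                             recorded_at=datetime.now(timezone.utc))
```

The `source` field is the part that matters: the record itself carries the fact that a machine, not a transcribing human, produced the entry.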

Verbal notes can already be captured through speech recognition, but let’s say that the nurse actually has to enter this themselves. They don’t have to search for the patient record or the screen, those are already there, and they simply need to verify that the patient record is correct. (Although unless the patient swapped armbands with somebody, we are pretty sure who they are).

When the process has reached a certain point, the EHR can buzz the physician that the patient is nearly ready. So there is no long wait while the nurse writes things down or types, and no need for anyone to go find the physician.

A similar scenario unfolds when the physician enters: the room, patient, and physician are associated in an entry event because all three have transponder identities. Relevant patient data is already displayed when the physician scans their ID at the workstation to log in, and again, any use of instruments captures data. Listening to the patient’s lungs with an intelligent stethoscope can capture the sounds, timestamp them, and put them into the correct place in the patient’s record. Even more wonderful, if the patient has any electronic records pertinent to the encounter, these can be transmitted from a smartphone Personal Health Record (PHR) app.

The only part the physician plays in capturing data is when expertise is required or when the machines can’t (yet) do it themselves. There is no reason on earth why a scale, blood pressure cuff, or pulse-oximetry device can’t transfer its data to the EHR itself. Only the most antiquated of medical offices lack devices that display the data digitally; it’s just that we then typically ask a human to write it down or type it into the EHR manually. That is a bad use of resources, and it opens up opportunities to get things wrong.

With time-stamped machine data, the practice can start monitoring movement and wait times, and can adjust its workflow to optimize patient flow and reduce unnecessary steps or waits. Staffing rosters and equipment placement can be evidence-based rather than guesswork, and bottlenecks in the processes will be far more visible.
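The analytics hinted at here fall out of the scan events almost for free. A toy sketch, using invented event tuples of (timestamp, patient ID, location):

```python
from collections import Counter
from datetime import datetime

# Each transponder scan produces one event: (timestamp, patient_id, location)
events = [
    (datetime(2017, 5, 1, 9, 0),  "pt-1", "reception"),
    (datetime(2017, 5, 1, 9, 25), "pt-1", "exam-room"),
    (datetime(2017, 5, 1, 9, 10), "pt-2", "reception"),
]

def primary_wait_minutes(events, patient_id):
    """Minutes between registration and entering an exam room, if both happened."""
    times = {loc: ts for ts, pid, loc in events if pid == patient_id}
    if "reception" in times and "exam-room" in times:
        return (times["exam-room"] - times["reception"]).total_seconds() / 60
    return None  # still waiting, or eloped

def census(events):
    """Latest known location of every patient, tallied per stage."""
    latest = {}
    for ts, pid, loc in sorted(events):
        latest[pid] = loc
    return Counter(latest.values())

print(primary_wait_minutes(events, "pt-1"))  # 25.0
print(census(events))  # pt-1 is in the exam room, pt-2 is still at reception
```

Primary wait time, census, and (with an exit-without-registration check) elopement are all just different queries over the same event stream; no new data capture is needed.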

Conclusion

The basic theory is similar to industrial engineering – don’t ask a human to do something that the machine can do. Free up clinician time, reduce transcription errors, and allow the clinician to focus on where their expertise lies – not in being low-level data capture clerks.

We should be demanding that equipment manufacturers and EHR vendors get their act together, and stop making clinicians do their dirty work.

That’s my story, and I’m sticking to it!