Monday, December 17, 2018

Common Sense in C-CDA: Comparing Carequality/CommonWell C-CDA to r2.1

The history of shared clinical documents is marked by decades of industry-wide deliberation.  In recent years, catalyzed by ONC Certification, there has been widespread adoption and refinement of the standard. 

The data that EHRs were required to record and share under ONC Certification was originally called Meaningful Use Data and represented a list of basic chart elements. These elements were later reformulated as the more expansive Common Clinical Data Set (CCDS) in 2015 Edition Certification. The emergence of 2015 Edition criteria also removed the requirement for the encounter-based document previously required in 2014 Edition - the ambulatory Clinical Summary - making the test data for the latest edition of Health IT certification effectively patient-based.

The current ONC-sanctioned R2.1 version of the C-CDA document, upon which the sharing of these minimum data elements is based, includes long-discussed enhancements that have led to much greater clarity. But there remain issues of reliability, relevance and provenance surrounding the standard and its implementation, particularly at the encounter level. Some of these issues, particularly with respect to provenance, are being addressed in the underlying data with the Draft U.S. Core Data for Interoperability (USCDI), which is slated to replace the Common Clinical Data Set (CCDS).

There have also been efforts to provide constructive recommendations on the structure of the document itself. Earlier this year, the Carequality and CommonWell Content Work Groups released a white paper, using the C-CDA R2.1 Companion Guide as a baseline to provide "complementary, not conflicting guidance." CommonWell is a not-for-profit trade association focused on interoperability, while Carequality represents a similar industry-wide collaborative, convening healthcare stakeholders on interoperability issues.

The goal of the collaboration was a more clinically-relevant and parsimonious C-CDA. The collaborative work group tackled the following issues:
  • "Unacceptably large" C-CDA documents
  • A general absence of clinical notes 
  • Need for encounter summary support
  • Need for version management

Clinical Notes should follow the encounter. (Credit: Max Pixel)
The white paper has a wide range of common-sense recommendations, but we'll discuss those most relevant to implementing C-CDA R2.1 under 2015 Edition Certification and USCDI.
  • Problems - only those addressed during the encounter 
  • Allergies - "only if the system can recreate the active Allergy list at the time of encounter"
  • Medications - "only if the system can recreate Medications at time of encounter"
  • Immunizations - those given during the encounter

While conditionality in general can get thorny, these recommendations make a good deal of clinical sense. If events are not touched by the encounter, updating and sending them makes little sense.
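These conditions lend themselves to a simple date-window filter. A hypothetical sketch in Python (the problem-record shape and the idea of a "last addressed" date are our illustration, not the white paper's):

```python
from datetime import date

def problems_for_encounter(problems, encounter_start, encounter_end):
    """Keep only problems addressed during the encounter window.

    `problems` is a list of dicts with a 'last_addressed' date --
    an illustrative shape, not an actual C-CDA structure.
    """
    return [
        p for p in problems
        if encounter_start <= p["last_addressed"] <= encounter_end
    ]

problems = [
    {"name": "Hypertension", "last_addressed": date(2018, 12, 10)},
    {"name": "Old ankle fracture", "last_addressed": date(2009, 3, 2)},
]
kept = problems_for_encounter(problems, date(2018, 12, 10), date(2018, 12, 10))
# Only the problem actually touched during the encounter survives the filter.
```

The same windowing idea would apply to immunizations given during the encounter, while the allergy and medication conditions additionally require the system to reconstruct the list as of the encounter date.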

Specific to the USCDI, the work group judged the data set to contain "valuable data elements and should be exchanged to improve patient care." But they go on to say that Clinical Notes should not simply be a data dump: when they are supported, support for Encounter Summary documents should be added to ensure the note follows the encounter.

The recommendations focus on Encounter Summary documents (Progress Note Document and Discharge Summary) and preserve all sections required in the base C-CDA document template. After setting that foundation, they map out a "priority subset" of clinical data from the ONC Common Clinical Data Set (CCDS) and draft US Core Data for Interoperability (USCDI). The idea is to weed out data that is stale or irrelevant to the clinical encounter.

The prioritized 'always include' sections contain data within them that should be conditional based on the events of the encounter.

In theory at least, the C-CDA as tested under 2015 Edition would become easier to interpret and implement. There are also a few general guidelines for non-priority elements:
  • Systems SHOULD send a ‘No information’ assertion template if nothing is available for one of the priority subset data elements.
  • Systems MAY send additional data elements, beyond the priority subset, if relevant to the encounter. For these additional data elements, systems should not send a ‘No information’ template if nothing is available.
The concept of 'no information,' while ostensibly straightforward, is crucial here. It's the "known unknown" that provides more certainty to the receiver of the C-CDA that in fact nothing clinically relevant appears under that section, as opposed to having been omitted for an unknown reason.
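The pattern behind a 'No information' assertion is a coded entry whose code carries a nullFlavor. A minimal sketch in Python, assuming a stripped-down entry (a conformant C-CDA section also needs templateIds, ids and narrative text):

```python
import xml.etree.ElementTree as ET

def no_information_act(section_text):
    """Build a minimal 'no information' entry: a single act whose code
    carries nullFlavor='NI'. Illustrative only -- this is the general
    shape of the pattern, not a complete conformant template."""
    entry = ET.Element("entry")
    act = ET.SubElement(entry, "act", classCode="ACT", moodCode="EVN")
    ET.SubElement(act, "code", nullFlavor="NI")  # the 'known unknown'
    text = ET.SubElement(act, "text")
    text.text = section_text
    return ET.tostring(entry, encoding="unicode")

xml = no_information_act("No information about allergies")
```

The receiver can then distinguish an explicit "nothing known" from a section that was simply omitted.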

The recommendations also touch the world of Fast Healthcare Interoperability Resources (FHIR), with implications for how its resources are used. The CommonWell-Carequality alliance is deeply involved in FHIR workgroups, and its recommendations may, through governance of the standard, help shape efforts to ensure C-CDA and FHIR resources play more nicely together. Currently, there is a significant amount of subjectivity in getting the sections of a C-CDA in and out of FHIR.

Dynamic Health IT has been reviewing its C-CDA practices and guidance to make sure we help our clients send and receive documents that are clinically-relevant and readable. True interoperability requires nothing less.

We're looking forward to seeing you at the C-CDA IAT meeting. You can find the Track Agenda here.

Friday, December 7, 2018

ONC Annual Meeting 2018 Recap

On November 29-30, 2018, Health IT policymakers and implementers met in DC for the Annual ONC Meeting. As always, the agenda was full to the brim with discussions on the state-of-the-health IT industry.

Interoperability and the future of 2015 Edition CEHRT were central themes, but the agenda content reveals just how far ONC's influence reaches - and by extension, the use of health IT systems and applications. Sessions tackled subjects as wide-ranging as care coordination, interoperability strategies, APIs, disaster response and opioid prescribing.

Jeff Robbins, Dynamic Health IT President, attended the meeting. His enthusiastic takeaway was that the conference was very beneficial, providing a survey of trends in health IT today and an opportunity to learn about innovative approaches to our most pressing challenges. While it's impossible to attend all sessions (or even to summarize them in a single blog post), we'll review a few highlights.

The breakout session “Improving Opioid Prescribing through Electronic Clinical Decision Support Tools” focused on solving the challenges posed by the Opioid Epidemic through the Prescription Drug Monitoring Program (PDMP). As part of monitoring, prescribing physicians can be securely notified of non-fatal overdose episodes, but this is currently uncommon. By connecting providers to PDMPs more widely, we can ensure that they have timely information that can limit drug-seeking, modulate prescription behavior and point to important connections to limit the spread of opioid abuse.

While we acknowledge the depth of the challenges facing the industry currently - particularly with respect to interoperability - there were plenty of success stories in evidence. DHIT was impressed by case studies showing enthusiastic adoption of the HL7 Fast Healthcare Interoperability Resource (FHIR) data standard for provider-payer data exchange. A private sector-led initiative — the Da Vinci Project — has progressed quickly enough to enlist government sponsorship from the ONC and then spawned the P2 FHIR Taskforce. The task force is supporting FHIR development efforts and turning its focus to challenges preventing adoption.

The “FHIR Implementation and Transition Planning” session was another opportunity to discuss EHR vendor Application Programming Interfaces (APIs). The 21st Century Cures Act calls for the development of APIs to promote data-sharing, and the ONC 2015 Edition Test Method includes three measures (g7, g8, g9) encompassing patient data APIs - though none explicitly mandates FHIR. During the Q&A session, Cerner and Epic were in the hot seat with application and personal health record (PHR) developers seeking broader access to their FHIR APIs.

On Friday, a lively panel discussion on “Data and Value-Based Care" was held. Economist Mark Pauly discussed the proverbial elephant in the room - a subject that is often avoided:  cost/benefit of healthcare and the need to set dollar limits on individual healthcare plan coverage.  Mark posed the hypothetical of whether there should be marginal dollar limits for coverage.

Finally, it's worth reflecting on the future of ONC's latest health IT test method: 2015 Edition CEHRT. With funding in question and transitions underway, Certification and Meaningful Use have both seen grave pronouncements within the last few years. But 2015 Edition CEHRT is still required in 2019 for providers reporting MIPS. And providers targeting the “end-to-end bonus" for eCQM reporting must do so in conjunction with a system that meets 2015 Edition CEHRT.

Our challenges - and the policy initiatives designed to meet them - aren't going anywhere.

Friday, November 16, 2018

2019 CQM changes – what’s it all about?

Each year, there are a host of changes to clinical quality measure logic, reporting requirements and CMS quality program policy. Come 2019, the usual cycle of CQM turnover is joined by an overhaul of CQM logic methodology: Clinical Quality Language (CQL).

In our previous post, we discussed 2018 changes. In this post, we’ll handle the changes coming for the 2019 Reporting Year. Those fall into five categories:
  1. CQL transition: Clinical Quality Language (CQL) is a measure authoring language standard that, in 2019, will replace the measure logic previously defined by the Quality Data Model (QDM) and HQMF.
  2. Value Set Changes: changes to official codes and code sets used by CMS quality measures
  3. Measure Specification and Logic Changes: changes in the manner of calculation and/or specific codes and code sets included in each measure
  4. QRDA-I/III reporting format changes: changes in the XML requirements (QRDA-I or QRDA-III) for major CQM reporting programs (such as MIPS, HQR and Joint Commission)
  5. Reporting changes: High-level program changes such as the menu and number of required measures, method of reporting, etc.
CQL transition
The conversion of eCQMs to Clinical Quality Language (CQL) has been a hot topic ever since CMS announced that, starting with the 2019 reporting period, eCQMs would move from logic based on an HQMF XML modeling language to CQL.  In little more than two months, CQL will become the lingua franca of eCQMs.

CQL has been chosen for its ability to give quality measure authors more flexibility in creating precise definitions of quality measures that are human readable. But what does this mean for health care providers and EHR vendors? If implemented smoothly, it should be barely noticeable by providers. Measure changes between 2018 and 2019 will be absorbed by the CQL measures and calculated as expected, versions notwithstanding.

The impact on EHR developers is far more significant. While developers providing calculation and analysis tools based on CQL have flexibility to use the files that integrate best into their software (ie, ELM vs JSON), they will need to overhaul their measure specifications at the root level to ensure they are using a CQL basis.

CQL makes calculation logic more readable and transparent. It also enables calculation within the logic itself: previously, a concept like “cumulative medication duration” was a derived element that could not be expressed with QDM-based logic. CQL expresses this kind of calculation in a computable format within the logic.
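To see what such a derived element involves computationally, here is a sketch of one way to compute cumulative medication duration by merging overlapping dispensing periods. The interval-merge approach is our illustration, not logic taken from any published measure:

```python
from datetime import date, timedelta

def cumulative_medication_days(periods):
    """Sum the days covered by medication periods, counting overlapping
    or adjacent coverage only once. `periods` is a list of
    (start_date, end_date) tuples, both endpoints inclusive."""
    merged = []
    for start, end in sorted(periods):
        if merged and start <= merged[-1][1] + timedelta(days=1):
            # Overlaps or abuts the previous interval: extend it.
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return sum((end - start).days + 1 for start, end in merged)

days = cumulative_medication_days([
    (date(2019, 1, 1), date(2019, 1, 10)),   # first fill
    (date(2019, 1, 5), date(2019, 1, 15)),   # early refill, overlaps
    (date(2019, 2, 1), date(2019, 2, 5)),    # separate refill
])
# 15 covered days in January plus 5 in February = 20
```

CQL can express this kind of interval arithmetic directly in the measure logic, where QDM-based logic had to treat it as an opaque derived element.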

Fortunately, DHIT has rendered eCQMs (and a wide range of non-eCQMs) in JSON well in advance of the 2019 reporting deadlines.

Value Set Changes
As discussed in Part 1, there are annual value set changes that affect eCQMs differently each reporting year. The National Library of Medicine (NLM) maintains the Value Set Authority Center (VSAC), which releases value set updates annually. The entirety of these value set changes are incorporated into our CQMsolution application each release year, with backward compatibility maintained for previous reporting years. The only way to fully implement these changes is to be sure that value sets and their respective codes crosswalk perfectly to the VSAC release.
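A crosswalk check can be as simple as a set comparison between a system's local codes and the published value set expansion. A sketch, with invented code values:

```python
def crosswalk_gaps(local_codes, vsac_codes):
    """Compare a system's local code list to a VSAC value set expansion.

    Returns (missing, extra): codes the expansion expects that the
    system lacks, and local codes no longer in the published value set.
    The code values below are illustrative, not a real value set.
    """
    local, vsac = set(local_codes), set(vsac_codes)
    return sorted(vsac - local), sorted(local - vsac)

missing, extra = crosswalk_gaps(
    local_codes={"E11.9", "E11.8", "250.00"},   # system's diabetes codes
    vsac_codes={"E11.9", "E11.8", "E11.65"},    # this year's expansion
)
# missing == ["E11.65"]; extra == ["250.00"]
```

Running a check like this against each annual release surfaces exactly where local mappings have drifted from the VSAC publication.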

Measure Specification and Logic Changes
Each new reporting year brings a new batch of measure versions. The United States Health Information Knowledge base (USHIK) offers a comparison tool for visualizing changes in measures across years. The tool is accessible directly from the eCQI Resource Center, but as of this writing is not yet accessible for 2019 eCQMs. Changes to the measure specifications can range from adjusted wording in the measure overview to an adjustment to how a measure population is calculated and its data elements. Examples of new data criteria include the attributes ‘prevalencePeriod’ and ‘components.’

QRDA-I/III reporting format changes
Here is a rundown of the most relevant changes to the requirements in QRDA-I and QRDA-III submission files:

  • Changes to accepted date time formats: made more expansive
  • Data types of CD or CE SHALL have either code or nullFlavor but not both
  • Changes to group-level reporting identification (for MIPS Groups and MIPS Virtual Groups): a MIPS Group's identifier SHALL be the group's TIN, while a virtual group will use its Virtual Group Identifier
  • Changes across the XML to support transition from ‘ACI’ to ‘Promoting Interoperability’
  • Standard measure identifier changes (UUIDs)
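The CD/CE rule above, for instance, is mechanically checkable when validating a file. A hedged sketch using Python's standard XML parser (the sample fragment is contrived, and a real validator would also consult the element's declared data type):

```python
import xml.etree.ElementTree as ET

def code_or_nullflavor_violations(xml_text):
    """Find elements carrying both a code and a nullFlavor attribute,
    which the 2019 QRDA guidance disallows for CD/CE data types.
    Simplified: checks every element, not just CD/CE-typed ones."""
    root = ET.fromstring(xml_text)
    bad = []
    for el in root.iter():
        if "code" in el.attrib and "nullFlavor" in el.attrib:
            bad.append(el.tag)
    return bad

sample = """<observation>
  <code code="8480-6" codeSystem="2.16.840.1.113883.6.1"/>
  <value code="1234" nullFlavor="OTH"/>
</observation>"""
violations = code_or_nullflavor_violations(sample)
# The <value> element carries both attributes, so it is flagged.
```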
Reporting changes
The overall list of CMS eCQM measures in 2019 will stay the same, but there are a few program-level measure adjustments for hospitals:
  • CMS 55 is discontinued in the IQR program, but will remain in TJC
2019 EH measures by Program
On the EP/EC side, there are some changes to the overall menu of measures:
  • CMS 249 and CMS 349 are added
  • CMS 65, CMS 123, CMS 158, CMS 164, CMS 167, CMS 169 are removed
  • CMS 166 - previously for Medicaid-only submission - has been phased out.
Reporting for eligible clinicians has expanded to include the use of the MIPS-API, which has undergone evolution since introduction in 2018.
One significant change on the hospital side is that a QRDA-I submitted and accepted into production will overwrite any preexisting file based on the exact match of five key elements identifying the file: CCN, CMS Program Name, EHR Patient ID, EHR Submitter ID, and the reporting period specified in the Reporting Parameters Section.
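That overwrite behavior works like a dictionary keyed on the five identifying elements. A sketch with illustrative field names (the real keys live in specific QRDA-I sections, not flat fields):

```python
def latest_submissions(files):
    """Keep only the last accepted file per five-element key, mirroring
    the production overwrite rule. Field names are illustrative."""
    accepted = {}
    for f in files:  # assumed to be in submission order
        key = (f["ccn"], f["program"], f["patient_id"],
               f["submitter_id"], f["reporting_period"])
        accepted[key] = f  # a later submission overwrites an earlier one
    return list(accepted.values())

files = [
    {"ccn": "123456", "program": "HQR_IQR", "patient_id": "P1",
     "submitter_id": "S1", "reporting_period": "2019Q1", "version": 1},
    {"ccn": "123456", "program": "HQR_IQR", "patient_id": "P1",
     "submitter_id": "S1", "reporting_period": "2019Q1", "version": 2},
]
kept = latest_submissions(files)
# Only one file survives per key, and it is the later submission.
```

The practical implication is that resubmitting a corrected file replaces the original, but only if all five elements match exactly.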

For those submitting Joint Commission eCQMs, the program will be shifting to the Direct Data Submission (DDS) Platform, in which hospitals will submit measures directly. DHIT will offer a full suite of consulting and data review services to support this.

Tuesday, October 23, 2018

2018 CQM changes – what’s it all about?

Finding yourself unsure about recent developments in Clinical Quality Measures?  Join the club.

Each year, there are a host of changes to clinical quality measure logic, reporting requirements and CMS quality program policy. And currently, the usual cycle of CQM turnover is joined by an overhaul of CQM logic methodology for the 2019 CQM definitions: Clinical Quality Language (CQL).

Suffice it to say, there’s a lot to keep on your radar right now in the CQM world. And the deadline for eCQM submission will be here before we know it:

  • Hospital Quality Reporting: February 28, 2019, 11:59pm PT
  • Quality Payment Program (MIPS) eCQMs (non-Web Interface submission): April 2, 2019, 5pm PT

We prepared two posts to break down and interpret some of the changes that have occurred in 2018 and those that are coming in 2019. Our goal is to provide some action steps for CQM implementers.

The changes for 2018 submission fall into four categories:
  1. Value Set: changes to official codes and code sets used by CMS quality measures
  2. Measure Specification and Logic: changes in the manner of calculation and/or specific codes and code sets included in each measure
  3. QRDA-I/III format and submission: changes in the XML requirements (QRDA-I or QRDA-III) for major CQM reporting programs (such as MIPS, HQR and Joint Commission)
  4. Reporting: changes in high-level quality reporting program requirements
For those reporting Joint Commission (TJC) ORYX eCQMs in addition to CMS, it’s important to note that TJC has aligned their measures closely with CMS, but there are some programmatic changes that we’ll note below.

As you dig into the changes, an invaluable resource is the eCQI Resource Center, hosted by CMS. The eCQI site also has an annual Implementation Checklist that is helpful in preparing for CQM submission and walks through steps for understanding year-over-year changes.
To varying degrees, these changes require workflow and/or back-end changes by EHR vendors. But take heart – if the data exists somewhere, Dynamic Health IT’s quality measure bolt-on software, CQMsolution, can take care of the rest.

Value Set Changes
The National Library of Medicine (NLM) maintains the Value Set Authority Center (VSAC), which releases value set updates annually. The entirety of these value set changes are incorporated into our CQMsolution application each release year, with backward compatibility maintained for previous reporting years.

The update of value set changes contains changes that have the potential to affect all eCQMs in a given release year. The only way to fully implement these changes is to be sure that value sets and their respective codes crosswalk perfectly to the VSAC release.

VSAC also includes some retired and legacy codes in order to accommodate a lookback period for measure calculation, so it’s important to keep in mind that the eCQM value sets do not always correspond strictly to the latest version of any code set (such as ICD-10 vs ICD-9). CMS makes a determination of which code system versions they will approve for use during the measure year.

Measure and Measure Logic Changes
While the high-level identifier (eg, “CMS 2”) and description for measures may stay the same across years, each year generally brings a new batch of measure versions.

The United States Health Information Knowledgebase (USHIK) offers a comparison tool for visualizing changes in measures across years. The tool is accessible directly from the eCQI Resource Center.

Changes to the measure specifications can range from adjusted wording in the measure overview to an adjustment to how a measure population is calculated. For instance, in 2018, CMS 128 added an exclusion for patients who were in hospice care during the measurement year.

Each measure release year also has a Technical Release Notes document attached. Where the USHIK compare tool has a side-by-side descriptive comparison, the Release Notes provide a granular list of all changes to a measure. These changes must be incorporated into the measure logic used to power all reporting; this update is done within CQMsolution’s calculation engine 12-16 months in advance of measure reporting cycles.

QRDA-I/III reporting format changes
Each year, there are changes great and small to the formatting of XML documents that must be submitted for both eligible clinicians (ECs) and eligible hospitals (EHs). Many of these changes are absorbed directly into the files themselves. However, here are the changes that have the potential to affect data capture and workflow on the provider side:

Eligible Hospital QRDA-Is (patient-level XML file):
  • Medicare Beneficiary Identifier (MBI) is not required for HQR but can and should be submitted if the payer is Medicare and the patient has an MBI
  • CMS EHR Certification Identification Number is now required for HQR.
  • TIN is no longer required
Eligible Clinician QRDA-IIIs (aggregate XML file or JSON via API):
  • The performance period under MIPS can be reported at either of the following levels:
    • The individual measure level for the MIPS quality measures and at the individual activity level for the MIPS improvement activities (IA), or
    • The performance category level for Quality and IA performance categories
  • Virtual Groups can now be reported in QRDA-III (under a newly created CMS program name code, “MIPS_VIRTUALGROUP”)
  • Eight new Promoting Interoperability (PI) measures can be reported to indicate active engagement with more than one registry.
  • The 2015 Edition (c)(4) filter certification criterion (45 CFR 170.315(c)(4)) is no longer a requirement for CPC+ reporting (practices must continue to report eCQM data at the CPC+ practice site level)
For eligible clinicians submitting via API to the Quality Payment Program (as opposed to file upload), the QRDA-III file is converted to JSON (or is submitted directly via JSON). In 2018, some technical changes have been made to that process and updates to measure names to include "Promoting Interoperability (PI)" in the identifiers. 

ECs can submit via API when using the Registry method. Dynamic Health IT is an authorized Registry with API submission privileges to QPP.

Reporting changes

For eligible clinicians, there were changes in 2018 related to both measure lists and measure count thresholds for reporting. The overall list of CMS-sanctioned eCQMs previously shrank in 2017, and in 2018 CMS added two EP eCQMs:

  • CMS347v1
  • CMS645v1
Note that these measures are NOT available for Medicaid EHR Incentive Program for Eligible Professionals.

In 2017, clinicians could submit a minimum of 1 measure for 1 patient for 1 day. But for 2018 data, clinicians must submit at least 6 measures for the 12-month performance period (January 1 - December 31, 2018). The quality category's share of the overall MIPS score has also been reduced from 60% to 50%.

As in 2017, hospitals will still select at least four (4) of the 15 available electronic clinical quality measures (eCQMs) for one self-selected quarter of 2018 data (Q1, Q2, Q3, or Q4) during the same reporting period.

In 2017, the Joint Commission’s ORYX eCQM reporting was modified to require a minimum of four eCQMs, over a minimum of one self-selected calendar quarter. This will remain the same in 2018.

Bringing it all back home
The most important consideration for EHR vendors is to minimize workflow impacts for the user interface and user processes, while ensuring adequate data capture for CQM calculation and reporting. One virtue of using a calculation and analysis package such as CQMsolution is that once you’ve captured and mapped the data elements, we handle all of the changes described above. That means you can begin running reports as soon as data is available.

Tuesday, October 9, 2018

DHIT at Connectathon 19: Unpacking CQMs, Compositions and the Future of FHIR

FHIR Connectathon 19 took place September 29-30 in Baltimore, Maryland. Overlooking historic Baltimore Harbor, attendees flocked to collaborate and compare notes on the standard. The continued momentum in FHIR implementation is evident in the capacity crowds and raises the hope that this critical mass will conquer some of the outstanding obstacles to adoption.
Baltimore Inner Harbor

The backdrop of the harbor served as a visual metaphor for the navigation required of implementers. As in global shipping, we move valuable product (in our case, data) around the world and the care for each aspect – packaging, chain of custody, containers – requires thoughtful attention to detail (as you’ll see below, especially the containers).

The Dynamic Health IT team focused efforts on two subject-based tracks: Clinical Reasoning, which centered around CQMs; and FHIR Documents, which involved creating and consuming FHIR-specific documents to a server.

The Documents Track
For the FHIR Documents Track, participants tackled both creation and consumption of FHIR documents, with the primary goal of sending a composition. In FHIR, a composition is meant to “provide a single coherent statement of meaning” as well as provide clear identification and attribution for the document. While it may seem obscure, getting this right is essential to the transparency and trust on which true interoperability depends.

On the creation side, we were tasked with assembling a FHIR document – defined as a “bundle containing a Composition and supporting resources” – and submitting that document to a FHIR server. There were a number of approaches to document creation. From the beginning, ours has been to use C-CDA R2.1 XML as the basis and create FHIR resources accordingly. This is the transformation on which our Dynamic FHIR API is based. This foundation enables us to support the use of familiar conventions and data types in C-CDA R2.1, while introducing clients to the FHIR standard and providing a method for fully meeting the 2015 ONC Certification API requirement.
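The target of that transformation is, at its core, a Bundle of type "document" whose first entry is the Composition. A skeletal sketch (resource content abbreviated and identifiers invented; real documents also need dates, authors and attesters):

```python
def make_fhir_document(composition, resources):
    """Assemble a FHIR document: a Bundle of type 'document' whose first
    entry is the Composition, followed by the resources it references.
    Minimal skeleton only -- not a conformant document."""
    entries = [{"resource": composition}] + [{"resource": r} for r in resources]
    return {"resourceType": "Bundle", "type": "document", "entry": entries}

composition = {
    "resourceType": "Composition",
    "status": "final",
    "title": "Continuity of Care Document",
    # Sections here would be populated from the C-CDA source sections.
    "section": [{"title": "Allergies"}, {"title": "Medications"}],
}
patient = {"resourceType": "Patient", "id": "example"}
doc = make_fhir_document(composition, [patient])
# The Composition must be the first entry in a document bundle.
```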

Another point of discussion in document creation was whether to use the “contains” operator to create the Composition. In FHIR, ‘contains’ has a very specific use case, and most agreed that it is not meant for creating a composition. The main takeaway here is that by wrapping a composition in a ‘contains,’ the data is rendered non-interoperable.

The consumer side of the Documents track presented fewer hurdles conceptually. The goal was to retrieve a FHIR document from a reference server and prove its validity. One way to do this was to use an XSL stylesheet to display the human readable version of the document retrieved. The DHIT team’s preparation in mapping FHIR to CDA R2.1 in the forward direction paid dividends here.

Clinical Reasoning Track
The Clinical Reasoning track sat at the intersection of DHIT’s core competencies in quality measures and interoperability. Participants set out to integrate FHIR with clinical quality measures, focusing on the CMS-based electronic CQMs. DHIT specifically focused on testing against the latest versions of CMS 124, 125, and 130.

Our team’s preparation going into the Connectathon was an essential prerequisite for success. In the lead up to the event, this meant taking FHIR-based CQM data and converting to Quality Data Model (QDM) format to enable calculation. This is certainly not the only approach our team will be adopting as FHIR-based CQMs evolve. But as long as the Quality Data Model remains relevant, it provides a direct link between the measure logic and the underlying FHIR data.

This intermediary step requires that QDM-based concepts such as ‘Encounter, Performed’ (which do not exist in FHIR) be mapped to FHIR-standardized data. Our approach has been to convert the data coming in:

FHIR server with FHIR data --> CQMsolution data format --> QRDA-I/QRDA-III/FHIR MeasureReport

Data can be gathered from FHIR and filtered by hospital, clinic, clinician, etc, for calculation by CQMsolution’s engine.
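One piece of that conversion is mapping a FHIR Encounter onto a QDM-style 'Encounter, Performed' element. A simplified sketch; the FHIR field paths follow STU3 conventions, and the output shape is purely illustrative rather than CQMsolution's actual internal format:

```python
def encounter_to_qdm(fhir_encounter):
    """Map a (simplified) FHIR Encounter resource to a QDM-style
    'Encounter, Performed' element for measure calculation."""
    period = fhir_encounter.get("period", {})
    coding = fhir_encounter.get("type", [{}])[0].get("coding", [{}])[0]
    return {
        "qdm_category": "Encounter, Performed",
        "code": coding.get("code"),
        "code_system": coding.get("system"),
        "start": period.get("start"),
        "end": period.get("end"),
    }

fhir_encounter = {
    "resourceType": "Encounter",
    "type": [{"coding": [{"system": "http://snomed.info/sct",
                          "code": "185349003"}]}],
    "period": {"start": "2018-09-01T09:00:00Z",
               "end": "2018-09-01T09:30:00Z"},
}
qdm = encounter_to_qdm(fhir_encounter)
```

In practice each QDM datatype needs a mapping like this, with value set membership checked against the encounter's coding.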

FHIR’s proposed Bulk Data extraction method, for use with large datasets, has great potential to expand quality measure interoperability. When interacting with FHIR servers that do not perform calculation, our CQMsolution application would make use of this standardized bulk transfer method on a specified group of patients as dictated by the client and receive a ready status when the bulk export is available for calculation and analysis.
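The Bulk Data flow is asynchronous: the client kicks off an $export, then polls a status URL until the server signals that the file manifest is ready. A runnable sketch with the HTTP layer stubbed out (a real client would issue GET requests against the returned Content-Location URL and honor Retry-After headers):

```python
def poll_export(check_status, max_polls=10):
    """Poll a Bulk Data status endpoint until the export completes.

    `check_status` stands in for an HTTP GET against the status URL:
    it returns (http_status, body); 202 means still in progress,
    200 means the NDJSON file manifest is ready to download.
    """
    for _ in range(max_polls):
        status, body = check_status()
        if status == 200:
            return body  # manifest listing NDJSON files per resource type
    raise TimeoutError("export did not complete in time")

# Simulated server: in progress twice, then done.
responses = iter([(202, None), (202, None),
                  (200, {"output": [{"type": "Patient", "url": "..."}]})])
manifest = poll_export(lambda: next(responses))
```

Once the manifest arrives, each listed file can be downloaded and fed into calculation.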

During the Connectathon, we performed calculation and manual analysis to compare our results to the reference server, comparing against synthetic FHIR-based patients there. The exercises were largely focused on proof of concept and tracking data from point-to-point. The next steps in FHIR CQMs will likely involve more validation of calculations and a look at more exotic data types, such as negated elements or embedded events.

Our team is eager to identify use cases for data required by Quality Measures across the spectrum.

Looking ahead
With another Connectathon come and gone, there’s still plenty to unpack. We’re looking ahead to wider adoption of FHIR R4 and to the emergence of bulk data processing, while devoting current efforts to resolving the very real challenges facing the community at present. As consensus builds, implementers will be able to make fewer compromises between maintaining interoperability and steering toward the standard.

Keep in touch and check out our website for more information on where we’re going with FHIR.

Tuesday, August 28, 2018

DHIT Goes to Washington: Behind the Scenes at the ONC #InteropForum

Credit: Max Pixel.
The Office of the National Coordinator for Health IT (ONC) hosted the 2nd Interoperability Forum from August 6th to 8th, 2018 in Washington, DC.

The event was an interdisciplinary meeting-of-minds on interoperability, with plenary sessions and break-out, topic-based panel discussions. Dynamic Health IT staff participated in all three days, with our VP of Development, Raychelle Fernandez, giving a panel talk on the second day of the event and our team taking a deep dive into the collaborative atmosphere throughout.

The event featured a variety of “tracks” to facilitate focused discussion and information sharing. These centered around application programming interfaces (APIs), clinician experience, patient matching, security and other topics. The tracks were overseen by ONC and other industry experts, allowing for a direct line to those shaping policy in this sphere.

The final day of the Forum consisted of recaps and presentations from each track lead.

Hot Topics
One big-picture issue discussed at the Forum was the future ONC role in advancing interoperability. With the recent pivot of the Advancing Care Information (ACI) program to “Promoting Interoperability,” it’s clear how central this is to both CMS and ONC’s missions, but what will be the levers for real change? Everyone has their thoughts, but much of the discussion will come down to picking winners among the available standards and championing their success.

In any contemporary discussion of interoperability, you’re likely to hear about FHIR, Blockchain, Patient Engagement, data blocking and the perceived roadblocks in the leading standards that inhibit broader implementation. The Forum was no different, but what was eminently useful about the event was the breadth of perspectives – the attendees really attacked the issues from all sides.

The discussion of Patient Engagement went well beyond Portals, into newer methods of delivery and patient involvement. There was a discussion of current EMR/bolt-on software development, punctuated by remarks from the panel on how patients will (or will not) benefit from these trends. As Mark Scrimshire (@ekivemark) mentioned on Twitter, in order to start better joining data together “we have got to start involving patients in our efforts to resolve patient matching challenges.” Blue Button 2.0, a patient-facing, FHIR-based API and a treasure trove of CMS data, is another major channel for engagement.

In the halls of application development, it can be difficult to get a direct perspective on the burden on the patient and the feasibility of tasks such as patient matching without costly studies of uncertain value. And patient users are often at least a degree removed from developers, who are not on the ground as service providers.

Clinician perspectives
As Dr. Steve Lane put it, "If you're not satisfying the needs of the clinician you are missing the mark."

For one specific example of where data requirements and provider workflows intersect, there was much discussion about how clinicians document when something is NOT done. While it’s true you can’t prove a negative, it’s important to understand why another clinician did not perform a task and to get that info in a clinical note. This concept of 'Reason not done' data is required for eCQMs, but currently isn't represented in C-CDA.
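In QRDA, 'reason not done' is conveyed by a negated act paired with a reason observation. A minimal sketch of the shape (the codes are placeholders, and real entries carry templateIds and full code-system metadata):

```python
import xml.etree.ElementTree as ET

def negated_act(act_code, reason_code):
    """Build a minimal negated act with a 'reason' relationship -- the
    general QRDA pattern for recording that something was not done,
    plus why. Illustrative shape only, not a conformant template."""
    act = ET.Element("act", classCode="ACT", moodCode="EVN",
                     negationInd="true")  # marks the act as NOT done
    ET.SubElement(act, "code", code=act_code)
    rel = ET.SubElement(act, "entryRelationship", typeCode="RSON")
    obs = ET.SubElement(rel, "observation", classCode="OBS", moodCode="EVN")
    ET.SubElement(obs, "value", code=reason_code)  # e.g. patient refusal
    return ET.tostring(act, encoding="unicode")

xml = negated_act("example-act-code", "example-reason-code")
```

The panel's point was that nothing equivalent is defined for C-CDA clinical content, so the rationale often survives only in free-text notes.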

Where there’s Interop, there’s FHIR
FHIR is always near the top of the marquee for any interoperability event. But like many ballyhooed technologies, it is still largely opaque to its end users – both patients and clinicians.
There were a number of other discussions that entered into practical applications of interoperability, including an implementation of the Bulk FHIR API. Our team previewed items for discussion at the upcoming FHIR Connectathon and Roundtable, including C-CDA on FHIR, our Dynamic FHIR API and Health Lock-It Mobile App. Having certified and refined our FHIR API, we are afforded time to participate in the ongoing C-CDA-to-FHIR mapping discussion and pivot to new challenges in the FHIR orbit. At the Connectathon, our focus will be on CQMs and FHIR, with special attention paid to the Clinical Reasoning track.
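For readers unfamiliar with the Bulk FHIR API mentioned above, the kick-off step amounts to a single asynchronous HTTP request. Here is a minimal sketch; the base URL and group ID are hypothetical placeholders, while the `$export` path and `Prefer: respond-async` header come from the FHIR Bulk Data specification:

```python
from urllib.parse import urlencode

def bulk_export_kickoff(base_url, group_id, since):
    """Build a group-level Bulk FHIR $export kick-off request.

    base_url and group_id are placeholders; substitute your server's values.
    Returns the kick-off URL and the headers the spec expects.
    """
    params = urlencode({"_type": "Patient,Observation", "_since": since})
    url = f"{base_url}/Group/{group_id}/$export?{params}"
    headers = {
        "Accept": "application/fhir+json",
        "Prefer": "respond-async",  # the spec requires async handling of exports
    }
    return url, headers
```

The server responds with a polling location; the client checks back until the export files (NDJSON) are ready for download.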
Apple has made its presence known at recent interoperability and FHIR events and wherever they go, they have a tendency to move markets. One way in which this directly affects DHIT and other development shops is the need to support FHIR DSTU 2 to be in the HealthKit ecosystem.

DHIT: Our Implementer Story
Raychelle Fernandez speaking at the Implementer's Story panel.
Raychelle Fernandez, DHIT’s VP of Development, participated in the Implementer's Story panel and shared insights related to our development of FHIR resources, ‘FHIR on the Fly’, API, Mobile application development and relevant tools.
Raychelle also discussed integration with a wide range of EMRs and the need for the U.S. Core Data for Interoperability (USCDI) Task Force to expand data elements and ensure proper use cases exist to minimize the burden on implementers. For example, we should not require specialties like optometry, orthopedics, podiatry, or chiropractic to capture Immunizations if they have no clinical relevance.

These are exciting and fast-moving times in our field. Stay tuned for more.

Tuesday, May 15, 2018

2015 Certification: 6 Things to Know

Steve Jobs famously remarked that users “don’t know what they want until you show it to them.” This is often true in the software development world as a whole, while in Healthcare IT much of what we do is essentially client-driven.

But what about when a feature is neither? Such is the case with some of the requirements found in ONC 2015 Edition Certification. When faced with 2015 Edition Certification, EMR developers have a lot of questions, starting with “Why should we do this in the first place?”

Drawing on experience from our own certification and that of our clients, we’ll address some of the most pressing concerns in this post.

1. 2015 Certification software has become increasingly compulsory

At its inception, EMR developers fairly asked, given limited time and resources, whether there was any immediate reason to broadly adopt 2015 Certification criteria. It’s been essential to keep current on clinical quality measures in order to report to CMS programs (QPP, HQR, Joint Commission and CPC+), but Certification criteria as a whole have become more relevant with time.

Part of this is catching up with early adopters for competitive and marketing reasons. MIPS requirements for ambulatory providers have also been a driver - certified electronic health record technology is required for participation in the Advancing Care Information category of the QPP and only a 2015 Edition certification that includes automated measure calculation will enable reporting on ACI measures past the “Transition” phase.

2. Self-declaration takes some of the pressure off
Perhaps the most important thing to note about self-declaration is that the technical requirements for “self-declare” measures have not been eased. And there is a good bit of documentation required to prove the testing you have conducted independently.  However, the inclusion of a wide swath of self-declaration (non-live testing) measures has eased some of the burden for developers. The stakes and costs are lower now that you can test iteratively and do not have to schedule extra live testing (and potential re-testing) with your proctor. Keep in mind also that your proctor can ask at any time to review the self-declaration criteria.

3. Building an API for Patient Engagement means knowing your endpoints
If you know you’re going to be certifying the 2015 Edition “API” measures (g7 – g9), you’ll need to decide on technologies for delivering clinical data resources and authenticating users. We recommend FHIR and OAuth, respectively. There are pros and cons for both, but our decision was based on which technologies are best-positioned for where Health IT interoperability is headed.
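As a sketch of how those two choices fit together, here is a hypothetical helper that builds an authenticated FHIR search request. The endpoint, resource type, and token values are all placeholders; the access token itself would come out of a separate OAuth 2.0 authorization flow before this point:

```python
import urllib.parse
import urllib.request

def fhir_search_request(base_url, resource_type, patient_id, access_token):
    # All values here are hypothetical. The access_token would be obtained
    # from an OAuth 2.0 flow (e.g., authorization code grant) beforehand.
    query = urllib.parse.urlencode({"patient": patient_id})
    return urllib.request.Request(
        f"{base_url}/{resource_type}?{query}",
        headers={
            "Authorization": f"Bearer {access_token}",  # OAuth bearer token
            "Accept": "application/fhir+json",          # ask for FHIR JSON
        },
    )
```

The pattern is the important part: the client never handles user credentials directly, only a scoped, expiring bearer token.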

It’s also worth exploring how patient-accessible APIs are going to work in the wild. The 2015 Certification measure provides a pathway to certifying that you can make XML/JSON available per patient and filterable by common clinical dataset sections and date. But it doesn’t connect the dots between accessing the raw resources from the API and getting them into a consolidated location that is usable by a non-technical patient user. For that, you’ll need to consider the extent to which you’ll take that leap into Patient Health Record development – or tailor your solution for compatibility with big players in this space such as Apple (which is, for now at least, pursuing a FHIR-based mobile record).
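To make the section-and-date filtering concrete, here is a toy filter over resource dicts. The key names (`section`, `date`) are illustrative only, not part of any spec; ISO-8601 date strings have the convenient property of comparing correctly as plain strings:

```python
def filter_clinical_data(resources, section=None, start=None, end=None):
    """Filter patient resources by CCDS section name and ISO-8601 date range.

    `resources` is a list of dicts with hypothetical 'section' and 'date'
    keys; any filter argument left as None is skipped.
    """
    out = []
    for r in resources:
        if section and r["section"] != section:
            continue  # wrong CCDS section
        if start and r["date"] < start:
            continue  # before the requested range
        if end and r["date"] > end:
            continue  # after the requested range
        out.append(r)
    return out
```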

4. (b)(1) Transition of Care is a many-layered measure
The (b)(1) measure is proof that a world of functionality can lurk in a single sentence. In addition to sending and receiving a variety of transition of care documents through your chosen protocol(s), b1 also requires you to:
  • Detect valid and invalid ToC/referral summaries and provide an accounting of errors
  • Display a human-readable C-CDA for BOTH r1.1 and r2.1 CCDA
  • For both r1.1 and r2.1, allow your users to display only the data within a particular C-CDA section, set a preference for the display order and set the initial number of sections to be displayed.
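As a rough sketch of that last section-display requirement, the following parses a toy C-CDA fragment (real documents carry templateIds, codes, and structured narrative this example omits) and renders sections in a user-chosen order, capped at an initial display count:

```python
import xml.etree.ElementTree as ET

NS = {"hl7": "urn:hl7-org:v3"}  # the C-CDA/CDA namespace

# A drastically simplified stand-in for a real C-CDA document.
SAMPLE_CCDA = """\
<ClinicalDocument xmlns="urn:hl7-org:v3">
  <component><structuredBody>
    <component><section><title>Allergies</title><text>No known allergies</text></section></component>
    <component><section><title>Medications</title><text>Lisinopril 10 mg daily</text></section></component>
    <component><section><title>Problems</title><text>Essential hypertension</text></section></component>
  </structuredBody></component>
</ClinicalDocument>"""

def render_sections(ccda_xml, preferred_order, initial_count):
    """Return (title, narrative) pairs honoring the user's display preferences."""
    root = ET.fromstring(ccda_xml)
    narratives = {
        sec.find("hl7:title", NS).text: sec.find("hl7:text", NS).text
        for sec in root.iter("{urn:hl7-org:v3}section")
    }
    ordered = [(t, narratives[t]) for t in preferred_order if t in narratives]
    return ordered[:initial_count]  # cap at the initial number of sections
```

The same dictionary of parsed sections also supports the single-section display case: just pass a one-element `preferred_order`.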
5. (b)(6) Data Export is about more than just CCDAs
The 2015 Edition Data Export measure (b6) shares a similarly ambitious goal with the API: that all patient data can be made portable and requested from an EMR at any given time. For data export, not only are you asked to pull out patient data in CCDA form, but you need to be able to slice the patient data in several different ways, exporting either in real-time or scheduled ahead.

The different permutations for exporting under this measure are often a source of confusion, so here are the discrete test cases to help make things more intelligible:
  • On-demand (export runs immediately):
    • Request all patients from a specific start date and time until the present
    • Request all patients from a specific start date and end date, with the end date occurring prior to the present
    • Request a subset of patients from a specific start date and time until the present
    • Request a subset of patients from a specific start date and end date, with the end date occurring prior to the present
  • Scheduled (configured for a future date-time):
    • Relative (a recurring scheduled report)
      • Request all patients based upon a relative date and time from the date range in the data (e.g., generate a set of export summaries from the prior month on the first of every month at 1:00 a.m.)
      • Request a subset of patients based upon a relative date and time from the date range in the data
    • Specific (a one-time scheduled report)
      • Request all patients based upon a specific date from the entered start and end dates and times (e.g., generate a set of export summaries with a date range between 01/01/2015 and 03/31/2015 on 04/01/2015 at 1:00 a.m.).
      • Request a subset of patients based upon a specific date from the entered start and end dates and times
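The "relative" cases above boil down to date arithmetic. A minimal sketch of the prior-month example, assuming the export job is handed the date it runs on:

```python
from datetime import date, timedelta

def prior_month_range(run_date):
    """Date range for a recurring 'prior month' export run on run_date."""
    first_of_run_month = run_date.replace(day=1)
    end = first_of_run_month - timedelta(days=1)  # last day of prior month
    start = end.replace(day=1)                    # first day of prior month
    return start, end
```

A job scheduled for the first of every month at 1:00 a.m. would call this with that day's date and export summaries falling inside the returned range.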
6. Automated measure calculations and usability testing involve a lot of data-gathering 
Automated measure calculations are based on an expansive spreadsheet of test cases furnished by ONC. These cases ensure that your measure calculations follow the expected measure logic faithfully. There’s really no corner-cutting on these, but one limiting factor is the measures you are certifying overall. If, for instance, you aren’t certifying ePrescribing, you will not be on the hook for those calculations.  Also, if you know the programs and stages of Medicare MU/Promoting Interoperability, Medicaid MU and ACI that apply to your users, you can home in on just the calculations that are relevant. 
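For proportion measures, the arithmetic being verified reduces to the standard performance-rate formula: the numerator divided by the denominator after removing exclusions and exceptions. A minimal sketch:

```python
def performance_rate(denominator, numerator, exclusions=0, exceptions=0):
    """Proportion-measure performance rate:
    numerator / (denominator - exclusions - exceptions)."""
    eligible = denominator - exclusions - exceptions
    if eligible <= 0:
        return 0.0  # no eligible population left after removals
    return numerator / eligible
```

The ONC test cases exercise exactly these edge conditions: empty populations, patients who fall into both exclusions and the numerator, and so on.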

For usability testing, you will be in good shape by modeling your study and write-up after an established usability process approved by ONC. Your proctor should provide you with a sample list of functions subject to testing. The estimated time to conduct the study and complete the write-up is about 3 weeks, but this will be contingent on the difficulty of recruitment, your own test design and post-test editorial/graphic design considerations for the document itself. The results of the testing will be posted on CHPL.

For vendors seeking support on 2015 Edition measures, check out our site to see how our bolt-on certification solutions can fit into your certification plan.

Copyright Dynamic Health IT ©, Inc 2018. We specialize in ONC-certified solutions, CQMs and system interoperability via standards such as HL7®, CDA®, CCD and FHIR® using our flagship products - CQMsolution, ConnectEHR, ConnectEHR Portal, DynamicFHIR and The Interface Engine (TIE).

Subscribe to our newsletter and mailing list here.

Monday, April 16, 2018

Quality Measure Submission: A Brief Review, A Look Ahead

For most, the 2017 submission period for eCQMs is over. Not only did we survive, we thrived. Dynamic Health IT worked with clients submitting measures to CMS for the Hospital Quality Reporting (HQR) program and Quality Payment Program (QPP) to improve timelines for submission as compared to 2016.

On the Ambulatory side, this was everyone’s first trip through MIPS submission and, in light of that fact alone, the process was a great success. DHIT worked with clients to submit both individual and group data, rolling out some new validation and user experience enhancements to ensure continuity between the submission program and the output.

We assisted clients in file validation using a series of newly-developed APIs, received files securely for validation via a cloud-based submission server and assisted our clients in both data troubleshooting and making an informed decision regarding submission type by comparing measure outcomes.

Achieving successful submission doesn’t mean we haven’t learned something along the way – quite to the contrary – so we wanted to share a few lessons learned in this space. CQMs have become a perpetual development and submission cycle, which means we can’t pause long before looking ahead.

Lessons from 2017 submission
The 2017 submission cycle was highly educational, particularly on the Ambulatory side (QPP/MIPS submission).  Here are a few lessons to carry into 2018:
  1. Make sure to choose your most clinically-relevant measures: Reaching consensus in your organization about measure selection prior to data review will eliminate inconvenient reversals later in the process. Running an initial report with all available measures for the previous and/or current years can aid in this process (CQMsolution excludes all non-MIPS EP measures from QRDA-III output specific to MIPS). 
  2. Review data early and often: The most time-consuming aspect of eCQM reporting is making sure data is complete, accurate and does not trigger submission errors. To ease the burden, we have upgraded our specs, error handling and validation options to get client data into shape. In our CQM specs, we want you to know exactly what has changed from the previous submission period so you can move quickly from stored procedure changes to measure performance review. A quarterly submission option is available on the Inpatient side to facilitate incremental quality checks.
  3. Weigh your options: With CQMsolution, you can run reports using a variety of reporting periods, measures and outputs to compare ahead of time and choose your best performance. The MIPS program enables providers to submit with their group and/or as an individual, taking the best performance. The key here, as always, is giving yourself the time for this step.
  4. Know your core data elements: We have ramped up validation options to catch warnings and errors as soon as a report is complete, but you can also get a lead on these errors by checking a few important values and identifiers:
    • General (across programs): Check to make sure you have mapped essential rows for identifying patients, clinicians and encounters
    • HQR: Make sure you know your hospital’s TIN, CCN, CMS EHR Certification Number and, if available, the Medicare Beneficiary Identifier (MBI) for your patient.
    • QPP Group: Make sure your practice is providing a TIN (CQMsolution will take it in your data or you can provide on the UI) and sending all data for the entire group practice of clinicians under that TIN (whether virtual or not)
    • QPP Individual: Make sure you provide a TIN and a single NPI and make sure your data is filtered by a unique identifier (most commonly, NPI)
  5. Ensure quick turnaround on pre-submission validation: When your files are ready for pre-submission validation, we will work with you on the option that makes the most sense, including available APIs, DHIT acting as your Data Submission Vendor, and direct submission to and feedback from the MIPS program.
  6. Be prepared for Value Set and Measure changes: The change to Inpatient value sets for Q4 was highly disruptive and, while CMS is working to avoid a repeat, there’s no guarantee there won’t be mid-stream changes afoot in 2018.
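Some of the identifier checks in item 4 can be automated up front. For instance, an NPI's final digit is a Luhn check digit computed over the nine-digit base prefixed with "80840" (the CMS card-issuer prefix), so a mistyped NPI can usually be caught before a file ever reaches submission. A sketch:

```python
def npi_is_valid(npi):
    """Validate an NPI's Luhn check digit over the '80840' prefix + NPI."""
    if len(npi) != 10 or not npi.isdigit():
        return False
    total = 0
    for i, ch in enumerate(reversed("80840" + npi)):
        d = int(ch)
        if i % 2 == 1:   # double every second digit from the right
            d *= 2
            if d > 9:    # Luhn: sum the digits of any two-digit product
                d -= 9
        total += d
    return total % 10 == 0
```

TINs, by contrast, carry no check digit, so validation there is limited to format (nine digits) and verifying the value against your enrollment records.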
Next Steps
It’s a relief to get submissions in on time, but quality measure submission now allows for less downtime than ever. In addition to getting ready for HQR and QPP 2018, here are a few things to consider:
  • Hardship exemption deadlines: Some providers affected by disasters in 2017 have deadlines extended (and now looming)
  • Joint Commission: If you are submitting Inpatient eCQMs for the ORYX program, the deadline was extended until June 29, 2018. DHIT is an approved vendor for the program.
  • Medicaid Submission: Do you have any clients submitting to states? CQMsolution offers support for Medicaid Submission for all States
  • MIPS feedback: MIPS Preliminary Feedback is now available. If you submitted data through the Quality Payment Program website, you are now able to review your preliminary performance feedback data. 
  • Hybrid Measures: If you’re a hospital submitting HQR, you can submit this optional measure to QualityNet to assist in risk-adjustment
  • Get in touch with your DSV or Registry early: Along with our CQMsolution full-service quality measure application, we serve as a Data Submission Vendor (DSV) and MIPS Registry.
We’d love to talk to you about data submission that fits your schedule and show you our expanded roster of measures, new error validation processes and submission management/archiving.