Tuesday, October 23, 2018

2018 CQM changes – what’s it all about?

Finding yourself unsure about recent developments in Clinical Quality Measures?  Join the club.

Each year brings a host of changes to clinical quality measure logic, reporting requirements, and CMS quality program policy. This year, the usual cycle of CQM turnover is joined by an overhaul of CQM logic methodology for the 2019 CQM definitions: Clinical Quality Language (CQL).

Suffice it to say, there’s a lot to keep on your radar right now in the CQM world. And the deadline for eCQM submission will be here before we know it:

  • Hospital Quality Reporting: February 28, 2019, 11:59pm PT
  • Quality Payment Program (MIPS) eCQMs (non-Web Interface submission): April 2, 2019, 5pm PT


We prepared two posts to break down and interpret some of the changes that have occurred in 2018 and those that are coming in 2019. Our goal is to provide some action steps for CQM implementers.

The changes for 2018 submission fall into four categories:
  1. Value Set: changes to official codes and code sets used by CMS quality measures
  2. Measure Specification and Logic: changes in the manner of calculation and/or specific codes and code sets included in each measure
  3. QRDA-I/III format and submission: changes in the XML requirements (QRDA-I or QRDA-III) for major CQM reporting programs (such as MIPS, HQR and Joint Commission)
  4. Reporting: changes in high-level quality reporting program requirements
For those reporting Joint Commission (TJC) ORYX eCQMs in addition to CMS, it’s important to note that TJC has aligned its measures closely with CMS, but there are some programmatic changes that we’ll note below.

As you dig into the changes, an invaluable resource is the eCQI Resource Center, hosted by CMS. The eCQI site also has an annual Implementation Checklist that is helpful in preparing for CQM submission and walks through the steps needed to understand year-over-year changes.

To varying degrees, these changes require workflow and/or back-end changes by EHR vendors. But take heart – if the data exists somewhere, Dynamic Health IT’s quality measure bolt-on software, CQMsolution, can take care of the rest.

Value Set Changes
The National Library of Medicine (NLM) maintains the Value Set Authority Center (VSAC), which releases value set updates annually. The entirety of these value set changes is incorporated into our CQMsolution application each release year, with backward compatibility maintained for previous reporting years.

Each annual value set update has the potential to affect every eCQM in a given release year. The only way to fully implement these changes is to ensure that value sets and their respective codes crosswalk perfectly to the VSAC release.

VSAC also includes some retired and legacy codes in order to accommodate a lookback period for measure calculation, so it’s important to keep in mind that the eCQM value sets do not always correspond strictly to the latest version of a given code set (such as ICD-10 vs. ICD-9). CMS makes a determination of which code system versions it will approve for use during the measure year.
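As a rough illustration of that crosswalk requirement, the sketch below compares a local code mapping against the annual VSAC expansion and flags any value sets that have drifted. The data structures and the example OID/codes are assumptions for illustration, not part of CQMsolution.

```python
# Illustrative sketch: compare a local value set mapping against the annual
# VSAC expansion. Both inputs are assumed to be {value_set_oid: set_of_codes}.
def diff_value_sets(vsac: dict, local: dict) -> None:
    for oid, vsac_codes in vsac.items():
        local_codes = local.get(oid, set())
        missing = vsac_codes - local_codes   # codes added/changed in the new release
        stale = local_codes - vsac_codes     # local codes retired or dropped
        if missing or stale:
            print(f"{oid}: {len(missing)} missing, {len(stale)} stale")

# Example with a placeholder OID and codes (for illustration only):
diff_value_sets(
    {"2.16.840.1.113883.3.464.1003.101.12.1001": {"99201", "99202"}},
    {"2.16.840.1.113883.3.464.1003.101.12.1001": {"99201"}},
)
```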

Measure and Measure Logic Changes
While the high-level identifier (e.g., “CMS 2”) and description for a measure may stay the same across years, each year generally brings a new batch of measure versions.

The United States Health Information Knowledgebase (USHIK) offers a comparison tool for visualizing changes in measures across years. The tool is accessible directly from the eCQI Resource Center.

Changes to the measure specifications can range from reworded measure overviews to changes in how a measure population is calculated. For instance, in 2018, CMS 128 added an exclusion for patients who were in hospice care during the measurement year.
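As a purely illustrative sketch (not the published CMS 128 logic), an exclusion of this kind boils down to dropping denominator patients whose hospice care overlaps the measurement period:

```python
# Illustrative only -- not the published CMS 128 specification. Patients with a
# hospice record overlapping the measurement year are removed from the denominator.
from datetime import date

MEASUREMENT_PERIOD = (date(2018, 1, 1), date(2018, 12, 31))

def overlaps(start: date, end: date, period=MEASUREMENT_PERIOD) -> bool:
    """True when [start, end] intersects the measurement period."""
    return start <= period[1] and end >= period[0]

def apply_hospice_exclusion(denominator: list) -> list:
    """Drop patients with any qualifying hospice record (hypothetical data shape)."""
    return [
        patient for patient in denominator
        if not any(overlaps(h["start"], h["end"]) for h in patient.get("hospice", []))
    ]
```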

Each measure release year also has a Technical Release Notes document attached. Where the USHIK compare tool offers a side-by-side descriptive comparison, the Release Notes provide a granular list of all changes to a measure. These changes must be incorporated into the measure logic used to power all reporting; within CQMsolution’s calculation engine, this update is done 12-16 months in advance of measure reporting cycles.

QRDA-I/III reporting format changes
Each year, there are changes great and small to the formatting of the XML documents that must be submitted for both eligible clinicians (ECs) and eligible hospitals (EHs). Many of these changes are absorbed directly into the files themselves. However, here are the changes that have the potential to affect data capture and workflow on the provider side:

Eligible Hospital QRDA-Is (patient-level XML file):
  • Medicare Beneficiary Identifier (MBI) is not required for HQR but can and should be submitted if the payer is Medicare and the patient has an MBI
  • CMS EHR Certification Identification Number is now required for HQR.
  • TIN is no longer required
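As a rough sanity check on the hospital side (not a conformance validator), a vendor could scan outgoing QRDA-I files for these header elements before submission. The XPath expressions and OIDs in the sketch below are assumptions; the CMS QRDA-I Implementation Guide defines the authoritative templateIds.

```python
# Rough header sanity check only -- the XPaths and OIDs below are assumptions;
# consult the CMS QRDA-I Implementation Guide for the authoritative definitions.
from lxml import etree

NS = {"cda": "urn:hl7-org:v3"}  # QRDA-I is a CDA R2-based document

def check_qrda1_header(path: str) -> None:
    doc = etree.parse(path)
    # CMS EHR Certification Identification Number (now required for HQR);
    # assumed here to appear as a participant id in the document header.
    cert = doc.xpath("/cda:ClinicalDocument/cda:participant//cda:id/@extension",
                     namespaces=NS)
    if not cert:
        print("WARNING: no CMS EHR Certification Identification Number found")
    # TIN (no longer required); 2.16.840.1.113883.4.2 is the IRS TIN root OID.
    tin = doc.xpath("//cda:id[@root='2.16.840.1.113883.4.2']", namespaces=NS)
    if tin:
        print("NOTE: TIN present -- no longer required for HQR")

# check_qrda1_header("patient_001_qrda1.xml")
```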
Eligible Clinician QRDA-IIIs (aggregate XML file or JSON via API):
  • The performance period under MIPS can be reported at either of the following levels:
    • The individual measure level for the MIPS quality measures and at the individual activity level for the MIPS improvement activities (IA), or
    • The performance category level for Quality and IA performance categories
  • Virtual Groups can now be reported in QRDA-III (under a newly created CMS program name code, “MIPS_VIRTUALGROUP”)
  • Eight new Promoting Interoperability (PI) measures can be reported to indicate active engagement with more than one registry.
  • The 2015 Edition (c)(4) filter certification criterion (45 CFR 170.315(c)(4)) is no longer a requirement for CPC+ reporting (practices must continue to report eCQM data at the CPC+ practice site level)
For eligible clinicians submitting via API to the Quality Payment Program (as opposed to file upload), the QRDA-III file is converted to JSON (or is submitted directly as JSON). In 2018, some technical changes were made to that process, and measure names were updated to include "Promoting Interoperability (PI)" in the identifiers.

ECs can submit via API when using the Registry method. Dynamic Health IT is an authorized Registry with API submission privileges to QPP.
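To make that XML-to-JSON step more concrete, here is a rough sketch of a single quality measurement in the JSON shape the QPP Submissions API accepts. The field names approximate the public QPP schema, and the measure ID and counts are placeholders; check the QPP developer documentation before relying on any of them.

```python
# Approximate shape only -- field names follow our reading of the public QPP
# Submissions API schema; the measure ID and counts are placeholders.
import json

measurement_set = {
    "category": "quality",
    "submissionMethod": "registry",
    "performanceStart": "2018-01-01",
    "performanceEnd": "2018-12-31",
    "measurements": [
        {
            "measureId": "122",  # placeholder quality measure ID
            "value": {
                "performanceMet": 42,
                "performanceNotMet": 8,
                "eligiblePopulationExclusion": 10,
                "eligiblePopulation": 60,
            },
        }
    ],
}

# This payload would then be POSTed to the QPP Submissions API
# (endpoint and authentication omitted -- see the QPP developer documentation).
print(json.dumps(measurement_set, indent=2))
```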

Reporting changes

For eligible clinicians, there were changes in 2018 related to both measure lists and measure count thresholds for reporting. The overall list of CMS-sanctioned eCQMs previously shrank in 2017, and in 2018 CMS added two EP eCQMs:

  • CMS347v1
  • CMS645v1
Note that these measures are NOT available for the Medicaid EHR Incentive Program for Eligible Professionals.

In 2017, clinicians could submit a minimum of 1 measure for 1 patient for 1 day. But for 2018 data, clinicians must submit at least 6 measures for the 12-month performance period (January 1 - December 31, 2018). The quality category’s share of the overall MIPS score has also been reduced from 60% to 50%.

As in 2017, hospitals will still select at least four (4) of the 15 available electronic clinical quality measures (eCQMs) and report them for one self-selected quarter of 2018 data (Q1, Q2, Q3, or Q4).

In 2017, the Joint Commission’s ORYX eCQM reporting was modified to require a minimum of four eCQMs, over a minimum of one self-selected calendar quarter. This will remain the same in 2018.

Bringing it all back home
The most important consideration for EHR vendors is to minimize workflow impacts on the user interface and user processes, while ensuring adequate data capture for CQM calculation and reporting. One virtue of using a calculation and analysis package such as CQMsolution is that once you’ve captured and mapped the data elements, we handle all of the changes described above. That means you can begin running reports as soon as data is available.

Tuesday, October 9, 2018

DHIT at Connectathon 19: Unpacking CQMs, Compositions and the Future of FHIR

FHIR Connectathon 19 took place September 29 and 30 in Baltimore, Maryland. Overlooking the historic Baltimore Harbor, attendees flocked to collaborate and compare notes on the standard. The continued momentum in FHIR implementation is evident in the capacity crowds and raises the hope that this critical mass will conquer some of the outstanding obstacles to adoption.
[Photo: Baltimore Inner Harbor]

The backdrop of the harbor served as a visual metaphor for the navigation required of implementers. As in global shipping, we move valuable product (in our case, data) around the world and the care for each aspect – packaging, chain of custody, containers – requires thoughtful attention to detail (as you’ll see below, especially the containers).

The Dynamic Health IT team focused its efforts on two subject-based tracks: Clinical Reasoning, which centered on CQMs; and FHIR Documents, which involved creating FHIR documents, submitting them to a server, and consuming them.

The Documents Track
For the FHIR Documents Track, participants tackled both creation and consumption of FHIR documents, with the primary goal of sending a Composition. In FHIR, a Composition is meant to “provide a single coherent statement of meaning” as well as provide clear identification and attribution for the document. While it may seem obscure, getting this right is essential to the transparency and trust on which true interoperability depends.

On the creation side, we were tasked with assembling a FHIR document – defined as a “bundle containing a Composition and supporting resources” – and submitting that document to a FHIR server. There were a number of approaches to document creation. From the beginning, ours has been to use C-CDA R2.1 XML as the basis and create FHIR resources accordingly. This is the transformation on which our Dynamic FHIR API is based. This foundation enables us to support the use of familiar conventions and data types in C-CDA R2.1, while introducing clients to the FHIR standard and providing a method for fully meeting the 2015 ONC Certification API requirement.
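For readers new to the structure, here is a minimal sketch of what such a FHIR document looks like: a Bundle of type "document" whose first entry is the Composition, followed by the resources it references. The identifiers, codes, and server URL are placeholders, not artifacts from the Connectathon.

```python
# Minimal illustrative FHIR document: a Bundle of type "document" whose first
# entry is a Composition. Ids, codes and the target server are placeholders.
import json
import uuid

patient_urn = f"urn:uuid:{uuid.uuid4()}"

document = {
    "resourceType": "Bundle",
    "type": "document",
    "identifier": {"system": "urn:ietf:rfc:3986", "value": f"urn:uuid:{uuid.uuid4()}"},
    "timestamp": "2018-09-29T10:00:00-04:00",
    "entry": [
        {
            "fullUrl": f"urn:uuid:{uuid.uuid4()}",
            "resource": {
                "resourceType": "Composition",
                "status": "final",
                "type": {"coding": [{"system": "http://loinc.org", "code": "34133-9",
                                     "display": "Summarization of episode note"}]},
                "date": "2018-09-29",
                "title": "Continuity of Care Document",
                "subject": {"reference": patient_urn},
                "author": [{"display": "Dynamic Health IT"}],
            },
        },
        {
            "fullUrl": patient_urn,
            "resource": {
                "resourceType": "Patient",
                "name": [{"family": "Example", "given": ["Test"]}],
            },
        },
    ],
}

# The whole document Bundle is then POSTed to the server's Bundle endpoint, e.g.:
# requests.post("https://fhir.example.org/baseR4/Bundle", json=document,
#               headers={"Content-Type": "application/fhir+json"})
print(json.dumps(document, indent=2))
```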

Another point of discussion in document creation was whether to use the “contains” operator to create the Composition. In FHIR, ‘contains’ has a very specific use case, and most agreed that it is not meant for creating a composition. The main takeaway here is that by wrapping a composition in a ‘contains,’ the data is rendered non-interoperable.

The consumer side of the Documents track presented fewer hurdles conceptually. The goal was to retrieve a FHIR document from a reference server and prove its validity. One way to do this was to use an XSL stylesheet to display a human-readable version of the retrieved document. The DHIT team’s preparation in mapping FHIR to CDA R2.1 in the forward direction paid dividends here.
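As one possible approach (the server URL, document id, and stylesheet name below are placeholders), the retrieved document Bundle can be requested as XML and run through a stylesheet to produce the human-readable rendering:

```python
# Sketch of rendering a retrieved FHIR document with an XSL stylesheet.
# The server URL, Bundle id, and stylesheet file are placeholders.
import requests
from lxml import etree

resp = requests.get(
    "https://fhir.example.org/baseR4/Bundle/example-document",
    headers={"Accept": "application/fhir+xml"},
)
bundle = etree.fromstring(resp.content)

# Apply a local stylesheet that walks the Composition sections and emits HTML
transform = etree.XSLT(etree.parse("fhir-document-to-html.xsl"))
html = transform(bundle)

with open("document.html", "wb") as out:
    out.write(etree.tostring(html, pretty_print=True))
```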

Clinical Reasoning Track
The Clinical Reasoning track sat at the intersection of DHIT’s core competencies in quality measures and interoperability. Participants set out to integrate FHIR with clinical quality measures, focusing on the CMS-based electronic CQMs. DHIT specifically focused on testing against the latest versions of CMS 124, 125, and 130.

Our team’s preparation going into the Connectathon was an essential prerequisite for success. In the lead-up to the event, this meant taking FHIR-based CQM data and converting it to Quality Data Model (QDM) format to enable calculation. This is certainly not the only approach our team will be adopting as FHIR-based CQMs evolve. But as long as the Quality Data Model remains relevant, it provides a direct link between the measure logic and the underlying FHIR data.

This intermediary step requires mapping FHIR-standardized data to QDM-based concepts such as ‘Encounter, Performed’ (which do not exist in FHIR). Our approach has been to convert the data coming in:

FHIR server with FHIR data --> CQMsolution data format --> QRDA-I/QRDA-III/FHIR MeasureReport

Data can be gathered from FHIR and filtered by hospital, clinic, clinician, etc., for calculation by CQMsolution’s engine.
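As a simplified illustration of that conversion (not CQMsolution's actual data model; the output field names are assumptions), a FHIR Encounter can be flattened into the kind of 'Encounter, Performed' record a QDM-based engine expects:

```python
# Illustrative only -- not CQMsolution's internal format. Flattens a FHIR
# Encounter resource into a QDM-style "Encounter, Performed" record.
def encounter_to_qdm(encounter: dict) -> dict:
    period = encounter.get("period", {})
    coding = ((encounter.get("type") or [{}])[0].get("coding") or [{}])[0]
    return {
        "qdmCategory": "encounter",          # hypothetical field names
        "qdmStatus": "performed",
        "code": coding.get("code"),
        "codeSystem": coding.get("system"),
        "relevantPeriodStart": period.get("start"),
        "relevantPeriodEnd": period.get("end"),
        "patientId": (encounter.get("subject") or {}).get("reference"),
    }

# Example with a pared-down FHIR Encounter:
example = {
    "resourceType": "Encounter",
    "type": [{"coding": [{"system": "http://www.ama-assn.org/go/cpt", "code": "99213"}]}],
    "period": {"start": "2018-03-01T09:00:00Z", "end": "2018-03-01T09:30:00Z"},
    "subject": {"reference": "Patient/123"},
}
print(encounter_to_qdm(example))
```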

FHIR’s proposed Bulk Data extraction method, for use with large datasets, has great potential to expand quality measure interoperability. When interacting with FHIR servers that do not perform calculation, our CQMsolution application would use this standardized bulk transfer method on a specified group of patients, as dictated by the client, and receive a ready status when the bulk export is available for calculation and analysis.
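In outline, the proposed flow looks like the sketch below: kick off a group-level $export, poll the status URL until the server reports completion, then pull down the NDJSON files for calculation. The server URL and group id are placeholders, and the Bulk Data specification was still a draft at the time.

```python
# Sketch of the (then-draft) FHIR Bulk Data flow. Server URL and group id are
# placeholders; consult the current Bulk Data specification for exact details.
import time
import requests

BASE = "https://fhir.example.org/baseR4"

# 1. Kick-off: asynchronous group-level export
kickoff = requests.get(
    f"{BASE}/Group/measure-population/$export",
    headers={"Accept": "application/fhir+json", "Prefer": "respond-async"},
)
status_url = kickoff.headers["Content-Location"]

# 2. Poll until ready (202 = still in progress, 200 = complete)
while True:
    status = requests.get(status_url)
    if status.status_code == 200:
        break
    time.sleep(int(status.headers.get("Retry-After", "30")))

# 3. The completion manifest lists NDJSON files to feed into the calculation engine
for output in status.json().get("output", []):
    print(output["type"], output["url"])
```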

During the Connectathon, we performed calculation and manual analysis to compare our results to the reference server, checking them against the synthetic FHIR-based patients there. The exercises were largely focused on proof of concept and tracking data from point to point. The next steps in FHIR CQMs will likely involve more validation of calculations and a closer look at more exotic data types, such as negated elements or embedded events.

Our team is eager to identify use cases for data required by Quality Measures across the spectrum.


Looking ahead
With another Connectathon come and gone, there’s still plenty to unpack. We’re looking ahead to wider adoption of FHIR R4 and to the emergence of bulk data processing, while devoting current efforts to resolving the very real challenges facing the community at present. As consensus builds, implementers will be able to make fewer compromises between maintaining interoperability and steering toward the standard.

Keep in touch and check out our website for more information on where we’re going with FHIR.