
Wednesday, May 15, 2019

2019 MIPS – What You Need to Know


You’ve completed 2018 MIPS – everything is submitted and filed away.  Time to relax? 

Well, you certainly deserve some R&R, but don’t lose sight of the upcoming MIPS challenges and opportunities for the 2019 reporting year.  Increasingly, MIPS success will mean a year-round focus as CMS raises scoring thresholds and imposes greater penalties on weak and non-performers.  Here is our roundup of changes that will present challenges and opportunities in the upcoming year.

 

Opportunities

·      By June of 2019, CMS will have digested and posted MIPS scores in a patient-friendly format on the Medicare Physician Compare website.  The site will have a new hyperlink indicating “Performance information available”.   This “Performance information” is derived from MIPS scoring and may be used not just by patients and prospective patients but by any other interested parties.  So, even though your practice may provide excellent patient care, a sub-standard MIPS score could drag you down.
·        Strong performers can submit both as a group and as individuals and then choose the higher score.  Eligible Clinicians now include physical therapists, occupational therapists, qualified speech-language pathologists, qualified audiologists, clinical psychologists, and registered dietitians/nutrition professionals. 
·        eCQMs, Promoting Interoperability and Improvement Activities (details below) can now all be submitted via the new QPP API, eliminating the old manual upload process. 
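The QPP Submissions API accepts measurement data as JSON rather than manual file uploads. As a rough sketch of what a quality submission payload might look like (the field names here are illustrative assumptions, not the authoritative QPP schema, and nothing is actually sent):

```python
import json

# Sketch of a QPP-style submission payload. Field names, TIN/NPI values, and
# the measure ID are invented for illustration only.
def build_quality_submission(tin, npi, performance_year, measurements):
    return {
        "entityType": "individual",
        "taxpayerIdentificationNumber": tin,
        "nationalProviderIdentifier": npi,
        "performanceYear": performance_year,
        "measurementSets": [{
            "category": "quality",
            "submissionMethod": "electronicHealthRecord",
            "measurements": measurements,
        }],
    }

payload = build_quality_submission(
    tin="000123456",
    npi="0123456789",
    performance_year=2019,
    measurements=[{"measureId": "CMS165v7",
                   "value": {"numerator": 42, "denominator": 60}}],
)
print(json.dumps(payload, indent=2))
```

In practice the assembled JSON would be POSTed to the QPP Submissions API with an authorization token; the sketch stops at payload construction.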

Challenges

·        2015 Certified software must be in place during the entire reporting period, although it is permissible for the certification to happen after the start of the reporting period, as long as it is prior to the end of the reporting period. 2014 Certified software is no longer acceptable for 2019 reporting. 
·        To avoid a penalty, the minimum score is 30 points as opposed to 15 points in 2018. Likewise, the exceptional performance bonus threshold is up from 70 points to 75 points.

·        The 4 categories from 2018 remain but some percentages have been adjusted  for 2019:

1.      Quality 45% (decrease of 5%)
·        Minimum of 6 measures for 1 year
·        1 must be an outcome or High Priority measure (awarded higher points)
·        Bonus points awarded if you choose the same measure and show improvement from 2018
·        Avoid topped out measures, since scoring is capped at a maximum of 7 points
·        On the overall list of Quality Measures, 26 were removed and 8 were added (6 of which are high priority -- see chart below)
·        Among those removals and additions, several eCQMs changed:
o   CMS 249 and CMS 349 have been added
o   CMS 65, CMS 123, CMS 158, CMS 164, CMS 167, CMS 169 were removed
o   CMS 166 – previously for Medicaid-only submission – has been phased out.

·        New for 2019: CMS will aggregate eCQMs collected through multiple collection types; if the same measure is collected, the greatest number of measure achievement points will be awarded.
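The aggregation rule above amounts to a max-by-measure selection across collection types. A minimal sketch in Python, with invented scores for illustration:

```python
# Sketch of the 2019 aggregation rule described above: when the same quality
# measure is collected through multiple collection types, CMS keeps the
# highest achievement-point score. Point values here are made up.
def best_scores(submissions):
    """Map each measure ID to the (collection_type, points) with max points."""
    best = {}
    for measure_id, collection_type, points in submissions:
        if measure_id not in best or points > best[measure_id][1]:
            best[measure_id] = (collection_type, points)
    return best

submissions = [
    ("CMS165v7", "eCQM", 7.4),
    ("CMS165v7", "MIPS CQM", 8.1),   # same measure, different collection type
    ("CMS122v7", "eCQM", 6.0),
]
print(best_scores(submissions))
```

Here CMS165v7 would score 8.1 points, the greater of its two collection types.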

New Measures for 2019:

Measure ID | eCQM ID  | Name                                                                                                                   | Measure Type
468        | None     | Continuity of Pharmacotherapy for Opioid Use Disorder                                                                  | High Priority
469        | None     | Average Change in Functional Status Following Lumbar Spine Fusion Surgery                                              | High Priority
470        | None     | Average Change in Functional Status Following Total Knee Replacement Surgery                                           | High Priority
471        | None     | Average Change in Functional Status Following Lumbar Discectomy Laminectomy Surgery                                    | High Priority
472        | CMS249v1 | Appropriate Use of DXA Scans in Women Under 65 Years Who Do Not Meet the Risk Factor Profile for Osteoporotic Fracture | High Priority
473        | None     | Average Change in Leg Pain Following Lumbar Spine Fusion Surgery                                                       | High Priority
474        | None     | Zoster (Shingles) Vaccination                                                                                          | Process
475        | CMS349v1 | HIV Screening                                                                                                          | Process

2.      Promoting Interoperability 25%
·        Minimum of 90 days
·        Only one set of objectives & measures (reduced from 2 in 2018)
·        4 Objectives include: e-Prescribing, Health Info Exchange, Provider to Patient Exchange and Public Health & Clinical Data Exchange
·        50 point “base value”/bonus from 2018 has been removed
3.      Improvement Activities 15%
·        Minimum of 90 days
·        6 added, 5 modified, 1 removed = 118 total Improvement Activities
·        Bonus removed
4.      Cost 15% (increase of 5%)
·        1 year
·        No actual submission


Deadlines

·        Groups must register by June 30.
·        Submission Deadline:  March 31, 2020

The bottom line (in our opinion):  Don’t wait until the year is over to take action to improve your MIPS score.  Remember, the bar is set higher for 2019 and the financial incentives and penalties are also greater. 


Thursday, May 2, 2019

Prepping for 2019 Hospital Quality Reporting


Now that the 2018 reporting year has been wrapped up and submitted, this is a good opportunity to examine what worked and what areas need improvement to ensure a successful 2019 reporting year.

Rear-View Mirror

  • On the Quality Net site, we experienced issues with generating reports and site speed.  Apparently, others had the same issues.  Fortunately, CMS extended the 2018 deadline from February 28 to April 14.
  • To compound the frustration, Quality Net lacks an open forum for support tickets.  MIPS, Cypress, C-CDA (including CDA 2.0), and FHIR all have open Jira boards or Google Groups for support, allowing developers, implementers, and users to comment and ask questions through a transparent process. CMS does not.
  • The support process is tedious and time-consuming.  Undisclosed reporting tool issues created “false alarms” for our calculations and turnaround on support tickets moved slowly.   Nonetheless, we worked with the CMS help desk and technical support to ensure that our CQMsolution calculations matched Quality Net.
  • In spite of these obstacles, our new ‘Submit to DHIT’ button made testing and submitting a much smoother process.   All clients submitted successfully prior to the deadline.

Challenges

  • 2015 Certified EHR Technology (CEHRT) must be in place during the entire reporting period, although it is permissible for the certification to happen later, as long as it is posted on the ONC CHPL prior to the end of the reporting period. 
  • In case you still have doubts, 2014 Certified software is not acceptable for 2019 reporting.
  • 2019 Promoting Interoperability (formerly Meaningful Use) now has a MIPS-like scoring system, although unlike MIPS, Quality Measures are not part of the scoring.
  • The big challenge for EHR vendors and other suppliers of eCQM software is the transition to Clinical Quality Language (CQL) but, if done correctly, this transition should be transparent to software users.
  • Keep in mind that your CQM results are digested and posted to the Medicare Hospital Compare website.

Opportunities for Success

  • By submitting eCQMs to the IQR program, you will meet PI (MU) requirements for EHR submission.
  • Start running CQM reports early to identify problem areas and home in on CQMs that are best suited to your hospital.
  • In spite of CMS’ new “Meaningful Measures” initiative, the actual eCQMs and reporting period requirements are not changing for 2019:  You still choose a minimum of 4 eCQMs for one self-selected calendar quarter.
  • The overall list of hospital CMS eCQM measures in 2019 will stay the same, except for one adjustment:
    • CMS 55 is discontinued in the IQR program, but will remain in TJC (see below).
  •  For 2020, CMS is proposing to remove the 7 eCQMs (highlighted in blue, below) so you may want to take this into consideration when choosing your 2019 eCQMs:


  • For 2021, CMS is proposing to adopt two new opioid-related eCQMs: 
    • Safe Use of Opioids – Concurrent Prescribing eCQM, and
    • Hospital Harm – Opioid-Related Adverse Events eCQM. 


TJC Submission

  • The big news is that next year Joint Commission ORYX vendors will assist hospitals using their Direct Data Submission Platform (DDSP).  Additional communication regarding the transition is supposed to be released this Spring.
  • 2019 Measure selection was due on 12/31/2018. Hospitals that still need to select can do so by contacting hcooryx@jointcommission.org .
  • 2019 Measures: (no changes from 2018), hospitals choose a minimum of 4 measures for 1 quarter.
  • We submitted to ORYX for a number of clients and found that the calculations from our CQMsolution software were consistent with TJC across the board.

We hope this helps with your 2019 reporting process and, as always, welcome your feedback.

 


Thursday, February 21, 2019

FHIR Heats Up HIMSS

In the middle of eCQM submissions for our EH and EP clients, and Carnival season here in New Orleans, it seems impossible that something would be able to pull us away from our office in the Crescent City. However, HIMSS accomplishes this on an annual basis and we were eager to arrive in Orlando for the 2019 conference last week. ONC had just announced a new proposed rule, including the endorsement and mandatory incorporation of FHIR, and the news was traveling faster than the Boeing 737 we were on. This was quite the way to start off our eighth HIMSS conference. Fortunately, we also frequent FHIR Connectathons and have been preparing, and continue to prepare, for the FHIR-based changes ahead.



The New Proposed Rule


“FHIR” and “API” were two of the hottest acronyms at this year’s HIMSS Conference. This is still a proposed rule that is open for comments but, if finalized, ONC would require EHR vendors to offer a well-documented API that:
  •         Exposes US Core Data for Interoperability (USCDI v1) data elements
  •         Uses FHIR to support USCDI data classes and data elements
  •         Supports the SMART Application Launch Framework Implementation Guide
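To make the API requirement concrete, here is a minimal sketch of a FHIR read interaction against a certified EHR's endpoint, retrieving a USCDI data class (Patient). The base URL and token are placeholders, and the request is only constructed, never sent:

```python
from urllib.parse import urljoin

# Sketch of a FHIR "read" interaction of the kind the proposed rule
# envisions. The endpoint and token below are hypothetical placeholders.
FHIR_BASE = "https://ehr.example.com/fhir/"   # hypothetical FHIR endpoint
ACCESS_TOKEN = "example-token"                # would come from SMART/OAuth2

def fhir_read_request(resource_type, resource_id):
    """Return the URL and headers for a FHIR read, without sending it."""
    url = urljoin(FHIR_BASE, f"{resource_type}/{resource_id}")
    headers = {
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "Accept": "application/fhir+json",
    }
    return url, headers

url, headers = fhir_read_request("Patient", "12345")
print(url)   # https://ehr.example.com/fhir/Patient/12345
```

A real client would pass the URL and headers to an HTTP library and parse the returned FHIR JSON.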

USCDI expands on the Common Clinical Dataset that was introduced with the 2015 Edition final rule and includes two additional data classes: Clinical Notes and Provenance. Clinical Notes is aptly named and may include notes detailing assessment, a plan of care, patient teaching, and other relevant data points. While defining Provenance is less straightforward, the objective is to use metadata identifying the creator and owner of an element to deepen trust in a dataset, and ease auditing of it, as it is passed between systems and APIs.
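A FHIR Provenance resource pairs a target record with the agent who created it and when, which is the kind of metadata the USCDI Provenance class targets. A minimal hand-written sketch, with all identifiers and references invented for illustration:

```python
# Minimal sketch of a FHIR Provenance resource as a Python dict.
# All references and timestamps are invented for illustration.
provenance = {
    "resourceType": "Provenance",
    "target": [{"reference": "DocumentReference/note-001"}],  # what it covers
    "recorded": "2019-02-15T10:30:00Z",                       # when recorded
    "agent": [{
        "type": {"coding": [{"code": "author"}]},             # role of agent
        "who": {"reference": "Practitioner/dr-smith"},        # who created it
    }],
}
print(provenance["recorded"])
```

When such a resource travels with the clinical data it describes, a receiving system can verify who authored an element and when, without guessing.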

We were also gifted an abbreviated version of “API Resource Collection in Healthcare” as the proposed rule introduced us to the acronym ARCH. ARCH is aligned with the proposed USCDI and references 15 FHIR resources that certified vendors would be required to support. To help achieve this goal, ONC has created an open source (and still under development) tool for testing if patients can access their health data as expected. Fans of Dante, Dan Brown, and Tom Hanks are delighted with the naming of the new tool and “Inferno” is available at: https://inferno.healthit.gov/inferno/. 

More details on the testing suite can also be found here: https://www.healthit.gov/buzz-blog/interoperability/onc-is-fhird-up-unwrapping-the-new-inferno-testing-suite.  Additionally, the proposed rule would require adoption of SMART (Substitutable Medical Applications, Reusable Technologies) which uses OAuth2 to provide a layer of security for FHIR interfaces. OAuth2 is a widely used industry standard that, coupled with OpenID Connect, provides a level of authentication and authorization that is outside of FHIR’s scope. It also describes a process by which an EHR application can launch an external application while preserving the patient and user context, thus providing secure access to EHR data.
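The first leg of the SMART App Launch flow described above is a redirect to the EHR's OAuth2 authorization endpoint, carrying the app's identity, requested scopes, and the FHIR server audience. A sketch with placeholder endpoint, client, and scope values:

```python
from urllib.parse import urlencode

# Sketch of the SMART App Launch authorization request: the app sends the
# user's browser to the EHR's OAuth2 endpoint with these parameters.
# Endpoint, client_id, redirect_uri, and state are placeholder values.
authorize_endpoint = "https://ehr.example.com/oauth2/authorize"
params = {
    "response_type": "code",                        # authorization code flow
    "client_id": "my-smart-app",
    "redirect_uri": "https://app.example.com/callback",
    "scope": "launch/patient patient/*.read openid",
    "state": "abc123",                              # CSRF protection token
    "aud": "https://ehr.example.com/fhir",          # intended FHIR server
}
authorize_url = authorize_endpoint + "?" + urlencode(params)
print(authorize_url)
```

After the user authorizes, the EHR redirects back with a code that the app exchanges for the OAuth2 access token used on FHIR requests.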

Information blocking (including seven exceptions) is also a large part of the proposed rule and attempts to codify the concept of patients, rather than their health systems, owning their data and being entitled to access that data free of charge. As CMS administrator Seema Verma noted, “The idea that patient data belongs to providers or vendors is an epic misunderstanding. Patient data belongs to patients.”

Final Thoughts

Of course, the highlight of our HIMSS experience is interacting with our peers who are equally passionate about healthcare IT. Throughout the event, current and prospective DHIT clients stopped by our booth to chat and the energetic environment of HIMSS left us with additional motivation and a renewed commitment to healthcare IT and providing our client base with outstanding customer service.

Our promise is continuing to strive towards an impactful footprint in the HIT industry by providing research, news, and innovative development. If you missed us at HIMSS, plan to meet us at the next FHIR Developer Days, FHIR Connectathon, or ONC Annual Meeting as we dive deep into ongoing development on FHIR Resources and C-CDA requirements. We have a blog on NPRM, comments to ONC, newsletters, and emails all coming soon and, in the meantime, you can join us on Twitter @DynamicHealthIT.

Wednesday, January 30, 2019

HL7 January 2019 Working Group Meeting, FHIR Connectathon and C-CDA IAT: A Recap (Part II)

For the week following January 12th, a delegation from the Dynamic Health IT development team was on hand in San Antonio for the HL7 Working Group Meeting and FHIR Connectathon. Our team shared technical approaches and a vision for the future of FHIR, C-CDA and our industry at large. This is Part II of our recap of the week’s events.

While the Working Group Meeting (WGM) covered the entire week’s festivities, FHIR and C-CDA got the intensive treatment over the weekend of January 12th. While FHIR had its own discussion track at the WGM, in-depth technical discussions of interoperability at the document and data levels reigned over the weekend.

As in any technological meeting of minds, some of the discussion can become obscure. But it is just this level of detail that is pushing the maturity of the standards and is proving crucial to adoption.

Among other efforts over a busy weekend, our team discussed important use cases for the standards, reviewed USCDI data elements, shared our methodology for C-CDA to FHIR (not to be confused with C-CDA on FHIR) and went deep into the details of Clinical Notes. There were also, as always, a range of tracks offered to put developers through their paces.

In Part II, we summarize a few more of these important flash points.

Provenance comes of age
DHIT team in a rare moment of down time.


Provenance is “data and metadata for who and when information was/is created, for the purposes for trust, traceability and identification.” It has been a hot topic of conversation at recent C-CDA events, and this time was no different. To show that it has arrived (and needs to be taken seriously), it has been included in the expanded USCDI.
If document-level metadata can be verified and included as a matter of course, it will go a long way toward breaking down barriers of distrust between providers and systems.

Patient Summary vs. Encounter Summary

We discussed Encounter Summary documents at length in our previous post in which we reviewed some of the Carequality/CommonWell recommendations. Whereas an encounter summary is meant to be limited to a specific episode of care, patient summary documents contain a record of care over a period of time, including multiple encounters. 

But as debated at the IAT, how do we represent a problem list – should it show a snapshot in time, a net problem list or be in some way shifted to another document type or section to avoid any confusion? Most clinicians, at any given time, want condensed data that provides direct insight to the patients’ current state of being – so what happens to the history when a document is compiled?

Meds and Labs

Shades of meaning become increasingly important as standards are defined down to the last character.  While a ‘Medication, Administered’ should indicate that a clinician confirmed that the patient took the med, medication “complete” is another story.

There was a consensus for putting all lab results in the Results section, while some associated procedures may need to be dual-entered in the Procedures section.



Close quarters at Connectathons, while at times chaotic, have the virtue of providing sustained and meaningful interaction. Many questions that may get overlooked during a normal workday back at each of our respective offices get answered.

If the spirit of the event is “share and share alike,” then San Antonio was another roaring success. When it comes to interoperability, the learning and collaboration never stops.




Wednesday, January 23, 2019

HL7 January 2019 Working Group Meeting, FHIR Connectathon and C-CDA IAT: A Recap (Part I)

Tower of the Americas, San Antonio, TX.
For the week following January 12th, a delegation from the Dynamic Health IT development team was on hand in San Antonio for the HL7 Working Group Meeting and FHIR Connectathon. Our team shared technical approaches and a vision for the future of FHIR, C-CDA and our industry at large. 

For the uninitiated, the event is an interoperability extravaganza, allowing implementers to dive deep and uncover best practices in health data sharing.  The Connectathon portion now also hosts the C-CDA Implementation-a-Thon (IAT), allowing in-depth collaboration with industry leading EHR developers.

Major policymakers, developers and stakeholders are in attendance for an ambitious look at interoperability from practical and conceptual standpoints. The free flow of information on display mirrors what interoperability itself aspires to be: open and unencumbered. Perhaps most importantly, it reminds us that our human connections are essential to our virtual ones.

FHIR: Is the future now?

The FHIR Connectathon added the C-CDA IAT track, which allowed for exploration of the connection between FHIR and C-CDA documents. We continued our active engagement in this collaboration as we hammered out the C-CDA to FHIR Mapping. A CDA to FHIR and FHIR to CDA roundtrip allows implementers who only do CDA (or who only do FHIR) to communicate with each other.


One of the greatest challenges in this effort is to decide on and maintain the conversion mechanism that translates a C-CDA document into FHIR resources (and vice versa). C-CDAs rendered in XML do not always map neatly to FHIR resources and there are all sorts of judgment calls to be made; though it should be said: getting these two to play nicely is a key goal for HL7 policymakers and the next major release of FHIR (R5) will have more support for migrating data to and from v2.1 CDA documents.

FHIR Connectathon swag.

One conceptual hurdle, aside from mapping, is the fact that a C-CDA document tells a clinical story and is therefore more legible. FHIR’s resources are more abstract but highly flexible, which positions them better to serve as the clearinghouse: to be “read” directly by EHRs and mobile applications. But clinicians and other human readers require the narrative quality that a discrete document format provides.

The Dynamic team shared its experience implementing C-CDA to FHIR and our decision to emphasize the use of a document repository that creates FHIR resources on the fly. We have rolled this all the way up to a prototype mobile app for patient end-users, while our near-term goals include having Apple HealthKit talk directly to our Dynamic FHIR product.

Quality Measures – they’re everywhere

Clinical quality measure calculation from FHIR (or C-CDA) remains a nascent field of inquiry. The data sources for CQMs currently in the field are overwhelmingly based on mapping measure logic to some existing relational data format used by the EHR. But as our team has shown in previous Connectathon events, CQM calculation can work hand-in-hand with FHIR.


With the advent of Clinical Quality Language (CQL) in 2019, there is further hope that this more interpretable logic will enable more interoperability in CQMs. In theory, CQL files should be easier to create, read and share across domains beyond CQM compliance (for instance, clinical decision support). And by extension, they should dovetail more easily with FHIR resources.

There is plenty of cause for optimism here, including the fact that the team behind CQL offers a wealth of open source tooling and some emerging recommendations that CQM programs support more of the code systems under 2015 Edition.

Here is another example of the way in which clinical documents are colliding with CQMs:  The CPC+ program, which includes a subset of required CQMs, will also be requiring the output of a Care Plan document. Dynamic Health IT, along with a handful of other vendors, has previously implemented the Care Plan C-CDA in a production environment.

Connecting and collaborating.


C-CDA Scorecard rubric updates
Many EHRs have implemented some version of incoming C-CDA grading consistent with 2015 Edition. Use of the ONC-engineered Scorecard Tool is not required for certification, but some form of validation similar to that made available by ONC via API is needed. The Scorecard, which provides the additional flourish of a letter grade, was ONC’s way of allowing users to check the quality of a C-CDA from a sending system and providing a channel for accountability.

However, there has been some pushback from EHRs that the analysis provided by the tool is too blunt and that users of the tool are getting caught up in the grade, as opposed to what the grade represents (not unlike college). EMR vendors seem to prefer a progress bar allowing the end-user to understand where progress can be made to improve the provided data. 

ONC has some enhancements coming to provide more meaningful feedback to implementers by indicating the number of tests graded and number of tests passed, while indicating the number of unique issues.

There has also been a request from EHR vendors for more clarity in error messaging, which may be addressed in part by further education.

Stay tuned for our next post for more information on some of the leading-edge discussions around interoperability standards.

Monday, December 17, 2018

Common Sense in C-CDA: Comparing Carequality/CommonWell C-CDA to r2.1

The history of shared clinical documents is marked by decades of industry-wide deliberation.  In recent years, catalyzed by ONC Certification, there has been widespread adoption and refinement of the standard. 

The data EHRs were required to record and share under ONC Certification were originally called Meaningful Use Data and represented a list of basic chart elements. These were later reformulated as the more expansive Common Clinical Data Set (CCDS) in 2015 Edition Certification. The emergence of 2015 Edition criteria also removed the requirement for the encounter-based document previously required in 2014 Edition - the ambulatory Clinical Summary - making the test data for the latest edition of Health IT certification effectively patient-based.

The current ONC-sanctioned R2.1 version of the C-CDA document, upon which the sharing of these minimum data elements is based, includes long-discussed enhancements that have led to much greater clarity. But there remain issues of reliability, relevance and provenance surrounding the standard and its implementation, particularly at the encounter level. Some of these issues, particularly with respect to provenance, are being addressed in the underlying data with the Draft U.S. Core Data for Interoperability (USCDI), which is slated to replace the Common Clinical Data Set (CCDS).

There have also been efforts to provide constructive recommendations on the structure of the document itself.  Earlier this year, Carequality and CommonWell Content Work Groups released a white paper, using the C-CDA R2.1 Companion Guide as a baseline to provide "complementary, not conflicting guidance." CommonWell is a not-for-profit trade association focused on interoperability, while Carequality represents a similar industry-wide collaborative, convening healthcare stakeholders on interoperability issues.

The goal of the collaboration was a more clinically-relevant and parsimonious C-CDA. The collaborative work group tackled the following issues:
  • "Unacceptably large" C-CDA documents
  • A general absence of clinical notes 
  • Need for encounter summary support
  • Need for version management

Clinical Notes should follow the encounter. (Credit: Max Pixel)
The white paper has a wide range of common-sense recommendations, but we'll discuss some of the recommendations most relevant to implementing CCDA R2.1 under 2015 Edition Certification and USCDI (which is slated to replace CCDS).
  • Problems - only those addressed during the encounter 
  • Allergies - "only if the system can recreate the active Allergy list at the time of encounter"
  • Medications - "only if the system can recreate Medications at time of encounter"
  • Immunizations - those given during the encounter

While conditionality in general can get thorny, these recommendations make a good deal of clinical sense. If events are not touched by the encounter, updating and sending them makes little sense.

Specific to the USCDI, the work group judged the data set to contain "valuable data elements and should be exchanged to improve patient care." But they go on to say that Clinical Notes should not simply be a data dump: when they are supported, support for Encounter Summary documents should be added to ensure the note follows the encounter.

The recommendations focus on Encounter Summary documents (Progress Note Document and Discharge Summary) and preserve all sections required in the base C-CDA document template. After setting that foundation, they map out a "priority subset" of clinical data from the ONC Common Clinical Data Set (CCDS) and draft US Core Data for Interoperability (USCDI). The idea is to weed out data that is stale or irrelevant to the clinical encounter.

The prioritized sections for 'always include' have data within them that should appear conditional based on the events of the encounter:

In theory at least, the C-CDA as tested under 2015 Edition would become easier to interpret and implement. There are also a few general guidelines for non-priority elements:
  • Systems SHOULD send a ‘No information’ assertion template if nothing is available for one of the priority subset data elements. 
  • Systems MAY send additional data elements, beyond the priority subset, if relevant to the encounter. For these additional data elements, systems should not send a ‘No information’ template if nothing is available.
The concept of 'no information,' while ostensibly straightforward, is crucial here. It's the "known unknown" that provides more certainty to the receiver of the C-CDA that in fact nothing clinically relevant appears under that section, as opposed to having been omitted for an unknown reason.
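In a C-CDA document, this 'no information' assertion is typically expressed with a nullFlavor attribute on the section's entry. A deliberately simplified sketch (a real C-CDA section also requires templateIds, codes, and narrative text):

```python
import xml.etree.ElementTree as ET

# Simplified sketch of a C-CDA-style "no information" assertion: the
# section's single entry carries nullFlavor="NI" to state, affirmatively,
# that nothing is available. Real sections need templateIds, codes, and
# narrative text; this shows only the nullFlavor pattern.
section = ET.Element("section")
ET.SubElement(section, "title").text = "Allergies"
entry = ET.SubElement(section, "entry")
ET.SubElement(entry, "act", nullFlavor="NI")   # the "known unknown"

xml = ET.tostring(section, encoding="unicode")
print(xml)
```

The receiver can then distinguish "no allergies information exists" from "the allergies section was omitted for an unknown reason."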

The recommendations also touch the world of Fast Healthcare Interoperability Resources (FHIR), with implications for how its resources are used. The CommonWell-Carequality alliance is deeply involved in FHIR workgroups and its recommendations may, through governance of the standard, help shape efforts to ensure C-CDA and FHIR resources play more nicely together. Currently, there is a significant amount of subjectivity in getting the sections of a C-CDA in and out of FHIR.

Dynamic Health IT has been reviewing its C-CDA practices and guidance to make sure we help our clients send and receive documents that are clinically-relevant and readable. True interoperability requires nothing less.

We're looking forward to seeing you at the C-CDA IAT meeting. You can find the Track Agenda here.




Friday, December 7, 2018

ONC Annual Meeting 2018 Recap

On November 29-30, 2018, Health IT policymakers and implementers met in DC for the Annual ONC Meeting. As always, the agenda was full to the brim with discussions on the state of the health IT industry.

Interoperability and the future of 2015 Edition CEHRT were central themes, but the agenda content reveals just how far ONC's influence reaches - and by extension, the use of health IT systems and applications. Sessions tackled subjects as wide-ranging as care coordination, interoperability strategies, APIs, disaster response and opioid prescribing.


Jeff Robbins, Dynamic Health IT President, attended the meeting. His enthusiastic takeaway was that the conference was very beneficial, providing a survey of trends in health IT today and a look at innovative approaches to our most pressing challenges. While it's impossible to attend all sessions (or even to summarize them in a single blog post), we'll review a few highlights.

The breakout session “Improving Opioid Prescribing through Electronic Clinical Decision Support Tools” focused on solving the challenges posed by the Opioid Epidemic through the Prescription Drug Monitoring Program (PDMP). As part of monitoring, prescribing physicians can be securely notified of non-fatal overdose episodes, but this is currently uncommon. By connecting providers to PDMPs more widely, we can ensure that they have timely information that can limit drug-seeking, modulate prescription behavior and point to important connections to limit the spread of opioid abuse.

While we acknowledge the depth of the challenges facing the industry currently - particularly with respect to interoperability - there were plenty of success stories in evidence. DHIT was impressed by case studies showing enthusiastic adoption of the HL7 Fast Healthcare Interoperability Resource (FHIR) data standard for provider-payer data exchange. A private sector-led initiative — the Da Vinci Project — has progressed quickly enough to enlist government sponsorship from the ONC and then spawned the P2 FHIR Taskforce. The task force is supporting FHIR development efforts and turning its focus to challenges preventing adoption.

The “FHIR Implementation and Transition Planning” session was another opportunity to discuss EHR vendor Application Programming Interfaces (APIs). The 21st Century Cures Act calls for the development of APIs to promote data-sharing and the ONC 2015 Edition Test Method includes three measures (g7, g8, g9) encompassing patient data APIs - though neither explicitly mandates FHIR. During the Q&A session, Cerner and Epic were in the hot seat with application and personal health record (PHR) developers seeking broader access to their FHIR APIs.

On Friday, a lively panel discussion on “Data and Value-Based Care" was held. Economist Mark Pauly discussed the proverbial elephant in the room - a subject that is often avoided:  cost/benefit of healthcare and the need to set dollar limits on individual healthcare plan coverage.  Mark posed the hypothetical of whether there should be marginal dollar limits for coverage.

Finally, it's worth reflecting on the future of ONC's latest health IT test method: 2015 Edition CEHRT. With funding in question and transitions underway in both programs, Certification and Meaningful Use have both seen grave pronouncements within the last few years. But 2015 Edition CEHRT is still required in 2019 for providers reporting MIPS. And for those providers targeting the “end-to-end bonus" for eCQM reporting, that must be done in conjunction with a system that meets 2015 Edition CEHRT.

Our challenges - and the policy initiatives designed to meet them - aren't going anywhere.


Friday, November 16, 2018

2019 CQM changes – what’s it all about?

Each year, there are a host of changes to clinical quality measure logic, reporting requirements and CMS quality program policy. Come 2019, the usual cycle of CQM turnover is joined by an overhaul of CQM logic methodology: Clinical Quality Language (CQL).

In our previous post, we discussed 2018 changes. In this post, we’ll handle the changes coming for the 2019 Reporting Year. Those fall into five categories:
  1. CQL transition: Clinical Quality Language (CQL) is a measure-authoring language standard that in 2019 replaces the HQMF-based measure logic previously built on the Quality Data Model (QDM)
  2. Value Set Changes: changes to official codes and code sets used by CMS quality measures
  3. Measure Specification and Logic Changes: changes in the manner of calculation and/or specific codes and code sets included in each measure
  4. QRDA-I/III reporting format changes: changes in the XML requirements (QRDA-I or QRDA-III) for major CQM reporting programs (such as MIPS, HQR and Joint Commission)
  5. Reporting changes: high-level program changes such as the menu and number of required measures, method of reporting, etc.
CQL transition
The conversion of eCQMs to Clinical Quality Language (CQL) has been a hot topic ever since CMS announced that, starting with the 2019 reporting period, eCQMs would move from logic based on the HQMF XML modeling language to CQL.  In little more than two months, CQL will become the lingua franca of eCQMs.

CQL has been chosen for its ability to give quality measure authors more flexibility in creating precise, human-readable definitions of quality measures. But what does this mean for health care providers and EHR vendors? If implemented smoothly, it should be barely noticeable to providers. Measure changes between 2018 and 2019 will be absorbed by the CQL measures and calculated as expected, versions notwithstanding.

The impact on EHR developers is far more significant. While developers providing calculation and analysis tools based on CQL have flexibility to use the files that integrate best into their software (i.e., ELM XML vs. JSON), they will need to overhaul their measure specifications at the root level to ensure they rest on a CQL basis.

CQL makes calculation logic more readable and transparent, in part by supporting calculation within the logic itself. Previously, a concept like “cumulative medication duration” was a derived element that could not be expressed in QDM-based logic; CQL expresses this kind of calculation in a computable format within the logic.
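To illustrate what that derived element involves, here's a hedged Python sketch of the interval arithmetic behind a "cumulative medication duration" calculation. The dates are invented, and real CQL would express this declaratively rather than imperatively:

```python
from datetime import date

def cumulative_medication_days(periods):
    """Sum the days covered by medication periods, merging overlapping
    fills so each covered day is counted once - the kind of derived
    calculation CQL can now express inline."""
    merged = []
    for start, end in sorted(periods):
        if merged and start <= merged[-1][1]:
            # overlaps (or abuts) the previous interval: extend it
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    # inclusive of both endpoints
    return sum((end - start).days + 1 for start, end in merged)

periods = [(date(2019, 1, 1), date(2019, 1, 10)),
           (date(2019, 1, 5), date(2019, 1, 20)),   # overlaps the first fill
           (date(2019, 3, 1), date(2019, 3, 5))]
print(cumulative_medication_days(periods))  # 25
```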

Fortunately, DHIT has rendered eCQMs (and a wide range of non-eCQMs) in JSON well in advance of the 2019 reporting deadlines.

Value Set Changes
As discussed in Part 1, there are annual value set changes that affect eCQMs differently each reporting year. The National Library of Medicine (NLM) maintains the Value Set Authority Center (VSAC), which releases value set updates annually. The entirety of these value set changes are incorporated into our CQMsolution application each release year, with backward compatibility maintained for previous reporting years. The only way to fully implement these changes is to be sure that value sets and their respective codes crosswalk perfectly to the VSAC release.
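As a rough illustration of what that crosswalk check entails, here's a small Python sketch that diffs a local code mapping against a value set release. The codes shown are invented for illustration and are not an actual VSAC value set:

```python
def crosswalk_gaps(local_codes, vsac_value_set):
    """Compare locally mapped codes against the current VSAC release.
    Returns codes that no longer appear in the release and codes
    newly added to it."""
    local, vsac = set(local_codes), set(vsac_value_set)
    return {
        "retired_or_mismapped": sorted(local - vsac),  # fix or retire these mappings
        "newly_added": sorted(vsac - local),           # map these before reporting
    }

# Hypothetical codes for illustration only - not a real VSAC value set.
gaps = crosswalk_gaps(
    local_codes={"E11.9", "E11.65", "250.00"},
    vsac_value_set={"E11.9", "E11.65", "E11.8"},
)
print(gaps["retired_or_mismapped"])  # ['250.00']
print(gaps["newly_added"])           # ['E11.8']
```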

Measure Specification and Logic Changes
Each new reporting year brings a new batch of measure versions. The United States Health Information Knowledgebase (USHIK) offers a comparison tool for visualizing changes in measures across years. The tool is accessible directly from the eCQI Resource Center, but as of this writing is not yet available for 2019 eCQMs. Changes to the measure specifications can range from adjusted wording in the measure overview to adjustments in how a measure population is calculated and in its data elements. Examples of new data criteria include the attributes ‘prevalencePeriod’ and ‘components.’

QRDA-I/III reporting format changes
Here is a rundown of the most relevant changes to the requirements in QRDA-I and QRDA-III submission files:

QRDA-I:
  • Changes to accepted date time formats: made more expansive
  • Data types of CD or CE SHALL have either code or nullFlavor but not both
QRDA III:
  • Changes to group-level reporting identification (for MIPS Groups and MIPS Virtual Groups): a MIPS Group's identifier SHALL be the group's TIN, while a Virtual Group will use its Virtual Group Identifier
  • Changes across the XML to support transition from ‘ACI’ to ‘Promoting Interoperability’
  • Standard measure identifier changes (UUIDs)
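The CD/CE rule above lends itself to a simple validation check. Here's a hedged Python sketch of that exclusive-or logic, using an invented dictionary representation of an element's attributes rather than actual QRDA XML parsing:

```python
def valid_cd(element: dict) -> bool:
    """Per the 2019 QRDA-I guidance: a CD or CE data type SHALL carry
    either a code or a nullFlavor, but not both (and not neither).
    `element` is a simplified dict of XML attributes."""
    has_code = "code" in element
    has_null = "nullFlavor" in element
    return has_code != has_null  # exclusive or

print(valid_cd({"code": "8867-4", "codeSystem": "2.16.840.1.113883.6.1"}))  # True
print(valid_cd({"nullFlavor": "UNK"}))                                      # True
print(valid_cd({"code": "8867-4", "nullFlavor": "UNK"}))                    # False
```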
Reporting changes
The overall list of CMS eCQM measures in 2019 will stay the same for hospitals, but there are a few program-level measure adjustments for hospitals:
  • CMS 55 is discontinued in the IQR program, but will remain in TJC
2019 EH measures by Program
On the EP/EC side, there are some changes to the overall menu of measures:
  • CMS 249 and CMS 349 are added
  • CMS 65, CMS 123, CMS 158, CMS 164, CMS 167, CMS 169 are removed
  • CMS 166, previously for Medicaid-only submission, has been phased out.
Reporting for eligible clinicians has expanded to include the MIPS API, which has evolved since its introduction in 2018.
One significant change on the hospital side is that a QRDA-I submitted and accepted into production will overwrite any preexisting file based on the exact match of five key elements identifying the file: CCN, CMS Program Name, EHR Patient ID, EHR Submitter ID, and the reporting period specified in the Reporting Parameters Section.
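That five-element matching rule can be pictured as a keyed lookup. The Python below illustrates the overwrite behavior only; it is not CMS's actual intake logic, and the field names are invented:

```python
def file_key(qrda: dict) -> tuple:
    """The five elements CMS matches on to decide whether a new QRDA-I
    production file overwrites an earlier one."""
    return (qrda["ccn"], qrda["program"], qrda["patient_id"],
            qrda["submitter_id"], qrda["reporting_period"])

def apply_submission(accepted: dict, new_file: dict) -> dict:
    """Simulate production intake: an exact five-element match replaces
    the prior file; anything else is stored alongside it."""
    accepted[file_key(new_file)] = new_file
    return accepted

store = {}
v1 = {"ccn": "123456", "program": "HQR_IQR", "patient_id": "P1",
      "submitter_id": "S1", "reporting_period": "2019Q1", "version": 1}
v2 = dict(v1, version=2)  # identical key -> overwrites v1
apply_submission(store, v1)
apply_submission(store, v2)
print(len(store), store[file_key(v2)]["version"])  # 1 2
```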

For those submitting Joint Commission eCQMs, the program will be shifting to the Direct Data Submission (DDS) Platform, in which hospitals will submit measures directly. DHIT will offer a full suite of consulting and data review services to support this.


Tuesday, October 23, 2018

2018 CQM changes – what’s it all about?

Finding yourself unsure about recent developments in Clinical Quality Measures?  Join the club.

Each year, there are a host of changes to clinical quality measure logic, reporting requirements and CMS quality program policy. And currently, the usual cycle of CQM turnover is joined by an overhaul of CQM logic methodology for the 2019 CQM definitions: Clinical Quality Language (CQL).

Suffice it to say, there’s a lot to keep on your radar right now in the CQM world. And the deadline for eCQM submission will be here before we know it:

  • Hospital Quality Reporting: February 28, 2019, 11:59pm PT
  • Quality Payment Program (MIPS) eCQMs (non-Web Interface submission): April 2, 2019, 5pm PT


We prepared two posts to break down and interpret some of the changes that have occurred in 2018 and those that are coming in 2019. Our goal is to provide some action steps for CQM implementers.

The changes for 2018 submission fall into four categories:
  1. Value Set: changes to official codes and code sets used by CMS quality measures
  2. Measure Specification and Logic: changes in the manner of calculation and/or specific codes and code sets included in each measure
  3. QRDA-I/III format and submission: changes in the XML requirements (QRDA-I or QRDA-III) for major CQM reporting programs (such as MIPS, HQR and Joint Commission)
  4. Reporting: changes in high-level quality reporting program requirements
For those reporting Joint Commission (TJC) ORYX eCQMs in addition to CMS, it’s important to note that TJC has aligned its measures closely with CMS, but there are some programmatic changes that we’ll note below.

As you dig into the changes, an invaluable resource is the eCQI Resource Center, hosted by CMS. The site also offers an annual Implementation Checklist that walks through steps for understanding year-over-year changes and preparing for CQM submission.
To varying degrees, these changes require workflow and/or back-end changes by EHR vendors. But take heart – if the data exists somewhere, Dynamic Health IT’s quality measure bolt-on software, CQMsolution, can take care of the rest.

Value Set Changes
The National Library of Medicine (NLM) maintains the Value Set Authority Center (VSAC), which releases value set updates annually. The entirety of these value set changes are incorporated into our CQMsolution application each release year, with backward compatibility maintained for previous reporting years.

The update of value set changes contains changes that have the potential to affect all eCQMs in a given release year. The only way to fully implement these changes is to be sure that value sets and their respective codes crosswalk perfectly to the VSAC release.

VSAC also includes some retired and legacy codes in order to accommodate a lookback period for measure calculation, so it’s important to keep in mind that the eCQM value sets do not always correspond to strictly the latest version of any code set (such as ICD-10 vs ICD-9). CMS makes a determination of which code system versions they will approve for use during the measure year.

Measure and Measure Logic Changes
While the high-level identifier (eg, “CMS 2”) and description for measures may stay the same across years, each year generally brings a new batch of measure versions.

The United States Health Information Knowledgebase (USHIK) offers a comparison tool for visualizing changes in measures across years. The tool is accessible directly from the eCQI Resource Center.

Changes to the measure specifications can range from adjusted wording in the measure overview to an adjustment to how a measure population is calculated. For instance, in 2018, CMS 128 added an exclusion for patients who were in hospice care during the measurement year.

Each measure release year also has a Technical Release Notes document attached. Where the USHIK compare tool has a side-by-side descriptive comparison, the Release Notes provide a granular list of all changes to a measure. These changes must be incorporated into the measure logic that powers all reporting; this update is done within CQMsolution’s calculation engine 12-16 months in advance of measure reporting cycles.

QRDA-I/III reporting format changes
Each year, there are changes great and small to the formatting of the XML documents that must be submitted for both eligible clinicians (ECs) and eligible hospitals (EHs). Many of these changes are absorbed directly into the files themselves. However, here are the changes that have the potential to affect data capture and workflow on the provider side:

Eligible Hospital QRDA-Is (patient-level XML file):
  • Medicare Beneficiary Identifier (MBI) is not required for HQR but can and should be submitted if the payer is Medicare and the patient has an MBI
  • CMS EHR Certification Identification Number is now required for HQR.
  • TIN is no longer required
Eligible Clinician QRDA-IIIs (aggregate XML file or JSON via API):
  • The performance period under MIPS can be reported at either of the following levels:
    • The individual measure level for the MIPS quality measures and at the individual activity level for the MIPS improvement activities (IA), or
    • The performance category level for Quality and IA performance categories
  • Virtual Groups can now be reported in QRDA-III (under a newly created CMS program name code, “MIPS_VIRTUALGROUP”)
  • Eight new Promoting Interoperability (PI) measures can be reported to indicate active engagement with more than one registry.
  • The 2015 Edition (c)(4) filter certification criterion (45 CFR 170.315(c)(4)) is no longer a requirement for CPC+ reporting (practices must continue to report eCQM data at the CPC+ practice site level)
For eligible clinicians submitting via API to the Quality Payment Program (as opposed to file upload), the QRDA-III file is converted to JSON (or is submitted directly as JSON). In 2018, some technical changes were made to that process, and measure names were updated to include "Promoting Interoperability (PI)" in their identifiers.

ECs can submit via API when using the Registry method. Dynamic Health IT is an authorized Registry with API submission privileges to QPP.
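For a sense of what the converted JSON looks like, here's a rough Python sketch of a quality measurement set. The field names approximate the shape of the QPP Submissions API payload but should not be treated as authoritative; consult the QPP developer documentation before building against it:

```python
import json

def build_quality_measurement_set(measure_id, performance_met, eligible_population):
    """Approximate shape of one QPP measurement set for a registry
    submission. Field names are illustrative, not the official schema."""
    return {
        "category": "quality",
        "submissionMethod": "registry",
        "measurements": [{
            "measureId": measure_id,
            "value": {
                "performanceMet": performance_met,
                "eligiblePopulation": eligible_population,
            },
        }],
    }

payload = build_quality_measurement_set("236", performance_met=42, eligible_population=60)
print(json.dumps(payload, indent=2))
```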

Reporting changes

For eligible clinicians, there were changes in 2018 related to both measure lists and measure count thresholds for reporting. The overall list of CMS-sanctioned eCQMs shrank in 2017, and in 2018 CMS added two EP eCQMs:

  • CMS347v1
  • CMS645v1
Note that these measures are NOT available for Medicaid EHR Incentive Program for Eligible Professionals.

In 2017, clinicians could submit a minimum of 1 measure for 1 patient for 1 day. But for 2018 data, clinicians must submit at least 6 measures for the 12-month performance period (January 1 - December 31, 2018). The quality category's share of the overall MIPS score has also been reduced from 60% to 50%.

As in 2017, hospitals will still select at least four (4) of the 15 available electronic clinical quality measures (eCQMs) for one self-selected quarter of 2018 data (Q1, Q2, Q3, or Q4) during the same reporting period.

In 2017, the Joint Commission’s ORYX eCQM reporting was modified to require a minimum of four eCQMs, over a minimum of one self-selected calendar quarter. This will remain the same in 2018.

Bringing it all back home
The most important consideration for EHR vendors is to minimize workflow impacts for the user interface and user processes, while ensuring adequate data capture for CQM calculation and reporting. One virtue of using a calculation and analysis package such as CQMsolution is that once you’ve captured and mapped the data elements, we handle all of the changes described above. That means you can begin running reports as soon as data is available.

Tuesday, October 9, 2018

DHIT at Connectathon 19: Unpacking CQMs, Compositions and the Future of FHIR

FHIR Connectathon 19 took place September 29 and 30 in Baltimore, Maryland. Overlooking historic Baltimore Harbor, attendees flocked to collaborate and compare notes on the standard. The continued momentum in FHIR implementation is evident in the capacity crowds and raises the hope that this critical mass will conquer some of the outstanding obstacles to adoption.
Baltimore Inner Harbor

The backdrop of the harbor served as a visual metaphor for the navigation required of implementers. As in global shipping, we move valuable product (in our case, data) around the world and the care for each aspect – packaging, chain of custody, containers – requires thoughtful attention to detail (as you’ll see below, especially the containers).

The Dynamic Health IT team focused efforts on two subject-based tracks: Clinical Reasoning, which centered around CQMs; and FHIR Documents, which involved creating and consuming FHIR-specific documents to a server.

The Documents Track
For the FHIR Documents Track, participants tackled both creation and consumption of FHIR documents, with the primary goal of sending a composition. In FHIR, a composition is meant to “provide a single coherent statement of meaning” as well as clear identification and attribution for the document. While it may seem obscure, getting this right is essential to the transparency and trust on which true interoperability depends.

On the creation side, we were tasked with assembling a FHIR document – defined as a “bundle containing a Composition and supporting resources” and submitting that document to a FHIR server. There were a number of approaches to document creation. From the beginning, ours has been to use C-CDA R2.1 XML as the basis and create FHIR resources accordingly. This is the transformation on which our Dynamic FHIR API is based. This foundation enables us to support the use of familiar conventions and data types in CDA r2.1, while introducing clients to the FHIR standard and providing a method for fully meeting the 2015 ONC Certification API requirement.
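The bundle-plus-Composition structure can be sketched in a few lines of Python. This is a minimal illustration of the FHIR document shape, not our production C-CDA transformation, and the resource contents are invented:

```python
def build_fhir_document(composition: dict, *supporting_resources: dict) -> dict:
    """Assemble a FHIR document: a Bundle of type 'document' whose first
    entry is the Composition, followed by the resources it references."""
    entries = [composition, *supporting_resources]
    return {
        "resourceType": "Bundle",
        "type": "document",
        "entry": [{"resource": r} for r in entries],
    }

composition = {
    "resourceType": "Composition",
    "status": "final",
    "title": "Continuity of Care Document",
    "subject": {"reference": "Patient/example"},  # identification/attribution
}
patient = {"resourceType": "Patient", "id": "example"}
doc = build_fhir_document(composition, patient)
print(doc["entry"][0]["resource"]["resourceType"])  # Composition
```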

Another point of discussion in document creation was the need to use the “contains” operator to create the Composition. In FHIR, ‘contains’ has a very specific use case and most agreed that it is not meant for creating a composition. The main takeaway here is that wrapping a composition in a ‘contains’ renders the data non-interoperable.

The consumer side of the Documents track presented fewer hurdles conceptually. The goal was to retrieve a FHIR document from a reference server and prove its validity. One way to do this was to use an XSL stylesheet to display the human readable version of the document retrieved. The DHIT team’s preparation in mapping FHIR to CDA R2.1 in the forward direction paid dividends here.

Clinical Reasoning Track
The Clinical Reasoning track sat at the intersection of DHIT’s core competencies: quality measures and interoperability. Participants set out to integrate FHIR with clinical quality measures, focusing on the CMS-based electronic CQMs. DHIT specifically tested against the latest versions of CMS 124, 125, and 130.

Our team’s preparation going into the Connectathon was an essential prerequisite for success. In the lead up to the event, this meant taking FHIR-based CQM data and converting to Quality Data Model (QDM) format to enable calculation. This is certainly not the only approach our team will be adopting as FHIR-based CQMs evolve. But as long as the Quality Data Model remains relevant, it provides a direct link between the measure logic and the underlying FHIR data.

This intermediary step requires that QDM-based concepts such as ‘Encounter, Performed’ (which do not exist in FHIR) be converted to FHIR-standardized data. Our approach has been to convert the data coming in:

FHIR server with FHIR data --> CQMsolution data format --> QRDA-I/QRDA-III/FHIR MeasureReport

Data can be gathered from FHIR and filtered by hospital, clinic, clinician, etc, for calculation by CQMsolution’s engine.

FHIR’s proposed Bulk Data extraction method, for use with large datasets, has great potential to expand quality measure interoperability. When interacting with FHIR servers that do not perform calculation, our CQMsolution application would make use of this standardized bulk transfer method on a specified group of patients as dictated by the client and receive a ready status when the bulk export is available for calculation and analysis.
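The kickoff step of that bulk transfer can be sketched as follows. This Python snippet only assembles the request pieces (the base URL and group ID are hypothetical); a real client would send the request and then poll the Content-Location URL returned with the server's 202 response until the export is ready:

```python
def bulk_export_kickoff(base_url: str, group_id: str) -> dict:
    """Request pieces for a FHIR Bulk Data kickoff on a patient group.
    The server accepts with HTTP 202 and a Content-Location to poll."""
    return {
        "method": "GET",
        "url": f"{base_url.rstrip('/')}/Group/{group_id}/$export",
        "headers": {
            "Accept": "application/fhir+json",
            "Prefer": "respond-async",  # required: the export runs as an async job
        },
    }

req = bulk_export_kickoff("https://fhir.example.com/r4", "measure-population-1")
print(req["url"])  # https://fhir.example.com/r4/Group/measure-population-1/$export
```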

During the Connectathon, we performed calculation and manual analysis to compare our results to the reference server, checking against the synthetic FHIR-based patients there. The exercises were largely focused on proof of concept and tracking data from point to point. The next steps in FHIR CQMs will likely involve more validation of calculations and a closer look at more exotic data types, such as negated elements or embedded events.

Our team is eager to identify use cases for data required by Quality Measures across the spectrum.


Looking ahead
With another Connectathon come and gone, there’s still plenty to unpack. We’re looking ahead to wider adoption of FHIR R4 and to the emergence of bulk data processing, while devoting current efforts to resolving the very real challenges facing the community at present. As consensus builds, implementers will be able to make fewer compromises between maintaining interoperability and steering toward the standard.

Keep in touch and check out our website for more on where we’re going with FHIR.