Tuesday, October 9, 2018

DHIT at Connectathon 19: Unpacking CQMs, Compositions and the Future of FHIR

FHIR Connectathon 19 took place September 29 and 30 in Baltimore, Maryland. Overlooking the historic Baltimore Inner Harbor, attendees flocked to collaborate and compare notes on the standard. The continued momentum in FHIR implementation is evident in the capacity crowds and raises the hope that this critical mass will conquer some of the outstanding obstacles to adoption.
Baltimore Inner Harbor

The backdrop of the harbor served as a visual metaphor for the navigation required of implementers. As in global shipping, we move valuable product (in our case, data) around the world and the care for each aspect – packaging, chain of custody, containers – requires thoughtful attention to detail (as you’ll see below, especially the containers).

The Dynamic Health IT team focused efforts on two subject-based tracks: Clinical Reasoning, which centered around CQMs; and FHIR Documents, which involved creating and consuming FHIR-specific documents to a server.

The Documents Track
For the FHIR Documents Track, participants tackled both creation and consumption of FHIR documents, with the primary goal of sending a composition. In FHIR, a composition is meant to “provide a single coherent statement of meaning” as well as provide clear identification and attribution for the document. While it may seem obscure, getting this right is essential to the transparency and trust on which true interoperability depends.

On the creation side, we were tasked with assembling a FHIR document – defined as a “bundle containing a Composition and supporting resources” – and submitting that document to a FHIR server. There were a number of approaches to document creation. From the beginning, ours has been to use C-CDA R2.1 XML as the basis and create FHIR resources accordingly. This is the transformation on which our Dynamic FHIR API is based. This foundation enables us to support the use of familiar conventions and data types in C-CDA R2.1, while introducing clients to the FHIR standard and providing a method for fully meeting the 2015 ONC Certification API requirement.
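
As a rough illustration of the document structure described above – a minimal sketch, not our production transform, with placeholder resource contents:

```python
# Minimal sketch of a FHIR "document": a Bundle of type "document" whose
# first entry is the Composition, followed by its supporting resources.
# Resource contents here are illustrative placeholders.
import uuid

def build_document_bundle(composition, supporting_resources):
    """Wrap a Composition and supporting resources in a document Bundle."""
    entries = [{"fullUrl": f"urn:uuid:{uuid.uuid4()}", "resource": composition}]
    for resource in supporting_resources:
        entries.append({"fullUrl": f"urn:uuid:{uuid.uuid4()}", "resource": resource})
    return {
        "resourceType": "Bundle",
        "type": "document",   # distinguishes a FHIR document from other bundles
        "entry": entries,
    }

composition = {"resourceType": "Composition", "status": "final",
               "title": "Continuity of Care Document"}
patient = {"resourceType": "Patient", "id": "example"}
bundle = build_document_bundle(composition, [patient])
```

The assembled bundle can then be POSTed to the server, with the exact submission endpoint depending on the server under test.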

Another point of discussion in document creation was the use of FHIR’s ‘contained’ mechanism to create the Composition. In FHIR, ‘contained’ has a very specific use case, and most agreed that it is not meant for creating a composition. The main takeaway here is that by wrapping a composition’s supporting resources in ‘contained,’ the data is rendered non-interoperable.
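
To make the distinction concrete, here is an illustrative contrast (resource IDs are placeholders) between a Composition that hides its subject inside ‘contained’ and the document idiom of referencing a sibling Bundle entry:

```python
# Illustrative contrast: packing the Patient into "contained" makes it
# reachable only through the internal "#p1" reference, while the document
# idiom keeps the Patient as its own Bundle entry that others can resolve.
composition_with_contained = {
    "resourceType": "Composition",
    "status": "final",
    "contained": [{"resourceType": "Patient", "id": "p1"}],
    "subject": {"reference": "#p1"},   # internal-only reference
}

composition_with_reference = {
    "resourceType": "Composition",
    "status": "final",
    "subject": {"reference": "Patient/p1"},  # resolvable within the bundle
}

document_bundle = {
    "resourceType": "Bundle",
    "type": "document",
    "entry": [
        {"resource": composition_with_reference},
        {"resource": {"resourceType": "Patient", "id": "p1"}},
    ],
}
```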

The consumer side of the Documents track presented fewer hurdles conceptually. The goal was to retrieve a FHIR document from a reference server and prove its validity. One way to do this was to use an XSL stylesheet to display the human readable version of the document retrieved. The DHIT team’s preparation in mapping FHIR to CDA R2.1 in the forward direction paid dividends here.
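
The consumer flow can be sketched as follows (the server base URL is hypothetical; `Composition/$document` is the standard FHIR operation for retrieving a document bundle, and the network call is stubbed out here):

```python
# Build the retrieval URL for a document and pull the human-readable
# narrative out of the returned Bundle. The sample response below stands
# in for a real server's JSON.
def document_url(base_url, composition_id):
    return f"{base_url}/Composition/{composition_id}/$document"

def narrative_of(bundle):
    """Return the XHTML narrative of the bundle's Composition, if any."""
    for entry in bundle.get("entry", []):
        resource = entry.get("resource", {})
        if resource.get("resourceType") == "Composition":
            return resource.get("text", {}).get("div")
    return None

url = document_url("https://fhir.example.org/baseR4", "123")
# In practice: GET url with header Accept: application/fhir+json
sample_bundle = {"resourceType": "Bundle", "type": "document",
                 "entry": [{"resource": {"resourceType": "Composition",
                                         "text": {"div": "<div>Example note</div>"}}}]}
narrative = narrative_of(sample_bundle)
```

The extracted narrative (or the full bundle) is what an XSL stylesheet would then render for a human reader.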

Clinical Reasoning Track
The Clinical Reasoning track sat at the intersection of DHIT’s core competencies in quality measures and interoperability. Participants set out to integrate FHIR with clinical quality measures, focusing on the CMS-based electronic CQMs. DHIT specifically focused on testing against the latest versions of CMS 124, 125, and 130.

Our team’s preparation going into the Connectathon was an essential prerequisite for success. In the lead up to the event, this meant taking FHIR-based CQM data and converting to Quality Data Model (QDM) format to enable calculation. This is certainly not the only approach our team will be adopting as FHIR-based CQMs evolve. But as long as the Quality Data Model remains relevant, it provides a direct link between the measure logic and the underlying FHIR data.

This intermediary requires that QDM-based concepts such as ‘Encounter, Performed’ (which do not exist in FHIR) be mapped to FHIR-standardized data. Our approach has been to convert the data coming in:

FHIR server with FHIR data --> CQMsolution data format --> QRDA-I/QRDA-III/FHIR MeasureReport

Data can be gathered from FHIR and filtered by hospital, clinic, clinician, etc, for calculation by CQMsolution’s engine.
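
A hedged sketch of the first hop in that pipeline – flattening a FHIR Encounter into an ‘Encounter, Performed’-style record. The QDM-side field names are illustrative, not CQMsolution’s actual schema:

```python
# Flatten a FHIR Encounter into a QDM-style 'Encounter, Performed' record
# for a QDM-based calculation engine. Field names are illustrative.
def to_encounter_performed(fhir_encounter):
    period = fhir_encounter.get("period", {})
    coding = (fhir_encounter.get("type") or [{}])[0].get("coding", [{}])[0]
    return {
        "qdmCategory": "Encounter, Performed",
        "code": coding.get("code"),
        "codeSystem": coding.get("system"),
        "startTime": period.get("start"),
        "stopTime": period.get("end"),
    }

encounter = {
    "resourceType": "Encounter",
    "type": [{"coding": [{"system": "http://www.ama-assn.org/go/cpt",
                          "code": "99213"}]}],
    "period": {"start": "2018-01-05T09:00:00Z", "end": "2018-01-05T09:30:00Z"},
}
record = to_encounter_performed(encounter)
```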

FHIR’s proposed Bulk Data extraction method, for use with large datasets, has great potential to expand quality measure interoperability. When interacting with FHIR servers that do not perform calculation, our CQMsolution application would make use of this standardized bulk transfer method on a specified group of patients as dictated by the client and receive a ready status when the bulk export is available for calculation and analysis.
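
The asynchronous pattern behind that flow can be sketched as follows (the server URL is hypothetical and the status endpoint is simulated rather than called over the network; per the Bulk Data proposal, the kick-off request returns a status URL that the client polls until the export manifest is ready):

```python
# Sketch of the Bulk Data flow: kick off a group export, then poll the
# status endpoint until the NDJSON manifest is available.
import time

def kickoff_url(base_url, group_id):
    return f"{base_url}/Group/{group_id}/$export?_type=Patient,Encounter"

def poll_until_ready(fetch_status, interval=1.0, max_tries=60):
    """fetch_status() -> (http_status, body); 202 means still in progress."""
    for _ in range(max_tries):
        status, body = fetch_status()
        if status == 200:
            return body          # manifest listing NDJSON file URLs
        time.sleep(interval)
    raise TimeoutError("bulk export did not complete in time")

# Simulated status endpoint: in-progress once, then done.
responses = iter([
    (202, None),
    (200, {"output": [{"type": "Patient",
                       "url": "https://fhir.example.org/output/Patient.ndjson"}]}),
])
manifest = poll_until_ready(lambda: next(responses), interval=0)
```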

During the Connectathon, we performed calculation and manual analysis to compare our results against synthetic FHIR-based patients on the reference server. The exercises were largely focused on proof of concept and tracking data from point to point. The next steps in FHIR CQMs will likely involve more validation of calculations and a look at more exotic data types, such as negated elements or embedded events.

Our team is eager to identify use cases for data required by Quality Measures across the spectrum.

Looking ahead
With another Connectathon come and gone, there’s still plenty to unpack. We’re looking ahead to wider adoption of DSTU4 and to the emergence of bulk data processing, while devoting current efforts to resolving the very real challenges facing the community at present. As consensus builds, implementers will be able to make fewer compromises between maintaining interoperability and steering toward the standard.

Keep in touch and check out our website for more information on where we’re going with FHIR.

Tuesday, August 28, 2018

DHIT Goes to Washington: Behind the Scenes at the ONC #InteropForum

Credit: Max Pixel.
The Office of the National Coordinator for Health IT (ONC) hosted the 2nd Interoperability Forum from August 6th to 8th, 2018 in Washington, DC.

The event was an interdisciplinary meeting-of-minds on interoperability, with plenary sessions and break-out, topic-based panel discussions. Dynamic Health IT staff participated in all three days, with our VP of Development, Raychelle Fernandez, giving a panel talk on the second day of the event and our team taking a deep dive into the collaborative atmosphere throughout.

The event featured a variety of “tracks” to facilitate focused discussion and information sharing. These centered on application programming interfaces (APIs), clinician experience, patient matching, security and other topics. The tracks were overseen by ONC and other industry experts, allowing a direct line to those shaping policy in this sphere.

The final day of the Forum consisted of recap and presentation from each track lead.

Hot Topics
One big-picture issue discussed at the Forum was the future ONC role in advancing interoperability. With the recent pivot of the Advancing Care Information (ACI) program to “Promoting Interoperability,” it’s clear how central this is to both CMS and ONC’s missions, but what will be the levers for real change? Everyone has their thoughts, but much of the discussion will come down to picking winners among the available standards and championing their success.

In any contemporary discussion of interoperability, you’re likely to hear about FHIR, Blockchain, Patient Engagement, data blocking and the perceived roadblocks in the leading standards that inhibit broader implementation. The Forum was no different, but what was eminently useful about the event is the breadth of perspectives – the attendees really attacked the issues from all sides.

The discussion of Patient Engagement went well beyond Portals, into newer methods of delivery and patient involvement. There was a discussion of current EMR/bolt-on software development, punctuated by remarks from the panel on how patients will (or will not) benefit from these trends. As Mark Scrimshire (@ekivemark) mentioned on Twitter, in order to start better joining data together “we have got to start involving patients in our efforts to resolve patient matching challenges.” Blue Button 2.0, a patient-facing, FHIR-based API and a treasure trove of CMS data, is another major channel for engagement.

In the halls of application development, it can be difficult to get a direct perspective on the burden on the patient and the feasibility of tasks such as patient matching without costly studies of uncertain value. And patient users are often at least a degree removed from developers, who are not on the ground as service providers.

Clinician perspectives
As Dr. Steve Lane put it, "If you're not satisfying the needs of the clinician you are missing the mark."

For one specific example of where data requirements and provider workflows intersect, there was much discussion about how clinicians document when something is NOT done. While it’s true you can’t prove a negative, it’s important to understand why another clinician did not perform a task and to get that info in a clinical note. This concept of 'Reason not done' data is required for eCQMs, but currently isn't represented in CCDA.
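
For illustration, here is one way ‘reason not done’ can be carried in FHIR (shown in the style of the draft R4 Procedure resource, where status can be "not-done" with an accompanying statusReason; the values below are placeholders):

```python
# Illustrative only: a Procedure explicitly recorded as NOT done,
# with the reason captured alongside it.
procedure_not_done = {
    "resourceType": "Procedure",
    "status": "not-done",
    "statusReason": {"text": "Patient refused"},
    "code": {"text": "Screening colonoscopy"},
}

def reason_not_done(procedure):
    """Return the documented reason when a procedure was not performed."""
    if procedure.get("status") != "not-done":
        return None
    return procedure.get("statusReason", {}).get("text")

reason = reason_not_done(procedure_not_done)
```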

Where there’s Interop, there’s FHIR
FHIR is always near the top of the marquee for any interoperability event. But like many ballyhooed technologies, it is still largely opaque to its end users – both patients and clinicians.
There were a number of other discussions that entered into practical applications of interoperability, including an implementation of the Bulk FHIR API. Our team previewed items for discussion at the upcoming FHIR Connectathon and Roundtable, including CCDA on FHIR, our Dynamic FHIR API and the Health Lock-It Mobile App. Having certified and refined our FHIR API, we are afforded time to participate in the ongoing CCDA-FHIR mapping discussion and pivot to new challenges in the FHIR orbit. At the Connectathon, our focus will be on CQMs and FHIR, with special attention paid to the Clinical Reasoning track.
Apple has made its presence known at recent interoperability and FHIR events, and wherever they go, they have a tendency to move markets. One way in which this directly affects DHIT and other development shops is the need to support FHIR DSTU 2 to be in the HealthKit ecosystem.

DHIT: Our Implementer Story
Raychelle Fernandez speaking at the Implementer's Story panel.
Raychelle Fernandez, DHIT’s VP of Development, participated in the Implementer's Story panel and shared insights related to our development of FHIR resources, ‘FHIR on the Fly’, API, Mobile application development and relevant tools.
Raychelle also discussed integration with a wide range of EMRs and the need for the U.S. Core Data for Interoperability (USCDI) Task Force to expand data elements and ensure proper use cases exist to minimize the burden on implementers. For example, we should not require specialties like Optometrists, Orthopedics, Podiatrists, or Chiropractors to capture Immunizations where there is no clinical relevance.

These are exciting and fast-moving times in our field. Stay tuned for more.

Tuesday, May 15, 2018

2015 Certification: 6 Things to Know

Steve Jobs famously remarked that users “don’t know what they want until you show it to them.” This is often true in the software development world as a whole, while in Healthcare IT much of what we do is essentially client-driven.

But what about when a feature is neither? Such is the case with some of the requirements found in ONC 2015 Edition Certification. When faced with 2015 Edition Certification, EMR developers have a lot of questions, starting with “Why should we do this in the first place?”

Drawing on experience from our own certification and that of our clients, we’ll address some of the most pressing concerns in this post.

1. 2015 Certification software has become increasingly compulsory

At its inception, EMR developers fairly asked, given limited time and resources, whether there was any immediate reason to broadly adopt 2015 Certification criteria. It’s been essential to keep current on clinical quality measures in order to report to CMS programs (QPP, HQR, Joint Commission and CPC+), but Certification criteria as a whole has become more relevant with time.

Part of this is catching up with early adopters for competitive and marketing reasons. MIPS requirements for ambulatory providers have also been a driver - certified electronic health record technology is required for participation in the Advancing Care Information category of the QPP and only a 2015 Edition certification that includes automated measure calculation will enable reporting on ACI measures past the “Transition” phase.

2. Self-declaration takes some of the pressure off
Perhaps the most important thing to note about self-declaration is that the technical requirements for “self-declare” measures have not been eased. And there is a good bit of documentation required to prove the testing you have conducted independently.  However, the inclusion of a wide swath of self-declaration (non-live testing) measures has eased some of the burden for developers. The stakes and costs are lower now that you can test iteratively and do not have to schedule extra live testing (and potential re-testing) with your proctor. Keep in mind also that your proctor can ask at any time to review the self-declaration criteria.

3. Building an API for Patient Engagement means knowing your endpoints
If you know you’re going to be certifying the 2015 Edition “API” measures (g7 – g9), you’ll need to decide on technologies for delivering clinical data resources and authenticating users. We recommend FHIR and OAuth, respectively. There are pros and cons for both, but our decision was based on which technologies are best-positioned for where Health IT interoperability is headed.
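
As a rough sketch of how the two fit together – URLs and credentials are placeholders, and a real patient-facing deployment would typically use a SMART on FHIR authorization flow rather than raw client credentials:

```python
# Shape of an OAuth token request followed by an authenticated FHIR read.
# These helpers only build the request payloads; the HTTP calls themselves
# are left to whatever client library you use.
def token_request(token_url, client_id, client_secret):
    """OAuth 2.0 client-credentials token request body (illustrative)."""
    return {
        "url": token_url,
        "data": {"grant_type": "client_credentials",
                 "client_id": client_id,
                 "client_secret": client_secret,
                 "scope": "patient/*.read"},
    }

def fhir_request(base_url, access_token, patient_id):
    """Authenticated FHIR read for a single patient."""
    return {
        "url": f"{base_url}/Patient/{patient_id}",
        "headers": {"Authorization": f"Bearer {access_token}",
                    "Accept": "application/fhir+json"},
    }

tok = token_request("https://auth.example.org/token", "my-app", "secret")
req = fhir_request("https://fhir.example.org/baseR4", "abc123", "42")
```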

It’s also worth exploring how patient-accessible APIs are going to work in the wild. The 2015 Certification measure provides a pathway to certifying that you can make XML/JSON available per patient and filterable by common clinical dataset sections and date. But it doesn’t connect the dots between accessing the raw resources from the API and getting them into a consolidated location that is usable by a non-technical patient user. For that, you’ll need to consider the extent to which you’ll take that leap into Patient Health Record development – or tailor your solution for compatibility with big players in this space such as Apple (which is, for now at least, pursuing a FHIR-based mobile record).

4. (b)(1) Transition of Care is a many-layered measure
The (b)(1) measure is proof that a world of functionality can lurk in a single sentence. In addition to sending and receiving a variety of transition of care documents through your chosen protocol(s), b1 also requires you to:
  • Detect valid and invalid ToC/referral summaries and provide an accounting of errors
  • Display a human-readable C-CDA for BOTH r1.1 and r2.1 CCDA
  • For both r1.1 and r2.1, allow your users to display only the data within a particular C-CDA section, set a preference for the display order and set the initial number of sections to be displayed.
5. (b)(6) Data Export is about more than just CCDAs
The 2015 Edition Data Export measure (b6) shares a similarly ambitious goal with the API: that all patient data can be made portable and requested from an EMR at any given time. For data export, not only are you asked to pull out patient data in CCDA form, but you need to be able to slice the patient data in several different ways, exporting either in real-time or scheduled ahead.

The different permutations for exporting under this measure are often a source of confusion, so here are the discrete test cases to help make things more intelligible:
  • On-demand (export runs immediately):
    • Request all patients from a specific start date and time until the present
    • Request all patients from a specific start date and end date, with the end date occurring prior to the present
    • Request a subset of patients from a specific start date and time until the present
    • Request a subset of patients from a specific start date and end date, with the end date occurring prior to the present
  • Scheduled (configured for a future date-time):
    • Relative (a recurring scheduled report)
      • Request all patients based upon a relative date and time from the date range in the data (e.g., generate a set of export summaries from the prior month on the first of every month at 1:00 a.m.)
      • Request a subset of patients based upon a relative date and time from the date range in the data
    • Specific (a one-time scheduled report)
      • Request all patients based upon a specific date from the entered start and end dates and times (e.g., generate a set of export summaries with a date range between 01/01/2015 and 03/31/2015 on 04/01/2015 at 1:00 a.m.).
      • Request a subset of patients based on upon a specific date from the entered start and end dates and times
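The date-range arithmetic behind those permutations can be sketched like this (illustrative only, not CQMsolution's implementation):

```python
# On-demand ranges end either "now" or at a supplied end date; a relative
# schedule computes its range at run time, e.g. the prior calendar month.
from datetime import date, datetime, timedelta

def on_demand_range(start, end=None, now=None):
    """Start date/time until either a supplied end or the present."""
    return (start, end or now or datetime.now())

def prior_month_range(run_date):
    """Relative schedule: the full calendar month before run_date."""
    first_of_this_month = run_date.replace(day=1)
    last_of_prior = first_of_this_month - timedelta(days=1)
    return (last_of_prior.replace(day=1), last_of_prior)

# A job run on 04/01/2015 covering the prior month yields 03/01-03/31.
start, end = prior_month_range(date(2015, 4, 1))
```
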
6. Automated measure calculations and usability testing involve a lot of data-gathering 
Automated measure calculations are based on an expansive spreadsheet of test cases furnished by ONC. These cases ensure that your measure calculations follow the expected measure logic faithfully. There’s really no corner-cutting on these, but one limiting factor is the measures you are certifying overall. If, for instance, you aren’t certifying ePrescribing, you will not be on the hook for those calculations. Also, if you know the programs and stages of Medicare MU/Promoting Interoperability, Medicaid MU and ACI that apply to your users, you can home in on just the calculations that are relevant.

For usability testing, you will be in good shape by modeling your study and write-up after an established usability process approved by ONC. Your proctor should provide you with a sample list of functions subject to testing. The estimated time to conduct the study and complete the write-up is about three weeks, but this will be contingent on difficulty of recruitment, your own test design and post-test editorial/graphic design considerations for the document itself. The results of the testing will be posted on CHPL.

For vendors seeking support on 2015 Edition measures, check out our site to see how our bolt-on certification solutions can fit into your certification plan.

Copyright © Dynamic Health IT, Inc. 2018. We specialize in ONC-certified solutions, CQMs and system interoperability via standards such as HL7®, CDA®, CCD and FHIR® using our flagship products: CQMsolution, ConnectEHR, ConnectEHR Portal, DynamicFHIR and The Interface Engine (TIE).

Subscribe to our newsletter and mailing list here.

Monday, April 16, 2018

Quality Measure Submission: A Brief Review, A Look Ahead

For most, the 2017 submission period for eCQMs is over. Not only did we survive, we thrived. Dynamic Health IT worked with clients submitting measures to CMS for the Hospital Quality Reporting (HQR) program and Quality Payment Program (QPP) to improve timelines for submission as compared to 2016.

On the Ambulatory side, this was everyone’s first trip through MIPS submission and, in light of that fact alone, the process was a great success. DHIT worked with clients to submit both individual and group data, rolling out some new validation and user experience enhancements to ensure continuity between the submission program and the output.

We assisted clients in file validation using a series of newly-developed APIs, received files securely for validation via a cloud-based submission server and assisted our clients in both data troubleshooting and making an informed decision regarding submission type by comparing measure outcomes.

Achieving successful submission doesn’t mean we haven’t learned something along the way – quite to the contrary – so we wanted to share a few lessons learned in this space. CQMs have become a perpetual development and submission cycle, which means we can’t pause long before looking ahead.

Lessons from 2017 submission
The 2017 submission cycle was highly educational, particularly on the Ambulatory side (QPP/MIPS submission).  Here are a few lessons to carry into 2018:
  1. Make sure to choose your most clinically-relevant measures: Reaching consensus in your organization about measure selection prior to data review will eliminate inconvenient reversals later in the process. Running an initial report with all available measures for the previous and/or current years can aid in this process (CQMsolution excludes all non-MIPS EP measures from QRDA-III output specific to MIPS). 
  2. Review data early and often: The most time-consuming aspect of eCQM reporting is making sure data is complete, accurate and does not trigger submission errors. To ease the burden, we have upgraded our specs, error handling and validation options to get client data into shape. In our CQM specs, we want you to know exactly what has changed from the previous submission period so you can move quickly from stored procedure changes to measure performance review. A quarterly submission option is available on the Inpatient side to facilitate incremental quality checks.
  3. Weigh your options: With CQMsolution, you can run reports using a variety of reporting periods, measures and outputs to compare ahead of time and choose your best performance. The MIPS program enables providers to submit with their group and/or as an individual, taking the best performance. The key here, as always, is giving yourself the time for this step.
  4. Know your core data elements: We have ramped up validation options to catch warnings and errors as soon as a report is complete, but you can also get a lead on these errors by checking a few important values and identifiers:
    • General (across programs): Check to make sure you have mapped essential rows for identifying patients, clinicians and encounters
    • HQR: Make sure you know your hospital’s TIN, CCN, CMS EHR Certification Number and, if available, the Medicare Beneficiary Identifier (MBI) for your patient.
    • QPP Group: Make sure your practice is providing a TIN (CQMsolution will take it in your data or you can provide on the UI) and sending all data for the entire group practice of clinicians under that TIN (whether virtual or not)
    • QPP Individual: Make sure you provide a TIN and a single NPI and make sure your data is filtered by a unique identifier (most commonly, NPI)
  5. Ensure quick turnaround on pre-submission validation: When your files are ready for pre-submission validation, we will work with you on the option that makes the most sense, including available APIs, DHIT’s Data Submission Vendor services, and direct submission to and feedback from the MIPS program.
  6. Be prepared for Value Set and Measure changes: The change to Inpatient value sets for Q4 was highly disruptive, and while CMS is working to avoid similar surprises, there’s no guarantee there won’t be mid-stream changes afoot in 2018.
Next Steps
It’s a relief to get submission in on time, but quality measure submission now allows for less down time than ever. In addition to getting ready for HQR and QPP 2018, here are a few things to consider:
  • Hardship exemption deadlines: Some providers affected by disasters in 2017 have deadlines extended (and now looming)
  • Joint Commission: If you are submitting Inpatient eCQMs for the ORYX program, the deadline was extended until June 29, 2018. DHIT is an approved vendor for the program.
  • Medicaid Submission: Do you have any clients submitting to states? CQMsolution offers support for Medicaid Submission for all States
  • MIPS feedback: MIPS Preliminary Feedback is now available. If you submitted data through the Quality Payment Program website, you are now able to review your preliminary performance feedback data. 
  • Hybrid Measures: If you’re a hospital submitting HQR, you can submit this optional measure to QualityNet to assist in risk-adjustment
  • Get in touch with your DSV or Registry early: Along with offering our CQMsolution full-service quality measure application, we serve as a Data Submission Vendor (DSV) and MIPS Registry.
We’d love to talk to you about data submission that fits your schedule and show you our expanded roster of measures, new error validation processes and submission management/archiving.

Tuesday, November 7, 2017

DHIT participated in the HL7 Digital Quality Summit November 1st and 2nd in Washington, DC.  The event was jointly hosted by HL7 and NCQA and provided a unique opportunity to discuss overlapping issues between quality tracking and interoperability.  The interactive sessions were an excellent forum for peer networking, discussing challenges and showing off the latest solutions. 

The hot discussion topic at the meeting was the CMS announcement that starting with the 2019 reporting period, eCQMs would be transitioned from the current HQMF XML modeling language to the new Clinical Quality Language (CQL) standard.  CQL gives quality measure authors much more power and flexibility in creating precise definitions of quality measures that are human readable yet structured enough for processing a query electronically.  CQL replaces the logic expressions currently defined in the Quality Data Model (QDM). Going forward, QDM will include only the conceptual model for defining data elements (the data model).

Thursday, February 9, 2017

FHIR and CCDA Implementation-a-thon 4: Care Plans, Root IDs and much more

Last month, Dynamic Health IT was in attendance for another meeting of minds hosted by HL7 International down in San Antonio, TX. While these events around CCDA and FHIR have become a regular part of our travel schedule, there's nothing routine about taking a deep dive into healthcare standards. Each trip provides an opportunity to combine ground-level development with high-level policy-making, up to and including interactions with HL7 and ONC policy-makers.

Coding away.
The ultimate goal, as always, is healthcare data interoperability. San Antonio, with its backdrop of canals, struck a fitting visual metaphor. As with man-made waterways, engineering connectivity in healthcare presents challenges at every turn.

CCDA Implementation-a-Thon 4

San Antonio hosted the fourth-ever CCDA Implementation-a-thon. The scope of CCDA v2.1 - and its corresponding implications for 2015 Edition Certification - has made for wide-ranging discussions at each of these events. We'll focus on just a few highlights.

There was considerable interest in clarifying the links between concerns, goals and intervention. A source of confusion is that, as of this writing, there is an issue with the ONC CCDA Scorecard not scoring Care Plan documents correctly.

For health concerns, there were a number of clarifications that proved helpful. Concerns expressed by a patient need not be collapsed into a code and, by their nature, will often need narrative:
Health concerns may be coded, but may need to be represented in narrative form, as there are likely to be terminology system gaps.
On the topic of goals, there was discussion around differentiating between patient goals and provider goals. This distinction may not always be captured by codesets, but where possible it will be defined by the author concept. Goals that have both a patient and a provider author are coded as shared or negotiated goals.

The concept of document-level authorship was conceptually challenging in general. Vendors can generate and recognize their own document root IDs, but in many cases recognition extends only within a single vendor’s system.

The CCDA event also featured a presentation by the Value Set Authority Center (VSAC). VSAC offers a Support Center to promote closer collaboration with value set implementers.

For those in the midst of 2015 Edition Certification, there was a discussion of Care Plan certification test data used in 170.315 (b)(9) and 170.315(e)(1).

FHIR Connectathon 14

FHIR itself is very accessible as far as resources and the REST API are concerned. But getting a broader understanding of everything FHIR touches and how it behaves - the goal for any given Connectathon - can be more daunting. For the 14th Connectathon, our focus was on the CCDA on FHIR track.

Over the course of the weekend, we were able to get a better understanding of the current status and purpose of CCDAs on FHIR in general as well as the following key insights:
  • Difference between composition and bundles. 
  • How to reference a different server for resources used in our composition
  • How to style and render our bundle as retrieved from the server 
  • Search parameters and where to find them 
  • How CCDA on FHIR imposes additional constraints on existing resource types
The DHIT team was able to make it through all of the producer and some of the consumer scenarios, which we counted as a major success in two short days. Outside of our development track, we were able to gain quite a bit of information about strategies for integrating FHIR servers into our ecosystem, the viability of the Spark Production server, strategies for integrating OAuth, and more.

Organizationally, it is good to know for those coming into the FHIR fold that Work Group Meetings are where a lot of the development work on FHIR happens. Numerous work groups will be considering FHIR change proposals, working on FHIR profiles and resources and debating other aspects of FHIR implementation.

As well, there will be meetings of the FHIR Governance Board and FHIR Management Group discussing policies relating to FHIR. Within this framework, attendees and group members discuss items of interest such as tooling, new domains, particular technical issues, etc.

We hope to see you at the next stop!

Tuesday, December 6, 2016

New Era of CQMs, Part I: Quality Measures in The Age of MIPS

The first performance year for the CMS Merit-based Incentive Payment System (MIPS) begins on January 1 of next year, yet much of the healthcare world is still in the dark about large portions of the program - or even unaware of its existence entirely.

Perhaps one of the most misunderstood aspects of the program: MIPS does not apply to Medicaid Meaningful Use or eligible hospital Meaningful Use (MU) programs.

With MIPS beginning in earnest and ramping up over the next few years, we wanted to provide a series of plain-language posts and address how it will affect your use of clinical quality measures.

Our goal as the program unfolds is to grow our CQMsolution software to support as many measures as possible.

MACRA sets up a new system of quality-based reimbursement for clinicians called the Quality Payment Program.  Within this new model, there are two pathways to select:
  • The Merit-based Incentive Payment System (MIPS)
  • Advanced Alternative Payment Models (APMs)
For this post, we will focus on quality assessment within the context of MIPS. Multiple quality reporting programs are folded into MIPS:
  • Physician Quality Reporting System (PQRS)
  • Value-Based Payment Modifier (VBM)
  • Medicare Electronic Health Records (EHR) Incentive Program (ie, Meaningful Use)
As with the current PQRS program, multiple clinicians can participate in MIPS as an individual or a group.

Payment adjustment

In determining the payment adjustment based on MIPS, clinicians will receive a composite score weighting four different categories of performance:
  • Quality
  • Resource use
  • Clinical practice improvement activities
  • Advancing care information 

Quality Measures under MIPS
Quality receives the highest weight under MIPS. To determine the quality score, the program has taken PQRS and the value-based modifier, mashed them up and provided a menu of quality measures that will determine reimbursement.

MIPS-eligible clinicians and groups can select their measures from either the list of all MIPS measures or a subset of specialty-specific measures as identified by CMS. Unlike the current requirements under PQRS, clinicians will not be required to report a "cross-cutting measure." The program has been positioned to allow clinicians to select measures that are the most relevant to their practice.

In the transition from the current state of quality reporting (PQRS/VBM/MU) to MIPS, there are a few other key points to bear in mind:
  • Registry, EHR and QCDR reporting currently requires 9 measures across three quality domains, but the new requirement is 6 measures
  • Measures can now have any combination of NQF quality domains, though the 6 selected must include an outcome measure (as opposed to strictly process, for example)
  • MAV process changed (more on this in future posts)
  • PQRS registry measures group method is eliminated
  • Registry and QCDR reporting requires meeting a "data completeness" standard: 50% of patients in the denominator
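The 50% data-completeness threshold noted above amounts to a simple ratio check. A minimal sketch, with function and parameter names of our own invention:

```python
def meets_data_completeness(reported_patients, denominator_patients,
                            threshold=0.50):
    """Return True if at least `threshold` of the measure's
    denominator patients were actually reported on."""
    if denominator_patients == 0:
        return False  # no denominator population to report against
    return reported_patients / denominator_patients >= threshold

# A registry reporting 120 of a 200-patient denominator meets 50%.
print(meets_data_completeness(120, 200))  # True
print(meets_data_completeness(80, 200))   # False
```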
There are more complexities to explore - and changes and clarifications in the coming months, as always. We look forward to staying on top of these and simplifying the process for each of our clients.

Friday, October 21, 2016

FHIR works: Notes from Baltimore Plenary Meeting and Virginia CCDA 2.1 Implementationathon

We say this every time we attend a meetup, but it remains true: interest in HL7 interoperability standards continues to grow remarkably. As FHIR in particular matures, we see a proliferation of attendees, ballot comments and general buzz. As Grahame Grieve mentioned over on his blog, the most recent meetup was most likely HL7's largest meeting to date.
Baltimore at night.

Members of the DHIT team traversed the DMV (that's DC-Maryland-Virginia, in Beltway-speak) last month, heading to Baltimore for the annual plenary meeting and Arlington, VA, for the C-CDA Implementation-A-Thon.

News on FHIR
As the FHIR Chief himself, Grahame Grieve, mentions over at his blog, there were some major headlines at the very well-attended FHIR plenary event:
  • FHIR Release 3 is slated for publication at the end of this year.
  • New communities are cropping up, including from medical disciplines that hadn't previously shown up on the FHIR scene
  • The FHIR Foundation will continue to be a key player, supporting the implementation process of the standard
  • The site registry.fhir.org will go live soon
  • There was discussion of whether to support logical references; in short, a "URL-based view of the world," as Grahame puts it, may be incomplete.
As Grahame also mentioned, the "most significant single decision" made at the plenary was to take the specification previously known as “DAF-core” and rename it the US Realm Implementation Guide. That may sound like inside baseball, but it's another symbolic leap in the maturity of the standard.

From the DHIT standpoint, we are well underway in developing features in our flagship interoperability application, ConnectEHR, and across our product line to support EMR clients, including the development of testing and production FHIR servers. Our overriding goal is for our clients to move forward in interoperability as they meet the latest edition of ONC Certification Standards (2015 Edition).

We anticipate that FHIR may one day become an explicitly mandated standard and, as it stands, it is a boon not only to interoperability but also to meeting Meaningful Use in Stage 3 and beyond.

We participated in the "CCDA on FHIR" track as both document creator and document consumer, testing our implementation against multiple servers (including those of other participants as well as the reference servers from Grahame Grieve and Furore). Our coding was done in C# using the fhir-net-api provided by fellow FHIR Chief Ewout Kramer.

HL7 hosted its third C-CDA Implementation-A-Thon last week in Arlington, VA. The DHIT team kept up its perfect attendance, convening with other CCDA developers and experts just outside DC.

In addition to the usual networking and educational opportunities afforded at HL7 Events, it's always interesting to chart the development progress of the industry as a whole. And the "real-world" scenarios provided at the event - creating and exchanging live data - are worth the trip alone.

As is typical of healthcare standards-based Connectathons, clinical scenarios are laid out for participants to navigate. In this case, the exercises related to the exchange of v2.1 documents, discharge summaries and electronic referrals. We're proud to report that all of the C-CDA homework scenarios were accomplished. 'A+' goes to the DHIT developers on hand.

Valueset OIDs
Valueset OIDs continue to be a point of some controversy. There was a presentation at the event providing background and information on the process of creating them. For the uninitiated: value sets for use in EMRs, CQMs, research and other contexts are created by professionals and organizations and submitted for approval to the National Library of Medicine (NLM), under its Unified Medical Language System (UMLS) arm. The code sets are validated and checked for duplicates. However, our development work has uncovered that some of the codes may be subject to duplication, and we've requested further information from NLM.
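The kind of duplicate checking involved can be sketched as a pass over value-set contents that flags any code appearing in more than one set. The OIDs and codes below are invented for illustration:

```python
from collections import defaultdict

def find_duplicate_codes(value_sets):
    """Map each (code_system, code) pair to the value-set OIDs that
    contain it, keeping only pairs that appear in more than one set."""
    seen = defaultdict(set)
    for oid, codes in value_sets.items():
        for system, code in codes:
            seen[(system, code)].add(oid)
    return {pair: oids for pair, oids in seen.items() if len(oids) > 1}

# Invented example: two value sets sharing one SNOMED code.
value_sets = {
    "1.2.3.4.1": {("SNOMED", "185349003")},
    "1.2.3.4.2": {("SNOMED", "185349003"), ("SNOMED", "270427003")},
}
dups = find_duplicate_codes(value_sets)
print(dups)  # the shared code maps to both OIDs
```

Real validation at NLM is of course far more involved, but overlap detection of this shape is the core of it.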

Lessons learned
We left with some takeaways on the process of generating a v2.1 C-CDA that we wanted to share with our audience:
  • Mood codes are often overlooked; developers should pay closer attention to their usage, even though it may complicate the data requested from a client
  • The C-CDA Scorecard is a very useful checkpoint in development
  • A lower Scorecard score doesn't mean the C-CDA will fail validation; a higher score simply indicates the document is closer to the expected standard
  • The display name should come from the code system; otherwise it will lower the score
  • Include narrative text for all sections and textual clinical notes
  • The task of categorizing results may tolerate multiple pathways; for example, CT scans can go to Procedures, Results or both
  • Allergies and problems should always have a time recorded
  • For the effectiveTime of immunizations, do not use low+high when moodCode=EVN
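To illustrate that last point, a minimal C-CDA XML sketch of an immunization entry: when the immunization actually occurred (moodCode="EVN"), effectiveTime carries a single point-in-time value rather than a low/high interval. The date and surrounding attributes here are invented for illustration:

```xml
<!-- Immunization that actually occurred: moodCode="EVN" -->
<substanceAdministration classCode="SBADM" moodCode="EVN" negationInd="false">
  <!-- Single administration date, not <low>/<high> bounds -->
  <effectiveTime value="20160915"/>
  ...
</substanceAdministration>
```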

Wednesday, October 12, 2016

NTT's Optimum and Dynamic Health IT Partner on Forward-Thinking Solution for CQMs

DHIT President Jeff Robbins addressing the
NTT Data Client Conference
After wrapping up our successful ONC certification testing for CQMsolution in early September, we headed to Newport Beach, CA, for the NTT Data Client Conference. Held annually, the conference offers clients of NTT Data products and services a wide range of educational sessions, networking opportunities and face-time with NTT DATA staff.

The event was a great opportunity to meet with implementers and users of CQMsolution. We were able to provide specific education on our application through the lens of NTT's Optimum clinical ecosystem. CQMsolution is developed as a universal quality measure solution, but context always matters, of course.

In keeping with the mission of the conference, we also spent time discussing some policy specifics to help NTT users prepare for changes in quality measurement. This included a glimpse into the future of MIPS/MACRA and Meaningful Use Stage 3.

We also expressed our confidence that our solution will continue to be among the first - if not the very first - to update with each successive release of CMS measures. Among other benefits, this allows maximal testing and educational opportunities in the lead-up to submission.

As with all DHIT clients, we seek to offer a full range of development, quality assurance, support and project management resources, tailored to the environmental needs of the specific implementation and user base. Our close collaboration with NTT has yielded a solution that allows their EMR team to focus on development and customer support, while we provide an effective and aggressively-supported tool to attack quality measures.

DHIT VP Raychelle Fernandez providing
clinical background for CQM calculation process.
By way of demonstration, DHIT gave a detailed presentation on CQMsolution and showed key elements of the software using a specific clinical use case: Ischemic/Hemorrhagic Stroke (via CMS measure 102v4). With our goal of guiding clients through CQMs from start to finish, we discussed not only the calculation and display of measures in CQMsolution, but the process of submission.

It's our hope that CQMsolution, like that Southern California weather, makes everything a little sunnier.

Monday, October 10, 2016

CQMsolution blazes trail as first 2015 Edition Certified CQM product

Dynamic Health IT is proud to announce that we're the first software developer to be certified for Clinical Quality Measures under the latest ONC Health IT Certification (2015 Edition).

But don't take our word for it: our listing on the ONC CHPL website is viewable here.

The process of certification testing gives our clients confidence that our product can support eligible clinicians and eligible hospitals in meeting CMS EHR Incentive Program objectives. We have developed the product with an eye on not only the current formulation of Meaningful Use, PQRS, IQR and other quality measurement programs, but the changes to come under MIPS/MACRA.

“Dynamic Health IT remains a trailblazer in clinical quality measures software development. We’re very proud to be the first vendor to certify for 2015 Edition Quality Measures with Cypress 3.0,” said Jeff Robbins, President of Dynamic Health IT.

Certification is a proud achievement, but also a way station to further development. DHIT continues to enhance our software to include bulk, automated practice and user adds, API access and a number of other new features. We hope to provide our clients not only quality measure compliance, but a transparent user interface that enables easy analysis.

CQMsolution 3.0 certification meets the following certification CQM-related criteria:
  • 170.315(c)(1) Clinical Quality Measures - Capture and Export
  • 170.315(c)(2) Clinical Quality Measures - Incorporate and Calculate
  • 170.315(c)(3) Clinical Quality Measures - Reporting
  • 170.315(c)(4) Clinical Quality Measures - Filter
CQMsolution Version 3.0 also includes new interface enhancements driven by 170.315(c)(4) - a brand-new module in the 2015 Edition - allowing users to filter report data on a number of demographic categories. In addition, we also certified our solution on:
  • 170.315(g)(4) Quality Management System
  • 170.315(d)(1) Authentication, access control, authorization
  • 170.315(d)(2) Auditable events and tamper-resistance
  • 170.315(d)(3) Audit report(s)
  • 170.315(d)(5) Automatic access time-out
The clinical quality measures to which CQMsolution has been certified include:
  • All 29 updated measures for eligible hospitals
  • All 64 updated measures for eligible professionals
  • All 64 aligned PQRS measures for EPs (additional PQRS measures can be supported)
This marks the fourth ONC-certified version of CQMsolution, the previous certification coming in conjunction with the release of Cypress 3.0 validation software.

Photo credit: MJ Boswell

Version 3.0 was certified by ICSA Labs, an Office of the National Coordinator-Authorized Certification Body (ONC-ACB) and is compliant in accordance with applicable criteria adopted by the Secretary of Health and Human Services (HHS).