Tuesday, May 15, 2018

2015 Certification: 6 Things to Know

Steve Jobs famously remarked that users “don’t know what they want until you show it to them.” This is often true in the software development world as a whole, though in Healthcare IT much of what we do is essentially client-driven.

But what about when a feature is neither? Such is the case with some of the requirements found in ONC 2015 Edition Certification. When faced with 2015 Edition Certification, EMR developers have a lot of questions, starting with “Why should we do this in the first place?”

Drawing on experience from our own certification and that of our clients, we’ll address some of the most pressing concerns in this post.

1. 2015 Certification has become increasingly compulsory

At its inception, EMR developers fairly asked, given limited time and resources, whether there was any immediate reason to broadly adopt 2015 Certification criteria. It’s been essential to keep current on clinical quality measures in order to report to CMS programs (QPP, HQR, Joint Commission and CPC+), but Certification criteria as a whole have become more relevant with time.

Part of this is catching up with early adopters for competitive and marketing reasons. MIPS requirements for ambulatory providers have also been a driver - certified electronic health record technology is required for participation in the Advancing Care Information category of the QPP and only a 2015 Edition certification that includes automated measure calculation will enable reporting on ACI measures past the “Transition” phase.

2. Self-declaration takes some of the pressure off
Perhaps the most important thing to note about self-declaration is that the technical requirements for “self-declare” measures have not been eased. And there is a good bit of documentation required to prove the testing you have conducted independently.  However, the inclusion of a wide swath of self-declaration (non-live testing) measures has eased some of the burden for developers. The stakes and costs are lower now that you can test iteratively and do not have to schedule extra live testing (and potential re-testing) with your proctor. Keep in mind also that your proctor can ask at any time to review the self-declaration criteria.

3. Building an API for Patient Engagement means knowing your endpoints
If you know you’re going to be certifying the 2015 Edition “API” measures (g7 – g9), you’ll need to decide on technologies for delivering clinical data resources and authenticating users. We recommend FHIR and OAuth, respectively. There are pros and cons for both, but our decision was based on which technologies are best-positioned for where Health IT interoperability is headed.
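As a rough illustration of how those two choices fit together, here is a minimal sketch of assembling an OAuth-secured FHIR request. The base URL, patient ID and token below are hypothetical placeholders, not a real endpoint.

```python
# Sketch: preparing an OAuth-protected FHIR Patient read.
# All identifiers and URLs here are hypothetical placeholders.

def build_fhir_request(base_url: str, patient_id: str, access_token: str) -> dict:
    """Assemble the URL and headers for a FHIR Patient read.

    FHIR servers conventionally expose resources at {base}/{type}/{id},
    and OAuth 2.0 bearer tokens travel in the Authorization header.
    """
    return {
        "url": f"{base_url}/Patient/{patient_id}",
        "headers": {
            "Authorization": f"Bearer {access_token}",
            "Accept": "application/fhir+json",  # request the JSON representation
        },
    }

req = build_fhir_request("https://ehr.example.com/fhir", "12345", "abc-token")
# req["url"] -> "https://ehr.example.com/fhir/Patient/12345"
```

The assembled request would then be sent with any HTTP client; the point is simply that FHIR carries the resources while OAuth carries the authorization.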

It’s also worth exploring how patient-accessible APIs are going to work in the wild. The 2015 Certification measure provides a pathway to certifying that you can make XML/JSON available per patient and filterable by common clinical dataset sections and date. But it doesn’t connect the dots between accessing the raw resources from the API and getting them into a consolidated location that is usable by a non-technical patient user. For that, you’ll need to consider the extent to which you’ll take the leap into Patient Health Record development – or tailor your solution for compatibility with big players in this space such as Apple (which is, for now at least, pursuing a FHIR-based mobile record).
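To make the section-and-date filtering concrete, here is a sketch of composing such a request. The query parameter names (`section`, `start`, `end`) are hypothetical; the actual syntax is defined by each vendor’s own API documentation.

```python
# Sketch: building a patient-data URL filtered by clinical dataset section
# and date range. Parameter names are illustrative assumptions.
from urllib.parse import urlencode

def filtered_data_url(base_url, patient_id, section=None, start=None, end=None):
    """Compose a query URL for one patient's data, optionally narrowed
    to a single section and/or an ISO 8601 date range."""
    params = {"patient": patient_id}
    if section:
        params["section"] = section   # e.g. "medications", "problems"
    if start:
        params["start"] = start
    if end:
        params["end"] = end
    return f"{base_url}/data?{urlencode(params)}"

url = filtered_data_url("https://ehr.example.com/api", "12345",
                        section="medications",
                        start="2017-01-01", end="2017-12-31")
```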

4. (b)(1) Transition of Care is a many-layered measure
The (b)(1) measure is proof that a world of functionality can lurk in a single sentence. In addition to sending and receiving a variety of transition of care documents through your chosen protocol(s), b1 also requires you to:
  • Detect valid and invalid ToC/referral summaries and provide an accounting of errors
  • Display a human-readable C-CDA for BOTH r1.1 and r2.1 CCDA
  • For both r1.1 and r2.1, allow your users to display only the data within a particular C-CDA section, set a preference for the display order and set the initial number of sections to be displayed.
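The detect-and-report requirement in the first bullet can be sketched as follows. This is a deliberately simplified illustration: real (b)(1) validation runs against the full CDA schema and C-CDA templates, and the required-section list here is an assumed subset.

```python
# Sketch: detecting an invalid ToC/referral summary and producing an
# accounting of errors. The section list is an illustrative subset only.
import xml.etree.ElementTree as ET

NS = {"cda": "urn:hl7-org:v3"}
REQUIRED_SECTIONS = {          # LOINC section codes (illustrative subset)
    "48765-2": "Allergies",
    "10160-0": "Medications",
    "11450-4": "Problems",
}

def check_toc_document(xml_text: str) -> list:
    """Return a list of error strings; an empty list means no errors found."""
    try:
        root = ET.fromstring(xml_text)
    except ET.ParseError as exc:
        return [f"Document is not well-formed XML: {exc}"]
    found = {code.get("code")
             for code in root.findall(".//cda:section/cda:code", NS)}
    return [f"Missing required section: {name} ({loinc})"
            for loinc, name in REQUIRED_SECTIONS.items()
            if loinc not in found]

errors = check_toc_document("<not-a-ccda/>")   # three "missing section" errors
```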
5. (b)(6) Data Export is about more than just CCDAs
The 2015 Edition Data Export measure (b6) shares a similarly ambitious goal with the API: that all patient data can be made portable and requested from an EMR at any given time. For data export, not only are you asked to pull out patient data in CCDA form, but you need to be able to slice the patient data in several different ways, exporting either in real-time or scheduled ahead.

The different permutations for exporting under this measure are often a source of confusion, so here are the discrete test cases to help make things more intelligible:
  • On-demand (export runs immediately):
    • Request all patients from a specific start date and time until the present
    • Request all patients from a specific start date and end date, with the end date occurring prior to the present
    • Request a subset of patients from a specific start date and time until the present
    • Request a subset of patients from a specific start date and end date, with the end date occurring prior to the present
  • Scheduled (configured for a future date-time):
    • Relative (a recurring scheduled report)
      • Request all patients based upon a relative date and time from the date range in the data (e.g., generate a set of export summaries from the prior month on the first of every month at 1:00 a.m.)
      • Request a subset of patients based upon a relative date and time from the date range in the data
    • Specific (a one-time scheduled report)
      • Request all patients based upon a specific date from the entered start and end dates and times (e.g., generate a set of export summaries with a date range between 01/01/2015 and 03/31/2015 on 04/01/2015 at 1:00 a.m.).
      • Request a subset of patients based upon a specific date from the entered start and end dates and times
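As a sketch of the date arithmetic behind a relative scheduled export (the “prior month on the first of every month” example above), assuming the scheduler itself is handled elsewhere:

```python
# Sketch: computing the window for a relative scheduled export.
# Scheduling (cron, a job queue, etc.) is out of scope here.
from datetime import date, timedelta

def prior_month_window(run_date: date) -> tuple:
    """Return (start, end) covering the calendar month before run_date."""
    end = run_date.replace(day=1) - timedelta(days=1)  # last day of prior month
    start = end.replace(day=1)                         # first day of prior month
    return start, end

start, end = prior_month_window(date(2015, 4, 1))
# start = date(2015, 3, 1), end = date(2015, 3, 31)
```

A job run on 04/01 at 1:00 a.m. would then export all (or the configured subset of) patients whose data falls between `start` and `end`.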
6. Automated measure calculations and usability testing involve a lot of data-gathering 
Automated measure calculations are based on an expansive spreadsheet of test cases furnished by ONC. These cases ensure that your measure calculations follow the expected measure logic faithfully. There’s really no corner-cutting on these, but one limiting factor is which measures you are certifying overall. If, for instance, you aren’t certifying ePrescribing, you will not be on the hook for those calculations. Also, if you know the programs and stages of Medicare MU/Promoting Interoperability, Medicaid MU and ACI that apply to your users, you can home in on just the calculations that are relevant.
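The general shape of an automated measure calculation – counting populations and deriving a performance rate – can be sketched as below. The patient fields and predicates are illustrative; real calculations must follow each measure’s specification and the ONC test-case spreadsheet.

```python
# Sketch: counting measure populations and deriving a performance rate.
# Field names and predicates are illustrative assumptions.
def measure_rate(patients, in_denominator, in_numerator, excluded):
    denom = [p for p in patients if in_denominator(p) and not excluded(p)]
    numer = [p for p in denom if in_numerator(p)]
    return {
        "denominator": len(denom),
        "numerator": len(numer),
        "performance_rate": round(len(numer) / len(denom) * 100, 1) if denom else None,
    }

patients = [{"id": 1, "eligible": True, "met": True,  "excl": False},
            {"id": 2, "eligible": True, "met": False, "excl": False},
            {"id": 3, "eligible": True, "met": True,  "excl": True}]
result = measure_rate(patients,
                      in_denominator=lambda p: p["eligible"],
                      in_numerator=lambda p: p["met"],
                      excluded=lambda p: p["excl"])
# result -> {"denominator": 2, "numerator": 1, "performance_rate": 50.0}
```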

For usability testing, you will be in good shape by modeling your study and write-up after an established usability process approved by ONC. Your proctor should provide you with a sample list of functions subject to testing. The estimated time to conduct the study and complete the write-up is about three weeks, but this will be contingent on difficulty of recruitment, your own test design and post-test editorial/graphic design considerations for the document itself. The results of the testing will be posted on CHPL.

For vendors seeking support on 2015 Edition measures, check out our site to see how our bolt-on certification solutions can fit into your certification plan.

Copyright © Dynamic Health IT, Inc. 2018. We specialize in ONC-certified solutions, CQMs and system interoperability via standards such as HL7®, CDA®, CCD and FHIR® using our flagship products - CQMsolution, ConnectEHR, ConnectEHR-Portal, DynamicFHIR and The Interface Engine (TIE)

Subscribe to our newsletter and mailing list here.

Monday, April 16, 2018

Quality Measure Submission: A Brief Review, A Look Ahead

For most, the 2017 submission period for eCQMs is over. Not only did we survive, we thrived. Dynamic Health IT worked with clients submitting measures to CMS for the Hospital Quality Reporting (HQR) program and Quality Payment Program (QPP) to improve timelines for submission as compared to 2016.

On the Ambulatory side, this was everyone’s first trip through MIPS submission and, in light of that fact alone, the process was a great success. DHIT worked with clients to submit both individual and group data, rolling out some new validation and user experience enhancements to ensure continuity between the submission program and the output.

We assisted clients in file validation using a series of newly-developed APIs, received files securely for validation via a cloud-based submission server and assisted our clients in both data troubleshooting and making an informed decision regarding submission type by comparing measure outcomes.

Achieving successful submission doesn’t mean we haven’t learned something along the way – quite to the contrary – so we wanted to share a few lessons learned in this space. CQMs have become a perpetual development and submission cycle, which means we can’t pause long before looking ahead.

Lessons from 2017 submission
The 2017 submission cycle was highly educational, particularly on the Ambulatory side (QPP/MIPS submission).  Here are a few lessons to carry into 2018:
  1. Make sure to choose your most clinically-relevant measures: Reaching consensus in your organization about measure selection prior to data review will eliminate inconvenient reversals later in the process. Running an initial report with all available measures for the previous and/or current years can aid in this process (CQMsolution excludes all non-MIPS EP measures from QRDA-III output specific to MIPS). 
  2. Review data early and often: The most time-consuming aspect of eCQM reporting is making sure data is complete, accurate and does not trigger submission errors. To ease the burden, we have upgraded our specs, error handling and validation options to get client data into shape. In our CQM specs, we want you to know exactly what has changed from the previous submission period so you can move quickly from stored procedure changes to measure performance review. A quarterly submission option is available on the Inpatient side to facilitate incremental quality checks.
  3. Weigh your options: With CQMsolution, you can run reports using a variety of reporting periods, measures and outputs to compare ahead of time and choose your best performance. The MIPS program enables providers to submit with their group and/or as an individual, taking the best performance. The key here, as always, is giving yourself the time for this step.
  4. Know your core data elements: We have ramped up validation options to catch warnings and errors as soon as a report is complete, but you can also get a lead on these errors by checking a few important values and identifiers:
    • General (across programs): Check to make sure you have mapped essential rows for identifying patients, clinicians and encounters
    • HQR: Make sure you know your hospital’s TIN, CCN, CMS EHR Certification Number and, if available, the Medicare Beneficiary Identifier (MBI) for your patient.
    • QPP Group: Make sure your practice is providing a TIN (CQMsolution will take it in your data or you can provide on the UI) and sending all data for the entire group practice of clinicians under that TIN (whether virtual or not)
    • QPP Individual: Make sure you provide a TIN and a single NPI and make sure your data is filtered by a unique identifier (most commonly, NPI)
  5. Ensure quick turnaround on pre-submission validation: When your files are ready for pre-submission validation, we will work with you on the option that makes the most sense, including available APIs, DHIT acting as your Data Submission Vendor, and direct submission to and feedback from the MIPS program.
  6. Be prepared for Value Set and Measure changes: The change to Inpatient value sets for Q4 was highly disruptive and, while CMS is working to avoid similar disruptions, there’s no guarantee there won’t be mid-stream changes in 2018.
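The identifier checks in point 4 can be sketched as a simple pre-submission validation. The field names and rules below are simplified assumptions for illustration, not CQMsolution’s actual validation logic.

```python
# Sketch: pre-submission checks on core identifiers (TIN for groups,
# a single NPI for individual submissions). Simplified assumptions only.
import re

def check_identifiers(submission_type: str, record: dict) -> list:
    """Flag missing or malformed core identifiers before submission."""
    errors = []
    if not re.fullmatch(r"\d{9}", record.get("tin") or ""):
        errors.append("TIN must be a 9-digit number")
    if submission_type == "individual" and len(record.get("npis") or []) != 1:
        errors.append("Individual submissions require exactly one NPI")
    return errors

problems = check_identifiers("individual", {"tin": "12345", "npis": []})
# problems -> both errors: bad TIN and no NPI
```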
Next Steps
It’s a relief to get submission in on time, but quality measure submission now allows for less down time than ever. In addition to getting ready for HQR and QPP 2018, here are a few things to consider:
  • Hardship exemption deadlines: Some providers affected by disasters in 2017 have deadlines extended (and now looming)
  • Joint Commission: If you are submitting Inpatient eCQMs for the ORYX program, the deadline was extended until June 29, 2018. DHIT is an approved vendor for the program.
  • Medicaid Submission: Do you have any clients submitting to states? CQMsolution offers support for Medicaid Submission for all States
  • MIPS feedback: MIPS Preliminary Feedback is now available. If you submitted data through the Quality Payment Program website, you are now able to review your preliminary performance feedback data. 
  • Hybrid Measures: If you’re a hospital submitting HQR, you can submit this optional measure to QualityNet to assist in risk-adjustment
  • Get in touch with your DSV or Registry early: Along with our CQMsolution full-service quality measure application, we serve as a Data Submission Vendor (DSV) and MIPS Registry.
We’d love to talk to you about data submission that fits your schedule and show you our expanded roster of measures, new error validation processes and submission management/archiving.

Tuesday, November 7, 2017

DHIT participated in the HL7 Digital Quality Summit November 1st and 2nd in Washington, DC.  The event was jointly hosted by HL7 and NCQA and provided a unique opportunity to discuss overlapping issues between quality tracking and interoperability.  The interactive sessions were an excellent forum for peer networking, discussing challenges and showing off the latest solutions. 

The hot discussion topic at the meeting was the CMS announcement that starting with the 2019 reporting period, eCQMs would be transitioned from the current HQMF XML modeling language to the new Clinical Quality Language (CQL) standard.  CQL gives quality measure authors much more power and flexibility in creating precise definitions of quality measures that are human readable yet structured enough for processing a query electronically.  CQL replaces the logic expressions currently defined in the Quality Data Model (QDM). Going forward, QDM will include only the conceptual model for defining data elements (the data model).

Thursday, February 9, 2017

FHIR and CCDA Implementation-a-thon 4: Care Plans, Root IDs and much more

Last month, Dynamic Health IT was in attendance for another meeting of minds hosted by HL7 International down in San Antonio, TX. While these events around CCDA and FHIR have become a regular part of our travel schedule, there's nothing routine about taking a deep dive into healthcare standards. Each trip provides an opportunity to combine ground-level development with high-level policy-making, up to and including interactions with HL7 and ONC policy-makers.

Coding away.
The ultimate goal, as always, is healthcare data interoperability. San Antonio, with its backdrop of canals, struck a fitting visual metaphor. As with man-made waterways, engineering connectivity in healthcare presents challenges at every turn.

CCDA Implementation-a-Thon 4

San Antonio hosted the fourth-ever CCDA Implementation-a-thon. The scope of CCDA v2.1 - and its corresponding implications for 2015 Edition Certification - have made for wide-ranging discussions at each of these events. We'll focus on just a few highlights.

There was considerable interest in clarifying the links between concerns, goals and intervention. A source of confusion is that, as of this writing, there is an issue with the ONC CCDA Scorecard not scoring Care Plan documents correctly.

For health concerns, there were a number of clarifications that proved helpful. Concerns expressed by a patient need not be collapsed into a code and, by nature, will often need narrative: health concerns may be coded, but they may also need to be represented in narrative form, as there are likely to be terminology system gaps.
On the topic of goals, there was discussion around differentiating between patient goals and provider goals. This fine distinction may not always be captured by codesets, but where possible it will be defined by the author concept. Goals that have both a patient and a provider author are coded as shared or negotiated goals.

Document-level authorship in general proved conceptually challenging. Vendors can generate and recognize their own document roots, but in many cases these may extend only within your own vendor system.

The CCDA event also featured a presentation by the Value Set Authority Center (VSAC). VSAC offers a Support Center to promote closer collaboration with value set implementers.

For those in the midst of 2015 Edition Certification, there was a discussion of Care Plan certification test data used in 170.315 (b)(9) and 170.315(e)(1).

FHIR Connectathon 14

FHIR itself is very accessible as far as resources and the REST API are concerned. But getting a broader understanding of everything FHIR touches and how it behaves - the goal for any given Connectathon - can be more daunting. For the 14th Connectathon, our focus was on the CCDA on FHIR track.

Over the course of the weekend, we were able to get a better understanding of the current status and purpose of CCDAs on FHIR in general as well as the following key insights:
  • Difference between composition and bundles. 
  • How to reference a different server for resources used in our composition
  • How to style and render our bundle as retrieved from the server 
  • Search parameters and where to find them 
  • How CCDA on FHIR imposes additional constraints on existing resource types
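On the first bullet, the structural difference can be sketched with plain JSON-style dictionaries: a Composition states what a document contains, while a document Bundle packages the Composition (as its first entry) together with the resources it references. The resource content below is a minimal, hypothetical fragment, not a complete conformant document.

```python
# Sketch: Composition vs. document Bundle. A real FHIR document carries
# more required elements; this shows only the structural relationship.
composition = {
    "resourceType": "Composition",
    "title": "Continuity of Care Document",
    "subject": {"reference": "Patient/12345"},   # hypothetical patient
}
patient = {"resourceType": "Patient", "id": "12345"}

document = {
    "resourceType": "Bundle",
    "type": "document",                   # marks this Bundle as a document
    "entry": [
        {"resource": composition},        # the Composition comes first
        {"resource": patient},            # ...followed by referenced resources
    ],
}
```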
The DHIT team was able to make it through all of the producer and some of the consumer scenarios, which we counted as a major success in two short days. Outside of our development track, we were able to gain quite a bit of information about strategies for integrating FHIR servers into our ecosystem, the viability of the Spark Production server, strategies for integrating OAuth, and more.

Organizationally, it is good to know for those coming into the FHIR fold that Work Group Meetings are where a lot of the development work on FHIR happens. Numerous work groups will be considering FHIR change proposals, working on FHIR profiles and resources and debating other aspects of FHIR implementation.

As well, there will be meetings of the FHIR Governance Board and FHIR Management Group discussing policies relating to FHIR. Within this framework, attendees and group members discuss items of interest such as tooling, new domains, particular technical issues, etc.

We hope to see you at the next stop!

Tuesday, December 6, 2016

New Era of CQMs, Part I: Quality Measures in The Age of MIPS

The first performance year for the CMS Merit-based Incentive Payment System (MIPS) begins on January 1 of next year, yet much of the healthcare world is still in the dark about large portions of the program. Or even unaware of its existence entirely.

Perhaps one of the most misunderstood aspects of the program: MIPS does not apply to Medicaid Meaningful Use or eligible hospital Meaningful Use (MU) programs.

With MIPS beginning in earnest and ramping up over the next few years, we wanted to provide a series of plain-language posts and address how it will affect your use of clinical quality measures.

Our goal as the program unfolds is to grow our CQMsolution software to support as many measures as possible.

MACRA sets up a new system of quality-based reimbursement for clinicians called the Quality Payment Program.  Within this new model, there are two pathways to select:
  • The Merit-based Incentive Payment System (MIPS)
  • Advanced Alternative Payment Models (APMs)
For this post, we will focus on quality assessment within the context of MIPS. Multiple quality reporting programs are folded into MIPS:
  • Physician Quality Reporting System (PQRS)
  • Value-Based Payment Modifier (VBM)
  • Medicare Electronic Health Records (EHR) Incentive Program (ie, Meaningful Use)
As with the current PQRS program, multiple clinicians can participate in MIPS as an individual or a group.

Payment adjustment

In determining the payment adjustment based on MIPS, clinicians will receive a composite score, weighting four different categories of performance:
  • Quality
  • Resource use
  • Clinical practice improvement activities
  • Advancing care information 

Quality Measures under MIPS
Quality receives the highest weight under MIPS. To determine the quality score, the program has taken PQRS and the value-based modifier, mashed them up and provided a menu of quality measures that will determine reimbursement.

MIPS-eligible clinicians and groups can select their measures from either the list of all MIPS measures or a subset of specialty-specific measures as identified by CMS. Unlike the current requirements under PQRS, clinicians will not be required to report a "cross-cutting measure." The program has been positioned to allow clinicians to select measures that are the most relevant to their practice.

In the transition from the current state of quality reporting (PQRS/VBM/MU) to MIPS, there are a few other key points to bear in mind:
  • Registry, EHR and QCDR reporting currently requires 9 measures across three quality domains, but the new requirement is 6 measures
  • Measures can now have any combination of NQF quality domains, though the 6 selected must include an outcome measure (as opposed to strictly process, for example)
  • MAV process changed (more on this in future posts)
  • PQRS registry measures group method is eliminated
  • Registry and QCDR reporting requires meeting a "data completeness" standard: 50% of patients in the denominator
There are more complexities to explore - and changes and clarifications in the coming months, as always. We look forward to staying on top of these and simplifying the process for each of our clients.

Friday, October 21, 2016

FHIR works: Notes from Baltimore Plenary Meeting and Virginia CCDA 2.1 Implementationathon

We say this every time we attend a meetup, but it remains true: interest in HL7 interoperability standards continues to grow remarkably. As FHIR in particular matures, we see a proliferation of attendees, ballot comments and general buzz. As Grahame Grieve mentioned over on the FHIR Director's blog, the most recent meetup was most likely HL7's largest meeting to date.
Baltimore at night.

Members of the DHIT team traversed the DMV (that's DC-Maryland-Virginia, in Beltway-speak) last month, heading to Baltimore for the annual plenary meeting and Arlington, VA, for the C-CDA Implementation-A-Thon.

News on FHIR
As the FHIR Chief himself, Grahame Grieve, mentions over at his blog, there were some major headlines at the very well-attended FHIR plenary event:
  • FHIR release 3 is slated for release at the end of this year.
  • New communities are cropping up, including from medical disciplines that hadn't previously shown up on the FHIR scene
  • The FHIR Foundation will continue to be a key player, supporting the "implementation process of standard"
  • The site fhir.registry.org will go live soon
  • Discussion of whether to support logical reference; in short, a "URL-based view of the world," as Grahame puts it, may be incomplete.
As Grahame also mentioned, the "most significant single decision" made at the plenary was to take the specification known previously as “DAF-core” and rename it the US Realm Implementation guide. That may sound like inside baseball, but it's another symbolic leap in the maturity of the standard.

From the DHIT standpoint, we are well underway developing features in our flagship interoperability application, ConnectEHR, and across our product line to support EMR clients, including the development of testing and production FHIR servers. Our overriding goal is for our clients to move forward in interoperability as they meet the latest edition of ONC Certification Standards (2015 Edition).

We anticipate that FHIR may one day become an explicitly mandated standard and, as it stands, it is a boon not only to interoperability but also to meeting Meaningful Use in Stage 3 and beyond.

We participated in the "CCDA on FHIR" track as document creator as well as document consumer, testing our implementation against multiple servers (including those of other participants as well as the reference servers from Grahame Grieve and Furore). Our coding was done in C# using the fhir-net-api provided by fellow FHIR Chief Ewout Kramer.

HL7 hosted its third C-CDA Implementation-A-Thon last week in Arlington, VA. The DHIT team kept up its perfect attendance, convening with other CCDA developers and experts just outside DC.

In addition to the usual networking and educational opportunities afforded at HL7 Events, it's always interesting to chart the development progress of the industry as a whole. And the "real-world" scenarios provided at the event - creating and exchanging live data - are worth the trip alone.

As is typical of healthcare standards-based Connectathons, clinical scenarios are laid out for participants to navigate. In this case, the exercises were related to the exchange of v2.1 documents, discharge summaries and electronic referrals. We're proud to report that all the CCDA Homework Scenarios were accomplished. 'A+' goes to the DHIT developers on hand.

Valueset OIDs
Valueset OIDs continue to be a point of some controversy. There was a presentation at the event providing background and information on the process of creating them. For the uninitiated, value sets for use in EMRs, CQMs, research and other contexts are created by professionals and organizations and submitted for approval by the National Library of Medicine (NLM), under its Unified Medical Language System (UMLS) arm. The code sets are validated and checked for duplicates. However, our development has uncovered that some of the codes may be subject to duplication, and we've requested further information from NLM.

Lessons learned
We left with some takeaways on the process of generating a v2.1 CCDA and we wanted to share with our audience:
  • Mood codes and their usage are often overlooked; developers should pay closer attention to them, even though they may complicate the data requested from a client
  • The C-CDA Scorecard is a very useful checkpoint in development
  • A lower Scorecard score doesn’t mean the CCDA will fail validation, but a higher score indicates the CCDA is much closer to the expected standard
  • The display name should come from the code system; otherwise it will lower the score
  • Include narrative text for all sections, along with textual clinical notes
  • The task of categorizing results may tolerate multiple pathways (e.g., CT scans can go to Procedures, Results or both)
  • Allergies and problems should always have a time recorded
  • For the effectiveTime of immunizations, do not use low+high when moodCode=EVN

Wednesday, October 12, 2016

NTT's Optimum and Dynamic Health IT Partner on Forward-Thinking Solution for CQMs

DHIT President Jeff Robbins addressing the
NTT Data Client Conference
After wrapping up our successful ONC certification testing for CQMsolution in early September, we headed to Newport Beach, CA, for the NTT Data Client Conference. Held annually, the conference offers clients of NTT Data products and services a wide range of educational sessions, networking opportunities and face-time with NTT DATA staff.

The event was a great opportunity to meet with implementers and users of CQMsolution. We were able to provide specific education on our application through the lens of NTT's Optimum clinical ecosystem. CQMsolution is developed as a universal quality measure solution, but context always matters, of course.

In keeping with the mission of the conference, we also spent time discussing some policy specifics to help NTT users prepare for changes in quality measurement. This included a glimpse into the future to MIPS/MACRA and Meaningful Use Stage 3.

We also expressed our confidence that our solution will continue to be among the first - if not the very first - to update with each successive release of CMS measures. Among other benefits, this allows maximal testing and educational opportunities in the lead-up to submission.

As with all DHIT clients, we seek to offer a full range of development, quality assurance, support and project management resources, tailored to the environmental needs of the specific implementation and user base. Our close collaboration with NTT has yielded a solution that allows their EMR team to focus on development and customer support, while we provide an effective and aggressively-supported tool to attack quality measures.

DHIT VP Raychelle Fernandez providing
clinical background for CQM calculation process.
By way of demonstration, DHIT gave a detailed presentation on CQMsolution and showed key elements of the software using a specific clinical use case: Ischemic/Hemorrhagic Stroke (via CMS measure 102v4). With our goal of guiding clients through CQMs from start to finish, we discussed not only the calculation and display of measures in CQMsolution, but the process of submission.

It's our hope that CQMsolution, like that Southern California weather, makes everything a little sunnier.

Monday, October 10, 2016

CQMsolution blazes trail as first 2015 Edition Certified CQM product

Dynamic Health IT is proud to announce that we're the first software developer to be certified for Clinical Quality Measures under the latest ONC Health IT Certification (2015 Edition).

But don't take our word for it: our listing on the ONC CHPL website is viewable here.

The process of certification testing gives our clients confidence that our product can support eligible clinicians and eligible hospitals in meeting CMS EHR Incentive Program objectives. We have developed the product with an eye on not only the current formulation of Meaningful Use, PQRS, IQR and other quality measurement programs, but the changes to come under MIPS/MACRA.

“Dynamic Health IT remains a trailblazer in clinical quality measures software development. We’re very proud to be the first vendor to certify 2015 Edition Quality Measures for Cypress 3.0,” said Jeff Robbins, President of Dynamic Health IT.

Certification is a proud achievement, but also a way station to further development. DHIT continues to enhance our software to include bulk, automated practice and user adds, API access and a number of other new features. We hope to provide our clients not only quality measure compliance, but a transparent user interface that enables easy analysis.

CQMsolution 3.0 certification meets the following certification CQM-related criteria:
  • 170.315(c)(1) Clinical Quality Measures- Capture And Export
  • 170.315(c)(2) Clinical Quality Measures- Incorporate And Calculate
  • 170.315(c)(3) Clinical Quality Measures- Reporting
  • 170.315(c)(4) Clinical Quality Measures- Filter
CQMsolution Version 3.0 also includes new interface enhancements driven by 170.315(c)(4) - a brand new module in 2015 Edition - allowing users to filter report data on a number of demographic categories. In addition, we certified our solution on:
  • 170.315(g)(4) Quality Management System
  • 170.315(d)(1) Authentication, access control, authorization
  • 170.315(d)(2) Auditable events and tamper-resistance
  • 170.315(d)(3) Audit report(s)
  • 170.315(d)(5) Automatic access time-out
The clinical quality measures to which CQMsolution has been certified include:
  • All 29 updated measures for eligible hospitals
  • All 64 updated measures for eligible professionals
  • All 64 aligned PQRS measures for EPs (additional PQRS measures can be supported)
This marks the fourth ONC-certified version of CQMsolution, the previous certification coming in conjunction with the release of Cypress 3.0 validation software.


Version 3.0 was certified by ICSA Labs, an Office of the National Coordinator-Authorized Certification Body (ONC-ACB) and is compliant in accordance with applicable criteria adopted by the Secretary of Health and Human Services (HHS). 

Monday, August 15, 2016

Eliminating the Hurdles of Clinical Quality Measures for the 2016 Reporting Year

Dynamic Health IT is proud to announce we have successfully pilot-tested with ICSA Labs for 2015 Edition CQM-related measures (c)(2) and (c)(3).

That's a bit of a mouthful, but it means that our software, CQMsolution, remains at the forefront in providing meaningful, submission-ready clinical quality measure output. CQMsolution supports 93 CMS eCQMs and the 64 aligned PQRS measures.

Under the 2015 Edition, Clinical Quality Measure reporting has been made more comprehensive. The three existing criteria, (c)(1), (c)(2) and (c)(3), have been revised:
  • Cypress 3.0 validation software, which includes more robust testing relative to 2.6.1, must be supported
  • The latest set of measure versions for the 2016 reporting year, validated by Cypress, must be supported
  • Required data export capability is expanded
  • Data import should be more accessible (“without developer assistance”)
  • Exported data file must meet R2 implementation guide for QRDA
The three CQM criteria are also joined by a new one: (c)(4), filtering. In meeting (c)(4), EHRs must be able to filter quality measure results at the patient and aggregate levels by a list of variables. The filtered results must be made available in both a human-readable format and a data file.
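As a rough illustration of the idea behind (c)(4) filtering (a minimal sketch, not CQMsolution's actual implementation; the record structure and field names here are hypothetical), patient-level results can be narrowed by demographic variables and then rolled up into an aggregate view:

```python
from collections import Counter

# Hypothetical patient-level CQM results; measure ID and field names are illustrative only.
results = [
    {"patient_id": "P1", "measure": "CMS165", "numerator": True,  "sex": "F", "payer": "Medicare"},
    {"patient_id": "P2", "measure": "CMS165", "numerator": False, "sex": "M", "payer": "Medicaid"},
    {"patient_id": "P3", "measure": "CMS165", "numerator": True,  "sex": "F", "payer": "Medicare"},
]

def filter_results(results, **criteria):
    """Patient-level view: keep only records matching every requested filter variable."""
    return [r for r in results if all(r.get(k) == v for k, v in criteria.items())]

def aggregate(results):
    """Aggregate view: count numerator hits against the filtered population."""
    hits = Counter(r["numerator"] for r in results)
    return {"numerator": hits[True], "denominator": len(results)}

filtered = filter_results(results, sex="F", payer="Medicare")
print(aggregate(filtered))  # counts for the filtered subpopulation
```

The real criterion requires both this filtered human-readable output and a corresponding data file; the sketch only shows the filtering step itself.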

Not reinventing the wheel, replacing tires

Our developers put in rigorous work over the last quarter, incorporating feedback from clients, to dramatically enhance the software, making it easier-to-use, more robust and, of course, certification-ready for ONC 2015 Edition and Cypress 3.0.

We saw this development cycle as a great opportunity to take everything we have learned over the past few years - in the form of our feature enhancements and performance improvements - and integrate it fully with new development.

This meant preserving the principles behind the engine and UI that worked, while using the regulatory changes in measure logic and output as a chance for a coordinated re-design. The CQMsolution calculation engine uses the eMeasure HQMF files, which are based on the Quality Data Model (QDM), as the basis for evaluation. We use these files to create the data structures through which we process patient data.
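In spirit, the approach resembles the following toy sketch (a simplification for illustration, not the actual engine; the criterion IDs, categories and codes are invented): each QDM-style data criterion from a measure definition becomes a predicate, and a patient's data elements are tested against it.

```python
# Illustrative QDM-style data criteria: (criterion_id, QDM category, value-set codes).
# These codes and criteria are made up for the example, not real measure logic.
criteria = [
    ("diabetes_dx", "diagnosis", {"E11.9", "250.00"}),
    ("hba1c_lab",   "laboratory_test", {"4548-4"}),
]

# A hypothetical patient record keyed by QDM category.
patient = {
    "diagnosis": [{"code": "E11.9"}],
    "laboratory_test": [{"code": "4548-4", "value": 7.2}],
}

def matches(patient, category, codes):
    """True if the patient has any data element of this category with a matching code."""
    return any(e["code"] in codes for e in patient.get(category, []))

# Evaluate every data criterion against the patient record.
satisfied = {cid for cid, cat, codes in criteria if matches(patient, cat, codes)}
print(satisfied)
```

The real engine layers timing relationships, value sets and population logic on top of this kind of criterion matching, but the shape - measure definition in, per-criterion evaluation out - is the same.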

In contrast to feature-specific roll-outs, this was a bottom-up version. It is not often you get a chance to re-write core parts of your application. This can be a perilous process, but there were two factors that made it successful:
  • The ability to create a completely new calculation engine for 2016 reporting year without altering previous reporting year calculation engines meant we were not required to maintain backwards compatibility in the new code
  • After identifying our time parameters and client needs, our development team realized complete development focus would be needed. The project was afforded development time in a distraction-free environment.

CQM data intake and calculation were developed in tandem, allowing for a holistic approach. Combined with extensive testing, relying in part on a more exhaustive Cypress data set, the result was a robust upgrade, built on clean code.

Key changes

The changes from the 2014 release measures to the 2015/2016 release measures were dramatic, requiring a rewrite of major portions of CQMsolution. Examples of these changes can be seen in the text of the measures: new subset operators were added, along with new temporal operators that make the measures clearer.

The changes in QRDA format also called for a rewrite to the parsing and generating pieces that enable our clients' certification and submission of CQMs.

On top of the CQM engine, we integrated user experience changes to make both certification testing and day-to-day use of the application easier and reflective of technical changes. These changes range from easier report tracking in the UI, to one-click certification testing through a single-upload "compound" report and an API connection to Cypress.

Improvements ahead
CQMsolution 3.0 is now in beta testing with clients, pending final ONC certification. The development cycle is perpetual and we intend to stay at the forefront of CQM development. By year's end, 2017 reporting year support should be complete and a number of features are in the pipeline for the near term, including API-based data collection from client EMRs. 

We look forward to rolling those out and, of course, to our full certification on all four CQM measures. Stay tuned!

Tuesday, August 2, 2016

FHIR Applications Roundtable at Harvard Medical School

The DHIT Team has been an active participant in FHIR Connectathons during the past two years. Among the benefits of these events is the unique glimpse they provide into what the industry is delivering with FHIR and how the standard continues to evolve through active development.

Our team is also eager to find connections between our interoperability expertise and real-world healthcare problems. With those (and other) goals in mind, our President Jeff Robbins attended the first annual FHIR Applications Roundtable at Harvard Medical School in Boston to learn more.

Although FHIR is a relatively new standard, it has great potential and forward-thinking healthcare IT organizations are already deploying FHIR solutions. 

The Roundtable consisted of a series of 15 minute presentations by academics, software developers and consultants highlighting FHIR-related projects.

The projects on display ranged from patient- and provider-facing apps, Clinical Decision Support, clinical collaboration platforms and patient education, all the way up to a complete, native FHIR-based EHR. The expansiveness of the applications and implementations discussed demonstrates just how far the standard has come from its days in draft status.

On the policy front, Steve Posnack, Director of the Office of Standards and Technology at the Office of the National Coordinator for Health Information Technology, spoke about efforts to encourage interoperability through FHIR development and the HL7 FHIR App Ecosystem. ONC is encouraging market-ready FHIR support through its "challenges."

DHIT plans to offer a CCDA-to-FHIR converter in the near future.  Stay tuned!
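To give a taste of what such a converter involves (a minimal sketch under simplifying assumptions, not the planned product; the C-CDA fragment below is stripped down and the mapping covers only a few demographic fields), converting a C-CDA patient record to a FHIR Patient resource might look like:

```python
import json
import xml.etree.ElementTree as ET

# Minimal, illustrative C-CDA patientRole fragment (real documents are far richer).
ccda = """<patientRole xmlns="urn:hl7-org:v3">
  <patient>
    <name><given>Ada</given><family>Lovelace</family></name>
    <administrativeGenderCode code="F"/>
    <birthTime value="19701231"/>
  </patient>
</patientRole>"""

NS = {"v3": "urn:hl7-org:v3"}
GENDER_MAP = {"F": "female", "M": "male", "UN": "unknown"}

def ccda_to_fhir_patient(xml_text):
    """Map a handful of C-CDA patient fields onto a FHIR-style Patient resource."""
    root = ET.fromstring(xml_text)
    p = root.find("v3:patient", NS)
    birth = p.find("v3:birthTime", NS).get("value")
    return {
        "resourceType": "Patient",
        "name": [{
            "given": [p.find("v3:name/v3:given", NS).text],
            "family": p.find("v3:name/v3:family", NS).text,
        }],
        "gender": GENDER_MAP.get(p.find("v3:administrativeGenderCode", NS).get("code")),
        # HL7 TS "YYYYMMDD" -> FHIR date "YYYY-MM-DD"
        "birthDate": f"{birth[:4]}-{birth[4:6]}-{birth[6:8]}",
    }

print(json.dumps(ccda_to_fhir_patient(ccda), indent=2))
```

A production converter has to handle templated sections, null flavors, code translation and many more resource types; the hard part is the mapping tables, not the plumbing shown here.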