Another big week for Big Data.
Roger Foster of DRC’s high performance technology group continues his popular series with Big data and public health, part 2: Reducing unwarranted care services. As examples, Foster points to overuse driven by fee-for-service incentives; marginally valued direct care that has no measurable benefit or shows no improvement in patient outcomes; unnecessary diagnostic or imaging tests performed to protect against malpractice exposure; and high-cost diagnostics performed on patients at very low risk for the condition. “The analytic approach uses patterns found in historical data sets like medical records to identify risks, trends, and associations. One well-known example is credit scoring used throughout the financial services industry. Particular to healthcare, predictive analytics can be used to address unwarranted care by answering” a set of questions, he writes.
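To make the credit-scoring analogy concrete, here is a minimal, purely illustrative sketch of the idea: mine historical records for per-risk-factor condition rates, then score a new patient so that a very low score flags a high-cost diagnostic as potentially unwarranted. The data, field names, and threshold are all assumptions for illustration, not anything from Foster's series.

```python
# Hypothetical sketch: scoring patients against patterns in historical
# records, in the spirit of credit scoring. All data is made up.
from collections import defaultdict

def train_rates(records):
    """For each risk factor, estimate P(condition | factor present)."""
    counts = defaultdict(lambda: [0, 0])  # factor -> [with condition, total]
    for rec in records:
        for factor in rec["factors"]:
            counts[factor][1] += 1
            if rec["has_condition"]:
                counts[factor][0] += 1
    return {f: pos / total for f, (pos, total) in counts.items()}

def risk_score(rates, factors):
    """Average historical condition rate across a patient's risk factors."""
    known = [rates[f] for f in factors if f in rates]
    return sum(known) / len(known) if known else 0.0

# Toy historical data set standing in for medical records.
history = [
    {"factors": ["smoker", "age_65_plus"], "has_condition": True},
    {"factors": ["smoker"], "has_condition": True},
    {"factors": ["age_65_plus"], "has_condition": False},
    {"factors": ["no_history"], "has_condition": False},
]

rates = train_rates(history)
# A patient at very low historical risk: a costly diagnostic
# ordered here would be a candidate for review.
score = risk_score(rates, ["no_history"])
print(score < 0.1)  # prints True for this toy data
```

Real systems would of course use far richer models and clinically validated thresholds; the point is only that the "patterns in historical data" Foster describes reduce to exactly this kind of learned scoring.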
But that holds only if health entities have access to professionals skilled enough in analytics to make use of big data, otherwise known as data scientists, a supply that, according to a Wall Street Journal article, may soon prove problematic.
Another emerging problem area for Big Data: Government agencies, thus far, have made little progress. That’s according to a MeriTalk study circulating this week projecting that while the feds are fixin’ to rack up a petabyte’s worth of new data during the next two years, they have no place to put it. Of the 151 federal CIOs and IT managers MeriTalk interviewed for The Big Data Gap, nine out of 10 reported challenges when it comes to harnessing big data, and only 60 percent indicated that their agency is analyzing all the data it collects. What’s more, fewer than half are actually using it for decision-making.
Lest anyone forget, this week also saw the meaningful use stage 2 public comments period come to a close. I spoke with The Robert Wood Johnson Foundation’s senior program officer Michael Painter, MD, about how the NPRM and, subsequently, the forthcoming final rule might impact public and population health. RWJF mostly applauded CMS and ONC efforts, particularly around HIE and quality measures, and, despite public flare-ups over patient access to records, is urging the agencies to stand firm on both the number of patients expected to access downloaded records and the timeframe for doing so.
RWJF is among the many commenters, of course. Others to add their take thus far include the American Medical Association, which is looking to synchronize multiple health IT projects; the HIMSS EHR Association, which focused on certification criteria and standards; the Certification Commission for Health Information Technology (CCHIT), which, from the perspective of a testing and certification body, offered a number of suggestions designed to minimize confusion; and the College of Healthcare Information Management Executives (CHIME), which suggests that a further delay is needed.
And then there’s ICD-10. The public comments period for ICD-10 is open another week and a half or so, and as Carl Natale reports on ICD10Watch, there are relatively few comments thus far. Natale notes that even the AMA, AHIMA and HIMSS have yet to weigh in. Perhaps, as was very much the case with the meaningful use stage 2 comments, those organizations are waiting until closer to the deadline to do so. [Update: AHIMA, CHIME take stances on ICD-10.]
And, when it happens, ICD-10 will result in greater specificity of health data, more granularity, more codes. Big data, indeed.