Deborah Peel, MD, was trained as a Freudian psychoanalyst and worked as a psychiatrist in Austin, Texas, for nearly three decades before becoming a privacy activist, founding the group Patient Privacy Rights in 2006 after being appalled by HIPAA’s evolution into what she sees as a weak baseline for privacy and security.
Equally skeptical of both private industry and the government, Peel holds views and policy proposals that diverge sharply from those of many health IT leaders. But she shares many of their goals, and the optimism that technology and patient engagement can improve American healthcare systems.
Recently, Patient Privacy Rights urged the Department of Health and Human Services to regulate cloud computing, as hosted data services grow and data breaches continue to plague health organizations. In a wide-ranging interview with Government Health IT, Peel talked about her background as a mental health provider, how healthcare organizations can build patient trust, her ideal model for information-sharing consent and more.
Q: How did your view of privacy and patient rights develop?
A: It really grew out of my long-term practice as a boarded psychiatrist and a Freudian psychoanalyst. Literally the first week I hung out my shingle in the late ‘70s, people came in and they said, “If we pay you cash, will you keep our records private?” This is a problem that predates electronic health records. Health information doesn’t stay in the doctor’s office. What happened then was information on paper would get sent to pay claims, and it was very detailed, and the claims information would many times be shared with employers. That can happen because under ERISA (the Employee Retirement Income Security Act), if you’re in an ERISA employer-based health plan, they frankly have the right to see your information. Many companies say they don’t do that, but it’s a widespread practice. I learned from my patients that if you use any third parties, then the privacy of your sensitive information is at risk.
I really learned that there were significant numbers of people who will not get treatment unless they know it’s private. This is a long-standing problem that predated electronic health records, but if you think about the scale of things, it’s very different.
Q: Did you know of mental health patients whose employers learned of their conditions and discriminated against them?
A: Absolutely, not only discriminating against them. Another very common complaint would be, “I applied for life insurance or long-term disability insurance, and I’ve been denied.” They would look into it and it would be because of psychiatric records. I wrote many a letter that said, “This person has never been suicidal. This person has not been on medication. They’ve been in therapy; they’ve managed their problems very well.”
It’s my opinion that insurers have long discriminated against anyone with a mental health diagnosis, and I don’t believe there’s actually an actuarial basis for discriminating against anyone who has any mental illness or addiction diagnosis. If you think about it, the ones that do well are actually the ones that come in for treatment. These are people that are going to do well and get better, not create major burdens for the insurance industry.
I do have a pretty negative view of the insurance industry and the managed care industry. Insurers, when we had an indemnity model, all they got to pay claims was the diagnosis, the date of treatment, the place of treatment, the type of treatment and the cost — five elements. Their corporate mandate is to return money to shareholders. So they began to think of ways to ratchet down what they were paying out. Insurers began to demand copies of records as a condition for paying claims, and then they would use whatever they found to collect more information about individuals, and also to find ways to deny and limit payments and claims. Insurers began to require that you sign, every year, a blanket advance consent allowing your doctor to send them records of the treatment in order to pay a claim. They now pore over people’s records to look for ways to take back payments that have already been made, to claim that something was not revealed earlier that would’ve caused you to be denied.
Q: Your website cites a number of fairly nefarious and invasive scenarios: “If a school or university learns your child has ADHD or is being treated for depression, they may deny admission. If a boss knows you take Xanax or Zoloft, they may reconsider your promotion.” Wouldn't both of those practices be illegal?
A: Of course it’s all illegal, using people’s information against them. But there’s no way that the poor employee can even know until later that something’s happened. For example, I can’t tell you how many stories psychiatrists hear about where somebody’s been out for two weeks for depression. They go back in, they’re assigned to a completely new job, and they end up quitting. How are they going to ever prove or know who looked at their records when there is no chain of custody? That’s the other thing that electronic records can prove: you can’t move them, you can’t open them, you can’t see them, without there being a transaction.
One of the things that we have lobbied for is a chain of custody and accounting of disclosures. And we did get that into the HITECH Act. You’re supposed to be able to get three years of all disclosures of electronic data from your EHR. They don’t even have the rules yet for how we can get disclosures of electronic health records — not from pharmacies, not from labs, not from insurers, not from all the other clearinghouses. What we really need is a chain of custody for all health data, wherever it is. Because we don’t even know that, there’s no way to prove harm. One of our major projects right now is working really hard with Harvard and Latanya Sweeney to raise the funds to build a data map. We do not even know how many entities have our information or what they’re doing with it. So how can we weigh risks and benefits, when we have institutional control of information, not patient control?
Q: Do we need a better definition of privacy?
A: The word privacy is bandied around a lot in healthcare, and people don’t say what they mean by it. Under the law, privacy means the right to control personal information, and we have very strong constitutional rights to control health information privacy. This comes from the decisions that began around abortion. If you look at that law, the Supreme Court recognized two kinds of privacy: informational privacy and decisional privacy. It’s the decisional privacy that gets everybody into fights — who gets to make the decision about that information. But neither side of the fence disagrees with the right to informational privacy, which is your right to control the use of your sensitive health information.
When HIPAA was first implemented by President Bush, it had a right of consent. In fact, the privacy rule read that a covered entity had to obtain your consent to use or disclose your protected health information for treatment, payment or healthcare operations. About a year later, they reopened HIPAA; nobody noticed this. And they told us that they fully planned to take that out. They proposed an amended rule, and they got 11,000 comments that said don’t take consent out. But they did anyway. About a year after the privacy rule, the consent provisions were replaced with what’s called regulatory permission for covered entities to use and disclose PHI — protected health information — for TPO: treatment, payment and operations.
That’s why I started Patient Privacy Rights. When that change happened in 2002, only a few weird eggheads — like the Freudian psychoanalysts — noticed that language in the privacy rule. When we began to go to Congress in 2006, the first thing we did was show people the language about consent that was in the privacy rule that Bush implemented, and then the language in the amended rule that took away consent. Now you can’t even stop your doctor or anyone else from sharing your information, if they say it’s for one of those three uses.
It’s truly astonishing that we don’t have the ability to have private email. That, of course, is one of the major problems with email.
Q: You don’t think it’s beneficial for consumers, say Gmail users, to be able to use a free communications technology service and get customized advertisements?
A: That’s bull. The public doesn’t want customized ads. That’s a fantasy of the Direct Marketing Association. If you do, you should be able to elect to get them. Most people don’t. In Microsoft’s new browser, the automatic default is Do Not Track. And they caught a lot of hell from the advertising industry. Their pushback was: that’s what 75 percent of the public wants. I’m sure you’ve seen the stuff about smartphone users rising up against apps that collect information and data from their cell phones. This is the beginning of the public’s recognition that all of this tracking is wrong.
Q: Isn’t it a fair trade-off, though? A lot of these technologies are available for free.
A: Heck no. I don’t think any of it is. You can’t make a trade-off when you don’t know what the risks are. Most people don’t understand that these massive data aggregators are able to create incredibly intimate psychological, political, health and intellectual-interest portraits of who you are, including what you have to spend. This is outrageous. This is more than your mother knows. This is more than the CIA or the NSA used to be able to gather. It would take years to gather this type of information.
Q: Back to healthcare. With various health information exchange and EHR systems, patients are often given the choice to opt in or opt out. A lot of health IT providers have tended to prefer the opt-out system, where patients are automatically enrolled unless they choose otherwise. What do you think about this?
A: Opt-in and opt-out are actually very deceptive and very unfair. Opting into what? You don’t even know where your data is going to go, who’s going to see it or use it. You have no ability to control sensitive information, which everyone in every state used to have, because every state used to restrict the disclosure of certain kinds of sensitive information, whether it was on teenagers, whether it had to do with STDs, mental health. There are now some states that have wiped out their state level protections in order to downgrade their standards to HIPAA.
In Europe, all health information is sensitive, not just certain categories. We should have the right to segment sensitive information, and we should have the right to decide what’s sensitive to us. The five kinds of consent that are being offered for health information exchange in this country are really a privacy disaster. Some states don’t offer any opportunity for consent whatsoever, like Indiana. Everything in Indiana is in; you can’t get out if you want to.
Q: So what do you see as some of the ideal models for balancing privacy and technology, and letting patients decide without creating systems that are too onerous?
A: There are better ways to do this. We could have exquisite control. We could be pinged on our cell phones when somebody wanted to use our information and we could say yes or no. For example, you could set up a rule that “I’m willing to have my data used by any institution that’s studying juvenile diabetes, or whatever, you name it.”
We need to move from institutional control to patient control of data. That’s going to take new technologies and that’s going to take new laws. We have to have patient portals. We have to have robust ID management. We have to have a single place where we can set as detailed or as broad consent permissions as we want. Then we would also need a health bank, where we could actually get a copy of all of our records, so we would have the most complete up-to-date information about ourselves that we could then disperse. We would not then need institutional control.
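The consent model Peel sketches — a single place where a patient sets broad or detailed permissions, with every requested use checked against them (and a phone ping when no rule matches) — could in principle look like a simple default-deny policy check. The sketch below is purely illustrative; the class and function names are assumptions, not part of any real system or standard she references.

```python
# Illustrative sketch of patient-controlled consent rules: every data request
# is checked against the patient's own permissions, and anything without an
# explicit matching rule is denied (in practice, this is where the patient
# would be pinged on their phone to say yes or no).
from dataclasses import dataclass

@dataclass
class ConsentRule:
    purpose: str   # e.g. "research"
    topic: str     # e.g. "juvenile diabetes"
    allow: bool

@dataclass
class DataRequest:
    requester: str
    purpose: str
    topic: str

def is_permitted(rules: list[ConsentRule], request: DataRequest) -> bool:
    """Default deny: only an explicit matching rule can allow a use."""
    for rule in rules:
        if rule.purpose == request.purpose and rule.topic == request.topic:
            return rule.allow
    return False  # no matching rule: deny (or ask the patient)

# The patient's standing rule from Peel's example:
rules = [ConsentRule("research", "juvenile diabetes", allow=True)]

print(is_permitted(rules, DataRequest("University Lab", "research", "juvenile diabetes")))  # True
print(is_permitted(rules, DataRequest("Insurer", "underwriting", "claims history")))        # False
```

The point of the sketch is the direction of control: the rules live with the patient, not the institution, and silence means no.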
That’s really the only way that we’re going to be able to reap the benefits of technology and not blow all the money.