The NHS plan to share our medical data can save lives – but must be done right

Ben Goldacre

Care.data, the grand project to make the medical records of the UK population available for scientific and commercial use, is not inherently evil – far from it – but its execution has been badly bungled. Here's how the government can regain our trust.
Everything would be much simpler if science really was "just another kind of religion". But medical knowledge doesn't appear out of nowhere, and there is no ancient text to guide us. Instead, we learn how to save lives by studying huge datasets on the medical histories of millions of people. This information helps us identify the causes of cancer and heart disease; it helps us to spot side-effects from beneficial treatments, and switch patients to the safest drugs; it helps us spot failing hospitals, or rubbish surgeons; and it helps us spot the areas of greatest need in the NHS. Numbers in medicine are not an abstract academic game: they are made of flesh and blood, and they show us how to prevent unnecessary pain, suffering and death.
Now all this vital work is being put at risk, by the bungled implementation of the care.data project. It was supposed to link all NHS data about all patients together into one giant database, like the one we already have for hospital episodes; instead it has been put on hold for six months, in the face of plummeting public support. It should have been a breeze. But we have seen arrogant paternalism, crass boasts about commercial profits, a lack of clear governance, and a failure to communicate basic science properly. All this has left the field open for wild conspiracy theories. It would take very little to fix this mess, but time is short, and lives are at stake.
The care.data project was promoted in two ways: we will use your data for lifesaving research, and we will give it to the private sector for commercial exploitation, creating billions for the UK economy. This marriage was a clear mistake: by and large, the public support public research, but are nervous about commercial exploitation of their health data.
Now the teams behind care.data are trying to row back, explaining that access will only be granted for research that benefits NHS patients. That is laudable, but potentially a very broad notion. It's one we would want to unpack, with clear, worked examples of the kind of things they would permit, and the kind of things they would refuse. But that's not possible because, bizarrely, the specific principles, guidelines, committees and regulations that will determine all these decisions have not yet been clearly set out. This poses several difficulties. Firstly, the public are being asked to support something that feels intuitively scary, about the privacy of their medical records, without being told the details of how it will work. Secondly, the field has been left open to conspiracy theories, which are hard to refute without concrete guidance on how permissions for access really will work.
That said, many criticisms have been absurd. There has been endless discussion around the idea of health insurers buying health records, for example, and using them to reject high-risk patients. Call an insurer right now and see how you get on: within minutes you will be asked to declare your full medical history, waive confidentiality and grant access to your full medical notes anyway.
Many have complained about drug companies getting access to data, and this is more complex. On the one hand, arrangements like these are longstanding and essential: if medicines regulators get a few unusual side-effect reports from patients, they go to the drug company and force them to do a big study, examining – for example – 10,000 patients' records, to find out if people on that drug really do have more heart attacks than we'd expect. To do this, the UK health regulator itself sells industry the data, in the past from something called the General Practice Research Database, which holds millions of people's records already. This needs to happen, and it's good. But equally, people know – I've certainly shouted about it for long enough – that the pharmaceutical industry also misuses data: they hide the results of clinical trials when it suits them, quite legally; they monitor individual doctors' prescribing patterns to guide their marketing efforts, and so on. The public don't trust the pharmaceutical industry unconditionally, and they're right not to.
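To make that concrete, here is a toy sketch in Python of the arithmetic such a study starts from. Every number and name in it is invented for illustration; a real pharmacoepidemiology study would adjust for age, sex and a dozen confounders before drawing any conclusion.

```python
# A toy version of the regulator's question: do patients on drug X have
# more heart attacks than we'd expect? All figures here are invented.

patients_on_drug = 10_000        # records examined, as in the example above
heart_attacks_observed = 62      # events found in those records (hypothetical)
expected_rate = 0.004            # background rate per patient (hypothetical)

observed_rate = heart_attacks_observed / patients_on_drug
rate_ratio = observed_rate / expected_rate

print(f"observed rate: {observed_rate:.4f}")    # 0.0062
print(f"ratio vs expected: {rate_ratio:.2f}")   # 1.55

# A ratio well above 1 is a signal, not a verdict: the real study would
# control for age, sex and other confounders before anyone acted on it.
```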
Trust, of course, is key here, and that's currently in short supply. The NSA leaks showed us that governments were casually helping themselves to our private data. They also showed us that leaks are hard to control, because the National Security Agency of the wealthiest country in the world was unable to stop one young contractor stealing thousands of its most highly sensitive and embarrassing documents.
But there is a more specific reason why it is hard to give the team behind care.data our blind faith: they have been caught red-handed giving false reassurance on the very real – albeit modest – privacy threats posed by the system.
Tim Kelsey is the man running the show: an ex-journalist, passionate and engaging, he has drunk more open-data Kool-Aid than anyone I've ever met. He has evangelised the commercial benefits of sharing NHS data – perhaps because he made millions from setting up a hospital-ranking website with Dr Foster Intelligence – but he is also admirably evangelical about the power of data and transparency to spot problems and drive up standards. Unfortunately, he gets carried away, stepping up and announcing boldly that no identifiable patient data will leave the Health and Social Care Information Centre. Others supporting the scheme have done the same.
This is false reassurance, and that is poison in medicine, or in any field where you are trying to earn public trust. The data will be "pseudonymised" before release to any applicant company, with postcodes, names, and birthdays removed. But re-identifying you from that data is more than possible. Here's one example: I had twins last year (it's great; it's also partly why I've been writing less). There are 12,000 dads with similar luck each year; let's say 2,000 in London; let's say 100 of those are aged 39. From my brief online bio you can work out that I moved from Oxford to London in about 1995. Congratulations: you've now uniquely identified my health record, without using my name, postcode, or anything "identifiable". Now you've found the rows of data that describe my contacts with health services, you can also find out if I have any medical problems that some might consider embarrassing: incontinence, perhaps, or mental health difficulties. Then you can use that information to try and smear me: a routine occurrence if you do the work I do, whether it's big drug companies, or dreary little quacks.
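For the technically minded, the trick is nothing cleverer than successive filtering on "quasi-identifiers". Here is a minimal sketch, with invented rows and field names, but with exactly the arithmetic above:

```python
# How a few public facts single out one row of "pseudonymised" data.
# Rows and field names are invented; note no name or postcode appears.

records = [
    {"id": "row_000001", "region": "London",     "age": 39,
     "twins_2013": True, "moved_from": "Oxford", "moved_year": 1995},
    {"id": "row_000002", "region": "London",     "age": 31,
     "twins_2013": True, "moved_from": "Leeds",  "moved_year": 2004},
    {"id": "row_000003", "region": "Manchester", "age": 39,
     "twins_2013": True, "moved_from": "Oxford", "moved_year": 1995},
    # ... imagine millions more rows here ...
]

# Each fact from a short online bio slices the population down:
# ~12,000 twin dads a year -> ~2,000 in London -> ~100 aged 39 -> one.
candidates = [
    r for r in records
    if r["twins_2013"]
    and r["region"] == "London"
    and r["age"] == 39
    and r["moved_from"] == "Oxford"
    and r["moved_year"] == 1995
]

if len(candidates) == 1:
    print("Uniquely re-identified:", candidates[0]["id"])  # row_000001
```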
This risk isn't necessarily big, but to say it doesn't exist is crass: it's false reassurance, which ultimately undermines trust, but it's also unnecessary, and counterproductive, like hiding information on side-effects instead of discussing them proportionately. To the best of my knowledge, we've never yet had a serious data leak from a medical research database, and there are plenty around already; but then, we are standing on the verge of a significant increase in the number of people accessing and using medical data. There are steps we can take to minimise the risks: only release a subset of the 60 million UK population to each applicant; only give out the smallest possible amount of information on each patient whose records you are sharing; suggest that people come to your data centre to run their analyses, instead of downloading records, and so on. But, while the care.data project might be planning to do some of those things, the ground rules haven't been properly written out yet.
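To show what two of those safeguards might look like in code, here is a hedged sketch; the function and field names are my own invention, not anything the care.data team has published:

```python
import random

# Two of the mitigations above in one function: release only a random
# subset of patients, and only the fields a given research question
# actually needs. All names here are hypothetical.

def minimised_release(records, needed_fields, sample_fraction, seed=42):
    """Return a sampled, column-trimmed extract of a list of records."""
    rng = random.Random(seed)  # fixed seed, so the same applicant
                               # always gets the same subset
    sampled = [r for r in records if rng.random() < sample_fraction]
    return [{f: r[f] for f in needed_fields} for r in sampled]

# e.g. a heart-disease study might get 5% of records and three fields:
# extract = minimised_release(all_records, ["age_band", "drug", "mi_event"], 0.05)
```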
In any case, even safeguards such as these can be worked around. There are companies out there operating in the grey areas of the law, aggregating data from every source and leak they can find, generating huge, linked datasets with information from direct marketing lists, online purchases, mobile phone companies and more. Who's to know if someone will start quietly aggregating all the small chunks of our health data?
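Aggregation of that kind needs no hacking at all: it is just a join on shared fields. Another invented sketch, assuming an aggregator holds a marketing list and has got hold of a pseudonymised health extract:

```python
# Aggregation is just a join on quasi-identifiers. Both datasets and all
# field names here are invented; the point is how neatly they line up.

marketing_list = [
    {"name": "A. Example", "region": "London", "age": 39, "moved_year": 1995},
]
health_extract = [
    {"id": "row_000001", "region": "London", "age": 39, "moved_year": 1995,
     "condition": "incontinence"},
]

linked = [
    (m["name"], h["condition"])
    for m in marketing_list
    for h in health_extract
    if (m["region"], m["age"], m["moved_year"])
    == (h["region"], h["age"], h["moved_year"])
]
print(linked)  # [('A. Example', 'incontinence')]: a name joined to a condition
```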
This, of course, would be illegal. As Tim Kelsey and others are keen to point out, re-identifying or leaking data in any way would be a "criminal offence". But as this project lands, we're all becoming rapidly aware that incompetence, malice and creepiness around confidential data are policed with a worryingly light touch. Private investigators have little trouble obtaining confidential data from staff in the police force, banks and tax offices, for example.
Here's why: it took a long time for anyone to realise that Steve Tennison, a finance manager in a GP practice, had accessed patients' records on 2,023 occasions over the course of a year, although this was relevant to his work on only three occasions. The majority of records he snooped on belonged to young women: he repeatedly accessed the record of one woman he had gone to school with, and that of her son. The maximum penalty for this is a fine, with a ceiling of £5,000 in magistrates' courts. Tennison was fined £996, in December 2013. This is why the public feel nervous, and this is what we need to fix.
It's painful for me to write critically about a project like care.data, because I love medical data, and I know the good it can do. We have a golden opportunity in the UK, with 60 million people cared for in one glorious NHS. Opt-outs would destroy the data, and the growing calls for an opt-in system would be worse: opt-in killed people by holding back organ donation, and more than that, it would exacerbate social inequality around data, because the poorest patients, those most likely to be unwell, are also the least engaged with services, the least likely to opt in. They would become invisible.
So here's my advice: if you're thinking of opting out – wait. If you run care.data – listen. There are three things the government can do to rescue this project.
Firstly, make a proper announcement about what you will do in the six-month delay. You cannot rely on blind trust when it comes to sharing private medical records, so explain that you'll be coming back soon with a clear story. Sort out the governance framework, present unambiguous rules and principles explaining how data will be shared, list the specific clinical codes you're proposing to upload, then give real-world examples of the kind of access applications that would be approved, and the kind that would be rejected. This is fair, and sensible.
Secondly, show the public how lives are saved by medical research. This needs examples, from the vast archives of medical research on cancer, heart disease and more. Alongside that, give a clear nod to the small risks, and an explanation of how they will be mitigated. Never be seen to give false reassurance on these risks; if you do, you will lose patients' trust for ever.
Lastly, we need stiff penalties for infringing medical privacy, on a grand and sadistic scale. Fines are useless, like parking tickets, for individuals and companies: anyone leaking or misusing personal medical data needs a prison sentence, as does their CEO. Their company – and all subsidiaries – should be banned from accessing medical data for a decade. Rush some test cases through, and hang the bodies in the town square.
If the government do all this, they have a good chance of saving a vital data project, and permitting medical research that saves lives on a biblical scale to continue. If the government try to fudge – with half measures, superficial PR and false reassurance – then care.data will fail, and it might well bring down other sensible public health research with it. Lives are at stake. This cannot be left to the last minute in the six-month pause, and time is precious. It's February. If you're thinking of opting out, please don't. But mark your diary for May.

The Guardian