Wednesday 4 May 2016

In The Age We Live

Randeep Ramesh writes: 

When it was revealed that Google’s London-based company DeepMind would be able to access the NHS records of 1.6 million patients who use three London hospitals run by the Royal Free NHS trust – Barnet, Chase Farm and the Royal Free – it rang alarm bells. 

Not just because the British fiercely guard their intimate medical histories. Not just because Google, a sprawling octopus of a company with tentacles in all our lives, wishes to “organise the world’s information”. 

Not just because patients are unlikely to have consented to Google having this information. The issue for many is the intertwining of these concerns with the idea of artificial intelligence (AI). 

DeepMind is no ordinary company. It specialises in AI, developing software that exhibits something like intelligent reasoning.

Last year its engineers produced a research paper showing it had created a program that could replicate the work of a “professional human video games tester”. 

In March, Google’s DeepMind made history with a program that mastered Go, the 3,000-year-old Chinese board game long thought to be beyond current technology because of the sheer number of possible moves.

In what was considered a computing milestone, the company’s AlphaGo program beat the world Go champion 4-1. 

Now such a company has a database containing detailed, private, albeit anonymised, records of these people’s medical histories, including HIV status, past drug overdoses and abortions.

DeepMind says it needs the data to produce medical alerts for hospitals attempting to prevent acute kidney injuries. 

The fear for some is that DeepMind’s database could allow for much more than the original stated purpose. 

The public already knows that NHS patient privacy has not always been safeguarded: in 2014 the government was forced to halt, and then scale back, its proposals for a single English medical database over concerns that confidentiality could be put at risk.

DeepMind has not hidden its work with the NHS, announcing in February it was working with the health service to build an app called Streams to help doctors and nurses monitor kidney patients. 

What it did not reveal was the extent of its data haul, which encompasses historical patient records. 

Instead of data on the few thousand patients with kidney injuries, DeepMind got the records of every patient at all three hospitals.

That’s millions of confidential documents.

It says it needs the entire patient database to make Streams work. Backers of such databases claim that, with this data, powerful software can be built to diagnose diseases sooner.

New Scientist magazine obtained the data-sharing agreement between DeepMind and the NHS, which revealed just how much information was being made available.

The Google company’s skill is to discern complex patterns in huge quantities of data – and the NHS is a goldmine for such “deep learning”. 

In this treasure trove of data are logs of day-to-day hospital activity, such as records of the location and status of patients – as well as who visits them and when. DeepMind will also obtain patients’ pathology and radiology test results.

As well as real-time data, DeepMind has access to the historical records from critical care and accident and emergency departments. 

Crunching this information, so the theory goes, allows DeepMind to develop predictions based on data that is too broad in scope for any one person to assimilate and analyse. 

By comparing patient data, DeepMind might be able to predict that someone is in the early stages of a disease that has not yet become apparent. 

This is the medical holy grail: not treating a patient when they are ill, but treating them before they become ill.  
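To make the idea concrete, here is a minimal sketch of the kind of early-warning model described above. Everything in it is an illustrative assumption – the synthetic data, the chosen features (age and creatinine readings, a routine marker for acute kidney injury) and the simple scikit-learn classifier – not DeepMind’s actual method, which has not been published.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Illustrative only: synthetic patient records with age, baseline
# creatinine (mg/dL) and 48-hour creatinine change, a routine marker
# for acute kidney injury (AKI). None of this reflects real NHS data.
rng = np.random.default_rng(0)
n = 5000
age = rng.uniform(20, 90, n)
creatinine = rng.normal(1.0, 0.3, n)
delta_48h = rng.normal(0.0, 0.2, n)

# Toy ground truth: risk rises with age and with a rising creatinine trend.
risk = 0.2 * (age - 20) / 70 + 2.0 * np.clip(delta_48h, 0, None)
aki = (risk + rng.normal(0, 0.05, n) > 0.3).astype(int)

X = np.column_stack([age, creatinine, delta_48h])
X_train, X_test, y_train, y_test = train_test_split(X, aki, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")

# An alert system would flag patients whose predicted risk crosses a
# threshold, prompting clinical review before the injury develops.
new_patient = np.array([[78, 1.4, 0.35]])
print("predicted AKI probability:", model.predict_proba(new_patient)[0, 1])

A real deployment would use far richer longitudinal features and rigorous clinical validation; the sketch only shows the shape of the “predict, then alert” approach – and why its appetite for complete record pools is so large.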

Utopian? Perhaps. Behind the promise of these technologies lies the crux of the dilemma of the age we live in.

Google, Facebook and others feed on the fact that we suspend our privacy rights in return for new technology built with our data.

Like Apple, Google is building a reputation in medical apps. It is also true that the use of machine learning in medicine by academics is nothing new. 

However, this data is being passed to, and controlled by, one of the world’s biggest and most powerful companies.

It raises questions over whether it might quickly become the biggest player, a de facto monopoly, over NHS health analytics. 

AI also represents something new, a promise that a program could improve itself – and very quickly surpass human intellect. 

This is the so-called “intelligence explosion” – a point where humanity courts its own destruction. 

We are some way off this. No one has built a machine that respects social and ethical norms, even at the expense of its goals. 

It’s difficult enough to get humans to do that. Some may say such extrapolation is ridiculous. 

After all Tay – the “intelligent” Twitter chatbot from Microsoft – lasted a few hours until she “learned” to become a racist, genocidal tweeter and was killed off. 

However, as Elon Musk – an early investor in DeepMind – has said, it was worries over “Terminator” technology that drove him to warn about its dangers.

For perhaps sound commercial reasons, DeepMind operates under the radar. But this often raises more questions than answers.

Google’s AI ethics board, established when Google acquired DeepMind in 2014 for £400m, remains one of the biggest mysteries in technology, with both companies refusing to reveal who sits on it.

Artificial intelligence needs data to learn. Hence the sucking up of all those patient records by Google’s DeepMind.

So why the secrecy? 

If patients had been told what was going on and why, they could make informed choices. 

If they think the potential risks of Google dominance over a new critical technology for the NHS are outweighed by the benefits, then let’s have that debate. 

But if the company does not explain and carries on in secret, the public will rightly not go along with such plans.
