Adopting AI in Healthcare: What barriers need to be overcome?

By Mikael Huss, Data Scientist at Peltarion.

AI in healthcare is finally moving beyond speculation and is set to offer possibilities that extend well past providing diagnostic assistance to doctors. According to an Accenture report, the AI healthcare market is expected to reach $6.6 billion by 2021, a compound annual growth rate of 40 percent.


The application of deep learning in healthcare allows time-consuming tasks to be automated, such as medical image interpretation, gathering relevant medical records and even drug discovery. This could give doctors more time to spend with patients and help to reduce inaccuracies caused by fatigue and human error.

However, it is still early days, and a number of challenges need to be overcome before AI can be implemented successfully.

The Regulation Ratification Process

For AI to be used for healthcare purposes in Europe, organisations face the challenge of obtaining a CE marking, which allows products to be sold within the European Economic Area (EEA) as long as they conform to health, safety and environmental protection standards. AI products also need to be classified according to the Medical Device Directive, as explained very well in this blog post by Hugh Harvey. In practice, this means that healthcare firms looking to roll out an AI project must prove an algorithm’s intended use, confirm its class of medical solution – stand-alone AI algorithms (algorithms that are not integrated into a physical medical device) are typically classified as “Class II” medical devices – and document extensive risk management, testing and development procedures to ensure quality standards are met.

Additionally, the General Data Protection Regulation (GDPR) has introduced several new privacy requirements that must be followed when handling Personally Identifiable Information (PII). In some cases, however, these criteria are not clear-cut. For example, some degree of transparency in automated decision-making will be required, but it can be hard to tell from the regulation what level of transparency will be enough. Other issues are likely to arise from the requirement for informed consent. Organisations need to monitor the latest developments around GDPR, learn from fines such as the recent €400,000 penalty handed to a Portuguese hospital, and ensure that consent is given when handling personal data.

The Dark Side of Transparency

Despite the difficulties in establishing regulations, bringing transparency to medical AI is crucial: a doctor needs to be able to understand and explain why a certain procedure was recommended by an algorithm. This calls for more intuitive and transparent prediction-explanation tools. What makes the issue even more pressing is that there is often a trade-off between predictive accuracy and model transparency, especially with the latest generation of AI techniques built on neural networks. An interesting viewpoint on transparency in algorithmic decision-making is given in the paper Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR, co-written by a lawyer, a computer scientist and an ethicist.
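
To make this concrete, the sketch below shows one common way of producing a counterfactual explanation of the kind the paper discusses: starting from a patient’s feature vector, search for the smallest change that would flip the model’s prediction. It is a minimal illustration in PyTorch, assuming a hypothetical trained classifier passed in as model; it is not the paper’s exact method.

```python
import torch
import torch.nn.functional as F

def counterfactual(model, x, target_class, steps=500, lr=0.05, lam=0.1):
    """Search for a minimal change to the feature vector x that makes
    `model` (a hypothetical trained classifier returning class logits)
    predict `target_class`."""
    x_cf = x.clone().detach().requires_grad_(True)
    optim = torch.optim.Adam([x_cf], lr=lr)
    target = torch.tensor([target_class])
    for _ in range(steps):
        optim.zero_grad()
        logits = model(x_cf.unsqueeze(0))   # shape: (1, n_classes)
        # The classification loss pushes the prediction towards the
        # target class; the L1 penalty keeps the counterfactual close
        # to the original input.
        loss = F.cross_entropy(logits, target) + lam * (x_cf - x).abs().sum()
        loss.backward()
        optim.step()
    return x_cf.detach()

# The differences between x and the returned counterfactual can be read
# as: "had these values been different, the recommendation would change."
```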

 

The Realisation of AI Improving Doctors

Doctors treat patients based on learned knowledge, previous experience, intuition and problem-solving skills, which means that getting them to consider suggestions from an automated system can be difficult. To overcome this challenge, some element of AI literacy needs to be introduced into medical curricula so that AI is perceived not as a threat to doctors but as an aid and amplifier of medical knowledge. If AI is introduced in a way that empowers human workers rather than displacing them, it could free up their time for more meaningful tasks or free up resources to employ more staff.

Trouble with Technical Support

Deep neural networks have driven remarkable breakthroughs in AI over the last five to seven years. However, the tooling and infrastructure needed to support these techniques are still immature, and few people have the technical competence to deal with the whole range of data and software engineering issues involved. In medicine especially, AI solutions will often face problems related to limited data and variable data quality. Predictive models will need to be re-trained as new data comes in, and teams will need to keep a close eye on changes in data-generation practices and other real-world issues that may cause data distributions to drift over time. If several data sources are used to train models, additional types of “data dependencies”, which are seldom documented or explicitly handled, are introduced.
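
As an illustration of what watching for drift can look like in practice, the minimal sketch below compares newly collected data against a reference sample from training time using a two-sample Kolmogorov–Smirnov test. It assumes SciPy; the arrays and feature names are placeholders, not any particular system’s data.

```python
from scipy.stats import ks_2samp

def drifted_features(reference, incoming, feature_names, alpha=0.01):
    """Flag features whose distribution has shifted between the data the
    model was trained on (reference) and newly collected data (incoming).
    Both arrays have shape (n_samples, n_features)."""
    flagged = []
    for i, name in enumerate(feature_names):
        statistic, p_value = ks_2samp(reference[:, i], incoming[:, i])
        if p_value < alpha:   # the two distributions differ significantly
            flagged.append((name, p_value))
    return flagged

# A non-empty result is a signal to review upstream data-generation
# practices and, if necessary, re-train the model on recent data.
```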

 

In medical applications, transfer learning – taking a pre-trained model and adapting it to one’s specific use case – is often applied, but this introduces a “model dependency”: the underlying model may be retrained or change its configuration over time. The large amount of “glue code” typically needed to hold an AI solution together, combined with these potential model and data dependencies, makes it very difficult to run integration tests on the whole system and make sure that the solution is working properly at any given time.
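
For concreteness, this is roughly what the transfer-learning pattern looks like in code. The sketch below uses Keras with an ImageNet-pretrained backbone; the input size, task head and commented-out training data are placeholder assumptions rather than a specific medical solution.

```python
import tensorflow as tf

# Pre-trained backbone: an external "model dependency" whose weights may
# change if the upstream model is retrained or reconfigured.
base = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the backbone for initial fine-tuning

# New task-specific head, e.g. for a binary finding in a medical image.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=5)  # hypothetical dataset
```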

 

Make AI Accessible through Operational AI

The advancement of AI in healthcare will help deliver more timely medication and improved patient care at a lower cost – it could even help us find cures for diseases such as HIV and various kinds of cancer. But to reap these benefits, healthcare organisations need to get their data in order, allow key stakeholders to review AI projects and be able to audit data usage efficiently.

By adopting an operational AI platform, organisations can handle the entire data modelling process – including software dependencies, data and experiment versioning, and deployment – from a single place. This ensures greater scalability, visibility and collaboration from the outset, helps mitigate the challenges of regulation and privacy, and offers greater transparency throughout AI projects, all while creating AI solutions that address real problems in healthcare faster. AI developers can also use these capabilities to avoid critical roadblocks such as software library dependencies, inconsistencies in input data processing steps and the inadvertent introduction of bugs into production code.
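
To illustrate that last point, an end-to-end test over the full preprocessing-plus-model chain is one simple safeguard against such roadblocks. The sketch below is a hypothetical pytest-style example in which a toy scikit-learn pipeline stands in for a real solution’s glue code.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Toy stand-in for a real preprocessing + model pipeline.
rng = np.random.default_rng(seed=0)
X = rng.normal(size=(100, 8))
y = rng.integers(0, 2, size=100)
pipeline = Pipeline([("scale", StandardScaler()),
                     ("clf", LogisticRegression())]).fit(X, y)

def test_pipeline_end_to_end():
    """Exercise the whole chain at once, so that a changed library
    version, an altered preprocessing step or a retrained upstream
    model that silently breaks predictions is caught before production."""
    sample = X[:1]
    proba = pipeline.predict_proba(sample)[0, 1]
    assert 0.0 <= proba <= 1.0   # output contract: a probability
    # A fixed input must give a reproducible prediction.
    np.testing.assert_allclose(
        proba, pipeline.predict_proba(sample)[0, 1], rtol=1e-9)
```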

 

The potential of AI is plain to see, and it will dramatically transform healthcare for the better in the years to come. However, advances still need to be made in many areas before AI solutions can be deployed in a safe and ethical way. Regulation, privacy and sociocultural aspects need to be addressed by society as a whole. By taking an operational approach, healthcare organisations can get a head start with AI projects, mitigating some of these challenges early on and benefiting from AI technology faster.
