Striking the delicate balance of AI regulation and innovation

By James Fisher, Chief Strategy Officer, Qlik.


As AI continues to advance, navigating the balance between regulation and innovation will have a huge impact on how successful the technology can be.

The EU AI Act came into force this summer, a move in the right direction towards classifying AI risk. At the same time, the Labour government has set out its intention to focus on technology and innovation as key drivers of the UK economy. For example, it plans to create a Regulatory Innovation Office that will help regulators update existing regulation more quickly as technology advances.

In the coming months, the focus should be on prioritising both regulation and innovation and ensuring the two work hand in hand. We need a nuanced framework that ensures AI is deployed ethically, drives market competitiveness, and lets regulation flex to keep encouraging advancement among British organisations and businesses.

The UK tech ecosystem depends on it

When it comes to setting guardrails and providing guidance for companies to create and deploy AI in a way that protects citizens, there is a real risk of overregulation. Legislation is vital to protect users and individuals alike, but too many guardrails can stifle innovation and stop the British tech and innovation ecosystem from being competitive.

And it’s not just about existing tech players facing delays in bringing new products to market. Too much regulation can also create a barrier to entry for new and disruptive players: high compliance costs can make it almost impossible for startups and smaller companies to develop their ideas. Indeed, lowering these barriers will be essential to maintaining a strong startup ecosystem in the UK – currently the third-largest globally. AI startups lead the way for British venture capital, having raised $4.5 billion in VC investment in 2023, and any regulation must allow this to continue.

The public interest and demand for better regulations

Regulatory talks often focus on the impact regulation will have on startups and medium-sized companies, but larger institutions are also at risk of feeling the pressure. Innovation and the role of AI are critical to improving the experience of public services. In healthcare, for example, where sensitive aspects of people’s lives are central to the service, having the correct regulatory framework in place to improve productivity and efficacy can have a huge impact.

In addition to the public sector, the biggest potential for the UK lies in organisations using AI responsibly to compete and innovate themselves. FTSE companies are already considering how they can leverage AI to improve their offerings and gain a competitive edge. In a nutshell, while regulation is important, it shouldn’t be so stringent that it becomes a barrier to new innovation.

Learning from existing regulation

We don’t yet have a wealth of AI regulation to learn from, and the global regulatory landscape is set to vary dramatically in approach. Whilst it is encouraging that the EU has already put its AI Act in place, we need to recognise that there is still much to learn.

In addition to the risk of creating a barrier to entry for newcomers and slowing innovation through overregulation, there are other lessons to take from the EU AI Act. Where possible, concepts should be clearly defined so there is limited room for interpretation. Specificity and clarity are always essential, but particularly in regulation. Broad, vague definitions and scopes of application inevitably lead to uncertainty, which in turn makes compliance requirements unclear and forces businesses to spend too much time deciphering them.

So, what should AI regulation look like?

There is no formula for perfect AI regulation, but there are three elements it should certainly focus on.

The first focus needs to be on protecting individuals and diverse groups from the misuse of AI. We need to ensure transparency when AI is used, which in turn will limit the number of mistakes and biased outcomes, and when errors are still made, transparency will help rectify the situation.

It is also essential that regulation works to prevent AI from being used for illegal activity, including fraud, discrimination, document forgery and the creation of deepfake images and videos. Companies over a certain size should be required to have an AI policy in place that is publicly available for anyone to consult.

The second focus should be protecting the environment. Given the energy needed to train AI models, store the data and deploy the technology once it’s ready for market, AI innovation comes at a great cost to the environment. It shouldn’t be a zero-sum game, and legislation should nudge companies to create AI that is respectful of our planet.

The third and final key focus is data protection. Thankfully, there is already strong regulation around data privacy and management: the Data Protection Act in the UK and GDPR in the EU are good examples. AI regulation should work alongside this existing data regulation and protect the huge steps that have already been taken.

Striking a balance

AI is already one of the most innovative technologies available today, and it will only continue to transform how we work and live. Creating regulation that allows us to make the most of the technology while keeping everyone safe is imperative. With the EU AI Act already in force, there are many lessons the UK can learn from it when creating its own legislation, such as avoiding broad definitions that are too open to interpretation.

It is not an easy task, and I believe the new UK government's toughest job around AI and innovation will be striking the delicate balance between protecting citizens from the potential misuse or abuse of AI and enabling innovation that fuels growth for the UK economy.
