The ‘sweeping legislation’ will affect UK companies providing AI services to the EU, meaning Britain’s professional and ethical standards need to be scaled up rapidly, according to BCS, the Chartered Institute for IT.
The proposed regulations cover AI that affects people’s health, safety and rights, and include a ban on the use of the technology to track citizens’ behaviour.
Dr Bill Mitchell OBE, Director of Policy at BCS, said: “The EU has realised that AI can be of huge benefit or of huge harm to society, and has decided to regulate on standards for the design, development, and adoption of AI systems to ensure we get the very best out of them.
“There will be a huge amount of work to do to professionalise large sections of the economy ready for this sweeping legislation.
“These ambitious plans to make AI work for the good of society will be impossible to deliver without a fully professionalised AI industry. Those with responsibility for adopting and managing AI, as well as those designing and developing these systems, will need to ensure their systems comply with these new regulations.
“The IT profession – and particularly those involved in AI – will in future need to evidence that they have behaved ethically, competently and transparently. In principle this is something we should all welcome, and it will help restore public trust in AI systems that are used to make high-stakes decisions about people.”
The new EU AI legislation also sets Europe on a different path to the US and China, directly prohibiting the use of AI for indiscriminate surveillance and social scoring, BCS added.
It would establish a series of ‘use cases’ for AI, including education, financial services and recruitment, and designate them as high risk. For these high-stakes use cases, regulation would include mandatory third-party audits of both the data used and quality management systems.
The new rules are likely to take some years to become law, but organisations need to prepare for them now, especially in the areas of staff training and development, BCS said.