In February this year, the European Commission released a White Paper on Artificial Intelligence, setting out its strategy for the development and deployment of AI in the EU.
Getting this right is vital. The stakes are high. AI could add up to €2.7tn to the EU’s combined economic output by 2030 – but the international competition is fierce, with Europe currently poised to reap fewer of AI’s benefits than China and the US. A strategy that encourages AI development and uptake will be crucial in allowing the EU to seize its opportunities.
The White Paper calls for:
- An ecosystem of excellence, encouraging investment in the research, skills and infrastructure to support the development and uptake of AI across the EU
- An ecosystem of trust, focussed on a regulatory framework for AI that addresses risks, builds public trust, and increases business certainty. This includes adjustments to the existing EU legislative framework and the introduction of new AI-specific regulation.
The CBI’s response focusses on the ecosystem of trust, given the enormous impact governance will have on the development and adoption of AI in Europe.
What is the CBI calling for?
Our response to the White Paper highlighted the need for clear, proportionate, and targeted regulation that supports AI uptake by improving trust and offering greater clarity for businesses. Below are the top three things that we’ve called for:
- The EU must continue to coordinate closely with international partners like the UK, including working towards global standards
Businesses emphasise the importance of standardisation to promote large-scale adoption of AI. It’s vital that the EU works with other partners in international fora. Fragmentation that forces firms to comply with numerous different obligations will harm global AI development, and SMEs in particular.
The UK has Europe’s most developed AI ecosystem – but as we leave the EU, the vibrancy of AI in both places could suffer. For example, it might be more difficult to collaborate on research, to attract and retain AI talent, or for AI innovations to reach each other’s markets (to the detriment of European and British consumers and businesses). Working closely together is therefore vital, so that AI flourishes in both environments while legal and ethical concerns are addressed.
- The EU must avoid both under- and over-regulation of AI
AI can raise legal and ethical difficulties and uncertainties, from clarifying liability in complex supply chains to mitigating bias in data and algorithms. Regulation is one important tool to tackle these challenges – but a range of responses is required, with many businesses already taking action.
The White Paper identifies areas for regulation to be improved, with a particular focus on product safety and liability law. Though we support appropriate adjustments to the relevant laws, the existing rules that impact AI must be carefully assessed through further stakeholder consultation. Any additional regulatory requirements that are introduced in the future must be proportionate, targeted, and feasible. Under-regulation could risk safety and slow down adoption by failing to address uncertainty, while over-regulation will harm AI development in the EU as developers avoid disproportionate European requirements.
- The EU must take a targeted, risk-based approach
Positively, the White Paper starts to outline a risk-based approach that the EU could take. Certain ‘high-risk’ AI applications would face additional regulatory requirements.
To determine whether an AI application is high-risk, the Paper suggests first considering the sector in which it will be deployed, with more serious risks deemed more likely to arise in certain sectors. Secondly, the application itself will be assessed, based on its impact on the affected parties.
Though this is a positive first step, we suggest that a more nuanced, targeted approach to risk would be valuable. For example, we argue that the EU should assess risk based on whether applications could pose a risk of injury or significant material damage – rather than immaterial damage, which could lead to medium-risk AI being wrongly classified as high-risk and subjected to disproportionate additional regulation.
What happens now?
Continuing cooperation and collaboration between the UK and EU will be vital as both seek to maximise the opportunities of AI. The CBI will continue to campaign on members’ behalf in the EU and the UK, to ensure that new rules aimed at AI are feasible and proportionate, enabling innovation and upholding safety.
Get in touch with Khush if you’d like to read our full response, join our AI Working Group, or find out more about the CBI’s work on AI.