Are Business and Government Diverging on AI?

As the UK government seeks to expand its AI Safety Institute just as OpenAI disbands its team on long-term AI safety, we look at the gap between government and business approaches to AI safety

The UK Government has announced that its AI Safety Institute (AISI) will open an office in the US. Due to open in San Francisco in the summer of 2024, it will be the institute's first overseas outpost.  

Its stated aim is to tap into the wealth of tech talent available in the Bay Area, engage with the world’s largest AI labs headquartered in London and San Francisco, and cement relationships with the US to advance AI safety in the public interest.  

This announcement comes amid the UK’s push to position itself as a global authority on responsible AI use. 

In 2023, then Prime Minister Rishi Sunak established the AISI ahead of the first global summit on AI, the AI Safety Summit, which saw 28 national governments sign a declaration to promote AI safety. 

With all this talk of governments focusing on AI safety, is business also showing the same level of concern for safety as they push to implement AI into their operations?

Business v government position

As is patently obvious at this point, AI holds great potential for business. An Oliver Wyman Forum study estimates that GenAI could add up to US$20 trillion to global GDP by 2030 and save 300 billion work hours a year. 

Such incentives have businesses across sectors racing to incorporate AI, in one form or another, into their operations. A Tata study found that 86% of executives already deploy AI to enhance revenue. 

Yet a 2024 Bank of Ireland report highlighted that most businesses have no AI governance policies in place. 

Microsoft allegedly ignored safety problems an engineer raised about its AI image generator, and even trailblazer and self-described practitioner of ‘responsible AI’ OpenAI recently disbanded its team focused on the long-term risks of AI, just a year after it was formed.

The AISI, meanwhile, recently published a selection of results from its safety testing of five publicly available advanced AI models.

Speaking on the announcement, AI Safety Institute Chair Ian Hogarth said: “Our evaluations will help to contribute to an empirical assessment of model capabilities and the lack of robustness when it comes to existing safeguards.”

AI disagreement

Following the AI Safety Summit, businesses had hoped the UK would announce a regulatory framework in which to anchor their AI safety concerns, as the EU has done. Instead, the UK’s recently released AI bill takes a much lighter-touch approach.

The EU AI Act will be the world’s first comprehensive law regulating AI. It takes a risk-based approach, classifying each application into three categories: unacceptable risk, high risk, and limited, minimal, or no risk. The restrictions placed on an AI system vary according to the risk level it is assigned.

Yet executives from many of the EU’s largest companies – including Renault, Heineken, Siemens, and Airbus – signed a letter warning the European Commission that the drafted legislation “would jeopardise Europe’s competitiveness and technological sovereignty without effectively tackling the challenges we are and will be facing.” 

The next chance for government and industry to meet at broad scale to discuss AI safety will be the second AI Safety Summit. Whether a consensus can be reached there remains to be seen.

Source: AI Magazine

