COMMENTARY

AI won’t be safe until we rein in Big Tech
Georg Riekeles, Max von Thun

Date: 22/11/2023
The chaos at OpenAI perfectly illustrates why the EU needs to impose strict regulatory responsibilities on big AI model providers, instead of relying on self-regulation and goodwill from companies whose accountability is highly uncertain.

Earlier this month, British Prime Minister Rishi Sunak convened leading nations, AI companies and experts at Bletchley Park – the historic home of Allied code-breaking during WWII – to discuss how the much-hyped technology can be deployed safely.

This would-be first international AI Safety Summit was rightly criticised on a number of grounds, including prioritising input from Big Tech over civil society voices and fixating on far-off existential risks rather than tangible everyday harms. But the summit’s biggest failure – itself a direct result of those biases – was that it had nothing meaningful to say about reining in the dominant corporations that pose the biggest threat to our safety.

The summit’s key “achievements” consisted of a vague joint “communiqué” warning of the risks from so-called “frontier” AI models and calling for “inclusive global dialogue”, plus an (entirely voluntary) agreement between governments and large AI companies on safety testing. Yet neither of these measures has any real teeth, and what’s worse – through their emphasis on “frontier” models – they give powerful corporations a privileged seat at the table when it comes to shaping the debate on AI regulation.

Big Tech is currently promoting the idea that its exclusive control over AI models is the only path to protecting society from major harms. In the words of an open letter signed by 1,500 civil society actors, accepting this premise is naïve at best, dangerous at worst.

Governments that are truly serious about ensuring that AI is used in the public interest would pursue a very different approach. Instead of noble-sounding statements of intent and backroom deals with industry, tough measures are needed to target corporate power. Two areas in particular are key: forceful enforcement of competition policy and tough regulatory obligations for dominant gatekeepers.

As it stands, a handful of tech giants have used their collective monopoly over computing power, data and technical expertise to seize the advantage when it comes to large-scale AI foundation models. Smaller companies without access to these scarce resources find themselves signing one-sided deals with (or being acquired by) larger players to gain access to them. Google’s takeover of DeepMind and OpenAI’s $13 billion partnership with Microsoft are the best-known examples but not the only ones.

The tech giants are also dominant in many other markets (including search engines, cloud computing and browsers), which they can exploit to lock users into their own AI models and services. As ever more people gravitate towards – and provide data to – a handful of AI models and services, network effects and economies of scale are set to magnify this considerable initial advantage further.

In the jargon of economists, this is a market that is prone to tipping. A concentrated market for foundation models would allow a handful of dominant corporations to steer the direction and speed of AI innovation and enable them to exploit, extort and manipulate the many businesses, creators, workers and consumers dependent on their services and infrastructure.

In short, the tech winners of yesterday are now extending and reinforcing their monopolistic hold through AI. Governments should not be complicit in that. Antitrust authorities may have failed to prevent digital technologies from being monopolised in the past, but competition policy – if enforced effectively – has a major role to play in tackling AI concentration.  

Competition authorities must use their existing powers to police takeovers, cartels and monopolistic conduct to prevent a handful of digital gatekeepers from running amok in their quest for ever-greater profits. This now requires investigating and, where necessary, breaking up anti-competitive deals between Big Tech and AI startups, and preventing digital gatekeepers from leveraging their control over dominant platforms, such as in search and cloud computing, to entrench their hold on AI.

Yet it is also important to acknowledge that even in a competitive market, the number of foundation model providers will be limited, given the resources required to train and deploy such models. This is where regulation must step in to impose unprecedented responsibilities on dominant companies. 

As AI gains a bigger role in decision-making across society, safety, reliability, fairness and accountability are critical. AI systems can perpetuate biases from their underlying data sets or training and generate plausible-sounding but false responses known as ‘hallucinations’. If deployed by those with malicious intent, they can also be used to create convincing propaganda, spy on workers, and manipulate or discriminate against individuals.

These harms are particularly grievous when they stem from a foundation model because of the cascading effects into downstream uses. Biases, errors, discrimination, manipulation or arbitrary decisions, therefore, present singular risks at the foundation level.

The European Union is currently vying to become the first global authority to put forward binding AI rules imposing obligations on different uses of AI according to risk. However, the EU AI Act is struggling to measure up to the foundation model threat.

Currently, EU legislators are considering imposing a new set of tiered obligations on foundation model providers. Among other things, these providers could be required to share information with regulators on training processes (including the use of sensitive and copyrighted data) and submit to auditing of systemic risks.

This will go some way towards mitigating the risks of AI. But above and beyond this, given their central role in the AI ecosystem, the dominant corporations providing large-scale models must be given strict overarching responsibilities to behave fairly and in the public interest.

One way of achieving this, building on ideas developed by Jack Balkin at Yale Law School and Luigi Zingales at the University of Chicago, would be to impose a certain number of fiduciary duties on general-purpose AI (GPAI) providers. A fiduciary duty is the highest standard of care in law and implies being bound both legally and ethically to act in others’ interests.

An alternative or complementary approach would entail designating digital gatekeepers as public utilities (or “common carriers” in U.S. terminology) mandated to treat their customers fairly and ensure operational safety. This legal status could conceivably be applied to both the foundation models upon which AI applications are built and the cloud computing infrastructure hosting them. The EU’s Digital Markets Act, which imposes pro-competition obligations on dominant tech firms, is one potential avenue for pursuing this.

What is abundantly clear is that we cannot trust self-regulation by individual companies, let alone Big Tech, to guarantee safety and openness in AI. Only by tackling monopoly power and ensuring that power comes with responsibility can we realise the promise – and mitigate the risks – of this emerging technology.


A version of this Commentary was first published in The Guardian.


Georg Riekeles is Associate Director and Head of Europe’s Political Economy programme at the European Policy Centre.

Max von Thun is Director of Europe and Transatlantic Partnerships at the Open Markets Institute.

 
The support the European Policy Centre receives for its ongoing operations, or specifically for its publications, does not constitute an endorsement of their contents, which reflect the views of the authors only. Supporters and partners cannot be held responsible for any use that may be made of the information contained therein.





Photo credits:
Sebastien Bozon / AFP
