AI Act falters over foundation model approach
Previous meetings appeared to have reached common ground on a tiered approach, with the most powerful models subject to tighter controls. A number of member states, however – France, Germany and Italy in particular – have indicated they would prefer little or no specific regulation of foundation models. Parliamentary representatives have made clear that this is a red line they are not willing to concede. Critics, meanwhile, have suggested the disagreement is a natural consequence of attempting to regulate a technology which is still evolving, something the current UK government has argued explicitly in mapping out its own ‘light touch’ approach.
The EU has been deliberating over AI regulation for a number of years, and so has long appeared to hold ‘first mover advantage’ in defining a template for global regulation of AI technology. A failure to reach agreement would therefore represent not only a failure of process but a broader concession of ground to others, most notably the US, where the recent AI Executive Order and early draft AI legislation now emerging from Congress have started to flesh out proposals on AI regulation.
Breakdowns in trilogue, as political factions and EU member states look to apply last-minute leverage on issues to which they (or their domestic business interests) are sensitive, are hardly unique, and it’s important to emphasise that the overwhelming likelihood remains that agreement will be reached, allowing the AI Act to be approved over the remainder of 2023. There does, however, remain a remote possibility that the Act as a whole could fail, given the limited remaining term of the current Parliament. To say it’s a critical moment in the legislation’s progress therefore, for once, doesn’t seem an exaggeration.