As European Council Adopts AI Act Position, Questions Remain on GPAI

An abstract European Union flag of diffused gold stars linked by golden neural pathways on a deep blue mottled background.

“EU Flag Neural Network” by Creative Commons was cropped from an image generated by the DALL-E 2 AI platform with the text prompt “European Union flag neural network.” OpenAI asserts ownership of DALL-E generated images; Creative Commons dedicates any rights it holds to the image to the public domain via CC0.

As we’ve discussed before, the European Union has been considering a new AI Act, which would regulate certain uses of artificial intelligence (AI). In particular, it seeks to ban certain uses of AI, such as broad-based real-time biometric identification for law enforcement in public places, and to ensure that certain precautions are taken before deployment of uses deemed “high-risk.”

Last week, the European Council adopted its position on the Act. This is an important milestone, and the next step is for the European Parliament to form a common position, expected in early 2023, so that the positions of those two institutions and the European Commission can then be negotiated into a final, joint position during the so-called “trilogue” phase of the process.

While the AI Act covers a number of important issues, CC has focused on two aspects of AI regulation. First, while copyright’s relationship to AI is not core to this proposal, it is important that policymakers understand that appropriate limits on copyright are necessary to serve the public interest. Originality and human authorship must remain essential to the granting of copyright or other related exclusive rights over creative works, and content generated with minimal human input by AI does not meet those standards. What’s more, training AI on copyrighted works should not be limited by copyright law. Happily, the AI Act does not interfere with these basic premises.

Second, we’ve focused on ensuring that the AI Act takes a tailored approach to general purpose artificial intelligence (GPAI) and, in particular, the sharing and use of open source tools. We believe this is particularly important to get right, given the fact that GPAI can be an input into myriad different uses, including AI tools for generating content like GPT-3 and Stable Diffusion.

Because GPAI is by definition a multi-purpose tool, it may be impractical for GPAI developers to implement risk management in ways suited to narrowly defined, “high risk” AI uses. Imposing the same rules on GPAI creators as on high-risk deployers may create significant barriers to innovation and drive market concentration, leaving AI development to occur only within a small number of large, well-resourced commercial operators. When it comes to open-source GPAI, overly broad regulation may be particularly harmful: open source is critical to lowering barriers to AI development, and open-source developers have even less ability to control how their works are shared and reused.

While there is a wide range of views on how best to address GPAI, the best approach, we believe, would be for legislators and regulators to focus on ensuring that GPAI creators provide information to downstream users so that those users can comply when implementing “high risk” uses. When GPAI creators and users have an ongoing relationship, the AI Act could require ongoing information exchange and cooperation in service of compliance, with requirements tailored to the different actors’ roles. When it comes to open source, the Act should take a proportionate approach, ensuring developers make information available to downstream users in support of their compliance, but otherwise not regulating open-source software or creating the expectation that open-source developers can control or be responsible for downstream use.

On this GPAI topic, the Council’s approach appears to go further, while still leaving many questions open. The Council’s text applies many of the risk management requirements for “high risk” AI uses to all GPAI services. At the same time, how these requirements will be specifically applied is left to further “implementing acts” (secondary legislation) that the European Commission will craft in the future, and the text recognizes the need to tailor requirements in a manner proportionate to GPAI’s distinct characteristics. While we appreciate the complexity of drafting future-proof primary legislation of this nature, the resulting delay in clarity regarding the technical application of the GPAI provisions may itself constitute a significant barrier to innovation and sharing. To the extent possible, this should be avoided.

We’re glad to see that further care will be taken on this subject, although this approach leaves considerable uncertainty for developers in the meantime. With that in mind, we urge the Parliament to continue refining the proposal to ensure responsibilities are allocated in a way that supports GPAI development, especially by open-source developers, so that AI’s potential can benefit society.

Posted 13 December 2022