In this new year edition of the Neural Network, we look at key AI developments from December and January.
- In regulatory and government updates: The Prime Minister has announced the details of measures intended to put the UK at the forefront of AI development and make AI a key engine of economic growth; the UK government has opened a consultation aimed at resolving the impasse between AI developers and copyright holders over the use of copyrighted content to train AI models; the European Data Protection Board (“EDPB”) has published a highly anticipated opinion on personal data use and data protection compliance in the context of AI development; and a minister has signalled that the UK will introduce new draft legislation early this year to combat AI-generated “deepfake” images depicting the sexual abuse of children.
- In AI enforcement and litigation news: OpenAI has been fined €15 million by the Italian data protection authority over data protection failings connected with its rollout of ChatGPT in the country, with OpenAI stating that it plans to appeal the fine.
- In technology developments and market news: Apple has come in for criticism from the BBC and other news organisations over false and misleading summaries of news articles displayed as notifications to iPhone users.
More details on each of these developments are set out below.
Regulatory and government updates
Keir Starmer announces drive to “turbocharge” AI in the UK
Prime Minister Keir Starmer has announced a new drive to accelerate the development of the UK’s AI economy and to promote broader and more rapid AI adoption across the public and private sectors.
Following the July 2024 election, the Labour Government commissioned Matt Clifford CBE, chair of the UK’s Advanced Research and Invention Agency and a central figure in the 2023 AI Safety Summit hosted by the UK, to develop a series of recommendations. Those recommendations are set out in an AI Opportunities Action Plan; the government has accepted all of them and will now progress them.
The plan includes setting up a series of new “AI Growth Zones”, which will streamline and accelerate the planning process for data centre construction. The first of these zones will be in Culham, Oxfordshire, and more will be announced “in the summer”. Alongside these measures, an array of data centre developers has announced a total of £14bn of new private-sector investment in data centres.
The plan also commits to a “twentyfold” increase in public sector computing capacity. To that end: (i) construction will commence on a new national supercomputer; (ii) a new cross-departmental AI Energy Council will be created, jointly chaired by the Secretaries of State for Energy and for Science, Innovation and Technology; and (iii) a new “digital centre of government” will be set up within the Department for Science, Innovation and Technology (“DSIT”) to promote widespread AI uptake across the public sector and public services.
Further announcements, due to follow later this year, include the unveiling of the government’s overarching “Digital and Technology Sector Plan”, of which these AI announcements will form a key part.
UK government launches much-anticipated consultation on proposed reforms to UK copyright law to address artificial intelligence
The UK Government has launched a renewed consultation on proposed reforms to UK copyright law which are intended to resolve the current impasse between the AI industry and rightsholders over the use of copyright works to train AI models.
The consultation, which will run until 25 February 2025, is the latest attempt by the Government to address the challenges and limitations raised by the UK’s existing copyright laws as they are applied to the rapidly evolving technologies we know as AI. The Government first sought views on AI and IP back in 2020, as part of a general call for views on how the wider UK intellectual property regime applies to AI. Thereafter, in 2021, the UK Intellectual Property Office (“UKIPO”) sought views on the scope of the UK’s text and data mining exemption set out in s.29A of the Copyright, Designs and Patents Act 1988 (“CDPA 1988”), leading to a proposal in 2022 that the limited exemption be extended to cover commercial activities. That proposal was ultimately not implemented, and the subsequent efforts to develop a voluntary code of practice between industry and rights holders were unsuccessful.
The new consultation is, once again, seeking views on a proposed new exemption to copyright infringement for text and data mining which would go beyond the limited exemption which already exists under s.29A of the CDPA 1988. The proposed new exemption would allow the commercial use of copyright works to train AI models but would also provide rights holders with the ability to ‘opt out’ and reserve their rights in copyright works. The proposal also introduces a series of transparency requirements for AI providers in relation to the training of their AI models and is intended to strike a balance between fostering AI innovation and protecting the UK’s IP-rich creative industries. If adopted, it would align UK law more closely with the position as it currently stands in the EU under Article 4 of the Directive on Copyright in the Digital Single Market.
The consultation also seeks views on several other copyright issues, including whether the protection afforded to computer-generated works under s.9(3) of the CDPA 1988 should be maintained in its current form, or at all. The recommendations following the UKIPO’s 2021 consultation on the same issue concluded that no changes were necessary to the existing law. Whether the feedback will be the same in 2025, now that the development, adoption and use of AI is widespread, remains to be seen.
It will be interesting to see how the debate around these issues has evolved as they are addressed for the second time around. Interested parties have until 25 February 2025 to prepare responses and to be part of shaping our copyright regime for the future.
European Data Protection Board issues opinion on using personal data in AI development
The EDPB has issued its highly anticipated opinion “on certain data protection aspects related to the processing of personal data in the context of AI Models”.
Prompted by a request from the Irish Data Protection Commission (“DPC”), with a view to determining a harmonised EU-wide approach to the regulation of AI models from a data protection perspective, the EDPB opinion addresses three main areas:
- Circumstances in which AI models can, and cannot, be considered anonymous;
- The validity or otherwise of using legitimate interests as a lawful basis for the processing of personal data as part of AI model development and deployment; and
- The consequences of an AI model being developed and trained using unlawfully processed personal data.
Anonymity
The opinion stipulates that, leaving aside AI models that are specifically designed to provide personal data to end-users (which cannot be considered anonymous in any case), the question of whether an AI model can be considered “anonymous” needs to be approached on a case-by-case basis with reference to the model’s particular characteristics.
When determining whether an AI model can be treated as anonymous, supervisory authorities will need to consider (i) whether personal data can be extracted out of the model; and (ii) whether the output provided to end-users entering queries into the model relates in any way to data subjects whose data was used in developing and training the model. An “anonymous” AI model must be such that there is only an “insignificant” likelihood of direct extraction of personal data from the model and a similarly insignificant likelihood of obtaining personal data from querying the model – whether intentionally or otherwise.
In determining the likelihood of such occurrences, controllers and supervisory authorities will need to consider all of the “means reasonably likely to be used” to identify individuals. This will be impacted by the extent to which the AI model is made publicly available.
Use of “legitimate interests” in development and deployment
The opinion states that when relying on the legitimate interests basis for processing personal data, both when developing an AI model and when deploying it, the usual three-part test must be satisfied. Firstly, the controller must have a legitimate interest in processing the data; secondly, the relevant processing must be necessary for the pursuit of the legitimate interest (including that there is no reasonably available, less intrusive means of pursuing the interest); and thirdly, the rights and freedoms of the relevant data subjects must not override the controller’s own interest in the processing.
Importantly, the opinion clarifies that the controller is likely to have a legitimate interest in the processing if the interest is “lawful”, “clearly and precisely articulated”, and “real and present, not speculative”. The opinion gives some examples of potentially legitimate interests in an AI context, such as developing AI to detect fraud or improve cyber threat detection. The overall message is that, although each model and each instance of processing will need to be assessed on a case-by-case basis, the use of “legitimate interests” as a basis for processing personal data for AI development is not automatically precluded.
Unlawful processing and AI models
The EDPB’s position is that unlawful processing of personal data in the context of AI model development does not automatically mean that subsequent deployment of that AI model also involves unlawful processing. Each instance will need to be considered on a case-by-case basis, and the opinion gives three illustrative scenarios.
If a controller unlawfully processes personal data for model development and later the same controller processes the same data when deploying the model, the second processing activity may not itself be unlawful, depending upon the extent to which the two processing instances can be considered as being carried out for separate purposes.
If a controller unlawfully processes personal data for model development, and another controller processes the data when deploying that model, the deployer should conduct an appropriate assessment of the lawfulness of the processing by the developer. The degree of assessment will vary depending on the risks raised by the deployment to the data subjects whose data was used to develop the model. The lawfulness of the developer’s processing would also be relevant to whether the deployer could apply the legitimate interests basis to the processing involved in its deployment.
Meanwhile, if a controller unlawfully processes personal data when developing an AI model but the model is then effectively anonymised prior to deployment (whether by the same or a different controller), such that the “subsequent operation” of the AI model does not entail processing personal data, the EDPB considers that the GDPR would not apply at all, and “the unlawfulness of the initial processing should not … impact the subsequent operation of the model”.
The full opinion is available to read here.
UK will introduce new legislation on AI-generated child abuse “deepfakes” early in 2025, says minister
Baroness Margaret Jones, a minister in DSIT, has indicated that the Government will bring forward new legislation in early 2025 to address the issue of AI-generated “deepfake” images depicting child sexual abuse.
The minister gave this indication during a House of Lords debate on the Data (Use and Access) Bill (“DUA Bill”), currently progressing through Parliament, in order to persuade fellow peers not to pass a proposed amendment that would have introduced measures tackling the issue into the DUA Bill itself.
Labour had previously committed to criminalising the production of AI-generated sexually explicit “deepfakes”, including those depicting adults as well as children. An announcement on the Government’s specific legislative proposals is expected from the Home Office in early 2025.
AI laws from various US States anticipated in 2025
Lawmakers in a number of US States have signalled that they will renew efforts to pass State-level AI legislation in the coming year.
In the absence of any federal-level AI laws from Congress, and with no sign that any will be forthcoming soon, a number of proposals in States including California, Texas and Virginia are aimed at “filling in the gap”.
A bipartisan “Multistate AI Policymaker Working Group”, convened by the Future of Privacy Forum and established in October 2024, is seeking to achieve some level of consistency and “interoperability” between laws proposed and adopted in different States.
Enforcement and civil litigation
OpenAI fined €15 million by Garante over data protection failings, says it will appeal
The Italian data protection authority, the “Garante”, has fined OpenAI €15 million over a series of data protection failings connected with the rollout and subsequent management of ChatGPT in Italy.
Utilising new powers for the first time, the Garante has also ordered OpenAI to complete a six-month “communication campaign”, over various media including radio, TV and online, to “promote public understanding and awareness of how ChatGPT operates”, as well as the rights available to data subjects including rectification and deletion of personal data. A previous attempt by OpenAI to comply with an order of this sort by the Garante, issued during the course of the investigation, was deemed inadequate by the regulator, and OpenAI must submit its plan for a “second attempt” to the Garante by the end of February.
OpenAI’s failings, as identified by the Garante, include neglecting to notify it of a data breach in March 2023; processing users’ personal data for training the ChatGPT model without prior identification of an adequate GDPR processing basis; failing to implement age verification mechanisms, leading to under-13s being able to access the ChatGPT “chatbot”; and violating relevant transparency principles.
During the investigation, in March 2023, the Garante issued a “stop processing” order against ChatGPT in respect of Italian users’ personal data – in effect shutting down ChatGPT in the country. The service was reinstated a month later.
OpenAI has responded to the Garante’s decision by stating that it will appeal both the fine and the order to run an information campaign, saying that the decision and the penalties imposed were “disproportionate” and would “undermine” ambitions for AI development in Italy.
As OpenAI moved its European headquarters to Ireland during the course of the Garante’s investigation, any further European regulatory action against OpenAI will now be led by the Irish DPC.
Technology developments and market news
BBC calls on Apple to “urgently address” inaccurate AI-generated news summaries
Apple is facing growing calls from a number of news outlets and industry organisations to urgently improve, or even completely disable, its AI news-summary feature on iPhones, following reports that the tool has generated erroneous summaries containing content not reflective of the underlying articles.
The BBC first complained to Apple in December 2024 that the “Apple Intelligence” feature, which uses AI to generate summaries of news articles published on various iPhone apps such as the BBC News app, had produced inaccurate summaries. In one instance, an Apple Intelligence summary of a BBC article covering an extradition hearing in New York for Luigi Mangione, the man accused of murdering UnitedHealthcare CEO Brian Thompson in December 2024, erroneously included a line stating that Mangione had “shot himself”. Subsequent instances have included a summary of a New York Times article falsely stating that Israeli Prime Minister Benjamin Netanyahu had been arrested, and a summary of a BBC article saying that Luke Littler had won the PDC World Darts Championship hours before the title-deciding match had actually taken place.
Apple has now responded, saying it is working to update the feature to ensure users are more clearly made aware that the summaries are AI-generated. Journalist industry bodies, however, have said that this does not go far enough. The National Union of Journalists has called for Apple to disable the feature entirely until the accuracy of the summaries it generates can be assured – echoing an earlier intervention by the influential industry association Reporters Without Borders.
Our recent AI publications
If you enjoyed these articles, you may also be interested in our overview of enforcement of the EU AI Act, which you can read here, and our article on the results of the Bank of England / FCA joint annual survey of AI in the financial sector, available to read here.
We have also produced a podcast series covering various aspects of the evolving world of AI and its implications for businesses and broader society. New entries in the series this month cover topics including enforcement under the AI Act and obligations for providers of general-purpose AI models. The full podcast series is available for listening here.