Meta has confirmed it will press ahead with its contentious plan to use publicly shared Facebook and Instagram posts from UK users to train its artificial intelligence (AI) models. The move comes amid ongoing privacy concerns and a clear line drawn by the Information Commissioner’s Office (ICO), particularly in light of the stringent privacy regulations enforced within the European Union.
Meta has stated it will not use private messages or any content from users under the age of 18. The decision follows a pause in June, during which Meta engaged with the ICO on privacy safeguards. The ICO has not formally approved the initiative but will instead monitor it, noting that Meta has made changes to give users a simpler way to object to the processing of their posts.
The resumption of these plans has alarmed privacy advocates such as the Open Rights Group and NOYB, who have vehemently opposed Meta’s approach. These organisations argue that it turns users into unwitting test subjects, and have urged both the ICO and the EU to intervene and halt the initiative. Meta, for its part, has accused the EU of stymieing AI development by barring the use of EU citizens’ posts for AI training.
In a recent statement, Meta said its generative AI aims to reflect the UK’s distinct cultural and historical context, and will give UK enterprises access to cutting-edge technology. Meta reiterated its goal of building AI that mirrors the diversity of its global community, with plans to extend these capabilities to more countries and languages in due course.
Stephen Almond, the ICO’s Executive Director for Regulatory Risk, underscored the importance of transparency and robust safeguards for any entity using user data for AI model training. He stated, “We have been clear that any organisation using its users’ information to train generative AI models should be transparent about how people’s data is being used. Organisations should put effective safeguards in place before they start using personal data for model training, including providing a clear and simple route for users to object to the processing.”
Almond further clarified, “The ICO has not provided regulatory approval for the processing, and it is for Meta to ensure and demonstrate ongoing compliance.” The statement underlines the regulatory scrutiny that will accompany the initiative and the onus on Meta to maintain rigorous compliance standards.
Meta’s determination to restart AI training on UK social media posts marks a significant juncture in the ongoing debate over privacy and AI development. The ICO’s vigilant oversight will be pivotal in ensuring that user privacy is upheld as these plans proceed.