In brief
Recent regulatory developments underscore the growing scrutiny of professional uses of generative AI. On 13 January 2026, the Spanish Data Protection Authority ("Spanish DPA") issued a formal notice warning of the legal and privacy risks involved in uploading, transforming or generating images of individuals through AI tools. At the same time, the European Commission has published the first draft of its voluntary Code of Practice on Transparency of AI-Generated Content ("Code"). While adherence to the Code is optional, it is intended to help providers meet the mandatory transparency obligations set out in Article 50 of the AI Act, which will apply from August 2026 to providers and deployers of AI systems. These developments reinforce the need for robust safeguards, internal controls, and clear labelling when deploying generative AI.
Key takeaways
What companies need to consider now:
- Treat any upload or use of a person's image in an AI tool as processing of personal data and put basic safeguards in place.
- Before creating or sharing AI-generated content, even internally, check whether it could trigger risks beyond data protection, such as reputational harm, copyright infringement or misuse of someone's likeness.
- Prepare for the AI Act's transparency rules arriving in August 2026, including clear labelling of any content modified or created by AI.
In more detail
Spanish DPA guidance on AI and images
The notice issued by the Spanish DPA on 13 January 2026 sets out its clearest position to date on the risks associated with using third-party images in generative AI tools. It confirms that uploading, transforming, or generating visual content based on a person's image constitutes personal data processing, even where the output is not intended to be shared or appears innocuous. This is an explicit acknowledgement that merely feeding an image into an AI system already triggers General Data Protection Regulation (GDPR) obligations.
The Spanish DPA identifies two main categories of risk:
- Visible risks, which arise when the generated image or video is shared. These include:
- Using images outside their original context without a valid legal basis;
- The ease of forwarding or distributing content;
- The practical impossibility of removing replicated copies;
- The creation of intimate or compromising deepfakes with potentially severe consequences; and
- The risk of falsely attributing behaviors or actions to individuals.
- Less visible risks, which arise even when the content is not shared. These include:
- Loss of control when external providers process the images;
- The potential existence of unremovable copies;
- Additional or undisclosed processing by providers;
- The generation of metadata enabling re-identification; and
- The practical difficulty for data subjects in exercising their rights.
Overall, the notice establishes a clear and more stringent framework: the use of images in AI systems must be treated as processing of personal data and must be accompanied by appropriate safeguards.
EU draft good practices code for transparency of AI-generated content
In parallel, the European Commission has issued the first draft of its voluntary Code of Practice on Transparency of AI-Generated Content ("Code"), intended to help organizations anticipate compliance with the transparency obligations under Article 50 of the AI Act. The final version is expected in June 2026, with mandatory transparency requirements applying to providers and deployers of AI systems from August 2026.
The Code introduces a two-tier classification system: (i) fully AI-generated content and (ii) AI-assisted content, where AI significantly influences the final output. Each category must be accompanied by clear labelling using a standard icon. Until the official EU icon is adopted, an interim icon composed of a two-letter acronym referring to artificial intelligence (such as "AI", "IA" or "KI", reflecting the translation into the languages of the Member States) may be used to support consistent disclosure.
The Code also sets out sector- and format-specific rules, in particular for deepfakes. For instance, real-time deepfake videos must display a continuous on-screen indicator and an initial notice, while non-real-time videos may use individual or combined options such as fixed icons, opening notices or credits-based disclosures, as detailed in the Code.
Deployers choosing to adhere to the Code must also implement robust internal mechanisms, including documentation of labelling practices, staff training on when and how to apply disclosures, continuous monitoring procedures, and a channel for reporting mislabeling. Any reported inaccuracies must be corrected promptly.
This structure is intended to support a consistent and transparent approach to AI-generated content before the AI Act's obligations become enforceable.
Broader legal considerations
The Spanish DPA's notice and the Code highlight that the implications of generative AI extend far beyond data protection. The manipulation or use of third-party images, voices or other content may also affect rights such as honor, privacy, and one's own image. In addition, generative AI can give rise to significant questions around copyright, design rights, trademarks and other intellectual property rights linked to the source materials or the generated outputs.
A holistic, cross-cutting legal assessment is therefore essential before implementing or using any generative AI tool. Organizations should ensure adequate employee training, adopt clear internal safeguards, and mitigate risks arising both from the use of third-party content and from engagement with external AI providers. This broader legal lens is key to ensuring responsible deployment of generative AI technologies.
For tailored guidance on these regulatory developments and to assess your organization's exposure and compliance needs, please contact our IPTech team.
Marta Expósito, Associate, has contributed to this legal update.