by Gareth Kristensen, Prudence Buckland, Jan-Frederik Keustermans, and Hakki Can Yildiz

Left to right: Gareth Kristensen, Prudence Buckland, Jan-Frederik Keustermans, and Hakki Can Yildiz (photos courtesy of Cleary Gottlieb Steen & Hamilton LLP)
On 19 November 2025, the European Commission presented its much-anticipated Digital "Omnibus" package[1] intended to ease the administrative and compliance burden facing European businesses. Executive Vice-President of the Commission Henna Virkkunen stated that "[f]rom factories to start-ups, the digital package is the EU's answer to calls to reduce burdens on our businesses."[2]
The package is part of a legislative initiative to simplify and consolidate the EU's Digital Rulebook,[3] which consists of a body of EU legislation governing digital and emerging technologies, cybersecurity, and data. Developed by the Commission in response to calls from industry for greater legal clarity and alignment in enforcement approaches, the Digital Omnibus focuses on areas "where it was clear that the regulatory objectives could be achieved at a lower administrative cost"[4] across the board.
The package consists of two key legislative proposals:[5]
- amendments to simplify and streamline a range of digital legislation, including rules relating to cybersecurity, data protection and data governance;[6] and
- a separate proposal to simplify and smooth the implementation of certain provisions of the EU AI Act.[7]
1. Single-entry point for incident reporting; uniform requirements for personal data breach notification
The Commission proposes to set up a centralised platform, to be established by the EU Agency for Cybersecurity (ENISA), where organisations can report cyber incidents under multiple frameworks (including NIS2, GDPR, DORA, and CRA) by way of a single filing. Prior to enabling notification of incidents in this way, ENISA would be expected to pilot and test the system to ensure it is fit-for-purpose for each relevant framework. Other sector-specific incident reporting obligations, such as those under aviation and energy regimes, would be brought under the same platform in due course.
Amendments have also been proposed to streamline and relax the personal data breach notification rules under GDPR. The proposals raise the threshold for notifying a data breach to regulators (the GDPR currently requires notification to regulators "unless the personal data breach is unlikely to result in a risk" [our emphasis], whereas the proposal only requires notification to regulators for a breach that is "likely to result in a high risk…" [our emphasis]) and extend the notification deadline to 96 hours (up from 72). The European Data Protection Board would prepare a standard template for data breach notifications under GDPR.
2. Unified cookie consent framework
In a move to update the EU's cookie policy framework under the ePrivacy Directive, the Commission proposes that the processing of personal data on and from terminal equipment should be governed solely by the GDPR (where the subscriber or user of the service is a natural person – so AI agents accessing a corporate subscriber's terminal equipment may still be bound by the legacy ePrivacy Directive consent requirements). While consent would remain the general rule for storing personal data on, or accessing personal data from, a natural person's electronic device (e.g., through cookies), the Commission proposes certain exceptions where consent would not be required – notably, for using first-party cookies to create aggregated information about the usage of an online service for audience measurement, where this is done by the data controller of that online service solely for its own use.
Changes to cookie-banner requirements have also been proposed: users must be able to reject cookies via a single-click button, and, once rejected, cookie consent must not be requested again for six months.
The proposed amendments could also reduce enforcement fragmentation by making the GDPR's "one stop shop" mechanism applicable to the oversight of cookies used to collect personal data. The flipside is that this also triggers GDPR-level fines, which would mean a significant increase compared to the maximum fines available for breach of the ePrivacy Directive in some member states.
3. Amendment of the definition of "personal data" under GDPR
The proposal would amend the definition of "personal data" by clarifying that (pseudonymised) information is not considered personal data for anyone who does not have the "means reasonably likely to be used" to (re-)identify the natural person to whom the information relates. The Commission would also adopt implementing acts to help specify when data resulting from pseudonymisation no longer constitutes personal data.
While this is being characterised as merely codifying recent case law on the "relative" (as opposed to "absolute") approach to defining personal data, it could be a highly impactful change in practice: the concept of personal data not only delineates the scope of application of the GDPR, but also of several key obligations under other legislation in the Digital Rulebook (such as the DSA and DMA).
4. Other "innovation-friendly" amendments to the GDPR
Other proposed amendments appear to be aimed specifically at reconciling the GDPR with emerging technologies, in particular AI training and use, by addressing some of the most pressing data protection compliance questions faced by AI developers and deployers.
A new provision would be introduced expressly recognising that processing of personal data in the context of the development or operation of an AI system or model can be based on "legitimate interests" under GDPR, subject to possible exceptions under EU and national law and the usual "necessity" requirement and balancing test. This would be a helpful clarification, confirming how many in the industry already operate today. However, the proposal would also introduce certain conditions which may not always be straightforward to implement in practice, including providing data subjects with "an unconditional right to object."
Similarly, "[i]n order not to disproportionately hinder the development and operation of AI", a new (albeit narrow) exception would be introduced to the GDPR to allow for the incidental processing of sensitive data (so-called "special categories of personal data") in the context of the development or operation of an AI system. Several stringent conditions would apply, including a requirement to implement measures to avoid the collection and processing of sensitive data across the entire lifecycle of the AI system, removal of any "residual" sensitive data from datasets, and preventing such data from being disclosed through outputs of the AI system.
Providers of high-risk AI systems would benefit from an additional (slightly broader) exception under the AI Act, allowing for the "exceptional" processing of sensitive data for bias detection and correction, but only if various conditions are satisfied (including that the use of synthetic or anonymised data would be insufficient for that purpose). Where "necessary and proportionate", the same exception would be available to deployers of high-risk AI systems and to providers of other (non-high-risk) AI systems and AI models.
Finally, the proposals would introduce a definition of "scientific research" into the GDPR, which would expressly include research for commercial purposes. This change could make it easier for AI developers to re-purpose personal data for AI training, and will also be welcomed by companies active in other R&D-intensive industries that perform research through the processing of personal data at scale, such as pharma and biotech.
5. Exemptions under GDPR for automated decision-making clarified
Amendments have been proposed to clarify the circumstances in which decisions that have legal or similarly significant effects can be based solely on automated processing, including profiling. In particular, this would be allowed when the decision is necessary for entering into or performing a contract with the data subject (regardless of whether the decision could have been taken by a human through non-automated means). Explanatory commentary provided by the Commission does, however, suggest that where several equally effective automated processing options exist, the controller should still "use the less intrusive one".
6. Refusal of "abusive" data subject access requests
The legislative package introduces a new ground for data controllers to refuse (or charge a reasonable fee to respond to) "abusive" data subject access requests under GDPR made "for purposes other than the protection of their data." The proposal also provides additional context as to the other types of requests that may be seen as an "abuse" of the rights of access granted to data subjects (e.g., overly broad and undifferentiated requests, and requests made with the intent to cause damage or harm to the controller). If adopted, it remains to be seen how impactful this change would be in practice – for example, whether it would mark the decline of the use of data subject access requests as a pre-litigation discovery tool in non-data protection-related disputes.
7. Rules for high-risk AI and certain transparency obligations delayed
Entry into force of the AI Act requirements for high-risk AI systems is set to be delayed. Depending on the category of high-risk AI system, there would be a 6- or 12-month transition period after supporting tools and measures, such as harmonised standards and guidelines (currently still under development by the Commission), become available. This would, however, be subject to a long-stop date (of either 2 December 2027 or 2 August 2028, depending on the type of AI system) after which the rules would enter into application in any case.
The Commission also proposes to push out the grace period for compliance with transparency obligations under Article 50(2) of the AI Act to 2 February 2027 for AI systems (including general-purpose AI systems) that generate synthetic audio, image, video or text and which are placed on the market before 2 August 2026.
8. Supervisory mandate of the AI Office expanded
The proposal would centralise and expand the AI Office's oversight of AI systems, granting it exclusive supervisory and enforcement competences in respect of AI systems based on general-purpose AI models when the model and the system are developed by the same provider.[8] The AI Office's mandate would also extend to AI systems that constitute, or are integrated into, very large online platforms (VLOPs) or very large online search engines (VLOSEs) designated as such under the DSA; although the main point of entry for the assessment of such AI systems would still be the risk assessment, mitigating measures and audit obligations prescribed under the DSA.
9. Greater flexibility for organisations to tailor AI compliance
The proposals give organisations greater flexibility to tailor their AI compliance approaches. Providers would no longer be required to register an AI system in the EU database for high-risk AI systems if they assess that a system is not genuinely high-risk based on how it is used (in line with the criteria in Article 6(3) of the AI Act). Providers of high-risk systems would also be able to tailor their systems for post-market monitoring, instead of being required to implement elements mandated by the Commission for such purposes.
The Commission also proposes lifting the mandatory responsibility imposed on businesses to promote AI literacy. Primary responsibility for AI literacy would instead shift to the Commission and Member States, who would be required to "encourage" providers and deployers (on a non-binding basis) to take measures to ensure sufficient AI literacy of their staff.
10. Safeguards for trade secrets strengthened; rules for data sharing narrowed under the EU Data Act
The package introduces an additional ground for data holders to refuse the disclosure of trade secrets under the EU Data Act: if they can demonstrate a high risk of unlawful acquisition, use or disclosure of trade secrets to third-country entities, or EU entities under the control of such entities, that are subject to weaker protections than those available under EU law. The Commission's explanatory commentary indicates that data holders may take account of various factors when making this assessment (including insufficient legal standards, poor enforcement, limited legal recourse, strategic misuse of procedural tactics to undermine competitors, or undue political influence); but such refusal would in any event need to be transparent, proportionate and tailored to the specific circumstances of each case.
The proposals also narrow the circumstances in which data holders can be required to make data available to public sector bodies, limiting this only to scenarios where data is needed to respond to a public emergency (and provided further that the granularity and volume of data requested and the frequency of access are proportionate and duly justified in the context of the relevant emergency).
11. Rules for cloud switching under the EU Data Act adjusted
To help mitigate the cost and administrative burden of renegotiating existing contracts, the proposals introduce targeted exemptions to the Data Act cloud switching rules for customised data processing services (the majority of features and functionalities of which have been adapted by the provider to meet the specific needs of the relevant customer) and data processing services provided by small mid-caps and SMEs. Services of this kind (other than infrastructure-as-a-service) provided under contracts concluded on or prior to 12 September 2025 would not generally be subject to the switching rules.
In addition, the amendments clarify that providers of data processing services (other than infrastructure-as-a-service) may impose "proportionate early termination penalties" in fixed-term contracts (so long as these do not constitute an obstacle to switching).
The proposals now require approval from the European Parliament and the Council before they are passed into law. The measures outlined above therefore remain subject to likely complex and politicised negotiations between the EU institutions in the coming months. Against this backdrop, political pressure is mounting to act swiftly to reinvigorate innovation in Europe and catch up in the AI race. Many of the rules affected by these initiatives are already (or will soon be) in force. Companies must now reassess the impact of these proposals on their compliance strategies and operations, while the details are ironed out through the legislative process.
[1] The full press release can be found here.
[2] The full version of Executive Vice-President Virkkunen's remarks is available here.
[3] See more information on the Digital Rulebook here.
[4] See the 2025 EU Commission Call for Evidence on the Digital Omnibus (Digital Package on Simplification) here.
[5] Two further initiatives accompany these two Omnibus proposals to form the complete Digital package: (i) a Data Union Strategy to facilitate data access for AI development and (ii) European Business Wallets to provide companies with a single digital identity for cross-border transactions.
[6] The full text of the Proposal for a Regulation on simplification of the digital legislation can be found here.
[7] The full text of the Digital Omnibus on AI Regulation Proposal can be found here.
[8] Note, however, that sectoral authorities will continue to be responsible for the supervision of AI systems related to products covered by the Union harmonisation legislation listed in Annex I of the AI Act.
Gareth Kristensen is a Partner, Prudence Buckland and Jan-Frederik Keustermans are Associates, and Hakki Can Yildiz is a Consulting Lawyer for Data Protection at Cleary Gottlieb Steen & Hamilton LLP, all of whom contributed to this article.
The views, opinions and positions expressed within all posts are those of the author(s) alone and do not represent those of the Program on Corporate Compliance and Enforcement (PCCE) or of the New York University School of Law. PCCE makes no representations as to the accuracy, completeness and validity of any statements made on this site and will not be liable for any errors, omissions or representations. The copyright of this content belongs to the author(s) and any liability with regard to infringement of intellectual property rights remains with the author(s).