That assumption creates risk. Because under the EU AI Act, where your company is located doesn't matter. What matters is which markets your products and services are in and whether your organization uses AI systems.
Selling Into the EU?
If you use AI tools in the production of services or products that enter the European Economic Area (EEA) market, the EU AI Act may apply, even if you're based in the U.S. It all depends on how you use those tools and which risk level the Act categorizes them under.
AI systems are now embedded in everyday HR workflows, from screening resumes and supporting hiring decisions to shaping performance evaluations and workforce analytics. Even general-purpose AI tools used to draft recommendations or employment-related content can fall into scope. The Act may categorize these types of uses as intermediate- or high-risk, which requires compliance with certain obligations.
The Deadline to Know
Most obligations under the Act are already in force.
August 2, 2026, is the date of the next major enforcement milestone under the Act.
On that date, compliance obligations for high-risk AI systems, such as those involved in certain HR decision-making, enter into force.
There are many compliance obligations under the Act, but for HR, the two most important are these:
- Employees who use AI tools must be "AI literate"
- High-risk AI systems must have meaningful human oversight
For HR, that translates into something more familiar: training employees to ensure they understand expectations, applying clear guardrails, and being able to demonstrate that both are in place.
Many organizations underestimate how long it takes to identify AI use, train employees, and document compliance until they try to do it.
Why HR Is on the Hook
When regulators evaluate AI use, they look at whether your people are prepared to use AI tools responsibly.
That shows up in very practical ways: what employees were trained on, whether expectations were clear, how decisions involving AI were reviewed, and what evidence exists if questions arise later.
The law specifically emphasizes that employees need to understand how AI should and shouldn't be used, including appropriate human oversight in higher-risk decisions.
What "AI Literacy" Actually Means
Most organizations don't lack awareness of AI risk. What they lack is clarity on execution.
Who needs training?
What should it include?
How do you tailor it across roles?
How do you document it in a way that holds up under scrutiny?
These are operational questions, and they sit squarely with HR and compliance.
AI literacy doesn't mean turning employees into technical experts. It means helping them recognize when AI is influencing a decision, understand where risk or bias can enter the process, and know when human judgment needs to take over.
At its core, it's about giving people the confidence to make better decisions, not just follow a system.
Why Waiting Creates Risk
August 2026 may feel far off, but preparing for it isn't a quick exercise.
Organizations need time to identify where AI is being used, define what qualifies as high risk, train different audiences, establish oversight practices, and create documentation that demonstrates compliance.
And the stakes are significant. The EU AI Act allows for fines of up to €15 million or 3% of global annual revenue for high-risk obligation violations, along with the broader risks of regulatory scrutiny and reputational impact.
For HR and compliance leaders, the task is familiar. Organizations that act now to ensure employees can make sound AI decisions will be far better positioned when these expectations become enforceable.
About the Author
John Brushwood serves as Compliance Counsel at Traliant, where he oversees regulation, solutions and topics related to data privacy, cybersecurity and AI governance. He is a graduate of St. Petersburg College and George Washington University Law School and has worked at various law firms, including Griffin & Griffin in Washington DC.