Outlook ’25: How Might the EU AI Act Affect Tech Implementation This Year?
In the United States, Congress can’t seem to agree on a comprehensive artificial intelligence bill or package. But elsewhere in the world, politicians have seen, and answered, the need for safeguards as the technology continues to proliferate.
A laissez-faire attitude in the U.S., however, could see companies scrambling to meet the requirements of what experts like Reiko Feaver, partner at CM Law, call "patchwork"-style state laws, especially once paired with international laws, like the EU AI Act.
Already, a few U.S. states have implemented AI-focused legislation. Companies operating at an international scale—whether based in the U.S. or elsewhere—may soon be subject to a slew of opposing or differing pieces of legislation, depending on where and how they choose to deploy AI-powered technology.
Helen Christakos, partner at A&O Shearman, said the legislation in varying jurisdictions seems to keep a few central tenets in sight. “There are threads of similarity that we’re seeing across jurisdictions, and some of those threads are around transparency and bias,” Christakos said. “These are common themes, but the implementation is a bit different in different jurisdictions, so the focus is on how we implement.”
When the EU AI Act passed, experts told Sourcing Journal they expected it would have a domino effect on other countries, and that they thought it might help U.S. legislators craft and pass a comprehensive AI bill. By and large, experts now say the hopes of such a package are likely to be dashed under the incoming Trump-Vance administration.
Still, the EU AI Act remains relevant to U.S. corporations operating in the bloc or deploying customer-focused technology there. Though the U.S. itself has not outlined how companies must handle the technology, the EU's standards may soon set the precedent.
The EU AI Act sets up a risk classification system: more innocuous models come in as "low risk," while those on the opposite end of the spectrum are labeled "unacceptable risk." The systems on the more dangerous end of the spectrum include those involving biometric categorization, social scoring, emotion recognition in the workplace and more.
Regulating models under the EU AI Act
According to Di Lorenzo, some of the most widely adopted use cases for AI systems in fashion and apparel, like using AI-enabled robots to make warehouses more efficient, are likely to be classified as low risk under the EU AI Act. Those operations, then, would not be affected by the AI Act.

Other use cases, like using AI to generate digital models, will be subject to transparency requirements under the EU AI Act—even if they still qualify as low risk.
“If you create deepfakes—images that present something that has never happened, and is a made-up thing—you need to qualify them; for example, by having text on them saying ‘Generated by AI,’” she said.
Transparency requirements, though, won’t come into effect until 2026.
In the meantime, Christakos and Catherine Di Lorenzo, also a partner at A&O Shearman, said brands and retailers should work to get their ducks in a row: understanding how the systems they already use, or are interested in using, will be classified, and putting the necessary safeguards in place preemptively.
But the systems companies develop—or, in most cases in fashion, apparel and retail, deploy—may not be the only issue they have to consider.
A March Pew Research report showed that 20 percent of U.S. adults already use ChatGPT to help them with tasks at work. If a company develops or deploys its AI-based technology to EU residents—which could include EU-based employees for multinational companies—it is responsible for those systems’ risk levels.
Di Lorenzo noted that, under the EU AI Act, it will likely become important for companies to set guardrails around employees’ use of general-purpose AI models, like OpenAI’s ChatGPT.
For example, if an HR team uses ChatGPT or another general-purpose model to make decisions on recruitment and hiring, that could become a high-risk use case, leaving a company at risk of violating provisions of the AI Act without proper oversight.
A few specific pieces of the EU AI Act—particularly those related to models posing "unacceptable risk"—will become effective in 2025, but by and large, this year will be a game of preparing for the full slate of regulations to come into effect in 2026.
Globally, it's probable that other legislation will pass and take effect before all sections of the AI Act do; Di Lorenzo said it's important to keep an eye on budding laws and compare them with already established legislation.
“The important thing is to identify which are the laws that apply to you, and try to define a common denominator under those laws,” she said.