Sourcing Journal

The U.S., UK and EU Signed an AI Treaty. What Does That Mean for Retail?

Meghan Hall

The United States, the United Kingdom and the European Union have banded together on a new treaty meant to protect the rights of consumers and users when it comes to artificial intelligence.

The treaty, called the AI Convention, has been in the works and under negotiations for several years. Last week, the three governments, as well as a few others, finally signed the agreed-upon final version, which contains fewer sticking points than previous versions.


The AI Convention mandates that each of the signatories work to “ensure that activities within the lifecycle of artificial intelligence systems are fully consistent with human rights, democracy and the rule of law.”

It encompasses issues like privacy and data protection, individual autonomy and non-discrimination; each signatory will need to put forth its own strategies and compliance mechanisms to meet the legally binding Convention's requirements.

The Convention differs from the EU AI Act, which entered into force earlier this year. That legislation was the bloc's first comprehensive AI law, and it categorizes systems by risk level to assign specific requirements.

Because the Convention's direct charter is to protect human rights, retailers could be affected by its eventual enforcement. Jesse Creange, vice president of supplier management at data intelligence company Akeneo, said consumer-facing tools like virtual try-on, product recommendations and personalized shopping experiences will soon be subject to scrutiny.


“Fashion and retail companies have historically been using AI to create personalized shopping recommendations and unburden their customer support teams through tools such as AI-powered chatbots, so these companies, in particular, must ensure two key things: that the technology that they’ve already employed and technology that will be incorporated in the future does not lead to discriminatory practices or privacy violations, and [secondly], that affected consumers have avenues for recourse if they experience harmful outcomes from AI-based decisions,” Creange told Sourcing Journal.

Consumer control over data could be one of the most important pieces of the Convention, said Michael Elliott, CEO of Over-C, a data and analytics platform based in the UK. That's particularly relevant for biometric information like race and weight, which can be requested as part of hyperrealistic virtual try-on applications employed by retailers.

“Many people think of the dangers of AI as running away and opening up our bank accounts and doing all of that sci-fi stuff, but I think the reality is much closer to home,” he said. “When we talk about data privacy, at the moment, we think of it as being our name, where we live and that sort of stuff. But now we’re talking about giving AI information that’s in medical records, and that for me is where we have to sit here and think about what we’re doing.”

Perhaps the biggest question around using that kind of consumer data is how it's stored and what autonomy consumers retain over the information after they've initially disclosed it.


“The problem that we have is, even depending upon what day of the week it is, sometimes we’re willing to give that information and other days, we’re very guarded. The problem, really, is not so much have we given the data, it’s can we recall it back? Can we say, ‘Actually, that was a mistake, and I don’t want you to have that data anymore’? And what confidence am I going to have in that actually being [deleted]?” Elliott said.

With new regulations and consumer sentiment in mind, Creange and Ron De Jesus, field chief privacy officer at Transcend, recommend that retailers and brands prepare by assessing their existing and future technology goals against the type of regulations they may soon be subject to.

De Jesus went on to say that, as companies audit their current systems, they may have a bit of wiggle room on timing, since each individual government has to determine how it will enforce the guidelines set forth in the Convention.

“It’s difficult to translate these principles into concrete, enforceable AI regulation. Given the AI Convention is a regulatory guide without a centralized enforcement mechanism—not a formal piece of legislation—consistent enforcement isn’t really possible,” De Jesus told Sourcing Journal. “Each government will need to interpret the high-level commitments of the treaty into their own specific, actionable legislation. We know this takes time—a lot of time. And at the pace that AI is evolving, it will be difficult for global governments to keep up as new use cases, challenges and risks emerge.”


As regulations and laws have begun to emerge around AI and its impacts on society, some have criticized the strength of certain legislative provisions, whether proposed or enacted, arguing that regulation could stifle innovation and slow the technology's development.

However, De Jesus, Elliott and Creange all said that responsible, ethical development and innovation will continue as legislation emerges. For many companies, having clear goalposts makes it easier to align technology strategy with legislative and compliance strategies, even if it sometimes means adapting plans to meet the moment. All three experts agreed that, as time goes on, regulation is a welcome and necessary piece of the AI game for most companies, especially those not directly involved in developing the technology.

“This treaty focuses more on protecting the rights and data of individual people rather than trying to limit the advancement of AI technology, aiming to strike a balance between promoting AI innovation and safeguarding against bias, discrimination and privacy breaches,” Creange said. “In actuality, by setting clear guidelines and standards, the treaty should drive more ethical innovation, encouraging companies to develop AI solutions that are effective, compliant and responsible. And, considering there is already a palpable sense of hesitation from consumers around over-use of AI, the treaty could help promote public trust in AI technology, which should ultimately lead to wider adoption of AI worldwide.”
