New FTC Rule Targets Fake Reviews; Amazon Already Uses AI to Combat Them
Amazon is using AI to send a message: it won’t tolerate review abuse on its platform.
That strategy could help the online retailer stay ahead of the curve as one federal agency’s newest regulations take effect.
The Federal Trade Commission (FTC) put forth a final rule on fake consumer reviews earlier this month, meant to thwart AI-generated reviews and protect consumers from fabricated or paid-for reviews. Already, it has some companies reconsidering how well their systems for filtering out phonies operate.
The rule stipulates that individual businesses and sellers bear the responsibility for abiding by it, even when they sell their goods through platforms like Amazon. The retail behemoth does sell some Amazon-branded products, which it will need to watch, but about 60 percent of Amazon’s total product offerings are listed by independent sellers.
Amazon is not legally responsible for policing its sellers’ listings when it comes to reviews, but nonetheless, the company has been using machine learning and artificial intelligence to flag and delete potentially fake comments, as well as those that may misrepresent a consumer’s actual experience.
That may be because, whoever actually sells the products, buyers equate the service they receive and the experience they have with Amazon’s own reputation.
“We make it hard for bad actors to take advantage of our trusted shopping experience,” the company wrote in a Tuesday blog post, citing its AI systems that help screen for sham reviews.
The company uses AI to inspect for indicators that a review may be inauthentic. It does so by using a machine learning model trained on proprietary data, like “whether the seller has invested in ads (which may be driving additional reviews), customer-submitted reports of abuse, risky behavioral patterns [and] review history.”
The Prime purveyor also uses other subcategories of AI, like neural networks, to connect complex relationships and patterns.
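Amazon has not disclosed how its model is built, but the signals it lists lend themselves to a standard tabular classifier over review- and seller-level features. The sketch below is a minimal, hypothetical illustration in Python: the feature names, synthetic training data and gradient-boosting model are assumptions for clarity, not Amazon’s actual system.

```python
# Illustrative only: a generic classifier over review-level signals of the kind
# Amazon describes (ad activity, abuse reports, behavioral patterns, review history).
# Feature names, data and model choice are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical features per review:
# [seller_ad_spend_30d, abuse_reports, reviewer_account_age_days, reviews_posted_last_24h]
X_train = np.array([
    [120.0, 0, 900,  1],   # typical organic review
    [  0.0, 3,  12,  9],   # burst of reviews from a new account
    [ 45.0, 0, 400,  2],
    [  0.0, 5,   3, 14],
], dtype=float)
y_train = np.array([0, 1, 0, 1])  # 1 = labeled inauthentic in this synthetic set

model = GradientBoostingClassifier().fit(X_train, y_train)

def review_risk_score(features: list[float]) -> float:
    """Return the model's estimated probability that a review is inauthentic."""
    return float(model.predict_proba(np.array([features]))[0, 1])

print(review_risk_score([0.0, 4, 7, 11]))  # expect a high score for this fake-like pattern
```

In a real pipeline, a score like this would feed downstream enforcement rather than serve as a final verdict on its own.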
If the system flags a review as being potentially fake, Amazon works to “block or remove” the review and, if warranted, reprimand or punish the party involved. It does so by “revoking a customer’s review permissions, blocking bad actor accounts and even litigating against the parties involved,” according to the blog.
In many cases, the company’s systems can identify fake reviews on their own. But when the technology flags a review that seems “suspicious” but warrants looking at “additional evidence” before punishing a user, an Amazon investigator takes over, looking for other signals and determining the best course of action.
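The process the company describes amounts to a thresholded triage step: remove clear abuse automatically, escalate uncertain cases to a human investigator, and publish the rest. The sketch below is a hypothetical illustration; the thresholds and function names are assumptions, not Amazon’s implementation.

```python
# A minimal sketch of the triage flow described above: auto-remove clear abuse,
# route "suspicious but unconfirmed" reviews to a human investigator, publish the rest.
# Thresholds and labels are assumptions made for illustration.
AUTO_REMOVE_THRESHOLD = 0.95   # hypothetical cutoff for automated removal
INVESTIGATE_THRESHOLD = 0.60   # hypothetical cutoff for human escalation

def triage_review(review_id: str, risk_score: float) -> str:
    """Decide what happens to a review based on its model risk score."""
    if risk_score >= AUTO_REMOVE_THRESHOLD:
        return f"remove:{review_id}"     # blocked or removed automatically
    if risk_score >= INVESTIGATE_THRESHOLD:
        return f"escalate:{review_id}"   # queued for an investigator to gather more evidence
    return f"publish:{review_id}"        # no action taken

print(triage_review("R123", 0.72))  # -> escalate:R123
```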
The markers of a fake review aren’t always consistent with what consumers think they look like, said Josh Meek, senior data science manager. That’s why Amazon relies on technology, as well as trained employees, to draw correlations and make judgments on reviews’ validity.
“The difference between an authentic and fake review is not always clear for someone outside of Amazon to spot. For example, a product might accumulate reviews quickly because a seller invested in advertising or is offering a great product at the right price. Or, a customer may think a review is fake because it includes poor grammar,” he said in a statement.
In a February blog, the company noted that, in 2023, it filed 14 new lawsuits against companies and people littering the e-commerce platform with review abuse.
“Our goal is to ensure that every review in Amazon’s stores is trustworthy and reflects customers’ actual experiences,” Claire O’Donnell, Amazon’s director of selling partner risk & trust, said in a statement at the time. “By taking legal action against these fraudsters, Amazon is sending a clear message that we will hold these bad actors accountable.”
In 2023 alone, Amazon said it blocked more than 250 million reviews globally.
Walmart, which also has a marketplace for sellers, did not return Sourcing Journal’s request for comment on the ways it uses technology to take on review abuse. Amazon and Walmart, widely considered competitors in the industry, have used AI in myriad ways, but Walmart’s blogs and news posts sparsely mention the role of the technology in customer reviews.
Amazon declined to comment on the FTC’s new rule.