Amazon Talks Responsible AI One Year After Agreeing to Biden Admin’s Voluntary Commitments
Amazon wants to be proactive when it comes to artificial intelligence.
The e-commerce giant, which has been using machine learning and AI technologies for about 25 years, partnered with the Biden administration, signing eight major voluntary commitments last July, which are meant to promote the responsible development of AI and mitigate risks.
Those commitments included working toward sharing information between companies and governments on potential dangers or safety risks; working on systems to help users understand when content has been generated by AI; developing systems that could have a positive impact on major societal challenges and more.
Other major players like Google, Microsoft and Anthropic made similar promises. Though some of the companies, and many others not included in the commitments, have been leveraging AI systems for decades, generative AI’s publicly available debut brought on a slew of questions about safety, transparency and development of AI systems.
One year after Amazon’s initial vows, David Zapolsky, senior vice president of global public policy and general counsel for the company, posted a blog on the company’s website Monday, reflecting on the importance of responsible AI.
According to the blog, Amazon has created and released 11 AWS AI Service Cards, each of which shares an overview of the tool, homes in on its intended use cases, identifies its limitations and details the development process, including safety measures and fairness provisions.
Zapolsky said releasing that information helps companies building on AWS’ foundational technology develop applications and use cases effectively, while also mitigating safety and accuracy concerns.
“We think it’s fair to ask companies to be transparent about how they are developing and deploying AI, and there must be fostered trust between the public and the private sector regarding responsible and safe AI measures,” Zapolsky wrote.
The company has also begun “building guardrails into all of [its] AI products and services,” he noted, citing the invisible watermarks built into a system that allows customers to generate images. This, he argued, could help halt the spread of disinformation about products or services.
The AI-focused legislation drought in the U.S. continues, despite some states’ best efforts, a few bills proposed at the federal level and a variety of roadmaps from agencies and U.S. senators designed to encourage better regulation. Much of the hesitation around that kind of legislation stems from various industries’ belief that heavy-handed regulation could squash innovation in a rapidly evolving space.
But over the past year, Amazon seems to have found that fear to be far from the truth.
“It’s now very clear we can have rules that protect against risks, while also ensuring we don’t hinder innovation. But we still need to secure global alignment on responsible AI measures to protect U.S. economic prosperity and security. That alignment won’t be easy to accomplish, but we can do it. And if we succeed over the long term, American innovation can be enhanced, not restricted or harmed,” Zapolsky said.
Amazon has also been looking into various ways it can use AI to solve pressing problems, efforts that, given the company’s overarching market position, can stimulate further innovation across a variety of industries.
For instance, the company’s Climate Pledge Fund invested in AI recycling startup Glacier earlier this year. As part of the deal, the two are also running pilots together to identify ways to recycle packaging that currently cannot be recycled and to create sortation processes that align with Amazon’s goal to decrease its reliance on plastics.
And more recently, Amazon’s Industrial Innovation Fund put some dollars behind Standard Bots, which is using AI to train robotic arms that function well for warehousing and logistics.
Zapolsky said that, despite some companies’ progress with AI in the U.S., rapid and continued growth for the country likely won’t be possible without rules and guardrails in place.
“For the U.S. and our allies to unlock the benefits of AI while minimizing its risks, we must continue to work together to establish AI guardrails that are consistent with democratic values, secure economic prosperity and security, ensure global interoperability, promote competition and enhance safe and responsible innovation,” he said. “Frameworks like the White House commitments help ensure that companies, governments, academia and researchers alike can work together to deliver groundbreaking generative AI innovation with trust at the forefront.”