
Announcing new tools and capabilities to enable responsible AI innovation

The rapid growth of generative AI brings promising new innovation and, at the same time, raises new challenges. These challenges include some that were common before generative AI, such as bias and explainability, and new ones unique to foundation models (FMs), including hallucination and toxicity. At AWS, we are committed to developing generative AI responsibly, taking a people-centric approach that prioritizes education, science, and our customers, so that we integrate responsible AI across the end-to-end AI lifecycle.

Over the past year, we have introduced new capabilities in our generative AI applications and models, such as built-in security scanning in Amazon CodeWhisperer, training to detect and block harmful content in Amazon Titan, and data privacy protections in Amazon Bedrock. Our investment in safe, transparent, and responsible generative AI includes collaboration with the global community and policymakers: we encouraged and supported both the White House Voluntary AI Commitments and the AI Safety Summit in the UK. And we continue to work hand in hand with customers to operationalize responsible AI with purpose-built tools like Amazon SageMaker Clarify, ML Governance with Amazon SageMaker, and more.

Introducing new responsible AI innovation

As generative AI scales to new industries, organizations, and use cases, this growth must be accompanied by a sustained investment in responsible FM development. Customers want their FMs to be built with safety, fairness, and security in mind, so that they can in turn deploy AI responsibly. At AWS re:Invent this year, we are excited to announce new capabilities to foster responsible generative AI innovation: new built-in tools, customer protections, resources to enhance transparency, and tools to combat disinformation. We aim to give customers the information they need to evaluate FMs against key responsible AI considerations, like toxicity and robustness, and to introduce guardrails that apply safeguards based on customer use cases and responsible AI policies. At the same time, customers want to be better informed about the safety, fairness, security, and other properties of AI services and FMs as they use them within their own organizations. We are excited to announce more resources to help customers better understand our AWS AI services and to deliver the transparency they are asking for.

Implementing safeguards: Guardrails for Amazon Bedrock

Safety is a priority when it comes to introducing generative AI at scale. Organizations want to promote safe interactions between their customers and generative AI applications that avoid harmful or offensive language and align with company policies. The easiest way to do that is to put consistent safeguards in place across the whole organization so everyone can innovate safely. Yesterday we announced the preview of Guardrails for Amazon Bedrock—a new capability that makes it easy to implement application-specific safeguards based on customer use cases and responsible AI policies.

Guardrails drive consistency in how FMs on Amazon Bedrock respond to undesirable and harmful content within applications. Customers can apply guardrails to large language models on Amazon Bedrock as well as to fine-tuned models and in combination with Agents for Amazon Bedrock. Guardrails lets you specify topics to be avoided, and the service automatically detects and prevents queries and responses that fall into restricted categories. Customers can also configure content filter thresholds across categories including hate speech, insults, sexualized language, and violence to filter out harmful content to the desired level. For example, an online banking application can be set up to avoid providing investment advice and limit inappropriate content (such as hate speech, insults, and violence). In the near future, customers will also be able to redact personally identifiable information (PII) in user inputs and FMs’ responses, set profanity filters, and provide a list of custom words to block in interactions between users and FMs, improving compliance and further protecting users. With Guardrails, you can innovate faster with generative AI while maintaining protections and safeguards consistent with company policies.
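To make this concrete, here is a minimal sketch of what defining such a guardrail can look like with the AWS SDK for Python (boto3). Guardrails was in preview at the time of writing, so treat the create_guardrail operation and field names below (which follow the later, generally available API), along with the placeholder names and messages, as assumptions to verify against the current Amazon Bedrock documentation.

```python
# Minimal sketch: defining a guardrail for the online-banking example with
# boto3. The create_guardrail call follows the generally available API; the
# preview API may differ, and names/values here are illustrative placeholders.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

response = bedrock.create_guardrail(
    name="online-banking-guardrail",  # placeholder name
    description="Avoids investment advice and filters harmful content.",
    # Denied topics: queries and responses matching these are blocked.
    topicPolicyConfig={
        "topicsConfig": [
            {
                "name": "InvestmentAdvice",
                "definition": "Recommendations on stocks, funds, or other investments.",
                "examples": ["Which stocks should I buy this year?"],
                "type": "DENY",
            }
        ]
    },
    # Content filter thresholds per category of harmful content.
    contentPolicyConfig={
        "filtersConfig": [
            {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "INSULTS", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "VIOLENCE", "inputStrength": "MEDIUM", "outputStrength": "MEDIUM"},
        ]
    },
    # Messages returned when an input or output is blocked.
    blockedInputMessaging="Sorry, I can't help with that request.",
    blockedOutputsMessaging="Sorry, I can't provide that response.",
)
print(response["guardrailId"], response["version"])
```

Once created, the guardrail is referenced by its ID when invoking models or agents, so the same policy can be applied consistently across applications.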

Identifying the best FM for a specific use case: Model Evaluation in Amazon Bedrock

Today, organizations have a wide range of FM options to power their generative AI applications. To strike the right balance of accuracy and performance for their use case, organizations must efficiently compare models and find the best option based on the key responsible AI and quality metrics that matter to them. To evaluate models, organizations must first spend days identifying benchmarks, setting up evaluation tools, and running assessments, all of which requires deep expertise in data science. Furthermore, these tests are not useful for evaluating subjective criteria (e.g., brand voice, relevance, and style) that require judgment through tedious, time-intensive, human-review workflows. The time, expertise, and resources required for these evaluations, for every new use case, make it difficult for organizations to evaluate models against responsible AI dimensions and make an informed choice about which model will provide the most accurate, safe experience for their customers.

Now available in preview, Model Evaluation on Amazon Bedrock helps customers evaluate, compare, and select the best FMs for their specific use case based on custom metrics, such as accuracy and safety, using either automatic or human evaluations. In the Amazon Bedrock console, customers choose the FMs they want to compare for a given task, such as question-answering or content summarization. For automatic evaluations, customers select predefined evaluation criteria (e.g., accuracy, robustness, and toxicity) and upload their own testing dataset or select from built-in, publicly available datasets. For subjective criteria or nuanced content requiring judgment, customers can easily set up human-based evaluation workflows with just a few clicks. These workflows leverage a customer's in-house workteam or use a managed workforce provided by AWS to evaluate model responses. During human-based evaluations, customers define use case-specific metrics (e.g., relevance, style, and brand voice). Once customers finish the setup process, Amazon Bedrock runs evaluations and generates a report, so customers can easily understand how the model performed across key safety and accuracy criteria and select the best model for their use case.
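As an illustration of what the automatic path can look like outside the console, the sketch below starts an evaluation job with boto3. The feature was in preview when this was written, so the create_evaluation_job operation, the Builtin.* dataset and metric names, and the IAM role and S3 locations are assumptions to check against the current documentation.

```python
# Hypothetical sketch: kicking off an automatic model evaluation job in
# Amazon Bedrock with boto3. Field names follow the later documented API
# and may differ from the preview; ARNs and S3 URIs are placeholders.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

job = bedrock.create_evaluation_job(
    jobName="qa-eval-titan-text-express",
    roleArn="arn:aws:iam::123456789012:role/BedrockEvalRole",  # placeholder
    evaluationConfig={
        "automated": {
            "datasetMetricConfigs": [
                {
                    "taskType": "QuestionAndAnswer",
                    "dataset": {"name": "Builtin.BoolQ"},  # built-in public dataset
                    "metricNames": [
                        "Builtin.Accuracy",
                        "Builtin.Robustness",
                        "Builtin.Toxicity",
                    ],
                }
            ]
        }
    },
    inferenceConfig={
        "models": [
            {
                "bedrockModel": {
                    "modelIdentifier": "amazon.titan-text-express-v1",
                    "inferenceParams": '{"temperature": 0}',  # JSON string
                }
            }
        ]
    },
    outputDataConfig={"s3Uri": "s3://my-eval-bucket/results/"},  # placeholder
)
print(job["jobArn"])
```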

This ability to evaluate models is not limited to Amazon Bedrock: customers can also use model evaluation in Amazon SageMaker Clarify to easily evaluate, compare, and select the best FM across key quality and responsibility metrics, such as accuracy, robustness, and toxicity.
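For scripted evaluations along the same lines, the open-source fmeval library that underpins SageMaker Clarify's FM evaluations can be used directly. The module paths, runner parameters, and Titan request/response shapes below are assumptions drawn from the library's published examples; check the aws/fmeval repository for the current interface.

```python
# Hypothetical sketch using the open-source fmeval library (github.com/aws/fmeval),
# which backs SageMaker Clarify FM evaluations. Paths and parameters are
# assumptions; consult the repository for the current API.
from fmeval.eval_algorithms.toxicity import Toxicity, ToxicityConfig
from fmeval.model_runners.bedrock_model_runner import BedrockModelRunner

# Wrap a Bedrock-hosted model so the evaluator can invoke it; the request
# template and output JMESPath assume the Amazon Titan text response shape.
model_runner = BedrockModelRunner(
    model_id="amazon.titan-text-express-v1",
    content_template='{"inputText": $prompt}',
    output="results[0].outputText",
)

# Run the built-in toxicity evaluation on its default open datasets.
eval_algo = Toxicity(ToxicityConfig())
for result in eval_algo.evaluate(model=model_runner, save=True):
    print(result.dataset_name, result.dataset_scores)
```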

Combating disinformation: Watermarking in Amazon Titan

Today, we announced Amazon Titan Image Generator in preview, which empowers customers to rapidly produce and enhance high-quality images at scale. We considered responsible AI during each stage of the model development process, including selecting training data, building filtering capabilities to detect and remove inappropriate user inputs and model outputs, and improving the demographic diversity of our model outputs. All Amazon Titan-generated images contain an invisible watermark by default, which is designed to help reduce the spread of disinformation by providing a discreet mechanism to identify AI-generated images. AWS is among the first model providers to widely release built-in invisible watermarks that are integrated into image outputs and are designed to be resistant to alterations.

Building trust: Standing behind our models and applications with indemnification

Building customer trust is core to AWS. We have been on a journey with our customers since our inception, and with the growth of generative AI, we remain committed to building innovative technology together. To enable customers to harness the power of our generative AI, they need to know they are protected. AWS offers copyright indemnity coverage for outputs of the following Amazon generative AI services: Amazon Titan Text Express, Amazon Titan Text Lite, Amazon Titan Embeddings, Amazon Titan Multimodal Embeddings, Amazon CodeWhisperer Professional, AWS HealthScribe, Amazon Lex, and Amazon Personalize. This means that customers who use the services responsibly are protected from third-party claims alleging copyright infringement by the outputs generated by those services (see Section 50.10 of the Service Terms). In addition, our standard IP indemnity for use of the services protects customers from third-party claims alleging IP infringement by the services and the data used to train them. To put it another way, if you use an Amazon generative AI service listed above and someone sues you for IP infringement, AWS will defend that lawsuit, which includes covering any judgment against you or settlement costs.

We stand behind our generative AI services and work to continually improve them. As AWS launches new services and generative AI continues to evolve, AWS will continue to relentlessly focus on earning and maintaining customer trust.

Enhancing transparency: AWS AI Service Card for Amazon Titan Text

We introduced AWS AI Service Cards at re:Invent 2022 as a transparency resource to help customers better understand our AWS AI services. AI Service Cards are a form of responsible AI documentation that provide customers with a single place to find information on the intended use cases and limitations, responsible AI design choices, and deployment and performance optimization best practices for our AI services. They are part of a comprehensive development process we undertake to build our services in a responsible way that addresses fairness, explainability, veracity and robustness, governance, transparency, privacy and security, safety, and controllability.

At re:Invent this year, we are announcing a new AI Service Card for Amazon Titan Text to increase transparency in foundation models. We are also launching four new AI Service Cards: Amazon Comprehend Detect PII, Amazon Transcribe Toxicity Detection, Amazon Rekognition Face Liveness, and AWS HealthScribe. You can explore each of these cards on the AWS website. As generative AI continues to grow and evolve, transparency on how technology is developed, tested, and used will be a vital component in earning the trust of organizations and their customers alike. At AWS, we are committed to continuing to bring transparency resources like AI Service Cards to the broader community, and to iterating and gathering feedback on the best ways forward.

Investing in responsible AI across the entire generative AI lifecycle

We are excited about the new innovations announced at re:Invent this week that give our customers more tools, resources, and built-in protections to build and use generative AI safely. From model evaluation to guardrails to watermarking, customers can now bring generative AI to their organizations faster, while mitigating risk. New protections for customers, like IP indemnity coverage, and new resources to enhance transparency, like additional AI Service Cards, are also key examples of our commitment to building trust across technology companies, policymakers, community groups, scientists, and more. We continue to make meaningful investments in responsible AI across the lifecycle of a foundation model to help our customers scale AI in a safe, secure, and responsible way.


About the Authors

Peter Hallinan leads initiatives in the science and practice of Responsible AI at AWS AI, alongside a team of responsible AI experts. He has deep expertise in AI (PhD, Harvard) and entrepreneurship (Blindsight, sold to Amazon). His volunteer activities have included serving as a consulting professor at the Stanford University School of Medicine and as the president of the American Chamber of Commerce in Madagascar. When possible, he's off in the mountains with his children: skiing, climbing, hiking, and rafting.

Vasi Philomin is currently the VP of Generative AI at AWS. He leads generative AI efforts including Amazon Bedrock, Amazon Titan, and Amazon CodeWhisperer.
