Generative Data Intelligence

Tag: question answering

Use RAG for drug discovery with Knowledge Bases for Amazon Bedrock | Amazon Web Services

Amazon Bedrock provides a broad range of models from Amazon and third-party providers, including Anthropic, AI21, Meta, Cohere, and Stability AI, and covers a...

Enhance Amazon Connect and Lex with generative AI capabilities | Amazon Web Services

Effective self-service options are becoming increasingly critical for contact centers, but implementing them well presents unique challenges...

Integrate QnABot on AWS with ServiceNow | Amazon Web Services

Do your employees wait for hours on the telephone to open an IT ticket? Do they wait for an agent to triage an issue,...

Deploy large language models for a healthtech use case on Amazon SageMaker | Amazon Web Services

In 2021, the pharmaceutical industry generated $550 billion in US revenue. Pharmaceutical companies sell a variety of different, often novel, drugs on the market,...

Talk to your slide deck using multimodal foundation models hosted on Amazon Bedrock and Amazon SageMaker – Part 1 | Amazon Web Services

With the advent of generative AI, today’s foundation models (FMs), such as the large language models (LLMs) Claude 2 and Llama 2, can perform...

Deploy a Microsoft Teams gateway for Amazon Q, your business expert | Amazon Web Services

Amazon Q is a new generative AI-powered application that helps users get work done. Amazon Q can become your tailored business expert and let...

Build enterprise-ready generative AI solutions with Cohere foundation models in Amazon Bedrock and Weaviate vector database on AWS Marketplace | Amazon Web Services

Generative AI solutions have the potential to transform businesses by boosting productivity and improving customer experiences, and using large language models (LLMs) with these...

Reduce inference time for BERT models using neural architecture search and SageMaker Automated Model Tuning | Amazon Web Services

In this post, we demonstrate how to use neural architecture search (NAS) based structural pruning to compress a fine-tuned BERT model to improve model...

Fine-tune and deploy Llama 2 models cost-effectively in Amazon SageMaker JumpStart with AWS Inferentia and AWS Trainium | Amazon Web Services

Today, we’re excited to announce the availability of Llama 2 inference and fine-tuning support on AWS Trainium and AWS Inferentia instances in Amazon SageMaker...

Inference Llama 2 models with real-time response streaming using Amazon SageMaker | Amazon Web Services

With the rapid adoption of generative AI applications, these applications need to respond quickly in order to reduce the perceived latency...

Deploy a Slack gateway for Amazon Q, your business expert | Amazon Web Services

Amazon Q is a new generative AI-powered application that helps users get work done. Amazon Q can become your tailored business expert and let...

Generating value from enterprise data: Best practices for Text2SQL and generative AI | Amazon Web Services

Generative AI has opened up a lot of potential in the field of AI. We are seeing numerous uses, including text generation, code generation,...
