Generative Data Intelligence

Tag: Deep Learning Model

Maximize Stable Diffusion performance and lower inference costs with AWS Inferentia2 | Amazon Web Services

Generative AI models have been experiencing rapid growth in recent months due to their impressive capabilities in creating realistic text, images, code, and audio...

Optimize AWS Inferentia utilization with FastAPI and PyTorch models on Amazon EC2 Inf1 & Inf2 instances | Amazon Web Services

When deploying Deep Learning models at scale, it is crucial to effectively utilize the underlying hardware to maximize performance and cost benefits. For production...
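For readers who want a feel for the pattern that post describes, here is a minimal sketch of serving a Neuron-compiled PyTorch model behind FastAPI. The model file name, input shape, and endpoint are illustrative assumptions, not taken from the article.

```python
# Minimal sketch: serving a Neuron-compiled PyTorch model behind FastAPI.
# "model_neuron.pt" and the input shape below are placeholders (assumptions).
from typing import List

import torch
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# torch.jit.load works for models traced ahead of time with the Neuron SDK.
model = torch.jit.load("model_neuron.pt")
model.eval()

class PredictRequest(BaseModel):
    inputs: List[float]

@app.post("/predict")
def predict(req: PredictRequest):
    # Add a batch dimension, run inference without tracking gradients.
    x = torch.tensor(req.inputs).unsqueeze(0)
    with torch.no_grad():
        y = model(x)
    return {"outputs": y.squeeze(0).tolist()}
```

Such an app would typically be launched with a command like `uvicorn app:app --host 0.0.0.0 --port 8080`, with one worker per NeuronCore being a common way to keep the accelerator busy.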

Stanford academics develop Street View-to-location AI

A trio of Stanford computer scientists has developed a deep learning model to geolocate Google Street View images, meaning it can figure out generally...

Predict vehicle fleet failure probability using Amazon SageMaker Jumpstart | Amazon Web Services

Predictive maintenance is critical in the automotive industry because it can prevent unexpected mechanical failures and the reactive maintenance activities that disrupt operations. By predicting vehicle...

Auto-labeling module for deep learning-based Advanced Driver Assistance Systems on AWS | Amazon Web Services

In computer vision (CV), adding tags to identify objects of interest or bounding boxes to locate the objects is called labeling. It’s one of...

How Light & Wonder built a predictive maintenance solution for gaming machines on AWS | Amazon Web Services

This post is co-written with Aruna Abeyakoon and Denisse Colin from Light and Wonder (L&W). Headquartered in Las Vegas, Light & Wonder, Inc. is...

Improve Invoice Processing Accuracy with Nanonets and ChatGPT

I wouldn’t be exaggerating if I said the average person sends or receives at least 10 invoices per week. With growing digitalization, businesses are dealing...

AWS Inferentia2 builds on AWS Inferentia1 by delivering 4x higher throughput and 10x lower latency | Amazon Web Services

The size of machine learning (ML) models, including large language models (LLMs) and foundation models (FMs), is growing fast year over year, and these models need faster and...

Deploy Falcon-40B with large model inference DLCs on Amazon SageMaker | Amazon Web Services

Last week, Technology Innovation Institute (TII) launched TII Falcon LLM, an open-source foundational large language model (LLM). Trained on 1 trillion tokens with Amazon...

Fine-tune GPT-J using an Amazon SageMaker Hugging Face estimator and the model parallel library | Amazon Web Services

GPT-J is an open-source 6-billion-parameter model released by EleutherAI. The model is trained on the Pile and can perform various tasks in language...
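As a rough illustration of the approach in that post, the sketch below configures a SageMaker Hugging Face estimator with the SageMaker model parallel library enabled. The training script name, container versions, parallelism settings, and S3 paths are assumptions for illustration only.

```python
# Hedged sketch: SageMaker Hugging Face estimator with the model parallel
# library enabled. Script name, versions, parallelism settings, and S3 URIs
# are illustrative assumptions.
import sagemaker
from sagemaker.huggingface import HuggingFace

role = sagemaker.get_execution_role()

smp_options = {
    "enabled": True,
    "parameters": {"partitions": 4, "ddp": True},  # illustrative values only
}
mpi_options = {"enabled": True, "processes_per_host": 8}

estimator = HuggingFace(
    entry_point="train_gptj.py",           # hypothetical training script
    source_dir="scripts",
    role=role,
    instance_type="ml.p4d.24xlarge",
    instance_count=1,
    transformers_version="4.26",            # check the supported DLC versions
    pytorch_version="1.13",
    py_version="py39",
    hyperparameters={"epochs": 1, "model_name": "EleutherAI/gpt-j-6b"},
    distribution={
        "smdistributed": {"modelparallel": smp_options},
        "mpi": mpi_options,
    },
)

estimator.fit({"train": "s3://my-bucket/gptj-train"})  # placeholder S3 URI
```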

Announcing the launch of new Hugging Face LLM Inference containers on Amazon SageMaker | Amazon Web Services

This post is co-written with Philipp Schmid and Jeff Boudier from Hugging Face. Today, as part of Amazon Web Services’ partnership with Hugging Face,...
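A minimal sketch of deploying an open LLM with one of these Hugging Face LLM inference containers might look like the following; the container version, model ID, instance type, and environment settings are illustrative assumptions rather than values from the announcement.

```python
# Hedged sketch: deploying an open LLM with a Hugging Face LLM inference
# container on SageMaker. Version, model ID, and instance type are assumptions.
import sagemaker
from sagemaker.huggingface import HuggingFaceModel, get_huggingface_llm_image_uri

role = sagemaker.get_execution_role()

# Look up the LLM inference (TGI-based) container image.
image_uri = get_huggingface_llm_image_uri("huggingface", version="0.8.2")

model = HuggingFaceModel(
    role=role,
    image_uri=image_uri,
    env={
        "HF_MODEL_ID": "tiiuae/falcon-7b-instruct",  # example model, not prescriptive
        "SM_NUM_GPUS": "1",
        "MAX_INPUT_LENGTH": "1024",
        "MAX_TOTAL_TOKENS": "2048",
    },
)

predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.2xlarge",
    container_startup_health_check_timeout=600,  # large models can take a while to load
)

print(predictor.predict({"inputs": "What is AWS Inferentia?"}))
```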

Host ML models on Amazon SageMaker using Triton: CV model with PyTorch backend | Amazon Web Services

PyTorch is a machine learning (ML) framework based on the Torch library, used for applications such as computer vision and natural language processing. One...
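Triton's PyTorch (LibTorch) backend serves TorchScript artifacts, so a typical first step in that workflow is exporting the CV model and placing it in a Triton-style model repository. The sketch below uses a torchvision ResNet-50 and a local repository path purely as assumed examples; the repository would also need a config.pbtxt, which is not shown.

```python
# Hedged sketch: export a torchvision model to TorchScript for Triton's
# PyTorch backend. Model choice and repository layout are illustrative;
# a config.pbtxt is also required but not shown here.
from pathlib import Path

import torch
import torchvision

model = torchvision.models.resnet50(weights="IMAGENET1K_V2")
model.eval()

# Trace with a representative NCHW input (batch of one 224x224 RGB image).
example = torch.randn(1, 3, 224, 224)
traced = torch.jit.trace(model, example)

# Triton expects <repository>/<model_name>/<version>/model.pt
out_dir = Path("model_repository/resnet50/1")
out_dir.mkdir(parents=True, exist_ok=True)
traced.save(str(out_dir / "model.pt"))
```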
