Plato Data Intelligence.
Vertical Search & AI.

Tag: Amazon SageMaker JumpStart

Accenture creates a regulatory document authoring solution using AWS generative AI services | Amazon Web Services

This post is co-written with Ilan Geller, Shuyu Yang, and Richa Gupta from Accenture. Bringing innovative new pharmaceutical drugs to market is a long...

Deploy large language models for a healthtech use case on Amazon SageMaker | Amazon Web Services

In 2021, the pharmaceutical industry generated $550 billion in US revenue. Pharmaceutical companies sell a variety of different, often novel, drugs on the market,...

Monitor embedding drift for LLMs deployed from Amazon SageMaker JumpStart | Amazon Web Services

One of the most useful application patterns for generative AI workloads is Retrieval Augmented Generation (RAG). In the RAG pattern, we find pieces of...
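Below is a minimal sketch of the retrieval step that the RAG pattern described above relies on. The corpus, query, and embedding model choice are hypothetical, and it uses a plain in-memory index rather than the vector store a production setup would use.

```python
# Minimal sketch of RAG retrieval: embed a query, find the closest passages,
# and place them in the prompt as context. Corpus and model are illustrative.
import numpy as np
from sentence_transformers import SentenceTransformer  # assumed embedding model

corpus = [
    "Amazon SageMaker JumpStart provides pretrained foundation models.",
    "Retrieval Augmented Generation grounds LLM answers in your own documents.",
    "Embedding drift can be detected by comparing embedding distributions over time.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")            # assumption: any sentence encoder works here
corpus_embeddings = model.encode(corpus, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k corpus passages most similar to the query."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = corpus_embeddings @ q                         # cosine similarity (vectors are normalized)
    top = np.argsort(scores)[::-1][:k]
    return [corpus[i] for i in top]

context = retrieve("How do I monitor embedding drift?")
prompt = "Answer using only this context:\n" + "\n".join(context)
print(prompt)
```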

Talk to your slide deck using multimodal foundation models hosted on Amazon Bedrock and Amazon SageMaker – Part 1 | Amazon Web Services

With the advent of generative AI, today's foundation models (FMs), such as the large language models (LLMs) Claude 2 and Llama 2, can perform...

Benchmark and optimize endpoint deployment in Amazon SageMaker JumpStart | Amazon Web Services

When deploying a large language model (LLM), machine learning (ML) practitioners typically care about two measurements for model serving performance: latency, defined by the...
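As a rough illustration of the latency measurement mentioned above, the sketch below times repeated invocations of an already deployed SageMaker endpoint with boto3. The endpoint name and payload are placeholders, and a single-client loop only approximates throughput.

```python
# Rough single-client latency measurement for a deployed SageMaker endpoint.
# Endpoint name and payload are placeholders, not values from the post.
import json
import time
import boto3

runtime = boto3.client("sagemaker-runtime")
ENDPOINT_NAME = "my-llm-endpoint"          # hypothetical endpoint
payload = json.dumps({"inputs": "Tell me about Amazon SageMaker JumpStart."})

latencies = []
for _ in range(20):
    start = time.perf_counter()
    runtime.invoke_endpoint(
        EndpointName=ENDPOINT_NAME,
        ContentType="application/json",
        Body=payload,
    )
    latencies.append(time.perf_counter() - start)

latencies.sort()
print(f"p50 latency: {latencies[len(latencies) // 2]:.3f}s")
print(f"p90 latency: {latencies[int(len(latencies) * 0.9)]:.3f}s")
print(f"throughput:  {len(latencies) / sum(latencies):.2f} requests/s (single client)")
```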

Architect defense-in-depth security for generative AI applications using the OWASP Top 10 for LLMs | Amazon Web Services

Generative artificial intelligence (AI) applications built around large language models (LLMs) have demonstrated the potential to create and accelerate economic value for businesses. Examples...

Build enterprise-ready generative AI solutions with Cohere foundation models in Amazon Bedrock and Weaviate vector database on AWS Marketplace | Amazon Web Services

Generative AI solutions have the potential to transform businesses by boosting productivity and improving customer experiences, and using large language models (LLMs) with these...

Reduce inference time for BERT models using neural architecture search and SageMaker Automated Model Tuning | Amazon Web Services

In this post, we demonstrate how to use neural architecture search (NAS) based structural pruning to compress a fine-tuned BERT model to improve model...

Fine-tune and deploy Llama 2 models cost-effectively in Amazon SageMaker JumpStart with AWS Inferentia and AWS Trainium | Amazon Web Services

Today, we're excited to announce the availability of Llama 2 inference and fine-tuning support on AWS Trainium and AWS Inferentia instances in Amazon SageMaker...
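For context on what a JumpStart deployment looks like, here is a short sketch using the SageMaker Python SDK. The model ID and instance type are illustrative assumptions; consult the post for the exact Neuron-compatible model IDs and supported instance types.

```python
# Sketch of deploying a JumpStart text-generation model with the SageMaker
# Python SDK; model_id and instance_type are illustrative assumptions.
from sagemaker.jumpstart.model import JumpStartModel

model = JumpStartModel(model_id="meta-textgeneration-llama-2-7b")  # assumed JumpStart model ID
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.inf2.xlarge",   # assumption: an Inferentia2 instance type
    accept_eula=True,                 # Llama 2 requires accepting the model EULA
)

response = predictor.predict({"inputs": "Summarize what AWS Inferentia is."})
print(response)

predictor.delete_endpoint()  # clean up to avoid ongoing charges
```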

Host the Whisper Model on Amazon SageMaker: exploring inference options | Amazon Web Services

OpenAI Whisper is an advanced automatic speech recognition (ASR) model with an MIT license. ASR technology finds utility in transcription services, voice assistants, and...
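Before comparing hosting options, it can help to see the model interface itself. The sketch below transcribes a local file with the open-source whisper package (pip install openai-whisper); the checkpoint size and audio path are placeholders, and this runs locally rather than on SageMaker.

```python
# Minimal local transcription with the open-source whisper package.
# The checkpoint and audio path below are placeholders.
import whisper

model = whisper.load_model("base")           # smaller checkpoints trade accuracy for speed
result = model.transcribe("meeting.mp3")     # hypothetical audio file
print(result["text"])
```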

Create a document lake using large-scale text extraction from documents with Amazon Textract | Amazon Web Services

AWS customers in healthcare, financial services, the public sector, and other industries store billions of documents as images or PDFs in Amazon Simple Storage...
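As a small illustration of the extraction step, the sketch below runs synchronous text detection on a single S3 object with Amazon Textract via boto3. The bucket and key are placeholders; a large-scale document lake like the one in the post would typically use the asynchronous StartDocumentTextDetection API instead.

```python
# Synchronous text extraction from a single S3 image with Amazon Textract.
# Bucket and object key are placeholders.
import boto3

textract = boto3.client("textract")

response = textract.detect_document_text(
    Document={"S3Object": {"Bucket": "my-document-bucket", "Name": "reports/example.png"}}
)

# Keep only LINE blocks, which hold the detected lines of text.
lines = [block["Text"] for block in response["Blocks"] if block["BlockType"] == "LINE"]
print("\n".join(lines))
```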

Generating value from enterprise data: Best practices for Text2SQL and generative AI | Amazon Web Services

Generative AI has opened up a lot of potential in the field of AI. We are seeing numerous uses, including text generation, code generation,...
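A bare-bones view of the Text2SQL idea is shown below: describe the schema, ask a question, and have an LLM return SQL. The schema and question are made up, and `generate` is a stand-in for whichever LLM endpoint is used (Amazon Bedrock, a SageMaker JumpStart endpoint, and so on).

```python
# Sketch of a Text2SQL prompt. Schema and question are made up; `generate`
# is a placeholder for the actual LLM call.
SCHEMA = """
CREATE TABLE orders (order_id INT, customer_id INT, order_date DATE, total DECIMAL(10, 2));
CREATE TABLE customers (customer_id INT, name VARCHAR(100), region VARCHAR(50));
"""

def build_text2sql_prompt(question: str) -> str:
    return (
        "You are a SQL assistant. Given the schema below, write a single SQL query "
        "that answers the question. Return only SQL.\n\n"
        f"Schema:\n{SCHEMA}\n"
        f"Question: {question}\nSQL:"
    )

def generate(prompt: str) -> str:
    """Placeholder for the actual LLM invocation (e.g., a SageMaker endpoint)."""
    raise NotImplementedError

if __name__ == "__main__":
    prompt = build_text2sql_prompt("What was total revenue per region in 2023?")
    print(prompt)  # inspect the prompt; pass it to generate() once an LLM is wired in
```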

