AWS

Maximize your file server data’s potential by using Ama...

In this post, we show you how to connect Amazon Q, a generative AI-powered assis...

LLM continuous self-instruct fine-tuning framework powe...

In this post, we present the continuous self-instruct fine-tuning framework as a...

Reducing hallucinations in LLM agents with a verified s...

This post introduces a solution to reduce hallucinations in Large Language Model...

Orchestrate an intelligent document processing workflow...

This intelligent document processing solution uses Amazon Bedrock FMs to orchest...

Turbocharging premium audit capabilities with the power...

Verisk’s Premium Audit Advisory Service is the leading source of technical infor...

Generate synthetic counterparty (CR) risk data with gen...

In this post, we explore how you can use LLMs with advanced Retrieval Augmented ...

Best practices for Amazon SageMaker HyperPod task gover...

In this post, we provide best practices to maximize the value of SageMaker Hyper...

Build verifiable explainability into financial services...

In this post, we explore how Automated Reasoning checks work through various com...

How Formula 1® uses generative AI to accelerate race-da...

In this post, we explain how F1 and AWS have developed a root cause analysis (RC...

Using Amazon Rekognition to improve bicycle safety

To better protect themselves, many cyclists are starting to ride with cameras mo...

Use language embeddings for zero-shot classification an...

In this post, we explore what language embeddings are and how they can be used t...

Build a dynamic, role-based AI agent using Amazon Bedro...

In this post, we explore how to build an application using Amazon Bedrock inline...

From concept to reality: Navigating the Journey of RAG ...

In this post, we explore the movement of RAG applications from their proof of co...

LLM-as-a-judge on Amazon Bedrock Model Evaluation

This blog post explores LLM-as-a-judge on Amazon Bedrock Model Evaluation, provi...

Achieve ~2x speed-up in LLM inference with Medusa-1 on ...

Researchers developed Medusa, a framework to speed up LLM inference by adding ex...

Fine-tune LLMs with synthetic data for context-based Q&...

In this post, we explore how to use Amazon Bedrock to generate synthetic trainin...
