Integrate foundation models into your code with Amazon Bedrock

The rise of large language models (LLMs) and foundation models (FMs) has revolutionized the field of natural language processing (NLP) and artificial intelligence (AI). These powerful models, trained on vast amounts of data, can generate human-like text, answer questions, and even engage in creative writing tasks. However, training and deploying such models from scratch is a complex and resource-intensive process, often requiring specialized expertise and significant computational resources.

Enter Amazon Bedrock, a fully managed service that provides developers with seamless access to cutting-edge FMs through simple APIs. Amazon Bedrock streamlines the integration of state-of-the-art generative AI capabilities, offering pre-trained models that can be customized and deployed without the need for extensive model training from scratch. The service retains the flexibility of model customization while simplifying the process, making it straightforward for developers to use advanced generative AI technologies in their applications. With Amazon Bedrock, you can integrate advanced NLP features, such as language understanding, text generation, and question answering, into your applications.

In this post, we explore how to integrate Amazon Bedrock FMs into your code base, enabling you to build powerful AI-driven applications with ease. We guide you through the process of setting up the environment, creating the Amazon Bedrock client, prompting and wrapping code, invoking the models, and using various models and streaming invocations. By the end of this post, you’ll have the knowledge and tools to harness the power of Amazon Bedrock FMs, accelerating your product development timelines and empowering your applications with advanced AI capabilities.

Solution overview

Amazon Bedrock provides a simple and efficient way to use powerful FMs through APIs, without the need for training custom models. For this post, we run the code in a Jupyter notebook within VS Code and use Python. The process of integrating Amazon Bedrock into your code base involves the following steps:

  1. Set up your development environment by importing the necessary dependencies and creating an Amazon Bedrock client. This client will serve as the entry point for interacting with Amazon Bedrock FMs.
  2. After the Amazon Bedrock client is set up, you can define prompts or code snippets that will be used to interact with the FMs. These prompts can include natural language instructions or code snippets for the model to process and generate output from.
  3. With the prompts defined, you can invoke the Amazon Bedrock FM by passing the prompts to the client. Amazon Bedrock supports various models, each with its own strengths and capabilities, allowing you to choose the most suitable model for your use case.
  4. Depending on the model and the prompts provided, Amazon Bedrock will generate output, which can include natural language text, code snippets, or a combination of both. You can then process and integrate this output into your application as needed.
  5. For certain models and use cases, Amazon Bedrock supports streaming invocations, which allow you to interact with the model in real time. This can be particularly useful for conversational AI or interactive applications where you need to exchange multiple prompts and responses with the model.

Throughout this post, we provide detailed code examples and explanations for each step, helping you seamlessly integrate Amazon Bedrock FMs into your code base. By using these powerful models, you can enhance your applications with advanced NLP capabilities, accelerate your development process, and deliver innovative solutions to your users.
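As a preview, the entire flow fits in a short Python script. The following minimal sketch strings the steps together using the Titan Text G1 – Express model that we work with later in this post; the remaining sections walk through each step in detail.

import boto3, json

# 1. Create the Amazon Bedrock runtime client.
bedrock_runtime = boto3.client(
    service_name='bedrock-runtime',
    region_name='us-east-1'
)

# 2. Define the prompt and the JSON request body.
body = json.dumps({
    "inputText": "Hello, who are you?",
    "textGenerationConfig": {"maxTokenCount": 8192, "stopSequences": [], "temperature": 0, "topP": 1}
})

# 3. Invoke the model.
response = bedrock_runtime.invoke_model(
    modelId="amazon.titan-text-express-v1",
    contentType="application/json",
    accept="application/json",
    body=body
)

# 4. Unpack and print the generated output.
response_body = json.loads(response['body'].read())
print(response_body['results'][0]['outputText'])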

Prerequisites

Before you dive into the integration process, make sure you have the following prerequisites in place:

  • AWS account – You’ll need an AWS account to access and use Amazon Bedrock. If you don’t have one, you can create a new account.
  • Development environment – Set up an integrated development environment (IDE) with your preferred coding language and tools. You can interact with Amazon Bedrock using AWS SDKs available in Python, Java, Node.js, and more.
  • AWS credentials – Configure your AWS credentials in your development environment to authenticate with AWS services. You can find instructions on how to do this in the AWS documentation for your chosen SDK. We walk through a Python example in this post; one option is shown in the sketch after this list.
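
For example, if you have set up a named profile with the AWS CLI, one way to authenticate from Python is to create a boto3 session from that profile (a minimal sketch; the profile name my-profile is a placeholder):

import boto3

# Use credentials from a named profile configured with `aws configure --profile my-profile`.
session = boto3.Session(profile_name='my-profile')
bedrock_runtime = session.client(service_name='bedrock-runtime', region_name='us-east-1')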

With these prerequisites in place, you’re ready to start integrating Amazon Bedrock FMs into your code.

In your IDE, create a new file. For this example, we use a Jupyter notebook (Kernel: Python 3.12.0).

In the following sections, we demonstrate how to implement the solution in a Jupyter notebook.

Set up the environment

To begin, import the necessary dependencies for interacting with Amazon Bedrock. The following is an example of how you can do this in Python.

The first step is to import boto3 and json:

import boto3, json

Next, create an instance of the Amazon Bedrock client. This client will serve as the entry point for interacting with the FMs. The following is a code example of how to create the client:

bedrock_runtime = boto3.client(
    service_name='bedrock-runtime',
    region_name='us-east-1'
)

Define prompts and code snippets

With the Amazon Bedrock client set up, define the prompts and code snippets that will be used to interact with the FMs. These prompts can include natural language instructions or code snippets for the model to process and generate output from.

In this example, we ask the model, “Hello, who are you?”.

To send the prompt to the API endpoint, you need some keyword arguments to pass in. You can get these arguments from the Amazon Bedrock console.

  1. On the Amazon Bedrock console, choose Base models in the navigation pane.
  2. Select Titan Text G1 – Express.
  3. Choose the model name (Titan Text G1 – Express) and go to the API request.
  4. Copy the API request:
{
  "modelId": "amazon.titan-text-express-v1",
  "contentType": "application/json",
  "accept": "application/json",
  "body": "{\"inputText\":\"this is where you place your input text\",\"textGenerationConfig\":{\"maxTokenCount\":8192,\"stopSequences\":[],\"temperature\":0,\"topP\":1}}"
}
  5. Insert this code in the Jupyter notebook with the following minor modifications:
    • We assign the API request to a keyword arguments dictionary (kwargs).
    • We replace “this is where you place your input text” with “Hello, who are you?”
  6. Print the keyword arguments:
kwargs = {
 "modelId": "amazon.titan-text-express-v1",
 "contentType": "application/json",
 "accept": "application/json",
 "body": "{\"inputText\":\"Hello, who are you?\",\"textGenerationConfig\":{\"maxTokenCount\":8192,\"stopSequences\":[],\"temperature\":0,\"topP\":1}}"
}
print(kwargs)

This should give you the following output:

{'modelId': 'amazon.titan-text-express-v1', 'contentType': 'application/json', 'accept': 'application/json', 'body': '{"inputText":"Hello, who are you?","textGenerationConfig":{"maxTokenCount":8192,"stopSequences":[],"temperature":0,"topP":1}}'}
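
Note that the value of body is itself a JSON-encoded string, which is why the inner quotation marks are escaped. Rather than escaping them by hand, you can build the same body with json.dumps (a sketch equivalent to the request above):

# Build the request body programmatically; json.dumps handles the escaping.
kwargs = {
    "modelId": "amazon.titan-text-express-v1",
    "contentType": "application/json",
    "accept": "application/json",
    "body": json.dumps({
        "inputText": "Hello, who are you?",
        "textGenerationConfig": {
            "maxTokenCount": 8192,
            "stopSequences": [],
            "temperature": 0,
            "topP": 1
        }
    })
}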

Invoke the model

With the prompt defined, you can now invoke the Amazon Bedrock FM.

  1. Pass the prompt to the client:
response = bedrock_runtime.invoke_model(**kwargs)
response

This invokes the Amazon Bedrock model with the provided prompt and prints the response, whose generated content is returned in a streaming body object.

{'ResponseMetadata': {'RequestId': '3cfe2718-b018-4a50-94e3-59e2080c75a3',
'HTTPStatusCode': 200,
'HTTPHeaders': {'date': 'Fri, 18 Oct 2024 11:30:14 GMT',
'content-type': 'application/json',
'content-length': '255',
'connection': 'keep-alive',
'x-amzn-requestid': '3cfe2718-b018-4a50-94e3-59e2080c75a3',
'x-amzn-bedrock-invocation-latency': '1980',
'x-amzn-bedrock-output-token-count': '37',
'x-amzn-bedrock-input-token-count': '6'},
'RetryAttempts': 0},
'contentType': 'application/json',
'body': <botocore.response.StreamingBody at 0x...>}

The preceding invoke_model call on the Amazon Bedrock runtime works the same way for whichever FM you choose to invoke.

  2. Unpack the JSON string as follows:
response_body = json.loads(response.get('body').read())
response_body

You should get a response as follows (this is the response we got from the Titan Text G1 – Express model for the prompt we supplied).

{'inputTextTokenCount': 6, 'results': [{'tokenCount': 37, 'outputText': '\nI am Amazon Titan, a large language model built by AWS. It is designed to assist you with tasks and answer any questions you may have. How may I help you?', 'completionReason': 'FINISH'}]}
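
Based on the response structure shown above, you can pull out just the generated text:

# The generated text lives under results[0]['outputText'].
output_text = response_body['results'][0]['outputText']
print(output_text)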

Experiment with different models

Amazon Bedrock offers various FMs, each with its own strengths and capabilities. You specify which model you want to use by passing its modelId when you invoke the model.

  1. As in the previous Titan Text G1 – Express example, get the API request from the Amazon Bedrock console. This time, we use Anthropic’s Claude on Amazon Bedrock.

{
  "modelId": "anthropic.claude-v2",
  "contentType": "application/json",
  "accept": "*/*",
  "body": "{\"prompt\":\"\\n\\nHuman: Hello world\\n\\nAssistant:\",\"max_tokens_to_sample\":300,\"temperature\":0.5,\"top_k\":250,\"top_p\":1,\"stop_sequences\":[\"\\n\\nHuman:\"],\"anthropic_version\":\"bedrock-2023-05-31\"}"
}

Anthropic’s Claude accepts the prompt in a different format (\\n\\nHuman: … \\n\\nAssistant:), so the API request on the Amazon Bedrock console provides the prompt in the format that Anthropic’s Claude can accept.

  2. Edit the API request and put it in the keyword argument:
    kwargs = {
      "modelId": "anthropic.claude-v2",
      "contentType": "application/json",
      "accept": "*/*",
      "body": "{\"prompt\":\"\\n\\nHuman: we have received some text without any context.\\nWe will need to label the text with a title so that others can quickly see what the text is about \\n\\nHere is the text between these <text></text> XML tags\\n\\n<text>\\nToday I sent to the beach and saw a whale. I ate an ice-cream and swam in the sea\\n</text>\\n\\nProvide title between <title></title> XML tags\\n\\nAssistant:\",\"max_tokens_to_sample\":300,\"temperature\":0.5,\"top_k\":250,\"top_p\":1,\"stop_sequences\":[\"\\n\\nHuman:\"],\"anthropic_version\":\"bedrock-2023-05-31\"}"
    }
    print(kwargs)

You should get the following response:

{'modelId': 'anthropic.claude-v2', 'contentType': 'application/json', 'accept': '*/*', 'body': '{"prompt":"\\n\\nHuman: we have received some text without any context.\\nWe will need to label the text with a title so that others can quickly see what the text is about \\n\\nHere is the text between these <text></text> XML tags\\n\\n<text>\\nToday I sent to the beach and saw a whale. I ate an ice-cream and swam in the sea\\n</text>\\n\\nProvide title between <title></title> XML tags\\n\\nAssistant:","max_tokens_to_sample":300,"temperature":0.5,"top_k":250,"top_p":1,"stop_sequences":["\\n\\nHuman:"],"anthropic_version":"bedrock-2023-05-31"}'}

  3. With the prompt defined, you can now invoke the Amazon Bedrock FM by passing the prompt to the client:
response = bedrock_runtime.invoke_model(**kwargs)
response

You should get the following output:

{'ResponseMetadata': {'RequestId': '72d2b1c7-cbc8-42ed-9098-2b4eb41cd14e', 'HTTPStatusCode': 200, 'HTTPHeaders': {'date': 'Thu, 17 Oct 2024 15:07:23 GMT', 'content-type': 'application/json', 'content-length': '121', 'connection': 'keep-alive', 'x-amzn-requestid': '72d2b1c7-cbc8-42ed-9098-2b4eb41cd14e', 'x-amzn-bedrock-invocation-latency': '538', 'x-amzn-bedrock-output-token-count': '15', 'x-amzn-bedrock-input-token-count': '100'}, 'RetryAttempts': 0}, 'contentType': 'application/json', 'body': <botocore.response.StreamingBody at 0x...>}

  4. Unpack the JSON string as follows:
response_body = json.loads(response.get('body').read())
response_body

This results in the following output, containing the generated title for the given text.

{'type': 'completion',
'completion': ' <title>A Day at the Beach</title>',
'stop_reason': 'stop_sequence',
'stop': '\n\nHuman:'}

  5. Print the completion:
completion = response_body.get('completion')
completion

Because the response is returned within the XML tags you defined, you can parse the title out of the response and display it to the client.

' <title>A Day at the Beach</title>'
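
One minimal way to extract the title from the tags is a regular expression (a sketch; an XML-aware parser would work as well):

import re

# Pull the text between the <title> tags out of the completion.
match = re.search(r'<title>(.*?)</title>', completion)
title = match.group(1).strip() if match else completion.strip()
print(title)  # A Day at the Beach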

Invoke model with streaming code

For certain models and use cases, Amazon Bedrock supports streaming invocations, which allow you to interact with the model in real time. This can be particularly useful for conversational AI or interactive applications where you need to exchange multiple prompts and responses with the model. For example, if you’re asking the FM for an article or story, you might want to stream the output of the generated content.

  1. Import the dependencies and create the Amazon Bedrock client:
import boto3, json

bedrock_runtime = boto3.client(
    service_name='bedrock-runtime',
    region_name='us-east-1'
)
  2. Define the prompt as follows:
prompt = "write an article about fictional planet Foobar"
  3. Edit the API request and put it in the keyword argument as before. We use the API request of the claude-v2 model:
kwargs = {
  "modelId": "anthropic.claude-v2",
  "contentType": "application/json",
  "accept": "*/*",
  "body": "{\"prompt\":\"\\n\\nHuman: " + prompt + "\\nAssistant:\",\"max_tokens_to_sample\":300,\"temperature\":0.5,\"top_k\":250,\"top_p\":1,\"stop_sequences\":[\"\\n\\nHuman:\"],\"anthropic_version\":\"bedrock-2023-05-31\"}"
}
  4. You can now invoke the Amazon Bedrock FM by passing the prompt to the client. This time, we use invoke_model_with_response_stream instead of invoke_model:
response = bedrock_runtime.invoke_model_with_response_stream(**kwargs)

# The response body is an event stream; print each chunk of generated text as it arrives.
stream = response.get('body')
if stream:
    for event in stream:
        chunk = event.get('chunk')
        if chunk:
            print(json.loads(chunk.get('bytes')).get('completion'), end="")

You get a response like the following as streaming output:

Here is a draft article about the fictional planet Foobar: Exploring the Mysteries of Planet Foobar Far off in a distant solar system lies the mysterious planet Foobar. This strange world has confounded scientists and explorers for centuries with its bizarre environments and alien lifeforms. Foobar is slightly larger than Earth and orbits a small, dim red star. From space, the planet appears rusty orange due to its sandy deserts and red rock formations. While the planet looks barren and dry at first glance, it actually contains a diverse array of ecosystems. The poles of Foobar are covered in icy tundra, home to resilient lichen-like plants and furry, six-legged mammals. Moving towards the equator, the tundra slowly gives way to rocky badlands dotted with scrubby vegetation. This arid zone contains ancient dried up riverbeds that point to a once lush environment. The heart of Foobar is dominated by expansive deserts of fine, deep red sand. These deserts experience scorching heat during the day but drop to freezing temperatures at night. Hardy cactus-like plants manage to thrive in this harsh landscape alongside tough reptilian creatures. Oases rich with palm-like trees can occasionally be found tucked away in hidden canyons. Scattered throughout Foobar are pockets of tropical jungles thriving along rivers and wetlands.
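
If you stream responses in several places in your application, you can wrap the pattern above in a small generator (a sketch; stream_completion is a hypothetical helper, not part of the Amazon Bedrock API):

def stream_completion(client, **kwargs):
    """Yield completion text chunks as the model generates them."""
    response = client.invoke_model_with_response_stream(**kwargs)
    for event in response.get('body'):
        chunk = event.get('chunk')
        if chunk:
            yield json.loads(chunk.get('bytes')).get('completion', '')

# Usage: print the article as it streams in.
for text in stream_completion(bedrock_runtime, **kwargs):
    print(text, end="")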

Conclusion

In this post, we showed how to integrate Amazon Bedrock FMs into your code base. With Amazon Bedrock, you can use state-of-the-art generative AI capabilities without the need for training custom models, accelerating your development process and enabling you to build powerful applications with advanced NLP features.

Whether you’re building a conversational AI assistant, a code generation tool, or another application that requires NLP capabilities, Amazon Bedrock provides a simple and efficient solution. By using the power of FMs through Amazon Bedrock APIs, you can focus on building innovative solutions and delivering value to your users, without worrying about the underlying complexities of language models.

As you continue to explore and integrate Amazon Bedrock into your projects, remember to stay up to date with the latest updates and features offered by the service. Additionally, consider exploring other AWS services and tools that can complement and enhance your AI-driven applications, such as Amazon SageMaker for machine learning model training and deployment, or Amazon Lex for building conversational interfaces.

To further explore the capabilities of Amazon Bedrock, refer to the Amazon Bedrock documentation.

Share and learn with our generative AI community at community.aws.

Happy coding and building with Amazon Bedrock!


About the Authors

Rajakumar Sampathkumar is a Principal Technical Account Manager at AWS, providing customer guidance on business-technology alignment and supporting the reinvention of their cloud operation models and processes. He is passionate about cloud and machine learning. Raj is also a machine learning specialist and works with AWS customers to design, deploy, and manage their AWS workloads and architectures.

YaduKishore Tatavarthi is a Senior Partner Solutions Architect at Amazon Web Services, supporting customers and partners worldwide. For the past 20 years, he has been helping customers build enterprise data strategies, advising them on Generative AI, cloud implementations, migrations, reference architecture creation, data modeling best practices, and data lake/warehouse architectures.
