Deploy LLM Foundation Models in AWS Bedrock

AWS Bedrock is a fully managed service that makes it easier to build generative AI applications. It provides access to a variety of high-performing foundation models (FMs) from leading AI companies like Anthropic, Cohere, Meta, and Amazon through a single API. This blog walks through how to deploy and access an LLM using AWS Bedrock.

7/23/2024 · 5 min read

Sohamlabs

What is AWS Bedrock?

Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon through a single API, along with a broad set of capabilities you need to build generative AI applications with security, privacy, and responsible AI.
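To get a feel for the "single API" idea, here is a minimal sketch that lists the foundation models visible to an account with boto3; the client and call are standard boto3, while the region is simply the one recommended later in this post.

<code>
import boto3

# Minimal sketch: list the foundation models Bedrock exposes through its control-plane API.
# Assumes AWS credentials are already configured; us-east-1 follows the note later in this post.
bedrock = boto3.client("bedrock", region_name="us-east-1")

for model in bedrock.list_foundation_models()["modelSummaries"]:
    # Each summary carries the provider name and the model ID used later with invoke_model
    print(f'{model["providerName"]}: {model["modelId"]}')
</code>

Running this is a quick way to see which providers and model IDs your account can reach through that one API.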

Applications of Amazon Bedrock

Organizations can use Amazon Bedrock to build a wide range of applications such as:

  • Text Generation: Create original content like articles, stories, and social media posts (a short sketch follows this list).

  • Conversational AI: Develop chatbots and virtual assistants that can interact with users and provide relevant information.

  • Text Summarization: Automatically condense lengthy documents into concise summaries.

  • Image Generation: Generate images based on user prompts, useful for marketing and design purposes.
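As a quick illustration of the text-generation use case above, the sketch below invokes Amazon's Titan Text Express model through the Bedrock runtime; the request and response fields follow the Titan text format, but the prompt and generation parameters are only illustrative assumptions.

<code>
import json
import boto3

# Sketch: generate text with a Titan text model via the Bedrock runtime (illustrative values).
runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({
    "inputText": "Write a short social media post announcing a new coffee shop.",
    "textGenerationConfig": {"maxTokenCount": 256, "temperature": 0.5}
})

response = runtime.invoke_model(modelId="amazon.titan-text-express-v1", body=body)
result = json.loads(response["body"].read())

# Titan text models return a list of results containing the generated text
print(result["results"][0]["outputText"])
</code>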


Prerequisites
  • An AWS account.

  • An IAM user with AWS Bedrock access permissions.

  • An EC2 instance with Python and the AWS CLI installed and configured (a quick verification sketch follows this list).

  • [NOTE]: Amazon Bedrock is not fully available in every region, so please use N. Virginia (us-east-1) to access most of its features.
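To confirm the prerequisites are wired up before moving on, the short sketch below checks which identity and region boto3 resolves on the instance; the calls are standard STS/boto3, and the expected region simply reflects the note above.

<code>
import boto3

# Sketch: verify the AWS CLI / boto3 setup on the EC2 instance before touching Bedrock.
# Assumes `aws configure` has been run or an IAM role is attached to the instance.
session = boto3.session.Session()
print("Configured region:", session.region_name)  # should be us-east-1 per the note above

identity = session.client("sts").get_caller_identity()
print("Authenticated as:", identity["Arn"])  # the IAM user/role the instance is using
</code>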

Step 1: Request access to models on Bedrock

Before doing anything with code, we need to request access to the models on AWS Bedrock.

1. Go to your AWS Console. Sign up if needed.

2. Search for AWS Bedrock on your console and click on it.

3. Click Get Started on the Bedrock page.

4. Scroll down the left sidebar and click on Model Access.

5. Click on Modify Model Access on the Model Access page.

6. Scroll down; under Amazon you can see Titan Image Generator G1. Select that model (in my case I have already enabled it).

7. Click on Next and then Submit.

8. Refresh as necessary to see if access has been granted. This can take anywhere from under 10 minutes to a few days, depending on the maturity of your AWS account. For reference, it took my account about 5 minutes for the status to be approved.
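If you would rather confirm access from code than keep refreshing the console, here is a hedged sketch that attempts one minimal invocation of the Titan Image Generator and checks for an access error; note that a successful call is billed like any normal invocation, and the error code checked is an assumption based on the usual AccessDeniedException Bedrock returns before access is granted.

<code>
import json
import boto3
from botocore.exceptions import ClientError

# Sketch: probe whether model access has been granted by attempting one minimal invocation.
# A successful call generates (and bills for) one image; an access error means approval is still pending.
client = boto3.client("bedrock-runtime", region_name="us-east-1")
model_id = "amazon.titan-image-generator-v1"

body = json.dumps({
    "taskType": "TEXT_IMAGE",
    "textToImageParams": {"text": "a plain blue circle on a white background"},
    "imageGenerationConfig": {"numberOfImages": 1, "width": 1024, "height": 1024, "cfgScale": 8}
})

try:
    client.invoke_model(modelId=model_id, body=body)
    print("Model access granted.")
except ClientError as e:
    if e.response["Error"]["Code"] == "AccessDeniedException":
        print("Model access not granted yet - check the Model Access page.")
    else:
        raise
</code>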

Step 2: Setting up the Python Environment on the EC2 Instance
  1. Connect to the EC2 instance, then install and configure the AWS CLI.

  2. Make sure you have Python installed.

  3. To verify the Python installation, run python3 --version.

  4. Install Flask and boto3 using the commands below:
pip install flask
pip install boto3

  5. Now create a Python file for a Flask web application that provides an API endpoint to generate images using the AWS Bedrock service.

  6. Below is the Python file that I have used.

<code>
import base64
import boto3
import json
import os
import random

from flask import Flask, request, jsonify

app = Flask(__name__)

# Replace with your AWS credentials
client = boto3.client(
    "bedrock-runtime",
    aws_access_key_id='<enter-aws-access-key>',
    aws_secret_access_key='<enter-aws-secret-key>',
    region_name='us-east-1'
)

model_id = "amazon.titan-image-generator-v1"


def generate_image(prompt):
    # Build the native request payload expected by the Titan Image Generator model
    native_request = {
        "textToImageParams": {"text": prompt},
        "taskType": "TEXT_IMAGE",
        "imageGenerationConfig": {
            "cfgScale": 8,
            "seed": 0,
            "width": 1024,
            "height": 1024,
            "numberOfImages": 3
        }
    }

    # Invoke the model through the Bedrock runtime and parse the JSON response
    body = json.dumps(native_request)
    response = client.invoke_model(modelId=model_id, body=body)
    model_response = json.loads(response["body"].read())

    # The model returns base64-encoded images; use the first one
    # (saving to disk is handled by the /generate_image route below)
    base64_image_data = model_response["images"][0]
    return base64_image_data


# Prepare the output directory and find the next free image filename
# (note: this index is computed once at startup, so repeated requests reuse the same filename)
i, output_dir = 1, "output"
if not os.path.exists(output_dir):
    os.makedirs(output_dir)
while os.path.exists(os.path.join(output_dir, f"image_{i}.png")):
    i += 1


@app.route('/generate_image', methods=['GET'])
def generate_image_api():
    prompt = request.args.get('prompt')
    if not prompt:
        return jsonify({'error': 'Prompt is required'}), 400
    try:
        # Generate the image, decode it from base64, and save it to disk
        image_data = generate_image(prompt)
        image_data = base64.b64decode(image_data)
        image_path = os.path.join(output_dir, f"image_{i}.png")
        with open(image_path, "wb") as file:
            file.write(image_data)
        return jsonify({'image': image_path}), 200
    except Exception as e:
        return jsonify({'error': str(e)}), 500


if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
</code>

Step 2.1: Detailed Explanation of the Above Script's Workflow:
  • Imports necessary libraries: The script imports the base64, boto3, json, os, and random modules, along with the Flask class and the request and jsonify helpers from the flask module.

  • Initializes the Flask app: The script creates a Flask application instance named app.

  • Sets up AWS Bedrock client: The script creates a boto3 client for the "bedrock-runtime" service using the specified AWS access key ID, secret access key, and region name. It also sets the model_id variable to the ID of the image generation model to be used.

  • Defines the generate_image function: This function takes a prompt (text description) as input and generates images using the Amazon Bedrock service. It creates a native request dictionary with the prompt and other configuration parameters, sends the request to the Bedrock service, and retrieves the generated image data in base64 format.

  • Checks for the output directory: The script checks if the "output" directory exists, and if not, creates it. It also finds the next available filename for saving the generated image.

  • Defines the /generate_image route: The script sets up a Flask route at /generate_image that accepts GET requests. When a request is made to this endpoint, it retrieves the prompt parameter from the request.

  • Handles the image generation process: If a prompt is provided, the script calls the generate_image function with the given prompt. It then decodes the base64-encoded image data and saves the image to the "output" directory using the generated filename.

  • Returns the image path: The script returns a JSON response containing the path of the generated image or an error message if an exception occurs during the process.

  • Runs the Flask app: Finally, the script runs the Flask application on the specified host and port (0.0.0.0:5000).
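Once the script is running on the instance (for example with python3 app.py) and port 5000 is open in the instance's security group, you can exercise the endpoint from any machine. The sketch below uses the requests library; the IP placeholder is something you would replace with your EC2 instance's public address.

<code>
import requests

# Sketch: call the /generate_image endpoint exposed by the Flask app.
# Replace <ec2-public-ip> with your instance's public address; requires `pip install requests`.
resp = requests.get(
    "http://<ec2-public-ip>:5000/generate_image",
    params={"prompt": "a watercolor painting of a lighthouse at sunset"},
    timeout=120,  # image generation can take a while
)

print(resp.status_code)
print(resp.json())  # e.g. {"image": "output/image_1.png"} on success, or an error message
</code>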