Real-world applications of Amazon Nova Canvas for interior design and product photography

As AI image generation becomes increasingly central to modern business workflows, organizations are seeking practical ways to implement this technology for specific industry challenges. Although the potential of AI image generation is vast, many businesses struggle to effectively apply it to their unique use cases.

In this post, we explore how Amazon Nova Canvas can solve real-world business challenges through advanced image generation techniques. We focus on two specific use cases that demonstrate the power and flexibility of this technology:

  • Interior design – Image conditioning with segmentation helps interior designers rapidly iterate through design concepts, dramatically reducing the time and cost associated with creating client presentations
  • Product photography – Outpainting enables product photographers to create diverse environmental contexts for products without extensive photo shoots

Whether you’re an interior design firm looking to streamline your visualization process or a retail business aiming to reduce photography costs, this post can help you use the advanced features of Amazon Nova Canvas to achieve your specific business objectives. Let’s dive into how these powerful tools can transform your image generation workflow.

Prerequisites

You should have the following prerequisites:

  • An AWS account with access to Amazon Bedrock
  • Amazon Nova Canvas enabled in Amazon Bedrock (the code in this post uses the us-east-1 AWS Region; the check after this list shows one way to verify access)
  • An Amazon SageMaker notebook instance or another Python environment with the AWS SDK for Python (Boto3) and Pillow installed
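
To confirm that Amazon Nova Canvas is available to your account, you can list the image generation models that Amazon Bedrock exposes in your Region. The following check is a minimal sketch that uses the Boto3 bedrock control plane client (separate from the bedrock-runtime client used later in this post):

import boto3  #AWS SDK for Python

#Control plane client for listing models (not bedrock-runtime)
bedrock_control = boto3.client("bedrock", region_name="us-east-1")

#List foundation models that generate images
models = bedrock_control.list_foundation_models(byOutputModality="IMAGE")
model_ids = [m["modelId"] for m in models["modelSummaries"]]

#Verify that Nova Canvas is among the available image models
print("amazon.nova-canvas-v1:0 available:", "amazon.nova-canvas-v1:0" in model_ids)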

Interior design

Consider an interior design firm with the following problem: its designers spend hours creating photorealistic designs for client presentations, and each client requires multiple iterations of the same room with different themes and decorative elements. Traditional 3D rendering is time-consuming and expensive. To solve this problem, you can use the image conditioning (segmentation) feature of Amazon Nova Canvas to rapidly iterate on existing room photos. The condition image is analyzed to identify prominent content shapes, resulting in a segmentation mask that guides the generation. The generated image closely follows the layout of the condition image while giving the model creative freedom within the bounds of each content area.

The following images show examples of the initial input, a segmentation mask based on the input, and output based on two different prompts.

  • Input image – A cozy living room featuring a stone fireplace, mounted TV, and comfortable seating arrangement
  • Segmentation mask – An AI-generated semantic segmentation map of the living room, with objects labeled in different colors
  • Output for the prompt “A minimalistic living room” – A minimalist living room featuring white furniture, dark wood accents, and marble-look floors
  • Output for the prompt “A coastal beach themed living room” – A coastal-themed living room with an ocean view and beach-inspired decor

This approach maintains structural integrity while transforming interior elements, so you can generate multiple variations in minutes from a single input image and simple prompts. The following code block presents the API request structure for image conditioning with segmentation. The parameters that control these transformations are passed to the model through the API request. Make sure that the output image has the same dimensions as the input image to avoid distorted results.

{
    "taskType": "TEXT_IMAGE",
    "textToImageParams": {
        "conditionImage": string (Base64 encoded image), #Original living room
        "controlMode": "SEGMENTATION", 
        "controlStrength": float, #Specify how closely to follow the condition       #image (0.0-1.0; Default: 0.7).
        "text": string, #A minimalistic living room
        "negativeText": string
    },
    "imageGenerationConfig": {
        "width": int,
        "height": int,
        "quality": "standard" | "premium",
        "cfgScale": float,
        "seed": int,
        "numberOfImages": int
    }
}

The taskType object determines the type of operation being performed and has its own set of parameters, and the imageGenerationConfig object contains general parameters common to all task types (except background removal). To learn more about the request/response structure for different types of generations, refer to Request and response structure for image generation.
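
The response body returns the generated images as base64-encoded strings. The following response shape is a sketch inferred from the parsing code later in this post (the error field is an assumption based on the common Amazon Bedrock image generation response structure):

{
    "images": [
        string (Base64 encoded image)
    ],
    "error": string #Present when generation fails
}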

The following Python code demonstrates an image conditioning generation by invoking the Amazon Nova Canvas v1.0 model on Amazon Bedrock:

import base64  #For encoding/decoding base64 data
import io  #For handling byte streams
import json  #For JSON operations
import boto3  #AWS SDK for Python
from PIL import Image  #Python Imaging Library for image processing
from botocore.config import Config  #For AWS client configuration
#Set the AWS Region where Amazon Nova Canvas is enabled
region = "us-east-1"

#Create Bedrock client with 300 second timeout
bedrock = boto3.client(service_name='bedrock-runtime', region_name=region,
        config=Config(read_timeout=300))

#Original living room image in current working directory
input_image_path = "Original Living Room.jpg"

#Read and encode the image
def prepare_image(image_path):
    with open(image_path, 'rb') as image_file:
        image_data = image_file.read()
        base64_encoded = base64.b64encode(image_data).decode('utf-8')
    return base64_encoded

#Get the base64 encoded image
input_image = prepare_image(input_image_path)

#Set the content type and accept headers for the API call
accept = "application/json"
content_type = "application/json"

#Prepare the request body
api_request = json.dumps({
       "taskType": "TEXT_IMAGE",  #Type of generation task
       "textToImageParams": {
             "text": "A minimalistic living room",  #Prompt
             "negativeText": "bad quality, low res",  #What to avoid
             "conditionImage": input_image,  #Base64 encoded original living room
             "controlMode": "SEGMENTATION"  #Segmentation mode
            },
       "imageGenerationConfig": {
             "numberOfImages": 1,  #Generate one image
             "height": 1024,  #Image height, same as the input image
             "width": 1024,  #Image width, same as the input image
             "seed": 0, #Modify seed value to get variations on the same prompt
             "cfgScale": 7.0  #Classifier Free Guidance scale
            }
})

#Call the model to generate image
response = bedrock.invoke_model(body=api_request, modelId='amazon.nova-canvas-v1:0', accept=accept, contentType=content_type)

#Parse the response body
response_json = json.loads(response.get("body").read())

#Extract and decode the base64 image
base64_image = response_json.get("images")[0]  #Get first image
base64_bytes = base64_image.encode('ascii')  #Convert to ASCII
image_data = base64.b64decode(base64_bytes)  #Decode base64 to bytes

#Display the generated image
output_image = Image.open(io.BytesIO(image_data))
output_image.show()
#Save the image to current working directory
output_image.save('output_image.png')
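
The request above hard-codes a height and width of 1024 on the assumption that the input image has those dimensions. If your input images vary, a small helper (a sketch using Pillow, which is already imported above) can read the input’s size so that the request always matches it:

#Read the input image's dimensions so the request can match them exactly
def get_image_dimensions(image_path):
    with Image.open(image_path) as img:
        return img.size  #(width, height)

width, height = get_image_dimensions(input_image_path)
#Pass width and height into imageGenerationConfig instead of hard-coded values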

Product photography

Consider a sports footwear company with the following problem: it needs to showcase its versatile new running shoes in multiple environments (running track, outdoors, and more), which traditionally requires expensive location shoots and multiple photography sessions for each variant. To solve this problem, you can use Amazon Nova Canvas to generate diverse shots from a single product photo. Outpainting can be used to replace the background of an image. You can instruct the model to preserve parts of the image by providing a mask prompt, for example, “Shoes.” A mask prompt is a natural language description of the objects in your image that should not be changed during outpainting. You can then generate the shoes against different backgrounds with new prompts.

The following images show examples of the initial input, a mask created for “Shoes,” and output based on two different prompts.

  • Input image – A stylized studio photo of a performance sneaker with a contrasting navy/white upper and orange details
  • Mask created for “Shoes” – A black silhouette of the athletic sneaker in profile view
  • Output for the prompt “Product photoshoot of sports shoes placed on a running track outdoor” – The athletic shoe in navy and orange on a red running track
  • Output for the prompt “Product photoshoot of sports shoes on rocky terrain, forest background” – The athletic shoe on a rocky surface with a forest background

Instead of using a mask prompt, you can input a mask image, which defines the areas of the image to preserve. The mask image must be the same size as the input image: areas to be edited are shaded pure white, and areas to preserve are shaded pure black. The outPaintingMode parameter defines how the mask is treated:

  • DEFAULT – Transitions smoothly between the masked area and the non-masked area. This mode is generally better when you want the new background to use colors similar to the original background, but it can produce a halo effect if your prompt calls for a new background that differs significantly from the original.
  • PRECISE – Strictly adheres to the mask boundaries. This mode is generally better when you’re making significant changes to the background.
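
If you choose a mask image over a mask prompt, you can build one programmatically. The following sketch assumes you already know the product’s bounding box (the coordinates below are placeholders) and uses Pillow to paint the pure black preserve region onto a pure white canvas:

from PIL import Image, ImageDraw  #Pillow, for constructing the mask image

#Build a mask the same size as the input image:
#pure white (edit) everywhere, pure black (preserve) over the product
def build_mask(input_path, box, mask_path="mask.png"):
    with Image.open(input_path) as img:
        mask = Image.new("RGB", img.size, "white")  #Everything editable
    draw = ImageDraw.Draw(mask)
    draw.rectangle(box, fill="black")  #Preserve the product area
    mask.save(mask_path)
    return mask_path

#Placeholder bounding box (left, top, right, bottom) around the shoes
mask_path = build_mask("Shoes.png", (200, 350, 820, 780))

You would then base64 encode this file and pass it as maskImage instead of maskPrompt.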

This approach preserves the product accurately while seamlessly turning one studio photo into many different environments. The following code illustrates the API request structure for outpainting:

{
    "taskType": "OUTPAINTING",
    "outPaintingParams": {
        "image": string (Base64 encoded image),
        "maskPrompt": string, #Shoes
        "maskImage": string, #Base64 encoded image
        "outPaintingMode": "DEFAULT" | "PRECISE", 
        "text": string,  #Product photoshoot of sports shoes on rocky terrain
        "negativeText": string
    },
    "imageGenerationConfig": {
        "numberOfImages": int,
        "quality": "standard" | "premium",
        "cfgScale": float,
        "seed": int
    }
}

The following Python code demonstrates an outpainting-based background replacement by invoking the Amazon Nova Canvas v1.0 model on Amazon Bedrock. For more code examples, see Code examples.

import base64  #For encoding/decoding base64 data
import io  #For handling byte streams
import json  #For JSON operations
import boto3  #AWS SDK for Python
from PIL import Image  #Python Imaging Library for image processing
from botocore.config import Config  #For AWS client configuration
#Set the AWS Region where Amazon Nova Canvas is enabled
region = "us-east-1"

#Create Bedrock client with 300 second timeout
bedrock = boto3.client(service_name='bedrock-runtime', region_name=region,
        config=Config(read_timeout=300))

#Original studio image of shoes in current working directory
input_image_path = "Shoes.png"

#Read and encode the image
def prepare_image(image_path):
    with open(image_path, 'rb') as image_file:
        image_data = image_file.read()
        base64_encoded = base64.b64encode(image_data).decode('utf-8')
    return base64_encoded

#Get the base64 encoded image
input_image = prepare_image(input_image_path)

#Set the content type and accept headers for the API call
accept = "application/json"
content_type = "application/json"

#Prepare the request body
api_request = json.dumps({
        "taskType": "OUTPAINTING",
        "outPaintingParams": {
             "image": input_image,
             "maskPrompt": "Shoes", 
             "outPaintingMode": "DEFAULT", 
             "text": "Product photoshoot of sports shoes placed on a running track outdoor",
             "negativeText": "bad quality, low res"
            },
        "imageGenerationConfig": {
             "numberOfImages": 1,
             "seed": 0, #Modify seed value to get variations on the same prompt
             "cfgScale": 7.0
            }
})

#Call the model to generate image
response = bedrock.invoke_model(body=api_request, modelId='amazon.nova-canvas-v1:0', accept=accept, contentType=content_type)

#Parse the response body
response_json = json.loads(response.get("body").read())

#Extract and decode the base64 image
base64_image = response_json.get("images")[0]  #Get first image
base64_bytes = base64_image.encode('ascii')  #Convert to ASCII
image_data = base64.b64decode(base64_bytes)  #Decode base64 to bytes

#Display the generated image
output_image = Image.open(io.BytesIO(image_data))
output_image.show()
#Save the image to current working directory
output_image.save('output_image.png')
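
Because the goal is many environments from one studio photo, you can loop over a list of prompts and reuse the same request scaffolding. The following sketch (the prompt list and output file names are illustrative) builds on the variables defined in the code above:

#Generate the same product in several environments by varying only the prompt
prompts = [
    "Product photoshoot of sports shoes placed on a running track outdoor",
    "Product photoshoot of sports shoes on rocky terrain, forest background",
]

for i, prompt in enumerate(prompts):
    request = json.dumps({
        "taskType": "OUTPAINTING",
        "outPaintingParams": {
            "image": input_image,
            "maskPrompt": "Shoes",
            "outPaintingMode": "DEFAULT",
            "text": prompt,
            "negativeText": "bad quality, low res"
        },
        "imageGenerationConfig": {"numberOfImages": 1, "seed": 0, "cfgScale": 7.0}
    })
    response = bedrock.invoke_model(body=request, modelId='amazon.nova-canvas-v1:0',
            accept=accept, contentType=content_type)
    body = json.loads(response.get("body").read())
    image_bytes = base64.b64decode(body.get("images")[0])
    Image.open(io.BytesIO(image_bytes)).save(f"output_{i}.png")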

Clean up

When you have finished testing this solution, clean up your resources to avoid incurring further AWS charges (a scripted option using Boto3 follows these steps):

  1. Back up the Jupyter notebooks in the SageMaker notebook instance.
  2. Shut down and delete the SageMaker notebook instance.
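
If you prefer to script the cleanup, the SageMaker API exposes stop and delete operations for notebook instances. The following sketch assumes your own notebook instance name (the name below is a placeholder):

import boto3  #AWS SDK for Python

sagemaker = boto3.client("sagemaker", region_name="us-east-1")
notebook_name = "my-nova-canvas-notebook"  #Placeholder: use your instance name

#Stop the notebook instance, wait until it is fully stopped, then delete it
sagemaker.stop_notebook_instance(NotebookInstanceName=notebook_name)
waiter = sagemaker.get_waiter("notebook_instance_stopped")
waiter.wait(NotebookInstanceName=notebook_name)
sagemaker.delete_notebook_instance(NotebookInstanceName=notebook_name)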

Cost considerations

Consider the following costs from the solution deployed on AWS:

  • You will incur charges for generative AI inference on Amazon Bedrock. For more details, refer to Amazon Bedrock pricing.
  • You will incur charges for your SageMaker notebook instance. For more details, refer to Amazon SageMaker pricing.

Conclusion

In this post, we explored practical implementations of Amazon Nova Canvas for two high-impact business scenarios. You can now generate multiple design variations or diverse environments in minutes rather than hours. With Amazon Nova Canvas, you can significantly reduce costs associated with traditional visual content creation. Refer to Generating images with Amazon Nova to learn about the other capabilities supported by Amazon Nova Canvas.

As next steps, begin with a single use case that closely matches your business needs. Use our provided code examples as a foundation and adapt them to your specific requirements. After you’re familiar with the basic implementations, explore combining multiple techniques and scale gradually. Don’t forget to track time savings and cost reductions to measure ROI. Contact your AWS account team for enterprise implementation guidance.


About the Author

Arjun Singh is a Sr. Data Scientist at Amazon, experienced in artificial intelligence, machine learning, and business intelligence. He is a visual person and deeply curious about generative AI technologies in content creation. He collaborates with customers to build ML/AI solutions to achieve their desired outcomes. He graduated with a Master’s in Information Systems from the University of Cincinnati. Outside of work, he enjoys playing tennis, working out, and learning new skills.
