Qwen models are now available in Amazon Bedrock

Today, we are adding Qwen models from Alibaba to Amazon Bedrock. With this launch, Amazon Bedrock continues to expand model choice by adding access to Qwen3 open weight foundation models (FMs) in a fully managed, serverless way. This release includes four models: Qwen3-Coder-480B-A35B-Instruct, Qwen3-Coder-30B-A3B-Instruct, Qwen3-235B-A22B-Instruct-2507, and Qwen3-32B (dense). Together, these models offer both mixture-of-experts (MoE) and dense architectures, providing flexible options for different application requirements.

Amazon Bedrock provides access to industry-leading FMs through a unified API without requiring you to manage infrastructure. You can access models from multiple model providers, integrate them into your applications, and scale usage based on workload requirements. With Amazon Bedrock, customer data is never used to train the underlying models. With the addition of the Qwen3 models, Amazon Bedrock offers even more options for use cases such as:

  • Code generation and repository analysis with extended context understanding
  • Building agentic workflows that orchestrate multiple tools and APIs to automate business processes
  • Balancing AI cost and performance using hybrid thinking modes for adaptive reasoning

Qwen3 models in Amazon Bedrock
These four Qwen3 models are now available in Amazon Bedrock, each optimized for different performance and cost requirements:

  • Qwen3-Coder-480B-A35B-Instruct – This is a mixture-of-experts (MoE) model with 480B total parameters and 35B active parameters. It is optimized for coding and agentic tasks and achieves strong benchmark results on agentic coding, browser use, and tool use. These capabilities make it suitable for repository-level code analysis and multi-step workflow automation.
  • Qwen3-Coder-30B-A3B-Instruct – This is an MoE model with 30B total parameters and 3B active parameters. Specifically optimized for coding tasks and instruction-following scenarios, it demonstrates strong performance in code generation, analysis, and debugging across multiple programming languages.
  • Qwen3-235B-A22B-Instruct-2507 – This is an instruction-tuned MoE model with 235B total parameters and 22B active parameters. It provides competitive performance across coding, mathematics, and general reasoning tasks while handling them efficiently.
  • Qwen3-32B (dense) – This is a dense model with 32B parameters. It is suitable for real-time or resource-constrained environments, such as mobile devices and edge computing, where consistent performance is critical.

Architectural and functional features of Qwen3
Qwen3 models introduce several architectural and functional features:

MoE versus dense architectures – MoE models, such as Qwen3-Coder-480B-A35B-Instruct, Qwen3-Coder-30B-A3B-Instruct, and Qwen3-235B-A22B-Instruct-2507, activate only a subset of their parameters for each request, providing high performance with efficient inference. The dense Qwen3-32B activates all of its parameters, offering more consistent and predictable performance.
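
To make the MoE idea concrete, here is a toy sketch of top-k expert routing (a simplified illustration, not the actual Qwen3 implementation): a router scores all experts for each token, but only the top few are run, so most parameters stay idle for any given request.

import numpy as np

rng = np.random.default_rng(0)
n_experts, top_k, d_model = 8, 2, 16

# Toy stand-ins for the real layers: one weight matrix per expert plus a router.
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]
router = rng.normal(size=(d_model, n_experts))

def moe_layer(x):
    # The router scores every expert; softmax turns scores into mixing weights.
    scores = x @ router
    weights = np.exp(scores) / np.exp(scores).sum()
    # Only the top-k experts run; the remaining experts' parameters stay idle.
    chosen = np.argsort(weights)[-top_k:]
    return sum(weights[i] * (x @ experts[i]) for i in chosen)

token = rng.normal(size=d_model)
print(moe_layer(token).shape)  # (16,) computed by just 2 of the 8 experts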

Agentic capabilities – Qwen3 models can handle multi-step reasoning and structured planning in a single model invocation. They can generate outputs that call external tools or APIs when integrated into an agentic framework. The models also maintain extended context across long sessions. In addition, they support tool calling to enable standardized communication with external environments.
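
For example, tool calling works through the Amazon Bedrock Converse API by declaring tool specifications in the request. Here is a minimal sketch; the get_weather tool is a hypothetical example for illustration.

import boto3

client = boto3.client("bedrock-runtime", region_name="us-west-2")

response = client.converse(
    modelId="qwen.qwen3-coder-480b-a35b-v1:0",
    messages=[{"role": "user", "content": [{"text": "What is the weather in Seattle?"}]}],
    toolConfig={
        "tools": [{
            "toolSpec": {
                "name": "get_weather",  # hypothetical tool for illustration
                "description": "Get the current weather for a city.",
                "inputSchema": {"json": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                }},
            }
        }]
    },
)

# When the model decides to call the tool, the response contains a toolUse
# block with the tool name and its JSON input instead of plain text.
for block in response["output"]["message"]["content"]:
    if "toolUse" in block:
        print(block["toolUse"]["name"], block["toolUse"]["input"])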

Hybrid thinking modes – Qwen3 introduces a hybrid approach to problem solving that supports two modes: thinking and non-thinking. Thinking mode applies step-by-step reasoning before delivering the final answer, which is ideal for complex problems that require deeper analysis. Non-thinking mode provides fast, near-instant responses for simpler tasks where speed matters more than depth. This helps developers manage performance and cost more efficiently.
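
Qwen's published usage notes for hybrid-thinking Qwen3 models describe /think and /no_think soft switches appended to the prompt to select the mode per request. Here is a sketch assuming that convention also applies when calling through the Converse API; the Qwen3-32B model ID shown is hypothetical, so check the Amazon Bedrock model catalog for the exact value and mechanism.

import boto3

client = boto3.client("bedrock-runtime", region_name="us-west-2")

def ask(prompt, thinking=False):
    # Soft switch from Qwen3's usage notes; whether Amazon Bedrock exposes the
    # mode this way is an assumption to verify against the model documentation.
    switch = "/think" if thinking else "/no_think"
    response = client.converse(
        modelId="qwen.qwen3-32b-v1:0",  # hypothetical model ID
        messages=[{"role": "user", "content": [{"text": f"{prompt} {switch}"}]}],
    )
    return response["output"]["message"]["content"][0]["text"]

print(ask("What is 17 * 24?"))  # fast answer, no visible reasoning
print(ask("Plan a three-step database migration", thinking=True))  # reasons step by step first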

Long context handling – Qwen3-Coder models support extended context windows, with 256K tokens natively and up to 1 million tokens using extrapolation methods. This allows the models to process entire repositories, large technical documents, or long conversation histories within a single task.
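
To put the large context window to work, an entire repository can be flattened into a single prompt. Here is a minimal sketch; the directory path and file filter are placeholders, and counting tokens before sending is still advisable.

from pathlib import Path

def pack_repository(root, suffixes=(".py", ".md")):
    """Concatenate a repository's source files into one prompt-ready string."""
    parts = []
    for path in sorted(Path(root).rglob("*")):
        if path.is_file() and path.suffix in suffixes:
            parts.append(f"### File: {path}\n{path.read_text(errors='ignore')}")
    return "\n\n".join(parts)

repo_prompt = pack_repository("./my-project")  # placeholder path
print(f"{len(repo_prompt):,} characters packed into one prompt")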

When to use each model
The four Qwen3 models serve different use cases. Qwen3-Coder-480B-A35B-Instruct is designed for complex software engineering scenarios. It is suitable for advanced code generation, long context processing such as repository analysis, and integration with external tools. Qwen3-Coder-30B-A3B-Instruct is particularly effective for tasks such as code completion, refactoring, and answering programming questions. If you need versatile performance across multiple domains, Qwen3-235B-A22B-Instruct-2507 offers balance, providing strong general reasoning and instruction following while benefiting from the efficiency of its MoE architecture. Qwen3-32B (dense) is suitable for scenarios where consistent performance, low latency, and cost optimization are important.

Getting started with Qwen models in Amazon Bedrock
To start using the Qwen models, I can open the Amazon Bedrock console and choose the Chat/Text playground in the navigation pane to quickly test the new Qwen models with a few prompts.

I can use any AWS SDK to integrate Qwen3 models into my applications. The AWS SDKs include access to the Amazon Bedrock InvokeModel and Converse APIs. I can also use these models with any agentic framework that supports Amazon Bedrock and deploy agents using Amazon Bedrock AgentCore. For example, here's the Python code for a simple agent with tool access built using Strands Agents:

from strands import Agent
from strands_tools import calculator

# Create an agent backed by the Qwen3 Coder model with access to a calculator tool.
agent = Agent(
    model="qwen.qwen3-coder-480b-a35b-v1:0",
    tools=[calculator]
)

# The agent invokes the calculator tool to evaluate the expression.
agent("Tell me the square root of 42 ^ 9")

# Read a local Python file and ask the agent to optimize it.
with open("function.py", 'r') as f:
    my_function_code = f.read()

agent(f"Help me optimize this Python function for better performance:\n\n{my_function_code}")

Now available
The Qwen models are available today in the following AWS Regions:

  • Qwen3-Coder-480B-A35B-Instruct is available in the US West (Oregon), Asia Pacific (Mumbai, Tokyo), and Europe (London, Stockholm) Regions.
  • Qwen3-Coder-30B-A3B-Instruct, Qwen3-235B-A22B-Instruct-2507, and Qwen3-32B are available in the US East (N. Virginia), US West (Oregon), Asia Pacific (Mumbai, Tokyo), Europe (Ireland, London, Milan, Stockholm), and South America (São Paulo) Regions.

For future updates, check the full Region list. You can start testing and building immediately without setting up infrastructure or planning capacity. To learn more, visit the Qwen page on Amazon Bedrock and the Amazon Bedrock pricing page.

Try the Qwen models in the Amazon Bedrock console today, and send feedback to AWS re:Post for Amazon Bedrock or through your usual AWS Support contacts.

—Danilo

Updated on September 18, 2025 – Removed the model access section. Amazon Bedrock now simplifies access to all serverless foundation models, including all new models, by automatically enabling them for every AWS account, eliminating the need to manually activate access through the Bedrock console. The model access page will be retired on October 8, 2025. Account administrators retain full control over model access through AWS IAM and service control policies to restrict model access as needed.
