# ChatQwen
This will help you get started with Qwen chat models. For detailed documentation of all ChatQwen features and configurations, head to the API reference.
## Overview

### Integration details
| Class | Package | Local | Serializable | Package downloads | Package latest |
| --- | --- | --- | --- | --- | --- |
| ChatQwen | langchain-qwq | ❌ | beta | | |
### Model features

| Tool calling | Structured output | JSON mode | Image input | Audio input | Video input | Token-level streaming | Native async | Token usage | Logprobs |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ |
## Setup

To access Qwen models you'll need to create an Alibaba Cloud account, get an API key, and install the `langchain-qwq` integration package.
### Credentials

Head to Alibaba's API Key page to sign up for an Alibaba Cloud account and generate an API key. Once you've done this, set the `DASHSCOPE_API_KEY` environment variable:
```python
import getpass
import os

if not os.getenv("DASHSCOPE_API_KEY"):
    os.environ["DASHSCOPE_API_KEY"] = getpass.getpass("Enter your DashScope API key: ")
```
### Installation

The LangChain QwQ integration lives in the `langchain-qwq` package:

```python
%pip install -qU langchain-qwq
```
## Instantiation

Now we can instantiate our model object and generate chat completions:
```python
from langchain_qwq import ChatQwen

llm = ChatQwen(model="qwen-flash")

response = llm.invoke("Hello")
response
```

```text
AIMessage(content='Hello! How can I assist you today? 😊', additional_kwargs={}, response_metadata={'finish_reason': 'stop', 'model_name': 'qwen-flash'}, id='run--62798a20-d425-48ab-91fc-8e62e37c6084-0', usage_metadata={'input_tokens': 9, 'output_tokens': 11, 'total_tokens': 20, 'input_token_details': {}, 'output_token_details': {}})
```
## Invocation
```python
messages = [
    (
        "system",
        "You are a helpful assistant that translates English to French. "
        "Translate the user sentence.",
    ),
    ("human", "I love programming."),
]
ai_msg = llm.invoke(messages)
ai_msg
```

```text
AIMessage(content="J'adore la programmation.", additional_kwargs={}, response_metadata={'finish_reason': 'stop', 'model_name': 'qwen-flash'}, id='run--33f905e0-880a-4a67-ab83-313fd7a06369-0', usage_metadata={'input_tokens': 32, 'output_tokens': 8, 'total_tokens': 40, 'input_token_details': {}, 'output_token_details': {}})
```
## Chaining

We can chain our model with a prompt template like so:
```python
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate(
    [
        (
            "system",
            "You are a helpful assistant that translates "
            "{input_language} to {output_language}.",
        ),
        ("human", "{input}"),
    ]
)

chain = prompt | llm
chain.invoke(
    {
        "input_language": "English",
        "output_language": "German",
        "input": "I love programming.",
    }
)
```

```text
AIMessage(content='Ich liebe Programmierung.', additional_kwargs={}, response_metadata={'finish_reason': 'stop', 'model_name': 'qwen-flash'}, id='run--9d8bab6d-d6fe-4b9f-95f2-c30c3ff0a50e-0', usage_metadata={'input_tokens': 28, 'output_tokens': 5, 'total_tokens': 33, 'input_token_details': {}, 'output_token_details': {}})
```
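Under the hood, the template fills the variables into the system and human messages before they reach the model. A plain-Python sketch of that substitution (illustrative only; the real `ChatPromptTemplate` also validates inputs and produces typed message objects):

```python
# Sketch of the variable substitution ChatPromptTemplate performs.
system_template = (
    "You are a helpful assistant that translates "
    "{input_language} to {output_language}."
)
variables = {
    "input_language": "English",
    "output_language": "German",
    "input": "I love programming.",
}

# The rendered (role, content) pairs the model would receive.
messages = [
    ("system", system_template.format(**variables)),
    ("human", variables["input"]),
]
print(messages[0][1])
# You are a helpful assistant that translates English to German.
```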
## Tool Calling

ChatQwen supports a tool-calling API that lets you describe tools and their arguments, and have the model return a JSON object naming the tool to invoke and the inputs to pass it.
### Use with `bind_tools`
```python
from langchain_core.tools import tool
from langchain_qwq import ChatQwen


@tool
def multiply(first_int: int, second_int: int) -> int:
    """Multiply two integers together."""
    return first_int * second_int


llm = ChatQwen(model="qwen-flash")
llm_with_tools = llm.bind_tools([multiply])

msg = llm_with_tools.invoke("What's 5 times forty two")
print(msg)
```

```text
content='' additional_kwargs={'tool_calls': [{'index': 0, 'id': 'call_f0c2cc49307f480db78a45', 'function': {'arguments': '{"first_int": 5, "second_int": 42}', 'name': 'multiply'}, 'type': 'function'}]} response_metadata={'finish_reason': 'tool_calls', 'model_name': 'qwen-flash'} id='run--27c5aafb-9710-42f5-ab78-5a2ad1d9050e-0' tool_calls=[{'name': 'multiply', 'args': {'first_int': 5, 'second_int': 42}, 'id': 'call_f0c2cc49307f480db78a45', 'type': 'tool_call'}] usage_metadata={'input_tokens': 166, 'output_tokens': 27, 'total_tokens': 193, 'input_token_details': {}, 'output_token_details': {}}
```
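Note that the model only *proposes* the call; your code still has to execute the tool and return the result. A minimal, stdlib-only sketch of that dispatch step, using a plain function with the same signature as the `multiply` tool above and a tool-call dict shaped like the entries in `msg.tool_calls`:

```python
# Plain function standing in for the @tool-decorated multiply above.
def multiply(first_int: int, second_int: int) -> int:
    """Multiply two integers together."""
    return first_int * second_int


# A tool call shaped like the entries of msg.tool_calls in the output above.
tool_call = {
    "name": "multiply",
    "args": {"first_int": 5, "second_int": 42},
    "id": "call_f0c2cc49307f480db78a45",
    "type": "tool_call",
}

# Dispatch: look the tool up by name and call it with the model's arguments.
available_tools = {"multiply": multiply}
result = available_tools[tool_call["name"]](**tool_call["args"])
print(result)  # 210
```

In a full agent loop you would wrap `result` in a ToolMessage and send it back to the model for a final answer.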
## Vision Support

### Image
```python
from langchain_core.messages import HumanMessage

model = ChatQwen(model="qwen-vl-max-latest")

messages = [
    HumanMessage(
        content=[
            {
                "type": "image_url",
                "image_url": {"url": "https://example.com/image/image.png"},
            },
            {"type": "text", "text": "What do you see in this image?"},
        ]
    )
]

response = model.invoke(messages)
print(response.content)
```
```text
This image depicts a cozy, rustic Christmas scene set against a wooden backdrop. The arrangement features a variety of festive decorations that evoke a warm, holiday atmosphere:
- **Centerpiece**: A decorative reindeer figurine with large antlers stands prominently in the background.
- **Miniature Trees**: Two small, snow-dusted artificial Christmas trees flank the reindeer, adding to the wintry feel.
- **Candles**: Three log-shaped candle holders made from birch bark are lit, casting a soft, warm glow. Two are in the foreground, and one is slightly behind them.
- **"Merry Christmas" Sign**: A wooden cutout sign spelling "MERRY CHRISTMAS" is placed on the left, decorated with a tiny golden gift box and a small reindeer silhouette.
- **Holiday Elements**: Pinecones, red berries, greenery, and fairy lights are scattered throughout, enhancing the natural, festive theme.
- **Other Details**: A white sack with "SANTA" written on it is partially visible on the left, along with a large glass ornament and twinkling string lights.
The overall aesthetic is warm, inviting, and traditional, emphasizing natural materials like wood, pine, and birch bark. It captures the essence of a rustic, homemade Christmas celebration.
```
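Remote URLs are not the only option: OpenAI-compatible vision endpoints, including DashScope's, commonly accept images inlined as base64 data URLs (check the provider docs for size limits). A stdlib-only sketch of building such a content block — the image bytes here are a placeholder, not a real PNG:

```python
import base64

# Placeholder bytes; in practice read your file:
#   with open("image.png", "rb") as f:
#       image_bytes = f.read()
image_bytes = b"fake-png-bytes-for-illustration"

# Encode to base64 and wrap in a data URL.
b64 = base64.b64encode(image_bytes).decode("utf-8")
data_url = f"data:image/png;base64,{b64}"

# Drop-in replacement for the image_url block in the message above.
content_block = {"type": "image_url", "image_url": {"url": data_url}}
print(data_url[:30])
```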
### Video
```python
from langchain_core.messages import HumanMessage

model = ChatQwen(model="qwen-vl-max-latest")

messages = [
    HumanMessage(
        content=[
            {
                "type": "video_url",
                "video_url": {"url": "https://example.com/video/1.mp4"},
            },
            {"type": "text", "text": "Can you tell me about this video?"},
        ]
    )
]

response = model.invoke(messages)
print(response.content)
```
```text
This video features a young woman with a warm and cheerful expression, standing outdoors in a well-lit environment. She has short, neatly styled brown hair with bangs and is wearing a soft pink knitted cardigan over a white top. A delicate necklace adorns her neck, adding a subtle touch of elegance to her outfit.
Throughout the video, she maintains eye contact with the camera, smiling gently and occasionally opening her mouth as if speaking or laughing. Her facial expressions are natural and engaging, suggesting a friendly and approachable demeanor. The background is softly blurred, indicating a shallow depth of field, which keeps the focus on her. It appears to be an urban setting with modern buildings, possibly a residential or commercial area.
The lighting is bright and natural, likely from sunlight, casting a soft glow on her face and highlighting her features. The overall tone of the video is pleasant and inviting, evoking a sense of warmth and positivity.
In the top right corner of the frames, there is a watermark that reads "通义·AI合成," which indicates that this video was generated using AI technology by Tongyi Lab, a company known for its advancements in artificial intelligence and digital content creation. This suggests that the video may be a demonstration of AI-generated human-like avatars or synthetic media.
```
## API reference

For detailed documentation of all ChatQwen features and configurations, head to the API reference.
## Related

- Chat model conceptual guide
- Chat model how-to guides