
OpenAI Chat API Compatibility

DashScope provides an OpenAI-compatible way of calling its models. If you previously used the OpenAI SDK, another OpenAI-compatible interface (for example, the langchain_openai SDK), or called the OpenAI service over HTTP, you only need to adjust the API-KEY, base_url, model, and similar parameters within your existing framework to use the DashScope model service directly.

Information Required for OpenAI Compatibility

Base_URL

base_url is the network access point, or address, of the model service; through it you access the functionality and data the service provides. In web services and APIs, the base_url usually corresponds to the URL of a specific operation or resource. When you use the OpenAI-compatible interface to access the DashScope model service, you must configure the base_url.

  • When calling through the OpenAI SDK or another OpenAI-compatible SDK, configure the base_url as follows:

    https://dashscope.aliyuncs.com/compatible-mode/v1
  • When calling via HTTP requests, configure the full endpoint as follows:

    POST https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions

DashScope API-KEY

You need to activate the DashScope model service and obtain an API-KEY; for details, see: Obtaining and configuring an API-KEY.

Supported models

The Qwen (Tongyi Qianwen) models currently supported by the OpenAI-compatible interface are listed below.

  • Qwen: qwen-long, qwen-turbo, qwen-turbo-0624, qwen-turbo-0206, qwen-plus, qwen-plus-0806, qwen-plus-0723, qwen-plus-0624, qwen-plus-0206, qwen-max, qwen-max-0428, qwen-max-0403, qwen-max-0107

  • Qwen-VL series: qwen-vl-max-0809, qwen-vl-max-0201, qwen-vl-max, qwen-vl-plus, qwen-vl-v1, qwen-vl-chat-v1

  • Qwen open-source series: qwen2-math-72b-instruct, qwen2-math-7b-instruct, qwen2-math-1.5b-instruct, qwen2-57b-a14b-instruct, qwen2-72b-instruct, qwen2-7b-instruct, qwen2-1.5b-instruct, qwen2-0.5b-instruct, qwen1.5-110b-chat, qwen1.5-72b-chat, qwen1.5-32b-chat, qwen1.5-14b-chat, qwen1.5-7b-chat, qwen1.5-1.8b-chat, qwen1.5-0.5b-chat, codeqwen1.5-7b-chat, qwen-72b-chat, qwen-14b-chat, qwen-7b-chat, qwen-1.8b-longcontext-chat, qwen-1.8b-chat

Calling via the OpenAI SDK

Prerequisites

  • Make sure a Python environment is installed on your machine.

  • Install the latest version of the OpenAI SDK.

    # If the command below fails, replace pip with pip3
    pip install -U openai
  • You have activated the DashScope model service and obtained an API-KEY: Obtaining and configuring an API-KEY.

  • We recommend storing the API-KEY in an environment variable to reduce the risk of leaking it; see Configuring the API-KEY via environment variables. You can also hard-code the API-KEY, but the risk of leakage is higher.

  • Choose the model you need: see the supported model list.
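The environment-variable setup recommended above takes a single shell line; DASHSCOPE_API_KEY is the variable name the sample code in this guide reads, and the value shown is a placeholder:

```shell
# Persist this in ~/.bashrc or ~/.zshrc so new shells pick it up
export DASHSCOPE_API_KEY="your-api-key-here"
```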

Usage

You can refer to the following examples to access the Qwen models on DashScope through the OpenAI SDK.

Non-streaming example

from openai import OpenAI
import os

def get_response():
    client = OpenAI(
        api_key=os.getenv("DASHSCOPE_API_KEY"), # If you have not set the environment variable, replace this with your API Key
        base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",  # The DashScope service base_url
    )
    completion = client.chat.completions.create(
        model="qwen-plus",
        messages=[{'role': 'system', 'content': 'You are a helpful assistant.'},
                  {'role': 'user', 'content': '你是誰?'}]
        )
    print(completion.model_dump_json())

if __name__ == '__main__':
    get_response()

Running the code produces a result like the following:

{
    "id": "chatcmpl-xxx",
    "choices": [
        {
            "finish_reason": "stop",
            "index": 0,
            "logprobs": null,
            "message": {
                "content": "我是來自阿里云的超大規模預訓練模型,我叫通義千問。",
                "role": "assistant",
                "function_call": null,
                "tool_calls": null
            }
        }
    ],
    "created": 1716430652,
    "model": "qwen-plus",
    "object": "chat.completion",
    "system_fingerprint": null,
    "usage": {
        "completion_tokens": 18,
        "prompt_tokens": 22,
        "total_tokens": 40
    }
}

Streaming example

from openai import OpenAI
import os


def get_response():
    client = OpenAI(
        api_key=os.getenv("DASHSCOPE_API_KEY"),
        base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
    )
    completion = client.chat.completions.create(
        model="qwen-plus",
        messages=[{'role': 'system', 'content': 'You are a helpful assistant.'},
                  {'role': 'user', 'content': '你是誰?'}],
        stream=True,
        # Optional. When configured, the last line of the stream reports token usage
        stream_options={"include_usage": True}
        )
    for chunk in completion:
        print(chunk.model_dump_json())


if __name__ == '__main__':
    get_response()

Running the code produces a result like the following:

{"id":"chatcmpl-xxx","choices":[{"delta":{"content":"","function_call":null,"role":"assistant","tool_calls":null},"finish_reason":null,"index":0,"logprobs":null}],"created":1719286190,"model":"qwen-plus","object":"chat.completion.chunk","system_fingerprint":null,"usage":null}
{"id":"chatcmpl-xxx","choices":[{"delta":{"content":"我是","function_call":null,"role":null,"tool_calls":null},"finish_reason":null,"index":0,"logprobs":null}],"created":1719286190,"model":"qwen-plus","object":"chat.completion.chunk","system_fingerprint":null,"usage":null}
{"id":"chatcmpl-xxx","choices":[{"delta":{"content":"來自","function_call":null,"role":null,"tool_calls":null},"finish_reason":null,"index":0,"logprobs":null}],"created":1719286190,"model":"qwen-plus","object":"chat.completion.chunk","system_fingerprint":null,"usage":null}
{"id":"chatcmpl-xxx","choices":[{"delta":{"content":"阿里","function_call":null,"role":null,"tool_calls":null},"finish_reason":null,"index":0,"logprobs":null}],"created":1719286190,"model":"qwen-plus","object":"chat.completion.chunk","system_fingerprint":null,"usage":null}
{"id":"chatcmpl-xxx","choices":[{"delta":{"content":"云的大規模語言模型","function_call":null,"role":null,"tool_calls":null},"finish_reason":null,"index":0,"logprobs":null}],"created":1719286190,"model":"qwen-plus","object":"chat.completion.chunk","system_fingerprint":null,"usage":null}
{"id":"chatcmpl-xxx","choices":[{"delta":{"content":",我叫通義千問。","function_call":null,"role":null,"tool_calls":null},"finish_reason":null,"index":0,"logprobs":null}],"created":1719286190,"model":"qwen-plus","object":"chat.completion.chunk","system_fingerprint":null,"usage":null}
{"id":"chatcmpl-xxx","choices":[{"delta":{"content":"","function_call":null,"role":null,"tool_calls":null},"finish_reason":"stop","index":0,"logprobs":null}],"created":1719286190,"model":"qwen-plus","object":"chat.completion.chunk","system_fingerprint":null,"usage":null}
{"id":"chatcmpl-xxx","choices":[],"created":1719286190,"model":"qwen-plus","object":"chat.completion.chunk","system_fingerprint":null,"usage":{"completion_tokens":16,"prompt_tokens":22,"total_tokens":38}}

VL model streaming example (image URL input)

from openai import OpenAI
import os


def get_response():
    client = OpenAI(
        api_key=os.getenv("DASHSCOPE_API_KEY"),
        base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
    )
    completion = client.chat.completions.create(
        model="qwen-vl-plus",
        messages=[
            {
              "role": "user",
              "content": [
                {
                  "type": "text",
                  "text": "這是什么"
                },
                {
                  "type": "image_url",
                  "image_url": {
                    "url": "https://dashscope.oss-cn-beijing.aliyuncs.com/images/dog_and_girl.jpeg"
                  }
                }
              ]
            }
          ],
        top_p=0.8,
        stream=True,
        stream_options={"include_usage": True}
        )
    for chunk in completion:
      print(chunk.model_dump_json())

if __name__=='__main__':
    get_response()

Running the code produces a result like the following:

{"id":"chatcmpl-xxx","choices":[{"delta":{"content":"","function_call":null,"role":"assistant","tool_calls":null},"finish_reason":null,"index":0,"logprobs":null}],"created":1719286878,"model":"qwen-vl-plus","object":"chat.completion.chunk","system_fingerprint":null,"usage":null}
{"id":"chatcmpl-xxx","choices":[{"delta":{"content":[{"text":"這"}],"function_call":null,"role":null,"tool_calls":null},"finish_reason":null,"index":0,"logprobs":null}],"created":1719286878,"model":"qwen-vl-plus","object":"chat.completion.chunk","system_fingerprint":null,"usage":null}
{"id":"chatcmpl-xxx","choices":[{"delta":{"content":[{"text":"是一"}],"function_call":null,"role":null,"tool_calls":null},"finish_reason":null,"index":0,"logprobs":null}],"created":1719286878,"model":"qwen-vl-plus","object":"chat.completion.chunk","system_fingerprint":null,"usage":null}
{"id":"chatcmpl-xxx","choices":[{"delta":{"content":[{"text":"張"}],"function_call":null,"role":null,"tool_calls":null},"finish_reason":null,"index":0,"logprobs":null}],"created":1719286878,"model":"qwen-vl-plus","object":"chat.completion.chunk","system_fingerprint":null,"usage":null}
{"id":"chatcmpl-xxx","choices":[{"delta":{"content":[{"text":"圖片,展示了一位"}],"function_call":null,"role":null,"tool_calls":null},"finish_reason":null,"index":0,"logprobs":null}],"created":1719286878,"model":"qwen-vl-plus","object":"chat.completion.chunk","system_fingerprint":null,"usage":null}
{"id":"chatcmpl-xxx","choices":[{"delta":{"content":[{"text":"女士和一只狗在海灘上互動"}],"function_call":null,"role":null,"tool_calls":null},"finish_reason":null,"index":0,"logprobs":null}],"created":1719286878,"model":"qwen-vl-plus","object":"chat.completion.chunk","system_fingerprint":null,"usage":null}
{"id":"chatcmpl-xxx","choices":[{"delta":{"content":[{"text":"。她們似乎正在沙灘上玩握手"}],"function_call":null,"role":null,"tool_calls":null},"finish_reason":null,"index":0,"logprobs":null}],"created":1719286878,"model":"qwen-vl-plus","object":"chat.completion.chunk","system_fingerprint":null,"usage":null}
{"id":"chatcmpl-xxx","choices":[{"delta":{"content":[{"text":"游戲,背景是美麗的日落景色"}],"function_call":null,"role":null,"tool_calls":null},"finish_reason":null,"index":0,"logprobs":null}],"created":1719286878,"model":"qwen-vl-plus","object":"chat.completion.chunk","system_fingerprint":null,"usage":null}
{"id":"chatcmpl-xxx","choices":[{"delta":{"content":[{"text":"與海洋相連的海岸線。這樣的"}],"function_call":null,"role":null,"tool_calls":null},"finish_reason":null,"index":0,"logprobs":null}],"created":1719286878,"model":"qwen-vl-plus","object":"chat.completion.chunk","system_fingerprint":null,"usage":null}
{"id":"chatcmpl-xxx","choices":[{"delta":{"content":[{"text":"場景通常會讓人感覺非常愉快、"}],"function_call":null,"role":null,"tool_calls":null},"finish_reason":null,"index":0,"logprobs":null}],"created":1719286878,"model":"qwen-vl-plus","object":"chat.completion.chunk","system_fingerprint":null,"usage":null}
{"id":"chatcmpl-xxx","choices":[{"delta":{"content":[{"text":"和諧,并且展現出人與寵物之間的"}],"function_call":null,"role":null,"tool_calls":null},"finish_reason":null,"index":0,"logprobs":null}],"created":1719286878,"model":"qwen-vl-plus","object":"chat.completion.chunk","system_fingerprint":null,"usage":null}
{"id":"chatcmpl-xxx","choices":[{"delta":{"content":[{"text":"深厚情感聯系。"}],"function_call":null,"role":null,"tool_calls":null},"finish_reason":"stop","index":0,"logprobs":null}],"created":1719286878,"model":"qwen-vl-plus","object":"chat.completion.chunk","system_fingerprint":null,"usage":null}
{"id":"chatcmpl-xxx","choices":[],"created":1719286878,"model":"qwen-vl-plus","object":"chat.completion.chunk","system_fingerprint":null,"usage":{"completion_tokens":61,"prompt_tokens":1276,"total_tokens":1337}}

VL model streaming example (Base64 image input)

The VL models also accept Base64-encoded image input: convert the image to a Base64 string and pass it in the call.

Important

The current API request payload is limited to 6 MB, so a Base64-encoded image string must also stay below this limit. The original image therefore needs to be smaller than about 4.5 MB.
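The 4.5 MB figure follows from Base64's size expansion: every 3 input bytes become 4 output characters, so an encoded image is roughly 4/3 of its original size (4.5 MB × 4/3 = 6 MB exactly). A minimal pre-flight check, sketched here for illustration, might look like:

```python
import base64
import os

MAX_PAYLOAD_BYTES = 6 * 1024 * 1024  # overall request payload limit

def base64_size(raw_size: int) -> int:
    """Size of the Base64 encoding of raw_size bytes (4 chars per 3-byte group, padded)."""
    return 4 * ((raw_size + 2) // 3)

# Sanity check against the stdlib encoder
assert base64_size(10) == len(base64.b64encode(b"x" * 10))

def image_fits_payload(image_path: str) -> bool:
    """Return True if the encoded image stays under the payload limit."""
    return base64_size(os.path.getsize(image_path)) < MAX_PAYLOAD_BYTES
```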

from openai import OpenAI
import os
import base64
import mimetypes


def get_response():
    client = OpenAI(
        api_key=os.getenv("DASHSCOPE_API_KEY"),
        base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
    )
    image_path = 'path/to/your/image.jpeg'

    mime_type, _ = mimetypes.guess_type(image_path)

    # Verify that the MIME type is a supported image format
    if mime_type and mime_type.startswith('image'):
        with open(image_path, 'rb') as image_file:
            # Convert the image content to a Base64 string
            encoded_image = base64.b64encode(image_file.read())
            encoded_image_str = encoded_image.decode('utf-8')
            # Build the data URI prefix
            data_uri_prefix = f'data:{mime_type};base64,'
            # Prepend the prefix to the Base64-encoded image data
            encoded_image_str = data_uri_prefix + encoded_image_str
            
            completion = client.chat.completions.create(
                model="qwen-vl-plus",
                messages=[
                    {
                        "role": "user",
                        "content": [
                            {
                                "type": "text",
                                "text": "這是什么"
                            },
                            {
                                "type": "image_url",
                                "image_url": {
                                    "url": encoded_image_str
                                }
                            }
                        ]
                    }
                ],
                top_p=0.8,
                stream=True,
                stream_options={"include_usage": True}
            )
            for chunk in completion:
                print(chunk.model_dump_json())
    else:
        print("MIME type unsupported or not found.")


if __name__ == "__main__":
    get_response()

If you need non-streaming output, remove the stream-related parameters and print completion directly.

Function call example

This section uses a weather-lookup tool and a time-lookup tool to demonstrate function calling through the OpenAI-compatible interface. The sample code supports multiple rounds of tool calls.

from openai import OpenAI
from datetime import datetime
import json
import os

client = OpenAI(
    api_key=os.getenv("DASHSCOPE_API_KEY"), # If you have not set the environment variable, replace this with your API Key
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",  # The DashScope service base_url
)

# Define the tool list. The model consults each tool's name and description when choosing which tool to use
tools = [
    # Tool 1: get the current time
    {
        "type": "function",
        "function": {
            "name": "get_current_time",
            "description": "當你想知道現在的時間時非常有用。",
            "parameters": {}  # Getting the current time takes no input, so parameters is an empty dict
        }
    },
    # Tool 2: get the weather for a given city
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "當你想查詢指定城市的天氣時非常有用。",
            "parameters": {  # A weather query needs a location, so the parameter is defined as location
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "城市或縣區,比如北京市、杭州市、余杭區等。"
                    }
                },
                "required": ["location"]  # required belongs inside the JSON Schema under parameters
            }
        }
    }
]

# Simulated weather-lookup tool. Example return value: "北京今天是雨天。"
def get_current_weather(location):
    return f"{location}今天是雨天。 "

# Tool that returns the current time. Example return value: "當前時間:2024-04-15 17:15:18。"
def get_current_time():
    # Get the current date and time
    current_datetime = datetime.now()
    # Format the current date and time
    formatted_time = current_datetime.strftime('%Y-%m-%d %H:%M:%S')
    # Return the formatted current time
    return f"當前時間:{formatted_time}。"

# Helper that wraps the model call
def get_response(messages):
    completion = client.chat.completions.create(
        model="qwen-max",
        messages=messages,
        tools=tools
        )
    return completion.model_dump()

def call_with_messages():
    print('\n')
    messages = [
            {
                "content": input('請輸入:'),  # Example questions: "現在幾點了?" "一個小時后幾點" "北京天氣如何?"
                "role": "user"
            }
    ]
    print("-"*60)
    # First round of model invocation
    i = 1
    first_response = get_response(messages)
    assistant_output = first_response['choices'][0]['message']
    print(f"\n第{i}輪大模型輸出信息:{first_response}\n")
    if assistant_output['content'] is None:
        assistant_output['content'] = ""
    messages.append(assistant_output)
    # If no tool call is needed, return the final answer directly
    if assistant_output['tool_calls'] is None:  # The model decided no tool is needed, so print its reply without a second round
        print(f"無需調用工具,我可以直接回復:{assistant_output['content']}")
        return
    # Otherwise, keep invoking the model until it decides no further tool call is needed
    while assistant_output['tool_calls'] is not None:
        # If the model asks for the weather tool, run it
        if assistant_output['tool_calls'][0]['function']['name'] == 'get_current_weather':
            tool_info = {"name": "get_current_weather", "role":"tool"}
            # Extract the location argument
            arguments = json.loads(assistant_output['tool_calls'][0]['function']['arguments'])
            # Some model versions wrap the arguments in a "properties" object; handle both forms
            location = arguments.get('properties', arguments).get('location')
            tool_info['content'] = get_current_weather(location)
        # If the model asks for the time tool, run it
        elif assistant_output['tool_calls'][0]['function']['name'] == 'get_current_time':
            tool_info = {"name": "get_current_time", "role":"tool"}
            tool_info['content'] = get_current_time()
        print(f"工具輸出信息:{tool_info['content']}\n")
        print("-"*60)
        messages.append(tool_info)
        assistant_output = get_response(messages)['choices'][0]['message']
        if assistant_output['content'] is None:
            assistant_output['content'] = ""
        messages.append(assistant_output)
        i += 1
        print(f"第{i}輪大模型輸出信息:{assistant_output}\n")
    print(f"最終答案:{assistant_output['content']}")

if __name__ == '__main__':
    call_with_messages()

When you enter 杭州和北京天氣怎么樣?現在幾點了?, the program prints each round of model output, the tool results, and the final answer.


Input parameter configuration

The input parameters are aligned with OpenAI's API parameters. The currently supported parameters are as follows:

model (string)

  Specifies the model to use; for the available options, see the supported model list.

messages (array)

  The conversation history between the user and the model. Each element of the array has the form {"role": role, "content": content}. The currently supported roles are system, user, and assistant; only messages[0] may have role system. In general, user and assistant messages alternate, and the last element of messages must have role user.
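The role constraints above can be made concrete with a small checker (an illustrative sketch only, not part of the SDK or the service):

```python
def check_messages(messages):
    """Validate the role rules: known roles only, system allowed
    solely at position 0, and the final message must come from the user."""
    roles = [m["role"] for m in messages]
    if not roles or roles[-1] != "user":
        return False                      # the last element must have role user
    if any(r == "system" for r in roles[1:]):
        return False                      # only messages[0] may be system
    return all(r in ("system", "user", "assistant") for r in roles)
```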

top_p (optional, float)

  The probability threshold for nucleus sampling during generation. For example, with a value of 0.8, only the smallest set of most-probable tokens whose probabilities sum to at least 0.8 is kept as the candidate set. The value range is (0, 1.0): the higher the value, the more random the output; the lower the value, the more deterministic the output.

temperature (optional, float)

  Controls the randomness and diversity of the model's replies. Specifically, temperature controls how strongly the probability distribution over candidate tokens is smoothed. Higher values flatten the distribution's peaks, letting more low-probability tokens be selected and producing more diverse output; lower values sharpen the peaks, favoring high-probability tokens and producing more deterministic output.

  Value range: [0, 2). A value of 0 is not recommended, as it is meaningless.

  Important: the qwen-vl models do not currently support this parameter.

presence_penalty (optional, float)

  Controls repetition across the entire generated sequence. Raising presence_penalty reduces repetition in the model's output. Value range: [-2.0, 2.0].

  Important: this parameter is currently supported only by the commercial Qwen models and the open-source models from qwen1.5 onward.

max_tokens (optional, integer)

  The maximum number of tokens the model may generate. For example, if a model's maximum output length is 2k, you can set this to 1k to prevent excessively long output. Different models have different output limits.

  Important: the qwen-vl models do not currently support this parameter.

seed (optional, integer)

  The random seed used during generation, controlling the randomness of the generated content. seed accepts unsigned 64-bit integers.

stream (optional, boolean; default: False)

  Controls whether streaming output is used. In stream mode, the interface returns a generator; iterate over it to retrieve the results, where each item is the incremental sequence generated so far.

stop (optional, string or array; default: None)

  The stop parameter gives precise control over generation: the model stops automatically just before its output would contain the specified string or token_id. stop may be a string or an array.

  • string type

    The model stops when it is about to generate the specified stop word.

    For example, with stop set to "你好", the model stops when it is about to generate "你好".

  • array type

    Elements of the array may be token_ids, strings, or arrays of token_ids. The model stops when the token (or its token_id) it is about to generate appears in stop. Examples with stop as an array (using the qwen-turbo tokenizer):

    1. token_id elements:

    token_ids 108386 and 104307 correspond to the tokens "你好" and "天氣". With stop set to [108386, 104307], the model stops when it is about to generate "你好" or "天氣".

    2. string elements:

    With stop set to ["你好", "天氣"], the model stops when it is about to generate "你好" or "天氣".

    3. array elements:

    token_ids 108386 and 103924 correspond to "你好" and "啊"; token_ids 35946 and 101243 correspond to "我" and "很好". With stop set to [[108386, 103924], [35946, 101243]], the model stops when it is about to generate "你好啊" or "我很好".

    Note

    When stop is an array, token_ids and strings cannot be mixed as elements; for example, stop may not be ["你好", 104307]. The qwen-vl models do not currently support this parameter.
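For illustration, the stop variants above can be assembled as request keyword arguments like this (build_request is a hypothetical helper; the token ids are the qwen-turbo ids quoted above, and each dict would be passed as client.chat.completions.create(**request)):

```python
def build_request(stop):
    """Assemble chat.completions keyword arguments with a stop condition."""
    return {
        "model": "qwen-turbo",
        "messages": [{"role": "user", "content": "和我聊聊天氣吧"}],
        "stop": stop,
    }

request_string = build_request("你好")                       # single string
request_strings = build_request(["你好", "天氣"])            # list of strings
request_token_ids = build_request([108386, 104307])          # list of token ids
request_token_id_arrays = build_request([[108386, 103924],   # "你好啊"
                                         [35946, 101243]])   # "我很好"
# Mixing forms, e.g. ["你好", 104307], is rejected by the service.
```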

tools (optional, array; default: None)

  Specifies the library of tools available to the model; during one function call flow, the model selects one tool from it. Each tool in tools has the following structure:

  • type (string): the tool type; currently only function is supported.

  • function (object): contains the keys name, description, and parameters:

    • name (string): the name of the tool function. It must consist of letters and digits, may contain underscores and hyphens, and has a maximum length of 64.

    • description (string): a description of the tool function, used by the model to decide when and how to call it.

    • parameters (object): a description of the tool's parameters, which must be a valid JSON Schema (see the JSON Schema documentation). If parameters is empty, the function takes no input.

  In a function call flow, the tools parameter must be set both on the round that initiates the function call and on the round that submits the tool function's execution results to the model. Currently supported models: qwen-turbo, qwen-plus, and qwen-max.

  Note

  The qwen-vl models do not currently support this parameter.

stream_options (optional, object; default: None)

  Configures whether the number of tokens used is reported during streaming output. The parameter only takes effect when stream is True. To count tokens in streaming mode, set stream_options={"include_usage": True}.

enable_search (optional, boolean; default: False; configured via extra_body)

  Controls whether the model may consult internet search results while generating text. Possible values:

  • True: internet search is enabled. The model may use search results as reference material during generation, but it decides, based on its internal logic, whether to actually use them.

  • False (default): internet search is disabled.

  Configure it as extra_body={"enable_search": True}; over HTTP, pass "enable_search": true.

  Important

  The qwen-long and qwen-vl models do not currently support this parameter.
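Since enable_search is not part of the standard OpenAI signature, it travels in extra_body. A sketch of assembling such a request (with_search is a hypothetical helper; actually sending the request requires a client built as in the earlier examples and a valid API-KEY):

```python
def with_search(messages, enabled=True):
    """Build chat.completions keyword arguments that enable internet search.

    extra_body carries parameters outside the standard OpenAI signature.
    """
    return {
        "model": "qwen-plus",
        "messages": messages,
        "extra_body": {"enable_search": enabled},
    }

kwargs = with_search([{"role": "user", "content": "今天有什么新聞?"}])
# completion = client.chat.completions.create(**kwargs)
```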

Return parameters

  • id (string): the system-generated ID identifying this call.

  • model (string): the name of the model used for this call.

  • system_fingerprint (string): the configuration version the model ran with; not currently supported, returned as the empty string "".

  • choices (array): details of the content generated by the model.

  • choices[i].finish_reason (string): one of three cases: null while generation is still in progress; stop when generation ended because a stop condition from the input parameters was triggered; length when generation ended because the output became too long.

  • choices[i].message (object): the message output by the model.

  • choices[i].message.role (string): the model's role, always assistant.

  • choices[i].message.content (string): the text generated by the model.

  • choices[i].index (integer): the sequence number of the generated result; defaults to 0.

  • created (integer): the timestamp (in seconds) of this result.

  • usage (object): metering information describing the tokens consumed by this request.

  • usage.prompt_tokens (integer): the length of the user input after conversion to tokens. You can estimate token counts with a local tokenizer.

  • usage.completion_tokens (integer): the length of the model's reply after conversion to tokens.

  • usage.total_tokens (integer): the sum of usage.prompt_tokens and usage.completion_tokens.
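A non-streaming response with the fields above can be consumed like this (summarize is an illustrative helper; sample is an abbreviated version of the response JSON shown earlier on this page):

```python
def summarize(response: dict) -> dict:
    """Pull the commonly needed fields out of a chat.completion response dict."""
    choice = response["choices"][0]
    return {
        "text": choice["message"]["content"],
        "finish_reason": choice["finish_reason"],   # "stop" or "length" when done
        "total_tokens": response["usage"]["total_tokens"],
    }

sample = {
    "id": "chatcmpl-xxx",
    "choices": [{"finish_reason": "stop", "index": 0,
                 "message": {"role": "assistant", "content": "我是通義千問。"}}],
    "model": "qwen-plus", "object": "chat.completion",
    "usage": {"completion_tokens": 18, "prompt_tokens": 22, "total_tokens": 40},
}
```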

Calling via the langchain_openai SDK

Prerequisites

  • Make sure a Python environment is installed on your machine.

  • Install the langchain_openai SDK by running the following command.

    # If the command below fails, replace pip with pip3
    pip install -U langchain_openai

Usage

You can refer to the following examples to use the Qwen models on DashScope through the langchain_openai SDK.

Non-streaming output

Non-streaming output uses the invoke method; see the following example code:

from langchain_openai import ChatOpenAI
import os

def get_response():
    llm = ChatOpenAI(
        api_key=os.getenv("DASHSCOPE_API_KEY"), # If you have not set the environment variable, replace this with your API Key
        base_url="https://dashscope.aliyuncs.com/compatible-mode/v1", # The DashScope base_url
        model="qwen-plus"
        )
    messages = [
        {"role":"system","content":"You are a helpful assistant."}, 
        {"role":"user","content":"你是誰?"}
    ]
    response = llm.invoke(messages)
    print(response.json(ensure_ascii=False))

if __name__ == "__main__":
    get_response()

Running the code produces the following result:

{
    "content": "我是來自阿里云的大規模語言模型,我叫通義千問。",
    "additional_kwargs": {},
    "response_metadata": {
        "token_usage": {
            "completion_tokens": 16,
            "prompt_tokens": 22,
            "total_tokens": 38
        },
        "model_name": "qwen-plus",
        "system_fingerprint": "",
        "finish_reason": "stop",
        "logprobs": null
    },
    "type": "ai",
    "name": null,
    "id": "run-xxx",
    "example": false,
    "tool_calls": [],
    "invalid_tool_calls": []
}

Streaming output

Streaming output uses the stream method; there is no need to set a stream parameter.

from langchain_openai import ChatOpenAI
import os

def get_response():
    llm = ChatOpenAI(
        api_key=os.getenv("DASHSCOPE_API_KEY"),
        base_url="https://dashscope.aliyuncs.com/compatible-mode/v1", 
        model="qwen-plus",
        # With this setting, the last line of the stream reports token usage
        stream_options={"include_usage": True}
        )
    messages = [
        {"role":"system","content":"You are a helpful assistant."}, 
        {"role":"user","content":"你是誰?"},
    ]
    response = llm.stream(messages)
    for chunk in response:
        print(chunk.json(ensure_ascii=False))

if __name__ == "__main__":
    get_response()

Running the code produces the following result:

{"content": "", "additional_kwargs": {}, "response_metadata": {}, "type": "AIMessageChunk", "name": null, "id": "run-xxx", "example": false, "tool_calls": [], "invalid_tool_calls": [], "usage_metadata": null, "tool_call_chunks": []}
{"content": "我是", "additional_kwargs": {}, "response_metadata": {}, "type": "AIMessageChunk", "name": null, "id": "run-xxx", "example": false, "tool_calls": [], "invalid_tool_calls": [], "usage_metadata": null, "tool_call_chunks": []}
{"content": "來自", "additional_kwargs": {}, "response_metadata": {}, "type": "AIMessageChunk", "name": null, "id": "run-xxx", "example": false, "tool_calls": [], "invalid_tool_calls": [], "usage_metadata": null, "tool_call_chunks": []}
{"content": "阿里", "additional_kwargs": {}, "response_metadata": {}, "type": "AIMessageChunk", "name": null, "id": "run-xxx", "example": false, "tool_calls": [], "invalid_tool_calls": [], "usage_metadata": null, "tool_call_chunks": []}
{"content": "云", "additional_kwargs": {}, "response_metadata": {}, "type": "AIMessageChunk", "name": null, "id": "run-xxx", "example": false, "tool_calls": [], "invalid_tool_calls": [], "usage_metadata": null, "tool_call_chunks": []}
{"content": "的大規模語言模型", "additional_kwargs": {}, "response_metadata": {}, "type": "AIMessageChunk", "name": null, "id": "run-xxx", "example": false, "tool_calls": [], "invalid_tool_calls": [], "usage_metadata": null, "tool_call_chunks": []}
{"content": ",我叫通", "additional_kwargs": {}, "response_metadata": {}, "type": "AIMessageChunk", "name": null, "id": "run-xxx", "example": false, "tool_calls": [], "invalid_tool_calls": [], "usage_metadata": null, "tool_call_chunks": []}
{"content": "義千問。", "additional_kwargs": {}, "response_metadata": {}, "type": "AIMessageChunk", "name": null, "id": "run-xxx", "example": false, "tool_calls": [], "invalid_tool_calls": [], "usage_metadata": null, "tool_call_chunks": []}
{"content": "", "additional_kwargs": {}, "response_metadata": {"finish_reason": "stop"}, "type": "AIMessageChunk", "name": null, "id": "run-xxx", "example": false, "tool_calls": [], "invalid_tool_calls": [], "usage_metadata": null, "tool_call_chunks": []}
{"content": "", "additional_kwargs": {}, "response_metadata": {}, "type": "AIMessageChunk", "name": null, "id": "run-xxx", "example": false, "tool_calls": [], "invalid_tool_calls": [], "usage_metadata": {"input_tokens": 22, "output_tokens": 16, "total_tokens": 38}, "tool_call_chunks": []}

VL model streaming example

from langchain_openai import ChatOpenAI
import os


def get_response():
    llm = ChatOpenAI(
      # If you have not set the environment variable, replace this with your API Key
      api_key=os.getenv("DASHSCOPE_API_KEY"),
      # The DashScope base_url
      base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
      # Use a VL model so that the image input can be interpreted
      model="qwen-vl-plus",
      # With this setting, the last line of the stream reports token usage
      stream_options={"include_usage": True}
      )
    messages= [
            {
              "role": "user",
              "content": [
                {
                  "type": "text",
                  "text": "這是什么"
                },
                {
                  "type": "image_url",
                  "image_url": {
                    "url": "https://dashscope.oss-cn-beijing.aliyuncs.com/images/dog_and_girl.jpeg"
                  }
                }
              ]
            }
          ]
    response = llm.stream(messages)
    for chunk in response:
      print(chunk.json(ensure_ascii=False))

if __name__ == "__main__":
    get_response()

Running the code above produces a result like the following:

{"content": "", "additional_kwargs": {}, "response_metadata": {}, "type": "AIMessageChunk", "name": null, "id": "run-xxx", "example": false, "tool_calls": [], "invalid_tool_calls": [], "usage_metadata": null, "tool_call_chunks": []}
{"content": "這張", "additional_kwargs": {}, "response_metadata": {}, "type": "AIMessageChunk", "name": null, "id": "run-xxx", "example": false, "tool_calls": [], "invalid_tool_calls": [], "usage_metadata": null, "tool_call_chunks": []}
{"content": "圖片", "additional_kwargs": {}, "response_metadata": {}, "type": "AIMessageChunk", "name": null, "id": "run-xxx", "example": false, "tool_calls": [], "invalid_tool_calls": [], "usage_metadata": null, "tool_call_chunks": []}
{"content": "中", "additional_kwargs": {}, "response_metadata": {}, "type": "AIMessageChunk", "name": null, "id": "run-xxx", "example": false, "tool_calls": [], "invalid_tool_calls": [], "usage_metadata": null, "tool_call_chunks": []}
{"content": "有一", "additional_kwargs": {}, "response_metadata": {}, "type": "AIMessageChunk", "name": null, "id": "run-xxx", "example": false, "tool_calls": [], "invalid_tool_calls": [], "usage_metadata": null, "tool_call_chunks": []}
{"content": "只狗和一個小", "additional_kwargs": {}, "response_metadata": {}, "type": "AIMessageChunk", "name": null, "id": "run-xxx", "example": false, "tool_calls": [], "invalid_tool_calls": [], "usage_metadata": null, "tool_call_chunks": []}
{"content": "女孩。狗看起來", "additional_kwargs": {}, "response_metadata": {}, "type": "AIMessageChunk", "name": null, "id": "run-xxx", "example": false, "tool_calls": [], "invalid_tool_calls": [], "usage_metadata": null, "tool_call_chunks": []}
{"content": "很友好,可能是", "additional_kwargs": {}, "response_metadata": {}, "type": "AIMessageChunk", "name": null, "id": "run-xxx", "example": false, "tool_calls": [], "invalid_tool_calls": [], "usage_metadata": null, "tool_call_chunks": []}
{"content": "寵物,而小女孩", "additional_kwargs": {}, "response_metadata": {}, "type": "AIMessageChunk", "name": null, "id": "run-xxx", "example": false, "tool_calls": [], "invalid_tool_calls": [], "usage_metadata": null, "tool_call_chunks": []}
{"content": "似乎在與狗", "additional_kwargs": {}, "response_metadata": {}, "type": "AIMessageChunk", "name": null, "id": "run-xxx", "example": false, "tool_calls": [], "invalid_tool_calls": [], "usage_metadata": null, "tool_call_chunks": []}
{"content": "互動或玩耍。", "additional_kwargs": {}, "response_metadata": {}, "type": "AIMessageChunk", "name": null, "id": "run-xxx", "example": false, "tool_calls": [], "invalid_tool_calls": [], "usage_metadata": null, "tool_call_chunks": []}
{"content": "這是一幅展示", "additional_kwargs": {}, "response_metadata": {}, "type": "AIMessageChunk", "name": null, "id": "run-xxx", "example": false, "tool_calls": [], "invalid_tool_calls": [], "usage_metadata": null, "tool_call_chunks": []}
{"content": "人與動物之間", "additional_kwargs": {}, "response_metadata": {}, "type": "AIMessageChunk", "name": null, "id": "run-xxx", "example": false, "tool_calls": [], "invalid_tool_calls": [], "usage_metadata": null, "tool_call_chunks": []}
{"content": "溫馨關系的畫面。", "additional_kwargs": {}, "response_metadata": {}, "type": "AIMessageChunk", "name": null, "id": "run-xxx", "example": false, "tool_calls": [], "invalid_tool_calls": [], "usage_metadata": null, "tool_call_chunks": []}
{"content": "", "additional_kwargs": {}, "response_metadata": {}, "type": "AIMessageChunk", "name": null, "id": "run-xxx", "example": false, "tool_calls": [], "invalid_tool_calls": [], "usage_metadata": null, "tool_call_chunks": []}
{"content": "", "additional_kwargs": {}, "response_metadata": {"finish_reason": "stop"}, "type": "AIMessageChunk", "name": null, "id": "run-xxx", "example": false, "tool_calls": [], "invalid_tool_calls": [], "usage_metadata": null, "tool_call_chunks": []}
{"content": "", "additional_kwargs": {}, "response_metadata": {}, "type": "AIMessageChunk", "name": null, "id": "run-xxx", "example": false, "tool_calls": [], "invalid_tool_calls": [], "usage_metadata": {"input_tokens": 23, "output_tokens": 40, "total_tokens": 63}, "tool_call_chunks": []}

For input parameter configuration, refer to Input parameter configuration; the relevant parameters are defined on the ChatOpenAI object.
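As one possible configuration sketch (the parameter names follow the table above; depending on your langchain_openai version, parameters without a dedicated constructor field can be passed through model_kwargs):

```python
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.getenv("DASHSCOPE_API_KEY"),
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
    model="qwen-plus",
    temperature=0.7,                 # randomness/diversity, range [0, 2)
    max_tokens=512,                  # cap on generated tokens
    model_kwargs={"top_p": 0.8},     # other interface parameters go through model_kwargs
)
```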

Calling via the HTTP API

You can call the DashScope service over HTTP and receive results with the same structure as those returned when calling the OpenAI service over HTTP.

Prerequisites

  • You have activated the DashScope model service and obtained an API-KEY: Obtaining and configuring an API-KEY.

  • We recommend storing the API-KEY in an environment variable to reduce the risk of leaking it; see Configuring the API-KEY via environment variables. You can also hard-code the API-KEY, but the risk of leakage is higher.

Submitting an API call

POST https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions

Request example

The following examples show scripts that call the API via cURL commands.

Note

If you have not configured the API-KEY as an environment variable, replace $DASHSCOPE_API_KEY with your API-KEY.

Non-streaming output

curl --location 'https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions' \
--header "Authorization: Bearer $DASHSCOPE_API_KEY" \
--header 'Content-Type: application/json' \
--data '{
    "model": "qwen-plus",
    "messages": [
        {
            "role": "system",
            "content": "You are a helpful assistant."
        },
        {
            "role": "user", 
            "content": "你是誰?"
        }
    ]
}'

Running the command produces the following result:

{
    "choices": [
        {
            "message": {
                "role": "assistant",
                "content": "我是來自阿里云的大規模語言模型,我叫通義千問。"
            },
            "finish_reason": "stop",
            "index": 0,
            "logprobs": null
        }
    ],
    "object": "chat.completion",
    "usage": {
        "prompt_tokens": 11,
        "completion_tokens": 16,
        "total_tokens": 27
    },
    "created": 1715252778,
    "system_fingerprint": "",
    "model": "qwen-plus",
    "id": "chatcmpl-xxx"
}

Streaming output

If you need streaming output, set the stream parameter to true in the request body.

curl --location 'https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions' \
--header "Authorization: Bearer $DASHSCOPE_API_KEY" \
--header 'Content-Type: application/json' \
--data '{
    "model": "qwen-plus",
    "messages": [
        {
            "role": "system",
            "content": "You are a helpful assistant."
        },
        {
            "role": "user", 
            "content": "你是誰?"
        }
    ],
    "stream":true
}'

Running the command produces the following result:

data: {"choices":[{"delta":{"content":"","role":"assistant"},"index":0,"logprobs":null,"finish_reason":null}],"object":"chat.completion.chunk","usage":null,"created":1715931028,"system_fingerprint":null,"model":"qwen-plus","id":"chatcmpl-3bb05cf5cd819fbca5f0b8d67a025022"}

data: {"choices":[{"finish_reason":null,"delta":{"content":"我是"},"index":0,"logprobs":null}],"object":"chat.completion.chunk","usage":null,"created":1715931028,"system_fingerprint":null,"model":"qwen-plus","id":"chatcmpl-3bb05cf5cd819fbca5f0b8d67a025022"}

data: {"choices":[{"delta":{"content":"來自"},"finish_reason":null,"index":0,"logprobs":null}],"object":"chat.completion.chunk","usage":null,"created":1715931028,"system_fingerprint":null,"model":"qwen-plus","id":"chatcmpl-3bb05cf5cd819fbca5f0b8d67a025022"}

data: {"choices":[{"delta":{"content":"阿里"},"finish_reason":null,"index":0,"logprobs":null}],"object":"chat.completion.chunk","usage":null,"created":1715931028,"system_fingerprint":null,"model":"qwen-plus","id":"chatcmpl-3bb05cf5cd819fbca5f0b8d67a025022"}

data: {"choices":[{"delta":{"content":"云的大規模語言模型"},"finish_reason":null,"index":0,"logprobs":null}],"object":"chat.completion.chunk","usage":null,"created":1715931028,"system_fingerprint":null,"model":"qwen-plus","id":"chatcmpl-3bb05cf5cd819fbca5f0b8d67a025022"}

data: {"choices":[{"delta":{"content":",我叫通義千問。"},"finish_reason":null,"index":0,"logprobs":null}],"object":"chat.completion.chunk","usage":null,"created":1715931028,"system_fingerprint":null,"model":"qwen-plus","id":"chatcmpl-3bb05cf5cd819fbca5f0b8d67a025022"}

data: {"choices":[{"delta":{"content":""},"finish_reason":"stop","index":0,"logprobs":null}],"object":"chat.completion.chunk","usage":null,"created":1715931028,"system_fingerprint":null,"model":"qwen-plus","id":"chatcmpl-3bb05cf5cd819fbca5f0b8d67a025022"}

data: [DONE]

For details on the input parameters, see Input parameter configuration.

Error response example

When a request fails, the returned result indicates the cause of the error via the code and message fields.

{
    "error": {
        "message": "Incorrect API key provided. ",
        "type": "invalid_request_error",
        "param": null,
        "code": "invalid_api_key"
    }
}

Status codes

  • 400 - Invalid Request Error: the input request is invalid; see the specific error message for details.

  • 401 - Incorrect API key provided: the API key is incorrect.

  • 429 - Rate limit reached for requests: QPS, QPM, or similar rate limits were exceeded.

  • 429 - You exceeded your current quota, please check your plan and billing details: the quota is exhausted or the account is in arrears.

  • 500 - The server had an error while processing your request: server-side error.

  • 503 - The engine is currently overloaded, please try again later: the server is overloaded; retry later.
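The rate-limit (429) and overload (503) cases are transient, so clients commonly retry them with exponential backoff. A library-agnostic sketch, where call is any function returning an HTTP status code and a body (this return shape is an assumption for illustration, not the SDK's API):

```python
import time

RETRYABLE_STATUS = {429, 503}  # rate limited / engine overloaded, per the table above

def call_with_retries(call, max_attempts=3, base_delay=0.1):
    """Invoke `call` (which returns (status, body)) and retry transient
    failures with exponential backoff between attempts."""
    for attempt in range(max_attempts):
        status, body = call()
        if status not in RETRYABLE_STATUS:
            return status, body
        if attempt < max_attempts - 1:
            time.sleep(base_delay * (2 ** attempt))
    return status, body
```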