Diffusion Language Models#
Diffusion language models have shown promise for non-autoregressive text generation with parallel decoding. Unlike autoregressive language models, different diffusion language models require different decoding strategies.
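To build intuition for why the decoding strategy matters, the sketch below shows one step of confidence-thresholded parallel decoding, the idea behind the `LowConfidence` algorithm used later on this page: the model scores every masked position in parallel, positions whose top prediction clears a confidence threshold are committed, and the rest stay masked for the next step. This is a simplified, hypothetical illustration (`low_confidence_step` is not part of SGLang), not the library's actual implementation.

```python
import torch

def low_confidence_step(logits: torch.Tensor, tokens: torch.Tensor,
                        mask_id: int, threshold: float = 0.95) -> torch.Tensor:
    """One illustrative parallel-decoding step.

    logits: [seq_len, vocab] model outputs for every position.
    tokens: [seq_len] current sequence; masked positions hold mask_id.
    """
    probs = torch.softmax(logits, dim=-1)
    conf, pred = probs.max(dim=-1)            # per-position confidence and best token
    masked = tokens == mask_id                # only masked positions may be filled
    accept = masked & (conf >= threshold)     # commit high-confidence predictions
    out = tokens.clone()
    out[accept] = pred[accept]
    if masked.any() and not accept.any():     # guarantee progress: fill the single best position
        best = torch.where(masked, conf, torch.full_like(conf, -1.0)).argmax()
        out[best] = pred[best]
    return out
```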
Example Launch Command#
# --model-path takes a Hugging Face ID or a local path.
# --dllm-algorithm-config is optional; the algorithm's defaults are used if it is not set.
python3 -m sglang.launch_server \
  --model-path inclusionAI/LLaDA2.0-mini \
  --dllm-algorithm LowConfidence \
  --dllm-algorithm-config ./config.yaml \
  --host 0.0.0.0 \
  --port 30000
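Once the launch command reports the server is up, a quick sanity check is to poll its health endpoint. The snippet below is a minimal sketch using the third-party `requests` package and the default host/port from the command above; SGLang's HTTP server exposes a `/health` route for this purpose.

```python
import requests

# Assumes the server launched above is listening on port 30000.
resp = requests.get("http://127.0.0.1:30000/health", timeout=5)
print(resp.status_code)  # 200 once the server is ready to accept requests
```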
Example Config File#
# Confidence threshold for accepting predicted tokens
# - Higher values: More conservative, better quality but slower
# - Lower values: More aggressive, faster but potentially lower quality
# Range: 0.0 - 1.0
threshold: 0.95
# Block size used during decoding. Default: 32 for LLaDA2MoeModelLM.
block_size: 32
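Since YAML is indentation-sensitive, it can be worth validating the file before launching. The following is a minimal, illustrative check (assuming the PyYAML package is installed), not something SGLang requires:

```python
import yaml  # PyYAML

with open("config.yaml") as f:
    cfg = yaml.safe_load(f)

# Sanity checks matching the example config above.
assert 0.0 <= cfg["threshold"] <= 1.0, "threshold must lie in [0.0, 1.0]"
assert isinstance(cfg["block_size"], int) and cfg["block_size"] > 0
print(cfg)  # e.g. {'threshold': 0.95, 'block_size': 32}
```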
Example Client Snippets#
Like other supported models, diffusion language models can be used via the REST API or the Python client.
A Python example that runs generation requests through the offline Engine API (it launches its own engine rather than connecting to the server above):
import sglang as sgl

def main():
    # Create an offline engine configured for diffusion decoding.
    llm = sgl.Engine(
        model_path="inclusionAI/LLaDA2.0-mini",
        dllm_algorithm="LowConfidence",
        max_running_requests=1,
        trust_remote_code=True,
    )
    prompts = [
        "<role>SYSTEM</role>detailed thinking off<|role_end|><role>HUMAN</role> Write a brief introduction of the great wall <|role_end|><role>ASSISTANT</role>"
    ]
    sampling_params = {
        "temperature": 0,
        "max_new_tokens": 1024,
    }
    outputs = llm.generate(prompts, sampling_params)
    print(outputs)

if __name__ == '__main__':
    main()
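In recent SGLang releases, `Engine.generate` returns a list with one result dict per prompt, with the decoded completion under the "text" key, so `print(outputs[0]["text"])` prints just the generated text; check your version if the structure differs.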
A curl example that sends a generation request to the launched server:
curl -X POST "http://127.0.0.1:30000/generate" \
  -H "Content-Type: application/json" \
  -d '{
        "text": [
          "<role>SYSTEM</role>detailed thinking off<|role_end|><role>HUMAN</role> Write the number from 1 to 128 <|role_end|><role>ASSISTANT</role>",
          "<role>SYSTEM</role>detailed thinking off<|role_end|><role>HUMAN</role> Write a brief introduction of the great wall <|role_end|><role>ASSISTANT</role>"
        ],
        "stream": true,
        "sampling_params": {
          "temperature": 0,
          "max_new_tokens": 1024
        }
      }'
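The same streaming request can be issued from Python. The sketch below assumes the native `/generate` endpoint streams server-sent-event-style lines prefixed with `data: ` and terminated by `data: [DONE]`; if your SGLang version frames chunks differently, adjust the parsing accordingly.

```python
import json
import requests

# Mirrors the curl example above; assumes the server from the launch command is running.
payload = {
    "text": [
        "<role>SYSTEM</role>detailed thinking off<|role_end|><role>HUMAN</role> Write a brief introduction of the great wall <|role_end|><role>ASSISTANT</role>"
    ],
    "stream": True,
    "sampling_params": {"temperature": 0, "max_new_tokens": 1024},
}

with requests.post("http://127.0.0.1:30000/generate", json=payload, stream=True) as resp:
    for line in resp.iter_lines(decode_unicode=True):
        if not line or not line.startswith("data: "):
            continue
        chunk = line[len("data: "):]
        if chunk == "[DONE]":
            break
        # Each chunk carries a "text" field; whether it is a delta or the
        # cumulative generation so far depends on the server version.
        print(json.loads(chunk)["text"], flush=True)
```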
Supported Models#
The following table summarizes the supported models.
| Model Family | Example Model | Description |
|---|---|---|
| LLaDA2.0 (mini, flash) | `inclusionAI/LLaDA2.0-mini` | LLaDA2.0-flash is a diffusion language model with a 100B Mixture-of-Experts (MoE) architecture. |