OpenAI APIs - Embedding#
SGLang provides OpenAI-compatible APIs to enable a smooth transition from OpenAI services to self-hosted local models. A complete reference for the API is available in the OpenAI API Reference.
This tutorial covers the embedding APIs for embedding models, such as Alibaba-NLP/gte-Qwen2-1.5B-instruct.
Launch a Server#
Launch the server in your terminal and wait for it to initialize. Remember to add --is-embedding to the command.
[1]:
from sglang.test.test_utils import is_in_ci

if is_in_ci():
    from patch import launch_server_cmd
else:
    from sglang.utils import launch_server_cmd

from sglang.utils import wait_for_server, print_highlight, terminate_process

embedding_process, port = launch_server_cmd(
    """
python3 -m sglang.launch_server --model-path Alibaba-NLP/gte-Qwen2-1.5B-instruct \
    --host 0.0.0.0 --is-embedding
"""
)

wait_for_server(f"http://localhost:{port}")
[2025-05-15 22:34:31] server_args=ServerArgs(model_path='Alibaba-NLP/gte-Qwen2-1.5B-instruct', tokenizer_path='Alibaba-NLP/gte-Qwen2-1.5B-instruct', tokenizer_mode='auto', skip_tokenizer_init=False, load_format='auto', trust_remote_code=False, dtype='auto', kv_cache_dtype='auto', quantization=None, quantization_param_path=None, context_length=None, device='cuda', served_model_name='Alibaba-NLP/gte-Qwen2-1.5B-instruct', chat_template=None, completion_template=None, is_embedding=True, enable_multimodal=None, revision=None, host='0.0.0.0', port=36215, mem_fraction_static=0.88, max_running_requests=200, max_total_tokens=20480, chunked_prefill_size=8192, max_prefill_tokens=16384, schedule_policy='fcfs', schedule_conservativeness=1.0, cpu_offload_gb=0, page_size=1, tp_size=1, pp_size=1, max_micro_batch_size=None, stream_interval=1, stream_output=False, random_seed=1059951828, constrained_json_whitespace_pattern=None, watchdog_timeout=300, dist_timeout=None, download_dir=None, base_gpu_id=0, gpu_id_step=1, log_level='info', log_level_http=None, log_requests=False, log_requests_level=0, show_time_cost=False, enable_metrics=False, bucket_time_to_first_token=None, bucket_e2e_request_latency=None, bucket_inter_token_latency=None, collect_tokens_histogram=False, decode_log_interval=40, enable_request_time_stats_logging=False, api_key=None, file_storage_path='sglang_storage', enable_cache_report=False, reasoning_parser=None, dp_size=1, load_balance_method='round_robin', ep_size=1, dist_init_addr=None, nnodes=1, node_rank=0, json_model_override_args='{}', preferred_sampling_params=None, lora_paths=None, max_loras_per_batch=8, lora_backend='triton', attention_backend=None, sampling_backend='flashinfer', grammar_backend='xgrammar', speculative_algorithm=None, speculative_draft_model_path=None, speculative_num_steps=None, speculative_eagle_topk=None, speculative_num_draft_tokens=None, speculative_accept_threshold_single=1.0, speculative_accept_threshold_acc=1.0, speculative_token_map=None, enable_double_sparsity=False, ds_channel_config_path=None, ds_heavy_channel_num=32, ds_heavy_token_num=256, ds_heavy_channel_type='qk', ds_sparse_decode_threshold=4096, disable_radix_cache=False, disable_cuda_graph=True, disable_cuda_graph_padding=False, enable_nccl_nvls=False, enable_tokenizer_batch_encode=False, disable_outlines_disk_cache=False, disable_custom_all_reduce=False, disable_overlap_schedule=False, enable_mixed_chunk=False, enable_dp_attention=False, enable_dp_lm_head=False, enable_ep_moe=False, enable_deepep_moe=False, deepep_mode='auto', enable_torch_compile=False, torch_compile_max_bs=32, cuda_graph_max_bs=None, cuda_graph_bs=None, torchao_config='', enable_nan_detection=False, enable_p2p_check=False, triton_attention_reduce_in_fp32=False, triton_attention_num_kv_splits=8, num_continuous_decode_steps=1, delete_ckpt_after_loading=False, enable_memory_saver=False, allow_auto_truncate=False, enable_custom_logit_processor=False, tool_call_parser=None, enable_hierarchical_cache=False, hicache_ratio=2.0, hicache_size=0, hicache_write_policy='write_through_selective', flashinfer_mla_disable_ragged=False, warmups=None, moe_dense_tp_size=None, n_share_experts_fusion=0, disable_chunked_prefix_cache=False, disable_fast_image_processor=False, mm_attention_backend=None, debug_tensor_dump_output_folder=None, debug_tensor_dump_input_file=None, debug_tensor_dump_inject=False, disaggregation_mode='null', disaggregation_bootstrap_port=8998, disaggregation_transfer_backend='mooncake', disaggregation_ib_device=None, pdlb_url=None)
[2025-05-15 22:34:32] Downcasting torch.float32 to torch.float16.
[2025-05-15 22:34:39] Downcasting torch.float32 to torch.float16.
[2025-05-15 22:34:39] Overlap scheduler is disabled for embedding models.
[2025-05-15 22:34:39] Downcasting torch.float32 to torch.float16.
[2025-05-15 22:34:40] Attention backend not set. Use fa3 backend by default.
[2025-05-15 22:34:40] Init torch distributed begin.
[2025-05-15 22:34:40] Init torch distributed ends. mem usage=0.00 GB
[2025-05-15 22:34:40] Load weight begin. avail mem=78.60 GB
[2025-05-15 22:34:42] Using model weights format ['*.safetensors']
Loading safetensors checkpoint shards: 0% Completed | 0/2 [00:00<?, ?it/s]
Loading safetensors checkpoint shards: 50% Completed | 1/2 [00:01<00:01, 1.30s/it]
Loading safetensors checkpoint shards: 100% Completed | 2/2 [00:03<00:00, 1.95s/it]
Loading safetensors checkpoint shards: 100% Completed | 2/2 [00:03<00:00, 1.86s/it]
[2025-05-15 22:34:46] Load weight end. type=Qwen2ForCausalLM, dtype=torch.float16, avail mem=70.90 GB, mem usage=7.70 GB.
[2025-05-15 22:34:46] KV Cache is allocated. #tokens: 20480, K size: 0.27 GB, V size: 0.27 GB
[2025-05-15 22:34:46] Memory pool end. avail mem=70.07 GB
[2025-05-15 22:34:47] max_total_num_tokens=20480, chunked_prefill_size=8192, max_prefill_tokens=16384, max_running_requests=200, context_len=131072
[2025-05-15 22:34:47] INFO: Started server process [61844]
[2025-05-15 22:34:47] INFO: Waiting for application startup.
[2025-05-15 22:34:47] INFO: Application startup complete.
[2025-05-15 22:34:47] INFO: Uvicorn running on http://0.0.0.0:36215 (Press CTRL+C to quit)
[2025-05-15 22:34:47] INFO: 127.0.0.1:40780 - "GET /v1/models HTTP/1.1" 200 OK
[2025-05-15 22:34:48] INFO: 127.0.0.1:40786 - "GET /get_model_info HTTP/1.1" 200 OK
[2025-05-15 22:34:48] Prefill batch. #new-seq: 1, #new-token: 6, #cached-token: 0, token usage: 0.00, #running-req: 0, #queue-req: 0
[2025-05-15 22:34:49] INFO: 127.0.0.1:40790 - "POST /encode HTTP/1.1" 200 OK
[2025-05-15 22:34:49] The server is fired up and ready to roll!
Note: Typically, the server runs in a separate terminal.
In this notebook, we run the server and notebook code together, so their outputs are combined.
To improve clarity, the server logs are shown in their original black, while the notebook outputs are highlighted in blue.
We are running these notebooks in a CI parallel environment, so the throughput is not representative of actual performance.
Using cURL#
[2]:
import subprocess, json

text = "Once upon a time"

curl_text = f"""curl -s http://localhost:{port}/v1/embeddings \
  -d '{{"model": "Alibaba-NLP/gte-Qwen2-1.5B-instruct", "input": "{text}"}}'"""

text_embedding = json.loads(subprocess.check_output(curl_text, shell=True))["data"][0][
    "embedding"
]

print_highlight(f"Text embedding (first 10): {text_embedding[:10]}")
[2025-05-15 22:34:52] Prefill batch. #new-seq: 1, #new-token: 4, #cached-token: 0, token usage: 0.00, #running-req: 0, #queue-req: 0
[2025-05-15 22:34:52] INFO: 127.0.0.1:37184 - "POST /v1/embeddings HTTP/1.1" 200 OK
Text embedding (first 10): [-0.00019550323486328125, -0.049896240234375, -0.0032482147216796875, 0.011077880859375, -0.01406097412109375, 0.0159912109375, -0.01442718505859375, 0.005939483642578125, -0.022796630859375, 0.0273284912109375]
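The input field can also hold a list of strings for batch embedding. Below is a minimal sketch, assuming the server mirrors the OpenAI batching convention (the second sentence is an arbitrary example):

# Batch request: "input" may also be a list of strings (OpenAI convention);
# each string gets its own entry under "data".
curl_batch = f"""curl -s http://localhost:{port}/v1/embeddings \
  -d '{{"model": "Alibaba-NLP/gte-Qwen2-1.5B-instruct", "input": ["Once upon a time", "in a galaxy far, far away"]}}'"""

batch_embeddings = json.loads(subprocess.check_output(curl_batch, shell=True))["data"]

print_highlight(f"Number of embeddings returned: {len(batch_embeddings)}")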
Using Python Requests#
[3]:
import requests

text = "Once upon a time"

response = requests.post(
    f"http://localhost:{port}/v1/embeddings",
    json={"model": "Alibaba-NLP/gte-Qwen2-1.5B-instruct", "input": text},
)

text_embedding = response.json()["data"][0]["embedding"]

print_highlight(f"Text embedding (first 10): {text_embedding[:10]}")
[2025-05-15 22:34:52] Prefill batch. #new-seq: 1, #new-token: 1, #cached-token: 3, token usage: 0.00, #running-req: 0, #queue-req: 0
[2025-05-15 22:34:52] INFO: 127.0.0.1:37200 - "POST /v1/embeddings HTTP/1.1" 200 OK
Text embedding (first 10): [-0.00021922588348388672, -0.049896240234375, -0.0032215118408203125, 0.011077880859375, -0.01406097412109375, 0.016021728515625, -0.01439666748046875, 0.00592803955078125, -0.0228271484375, 0.02734375]
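Besides the vector itself, the response body carries OpenAI-style metadata. A minimal sketch reading it, assuming the fields follow the OpenAI embeddings schema and are populated by the server:

result = response.json()

# "usage" follows the OpenAI embeddings schema (assumed here);
# "prompt_tokens" is the tokenized length of the input.
print_highlight(f"Model: {result['model']}")
print_highlight(f"Prompt tokens: {result['usage']['prompt_tokens']}")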
Using OpenAI Python Client#
[4]:
import openai

client = openai.Client(base_url=f"http://127.0.0.1:{port}/v1", api_key="None")

# Text embedding example
response = client.embeddings.create(
    model="Alibaba-NLP/gte-Qwen2-1.5B-instruct",
    input=text,
)

embedding = response.data[0].embedding[:10]

print_highlight(f"Text embedding (first 10): {embedding}")
[2025-05-15 22:34:52] Prefill batch. #new-seq: 1, #new-token: 1, #cached-token: 3, token usage: 0.00, #running-req: 0, #queue-req: 0
[2025-05-15 22:34:53] INFO: 127.0.0.1:37206 - "POST /v1/embeddings HTTP/1.1" 200 OK
Text embedding (first 10): [-0.00021922588348388672, -0.049896240234375, -0.0032215118408203125, 0.011077880859375, -0.01406097412109375, 0.016021728515625, -0.01439666748046875, 0.00592803955078125, -0.0228271484375, 0.02734375]
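Embeddings are typically consumed through vector similarity. The sketch below computes the cosine similarity between two sentences in a single request; the second sentence is illustrative, and list input is assumed to follow the OpenAI convention:

import math

def cosine_similarity(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

pair = client.embeddings.create(
    model="Alibaba-NLP/gte-Qwen2-1.5B-instruct",
    input=["Once upon a time", "Long ago, in a distant land"],
)

similarity = cosine_similarity(pair.data[0].embedding, pair.data[1].embedding)
print_highlight(f"Cosine similarity: {similarity:.4f}")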
Using Input IDs#
SGLang also supports input_ids as input to get the embedding.
[5]:
import json
import os
from transformers import AutoTokenizer

os.environ["TOKENIZERS_PARALLELISM"] = "false"

tokenizer = AutoTokenizer.from_pretrained("Alibaba-NLP/gte-Qwen2-1.5B-instruct")
input_ids = tokenizer.encode(text)

curl_ids = f"""curl -s http://localhost:{port}/v1/embeddings \
  -d '{{"model": "Alibaba-NLP/gte-Qwen2-1.5B-instruct", "input": {json.dumps(input_ids)}}}'"""

input_ids_embedding = json.loads(subprocess.check_output(curl_ids, shell=True))["data"][
    0
]["embedding"]

print_highlight(f"Input IDs embedding (first 10): {input_ids_embedding[:10]}")
[2025-05-15 22:34:53] Prefill batch. #new-seq: 1, #new-token: 1, #cached-token: 3, token usage: 0.00, #running-req: 0, #queue-req: 0
[2025-05-15 22:34:53] INFO: 127.0.0.1:37212 - "POST /v1/embeddings HTTP/1.1" 200 OK
Input IDs embedding (first 10): [-0.00021922588348388672, -0.049896240234375, -0.0032215118408203125, 0.011077880859375, -0.01406097412109375, 0.016021728515625, -0.01439666748046875, 0.00592803955078125, -0.0228271484375, 0.02734375]
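Since the raw text and the pre-tokenized input_ids resolve to the same token sequence, the two embeddings should agree up to floating-point noise. A quick sanity check, reusing the variables from the cells above:

# text_embedding comes from the "Using Python Requests" cell above;
# both requests encode the same tokens, so the vectors should match.
max_diff = max(abs(a - b) for a, b in zip(text_embedding, input_ids_embedding))
print_highlight(f"Max element-wise difference: {max_diff}")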
[6]:
terminate_process(embedding_process)
[2025-05-15 22:34:53] Child process unexpectedly failed with an exit code 9. pid=62057
Multi-Modal Embedding Models#
Please refer to Multi-Modal Embedding Models.