Embedding API and Parameters

Embedding

Info

The Embedding API is subject to the following limits:

Model                Sequence length    Batch inference limit
FFM-embedding        2,048 tokens       2,048 tokens
FFM-embedding-v2     131,072 tokens     131,072 tokens
FFM-embedding-v2.1   131,072 tokens     131,072 tokens
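
If a request might exceed these limits, the input list can be split into smaller batches before calling the API. The following is only a rough sketch: the token estimate uses a crude characters-per-token heuristic rather than the model's real tokenizer, and embed_batch is a hypothetical stand-in for whatever function actually performs the API call (for example, the LangChain wrapper shown later).

from typing import Callable, List


def estimate_tokens(text: str) -> int:
    # Assumption: roughly 4 characters per token; swap in the model's tokenizer if available.
    return max(1, len(text) // 4)


def embed_in_batches(
    texts: List[str],
    embed_batch: Callable[[List[str]], List[List[float]]],  # hypothetical: performs the actual API call
    token_budget: int = 2048,  # match the batch inference limit of the model you use
) -> List[List[float]]:
    vectors: List[List[float]] = []
    batch: List[str] = []
    used = 0
    for text in texts:
        cost = estimate_tokens(text)
        if batch and used + cost > token_budget:
            vectors.extend(embed_batch(batch))
            batch, used = [], 0
        batch.append(text)
        used += cost
    if batch:
        vectors.extend(embed_batch(batch))
    return vectors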

Using curl

Step 1. Set up the environment

export API_KEY={API_KEY}
export API_URL={API_URL}
export MODEL_NAME={MODEL_NAME}

Step 2. Get the embedding result with a curl command

Example request

curl "${API_URL}/models/embeddings" \
-H "X-API-KEY:${API_KEY}" \
-H "content-type: application/json" \
-d '{
"model": ${MODEL_NAME},
"inputs": ["search string 1", "search string 2"]
}'

Example response

{
"data": [
{
"embedding": [
0.06317982822656631,
-0.5447818636894226,
-0.3353637158870697,
-0.5117015838623047,
-0.1446804255247116,
0.2036416381597519,
-0.20317679643630981,
-0.9627353549003601,
0.31771183013916016,
0.23493929207324982,
0.18029260635375977,
...
...
],
"index": 0,
"object": "embedding"
},
{
"embedding": [
0.15340591967105865,
-0.26574525237083435,
-0.3885045349597931,
-0.2985926568508148,
0.22742436826229095,
-0.42115798592567444,
-0.10134009271860123,
-1.0426620244979858,
0.507709264755249,
-0.3479543924331665,
-0.09303411841392517,
1.0853372812271118,
0.7396582961082458,
0.266722172498703,
...
...
],
"index": 1,
"object": "embedding"
}
],
"total_time_taken": "0.06 sec"
"usage": {
"prompt_tokens": 6,
"total_tokens": 6
}
}
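
For reference, the following is a minimal Python equivalent of the curl call above that parses this response shape. It is only a sketch, not an official SDK, and it assumes API_URL, API_KEY, and MODEL_NAME are exported in the environment as in Step 1.

import json
import os

import requests

API_URL = os.environ["API_URL"]
API_KEY = os.environ["API_KEY"]
MODEL_NAME = os.environ["MODEL_NAME"]

response = requests.post(
    f"{API_URL}/models/embeddings",
    headers={"X-API-KEY": API_KEY, "content-type": "application/json"},
    data=json.dumps({"model": MODEL_NAME, "inputs": ["search string 1", "search string 2"]}),
)
body = response.json()

# body["data"] contains one {"embedding": [...], "index": ..., "object": "embedding"} entry per input.
vectors = [item["embedding"] for item in body["data"]]
print(len(vectors), len(vectors[0]))   # number of inputs and the embedding dimension
print(body["usage"]["total_tokens"])   # token usage, as in the example above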

Using FFM-Embedding with LangChain

Custom Embedding Model Wrapper
"""Wrapper Embedding model APIs."""
import json
import requests
from typing import List
from pydantic import BaseModel
from langchain.embeddings.base import Embeddings
import os

class CustomEmbeddingModel(BaseModel, Embeddings):
base_url: str = "http://localhost:12345"
api_key: str = ""
model_name: str = ""
def get_embeddings(self, payload):
endpoint_url=f"{self.base_url}/models/embeddings"
embeddings = []
headers = {
"Content-type": "application/json",
"accept": "application/json",
"X-API-KEY": self.api_key
}
response = requests.post(endpoint_url, headers=headers, data=payload)
body = response.json()
datas = body["data"]
for data in datas:
embeddings.append(data["embedding"])

return embeddings

def embed_documents(self, texts: List[str]) -> List[List[float]]:
payload = json.dumps({"model": self.model_name, "inputs": texts})
return self.get_embeddings(payload)


def embed_query(self, text: str) -> List[List[float]]:
payload = json.dumps({"model": self.model_name, "inputs": [text]})
emb = self.get_embeddings(payload)
return emb[0]
  • Once this wrapper is in place, you can use CustomEmbeddingModel directly in LangChain to handle embedding tasks with this model.
Info

For more information, see the LangChain Custom LLM documentation.


Single string

  • To get the embedding of a single string, call the embed_query() function, which returns the result directly.
API_KEY = "{API_KEY}"
API_URL = "{API_URL}"
MODEL_NAME = "{MODEL_NAME}"

embeddings = CustomEmbeddingModel(
    base_url=API_URL,
    api_key=API_KEY,
    model_name=MODEL_NAME,
)

print(embeddings.embed_query("請問台灣最高的山是?"))

Output:

[-1.1431972, -4.723901, 2.3445783, -2.19996, ......, 1.0784563, -3.4114947, -2.5193133]


Multiple strings

  • To get embeddings for multiple strings, call the embed_documents() function, which returns all results at once.
API_KEY = "{API_KEY}"
API_URL = "{API_URL}"
MODEL_NAME = "{MODEL_NAME}"

embeddings = CustomEmbeddingModel(
    base_url=API_URL,
    api_key=API_KEY,
    model_name=MODEL_NAME,
)


print(embeddings.embed_documents(["test1", "test2", "test3"]))

Output:

[[-0.14880371, ......, 0.7011719], [-0.023590088, ...... , 0.49320474], [-0.86242676, ......, 0.22867839]]
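
A common next step is to compare a query embedding against document embeddings, for example with cosine similarity. The sketch below reuses the embeddings instance created above; the similarity function itself is plain Python and not part of the API.

import math
from typing import List


def cosine_similarity(a: List[float], b: List[float]) -> float:
    # Standard cosine similarity: dot(a, b) / (|a| * |b|).
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))


docs = ["玉山是台灣最高的山", "台北101是台灣最高的建築物"]
doc_vectors = embeddings.embed_documents(docs)
query_vector = embeddings.embed_query("請問台灣最高的山是?")

# Rank the documents by similarity to the query.
for doc, vec in sorted(zip(docs, doc_vectors),
                       key=lambda pair: cosine_similarity(query_vector, pair[1]),
                       reverse=True):
    print(round(cosine_similarity(query_vector, vec), 4), doc)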


Embedding-v2 (including v2.1)

  1. FFM-Embedding-v2 adds the following features:
    • A new input_type parameter.
    • The parameter is optional; if it is not set, the default is document.
    • Its value can only be document or query.
    • When the value is document, the input is kept as-is and no prefix is added.
    • When the value is query, the system automatically prepends a prefix to every entry in inputs to improve embedding accuracy (see the sketch after the response example below).

Example request

   curl "${API_URL}/models/embeddings" \
-H "X-API-KEY:${API_KEY}" \
-H "content-type: application/json" \
-d '{
"model": "'${MODEL_NAME}'",
"inputs": ["search string 1", "search string 2"],
"parameters": {
"input_type": "document"
}
}'

Example response

{
"data": [
{
"embedding": [
0.015003109350800514,
0.002964278217405081,
0.025576837360858917,
0.0009064615005627275,
0.00896097905933857,
-0.010766804218292236,
0.022567130625247955,
-0.020284295082092285,
-0.004011997487396002,
-0.01566183753311634,
-0.016150206327438354,
-0.008938264101743698,
0.010346580296754837,
0.010187577456235886,
...
...
],
"index": 0,
"object": "embedding"
},
{
"embedding": [
0.013649762608110905,
0.003280752571299672,
0.024047400802373886,
0.005184505134820938,
0.009756374172866344,
-0.009389937855303288,
0.027826279401779175,
-0.016409488394856453,
0.0020984220318496227,
-0.0180928073823452,
-0.014462794177234173,
-0.006956569850444794,
0.013260424137115479,
0.018184415996074677,
...
...
],
"index": 1,
"object": "embedding"
}
],
"total_time_taken": "0.05 sec",
"usage": {
"prompt_tokens": 8,
"total_tokens": 8
}
}
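
To see the effect of input_type, the same text can be embedded once as a document and once as a query; because the query variant is prefixed server-side, the two vectors will differ. This is only an illustration built on the plain requests pattern shown earlier, and it assumes API_URL, API_KEY, and MODEL_NAME are set in the environment.

import json
import os

import requests

API_URL = os.environ["API_URL"]
API_KEY = os.environ["API_KEY"]
MODEL_NAME = os.environ["MODEL_NAME"]
HEADERS = {"X-API-KEY": API_KEY, "content-type": "application/json"}


def embed(text: str, input_type: str):
    # Send a single string with the given input_type and return its vector.
    payload = {
        "model": MODEL_NAME,
        "inputs": [text],
        "parameters": {"input_type": input_type},
    }
    response = requests.post(f"{API_URL}/models/embeddings", headers=HEADERS, data=json.dumps(payload))
    return response.json()["data"][0]["embedding"]


as_document = embed("台灣最高的山", "document")
as_query = embed("台灣最高的山", "query")
print(as_document[:4])
print(as_query[:4])  # differs from the document vector because of the added prefix
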
  2. Parameters compatible with the OpenAI Embedding API
    • To use the OpenAI-compatible mode, you must use input.
    • input: a list of target strings.
    • encoding_format: defaults to float and can be set to float or base64; with base64, the result vectors are encoded as base64 before being returned (a decoding sketch follows this list).
    • dimensions: the maximum number of vector dimensions to return; for example, a value of 4 returns only the first four dimensions. The default is 0, which returns the vector at its full dimensionality.
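
When encoding_format is base64, each "embedding" field is a base64 string rather than a list of floats. The sketch below decodes such a string back to floats, assuming the payload is the raw little-endian float32 bytes of the vector (the same convention OpenAI uses); if the service encodes differently, adjust accordingly.

import base64
import struct


def decode_embedding(b64_vector: str) -> list:
    # Assumption: the base64 payload packs the vector as little-endian float32 values.
    raw = base64.b64decode(b64_vector)
    return list(struct.unpack(f"<{len(raw) // 4}f", raw))


# Example with the first vector from the response example below (dimensions = 4).
print(decode_embedding("pR0QPOuoY7sjFQM9U92HOw=="))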

Example request

curl "${API_URL}/models/embeddings" \
-H "X-API-KEY:${API_KEY}" \
-H "content-type: application/json" \
-d '{
"model": "'${MODEL_NAME}'",
"input": ["search string 1", "search string 2"],
"encoding_format": "base64",
"dimensions": 4
}'

Example response

{
"data": [
{
"object": "embedding",
"embedding": "pR0QPOuoY7sjFQM9U92HOw==",
"index": 0
},
{
"object": "embedding",
"embedding": "6BXdOxIpD7vfHgA9suTyOw==",
"index": 1
}
],
"total_time_taken": "0.04 sec",
"usage": {
"prompt_tokens": 8,
"total_tokens": 8
}
}
  3. In FFM-Embedding-v2, the returned vectors are normalized by default, matching OpenAI's behavior. If you need output consistent with earlier versions (unnormalized vectors), add the normalize parameter under parameters and set it to false, as in the request example below.
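
A quick way to check the effect of this setting is to compute a vector's L2 norm: a full-dimensional vector returned with the default settings should have a norm close to 1.0, while with "normalize": false it generally will not (a vector truncated by dimensions will usually not have norm 1 even when normalization is on). The helper below is plain Python, and the sample vector is a toy value rather than API output.

import math
from typing import List


def l2_norm(vector: List[float]) -> float:
    # Length (L2 norm) of the embedding vector.
    return math.sqrt(sum(x * x for x in vector))


vector = [0.6, 0.8]     # toy example: already unit length
print(l2_norm(vector))  # 1.0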

Example request

curl "${API_URL}/models/embeddings" \
-H "X-API-KEY:${API_KEY}" \
-H "content-type: application/json" \
-d '{
"model": "'${MODEL_NAME}'",
"input": ["search string 1", "search string 2"],
"parameters": {
"normalize": false
},
"encoding_format": "base64",
"dimensions": 4
}'

Using FFM-Embedding-v2 (including v2.1) with LangChain

Custom Embedding FFM V2 Model Wrapper

"""Wrapper Embedding FFM V2 model APIs."""
class CustomEmbeddingModel(BaseModel, Embeddings):
base_url: str = "${API_URL}"
api_key: str = "${API_KEY}"
model_name: str = "ffm-embedding-v2"
dimensions: int = 4
input_type: str = "document"
encoding_format: str = "base64"
normalize: bool = False

def get_embeddings(self, payload):
endpoint_url = f"{self.base_url}/models/embeddings"
embeddings = []
headers = {
"Content-type": "application/json",
"accept": "application/json",
"X-API-KEY": self.api_key,
}
response = requests.post(endpoint_url, headers=headers, data=payload)

body = response.json()
datas = body["data"]
for data in datas:
embeddings.append(data["embedding"])

return embeddings

def embed_input_type(self, texts: List[str]) -> List[List[float]]:
payload = json.dumps({
"model": self.model_name,
"inputs": texts,
"parameters": {
"input_type": self.input_type
},
"dimensions": self.dimensions
})
return self.get_embeddings(payload)

def embed_encode_dimensions(self, text: str) -> List[List[float]]:
payload = json.dumps({
"model": self.model_name,
"input": [text],
"encoding_format": self.encoding_format,
"dimensions": self.dimensions
})
emb = self.get_embeddings(payload)
return emb[0]

def embed_normalize(self, text: str) -> List[List[float]]:
payload = json.dumps({
"model": self.model_name,
"input": [text],
"encoding_format": self.encoding_format,
"parameters": {
"normalize": self.normalize
},
"dimensions": self.dimensions
})
emb = self.get_embeddings(payload)
return emb[0]
  • Once this wrapper is in place, you can use CustomEmbeddingModel directly in LangChain to work with the FFM-Embedding-V2 model (a usage sketch follows the note below).
Info

For more information, see the LangChain Custom LLM documentation.
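
As with the v1 wrapper, a short usage sketch may help. The values below are placeholders for your own endpoint, key, and model name, and the methods called are the ones defined in the wrapper above.

API_KEY = "{API_KEY}"
API_URL = "{API_URL}"

embeddings = CustomEmbeddingModel(
    base_url=API_URL,
    api_key=API_KEY,
    model_name="ffm-embedding-v2",
    dimensions=4,
    input_type="query",
    encoding_format="float",
)

# FFM-style call: "inputs" plus the input_type parameter.
print(embeddings.embed_input_type(["test1", "test2"]))

# OpenAI-compatible call: "input", "encoding_format", and "dimensions".
print(embeddings.embed_encode_dimensions("test1"))

# Same call, with normalization controlled by the normalize field.
print(embeddings.embed_normalize("test1"))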