14種定義嵌入演進的強大技術

你知道在過去,我們如何使用簡單的字數統計技巧來表示文字嗎?從那時起,我們已經走過了漫長的道路。現在,當我們談論嵌入技術的發展時,我們指的是一種數值化的表示,它不僅能捕捉出現的字詞,還能捕捉它們的真正含義、它們在上下文中的相互關係,甚至它們與影像和其他媒體的聯絡。從能理解你意圖的搜尋引擎,到似乎能讀懂你心思的推薦系統,嵌入技術為這一切提供了動力;它們也是尖端人工智慧和機器學習應用的核心。因此,讓我們來回顧一下從原始計數到語義向量的演變過程,探索每種方法的工作原理、帶來的好處以及不足之處。

不同型別的嵌入技術

Source: Link

MTEB排行榜中的嵌入排名

大多數現代 LLM 都會在其架構的中間層生成嵌入向量。這些嵌入可以針對各種下游任務進行提取和微調,從而使基於 LLM 的嵌入成為當今最通用的工具之一。

為了跟上快速發展的形勢,Hugging Face 等平臺推出了海量文字嵌入基準(Massive Text Embedding Benchmark,MTEB)排行榜等資源。該排行榜根據嵌入模型在分類、聚類、檢索等各種任務中的表現對其進行排名,能極大地幫助從業人員確定最適合其使用案例的模型。

有了對排行榜的深入瞭解,讓我們捲起袖子,深入研究向量化工具箱–計數向量、TF-IDF 和其他經典方法,它們仍然是當今複雜嵌入模型的重要組成部分。

MTEB

MTEB排行榜

1. 計數向量化

計數向量化是表示文字的最簡單技術之一。它的出現源於將原始文字轉換成數字形式以便機器學習模型處理的需要。在這種方法中,每篇文件都被轉換成一個向量,反映其中出現的每個單詞的計數。這種簡單明瞭的方法為更復雜的表示法奠定了基礎,在對可解釋性要求較高的情況下仍然非常有用。

工作原理

  • 機制:
    • 首先將文字語料標記化為單詞。從所有獨特的標記詞中建立詞彙表。
    • 每個文件都被表示為一個向量,其中每個維度對應詞彙表中的一個單詞。
    • 每個維度的值只是文件中某個詞的頻率或計數。
  • 舉例說明: 對於詞彙表 [“apple”、“banana”、“cherry”],文件 “apple apple cherry”變為 [2, 0, 1]。
  • 更多詳情: 計數向量化是許多其他方法的基礎。它雖然簡單,無法捕捉任何上下文或語義資訊,但仍然是許多 NLP 流程中必不可少的預處理步驟。

程式碼實現

from sklearn.feature_extraction.text import CountVectorizer
import pandas as pd
# Sample text documents with repeated words
documents = [
"Natural Language Processing is fun and natural natural natural",
"I really love love love Natural Language Processing Processing Processing",
"Machine Learning is a part of AI AI AI AI",
"AI and NLP NLP NLP are closely related related"
]
# Initialize CountVectorizer
vectorizer = CountVectorizer()
# Fit and transform the text data
X = vectorizer.fit_transform(documents)
# Get feature names (unique words)
feature_names = vectorizer.get_feature_names_out()
# Convert to DataFrame for better visualization
df = pd.DataFrame(X.toarray(), columns=feature_names)
# Print the matrix
print(df)

輸出:

計數向量化

優點

  • 簡單易懂:易於實施和理解。
  • 確定性:產生固定的表示形式,易於分析。

缺點

  • 高維度和稀疏性:向量通常很大,且大部分為零,導致效率低下。
  • 缺乏語義語境:無法捕捉詞與詞之間的意義或關係。

2. One-Hot編碼

One-Hot 編碼是最早將單詞表示為向量的方法之一。它與 20 世紀 50 和 60 年代早期的數字計算技術一起發展,將單詞等分類資料轉換為二進位制向量。每個單詞都有唯一的表示形式,確保沒有兩個單詞有相似的表示形式,但這樣做的代價是無法捕捉語義的相似性。

工作原理

  • 機制:
    • 詞彙表中的每個詞都會被分配一個向量,其長度等於詞彙表的大小。
    • 在每個向量中,除了該詞對應位置上的 1 之外,其餘元素均為 0。
  • 舉例說明:詞彙表[“apple”、“banana”、“cherry”]中的單詞 “banana”表示為[0, 1, 0]。
  • 其他細節:One-Hot 向量彼此完全正交,這意味著任意兩個不同單詞之間的餘弦相似度為零。這種方法簡單明瞭,但無法捕捉任何相似性(例如,“apple”和 “orange”的相似度與 “apple”和 “car”的相似度完全相同)。

程式碼實現

from sklearn.feature_extraction.text import CountVectorizer
import pandas as pd
# Sample text documents
documents = [
   "Natural Language Processing is fun and natural natural natural",
   "I really love love love Natural Language Processing Processing Processing",
   "Machine Learning is a part of AI AI AI AI",
   "AI and NLP NLP NLP are closely related related"
]
# Initialize CountVectorizer with binary=True for One-Hot Encoding
vectorizer = CountVectorizer(binary=True)
# Fit and transform the text data
X = vectorizer.fit_transform(documents)
# Get feature names (unique words)
feature_names = vectorizer.get_feature_names_out()
# Convert to DataFrame for better visualization
df = pd.DataFrame(X.toarray(), columns=feature_names)
# Print the one-hot encoded matrix
print(df)

輸出:

One-Hot編碼

因此,基本上可以看出 Count Vectorizer 和 One-Hot Encoding 之間的區別:Count Vectorizer 計算的是某個單詞在句子/文件中出現的次數,而 One-Hot Encoding 只標記該單詞是否出現過(只要出現即記為 1)。

Count Vectorizer 和 One Hot Encoding 之間的區別

何時使用?

  • 當一個詞出現的次數很重要時(如垃圾郵件檢測、文件相似性),請使用 CountVectorizer
  • 當您只關心一個詞是否至少出現過一次時,請使用 One-Hot Encoding(例如,用於 ML 模型的分類特徵編碼)。

優點

  • 清晰獨特:每個詞都有一個獨特且不重疊的表示法
  • 簡單:易於實現,對小型詞彙表的計算開銷最小。

缺點

  • 詞彙量大時效率低:向量變得非常高維和稀疏。
  • 無語義相似性:不允許詞與詞之間存在任何關係;所有非相同詞的距離都相同。

3. TF-IDF(詞頻-反向文件頻率)

TF-IDF 是為了改進原始計數方法而開發的,它透過計算單詞出現次數,並根據單詞在語料庫中的整體重要性對其進行權衡。TF-IDF 於 20 世紀 70 年代初推出,是資訊檢索系統和文字挖掘應用的基石。它有助於突出單個文件中重要的詞彙,同時淡化所有文件中常見的詞彙。

工作原理

  • 機制:
    • 詞頻 (TF):衡量一個詞在文件中出現的頻率。
    • 反向文件頻率 (IDF):透過考慮一個詞在所有文件中的常見或罕見程度來衡量其重要性。
    • 最終的 TF-IDF 分數是 TF 和 IDF 的乘積。

TF-IDF 分數是 TF 和 IDF 的乘積

Source: Link

  • 舉例說明:像 “the”這樣的常用詞得分較低,而較為獨特的詞得分較高,因此在文件分析中比較突出。因此,在 NLP 任務中,我們通常會省略頻繁出現的詞語,這些詞語也被稱為 Stopwords。
  • 其他細節:TF-IDF 將原始頻率計數轉化為一種能有效區分重要關鍵詞和常用詞的方法。它已成為搜尋引擎和文件聚類的標準方法。
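把上面兩個量寫成公式,常見的一種 TF-IDF 定義如下(N 為文件總數,df(t) 為包含詞 t 的文件數;scikit-learn 預設實現使用帶平滑的 IDF 並做 L2 歸一化,數值會略有不同):

\mathrm{tfidf}(t, d) = \mathrm{tf}(t, d) \times \mathrm{idf}(t), \qquad \mathrm{idf}(t) = \log\frac{N}{\mathrm{df}(t)}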

程式碼實現

from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
import pandas as pd
# Sample short sentences
documents = [
   "cat sits here",
   "dog barks loud",
   "cat barks loud"
]
# Raw term frequencies (TF) come from CountVectorizer
count_vectorizer = CountVectorizer()
tf_matrix = count_vectorizer.fit_transform(documents).toarray()
feature_names = count_vectorizer.get_feature_names_out()
# Fit TfidfVectorizer on the same corpus to obtain its (smoothed) IDF values
tfidf_vectorizer = TfidfVectorizer()
tfidf_vectorizer.fit(documents)
idf_values = tfidf_vectorizer.idf_
# Compute TF-IDF manually (TF * IDF)
tfidf_matrix = tf_matrix * idf_values
# Convert to DataFrames for better visualization
df_tf = pd.DataFrame(tf_matrix, columns=feature_names)
df_idf = pd.DataFrame([idf_values], columns=feature_names)
df_tfidf = pd.DataFrame(tfidf_matrix, columns=feature_names)
# Print tables
print("\n🔹 Term Frequency (TF) Matrix:\n", df_tf)
print("\n🔹 Inverse Document Frequency (IDF) Values:\n", df_idf)
print("\n🔹 TF-IDF Matrix (TF * IDF):\n", df_tfidf)

輸出:

詞頻-反向文件頻率

優點

  • 增強單詞重要性:強調特定內容的詞語。
  • 降低維度:過濾掉附加值低的普通詞語。

缺點

  • 表示稀疏:儘管進行了加權,但得到的向量仍然稀疏。
  • 缺乏語境:無法捕捉詞序或更深層的語義關係。

4. Okapi BM25

Okapi BM25 開發於 20 世紀 90 年代,是一種機率模型,主要用於資訊檢索系統中的文件排序,而非嵌入方法本身。BM25 是 TF-IDF 的增強版,常用於搜尋引擎和資訊檢索。它在 TF-IDF 的基礎上進行了改進,考慮了文件長度歸一化和詞頻飽和(即重複詞的收益遞減)。

工作原理

  • 機制:
    • 機率框架:該框架根據查詢詞頻來估算文件的相關性,並根據文件長度進行調整。
    • 使用引數來控制詞頻的影響,並抑制極高計數的影響。

在此,我們將研究 BM25 評分機制:

BM25 評分機制

Source – Link

BM25 評分機制-2

Source – Link

BM25 引入了兩個引數,即 k1 和 b,可分別對詞頻飽和度和長度歸一化進行微調。這些引數對於最佳化 BM25 演算法在各種搜尋環境下的效能至關重要。

  • 例如:BM25 在根據文件長度進行調整的同時,會給包含中等頻率的罕見查詢詞的文件分配更高的相關性分數,反之亦然。
  • 其他細節:雖然 BM25 不會產生向量嵌入,但它改進了 TF-IDF 在文件排序方面的不足,從而對文字檢索系統產生了深遠的影響。
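上面圖片中給出的 BM25 評分公式,可以寫成如下常見形式(f(q_i, D) 為查詢詞 q_i 在文件 D 中的詞頻,|D| 為文件長度,avgdl 為平均文件長度,N 為文件總數,n(q_i) 為包含 q_i 的文件數):

\mathrm{score}(D, Q) = \sum_{i=1}^{n} \mathrm{IDF}(q_i) \cdot \frac{f(q_i, D)\,(k_1 + 1)}{f(q_i, D) + k_1\left(1 - b + b \cdot \frac{|D|}{\mathrm{avgdl}}\right)}, \qquad \mathrm{IDF}(q_i) = \ln\!\left(\frac{N - n(q_i) + 0.5}{n(q_i) + 0.5} + 1\right)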

程式碼實現

import numpy as np
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
# Sample documents
documents = [
   "cat sits here",
   "dog barks loud",
   "cat barks loud"
]
# Compute Term Frequency (TF) using CountVectorizer
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(documents)
tf_matrix = X.toarray()
feature_names = vectorizer.get_feature_names_out()
# Compute Inverse Document Frequency (IDF) for BM25
N = len(documents)  # Total number of documents
df = np.sum(tf_matrix > 0, axis=0)  # Document Frequency (DF) for each term
idf = np.log((N - df + 0.5) / (df + 0.5) + 1)  # BM25 IDF formula
# Compute BM25 scores
k1 = 1.5  # Smoothing parameter
b = 0.75  # Length normalization parameter
avgdl = np.mean([len(doc.split()) for doc in documents])  # Average document length
doc_lengths = np.array([len(doc.split()) for doc in documents])
bm25_matrix = np.zeros_like(tf_matrix, dtype=np.float64)
for i in range(N):  # For each document
   for j in range(len(feature_names)):  # For each term
       term_freq = tf_matrix[i, j]
       num = term_freq * (k1 + 1)
       denom = term_freq + k1 * (1 - b + b * (doc_lengths[i] / avgdl))
       bm25_matrix[i, j] = idf[j] * (num / denom)
# Convert to DataFrame for better visualization
df_tf = pd.DataFrame(tf_matrix, columns=feature_names)
df_idf = pd.DataFrame([idf], columns=feature_names)
df_bm25 = pd.DataFrame(bm25_matrix, columns=feature_names)
# Display the results
print("\n🔹 Term Frequency (TF) Matrix:\n", df_tf)
print("\n🔹 BM25 Inverse Document Frequency (IDF):\n", df_idf)
print("\n🔹 BM25 Scores:\n", df_bm25)

輸出:

Okapi BM25

程式碼執行(資訊檢索)

!pip install bm25s
import bm25s
# Create your corpus here
corpus = [
   "a cat is a feline and likes to purr",
   "a dog is the human's best friend and loves to play",
   "a bird is a beautiful animal that can fly",
   "a fish is a creature that lives in water and swims",
]
# Create the BM25 model and index the corpus
retriever = bm25s.BM25(corpus=corpus)
retriever.index(bm25s.tokenize(corpus))
# Query the corpus and get top-k results
query = "does the fish purr like a cat?"
results, scores = retriever.retrieve(bm25s.tokenize(query), k=2)
# Let's see what we got!
for i in range(results.shape[1]):
    doc, score = results[0, i], scores[0, i]
    print(f"Rank {i+1} (score: {score:.2f}): {doc}")

輸出:

Okapi BM25

優勢

  • 改進相關性排名:更好地處理文件長度和術語飽和度。
  • 廣泛採用:許多現代搜尋引擎和 IR 系統的標準配置。

缺點

  • 不是真正的嵌入:它對文件進行評分,而不是產生連續的向量空間表示。
  • 引數敏感性:需要仔細調整才能達到最佳效能。

5. Word2Vec(CBOW和Skip-gram)

Word2Vec 由谷歌於 2013 年推出,它透過學習單詞的密集、低維向量表示,徹底改變了 NLP。它超越了計數和加權,透過訓練淺層神經網路來捕捉基於單詞上下文的語義和句法關係。Word2Vec 有兩種架構:連續詞袋(CBOW)和 Skip-gram。

工作原理

  • CBOW(連續詞袋):
    • 機制:根據周圍的語境詞預測目標詞。
    • 過程:提取多個上下文單詞(忽略順序),並學習預測中心詞。
  • Skip-gram:
    • 機制:使用目標詞預測其周圍的語境詞。
    • 過程:透過關注上下文,對學習罕見詞的表徵尤為有效。
  • 其他細節: 這兩種架構都使用帶有一個隱藏層的淺層神經網路,並採用負取樣(negative sampling)或分層 Softmax(hierarchical softmax)等最佳化技巧來控制計算複雜度。由此產生的嵌入能捕捉到細微的語義關係,例如,“king”減去“man”再加上“woman”就近似於“queen”。

Word2Vec(CBOW和Skip-gram)

Source: Link

程式碼執行

!pip install numpy==1.24.3
from gensim.models import Word2Vec
import networkx as nx
import matplotlib.pyplot as plt
# Sample corpus
sentences = [
["I", "love", "deep", "learning"],
["Natural", "language", "processing", "is", "fun"],
["Word2Vec", "is", "a", "great", "tool"],
["AI", "is", "the", "future"],
]
# Train Word2Vec models
cbow_model = Word2Vec(sentences, vector_size=10, window=2, min_count=1, sg=0)  # CBOW
skipgram_model = Word2Vec(sentences, vector_size=10, window=2, min_count=1, sg=1)  # Skip-gram
# Get word vectors
word = "is"
print(f"CBOW Vector for '{word}':\n", cbow_model.wv[word])
print(f"\nSkip-gram Vector for '{word}':\n", skipgram_model.wv[word])
# Get most similar words
print("\n🔹 CBOW Most Similar Words:", cbow_model.wv.most_similar(word))
print("\n🔹 Skip-gram Most Similar Words:", skipgram_model.wv.most_similar(word))

輸出:

程式碼執行截圖

視覺化 CBOW 和 Skip-gram:

def visualize_cbow():
   G = nx.DiGraph()
   # Nodes
   context_words = ["Natural", "is", "fun"]
   target_word = "learning"
   for word in context_words:
       G.add_edge(word, "Hidden Layer")
   G.add_edge("Hidden Layer", target_word)
   # Draw the network
   pos = nx.spring_layout(G)
   plt.figure(figsize=(6, 4))
   nx.draw(G, pos, with_labels=True, node_size=3000, node_color="lightblue", edge_color="gray")
   plt.title("CBOW Model Visualization")
   plt.show()
visualize_cbow()

輸出:

視覺化 CBOW 和 Skip-gram

def visualize_skipgram():
   G = nx.DiGraph()
   # Nodes
   target_word = "learning"
   context_words = ["Natural", "is", "fun"]
   G.add_edge(target_word, "Hidden Layer")
   for word in context_words:
       G.add_edge("Hidden Layer", word)
   # Draw the network
   pos = nx.spring_layout(G)
   plt.figure(figsize=(6, 4))
   nx.draw(G, pos, with_labels=True, node_size=3000, node_color="lightgreen", edge_color="gray")
   plt.title("Skip-gram Model Visualization")
   plt.show()
visualize_skipgram()

輸出:

視覺化 CBOW 和 Skip-gram

優點

  • 語義豐富:學習單詞之間有意義的關係
  • 高效訓練:可相對快速地在大型語料庫中進行訓練。
  • 密集表示:使用低維度的連續向量,便於下游處理。

缺點

  • 靜態表示:無論上下文如何,每個詞只提供一種嵌入。
  • 語境限制:無法區分在不同語境中具有不同含義的多義詞。

6. GloVe(詞彙表示的全域性向量)

GloVe 於 2014 年在斯坦福大學開發,以 Word2Vec 的理念為基礎,將全域性共現統計與本地上下文資訊相結合。它旨在生成能捕捉語料庫整體統計資訊的詞語嵌入,從而提高不同語境下的一致性。

工作原理

  • 機制:
    • 共現矩陣:構建一個矩陣,記錄詞對在整個語料庫中共同出現的頻率。
      這種共現矩陣的思路也廣泛應用於計算機視覺領域,尤其是 GLCM(灰度共生矩陣):一種用於影像處理和紋理分析的統計方法,它考慮了畫素之間的空間關係。
    • 矩陣因式分解:對該矩陣進行因式分解,從而得出能捕捉全域性統計資訊的詞向量。
  • 其他細節:與 Word2Vec 的純預測式模型不同,GloVe 讓模型直接學習詞語的共現比率,一些研究發現這種方法在捕捉語義相似性和類比關係方面更為穩健。

程式碼實施

import gensim.downloader as api
# Load pre-trained GloVe embeddings
glove_model = api.load("glove-wiki-gigaword-50")  # You can use "glove-twitter-25", "glove-wiki-gigaword-100", etc.
# Example words
word = "king"
print(f"🔹 Vector representation for '{word}':\n", glove_model[word])
# Find similar words
similar_words = glove_model.most_similar(word, topn=5)
print("\n🔹 Words similar to 'king':", similar_words)
word1 = "king"
word2 = "queen"
similarity = glove_model.similarity(word1, word2)
print(f"🔹 Similarity between '{word1}' and '{word2}': {similarity:.4f}")

輸出:

程式碼實施 程式碼實施

這張圖片將幫助您瞭解這種相似性在繪製時的樣子:

相似性
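順帶一提,第 5 節提到的類比運算(“king” 減去 “man” 再加上 “woman” 近似於 “queen”)也可以用上面載入的 glove_model 直接驗證,下面是一個簡短示例(假設上一段程式碼已成功載入模型):

# Word analogy: vector("king") - vector("man") + vector("woman") should be close to "queen"
analogy = glove_model.most_similar(positive=["king", "woman"], negative=["man"], topn=3)
print(analogy)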

優點

  • 全球語境整合: 使用整個語料庫的統計資料來提高代表性。
  • 穩定性: 通常能在不同語境中產生更一致的嵌入。

缺點

  • 資源需求大: 構建和因式分解大型矩陣的計算成本可能很高。
  • 靜態性: 與 Word2Vec 類似,它不能生成與上下文相關的嵌入詞。

GloVe 從詞共現矩陣中學習嵌入。

7. FastText

FastText 由 Facebook 於 2016 年釋出,透過納入子詞(字元 n-gram)資訊對 Word2Vec 進行了擴充套件。這一創新透過將單詞分解為更小的單元,從而捕捉內部結構,幫助模型處理罕見單詞和形態豐富的語言。

工作原理

  • 機制:
    • 子詞建模:將每個單詞表示為其字元 n-gram 向量的總和。
    • 嵌入學習:訓練一個模型,利用這些子詞向量生成最終的詞嵌入。
  • 其他細節:這種方法對於具有豐富詞形的語言和處理詞彙表以外的單詞特別有用。透過分解單詞,FastText 可以更好地概括類似的單詞形式和拼寫錯誤。
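為了更直觀地理解“子詞建模”,下面用幾行純 Python 演示 FastText 風格的字元 n-gram 拆分(僅為示意,實際的 n-gram 範圍與雜湊方式以 FastText 的實現為準):

def char_ngrams(word, n_min=3, n_max=5):
    # FastText-style: add boundary markers before extracting character n-grams
    token = f"<{word}>"
    return [token[i:i + n]
            for n in range(n_min, n_max + 1)
            for i in range(len(token) - n + 1)]

# The word itself is then represented as the sum of its n-gram vectors
print(char_ngrams("where"))
# e.g. ['<wh', 'whe', 'her', 'ere', 're>', '<whe', ...]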

程式碼實現

import gensim.downloader as api
fasttext_model = api.load("fasttext-wiki-news-subwords-300")
# Example word
word = "king"
print(f"🔹 Vector representation for '{word}':\n", fasttext_model[word])
# Find similar words
similar_words = fasttext_model.most_similar(word, topn=5)
print("\n🔹 Words similar to 'king':", similar_words)
word1 = "king"
word2 = "queen"
similarity = fasttext_model.similarity(word1, word2)
print(f"🔹 Similarity between '{word1}' and '{word2}': {similarity:.4f}")

輸出:

FastText程式碼實現截圖-01 FastText程式碼實現截圖-02 FastText程式碼實現截圖-03

優點

  • 處理 OOV(詞彙表外)單詞:當單詞很罕見或從未見過時仍能生成向量,從而提高效能。例如,即使測試資料中出現了訓練資料裡不存在的詞,FastText 也能透過其子詞 n-gram 組合出該詞的向量。
  • 形態意識:捕捉詞語的內部結構。

缺點

  • 複雜性增加:包含子詞資訊會增加計算開銷。
  • 仍然是靜態或固定的:儘管 FastText 有所改進,但它不會根據句子的上下文調整嵌入。

8. Doc2Vec

Doc2Vec 將 Word2Vec 的理念擴充套件到更大的文字體,如句子、段落或整個文件。Doc2Vec 於 2014 年推出,它為可變長度的文字提供了一種獲得固定長度向量表示的方法,從而實現更有效的文件分類、聚類和檢索。

工作原理

  • 機制
    • 分散式記憶體(DM)模型:透過新增一個獨特的文件向量來增強 Word2Vec 架構,該向量與上下文單詞一起預測目標單詞。
    • 分散式詞袋 (DBOW) 模型:透過預測從文件中隨機抽取的單詞來學習文件向量。
  • 其他細節:這些模型學習文件級嵌入,以捕捉文字的整體語義內容。對於需要了解整個文件的結構和主題的任務,這些模型尤其有用。

程式碼實現

import gensim
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
import nltk
nltk.download('punkt_tab')
# Sample documents
documents = [
"Machine learning is amazing",
"Natural language processing enables AI to understand text",
"Deep learning advances artificial intelligence",
"Word embeddings improve NLP tasks",
"Doc2Vec is an extension of Word2Vec"
]
# Tokenize and tag documents
tagged_data = [TaggedDocument(words=nltk.word_tokenize(doc.lower()), tags=[str(i)]) for i, doc in enumerate(documents)]
# Print tagged data
print(tagged_data)
# Define model parameters
model = Doc2Vec(vector_size=50, window=2, min_count=1, workers=4, epochs=100)
# Build vocabulary
model.build_vocab(tagged_data)
# Train the model
model.train(tagged_data, total_examples=model.corpus_count, epochs=model.epochs)
# Test a document by generating its vector
test_doc = "Artificial intelligence uses machine learning"
test_vector = model.infer_vector(nltk.word_tokenize(test_doc.lower()))
print(f"🔹 Vector representation of test document:\n{test_vector}")
# Find most similar documents to the test document
similar_docs = model.dv.most_similar([test_vector], topn=3)
print("🔹 Most similar documents:")
for tag, score in similar_docs:
    print(f"Document {tag} - Similarity Score: {score:.4f}")

輸出:

Doc2Vec程式碼實現截圖 Doc2Vec程式碼實現截圖

優點

  • 文件級表述:有效捕捉較大文字的主題和上下文資訊。
  • 用途廣泛:適用於從推薦系統到聚類和摘要等各種任務。

缺點

  • 訓練敏感性:需要大量資料和仔細調參才能生成高質量的文件向量。
  • 靜態嵌入:無論內容的內部變化如何,每份文件都用一個向量來表示。

9. InferSent

InferSent 由 Facebook 於 2017 年開發,旨在透過對自然語言推理(NLI)資料集的監督學習生成高質量的句子嵌入。它旨在捕捉句子層面的語義細微差別,使其對語義相似性和文字蘊含等任務非常有效。

工作原理

  • 機制:
    • 監督訓練:使用標註的 NLI 資料來學習反映句子間邏輯關係的句子表徵。
    • 雙向 LSTM:採用遞迴神經網路從兩個方向處理句子,以捕捉上下文。
  • 其他細節:該模型利用監督理解來完善嵌入,使語義相似的句子在向量空間中靠得更近,從而大大提高了情感分析和轉述檢測等任務的效能。

程式碼實現

您可以按照此 Kaggle Notebook 來實現此功能。

輸出:

InferSent程式碼實現柱狀圖
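由於 InferSent 需要下載官方預訓練權重和詞向量,這裡僅給出一個極簡的結構草圖,用 PyTorch 示意其核心思想:雙向 LSTM 編碼句子後做最大池化,得到固定長度的句向量(純屬示意,未載入 InferSent 的真實權重,詞表大小與維度均為假設值):

import torch
import torch.nn as nn

class BiLSTMMaxPoolEncoder(nn.Module):
    """Illustrative InferSent-style sentence encoder: BiLSTM + max pooling over time."""
    def __init__(self, vocab_size=10000, emb_dim=300, hidden_dim=512):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, bidirectional=True, batch_first=True)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) -> contextual states: (batch, seq_len, 2 * hidden_dim)
        outputs, _ = self.lstm(self.embedding(token_ids))
        # Max pooling over the time dimension yields a fixed-size sentence vector
        return outputs.max(dim=1).values

encoder = BiLSTMMaxPoolEncoder()
dummy_batch = torch.randint(0, 10000, (2, 7))  # two "sentences" of 7 token ids each
print(encoder(dummy_batch).shape)  # torch.Size([2, 1024])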

優勢

  • 豐富的語義捕捉:提供深入的、上下文細微差別的句子表示。
  • 任務最佳化:擅長捕捉語義推理任務所需的關係。

缺點

  • 依賴標記資料:需要大量標註資料集進行訓練。
  • 計算密集:比無監督方法更耗費資源。

10. 通用句子編碼器(USE)

通用句子編碼器(USE)是谷歌開發的一種模型,用於建立高質量、通用的句子嵌入。USE 於 2018 年釋出,旨在以最小的微調在各種 NLP 任務中良好地執行,使其成為從語義搜尋到文字分類等各種應用的通用工具。

工作原理

  • 機制:
    • 架構選項:USE 可以使用變換器架構或深度平均網路 (DAN) 來實現句子編碼。
    • 預訓練:在大型、多樣化的資料集上進行訓練,捕捉廣泛的語言模式,將句子對映到固定維度的空間中。
  • 其他細節:USE 提供跨領域和跨任務的強大嵌入功能,是一種出色的“開箱即用”解決方案。它的設計兼顧了效能和效率,可提供高階嵌入,無需針對具體任務進行大量調整。

程式碼實現

import tensorflow_hub as hub
import tensorflow as tf
import numpy as np
# Load the model (this may take a few seconds on first run)
embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")
print("✅ USE model loaded successfully!")
# Sample sentences
sentences = [
"Machine learning is fun.",
"Artificial intelligence and machine learning are related.",
"I love playing football.",
"Deep learning is a subset of machine learning."
]
# Get sentence embeddings
embeddings = embed(sentences)
# Convert to NumPy for easier manipulation
embeddings_np = embeddings.numpy()
# Display shape and first vector
print(f"🔹 Embedding shape: {embeddings_np.shape}")
print(f"🔹 First sentence embedding (truncated):\n{embeddings_np[0][:10]} ...")
from sklearn.metrics.pairwise import cosine_similarity
# Compute pairwise cosine similarities
similarity_matrix = cosine_similarity(embeddings_np)
# Display similarity matrix
import pandas as pd
similarity_df = pd.DataFrame(similarity_matrix, index=sentences, columns=sentences)
print("🔹 Sentence Similarity Matrix:\n")
print(similarity_df.round(2))
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
# Reduce to 2D
pca = PCA(n_components=2)
reduced = pca.fit_transform(embeddings_np)
# Plot
plt.figure(figsize=(8, 6))
plt.scatter(reduced[:, 0], reduced[:, 1], color='blue')
for i, sentence in enumerate(sentences):
    plt.annotate(f"Sentence {i+1}", (reduced[i, 0]+0.01, reduced[i, 1]+0.01))
plt.title("📊 Sentence Embeddings (PCA projection)")
plt.xlabel("PCA 1")
plt.ylabel("PCA 2")
plt.grid(True)
plt.show()

輸出:

USE程式碼實現截圖 USE程式碼實現截圖 USE程式碼實現截圖

優點

  • 多功能性:適用範圍廣,無需額外培訓。
  • 預培訓的便利性:可立即使用,節省時間和計算資源。

缺點

  • 表徵固定:每個句子只產生一個嵌入,無法根據不同語境進行動態調整。
  • 模型大小:某些變體相當大,這可能會影響在資源有限的環境中的部署。

11. Node2Vec

Node2Vec 最初是為學習圖結構中的節點嵌入而設計的一種方法。雖然它本身不是一種文字表示方法,但卻越來越多地應用於涉及網路或圖資料的 NLP 任務,如社交網路或知識圖譜。該方法於 2016 年左右推出,有助於捕捉圖資料中的結構關係。

使用案例: 節點分類、連結預測、圖聚類、推薦系統。

工作原理

  • 機制:
    • 隨機漫步:在圖上執行有偏向的隨機行走,生成節點序列。
    • Skip-gram 模型:在這些節點序列上應用與 Word2Vec 類似的 Skip-gram 策略,學習節點的低維嵌入。
  • 其他細節:透過將隨機行走產生的節點序列類比為句子,Node2Vec 能有效捕捉圖的區域性和全域性結構。它透過若干可調引數進行控制,包括 dimensions(嵌入向量大小)、walk_length(每次隨機行走經過的節點數)、num_walks(每個節點發起的行走次數),以及偏置引數 p(返回因子)和 q(進出因子);後兩者透過在廣度優先(BFS)與深度優先(DFS)的探索傾向之間取得平衡來控制行走行為。學到的嵌入保留了網路結構和節點關係,可用於聚類、分類、連結預測或推薦系統等下游任務。

程式碼實現

我們將使用 NetworkX 現成的圖來檢視我們的 Node2Vec 實現。要了解有關空手道俱樂部圖的更多資訊,請單擊此處

!pip install numpy==1.24.3 # Adjust version if needed
import networkx as nx
import numpy as np
from node2vec import Node2Vec
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
# Create a simple graph
G = nx.karate_club_graph()  # A famous test graph with 34 nodes
# Visualize original graph
plt.figure(figsize=(6, 6))
nx.draw(G, with_labels=True, node_color='skyblue', edge_color='gray', node_size=500)
plt.title("Original Karate Club Graph")
plt.show()
# Initialize Node2Vec model
node2vec = Node2Vec(G, dimensions=64, walk_length=30, num_walks=200, workers=2)
# Train the model (Word2Vec under the hood)
model = node2vec.fit(window=10, min_count=1, batch_words=4)
# Get the vector for a specific node
node_id = 0
vector = model.wv[str(node_id)]  # Note: Node IDs are stored as strings
print(f"🔹 Embedding for node {node_id}:\n{vector[:10]}...")  # Truncated
# Get all embeddings
node_ids = model.wv.index_to_key
embeddings = np.array([model.wv[node] for node in node_ids])
# Reduce dimensions to 2D
pca = PCA(n_components=2)
reduced = pca.fit_transform(embeddings)
# Plot embeddings
plt.figure(figsize=(8, 6))
plt.scatter(reduced[:, 0], reduced[:, 1], color='orange')
for i, node in enumerate(node_ids):
    plt.annotate(node, (reduced[i, 0] + 0.05, reduced[i, 1] + 0.05))
plt.title("📊 Node2Vec Embeddings (PCA Projection)")
plt.xlabel("PCA 1")
plt.ylabel("PCA 2")
plt.grid(True)
plt.show()
# Find most similar nodes to node 0
similar_nodes = model.wv.most_similar(str(0), topn=5)
print("🔹 Nodes most similar to node 0:")
for node, score in similar_nodes:
    print(f"Node {node} → Similarity Score: {score:.4f}")

輸出:

Node2Vec Node2Vec Node2Vec Node2Vec

優勢

  • 圖形結構捕捉:擅長嵌入具有豐富關係資訊的節點。
  • 靈活性:可應用於任何圖結構資料,而不僅僅是語言。

缺點

  • 領域特定性:對純文字的適用性較差,除非將其表示為圖形。
  • 引數敏感性:嵌入的質量對隨機遊走中使用的引數很敏感。

12. ELMo(語言模型嵌入)

ELMo 由艾倫人工智慧研究所於 2018 年推出,透過提供深度上下文化的單詞表示,標誌著一項突破。與早期為每個單詞生成單一向量的模型不同,ELMo 生成的動態嵌入會根據句子的上下文發生變化,同時捕捉句法和語義的細微差別。

工作原理

  • 機制
    • 雙向 LSTM:從正向和反向兩個方向處理文字,以捕捉完整的上下文資訊。
    • 分層表示:結合神經網路多個層的表徵,每個層捕捉語言的不同方面。
  • 其他細節:關鍵的創新之處在於,同一個詞可以根據不同的用法得到不同的嵌入,這使得 ELMo 能更有效地處理歧義和多義詞。這種對上下文的敏感性提升了許多下游 NLP 任務的表現。

程式碼實現

世界各地的 NLP 研究者已經在研究和工業場景中將 ELMo 用於各類 NLP 任務。建議參閱 ELMo 的原始研究論文以瞭解更多細節。
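如果想快速體驗,下面是一個經過簡化的使用草圖。這裡假設 TensorFlow Hub 上的 google/elmo/3 模組仍可正常下載,且其預設簽名會返回逐詞的上下文相關向量(鍵名 "elmo")與句子級向量(鍵名 "default");具體介面請以該模組的官方文件為準:

import tensorflow as tf
import tensorflow_hub as hub

# Load the (TF1-style) ELMo module -- assumed still hosted on TF Hub
elmo = hub.load("https://tfhub.dev/google/elmo/3")

sentences = tf.constant([
    "The bank raised interest rates.",
    "They sat on the river bank."
])
outputs = elmo.signatures["default"](sentences)

# "elmo": contextual word vectors (batch, max_len, 1024); the word "bank" should get
# a different vector in each sentence because its context differs
print(outputs["elmo"].shape)
print(outputs["default"].shape)  # sentence-level vectors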

優勢

  • 語境感知:根據上下文提供不同的單詞嵌入。
  • 增強效能:在情感分析、問答和機器翻譯等多種任務上提升效果。

缺點

  • 計算要求高:需要更多資源進行訓練和推理。
  • 架構複雜:與其他更簡單的模型相比,在實施和微調方面具有挑戰性。

13. BERT及其變體

什麼是 BERT?

BERT,即 Bidirectional Encoder Representations from Transformers,由谷歌於 2018 年釋出,透過引入一種能捕捉雙向語境的基於 Transformer 的架構,徹底改變了 NLP。與以往以單向方式處理文字的模型不同,BERT 同時考慮每個單詞的左右上下文。這種深入的上下文理解使 BERT 能夠出色地完成從問答、情感分析到命名實體識別等各種任務。

工作原理

  • Transformer 架構:BERT 建立在多層 Transformer 網路之上,利用自注意力(self-attention)機制同時捕捉句子中所有單詞之間的依賴關係,使模型能夠衡量每個詞相對於其他所有詞的重要性。
  • 遮蔽語言建模:在預訓練過程中,BERT 會隨機遮蔽輸入中的某些單詞,然後根據上下文對其進行預測。這迫使模型學習雙向語境,並對語言模式形成穩健的理解。
  • 下一句預測:BERT 還對成對的句子進行訓練,學習預測一個句子在邏輯上是否緊跟另一個句子。這有助於捕捉句子之間的關係,而這對文件分類和自然語言推理等任務至關重要。

其他細節:BERT 的架構允許它學習複雜的語言模式,包括語法和語義。對下游任務的微調非常簡單,因此在許多基準測試中都取得了一流的效能。
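為了更直觀地感受上面提到的遮蔽語言建模,下面用 Hugging Face transformers 的 fill-mask 管道做一個最小演示(假設已安裝 transformers 並可下載 bert-base-uncased):

from transformers import pipeline

# BERT predicts the masked token using context from BOTH sides
fill_mask = pipeline("fill-mask", model="bert-base-uncased")
for prediction in fill_mask("Natural language processing is a [MASK] field."):
    print(f"{prediction['token_str']:>12}  score={prediction['score']:.3f}")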

優勢

  • 深度語境理解:透過考慮過去和未來的語境,BERT 可以生成更豐富、更細緻的單詞表述。
  • 多功能性:只需進行相對較少的額外訓練,即可對 BERT 進行微調,以適應各種下游任務。

缺點

  • 計算負荷重:該模型在訓練和推理過程中都需要大量的計算資源。
  • 模型規模大:BERT 的引數數量較多,在資源有限的環境中部署具有挑戰性。

SBERT(Sentence-BERT)

Sentence-BERT (SBERT) 於 2019 年推出,旨在解決 BERT 的一個關鍵侷限–即在生成語義上有意義的句子嵌入方面效率低下,無法完成語義相似性、聚類和資訊檢索等任務。SBERT 對 BERT 的架構進行了調整,以生成固定大小的句子嵌入,並對直接比較句子含義進行了最佳化。

工作原理

  • 連體網路架構:SBERT 採用連體(或三重)網路架構,修改了 BERT 的原始結構。這意味著它可以透過相同的基於 BERT 的編碼器並行處理兩個(或多個)句子,從而讓模型學習嵌入,使語義相似的句子在向量空間中靠得更近。
  • 池化操作:句子經過 BERT 處理後,SBERT 對各個標記嵌入應用池化策略(通常為平均池化,mean pooling),為每個句子生成固定大小的向量。
  • 句對微調:SBERT 在涉及句子對的任務中使用對比損失或三重損失進行微調。這一訓練目標可促使模型在嵌入空間中將相似的句子放在更近的位置,而將不相似的句子放在更遠的位置。

優勢

  • 高效的句子比較:SBERT 針對語義搜尋和聚類等任務進行了最佳化。由於其固定大小和語義豐富的句子嵌入,比較數以萬計的句子在計算上是可行的。
  • 下游任務的多功能性:SBERT 嵌入對各種應用都很有效,如轉述檢測、語義文字相似性和資訊檢索。

不足之處:

  • 依賴微調資料:微調過程中使用的訓練資料的領域和質量會嚴重影響 SBERT 嵌入的質量。
  • 資源密集型訓練:雖然推理效率很高,但初始微調過程需要大量計算資源。
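下面用 sentence-transformers 庫給出一個最小的 SBERT 用法示例(假設已安裝 sentence-transformers;模型名 all-MiniLM-L6-v2 僅作示意,可換成任何 SBERT 風格的模型):

from sentence_transformers import SentenceTransformer, util

# Load a pre-trained SBERT-style model (downloaded on first use)
model = SentenceTransformer("all-MiniLM-L6-v2")

sentences = [
    "A man is playing a guitar.",
    "Someone is playing music.",
    "It is snowing outside."
]
embeddings = model.encode(sentences, convert_to_tensor=True)

# Cosine similarity between the first sentence and the other two
print(util.cos_sim(embeddings[0], embeddings[1:]))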

DistilBERT

DistilBERT 由 Hugging Face 於 2019 年推出,是 BERT 的一個更輕、更快的變體,保留了其大部分效能。它是利用一種稱為知識蒸餾的技術建立的,即訓練一個較小的模型(學生)來模仿一個較大的、預先訓練好的模型(教師)的行為,在本例中就是 BERT。

工作原理

  • 知識蒸餾:DistilBERT 的訓練目標是在使用更少引數的同時,匹配原始 BERT 模型的輸出分佈。它減少了層數(例如只保留 6 層 Transformer,而 BERT-base 有 12 層),但保留了關鍵的語言理解能力。
  • 損失函式:訓練使用語言建模損失與蒸餾損失(教師與學生 logits 之間的 KL 散度)的組合。
  • 速度最佳化:DistilBERT 經過最佳化,推理速度提高了 60%,同時保留了 BERT 在下游任務中約 97% 的效能。
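下面是一個簡化的蒸餾損失草圖,只示意“教師/學生 logits 之間的 KL 散度”這一部分(溫度引數與縮放方式為常見做法,並非 DistilBERT 完整的訓練目標):

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions."""
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    # "batchmean" matches the mathematical definition of KL divergence
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * (temperature ** 2)

# Toy example: 4 samples, 10 output classes
teacher_logits = torch.randn(4, 10)
student_logits = torch.randn(4, 10)
print(distillation_loss(student_logits, teacher_logits))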

優勢

  • 輕便快速:由於計算需求降低,因此是即時或移動應用的理想選擇。
  • 具有競爭力的效能:以顯著降低的資源使用率實現接近 BERT 的準確性。

缺點

  • 精度略有下降:雖然非常接近,但在複雜任務中可能略遜於完整的 BERT 模型。
  • 微調靈活性有限:在細分領域的通用性可能不如全尺寸模型。

RoBERTa

RoBERTa 或 Robustly Optimized BERT Pretraining Approach 是由 Facebook AI 於 2019 年推出的,是對 BERT 的穩健增強。它調整了預訓練方法,在各種任務中顯著提高了效能。

工作原理

  • 訓練增強
    • 刪除了 “下一句預測”(NSP)目標,該目標在某些情況下會影響效能。
    • 更大的資料集(如普通爬行)上進行更長時間的訓練
    • 使用更大的迷你批次更多的訓練步驟來穩定和最佳化學習。
  • 動態遮蔽:與 BERT 的靜態遮蔽相比,這種方法在每次訓練過程中都會即時應用遮蔽,讓模型接觸到更多不同的遮蔽模式。

優點

  • 卓越效能:在 GLUE 和 SQuAD 等多個基準測試中的表現優於 BERT。
  • 強大的學習能力:由於改進了訓練資料和策略,跨領域通用性更強。

缺點

  • 資源密集:計算要求比 BERT 還高。
  • 過擬合風險:由於在非常龐大的資料集上進行了大量訓練,若下游微調處理不慎,仍可能出現過擬合。

程式碼執行

from transformers import AutoTokenizer, AutoModel
import torch
# Input sentence for embedding
sentence = "Natural Language Processing is transforming how machines understand humans."
# Choose device (GPU if available)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# =============================
# 1. BERT Base Uncased
# =============================
# model_name = "bert-base-uncased"
# =============================
# 2. SBERT - Sentence-BERT
# =============================
# model_name = "sentence-transformers/all-MiniLM-L6-v2"
# =============================
# 3. DistilBERT
# =============================
# model_name = "distilbert-base-uncased"
# =============================
# 4. RoBERTa
# =============================
model_name = "roberta-base"  # Only RoBERTa is active; uncomment one of the options above to test the other models
# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name).to(device)
model.eval()
# Tokenize input
inputs = tokenizer(sentence, return_tensors='pt', truncation=True, padding=True).to(device)
# Forward pass to get embeddings
with torch.no_grad():
    outputs = model(**inputs)
# Get token embeddings
token_embeddings = outputs.last_hidden_state  # (batch_size, seq_len, hidden_size)
# Mean Pooling for sentence embedding
sentence_embedding = torch.mean(token_embeddings, dim=1)
print(f"Sentence embedding from {model_name}:")
print(sentence_embedding)

輸出:

RoBERTa

摘要

  • BERT:提供深度雙向的上下文嵌入,適用於各種 NLP 任務。它透過基於 Transformer 的自注意力捕捉複雜的語言模式,但產生的是標記級嵌入,用於句子級任務時需要額外的彙總(池化)。
  • SBERT:對 BERT 進行了擴充套件,將其轉化為一個可直接生成有意義句子嵌入的模型。憑藉其連體網路架構和對比學習目標,SBERT 在需要對句子進行快速、準確的語義比較的任務中表現出色,例如語義搜尋、意譯檢測和句子聚類。
  • DistilBERT:透過使用知識蒸餾技術,為 BERT 提供了一種更輕、更快的替代方案。它保留了 BERT 的大部分效能,同時更適合即時或資源受限的應用。在推理速度和效率是關鍵因素的情況下,它是理想的選擇,不過在複雜場景中可能會略顯不足。
  • RoBERTa:在 BERT 的基礎上改進了預訓練方式:取消了下一句預測任務、使用更大的資料集,並應用動態遮蔽。這些改動帶來了更好的泛化能力和在各類基準上的更佳效能,但代價是更高的計算資源消耗。

其他著名的 BERT 變體

雖然 BERT 及其直接後代(如 SBERT、DistilBERT 和 RoBERTa)在 NLP 領域產生了重大影響,但也出現了其他一些強大的變體,以解決不同的侷限性並增強特定功能:

  • ALBERT (A Lite BERT):ALBERT 是 BERT 的更高效版本,它透過兩項關鍵創新減少引數數量:因式分解的嵌入引數化(將詞嵌入矩陣的大小與隱藏層維度解耦)和跨層引數共享(在各 Transformer 層之間重複使用權重)。這些改變使 ALBERT 速度更快、記憶體效率更高,同時在許多 NLP 基準測試中保持相當的效能。
  • XLNet:與依賴遮蔽語言建模的 BERT 不同,XLNet 採用基於置換的自迴歸訓練策略。這樣它無需像遮蔽那樣破壞輸入資料,就能捕捉雙向語境。XLNet 還融合了 Transformer-XL 的思想,使其能夠對長距離依賴進行建模,並在多項 NLP 任務中表現優於 BERT。
  • T5(Text-to-Text Transfer Transformer):由谷歌研究院開發,T5 將從翻譯到分類的所有 NLP 任務都視為文字到文字的問題。例如,T5 不是直接輸出一個分類標籤 ID,而是學習將標籤作為單詞或短語生成出來。這種統一的方法使其高度靈活且功能強大,能夠應對廣泛的 NLP 挑戰。

14. CLIP和BLIP

CLIP(對比語言-影像預訓練)和 BLIP(引導語言-影像預訓練)等現代多模態模型代表了嵌入技術的最新前沿。它們在文字資料和視覺資料之間架起了一座橋樑,使涉及語言和影像的任務成為可能。這些模型已成為影像搜尋、字幕和視覺問題解答等應用的關鍵。

工作原理

  • CLIP:
    • 機制:在大型影像-文字對資料集上進行訓練,利用對比學習將影像嵌入與相應的文字嵌入對齊。
    • 過程:模型學習將影像和文字對映到一個共享的向量空間,在這個空間中,相關的影像和文字對更接近。
  • BLIP:
    • 機制:使用引導方法,透過迭代訓練完善語言與視覺之間的對齊。
    • 過程:在初始對齊的基礎上進行改進,以實現更準確的多模態表徵。
  • 其他細節:這些模型使用 Transformer 處理文字,並使用卷積網路或基於 Transformer 的視覺編碼器處理影像。它們對文字和視覺內容進行聯合推理的能力,為多模態人工智慧研究開闢了新的可能性。

程式碼實現

from transformers import CLIPProcessor, CLIPModel
# from transformers import BlipProcessor, BlipModel  # Uncomment to use BLIP
from PIL import Image
import torch
import requests
# Choose device
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Load a sample image and text
image_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/datasets/cat_style_layout.png"
image = Image.open(requests.get(image_url, stream=True).raw).convert("RGB")
text = "a cute puppy"
# ===========================
# 1. CLIP (for Embeddings)
# ===========================
clip_model_name = "openai/clip-vit-base-patch32"
clip_model = CLIPModel.from_pretrained(clip_model_name).to(device)
clip_processor = CLIPProcessor.from_pretrained(clip_model_name)
# Preprocess input
inputs = clip_processor(text=[text], images=image, return_tensors="pt", padding=True).to(device)
# Get text and image embeddings
with torch.no_grad():
    text_embeddings = clip_model.get_text_features(input_ids=inputs["input_ids"])
    image_embeddings = clip_model.get_image_features(pixel_values=inputs["pixel_values"])
# Normalize embeddings (optional)
text_embeddings = text_embeddings / text_embeddings.norm(dim=-1, keepdim=True)
image_embeddings = image_embeddings / image_embeddings.norm(dim=-1, keepdim=True)
print("Text Embedding Shape (CLIP):", text_embeddings.shape)
print("Image Embedding Shape (CLIP):", image_embeddings.shape)
# ===========================
# 2. BLIP (commented)
# ===========================
# blip_model_name = "Salesforce/blip-image-text-matching-base"
# blip_processor = BlipProcessor.from_pretrained(blip_model_name)
# blip_model = BlipModel.from_pretrained(blip_model_name).to(device)
# inputs = blip_processor(images=image, text=text, return_tensors="pt").to(device)
# with torch.no_grad():
#     text_embeddings = blip_model.text_encoder(input_ids=inputs["input_ids"]).last_hidden_state[:, 0, :]
#     image_embeddings = blip_model.vision_model(pixel_values=inputs["pixel_values"]).last_hidden_state[:, 0, :]
# print("Text Embedding Shape (BLIP):", text_embeddings.shape)
# print("Image Embedding Shape (BLIP):", image_embeddings.shape)

輸出:

CLIP和BLIP
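得到上面歸一化後的嵌入後,只需再加幾行就能計算文字與影像的匹配程度(假設沿用上段程式碼中的 text_embeddings 與 image_embeddings 變數):

# Cosine similarity between the normalized text and image embeddings
similarity = (text_embeddings @ image_embeddings.T).item()
print(f"CLIP text-image similarity for 'a cute puppy': {similarity:.4f}")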

優點

  • 跨模態理解:提供跨文字和影像的強大表徵。
  • 適用性廣:適用於影像檢索、字幕和其他多模態任務。

缺點

  • 複雜性高:訓練需要大量經過精心整理的配對資料集。
  • 資源需求大:多模態模型是計算要求最高的模型之一。

各種嵌入技術比較

嵌入技術 | 型別 | 模型架構/方法 | 常見用例
Count Vectorizer | 獨立於上下文,無 ML | 基於計數(詞袋模型) | 文字分類基線、關鍵詞頻率統計、簡單 NLP 流程
One-Hot Encoding | 獨立於上下文,無 ML | 手動編碼 | 基準模型、基於規則的系統
TF-IDF | 獨立於上下文,無 ML | 計數 + 反向文件頻率 | 文件排名、文字相似性、關鍵詞提取
Okapi BM25 | 獨立於上下文,統計排序 | 機率資訊檢索模型 | 搜尋引擎、資訊檢索
Word2Vec (CBOW, SG) | 獨立於上下文,基於 ML | 淺層神經網路 | 情感分析、詞語相似性、NLP 管道
GloVe | 獨立於上下文,基於 ML | 全域性共現矩陣 + ML | 詞語相似性、嵌入初始化
FastText | 獨立於上下文,基於 ML | Word2Vec + 子詞嵌入 | 形態豐富的語言、OOV 詞處理
Doc2Vec | 獨立於上下文,基於 ML | 針對文件的 Word2Vec 擴充套件 | 文件分類、聚類
InferSent | 上下文相關,基於 RNN | 帶監督學習的 BiLSTM | 語義相似性、NLI 任務
Universal Sentence Encoder | 上下文相關,基於 Transformer | Transformer / DAN(深度平均網路) | 用於搜尋的句子嵌入、聊天機器人、語義相似性
Node2Vec | 基於圖的嵌入 | 隨機行走 + Skip-gram | 圖表示、推薦系統、連結預測
ELMo | 上下文相關,基於 RNN | 雙向 LSTM | 命名實體識別、問答、核心參照解析
BERT 及其變體 | 上下文相關,基於 Transformer | Transformer 編碼器(自注意力) | 問答、情感分析、摘要、語義搜尋
CLIP | 多模態,基於 Transformer | 視覺 + 文字編碼器(對比學習) | 影像標題、跨模態搜尋、文字到影像檢索
BLIP | 多模態,基於 Transformer | 視覺語言預訓練(VLP) | 影像標題、VQA(視覺問答)

小結

從基於計數的基本方法(如 One-Hot 編碼)到如今功能強大、具備上下文感知能力、甚至多模態的模型(如 BERT 和 CLIP),嵌入技術已經走過了漫長的道路。每一步都是為了突破上一步的侷限,幫助我們更好地理解和表達人類語言。如今,得益於 Hugging Face 和 Ollama 等平臺,我們可以訪問越來越豐富的前沿嵌入模型庫,比以往任何時候都更容易進入語言智慧的新時代。

不過,除了瞭解這些技術的工作原理,我們還應該考慮它們如何與實際目標相匹配。無論您是在構建聊天機器人、語義搜尋引擎、推薦系統還是文件摘要系統,總有一種嵌入技術能將想法變為現實。畢竟,在當今的語言技術世界裡,每一個構想都能找到屬於它的向量。
