Cross Entropy Loss in Language Model Evaluation

Cross entropy loss is one of the cornerstone metrics for evaluating language models, serving both as a training objective and as an evaluation metric. In this comprehensive guide, we explore what cross entropy loss is, the specific roles it plays in large language models (LLMs), and why it is so important for understanding model performance.

Whether you are a machine learning practitioner, a researcher, or simply someone who wants to understand how modern AI systems are trained and evaluated, this article will give you a thorough understanding of cross entropy loss and its importance in language modeling.

What Is Cross Entropy Loss?

Cross entropy loss measures the performance of a classification model whose output is a probability distribution. In language models, it quantifies the difference between the predicted probability distribution over the next token and the actual distribution (usually a one-hot vector representing the true next token).

Key Characteristics of Cross Entropy Loss

  • Information-theoretic foundation: Cross entropy is rooted in information theory. It measures how many bits of information are needed to identify an event from one probability distribution (the true distribution) when using a coding scheme optimized for another distribution (the predicted distribution).
  • Probabilistic outputs: It applies to models that produce probability distributions rather than deterministic outputs.
  • Asymmetric: Unlike some other distance measures, cross entropy is not symmetric; the ordering of the true and predicted distributions matters (a small numerical check follows this list).
  • Differentiable: Essential for the gradient-based optimization methods used to train neural networks.
  • Sensitive to confidence: It heavily penalizes confident but wrong predictions, encouraging the model to be appropriately uncertain.
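
The asymmetry point above is easy to check numerically. The following NumPy sketch (our illustration, not part of the original code in this article) computes H(p, q) and H(q, p) for two toy distributions and shows that they differ:

import numpy as np

def cross_entropy(p, q):
    # H(p, q) = -sum_i p_i * log(q_i)
    return -np.sum(p * np.log(q))

p = np.array([0.9, 0.1])   # "true" distribution
q = np.array([0.6, 0.4])   # "predicted" distribution

print(cross_entropy(p, q))  # ≈ 0.55
print(cross_entropy(q, p))  # ≈ 0.98 -- a different value, so the ordering matters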

Binary Cross Entropy and Its Formula

For binary classification tasks (such as simple yes/no questions or sentiment analysis), binary cross entropy is used:

BCE = -\frac{1}{N}\sum_{i=1}^{N}\left[\, y_i \log(\hat{y}_i) + (1 - y_i)\log(1 - \hat{y}_i) \,\right]

Where:

  • y_i is the true label (0 or 1)
  • ŷ_i is the predicted probability
  • N is the number of samples

Binary cross entropy is also known as log loss, especially in machine learning competitions.
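
As a concrete illustration of the formula above, here is a minimal NumPy sketch of binary cross entropy (the function name binary_cross_entropy is ours, chosen for illustration):

import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-10):
    # BCE = -1/N * sum( y*log(y_hat) + (1-y)*log(1-y_hat) )
    y_pred = np.clip(y_pred, eps, 1 - eps)  # avoid log(0)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

y_true = np.array([1, 0, 1, 1])
y_pred = np.array([0.9, 0.2, 0.7, 0.4])
print(binary_cross_entropy(y_true, y_pred))  # ≈ 0.40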

Cross Entropy as a Loss Function

During training, cross entropy is the objective function the model tries to minimize. By comparing the model's predicted probability distribution with the ground truth, the training algorithm adjusts the model's parameters to reduce the gap between the two.
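
In practice this minimization happens inside an ordinary gradient-descent loop. A minimal PyTorch sketch of a single training step is shown below; it assumes that model, inputs and targets are defined as in the full example later in this article:

import torch.nn as nn
import torch.optim as optim

criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=1e-3)

logits = model(inputs)                    # [batch_size, seq_length, vocab_size]
loss = criterion(logits.view(-1, logits.size(-1)), targets.view(-1))
optimizer.zero_grad()
loss.backward()                           # gradients of the cross entropy loss
optimizer.step()                          # parameters move to reduce the loss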

The Role of Cross Entropy in Large Language Models

In large language models, cross entropy loss plays several key roles:

  1. Training objective: Minimizing this loss is the primary goal of both pretraining and fine-tuning.
  2. Evaluation metric: It is used to assess model performance on held-out data.
  3. Perplexity calculation: Perplexity, another common LLM evaluation metric, is derived from cross entropy: Perplexity = 2^{CrossEntropy} (see the note on units after this list).
  4. Model comparison: Different models can be compared by their loss on the same dataset.
  5. Transfer learning assessment: It indicates how well a model transfers knowledge from pretraining to downstream tasks.
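
A note on the perplexity formula in point 3: the base of the exponent must match the units of the cross entropy. Use 2^{CrossEntropy} when the loss is measured in bits, and e^{CrossEntropy} when it is measured in nats, which is what PyTorch and TensorFlow report by default. A one-line sketch:

import math

cross_entropy_nats = 9.21          # e.g. the loss printed by the example later in this article
perplexity = math.exp(cross_entropy_nats)
print(perplexity)                  # ≈ 10000: the model is roughly as "confused" as a uniform
                                   # guess over a 10,000-token vocabulary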

How Does It Work?

For a language model, cross entropy loss works as follows:

  1. The model predicts a probability distribution over the entire vocabulary for the next token.
  2. That distribution is compared with the true distribution (usually a one-hot vector in which the actual next token has probability 1).
  3. The negative log probability of the true token under the model's distribution is computed.
  4. This value is averaged over all tokens in the sequence or dataset (the sketch after this list traces these steps for a single position).
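
These steps can be traced in a few lines of PyTorch for a single position (the tiny four-token vocabulary and the logits below are made up purely for illustration):

import torch
import torch.nn.functional as F

logits = torch.tensor([2.0, 0.5, -1.0, 0.0])      # raw model scores over a 4-token vocabulary
probs = F.softmax(logits, dim=0)                  # step 1: probability distribution
true_token = 0                                    # step 2: the actual next token
loss_at_position = -torch.log(probs[true_token])  # step 3: negative log probability
print(loss_at_position.item())                    # ≈ 0.34; step 4 averages this over all positions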

Formula and Explanation

The general formula for cross entropy loss in language modeling is:

CrossEntropy = -\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{V} y_{i,j}\,\log(\hat{y}_{i,j})

where:

  • N is the number of tokens in the sequence
  • V is the vocabulary size
  • y_{i,j} is 1 if token j is the correct next token at position i, and 0 otherwise
  • \hat{y}_{i,j} is the predicted probability of token j at position i

Since we usually work with a one-hot encoded ground truth, this simplifies to:

CrossEntropy = -\frac{1}{N}\sum_{i=1}^{N}\log(\hat{y}_{i,\,t_i})

where t_i is the index of the true token at position i.
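
A minimal NumPy sketch of this simplified formula (the arrays probs and true_token_indices are made up for illustration):

import numpy as np

# Predicted distributions for 3 positions over a vocabulary of 4 tokens
probs = np.array([
    [0.7, 0.1, 0.1, 0.1],
    [0.2, 0.5, 0.2, 0.1],
    [0.1, 0.1, 0.1, 0.7],
])
true_token_indices = np.array([0, 1, 3])  # t_i for each position i

# Loss = -1/N * sum_i log( y_hat[i, t_i] )
loss = -np.mean(np.log(probs[np.arange(len(true_token_indices)), true_token_indices]))
print(loss)  # ≈ 0.47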

Implementing Cross Entropy Loss in PyTorch and TensorFlow

# PyTorch Implementation
import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np
import matplotlib.pyplot as plt
# Simple Language Model in PyTorch
class SimpleLanguageModel(nn.Module):
    def __init__(self, vocab_size, embedding_dim, hidden_dim):
        super(SimpleLanguageModel, self).__init__()
        self.embedding = nn.Embedding(vocab_size, embedding_dim)
        self.lstm = nn.LSTM(embedding_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, vocab_size)
    def forward(self, x):
        # x shape: [batch_size, sequence_length]
        embedded = self.embedding(x)  # [batch_size, sequence_length, embedding_dim]
        lstm_out, _ = self.lstm(embedded)  # [batch_size, sequence_length, hidden_dim]
        logits = self.fc(lstm_out)  # [batch_size, sequence_length, vocab_size]
        return logits
# Manual Cross Entropy Loss calculation
def manual_cross_entropy_loss(logits, targets):
    """
    Computes cross entropy loss manually
    Args:
        logits: Raw model outputs [batch_size, sequence_length, vocab_size]
        targets: True token indices [batch_size, sequence_length]
    """
    batch_size, seq_len, vocab_size = logits.shape
    # Reshape for easier processing
    logits = logits.reshape(-1, vocab_size)  # [batch_size*sequence_length, vocab_size]
    targets = targets.reshape(-1)  # [batch_size*sequence_length]
    # Convert logits to probabilities using softmax
    probs = F.softmax(logits, dim=1)
    # Get probability of the correct token for each position
    correct_token_probs = probs[range(len(targets)), targets]
    # Compute negative log likelihood
    nll = -torch.log(correct_token_probs + 1e-10)  # Add small epsilon to prevent log(0)
    # Average over all tokens
    loss = torch.mean(nll)
    return loss
# Example usage
def pytorch_example():
    # Parameters
    vocab_size = 10000
    embedding_dim = 128
    hidden_dim = 256
    batch_size = 32
    seq_length = 50
    # Sample data
    inputs = torch.randint(0, vocab_size, (batch_size, seq_length))
    targets = torch.randint(0, vocab_size, (batch_size, seq_length))
    # Create model
    model = SimpleLanguageModel(vocab_size, embedding_dim, hidden_dim)
    # Get model outputs
    logits = model(inputs)
    # PyTorch's built-in loss function
    criterion = nn.CrossEntropyLoss()
    # For CrossEntropyLoss, we need to reshape
    pytorch_loss = criterion(logits.view(-1, vocab_size), targets.view(-1))
    # Our manual implementation
    manual_loss = manual_cross_entropy_loss(logits, targets)
    print(f"PyTorch CrossEntropyLoss: {pytorch_loss.item():.4f}")
    print(f"Manual CrossEntropyLoss: {manual_loss.item():.4f}")
    return model, logits, targets
# TensorFlow Implementation
def tensorflow_implementation():
    import tensorflow as tf
    # Parameters
    vocab_size = 10000
    embedding_dim = 128
    hidden_dim = 256
    batch_size = 32
    seq_length = 50
    # Simple Language Model in TensorFlow
    class TFSimpleLanguageModel(tf.keras.Model):
        def __init__(self, vocab_size, embedding_dim, hidden_dim):
            super(TFSimpleLanguageModel, self).__init__()
            self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
            self.lstm = tf.keras.layers.LSTM(hidden_dim, return_sequences=True)
            self.fc = tf.keras.layers.Dense(vocab_size)
        def call(self, x):
            embedded = self.embedding(x)
            lstm_out = self.lstm(embedded)
            return self.fc(lstm_out)
    # Create model
    tf_model = TFSimpleLanguageModel(vocab_size, embedding_dim, hidden_dim)
    # Sample data
    tf_inputs = tf.random.uniform((batch_size, seq_length), minval=0, maxval=vocab_size, dtype=tf.int32)
    tf_targets = tf.random.uniform((batch_size, seq_length), minval=0, maxval=vocab_size, dtype=tf.int32)
    # Get model outputs
    tf_logits = tf_model(tf_inputs)
    # TensorFlow's built-in loss function
    tf_loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
    tf_loss = tf_loss_fn(tf_targets, tf_logits)
    # Manual cross entropy calculation in TensorFlow
    def tf_manual_cross_entropy(logits, targets):
        batch_size, seq_len, vocab_size = logits.shape
        # Reshape
        logits_flat = tf.reshape(logits, [-1, vocab_size])
        targets_flat = tf.reshape(targets, [-1])
        # Convert to probabilities
        probs = tf.nn.softmax(logits_flat, axis=1)
        # Get correct token probabilities
        indices = tf.stack([tf.range(tf.shape(targets_flat)[0], dtype=tf.int32), tf.cast(targets_flat, tf.int32)], axis=1)
        correct_probs = tf.gather_nd(probs, indices)
        # Compute loss
        loss = -tf.reduce_mean(tf.math.log(correct_probs + 1e-10))
        return loss
    manual_tf_loss = tf_manual_cross_entropy(tf_logits, tf_targets)
    print(f"TensorFlow CrossEntropyLoss: {tf_loss.numpy():.4f}")
    print(f"Manual TF CrossEntropyLoss: {manual_tf_loss.numpy():.4f}")
    return tf_model, tf_logits, tf_targets
# Visualizing Cross Entropy
def visualize_cross_entropy():
    # True label is 1 (one-hot encoding would be [0, 1])
    true_label = 1
    # Range of predicted probabilities for class 1
    predicted_probs = np.linspace(0.01, 0.99, 100)
    # Calculate cross entropy loss for each predicted probability
    cross_entropy = [-np.log(p) if true_label == 1 else -np.log(1-p) for p in predicted_probs]
    # Plot
    plt.figure(figsize=(10, 6))
    plt.plot(predicted_probs, cross_entropy)
    plt.title('Cross Entropy Loss vs. Predicted Probability (True Class = 1)')
    plt.xlabel('Predicted Probability for Class 1')
    plt.ylabel('Cross Entropy Loss')
    plt.grid(True)
    plt.axvline(x=1.0, color='r', linestyle='--', alpha=0.5, label='True Probability = 1.0')
    plt.legend()
    plt.show()
    # Visualize loss landscape for binary classification
    probs_0 = np.linspace(0.01, 0.99, 100)
    probs_1 = 1 - probs_0
    # Calculate loss for true label = 0
    loss_true_0 = [-np.log(1-p) for p in probs_0]
    # Calculate loss for true label = 1
    loss_true_1 = [-np.log(p) for p in probs_0]
    plt.figure(figsize=(10, 6))
    plt.plot(probs_0, loss_true_0, label='True Label = 0')
    plt.plot(probs_0, loss_true_1, label='True Label = 1')
    plt.title('Cross Entropy Loss for Different True Labels')
    plt.xlabel('Predicted Probability for Class 1')
    plt.ylabel('Cross Entropy Loss')
    plt.legend()
    plt.grid(True)
    plt.show()
# Run examples
if __name__ == "__main__":
    print("PyTorch Example:")
    pt_model, pt_logits, pt_targets = pytorch_example()
    print("\nTensorFlow Example:")
    try:
        tf_model, tf_logits, tf_targets = tensorflow_implementation()
    except ImportError:
        print("TensorFlow not installed. Skipping TensorFlow example.")
    print("\nVisualizing Cross Entropy:")
    visualize_cross_entropy()

Code analysis:

I implemented cross entropy loss in both PyTorch and TensorFlow, showing the built-in functions as well as manual implementations. Let's look at the key parts:

  1. SimpleLanguageModel: a basic LSTM-based language model that predicts probabilities for the next token.
  2. Manual cross entropy implementation: shows how to compute cross entropy from first principles (a numerically more stable alternative is sketched after this list):
    • convert the logits to probabilities with softmax
    • extract the probability of the correct token
    • take the negative log of that probability
    • average over all tokens
  3. Visualization: the code includes plotting functions that show how the loss changes with different predicted probabilities.
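
One practical note on the manual implementations above: combining softmax, a small epsilon and log works, but a numerically safer pattern is to use log_softmax (or simply F.cross_entropy) so that probabilities are never materialized. A hedged sketch of that alternative:

import torch
import torch.nn.functional as F

def stable_cross_entropy_loss(logits, targets):
    # logits: [batch_size, seq_length, vocab_size], targets: [batch_size, seq_length]
    vocab_size = logits.size(-1)
    log_probs = F.log_softmax(logits.reshape(-1, vocab_size), dim=1)  # no epsilon needed
    idx = torch.arange(log_probs.size(0))
    nll = -log_probs[idx, targets.reshape(-1)]
    return nll.mean()

# This should match both manual_cross_entropy_loss and nn.CrossEntropyLoss above.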

Output:

PyTorch Example:
PyTorch CrossEntropyLoss: 9.2140
Manual CrossEntropyLoss: 9.2140

TensorFlow Example:
TensorFlow CrossEntropyLoss: 9.2103
Manual TF CrossEntropyLoss: 9.2103

[Figures: cross entropy loss versus predicted probability, as produced by visualize_cross_entropy() in the code above]

The visualizations illustrate how the loss increases sharply as predictions diverge from the true label, especially when the model is confidently wrong.

Advantages and Limitations

Advantages | Limitations
Differentiable and smooth, enabling gradient-based optimization | Can be numerically unstable for very small probabilities (requires an epsilon)
Handles probabilistic outputs naturally | May require label smoothing to prevent overconfidence (see the sketch after this table)
Well suited to multi-class problems | Can be dominated by frequent classes in imbalanced datasets
Solid information-theoretic foundation | Does not directly optimize for task-specific metrics such as BLEU or ROUGE
Computationally efficient | Treats tokens as independent, ignoring sequential dependencies
Penalizes confident but wrong predictions | Harder to interpret than metrics such as accuracy or perplexity
Can be decomposed per token for analysis | Does not account for semantic similarity between tokens
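
Regarding the label-smoothing limitation noted in the table: recent PyTorch versions expose label smoothing directly on the built-in loss, as in the sketch below (the value 0.1 is just an illustrative choice):

import torch.nn as nn

# Spread 10% of the probability mass over non-target tokens,
# which discourages the model from becoming overconfident
criterion = nn.CrossEntropyLoss(label_smoothing=0.1)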

Practical Applications

Cross entropy loss is used extensively with language models:

  1. Training foundation models: Cross entropy loss is the standard objective for pretraining large language models on massive text corpora.
  2. Fine-tuning: When adapting a pretrained model to a specific task, cross entropy remains the most common loss function.
  3. Sequence generation: Even at generation time, the loss minimized during training shapes the quality of the model's outputs.
  4. Model selection: When comparing architectures or hyperparameter settings, the loss on validation data is a key metric.
  5. Domain adaptation: Measuring how cross entropy changes across domains indicates how well a model generalizes.
  6. Knowledge distillation: Used to transfer knowledge from a larger "teacher" model to a smaller "student" model (a minimal sketch follows this list).
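
For the knowledge distillation use case in point 6, the student is typically trained with a soft cross entropy between temperature-scaled teacher and student distributions. A minimal sketch under those assumptions (the names student_logits, teacher_logits and T are ours, for illustration):

import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # Soft cross entropy between temperature-softened distributions,
    # scaled by T^2 as is conventional in distillation setups
    teacher_probs = F.softmax(teacher_logits / T, dim=-1)
    student_log_probs = F.log_softmax(student_logits / T, dim=-1)
    return -(teacher_probs * student_log_probs).sum(dim=-1).mean() * (T * T)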

Comparison with Other Metrics

While cross entropy loss is a fundamental metric, it is often used alongside other evaluation metrics:

  • Perplexity: the exponential of cross entropy; easier to interpret because it represents how "confused" the model is
  • BLEU/ROUGE: for generation tasks, these capture n-gram overlap with reference texts
  • Accuracy: the simple percentage of correct predictions; less informative than cross entropy
  • F1 score: balances precision and recall for classification tasks
  • KL divergence: measures how one probability distribution diverges from another (see the numerical check after this list)
  • Earth Mover’s Distance: accounts for semantic similarity between tokens, unlike cross entropy
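
The connection between the KL divergence bullet and cross entropy is worth spelling out: H(p, q) = H(p) + KL(p || q), so minimizing cross entropy against a fixed true distribution is equivalent to minimizing the KL divergence. A small numerical check:

import numpy as np

p = np.array([0.7, 0.2, 0.1])  # "true" distribution
q = np.array([0.5, 0.3, 0.2])  # "predicted" distribution

entropy_p = -np.sum(p * np.log(p))
cross_entropy = -np.sum(p * np.log(q))
kl_divergence = np.sum(p * np.log(p / q))

print(cross_entropy)              # ≈ 0.887
print(entropy_p + kl_divergence)  # same value: H(p, q) = H(p) + KL(p || q)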

[Figure: comparison chart of ML evaluation metrics]

Summary

Cross entropy loss is an indispensable tool for training and evaluating language models. Its theoretical grounding in information theory, combined with its practical advantages for optimization, makes it the standard choice for most NLP tasks.

Understanding cross entropy loss provides insight not only into how models are trained, but also into their fundamental limitations and the trade-offs involved in language modeling. As language models continue to evolve, cross entropy loss remains a cornerstone metric that helps researchers and practitioners measure progress and guide innovation.

Whether you are building your own language models or evaluating existing ones, a thorough understanding of cross entropy loss is essential for making informed decisions and interpreting results correctly.
