How to Evaluate LLMs with Hugging Face Evaluate

Evaluating large language models (LLMs) is essential: you need to understand how they perform and make sure they meet your standards. The Hugging Face Evaluate library provides a useful set of tools for this task. This guide walks you through using the Evaluate library to assess LLMs, with practical code examples.

Understanding the Hugging Face Evaluate Library

The Hugging Face Evaluate library provides tools for different evaluation needs, grouped into three categories:

  1. Metrics: measure a model's performance by comparing its predictions against ground-truth labels. Examples include accuracy, F1, BLEU, and ROUGE.
  2. Comparisons: help compare two models, typically by examining how their predictions agree with each other or with reference labels.
  3. Measurements: investigate properties of the dataset itself, such as text complexity or label distribution.

You can access all of these evaluation modules through a single function: evaluate.load().
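
The snippet below is a minimal sketch of loading one module of each type. The module names used here (mcnemar for a comparison, word_length for a measurement) and the evaluate.list_evaluation_modules() helper come from the library's module listing, so check availability in your installed version.

import evaluate

# Metrics are the default module type
bleu = evaluate.load("bleu")

# Comparisons and measurements are selected with module_type
mcnemar = evaluate.load("mcnemar", module_type="comparison")
word_length = evaluate.load("word_length", module_type="measurement")

# List the names of the available modules of a given type
print(evaluate.list_evaluation_modules(module_type="comparison"))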

Getting Started

Installation

First, install the library. Open a terminal or command prompt and run:

pip install evaluate
pip install rouge_score # Needed for text generation metrics
pip install evaluate[visualization] # For plotting capabilities

These commands install the core evaluate library, the rouge_score package (required by the ROUGE metric commonly used for summarization), and the optional dependencies for visualizations such as radar plots.

Loading an Evaluation Module

To use a specific evaluation tool, load it by name. For example, to load the accuracy metric:

import evaluate
accuracy_metric = evaluate.load("accuracy")
print("Accuracy metric loaded.")

Output:

Accuracy metric loaded.

This code imports the evaluate library and loads the accuracy metric object, which you will use to compute accuracy scores.

Basic Evaluation Examples

Let's walk through some common evaluation scenarios.

Computing Accuracy Directly

You can compute a metric by providing all references (ground truth) and predictions at once.

import evaluate
# Load the accuracy metric
accuracy_metric = evaluate.load("accuracy")
# Sample ground truth and predictions
references = [0, 1, 0, 1]
predictions = [1, 0, 0, 1]
# Compute accuracy
result = accuracy_metric.compute(references=references, predictions=predictions)
print(f"Direct computation result: {result}")
# Example with exact_match metric
exact_match_metric = evaluate.load('exact_match')
match_result = exact_match_metric.compute(references=['hello world'], predictions=['hello world'])
no_match_result = exact_match_metric.compute(references=['hello'], predictions=['hell'])
print(f"Exact match result (match): {match_result}")
print(f"Exact match result (no match): {no_match_result}")


Explanation:

  1. We define two lists: references holds the correct labels and predictions holds the model's outputs.
  2. The compute method calculates accuracy from these lists and returns the result as a dictionary.
  3. We also demonstrate the exact_match metric, which checks whether a prediction matches the reference exactly (a small variation is sketched below).
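
As a small variation on the example above, the exact_match metric also accepts optional normalization flags; the sketch below assumes the ignore_case and ignore_punctuation parameters described in the metric's documentation.

import evaluate

exact_match_metric = evaluate.load("exact_match")

# Without normalization, case and punctuation differences count as mismatches
strict = exact_match_metric.compute(references=["Hello world!"], predictions=["hello world"])

# With the (assumed) normalization flags, the same pair can match
relaxed = exact_match_metric.compute(
    references=["Hello world!"],
    predictions=["hello world"],
    ignore_case=True,
    ignore_punctuation=True,
)
print(f"Strict: {strict}, Relaxed: {relaxed}")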

Incremental Evaluation (Using add_batch)

For large datasets, processing predictions in batches is more memory-efficient. You can add batches incrementally and compute the final score at the end.

import evaluate
# Load the accuracy metric
accuracy_metric = evaluate.load("accuracy")
# Sample batches of references and predictions
references_batch1 = [0, 1]
predictions_batch1 = [1, 0]
references_batch2 = [0, 1]
predictions_batch2 = [0, 1]
# Add batches incrementally
accuracy_metric.add_batch(references=references_batch1, predictions=predictions_batch1)
accuracy_metric.add_batch(references=references_batch2, predictions=predictions_batch2)
# Compute final accuracy
final_result = accuracy_metric.compute()
print(f"Incremental computation result: {final_result}")


Explanation:

  1. We simulate processing the data in two batches.
  2. add_batch updates the metric's internal state with each batch.
  3. Calling compute() with no arguments computes the metric over all added batches (an item-by-item variant is sketched below).
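
If predictions arrive one example at a time (for instance while looping over a dataloader), the add method plays the same role as add_batch for single items. This is a minimal sketch assuming the add(reference=..., prediction=...) signature from the library documentation.

import evaluate

accuracy_metric = evaluate.load("accuracy")

# Pretend these (reference, prediction) pairs arrive one at a time from a model loop
stream = [(0, 0), (1, 1), (1, 0), (0, 0)]

for reference, prediction in stream:
    accuracy_metric.add(reference=reference, prediction=prediction)

# Compute over everything added so far
print(f"Streaming accuracy: {accuracy_metric.compute()}")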

Combining Multiple Metrics

You often need several metrics at once (for example accuracy, F1, precision, and recall for classification). The evaluate.combine function simplifies this.

import evaluate
# Combine multiple classification metrics
clf_metrics = evaluate.combine(["accuracy", "f1", "precision", "recall"])
# Sample data
predictions = [0, 1, 0]
references = [0, 1, 1] # Note: The last prediction is incorrect
# Compute all metrics at once
results = clf_metrics.compute(predictions=predictions, references=references)
print(f"Combined metrics result: {results}")


Explanation:

  1. evaluate.combine takes a list of metric names and returns a combined evaluation object.
  2. Calling compute on that object computes all of the specified metrics on the same input data.

Using Measurements

Measurements can be used to analyze datasets. Here is how to use the word_length measurement:

import evaluate
# Load the word_length measurement
# Note: May require NLTK data download on first run
try:
   word_length = evaluate.load("word_length", module_type="measurement")
   data = ["hello world", "this is another sentence"]
   results = word_length.compute(data=data)
   print(f"Word length measurement result: {results}")
except Exception as e:
    print(f"Could not run word_length measurement, possibly NLTK data missing: {e}")
    print("Attempting NLTK download...")
    import nltk
    nltk.download('punkt')  # Download the 'punkt' tokenizer data required by word_length


Explanation:

  1. We load word_length with module_type="measurement".
  2. The compute method takes a dataset (here, a list of strings) as input.
  3. It returns statistics about the word lengths in the provided data. (Note: this requires nltk and its "punkt" tokenizer data.) Other dataset measurements work the same way, as sketched below.
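
As another illustration, the sketch below loads the label_distribution measurement to summarize a column of classification labels. The module name comes from the library's measurement listing and the exact output keys are an assumption, so we simply print the full result.

import evaluate

# Load a measurement that summarizes how labels are distributed in a dataset
label_dist = evaluate.load("label_distribution", module_type="measurement")

labels = [0, 1, 1, 0, 1, 1]  # e.g. a column of classification labels
results = label_dist.compute(data=labels)
print(f"Label distribution result: {results}")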

Evaluating Specific NLP Tasks

Different NLP tasks call for specific metrics. Hugging Face Evaluate includes many of the standard ones.

Machine Translation (BLEU)

BLEU (Bilingual Evaluation Understudy) is a common metric for translation quality. It measures the n-gram overlap between the model's translations (hypotheses) and reference translations.

import evaluate
def evaluate_machine_translation(hypotheses, references):
   """Calculates BLEU score for machine translation."""
   bleu_metric = evaluate.load("bleu")
   results = bleu_metric.compute(predictions=hypotheses, references=references)
   # Extract the main BLEU score
   bleu_score = results["bleu"]
   return bleu_score
# Example hypotheses (model translations)
hypotheses = ["the cat sat on mat.", "the dog played in garden."]
# Example references (correct translations, can have multiple per hypothesis)
references = [["the cat sat on the mat."], ["the dog played in the garden."]]
bleu_score = evaluate_machine_translation(hypotheses, references)
print(f"BLEU Score: {bleu_score:.4f}") # Format for readability


Explanation:

  1. The function loads the BLEU metric.
  2. It computes the score by comparing the predicted translations (hypotheses) against one or more correct references.
  3. A higher BLEU score (closer to 1.0) generally indicates better translation quality and greater overlap with the references; a score around 0.51 indicates moderate overlap. The full result dictionary contains more than the headline score, as sketched below.
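
Beyond the single "bleu" value extracted above, the metric's result dictionary also reports the per-order n-gram precisions and the brevity penalty. The sketch below simply prints the whole dictionary; the field names in the comment reflect the bleu metric's documented output, so verify them against your installed version.

import evaluate

bleu_metric = evaluate.load("bleu")
hypotheses = ["the cat sat on mat.", "the dog played in garden."]
references = [["the cat sat on the mat."], ["the dog played in the garden."]]

results = bleu_metric.compute(predictions=hypotheses, references=references)

# Inspect the full breakdown rather than only results["bleu"]
for key, value in results.items():
    print(f"{key}: {value}")
# Documented keys include 'bleu', 'precisions', 'brevity_penalty',
# 'length_ratio', 'translation_length', and 'reference_length'.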

Named Entity Recognition (NER, Using seqeval)

For sequence-labeling tasks such as NER, metrics like precision, recall, and per-entity-type F1 are useful. The seqeval metric handles this format (e.g. B-PER, I-PER, O tags).

The following code requires the seqeval library. Install it with:

pip install seqeval

Code:

import evaluate
# Load the seqeval metric
try:
   seqeval_metric = evaluate.load("seqeval")
   # Example labels (using IOB format)
   true_labels = [['O', 'B-PER', 'I-PER', 'O'], ['B-LOC', 'I-LOC', 'O']]
   predicted_labels = [['O', 'B-PER', 'I-PER', 'O'], ['B-LOC', 'I-LOC', 'O']] # Example: Perfect prediction here
   results = seqeval_metric.compute(predictions=predicted_labels, references=true_labels)
   print("Seqeval Results (per entity type):")
   # Print results nicely
   for key, value in results.items():
       if isinstance(value, dict):
           print(f"  {key}: Precision={value['precision']:.2f}, Recall={value['recall']:.2f}, F1={value['f1']:.2f}, Number={value['number']}")
       else:
           print(f"  {key}: {value:.4f}")
except ModuleNotFoundError:
   print("Seqeval metric not installed. Run: pip install seqeval")


Explanation:

  • We load the seqeval metric.
  • It takes lists of lists, where each inner list holds the labels for one sentence.
  • The compute method returns detailed precision, recall, and F1 scores for each recognized entity type (e.g. PER for person, LOC for location), along with overall scores.

Text Summarization (ROUGE)

ROUGE (Recall-Oriented Understudy for Gisting Evaluation) compares a generated summary against reference summaries, focusing on overlapping n-grams and the longest common subsequence.

import evaluate
def simple_summarizer(text):
   """A very basic summarizer - just takes the first sentence."""
   try:
       sentences = text.split(".")
       return sentences[0].strip() + "." if sentences[0].strip() else ""
   except:
       return "" # Handle empty or malformed text
# Load ROUGE metric
rouge_metric = evaluate.load("rouge")
# Example text and reference summary
text = "Today is a beautiful day. The sun is shining and the birds are singing. I am going for a walk in the park."
reference = "The weather is pleasant today."
# Generate summary using the simple function
prediction = simple_summarizer(text)
print(f"Generated Summary: {prediction}")
print(f"Reference Summary: {reference}")
# Compute ROUGE scores
rouge_results = rouge_metric.compute(predictions=[prediction], references=[reference])
print(f"ROUGE Scores: {rouge_results}")

Output:

Generated Summary: Today is a beautiful day.
Reference Summary: The weather is pleasant today.
ROUGE Scores: {'rouge1': np.float64(0.4000000000000001), 'rouge2': np.float64(0.0), 'rougeL': np.float64(0.20000000000000004), 'rougeLsum': np.float64(0.20000000000000004)}

Explanation:

  1. We load the rouge metric.
  2. We define a simple first-sentence summarizer for demonstration.
  3. compute calculates the different ROUGE scores: rouge1, rouge2, rougeL, and rougeLsum.
  4. Scores closer to 1.0 indicate higher similarity to the reference summary. The low scores here reflect the basic nature of our simple_summarizer. (Per-sample scores can also be requested, as sketched below.)
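
When evaluating many summaries, per-sample scores are often more informative than a single aggregate. The sketch below assumes the rouge metric's use_aggregator and use_stemmer options behave as documented: disabling aggregation returns one score per prediction/reference pair.

import evaluate

rouge_metric = evaluate.load("rouge")

predictions = ["Today is a beautiful day.", "The dog sleeps."]
references = ["The weather is pleasant today.", "A dog is sleeping."]

# use_aggregator=False returns one score per pair instead of an average;
# use_stemmer=True applies stemming before matching n-grams.
per_sample = rouge_metric.compute(
    predictions=predictions,
    references=references,
    use_aggregator=False,
    use_stemmer=True,
)
print(f"Per-sample ROUGE-1 scores: {per_sample['rouge1']}")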

Question Answering (SQuAD)

The SQuAD metric is used for extractive question-answering benchmarks. It computes exact match (EM) and F1 scores.

import evaluate
# Load the SQuAD metric
squad_metric = evaluate.load("squad")
# Example predictions and references format for SQuAD
predictions = [{'prediction_text': '1976', 'id': '56e10a3be3433e1400422b22'}]
references = [{'answers': {'answer_start': [97], 'text': ['1976']}, 'id': '56e10a3be3433e1400422b22'}]
results = squad_metric.compute(predictions=predictions, references=references)
print(f"SQuAD Results: {results}")


Explanation:

  1. We load the squad metric.
  2. It takes predictions and references in a specific dictionary format that includes the predicted text and the ground-truth answers with their start positions.
  3. exact_match: the percentage of predictions that match one of the ground-truth answers exactly.
  4. f1: the average F1 score over all questions, accounting for partial token-level overlap. (Multiple gold answers per question are supported, as sketched below.)
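
SQuAD-style references may list several acceptable answers for the same question, and the metric scores each prediction against the best-matching one. The sketch below reuses the format shown above; the id, offsets, and the extra gold answer are made up purely for illustration.

import evaluate

squad_metric = evaluate.load("squad")

# Two acceptable gold answers for the same (hypothetical) question id
predictions = [{'prediction_text': 'in 1976', 'id': 'q1'}]
references = [{
    'answers': {'answer_start': [94, 97], 'text': ['in 1976', '1976']},
    'id': 'q1',
}]

results = squad_metric.compute(predictions=predictions, references=references)
print(f"SQuAD results with multiple gold answers: {results}")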

Advanced Evaluation with the Evaluator Class

The Evaluator class streamlines the process by tying together model loading, inference, and metric computation. It is especially useful for standard tasks such as text classification.

# Note: Requires transformers and datasets libraries
# pip install transformers datasets torch # or tensorflow/jax
import evaluate
from evaluate import evaluator
from transformers import pipeline
from datasets import load_dataset
# Load a pre-trained text classification pipeline
# Using a smaller model for potentially faster execution
try:
   pipe = pipeline("text-classification", model="distilbert-base-uncased-finetuned-sst-2-english", device=-1) # Use CPU
except Exception as e:
   print(f"Could not load pipeline: {e}")
   pipe = None
if pipe:
   # Load a small subset of the IMDB dataset
   try:
       data = load_dataset("imdb", split="test").shuffle(seed=42).select(range(100)) # Smaller subset for speed
   except Exception as e:
       print(f"Could not load dataset: {e}")
       data = None
   if data:
       # Load the accuracy metric
       accuracy_metric = evaluate.load("accuracy")
       # Create an evaluator for the task
       task_evaluator = evaluator("text-classification")
       # Correct label_mapping for IMDB dataset
       label_mapping = {
           'NEGATIVE': 0,  # Map NEGATIVE to 0
           'POSITIVE': 1   # Map POSITIVE to 1
       }
       # Compute results
       eval_results = task_evaluator.compute(
           model_or_pipeline=pipe,
           data=data,
           metric=accuracy_metric,
           input_column="text",  # Specify the text column
           label_column="label", # Specify the label column
           label_mapping=label_mapping  # Pass the corrected label mapping
       )
       print("\nEvaluator Results:")
       print(eval_results)
       # Compute with bootstrapping for confidence intervals
       bootstrap_results = task_evaluator.compute(
           model_or_pipeline=pipe,
           data=data,
           metric=accuracy_metric,
           input_column="text",
           label_column="label",
           label_mapping=label_mapping,  # Pass the corrected label mapping
           strategy="bootstrap",
           n_resamples=10  # Use fewer resamples for faster demo
       )
       print("\nEvaluator Results with Bootstrapping:")
       print(bootstrap_results)

Output:

Device set to use cpu
Evaluator Results:
{'accuracy': 0.9, 'total_time_in_seconds': 24.277618517999997, 'samples_per_second': 4.119020155368932, 'latency_in_seconds': 0.24277618517999996}
Evaluator Results with Bootstrapping:
{'accuracy': {'confidence_interval': (np.float64(0.8703044820750653), np.float64(0.9335706530476571)), 'standard_error': np.float64(0.02412928142780514), 'score': 0.9}, 'total_time_in_seconds': 23.871316319000016, 'samples_per_second': 4.189128017226537, 'latency_in_seconds': 0.23871316319000013}

Explanation:

  1. We load a transformers pipeline for text classification and a sample of the IMDb dataset.
  2. We create an evaluator for the "text-classification" task.
  3. The compute method handles feeding the data (text column) to the pipeline, collecting predictions, comparing them to the true labels (label column) with the specified metric, and applying the label mapping.
  4. It returns the metric score along with performance statistics such as total time and samples per second.
  5. Using strategy="bootstrap" performs resampling to estimate a confidence interval and standard error for the metric, giving a sense of how stable the score is. (The same evaluator can be reused to compare several models, as sketched below.)
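
Because the evaluator separates the task definition from the model, one evaluator can be reused to benchmark several checkpoints on the same data. The loop below only reuses APIs already shown above; the second model name is a hypothetical placeholder, so substitute any sentiment model that outputs NEGATIVE/POSITIVE labels.

import evaluate
from evaluate import evaluator
from transformers import pipeline
from datasets import load_dataset

data = load_dataset("imdb", split="test").shuffle(seed=42).select(range(100))
task_evaluator = evaluator("text-classification")
label_mapping = {"NEGATIVE": 0, "POSITIVE": 1}

model_names = [
    "distilbert-base-uncased-finetuned-sst-2-english",
    "my-org/my-finetuned-sentiment-model",  # hypothetical checkpoint, replace with a real one
]

all_results = {}
for name in model_names:
    pipe = pipeline("text-classification", model=name, device=-1)  # CPU
    all_results[name] = task_evaluator.compute(
        model_or_pipeline=pipe,
        data=data,
        metric=evaluate.load("accuracy"),
        input_column="text",
        label_column="label",
        label_mapping=label_mapping,
    )

for name, result in all_results.items():
    print(f"{name}: accuracy={result['accuracy']:.3f}")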

Using Evaluation Suites

Evaluation suites bundle multiple evaluations, often targeting a specific benchmark such as GLUE. This lets you run a model against a standard set of tasks.

# Note: Running a full suite can be computationally intensive and time-consuming.
# This example demonstrates the concept but might take a long time or require significant resources.
# It also installs multiple datasets and may require specific model configurations.
import evaluate
try:
   print("\nLoading GLUE evaluation suite (this might download datasets)...")
   # Load the GLUE task directly
   # Using "mrpc" as an example task, but you can choose from the valid ones listed above
   task = evaluate.load("glue", "mrpc")  # Specify the task like "mrpc", "sst2", etc.
   print("Task loaded.")
   # You can now run the task on a model (for example: "distilbert-base-uncased")
   # WARNING: This might take time for inference or fine-tuning.
   # results = task.compute(model_or_pipeline="distilbert-base-uncased")
   # print("\nEvaluation Results (MRPC Task):")
   # print(results)
   print("Skipping model inference for brevity in this example.")
   print("Refer to Hugging Face documentation for full EvaluationSuite usage.")
except Exception as e:
   print(f"Could not load or run evaluation suite: {e}")

Output:

Loading GLUE evaluation suite (this might download datasets)...
Task loaded.
Skipping model inference for brevity in this example.
Refer to Hugging Face documentation for full EvaluationSuite usage.

Explanation:

  1. EvaluationSuite.load loads a predefined set of evaluation tasks (the code above only demonstrates the MRPC task from the GLUE benchmark, loaded as a metric via evaluate.load).
  2. suite.run("model_name") then runs the model on every dataset in the suite and computes the associated metrics.
  3. The output is typically a list of dictionaries, one per task in the suite. (Note: running a full suite usually requires a specific environment setup and substantial compute time; a minimal sketch follows below.)
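
For reference, here is a minimal sketch of the EvaluationSuite workflow itself. The suite name below is taken from the Hugging Face Evaluate documentation examples and is an assumption; substitute any suite published on the Hub, and expect downloads and model inference when you run it.

# Minimal EvaluationSuite sketch (not executed above to keep the example light)
from evaluate import EvaluationSuite

# Suite name assumed from the library's documentation examples
suite = EvaluationSuite.load("mathemakitten/sentiment-evaluation-suite")

# Runs the model or pipeline on every task defined in the suite
results = suite.run("distilbert-base-uncased-finetuned-sst-2-english")
print(results)  # typically a list of per-task result dictionaries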

Visualizing Evaluation Results

Visualization helps when comparing multiple models across several metrics. Radar plots work well for this.

import evaluate
import matplotlib.pyplot as plt # Ensure matplotlib is installed
from evaluate.visualization import radar_plot
# Sample data for multiple models across several metrics
# Lower latency is better, so we might invert it or consider it separately.
data = [
   {"accuracy": 0.99, "precision": 0.80, "f1": 0.95, "latency_inv": 1/33.6},
   {"accuracy": 0.98, "precision": 0.87, "f1": 0.91, "latency_inv": 1/11.2},
   {"accuracy": 0.98, "precision": 0.78, "f1": 0.88, "latency_inv": 1/87.6},
   {"accuracy": 0.88, "precision": 0.78, "f1": 0.81, "latency_inv": 1/101.6}
]
model_names = ["Model A", "Model B", "Model C", "Model D"]
# Generate the radar plot
# Higher values are generally better on a radar plot
try:
   # Generate radar plot (ensure you pass a correct format and that data is valid)
   plot = radar_plot(data=data, model_names=model_names)
   # Display the plot
   plt.show()  # Explicitly show the plot, might be necessary in some environments
   # To save the plot to a file (uncomment to use)
   # plot.savefig("model_comparison_radar.png")
   plt.close() # Close the plot window after showing/saving
except ImportError:
   print("Visualization requires matplotlib. Run: pip install matplotlib")
except Exception as e:
   print(f"Could not generate plot: {e}")

Output:

(A radar plot comparing Model A through Model D across accuracy, precision, F1, and inverse latency.)

Explanation:

  1. We prepare sample results for four models across accuracy, precision, F1, and inverted latency (so that higher is better on every axis).
  2. radar_plot creates a figure with one axis per metric, visually showing how the models compare.

Saving Evaluation Results

You can save evaluation results to a file (typically JSON) for record keeping or later analysis.

import evaluate
from pathlib import Path
# Perform an evaluation
accuracy_metric = evaluate.load("accuracy")
result = accuracy_metric.compute(references=[0, 1, 0, 1], predictions=[1, 0, 0, 1])
print(f"Result to save: {result}")
# Define hyperparameters or other metadata
hyperparams = {"model_name": "my_custom_model", "learning_rate": 0.001}
run_details = {"experiment_id": "run_42"}
# Combine results and metadata
save_data = {**result, **hyperparams, **run_details}
# Define save directory and filename
save_dir = Path("./evaluation_results")
save_dir.mkdir(exist_ok=True) # Create directory if it doesn't exist
# Use evaluate.save to store the results
# Note: evaluate.save expects the target path as its first positional argument
# (path_or_file); the keyword-only call below raises a TypeError, which the
# except branch catches to demonstrate the manual JSON fallback.
try:
   saved_path = evaluate.save(save_directory=save_dir, **save_data)
   print(f"Results saved to: {saved_path}")
   # You can also manually save as JSON
   import json
   manual_save_path = save_dir / "manual_results.json"
   with open(manual_save_path, 'w') as f:
       json.dump(save_data, f, indent=4)
   print(f"Results manually saved to: {manual_save_path}")
except Exception as e:
    # Catch potential git-related errors if run outside a repo
    print(f"evaluate.save encountered an issue (possibly git related): {e}")
    print("Attempting manual JSON save instead.")
    import json
    manual_save_path = save_dir / "manual_results_fallback.json"
    with open(manual_save_path, 'w') as f:
        json.dump(save_data, f, indent=4)
    print(f"Results manually saved to: {manual_save_path}")

Output:

Result to save: {'accuracy': 0.5}
evaluate.save encountered an issue (possibly git related): save() missing 1 required positional argument: 'path_or_file'
Attempting manual JSON save instead.
Results manually saved to: evaluation_results/manual_results_fallback.json

Explanation:

  1. We combine the computed result dictionary with other metadata such as hyperparameters.
  2. evaluate.save tries to store this data as a JSON file. It expects the target path as its first argument (path_or_file), so the keyword-only call above raises the error shown in the output; when it runs inside a git repository, it may also record commit information.
  3. As a fallback, we save the dictionary manually as a JSON file, which is usually sufficient.

Choosing the Right Metric

Choosing an appropriate metric is crucial. Consider the following:

  1. Task type: Is it classification, translation, summarization, NER, or QA? Use the standard metrics for that task (Accuracy/F1 for classification, BLEU/ROUGE for generation, seqeval for NER, SQuAD for QA).
  2. Dataset: Some benchmarks (e.g. GLUE, SQuAD) have specific associated metrics. Leaderboards (for example, Papers With Code) often show the common metrics for a given dataset.
  3. Goal: Which aspect of performance matters most?
    • Accuracy: overall correctness (good for balanced classes).
    • Precision/Recall/F1: important for imbalanced classes, or when false positives and false negatives carry different costs.
    • BLEU/ROUGE: fluency and content overlap in text generation.
    • Perplexity: how well a language model predicts a sample (lower is better; common for generative models; see the sketch after this list).
  4. Metric cards: Read the Hugging Face metric cards (documentation) for detailed explanations, limitations, and appropriate use cases (such as the BLEU and SQuAD cards).
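
Perplexity follows the same load/compute pattern as the other metrics. The sketch below assumes the perplexity module's documented interface, which loads a causal language model (here gpt2) under the hood and therefore needs transformers and torch installed; the output keys are taken from the metric's documentation.

import evaluate

# Perplexity loads a causal LM and scores how well it predicts the inputs
perplexity = evaluate.load("perplexity", module_type="metric")

input_texts = [
    "The quick brown fox jumps over the lazy dog.",
    "Hugging Face Evaluate makes model evaluation easier.",
]

results = perplexity.compute(model_id="gpt2", predictions=input_texts)
print(f"Mean perplexity: {results['mean_perplexity']:.2f}")
print(f"Per-text perplexities: {results['perplexities']}")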

Summary

The Hugging Face Evaluate library offers a versatile, user-friendly way to assess large language models and datasets. It provides standard metrics, dataset measurements, and tools such as Evaluator and EvaluationSuite to streamline the process. By using these tools and choosing metrics appropriate to your task, you can get a clear picture of your model's strengths and weaknesses.

For more details and advanced usage, consult the official Hugging Face Evaluate documentation.
