Improving Code Quality with LangGraph Reflection

LangGraph Reflection is an agentic framework that offers a powerful way to improve language model outputs through an iterative, generative-AI-driven critique process. This article shows how to implement a reflection agent that validates Python code with Pyright and uses GPT-4o mini to improve code quality. AI agents play a crucial role in this framework: by combining reasoning, reflection, and feedback mechanisms, they automate the decision-making process and improve model performance.

Learning Objectives

  • Understand how the LangGraph Reflection framework works.
  • Learn how to implement the framework to improve the quality of Python code.
  • Gain hands-on experience running the framework.

LangGraph Reflection Framework Architecture

The LangGraph Reflection framework uses a simple yet effective agent architecture:

  1. Main agent: generates the initial code based on the user's request.
  2. Critique agent: validates the generated code with Pyright.
  3. Reflection process: if errors are found, the main agent is invoked again to refine the code until no issues remain (see the conceptual sketch below).

(Figure: LangGraph Reflection framework architecture)
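Conceptually, the loop can be sketched as follows. This is illustrative pseudocode only, not the library's internals; main_agent and critique_agent are hypothetical callables standing in for the two agents described above.

def reflection_loop(user_request, main_agent, critique_agent, max_rounds=3):
    """Illustrative sketch of the reflection loop: draft, critique, retry."""
    messages = [{"role": "user", "content": user_request}]
    draft = None
    for _ in range(max_rounds):
        draft = main_agent(messages)         # main agent drafts code
        messages.append(draft)
        feedback = critique_agent(messages)  # critique agent validates (e.g. with Pyright)
        if feedback is None:                 # no issues found: stop iterating
            break
        messages.append(feedback)            # feed the critique back to the main agent
    return draft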

How to Implement the LangGraph Reflection Framework

Below is a step-by-step guide to an example implementation and its usage.

Step 1: Environment Setup

First, install the required dependencies:

pip install langgraph-reflection langchain pyright

Step 2: Code Analysis with Pyright

We will use Pyright to analyze the generated code and report error details.

Pyright analysis function

from typing import TypedDict, Annotated, Literal
import json
import os
import subprocess
import tempfile

from langchain.chat_models import init_chat_model
from langgraph.graph import StateGraph, MessagesState, START, END
from langgraph_reflection import create_reflection_graph

os.environ["OPENAI_API_KEY"] = "your_openai_api_key"


def analyze_with_pyright(code_string: str) -> dict:
    """Analyze Python code using Pyright for static type checking and errors.

    Args:
        code_string: The Python code to analyze as a string

    Returns:
        dict: The Pyright analysis results
    """
    with tempfile.NamedTemporaryFile(suffix=".py", mode="w", delete=False) as temp:
        temp.write(code_string)
        temp_path = temp.name

    try:
        result = subprocess.run(
            [
                "pyright",
                "--outputjson",
                "--level",
                "error",  # Only report errors, not warnings
                temp_path,
            ],
            capture_output=True,
            text=True,
        )

        try:
            return json.loads(result.stdout)
        except json.JSONDecodeError:
            return {
                "error": "Failed to parse Pyright output",
                "raw_output": result.stdout,
            }
    finally:
        os.unlink(temp_path)
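As a quick sanity check, you could call the helper directly on a deliberately broken snippet. This is a hypothetical usage example; it assumes the pyright CLI is available on your PATH. The summary and generalDiagnostics keys are the fields the judge node reads later.

if __name__ == "__main__":
    # Deliberately broken snippet: adding a str and an int is a type error.
    buggy_code = 'x: int = "1" + 2\n'
    report = analyze_with_pyright(buggy_code)
    print("Errors found:", report.get("summary", {}).get("errorCount"))
    for diag in report.get("generalDiagnostics", []):
        print("-", diag["message"])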

Step 3: Main Assistant Model for Code Generation

GPT-4o mini model setup

def call_model(state: dict) -> dict:
    """Process the user query with the GPT-4o mini model.

    Args:
        state: The current conversation state

    Returns:
        dict: Updated state with the model response
    """
    model = init_chat_model(model="gpt-4o-mini", openai_api_key="your_openai_api_key")
    return {"messages": model.invoke(state["messages"])}

Note: handle your API key securely. Load it from the environment rather than hardcoding os.environ["OPENAI_API_KEY"] = "YOUR_API_KEY" in your source, as sketched below.
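A minimal sketch of loading the key from the environment instead, assuming OPENAI_API_KEY has already been exported in your shell:

import os

from langchain.chat_models import init_chat_model

# Read the key from the environment rather than embedding it in source code.
api_key = os.environ.get("OPENAI_API_KEY")
if not api_key:
    raise RuntimeError("Set the OPENAI_API_KEY environment variable before running.")

model = init_chat_model(model="gpt-4o-mini", openai_api_key=api_key)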

Step 4: Code Extraction and Validation

Code extraction types

# Define type classes for code extraction
class ExtractPythonCode(TypedDict):
    """Type class for extracting Python code. The python_code field is the code to be extracted."""

    python_code: str


class NoCode(TypedDict):
    """Type class for indicating no code was found."""

    no_code: bool

System prompt for GPT-4o mini

# System prompt for the model
SYSTEM_PROMPT = """The below conversation is you conversing with a user to write some python code. Your final response is the last message in the list.
Sometimes you will respond with code, other times with a question.
If there is code - extract it into a single python script using ExtractPythonCode.
If there is no code to extract - call NoCode."""

Pyright code validation function

def try_running(state: dict) -> dict | None:
    """Attempt to run and analyze the extracted Python code.

    Args:
        state: The current conversation state

    Returns:
        dict | None: Updated state with analysis results if code was found
    """
    model = init_chat_model(model="gpt-4o-mini")
    extraction = model.bind_tools([ExtractPythonCode, NoCode])
    er = extraction.invoke(
        [{"role": "system", "content": SYSTEM_PROMPT}] + state["messages"]
    )
    if len(er.tool_calls) == 0:
        return None
    tc = er.tool_calls[0]
    if tc["name"] != "ExtractPythonCode":
        return None

    result = analyze_with_pyright(tc["args"]["python_code"])
    print(result)
    explanation = result["generalDiagnostics"]

    if result["summary"]["errorCount"]:
        return {
            "messages": [
                {
                    "role": "user",
                    "content": f"I ran pyright and found this: {explanation}\n\n"
                    "Try to fix it. Make sure to regenerate the entire code snippet. "
                    "If you are not sure what is wrong, or think there is a mistake, "
                    "you can ask me a question rather than generating code",
                }
            ]
        }

Step 5: Build the Reflection Graph

Create the assistant and judge graphs

def create_graphs():
    """Create and configure the assistant and judge graphs."""
    # Define the main assistant graph
    assistant_graph = (
        StateGraph(MessagesState)
        .add_node(call_model)
        .add_edge(START, "call_model")
        .add_edge("call_model", END)
        .compile()
    )

    # Define the judge graph for code analysis
    judge_graph = (
        StateGraph(MessagesState)
        .add_node(try_running)
        .add_edge(START, "try_running")
        .add_edge("try_running", END)
        .compile()
    )

    # Create the complete reflection graph
    return create_reflection_graph(assistant_graph, judge_graph).compile()


reflection_app = create_graphs()

Step 6: Run the Application

Example run

if __name__ == "__main__":
    """Run an example query through the reflection system."""
    example_query = [
        {
            "role": "user",
            "content": "Write a LangGraph RAG app",
        }
    ]

    print("Running example with reflection using GPT-4o mini...")
    result = reflection_app.invoke({"messages": example_query})
    print("Result:", result)

Output Analysis


What Happened in the Example?

Our LangGraph Reflection system is designed to:

  1. Take an initial code snippet.
  2. Run Pyright (a static type checker for Python) to detect errors.
  3. Use the GPT-4o mini model to analyze and understand the errors and generate suggestions for improved code.

Iteration 1 – Errors Identified

1. Import "faiss" could not be resolved.

  • Explanation: this error occurs when the faiss library is not installed or the Python environment cannot resolve the import.
  • Fix: the agent recommended running:
pip install faiss-cpu

2. Cannot access attribute "embed" on class "OpenAIEmbeddings".

  • Explanation: the code referenced .embed, but in newer versions of langchain the embedding methods are .embed_documents() and .embed_query().
  • Fix: the agent correctly replaced .embed with .embed_query (see the sketch below).
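A minimal before-and-after sketch of this fix; the variable names are illustrative, and it assumes the classic langchain OpenAIEmbeddings class with a valid API key configured.

from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()

# Before (Pyright error: OpenAIEmbeddings has no attribute "embed"):
# vector = embeddings.embed("What is LangGraph?")

# After (the agent's fix): embed a single query string.
vector = embeddings.embed_query("What is LangGraph?")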

3. Missing arguments "docstore" and "index_to_docstore_id".

  • Explanation: the FAISS vector store now requires a docstore object and an index_to_docstore_id mapping.
  • Fix: the agent added both arguments by creating an InMemoryDocstore and a dictionary mapping.

Iteration 2 – Progress

In the second iteration, the system improved the code but still found the following issues:

1. Import "langchain.document" could not be resolved.

  • Explanation: the code tried to import Document from the wrong module.
  • Fix: the agent updated the import so that Document is loaded from the docstore package (see the sketch below).
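For reference, the corrected import might look like the line below; the exact location varies across langchain versions, and in some releases Document is re-exported from langchain.schema instead.

# Document lives under the docstore package in classic langchain releases.
from langchain.docstore.document import Document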

2. "InMemoryDocstore" is not defined.

  • Explanation: the import for InMemoryDocstore was missing.
  • Fix: the agent correctly added:
from langchain.docstore import InMemoryDocstore

Iteration 3 – Final Solution

In the final iteration, the reflection agent successfully resolved all issues by:

  • Importing faiss correctly.
  • Switching the embedding call from .embed to .embed_query.
  • Adding a valid InMemoryDocstore for document management.
  • Creating a correct index_to_docstore_id mapping.
  • Accessing document content via .page_content instead of treating a Document as a plain string.

The improved code then ran successfully with no errors; the corrected pieces are consolidated in the sketch below.
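Putting the fixes together, the corrected vector-store setup might look roughly like the following. This is an illustrative sketch assuming the classic langchain FAISS API, faiss-cpu installed, and an OPENAI_API_KEY in the environment; the sample document and the index dimensionality are placeholders, not part of the original article.

import faiss
from langchain.docstore import InMemoryDocstore
from langchain.docstore.document import Document
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

embeddings = OpenAIEmbeddings()

# Index dimensionality must match the embedding model (1536 for text-embedding-ada-002).
index = faiss.IndexFlatL2(1536)

vector_store = FAISS(
    embedding_function=embeddings.embed_query,  # .embed_query, not .embed
    index=index,
    docstore=InMemoryDocstore({}),              # previously missing argument
    index_to_docstore_id={},                    # previously missing argument
)

vector_store.add_documents(
    [Document(page_content="LangGraph lets you build agent workflows as graphs.")]
)

# Read results via .page_content instead of treating a Document as a string.
for doc in vector_store.similarity_search("What is LangGraph?", k=1):
    print(doc.page_content)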

Why This Matters

  • Automatic error detection: the LangGraph Reflection framework streamlines debugging by using Pyright to analyze code errors and generate actionable insights.
  • Iterative refinement: the framework keeps improving the code until the errors are resolved, mirroring how a developer would debug and refine code by hand.
  • Adaptive learning: the system adapts to changing code structures, such as updated library syntax or version differences.

Summary

The LangGraph Reflection framework demonstrates the power of combining AI critique agents with robust static analysis tools. This intelligent feedback loop enables faster code correction, better coding practices, and greater overall development efficiency. For beginners and experienced developers alike, LangGraph Reflection is a powerful tool for improving code quality.

Key Takeaways

  • By combining LangChain, Pyright, and GPT-4o mini within the LangGraph Reflection framework, this solution provides an effective way to validate code automatically.
  • The framework helps LLMs iteratively generate improved solutions, ensuring higher-quality output through a reflect-and-critique loop.
  • This approach makes AI-generated code more robust and improves its performance in real-world scenarios.
