Improving Code Quality with LangGraph Reflection


LangGraph Reflection is an agentic framework that offers a powerful way to improve language model outputs through an iterative critique process driven by generative AI. This article shows how to implement a reflection agent that validates Python code with Pyright and uses GPT-4o mini to improve code quality. AI agents play a crucial role in this framework: by combining reasoning, reflection, and feedback mechanisms, they automate the decision-making process and raise model performance.

Learning Objectives

  • Understand how the LangGraph Reflection framework works.
  • Learn how to implement the framework to improve the quality of Python code.
  • Gain hands-on experience with the framework in action.

LangGraph Reflection Framework Architecture

The LangGraph Reflection framework follows a simple yet effective agent architecture:

  1. Main agent: generates the initial code from the user's request.
  2. Critique agent: validates the generated code using Pyright.
  3. Reflection process: if errors are found, the main agent is called again to refine the code until no issues remain (a conceptual sketch of this loop follows the architecture figure).

(Figure: LangGraph Reflection framework architecture)
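The control flow boils down to a generate-critique loop. The sketch below is purely conceptual, using hypothetical placeholder names (main_agent, critique_agent, feedback_prompt) rather than actual library APIs; the real graph wiring is built in Step 5.

# Conceptual sketch of the reflection loop -- placeholder names, not library APIs
def reflect(request):
    messages = [request]
    while True:
        code = main_agent.generate(messages)       # 1. the main agent drafts code
        issues = critique_agent.check(code)        # 2. the critique agent runs Pyright
        if not issues:                             # 3. no errors left -> accept the code
            return code
        messages.append(feedback_prompt(issues))   # otherwise, feed the errors back in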

How to Implement the LangGraph Reflection Framework

Below is a step-by-step guide to an example implementation and its usage.

Step 1: Environment Setup

First, install the required dependencies:

pip install langgraph-reflection langchain pyright

Step 2: Code Analysis with Pyright

We will use Pyright to analyze the generated code and return detailed error information.

Pyright analysis function

from typing import TypedDict, Annotated, Literal
import json
import os
import subprocess
import tempfile
from langchain.chat_models import init_chat_model
from langgraph.graph import StateGraph, MessagesState, START, END
from langgraph_reflection import create_reflection_graph

os.environ["OPENAI_API_KEY"] = "your_openai_api_key"


def analyze_with_pyright(code_string: str) -> dict:
    """Analyze Python code using Pyright for static type checking and errors.

    Args:
        code_string: The Python code to analyze as a string

    Returns:
        dict: The Pyright analysis results
    """
    with tempfile.NamedTemporaryFile(suffix=".py", mode="w", delete=False) as temp:
        temp.write(code_string)
        temp_path = temp.name

    try:
        result = subprocess.run(
            [
                "pyright",
                "--outputjson",
                "--level",
                "error",  # Only report errors, not warnings
                temp_path,
            ],
            capture_output=True,
            text=True,
        )
        try:
            return json.loads(result.stdout)
        except json.JSONDecodeError:
            return {
                "error": "Failed to parse Pyright output",
                "raw_output": result.stdout,
            }
    finally:
        os.unlink(temp_path)
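As a quick sanity check, you can call analyze_with_pyright on a deliberately broken snippet. The summary.errorCount and generalDiagnostics keys are the same fields the judge node in Step 4 consumes from Pyright's JSON report; the per-diagnostic message field and the sample snippet below are illustrative.

# Quick check: analyze a deliberately broken snippet (illustrative only)
sample = "def add(a: int, b: int) -> int:\n    return a + 'b'\n"
report = analyze_with_pyright(sample)
print(report["summary"]["errorCount"])            # number of errors Pyright found
for diag in report.get("generalDiagnostics", []):
    print(diag["message"])                        # human-readable error description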

Step 3: The Main Assistant Model for Code Generation

GPT-4o mini model setup

def call_model(state: dict) -> dict:
    """Process the user query with the GPT-4o mini model.

    Args:
        state: The current conversation state

    Returns:
        dict: Updated state with the model response
    """
    model = init_chat_model(model="gpt-4o-mini", openai_api_key="your_openai_api_key")
    return {"messages": model.invoke(state["messages"])}

Note: handle os.environ["OPENAI_API_KEY"] = "YOUR_API_KEY" securely, and never hardcode the key in your code.
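A safer pattern is to keep the key out of the source entirely and read it from the environment (for example, exported in your shell or loaded from a .env file). A minimal sketch, assuming OPENAI_API_KEY has already been set:

import os
from langchain.chat_models import init_chat_model

# Read the key from the environment instead of hardcoding it in the script
api_key = os.environ["OPENAI_API_KEY"]  # raises KeyError if the variable is not set
model = init_chat_model(model="gpt-4o-mini", openai_api_key=api_key)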

Step 4: Code Extraction and Validation

Code extraction types

# Define type classes for code extraction
class ExtractPythonCode(TypedDict):
    """Type class for extracting Python code. The python_code field is the code to be extracted."""

    python_code: str


class NoCode(TypedDict):
    """Type class for indicating no code was found."""

    no_code: bool

System prompt for GPT-4o mini

# System prompt for the model
SYSTEM_PROMPT = """The below conversation is you conversing with a user to write some python code. Your final response is the last message in the list.
Sometimes you will respond with code, other times with a question.
If there is code - extract it into a single python script using ExtractPythonCode.
If there is no code to extract - call NoCode."""

Pyright code validation function

def try_running(state: dict) -> dict | None:
    """Attempt to run and analyze the extracted Python code.

    Args:
        state: The current conversation state

    Returns:
        dict | None: Updated state with analysis results if code was found
    """
    model = init_chat_model(model="gpt-4o-mini")
    extraction = model.bind_tools([ExtractPythonCode, NoCode])
    er = extraction.invoke(
        [{"role": "system", "content": SYSTEM_PROMPT}] + state["messages"]
    )
    if len(er.tool_calls) == 0:
        return None
    tc = er.tool_calls[0]
    if tc["name"] != "ExtractPythonCode":
        return None
    result = analyze_with_pyright(tc["args"]["python_code"])
    print(result)
    explanation = result["generalDiagnostics"]
    if result["summary"]["errorCount"]:
        return {
            "messages": [
                {
                    "role": "user",
                    "content": f"I ran pyright and found this: {explanation}\n\n"
                    "Try to fix it. Make sure to regenerate the entire code snippet. "
                    "If you are not sure what is wrong, or think there is a mistake, "
                    "you can ask me a question rather than generating code",
                }
            ]
        }

Step 5: Creating the Reflection Graph

Creating the main and judge graphs

def create_graphs():
    """Create and configure the assistant and judge graphs."""
    # Define the main assistant graph
    assistant_graph = (
        StateGraph(MessagesState)
        .add_node(call_model)
        .add_edge(START, "call_model")
        .add_edge("call_model", END)
        .compile()
    )

    # Define the judge graph for code analysis
    judge_graph = (
        StateGraph(MessagesState)
        .add_node(try_running)
        .add_edge(START, "try_running")
        .add_edge("try_running", END)
        .compile()
    )

    # Create the complete reflection graph
    return create_reflection_graph(assistant_graph, judge_graph).compile()


reflection_app = create_graphs()

Step 6: Running the Application

Example execution

if __name__ == "__main__":
    """Run an example query through the reflection system."""
    example_query = [
        {
            "role": "user",
            "content": "Write a LangGraph RAG app",
        }
    ]

    print("Running example with reflection using GPT-4o mini...")
    result = reflection_app.invoke({"messages": example_query})
    print("Result:", result)

Output Analysis


What Happened in the Example?

Our LangGraph Reflection system is designed to:

  1. Take an initial code snippet.
  2. Run Pyright (a static type checker for Python) to detect errors.
  3. Use the GPT-4o mini model to analyze the errors, understand them, and generate suggestions for improved code.

Iteration 1 – Errors Identified

1. Import "faiss" could not be resolved.

  • Explanation: this error occurs when the faiss library is not installed or the Python environment cannot resolve the import.
  • Fix: the agent recommends running
pip install faiss-cpu

2. Cannot access attribute "embed" for class "OpenAIEmbeddings".

  • Explanation: the code referenced .embed, but in newer versions of langchain the embedding methods are .embed_documents() and .embed_query().
  • Fix: the agent correctly replaced .embed with .embed_query.

3. Arguments missing for parameters "docstore" and "index_to_docstore_id".

  • Explanation: the FAISS vector store now requires a docstore object and an index_to_docstore_id mapping.
  • Fix: the agent added both arguments by creating an InMemoryDocstore and a dictionary mapping (a combined sketch of fixes 2 and 3 follows this list).
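Putting fixes 2 and 3 together, the corrected vector store construction looks roughly like the following. This is a minimal sketch of the pattern the agent converged on rather than its exact output; the "hello" probe string is used only to discover the embedding dimension.

import faiss
from langchain.docstore import InMemoryDocstore
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

embeddings = OpenAIEmbeddings()

# Use embed_query (not the removed .embed) to size the FAISS index
dimension = len(embeddings.embed_query("hello"))
index = faiss.IndexFlatL2(dimension)

# FAISS now expects an explicit docstore and index_to_docstore_id mapping
vector_store = FAISS(
    embedding_function=embeddings.embed_query,
    index=index,
    docstore=InMemoryDocstore({}),
    index_to_docstore_id={},
)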

Iteration 2 – Progress

In the second iteration, the system improved the code but still found the following issues:

1. Import "langchain.document" could not be resolved.

  • Explanation: the code tried to import Document from the wrong module.
  • Fix: the agent updated the import to pull Document from langchain.docstore.

2. "InMemoryDocstore" is not defined.

  • Explanation: the import for InMemoryDocstore was missing.
  • Fix: the agent correctly added:
from langchain.docstore import InMemoryDocstore

Iteration 3 – Final Solution

In the final iteration, the reflection agent successfully resolved all issues by:

  • Importing faiss correctly.
  • Switching the embedding function from .embed to .embed_query.
  • Adding a valid InMemoryDocstore for document management.
  • Creating a correct index_to_docstore_id mapping.
  • Accessing document content via .page_content instead of treating documents as plain strings (sketched after this list).

The improved code ran successfully with no errors.
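For the last point above, results returned by the vector store are Document objects, so their text is read from the .page_content attribute. A minimal sketch (the query string and k value are illustrative):

# Retrieve documents and read their text via .page_content, not str(doc)
docs = vector_store.similarity_search("What is LangGraph Reflection?", k=3)
context = "\n\n".join(doc.page_content for doc in docs)
print(context)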

Why This Matters

  • Automatic error detection: the LangGraph Reflection framework streamlines debugging by analyzing code errors with Pyright and generating actionable insights.
  • Iterative improvement: the framework keeps refining the code until the errors are resolved, mimicking how a developer would debug and improve code by hand.
  • Adaptive learning: the system adapts to changing code structures, such as updated library syntax or version differences.

Conclusion

The LangGraph Reflection framework demonstrates the power of combining AI critique agents with robust static analysis tools. This intelligent feedback loop enables faster code fixes, better coding practices, and higher overall development efficiency. Whether for beginners or experienced developers, LangGraph Reflection is a powerful tool for improving code quality.

Key Takeaways

  • By combining LangChain, Pyright, and GPT-4o mini within the LangGraph Reflection framework, this solution provides an effective way to validate code automatically.
  • The framework helps LLMs iteratively generate improved solutions and ensures higher-quality output through reflection and critique loops.
  • This approach makes AI-generated code more robust and improves its performance in real-world scenarios.
