CrewAI: A Multi-Agent Framework That Combines the Strengths of Many

February 6, 2024

In an earlier article, "Exploring LLM Application Development (26): Prompt (Architectural Patterns: Agent Frameworks such as AutoGPT and AutoGen)", I introduced multi-agent frameworks such as AutoGen and ChatDev. Recently a promising new framework has appeared: CrewAI. Standing on the shoulders of frameworks like AutoGen and aiming at real production use, it combines the flexibility of AutoGen's conversational agents with ChatDev's domain-oriented, process-driven approach, while avoiding AutoGen's lack of framework-level process support and ChatDev's overly narrow, hard-to-generalize workflows. It supports dynamic process design for general scenarios and adapts seamlessly to both development and production workflows. In other words, in CrewAI you can define your own roles for a given scenario as in AutoGen, while also constraining execution to an agreed process as in ChatDev, so that the agents can better accomplish specific, complex goals. The project has already earned 3.9K stars and reached second place on Product Hunt.

https://github.com/joaomdmoura/crewAI

CrewAI offers the following key features:

  • Role-based agent design: customize agents with specific roles, goals, and tools.
  • Autonomous inter-agent delegation: agents can delegate tasks and query one another on their own, improving problem-solving efficiency.
  • Flexible task management: define tasks with customizable tools and assign them to agents dynamically.
  • Process-driven execution (the biggest highlight): currently only sequential task execution is supported, but higher-level process definitions, such as consensual and hierarchical processes, are on the roadmap.

Creating a crew and its process looks roughly like this:

import os
from crewai import Agent, Task, Crew, Process
os.environ["OPENAI_API_KEY"] = "YOUR KEY"
# You can choose to use a local model through Ollama for example.
#
# from langchain.llms import Ollama
# ollama_llm = Ollama(model="openhermes")
# Install duckduckgo-search for this example:
# !pip install -U duckduckgo-search
from langchain.tools import DuckDuckGoSearchRun
search_tool = DuckDuckGoSearchRun()
# Define your agents with roles and goals
researcher = Agent(
  role='Senior Research Analyst',
  goal='Uncover cutting-edge developments in AI and data science',
  backstory="""You work at a leading tech think tank.
  Your expertise lies in identifying emerging trends.
  You have a knack for dissecting complex data and presenting
  actionable insights.""",
  verbose=True,
  allow_delegation=False,
  tools=[search_tool]
  # You can pass an optional llm attribute specifying which model you want to use.
  # It can be a local model served through Ollama / LM Studio or a remote
  # model like OpenAI, Mistral, Anthropic or others (https://python.langchain.com/docs/integrations/llms/)
  #
  # Examples:
  # llm=ollama_llm # was defined above in the file
  # llm=ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0.7)
)
writer = Agent(
  role='Tech Content Strategist',
  goal='Craft compelling content on tech advancements',
  backstory="""You are a renowned Content Strategist, known for
  your insightful and engaging articles.
  You transform complex concepts into compelling narratives.""",
  verbose=True,
  allow_delegation=True,
  # (optional) llm=ollama_llm
)
# Create tasks for your agents
task1 = Task(
  description="""Conduct a comprehensive analysis of the latest advancements in AI in 2024.
  Identify key trends, breakthrough technologies, and potential industry impacts.
  Your final answer MUST be a full analysis report""",
  agent=researcher
)
task2 = Task(
  description="""Using the insights provided, develop an engaging blog
  post that highlights the most significant AI advancements.
  Your post should be informative yet accessible, catering to a tech-savvy audience.
  Make it sound cool, avoid complex words so it doesn't sound like AI.
  Your final answer MUST be the full blog post of at least 4 paragraphs.""",
  agent=writer
)
# Instantiate your crew with a sequential process
crew = Crew(
  agents=[researcher, writer],
  tasks=[task1, task2],
  verbose=2,  # You can set it to 1 or 2 for different logging levels
  process=Process.sequential 
)
# Get your crew to work!
result = crew.kickoff()
print("######################")
print(result)

As shown above, two agents are defined: researcher gathers and analyzes information, and writer produces content based on it; the tasks are then executed in sequential order. Tasks themselves can also be customized, for example by specifying a particular source to collect information from. The following example scrapes the LocalLLaMA subreddit on Reddit.

# pip install praw
import time

import praw
from langchain.tools import tool

class BrowserTool():
    @tool("Scrape reddit content")
    def scrape_reddit(max_comments_per_post=5):
        """Useful to scrape Reddit content from the LocalLLaMA subreddit."""
        reddit = praw.Reddit(
            client_id="your-client-id",
            client_secret="your-client-secret",
            user_agent="your-user-agent",
        )
        subreddit = reddit.subreddit("LocalLLaMA")
        scraped_data = []
        for post in subreddit.hot(limit=10):
            post_data = {"title": post.title, "url": post.url, "comments": []}
            try:
                post.comments.replace_more(limit=0)  # Load top-level comments only
                comments = post.comments.list()
                if max_comments_per_post is not None:
                    comments = comments[:max_comments_per_post]
                for comment in comments:
                    post_data["comments"].append(comment.body)
                scraped_data.append(post_data)
            except praw.exceptions.APIException as e:
                print(f"API Exception: {e}")
                time.sleep(60)  # Sleep for 1 minute before retrying
        return scraped_data
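
To plug this into the crew from the first example, a minimal sketch might look like the following (the agent's role, goal, and backstory wording here are illustrative, not from the original example):

reddit_researcher = Agent(
  role='Senior Research Analyst',
  goal='Summarize what the LocalLLaMA community is currently discussing',
  backstory="""You monitor open-source LLM communities and distill
  their discussions into actionable insights.""",
  # The @tool-decorated method is itself a LangChain tool, so it can be
  # passed directly in place of search_tool from the earlier example.
  tools=[BrowserTool().scrape_reddit],
  verbose=True,
  allow_delegation=False
)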

As the sketch shows, using it only requires replacing search_tool with BrowserTool().scrape_reddit. CrewAI also supports weaving a human into the agents as a tool. For the benefits of this approach, see the article "Humans as a Tool for LLM Agents (Human-in-the-Loop): Improving Success Rates on Complex Problems".
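
The human_tools referenced in the snippet below can be loaded, for example, through LangChain's built-in "human" tool. A minimal sketch, assuming the classic langchain.agents.load_tools helper:

from langchain.agents import load_tools

# The "human" tool prompts a person for input whenever an agent invokes it.
human_tools = load_tools(["human"])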

# Define your agents with roles and goals
researcher = Agent(
  role='Senior Research Analyst',
  goal='Uncover cutting-edge developments in AI and data science',
  backstory="""You are a Senior Research Analyst at a leading tech think tank.
  Your expertise lies in identifying emerging trends and technologies in AI and
  data science. You have a knack for dissecting complex data and presenting
  actionable insights.""",
  verbose=True,
  allow_delegation=False,
  # Passing human tools to the agent alongside the search tool
  tools=[search_tool] + human_tools
)

In addition, to save on token costs and preserve privacy, you can connect to Ollama, a platform for running LLMs locally (for an introduction to Ollama, see "Exploring LLM Application Development (17): Model Deployment and Inference (Framework Tools: ggml, mlc-llm, ollama)"). Once Ollama is installed and configured, it can be integrated as shown below. Ollama hosts many local models; the currently popular Mistral model is a good choice.

from langchain.llms import Ollama

# Requires a running local Ollama server with the model pulled first
# (e.g. `ollama pull openhermes`).
ollama_openhermes = Ollama(model="openhermes")

# When creating your agents within the CrewAI framework, you can pass the
# Ollama model as the llm argument of the Agent constructor. For instance:
local_expert = Agent(
  role='Local Expert at this city',
  goal='Provide the BEST insights about the selected city',
  backstory="""A knowledgeable local guide with extensive information
  about the city, its attractions and customs""",
  tools=[
    # Custom tools, not shown here; see the official crewAI-examples repo
    # for similar implementations.
    SearchTools.search_internet,
    BrowserTools.scrape_and_summarize_website,
  ],
  llm=ollama_openhermes,  # Ollama model passed here
  verbose=True
)

More official examples:
https://github.com/joaomdmoura/crewAI-examples