LLM-Projects

🕵️‍♂️ Intelligent Agent Hub

This repository contains a Streamlit application that integrates various AI-powered tools to assist with coding, mathematical calculations, real-time information retrieval, and research. It leverages LangChain agents, Groq's LLM (Large Language Model), and several APIs to provide a versatile and dynamic querying environment.


🌟 Features

Agent 1:

- Tools: Python REPL, Shell commands.
- Use Case: Ideal for performing mathematical calculations, running Python code, and executing shell commands.

Agent 2:

- Tools: DuckDuckGo, Google Search, YouTube Search.
- Use Case: Suitable for retrieving real-time information from the web, including searching for videos on YouTube.

Agent 3:

- Tools: DuckDuckGo, Wikipedia, ArXiv, PubMed, Google Search.
- Use Case: Perfect for research purposes, providing access to scientific literature and encyclopedic knowledge.

Manager Agent:

Manages the three agents above to provide a comprehensive response depending on the query type.
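
The snippet below is a minimal sketch of how the three specialized agents could be wired up with LangChain and Groq. It assumes the `langchain_community`/`langchain_experimental` tool wrappers and the `langchain_groq` integration; tool lists and variable names are illustrative and may differ from the actual `app.py`.

```python
# Minimal sketch of the three specialized agents (illustrative, not the exact app.py).
from langchain.agents import AgentType, initialize_agent
from langchain_community.tools import (
    DuckDuckGoSearchRun,
    ShellTool,
    WikipediaQueryRun,
    YouTubeSearchTool,
)
from langchain_community.utilities import WikipediaAPIWrapper
from langchain_experimental.tools import PythonREPLTool
from langchain_groq import ChatGroq

# Groq-hosted Llama 3; reads GROQ_API_KEY from the environment.
llm = ChatGroq(model="llama3-70b-8192", temperature=0)

# Agent 1: code & math (Python REPL + shell commands)
agent1 = initialize_agent(
    tools=[PythonREPLTool(), ShellTool()],
    llm=llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)

# Agent 2: real-time information (web + YouTube search)
agent2 = initialize_agent(
    tools=[DuckDuckGoSearchRun(), YouTubeSearchTool()],
    llm=llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)

# Agent 3: research (web search + Wikipedia; ArXiv/PubMed wrappers plug in the same way)
agent3 = initialize_agent(
    tools=[DuckDuckGoSearchRun(), WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper())],
    llm=llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)
```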

🛠️ Setup Instructions

1. Clone the Repository

```bash
git clone https://github.com/Himanshulodha/LLM-Projects.git
cd intelligent-agent-hub
```

2. Create a Virtual Environment

```bash
python3 -m venv venv
source venv/bin/activate
```

3. Install Dependencies

```bash
pip install -r requirements.txt
```

4. Set Up Environment Variables

Create a .env file in the root directory and add your API keys:

```
GROQ_API_KEY=your_groq_api_key
SERP_API_KEY=your_serp_api_key
```
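
A small sketch of how these keys are typically loaded at startup, assuming `python-dotenv` is available (variable names are illustrative):

```python
# Load the .env file and expose the keys to the app (assumes python-dotenv).
import os

from dotenv import load_dotenv

load_dotenv()  # reads .env from the project root

groq_api_key = os.getenv("GROQ_API_KEY")
serp_api_key = os.getenv("SERP_API_KEY")

if not groq_api_key:
    raise RuntimeError("GROQ_API_KEY is missing; check your .env file")
```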

5. Run the Streamlit App

```bash
streamlit run app.py
```

🔧 Usage

1. Open the application in your browser after running the Streamlit command.
2. Enter your query in the input box.
3. Click "Get Answer" to see the response generated by the appropriate agent.
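
The UI boils down to roughly the pattern below; the `answer_query` helper is purely illustrative and stands in for the manager agent call.

```python
# Illustrative Streamlit entry point matching the steps above.
import streamlit as st


def answer_query(query: str) -> str:
    """Placeholder for the manager agent call (see the agent sketches elsewhere in this README)."""
    return f"(agent response for: {query})"


st.title("Intelligent Agent Hub")
query = st.text_input("Enter your query")

if st.button("Get Answer") and query:
    with st.spinner("Thinking..."):
        st.write(answer_query(query))
```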

🙌 Acknowledgments

- LangChain for the powerful AI framework.
- Streamlit for the interactive web application interface.

Here is a detailed breakdown of how the multi-agent system works, covering the entire architecture and workflow:

1. System Overview

   The application uses a hierarchical agent system with 4 agents:
   - 1 Manager Agent (orchestrator)
   - 3 Specialized Agents (task-specific)

2. Agent Specializations

   - Agent1 - Code & Math Specialist
   - Agent2 - Real-time Information Specialist
   - Agent3 - Research Specialist

3. Manager Agent (Orchestrator)

   Purpose: Analyzes user input and routes it to the best specialized agent.

How the Workflow Works

Step 1: User Input

Step 2: Manager Analysis

The manager agent:
- Analyzes the question using the LLM (Groq llama3-70b-8192)
- Determines the best agent based on the question type
- Routes the task to the appropriate specialized agent

Step 3: Specialized Agent Processing

The chosen agent:
- Receives the task from the manager
- Selects appropriate tools from its toolkit
- Executes the task using its specialized tools
- Returns results to the manager

Step 4: Response Delivery

The manager:
- Receives results from the specialized agent
- Formats the response for the user
- Displays the answer in the Streamlit interface
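
One common LangChain pattern for this kind of routing is to expose each specialized agent to the manager as a `Tool`, so the manager's own reasoning loop decides which one to delegate to. The sketch below illustrates that idea, reusing `agent1`, `agent2`, `agent3`, and `llm` from the earlier sketch; it is not necessarily the exact code in `app.py`.

```python
# Routing sketch: each specialized agent becomes a Tool of the manager agent,
# so the manager's ReAct loop picks which one to delegate to.
# Reuses agent1, agent2, agent3, and llm from the earlier sketch.
from langchain.agents import AgentType, Tool, initialize_agent

manager_tools = [
    Tool(
        name="code_and_math_agent",
        func=agent1.run,
        description="Runs Python code, shell commands, and math calculations.",
    ),
    Tool(
        name="realtime_info_agent",
        func=agent2.run,
        description="Searches the web and YouTube for up-to-date information.",
    ),
    Tool(
        name="research_agent",
        func=agent3.run,
        description="Looks up encyclopedic and scientific sources for research questions.",
    ),
]

manager_agent = initialize_agent(
    tools=manager_tools,
    llm=llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)

# Example: manager_agent.run("Summarize recent arXiv papers on retrieval-augmented generation")
```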

Here are the details of the model used and how the agents' thought process works.

Model Details

- Model: llama3-70b-8192 (Meta's Llama 3 70B parameter model)
- Provider: Groq (ultra-fast inference platform)
- Size: 70 billion parameters
- Context Window: 8192 tokens
- Speed: Optimized for real-time inference
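
For reference, the model can also be called directly, outside of any agent. This assumes `GROQ_API_KEY` is set in the environment:

```python
# Direct call to the Groq-hosted Llama 3 model, without any agent or tools.
from langchain_groq import ChatGroq

llm = ChatGroq(model="llama3-70b-8192", temperature=0)
print(llm.invoke("In one sentence, what is the ReAct agent pattern?").content)
```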

🧠 How the Thought Process Works

1. Agent Architecture: Zero-Shot ReAct Description
2. The ReAct (Reasoning + Acting) Framework

   The agents use the ReAct pattern, which follows this cycle:

   Thought → Action → Observation → Thought → Action → … → Final Answer

📝 Real Thought Process Example

Step 1: Initial Analysis

Step 2: Processing Results

Step 3: Synthesizing Information

Step 4: Final Answer Formation
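
To surface a real trace like the steps above, the agent executor can be asked to return its intermediate steps. The sketch below reuses `manager_tools` and `llm` from the earlier sketches; `return_intermediate_steps` is a standard `AgentExecutor` option.

```python
# Sketch: expose the Thought/Action/Observation trace of a run.
# Reuses manager_tools and llm from the earlier sketches.
from langchain.agents import AgentType, initialize_agent

agent = initialize_agent(
    tools=manager_tools,
    llm=llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    return_intermediate_steps=True,
    verbose=True,
)

result = agent.invoke({"input": "What is 23 * 47, and is the result prime?"})
for action, observation in result["intermediate_steps"]:
    print("Thought/Action:", action.log)   # the model's reasoning and chosen tool
    print("Observation:", observation)     # what the tool returned
print("Final Answer:", result["output"])
```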

🎨 Visual Thought Process Flow

```
User Question
      ↓
Manager Agent Analysis
      ↓
[Thought: "What type of question is this?"]
      ↓
[Action: "Choose appropriate agent"]
      ↓
Specialized Agent
      ↓
[Thought: "What tools do I need?"]
      ↓
[Action: "Use specific tool"]
      ↓
[Observation: "Process results"]
      ↓
[Thought: "Do I need more information?"]
      ↓
[Action: "Use another tool if needed"]
      ↓
Final Answer
```