This repository contains a Streamlit application that integrates various AI-powered tools to assist with coding, mathematical calculations, real-time information retrieval, and research. It leverages LangChain agents, Groq's LLM (Large Language Model), and several APIs to provide a versatile and dynamic querying environment.
Agent1 - Code & Math Specialist
Tools: Python REPL, Shell commands.
Use Case: Ideal for performing mathematical calculations, running Python code, and executing shell commands.
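Under the hood, a Python-REPL-style tool runs a code string and captures its output. A minimal stdlib sketch of that idea (illustrative only, not LangChain's actual tool implementation; `run_python` is a hypothetical helper):

```python
import io
import contextlib

def run_python(code: str) -> str:
    """Execute Python source and return whatever it printed."""
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(code, {})  # fresh namespace per call
    return buf.getvalue().strip()

print(run_python("print(2 ** 10)"))  # → 1024
```

The real tool adds guardrails (timeouts, sandboxing) that this sketch omits.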
Agent2 - Real-time Information Specialist
Tools: DuckDuckGo, Google Search, YouTube Search.
Use Case: Suitable for retrieving real-time information from the web, including searching for videos on YouTube.
Agent3 - Research Specialist
Tools: DuckDuckGo, Wikipedia, ArXiv, PubMed, Google Search.
Use Case: Perfect for research purposes, providing access to scientific literature and encyclopedic knowledge.
Manager Agent
Manages the three specialized agents above, routing each query to the agent best suited to it and returning a comprehensive response.
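The routing decision can be sketched as follows. The real manager asks the LLM which specialist fits; this keyword heuristic merely runs offline, and the agent names are illustrative:

```python
# Illustrative routing logic for the manager agent (a keyword heuristic
# standing in for the LLM's decision, so it runs without an API key).
def route(query: str) -> str:
    q = query.lower()
    if any(k in q for k in ("calculate", "code", "python", "shell")):
        return "code_math_agent"      # Agent1: Code & Math
    if any(k in q for k in ("latest", "news", "youtube", "today")):
        return "realtime_agent"       # Agent2: Real-time Information
    if any(k in q for k in ("paper", "arxiv", "pubmed", "wikipedia")):
        return "research_agent"       # Agent3: Research
    return "realtime_agent"           # default for open-ended queries

print(route("Calculate 15% of 240"))        # → code_math_agent
print(route("Find recent papers on LLMs"))  # → research_agent
```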
Clone the repository:

```shell
git clone https://github.com/Himanshulodha/LLM-Projects.git
cd intelligent-agent-hub
```

Create and activate a virtual environment:

```shell
python3 -m venv venv
source venv/bin/activate
```

Install the dependencies:

```shell
pip install -r requirements.txt
```

Set Up Environment Variables

Provide your API keys (for example, in a `.env` file or exported in your shell):

```shell
GROQ_API_KEY=your_groq_api_key
SERP_API_KEY=your_serp_api_key
```

Run the application:

```shell
streamlit run app.py
```
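At startup the app needs those keys available. A minimal stdlib sketch of reading them (the project may load a `.env` file via python-dotenv instead; `require_key` is a hypothetical helper):

```python
import os

def require_key(name: str) -> str:
    """Fetch a required API key, failing fast with a clear message."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value

# Stand-in value so the sketch runs without a real key configured.
os.environ.setdefault("GROQ_API_KEY", "demo-key")
print(require_key("GROQ_API_KEY"))
```

Failing fast on a missing key gives a clearer error than a cryptic API failure later.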
1. Open the application in your browser after running the Streamlit command.
2. Enter your query in the input box.
3. Click “Get Answer” to see the response generated by the appropriate agent.
LangChain for the powerful AI framework.
Streamlit for the interactive web application interface.
System Overview
The application uses a hierarchical agent system with 4 agents:
1 Manager Agent (orchestrator)
3 Specialized Agents (task-specific)
Agent Specializations
Agent1 - Code & Math Specialist
Agent2 - Real-time Information Specialist
Agent3 - Research Specialist
Manager Agent (Orchestrator)
Purpose: Analyzes user input and routes it to the best-suited specialized agent.
Step 1: User Input
Step 2: Manager Analysis
The manager agent:
- Analyzes the question using the LLM (Groq llama3-70b-8192)
- Determines the best agent based on the question type
- Routes the task to the appropriate specialized agent
Step 3: Specialized Agent Processing
The chosen agent:
- Receives the task from the manager
- Selects appropriate tools from its toolkit
- Executes the task using its specialized tools
- Returns results to the manager
Step 4: Response Delivery
The manager:
- Receives results from the specialized agent
- Formats the response for the user
- Displays the answer in the Streamlit interface
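The four steps above can be simulated end to end with stubs (`manager`, `code_math_agent`, and `research_agent` are illustrative names, and a keyword check stands in for the LLM's routing decision):

```python
# Stub specialists standing in for the real LLM-backed agents.
def code_math_agent(task: str) -> str:
    return f"[code/math result for: {task}]"

def research_agent(task: str) -> str:
    return f"[research result for: {task}]"

def manager(query: str) -> str:
    # Step 2: analyze and route (keyword heuristic in place of the LLM)
    agent = code_math_agent if "calculate" in query.lower() else research_agent
    # Step 3: delegate to the chosen specialist
    result = agent(query)
    # Step 4: format the answer for display
    return f"Answer: {result}"

print(manager("Calculate 2 + 2"))
print(manager("Who first proved the four color theorem?"))
```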
Model Details
Model: llama3-70b-8192 (Meta’s Llama 3 70B parameter model)
Provider: Groq (ultra-fast inference platform)
Size: 70 billion parameters
Context Window: 8192 tokens
Speed: Optimized for real-time inference
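The context window bounds prompt plus completion length. A rough sketch of checking whether a prompt is likely to fit, using the common ~4-characters-per-token rule of thumb (`approx_tokens` and `fits` are hypothetical helpers, not a real tokenizer):

```python
CONTEXT_WINDOW = 8192  # tokens available for prompt + completion

def approx_tokens(text: str) -> int:
    # Crude English-text heuristic: roughly 4 characters per token.
    return max(1, len(text) // 4)

def fits(prompt: str, reserved_for_reply: int = 1024) -> bool:
    # Leave headroom for the model's answer within the window.
    return approx_tokens(prompt) + reserved_for_reply <= CONTEXT_WINDOW

print(fits("What is 2 + 2?"))  # → True
```

A real application would use the model's tokenizer for an exact count; the heuristic only flags obviously oversized prompts.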
The ReAct (Reasoning + Acting) Framework
The agents use the ReAct pattern, which follows this cycle:
Thought → Action → Observation → Thought → Action → … → Final Answer
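The cycle can be made concrete with a scripted episode in which a stub calculator tool plays the Action role. In the real agents the LLM generates each Thought and Action; this offline sketch only shows the trace structure:

```python
def calculator(expression: str) -> str:
    # Stub tool: evaluate a simple arithmetic expression.
    return str(eval(expression, {"__builtins__": {}}))

def react_episode(question: str) -> str:
    trace = [f"Thought: I should compute '{question}' with the calculator."]
    trace.append(f"Action: calculator[{question}]")
    observation = calculator(question)       # tool call
    trace.append(f"Observation: {observation}")
    trace.append("Thought: I now know the answer.")
    trace.append(f"Final Answer: {observation}")
    return "\n".join(trace)

print(react_episode("3 * (4 + 5)"))
```

Each Observation feeds the next Thought; the loop ends when the agent emits a Final Answer instead of another Action.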