MHamdan committed on
Commit 3cd3230
· 1 Parent(s): 300162c

Initial commit with full functionality: extend app, requirements, and tools

Files changed (3)
  1. app.py +16 -2
  2. prompts.yaml +18 -40
  3. requirements.txt +2 -1
app.py CHANGED
@@ -1,5 +1,5 @@
 # app.py
-from smolagents import CodeAgent, HfApiModel
+from smolagents import CodeAgent, HfApiModel, Tool
 import logging
 from Gradio_UI import GradioUI
 
@@ -10,9 +10,22 @@ logging.basicConfig(
 )
 logger = logging.getLogger(__name__)
 
+class AnalysisTool(Tool):
+    """Tool for analyzing web content."""
+
+    name = "web_analyzer"
+    description = "Analyzes web content for summaries, sentiment, and topics"
+
+    def forward(self, text: str, analysis_type: str) -> str:
+        """Process the text based on analysis type."""
+        return text
+
 def create_agent():
     """Create and configure the agent."""
     try:
+        # Initialize tools
+        analyzer = AnalysisTool()
+
         # Initialize model
         model = HfApiModel(
             model_id='Qwen/Qwen2.5-Coder-32B-Instruct',
@@ -20,9 +33,10 @@ def create_agent():
             temperature=0.5
         )
 
-        # Create agent with simple configuration
+        # Create agent with tools
         return CodeAgent(
             model=model,
+            tools=[analyzer],  # Add the tools here
             max_steps=3,
             verbosity_level=1
         )
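Note that the new `AnalysisTool.forward` in this commit simply returns its input unchanged, so the `analysis_type` parameter is not yet used. A fuller implementation would presumably branch on `analysis_type`; the sketch below shows one possible pure-Python dispatch (the keyword lists, truncation limit, and fallback behavior are illustrative assumptions, not part of the commit):

```python
def analyze(text: str, analysis_type: str) -> str:
    """Illustrative stand-in for AnalysisTool.forward, which currently
    just echoes its input. Dispatches on the requested analysis type."""
    if analysis_type == "summary":
        # Naive summary: first sentence, capped at 100 characters.
        return text.split(".")[0].strip()[:100]
    if analysis_type == "sentiment":
        # Crude keyword-count sentiment as a placeholder.
        lowered = text.lower()
        positive = sum(lowered.count(w) for w in ("good", "great", "excellent"))
        negative = sum(lowered.count(w) for w in ("bad", "poor", "terrible"))
        if positive > negative:
            return "positive"
        if negative > positive:
            return "negative"
        return "neutral"
    if analysis_type == "topics":
        # Placeholder topic detection: a few distinct longer words.
        words = [w.strip(".,") for w in text.lower().split() if len(w) > 5]
        return ", ".join(sorted(set(words))[:3])
    return text  # Unknown type: fall back to the raw text, as the diff does
```

Any real version would replace these heuristics with proper NLP, but even this shape makes the `tools=[analyzer]` wiring in `create_agent` do observable work.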
prompts.yaml CHANGED
@@ -1,50 +1,28 @@
 # prompts.yaml
 system_prompt: |
-  You are a sophisticated AI assistant specialized in web content analysis. You have access to these capabilities:
-  - Text extraction: Clean and extract meaningful content from URLs
-  - Sentiment analysis: Determine emotional tone and context
-  - Summarization: Create concise, informative summaries
-  - Topic detection: Identify main themes and subjects
-  - Web search: Gather additional context when needed
-  - Temporal analysis: Consider time-based context
+  You are an AI assistant specialized in analyzing web content. You can:
+  - Extract and clean text from web pages
+  - Create concise summaries
+  - Analyze sentiment
+  - Identify main topics
 
-  Always structure your responses in JSON format with these keys:
-  - clean_text: The extracted and cleaned content
-  - summary: A concise summary if requested
-  - sentiment: Sentiment analysis if requested
-  - topics: Main topics if requested
-
-  Think step by step and use the most appropriate tools for each task.
-
-user: |
-  User query: {input}
-
-  Analysis process:
-  1. Validate and process the URL
-  2. Determine required analysis types
-  3. Execute analysis in order: extraction → summary → sentiment → topics
-  4. Format results in JSON structure
-
-  Available tools: {tools}
+  Always provide clear, structured responses.
 
-A: |
-  I'll analyze the content systematically:
-
-  {thoughts}
-
-  Executing analysis with appropriate tools...
+user_prompt: |
+  Please analyze the following content: {input}
+  Analysis types requested: {analysis_types}
+
+  Return your analysis in a clear, organized format.
+
+assistant_prompt: |
+  I'll analyze that content step by step:
+
+  {analysis}
 
 observation: |
-  Tool response: {output}
-
-final: |
-  Analysis complete. Results formatted in JSON:
-
-  {response}
-
-error: |
-  An error occurred: {error}
-
-  Technical details: {error_details}
-
-  I'll attempt an alternative approach or provide partial results if possible.
+  Analysis results: {results}
+
+final_prompt: |
+  Here's the complete analysis:
+
+  {final_results}
requirements.txt CHANGED
@@ -3,4 +3,5 @@ gradio>=4.0.0
 requests>=2.31.0
 beautifulsoup4>=4.12.2
 smolagents>=0.2.0
-python-dotenv>=1.0.0
+python-dotenv>=1.0.0
+pyyaml>=6.0.1
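The `pyyaml>=6.0.1` addition lines up with the reworked prompts.yaml: its new keys (`user_prompt` with `{input}` and `{analysis_types}` placeholders) are presumably loaded with `yaml.safe_load` and filled via `str.format`. A minimal sketch of that pattern, with the YAML inlined here for self-containment (the actual app would read the prompts.yaml file instead):

```python
import yaml

# Inlined stand-in for prompts.yaml; the app would use
# yaml.safe_load(open("prompts.yaml")) instead.
PROMPTS_YAML = """\
system_prompt: |
  You are an AI assistant specialized in analyzing web content.
user_prompt: |
  Please analyze the following content: {input}
  Analysis types requested: {analysis_types}
"""

prompts = yaml.safe_load(PROMPTS_YAML)

# Fill the placeholders; the keyword names must match the
# {input} / {analysis_types} fields in the template.
message = prompts["user_prompt"].format(
    input="https://example.com/article",
    analysis_types="summary, sentiment",
)
```

Using `safe_load` rather than `load` avoids executing arbitrary YAML tags, which is the usual choice for loading trusted-but-plain config like prompt templates.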