kedar-bhumkar committed
Commit 3d833be · verified · 1 Parent(s): f9c4f9f

Upload 5 files

Files changed (5)
  1. README.md +75 -14
  2. app.py +255 -0
  3. backend.py +180 -0
  4. pydantic_model.py +14 -0
  5. requirements.txt +6 -0
README.md CHANGED
@@ -1,14 +1,75 @@
- ---
- title: Code Change Impact Analyzer
- emoji: 🌍
- colorFrom: pink
- colorTo: indigo
- sdk: streamlit
- sdk_version: 1.43.2
- app_file: app.py
- pinned: false
- license: apache-2.0
- short_description: Analyze the impact of a code change (GIT repo) using LLM
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ ---
+ title: Code Impact Analyzer
+ emoji: 🔍
+ colorFrom: blue
+ colorTo: indigo
+ sdk: streamlit
+ sdk_version: 1.32.0
+ app_file: app.py
+ pinned: false
+ ---
+
+ # Code Impact Analyzer
+
+ A powerful tool that analyzes code changes in Git repositories using AI to provide detailed impact analysis.
+
+ ## Features
+
+ - 🔍 **Git Repository Analysis**: Clone and analyze any public Git repository
+ - 🤖 **AI-Powered Analysis**: Uses GPT-4 and Claude Sonnet for intelligent code analysis
+ - 📊 **Impact Assessment**: Provides detailed analysis of code changes and their impact
+ - 🔒 **Secure API Key Management**: Supports both environment variables and session-based API keys
+ - 📝 **Structured Output**: Returns analysis in a standardized JSON format
+ - 📦 **Large Codebase Support**: Handles large repositories through intelligent chunking
+
+ ## Usage
+
+ 1. Enter a Git repository URL
+ 2. Select your preferred AI model (GPT-4 or Claude Sonnet)
+ 3. Enter your code/configuration changes
+ 4. Click "Analyze" to get a detailed impact analysis
+
+ ## API Key Setup
+
+ ### Option 1: Environment Variables
+ Set your API keys in the `.env` file:
+ ```
+ OPENAI_API_KEY=your_openai_key_here
+ ANTHROPIC_API_KEY=your_anthropic_key_here
+ ```
+
+ ### Option 2: In-App Input
+ Enter your OpenAI API key directly in the application interface.
+
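A minimal sketch of how the two options can coexist, assuming the precedence the app uses (a key entered in the session wins, otherwise the environment is consulted); the helper name `resolve_openai_key` is hypothetical:

```python
import os
from dotenv import load_dotenv  # python-dotenv, pinned in requirements.txt

load_dotenv()  # reads OPENAI_API_KEY / ANTHROPIC_API_KEY from a local .env, if one exists

def resolve_openai_key(session_key: str = "") -> str | None:
    """Hypothetical helper: prefer a key entered in the app session, else fall back to the environment."""
    return session_key or os.getenv("OPENAI_API_KEY")
```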
+ ## Analysis Output
+
+ The tool provides analysis in the following format:
+ ```json
+ {
+     "severity_level": "LOW/MEDIUM/HIGH",
+     "number_of_files_impacted": <integer>,
+     "files_impacted": [
+         {
+             "files_impacted": "file_path",
+             "impact_details": "detailed_impact_description"
+         }
+     ]
+ }
+ ```
+
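Because this commit also ships `pydantic_model.py` with an `ImpactAnalysis` model of the same shape, a response can be validated rather than trusted as-is. A minimal sketch, assuming Pydantic v2 (which the `model_json_schema()` call in `backend.py` already implies); the sample values are illustrative only:

```python
from pydantic_model import ImpactAnalysis

raw = (
    '{"severity_level": "Low", "number_of_files_impacted": 1, '
    '"files_impacted": [{"files_impacted": "BaseAppLiterals.cls", '
    '"impact_details": "Enum USER_INTERFACE removed"}]}'
)

# Pydantic v2: parse the JSON string and validate it against the schema in one step.
result = ImpactAnalysis.model_validate_json(raw)
print(result.number_of_files_impacted, result.severity_level)
```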
+ ## Severity Levels
+
+ - **LOW**: 1-3 files impacted
+ - **MEDIUM**: 4-8 files impacted
+ - **HIGH**: More than 8 files impacted
+
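A minimal standalone sketch of this mapping, mirroring the thresholds `app.py` applies when rendering results (the function name is hypothetical; `app.py` also reports "No Impact" when nothing is affected):

```python
def severity_from_count(files_impacted: int | None) -> str:
    """Map a count of impacted files to a severity label using the thresholds above."""
    if not files_impacted:      # None or 0
        return "No Impact"
    if files_impacted <= 3:
        return "Low"
    if files_impacted <= 8:
        return "Medium"
    return "High"
```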
+ ## Technical Details
+
+ - Built with Streamlit
+ - Uses OpenAI's GPT-4 and Anthropic's Claude Sonnet
+ - Supports multiple programming languages
+ - Handles large codebases through token-based chunking
+
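The token-based chunking mentioned in the last bullet is implemented in `backend.py` with `tiktoken`; a simplified sketch of the idea, using the same 120,000-token budget that `chunk_files` defaults to:

```python
import tiktoken

def count_tokens(text: str, model: str = "gpt-4") -> int:
    return len(tiktoken.encoding_for_model(model).encode(text))

def chunk_texts(texts: list[str], max_tokens: int = 120_000) -> list[list[str]]:
    """Greedily pack texts into chunks that stay under the context budget."""
    chunks: list[list[str]] = []
    current: list[str] = []
    used = 0
    for text in texts:
        tokens = count_tokens(text)
        if tokens > max_tokens:           # oversized inputs are skipped, as in backend.py
            continue
        if current and used + tokens > max_tokens:
            chunks.append(current)
            current, used = [], 0
        current.append(text)
        used += tokens
    if current:
        chunks.append(current)
    return chunks
```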
+ ## License
+
+ MIT License
app.py ADDED
@@ -0,0 +1,255 @@
+ import streamlit as st
+ import tempfile
+ import json
+ from backend import (
+     clone_repository,
+     read_code_files,
+     analyze_code,
+     check_api_keys
+ )
+
+ def get_severity_color(severity):
+     """Get color based on severity level."""
+     colors = {
+         "LOW": "#FFA500",     # Orange
+         "MEDIUM": "#FF6B6B",  # Light Red
+         "HIGH": "#FF0000"     # Red
+     }
+     return colors.get(severity.upper(), "#000000")
+
+ def render_analysis_results(analysis_text):
+     """Render the analysis results according to the Pydantic model schema."""
+     try:
+         # Parse the analysis text as JSON
+         analysis_data = json.loads(analysis_text)
+
+         # Custom CSS for styling
+         st.markdown("""
+             <style>
+             .severity-box {
+                 background-color: #f0f2f6;
+                 padding: 1rem;
+                 border-radius: 0.5rem;
+                 margin: 1rem 0;
+             }
+             .file-impact {
+                 background-color: #ffffff;
+                 padding: 1rem;
+                 border-radius: 0.5rem;
+                 margin: 0.5rem 0;
+                 border: 1px solid #e1e4e8;
+             }
+             .impact-count {
+                 background-color: #e6f3ff;
+                 padding: 0.5rem 1rem;
+                 border-radius: 0.5rem;
+                 margin: 1rem 0;
+             }
+             </style>
+         """, unsafe_allow_html=True)
+
+         # Calculate severity level based on number of files impacted
+         severity_level = analysis_data['severity_level']
+         if analysis_data['number_of_files_impacted'] is None or analysis_data['number_of_files_impacted'] == 0:
+             severity_level = "No Impact"
+         elif analysis_data['number_of_files_impacted'] > 0 and analysis_data['number_of_files_impacted'] <= 3:
+             severity_level = "Low"
+         elif analysis_data['number_of_files_impacted'] > 3 and analysis_data['number_of_files_impacted'] <= 8:
+             severity_level = "Medium"
+         else:
+             severity_level = "High"
+
+         # Display Severity Level with custom styling
+         severity_color = get_severity_color(severity_level)
+         st.markdown(f"""
+             <div class="severity-box">
+                 <h3 style='color: {severity_color}; margin: 0; font-size: 1.5rem; font-weight: bold;'>
+                     Severity Level: {severity_level}
+                 </h3>
+             </div>
+         """, unsafe_allow_html=True)
+
+         # Display Number of Files Impacted with custom styling
+         st.markdown(f"""
+             <div class="impact-count">
+                 <h3 style='color: #1f77b4; margin: 0; font-size: 1.2rem;'>
+                     Number of Files Impacted: {analysis_data['number_of_files_impacted']}
+                 </h3>
+             </div>
+         """, unsafe_allow_html=True)
+
+         # Display Files Impacted with custom styling
+         st.markdown("<h3 style='color: #2c3e50; font-size: 1.3rem;'>Files Impacted</h3>", unsafe_allow_html=True)
+
+         for file_impact in analysis_data['files_impacted']:
+             with st.expander(f"📄 {file_impact['files_impacted']}", expanded=False):
+                 st.markdown(f"""
+                     <div class="file-impact">
+                         <p style='color: #34495e; font-size: 1rem; line-height: 1.6;'>
+                             {file_impact['impact_details']}
+                         </p>
+                     </div>
+                 """, unsafe_allow_html=True)
+
+     except json.JSONDecodeError:
+         # If the response is not valid JSON, display it as plain text
+         st.markdown(analysis_text)
+     except Exception as e:
+         st.error(f"Error rendering analysis results: {str(e)}")
+         st.markdown(analysis_text)
+
+ def main():
+     st.title("Git Repository Code Analyzer")
+     st.write("Enter a Git repository URL and a prompt to analyze the code.")
+
+     # Example data
+     examples = [
+         {
+             "Git URL": "https://github.com/kedar-bhumkar/SFRoutingFramework",
+             "Code/Config Changes": "Enum USER_INTERFACE removed from file: BaseAppLiterals.cls"
+         },
+         {
+             "Git URL": "https://github.com/kedar-bhumkar/SFDynamicFields",
+             "Code/Config Changes": "Removed a field Value__c from DynamicFieldTable__c.object"
+         }
+     ]
+
+     # Initialize session state if it does not exist yet
+     if 'selected_example' not in st.session_state:
+         st.session_state.selected_example = None
+     if 'openai_key' not in st.session_state:
+         st.session_state.openai_key = ""
+
+     # API Key input section
+     with st.expander("🔑 API Key Settings", expanded=False):
+         st.markdown("""
+             <style>
+             .api-key-section {
+                 background-color: #f8f9fa;
+                 padding: 1rem;
+                 border-radius: 0.5rem;
+                 margin: 0.5rem 0;
+             }
+             </style>
+         """, unsafe_allow_html=True)
+
+         st.markdown("""
+             <div class="api-key-section">
+                 <p style='color: #2c3e50; font-size: 0.9rem;'>
+                     Enter your OpenAI API key to use the GPT-4 model. The key will be stored in the session and not saved permanently.
+                 </p>
+             </div>
+         """, unsafe_allow_html=True)
+
+         openai_key = st.text_input(
+             "OpenAI API Key",
+             value=st.session_state.openai_key,
+             type="password",
+             help="Enter your OpenAI API key to use GPT-4"
+         )
+
+         if openai_key:
+             st.session_state.openai_key = openai_key
+             st.success("API key saved for this session")
+
+     # Display examples table with Select buttons
+     st.subheader("Example Cases")
+
+     # Create columns for the table
+     col1, col2, col3 = st.columns([2, 2, 1])
+
+     # Table header
+     with col1:
+         st.write("**Git URL**")
+     with col2:
+         st.write("**Code/Config Changes**")
+     with col3:
+         st.write("**Action**")
+
+     # Table rows
+     for idx, example in enumerate(examples):
+         with col1:
+             st.write(example["Git URL"])
+         with col2:
+             st.write(example["Code/Config Changes"])
+         with col3:
+             if st.button("Select", key=f"select_{idx}"):
+                 st.session_state.selected_example = idx
+                 st.session_state.repo_url = example["Git URL"]
+                 st.session_state.prompt = example["Code/Config Changes"]
+                 st.experimental_rerun()
+
+     # Get user inputs
+     repo_url = st.text_input("Git Repository URL",
+                              value=st.session_state.get("repo_url", ""))
+
+     # Model selection
+     model = st.selectbox(
+         "Select AI Model",
+         ["gpt-4", "claude-sonnet (coming soon)"],
+         help="Choose the AI model to analyze the code"
+     )
+
+     prompt = st.text_area("Code or configuration changes",
+                           value=st.session_state.get("prompt", "List down the code/configuration changes to be performed"))
+
+     # Clear button
+     if st.button("Clear Selection"):
+         st.session_state.selected_example = None
+         st.session_state.repo_url = ""
+         st.session_state.prompt = "List down the code/configuration changes to be performed"
+         st.experimental_rerun()
+
+     if st.button("Analyze"):
+         if not repo_url:
+             st.error("Please enter a Git repository URL")
+             return
+
+         # Check API keys
+         api_keys_status = check_api_keys()
+         if model == "gpt-4":
+             # First check session state for an OpenAI key
+             if st.session_state.openai_key:
+                 # Use the key from session state
+                 api_keys_status["gpt-4"] = True
+             elif not api_keys_status["gpt-4"]:
+                 st.error("OpenAI API key not found. Please enter your key in the API Key Settings section or set the OPENAI_API_KEY environment variable.")
+                 return
+         elif model == "claude-sonnet" and not api_keys_status["claude-sonnet"]:
+             st.error("Anthropic API key not found. Please set the ANTHROPIC_API_KEY environment variable.")
+             return
+
+         with st.spinner("Cloning repository and analyzing code..."):
+             # Create a temporary directory
+             with tempfile.TemporaryDirectory() as temp_dir:
+                 # Clone the repository
+                 success, error = clone_repository(repo_url, temp_dir)
+                 if not success:
+                     st.error(f"Error cloning repository: {error}")
+                     return
+
+                 # Read code files
+                 code_files, warnings = read_code_files(temp_dir)
+
+                 # Display any warnings from reading files
+                 for warning in warnings:
+                     st.warning(warning)
+
+                 if not code_files:
+                     st.warning("No code files found in the repository.")
+                     return
+
+                 # Analyze the code
+                 analysis, error = analyze_code(code_files, prompt, model)
+
+                 if error:
+                     st.error(f"Error during analysis: {error}")
+                     return
+
+                 if analysis:
+                     st.subheader("Analysis Results")
+                     render_analysis_results(analysis)
+
+ if __name__ == "__main__":
+     main()
backend.py ADDED
@@ -0,0 +1,180 @@
+ import os
+ import git
+ from pathlib import Path
+ from openai import OpenAI
+ from anthropic import Anthropic
+ from dotenv import load_dotenv
+ from pydantic_model import ImpactAnalysis
+ import tiktoken
+ import json
+ from typing import List, Tuple, Dict, Any
+
+ # Load environment variables
+ load_dotenv()
+
+ # Initialize API clients
+ openai_client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
+ anthropic_client = Anthropic(api_key=os.getenv("ANTHROPIC_API_KEY"))
+
+ def clone_repository(repo_url, temp_dir):
+     """Clone a git repository to a temporary directory."""
+     try:
+         git.Repo.clone_from(repo_url, temp_dir)
+         return True, None
+     except Exception as e:
+         return False, str(e)
+
+ def read_code_files(directory):
+     """Read all code files from the directory."""
+     code_files = []
+     code_extensions = {'.py', '.js', '.jsx', '.ts', '.tsx', '.java', '.cpp', '.c', '.cs', '.go', '.rb', '.php', '.cls', '.object', '.page'}
+     warnings = []
+
+     for root, _, files in os.walk(directory):
+         for file in files:
+             if Path(file).suffix in code_extensions:
+                 file_path = os.path.join(root, file)
+                 try:
+                     with open(file_path, 'r', encoding='utf-8') as f:
+                         content = f.read()
+                     relative_path = os.path.relpath(file_path, directory)
+                     code_files.append({
+                         'path': relative_path,
+                         'content': content
+                     })
+                 except Exception as e:
+                     warnings.append(f"Could not read file {file_path}: {str(e)}")
+
+     return code_files, warnings
+
+ def count_tokens(text: str, model: str = "gpt-4") -> int:
+     """Count the number of tokens in a text string."""
+     encoding = tiktoken.encoding_for_model(model)
+     return len(encoding.encode(text))
+
+ def chunk_files(code_files: List[Dict[str, str]], model: str = "gpt-4", max_tokens: int = 120000) -> List[List[Dict[str, str]]]:
+     """Split files into chunks that fit within the context window."""
+     chunks = []
+     current_chunk = []
+     current_tokens = 0
+
+     for file in code_files:
+         file_content = f"File: {file['path']}\nContent:\n{file['content']}\n"
+         file_tokens = count_tokens(file_content, model)
+
+         # If a single file is larger than max_tokens, skip it
+         if file_tokens > max_tokens:
+             print(f"Warning: File {file['path']} is too large ({file_tokens} tokens) and will be skipped")
+             continue
+
+         # If adding this file would exceed max_tokens, start a new chunk
+         if current_tokens + file_tokens > max_tokens:
+             if current_chunk:  # Only add non-empty chunks
+                 chunks.append(current_chunk)
+             current_chunk = [file]
+             current_tokens = file_tokens
+         else:
+             current_chunk.append(file)
+             current_tokens += file_tokens
+
+     # Add the last chunk if it's not empty
+     if current_chunk:
+         chunks.append(current_chunk)
+
+     return chunks
+
+ def analyze_code_chunk(chunk: List[Dict[str, str]], prompt: str, model: str) -> Tuple[str, str]:
+     """Analyze a chunk of code files."""
+     try:
+         # Prepare the context from the chunk
+         context = "Here are the relevant code files:\n\n"
+         for file in chunk:
+             context += f"File: {file['path']}\n```\n{file['content']}\n```\n"
+
+         if model == "gpt-4":
+             json_schema = ImpactAnalysis.model_json_schema()
+             messages = [
+                 {"role": "system", "content": "You are a code analysis expert. Analyze the provided code based on the user's prompt."},
+                 {"role": "user", "content": f"Please check the impact of performing the below code/configuration changes on the above codebase. Provide only the summary of the impact in a table with aggregate analysis that outputs a JSON object with the following schema: {json_schema}. Please note: Do not add the characters ``` json anywhere in the response. Do not respond with messages like 'Here is the response in the required JSON format:'.\n\nCode or configuration changes: {prompt}\n\n{context}"}
+             ]
+
+             response = openai_client.chat.completions.create(
+                 model="gpt-4o",
+                 messages=messages,
+                 temperature=0.7,
+                 max_tokens=2000
+             )
+             return response.choices[0].message.content, ""
+         else:
+             # Keep original Claude implementation
+             system_message = "You are a code analysis expert. Analyze the provided code based on the user's prompt."
+             user_message = f"Please check the impact of performing the below code/configuration changes on the above codebase. Provide only the summary of the impact in a table with aggregate analysis that includes 1) List of files impacted. 2) Number of files impacted. 3) Impact detail on each file impacted. Surface a 'Severity Level' at the top of the table with possible values: Low, Medium, High based on the 'Number of impacted files'. E.g. if 'Number of impacted files' > 0 but < 3 then LOW, if 'Number of impacted files' > 3 but < 8 then MEDIUM, if 'Number of impacted files' > 8 then HIGH.\n\nCode or configuration changes: {prompt}\n\n{context}"
+
+             response = anthropic_client.messages.create(
+                 model="claude-3-7-sonnet-20250219",
+                 max_tokens=2000,
+                 temperature=0.7,
+                 system=system_message,
+                 messages=[{"role": "user", "content": user_message}]
+             )
+             return response.content[0].text, ""
+     except Exception as e:
+         return "", str(e)
+
+ def analyze_code(code_files: List[Dict[str, str]], prompt: str, model: str) -> Tuple[str, str]:
+     """Analyze code files with chunking to handle large codebases."""
+     try:
+         # Split files into chunks
+         chunks = chunk_files(code_files, model)
+
+         if not chunks:
+             return "", "No valid files to analyze"
+
+         # Analyze each chunk
+         all_analyses = []
+         for i, chunk in enumerate(chunks):
+             analysis, error = analyze_code_chunk(chunk, prompt, model)
+             if error:
+                 return "", f"Error analyzing chunk {i+1}: {error}"
+             if analysis:
+                 all_analyses.append(analysis)
+
+         if not all_analyses:
+             return "", "No analysis results generated"
+
+         # Combine results from all chunks
+         combined_analysis = {
+             "severity_level": "LOW",  # Default to lowest severity
+             "number_of_files_impacted": 0,
+             "files_impacted": []
+         }
+
+         # Merge results from all chunks
+         for analysis in all_analyses:
+             try:
+                 chunk_data = json.loads(analysis)
+                 combined_analysis["number_of_files_impacted"] += chunk_data.get("number_of_files_impacted", 0)
+                 combined_analysis["files_impacted"].extend(chunk_data.get("files_impacted", []))
+
+                 # Update severity level based on the highest severity found
+                 severity_map = {"LOW": 1, "MEDIUM": 2, "HIGH": 3}
+                 current_severity = severity_map.get(combined_analysis["severity_level"], 0)
+                 chunk_severity = severity_map.get(chunk_data.get("severity_level", "LOW"), 0)
+                 if chunk_severity > current_severity:
+                     combined_analysis["severity_level"] = chunk_data["severity_level"]
+             except json.JSONDecodeError:
+                 continue
+
+         return json.dumps(combined_analysis), ""
+
+     except Exception as e:
+         return "", str(e)
+
+ def check_api_keys():
+     """Check if required API keys are set."""
+     openai_key = os.getenv("OPENAI_API_KEY") is not None
+     anthropic_key = os.getenv("ANTHROPIC_API_KEY") is not None
+     return {
+         "gpt-4": openai_key,
+         "claude-sonnet": anthropic_key
+     }
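For reference, these backend functions can also be driven outside the Streamlit UI. A minimal sketch of the same flow that `app.py` follows, reusing one of its example inputs (assumes an `OPENAI_API_KEY` is set in the environment):

```python
import tempfile
from backend import analyze_code, check_api_keys, clone_repository, read_code_files

if check_api_keys()["gpt-4"]:
    with tempfile.TemporaryDirectory() as temp_dir:
        # Clone the example repository from app.py into a throwaway directory.
        ok, error = clone_repository("https://github.com/kedar-bhumkar/SFRoutingFramework", temp_dir)
        if ok:
            code_files, warnings = read_code_files(temp_dir)
            analysis, error = analyze_code(
                code_files,
                "Enum USER_INTERFACE removed from file: BaseAppLiterals.cls",
                "gpt-4",
            )
            print(analysis or error)
```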
pydantic_model.py ADDED
@@ -0,0 +1,14 @@
+ from pydantic import BaseModel, Field
+ from typing import List, Optional, Literal
+
+ class FileImpact(BaseModel):
+     files_impacted: str
+     impact_details: str
+
+ class ImpactAnalysis(BaseModel):
+     files_impacted: List[FileImpact]
+     number_of_files_impacted: int
+     severity_level: Optional[Literal["Low", "Medium", "High"]] = Field(description="Possible values: Low, Medium, High, based on 'number_of_files_impacted'. E.g. if 'number_of_files_impacted' > 0 but <= 3 then Low, if > 3 but <= 8 then Medium, if > 8 then High.")
requirements.txt ADDED
@@ -0,0 +1,6 @@
+ streamlit==1.32.0
+ openai==1.12.0
+ python-dotenv==1.0.1
+ gitpython==3.1.42
+ anthropic==0.18.1
+ tiktoken==0.6.0