Lesterchia174 committed
Commit b479e5d · verified · 1 Parent(s): 2af1802

Upload 3 files

Files changed (3)
  1. app.py +272 -0
  2. apt.txt +1 -0
  3. requirements.txt +14 -0
app.py ADDED
@@ -0,0 +1,272 @@
+ # -*- coding: utf-8 -*-
+ """App
+
+ Automatically generated by Colab.
+
+ Original file is located at
+ https://colab.research.google.com/drive/1TdjbTSA8V5GUProQ3Bd-uYmTLXSInoWf
+ """
+
+ import gradio as gr
+ import numpy as np
+ from transformers import pipeline
+ import os
+ import time
+ import uuid  # For generating unique filenames
+
+ # Updated imports to address LangChain deprecation warnings:
+ from langchain_groq import ChatGroq
+ from langchain.schema import HumanMessage
+ from langchain.text_splitter import RecursiveCharacterTextSplitter
+ from langchain_community.vectorstores import Chroma
+ from langchain_community.embeddings import HuggingFaceEmbeddings
+ from langchain.docstore.document import Document
+
+ import chardet  # For detecting text-file encodings (listed in requirements.txt)
+
+ import fitz  # PyMuPDF for PDFs
+ import docx  # python-docx for Word files
+ import gtts  # Google Text-to-Speech library
+ from pptx import Presentation  # python-pptx for PowerPoint files
+ import re
+
+ # Initialize Whisper model for speech-to-text
+ transcriber = pipeline("automatic-speech-recognition", model="openai/whisper-base.en")
+
+ # Read the API key from the GROQ_API_KEY environment variable (store it securely; never hard-code it)
+ groq_api_key = os.getenv("GROQ_API_KEY")
+
+ # Initialize chat model
+ chat_model = ChatGroq(model_name="DeepSeek-R1-Distill-Llama-70b", api_key=groq_api_key)
+
+ # Initialize embeddings and Chroma vector store
+ embedding_model = HuggingFaceEmbeddings()
+ vectorstore = Chroma(embedding_function=embedding_model)
+
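+ # Note: with no model_name argument, HuggingFaceEmbeddings defaults to the
+ # sentence-transformers/all-mpnet-base-v2 model, which requires the
+ # sentence-transformers package to be installed at runtime.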
+ # Short-term memory for the LLM
+ chat_memory = []
+
+ # Prompt for quiz generation with added remark
+ quiz_prompt = """
+ You are an AI assistant specialized in education and assessment creation. Given an uploaded document or text, generate a quiz with a mix of multiple-choice questions (MCQs) and fill-in-the-blank questions. The quiz should be directly based on the key concepts, facts, and details from the provided material.
+ Remove all unnecessary formatting generated by the LLM, including <think> tags, asterisks, markdown formatting, and any bold or italic text, as well as **, ###, ##, and # tags.
+ For each question:
+ - Provide 4 answer choices (for MCQs), with only one correct answer.
+ - Ensure fill-in-the-blank questions focus on key terms, phrases, or concepts from the document.
+ - Include an answer key for all questions.
+ - Ensure questions vary in difficulty and encourage comprehension rather than memorization.
+ - Additionally, implement an instant feedback mechanism:
+   - When a user selects an answer, indicate whether it is correct or incorrect.
+   - If incorrect, provide a brief explanation from the document to guide learning.
+   - Ensure responses are concise and educational to enhance understanding.
+ Output Example:
+ 1. Fill in the blank: The LLM Agent framework has a central decision-making unit called the _______________________.
+ Answer: Agent Core
+ Feedback: The Agent Core is the central component of the LLM Agent framework, responsible for managing goals, tool instructions, planning modules, memory integration, and agent persona.
+ 2. What is the main limitation of LLM-based applications?
+ a) Limited token capacity
+ b) Lack of domain expertise
+ c) Prone to hallucination
+ d) All of the above
+ Answer: d) All of the above
+ Feedback: LLM-based applications have several limitations, including limited token capacity, lack of domain expertise, and being prone to hallucination, among others.
+ """
+
+ # Function to clean the AI response by removing unwanted formatting
+ def clean_response(response):
+     """Removes <think> tags, asterisks, markdown headers, and stray backslashes."""
+     cleaned_text = re.sub(r"<think>.*?</think>", "", response, flags=re.DOTALL)
+     cleaned_text = re.sub(r"(\*\*|\*)", "", cleaned_text)
+     cleaned_text = re.sub(r"^#+\s*", "", cleaned_text, flags=re.MULTILINE)
+     cleaned_text = re.sub(r"\\", "", cleaned_text)
+     return cleaned_text.strip()
+
+ # Function to generate a quiz from document content
+ def generate_quiz(content):
+     prompt = f"{quiz_prompt}\n\nDocument content:\n{content}"
+     response = chat_model.invoke([HumanMessage(content=prompt)])
+     cleaned_response = clean_response(response.content)
+     return cleaned_response
+
+ # Function to retrieve relevant documents from the vector store for a user query
+ def retrieve_documents(query):
+     results = vectorstore.similarity_search(query, k=3)
+     return [doc.page_content for doc in results]
+
+ # Function to handle chatbot interactions with short-term memory
+ def chat_with_groq(user_input):
+     try:
+         # Retrieve relevant documents for additional context
+         relevant_docs = retrieve_documents(user_input)
+         context = "\n".join(relevant_docs) if relevant_docs else "No relevant documents found."
+
+         # Construct the prompt with conversation history
+         system_prompt = "You are a helpful AI assistant. Answer questions accurately and concisely."
+         conversation_history = "\n".join(chat_memory[-10:])  # Last 10 entries (5 user/AI exchanges)
+         prompt = f"{system_prompt}\n\nConversation History:\n{conversation_history}\n\nUser Input: {user_input}\n\nContext:\n{context}"
+
+         # Call the chat model
+         response = chat_model.invoke([HumanMessage(content=prompt)])
+
+         # Clean the response to remove any unwanted formatting
+         cleaned_response_text = clean_response(response.content)
+
+         # Append to conversation history
+         chat_memory.append(f"User: {user_input}")
+         chat_memory.append(f"AI: {cleaned_response_text}")
+
+         # Convert the response to speech
+         audio_file = speech_playback(cleaned_response_text)
+
+         return cleaned_response_text, audio_file
+     except Exception as e:
+         return f"Error: {str(e)}", None
+
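+ # Note: chat_memory persists for the life of the process and grows without
+ # bound; only the last 10 entries are actually sent to the model per request.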
+ # Function to play the response as speech using gTTS
+ def speech_playback(text):
+     try:
+         # Generate a unique filename for each audio file
+         unique_id = str(uuid.uuid4())
+         audio_file = f"output_audio_{unique_id}.mp3"
+
+         # Convert text to speech
+         tts = gtts.gTTS(text, lang='en')
+         tts.save(audio_file)
+
+         # Return the path to the audio file
+         return audio_file
+     except Exception as e:
+         print(f"Error in speech_playback: {e}")
+         return None
+
+ # Function to detect a file's text encoding safely
+ def detect_encoding(file_path):
+     try:
+         with open(file_path, "rb") as f:
+             raw_data = f.read(4096)  # Sniff the first 4 KB
+             detected = chardet.detect(raw_data)
+             encoding = detected["encoding"]
+             return encoding if encoding else "utf-8"
+     except Exception:
+         return "utf-8"
+
+ # Function to extract text from PDF files
+ def extract_text_from_pdf(pdf_path):
+     try:
+         doc = fitz.open(pdf_path)
+         text = "\n".join([page.get_text("text") for page in doc])
+         return text if text.strip() else "No extractable text found."
+     except Exception as e:
+         return f"Error extracting text from PDF: {str(e)}"
+
+ # Function to extract text from Word files (.docx)
+ def extract_text_from_docx(docx_path):
+     try:
+         doc = docx.Document(docx_path)
+         text = "\n".join([para.text for para in doc.paragraphs])
+         return text if text.strip() else "No extractable text found."
+     except Exception as e:
+         return f"Error extracting text from Word document: {str(e)}"
+
+ # Function to extract text from PowerPoint files (.pptx)
+ def extract_text_from_pptx(pptx_path):
+     try:
+         presentation = Presentation(pptx_path)
+         text = ""
+         for slide in presentation.slides:
+             for shape in slide.shapes:
+                 if hasattr(shape, "text"):
+                     text += shape.text + "\n"
+         return text if text.strip() else "No extractable text found."
+     except Exception as e:
+         return f"Error extracting text from PowerPoint: {str(e)}"
+
+ # Function to process uploaded documents safely
+ def process_document(file):
+     try:
+         file_extension = os.path.splitext(file.name)[-1].lower()
+         if file_extension in [".png", ".jpg", ".jpeg"]:
+             return "Error: Images cannot be processed for text extraction."
+         if file_extension == ".pdf":
+             content = extract_text_from_pdf(file.name)
+         elif file_extension == ".docx":
+             content = extract_text_from_docx(file.name)
+         elif file_extension == ".pptx":
+             content = extract_text_from_pptx(file.name)
+         else:
+             encoding = detect_encoding(file.name)
+             with open(file.name, "r", encoding=encoding, errors="replace") as f:
+                 content = f.read()
+         # Chunk the content, index it for retrieval, and generate a quiz
+         text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
+         documents = [Document(page_content=chunk) for chunk in text_splitter.split_text(content)]
+         vectorstore.add_documents(documents)
+         quiz = generate_quiz(content)
+         return f"Document processed successfully (File Type: {file_extension}). Quiz generated:\n{quiz}"
+     except Exception as e:
+         return f"Error processing document: {str(e)}"
+
+ # Function to handle speech-to-text conversion
+ def transcribe_audio(audio):
+     sr, y = audio
+     if y.ndim > 1:
+         y = y.mean(axis=1)  # Downmix stereo to mono
+     y = y.astype(np.float32)
+     peak = np.max(np.abs(y))
+     if peak > 0:
+         y /= peak  # Normalize to [-1, 1]; guard against all-zero (silent) input
+     return transcriber({"sampling_rate": sr, "raw": y})["text"]
+
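+ # Note: the transformers ASR pipeline accepts raw audio as a dict of
+ # "sampling_rate" plus a mono float32 "raw" array, matching the (rate, data)
+ # tuple produced by Gradio's numpy-type Audio component.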
+ # Cleanup function for stale generated audio files
+ def cleanup_old_files(directory=".", age_limit=60):
+     """Delete generated audio files (output_audio_*) older than `age_limit` seconds."""
+     current_time = time.time()
+     for filename in os.listdir(directory):
+         file_path = os.path.join(directory, filename)
+         if os.path.isfile(file_path) and filename.startswith("output_audio_"):
+             file_age = current_time - os.path.getmtime(file_path)
+             if file_age > age_limit:
+                 os.remove(file_path)
+
+
+ # Gradio UI with video clip
+ with gr.Blocks() as demo:
+     gr.HTML("<h2 style='text-align: center;'>AI Tutor - We.</h2>")
+
+     # Align the image and the video side by side
+     with gr.Row():
+         with gr.Column(scale=1):  # Adjust scale to control the width ratio
+             gr.HTML("""
+             <div style="text-align: center; margin-bottom: 20px;">
+                 <img src="https://img.freepik.com/premium-photo/little-girl-is-seen-sitting-front-laptop-computer-engaged-with-nearby-robot-robot-assistant-helping-child-with-homework-ai-generated_585735-12266.jpg"
+                      style="width: 100%; height: auto; border-radius: 10px; box-shadow: 0 4px 8px rgba(0,0,0,0.2);" />
+             </div>
+             """)
+
+         with gr.Column(scale=1):  # Equal width for the video
+             gr.Video("https://github.com/lesterchia1/AI_tutor/raw/main/We%20not%20me%20video.mp4", label="Introduction Video")
+
+     # Other UI elements below
+     with gr.Row():
+         with gr.Column():
+             audio_input = gr.Audio(type="numpy", label="Record Audio")
+             transcription_output = gr.Textbox(label="Transcription")
+             user_input = gr.Textbox(label="Ask a question")
+             chat_output = gr.Textbox(label="Response")
+             audio_output = gr.Audio(label="Audio Playback")
+             submit_btn = gr.Button("Ask")
+         with gr.Column():
+             file_upload = gr.File(label="Upload a document")
+             process_status = gr.Textbox(label="Processing Status")
+
+     # Define button actions
+     submit_btn.click(chat_with_groq, inputs=user_input, outputs=[chat_output, audio_output])
+     audio_input.change(transcribe_audio, inputs=audio_input, outputs=transcription_output)
+     transcription_output.change(fn=lambda x: x, inputs=transcription_output, outputs=user_input)
+     file_upload.change(process_document, inputs=file_upload, outputs=process_status)
+
+     # Remove stale audio files on page load (demo.load runs once per load, not periodically)
+     demo.load(lambda: cleanup_old_files(directory="./", age_limit=60), inputs=[], outputs=[])
+
+ demo.launch()
apt.txt ADDED
@@ -0,0 +1 @@
+ espeak
requirements.txt ADDED
@@ -0,0 +1,14 @@
+ gradio
+ numpy
+ chromadb
+ transformers
+ groq
+ langchain
+ langchain-groq
+ langchain-community
+ pymupdf
+ python-docx
+ gtts
+ python-pptx
+ chardet
+ langdetect