question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote |
---|---|---|---|---|---|---|
79,320,289 | 2024-12-31 | https://stackoverflow.com/questions/79320289/why-cant-i-wrap-lgbm | I'm using LGBM to forecast the relative change of a numerical quantity. I'm using the MSLE (Mean Squared Log Error) loss function to optimize my model and to get the correct scaling of errors. Since MSLE isn't native to LGBM, I have to implement it myself. But lucky me, the math can be simplified a ton. This is my implementation; class MSLELGBM(LGBMRegressor): def __init__(self, **kwargs): super().__init__(**kwargs) def predict(self, X): return np.exp(super().predict(X)) def fit(self, X, y, eval_set=None, callbacks=None): y_log = np.log(y.copy()) print(super().get_params()) # This doesn't print any kwargs if eval_set: eval_set = [(X_eval, np.log(y_eval.copy())) for X_eval, y_eval in eval_set] super().fit(X, y_log, eval_set=eval_set, callbacks=callbacks) As you can see, it's very minimal. I basically just need to apply a log transform to the model target, and exponentiate the predictions to return to our own non-logarithmic world. However, my wrapper doesn't work. I call the class with; model = MSLELGBM(**lgbm_params) model.fit(data[X_cols_all], data[y_col_train]) And I get the following exception; --------------------------------------------------------------------------- KeyError Traceback (most recent call last) Cell In[31], line 38 32 callbacks = [ 33 lgbm.early_stopping(10, verbose=0), 34 lgbm.log_evaluation(period=0), 35 ] 37 model = MSLELGBM(**lgbm_params) ---> 38 model.fit(data[X_cols_all], data[y_col_train]) 40 feature_importances_df = pd.DataFrame([model.booster_.feature_importance(importance_type='gain')], columns=X_cols_all).T.sort_values(by=0, ascending=False) 41 feature_importances_df.iloc[:30] Cell In[31], line 17 15 if eval_set: 16 eval_set = [(X_eval, np.log(y_eval.copy())) for X_eval, y_eval in eval_set] ---> 17 super().fit(X, y_log, eval_set=eval_set, callbacks=callbacks) File c:\X\.venv\lib\site-packages\lightgbm\sklearn.py:1189, in LGBMRegressor.fit(self, X, y, sample_weight, init_score, eval_set, eval_names, eval_sample_weight, eval_init_score, eval_metric, feature_name, categorical_feature, callbacks, init_model) 1172 def fit( # type: ignore[override] 1173 self, 1174 X: _LGBM_ScikitMatrixLike, (...) 1186 init_model: Optional[Union[str, Path, Booster, LGBMModel]] = None, 1187 ) -> "LGBMRegressor": 1188 """Docstring is inherited from the LGBMModel.""" ... --> 765 if isinstance(params["random_state"], np.random.RandomState): 766 params["random_state"] = params["random_state"].randint(np.iinfo(np.int32).max) 767 elif isinstance(params["random_state"], np.random.Generator): KeyError: 'random_state' I have no idea how random_state is missing from the fit method, as it isnt even required for that function. I get the impression that this is a complicated software engineering issue that's above my head. Anybody knows whats up? If it's of any help, I tried illustrating what I want using a simpler non-lgbm structure; I just want to pass whatever parameters I provide to the MSLELGBM to the original LGBM, but I'm running into a ton of issues doing so. | Root Cause scikit-learn expects that each of the keyword arguments to an estimator's __init__() will exactly correspond to a public attribute on instances of the class. Per https://scikit-learn.org/stable/developers/develop.html every keyword argument accepted by __init__ should correspond to an attribute on the instance. 
Scikit-learn relies on this to find the relevant attributes to set on an estimator when doing model selection Its .get_params() method on estimators take advantage of this by inspecting the signature of __init__() to figure out which attributes to expect (scikit-learn / sklearn / base.py). lightgbm's estimators call .get_params() and then expect the key "random_state" to exist in the dictionary it returns... because that parameter is in the keyword arguments to LGBMRegressor (LightGBM / python-package / lightgbm / sklearn.py). Your estimator's __init__() does not have random_state as a keyword argument, so when self.get_params() is called it returns a dictionary that does not contain "random_state", leading to the error your observed. How to fix this If you do not need to add any other custom parameters, then just do not define an __init__() method on your subclass. Here's a minimal, reproducible example that works with lightgbm 4.5.0 and Python 3.11: import numpy as np from lightgbm import LGBMRegressor from sklearn.datasets import make_regression class MSLELGBM(LGBMRegressor): def predict(self, X): return np.exp(super().predict(X)) def fit(self, X, y, eval_set=None, callbacks=None): y_log = np.log(y.copy()) if eval_set: eval_set = [(X_eval, np.log(y_eval.copy())) for X_eval, y_eval in eval_set] super().fit(X, y_log, eval_set=eval_set, callbacks=callbacks) # modifying bias and tail_strength to ensure every value in 'y' is positive X, y = make_regression( n_samples=5_000, n_features=3, bias=500.0, tail_strength=0.001, random_state=708, ) reg = MSLELGBM(num_boost_round=5) # print params (you'll see all the LGBMRegressor params) reg.get_params() # fit the model reg.fit(X, y) If you do need to define any custom parameters, then for lightgbm<=4.5.0: add an __init__() on your subclass copy all of the parameters from the signature of lightgbm.LGBMModel.__init__() into that __init__() call super().__init__() in your subclass's __init__(), and pass it all of the keyword arguments explicitly 1 at a time with = Like this: class MSLELGBM(LGBMRegressor): # just including 'random_state' to keep it short... you # need to include more params here, depending on LightGBM version def __init__(self, random_state=None, **kwargs): super().__init__( random_state=random_state, **kwargs ) | 1 | 1 |
79,320,303 | 2024-12-31 | https://stackoverflow.com/questions/79320303/artifacts-with-pygame-when-trying-to-update-visible-sprites-only | I'm learning the basics of the pygame library and already struggling. The "game" at this point only has a player and walls. There are 2 main surfaces: "world" (the actual game map) and "screen" (which serves as a viewport for "view_src" w/ scaling & scrolling, "viewport" is the corresponding rect). Here's the problem: I want to implement at least basic optimisation and only render sprites that are actually visible, so I'm filtering the "all" group to whatever collides with the viewport. That acts as expected. But when I call the rendering functions on the "visible" ad hoc group I get artifacts whereas calling them on "all" works just fine. Here's the relevant snippet from the game loop: # clear old sprites all.clear(world, background) # this should clear the OLD position of all sprites, right? # handle input and generic game logic here if player.move(key_state, walls) != (0,0): # moves the player's rect if possible scroll_view(world, player.last_move, view_src) # shifts view_src if applicable # this does very little and should be unrelated to the issue all.update() # draw the new scene visible = pg.sprite.Group([ spr for spr in all.sprites() if view_src.colliderect(spr.rect) ]) print(visible.sprites()) # confirms the visible sprites are chosen correctly visible.draw(world) # results in drawing each sprite in its new AND old position #all.draw(world) # acts as it should if used instead scaled = pg.transform.scale(world.subsurface(view_src), viewport.size) screen.blit(scaled, viewport.topleft) pg.display.flip() (I do .empty() the "visible" group at the end of the loop) Even if I determine "visible" earlier and call visible.clear(world, background) and then go all.draw(world) I get the exact same issue, it only works if both .clear() and .draw() are called on "all". This is already after consulting an AI which told me this works just fine so hopefully a good old fashioned human can point me in the right direction. | Found the problem and the fix thanks to Kingsley's nudge. The issue: Group.clear() clears the sprites drawn by the last .draw() of that exact same group. So using a different group for .clear() and .draw() doesn't work, and the continuity it needs to function is also lost by re-assigning the "visible" group each time. The solution: Initialise "visible" before the loop, persist it between iterations and add/remove sprites as needed. Fixed code: # clear old sprites visible.clear(world, background) # clears sprites from the last .draw() # handle input and generic game logic here if player.move(key_state, walls) != (0,0): scroll_view(world, player, view_src) # "step event", update positions etc here all.update() # draw the new scene visible.empty() visible.add([ spr for spr in all if view_src.colliderect(spr.rect) ]) visible.draw(world) render_view(screen, world, view_src, viewport) # this is still what it was before | 2 | 0 |
79,316,973 | 2024-12-30 | https://stackoverflow.com/questions/79316973/improve-computational-time-and-memory-usage-of-the-calculation-of-a-large-matrix | I want to calculate a Matrix G that its elements is a scalar and are calculated as: I want to calculated this matrix for a large n > 10000, d>30. My code is below but it has a huge overhead and it still takes very long time. How can I make this computation at the fastest possible way? Without using GPU and Minimize the memory usage. import numpy as np from sklearn.gaussian_process.kernels import Matern from tqdm import tqdm from joblib import Parallel, delayed # Pre-flattened computation to minimize data transfer overhead def precompute_differences(R, Z): n, d = R.shape R_diff_flat = (R[:, None, :] - R[None, :, :]).reshape(n * n, d) Z_diff = Z[:, None, :] - Z[None, :, :] return R_diff_flat, Z_diff def compute_G_row(i, R_diff_flat, Z_diff, W, gamma_val, kernel, n, d): """ Compute the i-th row for j >= i and store them in a temporary array. """ row_values = np.zeros(n) for j in range(i, n): Z_ij = gamma_val * Z_diff[i, j].reshape(1, d) K_flat = kernel(R_diff_flat, Z_ij) K_ij = K_flat.reshape(n, n) row_values[j] = np.sum(W * K_ij) return i, row_values def compute_G(M, gamma, R, Z, nu=1.5, length_scale=1.0, use_parallel=True): """ Compute the G matrix with fewer kernel evaluations by exploiting symmetry: G[i,j] = G[j,i]. We only compute for j >= i, then mirror the result. """ R = np.asarray(R) Z = np.asarray(Z) M = np.asarray(M).reshape(-1, 1) # ensure (n,1) n, d = R.shape # Precompute data R_diff_flat, Z_diff = precompute_differences(R, Z) W = M @ M.T # Weight matrix G = np.zeros((n, n)) kernel = Matern(length_scale=length_scale, nu=nu) if use_parallel and n > 1: # Parallel computation results = Parallel(n_jobs=-1)( delayed(compute_G_row)(i, R_diff_flat, Z_diff, W, gamma, kernel, n, d) for i in tqdm(range(n), desc="Computing G matrix") ) else: # Single-threaded computation results = [] for i in tqdm(range(n), desc="Computing G matrix"): row_values = np.zeros(n) for j in range(i, n): Z_ij = gamma * Z_diff[i, j].reshape(1, d) K_flat = kernel(R_diff_flat, Z_ij) K_ij = K_flat.reshape(n, n) row_values[j] = np.sum(W * K_ij) results.append((i, row_values)) # Sort and fill final G by symmetry results.sort(key=lambda x: x[0]) for i, row_vals in results: for j in range(i, n): G[i, j] = row_vals[j] G[j, i] = row_vals[j] # mirror for symmetry # Delete auxiliary variables to save memory del R_diff_flat, Z_diff, W, kernel, results # Optional checks is_symmetric = np.allclose(G, G.T, atol=1e-8) eigenvalues = np.linalg.eigvalsh(G) is_semi_positive_definite = np.all(eigenvalues >= -1e-8) print(f"G is semi-positive definite: {is_semi_positive_definite}") print(f"G is symmetric: {is_symmetric}") # Delete all local auxiliary variables except G to save memory local_vars = list(locals().keys()) for var_name in local_vars: if var_name not in ["G"]: del locals()[var_name] return G Toy Example # Example usage: if __name__ == "__main__": __spec__ = None n = 20 d = 10 gamma = 0.9 R = np.random.rand(n, d) Z = np.random.rand(n, d) M = np.random.rand(n, 1) G = compute_G(M, gamma, R, Z, nu=1.5, length_scale=1.0, use_parallel=True) print("G computed with shape:", G.shape) | A convenient way is to note that each entry could also be written as : with above notation the computation could be much easier and: import numpy as np from tqdm import tqdm from sklearn.gaussian_process.kernels import Matern from yaspin import yaspin import time from memory_profiler import profile 
##----------------- @profile def G_einsum_block(M, gamma, R, Z, nu=1.5, length_scale=1.0, block_size=100): n, d = R.shape G = np.zeros((n, n)) Gamma = M.ravel() # Ensure shape is (n,) # Initialize the Matern kernel kernel = Matern(length_scale=length_scale, nu=nu) # with yaspin(text="Computing Matrix G", spinner="dots") as spinner: # Iterate over chunks of ell for ell_start in tqdm(range(0, n, block_size), desc="Computing G by ell-Chunks"): ell_end = min(ell_start + block_size, n) ell_indices = np.arange(ell_start, ell_end) Gamma_ell = Gamma[ell_indices] # Compute shifted points for current ell chunk # Shape: (n, block_size, d) X_ell = gamma * Z[:, np.newaxis, :] + R[ell_indices] # Iterate over chunks of m for m_start in range(0, n, block_size): m_end = min(m_start + block_size, n) m_indices = np.arange(m_start, m_end) Gamma_m = Gamma[m_indices] # Compute shifted points for current m chunk # Shape: (n, block_size, d) X_m = gamma * Z[:, np.newaxis, :] + R[m_indices] # Reshape for kernel computation # Each pair (i, ell) and (j, m) needs to be compared # We compute pairwise distances between X_ell and X_m # To vectorize, reshape to (n * block_size, d) X_i_ell = X_ell.reshape(n * (ell_end - ell_start), d) X_j_m = X_m.reshape(n * (m_end - m_start), d) # Compute the kernel matrix for the current chunks # Shape: (n * block_size, n * block_size) K_chunk = kernel(X_i_ell, X_j_m) # Reshape K_chunk to (n, ell_chunk, n, m_chunk) K_chunk = K_chunk.reshape(n, ell_end - ell_start, n, m_end - m_start) # Multiply by M for current chunks # Shape: (ell_chunk,) and (m_chunk,) # Use broadcasting in einsum # 'iljm,l,m->ij' corresponds to: # i: row index of G # j: column index of G # l: current ell chunk # m: current m chunk G += np.einsum('iljm,l,m->ij', K_chunk, Gamma_ell, Gamma_m) # spinner.ok("β") print("") # Optional checks is_symmetric = np.allclose(G, G.T, atol=1e-8) eigenvalues = np.linalg.eigvalsh(G) is_semi_positive_definite = np.all(eigenvalues >= -1e-8) print(f"G is semi-positive definite: {is_semi_positive_definite}") print(f"G is symmetric: {is_symmetric}") return G #%% ##-------------------------------------------------------------- ### --- Example usage --- #### if __name__ == "__main__": # Example dimensions n = 20 d = 10 gamma = 0.9 # Generate dummy data R = np.random.rand(n, d) Z = np.random.rand(n, d) M = np.random.rand(n, 1) # Compute G with a progress bar G = G_einsum_block(M, gamma, R, Z, nu=1.5, length_scale=1.0) print("Shape of G:", G.shape) | 1 | 2 |
79,313,502 | 2024-12-28 | https://stackoverflow.com/questions/79313502/extracting-owner-s-username-from-nested-page-on-huggingface | I am scraping the HuggingFace research forum (https://discuss.huggingface.co/c/research/7/l/latest) using Selenium. I am able to successfully extract the following attributes from the main page of the forum: Activity Date View Count Replies Count Title URL However, I am encountering an issue when trying to extract the ownerβs username from the individual topic pages. The ownerβs username is located on a nested page that is accessible via the URL found in the main pageβs topic link. For example, on the main page, I have the following HTML snippet for a topic: from selenium import webdriver from selenium.webdriver.chrome.options import Options from selenium.webdriver.common.by import By from selenium.webdriver.common.keys import Keys from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC import time # Set up Chrome options to use headless mode (for Colab) chrome_options = Options() chrome_options.add_argument("--headless") # Run in headless mode chrome_options.add_argument("--no-sandbox") chrome_options.add_argument("--disable-dev-shm-usage") chrome_options.add_argument("--disable-gpu") chrome_options.add_argument("--window-size=1920,1080") chrome_options.add_argument("--disable-infobars") chrome_options.add_argument("--disable-popup-blocking") chrome_options.add_argument("--ignore-certificate-errors") chrome_options.add_argument("--incognito") chrome_options.add_argument("user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36") chrome_options.add_experimental_option("excludeSwitches", ["enable-automation"]) chrome_options.add_experimental_option("useAutomationExtension", False) # Set the path to chromedriver explicitly (installed by apt) chrome_path = "/usr/bin/chromedriver" # Initialize the WebDriver with the updated path driver = webdriver.Chrome(options=chrome_options) # Open the HuggingFace page url = "https://discuss.huggingface.co/c/research/7/l/latest" # URL for HuggingFace Issues driver.get(url) # Wait for the page to load time.sleep(6) def scrape_huggingface_issues(): titles_and_links = [] seen_titles_and_links = set() owner = [] replies = [] views = [] activity = [] while True: try: # Find all issue rows (elements in the table) elements = driver.find_elements(By.CSS_SELECTOR, 'tr.topic-list-item') # Extract and store the titles, links, and other data for elem in elements: topic_id = elem.get_attribute("data-topic-id") if topic_id in seen_titles_and_links: continue seen_titles_and_links.add(topic_id) # Extract title and link selected_title = elem.find_element(By.CSS_SELECTOR, 'a.title.raw-link.raw-topic-link') title = selected_title.text.strip() relative_link = selected_title.get_attribute('href') # Get the relative URL from the href attribute full_link = relative_link # Construct the absolute URL (if needed) # Extract replies count try: replies_elem = elem.find_element(By.CSS_SELECTOR, 'button.btn-link.posts-map.badge-posts') replies_count = replies_elem.find_element(By.CSS_SELECTOR, 'span.number').text.strip() except: replies_count = "0" # Extract views count try: views_elem = elem.find_element(By.CSS_SELECTOR, 'td.num.views.topic-list-data') views_count = views_elem.find_element(By.CSS_SELECTOR, 'span.number').text.strip() except: views_count = "0" # Extract activity (last activity) try: activity_elem = 
elem.find_element(By.CSS_SELECTOR, 'td.num.topic-list-data.age.activity') activity_text = activity_elem.get_attribute('title').strip() except: activity_text = "N/A" # Use the helper function to get the owner info from the topic page owner_text = scrape_issue_details(relative_link) # Store the extracted data in the lists titles_and_links.append((title, full_link, owner_text, replies_count, views_count, activity_text)) seen_titles_and_links.add((title, full_link)) # Add to the seen set to avoid duplicates # Scroll down to load more content (if the forum uses infinite scroll) driver.find_element(By.TAG_NAME, "body").send_keys(Keys.END) time.sleep(3) # Adjust based on loading speed # Check if the "Next" button is available and click it try: next_button = driver.find_element(By.CSS_SELECTOR, 'a.next.page-numbers') next_button.click() time.sleep(3) # Wait for the next page to load except: # If there's no "Next" button, exit the loop print("No more pages to scrape.") break except Exception as e: print(f"Error occurred: {e}") continue return titles_and_links def scrape_issue_details(url): """ Navigate to the topic page and scrape additional details like the owner's username. """ # Go to the topic page driver.get(url) time.sleep(3) # Wait for the page to load # Extract the owner's username try: owner_elem = WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.CSS_SELECTOR, 'span.first.username.new-user'))) owner_username_fetch = owner_elem.find_element(By.CSS_SELECTOR, 'a').text.strip() owner_username = owner_elem.text.strip() # Extract the username from the link except Exception as e: owner_username = "N/A" # Default value if no owner found return owner_username # Scrape the HuggingFace issues across all pages issues = scrape_huggingface_issues() # Print the titles, links, and additional data (owner, replies, views, activity) print("Scraped Titles, Links, Owner, Replies, Views, Activity:") for i, (title, link, owner_text, replies_count, views_count, activity_text) in enumerate(issues, 1): print(f"{i}: {title} - {link} - Owner: {owner_text} - Replies: {replies_count} - Views: {views_count} - Activity: {activity_text}") # Close the browser driver.quit() Problem: I cannot fetch the ownerβs username from the individual topic page. After following the URL, I am unable to locate and extract the ownerβs username even though I know its location in the HTML. <a href="/t/model-that-can-generate-both-text-and-image-as-output/132209" role="heading" aria-level="2" class="title raw-link raw-topic-link" data-topic-id="132209">Model that can generate both text and image as output</a> The ownerβs username is located on the topicβs individual page at the following HTML snippet: <span class="first username new-user"><a href="/u/InsertOPUsername" data-user-card="InsertOPUsername" class="">InsertOPUsername</a></span> What Iβve Tried: I used driver.get(url) to navigate to the individual topic pages. I attempted to locate the username using WebDriverWait and the correct CSS selector (span.first.username.new-user a). I am successfully scraping other details like Activity, Views, and Replies from the main page but unable to retrieve the ownerβs username from the topic page. | All the data you're after comes from two API endpoints. Most of what you already have can be fetched from the frist one. If you follow the post, you'll get even more data and you'll find the posters section, there you can find your owner aka Original Poster. This is just to push you in the right direction (and no selenium needed!). 
Once you know the endpoints you can massage the data to whatever you like it to be. import requests from tabulate import tabulate API_ENDPOINT = "https://discuss.huggingface.co/c/research/7/l/latest.json?filter=latest" TRACK_ENDPOINT = "https://discuss.huggingface.co/t/{}.json?track_visit=true&forceLoad=true" HEADERS = { "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36 Edg/131.0.0.0", "Accept": "application/json", "X-Requested-With": "XMLHttpRequest" } def get_posters(track_id: str, current_session: requests.Session) -> dict: track = current_session.get(TRACK_ENDPOINT.format(track_id), headers=HEADERS) posts = track.json()["post_stream"]["posts"] return { "owner": posts[0]["username"], "owner_name": posts[0]["name"], "owner_id": posts[0]["id"], "posters": [p["name"] for p in posts], } with requests.Session() as session: response = session.get(API_ENDPOINT, headers=HEADERS) topics_data = response.json()["topic_list"]["topics"] topics = [] for topic in topics_data: posters = get_posters(topic["id"], session) topics.append( [ topic["title"], f"https://discuss.huggingface.co/t/{topic['slug']}/{topic['id']}", topic["posts_count"], topic["views"], topic["like_count"], topic["id"], posters["owner_name"], posters["owner_id"], # ", ".join(posters["posters"]), ] ) columns = ["Title", "URL", "Posts", "Views", "Likes", "ID", "Owner", "Owner ID"] table = tabulate(topics, headers=columns, tablefmt="pretty", stralign="left") print(table) You should get this table: +----------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------+-------+-------+-------+--------+------------------------+----------+ | Title | URL | Posts | Views | Likes | ID | Owner | Owner ID | +----------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------+-------+-------+-------+--------+------------------------+----------+ | Merry Christmas & We have released "Awesome-Neuro-Symbolic-Learning-with-LLM" | https://discuss.huggingface.co/t/merry-christmas-we-have-released-awesome-neuro-symbolic-learning-with-llm/133045 | 1 | 36 | 4 | 133045 | Lan-Zhe Guo | 191786 | | Why do some commits have zero insertions and zero deletions? 
| https://discuss.huggingface.co/t/why-do-some-commits-have-zero-insertions-and-zero-deletions/132603 | 1 | 12 | 0 | 132603 | Sandra | 191238 | | Model that can generate both text and image as output | https://discuss.huggingface.co/t/model-that-can-generate-both-text-and-image-as-output/132209 | 5 | 73 | 7 | 132209 | Bibhuti Bhusan Padhi | 190689 | | Using mixup on RoBERTa | https://discuss.huggingface.co/t/using-mixup-on-roberta/306 | 8 | 2228 | 8 | 306 | FRAN Valero | 576 | | Seeking Guidance on Training a Model for Generating Gregorian Chant Music | https://discuss.huggingface.co/t/seeking-guidance-on-training-a-model-for-generating-gregorian-chant-music/131700 | 3 | 21 | 4 | 131700 | Martim Ramos | 189949 | | Interest in Contributing PEFT Educational Resources - Seeking Community Input | https://discuss.huggingface.co/t/interest-in-contributing-peft-educational-resources-seeking-community-input/131143 | 3 | 30 | 6 | 131143 | Jen Wei | 188941 | | LLM for analysing JSON data | https://discuss.huggingface.co/t/llm-for-analysing-json-data/130407 | 2 | 67 | 2 | 130407 | S. Gow | 188022 | | Models for Document Image Annotation Without OCR | https://discuss.huggingface.co/t/models-for-document-image-annotation-without-ocr/129604 | 2 | 109 | 3 | 129604 | Pavel Spirin | 186986 | | Get gaierror when trying to access HF Token for login | https://discuss.huggingface.co/t/get-gaierror-when-trying-to-access-hf-token-for-login/128870 | 3 | 36 | 3 | 128870 | S. Gow | 186043 | | Evaluation metrics for BERT-like LMs | https://discuss.huggingface.co/t/evaluation-metrics-for-bert-like-lms/1256 | 5 | 4455 | 1 | 1256 | Vladimir Blagojevic | 3083 | | Introducing ClearerVoice-Studio: Your One-Stop Speech Processing Platform! | https://discuss.huggingface.co/t/introducing-clearervoice-studio-your-one-stop-speech-processing-platform/129193 | 3 | 92 | 0 | 129193 | Alibaba_Speech_Lab_SG | 186434 | | Seeking Advice on Building a Custom Virtual Try-On Model Using Pre-Existing Models | https://discuss.huggingface.co/t/seeking-advice-on-building-a-custom-virtual-try-on-model-using-pre-existing-models/128946 | 1 | 44 | 1 | 128946 | Abeer Ilyas | 186127 | | LLM Hackathon in Ecology | https://discuss.huggingface.co/t/llm-hackathon-in-ecology/128906 | 1 | 35 | 0 | 128906 | Jennifer D'Souza | 186080 | | Retrieving Meta Data on Models for Innovation Research | https://discuss.huggingface.co/t/retrieving-meta-data-on-models-for-innovation-research/128646 | 1 | 33 | 1 | 128646 | Fabian F | 185762 | | (Research/Personal) Projects Ideas | https://discuss.huggingface.co/t/research-personal-projects-ideas/71651 | 3 | 1410 | 0 | 71651 | HeHugging | 111782 | | Understanding Technical Drawings | https://discuss.huggingface.co/t/understanding-technical-drawings/78903 | 2 | 287 | 1 | 78903 | Yakoi | 121186 | | Ionic vs. React Native vs. Flutter | https://discuss.huggingface.co/t/ionic-vs-react-native-vs-flutter/128132 | 1 | 97 | 0 | 128132 | yaw | 185084 | | Choosing Benchmarks for Fine-Tuned Models in Emotion Analysis | https://discuss.huggingface.co/t/choosing-benchmarks-for-fine-tuned-models-in-emotion-analysis/127106 | 1 | 38 | 1 | 127106 | Pavol | 183654 | | I have a project Skin Lens Please can you fill the form | https://discuss.huggingface.co/t/i-have-a-project-skin-lens-please-can-you-fill-the-form/108980 | 2 | 48 | 2 | 108980 | Soopramanien | 158453 | | How does an API work? 
| https://discuss.huggingface.co/t/how-does-an-api-work/121828 | 5 | 102 | 2 | 121828 | riddhi patel | 176354 | | More expressive attention with negative weights | https://discuss.huggingface.co/t/more-expressive-attention-with-negative-weights/119667 | 2 | 252 | 4 | 119667 | AngLv | 173243 | | Biases in AI Hallucinations Based on Context | https://discuss.huggingface.co/t/biases-in-ai-hallucinations-based-on-context/117082 | 1 | 28 | 1 | 117082 | That Prommolmard | 169443 | | RAG performance | https://discuss.huggingface.co/t/rag-performance/116048 | 1 | 59 | 1 | 116048 | Salah Ghalyon | 168143 | | Gangstalkers AI harassment voice to skull | https://discuss.huggingface.co/t/gangstalkers-ai-harassment-voice-to-skull/115897 | 1 | 87 | 0 | 115897 | Andrew Cruz AKA OmegaT | 167944 | | How Pika Effects works? π€ | https://discuss.huggingface.co/t/how-pika-effects-works/115760 | 1 | 45 | 0 | 115760 | JiananZHU | 167769 | | An idea about LLMs | https://discuss.huggingface.co/t/an-idea-about-llms/115462 | 1 | 56 | 1 | 115462 | Garrett Johnson | 167279 | | Different response from different UI's | https://discuss.huggingface.co/t/different-response-from-different-uis/115192 | 3 | 49 | 2 | 115192 | Marvin Snell | 166941 | | Gradio is more than UI? | https://discuss.huggingface.co/t/gradio-is-more-than-ui/114715 | 5 | 62 | 4 | 114715 | Zebra | 166264 | | Narrative text generation | https://discuss.huggingface.co/t/narrative-text-generation/114869 | 2 | 43 | 1 | 114869 | QUANGDUC | 166472 | | Say goodbye to manual testing of your LLM-based apps β automate with EvalMy.AI beta! π | https://discuss.huggingface.co/t/say-goodbye-to-manual-testing-of-your-llm-based-apps-automate-with-evalmy-ai-beta/114533 | 1 | 38 | 1 | 114533 | Petr Pascenko | 166007 | +----------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------+-------+-------+-------+--------+------------------------+----------+ Bonus: To get more of the latest you can paginate the API by adding the page=<PAGE_VALUE> parameter to the first endpoint. For example, latest.json?page=2 | 2 | 2 |
79,319,663 | 2024-12-31 | https://stackoverflow.com/questions/79319663/fastapi-apache-409-response-from-fastapi-is-converted-to-502-what-can-be-the | I have a FastAPI application, which, in general, works fine. My setup is Apache as a proxy and FastAPI server behind it. This is the apache config: ProxyPass /fs http://127.0.0.1:8000/fs retry=1 acquire=3000 timeout=600 Keepalive=On disablereuse=ON ProxyPassReverse /fs http://127.0.0.1:8000/fs I have one endpoint that can return 409 HTTP response, if an object exists. FastAPI works correctly. I can see in logs: INFO: 172.**.0.25:0 - "PUT /fs/Automation/123.txt HTTP/1.1" 409 Conflict But the final response to the client is "502 Bad Gateway". Apache error log has a record for this: [Tue Dec 31 04:45:54.545972 2024] [proxy:error] [pid 3019178:tid 140121168807680] (32)Broken pipe: [client 172.31.0.25:63759] AH01084: pass request body failed to 127.0.0.1:8000 (127.0.0.1), referer: https://10.100.21.13/fs/view/Automation [Tue Dec 31 04:45:54.545996 2024] [proxy_http:error] [pid 3019178:tid 140121168807680] [client 172.31.0.25:63759] AH01097: pass request body failed to 127.0.0.1:8000 (127.0.0.1) from 172.31.0.25 (), referer: https://10.100.21.13/fs/view/Automation What can be the reason? Another interesting thing is that it doesn't happen for any PUT request. How can I debug this? Maybe FastAPI has to return something else, some header? Or it returns too much , some extra data? How to catch this? | So, i have found the reason. When there is file upload you need to read the input buffer in any case, even if you want to return the error. In my case i had to add try: except: to empty the buffer when exception happens. Something like try: ... my original code except Exception as e: # Empty input buffer here to avoid proxy problems await request.body() raise e | 1 | 0 |
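To make the accepted fix concrete, here is a minimal self-contained sketch of the pattern — draining the unread upload body before returning the 409 — assuming a FastAPI PUT endpoint behind the Apache proxy; the route, the `EXISTING_PATHS` set and the response shape are illustrative stand-ins, not the original application's code:

```python
from fastapi import FastAPI, HTTPException, Request

app = FastAPI()

# Stand-in for whatever existence check the real application performs.
EXISTING_PATHS = {"Automation/123.txt"}


@app.put("/fs/{path:path}")
async def put_file(path: str, request: Request):
    if path in EXISTING_PATHS:
        # Consume the body the client has already started streaming through
        # Apache; otherwise the proxy hits a broken pipe while passing the
        # request body and reports 502 instead of the 409.
        await request.body()
        raise HTTPException(status_code=409, detail="File already exists")

    data = await request.body()
    # ... persist `data` under `path` here ...
    return {"stored": path, "size": len(data)}
```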
79,316,958 | 2024-12-30 | https://stackoverflow.com/questions/79316958/mlagents-learn-help-is-giving-errors-python-3-11-3-10-3-9-3-8 | I am trying to install mlagents. I got to the part in python but after creating a virtual enviorment with pyenv and setting the local version to 3.10, 3.9, and 3.8 it works on none of them. I upgraded pip, installed mlagents, then torch,torchvision, and torchaudio. Then I tested mlagents-learn --help and then because of a error installed protobuf 3.20.3. I then tested again to get the following error (venv) D:\Unity\AI Ecosystem>mlagents-learn --help Traceback (most recent call last): File "<frozen runpy>", line 198, in _run_module_as_main File "<frozen runpy>", line 88, in _run_code File "D:\Unity\AI Ecosystem\venv\Scripts\mlagents-learn.exe\__main__.py", line 4, in <module> File "D:\Unity\AI Ecosystem\venv\Lib\site-packages\mlagents\trainers\learn.py", line 2, in <module> from mlagents import torch_utils File "D:\Unity\AI Ecosystem\venv\Lib\site-packages\mlagents\torch_utils\__init__.py", line 1, in <module> from mlagents.torch_utils.torch import torch as torch # noqa ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\Unity\AI Ecosystem\venv\Lib\site-packages\mlagents\torch_utils\torch.py", line 6, in <module> from mlagents.trainers.settings import TorchSettings File "D:\Unity\AI Ecosystem\venv\Lib\site-packages\mlagents\trainers\settings.py", line 644, in <module> class TrainerSettings(ExportableSettings): File "D:\Unity\AI Ecosystem\venv\Lib\site-packages\mlagents\trainers\settings.py", line 667, in TrainerSettings cattr.register_structure_hook( File "D:\Unity\AI Ecosystem\venv\Lib\site-packages\cattr\converters.py", line 207, in register_structure_hook self._structure_func.register_cls_list([(cl, func)]) File "D:\Unity\AI Ecosystem\venv\Lib\site-packages\cattr\dispatch.py", line 55, in register_cls_list self._single_dispatch.register(cls, handler) File "C:\Users\Ebrah\AppData\Local\Programs\Python\Python311\Lib\functools.py", line 864, in register raise TypeError( TypeError: Invalid first argument to `register()`. typing.Dict[mlagents.trainers.settings.RewardSignalType, mlagents.trainers.settings.RewardSignalSettings] is not a class or union type. I tried installing cattrs 1.5.0 but the error remains. As I said before I also tried in 3.11, 3.10, 3.9 and 3.8 and got the same error in all of them. My unity version is 2022.3.5f1 but I don't see how that would make a difference. My pyenv version is 3.1.1. I am on windows 11 and am using pyenv-win. | Try deleting your unity project and making a new one. Unity says to use conda so try that too. Use python 3.9. | 1 | 2 |
79,318,540 | 2024-12-30 | https://stackoverflow.com/questions/79318540/django-model-foreign-key-to-whichever-model-calls-it | I am getting back into Django after a few years, and am running into the following problem. I am making a system where there are 2 models; a survey, and an update. I want to make a notification model that would automatically have an object added when I add a survey object or update object, and the notification object would have a foreign key to the model object which caused it to be added. However I am running into a brick wall figuring out how I would do this, to have a model with a foreign key which can be to one of two models, which would be automatically set to the model object which creates it. Any help with this would be appreciated. I am trying to make a model that looks something like this (psuedocode): class notification(models.model): source = models.ForeignKey(to model that created it) #this is what I need help with start_date = models.DateTimeField(inherited from model that created it) end_date = models.DateTimeField(inherited from model that created it) Also, just to add some context to the question and in case I am looking at this from the wrong angle, I am wanting to do this because both surveys and updates will be displayed on the same page, so my plan is to query the notification model, and then have the view do something like this: from .models import notification notifications = notification.objects.filter(start_date__lte=now, end_date__gte=now).order_by('-start_date') for notification in notifications: if notification.__class__.__name__ == "survey_question": survey = notification.survey_question.all() question = survey.question() elif notification.__class__.__name__ == "update": update = notification.update.all() update = update.update() I am also doing this instead of combining the 2 queries and then sorting them by date as I want to have notifications for each specific user anyways, so my plan is (down the road) to have a notification created for each user. Here are my models (that I reference in the question): from django.db import models from datetime import timedelta from django.utils import timezone def tmrw(): return timezone.now() + timedelta(days=1) class update(models.Model): update = models.TextField() start_date = models.DateTimeField(default=timezone.now, null=True, blank=True) end_date = models.DateTimeField(default=tmrw, null=True, blank=True) class Meta: verbose_name = 'Update' verbose_name_plural = f'{verbose_name}s' class survey_question(models.Model): question = models.TextField() start_date = models.DateTimeField(default=timezone.now, null=True, blank=True) end_date = models.DateTimeField(default=tmrw, null=True, blank=True) class Meta: verbose_name = 'Survey' verbose_name_plural = f'{verbose_name}s' | GenericForeignKey to the rescue: A normal ForeignKey can only βpoint toβ one other model, which means that if the TaggedItem model used a ForeignKey it would have to choose one and only one model to store tags for. 
The contenttypes application provides a special field type (GenericForeignKey) which works around this and allows the relationship to be with any model from django.contrib.contenttypes.fields import GenericForeignKey from django.contrib.contenttypes.models import ContentType class notification(models.model): content_type = models.ForeignKey(ContentType, on_delete=models.CASCADE) object_id = models.PositiveIntegerField() source = GenericForeignKey("content_type", "object_id") EDIT: How to use survey_object = survery_question.objects.first() notification_for_survey = notification.objects.create(source=survey_object) update_object = update.objects.first() notification_for_update = notification.objects.create(source=update_object) | 2 | 2 |
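The question also wants the notification row created automatically whenever a survey or update is saved; a minimal sketch of that part using Django's post_save signal, assuming the survey_question/update models from the question and the notification model above are importable from the app's models module:

```python
from django.db.models.signals import post_save
from django.dispatch import receiver

from .models import notification, survey_question, update


@receiver(post_save, sender=survey_question)
@receiver(post_save, sender=update)
def create_notification(sender, instance, created, **kwargs):
    # Only react to newly created rows, not to later edits.
    if created:
        # Assigning the saved object to the GenericForeignKey fills in
        # content_type and object_id automatically.
        notification.objects.create(source=instance)
```

If notification is also given start_date/end_date columns as in the question's pseudocode, they can be copied from instance in the same create() call.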
79,319,263 | 2024-12-31 | https://stackoverflow.com/questions/79319263/why-does-geopandas-dissolve-function-keep-working-forever | All, I am trying to use the Geopandas dissolve function to aggregate a few countries; the function countries.dissolve keeps running forever. Here is a minimal script. import geopandas as gpd shape='/Volumes/TwoGb/shape/fwdshapfileoftheworld/' countries=gpd.read_file(shape+'TM_WORLD_BORDERS-0.3.shp') # Add columns countries['wmosubregion'] = '' countries['dummy'] = '' country_count = len(countries) # If the country list is empty then use all countries. country_list=['SO','SD','DJ','KM'] default = 'Null' for i in range(country_count): countries.at[i, 'wmosubregion'] = default if countries.ISO2[i] in country_list: countries.at[i, 'wmosubregion'] = "EAST_AFRICA" print(countries.ISO2[i]) region_shapes = countries.dissolve(by='wmosubregion') I am using the TM_WORLD_BORDERS-0.3 shape files, which is freely accessible. You can get the shape files (TM_WORLD_BORDERS-0.3.shp, TM_WORLD_BORDERS-0.3.dbf, TM_WORLD_BORDERS-0.3.shx, TM_WORLD_BORDERS-0.3.shp ) from the following GitHub https://github.com/rmichnovicz/Sick-Slopes/tree/master Thanks | Dissolve is working when I try it, it finishes in a few seconds. My Geopandas version is 1.0.1. import geopandas as gpd df = gpd.read_file(r"C:\Users\bera\Downloads\TM_WORLD_BORDERS-0.3.shp") df.plot(column="NAME") df2 = df.dissolve() df2.plot() There are some invalid geometries that might cause problems for you? Try fixing them: #df.geometry.is_valid.all() #np.False_ #Four geometries are invalid df.loc[~df.geometry.is_valid] # FIPS ISO2 ... LAT geometry # 23 CA CA ... 59.081 MULTIPOLYGON (((-65.61362 43.42027, -65.61972 ... # 32 CI CL ... -23.389 MULTIPOLYGON (((-67.21278 -55.89362, -67.24695... # 154 NO NO ... 61.152 MULTIPOLYGON (((8.74361 58.40972, 8.73194 58.4... # 174 RS RU ... 61.988 MULTIPOLYGON (((131.87329 42.95694, 131.82413 ... # [4 rows x 12 columns] df.geometry = df.geometry.make_valid() #df.geometry.is_valid.all() #np.True_ | 1 | 2 |
79,318,939 | 2024-12-31 | https://stackoverflow.com/questions/79318939/loaded-keras-model-throws-error-while-predicting-likely-issues-with-masking | I am currently developing and testing a RNN that relies upon a large amount of data for training, and so have attempted to separate my training and testing files. I have one file where I create, train, and save a tensorflow.keras model to a file 'model.keras' I then load this model in another file and predict some values, but get the following error: Failed to convert elements of {'class_name': '__tensor__', 'config': {'dtype': 'float64', 'value': [0.0, 0.0, 0.0, 0.0]}} to Tensor. Consider casting elements to a supported type. See https://www.tensorflow.org/api_docs/python/tf/dtypes for supported TF dtypes By the way, I have tried running model.predict with this exact same data in the file where I train the model, and it works smoothly. The model loading must be the problem, not the data used to predict. This mysterious float64 tensor is the value I passed into the masking layer. I don't understand why keras is unable to recognize this JSON object as a Tensor and apply the masking operation as such. I have included snippets of my code below, edited for clarity and brevity: model_generation.py: # Create model model = tf.keras.Sequential([ tf.keras.layers.Input((352, 4)), tf.keras.layers.Masking(mask_value=tf.convert_to_tensor(np.array([0.0, 0.0, 0.0, 0.0]))), tf.keras.layers.GRU(50, return_sequences=True, activation='tanh'), tf.keras.layers.Dropout(0.2), tf.keras.layers.GRU(50,activation='tanh'), tf.keras.layers.Dropout(0.2), tf.keras.layers.Dense(units=1, activation='sigmoid')]) # Compile Model... # Train Model... model.save('model.keras') model.predict(data) # Line works here model_testing.py model = tf.keras.models.load_model('model.keras') model.predict(data) # this line generates the error EDIT: Moved the load command into the same file as the training, still receiving the exact same error message. | That error is due to the mask_value that you pass into tf.keras.layers.Masking not getting serialized compatibly for deserialization. But because you masking layer is a tensor containing all 0s anyway, you can instead just pass a scalar value like this and it will eliminate the need to serialize a tensor while storing the model tf.keras.layers.Masking(mask_value=0.0) and it broadcasts it to effectively make it equivalent to comparing it against the tensor containing all 0s. Here is the source where the mask is applied like this ops.any(ops.not_equal(inputs, self.mask_value), axis=-1, keepdims=True) and ops.not_equal supports broadcasting. | 1 | 1 |
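For completeness, a hedged sketch of the full save/load round trip with the scalar mask_value; the compile settings and the toy input are placeholders rather than the question's real training setup:

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input((352, 4)),
    tf.keras.layers.Masking(mask_value=0.0),  # scalar instead of a tensor
    tf.keras.layers.GRU(50, return_sequences=True, activation="tanh"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.GRU(50, activation="tanh"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(units=1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.save("model.keras")

# Reloading (e.g. in the separate testing file) now deserializes cleanly,
# because no tensor has to be round-tripped through the layer config.
reloaded = tf.keras.models.load_model("model.keras")
data = np.ones((2, 352, 4), dtype="float32")
reloaded.predict(data)
```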
79,320,886 | 2024-12-31 | https://stackoverflow.com/questions/79320886/numpy-einsum-why-did-this-happen | Can you explain why this happened? import numpy as np a = np.array([[1,2], [3,4], [5,6] ]) b = np.array([[2,2,2], [2,2,2]]) print(np.einsum("xy,zx -> yx",a,b)) and output of the code is:[[ 4 12 20] [ 8 16 24]] Which means the answer is calculated like this : ββ[1*2+1*2 , 3*2+3*2 , ...] But I expected it to be calculated like this: [[1*2 , 3*2 , 5*2],[2*2 , 4*2 , 6*2]] Where did I make a mistake? | Your code is equivalent to: (a[None] * b[..., None]).sum(axis=0).T You start with a (x, y) and b (z, x). First let's align the arrays: # a[None] shape: (1, x, y) array([[[1, 2], [3, 4], [5, 6]]]) # b[..., None] shape: (z, x, 1) array([[[2], [2], [2]], [[2], [2], [2]]]) and multiply: # a[None] * b[..., None] shape: (z, x, y) array([[[ 2, 4], [ 6, 8], [10, 12]], [[ 2, 4], [ 6, 8], [10, 12]]]) sum over axis = 0 (z): # (a[None] * b[..., None]).sum(axis=0) shape: (x, y) array([[ 4, 8], [12, 16], [20, 24]]) Swap x and y: # (a[None] * b[..., None]).sum(axis=0).T shape: (y, x) array([[ 4, 12, 20], [ 8, 16, 24]]) What you want is np.einsum('yx,xy->xy', a, b): array([[ 2, 6, 10], [ 4, 8, 12]]) | 1 | 1 |
79,320,784 | 2024-12-31 | https://stackoverflow.com/questions/79320784/bot-not-responding-to-channel-posts-in-telegram-bot-api-python-telegram-bot | I'm developing a Telegram bot using python-telegram-bot to handle and reply to posts in a specific channel. The bot starts successfully and shows "Bot is running...", but it never replies to posts in the channel. Here's the relevant code for handling channel posts: async def handle_channel_post(self, update: Update, context: ContextTypes.DEFAULT_TYPE): """Handle new channel posts by adding the message link as a reply.""" try: # Get the message and channel info message = update.channel_post or update.message if not message: return # Verify this is from our target channel if message.chat.username != self.channel_username: return channel_id = message.chat.id message_id = message.message_id # Construct the message link if str(channel_id).startswith("-100"): # Private channels (or supergroups) link = f"https://t.me/c/{str(channel_id)[4:]}/{message_id}" else: # Public channels link = f"https://t.me/{self.channel_username.replace('@', '')}/{message_id}" # Create the reply text reply_text = f"View message: [Click here]({link})" # Reply to the channel post await context.bot.send_message( chat_id=channel_id, text=reply_text, reply_to_message_id=message_id, parse_mode="Markdown" ) except Exception as e: print(f"Error handling channel post: {e}") This is the main method: async def main(): BOT_TOKEN = "<MT_BOT_TOKEN>" CHANNEL_USERNAME = "@TestTGBot123" bot = ChannelBot(BOT_TOKEN, CHANNEL_USERNAME) await bot.start() I tried with another channel and different channel types but still not working. The bot is admin and also has privileges to post in the channel. | The issue is with this part of the code: if message.chat.username != self.channel_username: return The message.chat.username returns the channel username without the '@' and your self.channel.username includes '@' Try this: if message.chat.username != self.channel_username.replace("@", ""): return It removes '@' from self.channel.username and your bot should work as expected. | 3 | 2 |
79,318,200 | 2024-12-30 | https://stackoverflow.com/questions/79318200/return-placeholder-values-with-formatting-if-a-key-is-not-found | I want to silently ignore KeyErrors and instead replace them with placeholders if values are not found. For example: class Name: def __init__(self, name): self.name = name self.capitalized = name.capitalize() def __str__(self): return self.name "hello, {name}!".format(name=Name("bob")) # hello, bob! "greetings, {name.capitalized}!".format(name=Name("bob")) # greetings, Bob! # but, if no name kwarg is given... "hello, {name}!".format(age=34) # hello, {name}! "greetings, {name.capitalized}!".format(age=34) # greetings, {name.capitalized}! My goal with this is that I'm trying to create a custom localization package for personal projects (I couldn't find existing ones that did everything I wanted to). Messages would be user-customizable, but I want users to have a flawless experience, so for example, if they make a typo and insert {nmae} instead of {name}, I don't want users to have to deal with errors, but I want to instead signal to them that they made a typo by giving them the placeholder value. I found several solutions on stackoverflow, but none of them can handle attributes. My first solution was this: class Default(dict): """A dictionary that returns the key itself wrapped in curly braces if the key is not found.""" def __missing__(self, key: str) -> str: return f"{{{key}}}" But this results in an error when trying to use it with attributes: AttributeError: 'str' object has no attribute 'capitalized', it does print "hello, {name}!" with no issues. Same goes for my second solution using string.Formatter: class CustomFormatter(string.Formatter): def get_value(self, key, args, kwargs): try: value = super().get_value(key, args, kwargs) except KeyError: value = f'{{{key}}}' except AttributeError: value = f'{{{key}}}' return value formatter.format("hello, {name}!", name=Name("bob")) # hello, bob! formatter.format("greetings, {name.capitalized}!", name=Name("bob")) # greetings, Bob! formatter.format("hello, {name}!", age=42) # hello, {name}! formatter.format("greetings, {name.capitalized}!", age=42) # AttributeError: 'str' object has no attribute 'capitalized' So how could I achieve something like this? "hello, {name}!".format(name=Name("bob")) # hello, bob! "greetings, {name.capitalized}!".format(name=Name("bob")) # greetings, Bob! # but, if no name kwarg is given... "hello, {name}!".format(age=34) # hello, {name}! "greetings, {name.capitalized}!".format(age=34) # greetings, {name.capitalized}! | TL;DR The best solution is to override get_field instead of get_value in CustomFormatter: class CustomFormatter(string.Formatter): def get_field(self, field_name, args, kwargs): try: return super().get_field(field_name, args, kwargs) except (AttributeError, KeyError): return f"{{{field_name}}}", None Kuddos to @blhsing for suggesting this solution. Details The issue is that the AttributeError gets raised when formatter.get_field() is called, not in get_value(), so you also need to override get_field(). By adding this function to your CustomFormatter class, I was able to get the behaviour you want with {name.capitalized} shown when you pass name="bob" or name=34 instead of name=Name("bob"): def get_field(self, field_name, args, kwargs): try: return super().get_field(field_name, args, kwargs) except AttributeError: return f"{{{field_name}}}", None The return value is a tuple, to respect get_field's return value: a tuple with the result, and the key used. 
In action >>> formatter = CustomFormatter() >>> formatter.format("greetings, {name.capitalized}!", name="bob") 'greetings, {name.capitalized}!' >>> formatter.format("greetings, {name.capitalized}!", name=34) 'greetings, {name.capitalized}!' >>> formatter.format("greetings, {name.capitalized}!", name=Name("bob")) 'greetings, Bob!' >>> formatter.format("{name.capitalized}, you are {age} years old.", name=Name("bob")) 'Bob, you are {age} years old.' Tracing the code for deeper understanding When I added some debugging print statements, namely: class CustomFormatter(string.Formatter): def get_value(self, key, args, kwargs): print(f"get_value({key=}, {args=}, {kwargs=}") ... def get_field(self, field_name, args, kwargs): print(f"get_field({field_name=}, {args=}, {kwargs=}") ... I could see this log when using name=Name("bob"): get_field(field_name='name.capitalized', args=(), kwargs={'name': <__main__.Name object at 0x000002818AA8E8D0>} get_value(key='name', args=(), kwargs={'name': <__main__.Name object at 0x000002818AA8E8D0>} and this log with for age=34 and leaving out name: get_field(field_name='name', args=(), kwargs={'age': 34} get_value(key='name', args=(), kwargs={'age': 34} so you see it's your overriden get_value that handles the wrong key, and my overriden get_field that handles the missing attribute. Making the code more concise As @blhsing pointed out, if you also catch the KeyError in get_field, then you don't need to override get_value at all, leading to the final solution in the TL;DR above. | 2 | 2 |
79,320,041 | 2024-12-31 | https://stackoverflow.com/questions/79320041/python-flask-blueprint-parameter | I need to pass a parameter (some_url) from the main app to the blueprint using Flask This is my (oversimplified) app app = Flask(__name__) app.register_blueprint(my_bp, url_prefix='/mybp', some_url ="http....") This is my (oversimplified) blueprint my_bp = Blueprint('mybp', __name__, url_prefix='/mybp') @repositories_bp.route('/entrypoint', methods=['GET', 'POST']) def entrypoint(): some_url = ???? Not sure this is the way to go, but I parsed countless threads, I just cannot find any Info about this Thanks for your help | you can use g object for the current request which stores temporary data, or you can use session to maintain data between multiple requests which usually stores this data in the client browser as a cookie, or you can store the data in the app.config to maintain a constant value. | 1 | 0 |
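A minimal two-file sketch of the app.config option mentioned in the answer, reusing the names from the question; the config key SOME_URL and the module name my_bp_module are illustrative assumptions:

```python
# my_bp_module.py
from flask import Blueprint, current_app

my_bp = Blueprint("mybp", __name__, url_prefix="/mybp")

@my_bp.route("/entrypoint", methods=["GET", "POST"])
def entrypoint():
    # current_app resolves to whichever app registered this blueprint,
    # so the blueprint stays decoupled from the main application module.
    some_url = current_app.config["SOME_URL"]
    return f"configured url: {some_url}"

# app.py
# from my_bp_module import my_bp
from flask import Flask

app = Flask(__name__)
app.config["SOME_URL"] = "http...."  # the value to hand to the blueprint
app.register_blueprint(my_bp)
```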
79,318,743 | 2024-12-30 | https://stackoverflow.com/questions/79318743/how-to-create-combinations-from-dataframes-for-a-specific-combination-size | Say I have a dataframe with 2 columns, how would I create all possible combinations for a specific combination size? Each row of the df should be treated as 1 item in the combination rather than 2 unique separate items. I want the columns of the combinations to be appended to the right. The solution should ideally be efficient since it takes long to generate all the combinations with a large list. For example, I want to create all possible combinations with a combination size of 3. import pandas as pd df = pd.DataFrame({'A':['a','b','c','d'], 'B':['1','2','3','4']}) How would I get my dataframe to look like this? A B A B A B 0 a 1 b 2 c 3 1 a 1 b 2 d 4 2 a 1 c 3 d 4 3 b 2 c 3 d 4 | An approach is itertools to generate the combinations. Define the combination size and generate all possible combinations of rows using itertools.combinations Flatten each combination into a single list of values using itertools.chain. combination_df is created from the flattened combinations and the columns are dynamically generated to repeat 'A' and 'B' for each combination Sample import itertools combination_size = 3 combinations = list(itertools.combinations(df.values, combination_size)) combination_df = pd.DataFrame( [list(itertools.chain(*comb)) for comb in combinations], columns=[col for i in range(combination_size) for col in df.columns] ) ) EDIT : Optimisation as suggested by @ouroboros1 combination_df = pd.DataFrame( (chain.from_iterable(c) for c in combinations), columns=np.tile(df.columns, combination_size) ) Output A B A B A B 0 a 1 b 2 c 3 1 a 1 b 2 d 4 2 a 1 c 3 d 4 3 b 2 c 3 d 4 | 1 | 1 |
79,319,708 | 2024-12-31 | https://stackoverflow.com/questions/79319708/confused-by-documentation-about-behavior-of-globals-within-a-function | Per the Python documentation of globals(): For code within functions, this is set when the function is defined and remains the same regardless of where the function is called. I understood this as calling globals() from within a function returns an identical dict to the one that represented the global namespace when the function was defined, even if there have been modifications to the global namespace since then. However, my experiment below showed that my understanding is apparently incorrect. What does the documentation mean, then? (In the example below I expected the second call of foo() to give the same result as the first. Of course, if that was the case I would question the utility of globals(), but that seems to be what the documentation means.) def foo(): if 'x' in globals(): print(f"Found x: {globals()['x']}") else: global x x = 1 print(f"Not found. Set x = {x}.") foo() # Not found. Set x = 1. foo() # Found x: 1 | In fact this problem is only loosely related to the globals() builtin function but more closely related to the behaviour of mutable objects. Long story made short, your observation is correct, and the documentation is absolutely correct and accurate. The underlying cause, is that Python variables are only references to the actual objects. Let us look at an example: a = {'a': 1, 'b': 2} b = a # ok, we take a "copy" print(b) {'a': 1, 'b': 2} # no surprise here a['c'] = 3 # let us MUTATE the original object print(b) {'a': 1, 'b': 2, 'c': 3} What happens here is that both variable are references to the very same object, what can be confirmed with print(id(a), id(b)) But if we use a different object: a = {'a': 1, 'b': 2, 'c': 3} # a is now a new and distinct object # even if it has the same value a['d'] = 4 print(a, b) {'a': 1, 'b': 2, 'c': 3, 'd': 4} {'a': 1, 'b': 2, 'c': 3} b is still a reference to the original object, so the new changes to a are not accessible through b. You can confirm that they are now distinct objects with print(id(a), id(b)). The documentation is just a warning that if for any reason the global directory is changed to a new and different object(*), the function will still keep a reference of the object that existed when the function was defined. (*) AFAIK, the specification of the language has no guarantee that the global directory will be the very same object during all the program lifetime | 1 | 1 |
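The behaviour the questioner observed follows from the fact that globals() inside the function and the module namespace are one and the same object; a small sketch to check that directly:

```python
import sys

def f():
    return globals()

# The dict returned inside the function, the function's __globals__
# attribute and the module's own namespace are the very same object.
assert f() is f.__globals__ is vars(sys.modules[__name__])

x = 1                # mutate the module namespace after f was defined
assert "x" in f()    # the change is visible through that same object
```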
79,319,434 | 2024-12-31 | https://stackoverflow.com/questions/79319434/duplicate-null-columns-created-during-pivot-in-polars | I have this example dataframe in polars: df_example = pl.DataFrame( { "DATE": ["2024-11-11", "2024-11-11", "2024-11-12", "2024-11-12", "2024-11-13"], "A": [None, None, "option1", "option2", None], "B": [None, None, "YES", "YES", "NO"], } ) Which looks like this: DATE A B 0 2024-11-11 1 2024-11-11 2 2024-11-12 option1 YES 3 2024-11-12 option2 YES 4 2024-11-13 NO As you can see this is a long format dataframe. I want to have it in a wide format, meaning that I want the DATE to be unique per row and for each other column several columns have to be created. What I want to achieve is: DATE A_option1 A_option2 B_YES B_NO 2024-11-11 Null Null Null Null 2024-11-12 True True True Null 2024-11-13 Null Null Null True I have tried doing the following: df_example.pivot( index="DATE", on=["A", "B"], values=["A", "B"], aggregate_function="first" ) However, I get this error: DuplicateError: column with name 'null' has more than one occurrence Which is logical, as it tries to create a column for the Null values in columns A, and a column for the Null values in column B. I am looking for a clean solution to this problem. I know I can impute the nulls per column with something unique and then do the pivot. Or by pivoting per column and then dropping the Null columns. However, this will create unnecessary columns. I want something more elegant. | I ended up with: ( df_example.pipe( lambda df: df.group_by("DATE").agg( [ pl.col(col).eq(val).any().alias(f"{col}_{val}") for col in df.select(pl.exclude("DATE")).columns for val in df.get_column(col).unique().drop_nulls() ] ) ).sort("DATE") ) | 2 | 1 |
79,319,156 | 2024-12-31 | https://stackoverflow.com/questions/79319156/how-to-add-python-type-annotations-to-a-class-that-inherits-from-itself | I'm trying add type annotations to an ElementList object that inherits from list and can contain either Element objects or other ElementGroup objects. When I run the following code through mypy: from typing import Self class Element: pass class ElementList(list[Element | Self]): pass elements = ElementList( [ Element(), Element(), ElementList( [ Element(), Element(), ] ), ] ) I get the following error: element.py:8: error: Self type is only allowed in annotations within class definition [misc] Found 1 error in 1 file (checked 1 source file) What's the recommended way to add typing annotations to this so that mypy doesn't throw an error? | Your sample list argument to the ElementList constructor contains not just Elements and ElementLists but also actual lists, so a workaround of class ElementList(list["Element | ElementList"]): ... would not have worked, as @dROOOze pointed out in the comment, because list is not a subtype of ElementList. You can work around this limitation with a type alias, which can refer to itself without creating a subtype: class Element: pass type ElementListType[T] = Element | T | list[ElementListType[T]] class ElementList(list[ElementListType["ElementList"]]): pass elements = ElementList( [ Element(), Element(), [ Element(), ElementList( [ Element(), Element(), ] ) ], ] ) Demo with mypy here Demo with pyright here | 1 | 1 |
79,317,395 | 2024-12-30 | https://stackoverflow.com/questions/79317395/multi-columns-legend-in-geodataframe | I tried to plot Jakarta's map based on the district. fig, ax = plt.subplots(1, figsize=(4.5,10)) jakarta_mandiri_planar.plot(ax=ax, column='Kecamatan', legend=True, legend_kwds={'loc':'center left'}) leg= ax.get_legend() leg.set_bbox_to_anchor((1.04, 0.5)) I plotted the legend on the right of the map, but I think it's too long. Can I make the legend into two or three columns? If so, how? | Use the ncols keyword: df.plot(column="NAME", cmap="tab20", legend=True, figsize=(8,8)) df.plot(column="NAME", cmap="tab20", legend=True, figsize=(10,10), legend_kwds={"ncols":2, "loc":"lower left"}) | 1 | 1 |
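For a self-contained illustration of the ncols idea, here is a toy sketch with made-up district names and point geometries (not the Jakarta data), assuming matplotlib >= 3.6, where legend() accepts the ncols keyword:

```python
# Toy sketch of a multi-column categorical legend; the GeoDataFrame is made up
# (points instead of district polygons) purely to keep the example runnable.
import geopandas as gpd
from shapely.geometry import Point

gdf = gpd.GeoDataFrame(
    {"Kecamatan": [f"District {i}" for i in range(12)]},   # hypothetical names
    geometry=[Point(i % 4, i // 4) for i in range(12)],
    crs="EPSG:4326",
)
ax = gdf.plot(
    column="Kecamatan",
    legend=True,
    legend_kwds={"ncols": 3, "loc": "lower left"},  # split the legend into 3 columns
)
```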
79,315,937 | 2024-12-29 | https://stackoverflow.com/questions/79315937/in-ta-lib-cython-compiler-errors-internalerror-internal-compiler-error-com | While running a program on pycharm I am getting below error while running on pycharm using python. Unable to run the program due to below error: ERROR: Failed building wheel for TA-Lib-Precompiled ERROR: ERROR: Failed to build installable wheels for some pyproject.toml based projects (TA-Lib-Precompiled) > Package :TA-Lib-Precompiled > Python Version : Python 3.12.1 > Cython version 3.0.11 Please help in finding the solution !! Below are the logs : > Collecting TA-Lib-Precompiled Using cached TA-Lib-Precompiled-0.4.25.tar.gz (276 kB) Preparing metadata (setup.py): started Preparing metadata (setup.py): finished with status 'done' Requirement already satisfied: numpy in c:\python312\lib\site-packages (from TA-Lib-Precompiled) (1.26.4) Building wheels for collected packages: TA-Lib-Precompiled Building wheel for TA-Lib-Precompiled (setup.py): started Building wheel for TA-Lib-Precompiled (setup.py): finished with status 'error' Running setup.py clean for TA-Lib-Precompiled Failed to build TA-Lib-Precompiled ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "Cython\\Utils.py", line 129, in Cython.Utils.cached_method.wrapper File "C:\Python312\Lib\site-packages\Cython\Build\Dependencies.py", line 574, in cimports_externs_incdirs for include in self.included_files(filename): ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "Cython\\Utils.py", line 129, in Cython.Utils.cached_method.wrapper File "C:\Python312\Lib\site-packages\Cython\Build\Dependencies.py", line 556, in included_files include_path = self.context.find_include_file(include, source_file_path=filename) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Python312\Lib\site-packages\Cython\Compiler\Main.py", line 299, in find_include_file error(pos, "'%s' not found" % filename) File "C:\Python312\Lib\site-packages\Cython\Compiler\Errors.py", line 178, in error raise InternalError(message) Cython.Compiler.Errors.InternalError: Internal compiler error: '_common.pxi' not found [end of output] > note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for TA-Lib-Precompiled ERROR: ERROR: Failed to build installable wheels for some pyproject.toml based projects (TA-Lib-Precompiled) | The stable release of TA-Lib-Precompiled only has wheels for Python 3.8 - 3.11 for Linux. You can install The Windows Subsystem for Linux (WSL) which provides a Linux environment on your Windows machine and then use a supported Python version such as Python 3.11. See How to install Linux on Windows with WSL for detailed instructions on this. | 2 | 1 |
79,317,602 | 2024-12-30 | https://stackoverflow.com/questions/79317602/python-selenium-need-help-in-locating-username-and-password | i am new to selenium . i am trying to scrape financial data on tradingview. i am trying to log into https://www.tradingview.com/accounts/signin/ . i understand that i am facing a timeout issue right now, is there any way to fix this? thank you to anybody helping. much appreciated. however, i am facing alot of errors with logging in. the error i am facing is --------------------------------------------------------------------------- TimeoutException Traceback (most recent call last) <ipython-input-29-7f9f0236fad7> in <cell line: 24>() 22 # Login process (replace with your email and password) 23 # Locate the email/username field using the 'id' or 'name' attribute ---> 24 email_field = wait.until(EC.presence_of_element_located((By.ID, "id_username"))) 25 email_field.send_keys("[email protected]") # Replace with your email 26 /usr/local/lib/python3.10/dist-packages/selenium/webdriver/support/wait.py in until(self, method, message) 103 break 104 time.sleep(self._poll) --> 105 raise TimeoutException(message, screen, stacktrace) 106 107 def until_not(self, method: Callable[[D], T], message: str = "") -> Union[T, Literal[True]]: TimeoutException: Message: Stacktrace: #0 0x5677df5f58fa <unknown> #1 0x5677df106d20 <unknown> #2 0x5677df155a66 <unknown> #3 0x5677df155d01 <unknown> this is my code over here. from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC import time # Set up Selenium WebDriver for Colab options = webdriver.ChromeOptions() options.add_argument('--headless') # Run Chrome in headless mode options.add_argument('--no-sandbox') # Needed for Colab options.add_argument('--disable-dev-shm-usage') # Overcome resource limitations options.add_argument('--disable-gpu') # Disable GPU for compatibility options.add_argument('--window-size=1920x1080') # Set a default window size driver = webdriver.Chrome(options=options) # Example: Open the TradingView login page driver.get("https://www.tradingview.com/accounts/signin/") # Wait for the login page to load wait = WebDriverWait(driver, 15) # Login process (replace with your email and password) # Locate the email/username field using the 'id' or 'name' attribute email_field = wait.until(EC.presence_of_element_located((By.ID, "id_username"))) email_field.send_keys("[email protected]") # Replace with your email # Locate the password field using the 'id' or 'name' attribute password_field = driver.find_element(By.ID, "id_password") password_field.send_keys("Fs5u+exxxx1") # Replace with your password # Locate and click the login button login_button = driver.find_element(By.XPATH, "//button[@type='submit']") login_button.click() # Wait for login to complete (adjust sleep time as necessary) time.sleep(5) if "Sign In" in driver.page_source: print("Login failed. 
Check your credentials.") else: print("Login successful!") # Navigate to a chart page (e.g., btc chart) driver.get("https://www.tradingview.com/chart/kvfFlBvq/?symbol=INDEX%3ABTCUSD") time.sleep(5) # Example: Extract data from a visible container try: data_container = driver.find_element(By.CLASS_NAME, "container") print("Extracted Data:") print(data_container.text) except Exception as e: print("Failed to extract data:", e) # Close the browser driver.quit() | To locate the login form on the sign-in page, it is necessary to click the "Email" button first in order to proceed with submitting the login form. I have included the following two lines in the script to accomplish this. email_button = driver.find_element(By.XPATH, "//button[@name='Email']") email_button.click() The login form does not contain a button of type "submit." Instead, there is only a button without a specified type. To perform the login action, I used the span text "Sign in" to identify and click the appropriate button. Your code: login_button = driver.find_element(By.XPATH, "//button[@type='submit']") Updated code by me: login_button = driver.find_element(By.XPATH, "//span[text()='Sign in']") The login process was successful. However, after logging in, the system is unable to locate any container elements. I trust you will be able to address this issue. The complete code, with corrections applied, is presented below: from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC import time # Set up Selenium WebDriver for Colab options = webdriver.ChromeOptions() options.add_argument('--headless') # Run Chrome in headless mode options.add_argument('--no-sandbox') # Needed for Colab options.add_argument('--disable-dev-shm-usage') # Overcome resource limitations options.add_argument('--disable-gpu') # Disable GPU for compatibility options.add_argument('--window-size=1920x1080') # Set a default window size driver = webdriver.Chrome(options=options) # Example: Open the TradingView login page driver.get("https://www.tradingview.com/accounts/signin/") # Wait for the login page to load wait = WebDriverWait(driver, 15) # newly added by me email_button = driver.find_element(By.XPATH, "//button[@name='Email']") email_button.click() # Login process (replace with your email and password) # Locate the email/username field using the 'id' or 'name' attribute email_field = wait.until(EC.presence_of_element_located((By.ID, "id_username"))) email_field.send_keys("[email protected]") # Replace with your email # Locate the password field using the 'id' or 'name' attribute password_field = driver.find_element(By.ID, "id_password") password_field.send_keys("Fs5u+exxxx1") # Replace with your password # Locate and click the login button # Edited by me login_button = driver.find_element(By.XPATH, "//span[text()='Sign in']") login_button.click() # Wait for login to complete (adjust sleep time as necessary) time.sleep(5) if "Sign In" in driver.page_source: print("Login failed. Check your credentials.") else: print("Login successful!") # Navigate to a chart page (e.g., btc chart) driver.get("https://www.tradingview.com/chart/kvfFlBvq/? 
symbol=INDEX%3ABTCUSD") time.sleep(5) # Example: Extract data from a visible container try: data_container = driver.find_element(By.CLASS_NAME, "container") print("Extracted Data:") print(data_container.text) except Exception as e: print("Failed to extract data:", e) # Close the browser driver.quit() | 1 | 1 |
79,317,247 | 2024-12-30 | https://stackoverflow.com/questions/79317247/how-to-do-a-clean-install-of-python-from-source-in-a-docker-container-image-ge | Currently I have to create Docker images that build python from source (for example we do need two different python versions in a container, one python version for building and one for testing the application, also we need to exactly specify the python version we want to install and newer versions are not supported via apt install for example). My Problem is a.t.m. that the size of the image gets really large if you build python from source and yet I do not fully understand why. Let's take the following image as an example: # we start with prebuild python image to set system python to 3.13 FROM WWW.SOMEURL.COM/python:3.13-slim-bullseye # now we install the build dependencies required to build python from source RUN apt update -y &&\ apt upgrade -y &&\ apt-get install --no-install-recommends --yes \ build-essential \ zlib1g-dev \ libncurses5-dev \ libgdbm-dev \ libnss3-dev \ libssl-dev \ libreadline-dev \ libffi-dev \ libsqlite3-dev \ libbz2-dev \ git \ wget &&\ apt-get clean # next we altinstall another python version by building it from source RUN cd /usr/src &&\ wget "https://www.python.org/ftp/python/3.11.11/Python-3.11.11.tgz" &&\ tar xzf "Python-3.11.11.tgz" &&\ cd "Python-3.11.11" &&\ ./configure &&\ make altinstall # finally we remove the build dependencies to safe some space RUN apt-get remove --purge -y \ build-essential \ zlib1g-dev \ libncurses5-dev \ libgdbm-dev \ libnss3-dev \ libssl-dev \ libreadline-dev \ libffi-dev \ libsqlite3-dev \ libbz2-dev \ git \ wget &&\ apt-get autoremove --purge -y &&\ apt-get autoclean -y # verify installation RUN echo "DEBUG: Path to alt python: $(which python3.11) which has version $(python3.11 --version)" For me this process results in a very large image, while the python installation itself should not be that large (~150-200 MB on a local machine). However, it seems like the pure installation of python from source adds around 800MB to the image. Why is this the case? Thank you for your help! 
New Dockerfile according to answers, that greatly reduces (~50%) the final size of the image: # we start with prebuild python image to set system python to 3.13, if you dont need that you can just use any other image and perform the same steps (maybe swap altinstall to install) FROM WWW.SOMEURL.COM/python:3.13-slim-bullseye # install and remove build dependencies in a single stage RUN bash install_build_deps.sh && \ bash altinstall_python.sh && \ bash remove_build_deps.sh # verify installation RUN echo "DEBUG: Path to alt python: $(which python3.11) which has version $(python3.11 --version)" Script install_build_deps.sh (addition of removing /var/lib/apt/lists/*): apt-get update -y apt-get upgrade -y apt-get install --no-install-recommends --yes build-essential zlib1g-dev libncurses5-dev libgdbm-dev libnss3-dev libssl-dev libreadline-dev libffi-dev libsqlite3-dev libbz2-dev wget rm -rf /var/lib/apt/lists/* apt-get clean Script altinstall_python.sh (delete tarball and added files to /usr/local/src): cd "/usr/local/src" wget "https://www.python.org/ftp/python/3.11.11/Python-3.11.11.tgz" tar xzf "Python-3.11.11.tgz" cd "Python-3.11.11" ./configure make altinstall rm "Python-3.11.11.tgz" rm -r Python-3.11.11 Script remove_build_deps.sh: apt-get remove --purge -y \ build-essential \ zlib1g-dev \ libncurses5-dev \ libgdbm-dev \ libnss3-dev \ libssl-dev \ libreadline-dev \ libffi-dev \ libsqlite3-dev \ libbz2-dev \ wget &&\ apt-get autoremove --purge -y &&\ apt-get autoclean -y Thanks a lot for the help, if there are further optimizations, let me know and I will update this, if somebody wants to use it as a reference. | Research and read dockerfile best practices, for example https://docs.docker.com/build/building/best-practices/#apt-get . Remove src directory and any build aftefacts after you are done installing. Remove packages in the same stage as you install them. Additionally, you might be interested in pyenv project that streamlines python compilation. Do not use /usr/src for your stuff, it's a system directory. Research linux FHS. I usually use home directory in docker, but i guess /usr/local/src looks also fine. | 2 | 1 |
79,317,098 | 2024-12-30 | https://stackoverflow.com/questions/79317098/python-logging-filter-works-with-console-but-still-writes-to-file | I am saving the logs to a text file and displaying them to the console at the same time. I would like to apply a filter on the logs, so that some logs neither make it to the text file nor the console output. However, with this code, the logs that I would like to filter out are still being saved to the text file. The filter only seems to work on the console output. How can I apply the filter to the text file and the console output? Thank you very much import logging class applyFilter(logging.Filter): def filter(self, record): return not record.getMessage().startswith('Hello') logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s', filename='log_file.txt', filemode='a') console = logging.StreamHandler() console.addFilter(applyFilter()) console.setLevel(logging.INFO) formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s') console.setFormatter(formatter) logging.getLogger('').addHandler(console) logging.info('Hello world') | basicConfig created a FileHandler and a StreamHandler was also created and added to the logger. The filter was only applied to the StreamHandler. To filter both handlers, add the filter to the logger instead: import logging class applyFilter(logging.Filter): def filter(self, record): return not record.getMessage().startswith('Hello') logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s', filename='log_file.txt', filemode='a') console = logging.StreamHandler() # console.addFilter(applyFilter()) # not here console.setLevel(logging.INFO) formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s') console.setFormatter(formatter) logger = logging.getLogger('') logger.addFilter(applyFilter()) # here logger.addHandler(console) logging.info('Hello world') logging.info('world only') Output (console): 2024-12-30 00:44:31,702 - INFO - world only Output (log_file.txt): 2024-12-30 00:44:31,702 - INFO - world only | 1 | 0 |
79,316,851 | 2024-12-30 | https://stackoverflow.com/questions/79316851/sympy-integration-with-cosine-function-under-a-square-root | I am trying to solve the integration integrate( sqrt(1 + cos(2 * x)), (x, 0, pi) ) Clearly, through pen and paper this is not hard, and the result is: But when doing this through Sympy, something does not seem correct. I tried the sympy codes as below. from sympy import * x = symbols("x", real=True) integrate(sqrt(1 + cos(2 * x)), (x, 0, pi)).doit() It then gives me a ValueError saying something in the complex domain not defined. But I've already defined the symbol x as a variable in the real domain. Here is the full error: --------------------------------------------------------------------------- ValueError Traceback (most recent call last) Cell In[7], line 4 1 from sympy import * 3 x = symbols("x", real=True) ----> 4 integrate(sqrt(1 + cos(2 * x)), (x, 0, pi)).doit() File C:\Dev_Tools\Anaconda3\Lib\site-packages\sympy\integrals\integrals.py:1567, in integrate(meijerg, conds, risch, heurisch, manual, *args, **kwargs) 1564 integral = Integral(*args, **kwargs) 1566 if isinstance(integral, Integral): -> 1567 return integral.doit(**doit_flags) 1568 else: 1569 new_args = [a.doit(**doit_flags) if isinstance(a, Integral) else a 1570 for a in integral.args] File C:\Dev_Tools\Anaconda3\Lib\site-packages\sympy\integrals\integrals.py:499, in Integral.doit(self, **hints) 497 if reps: 498 undo = {v: k for k, v in reps.items()} --> 499 did = self.xreplace(reps).doit(**hints) 500 if isinstance(did, tuple): # when separate=True 501 did = tuple([i.xreplace(undo) for i in did]) File C:\Dev_Tools\Anaconda3\Lib\site-packages\sympy\integrals\integrals.py:710, in Integral.doit(self, **hints) 707 uneval = Add(*[eval_factored(f, x, a, b) 708 for f in integrals]) 709 try: --> 710 evalued = Add(*others)._eval_interval(x, a, b) 711 evalued_pw = piecewise_fold(Add(*piecewises))._eval_interval(x, a, b) 712 function = uneval + evalued + evalued_pw File C:\Dev_Tools\Anaconda3\Lib\site-packages\sympy\core\expr.py:956, in Expr._eval_interval(self, x, a, b) 953 domain = Interval(b, a) 954 # check the singularities of self within the interval 955 # if singularities is a ConditionSet (not iterable), catch the exception and pass --> 956 singularities = solveset(self.cancel().as_numer_denom()[1], x, 957 domain=domain) 958 for logterm in self.atoms(log): 959 singularities = singularities | solveset(logterm.args[0], x, 960 domain=domain) File C:\Dev_Tools\Anaconda3\Lib\site-packages\sympy\solvers\solveset.py:2252, in solveset(f, symbol, domain) 2250 if symbol not in _rc: 2251 x = _rc[0] if domain.is_subset(S.Reals) else _rc[1] -> 2252 rv = solveset(f.xreplace({symbol: x}), x, domain) 2253 # try to use the original symbol if possible 2254 try: File C:\Dev_Tools\Anaconda3\Lib\site-packages\sympy\solvers\solveset.py:2276, in solveset(f, symbol, domain) 2273 f = f.xreplace({d: e}) 2274 f = piecewise_fold(f) -> 2276 return _solveset(f, symbol, domain, _check=True) File C:\Dev_Tools\Anaconda3\Lib\site-packages\sympy\solvers\solveset.py:1060, in _solveset(f, symbol, domain, _check) 1057 result = Union(*[solver(m, symbol) for m in f.args]) 1058 elif _is_function_class_equation(TrigonometricFunction, f, symbol) or \ 1059 _is_function_class_equation(HyperbolicFunction, f, symbol): -> 1060 result = _solve_trig(f, symbol, domain) 1061 elif isinstance(f, arg): 1062 a = f.args[0] File C:\Dev_Tools\Anaconda3\Lib\site-packages\sympy\solvers\solveset.py:612, in _solve_trig(f, symbol, domain) 610 sol = None 611 
try: --> 612 sol = _solve_trig1(f, symbol, domain) 613 except _SolveTrig1Error: 614 try: File C:\Dev_Tools\Anaconda3\Lib\site-packages\sympy\solvers\solveset.py:688, in _solve_trig1(f, symbol, domain) 685 if g.has(x) or h.has(x): 686 raise _SolveTrig1Error("change of variable not possible") --> 688 solns = solveset_complex(g, y) - solveset_complex(h, y) 689 if isinstance(solns, ConditionSet): 690 raise _SolveTrig1Error("polynomial has ConditionSet solution") File C:\Dev_Tools\Anaconda3\Lib\site-packages\sympy\solvers\solveset.py:2284, in solveset_complex(f, symbol) 2283 def solveset_complex(f, symbol): -> 2284 return solveset(f, symbol, S.Complexes) File C:\Dev_Tools\Anaconda3\Lib\site-packages\sympy\solvers\solveset.py:2252, in solveset(f, symbol, domain) 2250 if symbol not in _rc: 2251 x = _rc[0] if domain.is_subset(S.Reals) else _rc[1] -> 2252 rv = solveset(f.xreplace({symbol: x}), x, domain) 2253 # try to use the original symbol if possible 2254 try: File C:\Dev_Tools\Anaconda3\Lib\site-packages\sympy\solvers\solveset.py:2276, in solveset(f, symbol, domain) 2273 f = f.xreplace({d: e}) 2274 f = piecewise_fold(f) -> 2276 return _solveset(f, symbol, domain, _check=True) File C:\Dev_Tools\Anaconda3\Lib\site-packages\sympy\solvers\solveset.py:1110, in _solveset(f, symbol, domain, _check) 1106 result += _solve_radical(equation, u, 1107 symbol, 1108 solver) 1109 elif equation.has(Abs): -> 1110 result += _solve_abs(f, symbol, domain) 1111 else: 1112 result_rational = _solve_as_rational(equation, symbol, domain) File C:\Dev_Tools\Anaconda3\Lib\site-packages\sympy\solvers\solveset.py:918, in _solve_abs(f, symbol, domain) 916 """ Helper function to solve equation involving absolute value function """ 917 if not domain.is_subset(S.Reals): --> 918 raise ValueError(filldedent(''' 919 Absolute values cannot be inverted in the 920 complex domain.''')) 921 p, q, r = Wild('p'), Wild('q'), Wild('r') 922 pattern_match = f.match(p*Abs(q) + r) or {} ValueError: Absolute values cannot be inverted in the complex domain. How do I properly integrate this using Sympy? | Adding a simplification in there will produce the correct result, but I'm not sure why it is having an issue in the first place. integrate(sqrt(1+cos(2*x)).simplify(), (x, 0, pi)) # 2*sqrt(2) | 5 | 3 |
79,316,346 | 2024-12-29 | https://stackoverflow.com/questions/79316346/how-to-include-exception-handling-within-a-python-pool-starmap-multiprocess | I'm using the metpy library to do weather calculations. I'm using the multiprocessing library to run them in parallel, but I get rare exceptions, which completely stop the program. I am not able to provide a minimal, reproducible example because I can't replicate the problems with the metpy library functions and because there is a huge amount of code that runs before the problem occurs that I can't put here. I want to know how to write multiprocessing code to tell the pool.starmap function to PASS if it encounters an error. The first step in my code produces an argument list, which then gets passed to the pool.starmap function, along with the metpy function (metpy.ccl, in this case). The argument list for metpy.ccl includes a list of pressure levels, air temperatures, and dew point values. ccl_pooled = pool.starmap(mpcalc.ccl, ccl_argument_list) I tried to write a generalized function that would take the metpy function I pass to it and tell it to pass when it encounters an error. def run_ccl(p,t,td): try: result = mpcalc.ccl(p,t,td) except IndexError: pass Is there a way for me to write the "run_ccl" function so I can check for errors in my original code line - something like this: ccl_pooled = pool.starmap(run_ccl, ccl_argument_list) If not, what would be the best way to do this? EDIT: To clarify, these argument lists are thousands of data points long. I want to pass on the data point that causes the problem (and enter a nan in the result, "ccl_pooled", for that data point), and keep going. | You can generalize run_ccl with a wrapper function that suppresses specified exceptions and returns NaN as a default value: from contextlib import suppress def suppressor(func, *exceptions): def wrapper(*args, **kwargs): with suppress(*exceptions): return func(*args, **kwargs) return float('nan') return wrapper with which you can then rewrite the code into something like: ccl_pooled = pool.starmap(suppressor(mpcalc.ccl, IndexError), ccl_argument_list) | 1 | 2 |
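To see what such a wrapper does on its own, outside of a pool, here is a small self-contained sketch; ccl_like is a hypothetical stand-in for mpcalc.ccl that fails for some inputs:

```python
from contextlib import suppress

def suppressor(func, *exceptions):
    def wrapper(*args, **kwargs):
        with suppress(*exceptions):
            return func(*args, **kwargs)
        return float('nan')   # reached only when one of the listed exceptions was raised
    return wrapper

def ccl_like(p, t, td):
    # hypothetical stand-in for mpcalc.ccl: raises for "bad" data points
    if td > t:
        raise IndexError("dew point above temperature")
    return p + t + td

safe_ccl = suppressor(ccl_like, IndexError)
print(safe_ccl(1000, 20, 15))   # 1035
print(safe_ccl(1000, 20, 25))   # nan for the bad data point, no crash
```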
79,316,278 | 2024-12-29 | https://stackoverflow.com/questions/79316278/is-there-a-more-elegant-rewrite-for-this-python-enum-value-of-implementation | I would like to get a value_of implementation for the StrEnum (Python 3.9.x). For example: from enum import Enum class StrEnum(str, Enum): """Enum with str values""" pass class BaseStrEnum(StrEnum): """Base Enum""" @classmethod def value_of(cls, value): try: return cls[value] except KeyError: try: return cls(value) except ValueError: return None and then can use it like this: class Fruits(BaseStrEnum): BANANA = "Banana" PEA = "Pea" APPLE = "Apple" print(Fruits.value_of('BANANA')) print(Fruits.value_of('Banana')) it is just that the nested try-except doesn't look amazing, is there a better more idiomatic rewrite? | Since upon success of the first try block the function will return and won't execute the code that follows, there is no need to nest the second try block in the error handler of the first try block to begin with: def value_of(cls, value): try: return cls[value] except KeyError: pass try: return cls(value) except ValueError: return None And since both of the error handlers are really meant to ignore the respective exceptions, you can use contextlib.suppress to simply suppress those errors: from contextlib import suppress def value_of(cls, value): with suppress(KeyError): return cls[value] with suppress(ValueError): return cls(value) # return None Note that a function returns None by default so you don't have to explicitly return None as a fallback unless you want to make it perfectly clear. | 2 | 2 |
79,316,309 | 2024-12-29 | https://stackoverflow.com/questions/79316309/how-does-this-code-execute-the-finally-block-even-though-its-never-evaluated-to | def divisive_recursion(n): try: if n <= 0: return 1 else: return n + divisive_recursion(n // divisive_recursion(n - 1)) except ZeroDivisionError: return -1 finally: if n == 2: print("Finally block executed for n=2") elif n == 1: print("Finally block executed for n=1") print(divisive_recursion(5)) Here, divisive_recursion(1) results in 1 + (1 // divisive_recursion(0)), then divisive_recursion(0) returns 1 and it goes into an infinite recursion, where divisive_recursion(1) and divisive_recursion(0) gets executed repeatedly. I expected the code to give and RecursionError due to this, which it does, but the finally blocks get executed before that somehow, I know that they get executed always but for the prints to be printed n should be equal to 1 or 2, which it never does due to the infinite recursion so how come both the print statements are printed when the condition inside them is never evaluated to be True? | In one of the comments, you ask "does that mean once the program encounters the crash, it will execute all the finally blocks upward the recursion before it finally crashes". And the answer is basically "yes". An exception isn't really a "crash", or perhaps think of it as a controlled way of crashing. Here is a simple example to illustrate, in this case where the exception is caught and handled: >>> def foo(n): ... if n == 0: ... raise RuntimeError ... try: ... foo(n - 1) ... except: ... print(f'caught exception at {n=}') ... finally: ... print(f'in finally at {n=}') ... >>> foo(5) caught exception at n=1 in finally at n=1 in finally at n=2 in finally at n=3 in finally at n=4 in finally at n=5 And perhaps even more clarifying, here is a case with an uncaught exception: >>> def foo(n): ... if n == 0: ... raise RuntimeError ... try: ... foo(n - 1) ... except ZeroDivisionError: ... print(f'caught exception at {n=}') ... finally: ... print(f'in finally at {n=}') ... >>> foo(5) in finally at n=1 in finally at n=2 in finally at n=3 in finally at n=4 in finally at n=5 Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<stdin>", line 5, in foo File "<stdin>", line 5, in foo File "<stdin>", line 5, in foo [Previous line repeated 2 more times] File "<stdin>", line 3, in foo RuntimeError | 2 | 3 |
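The examples above raise a RuntimeError explicitly; the unwinding works the same way with the RecursionError from the question. A minimal sketch of that exact scenario:

```python
# Each frame's finally clause runs while the RecursionError propagates back up,
# so the conditions for the outermost frames (n=1, n=2) do become True.
def countdown(n):
    try:
        return countdown(n + 1)      # recurse until the interpreter gives up
    finally:
        if n in (1, 2):
            print(f"finally executed for n={n}")

try:
    countdown(1)
except RecursionError:
    print("RecursionError reached the top level")
```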
79,316,399 | 2024-12-29 | https://stackoverflow.com/questions/79316399/how-do-i-remove-an-image-overlay-in-matplotlib | Using matplotlib and python, I have a grey-scale image of labeled objects, on which I want to draw a homogeneously coloured overlay image with a position and shape based on a changeable input parameter - an object identifier. Basically an outline and enhancement of on of the objects in the image. I can generate the overlay, and re-generate it correctly (I think) every time the input value changes. But I don't know how to clear the previous overlay before drawing the new one. So, in the end, the grey-scale image is overlaid with multiple overlays. This is what I have tried, and it doesn't work. 'overlay',and 'object_data' are defined and used in the calling function: def overlay_object(object_num): try: overlay except NameError: # nothing pass else: # remove old overlay for handle in overlay: handle.remove() # color the selected object componentMask = (object_data == object_num) masked = ma.masked_where(componentMask == 0, componentMask) overlay = ax.imshow(masked, 'jet', interpolation='none', alpha=0.5) return overlay Edit: This is the creation of the grey-scale image in the main program: fig, ax = plt.subplots() ax.imshow(object_data, cmap='gray') ax.axis('off') | If youβre trying to update overlay on a grayscale without accumulating overlays, you should use this approach: import matplotlib.pyplot as plt import numpy as np import numpy.ma as ma def create_interactive_overlay(object_data): """ Creates a figure with a grayscale base image and functions to update overlays. Parameters: object_data : numpy.ndarray The labeled image data where each object has a unique integer value Returns: fig : matplotlib.figure.Figure The figure object update_overlay : function Function to call to update the overlay """ # Create the figure and base image fig, ax = plt.subplots() ax.imshow(object_data, cmap='gray') ax.axis('off') # Initialize overlay as None overlay_artist = [None] # Use list to allow modification in nested function def update_overlay(object_num): """ Updates the overlay to highlight a specific object number. 
Parameters: object_num : int The object identifier to highlight """ # Remove existing overlay if it exists if overlay_artist[0] is not None: overlay_artist[0].remove() # Create mask for selected object component_mask = (object_data == object_num) masked = ma.masked_where(component_mask == 0, component_mask) # Create new overlay overlay_artist[0] = ax.imshow(masked, cmap='jet', interpolation='none', alpha=0.5) # Redraw the figure fig.canvas.draw_idle() return fig, update_overlay # Example usage: """ # Create sample data object_data = np.zeros((100, 100)) object_data[20:40, 20:40] = 1 object_data[60:80, 60:80] = 2 # Create interactive figure fig, update_overlay = create_interactive_overlay(object_data) # Update overlay for different objects update_overlay(1) # Highlight object 1 plt.pause(1) # Pause to see the change update_overlay(2) # Highlight object 2 plt.show() """ In the above solution, the overlay management is handled by: Keeping track of the overlay artist using a list (to allow modification in the nested function) Properly removing the previous overlay before creating a new one Using draw_idle() to update the figure The code is structured to separate the setup from the update functionality: create_interactive_overlay handles the initial setup update_overlay handles the dynamic updates To use this in your code, you would do something like this: fig, update_overlay = create_interactive_overlay(object_data) # When you want to highlight a different object: update_overlay(new_object_number) Hope this helps | 1 | 1 |
79,306,760 | 2024-12-25 | https://stackoverflow.com/questions/79306760/how-to-get-full-traceback-messages-when-the-open-syscall-is-banned | I am working on providing an environment for running users' untrusted python code. I use the python bindings of libseccomp library to avoid triggering unsafe system calls, and the service is running in a docker container. Here is the script that will be executed in my environment. P.S. The list of banned syscalls is from this project: https://github.com/langgenius/dify-sandbox/blob/f40de1f6bc5f87d0e847cbf52076280bf61c05d5/internal/static/python_syscall/syscalls_amd64.go import sys from seccomp import * import json import requests import datetime import math import re import os import signal import urllib.request allowed_syscalls_str = "syscall.SYS_NEWFSTATAT, syscall.SYS_IOCTL, syscall.SYS_LSEEK, syscall.SYS_GETDENTS64,syscall.SYS_WRITE, syscall.SYS_CLOSE, syscall.SYS_OPENAT, syscall.SYS_READ,syscall.SYS_FUTEX,syscall.SYS_MMAP, syscall.SYS_BRK, syscall.SYS_MPROTECT, syscall.SYS_MUNMAP, syscall.SYS_RT_SIGRETURN,syscall.SYS_MREMAP,syscall.SYS_SETUID, syscall.SYS_SETGID, syscall.SYS_GETUID,syscall.SYS_GETPID, syscall.SYS_GETPPID, syscall.SYS_GETTID,syscall.SYS_EXIT, syscall.SYS_EXIT_GROUP,syscall.SYS_TGKILL, syscall.SYS_RT_SIGACTION, syscall.SYS_IOCTL,syscall.SYS_SCHED_YIELD,syscall.SYS_SET_ROBUST_LIST, syscall.SYS_GET_ROBUST_LIST, syscall.SYS_RSEQ,syscall.SYS_CLOCK_GETTIME, syscall.SYS_GETTIMEOFDAY, syscall.SYS_NANOSLEEP,syscall.SYS_EPOLL_CREATE1,syscall.SYS_EPOLL_CTL, syscall.SYS_CLOCK_NANOSLEEP, syscall.SYS_PSELECT6,syscall.SYS_TIME,syscall.SYS_RT_SIGPROCMASK, syscall.SYS_SIGALTSTACK, syscall.SYS_CLONE,syscall.SYS_MKDIRAT,syscall.SYS_MKDIR,syscall.SYS_SOCKET, syscall.SYS_CONNECT, syscall.SYS_BIND, syscall.SYS_LISTEN, syscall.SYS_ACCEPT, syscall.SYS_SENDTO, syscall.SYS_RECVFROM,syscall.SYS_GETSOCKNAME, syscall.SYS_RECVMSG, syscall.SYS_GETPEERNAME, syscall.SYS_SETSOCKOPT, syscall.SYS_PPOLL, syscall.SYS_UNAME,syscall.SYS_SENDMSG, syscall.SYS_SENDMMSG, syscall.SYS_GETSOCKOPT,syscall.SYS_FSTAT, syscall.SYS_FCNTL, syscall.SYS_FSTATFS, syscall.SYS_POLL, syscall.SYS_EPOLL_PWAIT" allowed_syscalls_tmp = allowed_syscalls_str.split(',') L = [] for item in allowed_syscalls_tmp: item = item.strip() parts = item.split('.')[1][4:].lower() L.append(parts) # create a filter object with a default KILL action f = SyscallFilter(defaction=KILL) for item in L: f.add_rule(ALLOW, item) f.add_rule(ALLOW, 307) f.add_rule(ALLOW, 318) f.add_rule(ALLOW, 334) f.load() #User's code, triggers ZeroDivision a = 10 / 0 However, since the syscall open is banned, I can't provide the full error message for users. Is it safe to provide both open and write for users? Or is there another way to get the full traceback message? Thanks. | EDIT: You will have to grant write access to stdout and stderr. Since these files are opened as the process is started, you can selectively restrict write access to these files only without having to worry about untrusted code modifying other files. You can add write permissions to stdout and stderr in your code like this: f.add_rule(ALLOW, "open") f.add_rule(ALLOW, "close") f.add_rule(ALLOW, "write", Arg(0, EQ, sys.stdout.fileno())) f.add_rule(ALLOW, "write", Arg(0, EQ, sys.stderr.fileno())) In case you would like read access from stdin, it can be added as: f.add_rule(ALLOW, "read", Arg(0, EQ, sys.stdin.fileno())) You can see an example using these filter rules from the seccomp library source code here. 
You might also find this blog on python sandboxes useful too. Original approach: You can use the traceback library for this. It has a try/except block where you can place the user's code in try and catch and print any exceptions. This code example shows the use of this library with your example: # importing module import traceback try: a = 10/0 except: # printing stack trace traceback.print_exc() The output would be similar to: Traceback (most recent call last): File "example.py", line 5, in <module> a = 10/0 ZeroDivisionError: division by zero | 1 | 1 |
79,311,280 | 2024-12-27 | https://stackoverflow.com/questions/79311280/dask-var-and-std-with-ddof-in-groupby-context-and-other-aggregations | Suppose I want to compute variance and/or standard deviation with a non-default ddof in a groupby context; I can do: df.groupby("a")["b"].var(ddof=2) If I want that to happen together with other aggregations, I can use: df.groupby("a").agg(b_var = ("b", "var"), c_sum = ("c", "sum")) My understanding is that to be able to have a non-default ddof I should create a custom aggregation. Here is what I got so far: def var(ddof: int = 1) -> dd.Aggregation: import dask.dataframe as dd return dd.Aggregation( name="var", chunk=lambda s: (s.count(), s.sum(), (s.pow(2)).sum()), agg=lambda count, sum_, sum_sq: (count.sum(), sum_.sum(), sum_sq.sum()), finalize=lambda count, sum_, sum_sq: (sum_sq - (sum_ ** 2 / count)) / (count - ddof), ) Yet, I encounter a RuntimeError: df.groupby("a").agg({"b": var(2)}) RuntimeError('Failed to generate metadata for DecomposableGroupbyAggregation(frame=df, arg={'b': <dask.dataframe.groupby.Aggregation object at 0x7fdfb8469910>} What am I missing? Is there a better way to achieve this? Replacing s.pow(2) with s**2 also results in an error. Full script: import dask.dataframe as dd data = { "a": [1, 1, 1, 1, 2, 2, 2], "b": range(7), "c": range(10, 3, -1), } df = dd.from_dict(data, 2) def var(ddof: int = 1) -> dd.Aggregation: import dask.dataframe as dd return dd.Aggregation( name="var", chunk=lambda s: (s.count(), s.sum(), (s.pow(2)).sum()), agg=lambda count, sum_, sum_sq: (count.sum(), sum_.sum(), sum_sq.sum()), finalize=lambda count, sum_, sum_sq: (sum_sq - (sum_ ** 2 / count)) / (count - ddof), ) df.groupby("a").agg(b_var = ("b", "var"), c_sum = ("c", "sum")) # <- no issue df.groupby("a").agg(b_var = ("b", var(2)), c_sum = ("c", "sum")) # <- RuntimeError | As answered on the Dask Discourse Forum, I don't think your custom Aggregation implementation is correct. However, a simpler solution can be applied: import dask.dataframe as dd import functools data = { "a": [1, 1, 1, 1, 2, 2, 2], "b": range(7), "c": range(10, 3, -1), } df = dd.from_dict(data, 2) var_ddof_2 = functools.partial(dd.groupby.DataFrameGroupBy.var, ddof=2) df.groupby("a").agg(b_var = ("b", var_ddof_2), c_sum = ("c", "sum")) | 2 | 3
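Since the title also asks about std, the same functools.partial pattern should carry over; the following is a sketch under the assumption that DataFrameGroupBy.std accepts ddof in the same way var does:

```python
# Sketch: non-default ddof for std inside .agg(), mirroring the accepted
# answer's approach for var (assumes DataFrameGroupBy.std also takes ddof).
import functools
import dask.dataframe as dd

df = dd.from_dict(
    {"a": [1, 1, 1, 1, 2, 2, 2], "b": range(7), "c": range(10, 3, -1)}, 2
)

std_ddof_2 = functools.partial(dd.groupby.DataFrameGroupBy.std, ddof=2)
result = df.groupby("a").agg(b_std=("b", std_ddof_2), c_sum=("c", "sum"))
print(result.compute())
```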
79,314,406 | 2024-12-28 | https://stackoverflow.com/questions/79314406/n-unique-aggregation-using-duckdb-relational-api | Say I have import duckdb rel = duckdb.sql('select * from values (1, 4), (1, 5), (2, 6) df(a, b)') rel Out[3]: ┌───────┬───────┐ │ a │ b │ │ int32 │ int32 │ ├───────┼───────┤ │ 1 │ 4 │ │ 1 │ 5 │ │ 2 │ 6 │ └───────┴───────┘ I can group by a and find the mean of 'b' by doing: rel.aggregate( [duckdb.FunctionExpression('mean', duckdb.ColumnExpression('b'))], group_expr='a', ) ┌─────────┐ │ mean(b) │ │ double │ ├─────────┤ │ 4.5 │ │ 6.0 │ └─────────┘ which works wonderfully. Is there a similar way to create an "n_unique" aggregation? I'm looking for something like rel.aggregate( [duckdb.FunctionExpression('count_distinct', duckdb.ColumnExpression('b'))], group_expr='a', ) but that doesn't exist. Is there something that does? | updated. I couldn't find a proper way of doing count distinct, but you could use a combination of the array_agg() and array_unique() functions: rel.aggregate( [duckdb.FunctionExpression( 'array_unique', duckdb.FunctionExpression( 'array_agg', duckdb.ColumnExpression('b') ) )], group_expr='a', ) ┌────────────────────────────┐ │ array_unique(array_agg(b)) │ │ uint64 │ ├────────────────────────────┤ │ 1 │ │ 2 │ └────────────────────────────┘ old. You can pre-select the distinct a and b columns: ( rel.select(*[duckdb.ColumnExpression('a'), duckdb.ColumnExpression('b')]) .distinct() .aggregate( [duckdb.FunctionExpression('count', duckdb.ColumnExpression('b'))], group_expr='a', ) ) | 2 | 1
79,314,321 | 2024-12-28 | https://stackoverflow.com/questions/79314321/use-an-expression-dictionary-to-calculate-row-wise-based-on-a-column-in-polars | I want to use an expression dictionary to perform calculations for a new column. I have this Polars dataframe: df=pl.DataFrame( {"col1": ["a", "b", "a"], "x": [1,2,3], "y": [2,2,5]} ) And I have an expression dictionary: expr_dict = { "a": pl.col("x") * pl.col("y"), "b": pl.col("x"), } I want to create a column where each value is calculated based on a key in another column, but I do not know how. I want to have a result like this: >>> df.with_columns(r=pl.col("col1").apply(lambda x: expr_dict[X]) >>> shape: (3, 3) ┌──────┬─────┬─────┬─────┐ │ col1 │ x │ y │ r │ │ --- │ --- │ --- │ --- │ │ str │ i64 │ i64 │ i64 │ ╞══════╪═════╪═════╪═════╡ │ a │ 1 │ 2 │ 2 │ │ b │ 2 │ 2 │ 4 │ │ a │ 3 │ 5 │ 15 │ └──────┴─────┴─────┴─────┘ Is this possible? | Use pl.when() for the conditional expressions and pl.coalesce() to combine the conditional expressions together. df.with_columns( r = pl.coalesce( pl.when(pl.col.col1 == k).then(v) for k, v in expr_dict.items() ) ) shape: (3, 4) ┌──────┬─────┬─────┬─────┐ │ col1 │ x │ y │ r │ │ --- │ --- │ --- │ --- │ │ str │ i64 │ i64 │ i64 │ ╞══════╪═════╪═════╪═════╡ │ a │ 1 │ 2 │ 2 │ │ b │ 2 │ 2 │ 2 │ │ a │ 3 │ 5 │ 15 │ └──────┴─────┴─────┴─────┘ | 1 | 2
79,310,142 | 2024-12-26 | https://stackoverflow.com/questions/79310142/how-to-extract-sub-arrays-from-a-larger-array-with-two-start-and-two-stop-1-d-ar | I am looking for a way to vectorize the following code, # Let cube have shape (N, M, M) sub_arrays = np.empty(len(cube), 3, 3) row_start = ... # Shape (N,) and are integers in range [0, M-2] row_end = ... # Shape (N,) and are integers in range [1, M-1] col_start = ... # Shape (N,) and are integers in range [0, M-2] col_end = ... # Shape (N,) and are integers in range [1, M-1] # extract sub arrays from cube and put them in sub_arrays for i in range(len(cube)): # Note that the below is extracting a (3, 3) sub array from cube sub_arrays[i] = cube[i, row_start[i]:row_end[i], col_start[i]:col_end[i]] Instead of the loop, I would like to do something like, sub_arrays = cube[:, row_start:row_end, col_start:col_end] But this throws the exception, TypeError: only integer scalar arrays can be converted to a scalar index Is there instead some valid way to vectorize the loop? | I believe this question is a duplicate of the one about Slicing along axis with varying indices. However, since it may not be obvious, I think it's okay to provide the answer in a new context with a somewhat different approach. From what I can see, you want to extract data from the cube using a sliding window of a fixed size (3Γ3 in this case), applied to a separate slice along the first axis with varying shifts within the slices. In contrast to the previously mentioned approach using as_strided, let's use sliding_window_view this time. As a result, we get two additional axes for row_start and col_start, followed by the window dimensions. Note that row_end and col_end appear as if they are equal to the corresponding starting points increased by a fixed square window side, which is 3 in this case: from numpy.lib.stride_tricks import sliding_window_view cube_view = sliding_window_view(cube, window_shape=(3, 3), axis=(1, 2)) output = cube_view[range(cube.shape[0]), row_start, col_start].copy() That's all. But to be sure, let's compare the output with the original code, using test data: import numpy as np from numpy.random import randint from numpy.lib.stride_tricks import sliding_window_view n, m, w = 100, 10, 3 # w - square window size row_start = randint(m-w+1, size=n) col_start = randint(m-w+1, size=n) # Test cube cube = np.arange(n*m*m).reshape(n, m, m) # Data to compare with sub_arrays = np.empty((n, w, w), dtype=cube.dtype) for i in range(cube.shape[0]): sub_arrays[i] = cube[i, row_start[i]:row_start[i]+w, col_start[i]:col_start[i]+w] # Subarrays from the sliding window view cube_view = sliding_window_view(cube, window_shape=(w, w), axis=(1, 2)) output = cube_view[range(cube.shape[0]), row_start, col_start].copy() # No exceptions should occur at this step assert np.equal(output, sub_arrays).all() | 3 | 1 |
79,313,103 | 2024-12-28 | https://stackoverflow.com/questions/79313103/asof-join-with-multiple-inequality-conditions | I have two dataframes: a (~600M rows) and b (~2M rows). What is the best approach for joining b onto a, when using 1 equality condition and 2 inequality conditions on the respective columns? a_1 = b_1 a_2 >= b_2 a_3 >= b_3 I have explored the following paths so far: Polars: join_asof(): only allows for 1 inequality condition join_where() with filter(): even with a small tolerance window, the standard Polars installation runs out of rows (4.3B row limit) during the join, and the polars-u64-idx installation runs out of memory (512GB) DuckDB: ASOF LEFT JOIN: also only allows for 1 inequality condition Numba: As the above didn't work, I tried to create my own join_asof() function - see code below. It works fine but with increasing lengths of a, it becomes prohibitively slow. I tried various different configurations of for/ while loops and filtering, all with similar results. Now I'm running a bit out of ideas... What would be a more efficient way to implement this? Thank you import numba as nb import numpy as np import polars as pl import time @nb.njit(nb.int32[:](nb.int32[:], nb.int32[:], nb.int32[:], nb.int32[:], nb.int32[:], nb.int32[:], nb.int32[:]), parallel=True) def join_multi_ineq(a_1, a_2, a_3, b_1, b_2, b_3, b_4): output = np.zeros(len(a_1), dtype=np.int32) for i in nb.prange(len(a_1)): for j in range(len(b_1) - 1, -1, -1): if a_1[i] == b_1[j]: if a_2[i] >= b_2[j]: if a_3[i] >= b_3[j]: output[i] = b_4[j] break return output length_a = 5_000_000 length_b = 2_000_000 start_time = time.time() output = join_multi_ineq(a_1=np.random.randint(1, 1_000, length_a, dtype=np.int32), a_2=np.random.randint(1, 1_000, length_a, dtype=np.int32), a_3=np.random.randint(1, 1_000, length_a, dtype=np.int32), b_1=np.random.randint(1, 1_000, length_b, dtype=np.int32), b_2=np.random.randint(1, 1_000, length_b, dtype=np.int32), b_3=np.random.randint(1, 1_000, length_b, dtype=np.int32), b_4=np.random.randint(1, 1_000, length_b, dtype=np.int32)) print(f"Duration: {(time.time() - start_time):.2f} seconds") | Using Numba here is a good idea since the operation is particularly expensive. That being said, the complexity of the algorithm is O(nΒ²) though it is not easy to do much better (without making the code much more complex). Moreover, the array b_1, which might not fit in the L3 cache, is fully read 5_000_000 times making the code rather memory bound. We can strongly speed up the code by building an index so not to travel the whole array b_1, but only the values where a_1[i] == b_1[j]. This is not enough to improve the complexity since a lot of j values fulfil this condition. We can improve the (average) complexity by building a kind of tree for all nodes of the index but in practice, this makes the code much more complex and the time to build the tree would be so big that it actually does not worth doing that in practice. Indeed, a basic index is enough to strongly reduce the execution time on the provided random dataset (with uniformly distributed numbers). 
Here is the resulting code: import numba as nb import numpy as np import time length_a = 5_000_000 length_b = 2_000_000 a_1=np.random.randint(1, 1_000, length_a, dtype=np.int32) a_2=np.random.randint(1, 1_000, length_a, dtype=np.int32) a_3=np.random.randint(1, 1_000, length_a, dtype=np.int32) b_1=np.random.randint(1, 1_000, length_b, dtype=np.int32) b_2=np.random.randint(1, 1_000, length_b, dtype=np.int32) b_3=np.random.randint(1, 1_000, length_b, dtype=np.int32) b_4=np.random.randint(1, 1_000, length_b, dtype=np.int32) IntList = nb.types.ListType(nb.types.int32) @nb.njit(nb.int32[:](nb.int32[:], nb.int32[:], nb.int32[:], nb.int32[:], nb.int32[:], nb.int32[:], nb.int32[:]), parallel=True) def join_multi_ineq_fast(a_1, a_2, a_3, b_1, b_2, b_3, b_4): output = np.zeros(len(a_1), dtype=np.int32) b1_indices = nb.typed.Dict.empty(key_type=nb.types.int32, value_type=IntList) for j in range(len(b_1)): val = b_1[j] if val in b1_indices: b1_indices[val].append(j) else: lst = nb.typed.List.empty_list(item_type=np.int32) lst.append(j) b1_indices[val] = lst kmean = 0 for i in nb.prange(len(a_1)): if a_1[i] in b1_indices: indices = b1_indices[a_1[i]] v2 = a_2[i] v3 = a_3[i] for k in range(len(indices) - 1, -1, -1): j = indices[np.uint32(k)] #assert a_1[i] == b_1[j] if v2 >= b_2[j] and v3 >= b_3[j]: output[i] = b_4[j] break return output %time join_multi_ineq_fast(a_1, a_2, a_3, b_1, b_2, b_3, b_4) Note that, in average, only 32 k values are tested (which is reasonable enough not to build a more efficient/complicated data structure). Also please note that the result is strictly identical to the one provided by the naive implementation. Benchmark Here are results on my i5-9600KF CPU (6 cores): Roman's code: >120.00 sec (require a HUGE amount of RAM: >16 GiB) Naive Numba code: 24.85 sec This implementation: 0.83 sec <----- Thus, this implementation is about 30 times faster than the initial code. | 5 | 2 |
79,313,133 | 2024-12-28 | https://stackoverflow.com/questions/79313133/sqlalchemy-one-or-more-mappers-failed-to-initialize | I know this Question has been asked a lot and believe me I checked the answers and to me my code looks fine even tough it gives error so it's not. Basically, I was trying to set up a relationship between two Entities: User and Workout. from sqlalchemy import Integer,VARCHAR,TIMESTAMP from sqlalchemy.orm import mapped_column,relationship from sqlalchemy.sql import func from app.schemas.baseschema import Base class User(Base): __tablename__="users" id=mapped_column(Integer,primary_key=True,autoincrement=True) username=mapped_column(VARCHAR(255),unique=True,nullable=False) email=mapped_column(VARCHAR(50),unique=True,nullable=False) created_at=mapped_column(TIMESTAMP(timezone=True),default=func.current_timestamp()) updated_at=mapped_column(TIMESTAMP(timezone=True)) password_hash=mapped_column(VARCHAR(255),nullable=False) workouts=relationship("Workout",back_populates="user") from sqlalchemy import Integer,DATE,TEXT,ForeignKey from sqlalchemy.orm import mapped_column,relationship from sqlalchemy.sql import func from app.schemas.baseschema import Base from sqlalchemy.schema import ForeignKeyConstraint class Workout(Base): __tablename__="workouts" id=mapped_column(Integer,primary_key=True,autoincrement=True) date=mapped_column(DATE,default=func.current_date) notes=mapped_column(TEXT) user_id=mapped_column(Integer,ForeignKey("users.id"),nullable=False) user=relationship("User",back_populates="workouts") the error I'm getting is this one: InvalidRequestError("When initializing mapper Mapper[User(users)], expression 'Workout' failed to locate a name ('Workout'). If this is a class name, consider adding this relationship() to the <class 'app.schemas.userschemas.User'> class after both dependent classes have been defined."). Can someone help me identify the issue? To me, it looks like there's the Workout class and that it has an user field. I never used sql alchemy and I'm new to Python as well. I checked both classes to see If i spelled the relationship wrong but it looked fine to me. I also tried to compare other answers given here to the same problem and tried to contextualize them to my situation but I didn't succeed | This is sort of a weird problem that I have not seen a perfect solution to. SQLAlchemy allows this "deferred" referencing of other models/etc by str name so that you don't end up with circular imports, ie. User must import Workout and Workout must import User. The problem that happens is that by not directly referencing them they might not ever be loaded/executed and do not end up in the registry and in this example sqlalchemy cannot find "Workout". Some options to mitigate this: Put all the models in the same file, if you import User then Workout will also be executed and included in the registry because it the whole module is loaded and it exists in the same module. (This is the easiest) """ models.py """ class User(Base): #... pass class Workout(Base): #... pass Import all the models into a "middle" module and use models from that registry therefore forcing all the models to be loaded/registered. (Now you have to remember to do this song-and-dance every time you make a new class) """ models/classes.py """ from .user import User from .workout import Workout #... """ handlers.py """ # Load Workout as a side-effect. 
from .models.classes import User def handle_user_request(request): return to_json([u.id for u in request.db.scalars(select(User))]) Carefully import only the necessary models (this is not a great solution) Another option I don't know about | 2 | 0 |
79,313,343 | 2024-12-28 | https://stackoverflow.com/questions/79313343/how-to-fix-setuptools-scm-file-finders-git-listing-git-files-failed | I am using pyproject.toml to build a package. I use setuptools_scm to automatically determine the version number. I use python version 3.11.2, setuptools 66.1.1 and setuptools-scm 8.1.0. Here are the relevant parts of pyproject.toml # For a discussion on single-sourcing the version, see # https://packaging.python.org/guides/single-sourcing-package-version/ dynamic = ["version"] [tool.setuptools_scm] # can be empty if no extra settings are needed, presence enables setuptools-scm I build the project with python3 -m build When I run the build command, I see ERROR setuptools_scm._file_finders.git listing git files failed - pretending there aren't any What I've Tried: There is a .git directory at the root of my project. It's readable by all users. Git is installed and accessible from my PATH. I've committed changes to ensure there's Git history available. How can I fix this error? Are there additional configurations or checks I should perform to ensure setuptools_scm can correctly interact with Git for version determination? Reproducible example cd /tmp/ mkdir setuptools_scm_example cd setuptools_scm_example git init touch .gitignore git add . git commit -m "Initial commit" Add the following to pyproject.toml [build-system] requires = ["setuptools>=61.0", "setuptools_scm>=7.0"] build-backend = "setuptools.build_meta" [project] name = "example_package" dynamic = ["version"] [tool.setuptools_scm] # No additional configuration needed, but can add if needed Create and build a python package mkdir -p example_package touch example_package/__init__.py echo "print('Hello from example package')" > example_package/__init__.py python3 -m build I see the error ERROR setuptools_scm._file_finders.git listing git files failed - pretending there aren't any | python3 -m build builds in 2 phases: 1st it builds sdist and then it builds wheel from the sdist in an isolated environment where there is no .git directory. It doesn't matter because at the wheel building phase version is already set in sdist and build gets the version from sdist, not from setuptools_scm. In short: you may safely ignore the error. Reference: https://github.com/pypa/setuptools-scm/issues/997 . Found in https://github.com/pypa/setuptools-scm/issues?q=is%3Aissue+setuptools_scm._file_finders.git Another approach to try: prevent build isolation, install build dependencies into the current environment and build sdist and wheel explicitly: python3 -m pip install build setuptools-scm python3 -m build --no-isolation --sdist --wheel | 1 | 1 |
79,313,112 | 2024-12-28 | https://stackoverflow.com/questions/79313112/combine-two-pandas-dataframes-side-by-side-with-resulting-length-being-maxdf1 | Essentially, what I described in the title. I am trying to combine two dataframes (i.e. df1 & df2) where they have different amounts of columns (df1=3, df2=8) with varying row lengths. (The varying row lengths stem from me having a script that breaks main two excel lists into blocks based on a date condition). My goal is to combine the two length-varying dataframes into one dataframe, where they both start at index 0 instead of one after the other. What is currently happening: A B C D 0 1 2 nan nan 1 3 4 nan nan 2 nan nan 5 6 3 nan nan 7 8 4 nan nan 9 10 This is how I would like it to be: A B C D 0 1 2 5 6 1 3 4 7 8 2 nan nan 9 10 I tried many things, but this is the last code that worked (but with wrong results): import pandas as pd hours_df = pd.read_excel("hours.xlsx").fillna("") hours_columns = hours_df.columns material_df = pd.read_excel("material.xlsx").fillna("") material_df = material_df.rename(columns={'Date': 'Material Date'}) material_columns = material_df.columns breaker = False temp = [] combined_df = pd.DataFrame() last_date = "1999-01-01" for _, row in hours_df.iterrows(): if row["Date"] != "": block_df = pd.DataFrame(temp, columns=hours_columns) if temp: cell_a1 = block_df.iloc[0,0] filtered_df = material_df.loc[ (material_df["Material Date"] < cell_a1) & (material_df["Material Date"] >= last_date)] last_date = cell_a1 combined_block = pd.concat([block_df, filtered_df], axis=1) combined_df = pd.concat([combined_df, combined_block], ignore_index=True) temp = [] temp.append(row) if temp: block_df = pd.DataFrame(temp, columns=hours_columns) combined_df = pd.concat([combined_df, block_df], ignore_index=True) print(combined_df) I am not getting any errors. Just stacked output -- like the one I showed above. | Your issue arises because you are concatenating dataframes vertically rather than horizontally. To achieve the desired output, you need to align rows from df1 and df2 with the same index and then concatenate horizontally. Hereβs the updated code that would produce the output you want. I have added comments on the places where I've made the changes. import pandas as pd # Loading dataframes hours_df = pd.read_excel("hours.xlsx").fillna("") material_df = pd.read_excel("material.xlsx").fillna("") material_df = material_df.rename(columns={'Date': 'Material Date'}) temp = [] combined_df = pd.DataFrame() last_date = "1999-01-01" for _, row in hours_df.iterrows(): if row["Date"] != "": block_df = pd.DataFrame(temp, columns=hours_df.columns) if temp: # Filter material_df based on the date range first_date_in_block = block_df.iloc[0, 0] filtered_df = material_df.loc[ (material_df["Material Date"] < first_date_in_block) & (material_df["Material Date"] >= last_date) ] last_date = first_date_in_block # Reset indices for horizontal alignment block_df.reset_index(drop=True, inplace=True) filtered_df.reset_index(drop=True, inplace=True) # Concatenate horizontally combined_block = pd.concat([block_df, filtered_df], axis=1) combined_df = pd.concat([combined_df, combined_block], ignore_index=True) temp = [] temp.append(row) # Handling the remaining block if temp: block_df = pd.DataFrame(temp, columns=hours_df.columns) combined_df = pd.concat([combined_df, block_df], ignore_index=True) print(combined_df) | 4 | 3 |
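The core of the fix in isolation — a minimal sketch with made-up values shaped like the question's example, showing that resetting both indices before a horizontal concat lines the frames up from row 0 and pads the shorter one with NaN:

```python
import pandas as pd

df1 = pd.DataFrame({'A': [1, 3], 'B': [2, 4]})                           # 2 rows
df2 = pd.DataFrame({'C': [5, 7, 9], 'D': [6, 8, 10]}, index=[5, 6, 7])   # 3 rows, offset index

out = pd.concat([df1.reset_index(drop=True),
                 df2.reset_index(drop=True)], axis=1)
print(out)
#      A    B  C   D
# 0  1.0  2.0  5   6
# 1  3.0  4.0  7   8
# 2  NaN  NaN  9  10
```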
79,312,644 | 2024-12-27 | https://stackoverflow.com/questions/79312644/extracting-substring-between-optional-substrings | I need to extract a substring which is between two other substrings. But I would like to make the border substrings optional - if no substrings found then the whole string should be extracted. patt = r"(?:bc)?(.*?)(?:ef)?" a = re.sub(patt, r"\1", "bcdef") # d - as expected a = re.sub(patt, r"\1", "abcdefg") # adg - as expected # I'd like to get `d` only without `a` and `g` # Trying to remove `a`: patt = r".*(?:bc)?(.*?)(?:ef)?" a = re.sub(patt, r"\1", "bcdef") # empty !!! a = re.sub(patt, r"\1", "abcdef") # empty !!! # make non-greedy patt = r".*?(?:bc)?(.*?)(?:ef)?" a = re.sub(patt, r"\1", "bcdef") # d - as expected a = re.sub(patt, r"\1", "abcdef") # `ad` instead of `d` - `a` was not captured # make `a` non-captured patt = r"(?:.*?)(?:bc)?(.*?)(?:ef)?" a = re.sub(patt, r"\1", "abcdef") # ad !!! `a` still not captured I also tried to use re.search without any success. How can I extract d only (a substring between optional substrings bc and ef) from abcdefg? The same pattern should return hij when applied to hij. | By making the bc and ef patterns optional, you'll get into situations where the one is matched, while the other is not. Yet, you'd need both of them or neither. The requirement that you need the whole input to match when these delimiters are not present really overcomplicates it. Realise that if there is no match, sub will not alter the input, and so that would actually achieve the desired result. In other words, don't make these delimiter patterns optional -- make them mandatory. When there is a match, you'll want to replace all of the input with the captured group. This means you should also match what follows ef, so it gets replaced (removed) too. Bringing all that together, you could use: patt = r".*?bc(.*?)ef.*" Be aware that this will only match the first occurrence of the bc...ef pattern. If the input string has more occurrences of those, the sub call will only return the first delimited text. | 3 | 3 |
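A quick check of the suggested pattern against the strings from the question:

```python
import re

patt = r".*?bc(.*?)ef.*"

print(re.sub(patt, r"\1", "bcdef"))    # d
print(re.sub(patt, r"\1", "abcdefg"))  # d   -- 'a' and 'g' are consumed and removed
print(re.sub(patt, r"\1", "hij"))      # hij -- no match, so the input is returned unchanged
```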
79,312,133 | 2024-12-27 | https://stackoverflow.com/questions/79312133/getting-all-leaf-words-reverse-stemming-into-one-python-list | On the same lines as the solution provided in this link, I am trying to get all leaf words of one stem word. I am using the community-contributed (@Divyanshu Srivastava) package get_word_forms Imagine I have a shorter sample word list as follows: my_list = [' jail', ' belief',' board',' target', ' challenge', ' command'] If I work it manually, I do the following (which is go word-by-word, which is very time-consuming if I have a list of 200 words): get_word_forms("command") and get the following output: {'n': {'command', 'commandant', 'commandants', 'commander', 'commanders', 'commandership', 'commanderships', 'commandment', 'commandments', 'commands'}, 'a': set(), 'v': {'command', 'commanded', 'commanding', 'commands'}, 'r': set()} 'n' is noun, 'a' is adjective, 'v' is verb, and 'r' is adverb. If I try to reverse-stem the entire list in one go: [get_word_forms(word) for word in sample] I fail at getting any output: [{'n': set(), 'a': set(), 'v': set(), 'r': set()}, {'n': set(), 'a': set(), 'v': set(), 'r': set()}, {'n': set(), 'a': set(), 'v': set(), 'r': set()}, {'n': set(), 'a': set(), 'v': set(), 'r': set()}, {'n': set(), 'a': set(), 'v': set(), 'r': set()}, {'n': set(), 'a': set(), 'v': set(), 'r': set()}, {'n': set(), 'a': set(), 'v': set(), 'r': set()}] I think I am failing at saving the output to the dictionary. Eventually, I would like my output to be a list without breaking it down into noun, adjective, adverb, or verb: something like: ['command','commandant','commandants', 'commander', 'commanders', 'commandership', 'commanderships','commandment', 'commandments', 'commands','commanded', 'commanding', 'commands', 'jail', 'jailer', 'jailers', 'jailor', 'jailors', 'jails', 'jailed', 'jailing'.....] .. and so on. | One solution using nested list comprehensions after stripping forgotten spaces: all_words = [setx for word in my_list for setx in get_word_forms(word.strip()).values() if len(setx)] # Flatten the list of sets all_words = [word for setx in all_words for word in setx] # Remove the repetitions and sort the set all_words = sorted(set(all_words)) print(all_words) ['belief', 'beliefs', 'believabilities', 'believability', 'believable', 'believably', 'believe', 'believed', 'believer', 'believers', 'believes', 'believing', 'board', 'boarded', 'boarder', 'boarders', 'boarding', 'boards', 'challenge', 'challengeable', 'challenged', 'challenger', 'challengers', 'challenges', 'challenging', 'command', 'commandant', 'commandants', 'commanded', 'commander', 'commanders', 'commandership', 'commanderships', 'commanding', 'commandment', 'commandments', 'commands', 'jail', 'jailed', 'jailer', 'jailers', 'jailing', 'jailor', 'jailors', 'jails', 'target', 'targeted', 'targeting', 'targets'] | 1 | 1 |
79,313,107 | 2024-12-28 | https://stackoverflow.com/questions/79313107/how-to-have-pyright-infer-type-from-an-enum-check | Can types be associated with enums, so that Pyright can infer the type from an equality check? (Without cast() or isinstance().) from dataclasses import dataclass from enum import Enum, auto class Type(Enum): FOO = auto() BAR = auto() @dataclass class Foo: type: Type @dataclass class Bar: type: Type item = next(i for i in (Foo(Type.FOO), Bar(Type.BAR)) if i.type == Type.BAR) reveal_type(item) # How to have this be `Bar` instead of `Foo | Bar`? | You want a discriminated union (also known as tagged union). In a discriminated union, there exists a discriminator (also known as a tag field) which can be used to differentiate the members. You currently have an union of Foo and Bar, and you want to discriminate them using the .type attribute. However, this field cannot be the discriminator since it isn't different for each member of the union. (playgrounds: Pyright, Mypy) for i in (Foo(Type.FOO), Bar(Type.BAR)): reveal_type(i) # Foo | Bar mischievous_foo = Foo(Type.BAR) # This is valid naughty_bar = Bar(Type.FOO) # This too for i in (mischievous_foo, naughty_bar): if i.type == Type.FOO: reveal_type(i) # Runtime: Bar, not Foo If Foo.type can only ever be Type.FOO and Bar.Type be Type.BAR, then it is important that you reflect this in the types: (Making type a dataclass field no longer makes sense at this point, but I'm assuming they are only dataclasses for the purpose of this question.) @dataclass class Foo: type: Literal[Type.FOO] @dataclass class Bar: type: Literal[Type.BAR] As Literal[Type.FOO] and Literal[Type.BAR] are disjoint types, i will then be narrowable by checking for the type of .type: (playgrounds: Pyright, Mypy) for i in (Foo(Type.FOO), Bar(Type.BAR)): if i.type == Type.FOO: reveal_type(i) # Foo Foo(Type.BAR) # error Bar(Type.FOO) # error ...even in a generator, yes: item = next(i for i in (Foo(Type.FOO), Bar(Type.BAR)) if i.type == Type.BAR) reveal_type(item) # Bar | 2 | 2 |
79,312,774 | 2024-12-27 | https://stackoverflow.com/questions/79312774/inconsistent-url-error-in-django-from-following-along-to-beginner-yt-tutorial | As you can see in the first screenshot, /products/new isn't showing up as a valid URL although I followed the coding tutorial from YouTube exactly. For some reason there's a blank character before "new" but no blank space in the current path I'm trying to request. I don't know if that's normal or not. I'm using django version 2.1 if that matters The URL does work for products/salt/. What's weird is the URL used to be products/trending/ but I got the same error as with products/new so I randomly changed the URL to products/salt and it started working for me. [Page not found (404) Request Method: GET Request URL: http://127.0.0.1:8000/products/new/ Using the URLconf defined in pyshop.urls, Django tried these URL patterns, in this order: admin/ products/ products/ salt products/ new The current path, products/new/, didn't match any of these.]1 from django.http import HttpResponse from django.shortcuts import render def index(request): return HttpResponse('Hello World') def trending(request): return HttpResponse('Trending Products') def new(request): return HttpResponse('New Products')[2] from django.urls import path from . import views urlpatterns = [ path('', views.index), path('salt', views.trending), path('new', views.new)[3] | Add a trailing slash / to your URLpatterns to resolve this issue i.e. new/ and trending/. Also as mentioned in my comment, I would suggest you upgrade to a secure version of Django to access newer features. | 3 | 2 |
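For reference, the corrected urls.py from the question with the trailing slashes the answer calls for (keeping the original trending name rather than the salt workaround):

```python
from django.urls import path
from . import views

urlpatterns = [
    path('', views.index),
    path('trending/', views.trending),  # trailing slash added
    path('new/', views.new),            # trailing slash added
]
```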
79,310,840 | 2024-12-27 | https://stackoverflow.com/questions/79310840/pil-generate-an-image-from-applying-a-gradient-to-a-numpy-array | I have a 2d NumPy array with values from 0 to 1. I want to turn this array into a Pillow image. I can do the following, which gives me a nice greyscale image: arr = np.random.rand(100,100) img = Image.fromarray((255 * arr).astype(np.uint8)) Now, instead of making a greyscale image, I'd like to apply a custom gradient. To clarify, instead of drawing bands of colors in a linear gradient as in this example, I'd like to specify apply a gradient colormap to an existing 2d array and turn it into a 3d array. Example: If my gradient is [color1, color2, color3], then all 0s should be color1, all 1s should be color3, and 0.25 should be somewhere in between color1 and color2. I was already able to write a simple function that does this: gradient = [(0, 0, 0), (255, 80, 0), (0, 200, 255)] # black -> orange -> blue def get_color_at(x): assert 0 <= x <= 1 n = len(gradient) if x == 1: return gradient[-1] pos = x * (n - 1) idx1 = int(pos) idx2 = idx1 + 1 frac = pos - idx1 color1 = gradient[idx1] color2 = gradient[idx2] color_in_between = [round(color1[i] * (1 - frac) + color2[i] * frac) for i in range(3)] return tuple(color_in_between) So get_color_at(0) returns (0,0,0) and get_color_at(0.75) equals (153, 128, 102), which is this tan/brownish color in between orange and blue. Now, how can I apply this to the original NumPy array? I shouldn't apply get_color_at directly to the NumPy array, since that would still give a 2d array, where each element is a 3-tuple. Instead, I think I want an array whose shape is (n, m, 3), so I can feed that to Pillow and create an RGB image. If possible, I'd prefer to use vectorized operations whenever possible - my input arrays are quite large. If there is builtin-functionality to use a custom gradient, I would also love to use that instead of my own get_color_at function, since my implementation is pretty naive. Thanks in advance. | Method 1: vectorization of your code Your code is almost already vectorized. Almost all operations of it can work indifferently on a float or on an array of floats Here is a vectorized version def get_color_atArr(arr): assert (arr>=0).all() and (arr<=1).all() n=len(gradient) gradient.append(gradient[-1]) gradient=np.array(gradient, dtype=np.uint8) pos = arr*(n-1) idx1 = pos.astype(np.uint8) idx2 = idx1+1 frac = (pos - idx1)[:,:,None] color1 = gradient[idx1] color2 = gradient[idx2] color_in_between = np.round(color1*(1-frac) + color2*frac).astype(np.uint8) Basically, the changes are, the assert (can't use a<b<c notation with numpy arrays). Note that this assert iterates all values of array to check for assertion. That is not for free. So I included it because you did. But you need to be aware that this is not a compile-time verification. It does run code to check all values, which is a non-negligible part of all execution time of the code. more an implementation choice than a vectorization step (a pure translation of your code would have translated that if x==1 into some np.where, or masks. But I am never comfortable with usage of == on floats any way. So I prefer my way. Which costs nothing. It is not another iteration on the image. It adds a sentinel (In Donald Kuth sense of "sentinel": a few bytes that avoid special cases) to the gradient color. So that, in the unlikely even that arr is really 1.0, the gradient happen between last color and last color). 
frac is broadcasted in 3D array, so that it can be used as a coefficient on 3d arrays color1 and color2 Plus of course, int or floor can't be used on numpy arrays Method 2: not reinventing the wheel Matplotlib (and, I am certain, many other libraries) already have a whole colormap module to deal with this kind of transformations. Let's use it thresh=np.linspace(0,1,len(gradient)) cmap=LinearSegmentedColormap.from_list('mycmap', list(zip(thresh, np.array(gradient)/255.0)), N=256*len(gradient)) arr2 = cmap(arr)[:,:,:3] This is building a custom colormap, using LinearSegmentedColormap, which takes, as 2nd argument, a list of pair (threshold, color). Such as [(0, (0,0,0)), (0.3, (1,0,0)), (0.8, (0,1,0)), (1, (0,0,1))] for a color map that goes from black to red when x goes from 0 tom 0.3, then from red to green when x goes from 0.3 to 0.8, then from green to blue. In this case, your gradient can be transformed to such a list, with just a zip with a linspace. It takes a N= argument, since it creates a discretization of all possible colors (with interpolation in between). Here I take an exaggerated option (my N is more than the maximum number of different colors than can exist, once uint8d) Also since it returns a RGBA array, and to remain strictly identical to what you did, I drop the A using [:,:,:3]. Of course, both method need the final translation into PIL, but you already know how to do that. For this one, it also needs mapping between 0 and 255, which I can do with your own code: Image.fromarray((255 * arr).astype(np.uint8)) Note that, while using matplotlib colormap, you may want to take a tour at what that module has to offer. For example some of the zillions of already existing colormaps may suit you. Or some other way to build colors map non-linearly. | 2 | 2 |
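Putting the second method end to end, from the 2-D array to a PIL image — a small sketch that combines the colormap snippet above with the question's own Image.fromarray conversion:

```python
import numpy as np
from PIL import Image
from matplotlib.colors import LinearSegmentedColormap

gradient = [(0, 0, 0), (255, 80, 0), (0, 200, 255)]   # black -> orange -> blue
arr = np.random.rand(100, 100)                        # values in [0, 1]

thresh = np.linspace(0, 1, len(gradient))
cmap = LinearSegmentedColormap.from_list(
    'mycmap', list(zip(thresh, np.array(gradient) / 255.0)))

rgb = cmap(arr)[:, :, :3]                             # (100, 100, 3) floats in [0, 1], alpha dropped
img = Image.fromarray((255 * rgb).astype(np.uint8), mode='RGB')
img.save('gradient.png')
```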
79,311,978 | 2024-12-27 | https://stackoverflow.com/questions/79311978/how-can-i-optimize-python-code-for-analysis-of-a-large-sales-dataset | Iβm working on a question where I have to process a large set of sales transactions stored in a CSV file and summarize the results. The code is running slower than expected and taking too much time for execution, especially as the size of the dataset increases. I am using pandas to load and process the data, are there any optimizations I can make to reduce computational time and get the output faster. Here is the code i am using: import pandas as pd import numpy as np # Sample dataset n = 10**6 # million rows np.random.seed(0) transaction_ids = np.arange(1, n+1) customer_ids = np.random.randint(100, 200, n) sale_amounts = np.random.uniform(50, 500, n) transaction_dates = pd.date_range('2023-01-01', periods=n, freq='T') # DataFrame df = pd.DataFrame({ 'transaction_id': transaction_ids, 'customer_id': customer_ids, 'sale_amount': sale_amounts, 'transaction_date': transaction_dates }) # Categorization function def categorize_transaction(sale_amount): if sale_amount > 400: return 'High Value' elif sale_amount > 200: return 'Medium Value' else: return 'Low Value' category_map = { 'High Value': (df['sale_amount'] > 400), 'Medium Value': (df['sale_amount'] > 200) & (df['sale_amount'] <= 400), 'Low Value': (df['sale_amount'] <= 200) } df['category'] = np.select( [category_map['High Value'], category_map['Medium Value'], category_map['Low Value']], ['High Value', 'Medium Value', 'Low Value'], default='Unknown' ) # Aggregation category_summary = df.groupby('category')['sale_amount'].agg( total_sales='sum', avg_sales='mean', transaction_count='count' ).reset_index() # Additional optimization using 'transaction_date' for time-based grouping df['transaction_month'] = df['transaction_date'].dt.to_period('M') monthly_summary = df.groupby(['transaction_month', 'category'])['sale_amount'].agg( total_sales='sum', avg_sales='mean', transaction_count='count' ).reset_index() print(category_summary.head()) print(monthly_summary.head()) | First of all, the df['category'] = np.select(...) line is slow because of the implicit conversion of all strings to a list of string objects. You can strongly speed this up by creating a categorical column rather than string-based one, since strings are inherently slow to compute. df['category'] = pd.Categorical.from_codes(np.select( [category_map['High Value'], category_map['Medium Value'], category_map['Low Value']], [0, 1, 2], default=3 ), ['High Value', 'Medium Value', 'Low Value', 'Unknown']) This create a categorical column with 4 possible values (integers associated to predefined strings). This is about 8 times faster on my machine. Once you use the above code, the aggregation is also running much faster (about 5 times) because Pandas operates on integers rather than slow string objets. It also speed up the very-last operation (about twice faster). The df['transaction_date'].dt.to_period('M') is particularly slow. Directly using Numpy (with .astype('datetime64[M]')) does not make this faster. Since this operation is compute bound, you can parallelize it. Alternatively, you can write your own (parallel) implementation with Numba (or Cython) though this is tedious to write since one need to case about leap years (and possibly even leap seconds). Update: You can make the first code even faster thanks to 8-bit integers (assuming there are less than 128 categories). 
This can be done by replacing [0, 1, 2] with np.array([0, 1, 2], dtype=np.int8). This is about 35% faster than the default 32-bit categories. | 1 | 3
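Assembled for reference, the answer's first snippet with the 8-bit codes swapped in as described, reusing df and category_map from the question:

```python
import numpy as np
import pandas as pd

# np.select keeps the int8 dtype of the choices, so the resulting codes stay 8-bit
df['category'] = pd.Categorical.from_codes(np.select(
    [category_map['High Value'], category_map['Medium Value'], category_map['Low Value']],
    np.array([0, 1, 2], dtype=np.int8),
    default=3
), ['High Value', 'Medium Value', 'Low Value', 'Unknown'])
```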
79,311,933 | 2024-12-27 | https://stackoverflow.com/questions/79311933/how-to-solve-multiple-and-nested-discriminators-with-pydantic-v2 | I am trying to validate Slack interaction payloads, that look like these: type: block_actions container: type: view ... type: block_actions container: type: message ... type: view_submission ... I use 3 different models for payloads coming to the same interaction endpoint: class MessageContainer(BaseModel): type: Literal["message"] ... class ViewContainer(BaseModel): type: Literal["view"] ... class MessageActions(ActionsBase): type: Literal["block_actions"] container: MessageContainer ... class ViewActions(ActionsBase): type: Literal["block_actions"] container: ViewContainer ... class ViewSubmission(BaseModel): type: Literal["view_submission"] ... and I was planning to use BlockActions = Annotated[ MessageActions | ViewActions, Field(discriminator="container.type"), ] SlackInteraction = Annotated[ ViewSubmission | BlockActions, Field(discriminator="type"), ] SlackInteractionAdapter = TypeAdapter(SlackInteraction) but cannot make it work with v2.10.4. Do I have to dispatch them manually or there is a way to solve it with Pydantic? | Not sure it's possible to use 2 discriminators to resolve one type (as you are trying to do). I can suggest you 3 options: 1. Split block_actions into block_message_actions and block_view_actions: from typing import Annotated, Literal from pydantic import BaseModel, Field, TypeAdapter class MessageContainer(BaseModel): pass class ViewContainer(BaseModel): pass class ActionsBase(BaseModel): pass class MessageActions(ActionsBase): type: Literal["block_message_actions"] container: MessageContainer class ViewActions(ActionsBase): type: Literal["block_view_actions"] container: ViewContainer class ViewSubmission(BaseModel): type: Literal["view_submission"] SlackInteraction = Annotated[ ViewSubmission | ViewActions | MessageActions, Field(discriminator="type"), ] SlackInteractionAdapter = TypeAdapter(SlackInteraction) a = SlackInteractionAdapter.validate_python({"type": "view_submission"}) assert isinstance(a, ViewSubmission) b = SlackInteractionAdapter.validate_python( {"type": "block_message_actions", "container": {}}, ) assert isinstance(b, MessageActions) assert isinstance(b.container, MessageContainer) c = SlackInteractionAdapter.validate_python( {"type": "block_view_actions", "container": {}}, ) assert isinstance(c, ViewActions) assert isinstance(c.container, ViewContainer) 2. Use Discriminated Unions with callable Discriminator: def get_discriminator_value(v: Any) -> str: if isinstance(v, dict): if v["type"] == "view_submission": return "view_submission" return "message_action" if v["container"]["type"] == "message" else "view_action" if v.type == "view_submission": return "view_submission" return "message_action" if v.container.type == "message" else "view_action" SlackInteraction = Annotated[ Union[ Annotated[ViewSubmission, Tag("view_submission")], Annotated[MessageActions, Tag("message_action")], Annotated[ViewActions, Tag("view_action")], ], Discriminator(get_discriminator_value), ] SlackInteractionAdapter = TypeAdapter(SlackInteraction) 3. 
Use nested discriminated unions: from typing import Annotated, Literal from pydantic import BaseModel, Field, TypeAdapter class MessageContainer(BaseModel): type: Literal["message"] class ViewContainer(BaseModel): type: Literal["view"] ActionContainer = Annotated[ MessageContainer | ViewContainer, Field(discriminator="type"), ] class BlockActions(BaseModel): type: Literal["block_actions"] container: ActionContainer class ViewSubmission(BaseModel): type: Literal["view_submission"] SlackInteraction = Annotated[ ViewSubmission | BlockActions, Field(discriminator="type"), ] SlackInteractionAdapter = TypeAdapter(SlackInteraction) a = SlackInteractionAdapter.validate_python({"type": "view_submission"}) assert isinstance(a, ViewSubmission) b = SlackInteractionAdapter.validate_python( {"type": "block_actions", "container": {"type": "message"}}, ) assert isinstance(b, BlockActions) assert isinstance(b.container, MessageContainer) c = SlackInteractionAdapter.validate_python( {"type": "block_actions", "container": {"type": "view"}}, ) assert isinstance(c, BlockActions) assert isinstance(c.container, ViewContainer) | 1 | 2 |
79,309,271 | 2024-12-26 | https://stackoverflow.com/questions/79309271/pandas-series-subtract-pandas-dataframe-strange-result | I'm wondering why pandas Series subtract a pandas dataframe produce such a strange result. df = pd.DataFrame(np.arange(10).reshape(2, 5), columns='a-b-c-d-e'.split('-')) df.max(axis=1) - df[['b']] What are the steps for pandas to produce the result? b 0 1 0 NaN NaN NaN 1 NaN NaN NaN | By default an operation between a DataFrame and a Series is broadcasted on the DataFrame by column, over the rows. This makes it easy to perform operations combining a DataFrame and aggregation per column: # let's subtract the DataFrame to its max per column df.max(axis=0) - df[['b']] a b c d e b NaN 5 NaN NaN NaN 1 NaN 0 NaN NaN NaN Here, since you're aggregating per row, this is no longer possible. You should use rsub with the parameter axis=0: df[['b']].rsub(df.max(axis=1), axis=0) Output: b 0 3 1 3 Note that using two Series would also align the values: df.max(axis=1) - df['b'] Output: 0 3 1 3 dtype: int64 Why 3 columns with df.max(axis=1) - df[['b']]? First, let's have a look at each operand: # df.max(axis=1) 0 4 1 9 dtype: int64 # df[['b']] b 0 1 1 6 Since df[['b']] is 2D (DataFrame), and df.max(axis=1) is 1D (Series), df.max(axis=1) will be used as if it was a "wide" DataFrame: # df.max(axis=1).to_frame().T 0 1 0 4 9 There are no columns in common, thus the output is only NaNs with the union of column names ({'b'}|{0, 1} -> {'b', 0, 1}). If you replace the NaNs that are used in the operation by 0 this makes it obvious how the values are used: df[['b']].rsub(df.max(axis=1).to_frame().T, fill_value=0) b 0 1 0 -1.0 4.0 9.0 1 -6.0 NaN NaN Now let's check a different example in which one of the row indices has the same name as one of the selected columns: df = pd.DataFrame(np.arange(10).reshape(2, 5), columns=['a', 'b', 'c', 'd', 'e'], index=['b', 0] ) df.max(axis=1) - df[['b']] Now the output only has 2 columns, b the common indice and 1 the second index in the Series ({'b', 1}|{'b'} -> {'b', 1}): 1 b b NaN 3 1 NaN -2 | 1 | 1 |
79,310,713 | 2024-12-27 | https://stackoverflow.com/questions/79310713/how-to-apply-the-capitalize-with-condition | I'm wondering how to use the capitalize function when another column has a specific value. For example, I want to change the first letter of students with Master's degree. # importing pandas as pd import pandas as pd # creating a dataframe df = pd.DataFrame({ 'A': ['john', 'bODAY', 'minA', 'peter', 'nicky'], 'B': ['Masters', 'Graduate', 'Graduate', 'Masters', 'Graduate'], 'C': [27, 23, 21, 23, 24] }) # Expected result # A B C #0 John Masters 27 #1 bODAY Graduate 23 #2 minA Graduate 21 #3 Peter Masters 23 #4 nicky Graduate 24 I tried it like this, but it didn't apply well. df[df['B']=='Masters']['A'].str = df[df['B']=='Masters']['A'].str.capitalize() | Here is the complete code: import pandas as pd # Creating the DataFrame df = pd.DataFrame({ 'A': ['john', 'bODAY', 'minA', 'peter', 'nicky'], 'B': ['Masters', 'Graduate', 'Graduate', 'Masters', 'Graduate'], 'C': [27, 23, 21, 23, 24] }) # Capitalize column A conditionally based on B df['A'] = df.apply(lambda row: row['A'].capitalize() if row['B'] == 'Masters' else row['A'], axis=1) # Display the updated DataFrame print(df) Output: A B C 0 John Masters 27 1 bODAY Graduate 23 2 minA Graduate 21 3 Peter Masters 23 4 nicky Graduate 24 | 1 | 1 |
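A vectorized alternative (not from the answer above): select the matching rows with a boolean mask and apply the .str accessor only to them — same result, and it avoids the per-row lambda on large frames:

```python
mask = df['B'] == 'Masters'
df.loc[mask, 'A'] = df.loc[mask, 'A'].str.capitalize()
print(df)
#        A         B   C
# 0   John   Masters  27
# 1  bODAY  Graduate  23
# 2   minA  Graduate  21
# 3  Peter   Masters  23
# 4  nicky  Graduate  24
```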
79,309,886 | 2024-12-26 | https://stackoverflow.com/questions/79309886/parsing-units-out-of-column | I've got some data I'm reading into Python using Pandas and want to keep track of units with the Pint package. The values have a range of scales, so have mixed units, e.g. lengths are mostly meters but some are centimeters. For example the data: what,length foo,5.3 m bar,72 cm and I'd like to end up with the length column in some form that Pint understands. Pint's Pandas integration suggests that it only supports the whole column having the same datatype, which seems reasonable. I'm happy with some arbitrary unit being picked (e.g. the first, most common, or just SI base unit) and everything expressed in terms of that. I was expecting some nice way of getting from the data I have to what's expected, but I don't see anything. import pandas as pd import pint_pandas length = pd.Series(['5.3 m', "72 cm"], dtype='pint[m]') Doesn't do the correct thing at all, for example: length * 2 outputs 0 5.3 m5.3 m 1 72 cm72 cm dtype: pint[meter] so it's just leaving things as strings. Calling length.pint.convert_object_dtype() doesn't help and everything stays as strings. | Going through the examples, it looks like pint_pandas is expecting numbers rather than strings. You can use apply to do the conversion: from pint import UnitRegistry ureg = UnitRegistry() df["length"].apply(lambda i: ureg(i)).astype("pint[m]") However, why keep the column as Quantity objects instead of just plain float numbers? | 1 | 2 |
79,309,190 | 2024-12-26 | https://stackoverflow.com/questions/79309190/numpy-convention-for-storing-time-series-of-vectors-and-matrices-items-in-rows | I'm working with discrete-time simulations of ODEs with time varying parameters. I have time series of various data (e.g. time series of state vectors generated by solve_ivp, time series of system matrices generated by my control algorithm, time series of system matrices in modal form, and so on). My question: in what order should I place the indices? My intuition is that since numpy arrays are (by default) stored in row-major order, and I want per-item locality, each row should contain the "item" (i.e. a vector or matrix), and so the number of rows is the number of time points, and the number of columns is the dimension of my vector, e.g.: x_k = np.array((5000, 4)) # a time series of 5000, 4-vectors display(x_k[25]) # the 26th timepoint Or for matrices I might use: A_k = np.array((5000, 4, 4)) # a time series of 5000, 4x4-matrices However, solve_ivp appears to do the opposite and returns a row-major array with the time series in columns (sol.y shape is (4, 5000)). Furthermore, transposing the result with .T just flips a flag to column-major so it is not really clear what the developers of solve_ivp and numpy intend me to do to write cache efficient code. What are the conventions? Should I use the first index for the time index, as in my examples above, or last index as solve_ivp does? | This is strongly dependent of the algorithms applied on your dataset. This problem is basically known as AoS versus SoA. For algorithm that does not benefit much from SIMD operations and accessing all fields, AoS can be better, otherwise SoA is often better. The optimal data structure is often AoSoA, but it is often a pain to manipulate (so it is rare in Numpy codes). On top of that, Numpy is not efficient to operate on arrays having a very small last axis because of the way it is currently implemented (more specifically because internal generators, unneeded function calls, and lack of function specialization which is hard to do due to the high number of possible combinations). Example Here is a first practical example showing this (center of a point cloud): aos = np.random.rand(1024*1024, 2) %timeit -n 10 aos.mean(axis=0) # 17.6 ms Β± 503 Β΅s per loop soa = np.random.rand(2, 1024*1024) %timeit -n 10 soa.mean(axis=1) # 1.7 ms Β± 77 Β΅s per loop Here we can see that the SoA version is much faster (about 10 times). This is because the SoA version benefit from SIMD instruction while the former does not and suffer from the internal Numpy iterator overhead. Technically, please note that the AoS version could be implemented to be nearly as fast as the SoA version here but Numpy is not able to optimize this yet (nor any similar cases which are actually not so easy to optimize). In your case For matrices, Numpy can call BLAS functions on contiguous arrays, which is good for performance. However, a 4x4 matrix-vector operation takes a tiny amount of time: only few nanoseconds on mainstream CPUs (for AoS). Indeed, multiplying the 4x4 matrix rows by a vector takes only 4 AVX instructions that can typically be computed in only 2 cycles. Then comes the sum reduction which takes few nanoseconds too (~4 cycles per line for a naive hadd reduction that is 16 cycles for the whole matrix). Meanwhile, a BLAS function call from Numpy and the management of internal iterators takes significantly more than 10 ns per matrix to compute. 
This means most of the time will be spent in Numpy overheads with an AoS layout. Thus, a np.array((5000, 4, 4)) will certainly not be so bad, but clearly far from being optimal. You can strongly reduce these overheads by writing your own specialized implementation (with Cython/Numba) specifically designed for 4x4 matrices. Here is an example of relatively-fast AoS computation using Numba. With a SoA data layout (i.e. (4, 4, 5000)), you can write your own vectorized operations (e.g. SoA-based matrix multiplication). A naive implementation will certainly not be very efficient either because creating/filling temporary Numpy arrays is expensive. However, temporary arrays can often be preallocated/reused and operations can often be done in-place to reduce the overheads. On top of that, you can tune the size of the temporary array so it can fit in the L1 cache (though this is tedious to do since it makes the code more complex, so generally Numpy users don't want to do that). That being said, calling Numpy functions from CPython also has a significant overhead (generally 0.2-3.0 µs on my i5-9600KF CPU). This is a problem since doing basic computation on 5000 double-precision floating-point numbers in the L1 cache typically takes less than 1 µs. As a result, there is a good chance for most of the time to be spent in CPython/Numpy overheads with a SoA array having only 5000 items manipulated only using Numpy. Here again, Cython/Numba can be used to nearly remove these overheads. The resulting Cython/Numba code should be faster on SoA than AoS arrays (mainly because horizontal SIMD operations are generally inefficient and AoS operations tend to be hard to optimize, especially on modern CPUs with wide SIMD instruction sets). Conclusion This is a complicated topic. In your specific case, I expect both SoA and AoS to be inefficient if you only use Numpy (but the SoA version might be a bit faster): most of the time will be spent in overheads. As a result, the speed of the best implementation depends on the exact algorithm implementation and even the CPU used (so the best is to try which one is better in practice). That being said, I think using SoA is significantly better performance-wise than AoS. Indeed, codes operating on SoA arrays can be optimized more easily and further than AoS ones (see Cython/Numba or even native C code). On top of that, SoA-based codes are much more likely to benefit from accelerators like GPUs. Indeed, GPUs are massively-SIMD hardware devices operating on wide SIMD vectors (e.g. 32 items at once). 4x4 contiguous AoS matrix operations are generally pretty inefficient on them, meanwhile SIMD-friendly SoA ones are cheap. I advise you to write a clean/simple Numpy code first while preferring a SoA layout for your array, and then optimize slow parts of the code later (possibly with Cython/Numba/native codes). This strategy often results in relatively-clean codes that are simple to optimize. | 1 | 2
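The answer's Numba example is not reproduced here; purely as an illustration, a minimal sketch of the idea for the (5000, 4, 4) AoS case — one compiled loop doing all the 4x4 matrix-vector products so the per-item Numpy overhead disappears (names are illustrative, not from the answer):

```python
import numpy as np
from numba import njit

@njit(cache=True)
def batch_matvec_aos(A_k, x_k):
    # A_k: (n, 4, 4) time series of matrices, x_k: (n, 4) vectors -> (n, 4) results
    n = A_k.shape[0]
    out = np.empty((n, 4), dtype=A_k.dtype)
    for t in range(n):
        for i in range(4):
            s = 0.0
            for j in range(4):
                s += A_k[t, i, j] * x_k[t, j]
            out[t, i] = s
    return out

A_k = np.random.rand(5000, 4, 4)
x_k = np.random.rand(5000, 4)
y_k = batch_matvec_aos(A_k, x_k)   # single compiled pass over the whole time series
```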
79,309,025 | 2024-12-26 | https://stackoverflow.com/questions/79309025/why-does-summing-data-grouped-by-df-iloc-0-also-sum-up-the-column-names | I have a DataFrame with a species column and four arbitrary data columns. I want to group it by species and sum up the four data columns for each one. I've tried to do this in two ways: once by grouping by df.columns[0] and once by grouping by df.iloc[:, 0]. data = { 'species': ['a', 'b', 'c', 'd', 'e', 'rt', 'gh', 'ed', 'e', 'd', 'd', 'q', 'ws', 'f', 'fg', 'a', 'a', 'a', 'a', 'a'], 's1': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 's2': [9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9], 's3': [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21], 's4': [10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10] } df = pd.DataFrame(data) grouped_df1 = df.groupby(df.columns[0], as_index=False).sum() grouped_df2 = df.groupby(df.iloc[:, 0], as_index=False).sum() Both methods correctly sum the data in the four rightmost columns. But for some reason, the second method also sums up the names of the species, concatenating them into one long, repeating string. Here's the result from the first method, which is what I'm looking for: print(grouped_df1) species s1 s2 s3 s4 0 a 91 54 97 60 1 b 2 9 3 10 2 c 3 9 4 10 3 d 25 27 28 30 4 e 14 18 16 20 5 ed 8 9 9 10 6 f 14 9 15 10 7 fg 15 9 16 10 8 gh 7 9 8 10 9 q 12 9 13 10 10 rt 6 9 7 10 11 ws 13 9 14 10 And here's the result from the df.iloc method, which incorrectly sums up the species data: print(grouped_df2) species s1 s2 s3 s4 0 aaaaaa 91 54 97 60 1 b 2 9 3 10 2 c 3 9 4 10 3 ddd 25 27 28 30 4 ee 14 18 16 20 5 ed 8 9 9 10 6 f 14 9 15 10 7 fg 15 9 16 10 8 gh 7 9 8 10 9 q 12 9 13 10 10 rt 6 9 7 10 11 ws 13 9 14 10 Why is the second method summing up the species names as well as the numerical data? | In groupby - column name is treated as an intrinsic grouping key, while a Series is treated as an external key. Reference - https://pandas.pydata.org/docs/reference/groupby.html When using df.iloc[:, 0]: Pandas considers the string values in the species column as a separate grouping key independent of the DataFrame structure. When using df.columns[0]: Pandas directly uses the column 'species' within the DataFrame as the grouping key. This allows Pandas to manage the grouping and summation correctly. Code COrrection You should always reference the column name explicitly grouped_df1 = df.groupby('species', as_index=False).sum() Or this also works grouped_df1 = df.groupby(df[df.columns[0]], as_index=False).sum() | 2 | 0 |
79,308,731 | 2024-12-26 | https://stackoverflow.com/questions/79308731/safest-way-to-incrementally-append-to-a-file | I'm performing some calculations to generate chaotic solutions to a mathematical function. I have an infinite loop that looks something like this: f = open('solutions.csv', 'a') while True: x = generate_random_parameters() # x is a list of floats success = test_parameters(x) if success: print(','.join(map(str, x)), file=f, flush=True) The implementation of generate_random_parameters() and test_parameters() is not very important here. When I want to stop generating solutions I want to ^C, but I want to ensure that solutions.csv keeps its integrity/doesn't get corrupted/etc, in case I happen to interrupt when the file is being written to. So far I haven't observed this happening, but I'd like to remove any possibility that this could occur. Additionally, since the program will never terminate on its own I don't have a corresponding f.close() -- this should be fine, correct? Appreciate any clarification. | One simple approach to ensuring that the current call to print finishes before the program exits from a keyboard interrupt is to use a signal handler to unset a flag on which the while loop runs. Set the signal handler only when you're about to call print and reset the signal handler to the original when print returns, so that the preceding code in the loop can be interrupted normally: import signal def interrupt_handler(signum, frame): global running running = False text = 'a' * 99999 running = True with open('solutions.csv', 'a') as f: while running: ... # your calculations original_handler = signal.signal(signal.SIGINT, interrupt_handler) print(text, file=f, flush=True) # your output signal.signal(signal.SIGINT, original_handler) Also note that it is more idiomatic to use open as a context manager to handle the closure of an open file when exiting the block for any reason. | 3 | 2 |
79,307,295 | 2024-12-25 | https://stackoverflow.com/questions/79307295/what-is-the-best-way-to-avoid-detecting-words-as-lines-in-opencv-linedetector | I am using OpenCV LineDetector class in order to parse tables. However, I face an issue when I try to detect lines inside the table. for the following image: I use img = cv2.imread(TABLE_PATH) gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) lsd = cv2.createLineSegmentDetector(cv2.LSD_REFINE_ADV, sigma_scale=0.6) dlines = lsd.detect(gray) lines = (Line(x0, y0, x1, y1) for x0, y0, x1, y1 in dlines[0][:, 0]) in order to detect line segments. However, the results are lousy. these are the lines it detects: How can I make sure that words are not detected as lines. I cannot use hardcoded thresholds since they would work for one example but not for the other. Solutions in python or java would be appreciated | You got some lines detected, but that set contained some undesirable ones. You could just filter the set of lines for line length. If you do that, you can easily exclude the very short lines coming from the text in that picture. Implementation: that's a list comprehension, only including lines that are long enough. Write a predicate function that gives you the length of one line, then you can use that in the list comprehension. That is independent of how you scraped lines out of the picture. the LSD is one, but there are routines based on the Hough transform too, which might fare better or worse than what you have. You probably also noticed that your approach didn't find some lines that it should have. You might want to tweak the parameters you pass to your line detector. Or try another line detection approach. | 3 | 0 |
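A sketch of the length filter described above, reusing gray and dlines from the question's code. The cutoff is tied to the image size instead of a fixed pixel count; the 2% factor is an assumed starting point to tune, not something from the answer:

```python
import numpy as np

h, w = gray.shape
min_len = 0.02 * np.hypot(w, h)          # assumed: ~2% of the image diagonal

segments = dlines[0][:, 0]               # rows of (x0, y0, x1, y1) from the LSD
lengths = np.hypot(segments[:, 2] - segments[:, 0],
                   segments[:, 3] - segments[:, 1])
long_segments = segments[lengths >= min_len]   # drops the short strokes coming from text
```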
79,332,328 | 2025-1-6 | https://stackoverflow.com/questions/79332328/pydantic-model-how-to-exclude-field-from-being-hashed-eq-compared | I have the following hashable pydantic model: class TafReport(BaseModel, frozen=True): download_date: dt icao: str issue_time: dt validity_time_start: dt validity_time_stop: dt raw_report: str Now I don't want these reports to be considered different just because their download date is different (I insert that with the datetime.now()). How can i exclude download_date from being considered in the __hash__ and __eq__ functions so that I can do stunts like: tafs = list(set(tafs)) and have a unique set of tafs even though two might have differing download date? I'm looking for a solution where I don't have to overwrite the __hash__ and __eq__ methods... I checked out this topic but it only answers how to exclude a field from the model in general (so it doesn't show up in the json dumps), but I do want it to show up in the json dump. | Unfortunately there is no built-in option at the moment, but there are two options that you can consider: Changing from BaseModel to a Pydantic dataclass: from dataclasses import field from datetime import datetime as dt from pydantic import TypeAdapter from pydantic.dataclasses import dataclass @dataclass(frozen=True) class TafReport: download_date: dt = field(compare=False) icao: str issue_time: dt validity_time_start: dt validity_time_stop: dt raw_report: str TafReportAdapter = TypeAdapter(TafReport) SameTime = dt.now() TafReport1 = TafReport(download_date=dt.now(), icao='icao', issue_time=SameTime, validity_time_start=SameTime, validity_time_stop=SameTime, raw_report='raw_report') TafReport2 = TafReport(download_date=dt.now(), icao='icao', issue_time=SameTime, validity_time_start=SameTime, validity_time_stop=SameTime, raw_report='raw_report') print(TafReportAdapter.dump_json(TafReport1), hash(TafReport1)) print(TafReportAdapter.dump_json(TafReport2), hash(TafReport2)) This will give the same hash while the download_date is different. Exclude the download_date from the model and allow extra fields: from datetime import datetime as dt from pydantic import BaseModel class TafReport(BaseModel, frozen=True, extra='allow'): icao: str issue_time: dt validity_time_start: dt validity_time_stop: dt raw_report: str SameTime = dt.now() TafReport1 = TafReport(icao='icao', issue_time=SameTime, validity_time_start=SameTime, validity_time_stop=SameTime, raw_report='raw_report', download_date=dt.now()) TafReport2 = TafReport(icao='icao', issue_time=SameTime, validity_time_start=SameTime, validity_time_stop=SameTime, raw_report='raw_report', download_date=dt.now()) print(TafReport1.model_dump(), hash(TafReport1)) print(TafReport2.model_dump(), hash(TafReport2)) In this case the hash function is build based on the fields provided in the model. But allowing extra fields without defining them in the model gives you the ability to add the download_date without affecting the hash function build in the model. | 5 | 1 |
79,336,604 | 2025-1-7 | https://stackoverflow.com/questions/79336604/failed-creating-mock-folders-with-pyfakefs | I'm working on a project that uses pyfakefs to mock my filesystem to test folder creation and missing folders in a previously defined tree structure. I'm using Python 3.13 on Windows and get this output from the terminal after running my test: Terminal output: (Does anyone have a tip for formatting terminal output without getting automatic syntax highlighting?) E ====================================================================== ERROR: test_top_folders_exist (file_checker.tests.file_checker_tests.TestFolderCheck.test_top_folders_exist) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\Users\juank\dev\projects\python\gamedev_eco\file_checker\tests\file_checker_tests.py", line 20, in test_top_folders_exist self.fs.create_dir(Path.cwd() / "gdd") ~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^ File "C:\Users\juank\AppData\Local\Programs\Python\Python313\Lib\site-packages\pyfakefs\fake_filesystem.py", line 2191, in create_dir dir_path = self.absnormpath(dir_path) File "C:\Users\juank\AppData\Local\Programs\Python\Python313\Lib\site-packages\pyfakefs\fake_filesystem.py", line 1133, in absnormpath path = self.replace_windows_root(path) File "C:\Users\juank\AppData\Local\Programs\Python\Python313\Lib\site-packages\pyfakefs\fake_filesystem.py", line 1418, in replace_windows_root if path and self.is_windows_fs and self.root_dir: ^^^^^^^^^^^^^ File "C:\Users\juank\AppData\Local\Programs\Python\Python313\Lib\site-packages\pyfakefs\fake_filesystem.py", line 357, in root_dir return self._mount_point_dir_for_cwd() ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^ File "C:\Users\juank\AppData\Local\Programs\Python\Python313\Lib\site-packages\pyfakefs\fake_filesystem.py", line 631, in _mount_point_dir_for_cwd if path.startswith(str_root_path) and len(str_root_path) > len(mount_path): ^^^^^^^^^^^^^^^ AttributeError: 'WindowsPath' object has no attribute 'startswith' ---------------------------------------------------------------------- Ran 1 test in 0.011s FAILED (errors=1) Test: from pyfakefs.fake_filesystem_unittest import TestCase class TestFolderCheck(TestCase): """Test top folders = gdd marketing business""" @classmethod def setUp(cls): cls.setUpClassPyfakefs() cls.fake_fs().create_dir(Path.cwd() / "gamedev_eco") cls.fake_fs().cwd = Path.cwd() / "gamedev_eco" def test_top_folders_exist(self): self.fs.create_dir(Path.cwd() / "gdd") What is confusing for me is that the Setup class method can create a folder and change cwd to that new folder but I'm not able to create a folder inside a test. Does anyone have experience working with pyfakefs? Can anyone lend me a hand with this issue please? | The issue has been acknowledged, fixed, and the fix has been included in the 5.7.4 release of pyfakefs. No workaround should thus be necessary, any longer. | 1 | 1 |
79,321,826 | 2025-1-1 | https://stackoverflow.com/questions/79321826/seleniumbase-cdp-mode-opening-new-tabs | I am currently writing a python program which uses a seleniumbase web bot with CDP mode activated: with SB(uc=True, test=True, xvfb=True, incognito=True, agent=<user_agent>, headless=True) as sb: temp_email_gen_url = "https://temp-mail.org/en" sb.activate_cdp_mode(temp_email_gen_url) ... I need to be able to create new tab and switch between the new and original tab. I have read the CDP docs but have not seen a solution to this, does anybody know how this can be done? | For better or worse there isn't an "open tab" feature in CDP mode. The main developer of seleniumbase suggests using a separate driver in CDP mode for each tab as follows, equivalent to using "open in new window" on every link: from seleniumbase import SB # opens all links on the target page with a second driver with SB(uc=True, test=True) as sb: temp_email_gen_url = "https://temp-mail.org/en" sb.driver.uc_open_with_reconnect(temp_email_gen_url) links = sb.get_unique_links() for link in links: driver2 = sb.get_new_driver(undetectable=True) driver2.uc_open_with_reconnect(link) print(driver2.title) sb.quit_extra_driver() You may want to consider reusing the second driver for each link instead of creating and destroying a driver for each link. It would be faster and more efficient, but it's possible that the site could use cookies and session storage to detect a suspicious number of page accesses coming from the same browser session. To elaborate on a question in the comments: there is indeed a way in non-CDP mode to open tabs but I don't recommend it. Connecting the WebDriver leaves traces that bot detection scripts can find, both obvious (in years past hard-coded variable names were added to the JS environment) and subtle such as exploiting obscure behavior around how stack traces and logging commands are buffered and normally run lazily, but not if WebDriver is connected. seleniumbase's UC mode was an attempt at addressing this by using a WebDriver most of the time, but disconnecting for a while just before doing something that can result in detection then waiting until the danger is assumed to have passed before reconnecting. It worked for a while but hosting platforms have adapted. CDP mode is a relatively new entrant in this cat-and-mouse game that is much harder to detect. The growing counter to CDP mode is to track requests and UI interactions such as mouse movements and clicks over time and deploy models like recaptcha v3 that predict the probability a browser is a bot. The counter to that will be increased reliance on pyautogui and similar tools to simulate human interaction with the UI. | 1 | 1 |
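Reusing a single extra driver for every link, as the trade-off paragraph above suggests, only changes the loop structure — a sketch built from the same calls as the answer's snippet:

```python
from seleniumbase import SB

with SB(uc=True, test=True) as sb:
    sb.driver.uc_open_with_reconnect("https://temp-mail.org/en")
    links = sb.get_unique_links()
    driver2 = sb.get_new_driver(undetectable=True)   # one extra "tab", created once
    for link in links:
        driver2.uc_open_with_reconnect(link)         # reused for each page
        print(driver2.title)
    sb.quit_extra_driver()
```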
79,330,304 | 2025-1-5 | https://stackoverflow.com/questions/79330304/optimizing-sieving-code-in-the-self-initializing-quadratic-sieve-for-pypy | I've coded up the Self Initializing Quadratic Sieve (SIQS) in Python, but it has been coded with respect to being as fast as possible in PyPy(not native Python). Here is the complete code: import logging import time import math from math import sqrt, ceil, floor, exp, log2, log, isqrt from rich.live import Live from rich.table import Table from rich.console import Console import random import sys LOWER_BOUND_SIQS = 1000 UPPER_BOUND_SIQS = 4000 logging.basicConfig( format='[%(levelname)s] %(asctime)s - %(message)s', level=logging.INFO ) def get_gray_code(n): gray = [0] * (1 << (n - 1)) gray[0] = (0, 0) for i in range(1, 1 << (n - 1)): v = 1 j = i while (j & 1) == 0: v += 1 j >>= 1 tmp = i + ((1 << v) - 1) tmp >>= v if (tmp & 1) == 1: gray[i] = (v - 1, -1) else: gray[i] = (v - 1, 1) return gray MULT_LIST = [ 1, 2, 3, 5, 7, 9, 10, 11, 13, 14, 15, 17, 19, 21, 23, 25, 29, 31, 33, 35, 37, 39, 41, 43, 45, 47, 49, 51, 53, 55, 57, 59, 61, 63, 65, 67, 69, 71, 73, 75, 77, 79, 83, 85, 87, 89, 91, 93, 95, 97, 101, 103, 105, 107, 109, 111, 113, 115, 119, 121, 123, 127, 129, 131, 133, 137, 139, 141, 143, 145, 147, 149, 151, 155, 157, 159, 161, 163, 165, 167, 173, 177, 179, 181, 183, 185, 187, 191, 193, 195, 197, 199, 201, 203, 205, 209, 211, 213, 215, 217, 219, 223, 227, 229, 231, 233, 235, 237, 239, 241, 249, 251, 253, 255 ] def create_table(relations, target_relations, num_poly, start_time): end = time.time() elapsed = end - start_time relations_per_second = len(relations) / elapsed if elapsed > 0 else 0 poly_per_second = num_poly / elapsed if elapsed > 0 else 0 percent = (len(relations) / target_relations) * 100 if target_relations > 0 else 0 percent_per_second = percent / elapsed if elapsed > 0 else 0 remaining_percent = 100.0 - percent seconds = int(remaining_percent / percent_per_second) if percent_per_second > 0 else 0 m, s = divmod(seconds, 60) h, m = divmod(m, 60) table = Table(title="Processing Status") table.add_column("Metric", style="cyan", no_wrap=True) table.add_column("Value", style="magenta") table.add_row("Relations per second", f"{relations_per_second:,.2f}") table.add_row("Poly per second", f"{poly_per_second:,.2f}") table.add_row("Percent", f"{percent:,.2f}%") table.add_row("Percent per second", f"{percent_per_second:,.4f}%") table.add_row("Estimated Time", f"{h:d}:{m:02d}:{s:02d}") return table class QuadraticSieve: def __init__(self, M, B=None, T=2, prime_limit=20, eps=30, lp_multiplier=20, multiplier=None): self.logger = logging.getLogger(__name__) self.prime_log_map = {} self.root_map = {} self.M = M self.B = B self.T = T self.prime_limit = prime_limit self.eps = eps self.lp_multiplier = lp_multiplier self.multiplier = multiplier self.console = Console() print(f"B: {B}") print(f"M: {M}") print(f"prime_limit: {prime_limit}") print(f"eps: {eps}") print(f"lp_multiplier: {lp_multiplier}") @staticmethod def gcd(a, b): a, b = abs(a), abs(b) while a: a, b = b % a, a return b @staticmethod def legendre(n, p): val = pow(n, (p - 1) // 2, p) return val - p if val > 1 else val @staticmethod def jacobi(a, m): a = a % m t = 1 while a != 0: while a % 2 == 0: a //= 2 if m % 8 in [3, 5]: t = -t a, m = m, a if a % 4 == 3 and m % 4 == 3: t = -t a %= m return t if m == 1 else 0 @staticmethod def modinv(n, p): n = n % p x, u = 0, 1 while n: x, u = u, x - (p // n) * u p, n = n, p % n return x def factorise_fast(self, value, factor_base): factors = set() if 
value < 0: factors ^= {-1} value = -value for factor in factor_base[1:]: while value % factor == 0: factors ^= {factor} value //= factor return factors, value @staticmethod def tonelli_shanks(a, p): a %= p if p % 8 in [3, 7]: x = pow(a, (p + 1) // 4, p) return x, p - x if p % 8 == 5: x = pow(a, (p + 3) // 8, p) if pow(x, 2, p) != a % p: x = (x * pow(2, (p - 1) // 4, p)) % p return x, p - x d = 2 symb = 0 while symb != -1: symb = QuadraticSieve.jacobi(d, p) d += 1 d -= 1 n = p - 1 s = 0 while n % 2 == 0: n //= 2 s += 1 t = n A = pow(a, t, p) D = pow(d, t, p) m = 0 for i in range(s): i1 = pow(2, s - 1 - i) i2 = (A * pow(D, m, p)) % p i3 = pow(i2, i1, p) if i3 == p - 1: m += pow(2, i) x = (pow(a, (t + 1) // 2, p) * pow(D, m // 2, p)) % p return x, p - x @staticmethod def prime_sieve(n): sieve = [True] * (n + 1) sieve[0], sieve[1] = False, False for i in range(2, int(n**0.5) + 1): if sieve[i]: for j in range(i * 2, n + 1, i): sieve[j] = False return [i for i, is_prime in enumerate(sieve) if is_prime] def find_b(self, N): x = ceil(exp(0.5 * sqrt(log(N) * log(log(N))))) return x def choose_multiplier(self, N, B): prime_list = self.prime_sieve(B) if self.multiplier is not None: self.logger.info("Using multiplier k = %d", self.multiplier) return prime_list NUM_TEST_PRIMES = 300 LN2 = math.log(2) num_primes = min(len(prime_list), NUM_TEST_PRIMES) log2n = math.log(N) scores = [0.0 for _ in MULT_LIST] num_multipliers = 0 for i, curr_mult in enumerate(MULT_LIST): knmod8 = (curr_mult * (N % 8)) % 8 logmult = math.log(curr_mult) scores[i] = 0.5 * logmult if knmod8 == 1: scores[i] -= 2 * LN2 elif knmod8 == 5: scores[i] -= LN2 elif knmod8 in (3, 7): scores[i] -= 0.5 * LN2 num_multipliers += 1 for i in range(1, num_primes): prime = prime_list[i] contrib = math.log(prime) / (prime - 1) modp = N % prime for j in range(num_multipliers): curr_mult = MULT_LIST[j] knmodp = (modp * curr_mult) % prime if knmodp == 0 or self.legendre(knmodp, prime) == 1: if knmodp == 0: scores[j] -= contrib else: scores[j] -= 2 * contrib best_score = float('inf') best_mult = 1 for i in range(num_multipliers): if scores[i] < best_score: best_score = scores[i] best_mult = MULT_LIST[i] self.multiplier = best_mult self.logger.info("Using multiplier k = %d", best_mult) return prime_list def get_smooth_b(self, N, B, prime_list): factor_base = [-1, 2] self.prime_log_map[2] = 1 for p in prime_list[1:]: if self.legendre(N, p) == 1: factor_base.append(p) self.prime_log_map[p] = round(log2(p)) self.root_map[p] = self.tonelli_shanks(N, p) return factor_base def decide_bound(self, N, B=None): if B is None: B = self.find_b(N) self.B = B self.logger.info("Using B = %d", B) return B def build_factor_base(self, N, B, prime_list): fb = self.get_smooth_b(N, B, prime_list) self.logger.info("Factor base size: %d", len(fb)) return fb def new_poly_a(self, factor_base, N, M, poly_a_list): small_B = 1024 lower_polypool_index = 2 upper_polypool_index = small_B - 1 poly_low_found = False for i in range(small_B): if factor_base[i] > LOWER_BOUND_SIQS and not poly_low_found: lower_polypool_index = i poly_low_found = True if factor_base[i] > UPPER_BOUND_SIQS: upper_polypool_index = i - 1 break # Compute target_a and bit threshold target_a = int(math.sqrt(2 * N) / M) target_mul = 0.9 target_bits = int(target_a.bit_length() * target_mul) too_close = 10 close_range = 5 min_ratio = LOWER_BOUND_SIQS while True: poly_a = 1 afact = [] qli = [] while True: found_a_factor = False while(found_a_factor == False): randindex = random.randint(lower_polypool_index, 
upper_polypool_index) potential_a_factor = factor_base[randindex] found_a_factor = True if potential_a_factor in afact: found_a_factor = False poly_a = poly_a * potential_a_factor afact.append(potential_a_factor) qli.append(randindex) j = target_a.bit_length() - poly_a.bit_length() if j < too_close: poly_a = 1 s = 0 afact = [] qli = [] continue elif j < (too_close + close_range): break a1 = target_a // poly_a if a1 < min_ratio: continue mindiff = 100000000000000000 randindex = 0 for i in range(small_B): if abs(a1 - factor_base[i]) < mindiff: mindiff = abs(a1 - factor_base[i]) randindex = i found_a_factor = False while not found_a_factor: potential_a_factor = factor_base[randindex] found_a_factor = True if potential_a_factor in afact: found_a_factor = False if not found_a_factor: randindex += 1 if randindex > small_B: continue poly_a = poly_a * factor_base[randindex] afact.append(factor_base[randindex]) qli.append(randindex) diff_bits = (target_a - poly_a).bit_length() if diff_bits < target_bits: if poly_a in poly_a_list: if target_bits > 1000: print("SOMETHING WENT WRONG") sys.exit() target_bits += 1 continue else: break poly_a_list.append(poly_a) return poly_a, sorted(qli), set(afact) def generate_first_polynomial(self, factor_base, N, M, poly_a_list): a, qli, factors_a = self.new_poly_a(factor_base, N, M, poly_a_list) s = len(qli) B = [] for l in range(s): p = factor_base[qli[l]] r1 = self.root_map[p][0] aq = a // p invaq = self.modinv(aq, p) gamma = r1 * invaq % p if gamma > p // 2: gamma = p - gamma B.append(aq * gamma) b = sum(B) % a c = (b * b - N) // a soln_map = {} Bainv = {} for p in factor_base: Bainv[p] = [] if a % p == 0 or p == 2: continue ainv = self.modinv(a, p) # store bainv for j in range(s): Bainv[p].append((2 * B[j] * ainv) % p) # store roots r1, r2 = self.root_map[p] r1 = ((r1 - b) * ainv) % p r2 = ((r2 - b) * ainv) % p soln_map[p] = [r1, r2] return a, b, c, B, Bainv, soln_map, s, factors_a def sieve(self, N, B, factor_base, M): # ------------------------------------------------ # 1) TIMING # ------------------------------------------------ start = time.time() # ------------------------------------------------ # 2) FACTOR BASE & RELATED # ------------------------------------------------ fb_len = len(factor_base) fb_map = {val: i for i, val in enumerate(factor_base)} target_relations = fb_len + self.T large_prime_bound = B * self.lp_multiplier # ------------------------------------------------ # 3) THRESHOLD & MISC # ------------------------------------------------ threshold = int(math.log2(M * math.sqrt(N)) - self.eps) lp_found = 0 ind = 1 matrix = [0] * fb_len relations = [] roots = [] partials = {} num_poly = 0 interval_size = 2 * M + 1 grays = get_gray_code(20) poly_a_list = [] poly_ind = 0 sieve_values = [0] * interval_size r1 = 0 r2 = 0 def process_sieve_value(x, sieve_values, partials, relations, roots, a, b, c, factors_a): nonlocal ind val = sieve_values[x] sieve_values[x] = 0 lpf = 0 if val > threshold: xval = x - M relation = a * xval + b poly_val = a * xval * xval + 2 * b * xval + c local_factors, value = self.factorise_fast(poly_val, factor_base) local_factors ^= factors_a if value != 1: if value < large_prime_bound: if value in partials: rel, lf, pv = partials[value] relation *= rel local_factors ^= lf poly_val *= pv lpf = 1 else: partials[value] = (relation, local_factors, poly_val * a) return 0 else: return 0 for fac in local_factors: idx = fb_map[fac] matrix[idx] |= ind ind = ind + ind relations.append(relation) roots.append(poly_val * a) return lpf with 
Live(console=self.console) as live: while len(relations) < target_relations: if num_poly % 10 == 0: live.update(create_table(relations, target_relations, num_poly, start)) if poly_ind == 0: a, b, c, B, Bainv, soln_map, s, factors_a = self.generate_first_polynomial(factor_base, N, M, poly_a_list) end = 1 << (s - 1) poly_ind += 1 else: v, e = grays[poly_ind] b = (b + 2 * e * B[v]) c = (b * b - N) // a poly_ind += 1 if poly_ind == end: poly_ind = 0 v, e = grays[poly_ind] # v, e for next iteration for p in factor_base: if p < self.prime_limit or a % p == 0: continue log_p = self.prime_log_map[p] r1, r2 = soln_map[p] soln_map[p][0] = (r1 - e * Bainv[p][v]) % p soln_map[p][1] = (r2 - e * Bainv[p][v]) % p amx = r1 + M bmx = r2 + M apx = amx - p bpx = bmx - p k = p while k < M: sieve_values[apx + k] += log_p sieve_values[bpx + k] += log_p sieve_values[amx - k] += log_p sieve_values[bmx - k] += log_p k += p num_poly += 1 x = 0 while x < 2 * M - 6: # for some reason need to do all this for max performance gain in PyPy3 lp_found += process_sieve_value(x, sieve_values, partials, relations, roots, a, b, c, factors_a) x += 1 lp_found += process_sieve_value(x, sieve_values, partials, relations, roots, a, b, c, factors_a) x += 1 lp_found += process_sieve_value(x, sieve_values, partials, relations, roots, a, b, c, factors_a) x += 1 lp_found += process_sieve_value(x, sieve_values, partials, relations, roots, a, b, c, factors_a) x += 1 lp_found += process_sieve_value(x, sieve_values, partials, relations, roots, a, b, c, factors_a) x += 1 lp_found += process_sieve_value(x, sieve_values, partials, relations, roots, a, b, c, factors_a) x += 1 lp_found += process_sieve_value(x, sieve_values, partials, relations, roots, a, b, c, factors_a) x += 1 lp_found += process_sieve_value(x, sieve_values, partials, relations, roots, a, b, c, factors_a) x += 1 print(f"\n{num_poly} polynomials sieved") print(f"{lp_found} relations from partials") print(f"{target_relations - lp_found} normal smooth relations") print(f"{target_relations} total relations\n") return matrix, relations, roots def solve_bits(self, matrix, n): self.logger.info("Solving linear system in GF(2).") lsmap = {lsb: 1 << lsb for lsb in range(n)} # GAUSSIAN ELIMINATION m = len(matrix) marks = [] cur = -1 # m -> number of primes in factor base # n -> number of smooth relations mark_mask = 0 for row in matrix: if cur % 100 == 0: print("", end=f"{cur, m}\r") cur += 1 lsb = (row & -row).bit_length() - 1 if lsb == -1: continue marks.append(n - lsb - 1) mark_mask |= 1 << lsb for i in range(m): if matrix[i] & lsmap[lsb] and i != cur: matrix[i] ^= row marks.sort() # NULL SPACE EXTRACTION nulls = [] free_cols = [col for col in range(n) if col not in marks] k = 0 for col in free_cols: shift = n - col - 1 val = 1 << shift fin = val for v in matrix: if v & val: fin |= v & mark_mask nulls.append(fin) k += 1 if k == self.T: break return nulls def extract_factors(self, N, relations, roots, null_space): n = len(relations) for vector in null_space: prod_left = 1 prod_right = 1 for idx in range(len(relations)): bit = vector & 1 vector = vector >> 1 if bit == 1: prod_left *= relations[idx] prod_right *= roots[idx] idx += 1 sqrt_right = isqrt(prod_right) prod_left = prod_left % N sqrt_right = sqrt_right % N factor_candidate = self.gcd(N, prod_left - sqrt_right) if factor_candidate not in (1, N): other_factor = N // factor_candidate self.logger.info("Found factors: %d, %d", factor_candidate, other_factor) return factor_candidate, other_factor return 0, 0 def factor(self, N, 
B=None): overall_start = time.time() self.logger.info("========== Quadratic Sieve V4 Start ==========") self.logger.info("Factoring N = %d", N) step_start = time.time() B = self.decide_bound(N, self.B) step_end = time.time() self.logger.info("Step 1 (Decide Bound) took %.3f seconds", step_end - step_start) step_start = time.time() prime_list = self.choose_multiplier(N, self.B) step_end = time.time() self.logger.info("Step 2 (Choose Multiplier) took %.3f seconds", step_end - step_start) kN = self.multiplier * N if kN.bit_length() < 140: LOWER_BOUND_SIQS = 3 step_start = time.time() factor_base = self.build_factor_base(kN, B, prime_list) step_end = time.time() self.logger.info("Step 3 (Build Factor Base) took %.3f seconds", step_end - step_start) step_start = time.time() matrix, relations, roots = self.sieve(kN, B, factor_base, self.M) step_end = time.time() self.logger.info("Step 4 (Sieve Interval) took %.3f seconds", step_end - step_start) n = len(relations) step_start = time.time() null_space = self.solve_bits(matrix, n) step_end = time.time() self.logger.info("Step 5 (Solve Dependencies) took %.3f seconds", step_end - step_start) step_start = time.time() f1, f2 = self.extract_factors(N, relations, roots, null_space) step_end = time.time() self.logger.info("Step 6 (Extract Factors) took %.3f seconds", step_end - step_start) if f1 and f2: self.logger.info("Quadratic Sieve successful: %d * %d = %d", f1, f2, N) else: self.logger.warning("No non-trivial factors found with the current settings.") overall_end = time.time() self.logger.info("Total time for Quadratic Sieve: %.10f seconds", overall_end - overall_start) self.logger.info("========== Quadratic Sieve End ==========") return f1, f2 if __name__ == '__main__': ## 60 digit number #N = 373784758862055327503642974151754627650123768832847679663987 #qs = QuadraticSieve(B=111000, M=400000, T=10, prime_limit=45, eps=34, lp_multiplier=20000) ### 70 digit number N = 3605578192695572467817617873284285677017674222302051846902171336604399 qs = QuadraticSieve(B=300000, M=350000, prime_limit=47, eps=40, T=10, lp_multiplier=256) ## 80 digit number #N = 4591381393475831156766592648455462734389 * 1678540564209846881735567157366106310351 #qs = QuadraticSieve(B=700_000, M=600_000, prime_limit=52, eps=45, T=10, lp_multiplier=256) factor1, factor2 = qs.factor(N) Now, the main running time in the comes from the following section which is where basically a giant sieving process: def sieve(self, N, B, factor_base, M): start = time.time() fb_len = len(factor_base) fb_map = {val: i for i, val in enumerate(factor_base)} target_relations = fb_len + self.T large_prime_bound = B * self.lp_multiplier threshold = int(math.log2(M * math.sqrt(N)) - self.eps) lp_found = 0 ind = 1 matrix = [0] * fb_len relations = [] roots = [] partials = {} num_poly = 0 interval_size = 2 * M + 1 grays = get_gray_code(20) poly_a_list = [] poly_ind = 0 sieve_values = [0] * interval_size r1 = 0 r2 = 0 def process_sieve_value(x, sieve_values, partials, relations, roots, a, b, c, factors_a): nonlocal ind val = sieve_values[x] sieve_values[x] = 0 lpf = 0 if val > threshold: xval = x - M relation = a * xval + b poly_val = a * xval * xval + 2 * b * xval + c local_factors, value = self.factorise_fast(poly_val, factor_base) local_factors ^= factors_a if value != 1: if value < large_prime_bound: if value in partials: rel, lf, pv = partials[value] relation *= rel local_factors ^= lf poly_val *= pv lpf = 1 else: partials[value] = (relation, local_factors, poly_val * a) return 0 else: return 0 for fac 
in local_factors: idx = fb_map[fac] matrix[idx] |= ind ind = ind + ind relations.append(relation) roots.append(poly_val * a) return lpf with Live(console=self.console) as live: while len(relations) < target_relations: if num_poly % 10 == 0: live.update(create_table(relations, target_relations, num_poly, start)) if poly_ind == 0: a, b, c, B, Bainv, soln_map, s, factors_a = self.generate_first_polynomial(factor_base, N, M, poly_a_list) end = 1 << (s - 1) poly_ind += 1 else: v, e = grays[poly_ind] b = (b + 2 * e * B[v]) c = (b * b - N) // a poly_ind += 1 if poly_ind == end: poly_ind = 0 v, e = grays[poly_ind] # v, e for next iteration for p in factor_base: if p < self.prime_limit or a % p == 0: continue log_p = self.prime_log_map[p] r1, r2 = soln_map[p] soln_map[p][0] = (r1 - e * Bainv[p][v]) % p soln_map[p][1] = (r2 - e * Bainv[p][v]) % p amx = r1 + M bmx = r2 + M apx = amx - p bpx = bmx - p k = p while k < M: sieve_values[apx + k] += log_p sieve_values[bpx + k] += log_p sieve_values[amx - k] += log_p sieve_values[bmx - k] += log_p k += p num_poly += 1 x = 0 while x < 2 * M - 6: # for some reason need to do all this for max performance gain in PyPy3 lp_found += process_sieve_value(x, sieve_values, partials, relations, roots, a, b, c, factors_a) x += 1 lp_found += process_sieve_value(x, sieve_values, partials, relations, roots, a, b, c, factors_a) x += 1 lp_found += process_sieve_value(x, sieve_values, partials, relations, roots, a, b, c, factors_a) x += 1 lp_found += process_sieve_value(x, sieve_values, partials, relations, roots, a, b, c, factors_a) x += 1 lp_found += process_sieve_value(x, sieve_values, partials, relations, roots, a, b, c, factors_a) x += 1 lp_found += process_sieve_value(x, sieve_values, partials, relations, roots, a, b, c, factors_a) x += 1 lp_found += process_sieve_value(x, sieve_values, partials, relations, roots, a, b, c, factors_a) x += 1 lp_found += process_sieve_value(x, sieve_values, partials, relations, roots, a, b, c, factors_a) x += 1 print(f"\n{num_poly} polynomials sieved") print(f"{lp_found} relations from partials") print(f"{target_relations - lp_found} normal smooth relations") print(f"{target_relations} total relations\n") return matrix, relations, roots I have "optimized" the code as much as I can so far, and it runs pretty fast in PyPy, but I am wondering if there is anything I am missing that can tweak out more performance gain. I haven't really been able to get anything meaningful out of profiling the code because of the way that PyPy works, but through a bit of testing have made various improvements that have cut down the time a lot like some loop unrolling, precision reduction, and the way I work with the arrays. Unfortunately, this sieving code is still not fast enough to be feasible in factoring the size of numbers which I'm targeting(100 to 115 digit semiprimes). By my initial estimates with semi-decent parameter selection and without multiprocessing, the sieving code itself would taken around 70 hours when compiled using PyPy3 on my device. Is there anything I can do to better optimize this code for the PyPy JIT? | This answer provides a way to significantly speed up the code by: transformation of a fraction of the Python code into a C++ code low-level optimization of the most expensive parts parallelization of one very-expensive part The overall algorithm is left unchanged and the logic of the Python code too. 
That being said, many non-trivial optimizations are performed, and most of the changes applied are nearly impossible to do purely in PyPy (this is why I chose to move the code to C++), starting from the (efficient) use of multiple threads. In the end, this makes the code 7 times faster for 80-digit numbers (4 times faster for 70-digit ones). Note that the answer mostly focuses on the "Sieve Interval" step, though a few optimization tips are provided at the end.

Details of the optimizations performed

First of all, I profiled the code and found out that two parts of the code were slow: the while loop calling process_sieve_value many times, and the while loop filling sieve_values. The former tends to be the bottleneck on numbers with 60~70 digits, while the latter is clearly the bottleneck for numbers having 80~90 digits. Let us focus on the former first.

Make the code SIMD-friendly: To optimize the process_sieve_value-based loop, we can search for the next value where sieve_values[i] > threshold and only then call process_sieve_value with the value found, rather than calling the function many times. This enables further optimizations. Indeed, searching for sieve_values[i] > threshold can be done in a SIMD-friendly way, while the initial solution is clearly not SIMD-friendly. SIMD units of modern CPUs can operate on many items simultaneously (e.g. 8 x 32-bit integers, or even 16 x 16-bit integers) with a cost close to a scalar operation (for most operations except very expensive ones like integer divisions). This means code compiled to use SIMD instructions can be much faster than code using scalar instructions. PyPy is generally unable to do such an optimization, mainly because vectorizing code is a very expensive operation (not great for JITs like PyPy) and Python code tends to be really hard to vectorize efficiently, when it is possible at all. This searching operation is done in C++.

Converting lists to Numpy arrays: One issue with this optimization (like the others) is that sieve_values is a list. Lists are not very efficient (because they contain references to dynamically-typed, garbage-collected, dynamically-allocated Python objects) and we cannot (easily) operate on them with modules like CFFI, Cython, etc. Thus, lists should be converted to native-friendly data structures like plain arrays. Numpy is the perfect module for that. However, converting lists to Numpy arrays is painfully slow in PyPy. Getting/setting items is insanely slow too (10 times slower than CPython, which is already >200 times slower than C++ for that). The key to making this fast is, unfortunately, to write a C++ function to load/store values in Numpy arrays. This is clearly sub-optimal (i.e. one function call per access) but surprisingly fast in practice compared to all other alternatives. It is actually fast enough to be competitive with list accesses!

Converting dictionaries to Numpy arrays: Dictionaries are pretty expensive, even in C++. Moreover, like lists, Python dictionaries do not map well to native code. Statements like soln_map[p][0] = (r1 - e * Bainv[p][v]) % p are clearly a problem. Not only are they pretty slow (not a bottleneck until the rest of the code has been optimized), they also prevent transforming the loop containing the statement into native code (at least neither easily nor efficiently). The thing is, such dictionaries are read in a p-based loop (so they are read in order). Thus, we can simply replace such dictionaries with plain 2D Numpy arrays.
There is a catch, though: the array needs to be transposed to be read in a cache-friendly way. Indeed, v is a constant in the loop, so the contiguous axis has to be the one indexing p instead of v.

Converting big numbers: Unfortunately, >64-bit numbers cannot be provided directly to native code (not supported yet). Thus, one way to pass large integers to native code is simply to split them into 64-bit parts. It is a bit tedious, but this is fine for <192-bit numbers since it only results in 3 parts (+1 for the sign).

Optimizing the factorization: PyPy turns out to be relatively efficient at computing the modulo of large numbers since it uses different algorithms depending on the size of the integers (small integers are internally a different type). That being said, we can write slightly faster code by checking whether the high bits of the target large number are zero, in order to reduce the number of (very expensive) native 64-bit modulo operations. Having large numbers stored in parts turns out to be good for both convenience and performance here: we can check the first 64-bit part trivially and natively. Besides, PyPy needs to follow the semantics of Python (a modulo must always be positive) while C++ code does not (we generally know that the operands are positive, so there is no need to guarantee that with additional instructions).

Optimizing the update of sieve_values: This is clearly the hardest part of the algorithm to optimize. The operation is memory-bound and also hard to parallelize due to the memory access pattern (a huge stride for relatively small p and pseudo-random memory accesses for large p). Each memory access tends to result in a full cache-line fetch (slow). For large N (requiring a rather large M for the code to be fast), the whole array tends to be so big that it does not fit in the L1/L2 cache of most mainstream CPUs. The key to making this operation cache-friendly is to store sieve_values in a very compact way. I chose to store items as 8-bit integers. Since this is not enough to hold every value in the array, the idea is to accumulate the log_p values in 8-bit items as much as possible and fall back on a 16-bit array when an item of the 8-bit array would overflow (the item is then reset to 0 to mitigate the number of accesses to the 16-bit array). Accesses to the 16-bit array are unlikely since log_p values are generally tiny (they are all less than 20 in practice). This part can be parallelized (though it clearly does not scale): we can allocate multiple temporary 8-bit arrays for accumulating the log_p values and finally sum the 8-bit arrays into sieve_values. This means the quite-expensive divisions are fully executed in parallel. Unfortunately, the 8-bit array accesses do not scale well: only 3-4 threads are enough to saturate the memory hierarchy on my machine (i5-9600KF CPU). Even worse: more threads result in random-like memory accesses over a wider memory area, and the L3 cache may not be large enough to contain all the arrays. If this happens, the DRAM is used instead, which makes things significantly slower due to its much higher latency and significantly lower throughput. Unfortunately, I did not find any additional (significant) optimization for this expensive part, and it is still a bottleneck on large 80~90 digit numbers in the optimized code...
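The actual C++ kernel implementing this scheme is shown further down. Purely as an illustration of the accumulate-in-uint8-then-spill-to-uint16 idea described above (this is not the exact kernel from this answer, and the helper name spill_add is made up for the example), a NumPy-level sketch could look like this:

# Illustrative sketch of the 8-bit/16-bit accumulation idea (not the real kernel).
import numpy as np

def spill_add(scratch_u8, sieve_u16, idx, log_p):
    # scratch_u8: compact per-thread accumulator; sieve_u16: the real sieve array
    if int(scratch_u8[idx]) + log_p < 256:        # common path: stays within 8 bits
        scratch_u8[idx] += log_p
    else:                                         # rare spill: flush into the 16-bit array
        sieve_u16[idx] += int(scratch_u8[idx]) + log_p
        scratch_u8[idx] = 0

scratch = np.zeros(16, dtype=np.uint8)
sieve = np.zeros(16, dtype=np.uint16)
for _ in range(40):                               # many tiny log_p increments at the same slot
    spill_add(scratch, sieve, 3, 7)
sieve += scratch                                  # final reduction, done once per polynomial
print(sieve[3])                                   # 280 == 40 * 7

The point is that the hot accumulation path touches only one byte per slot, which is what keeps the working set small enough to stay in cache.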
Note that the optimized code might better scale on some recent CPUs having significantly bigger caches than my CPU (especially the L3 cache on AMD X3D-like CPUs, and the L2 of all recent x86-64 CPU). Besides, note that tuning the B and M parameters of the algorithm can help to reduce the bottleneck a bit. Actual optimized code This section provide the optimized C++ code, parts of the modified Python code (too big to fit in this answer anyway) and additional resources (e.g. how to build the libraries). I used CFFI to call it from PyPy (pretty-fast in PyPy and relatively simple to use). Here is the section of the optimized Python code: # Include-like section import numpy as np from kernel_cffi import lib as kernel import cffi ffi = cffi.FFI() # Convenient function to extract the pointer of Numpy arrays def get_pointer(numpy_array): return ffi.cast(f'{numpy_array.dtype}_t*', ffi.from_buffer(numpy_array)) # Modified function so to build and return Numpy arrays instead of dicts/lists def generate_first_polynomial(self, factor_base, N, M, poly_a_list): a, qli, factors_a = self.new_poly_a(factor_base, N, M, poly_a_list) s = len(qli) B = [] for l in range(s): p = factor_base[qli[l]] r1 = self.root_map[p][0] aq = a // p invaq = self.modinv(aq, p) gamma = r1 * invaq % p if gamma > p // 2: gamma = p - gamma B.append(aq * gamma) b = sum(B) % a c = (b * b - N) // a factor_base_size = len(factor_base) np_Bainv = np.zeros((s,factor_base_size), dtype=np.int32) np_Bainv_ptr = get_pointer(np_Bainv) np_soln_map = np.zeros((2,factor_base_size), dtype=np.int32) np_soln_map_ptr = get_pointer(np_soln_map) for i,p in enumerate(factor_base): if a % p == 0 or p == 2: continue ainv = self.modinv(a, p) # store bainv for j in range(s): kernel.set_i32_2D(np_Bainv_ptr, factor_base_size, j, i, (2 * B[j] * ainv) % p) # store roots r1, r2 = self.root_map[p] r1 = ((r1 - b) * ainv) % p r2 = ((r2 - b) * ainv) % p kernel.set_i32_2D(np_soln_map_ptr, factor_base_size, 0, i, r1) kernel.set_i32_2D(np_soln_map_ptr, factor_base_size, 1, i, r2) return a, b, c, B, np_Bainv, np_soln_map, s, factors_a # Main modified function (without the same old initialization code) def sieve(self, N, B, factor_base, M): # [...] 
same as before (initialization in 3 steps) # ------------------------------------------------ # 4) CONVERSIONS # ------------------------------------------------ np_factor_base = np.array(factor_base, dtype=np.int32) np_factor_base_ptr = get_pointer(np_factor_base) np_factor_base_size = np_factor_base.size np_prime_log_map = np.zeros(np_factor_base_size, dtype=np.int32) np_prime_log_map_ptr = get_pointer(np_prime_log_map) for i,p in enumerate(factor_base[1:]): kernel.set_i32_1D(np_prime_log_map_ptr, i+1, self.prime_log_map[p]) np_sieve_values = np.zeros(interval_size, dtype=np.int16) np_sieve_values_ptr = get_pointer(np_sieve_values) np_sieve_values_size = np_sieve_values.size def process_sieve_value_new(x, partials, relations, roots, a, b, c, factors_a): nonlocal ind lpf = 0 xval = x - M relation = a * xval + b poly_val = a * xval * xval + 2 * b * xval + c assert poly_val < (1 << 190) poly_val_sign = 1 if poly_val >= 0 else -1 poly_val_hi = (abs(poly_val) >> 128) & 0xFFFFFFFF_FFFFFFFF poly_val_mi = (abs(poly_val) >> 64) & 0xFFFFFFFF_FFFFFFFF poly_val_lo = abs(poly_val) & 0xFFFFFFFF_FFFFFFFF factor_buff = np.zeros(192, dtype=np.int32) factor_buff_ptr = get_pointer(factor_buff) value = kernel.factorise(factor_buff_ptr, poly_val_sign, poly_val_hi, poly_val_mi, poly_val_lo, np_factor_base_ptr, np_factor_base_size) local_factors = set(factor_buff[factor_buff > 0].tolist()) local_factors ^= factors_a if value != 1: if value < large_prime_bound: if value in partials: rel, lf, pv = partials[value] relation *= rel local_factors ^= lf poly_val *= pv lpf = 1 else: partials[value] = (relation, local_factors, poly_val * a) return 0 else: return 0 for fac in local_factors: idx = fb_map[fac] matrix[idx] |= ind ind = ind + ind relations.append(relation) roots.append(poly_val * a) return lpf with Live(console=self.console) as live: while len(relations) < target_relations: if num_poly % 10 == 0: live.update(create_table(relations, target_relations, num_poly, start)) if poly_ind == 0: a, b, c, B, np_Bainv, np_soln_map, s, factors_a = self.generate_first_polynomial(factor_base, N, M, poly_a_list) np_soln_map_ptr = get_pointer(np_soln_map) np_Bainv_ptr = get_pointer(np_Bainv) end = 1 << (s - 1) poly_ind += 1 else: v, e = grays[poly_ind] b = (b + 2 * e * B[v]) c = (b * b - N) // a poly_ind += 1 if poly_ind == end: poly_ind = 0 v, e = grays[poly_ind] # v, e for next iteration assert 0 <= a < (1 << 190) a_abs = abs(a) a_sign = 1 if a >= 0 else -1 a_hi = (a_abs >> 128) & 0xFFFFFFFF_FFFFFFFF a_mi = (a_abs >> 64) & 0xFFFFFFFF_FFFFFFFF a_lo = a_abs & 0xFFFFFFFF_FFFFFFFF kernel.compute_big_loop(np_factor_base_ptr, np_factor_base_size, np_sieve_values_ptr, np_sieve_values_size, np_Bainv_ptr, np_soln_map_ptr, np_prime_log_map_ptr, a_sign, a_hi, a_mi, a_lo, self.prime_limit, M, v, e) num_poly += 1 # Sanity checks assert 0 <= threshold < 16384 # Check overflows assert np_sieve_values_size == 2 * M - 6 + 7 x = 0 xmax = np_sieve_values_size #2 * M - 6 + 7 # Correct? 
while True: x = kernel.search_next(np_sieve_values_ptr, x, xmax, threshold) if x >= xmax: break lp_found += process_sieve_value_new(x, partials, relations, roots, a, b, c, factors_a) x += 1 print(f"\n{num_poly} polynomials sieved") print(f"{lp_found} relations from partials") print(f"{target_relations - lp_found} normal smooth relations") print(f"{target_relations} total relations\n") return matrix, relations, roots Here is the C++ code: #include <cstdio> #include <cstdlib> #include <cstdint> #include <cassert> #include <algorithm> #include <unordered_set> #include <vector> #include <omp.h> // Faster than Clang types (mainly for 256-bit integers) #include <boost/multiprecision/cpp_int.hpp> using namespace boost::multiprecision; // Unsafe 32-bit contiguous array single load extern "C" int32_t get_i32_1D(int32_t* sieve_values, int32_t pos) { return sieve_values[pos]; } // Unsafe 32-bit contiguous array single store extern "C" void set_i32_1D(int32_t* sieve_values, int32_t pos, int32_t val) { sieve_values[pos] = val; } // Unsafe 32-bit contiguous array single load extern "C" int32_t get_i32_2D(int32_t* sieve_values, int32_t ld, int32_t n, int32_t m) { return sieve_values[n*ld+m]; } // Unsafe 32-bit contiguous array single store extern "C" void set_i32_2D(int32_t* sieve_values, int32_t ld, int32_t n, int32_t m, int32_t val) { sieve_values[n*ld+m] = val; } static bool is_factor(uint64_t poly_val_hi, uint64_t poly_val_mi, uint64_t poly_val_lo, int32_t factor) { //assert(factor > 1); uint64_t value = 0; // Speed up the computation for small numbers if(poly_val_hi == 0) { // Speed up the computation for tiny numbers if(poly_val_mi == 0) return (poly_val_lo % factor) == 0; value = poly_val_mi % factor; } else { value = poly_val_hi % factor; value = (value << 32) | (poly_val_mi >> 32); value %= factor; value = (value << 32) | (poly_val_mi & 0xFFFFFFFF); value %= factor; } value = (value << 32) | (poly_val_lo >> 32); value %= factor; value = (value << 32) | (poly_val_lo & 0xFFFFFFFF); value %= factor; return value == 0; } static void compute_loop(uint8_t* worker_sieve_values_ptr, int16_t* sieve_values, int32_t p, int32_t r1, int32_t r2, int32_t M, int32_t log_p) { const int32_t amx = r1 + M; const int32_t bmx = r2 + M; const int32_t apx = amx - p; const int32_t bpx = bmx - p; auto safe_inc = [=] (int32_t idx, int32_t inc){ uint8_t& value = worker_sieve_values_ptr[idx]; if (int32_t(value) + inc < 256) [[likely]] { value += inc; } else { #pragma omp atomic sieve_values[idx] += value; value = 0; } }; for(int32_t k = p; k < M; k += p) { safe_inc(apx + k, log_p); safe_inc(bpx + k, log_p); safe_inc(amx - k, log_p); safe_inc(bmx - k, log_p); } } int32_t py_mod(int32_t n, int32_t d) { // assert(d > 0); const int32_t r = n % d; return r + (r < 0 ? d : 0); } struct SavedValues { int32_t idx, r1, r2; }; extern "C" void compute_big_loop(int32_t* factor_base, int32_t factor_base_size, int16_t* sieve_values, int32_t sieve_values_size, int32_t* Bainv, int32_t* soln_map, int32_t* prime_log_map, int32_t a_sign, uint64_t a_hi, uint64_t a_mi, uint64_t a_lo, int32_t prime_limit, int32_t M, int32_t v, int32_t e) { const int num_threads = 4; std::vector<uint8_t*> workers_sieve_values_ptrs; workers_sieve_values_ptrs.reserve(num_threads); // Spawn a thread-pool with only few threads since the computation // does not scale well (seems rather memory-bound). 
// Using too many threads cause data not to fit in the L3 cache anymore and // DRAM accesses are significantly slower, not to mention the DRAM can // quickly be saturated, so the additional threads are not really useful. // Note that this is strongly dependent of the B parameter of the algorithm // and the size of the cache on the target platform. #pragma omp parallel num_threads(num_threads) { // Make the items as compact as possible to fit in the CPU cache (critical for performance) std::vector<uint8_t> worker_sieve_values(sieve_values_size); uint8_t* worker_sieve_values_ptr = worker_sieve_values.data(); // Record the thread-local arrays of each worker so to sum them up later #pragma omp critical workers_sieve_values_ptrs.push_back(worker_sieve_values_ptr); // Execute many `compute_loop` calls in parallel on thread-local arrays #pragma omp for schedule(static,1) for (size_t i = 0; i < factor_base_size; ++i) { const int32_t p = factor_base[i]; assert(p > 0 || p == -1); if(p < prime_limit or p == -1 or is_factor(a_hi, a_mi, a_lo, p)) continue; assert(p > 0); const int32_t r1 = get_i32_2D(soln_map, factor_base_size, 0, i); const int32_t r2 = get_i32_2D(soln_map, factor_base_size, 1, i); const int32_t Bainv_val = get_i32_2D(Bainv, factor_base_size, v, i); assert(prime_log_map[i] < 32); const uint8_t log_p = prime_log_map[i]; set_i32_2D(soln_map, factor_base_size, 0, i, py_mod(r1 - e * Bainv_val, p)); set_i32_2D(soln_map, factor_base_size, 1, i, py_mod(r2 - e * Bainv_val, p)); // This part is the bottleneck on big numbers because of random // accesses on large arrays (typically stored in DRAM) compute_loop(worker_sieve_values_ptr, sieve_values, p, r1, r2, M, log_p); } // Sum up all the thread-local arrays in parallel #pragma omp for schedule(static) for (int32_t ib = 0; ib < sieve_values_size; ib+=128) { int16_t tmp[128] = {0}; for (int32_t w = 0; w < workers_sieve_values_ptrs.size(); ++w) for (int32_t i = 0; i < std::min(128, sieve_values_size-ib); ++i) tmp[i] += workers_sieve_values_ptrs[w][ib+i]; for (int32_t i = 0; i < std::min(128, sieve_values_size-ib); ++i) sieve_values[ib+i] = tmp[i]; } } } extern "C" uint64_t factorise(int32_t* final_factors, int32_t poly_val_sign, uint64_t poly_val_hi, uint64_t poly_val_mi, uint64_t poly_val_lo, int32_t* factor_base, int32_t factor_base_size) { std::unordered_set<int32_t> factors; if(poly_val_sign < 0) factors.insert(-1); uint256_t value = (uint256_t(poly_val_hi) << 128u) | (uint256_t(poly_val_mi) << 64u) | uint256_t(poly_val_lo); int32_t i; for (int32_t i = 1; i < factor_base_size; ++i) { const int32_t factor = factor_base[i]; while(is_factor(poly_val_hi, poly_val_mi, poly_val_lo, factor)) { // Rarely executed if(factors.erase(factor) == 0) factors.insert(factor); value /= factor; poly_val_hi = uint64_t(value >> 128u); poly_val_mi = uint64_t(value >> 64u); poly_val_lo = uint64_t(value); } } assert(factors.size() <= 192); std::copy(factors.begin(), factors.end(), final_factors); std::sort(final_factors, final_factors + factors.size()); assert(value < (uint256_t(1) << 63u)); return uint64_t(value); } extern "C" int32_t search_next(int16_t* sieve_values, int32_t x, int32_t xmax, int16_t threshold) { while(x < xmax) { // Fast path (SIMD-friendly) while (x + 32 < xmax) { bool found = false; for (int i = 0; i < 32; ++i) found |= sieve_values[x+i] > threshold; if(found) break; memset(sieve_values+x, 0, 32*sizeof(int16_t)); x += 32; } const int16_t val = sieve_values[x]; sieve_values[x] = 0; if(val > threshold) [[unlikely]] return x; x++; } return xmax; } 
Here is the CFFI wrapper (just the prototypes of the C++ extern "C" function): from cffi import FFI ffibuilder = FFI() ffibuilder.cdef(""" int32_t search_next(int16_t* sieve_values, int32_t x, int32_t xmax, int16_t threshold); uint64_t factorise(int32_t* factors, int32_t poly_val_sign, uint64_t poly_val_hi, uint64_t poly_val_mi, uint64_t poly_val_lo, int32_t* factor_base_ptr, int32_t factor_base_size); int32_t get_i32_1D(int32_t* sieve_values, int32_t pos); void set_i32_1D(int32_t* sieve_values, int32_t pos, int32_t val); int32_t get_i32_2D(int32_t* sieve_values, int32_t ld, int32_t n, int32_t m); void set_i32_2D(int32_t* sieve_values, int32_t ld, int32_t n, int32_t m, int32_t val); void compute_big_loop(int32_t* factor_base, int32_t factor_base_size, int16_t* sieve_values, int32_t sieve_values_size, int32_t* Bainv, int32_t* soln_map, int32_t* prime_log_map, int32_t a_sign, uint64_t a_hi, uint64_t a_mi, uint64_t a_lo, int32_t prime_limit, int32_t M, int32_t v, int32_t e); """) # There is probably a cleaner way than copy-pasting the cdef code and just prefix the lines with "extern" ffibuilder.set_source( 'kernel_cffi', ''' extern int32_t search_next(int16_t* sieve_values, int32_t x, int32_t xmax, int16_t threshold); extern uint64_t factorise(int32_t* final_factors, int32_t poly_val_sign, uint64_t poly_val_hi, uint64_t poly_val_mi, uint64_t poly_val_lo, int32_t* factor_base_ptr, int32_t factor_base_size); extern int32_t get_i32_1D(int32_t* sieve_values, int32_t pos); extern void set_i32_1D(int32_t* sieve_values, int32_t pos, int32_t val); extern int32_t get_i32_2D(int32_t* sieve_values, int32_t ld, int32_t n, int32_t m); extern void set_i32_2D(int32_t* sieve_values, int32_t ld, int32_t n, int32_t m, int32_t val); extern void compute_big_loop(int32_t* factor_base, int32_t factor_base_size, int16_t* sieve_values, int32_t sieve_values_size, int32_t* Bainv, int32_t* soln_map, int32_t* prime_log_map, int32_t a_sign, uint64_t a_hi, uint64_t a_mi, uint64_t a_lo, int32_t prime_limit, int32_t M, int32_t v, int32_t e); ''', libraries=['kernel'], include_dirs=['build'], library_dirs=['.', 'build'] ) ffibuilder.compile(tmpdir='build', verbose=False) Here is the command-line to compile the libraries: mkdir -p build clang++ -std=c++17 -O3 -mavx2 -fPIC -c kernel.cpp -o build/kernel.o -fopenmp clang++ -shared -fPIC build/kernel.o -o build/libkernel.so -fopenmp ./pypy/bin/pypy build_cffi_module.py And the one to run the code: PYTHONPATH=build LD_LIBRARY_PATH=build ./pypy/bin/pypy main.py Further optimizations The factorization can be parallelized so to only take a fraction of the overall execution. However, this requires to transform the Python while loop calling search_next and process_sieve_value_new mostly into C++. This means more variables must be converted to Numpy arrays like at least xval, relation and poly_val which might be large integers (tedious to pass to C++ code). All the call to factorise can be done in parallel in C++. The rest of the process_sieve_value_new function does not have to be transformed to C++ code since you can store all the resulting value and then run the remaining code of process_sieve_value_new. This reduce the amount of work needed to do the transformation (and still most of the code is Python one). This optimization should significantly reduce the time to factorize small numbers but not so much for big numbers (especially beyond 90-digit numbers). The "Solving linear system in GF(2)" step can be massively optimized. 
However, this first requires converting the huge numbers stored in matrix to native-integer-based arrays. You can then convert the code into C++ like I did for the other parts of the code. Then 3 main optimizations can be applied: make the code SIMD-friendly so the compiler can auto-vectorize it (the current PyPy code wastes most of its time performing inefficient scalar moves in this part); perform the Gaussian elimination block by block (to make the operation less memory-bound); and try to parallelize the Gaussian elimination. Besides, AFAIK there are more efficient algorithms for this part (linear algebra in GF(2)): Wikipedia mentions the Block Wiedemann algorithm. On top of that, using GPUs for this specific part might also help (especially since the matrix seems to be a dense one). | 1 | 1
79,336,731 | 2025-1-7 | https://stackoverflow.com/questions/79336731/mock-date-today-but-leave-other-date-methods-alone | I am trying to test some python code that involves setting/comparing dates, and so I am trying to leverage unittest.mock in my testing (using pytest). The current problem I'm hitting is that using patch appears to override all the other methods for the patched class (datetime.date) and so causes other errors because my code is using other methods of the class. Here is a simplified version of my code. #main.py from datetime import date, timedelta, datetime def date_distance_from_today(dt: str | date) -> timedelta: if not isinstance(dt, date): dt = datetime.strptime(dt, "%Y-%m-%d").date() return date.today() - dt #tests.py from datetime import date, timedelta from unittest.mock import patch from mock_experiment import main def test_normal(): # passes fine today, Jan 7 assert main.date_distance_from_today(date(2025, 1, 1)) == timedelta(6) def test_normal_2(): # passes fine today, Jan 7 assert main.date_distance_from_today("2025-01-01") == timedelta(6) def test_with_patch_on_date(): # exception thrown with patch("mock_experiment.main.date") as patch_date: patch_date.today.return_value = date(2025, 1, 2) assert main.date_distance_from_today(date(2025, 1, 1)) == timedelta(1) When I run these tests, the first two pass but the third gets the following exception: def func1(dt: str | date) -> timedelta: > if not isinstance(dt, date): E TypeError: isinstance() arg 2 must be a type, a tuple of types, or a union This makes sense to me (although not what I want) since I borked out the date object and turned it into a MagicMock and so it doesn't get handled how I want in this isinstance call. I also tried patching date.today, which also failed as shown below: def test_with_mock_on_today(): with patch("mock_experiment.main.date.today") as patch_today: patch_today.return_value = date(2025, 1, 2) assert main.distance_from_today(date(2025, 1, 1)) == timedelta(1) Exception TypeError: cannot set 'today' attribute of immutable type 'datetime.date' | Description of changes to the file main.py I have found a possible solution by the modification of the import in your production code (main.py): instead of import datetime from the module datetime I add the import of the module datetime: # following are my imports import datetime from datetime import date, timedelta # this was your import #from datetime import date, timedelta, datetime to reflect the changes in the import, in the code the invocation of the function strptime() has become datetime.datetime.strptime() instead datetime.strptime() furthermore the invocation of the function today() has become datetime.date.today() instead date.today() Description of changes to the file tests.py To remain compliant with the production code I have changed the test method code test_with_patch_on_date() with the modification of the path of the patch(): # this is your patch() #with patch("mock_experiment.main.date") as patch_date: # the following is my patch() with patch('mock_experiment.main.datetime.date') as patch_date: The new code So the code of main.py has become the following: #main.py # following are my imports import datetime from datetime import date, timedelta # this was your import #from datetime import date, timedelta, datetime def date_distance_from_today(dt: str | date) -> timedelta: if not isinstance(dt, date): # HERE I HAVE USED datetime.datetime.strptime() instead datetime.strptime() dt = datetime.datetime.strptime(dt, "%Y-%m-%d").date() # 
HERE I HAVE USED datetime.date.today() instead of date.today()
    return datetime.date.today() - dt

while the code of the test file has become:

import unittest
from datetime import date, timedelta
from unittest.mock import patch

from mock_experiment import main


class MyTestCase(unittest.TestCase):

    def test_normal(self):
        # passes fine today, Jan 10
        assert main.date_distance_from_today(date(2025, 1, 1)) == timedelta(9)

    def test_normal_2(self):
        # passes fine today, Jan 10
        assert main.date_distance_from_today("2025-01-01") == timedelta(9)

    def test_with_patch_on_date(self):
        # exception thrown before, but now passes
        # this is your patch()
        #with patch("mock_experiment.main.date") as patch_date:
        # the following is my patch()
        with patch('mock_experiment.main.datetime.date') as patch_date:
            patch_date.today.return_value = date(2025, 1, 2)
            assert main.date_distance_from_today(date(2025, 1, 1)) == timedelta(1)


if __name__ == '__main__':
    unittest.main()

With these modifications the 3 tests pass, and this is the output on my system:

...
----------------------------------------------------------------------
Ran 3 tests in 0.003s

OK

Note: I haven't used pytest; I have used the unittest module, so the test functions in my code are methods of the test class MyTestCase. | 2 | 1
79,336,417 | 2025-1-7 | https://stackoverflow.com/questions/79336417/why-is-my-shared-memory-reading-zeroes-on-macos | I am writing an interface to allow communication between a main program written in C and extension scripts written in python and run in a separate python interpreter process. The interface uses a UNIX socket for small amounts of data and POSIX shared memory for large arrays. The C program handles all creation, resource tracking and final unlinking of shared memory. This works perfectly on Linux. I can transfer data between the two processes as expected using the shm. However when exactly the same code runs on MacOS, although it runs without error the shared memory is always full of zeroes when read from the other process to the one that populated the memory. e.g. if I write image data into the shm from C, and read it from python, it's all zero. If I write image data into the shm from python and read it from C, again it's all zero. I create the shm in C as follows: (some error handling lines removed for clarity) void *shm_ptr = NULL; snprintf(shm_name_ptr, 30, "/%08x%08x%08x%04x", my_random_int(), my_random_int(), my_random_int(), my_random_int()); debug_print("shm name: %s\n", shm_name_ptr); *fd = shm_open(shm_name_ptr, O_CREAT | O_RDWR | O_EXCL, S_IRUSR | S_IWUSR); ftruncate(*fd, aligned_size) == -1); shm_ptr = mmap(NULL, (size_t) aligned_size, PROT_READ | PROT_WRITE, MAP_SHARED, *fd, 0); *shm_ptr_ptr = shm_ptr; And then the details are passed to python through the socket. I haven't reproduced the details of that because the code is fairly long with #ifdefs for Windows etc., but the socket mechanism provably works and I can print the shm name as it is created in C and as it is received in python and show that they are the same: shm name: /b6c31655f708d0e20760b60bc483 SHM allocation: Original size: 25941632, Aligned size: 25944064, Page size: 4096 Truncating shm file to 25941632 bytes log: b'/b6c31655f708d0e20760b60bc483' (The first lines of text are printed from C, the last is printed from python.) The __init__ function from the wrapper class I'm using in python to open the shm is shown below: class SharedMemoryWrapper: """ Wrapper class to handle shared memory creation and cleanup across platforms. """ def __init__(self, name: str, size: int): self.name = name self.size = size # Store intended size separately self._shm = None try: # First try to attach to existing shared memory self._shm = shared_memory.SharedMemory(name=self.name) unregister(self._shm._name, "shared_memory") except FileNotFoundError: # If it doesn't exist, create new shared memory print("Existing SHM not found, creating a new one...") self._shm = shared_memory.SharedMemory(name=self.name, create=True, size=self.size) (The unregister() call is there because the SHM is always allocated, tracked and cleaned up by the C program and passed to the python program, so this suppresses warning messages about shm leaks when the python script exits.) Once the name and size are received over the socket the shared memory object is initialized using SharedMemoryWrapper(name=name_from_socket, size=size_from_socket) On Linux the try block works and the shm is opened correctly, however on MacOS I see the message "Existing SHM not found, creating a new one..." Please can someone explain what is different about MacOS here, and how can I fix it so that the existing shm is opened correctly? | I'm a colleague of OP and have looked into the issue we were having. 
When creating a shared memory object on the C side, use a leading slash in its name:

int fd = shm_open("/mem", O_RDWR|O_CREAT, S_IRUSR|S_IWUSR);

When accessing it on the Python side, do not use a leading slash, as it is added automatically on POSIX systems:

shm = shared_memory.SharedMemory(name="mem", create=False)

And now Python is able to find the object.

Supplemental edit by OP: this is indeed the answer, and having updated the start of the __init__ function above to:

def __init__(self, name: str, size: int):
    if os.name != "nt":
        name = name.lstrip('/')  # Remove leading '/' on POSIX systems
                                 # because SharedMemory.__init__ will add it back
    self.name = name
    self.size = size  # Store intended size separately
    self._shm = None
    ...

everything now works. | 3 | 3
79,337,064 | 2025-1-7 | https://stackoverflow.com/questions/79337064/how-to-run-async-code-in-ipython-startup-files | I have set IPYTHONDIR=.ipython, and created a startup file at .ipython/profile_default/startup/01_hello.py. Now, when I run ipython, it executes the contents of that file as if they had been entered into the IPython shell. I can run sync code this way: # contents of 01_hello.py print( "hello!" ) $ ipython Python 3.12.0 (main, Nov 12 2023, 10:40:37) [GCC 11.4.0] Type 'copyright', 'credits' or 'license' for more information IPython 8.31.0 -- An enhanced Interactive Python. Type '?' for help. hello In [1]: I can also run async code directly in the shell: # contents of 01_hello.py print( "hello!" ) async def foo(): print( "foo" ) $ ipython Python 3.12.0 (main, Nov 12 2023, 10:40:37) [GCC 11.4.0] Type 'copyright', 'credits' or 'license' for more information IPython 8.31.0 -- An enhanced Interactive Python. Type '?' for help. hello In [1]: await foo() foo In [2]: However, I cannot run async code in the startup file, even though it's supposed to be as if that code was entered into the shell: # contents of 01_hello.py print( "hello!" ) async def foo(): print( "foo" ) await foo() $ ipython Python 3.12.0 (main, Nov 12 2023, 10:40:37) [GCC 11.4.0] Type 'copyright', 'credits' or 'license' for more information IPython 8.31.0 -- An enhanced Interactive Python. Type '?' for help. [TerminalIPythonApp] WARNING | Unknown error in handling startup files: File ~/proj/.ipython/profile_default/startup/01_imports.py:5 await foo() ^ SyntaxError: 'await' outside function Question: Why doesn't this work, and is there a way to run async code in the startup file without explicitly starting a new event loop just for that? (asyncio.run()) Doing that wouldn't make sense, since that event loop would have to close by the end of the file, which makes it impossible to do any initialization work that involves context vars (which is where Tortoise-ORM stores its connections), which defeats the purpose. Or stated differently: How can I access the event loop that IPython starts for the benefit of the interactive shell? | From version 8, ipython uses a function called get_asyncio_loop to get access to the event loop that it runs async cells on. You can use this event loop during your startup script to run any tasks you want on the same event loop that async cells will run on. NB. This is only uses for the asyncio package in Python's standard library and not any other async libraries (such as trio). from IPython.core.async_helpers import get_asyncio_loop as _get_asyncio_loop async def foo(): print("foo") _get_asyncio_loop().run_until_complete(foo()) Caveat The event loop that ipython uses DOES NOT run in the background. What this means is that unless you are running an async cell, no tasks that you have started will be running. ie. None of your Tortoise ORM connections will be serviced, which may cause them to break. As such, you may need to run your Tortoise ORM in a separate event loop anyway, and write some glue for passing data back and forth between the two event loops. | 1 | 2 |
79,337,434 | 2025-1-7 | https://stackoverflow.com/questions/79337434/whats-the-best-way-to-use-a-sklearn-feature-selector-in-a-grid-search-to-evalu | I am training a sklearn classifier, and inserted in a pipeline a feature selection step. Via grid search, I would like to determine what's the number of features that allows me to maximize performance. Still, I'd like to explore in the grid search the possibility that no feature selection, just a "passthrough" step, is the optimal choice to maximize performance. Here's a reproducible example: import seaborn as sns from sklearn.pipeline import Pipeline from sklearn.model_selection import GridSearchCV from sklearn.linear_model import LogisticRegression from sklearn.feature_selection import SequentialFeatureSelector from sklearn.preprocessing import StandardScaler, OneHotEncoder from sklearn.compose import ColumnTransformer from sklearn.impute import SimpleImputer # Load the Titanic dataset titanic = sns.load_dataset('titanic') # Select features and target features = ['age', 'fare', 'sex'] X = titanic[features] y = titanic['survived'] # Preprocessing pipelines for numeric and categorical features numeric_features = ['age', 'fare'] numeric_transformer = Pipeline(steps=[ ('imputer', SimpleImputer(strategy='constant')), ('scaler', StandardScaler()) ]) categorical_features = ['sex'] categorical_transformer = Pipeline(steps=[ ('imputer', SimpleImputer(strategy='constant')), ('onehot', OneHotEncoder(drop='first')) ]) # Combine preprocessing steps preprocessor = ColumnTransformer(transformers=[ ('num', numeric_transformer, numeric_features), ('cat', categorical_transformer, categorical_features) ]) # Initialize classifier and feature selector clf = LogisticRegression(max_iter=1000, solver='liblinear') sfs = SequentialFeatureSelector(clf, direction='forward') # Create a pipeline that includes preprocessing, feature selection, and classification pipeline = Pipeline(steps=[ ('preprocessor', preprocessor), ('feature_selection', sfs), ('classifier', clf) ]) # Define the parameter grid to search over param_grid = { 'feature_selection__n_features_to_select': [2], 'classifier__C': [0.1, 1.0, 10.0], # Regularization strength } # Create and run the grid search grid_search = GridSearchCV(pipeline, param_grid, cv=5) grid_search.fit(X, y) # Output the best parameters and score print("Best parameters found:", grid_search.best_params_) print("Best cross-validation score:", grid_search.best_score_) X here has three features (even after the preprocessor step), but the grid search code above doesn't allow to explore models in which all 3 features are used, as setting feature_selection__n_features_to_select: [2,3] will give a ValueError: n_features_to_select must be < n_features. The obstacle here is that SequentialFeatureSelector doesn't consider the selection of all features (aka a passthrough selector) as a valid feature selection. In other words, I would like to run a grid search that considers also the setting of ('feature_selection', 'passthrough') in the space of possible pipeline configurations. Is there an idiomatic/nice way to do that? | The parameter n_features_to_select can be an integer (number of features) or a float (proportion of features). So instead of [1, 2, 3], the pipeline can run with [1/3, 2/3, 1.0]. To get the scores for each combination of parameters in the grid search, you can run display(pd.DataFrame(grid_search.cv_results_)) The results for n_features = 1.0 and those for a pipeline without the SequentialFeatureSelector (e.g. 
setting that to 'passthrough') should be the same. | 1 | 1 |
79,336,594 | 2025-1-7 | https://stackoverflow.com/questions/79336594/uwsgi-with-https-getting-socket-option-missing | I am running a Flask application on Docker with uwsgi. I have been running it for years now, but we need to add https to it. I know I can use an HAProxy and do ssl offloading, but in our current setup we can't do it this way, at least not right now. We need to do the SSL directly on the application. I have tried multiple options and I keep getting "The -s/--socket option is missing and stdin is not a socket." Not sure what else to try. The server is uWSGI==2.0.26. Please help. Below is my uwsgi.ini file.

[uwsgi]
module = wsgi:app
master = true
processes = 5
enable-threads = true
single-interpreter = true
buffer-size = 32768
# protocol = http
# socket = 0.0.0.0:5000
# protocol = https
shared-socket = 0.0.0.0:5000
https = 0,/app/ssl/app_cert.crt,/app/ssl/app_cert.key
stdout_logfile = /dev/stdout
stdout_logfile_maxbytes = 0
stderr_logfile = /dev/stdout
stderr_logfile_maxbytes = 0
chmod-socket = 660
vacuum = true
die-on-term = true
py-autoreload = 1 | You can use the following example to run your flask application with uwsgi and docker. I will provide a minimal example and you can use it to expand to your needs. The uwsgi conf was extracted from the docs.

uwsgi.ini

[uwsgi]
shared-socket = 0.0.0.0:443
https = =0,ssl/server.crt,ssl/server.key
master = true
module = app:app
uid = uwsgi
gid = uwsgi

app.py

from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello, World!"

if __name__ == "__main__":
    app.run()

requirements.txt

Flask
uWSGI==2.0.26

Dockerfile

FROM python:3.10-slim

RUN apt-get update && apt-get install -y \
    build-essential \
    gcc \
    libssl-dev \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*

RUN groupadd -r uwsgi && useradd -r -g uwsgi -m uwsgi

WORKDIR /app

COPY requirements.txt /app/
RUN pip install --no-cache-dir -r requirements.txt

COPY . /app/
COPY ssl/ /app/ssl/

RUN chown -R uwsgi:uwsgi /app

EXPOSE 443

USER uwsgi

CMD ["uwsgi", "--ini", "uwsgi.ini"]

Create an ssl directory and generate a self-signed cert.

mkdir ssl
openssl req -x509 \
  -newkey rsa:2048 \
  -keyout ssl/server.key \
  -out ssl/server.crt \
  -days 365 -nodes -subj "/CN=localhost"

Now you should have this folder structure:

.
├── app.py
├── Dockerfile
├── requirements.txt
├── ssl
│   ├── server.crt
│   └── server.key
└── uwsgi.ini

Now build and run:

docker build -t flask-uwsgi-example .
docker run --rm --name flask -p 443:443 flask-uwsgi-example

And test with curl:

$ curl -k https://localhost:443
Hello, World! | 1 | 0
79,330,032 | 2025-1-5 | https://stackoverflow.com/questions/79330032/generalized-nonsymmetric-eigensolver-python | How do I solve a nonsymmetric eigenproblem? In terms of scipy.sparse.linalg.eigsh the matrix needs to be a "real symmetric square matrix or complex Hermitian matrix A" (https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.linalg.eigsh.html), and the same for scipy.sparse.linalg.eigs: "M must represent a real symmetric matrix if A is real, and must represent a complex Hermitian matrix if A is complex. For best results, the data type of M should be the same as that of A" (https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.linalg.eigs.html#scipy.sparse.linalg.eigs). I need this for a vibroacoustic finite element problem, which would have the matrix shape shown in the figure [matrix description for vibroacoustic problem]. I am using a Windows computer.

ev, phi = scipy.sparse.linalg.eigs(A=stiff, k=nev, M=mass, which='SM')
ev, phi = scipy.sparse.linalg.eigs(A=stiff, k=nev, M=mass, sigma=0)

Before realizing the functions do not work for nonsymmetric problems. The matrices are sparse.

Matlab Code:

k = load('k_global.mat');
m = load('m_global.mat');
[V,D] = eigs(k.array, m.array, 20,0);
D = diag(D)
natural_frequency = sqrt(D)/(2*pi)

Returns:

0.00000000000000 + 0.000356591992067188i
0.000668911454165071 + 0.00000000000000i
0.00000000000000 + 0.000973128785222090i
0.00222975851379527 + 0.00000000000000i
0.00246434216130016 + 0.00000000000000i
0.00000000000000 + 0.00372951940564144i
8.06883871646537 + 0.00000000000000i
64.7482150103242 + 0.00000000000000i
234.453670549319 + 0.00000000000000i
268.154072409059 + 0.00000000000000i
312.537263749716 + 0.00000000000000i
356.103849178590 + 0.00000000000000i
389.038117338274 + 0.00000000000000i
412.048267727649 + 0.00000000000000i
473.729345964820 + 0.00000000000000i
2996.35112385098 + 0.00000000000000i
3240.96766107255 + 0.00000000000000i
4186.42444133727 + 0.00000000000000i
4585.99172192305 + 0.00000000000000i
4794.52737053778 + 0.00000000000000i

Python code:

import pickle
import scipy
import numpy as np

if __name__ == '__main__':
    with open('../k_full.pickle', 'rb') as f:
        print('loading matrix K')
        k_global = pickle.load(f)
    with open('../m_full.pickle', 'rb') as f:
        print('loading matrix M')
        m_global = pickle.load(f)

    eigvalues, eigvectors = scipy.sparse.linalg.eigs(k_global, M=m_global, k = 20, which='SM')
    natural_frequency_Hz = np.sqrt(np.abs(eigvalues))/(2*np.pi)
    for i, nat_freq in enumerate(natural_frequency_Hz):
        print(f'[{i + 1: 3.0f}] : Freq = {nat_freq: 8.2f}')

Returns:

scipy.sparse.linalg._eigen.arpack.arpack.ArpackNoConvergence: ARPACK error -1: No convergence (321 iterations, 4/20 eigenvectors converged)

Stiffness: https://pastebin.com/Ri0rebyt
Mass: https://pastebin.com/mPKnTt8A | Despite your vibroacoustic link, you are (presumably) solving the undamped system

M x'' + K x = 0

where M is the mass matrix and K is the stiffness matrix. With solutions proportional to e^(iωt) this becomes

(K - ω² M) x = 0

or

K x = ω² M x = λ M x,   with λ = ω².

This is the system that you want to solve. However, M should be an invertible matrix, so you can rewrite this as

M⁻¹ K x = λ x,

so looking for the eigensolutions of M⁻¹K. If you look at the documentation ( https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.linalg.eigs.html#scipy.sparse.linalg.eigs ), the routine scipy.sparse.linalg.eigs does NOT require the main matrix to be symmetric/hermitian and it doesn't require the second matrix as an argument at all: that is optional.
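As a quick numerical cross-check (this snippet is not part of the original answer, and the matrices here are random stand-ins rather than the OP's data), the generalized problem K x = λ M x and the standard problem M⁻¹K x = λ x share the same eigenvalues:

# Tiny sanity check: generalized eig(K, M) vs. standard eig(inv(M) @ K).
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(0)
n = 5
K = rng.standard_normal((n, n))          # nonsymmetric "stiffness-like" matrix
M = rng.standard_normal((n, n))
M = M @ M.T + n * np.eye(n)              # symmetric positive definite "mass-like" matrix

lam_gen, _ = eig(K, M)                   # generalized eigenvalues of (K, M)
lam_std, _ = eig(np.linalg.inv(M) @ K)   # eigenvalues of M^-1 K

print(np.allclose(np.sort_complex(lam_gen), np.sort_complex(lam_std)))  # True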
For completeness, the final frequency f (in Hz) comes from BTW, your matlab snippet is picking up some negative eigenvalues for lambda / complex values for omega. They are rather small in magnitude, so may be an artefact of floating-point operations. However, you should check for, and be careful how you deal with, such values. EDIT - You have set k=20 (and which='SM'#allest). Thus, you aren't actually finding all the eigenvalues. If you increase k significantly you can find more, together with a warning about the scheme switching to the non-sparse version, scipy.linalg.eig. Given the relatively small size of your matrices, even if sparse, I recommend switching to that anyway. This then finds the particular eigenvalue that you want. Code: import numpy as np import scipy K = np.loadtxt( 'stiffness.txt' ) M = np.loadtxt( 'mass.txt' ) M1K = np.linalg.inv( M ) @ K #eigenvalues, eigenvectors = scipy.sparse.linalg.eigs(M1K, k=20, which='SM') # sparse version; 20 eigenvalues eigenvalues, eigenvectors = scipy.linalg.eig(M1K) # general version -> 32 eigenvalues natural_frequency_Hz = np.sqrt( np.abs( eigenvalues ) ) / ( 2 * np.pi ) for i, nat_freq in enumerate( natural_frequency_Hz ): print( f'[{i + 1: 3.0f}] : Freq = {nat_freq: 9.3f}' ) Output: [ 1] : Freq = 291049.779 [ 2] : Freq = 291052.098 [ 3] : Freq = 291056.220 [ 4] : Freq = 291053.901 [ 5] : Freq = 155654.905 [ 6] : Freq = 155654.199 [ 7] : Freq = 155632.590 [ 8] : Freq = 155567.798 [ 9] : Freq = 155596.123 [ 10] : Freq = 155588.053 [ 11] : Freq = 155630.579 [ 12] : Freq = 155578.601 [ 13] : Freq = 4794.527 [ 14] : Freq = 4585.992 [ 15] : Freq = 3240.968 [ 16] : Freq = 2996.351 [ 17] : Freq = 4186.424 [ 18] : Freq = 473.730 [ 19] : Freq = 411.719 [ 20] : Freq = 390.610 [ 21] : Freq = 356.104 [ 22] : Freq = 312.524 [ 23] : Freq = 268.154 [ 24] : Freq = 234.454 [ 25] : Freq = 64.748 [ 26] : Freq = 8.068 [ 27] : Freq = 0.022 [ 28] : Freq = 0.002 [ 29] : Freq = 0.002 [ 30] : Freq = 0.002 [ 31] : Freq = 0.002 [ 32] : Freq = 0.002 | 2 | 4 |
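A note on the answer above: since λ = ω², the natural frequency follows as f = √λ / (2π), which is exactly what the np.sqrt(np.abs(eigenvalues)) / (2*np.pi) line computes. Also, scipy.linalg.eig can solve the generalized problem K v = λ M v directly, without forming M⁻¹K; a minimal sketch assuming the same stiffness.txt and mass.txt dumps used in the answer:

import numpy as np
from scipy.linalg import eig

K = np.loadtxt('stiffness.txt')      # dense stiffness matrix, as in the answer
M = np.loadtxt('mass.txt')           # dense mass matrix
w, v = eig(K, M)                     # generalized problem K v = lambda M v, no explicit inverse
freq_hz = np.sort(np.sqrt(np.abs(w))) / (2 * np.pi)
print(freq_hz[:7])                   # lowest natural frequencies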
79,335,173 | 2025-1-7 | https://stackoverflow.com/questions/79335173/good-r2-score-but-huge-parameter-uncertainty | I'm using a quadratic function to fit my data. I have a good R2 score but huge uncertainty in my fitting parameters. Here is the graph and the results: R2 score: 0.9698143924536671 uncertainty in a, b, and y0: 116.93787913, 10647.11867972, 116.93787935 How should I interpret this result? Here is how I defined the quadratic function: def my_quad(x, a, b, y0): return a*(1-x**2/(2*b**2))+ y0 Here's how I calculated the uncertainty for the parameters and R2 score: popt, pcov = curve_fit(my_quad, x_data, y_data, bounds=([0, 0, -np.inf], [np.inf, np.inf, np.inf])) a, b, y0 = popt err = np.sqrt(np.diag(pcov)) y_pred = my_quad(x_data, *popt) r2 = r2_score(y_data, y_pred) | Your model is over-parametrized. You can tell when you expand the polynomial: a * (1 - x**2 / (2*b**2)) + y0 -> a - x**2 * a / (2*b**2) + y0 -> y0+a - x**2 * a / (2*b**2) There are only two independent parameters, y0 + a and a / (2*b**2). You will be able to fit just as well with any two of your original parameters, and then the uncertainty will be reduced significantly. For example: import numpy as np import matplotlib.pyplot as plt from scipy.optimize import curve_fit # generate data rng = np.random.default_rng(23457834572346) x = np.linspace(-1, 1, 30) noise = 0.05 * rng.standard_normal(size=x.shape) y = -2*x**2 + 1 + noise # over-parameterized fit def my_quad(x, a, b, y0): return a*(1-x**2/(2*b**2))+ y0 bounds=([0, 0, -np.inf], [np.inf, np.inf, np.inf]) popt, pcov = curve_fit(my_quad, x, y, bounds=bounds) err = np.sqrt(np.diag(pcov)) # array([3028947.74320428, 544624.83253159, 3028947.74412785]) y_1 = my_quad(x, *popt) r2 = np.corrcoef(y, y_1)[0, 1] # 0.9968876754155439 # remove any one parameter def my_quad(x, a, b): return a*(1-x**2/(2*b**2)) bounds=([0, 0], [np.inf, np.inf]) popt, pcov = curve_fit(my_quad, x, y, bounds=bounds) err = np.sqrt(np.diag(pcov)) # array([0.01460553, 0.00260903]) y_2 = my_quad(x, *popt) r2 = np.corrcoef(y, y_2)[0, 1] # 0.9968876754155439 # plot results plt.plot(x, y, '.') plt.plot(x, y_1, '-') plt.plot(x, y_2, '--') | 1 | 3 |
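To make the answer's point concrete on the asker's own data, the reduced two-parameter form can be fitted directly; a sketch assuming x_data and y_data are the arrays from the question:

import numpy as np
from scipy.optimize import curve_fit

def my_quad_reduced(x, c0, c2):
    # c0 plays the role of a + y0, c2 the role of a / (2*b**2)
    return c0 - c2 * x**2

popt, pcov = curve_fit(my_quad_reduced, x_data, y_data)
c0, c2 = popt
err = np.sqrt(np.diag(pcov))   # now small, because both parameters are identifiable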
79,333,976 | 2025-1-6 | https://stackoverflow.com/questions/79333976/is-it-possible-to-convert-from-qdatetime-to-python-datetime-without-loosing-time | I am trying to convert a QDateTime object in pyside6 to a python datetime object. Consider the following code: from PySide6.QtCore import Qt, QDateTime, QTimeZone import datetime qdatetime = QDateTime.currentDateTime() print(qdatetime.offsetFromUtc()) qdatetime.setTimeZone(QTimeZone.UTC) print(qdatetime.toString(Qt.ISODate)) print(qdatetime.offsetFromUtc()) pt = qdatetime.toPython() print(pt) print(pt.tzinfo) ptt = pt.replace(tzinfo = datetime.timezone.utc) print(ptt.tzinfo) print(ptt) The output looks like this: 3600 2025-01-06T18:56:43Z 0 2025-01-06 18:56:43.251000 None UTC 2025-01-06 18:56:43.251000+00:00 Obviously I can attach a timezone both to the QDateTime object and the python datetime object. The conversion however seems to delete the timezone information. The workaround would be to reattach the timezone after conversion. This seems like a complicated way to do this. Is there a way to convert without loosing timezone in the first place? | The toPython() function of PySide originates from the original toPyDateTime() function of PyQt4, and at that time the QDateTime class didn't provide time zone information. The behavior wasn't changed even with the more recent PyQt5/6 implementation, which wasn't updated to reflect the time zone info introduced since Qt 5.2. For various reasons, including the different implementation between Qt and Python, that was probably not implemented as a choice, and I sincerely doubt it will. Those helper functions were intended for simple cases, and more complex situation may require custom implementation. One way to properly convert the QDateTime while preserving the offset could be to just use the explicit date time string for datetime.fromisoformat() and adding the UTC offset from the original QDateTime: from PyQt6.QtCore import QDateTime from datetime import datetime def datetimeQt2Py(qDateTime): s = qDateTime.offsetFromUtc() if not s: return datetime.fromisoformat( qDateTime.toString(Qt.DateFormat.ISODate)) sign = '+' if s > 0 else '-' s = abs(s) h = m = 0 if s >= 60: m = s // 60 s %= 60 if m >= 60: h = m // 60 m %= 60 return datetime.fromisoformat(qDateTime.toUTC().toString( f'yyyy-MM-ddTHH:mm:ss{sign}{h:02}:{m:02}:{s:02}')) now = QDateTime.currentDateTime() dt = datetimeQt2Py(now) print(now) print(dt) print(dt.tzinfo) The above will obviously not preserve the time zone name, nor any DST information. All that could be achieved by dynamically creating datetime.tzinfo objects. | 2 | 0 |
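An alternative to the string round-trip in the answer above is to go through the epoch, which identifies the instant unambiguously; like the answer, this keeps only the fixed UTC offset, not the zone name (a sketch, not an official PySide conversion API):

from datetime import datetime, timedelta, timezone
from PySide6.QtCore import QDateTime

def qdatetime_to_py(qdt: QDateTime) -> datetime:
    # epoch milliseconds pin down the instant; reattach the offset as a fixed timezone
    tz = timezone(timedelta(seconds=qdt.offsetFromUtc()))
    return datetime.fromtimestamp(qdt.toMSecsSinceEpoch() / 1000, tz=tz)

print(qdatetime_to_py(QDateTime.currentDateTime()))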
79,335,053 | 2025-1-7 | https://stackoverflow.com/questions/79335053/replace-substring-if-key-is-found-in-another-file | I have files associated with people scattered around different directories. I can find them with a master file. Some of them need to be pulled into my working directory. Once I've pulled them, I need to update the master file to reflect the change. To keep track of which files were moved, I record the person's name in another file. master.txt Bob "/home/a/bob.txt" Linda "/home/b/linda.txt" Joshua "/home/a/josh.txt" Sam "/home/f/sam.txt" moved.txt Linda Sam Expected result of master.txt Bob "/home/a/bob.txt" Linda "/workingdir/linda.txt" Joshua "/home/a/josh.txt" Sam "/workingdir/sam.txt" I've tried grep -f moved.txt master.txt | sed "s?\/.*\/?"`pwd`"\/?" grep -f moved.txt master.txt | sed "s?\/.*\/?"`pwd`"\/?" master.txt grep -f moved.txt master.txt | sed -i "s?\/.*\/?"`pwd`"\/?" As an added complication, this is going to execute as part of a python script, so it needs to be able to work within a subprocess.run(cmd). Update 1: Based on some questions, here is what the relevant section of my Python code looks like. I'm trying to figure out what the next step should be in order to update the paths of the flagged files in master. commands = ['program finder.exe "flaggedfile" > master.txt' ,'sed "\#"`pwd`"#d" list.txt | sed "s/:.*//" > moved.txt' ,'program mover.exe moved.txt .' #,'cry' ] for cmd in commands: status = subprocess.run(cmd ,cwd=folder ,shell=True ,stdout=subprocess.DEVNULL ,stderr=subprocess.DEVNULL ) "program" is a program that I work with, and "finder.exe" and "mover.exe" are executables for that program, which I'm using to locate flagged files and move into the working directory. | Frame challenge: I don't need to compare the two files, I can do sed shenanigans. Extract matching lines from master to have the entire line. Wrangle the lines to have just the path. Use sed grouping to copy the old path and filename, then build a sed command in place inside a new file. Execute the new file. This means that the python snippet looks like: commands = ['program finder.exe "flaggedfile" > list.txt' ,'sed "\#"`pwd`"#d" list.txt | sed "s/:.*//" > moved.txt' ,'program mover.exe moved.txt .' ,'grep -f moved.txt master.txt | grep -o "\/.*\.txt" | sed -r "s?^(.*/)(.*)?sed -i \\"s#\\1\\2#`pwd`/\\2#g\\" master.txt?g" > updatemaster.txt' ,'. ./updatemaster.txt' ] I tested this and it does work. Thank you to everyone for your advice. I understand that I have weird constraints that I'm working with, and I'm sorry that I can't use python properly because of it. | 3 | 1 |
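Since the surrounding script is already Python, the same master.txt update can also be done without sed; a sketch assuming the two-column layout shown in the question and paths without spaces:

import os

cwd = os.getcwd()
with open('moved.txt') as f:
    moved = {line.strip() for line in f if line.strip()}

out = []
with open('master.txt') as f:
    for line in f:
        parts = line.split()
        if len(parts) == 2 and parts[0] in moved:
            # keep the file name, point the directory at the working dir
            filename = os.path.basename(parts[1].strip('"'))
            line = f'{parts[0]} "{os.path.join(cwd, filename)}"\n'
        out.append(line)

with open('master.txt', 'w') as f:
    f.writelines(out)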
79,337,201 | 2025-1-7 | https://stackoverflow.com/questions/79337201/mypy-explicit-package-based-vs-setuptools | I’ve a project structured as follows: . ├── hello │ ├── __init__.py │ ├── animal.py ├── tests │ ├── __init__.py │ └── test_animal.py ├── README ├── pyproject.toml This is just a personal Python library, and doesn’t need to be published or distributed. The usage consists of running pytest and mypy from the root directory. Among other things, the pyproject.toml contains the following sections: [project.optional-dependencies] test = [ "pytest", ] lint = [ "ruff", "mypy", ] [tool.mypy] exclude = [ 'venv', ] ignore_errors = false warn_return_any = true disallow_untyped_defs = true I install the dependencies locally as follows: % $(brew --prefix python)/bin/python3 -m venv ./venv % ./venv/bin/python -m pip install --upgrade pip '.[test]' '.[lint]' But my GitHub CI fails with the following error: hello/__init__.py: error: Duplicate module named "hello" (also at "./build/lib/hello/__init__.py") hello/__init__.py: note: See https://mypy.readthedocs.io/en/stable/running_mypy.html#mapping-file-paths-to-modules for more info hello/__init__.py: note: Common resolutions include: a) using `--exclude` to avoid checking one of them, b) adding `__init__.py` somewhere, c) using `--explicit-package-bases` or adjusting MYPYPATH Found 1 error in 1 file (errors prevented further checking) Error: Process completed with exit code 2. As suggested, running mypy with --explicit-package-bases fixes this problem, but so does adding the following section to pyproject.toml. [tool.setuptools] py-modules = [] I’ve reviewed the mypy and setuptools documentation, but am not sure which of the two is better suited for my purpose, or why they are even necessary. As mentioned earlier, I’m not trying to publish or distribute this as a Python package. Which of the two configurations is the recommended way to go, and why? | I found a pip ticket for this exact problem, where pytest was confused by the presence of a build directory. One of the suggestions in the ticket was to ignore the build directory. Apparently, in-place builds were introduced in pip 20.1 and are now the default. The following configuration in pyproject.toml solves the problem. However, it's a surprise that mypy doesn't exclude the directory automatically. I've created a ticket on their GitHub. [tool.mypy] exclude = [ "venv", "build", ] | 1 | 0 |
79,337,249 | 2025-1-7 | https://stackoverflow.com/questions/79337249/load-dll-with-ctypes-fails | I use a proprietary Python package. Within this Python package the following DLL load command fails. scripting_api = ctypes.CDLL("scripting_api_interface") Could not find module 'scripting_api_interface' (or one of its dependencies). Try using the full path with constructor syntax. I know the path to the DLL scripting_api-interface.dll and added within my Python code the following DLL path. os.environ['PATH'] = 'L:\win64' + os.pathsep But still loading the DLL will fail. I created a test environment where I used the following command. scripting_api = ctypes.CDLL("L:\win64\scripting_api_interface.dll") Which works as expected. But I can't change the DLL call, because it is provided by the mentioned Python package. Are there any other options to get this running? | Call CDLL with the version that works before importing the package. Once the DLL is loaded, additional loads are ignored. Example (Win11 x64)... test.c (simple DLL source, MSVC: cl /LD test.c): __declspec(dllexport) void func() {} With test.dll in the current directory, loading will fail without an explicit path for DLLs not in the "standard" search path. The current directory is not part of the search path for security reasons. >>> import ctypes as ct >>> dll = ct.CDLL('test') Traceback (most recent call last): File "<python-input-1>", line 1, in <module> dll = ct.CDLL('test') File "C:\dev\Python313\Lib\ctypes\__init__.py", line 390, in __init__ self._handle = _dlopen(self._name, mode) ~~~~~~~^^^^^^^^^^^^^^^^^^ FileNotFoundError: Could not find module 'test' (or one of its dependencies). Try using the full path with constructor syntax. >>> dll = ct.CDLL('./test') # explicit relative path works >>> dll = ct.CDLL('test') # now without path works because already loaded >>> So in your case the following should work: import ctypes as ct ct.CDLL('L:/win64/scripting_api_interface.dll') import your_package | 1 | 1 |
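On Python 3.8+ on Windows, another option worth trying is os.add_dll_directory, which extends the search path used by ctypes and by dependent-DLL resolution, so the package's own CDLL("scripting_api_interface") call can succeed without preloading; a sketch assuming the DLL lives in L:\win64:

import os

# must run before the package performs its ctypes.CDLL(...) call
os.add_dll_directory(r'L:\win64')

import your_package  # the package's CDLL("scripting_api_interface") should now resolve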
79,336,866 | 2025-1-7 | https://stackoverflow.com/questions/79336866/half-precision-in-ctypes | I need to be able to seamlessly interact with half-precision floating-point values in a ctypes structure. I have a working solution, but I'm dissatisfied with it: import ctypes import struct packed = struct.pack('<Ife', 4, 2.3, 1.2) print('Packed:', packed.hex()) class c_half(ctypes.c_ubyte*2): @property def value(self) -> float: result, = struct.unpack('e', self) return result class Triple(ctypes.LittleEndianStructure): _pack_ = 1 _fields_ = ( ('index', ctypes.c_uint32), ('x', ctypes.c_float), ('y', c_half), ) unpacked = Triple.from_buffer_copy(packed) print(unpacked.y.value) Packed: 0400000033331340cd3c 1.2001953125 I am dissatisfied because, unlike with c_float, c_uint32 etc., there is no automatic coercion of the buffer data to the Python primitive (float and int respectively for those examples); I would expect float in this half-precision case. Reading into the CPython source, the built-in types are subclasses of _SimpleCData: static PyType_Spec pycsimple_spec = { .name = "_ctypes._SimpleCData", .flags = (Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE | Py_TPFLAGS_IMMUTABLETYPE), .slots = pycsimple_slots, }; and only declare a _type_, for instance class c_float(_SimpleCData): _type_ = "f" However, attempting the naive class c_half(ctypes._SimpleCData): _type_ = 'e' results in AttributeError: class must define a '_type_' attribute which must be a single character string containing one of 'cbBhHiIlLdfuzZqQPXOv?g'. as defined by SIMPLE_TYPE_CHARS: static const char SIMPLE_TYPE_CHARS[] = "cbBhHiIlLdfuzZqQPXOv?g"; // ... if (!strchr(SIMPLE_TYPE_CHARS, *proto_str)) { PyErr_Format(PyExc_AttributeError, "class must define a '_type_' attribute which must be\n" "a single character string containing one of '%s'.", SIMPLE_TYPE_CHARS); goto error; } The end goal is to have a c_half type that I can use with the exact same API as the other built-in ctypes.c_ classes, ideally without myself writing a C module. I think I need to mimic much of the behaviour seen in the neighbourhood of PyCSimpleType_init but that code is difficult for me to follow. | Using descriptors gets close to what you want. Declare the ctypes fields with underscores and add the descriptors as class variables. If you are not familiar with descriptors, read the guide in the link above. import ctypes as ct import struct # Descriptor implementation class Half: def __set_name__(self, owner, name): self.field = f'_{name}' # name of ctypes field def __get__(self, obj, objtype=None): # Translate ctypes c_half field to float data = getattr(obj, self.field) return struct.unpack('e', data)[0] def __set__(self, obj, value): # Translate float to ctypes c_half field setattr(obj, self.field, c_half(*struct.pack('e', value))) # two-byte field with display overrides class c_half(ct.c_ubyte*2): def __repr__(self): return f'c_half({self})' def __str__(self): return str(struct.unpack('e', bytes(self))[0]) class Quad(ct.Structure): y = Half() # Declare descriptors z = Half() # Descriptor name must match _name of ctypes field _pack_ = 1 _fields_ = (('index', ct.c_uint32), ('x', ct.c_float), ('_y', c_half), # ctypes fields ('_z', c_half)) # Only needed if you want Quad(1,2,3,4) construction. # Without it, Quad() initializes all fields to zero # and must set them manually. You can do Quad(1,2) # to set the non-c_half fields (index and x). 
def __init__(self, index=0, x=0, y=0, z=0): self.index = index self.x = x self.y = y # needed to call descriptor __set__ self.z = z # needed to call descriptor __set__ def __repr__(self): return f'Quad(index={self.index}, x={self.x}, y={self.y}, z={self.z})' # Examples t = Quad(4, 1.2, 2.3, 3.4) print(t) print(f'{t.y=} {repr(t._y)}, {t.z=} {repr(t._z)}') t.y, t.z = 8.8, 9.9 print(f'{t.y=} {repr(t._y)}, {t.z=} {repr(t._z)}') Output: Quad(index=4, x=1.2000000476837158, y=2.30078125, z=3.400390625) t.y=2.30078125 c_half(2.30078125), t.z=3.400390625 c_half(3.400390625) t.y=8.796875 c_half(8.796875), t.z=9.8984375 c_half(9.8984375) | 5 | 3 |
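A small aside on the question's original property-based c_half: if NumPy is already a dependency, the two raw bytes can be reinterpreted with np.float16 (the same IEEE 754 half format) instead of struct; a sketch, not part of the descriptor answer above:

import ctypes as ct
import numpy as np

class c_half(ct.c_ubyte * 2):
    @property
    def value(self) -> float:
        # reinterpret the two raw bytes as an IEEE 754 half
        return float(np.frombuffer(bytes(self), dtype=np.float16)[0])

h = c_half(0xcd, 0x3c)   # the bytes of 1.2 from the question's packed example
print(h.value)           # 1.2001953125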
79,336,210 | 2025-1-7 | https://stackoverflow.com/questions/79336210/unable-to-accurately-detect-top-7-prominent-peaks-in-data-using-python-s-find-pe | I hope to identify the peaks in a segment of data (selecting the top 7 points with the highest prominences), which are clearly visible to the naked eye. However, I am unable to successfully obtain the results using the find_peaks function. The data is accessible in this gist. Error Result: If I directly use find_peaks: find_peaks(series, prominence=np.max(series) * 0.1, distance=48) and then select the top 7 points with the highest prominences, I end up with some undesired points. Clumsy Method: I can first smooth the data: percentile_80 = series.rolling( window=61, center=True, min_periods=1 ).apply(lambda x: np.percentile(x, 80)) smoothed_series = series - percentile_80 Then, use find_peaks(smoothed_series, prominence=np.max(smoothed_series) * 0.1, distance=48), and select the top 7 points with the highest prominences, which yields the expected results. However, this approach is much slower. Edit on 2025.1.9: Thanks mozway, this is a good method. And I found another way to speed up: first find all peaks, then compare peaks with neighboring peaks,find the prominence peaks with neighbor. is this a good method? def find_significant_peaks(x, prominence_diff_ratio=0.1, initial_distance=3): # Step 1: Get all candidate peaks and their properties peaks, properties = find_peaks( x, distance=initial_distance, prominence=np.max(x) * 0.01 ) if len(peaks) == 0: return peaks, properties # Get prominences of all peaks prominences = properties["prominences"] # Calculate prominence differences using vectorized operations diffs = np.abs(np.subtract.outer(prominences, prominences)) threshold_values = prominences * prominence_diff_ratio valid_peaks_mask = np.ones(len(peaks), dtype=bool) # For each peak, check prominence difference with neighbors and local maximality compare_num = 10 for i in range(len(peaks)): # Get local window range start_idx = max(0, i - compare_num) end_idx = min(len(peaks), i + 1 + compare_num) # Get prominence values within local window local_prominences = prominences[start_idx:end_idx] current_prominence = prominences[i] # Condition 1: Check if it's a local maximum if current_prominence < np.max(local_prominences): valid_peaks_mask[i] = False continue # Condition 2: Get prominence differences with neighbors neighbor_diffs = diffs[i, start_idx:end_idx] neighbor_diffs = neighbor_diffs[neighbor_diffs != 0] # Remove self-difference # Check if all neighboring differences are greater than threshold if np.any(neighbor_diffs <= threshold_values[i]): valid_peaks_mask[i] = False # Filter valid peaks and properties valid_peaks = peaks[valid_peaks_mask] # Update all properties in the properties dictionary valid_properties = {} for key in properties: valid_properties[key] = properties[key][valid_peaks_mask] return valid_peaks, valid_properties | The issue with your approach is that you rely on the prominence, which is the local height of the peaks, and not a good fit with your type of data. From your total dataset, it looks indeed clear to the naked eye that there are high "peaks" relative to the top of the large blue area, but this is no longer obvious once we consider the exact local data: NB. the scale of the insets' Y-axis is the same. 
Also, let's compute the prominence of all peaks (see how the middle peak has a much greater prominence): As you can see, there are peaks everywhere and what you would define as a peak in the left inset is actually a relatively small peak compared to peaks that you would not want to detect in the right inset. What you want is a peak that is higher than the surrounding peaks, and you want to fully ignore the baseline, thus your approach of using a smoothing function to get the local trend is good. Since your issue seems to be about speed, you can greatly improve it by using the native rolling.quantile over a custom rolling.apply with np.percentile: from scipy.signal import find_peaks percentile_80 = series.rolling(window=61, center=True, min_periods=1).quantile(0.8) smoothed_series = series.sub(percentile_80).clip(lower=0) peaks, peak_data = find_peaks(smoothed_series, prominence=np.max(smoothed_series) * 0.1, distance=48) series.plot() series.loc[smoothed_series.iloc[peaks].nlargest(7).index].plot(ls='', marker='o') This runs in just a few milliseconds compared to more than one second for the custom apply: # series.rolling(window=61, center=True, min_periods=1).apply(lambda x: np.percentile(x, 80)) 1.47 s Β± 25 ms per loop (mean Β± std. dev. of 7 runs, 1 loop each) # series.rolling(window=61, center=True, min_periods=1).quantile(0.8) 3.9 ms Β± 56.4 Β΅s per loop (mean Β± std. dev. of 7 runs, 100 loops each) Output: I also added a clip step after smoothing to get the following intermediate: | 4 | 5 |
79,333,765 | 2025-1-6 | https://stackoverflow.com/questions/79333765/type-hinting-and-type-checking-for-intenum-custom-types | Qt has several IntEnum's that support custom , user-specified types or roles. A few examples are: QtCore.Qt.ItemDataRole.UserRole QtCore.QEvent.Type.User In both cases, a user type/role is created by choosing an integer >= the User type/role myType = QtCore.QEvent.Type.User + 1 The problem is that all of the functions that deal with these type/roles expect an instance of the IntEnum, not an int, and mypy will report an error. from PySide6.QtCore import QEvent class MyEvent(QEvent): def __init__(self) -> None: super().__init__(QEvent.Type.User + 1) Mypy error: No overload variant of "__init__" of "QEvent" matches argument type "int" Integrated type checking in VS code with Pylance gives a similar error: No overloads for "__init__" match the provided arguments PylancereportCallIssue QtCore.pyi(2756, 9): Overload 2 is the closest match Argument of type "int" cannot be assigned to parameter "type" of type "Type" in function "__init__" "int" is not assignable to "Type" PylancereportArgumentType What type hinting can I do from my end to satisfy mypy? Is this something that needs to be changed in Qt type hinting? | In PySide6/PyQt6, the type of a user-defined int enum member should be preserved by using the constructor of the enum: from PySide6.QtCore import QEvent class MyEvent(QEvent): def __init__(self) -> None: super().__init__(QEvent.Type(QEvent.Type.User + 1)) Assuming the latest PySide6 stubs are installed, a file with the above contents will produce no errors when checked with the mypy command-line tool. NB: the implementation defines _missing_ to handle unknown members, and this works for all enums that subclass IntEnum, regardless of whether the values make any sense: >>> QEvent.Type.User <Type.User: 1000> >>> QEvent.Type(QEvent.Type.User + 1) <Type.1001: 1001> >>> QEvent.Type.MaxUser <Type.MaxUser: 65535> >>> QEvent.Type(QEvent.Type.MaxUser + 10) <Type.65545: 65545> >>> >>> QFrame.Shape.__members__ mappingproxy({'NoFrame': <Shape.NoFrame: 0>, 'Box': <Shape.Box: 1>, 'Panel': <Shape.Panel: 2>, 'WinPanel': <Shape.WinPanel: 3>, 'HLine': <Shape.HLine: 4>, 'VLine': <Shape.VLine: 5>, 'StyledPanel': <Shape.StyledPanel: 6>}) >>> QFrame.Shape(42) <Shape.42: 42> | 2 | 1 |
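Related, when the goal is a custom event type: Qt can allocate a unique id in the user range via QEvent.registerEventType(), and wrapping the result in the enum constructor keeps type checkers happy just as in the answer; a sketch:

from PySide6.QtCore import QEvent

# ask Qt for an unused id instead of hard-coding User + 1
MY_EVENT_TYPE = QEvent.Type(QEvent.registerEventType())

class MyEvent(QEvent):
    def __init__(self) -> None:
        super().__init__(MY_EVENT_TYPE)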
79,336,023 | 2025-1-7 | https://stackoverflow.com/questions/79336023/forward-fill-numpy-matrix-mask-with-values-based-on-condition | I have the following matrix import numpy as np A = np.array([ [0, 0, 0, 0, 1, 0, 1], [0, 0, 0, 0, 0, 0, 1], [1, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0] ]).astype(bool) How do I fill all the rows column-wise after a column is True? My desired output: [0, 0, 0, 0, 1, 1, 1], [0, 0, 0, 0, 0, 0, 1], [1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0] | You could use logical_or combined with accumulate: np.logical_or.accumulate(A, axis=1) Output: array([[False, False, False, False, True, True, True], [False, False, False, False, False, False, True], [ True, True, True, True, True, True, True], [False, False, False, False, False, False, False]]) If you want integers, go with maximum: np.maximum.accumulate(A.astype(int), axis=1) array([[0, 0, 0, 0, 1, 1, 1], [0, 0, 0, 0, 0, 0, 1], [1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0]]) | 3 | 6 |
79,335,580 | 2025-1-7 | https://stackoverflow.com/questions/79335580/getting-strange-output-when-using-group-by-apply-with-np-select-function | I am working with a Timeseries data wherein I am trying to perform outlier detection using IQR method. Sample Data: import pandas as pd import numpy as np df = pd.DataFrame({'datecol' : pd.date_range('2024-1-1', '2024-12-31'), 'val' : np.random.random.randin(low = 100, high = 5000, size = 8366}) my function: def is_outlier(x): iqr = x.quantile(.75) - x.quantile(.25) outlier = (x <= x.quantile(.25) - 1.5*iqr) | (x >= x.quantile(.75) + 1.5*iqr) return np.select([outlier], [1], 0) df.groupby(df['datecol'].dt.weekday)['val'].apply(is_outlier) to which the output is something like below: 0 [1,1,0,0,.... 1 [1,0,0,0,.... 2 [1,1,0,0,.... 3 [1,0,1,0,.... 4 [1,1,0,0,.... 5 [1,1,0,0,.... 6 [1,0,0,1,.... I am expecting a single series as output which I can add back to the original dataframe as a flag column. Can someone please help me with this | You should use groupby.transform, not apply: df['flag'] = df.groupby(df['datecol'].dt.weekday)['val'].transform(is_outlier) Alternatively, explicitly return a Series and use group_keys=False: def is_outlier(x): iqr = x.quantile(.75) - x.quantile(.25) outlier = (x <= x.quantile(.25) - 1.5*iqr) | (x >= x.quantile(.75) + 1.5*iqr) return pd.Series(np.where(outlier, 1, 0), index=x.index) df['flag'] = (df.groupby(df['datecol'].dt.weekday, group_keys=False) ['val'].apply(is_outlier) ) Note that with a single condition, np.where should be preferred to np.select. You could also use a vectorial approach with groupby.quantile: wd = df['datecol'].dt.weekday g = df.groupby(wd)['val'] q25 = g.quantile(.25) q75 = g.quantile(.75) iqr = wd.map(q75-q25) df['flag'] = 1 - df['val'].between(wd.map(q25) - 1.5*iqr, wd.map(q75) + 1.5*iqr) Output: datecol val flag 0 2024-01-01 3193 0 1 2024-01-02 1044 0 2 2024-01-03 2963 0 3 2024-01-04 4448 0 4 2024-01-05 1286 0 .. ... ... ... 361 2024-12-27 1531 0 362 2024-12-28 4565 0 363 2024-12-29 3396 0 364 2024-12-30 1870 0 365 2024-12-31 3818 0 | 1 | 1 |
79,334,958 | 2025-1-7 | https://stackoverflow.com/questions/79334958/python-web-scraping-bulk-downloading-linked-files-from-the-sec-aaer-site-403 | I've been trying to download 300 linked files from SEC's AAER site. Most of the links are pdf's, but some are websites that I would need to save to pdf instead of just downloading. I'm teaching myself some python web scraping and this didn't seem like too hard a task, but I havent been able to get past the 403 error when downloading. This code is working fine to scrape the links to the files and the 4 digit code I would like to name the files: from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.chrome.options import Options from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC import time import os import requests # Set up Chrome options to allow direct PDF download (for the download step) download_path = "C:/Users/taylo/Downloads/sec_aaer_downloads" chrome_options = Options() chrome_options.add_experimental_option("prefs", { "download.default_directory": download_path, # Specify your preferred download directory "download.prompt_for_download": False, # Disable download prompt "plugins.always_open_pdf_externally": True, # Automatically open PDF in browser "safebrowsing.enabled": False, # Disable Chromeβs safe browsing check that can block downloads "profile.default_content_settings.popups": 0 # Disable popups }) # Set up the webdriver with options driver = webdriver.Chrome(executable_path="C:/chromedriver/chromedriver", options=chrome_options) # URLs for pages 1, 2, and 3 urls = [ "https://www.sec.gov/enforcement-litigation/accounting-auditing-enforcement-releases?page=0", "https://www.sec.gov/enforcement-litigation/accounting-auditing-enforcement-releases?page=1", "https://www.sec.gov/enforcement-litigation/accounting-auditing-enforcement-releases?page=2" ] # Initialize an empty list to store the URLs and AAER numbers pdf_data = [] # Loop through each URL (pages 1, 2, and 3) for url in urls: print(f"Scraping URL: {url}...") driver.get(url) # Wait for the table rows containing links to be loaded WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.XPATH, '//*[@id="block-uswds-sec-content"]/div/div/div[3]/div/table/tbody/tr[1]'))) # Extract the link and AAER number from each row on the current page rows = driver.find_elements(By.XPATH, '//*[@id="block-uswds-sec-content"]/div/div/div[3]/div/table/tbody/tr') for row in rows: try: # Extract the link from the first column (PDF link) link_element = row.find_element(By.XPATH, './/td[2]/div[1]/a') link_href = link_element.get_attribute('href') # Extract the AAER number from the second column aaer_text_element = row.find_element(By.XPATH, './/td[2]/div[2]/span[2]') aaer_text = aaer_text_element.text aaer_number = aaer_text.split("AAER-")[1].split()[0] # Extract the number after AAER- # Store the data in a list of dictionaries pdf_data.append({'link': link_href, 'aaer_number': aaer_number}) except Exception as e: print(f"Error extracting data from row: {e}") # Print the scraped data (optional for verification) for entry in pdf_data: print(f"Link: {entry['link']}, AAER Number: {entry['aaer_number']}") But when I try to do something like this, I can't get the downloads to go through: import os import time import requests # Set the download path download_path = "C:/Users/taylo/Downloads/sec_aaer_downloads" os.makedirs(download_path, exist_ok=True) # Loop through each entry in the 
pdf_data list for entry in pdf_data: try: # Extract the PDF link and AAER number link_href = entry['link'] aaer_number = entry['aaer_number'] # Send a GET request to download the PDF pdf_response = requests.get(link_href, stream=True, headers={ "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36" }) # Check if the request was successful if pdf_response.status_code == 200: # Save the PDF to the download folder, using the AAER number as the filename pdf_file_path = os.path.join(download_path, f"{aaer_number}.pdf") with open(pdf_file_path, "wb") as pdf_file: for chunk in pdf_response.iter_content(chunk_size=8192): pdf_file.write(chunk) print(f"Downloaded: {aaer_number}.pdf") else: print(f"Failed to download the file from {link_href}, status code: {pdf_response.status_code}") except Exception as e: print(f"Error downloading the PDF for AAER {aaer_number}: {e}") At this point it would have been faster to manually download the files but I want to know what I'm doing wrong. I've tried Setting User-Agent Header and Simulating User Click with Selenium. Thanks for any advice you may have! | After copying all the Headers inside the request header when you manually open the link containing the PDF: pdf_response = requests.get(link_href, headers={ "Host": "www.sec.gov", "User-Agent": "YOUR_USER_AGENT", "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8", "Accept-Language": "en-US,en;q=0.5", "Accept-Encoding": "gzip, deflate, br, zstd", "Connection": "keep-alive", "Cookie": "YOUR_COOKIE", "Upgrade-Insecure-Requests": "1", "Sec-Fetch-Dest": "document", "Sec-Fetch-Mode": "navigate", "Sec-Fetch-Site": "none", "Sec-Fetch-User": "?1", "Priority": "u=0, i", "Pragma": "no-cache", }) I was able to download the files: You also need to remove the stream=true argument inside the requests. These answers why Status Code 403 Forbidden is occurs, you need all the headers to access the URLs. Hope this helps! | 2 | 1 |
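Instead of copying a browser cookie, SEC's fair-access guidance asks automated clients for a descriptive User-Agent with contact details, and a plain requests.Session with such headers is usually enough; a hedged sketch reusing the pdf_data list and download path from the question (the contact string is a placeholder):

import os
import requests

session = requests.Session()
session.headers.update({
    "User-Agent": "Sample Research Project admin@example.com",  # placeholder contact info
    "Accept-Encoding": "gzip, deflate",
    "Host": "www.sec.gov",
})

download_path = "C:/Users/taylo/Downloads/sec_aaer_downloads"
os.makedirs(download_path, exist_ok=True)

for entry in pdf_data:  # scraped as in the question
    resp = session.get(entry['link'])
    # some AAER links are HTML pages rather than PDFs, so check the content type
    if resp.ok and resp.headers.get("Content-Type", "").startswith("application/pdf"):
        with open(os.path.join(download_path, f"{entry['aaer_number']}.pdf"), "wb") as f:
            f.write(resp.content)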
79,334,065 | 2025-1-6 | https://stackoverflow.com/questions/79334065/geopandas-read-file-of-a-shapefile-gives-error-if-crs-parameter-is-specified | All, I use the ESRI World Countries Generalized shapefile, which is available here, and read it using GeoPandas: shp_file = gpd.read_file('World_Countries/World_Countries_Generalized.shp') print(shp_file.crs) The CRS I get is EPSG:3857, yet once I add the CRS to gpd.read_file as follows shp_file1 = gpd.read_file('../../Downloads/World_Countries/World_Countries_Generalized.shp', crs='EPSG:3857') I get the following error: /opt/anaconda3/envs/geo_env/lib/python3.12/site-packages/pyogrio/raw.py:198: RuntimeWarning: driver ESRI Shapefile does not support open option CRS return ogr_read( Do you know why I get this error, and does it mean the file is not read correctly? Thanks | Since geopandas 1.0, another, faster, underlying library is used by default to read files in geopandas: pyogrio. Some more info can be found here: fiona vs pyogrio. When this new library is used, the crs parameter is not supported. The easiest solution for this specific case is to just remove the crs='EPSG:3857' parameter, as it is useless anyway: the CRS is already read correctly. shp_file1 = gpd.read_file('../../Downloads/World_Countries/World_Countries_Generalized.shp') If you want to read a shapefile that doesn't have a .prj file, or has a wrong one, you can e.g. use the set_crs function of the GeoDataFrame to set or overrule the crs after reading the file: shp_file2 = gpd.read_file('wrong_crs.shp').set_crs(crs='EPSG:3857', allow_override=True) | 1 | 1 |
79,333,616 | 2025-1-6 | https://stackoverflow.com/questions/79333616/remove-a-tiny-repeated-object-from-an-image-using-opencv | I use OpenCV and Python and I want to remove the "+" sign that is repeated on my image. The following is an example of an image in question. The goal is to produce the same image, but with the "+" signs removed. How can I achieve this? I've tried using the below code to achieve this. img = cv2.imread(image_path) gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) img_inverted = cv2.bitwise_not(gray) thresh = cv2.adaptiveThreshold(img_inverted, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 15, -2) cross_structure = cv2.getStructuringElement(cv2.MORPH_CROSS, (3, 3)) detected_cross = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, cross_structure) mask = detected_cross kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3)) dilated_mask = cv2.dilate(mask, kernel, iterations=1) output_telea = cv2.inpaint(img, dilated_mask, 3, cv2.INPAINT_TELEA) But it seems like the detected_cross isn't really detecting the expected objects, and as a result, it returns the same threshold image. This is the result I end up with. | This is basically implementing the comment of fmw42; that is, create the mask by thresholding for white regions (also I increased your dilation to 2 iterations for a visually slightly better result): import cv2 import numpy as np image_path = ... # TODO: adjust as necessary img = cv2.imread(image_path) mask = (cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) > 250).astype(np.uint8) kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3)) dilated_mask = cv2.dilate(mask, kernel, iterations=2) output_telea = cv2.inpaint(img, dilated_mask, 3, cv2.INPAINT_TELEA) cv2.imshow("result", output_telea) cv2.waitKey(0) cv2.destroyAllWindows() Result: In general, I agree with Cris Luengo's comments though: If you have multiple such images where the crosses are always in the same place, create a mask once (possibly manually) and reuse it. Even better, try to remove the crosses at image acquisition time (i.e. don't acquire them in the first place) rather than as a post-processing step. That is to say, if you have control over the acquisition process, of course. | 3 | 6 |
79,324,851 | 2025-1-2 | https://stackoverflow.com/questions/79324851/parsing-multi-index-pandas-data-frame-for-tuple-list-appendage | Problem/Task: create a function that inputs a pandas data frame represented by the markdown in Fig 1 and converts/outputs it to a list with the structure represented in Fig 2. I look forward to any feedback/support anyone might have! Fig 1: Pandas Data Frame (Function Input) as Markdown resources ('Widget A (idx = 0)', 't1') ('Widget A (idx = 0)', 't2') ('Widget A (idx = 0)', 't3') ('Widget A (idx = 0)', 't4') ('Widget A (idx = 0)', 't5') ('Widget A (idx = 0)', 't6') ('Widget A (idx = 0)', 't7') ('Widget A (idx = 0)', 't8') ('Widget A (idx = 0)', 't9') ('Widget A (idx = 0)', 't10') ('Widget A (idx = 0)', 't11') ('Widget A (idx = 0)', 't12') ('Widget A (idx = 0)', 't13') ('Widget A (idx = 0)', 't14') ('Widget A (idx = 0)', 't15') ('Widget B (idx = 1)', 't1') ('Widget B (idx = 1)', 't2') ('Widget B (idx = 1)', 't3') ('Widget B (idx = 1)', 't4') ('Widget B (idx = 1)', 't5') ('Widget B (idx = 1)', 't6') ('Widget B (idx = 1)', 't7') ('Widget B (idx = 1)', 't8') ('Widget B (idx = 1)', 't9') ('Widget B (idx = 1)', 't10') ('Widget B (idx = 1)', 't11') ('Widget B (idx = 1)', 't12') ('Widget B (idx = 1)', 't13') ('Widget B (idx = 1)', 't14') ('Widget B (idx = 1)', 't15') ('Widget C (idx =2)', 't1') ('Widget C (idx =2)', 't2') ('Widget C (idx =2)', 't3') ('Widget C (idx =2)', 't4') ('Widget C (idx =2)', 't5') ('Widget C (idx =2)', 't6') ('Widget C (idx =2)', 't7') ('Widget C (idx =2)', 't8') ('Widget C (idx =2)', 't9') ('Widget C (idx =2)', 't10') ('Widget C (idx =2)', 't11') m_1 10 nan nan nan nan nan nan nan nan nan nan nan nan nan nan 23 nan nan nan nan nan nan nan nan nan nan nan nan nan nan 17 nan nan nan nan nan nan nan nan nan nan m_2 nan nan 15 nan nan nan 17 nan nan nan nan nan nan nan nan nan nan 30 nan nan nan 23 nan nan nan nan nan nan nan nan nan nan 24 nan nan nan nan nan nan nan nan m_3 nan nan 23 nan nan nan 15 nan nan nan nan nan nan nan nan nan nan 26 nan nan nan 21 nan nan nan nan nan nan nan nan nan nan 22 nan nan nan nan nan nan nan nan m_4 nan nan 27 nan nan nan 19 nan nan nan nan nan nan nan nan nan nan 22 nan nan nan 18 nan nan nan nan nan nan nan nan nan nan 29 nan nan nan nan nan nan nan nan m_5 nan nan nan nan nan nan nan nan nan nan 15 nan nan nan nan nan nan nan nan nan nan nan nan nan nan 21 nan nan nan nan nan nan nan nan nan nan 23 nan nan nan nan m_6 nan nan nan nan nan nan nan nan nan nan 16 nan nan nan nan nan nan nan nan nan nan nan nan nan nan 16 nan nan nan nan nan nan nan nan nan nan 25 nan nan nan nan m_7 nan nan nan nan nan nan nan nan nan nan 23 nan nan nan nan nan nan nan nan nan nan nan nan nan nan 14 nan nan nan nan nan nan nan nan nan nan 30 nan nan nan nan m_8 nan nan nan nan 10 nan nan nan 10 nan nan nan 10 nan nan nan nan nan nan 15 nan nan nan 15 nan nan nan 15 nan nan nan nan nan nan 13 nan nan nan 13 nan nan m_9 nan nan nan nan 10 nan nan nan 10 nan nan nan 10 nan nan nan nan nan nan 15 nan nan nan 15 nan nan nan 15 nan nan nan nan nan nan 13 nan nan nan 13 nan nan m_10 nan nan nan nan 10 nan nan nan 10 nan nan nan 10 nan nan nan nan nan nan 15 nan nan nan 15 nan nan nan 15 nan nan nan nan nan nan 13 nan nan nan 13 nan nan m_11 nan nan nan nan nan nan nan nan nan nan nan nan nan nan 14 nan nan nan nan nan nan nan nan nan nan nan nan nan nan 12 nan nan nan nan nan nan nan nan nan nan 10 m_12 nan 1 nan 1 nan 1 nan 1 nan 1 nan 1 nan 1 nan nan 1 nan 1 nan 1 nan 1 nan 1 nan 1 nan 1 nan nan 1 nan 1 nan 1 nan 1 nan 1 nan Fig 
2: Example of Target Data Structure (Function Output) for List ` components = [ # widget A -> [task_0...task_i] -> [(machine_id_0, dur_0)...machine_id_i, dur_i] [ [(1, 10)], #t1 [(12, 1)], #t2 [(2, 15), (3, 23), (4,27)], #t3 [(12, 1)], #t4 [(8,10), (9,10), (10,10)], #t5 [(12, 1)], #t6 [(2, 17), (3, 15), (4,19)], #t7 [(12, 1)], #t8 [(8,10), (9,10), (10,10)], #t9 [(12, 1)], #t10 [(5, 15), (6, 16), (7,23)], #t11 [(12, 1)], #t12 [(8,10), (9,10), (10,10)], #t13 [(12, 1)], #t14 [(11,14)], #t15 ], # widget B -> [task_0...task_i] -> [(machine_id_0, dur_0)...machine_id_i, dur_i] [ [(1, 23)], #t1 [(12, 1)], #t2 [(2, 30), (3, 26), (4,22)], #t2 [(12, 1)], #t2 [(8,15), (9,15), (10,15)], #t3 [(12, 1)], #t2 [(2, 23), (3, 21), (4,18)], #t4 [(12, 1)], #t2 [(8,15), (9,15), (10,15)], #t5 [(12, 1)], #t2 [(5, 21), (6, 16), (7,14)], #t6 [(12, 1)], #t2 [(8,15), (9,15), (10,15)], #t7 [(12, 1)], #t2 [(11,12)], #t8 ], # widget C -> [task_0...task_i] -> [(machine_id_0, dur_0)...machine_id_i, dur_i] [ [(1, 17)], #t1 [(12, 1)], #t2 [(2, 24), (3, 22), (4,29)], #t3 [(12, 1)], #t4 [(8,13), (9,13), (10,13)], #t5 [(12, 1)], #t6 [(2, 23), (3, 25), (4,30)], #t7 [(12, 1)], #t8 [(8,13), (9,13), (10,13)], #t9 [(12, 1)], #t10 [(11,10)], #t11 ],] ` | Here's one approach: Minimal Reproducible Example import pandas as pd import numpy as np data = [[1, np.nan, np.nan], [np.nan, 2, 2], [np.nan, 3, np.nan]] m_idx = pd.MultiIndex.from_tuples( [('A', 't1'), ('A', 't2'), ('B', 't1')] ) idx = pd.Index([f'm_{i}' for i in range(1, 4)], name='resources') df = pd.DataFrame(data, columns=m_idx, index=idx) A B t1 t2 t1 resources m_1 1.0 NaN NaN m_2 NaN 2.0 2.0 m_3 NaN 3.0 NaN Desired output components = [ [ # A [(1, 1)], # t1 [(2, 2), (3, 3)] # t2 ], [ # B [(2, 2)] # t1 ] ] Code components = ( df.reset_index() .melt([('resources','')]) .dropna(subset='value') .assign( tmp=lambda x: list( zip( x[('resources','')].str.split('_').str[1].astype(int), x['value'].astype(int)) ) ) .groupby(['variable_0', 'variable_1'], sort=False)['tmp'] .apply(list) .groupby('variable_0', sort=False) .apply(list) .to_list() ) Output: components [[[(1, 1)], [(2, 2), (3, 3)]], [[(2, 2)]]] Explanation / Intermediates Use df.reset_index to apply df.melt on the previous index (now: ('resources', '')) + df.dropna on 'value' column. df.reset_index().melt([('resources','')]).dropna(subset='value') (resources, ) variable_0 variable_1 value 0 m_1 A t1 1.0 4 m_2 A t2 2.0 5 m_3 A t2 3.0 7 m_2 B t1 2.0 Use df.assign to add a column ('tmp') as a tuple (list + zip) containing the digits from 'resources' (via Series.str.split + Series.astype) and values from 'value'. .assign(...) (resources, ) variable_0 variable_1 value tmp 0 m_1 A t1 1.0 (1, 1) 4 m_2 A t2 2.0 (2, 2) 5 m_3 A t2 3.0 (3, 3) 7 m_2 B t1 2.0 (2, 2) Now, use df.groupby with the variable columns (original pd.MultiIndex) with sort=False to preserve order, and get 'tmp' as list (groupby.apply). .groupby(['variable_0', 'variable_1'])['tmp'].apply(list) variable_0 variable_1 A t1 [(1, 1)] t2 [(2, 2), (3, 3)] B t1 [(2, 2)] Name: tmp, dtype: object Chain another df.groupby, now solely with 'variable_0' (level 0 from original pd.MultIndex) and get list again. .groupby('variable_0').apply(list) variable_0 A [[(1, 1)], [(2, 2), (3, 3)]] B [[(2, 2)]] Name: tmp, dtype: object Finally, chain Series.to_list. | 1 | 1 |
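For readers who prefer an explicit loop over the melt/groupby chain, roughly the same nested list can be built straight from the MultiIndex columns; a sketch on the answer's small df (the full frame works the same way):

components = []
for widget in df.columns.get_level_values(0).unique():
    tasks = []
    for task in df[widget].columns:                  # 't1', 't2', ... in original order
        col = df[(widget, task)].dropna()
        # index labels look like 'm_3'; keep (machine_number, duration) pairs
        tasks.append([(int(m.split('_')[1]), int(v)) for m, v in col.items()])
    components.append(tasks)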
79,333,087 | 2025-1-6 | https://stackoverflow.com/questions/79333087/how-do-i-resolve-snowparksqlexception-user-is-empty-in-function-for-snowpar | When invoking a Snowpark-registered SPROC, I get the following error: SnowparkSQLException: (1304): <uuid>: 100357 (P0000): <uuid>: Python Interpreter Error: snowflake.connector.errors.ProgrammingError: 251005: User is empty in function MY_FUNCTION with handler compute for the following python code and invocation: def my_function(session: Session, input_table: str, limit: int) -> None: # Even doing nothing doesn't work! return sproc_my_function = my_session.sproc.register(func=my_function, name='my_function', is_permanent=True, replace=True, stage_location='@STAGE_LOC', execute_as="owner") input_table = 'x.y.MY_INPUT_TABLE' sproc_my_function(my_session, input_table, 100) I can't find a reference to this exception and "User is empty in function" anywhere on the internet - which makes me wonder if it's a drop-through of some sort. I also can't find a way to pass a user to the register method (this is already done successfully when my_session is set up). Please help! | Using Snowflake Notebooks: from snowflake.snowpark.session import Session from snowflake.snowpark.context import get_active_session my_session = get_active_session() def my_function(session: Session, input_table: str, limit: int) -> None: return None sproc_my_function = my_session.sproc.register(func=my_function, name='my_function', is_permanent=True, replace=True, stage_location='STAGE_LOC', execute_as="owner", packages=["snowflake-snowpark-python"]) Output: | 1 | 1 |
79,326,576 | 2025-1-3 | https://stackoverflow.com/questions/79326576/writing-to-application-insights-from-fastapi-with-managed-identity | I am trying to log from a FastAPI application to Azure application insights. It is working with a connection string, but I would like it to be working with managed identity. The code below does not fail - no errors or anything. But it does not log anything. Any suggestions to sove the problem, or how to troubleshoot as I get no errors: from fastapi import FastAPI,Request from fastapi.middleware.cors import CORSMiddleware from fastapi_azure_auth import SingleTenantAzureAuthorizationCodeBearer import uvicorn from fastapi import FastAPI, Security import os from typing import Dict from azure.identity import DefaultAzureCredential import logging from azure.monitor.opentelemetry import configure_azure_monitor from opentelemetry import trace,metrics from settings import Settings from pydantic import AnyHttpUrl,BaseModel from contextlib import asynccontextmanager from typing import AsyncGenerator from fastapi_azure_auth.user import User settings = Settings() @asynccontextmanager async def lifespan(app: FastAPI) -> AsyncGenerator[None, None]: """ Load OpenID config on startup. """ await azure_scheme.openid_config.load_config() yield app = FastAPI( swagger_ui_oauth2_redirect_url='/oauth2-redirect', swagger_ui_init_oauth={ 'usePkceWithAuthorizationCodeGrant': True, 'clientId': settings.OPENAPI_CLIENT_ID, 'scopes': settings.SCOPE_NAME, }, ) if settings.BACKEND_CORS_ORIGINS: app.add_middleware( CORSMiddleware, allow_origins=[str(origin) for origin in settings.BACKEND_CORS_ORIGINS], allow_credentials=True, allow_methods=['*'], allow_headers=['*'], ) azure_scheme = SingleTenantAzureAuthorizationCodeBearer( app_client_id=settings.APP_CLIENT_ID, tenant_id=settings.TENANT_ID, scopes=settings.SCOPES, ) class User(BaseModel): name: str roles: list[str] = [] logger = logging.getLogger(__name__) logger.setLevel(logging.INFO) credential = DefaultAzureCredential() configure_azure_monitor( credential=credential, connection_string="InstrumentationKey=xx-xx-xx-xx-xx" ) @app.get("/log", dependencies=[Security(azure_scheme)]) async def root(): print("Yo test") logger.info("Segato5", extra={"custom_dimension": "Kam_value","test1": "val1"}) meter = metrics.get_meter_provider().get_meter(__name__) counter = meter.create_counter("segato2") counter.add(8) return {"whoIsTheBest": "!!"} if __name__ == '__main__': uvicorn.run('main:app', reload=True) | Yes, you can use managed identity but you can need to use connection string along with it too as its a mandatory parameter. Also, to use managed identity, you need to deploy your code to any Azure resource such as Function app or Web App. I have deployed below code to web app and then enabled system managed identity in it. I have granted Monitoring Metrics Publisher RBAC role to my Web App in Application Insight. 
from fastapi import FastAPI import logging import uvicorn from azure.identity import ManagedIdentityCredential from azure.monitor.opentelemetry import configure_azure_monitor from opentelemetry import metrics from opentelemetry.instrumentation.fastapi import FastAPIInstrumentor app = FastAPI() logger = logging.getLogger(__name__) logger.setLevel(logging.INFO) credential = ManagedIdentityCredential() configure_azure_monitor( connection_string="InstrumentationKey=06a13******8b029b9", credential=credential ) FastAPIInstrumentor().instrument_app(app) @app.get("/log") async def root(): print("Yo test") logger.info("Segato5", extra={"custom_dimension": "Kam_value","test1": "val1"}) meter = metrics.get_meter_provider().get_meter(__name__) counter = meter.create_counter("segato2") counter.add(8) return {"whoIsTheBest": "!!"} if __name__ == '__main__': uvicorn.run('main:app', reload=True) Add gunicorn -w 2 -k uvicorn.workers.UvicornWorker -b 0.0.0.0:8000 main:app in Configuration. I am able to get the logs in Application insight. Thanks AnuragSingh-MSFT for clarifying the use of managed identity and connection string in this question. | 1 | 1 |
79,331,362 | 2025-1-5 | https://stackoverflow.com/questions/79331362/how-to-keep-track-of-ongoing-tasks-and-prevent-duplicate-tasks | I have a Python script that receives incoming messages from the web and processes the data: async with asyncio.TaskGroup() as task_group: processor_task = task_group.create_task(processor.start(message), name=f"process_message_{message.sender_id}_task") I added naming for tasks with f-string interpolation so that when debugging I know which task is doing what and for whom. Each iteration of message processing may take up to an hour. I want to process only one message from a sender at a time; however, senders sometimes send multiple messages. How can I drop further messages from the same sender_id until there is no task running for them? How can I check all running tasks for the same name before creating a task in TaskGroup? I tried reading the documentation of TaskGroup but didn't find anything related. | You can keep track of the running tasks like this: background_tasks = set() async with asyncio.TaskGroup() as task_group: if sender_id not in background_tasks: # check if the sender has a task running processor_task = task_group.create_task(processor.start(message), name=f"process_message_{message.sender_id}_task") background_tasks.add(sender_id) # add to background tasks processor_task.add_done_callback(lambda x: background_tasks.discard(sender_id)) # remove from background tasks after the task is finished else: print(f'user {sender_id} already has a task running') | 1 | 1 |
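A dict keyed by sender makes the one-task-per-sender rule explicit and avoids scanning task names; a sketch assuming the processor and message objects from the question:

import asyncio

running: dict[str, asyncio.Task] = {}

async def handle(task_group: asyncio.TaskGroup, message) -> None:
    existing = running.get(message.sender_id)
    if existing is not None and not existing.done():
        return  # drop: this sender already has a message being processed
    task = task_group.create_task(
        processor.start(message),
        name=f"process_message_{message.sender_id}_task",
    )
    running[message.sender_id] = task
    # bind sender_id now so the callback removes the right entry later
    task.add_done_callback(lambda _t, sid=message.sender_id: running.pop(sid, None))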
79,330,931 | 2025-1-5 | https://stackoverflow.com/questions/79330931/gridsearchcv-with-data-indexed-by-time | I am trying to use the GridSearchCV from sklearn.model_selection. My data is a set of classification that is indexed by time. As a result, when doing cross validation, I want the training set to be exclusively the data with time all before the data in the test set. So my training set X_train, y_train looks like Time feature_1 feature_2 result 2020-01-30 3 6 1 2020-02-01 4 2 0 2021-03-02 7 1 0 and the test set X_test, y_test looks like Time feature_1 feature_2 result 2023-01-30 3 6 1 2023-02-01 4 2 0 2024-03-02 7 1 0 Suppose I am using a model such as xgboost, then to optimise the hyperparameters, I used GridSearchCV and the code looks like param_grid = { 'max_depth': [1,2,3,4,5], 'min_child_weight': [0,1,2,3,4,5], 'gamma': [0.5, 1, 1.5, 2, 5], 'colsample_bytree': [0.6, 0.8, 1.0], } clf = XGBClassifier(learning_rate=0.02, n_estimators=600, objective='binary:logistic', silent=True, nthread=1) grid_search = GridSearchCV( estimator=clf, param_grid=param_grid, scoring='accuracy', n_jobs= -1) grid_search.fit(X_train, y_train) However, how should i set the cv in grid_search? Thank you so much in advance. Edit: So I tried to set cv=0 since I want my training data to be strictly "earlier" then test data and I got the following errors: InvalidParameterError: The 'cv' parameter of GridSearchCV must be an int in the range [2, inf), an object implementing 'split' and 'get_n_splits', an iterable or None. Got 0 instead. | the default cross-validation in GridSearchCV does not consider temporal dependency when splitting. You can use TimeSeriesSplit instead of the default CV from model selection. TimeSeriesSplit is built for this exact use case of yours. | 1 | 1 |
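Concretely, pass a TimeSeriesSplit instance as cv so every validation fold comes strictly after its training fold; a sketch reusing clf and param_grid from the question and assuming X_train is sorted by Time:

from sklearn.model_selection import GridSearchCV, TimeSeriesSplit

tscv = TimeSeriesSplit(n_splits=5)   # each test fold lies after its training fold

grid_search = GridSearchCV(
    estimator=clf,
    param_grid=param_grid,
    scoring='accuracy',
    cv=tscv,
    n_jobs=-1,
)
grid_search.fit(X_train, y_train)    # rows must already be in chronological order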
79,331,099 | 2025-1-5 | https://stackoverflow.com/questions/79331099/converting-nested-query-string-requests-to-a-dictionary | I'm experiencing some difficulties converting a querystring data to a well formed dictionary in my view. Here's my view class VendorPayloadLayerView(generics.GenericAPIView): permission_classes = (permissions.AllowAny,) def get(self, request, *args, **kwargs): print("Here's the request *****") print(request) payload = request.GET print("Here's the decoded queryDict data") print(payload) data = payload.dict() print("Here's the dictionary") print(data) Here is the request to the view: <rest_framework.request.Request: GET '/turnalerts/api/v2/vendor?%7B%22_vnd%22:%20%7B%22v1%22:%20%7B%22author%22:%20%7B%22id%22:%20%22d2e805b5-4a25-4102-a629-e6b67c798ad6%22,%20%22name%22:%20%22WhatsApp%20Business%20Cloud%20API%22,%20%22request_id%22:%20%22GBV3LTlUEUtfjuMHaDYi%22,%20%22type%22:%20%22SYSTEM%22%7D,%20%22card_uuid%22:%20null,%20%22chat%22:%20%7B%22assigned_to%22:%20%7B%22id%22:%20%2278c711b6-2673-cd8b-0fd9-9a6f03bbcdc5%22,%20%22name%22:%20%22Chima%20Chinda%22,%20%22type%22:%20%22OPERATOR%22%7D,%20%22contact_uuid%22:%20%225ba732cf-d424-4163-9d73-98680d4f53f9%22,%20%22inserted_at%22:%20%222022-05-10T10:15:38.808899Z%22,%20%22owner%22:%20%22%202349039756628%22,%20%22permalink%22:%20%22https://whatsapp-praekelt-cloud.turn.io/app/c/ebd12728-e787-4f29-b938-1059b67f4abd%22,%20%22state%22:%20%22OPEN%22,%20%22state_reason%22:%20%22Re-opened%20by%20inbound%20message.%22,%20%22unread_count%22:%2018,%20%22updated_at%22:%20%222024-12-28T22:17:48.825870Z%22,%20%22uuid%22:%20%22ebd12728-e787-4f29-b938-1059b67f4abd%22%7D,%20%22direction%22:%20%22outbound%22,%20%22faq_uuid%22:%20null,%20%22in_reply_to%22:%20null,%20%22inserted_at%22:%20%222024-12-28T22:17:48.817259Z%22,%20%22labels%22:%20[],%20%22last_status%22:%20null,%20%22last_status_timestamp%22:%20null,%20%22on_fallback_channel%22:%20false,%20%22rendered_content%22:%20null,%20%22uuid%22:%20%227d5fc64e-fd77-325f-8a50-6475e4496775%22%7D%7D,%20%22from%22:%20%2227726968450%22,%20%22id%22:%20%22wamid.HBgNMjM0OTAzOTc1NjYyOBUCABEYEjdCMTJFNUZDNzNFQjkxQ0IyRQA%27:%20%27%22,%20%22preview_url%22:%20false,%20%22recipient_type%22:%20%22individual%22,%20%22text%22:%20%7B%22body%22:%20%22The%20MomConnect%20ADA%20Symptom%20Checker%20is%20unfortunately%20no%20longer%20available.%20%5C%5Cn%5C%5CnPlease%20reply%20*ASK*%20if%20you%20have%20questions%20or%20need%20help.%22%7D,%20%22timestamp%22:%20%221735424268%22,%20%22to%22:%20%222349039756628%22,%20%22type%22:%20%22text%22%7D'> Here's the decoded queryDict with request.GET <QueryDict: {'{"_vnd": {"v1": {"author": {"id": "d2e805b5-4a25-4102-a629-e6b67c798ad6", "name": "WhatsApp Business Cloud API", "request_id": "GBV3LTlUEUtfjuMHaDYi", "type": "SYSTEM"}, "card_uuid": null, "chat": {"assigned_to": {"id": "78c711b6-2673-cd8b-0fd9-9a6f03bbcdc5", "name": "Chima Chinda", "type": "OPERATOR"}, "contact_uuid": "5ba732cf-d424-4163-9d73-98680d4f53f9", "inserted_at": "2022-05-10T10:15:38.808899Z", "owner": " 2349039756628", "permalink": "https://whatsapp-praekelt-cloud.turn.io/app/c/ebd12728-e787-4f29-b938-1059b67f4abd", "state": "OPEN", "state_reason": "Re-opened by inbound message.", "unread_count": 18, "updated_at": "2024-12-28T22:17:48.825870Z", "uuid": "ebd12728-e787-4f29-b938-1059b67f4abd"}, "direction": "outbound", "faq_uuid": null, "in_reply_to": null, "inserted_at": "2024-12-28T22:17:48.817259Z", "labels": [], "last_status": null, "last_status_timestamp": null, "on_fallback_channel": false, 
"rendered_content": null, "uuid": "7d5fc64e-fd77-325f-8a50-6475e4496775"}}, "from": "27726968450", "id": "wamid.HBgNMjM0OTAzOTc1NjYyOBUCABEYEjdCMTJFNUZDNzNFQjkxQ0IyRQA\': \'", "preview_url": false, "recipient_type": "individual", "text": {"body": "The MomConnect ADA Symptom Checker is unfortunately no longer available. \\\\n\\\\nPlease reply *ASK* if you have questions or need help."}, "timestamp": "1735424268", "to": "2349039756628", "type": "text"}': ['']}> Lastly, here's the dictionary as payload.dict() Here's the dictionary {'{"_vnd": {"v1": {"author": {"id": "d2e805b5-4a25-4102-a629-e6b67c798ad6", "name": "WhatsApp Business Cloud API", "request_id": "GBV3LTlUEUtfjuMHaDYi", "type": "SYSTEM"}, "card_uuid": null, "chat": {"assigned_to": {"id": "78c711b6-2673-cd8b-0fd9-9a6f03bbcdc5", "name": "Chima Chinda", "type": "OPERATOR"}, "contact_uuid": "5ba732cf-d424-4163-9d73-98680d4f53f9", "inserted_at": "2022-05-10T10:15:38.808899Z", "owner": " 2349039756628", "permalink": "https://whatsapp-praekelt-cloud.turn.io/app/c/ebd12728-e787-4f29-b938-1059b67f4abd", "state": "OPEN", "state_reason": "Re-opened by inbound message.", "unread_count": 18, "updated_at": "2024-12-28T22:17:48.825870Z", "uuid": "ebd12728-e787-4f29-b938-1059b67f4abd"}, "direction": "outbound", "faq_uuid": null, "in_reply_to": null, "inserted_at": "2024-12-28T22:17:48.817259Z", "labels": [], "last_status": null, "last_status_timestamp": null, "on_fallback_channel": false, "rendered_content": null, "uuid": "7d5fc64e-fd77-325f-8a50-6475e4496775"}}, "from": "27726968450", "id": "wamid.HBgNMjM0OTAzOTc1NjYyOBUCABEYEjdCMTJFNUZDNzNFQjkxQ0IyRQA\': \'", "preview_url": false, "recipient_type": "individual", "text": {"body": "The MomConnect ADA Symptom Checker is unfortunately no longer available. \\\\n\\\\nPlease reply *ASK* if you have questions or need help."}, "timestamp": "1735424268", "to": "2349039756628", "type": "text"}': ''} The problem here is that the final result is not a valid json as it has an extra brace with an single quote surrounding the dictionary. What I'm trying to get: {"_vnd": {"v1": {"author": {"id": "d2e805b5-4a25-4102-a629-e6b67c798ad6", "name": "WhatsApp Business Cloud API", "request_id": "GBV3LTlUEUtfjuMHaDYi", "type": "SYSTEM"}, "card_uuid": null, "chat": {"assigned_to": {"id": "78c711b6-2673-cd8b-0fd9-9a6f03bbcdc5", "name": "Chima Chinda", "type": "OPERATOR"}, "contact_uuid": "5ba732cf-d424-4163-9d73-98680d4f53f9", "inserted_at": "2022-05-10T10:15:38.808899Z", "owner": " 2349039756628", "permalink": "https://whatsapp-praekelt-cloud.turn.io/app/c/ebd12728-e787-4f29-b938-1059b67f4abd", "state": "OPEN", "state_reason": "Re-opened by inbound message.", "unread_count": 18, "updated_at": "2024-12-28T22:17:48.825870Z", "uuid": "ebd12728-e787-4f29-b938-1059b67f4abd"}, "direction": "outbound", "faq_uuid": null, "in_reply_to": null, "inserted_at": "2024-12-28T22:17:48.817259Z", "labels": [], "last_status": null, "last_status_timestamp": null, "on_fallback_channel": false, "rendered_content": null, "uuid": "7d5fc64e-fd77-325f-8a50-6475e4496775"}}, "from": "27726968450", "id": "wamid.HBgNMjM0OTAzOTc1NjYyOBUCABEYEjdCMTJFNUZDNzNFQjkxQ0IyRQA\': \'", "preview_url": false, "recipient_type": "individual", "text": {"body": "The MomConnect ADA Symptom Checker is unfortunately no longer available. 
\\\\n\\\\nPlease reply *ASK* if you have questions or need help."}, "timestamp": "1735424268", "to": "2349039756628", "type": "text"} The extra characters (a brace and a single quote at the start and end of the data) do not appear in the query string, so it's not clear to me what's creating them. Thanks. | That is because the querystring is just a "one level" dictionary: it maps keys, which are strings, to values, which are strings. The fact that the value looks like a JSON blob does not make much sense. Here you even make it worse because you use one key in the querystring that maps to no value. The key here is a JSON blob, which is ugly. You can get this working by trying to parse the keys into a JSON blob, like: import json result = {} for ky in request.GET: try: ky = json.loads(ky) if isinstance(ky, dict): result.update(ky) except ValueError: pass But still it is ugly. If you pass JSON blobs as a querystring, do so for a fixed key, and then JSON-decode that specific key. Here you are essentially abusing the querystring part of a URL. | 1 | 1
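The closing advice in the answer (send the JSON under a fixed key and decode only that key) can be shown concretely. In this sketch the key name "payload" is my own choice, not something from the original question:

import json
from rest_framework import generics, permissions
from rest_framework.response import Response

class VendorPayloadLayerView(generics.GenericAPIView):
    permission_classes = (permissions.AllowAny,)

    def get(self, request, *args, **kwargs):
        # Client calls /turnalerts/api/v2/vendor?payload=<url-encoded JSON>
        raw = request.GET.get('payload', '{}')
        try:
            data = json.loads(raw)
        except ValueError:
            data = {}
        return Response(data)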
79,330,953 | 2025-1-5 | https://stackoverflow.com/questions/79330953/lemma-of-puncutation-in-spacy | I'm using spacy for some downstream tasks, mainly noun phrase extraction. My texts contain a lot of parentheses, and while applying the lemma, I noticed all the punctuation that doesn't end sentences becomes --: import spacy nlp = spacy.load("de_core_news_sm") doc = nlp("(Das ist ein Test!)") for token in doc: print(f"Text: '{token.text}', Lemma: '{token.lemma_}'") Output: Text: '(', Lemma: '--' Text: 'Das', Lemma: 'der' Text: 'ist', Lemma: 'sein' Text: 'ein', Lemma: 'ein' Text: 'Test', Lemma: 'Test' Text: '!', Lemma: '--' Text: ')', Lemma: '--' Is that normal, and if yes, why, and what can I do to keep the parentheses? I'm on 3.7.4 with Python 3.11 | I can confirm the issue with German, but when I try the equivalent sentence in Dutch the ( and ) are kept as lemma instead of --. So this is something particular in the German model. You can override the default lemmata if you want: import spacy nlp = spacy.load("de_core_news_sm") nlp.get_pipe("attribute_ruler").add([[{"TEXT": "("}]], {"LEMMA": "("}) nlp.get_pipe("attribute_ruler").add([[{"TEXT": ")"}]], {"LEMMA": ")"}) doc = nlp("(Das ist ein Test!)") print(doc.text) for token in doc: print(token.text, token.lemma_, token.pos_, token.dep_) Result: (Das ist ein Test!) ( ( PUNCT punct Das der PRON sb ist sein AUX ROOT ein ein DET nk Test Test NOUN pd ! -- PUNCT punct ) ) PUNCT punct | 2 | 2 |
79,330,764 | 2025-1-5 | https://stackoverflow.com/questions/79330764/how-can-i-silence-undefinedmetricwarning | How can I silence the following warning while running GridSearchCV(model, params, cv=10, scoring='precision', verbose=1, n_jobs=20, refit=True)? /opt/dev/myenv/lib/python3.9/site-packages/sklearn/metrics/_classification.py:1531: UndefinedMetricWarning: Precision is ill-defined and being set to 0.0 due to no predicted samples. Use `zero_division` parameter to control this behavior. I have tried without success: import os, warnings warnings.simplefilter("ignore") warnings.filterwarnings("ignore") with warnings.catch_warnings(): warnings.simplefilter("ignore") os.environ["PYTHONWARNINGS"] = "ignore" | Try this code import warnings from sklearn.exceptions import UndefinedMetricWarning with warnings.catch_warnings(): # Ignore only UndefinedMetricWarning warnings.filterwarnings("ignore", category=UndefinedMetricWarning) | 1 | 1 |
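The accepted answer shows the filter but not the call it should wrap. Below is a sketch of that, plus the alternative the warning message itself suggests (setting zero_division instead of hiding the warning). Here model, params, X and y stand in for the question's objects, and note that with n_jobs=20 the warning is raised inside joblib worker processes, which a filter set in the parent process may not reach:

import warnings
from sklearn.exceptions import UndefinedMetricWarning
from sklearn.metrics import make_scorer, precision_score
from sklearn.model_selection import GridSearchCV

# Option 1: suppress the warning around the call that triggers it
with warnings.catch_warnings():
    warnings.filterwarnings("ignore", category=UndefinedMetricWarning)
    grid = GridSearchCV(model, params, cv=10, scoring='precision',
                        verbose=1, n_jobs=20, refit=True)
    grid.fit(X, y)  # the filter only applies inside this block

# Option 2: define the metric's behaviour so no warning is emitted at all
precision_zero = make_scorer(precision_score, zero_division=0)
grid = GridSearchCV(model, params, cv=10, scoring=precision_zero,
                    verbose=1, n_jobs=20, refit=True)
grid.fit(X, y)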
79,330,650 | 2025-1-5 | https://stackoverflow.com/questions/79330650/there-are-some-basics-i-need-to-understand-regarding-python-custom-user-models | I wanted to delete a user. After a bit of struggling I ended up with: views.py the_user = get_user_model() @login_required def del_user(request): email = request.user.email the_user.objects.filter(email=email).delete() messages.warning(request, "bruker slettet.") return redirect("index") But I really do not understand the following line: email = request.user.email. And why not? email = request.the_user.email Is this because the user is referring to the AbstractBaseUser? | request is an instance of an HttpRequest object. It represents the current received request. The AuthenticationMiddleware authenticates the user that made the request (based on cookies or however you have configured it) and adds the request.user attribute to it, so your code can get who the current user is that made the request. What you call the_user is the User model class. That's an arbitrary name you gave to a variable in your code. FYI, the request.user object is already a full fledged User instance, which already has a delete method. You don't need to find the same user again via the the_user model class, you can just do: @login_required def del_user(request): request.user.delete() messages.warning(request, "bruker slettet.") return redirect("index") | 2 | 3 |
79,330,420 | 2025-1-5 | https://stackoverflow.com/questions/79330420/how-to-get-maximum-average-of-subarray | I have been working this leet code questions https://leetcode.com/problems/maximum-average-subarray-i/description/ I have been able to create a solution after understanding the sliding window algorithm. I was wondering with my code where my logic is going wrong, I do think my my issue seems to be in this section section of the code, but I am unable to pinpoint why. while temp > k: temp -= nums[left] left += 1 ans = temp / (curr - left + 1) While I do appreciate other solutions and ways solving this problem, I want to understand and get solution working first before I start looking at different ways of doing the problem, this way i get a better understanding of the algorithm. Full code reference def findMaxAverage(self, nums, k): """ :type nums: List[int] :type k: int :rtype: float """ left = 0 ans = 0 temp = 0 for curr in range(len(nums)): temp += nums[curr] curr += 1 while temp > k: temp -= nums[left] left += 1 ans = temp / (curr - left + 1) return ans | temp is the sum of the subarray, while k is the number of elements in it - you shouldn't be comparing the two, they're two totally different things. To implement a sliding window, I'd sum the first k elements of nums, and then run over it where in each iteration I drop the first element and add the last and then check if the sum has increased or not. Once you find the maximum sum, you can find the average by dividing it by k. class Solution: def findMaxAverage(self, nums: List[int], k: int) -> float: currSum = sum(nums[:k]) maxSum = currSum for i in range(k, len(nums)): currSum = currSum - nums[i - k] + nums[i] maxSum = max(maxSum, currSum) return maxSum / k | 1 | 1 |
79,329,522 | 2025-1-4 | https://stackoverflow.com/questions/79329522/filtering-from-index-and-comparing-row-value-with-all-values-in-column | Starting with this DataFrame: df_1 = pl.DataFrame({ 'name': ['Alpha', 'Alpha', 'Alpha', 'Alpha', 'Alpha'], 'index': [0, 3, 4, 7, 9], 'limit': [12, 18, 11, 5, 9], 'price': [10, 15, 12, 8, 11] }) βββββββββ¬ββββββββ¬ββββββββ¬ββββββββ β name β index β limit β price β β --- β --- β --- β --- β β str β i64 β i64 β i64 β βββββββββͺββββββββͺββββββββͺββββββββ‘ β Alpha β 0 β 12 β 10 β β Alpha β 3 β 18 β 15 β β Alpha β 4 β 11 β 12 β β Alpha β 7 β 5 β 8 β β Alpha β 9 β 9 β 11 β βββββββββ΄ββββββββ΄ββββββββ΄ββββββββ I need to add a new column to tell me at which index (greater than the current one) the price is equal or higher than the current limit. With this example above, the expected output is: βββββββββ¬ββββββββ¬ββββββββ¬ββββββββ¬ββββββββββββ β name β index β limit β price β min_index β β --- β --- β --- β --- β --- β β str β i64 β i64 β i64 β i64 β βββββββββͺββββββββͺββββββββͺββββββββͺββββββββββββ‘ β Alpha β 0 β 12 β 10 β 3 β β Alpha β 3 β 18 β 15 β null β β Alpha β 4 β 11 β 12 β 9 β β Alpha β 7 β 5 β 8 β 9 β β Alpha β 9 β 9 β 11 β null β βββββββββ΄ββββββββ΄ββββββββ΄ββββββββ΄ββββββββββββ Explaining the "min_index" column results: 1st row, where the limit is 12: from the 2nd row onwards, the minimum index whose price is equal or greater than 12 is 3. 2nd row, where the limit is 18: from the 3rd row onwards, there is no index whose price is equal or greater than 18. 3rd row, where the limit is 11: from the 4th row onwards, the minimum index whose price is equal or greater than 11 is 9. 4th row, where the limit is 5: from the 5th row onwards, the minimum index whose price is equal or greater than 5 is 9. 5th row, where the limit is 9: as this is the last row, there is no further index whose price is equal or greater than 9. My solution is shown below - but what would be a neat Polars way of doing it? I was able to solve it in 8 steps, but I'm sure there is a more effective way of doing it. # Import Polars. import polars as pl # Create a sample DataFrame. df_1 = pl.DataFrame({ 'name': ['Alpha', 'Alpha', 'Alpha', 'Alpha', 'Alpha'], 'index': [0, 3, 4, 7, 9], 'limit': [12, 18, 11, 5, 9], 'price': [10, 15, 12, 8, 11] }) # Group by name, so that we can vertically stack all row's values into a single list. df_2 = df_1.group_by('name').agg(pl.all()) # Put the lists with the original DataFrame. df_3 = df_1.join( other=df_2, on='name', suffix='_list' ) # Explode the dataframe to long format by exploding the given columns. df_3 = df_3.explode([ 'index_list', 'limit_list', 'price_list', ]) # Filter the DataFrame for the condition we want. df_3 = df_3.filter( (pl.col('index_list') > pl.col('index')) & (pl.col('price_list') >= pl.col('limit')) ) # Get the minimum index over the index column. df_3 = df_3.with_columns( pl.col('index_list').min().over('index').alias('min_index') ) # Select only the relevant columns and drop duplicates. df_3 = df_3.select( pl.col(['index', 'min_index']) ).unique() # Finally join the result. 
df_final = df_1.join( other=df_3, on='index', how='left' ) print(df_final) | Option 1: df.join_where (experimental) out = ( df_1.join( df_1 .join_where( df_1.select('index', 'price'), pl.col('index_right') > pl.col('index'), pl.col('price_right') >= pl.col('limit') ) .group_by('index') .agg( pl.col('index_right').min().alias('min_index') ), on='index', how='left' ) ) Output: shape: (5, 5) βββββββββ¬ββββββββ¬ββββββββ¬ββββββββ¬ββββββββββββ β name β index β limit β price β min_index β β --- β --- β --- β --- β --- β β str β i64 β i64 β i64 β i64 β βββββββββͺββββββββͺββββββββͺββββββββͺββββββββββββ‘ β Alpha β 0 β 12 β 10 β 3 β β Alpha β 3 β 18 β 15 β null β β Alpha β 4 β 11 β 12 β 9 β β Alpha β 7 β 5 β 8 β 9 β β Alpha β 9 β 9 β 11 β null β βββββββββ΄ββββββββ΄ββββββββ΄ββββββββ΄ββββββββββββ Explanation / Intermediates Use df.join_where and for other use df.select (note that you don't need 'limit'), adding the filter predicates. # df_1.join_where(...) shape: (4, 6) βββββββββ¬ββββββββ¬ββββββββ¬ββββββββ¬ββββββββββββββ¬ββββββββββββββ β name β index β limit β price β index_right β price_right β β --- β --- β --- β --- β --- β --- β β str β i64 β i64 β i64 β i64 β i64 β βββββββββͺββββββββͺββββββββͺββββββββͺββββββββββββββͺββββββββββββββ‘ β Alpha β 0 β 12 β 10 β 3 β 15 β β Alpha β 0 β 12 β 10 β 4 β 12 β β Alpha β 4 β 11 β 12 β 9 β 11 β β Alpha β 7 β 5 β 8 β 9 β 11 β βββββββββ΄ββββββββ΄ββββββββ΄ββββββββ΄ββββββββββββββ΄ββββββββββββββ Since order is not maintained, use df.group_by to retrieve pl.Expr.min per 'index'. # df_1.join_where(...).group_by('index').agg(...) shape: (3, 2) βββββββββ¬ββββββββββββ β index β min_index β β --- β --- β β i64 β i64 β βββββββββͺββββββββββββ‘ β 0 β 3 β β 7 β 9 β β 4 β 9 β βββββββββ΄ββββββββββββ The result we add to df_1 with a left join. Option 2: df.join with "cross" + df.filter (Adding this option, since df.join_where is experimental. This will be more expensive though.) out2 = ( df_1.join( df_1 .join(df_1.select('index', 'price'), how='cross') .filter( pl.col('index_right') > pl.col('index'), pl.col('price_right') >= pl.col('limit') ) .group_by('index') .agg( pl.col('index_right').min().alias('min_index') ), on='index', how='left' ) ) out2.equals(out) # True | 2 | 2 |
79,322,010 | 2025-1-1 | https://stackoverflow.com/questions/79322010/how-to-make-mypy-correctly-type-check-a-function-using-functools-partial | I'm trying to create a function that returns a partially applied callable, but I'm encountering issues with mypy type checking. HEre Is my first implementation: Help me to explain my question for stackoverflow. i.e find a title and the body this code : from collections.abc import Callable from functools import partial def f(i: int, j: float, k: int) -> int: return i + int(j) + k def g(a: float) -> Callable[[int, int], int]: return partial(f, j=a) fun: Callable[[int, int], int] = g(3.0) r: int = fun(4, 5) print(r) It is successfully checked by mypy but can not run r: int = fun(4, 5) TypeError: f() got multiple values for argument 'j' to solve this problem, I call the function with named argument from functools import partial def f(i: int, j: float, k: int) -> int: return i + int(j) + k def g(a: float) -> Callable[[int, int], int]: return partial(f, j=a) fun: Callable[[int, int], int] = g(3.0) # line 12 in my code (where the error message comes from) r: int = fun(i=4, k=5) print(r) it works fine now but mypy checking fails main.py:12: error: Unexpected keyword argument "i" [call-arg] main.py:12: error: Unexpected keyword argument "k" [call-arg] Found 2 errors in 1 file (checked 1 source file) Is there a way to annotate this code so that it both runs correctly and passes mypy's type checking? I've tried various combinations of type hints, but I haven't found a solution that satisfies both the runtime behavior and static type checking. I know there is this solution, without using Partial from collections.abc import Callable def f(i :int,j : float,k :int) ->int: return i+int(j)+k def g(a :float) -> Callable[[int,int],int]: def ret(i,k): return f(i,a,k) return ret fun :Callable[[int,int],int]= g(3.0) r : int = fun(4,5) print(r) But I really want to use Callable because for I am working with functions with a lot of parameter and it this much more simplier to just say which paramters are replaced | Callable is not expressive enough to be able to describe the function signature you are returning. Instead, you should use a Protocol. This is caused by your use of partial() to fill in the argument for j. That is, partial(f, j=4)(1, 2) is equivalent to f(1, 2, j=4) which means Python tries to pass both 2 and 4 as the argument for j. In cannot do this, and so instead throws an error. Thus, k MUST be passed as a keyword argument instead of a positional argument. Instead of: # All this says is that the function takes two ints as argument. It does not # say what the name of the argument is. So you cannot use keyword arguments. # Even if that is what is required in the case of arg `k` FuncType = Callable[[int,int],int] Use: from typing import Protocol # Callable takes two int args called `i` and `k`. `i` can be passed as a normal # arg or as a keyword arg, and `k` MUST be passed as a keyword arg. class FuncType(Protocol): def __call__(self, i: int, *, k: int) -> int: ... Thus, the latter part of your code would become: class FuncType(Protocol): def __call__(self, i: int, *, k: int) -> int: ... def g(a: float) -> FuncType: # NB. mypy does not check this cast, even with the --strict flag return partial(f, j=a) fun: FuncType = g(3.0) r: int = fun(4, j=5) # OR r = fun(i=4, j=5) # mypy is happy with either print(r) Passing k as a positional argument It is possible to pass k as a positional argument, but not by using partial. 
Instead you must use a closure to wrap your calls to f. For example: def g(a: float) -> Callable[[int, int], int]: def wrapper(i: int, k: int) -> int: return f(i, a, k) return wrapper g(1.0)(2, 3) # mypy is happy with this You'll note that because k can now be passed as a positional argument you can use Callable to express the function signature again. However, even though the args of wrapper are i and k, mypy will not let you pass these args by keyword. This is because Callable does not expose the names of the arguments of the callable. If you want to be able to use keyword arguments, then you will need to use a Protocol again. | 4 | 4 |
79,328,082 | 2025-1-4 | https://stackoverflow.com/questions/79328082/python-cant-access-attribute-of-an-object-unless-i-check-nullity-why | A LeetCode problem which is about linked lists that goes with this structure: # Definition for singly-linked list. # class ListNode: # def __init__(self, val=0, next=None): # self.val = val # self.next = next Gave an error while attempting to print the val of the next node, but still worked when given a nullity check (it never even went to the else statement). Assuming l1 is an instance of the class ListNode with print(l1.nextNode) gives: ListNode{val: 4, next: ListNode{val: 3, next: None}} And: nextNode = l1.next Why does this Fail: print(nextNode.val) AttributeError: 'NoneType' object has no attribute 'val' While this Works: if nextNode is not None: print(nextNode.val) else: print("Node is None") Extra: I wonder if the answer to the above is related to why this also Fails with try/catch: try: print("try block executed") print(nextNode.val) except: **print("except block executed1") print(nextNode.val)** if nextNode is not None: print("except block executed"2) print(nextNode.val) else: print("Node is None") While this Works and prints try block executed: try: print("try block executed") print(nextNode.val) except: if nextNode is not None: print("except block executed") print(nextNode.val) else: print("Node is None") EDIT: Found the cause, turned out that the code fails for certain test case where it has only 1 node, but when it succeeds, it shows another test case result Found this out while trying to create a copy-able code, rookie mistake... For more details check the LeetCode problem in the link provided at the start. | When you reach the end of the list, nextNode will be None. None doesn't have a val attribute, so nextNode.val raises an exception. If you check whether it's None first, you don't execute that erroneous expression, so there's no error. When you use try/except, it catches the exception. But then in the except: block you try to print the same thing. It raises the exception again, and this time there's no try/except to catch it, so the program stops with an error. | 1 | 1 |
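Since the answer is prose-only, here is a small runnable sketch of the pattern it describes (guard every .next access before dereferencing it), built on the question's ListNode definition:

class ListNode:
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next

l1 = ListNode(4, ListNode(3))   # two nodes; with a one-node list, l1.next is None

node = l1
while node is not None:         # stop before touching None
    print(node.val)
    node = node.next

next_node = l1.next
print(next_node.val if next_node is not None else "Node is None")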
79,327,275 | 2025-1-3 | https://stackoverflow.com/questions/79327275/gekko-using-apopt-isnt-optimizing-a-single-linear-equation-represented-as-a-pwl | I've run into an issue where I can't get APOPT to optimize an unconstrained single piecewise linear, and it's really throwing me for a loop. I feel like there's something I'm not understanding about model.pwl, but it's hard (for me) to find documentation outside of the GEKKO docs. Here's my minimal example: model = GEKKO(remote=False) model.options.SOLVER = 1 model.solver_options = ["minlp_as_nlp 0"] x = model.sos1([0, 1, 2, 3, 4]) # This can also be model.Var(lb=0, ub=4), same result. pwl = model.Var() model.pwl(x, pwl, [0, 1, 2, 3, 4], [30, 30.1, 30.2, 30.3, 30.4], bound_x=True) model.Minimize(pwl) model.solve(display=True) print(x.value) print(pwl.value) print(model.options.objfcnval) The output that I get is: ---------------------------------------------------------------- APMonitor, Version 1.0.3 APMonitor Optimization Suite ---------------------------------------------------------------- --------- APM Model Size ------------ Each time step contains Objects : 1 Constants : 0 Variables : 2 Intermediates: 0 Connections : 2 Equations : 1 Residuals : 1 Piece-wise linear model pwl1points: 5 Number of state variables: 12 Number of total equations: - 5 Number of slack variables: - 0 --------------------------------------- Degrees of freedom : 7 ---------------------------------------------- Steady State Optimization with APOPT Solver ---------------------------------------------- Iter Objective Convergence 0 3.39503E+01 3.01000E+01 1 3.22900E+01 1.00000E-10 2 3.22000E+01 2.22045E-16 4 3.22000E+01 0.00000E+00 Successful solution --------------------------------------------------- Solver : APOPT (v1.0) Solution time : 3.819999999541324E-002 sec Objective : 32.2000000000000 Successful solution --------------------------------------------------- 2.0 30.2 32.2 This is unexpected to me, as the obvious minimal value is 30 for the pwl. | A cubic spline is much more reliable in optimization than a piecewise linear function because it doesn't rely on slack variables and switching conditions. 
from gekko import GEKKO model = GEKKO(remote=False) model.options.SOLVER = 1 x = model.Var(lb=0, ub=4, integer=True) y = model.Var() model.cspline(x, y, [0, 1, 2, 3, 4], [30, 30.1, 30.2, 30.3, 30.4], bound_x=True) model.Minimize(y) model.solve(display=True) print(x.value) print(y.value) print(model.options.objfcnval) Here is the output: ---------------------------------------------------------------- APMonitor, Version 1.0.3 APMonitor Optimization Suite ---------------------------------------------------------------- --------- APM Model Size ------------ Each time step contains Objects : 1 Constants : 0 Variables : 2 Intermediates: 0 Connections : 2 Equations : 1 Residuals : 1 Number of state variables: 2 Number of total equations: - 1 Number of slack variables: - 0 --------------------------------------- Degrees of freedom : 1 ---------------------------------------------- Steady State Optimization with APOPT Solver ---------------------------------------------- Iter: 1 I: 0 Tm: 0.00 NLPi: 2 Dpth: 0 Lvs: 0 Obj: 3.00E+01 Gap: 0.00E+00 Successful solution --------------------------------------------------- Solver : APOPT (v1.0) Solution time : 1.970000000437722E-002 sec Objective : 30.0000000000000 Successful solution --------------------------------------------------- [0.0] [30.0] 30.0 A few other notes: Use m.Var(integer=True) instead of m.sos1() when potential values are integers. The m.sos1() function is for discrete non-integer values. There is more information on the PWL function in the APMonitor documentation. Gekko is an interface to the APMonitor Modeling Language and writes gk0_model.apm as a text file in the run directory m._path (or open tmp folder with m.open_folder()). | 3 | 2 |
79,327,573 | 2025-1-3 | https://stackoverflow.com/questions/79327573/django-db-utils-operationalerror-no-such-column-home-student-schoolyear | models.py ''' class Person(models.Model): firstname = models.CharField(max_length=30) lastname = models.CharField(max_length=30) othernames = models.CharField(max_length=40) dateOfBirth = models.DateField() gender = models.CharField(max_length=20) birthGender = models.CharField(max_length=20) email = models.EmailField(max_length=100) class Student(Person): studentId = models.IntegerField() admissionDate = models.DateField() enrolmentStatus = models.BooleanField() studentExamCode = models.IntegerField() schoolYear = models.IntegerField() ''' forms.py ''' class studentF(ModelForm): class Meta: model = Student fields = ['firstname', 'lastname', 'othernames', 'dateOfBirth', 'gender', 'birthGender', 'email', 'studentId', 'admissionDate', 'enrolmentStatus', 'studentExamCode', 'schoolYear'] ''' I am getting the error in the title, how do I fix it? | Make sure you have done what @raphael said you should do. If the issue persists, run python manage.py makemigrations --merge and python manage.py migrate --fake. To reset the db: python manage.py migrate *yourapp* zero followed by python manage.py migrate | 1 | 1
79,327,540 | 2025-1-3 | https://stackoverflow.com/questions/79327540/how-to-reference-an-inner-class-or-attribute-before-it-is-fully-defined | I have a scenario where a class contains an inner class, and I want to reference that inner class (or its attributes) within the outer class. Hereβs a concrete example using Django: from django.db import models from django.utils.translation import gettext_lazy as _ class DummyModel(models.Model): class StatusChoices(models.TextChoices): ACTIVE = "active", _("Active") INACTIVE = "inactive", _("Inactive") status = models.CharField( max_length=15, choices=StatusChoices.choices, verbose_name=_("Status"), help_text=_("Current status of the model."), default=StatusChoices.ACTIVE, null=False, blank=False, ) class Meta: verbose_name = _("Dummy Model") verbose_name_plural = _("Dummy Models") constraints = [ models.CheckConstraint( name="%(app_label)s_%(class)s_status_valid", check=models.Q(status__in=[choice.value for choice in DummyModel.StatusChoices]), ) ] In this case, the constraints list in the Meta class tries to reference DummyModel.StatusChoices. However, at the time this reference is evaluated, DummyModel is not fully defined, leading to an error (neither StatusChoices is accessible in that line). I would like to solve this without significantly altering the structure of the codeβStatusChoices must remain defined inside DummyModel. How can I resolve this issue while keeping the inner class and its attributes accessible as intended? | You can probably do this by defining the choices outside the class first, because the Meta class is actually constructed even before the status is accessible: # π outside DummyModel class StatusChoices(models.TextChoices): ACTIVE = 'active', _('Active') INACTIVE = 'inactive', _('Inactive') class DummyModel(models.Model): status = models.CharField( max_length=15, choices=StatusChoices.choices, verbose_name=_('Status'), help_text=_('Current status of the model.'), default=StatusChoices.ACTIVE, null=False, blank=False, ) class Meta: verbose_name = _('Dummy Model') verbose_name_plural = _('Dummy Models') constraints = [ models.CheckConstraint( name='%(app_label)s_%(class)s_status_valid', check=models.Q( status__in=[choice.value for choice in StatusChoices] ), ) ] DummyModel.StatusChoices = StatusChoices For what it is worth, I made a small Django package named django-enforced-choices [GitHub] that can enforce choices at the database by just looking at the field with choices. | 1 | 1 |
79,322,646 | 2025-1-2 | https://stackoverflow.com/questions/79322646/problems-obtaining-an-intersected-linestring-using-geopandas | I am having an issue obtaining a linestring from an intersection with a polygon using GeoPandas. The linestring is self-intersecting, which is what is causing my issues. A line intersecting a polygon: Given the following code: import geopandas as gp from shapely.geometry import LineString, Polygon # Draw a polygon that is 100 x 100 units, starting at coordinates 0, 0 polygon = Polygon([(50, 0), (50, 100), (100, 100), (100, 0)]) # Convert the polygon to a geodataframe polygon = gp.GeoDataFrame(index=[0], crs='EPSG:4326', geometry=[polygon]) # Draw a horizontal line that starts at coordinates 50, 0 and is 200 units long line = LineString([(0, 50), (75, 50), (70, 35), (55, 40), (250, 50)]) # Convert the line to a geodataframe line = gp.GeoDataFrame(index=[0], crs='EPSG:4326', geometry=[line]) print(line) # Intersect the line with the polygon intersection = line.intersection(polygon) print(intersection) I have the following results: 0 LINESTRING (0.00000 50.00000, 75.00000 50.0000... 0 MULTILINESTRING ((50.00000 50.00000, 75.00000 ... After intersect, I am returned a multilinestring instead of a linestring. The line is being split by the polygon (desired) but is also being split into multiple lines where it self-intersects (not desired). I have tried to re-join the multiline with unary-union without success. The output remains a multilinestring. I'm unsure what else I can do to keep only the portion of the line contained within the polygon, as a single line. Any thoughts on how I might be able to accomplish this? | Because line_merge apparently doesn't reconstruct the single linestring when it is self-intersecting, the only option I see is to extract the coordinates and create a new linestring, which results in a single linestring. In the code sample I use some functions that are only available in shapely 2, so it needs a relatively up-to-date version of shapely. Code sample: import geopandas as gp import shapely from shapely.geometry import LineString, Polygon # Draw a polygon that is 100 x 100 units, starting at coordinates 0, 0 polygon = Polygon([(50, 0), (50, 100), (100, 100), (100, 0)]) # Convert the polygon to a geodataframe polygon = gp.GeoDataFrame(index=[0], crs='EPSG:4326', geometry=[polygon]) # Draw a horizontal line that starts at coordinates 50, 0 and is 200 units long line = LineString([(0, 50), (75, 50), (70, 35), (55, 40), (250, 50)]) # Convert the line to a geodataframe line = gp.GeoDataFrame(index=[0], crs='EPSG:4326', geometry=[line]) # print(line) # Intersect the line with the polygon intersection = line.intersection(polygon) print(f"{intersection=}") # Recreate the intersection line from its coordinates intersection_line = shapely.linestrings(shapely.get_coordinates(intersection)) print(f"{intersection_line=}") | 2 | 1 |
79,327,355 | 2025-1-3 | https://stackoverflow.com/questions/79327355/pandas-non-negative-integers-to-n-bits-binary-representation | I have a pandas Series containing strictly non-negative integers like so: 1 2 3 4 5 I want to convert them into n-bits binary representation based on the largest value. For example, the largest value here is 5, so we would have 3 bits/3 columns, and the resulting series would be something like this 0 0 1 0 1 0 0 1 1 1 0 0 1 0 1 Thanks a lot in advance! | If your values are less than 255, you could unpackbits: s = pd.Series([1, 2, 3, 4, 5]) N = int(np.log2(s.max())) powers = 2**np.arange(N, -1, -1) out = pd.DataFrame(np.unpackbits(s.to_numpy(np.uint8)[:, None], axis=1)[:, -N-1:], index=s.index, columns=powers) If your have larger numbers, compute a mask with & and an array of powers of 2: s = pd.Series([1, 2, 3, 4, 5]) powers = 2**np.arange(int(np.log2(s.max())), -1, -1) out = pd.DataFrame((s.to_numpy()[:, None] & powers).astype(bool).astype(int), index=s.index, columns=powers) Output: 4 2 1 0 0 0 1 1 0 1 0 2 0 1 1 3 1 0 0 4 1 0 1 | 2 | 1 |
79,325,274 | 2025-1-3 | https://stackoverflow.com/questions/79325274/how-to-prevent-type-alias-defined-in-a-stub-file-from-being-used-in-other-module | I'm working on a Python 3.13.1 project using mypy 1.14.0 for static type checking. I have a module named module.py with a function function that returns a type with a very long name, Type_whose_name_is_so_long_that_we_do_not_want_to_call_it_over_and_over_again. To make the code more readable, I've defined a type alias T in the corresponding stub file module.pyi . Here's a simplified version of my code module.pyi: T = Type_whose_name_is_so_long_that_we_do_not_want_to_call_it_over_and_over_again def function()->T: pass class Type_whose_name_is_so_long_that_we_do_not_want_to_call_it_over_and_over_again: pass I want to prevent following illegal_usage_of_T.py from using the T type alias. import module foo:module.T = module.function() Ideally, when I run mypy illegal_usage_of_T.py, I'd like to get an error message indicating that the type T is undefined. What I tried Google search I've google searched for "mypy type alias only used in stub file" but couldn't find a solution that prevents mypy from recognizing the type alias in other modules. I expected that defining T only in the stub file would limit its scope, but it seems that mypy is able to find the type alias even in other modules. fixing code I've tried several approaches, including: Renaming the type alias: I changed T to _T to make it less likely to be found by other modules, but this didn't resolve the issue. Using if TYPE_CHECKING: I tried conditionally defining the type alias within an if TYPE_CHECKING block, but this also didn't prevent the type alias from being used in other modules. Limiting exports: I added __all__ = ["function", "Type_whose_name_is_so_long_that_we_do_not_want_to_call_it_over_and_over_again"] to the module.pyi file to explicitly control what names are exported, but the type alias T was still accessible. Here's the modified code for module.pyi: from typing import TYPE_CHECKING if TYPE_CHECKING: _T = Type_whose_name_is_so_long_that_we_do_not_want_to_call_it_over_and_over_again def function()->_T: pass class Type_whose_name_is_so_long_that_we_do_not_want_to_call_it_over_and_over_again: pass __all__ = ["function", "Type_whose_name_is_so_long_that_we_do_not_want_to_call_it_over_and_over_again"] And here's the illegal_usage_of_T.py file: import module foo:module._T = module.function() # Still works I expected that at least one of these approaches would prevent illegal_usage_of_T.py from accessing the _T type alias, but none of them worked. | Apart from writing your own mypy plugin, there isn't really a way to do this. Prefixing items with an underscore is by far the overwhelmingly adopted convention to indicate names which aren't supposed to be exported (used outside of the module it is defined in); you can see this convention adopted in Python's own typeshed project. Some IDEs (like PyCharm or VSCode with pyright) in fact do show errors if you try to access underscore-prefixed items from a module, but this isn't part of mypy. Apart from just using pyright instead of mypy, the closest thing that exists is Ruff's import-private-name rule, but this doesn't activate unless you use the name in a runtime context (type annotations, like foo: module.T, don't count, and won't trigger the linting). As for the others: if TYPE_CHECKING - this has no effect in .pyi stub files. 
__all__ - this only has effect for star imports (from module import *) and names which would otherwise not be re-exported. Direct access to module attributes (like module.T or from module import T) is never prevented due to the lack of a name ("T") in __all__, if T was defined inside module. | 1 | 3 |
79,325,674 | 2025-1-3 | https://stackoverflow.com/questions/79325674/blackboxprotobuf-showing-positive-values-instead-of-negative-values-for-protobuf | I have an issue where blackboxprotobuf takes a protobuf response and returns a dictionary in which a few values that are supposed to be negative come out as positive. I am calling an API with lat (40.741895) & long (-73.989308). Using this lat & long, a key '81859706' is generated and used in the API; for the key generation we are using a paid framework. url = "https://gspe85-ssl.ls.apple.com/wifi_request_tile" response = requests.get(url, headers={ 'Accept': '*/*', 'Connection': 'keep-alive', 'X-tilekey': "81859706", 'User-Agent': 'geod/1 CFNetwork/1496.0.7 Darwin/23.5.0', 'Accept-Language': 'en-US,en-GB;q=0.9,en;q=0.8', 'X-os-version': '17.5.21F79' }) This returns protobuf as the response. I then use blackboxprotobuf to convert the protobuf to JSON: message, typedef = blackboxprotobuf.protobuf_to_json(response.content) json1_data = json.loads(message) Response: "2": [ { "4": { "2": { "1": 1, "2": 1 } }, "5": 124103876854927, "6": { "1": 407295068, "2": 3555038608 // This value should be negative } }, Any help with how to debug this response & fix this issue would be appreciated. Thank you | The bit value that comes for the number is >>> bin(3555038608) '0b11010011111001011001010110010000' If you take the 2's complement of that number, you will get the negative value that you want. >>> def twos_comp(val, bits): ... """compute the 2's complement of int value val""" ... if (val & (1 << (bits - 1))) != 0: # if sign bit is set e.g., 8bit: 128-255 ... val = val - (1 << bits) # compute negative value ... return val # return positive value as is ... >>> twos_comp(3555038608, 32) -739928688 If you know the bit length of the coordinates (32 in this case, since it's an int), you can convert it back using this. | 1 | 1
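Tying the answer back to the question's parsed output, the sketch below runs twos_comp over both raw coordinate fields; the nested key path and the 32-bit width are read off the single sample response above and may not hold for other tiles:

def twos_comp(val, bits):
    """Compute the two's complement of val for the given bit width."""
    if (val & (1 << (bits - 1))) != 0:   # sign bit set -> value is negative
        val = val - (1 << bits)
    return val

entry = json1_data["2"][0]["6"]          # lat/long pair from the sample response
lat_raw, lon_raw = entry["1"], entry["2"]
print(twos_comp(lat_raw, 32), twos_comp(lon_raw, 32))   # 407295068 -739928688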
79,325,219 | 2025-1-3 | https://stackoverflow.com/questions/79325219/how-can-i-scrape-event-links-and-contact-information-from-a-website-with-python | I am trying to scrape event links and contact information from the RaceRoster website (https://raceroster.com/search?q=5k&t=upcoming) using Python, requests, Pandas, and BeautifulSoup. The goal is to extract the Event Name, Event URL, Contact Name, and Email Address for each event and save the data into an Excel file so we can reach out to these events for business development purposes. However, the script consistently reports that no event links are found on the search results page, despite the links being visible when inspecting the HTML in the browser. Hereβs the relevant HTML for the event links from the search results page: <a href="https://raceroster.com/events/2025/98542/13th-annual-delaware-tech-chocolate-run-5k" target="_blank" rel="noopener noreferrer" class="search-results__card-event-name"> 13th Annual Delaware Tech Chocolate Run 5k </a> Steps Taken: Verified the correct selector for event links: soup.select("a.search-results__card-event-name") Checked the response content from the requests.get() call using soup.prettify(). The HTML appears to lack the event links that are visible in the browser, suggesting the content may be loaded dynamically via JavaScript. Attempted to scrape the data using BeautifulSoup but consistently get: Found 0 events on the page. Scraped 0 events. No contacts were scraped. What I Need Help With: How can I handle this JavaScript-loaded content? Is there a way to scrape it directly, or do I need to use a tool like Selenium? If Selenium is required, how do I properly integrate it with BeautifulSoup for parsing the rendered HTML? Current Script: import requests from bs4 import BeautifulSoup import pandas as pd def scrape_event_contacts(base_url, search_url): headers = { "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36" } event_contacts = [] # Fetch the main search page print(f"Scraping page: {search_url}") response = requests.get(search_url, headers=headers) if response.status_code != 200: print(f"Failed to fetch page: {search_url}, status code: {response.status_code}") return event_contacts soup = BeautifulSoup(response.content, "html.parser") # Select event links event_links = soup.select("a.search-results__card-event-name") print(f"Found {len(event_links)} events on the page.") for link in event_links: event_url = link['href'] event_name = link.text.strip() # Extract Event Name try: print(f"Scraping event: {event_url}") event_response = requests.get(event_url, headers=headers) if event_response.status_code != 200: print(f"Failed to fetch event page: {event_url}, status code: {event_response.status_code}") continue event_soup = BeautifulSoup(event_response.content, "html.parser") # Extract contact name and email contact_name = event_soup.find("dd", class_="event-details__contact-list-definition") email = event_soup.find("a", href=lambda href: href and "mailto:" in href) contact_name_text = contact_name.text.strip() if contact_name else "N/A" email_address = email['href'].split("mailto:")[1].split("?")[0] if email else "N/A" if contact_name or email: print(f"Found contact: {contact_name_text}, email: {email_address}") event_contacts.append({ "Event Name": event_name, "Event URL": event_url, "Event Contact": contact_name_text, "Email": email_address }) else: print(f"No contact information found for {event_url}") except 
Exception as e: print(f"Error scraping event {event_url}: {e}") print(f"Scraped {len(event_contacts)} events.") return event_contacts def save_to_spreadsheet(data, output_file): if not data: print("No data to save.") return df = pd.DataFrame(data) df.to_excel(output_file, index=False) print(f"Data saved to {output_file}") if __name__ == "__main__": base_url = "https://raceroster.com" search_url = "https://raceroster.com/search?q=5k&t=upcoming" output_file = "/Users/my_name/Documents/event_contacts.xlsx" contact_data = scrape_event_contacts(base_url, search_url) if contact_data: save_to_spreadsheet(contact_data, output_file) else: print("No contacts were scraped.") Expected Outcome: Extract all event links from the search results page. Navigate to each eventβs detail page. Scrape the contact name () and email () from the detail page. Save the results to an Excel file. | Use the API endpoint to get the data on upcoming events. Here's how: import requests from tabulate import tabulate import pandas as pd url = 'https://search.raceroster.com/search?q=5k&t=upcoming' headers = { 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36', } events = requests.get(url,headers=headers).json()['data'] loc_keys = ["address", "city", "country"] table = [ [ event["name"], event["url"], " ".join([event["location"][key] for key in loc_keys if key in event["location"]]) ] for event in events ] columns = ["Name", "URL", "Location"] print(tabulate(table, headers=columns)) df = pd.DataFrame(table, columns=columns) df.to_csv('5k_events.csv', index=False, header=True) This should print: Name URL Location ------------------------------------------- ------------------------------------------------------------------------------------------ ---------------------------------------------------------------------------------------------------------------------------- Credit Union Cherry Blossom https://raceroster.com/events/2025/72646/credit-union-cherry-blossom Washington, D.C. Washington United States Big Cork Wine Run 5k https://raceroster.com/events/2025/98998/big-cork-wine-run-5k Big Cork Vineyards, 4236 Main Street, Rohrersville, MD 21779, U.S. Rohrersville United States 3rd Annual #OptOutside Black Friday Fun Run https://raceroster.com/events/2025/98146/3rd-annual-number-optoutside-black-friday-fun-run Grain H2O, Summit Harbour Place, Bear, DE, USA Bear United States Ryan's Race 5K walk Run https://raceroster.com/events/2025/97852/ryans-race-5k-walk-run Odessa High School, Tony Marchio Drive, Townsend, DE Townsend United States 13th Annual Delaware Tech Chocolate Run 5k https://raceroster.com/events/2025/98542/13th-annual-delaware-tech-chocolate-run-5k Delaware Technical Community College - Charles L. Terry Jr. 
Campus - Dover, Campus Drive, Dover, DE, USA Dover United States Builders Dash 5k https://raceroster.com/events/2025/99146/builders-dash-5k Rail Haus - Beer Garden, North West Street, Dover, DE Dover United States The Ivy Scholarship 5k https://raceroster.com/events/2025/96874/the-ivy-scholarship-5k Hare Pavilion, River Place, Wilmington, DE Wilmington United States 39th Firecracker 5k Run Walk https://raceroster.com/events/2025/96907/39th-firecracker-5k-run-walk Rockford Tower, Lookout Drive, Wilmington, DE Wilmington United States 24th Annual John D Kelly Logan House 5k https://raceroster.com/events/2025/97364/24th-annual-john-d-kelly-logan-house-5k Kelly's Logan House, Delaware Avenue, Wilmington, DE, USA Wilmington United States 2nd Annual Scott Trot 5K https://raceroster.com/events/2025/96904/2nd-annual-scott-trot-5k American Legion Post 17, American Legion Road, Lewes, DE Lewes United States Bonus: To get more events data, just paginate the API with these parameters: l=10&p=1. For example, https://search.raceroster.com/search?q=5k&l=10&p=1&t=upcoming Also, note there's a field in meta -> hits that holds the number of found events. For your query that's 1465. | 3 | 3 |
79,325,561 | 2025-1-3 | https://stackoverflow.com/questions/79325561/replace-first-row-value-with-last-row-value | I'm trying to take the value from the last row of a df col and replace it with the first value. I'm returning a value error. import pandas as pd df = pd.DataFrame({'name': ['tom','jon','sam','jane','bob'], 'age': [24,25,18,26,17], 'Notification': [np.nan,'2025-01-03 14:19:35','2025-01-03 14:19:39','2025-01-03 14:19:41','2025-01-03 14:19:54'], 'sex':['male','male','male','female','male']}) df_test = df.copy() df_test['Notification'] = pd.to_datetime(df_test['Notification']) df_test['Notification'].iloc[0] = df_test['Notification'].tail(1) Error: ValueError: Could not convert object to NumPy datetime | You need to edit this line: df.loc[df.index[0], 'age'] = df.loc[df.index[-1], 'age'] Complete code: import pandas as pd import numpy as np df = pd.DataFrame({'name': ['tom', 'jon', 'sam', 'jane', 'bob'], 'age': [np.nan, 25, 18, 26, 17], 'sex': ['male', 'male', 'male', 'female', 'male']}) df.loc[df.index[0], 'age'] = df.loc[df.index[-1], 'age'] Cheers!!! | 2 | 2 |
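The answer's pattern carried over to the question's actual frame (the datetime Notification column) would look roughly like this; the key point is to assign a scalar taken with .iloc[-1] rather than the one-row Series that .tail(1) returns:

import numpy as np
import pandas as pd

df = pd.DataFrame({'name': ['tom', 'jon', 'sam', 'jane', 'bob'],
                   'age': [24, 25, 18, 26, 17],
                   'Notification': [np.nan, '2025-01-03 14:19:35', '2025-01-03 14:19:39',
                                    '2025-01-03 14:19:41', '2025-01-03 14:19:54'],
                   'sex': ['male', 'male', 'male', 'female', 'male']})

df_test = df.copy()
df_test['Notification'] = pd.to_datetime(df_test['Notification'])

# take the last value as a scalar Timestamp and write it into the first row
df_test.loc[df_test.index[0], 'Notification'] = df_test['Notification'].iloc[-1]
print(df_test)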
79,324,668 | 2025-1-2 | https://stackoverflow.com/questions/79324668/how-to-get-only-the-first-occurrence-of-each-increasing-value-in-numpy-array | While working on first-passage probabilities, I encountered this problem. I want to find a NumPythonic way (without explicit loops) to leave only the first occurrence of strictly increasing values in each row of a numpy array, while replacing repeated or non-increasing values with zeros. For instance, if arr = np.array([ [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 5, 5], [1, 1, 2, 2, 2, 3, 2, 2, 3, 3, 3, 4, 4], [3, 2, 1, 2, 1, 1, 2, 3, 4, 5, 4, 3, 2]]) I would like to get as output: out = np.array([ [1, 0, 0, 2, 0, 0, 3, 0, 0, 4, 0, 5, 0], [1, 0, 2, 0, 0, 3, 0, 0, 0, 0, 0, 4, 0], [3, 0, 0, 0, 0, 0, 0, 0, 4, 5, 0, 0, 0]]) | Maximum can be accumulated per-row: >>> arr array([[1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 5, 5], [1, 1, 2, 2, 2, 3, 2, 2, 3, 3, 3, 4, 4], [3, 2, 1, 2, 1, 1, 2, 3, 4, 5, 4, 3, 2]]) >>> np.maximum.accumulate(arr, axis=1) array([[1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 5, 5], [1, 1, 2, 2, 2, 3, 3, 3, 3, 3, 3, 4, 4], [3, 3, 3, 3, 3, 3, 3, 3, 4, 5, 5, 5, 5]]) Then you can easily mask out non-increasing values: >>> m_arr = np.maximum.accumulate(arr, axis=1) >>> np.where(np.diff(m_arr, axis=1, prepend=0), arr, 0) array([[1, 0, 0, 2, 0, 0, 3, 0, 0, 4, 0, 5, 0], [1, 0, 2, 0, 0, 3, 0, 0, 0, 0, 0, 4, 0], [3, 0, 0, 0, 0, 0, 0, 0, 4, 5, 0, 0, 0]]) | 2 | 3 |
79,324,524 | 2025-1-2 | https://stackoverflow.com/questions/79324524/attributeerror-with-instance-of-model-with-generic-foreign-field-in-created-by | I have 3 models that I am dealing with here: SurveyQuestion, Update, and Notification. I use a post_save signal to create an instance of the Notification model whenever an instance of SurveyQuestion or Update was created. The Notification model has a GenericForeignKey which goes to whichever model created it. Inside the Notification model I try to use the ForeignKey to set __str__ as the title field of the instance of the model that created it. Like so: class Notification(models.Model): source_object = models.ForeignKey(ContentType, on_delete=models.CASCADE) object_id = models.PositiveIntegerField() source = GenericForeignKey("source_object", "object_id") #more stuff def __str__(self): return f'{self.source.title} notification' I am able to create instances of SurveyQuestion and Update from the admin panel, which is then (supposed to be) creating an instance of Notification. However, when I query instances of Notification in the shell: from hotline.models import Notification notifications = Notification.objects.all() for notification in notifications: print (f"Notification object: {notification}") NoneType --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) 1 for notification in notifications: ----> 2 print (notification) File ~/ygzsey/hotline/models.py:27, in Notification.__str__(self) 26 def __str__(self): ---> 27 return f'{self.source.title} notification' AttributeError: 'NoneType' object has no attribute 'title' When I query instances of SurveyQuestion: from hotline.models import SurveyQuestion surveys = SurveyQuestion.objects.all() for survey in surveys: print (f"Model: {survey.__class__.__name__}") Model: SurveyQuestion When I query instances of Notification and try to print the class name of their ForeignKey field (I labled it source), I get this: for notification in notifications: print (f"Notification for {notification.source.__class__.__name__}") Notification for NoneType Notification for NoneType Notification for NoneType So it seems that the SurveyQuestion, Update, and Notification instances are saving properly, but there is some problem with the GenericForeignKey. I had the post_save create an instance of Notification using Notification(source_object=instance, start_date=instance.start_date, end_date=instance.end_date), but that would give me an error when trying to save an instance of SurveyQuestion or Update in the admin panel: ValueError at /admin/hotline/update/add/ Cannot assign "<Update: Update - ad>": "Notification.source_object" must be a "ContentType" instance. So I changed it to Notification(source_object=ContentType.objects.get_for_model(instance), start_date=instance.start_date, end_date=instance.end_date). 
My full models.py: from django.db import models from datetime import timedelta from django.utils import timezone from django.contrib.contenttypes.fields import GenericForeignKey, GenericRelation from django.contrib.contenttypes.models import ContentType from django.db.models.signals import post_save from django.dispatch import receiver def tmrw(): return timezone.now() + timedelta(days=1) class Notification(models.Model): source_object = models.ForeignKey(ContentType, on_delete=models.CASCADE) object_id = models.PositiveIntegerField() source = GenericForeignKey("source_object", "object_id") start_date = models.DateTimeField(default=timezone.now) end_date = models.DateTimeField(default=tmrw) class Meta: verbose_name = 'Notification' verbose_name_plural = f'{verbose_name}s' def __str__(self): return f'{self.source.title} notification' class Update(models.Model): title = models.CharField(max_length=25) update = models.TextField() start_date = models.DateTimeField(default=timezone.now) end_date = models.DateTimeField(default=tmrw) #notification = GenericRelation(Notification, related_query_name='notification') class Meta: verbose_name = 'Update' verbose_name_plural = f'{verbose_name}s' def __str__(self): return f'{self.__class__.__name__} - {self.title}' class SurveyQuestion(models.Model): title = models.CharField(max_length=25) question = models.TextField() start_date = models.DateTimeField(default=timezone.now) end_date = models.DateTimeField(default=tmrw) #notification = GenericRelation(Notification, related_query_name='notification') class Meta: verbose_name = 'Survey' verbose_name_plural = f'{verbose_name}s' def __str__(self): return f'{self.__class__.__name__} - {self.title}' class SurveyOption(models.Model): survey = models.ForeignKey(SurveyQuestion, on_delete=models.CASCADE, related_name='options') option = models.TextField() id = models.AutoField(primary_key=True) class Meta: verbose_name = 'Survey option' verbose_name_plural = f'{verbose_name}s' def __str__(self): return f'{self.survey.title} option #{self.id}' @receiver(post_save) def create_notification(instance, **kwargs): #""" print (f"instance: {instance}") print (f"instance.__class__: {instance.__class__}") print (f"instance.__class__.__name__: {instance.__class__.__name__}") #""" senders = ['SurveyQuestion', 'Update'] if instance.__class__.__name__ in senders: notification = Notification(source_object=ContentType.objects.get_for_model(instance), start_date=instance.start_date, end_date=instance.end_date) notification.save() post_save.connect(create_notification) | You should use source, not source_object: Notification( source=instance, start_date=instance.start_date, end_date=instance.end_date ) A GenericForeignKey essentially combines two columns, the source_object (very bad name) that points to the type of the item the GenericForeignKey refers to, and a column that stores the primary key (or another unique column) of that object. | 2 | 1 |
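Folding the answer's correction into the question's signal handler gives something like the sketch below (it slots into the question's models.py, where the imports already exist). The created guard is my own addition, not in the original code, to avoid creating a second Notification every time an object is re-saved:

@receiver(post_save)
def create_notification(instance, created=False, **kwargs):
    senders = ['SurveyQuestion', 'Update']
    if created and instance.__class__.__name__ in senders:
        notification = Notification(
            source=instance,              # the GenericForeignKey field, not source_object
            start_date=instance.start_date,
            end_date=instance.end_date,
        )
        notification.save()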
79,322,581 | 2025-1-2 | https://stackoverflow.com/questions/79322581/is-this-a-false-positive-override-error-signature-of-method-incompatible-w | Although the method signature in Sub is compatible with Super, mypy rejects the override: Signature of "method" incompatible with supertype "Super".

I'm using:
python 3.13.1
mypy 1.14.0

First, I made the following test.pyi:

from typing import overload

class Super:
    def method(self, arg:Other|Super)->Super: pass

class Sub(Super):
    @overload
    def method(self, arg:Other|Sub)->Sub: pass
    @overload
    def method(self, arg:Super)->Super: pass

class Other: pass

Then, when I ran mypy test.pyi on the command line, mypy produced the following diagnostic:

test.pyi:7: error: Signature of "method" incompatible with supertype "Super"  [override]
test.pyi:7: note:      Superclass:
test.pyi:7: note:          def method(self, arg: Other | Super) -> Super
test.pyi:7: note:      Subclass:
test.pyi:7: note:          @overload
test.pyi:7: note:          def method(self, arg: Other | Sub) -> Sub
test.pyi:7: note:          @overload
test.pyi:7: note:          def method(self, arg: Super) -> Super
Found 1 error in 1 file (checked 1 source file)

I checked the types of both Super.method's and Sub.method's inputs and outputs, and found no pattern that violates the LSP (Liskov Substitution Principle). The overloaded Sub.method can accept an arg of type Other|Super (= Other|Sub + Super) and returns type Super (= Sub + Super). That input and output type matches the signature of Super.method, so I have no idea why the signature should be "incompatible with supertype Super".

The following is the I/O table of method:

Input        Super.method   Sub.method   Compared to Super, Sub's return is:   Adhering to LSP*
Other        Super          Sub          narrower                              Yes
Sub          Super          Sub          narrower                              Yes
Super        Super          Super        the same                              Yes
Other|Sub    Super          Sub          narrower                              Yes
Other|Super  Super          Super**      the same                              Yes

*The LSP requires that the return type of a sub method be narrower than or equal to the return type of the super method.
**Sub.method returns Sub for an Other input and Super for a Super input, so it returns Sub|Super for an Other|Super input; Sub|Super means Super.

As you can see from the table above, there are no patterns that violate the LSP. So I think the mypy error message Signature of "method" incompatible with supertype "Super" is incorrect.

Is my code wrong? Also, if my code is not wrong and mypy's error message is wrong, where can I ask?

P.S. A quick way to hide the error.

Although it's not a complete solution, I found a simple way to hide the error:

from typing import overload

class Super:
    def method(self, arg:Other|Super)->Super: pass

class Sub(Super):
    @overload
    def method(self, arg:Other|Sub)->Sub: pass
    @overload
    def method(self, arg:Super)->Super: pass
    # Start of hiding error
    @overload
    def method(  # type: ignore[overload-cannot-match]
        self, arg:Other|Super)->Super: pass
    # End of hiding error

class Other: pass

As a last overload, I added a Sub.method with the exact same signature as Super.method. However, once this issue is resolved in a future version, the overload I added will no longer be matchable and we should then get an [overload-cannot-match] error. Therefore, I added # type: ignore[overload-cannot-match] to ignore this error in advance. (At first glance, it may seem like I am simply silencing errors with type: ignore, but that comment has no effect on mypy as of now; it is merely a deterrent against a future error.)
| As pointed out in a pyright ticket about this, the typing spec mentions this behaviour explicitly:

    If a callable B is overloaded with two or more signatures, it is assignable to callable A if at least one of the overloaded signatures in B is assignable to A

There's a mypy ticket asking about the same problem. However, your code is in fact safe, and the table in your post proves that. That's just a limitation of the specification and/or typecheckers.

As discussed in the comments, it may seem like the problem is the union type itself (Other | Super can't be dispatched to either of the overloads), but that isn't true: mypy uses union math in that case, and the return type of an overloaded call is the union of the return types of the matched overloads if all union members can be dispatched to one of them. Here's the source where this magic happens; read the comments there if you're interested - the main checker path is documented well.

Now, given that your code doesn't typecheck only due to a typechecker/spec issue, you have several options:

1. # type: ignore[override] - why not? Your override is safe, just tell mypy to STFU.
2. Add a signature to match the spec requirements, as in your last paragraph. But please don't add an unused ignore comment - there's a --warn-unused-ignores flag for mypy which is really useful. Don't add such an ignore; just add a free-text comment explaining the problem and linking here or to the mypy issue.
3. Just extend the second signature. That's safe - overloads are tried in order, and the first match wins (well, not exactly, but in simple cases without *args/**kwargs and ParamSpec that's true):

class Sub(Super):
    @overload
    def method(self, arg: Other | Sub) -> Sub: ...
    @overload
    def method(self, arg: Other | Super) -> Super: ...

But I'd just go with an ignore comment and explain the problem:

class Sub(Super):
    # The override is safe, but doesn't conform to the spec.
    # https://github.com/python/mypy/issues/12379
    @overload  # type: ignore[override]
    def method(self, arg: Other | Sub) -> Sub: ...
    @overload
    def method(self, arg: Super) -> Super: ...

| 4 | 4
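A small usage sketch, not from the answer itself, assuming Super, Sub and Other are the classes from option 3 above (the broadened second overload) and are importable; the reveal_type calls only illustrate the inferred types one would expect mypy to report:

from typing import reveal_type

def demo(sub: Sub, narrow: Other | Sub, mixed: Other | Super) -> None:
    # Matches the first overload, so the narrower return type is kept.
    reveal_type(sub.method(narrow))  # expected: Sub
    # The whole union is accepted by the widened second overload.
    reveal_type(sub.method(mixed))   # expected: Super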
79,323,799 | 2025-1-2 | https://stackoverflow.com/questions/79323799/how-to-convert-matrix-to-block-matrix-using-numpy | Say I have a matrix like

Matrix = [[A11, A12, A13, A14],
          [A21, A22, A23, A24],
          [A31, A32, A33, A34],
          [A41, A42, A43, A44]],

and suppose I want to convert it to a block matrix

[[A, B],
 [C, D]],

where

A = [[A11, A12],
     [A21, A22]]
B = [[A13, A14],
     [A23, A24]]
C = [[A31, A32],
     [A41, A42]]
D = [[A33, A34],
     [A43, A44]].

What do I need to type to quickly extract the matrices A, B, C, and D?
| Without using loops, you can reshape your array (and reorder the dimensions with moveaxis):

A, B, C, D = np.moveaxis(Matrix.reshape((2,2,2,2)), 1, 2).reshape(-1, 2, 2)

Or:

(A, B), (C, D) = np.moveaxis(Matrix.reshape((2,2,2,2)), 1, 2)

For a generic answer on an arbitrary shape:

x, y = Matrix.shape
(A, B), (C, D) = np.moveaxis(Matrix.reshape((2, x//2, 2, y//2)), 1, 2)

Output:

# A
array([['A11', 'A12'],
       ['A21', 'A22']], dtype='<U3')
# B
array([['A13', 'A14'],
       ['A23', 'A24']], dtype='<U3')
# C
array([['A31', 'A32'],
       ['A41', 'A42']], dtype='<U3')
# D
array([['A33', 'A34'],
       ['A43', 'A44']], dtype='<U3')

| 2 | 3
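A quick, self-contained check of the moveaxis approach (a sketch; the string entries merely stand in for the A11 ... A44 placeholders in the question):

import numpy as np

# Build the 4x4 example matrix from the question as strings.
Matrix = np.array([[f"A{i}{j}" for j in range(1, 5)] for i in range(1, 5)])

x, y = Matrix.shape
(A, B), (C, D) = np.moveaxis(Matrix.reshape((2, x // 2, 2, y // 2)), 1, 2)

# The top-left block holds the first two rows and columns,
# the bottom-right block holds the last two of each.
assert (A == Matrix[:2, :2]).all()
assert (D == Matrix[2:, 2:]).all()
print(A)  # [['A11' 'A12']
          #  ['A21' 'A22']]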
79,323,215 | 2025-1-2 | https://stackoverflow.com/questions/79323215/how-to-combine-columns-with-extra-strings-into-a-concatenated-string-column-in-p | I am trying to add another column that combines two columns (Total & percentage) into a result column (labels_value) that looks like: (Total) percentage%. Basically, I want to wrap brackets around the Total column and add a % string at the end of the combination of these two columns.

df

import polars as pl

so_df = pl.DataFrame(
    [{'Flag': 'Outof Range', 'Category': 'Thyroid', 'len': 7, 'Total': 21, 'percentage': 33.33, 'value': 33.33},
     {'Flag': 'Outof Range', 'Category': 'Inflammatory Marker', 'len': 2, 'Total': 8, 'percentage': 25.0, 'value': 25.0},
     {'Flag': 'Outof Range', 'Category': 'Lipid', 'len': 12, 'Total': 63, 'percentage': 19.05, 'value': 19.05},
     {'Flag': 'Outof Range', 'Category': 'LFT', 'len': 14, 'Total': 87, 'percentage': 16.09, 'value': 16.09},
     {'Flag': 'Outof Range', 'Category': 'DLC', 'len': 11, 'Total': 126, 'percentage': 8.73, 'value': 8.73},
     {'Flag': 'Outof Range', 'Category': 'Vitamin', 'len': 1, 'Total': 14, 'percentage': 7.14, 'value': 7.14},
     {'Flag': 'Outof Range', 'Category': 'CBC', 'len': 2, 'Total': 45, 'percentage': 4.44, 'value': 4.44},
     {'Flag': 'Outof Range', 'Category': 'KFT', 'len': 2, 'Total': 56, 'percentage': 3.57, 'value': 3.57},
     {'Flag': 'Outof Range', 'Category': 'Urine Examination', 'len': 1, 'Total': 28, 'percentage': 3.57, 'value': 3.57},
     {'Flag': 'Within Range', 'Category': 'Thyroid', 'len': 14, 'Total': 21, 'percentage': 66.67, 'value': -66.67},
     {'Flag': 'Within Range', 'Category': 'Inflammatory Marker', 'len': 6, 'Total': 8, 'percentage': 75.0, 'value': -75.0},
     {'Flag': 'Within Range', 'Category': 'Lipid', 'len': 51, 'Total': 63, 'percentage': 80.95, 'value': -80.95},
     {'Flag': 'Within Range', 'Category': 'LFT', 'len': 73, 'Total': 87, 'percentage': 83.91, 'value': -83.91},
     {'Flag': 'Within Range', 'Category': 'DLC', 'len': 115, 'Total': 126, 'percentage': 91.27, 'value': -91.27},
     {'Flag': 'Within Range', 'Category': 'Vitamin', 'len': 13, 'Total': 14, 'percentage': 92.86, 'value': -92.86},
     {'Flag': 'Within Range', 'Category': 'CBC', 'len': 43, 'Total': 45, 'percentage': 95.56, 'value': -95.56},
     {'Flag': 'Within Range', 'Category': 'KFT', 'len': 54, 'Total': 56, 'percentage': 96.43, 'value': -96.43},
     {'Flag': 'Within Range', 'Category': 'Urine Examination', 'len': 27, 'Total': 28, 'percentage': 96.43, 'value': -96.43},
     {'Flag': 'Within Range', 'Category': 'Anemia', 'len': 38, 'Total': 38, 'percentage': 100.0, 'value': -100.0},
     {'Flag': 'Within Range', 'Category': 'Diabetes', 'len': 22, 'Total': 22, 'percentage': 100.0, 'value': -100.0},
     {'Flag': 'Within Range', 'Category': 'Electrolyte', 'len': 46, 'Total': 46, 'percentage': 100.0, 'value': -100.0}]
)

shape: (21, 6)
Flag            Category               len   Total   percentage   value
str             str                    i64   i64     f64          f64
"Outof Range"   "Thyroid"              7     21      33.33        33.33
"Outof Range"   "Inflammatory Marker"  2     8       25.0         25.0
"Outof Range"   "Lipid"                12    63      19.05        19.05
"Outof Range"   "LFT"                  14    87      16.09        16.09
"Outof Range"   "DLC"                  11    126     8.73         8.73
…               …                      …     …       …            …
"Within Range"  "KFT"                  54    56      96.43        -96.43
"Within Range"  "Urine Examination"    27    28      96.43        -96.43
"Within Range"  "Anemia"               38    38      100.0        -100.0
"Within Range"  "Diabetes"             22    22      100.0        -100.0
"Within Range"  "Electrolyte"          46    46      100.0        -100.0

I have tried the three ways below and none of them worked:

(so_df
 # .with_columns(labels_value = "("+str(pl.col("Total"))+") "+str(pl.col("percentage"))+"%")
 # .with_columns(labels_value = "".join(["(",str(pl.col("Total")),") ",str(pl.col("percentage")),"%"]))
 # .with_columns(labels_value =pl.concat_str([pl.col("Total"),pl.col("percentage")]))
)

Desired Result would be like:

(21) 33.33%   # 1st row of labels_value column
(8) 25%
(63) 19.05%
.
.

| Here are a few options:

so_df.with_columns(
    labels_value_1=pl.format("({}) {}%", "Total", "percentage"),
    labels_value_2=pl.concat_str(
        pl.lit("("), "Total", pl.lit(") "), "percentage", pl.lit("%")
    ),
    labels_value_3=(
        "("
        + pl.col("Total").cast(pl.String)
        + ") "
        + pl.col("percentage").cast(pl.String)
        + "%"
    ),
)

# Only Total, percentage and outputs shown for brevity
# ┌───────┬────────────┬────────────────┬────────────────┬────────────────┐
# │ Total ┆ percentage ┆ labels_value_1 ┆ labels_value_2 ┆ labels_value_3 │
# │ ---   ┆ ---        ┆ ---            ┆ ---            ┆ ---            │
# │ i64   ┆ f64        ┆ str            ┆ str            ┆ str            │
# ╞═══════╪════════════╪════════════════╪════════════════╪════════════════╡
# │ 21    ┆ 33.33      ┆ (21) 33.33%    ┆ (21) 33.33%    ┆ (21) 33.33%    │
# │ 8     ┆ 25.0       ┆ (8) 25.0%      ┆ (8) 25.0%      ┆ (8) 25.0%      │
# │ 63    ┆ 19.05      ┆ (63) 19.05%    ┆ (63) 19.05%    ┆ (63) 19.05%    │
# │ 87    ┆ 16.09      ┆ (87) 16.09%    ┆ (87) 16.09%    ┆ (87) 16.09%    │
# │ 126   ┆ 8.73       ┆ (126) 8.73%    ┆ (126) 8.73%    ┆ (126) 8.73%    │
# │ …     ┆ …          ┆ …              ┆ …              ┆ …              │
# │ 56    ┆ 96.43      ┆ (56) 96.43%    ┆ (56) 96.43%    ┆ (56) 96.43%    │
# │ 28    ┆ 96.43      ┆ (28) 96.43%    ┆ (28) 96.43%    ┆ (28) 96.43%    │
# │ 38    ┆ 100.0      ┆ (38) 100.0%    ┆ (38) 100.0%    ┆ (38) 100.0%    │
# │ 22    ┆ 100.0      ┆ (22) 100.0%    ┆ (22) 100.0%    ┆ (22) 100.0%    │
# │ 46    ┆ 100.0      ┆ (46) 100.0%    ┆ (46) 100.0%    ┆ (46) 100.0%    │
# └───────┴────────────┴────────────────┴────────────────┴────────────────┘

pl.format I would say is the most idiomatic option, and it does the casting to string for you (much like Python's f-strings). It uses {} as a placeholder. This is what I'd personally recommend out of the options presented.

For concat_str, you can add the parentheses and "%" with pl.lit (strings are parsed as column names in this function). It will cast numeric columns to string for you.

Lastly, string concatenation with the + operator does work, but you must cast the column(s) with Polars' .cast method, rather than Python's builtin str.

One thing to note is that row 2 of the output does differ slightly from your desired output: (8) 25% (expected) vs (8) 25.0% (actual). This is because the string representation of floating point columns in Polars contains at least one decimal place. If that is an issue, drop a comment and I will try to come up with a solution.

| 2 | 3