Tasks: Image Classification
Modalities: Image
Formats: imagefolder
Size: 10K - 100K
Tags: steganography
License: MIT
Dataset Card: BOSS-Based Cropped Steganography Dataset
Dataset Overview
- Name: BOSS-Based Cropped Steganography Dataset (working title)
- Description: This dataset is a prepared and cropped version of the BOSSbase v1.01 dataset, commonly used in steganography and steganalysis research. Each original 512×512 grayscale image is split into four non-overlapping 256×256 patches to support deep learning experiments with smaller, manageable image sizes. The dataset supports binary classification tasks such as cover vs. stego detection.
- Purpose: Designed to support reproducible research in image steganalysis.
- Supported Tasks:
- Binary classification (cover vs. stego)
- Steganalysis model evaluation
- Author: Italo Amaya
- License: MIT License
- Intended Use: Academic research only
Dataset Structure
- Total Images:
- 20,000 cover images
- 20,000 WOW stego images
- 20,000 S-UNIWARD stego images (Each stego set is embedded from the same 20,000 cover images, using different algorithms and randomized keys.)
- Image Format:
.png
- Image Size: 256×256 pixels
- Color: Grayscale (1 channel)
- Directory Layout:
```
cropdataset/
├── cover/
└── stego/
    ├── S-UNIWARD/
    │   └── 0.4bpp/
    │       └── stego/
    └── WOW/
        └── 0.4bpp/
            └── stego/
```
- Labels: Not explicitly stored; labels should be inferred during data loading (cover = 0, stego = 1). A minimal loading sketch is shown after this list.
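A minimal loading sketch, assuming the directory layout above and a PyTorch workflow; the class name, default paths, and keyword arguments are illustrative choices, not part of the dataset:

```python
import os
from glob import glob

from PIL import Image
from torch.utils.data import Dataset


class CropStegoDataset(Dataset):
    """Cover/stego images from the cropdataset/ layout, with labels
    inferred from the directory each PNG lives in (cover = 0, stego = 1)."""

    def __init__(self, root="cropdataset", stego_algorithm="WOW", transform=None):
        cover_paths = glob(os.path.join(root, "cover", "*.png"))
        stego_paths = glob(
            os.path.join(root, "stego", stego_algorithm, "0.4bpp", "stego", "*.png")
        )
        # Label is inferred from the path: cover = 0, stego = 1.
        self.samples = [(p, 0) for p in cover_paths] + [(p, 1) for p in stego_paths]
        self.transform = transform

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        path, label = self.samples[idx]
        img = Image.open(path).convert("L")  # grayscale, 1 channel
        if self.transform is not None:
            img = self.transform(img)
        return img, label


# Example: CropStegoDataset(root="cropdataset", stego_algorithm="S-UNIWARD")
```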
Data Generation Process
- Original Dataset: BOSSbase v1.01
- Cropping Script (Python):
```python
from PIL import Image
import os
from tqdm import tqdm

def crop_images(input_folder, output_folder, patch_size=(256, 256)):
    os.makedirs(output_folder, exist_ok=True)
    for filename in tqdm(os.listdir(input_folder), desc="Cropping"):
        try:
            input_path = os.path.join(input_folder, filename)
            if os.path.isfile(input_path):
                img = Image.open(input_path)
                if img.size == (512, 512):
                    w, h = patch_size
                    patches = {
                        "top_left": img.crop((0, 0, w, h)),
                        "top_right": img.crop((512 - w, 0, 512, h)),
                        "bottom_left": img.crop((0, 512 - h, w, 512)),
                        "bottom_right": img.crop((512 - w, 512 - h, 512, 512)),
                    }
                    for key, patch in patches.items():
                        patch.save(os.path.join(output_folder, f"{os.path.splitext(filename)[0]}_{key}.png"))
        except Exception as e:
            print(f"Error processing {filename}: {e}")
```
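A hypothetical invocation of the script above; the paths are illustrative and should point at the original BOSSbase v1.01 images and the desired output directory:

```python
# Hypothetical paths; adjust to the local BOSSbase v1.01 layout.
crop_images("BOSSbase_1.01", "cropdataset/cover")
```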
- Embedding Tools: WOW and S-UNIWARD embedding implementations. Embedding was performed with randomized keys, so this is a random-key dataset.
- Embedding Parameters:
- Randomized keys per image
- 0.4 bits per pixel (bpp)
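The embedding code itself is not distributed with this dataset. The sketch below only illustrates the parameterization described above (0.4 bpp, one randomized key per image); `simulate_embedding` is a hypothetical placeholder for whichever WOW or S-UNIWARD simulator was actually used, and the paths are illustrative:

```python
import os
import random

import numpy as np
from PIL import Image

PAYLOAD_BPP = 0.4  # embedding rate used for both WOW and S-UNIWARD


def embed_folder(cover_dir, stego_dir, simulate_embedding):
    """Embed every cover image with a fresh random key, mirroring the
    random-key setup above. `simulate_embedding` is a hypothetical stand-in
    for an actual WOW or S-UNIWARD embedding simulator."""
    os.makedirs(stego_dir, exist_ok=True)
    for name in sorted(os.listdir(cover_dir)):
        cover = np.array(Image.open(os.path.join(cover_dir, name)))
        seed = random.randrange(2**31)  # randomized key per image
        stego = simulate_embedding(cover, payload_bpp=PAYLOAD_BPP, seed=seed)
        Image.fromarray(stego.astype(np.uint8)).save(os.path.join(stego_dir, name))
```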
Preprocessing & Transformations
- Cropping: Non-overlapping 256×256 patches from 512×512 originals
- Augmentation: None
- Normalization: None — images retain original pixel value ranges
Citation and Attribution
- GitHub Repository: [Link to be added once public]
- Associated Paper: [Placeholder for paper title or DOI]

Please cite this work when using the dataset.