---
url: 'https://ashim-hq.github.io/ashim/api/ai.md'
---

# AI Engine Reference

The `@ashim/ai` package bridges Node.js to a **persistent Python sidecar** for all ML operations. The dispatcher process stays alive between requests for fast warm-start performance. GPU is auto-detected at startup and used when available. The package exposes 13 AI tool routes. All models run locally - no internet required after the initial model download.

## Architecture

```
Node.js Tool Route
        │
        ▼
@ashim/ai bridge.ts
        │ (stdin/stdout JSON + stderr progress events)
        ▼
Python dispatcher (persistent process)
        │
        ├─ remove_bg.py       (rembg / BiRefNet)
        ├─ upscale.py         (RealESRGAN)
        ├─ inpaint.py         (LaMa ONNX)
        ├─ ocr.py             (PaddleOCR / Tesseract)
        ├─ detect_faces.py    (MediaPipe)
        ├─ face_landmarks.py  (MediaPipe landmarks)
        ├─ enhance_faces.py   (GFPGAN / CodeFormer)
        ├─ colorize.py        (DDColor)
        ├─ noise_removal.py   (tiered denoising)
        ├─ red_eye_removal.py (landmark + color analysis)
        ├─ restore.py         (scratch repair + enhancement + denoising)
        └─ seam_carving       (Go caire binary - not Python)
```

**Timeouts:** 300 s default; OCR and BiRefNet background removal get 600 s.
## Background Removal

**Function:** `removeBackground`\
**Tool route:** `remove-background`\
**Model:** rembg with BiRefNet (default) or U2-Net variants

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `model` | string | `birefnet-general` | Model variant - see table below |
| `alphaMattingForeground` | number (1–255) | 240 | Foreground threshold for alpha matting |
| `alphaMattingBackground` | number (1–255) | 10 | Background threshold for alpha matting |
| `returnMask` | boolean | false | Return the mask instead of the cutout |
| `backgroundColor` | string | - | Fill removed area (hex color or "transparent") |

**Available models:**

| Model ID | Best for |
|----------|----------|
| `birefnet-general` | General purpose (default) |
| `birefnet-portrait` | People / portraits |
| `birefnet-dis` | Dichotomous Image Segmentation |
| `birefnet-hrsod` | High-resolution salient objects |
| `birefnet-cod` | Camouflaged objects |
| `u2net` | Fast general purpose |
| `u2net_human_seg` | Human segmentation |
| `isnet-general-use` | High quality general |

## Image Upscaling

**Function:** `upscale`\
**Tool route:** `upscale`\
**Model:** RealESRGAN (with Lanczos fallback on CPU-constrained systems)

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `scale` | `2` \| `4` | `4` | Upscale factor |
| `model` | string | `realesrgan-x4plus` | Model variant |
| `faceEnhance` | boolean | false | Apply GFPGAN face enhancement pass |
| `denoise` | number (0–1) | 0.5 | Denoising strength |
| `format` | string | - | Output format override |
| `quality` | number | 95 | Output quality (for JPEG/WebP) |

## OCR / Text Extraction

**Function:** `extractText`\
**Tool route:** `ocr`\
**Models:** Tesseract (fast), PaddleOCR PP-OCRv5 (balanced), PaddleOCR-VL 1.5 (best)

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `quality` | `fast` \| `balanced` \| `best` | `balanced` | Processing tier |
| `language` | string | `en` | Language code (ISO 639-1) |
| `enhance` | boolean | false | Pre-process image to improve OCR accuracy |

Returns structured results with bounding boxes, confidence scores, and extracted text blocks.

## Face / PII Blur

**Function:** `blurFaces`\
**Tool route:** `blur-faces`\
**Model:** MediaPipe face detection

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `blurRadius` | number | 30 | Gaussian blur radius |
| `sensitivity` | number (0–1) | 0.5 | Detection confidence threshold |

## Face Enhancement

**Function:** `enhanceFaces`\
**Tool route:** `enhance-faces`\
**Models:** GFPGAN, CodeFormer

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `model` | `gfpgan` \| `codeformer` | `gfpgan` | Enhancement model |
| `strength` | number (0–1) | 0.7 | Enhancement strength |
| `sensitivity` | number (0–1) | 0.5 | Face detection threshold |
| `centerFace` | boolean | false | Focus enhancement on center face only |

## AI Colorization

**Function:** `colorize`\
**Tool route:** `colorize`\
**Model:** DDColor (with OpenCV DNN fallback)

Converts black-and-white or grayscale photos to full color.

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `intensity` | number (0–1) | 0.85 | Color saturation strength |
| `model` | string | `ddcolor` | Model variant |

## Noise Removal

**Function:** `noiseRemoval`\
**Tool route:** `noise-removal`

Three-tier denoising pipeline (fast: OpenCV bilateral filter; balanced: frequency-domain; best: deep learning model).
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `quality` | `fast` \| `balanced` \| `best` | `balanced` | Processing tier |
| `strength` | number (0–1) | 0.5 | Denoising strength |
| `preserveDetail` | boolean | true | Edge-preserving mode |
| `colorNoise` | boolean | false | Target color noise specifically |

## Red Eye Removal

**Function:** `removeRedEye`\
**Tool route:** `red-eye-removal`

Detects face landmarks, locates eye regions, and corrects red-channel oversaturation.

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `sensitivity` | number (0–1) | 0.5 | Red pixel detection threshold |
| `strength` | number (0–1) | 0.9 | Correction strength |

## Photo Restoration

**Function:** `restorePhoto`\
**Tool route:** `restore-photo`

Multi-step pipeline for old or damaged photos: scratch/tear detection and repair → face enhancement → denoising → optional colorization.

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `mode` | `auto` \| `light` \| `heavy` | `auto` | Restoration intensity |
| `scratchRemoval` | boolean | true | Detect and repair scratches, tears |
| `faceEnhancement` | boolean | true | Apply face enhancement pass |
| `fidelity` | number (0–1) | 0.7 | Face enhancement strength |
| `denoise` | boolean | true | Apply denoising pass |
| `denoiseStrength` | number (0–100) | 40 | Denoising strength |
| `colorize` | boolean | false | Colorize after restoration |

## Passport Photo

**Function:** Uses `detectFaceLandmarks` + `removeBackground`\
**Tool route:** `passport-photo`\
**Model:** MediaPipe face landmarks

Generates government-compliant ID photos. Supports **37 countries** across 6 regions (Americas, Europe, Asia, Africa, Oceania, Middle East). Each spec includes physical dimensions, DPI, head-height ratio, eye-line position, and background color requirements.
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `country` | string | `us` | ISO country code (see list in UI) |
| `printLayout` | `4x6` \| `A4` \| `none` | `none` | Output as print sheet or standalone |
| `backgroundColor` | string | country default | Background fill color |

## Object Erasing (Inpainting)

**Function:** `inpaint`\
**Tool route:** `erase-object`\
**Model:** LaMa via ONNX Runtime

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `maskData` | string | Yes | Base64-encoded PNG mask (white = erase) |
| `maskThreshold` | number (0–255) | No | Threshold for mask binarization |

GPU-accelerated when an NVIDIA GPU is available.

## Smart Crop

**Function:** Uses MediaPipe + Sharp attention/entropy\
**Tool route:** `smart-crop`\
**Model:** MediaPipe face detection

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `mode` | `subject` \| `face` \| `trim` | `subject` | Crop strategy |
| `width` | number | - | Output width |
| `height` | number | - | Output height |
| `facePreset` | string | - | Preset framing when `mode=face` |

**Face presets:**

| Preset | Head ratio | Best for |
|--------|------------|----------|
| `close-up` | 1.8× face | Headshots |
| `head-and-shoulders` | 2.8× face | Profile photos |
| `upper-body` | 4.5× face | LinkedIn / formal |
| `half-body` | 7.0× face | Full upper body |

## Content-Aware Resize (Seam Carving)

**Function:** `seamCarve`\
**Tool route:** `content-aware-resize`\
**Engine:** Go `caire` binary (not Python - no GPU benefit)

Intelligently resizes images by removing or adding low-energy seams, preserving important content.
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `width` | number | - | Target width |
| `height` | number | - | Target height |
| `protectFaces` | boolean | true | Protect detected face regions from seam removal |
| `blurRadius` | number | 0 | Pre-blur to reduce noise sensitivity |
| `sobelThreshold` | number | 10 | Edge sensitivity threshold |
| `square` | boolean | false | Force square output |

Max input edge before auto-downscaling: **1200 px**.

---

---
url: 'https://ashim-hq.github.io/ashim/guide/architecture.md'
---

# Architecture

ashim is a monorepo managed with pnpm workspaces and Turborepo. Everything ships as a single Docker container.

## Project structure

```
ashim/
├── apps/
│   ├── api/          # Fastify backend
│   ├── web/          # React + Vite frontend
│   └── docs/         # This VitePress site
├── packages/
│   ├── image-engine/ # Sharp-based image operations
│   ├── ai/           # Python AI model bridge
│   └── shared/       # Types, constants, i18n
└── docker/           # Dockerfile and Compose config
```

## Packages

### `@ashim/image-engine`

The core image processing library built on [Sharp](https://sharp.pixelplumbing.com/). It handles all non-AI operations: resize, crop, rotate, flip, convert, compress, strip metadata, and color adjustments (brightness, contrast, saturation, grayscale, sepia, invert, color channels). This package has no network dependencies and runs entirely in-process.

### `@ashim/ai`

A bridge layer that calls Python scripts for ML operations. On first use, the bridge starts a persistent Python dispatcher process that pre-imports heavy libraries (PIL, NumPy, MediaPipe, rembg) so subsequent AI calls skip the import overhead. If the dispatcher is not yet ready, the bridge falls back to spawning a fresh Python subprocess per request.

**Models are not pre-loaded.** Each tool script loads its model weights from disk at request time and discards them when the request finishes.
See [Resource footprint](#resource-footprint) for the full memory profile.

Supported operations: background removal (rembg/BiRefNet), upscaling (RealESRGAN), face blur (MediaPipe), face enhancement (GFPGAN/CodeFormer), object erasing (LaMa ONNX), OCR (PaddleOCR/Tesseract), colorization (DDColor), noise removal, red eye removal, photo restoration, passport photo generation, and content-aware resize (Go caire binary).

Python scripts live in `packages/ai/python/`. The Docker image pre-downloads all model weights during the build so the container works fully offline.

### `@ashim/shared`

Shared TypeScript types, constants (like `APP_VERSION` and tool definitions), and i18n translation strings used by both the frontend and backend.

## Applications

### API (`apps/api`)

A Fastify v5 server exposing 47 tool routes (34 standard image operations + 13 AI-powered) that handles:

* File uploads, temporary workspace management, and persistent file storage
* User file library with version chains (`user_files` table) -- each processed result links back to its source file and records which tool was applied, with auto-generated thumbnails for the Files page
* Tool execution (routes each tool request to the image engine or AI bridge)
* Pipeline orchestration (chaining multiple tools sequentially)
* Batch processing with concurrency control via p-queue
* User authentication, RBAC (admin/user roles with a full permission set), API key management, and rate limiting
* Teams management -- admin-only CRUD; users are assigned to a team via the `team` field on their profile
* Runtime settings -- a key-value store in the `settings` table that controls `disabledTools`, `enableExperimentalTools`, `loginAttemptLimit`, and other operational knobs without redeploying
* Custom branding -- logo upload endpoint; the uploaded image is stored at `data/branding/logo.png` and served to the frontend
* Swagger/OpenAPI documentation at `/api/docs`
* Serving the built frontend as a SPA in production

Key dependencies: Fastify, Drizzle ORM, better-sqlite3, Sharp, Piscina (worker thread pool), Zod for validation.

The server handles graceful shutdown on SIGTERM/SIGINT: it drains HTTP connections, stops the worker pool, shuts down the Python dispatcher, and closes the database.

### Web (`apps/web`)

A React 19 single-page app built with Vite. Uses Zustand for state management, Tailwind CSS v4 for styling, and Lucide for icons. Communicates with the API over REST and SSE (for progress tracking). Pages include a tool workspace, a Files page for managing persistent uploads and results, an automation/pipeline builder, and an admin settings panel.

The built frontend gets served by the Fastify backend in production, so there is no separate web server in the Docker container.

### Docs (`apps/docs`)

This VitePress site. Deployed to GitHub Pages automatically on push to `main`.

## How a request flows

1. The user picks a tool in the web UI and uploads an image.
2. The frontend sends a multipart POST to `/api/v1/tools/:toolId` with the file and settings.
3. The API route validates the input with Zod, then dispatches processing.
4. For standard tools, the request is offloaded to a Piscina worker thread pool so Sharp operations don't block the main event loop. The worker auto-orients the image based on EXIF metadata, runs the tool's process function, and returns the result. If the worker pool is unavailable, processing falls back to the main thread.
5. For AI tools, the TypeScript bridge sends a request to the persistent Python dispatcher (or spawns a fresh subprocess as fallback), waits for it to finish, and reads the output file.
6. Job progress is persisted to the `jobs` SQLite table so state survives container restarts. Real-time updates are delivered via SSE at `/api/v1/jobs/:jobId/progress`.
7. The API returns a `jobId` and `downloadUrl`. The user downloads the processed image from `/api/v1/download/:jobId/:filename`.
For pipelines, the API feeds the output of each step as input to the next, running them sequentially. For batch processing, the API uses p-queue with a configurable concurrency limit (`CONCURRENT_JOBS`) and returns a ZIP file with all processed images.

## Resource footprint

ashim is designed for low idle memory use. Nothing is preloaded or kept warm at startup.

### At idle

Only the Node.js/Fastify process is running. Typical idle RAM is **~100-150 MB** (Node.js process + SQLite connection). No Python process, no worker threads, no model weights in memory.

### What starts, and when

| Component | Starts when | Memory while active |
|-----------|-------------|---------------------|
| Fastify server | Container start | ~100-150 MB |
| Piscina worker threads | First standard tool request | Spawned on demand, terminated after **30 s idle** |
| Python dispatcher | First AI tool request | Python interpreter + pre-imported libraries (PIL, NumPy, MediaPipe, rembg) - no model weights |
| AI model weights | During the specific tool's request | Loaded from disk, freed when the request finishes |

### Model loading

All model weight files (totalling several GB) sit on disk in `/opt/models/` at all times. Each AI tool script loads only its own model(s) into memory for the duration of a request, then releases them. Some scripts explicitly call `del model` and `torch.cuda.empty_cache()` after inference to ensure memory is returned immediately.

There is no model cache between requests. Running the same AI tool back-to-back reloads the model each time. This keeps idle memory near zero at the cost of a model-load delay on every AI request.

### First AI request cold start

The Python dispatcher is not running when the container starts. The first AI request triggers two things in parallel: the dispatcher starts warming up in the background, and the request itself falls back to a one-off Python subprocess spawn.
Once the dispatcher signals ready, all subsequent AI requests use it directly and skip the subprocess spawn cost.

---

---
url: 'https://ashim-hq.github.io/ashim/guide/configuration.md'
---

# Configuration

All configuration is done through environment variables. Every variable has a sensible default, so ashim works out of the box without setting any of them.

## Environment variables

### Server

| Variable | Default | Description |
|---|---|---|
| `PORT` | `1349` | Port the server listens on. |
| `RATE_LIMIT_PER_MIN` | `100` | Maximum requests per minute per IP. |

### Authentication

| Variable | Default | Description |
|---|---|---|
| `AUTH_ENABLED` | `false` | Set to `true` to require login. The Docker image defaults to `true`. |
| `DEFAULT_USERNAME` | `admin` | Username for the initial admin account. Only used on first run. |
| `DEFAULT_PASSWORD` | `admin` | Password for the initial admin account. Change this after first login. |
| `MAX_USERS` | `5` | Maximum number of registered user accounts. |
| `SKIP_MUST_CHANGE_PASSWORD` | - | Set to any non-empty value to bypass the forced password-change prompt on first login. |

### Storage

| Variable | Default | Description |
|---|---|---|
| `STORAGE_MODE` | `local` | `local` or `s3`. Only local storage is currently implemented. |
| `DB_PATH` | `./data/ashim.db` | Path to the SQLite database file. |
| `WORKSPACE_PATH` | `./tmp/workspace` | Directory for temporary files during processing. Cleaned up automatically. |
| `FILES_STORAGE_PATH` | `./data/files` | Directory for persistent user files (uploaded images, saved results). |

### Processing limits

| Variable | Default | Description |
|---|---|---|
| `MAX_UPLOAD_SIZE_MB` | `100` | Maximum file size per upload in megabytes. |
| `MAX_BATCH_SIZE` | `200` | Maximum number of files in a single batch request. |
| `CONCURRENT_JOBS` | `3` | Number of batch jobs that run in parallel. Higher values use more memory. |
| `MAX_MEGAPIXELS` | `100` | Maximum image resolution allowed. Rejects images larger than this. |

### Cleanup

| Variable | Default | Description |
|---|---|---|
| `FILE_MAX_AGE_HOURS` | `24` | How long temporary files are kept before automatic deletion. |
| `CLEANUP_INTERVAL_MINUTES` | `30` | How often the cleanup job runs. |

### Appearance

| Variable | Default | Description |
|---|---|---|
| `APP_NAME` | `ashim` | Display name shown in the UI. |
| `DEFAULT_THEME` | `light` | Default theme for new sessions. `light` or `dark`. |
| `DEFAULT_LOCALE` | `en` | Default interface language. |

## Docker example

```yaml
services:
  ashim:
    image: ashimhq/ashim:latest
    ports:
      - "1349:1349"
    volumes:
      - ashim-data:/data
      - ashim-workspace:/tmp/workspace
    environment:
      - AUTH_ENABLED=true
      - DEFAULT_USERNAME=admin
      - DEFAULT_PASSWORD=changeme
      - MAX_UPLOAD_SIZE_MB=200
      - CONCURRENT_JOBS=4
      - FILE_MAX_AGE_HOURS=12
    restart: unless-stopped

volumes:
  ashim-data:
  ashim-workspace:
```

## Volumes

The Docker container uses two volumes:

* `/data` -- Persistent storage for the SQLite database and user files. Mount this to keep users, API keys, saved pipelines, and uploaded images across container restarts.
* `/tmp/workspace` -- Temporary storage for images being processed. This can be ephemeral, but mounting it avoids filling up the container's writable layer.

---

---
url: 'https://ashim-hq.github.io/ashim/guide/contributing.md'
---

# Contributing

Thanks for your interest in ashim. Community feedback helps shape the project, and there are several ways to get involved.

## How to contribute

The best way to contribute is through [GitHub Issues](https://github.com/ashim-hq/ashim/issues):

* **Bug reports** - Found something broken? Open a bug report with steps to reproduce, your Docker setup, and what you expected to happen.
* **Feature requests** - Have an idea for a new tool or improvement? Describe the problem you want solved and why it matters to you.
* **Feedback** - Thoughts on the UI, workflow, documentation, or anything else? We want to hear it.

## Pull requests

We do not accept pull requests.
All development is handled internally to maintain architectural consistency and code quality across the project.

If you have found a bug, open an issue describing it rather than submitting a fix. If you have a suggestion for how something should work, describe it in a feature request. Your input is valuable even without a code contribution.

## Forking

You are welcome to fork the project for your own use under the terms of the [AGPLv3 license](https://github.com/ashim-hq/ashim/blob/main/LICENSE). The [Developer Guide](/guide/developer) covers setup, architecture, and how to add new tools.

## Security

If you discover a security vulnerability, please report it privately through [GitHub Security Advisories](https://github.com/ashim-hq/ashim/security/advisories/new) rather than opening a public issue.

---

---
url: 'https://ashim-hq.github.io/ashim/guide/database.md'
---

# Database

ashim uses SQLite with [Drizzle ORM](https://orm.drizzle.team/) for data persistence. The schema is defined in `apps/api/src/db/schema.ts`.

The database file lives at the path set by `DB_PATH` (defaults to `./data/ashim.db`). In Docker, mount the `/data` volume to persist it across container restarts.

## Tables

### users

Stores user accounts. Created automatically on first run from `DEFAULT_USERNAME` and `DEFAULT_PASSWORD`.

| Column | Type | Notes |
|---|---|---|
| `id` | integer | Primary key, auto-increment |
| `username` | text | Unique, required |
| `passwordHash` | text | bcrypt hash |
| `role` | text | `admin` or `user` |
| `mustChangePassword` | integer | Boolean flag for forced password reset |
| `createdAt` | text | ISO timestamp |
| `updatedAt` | text | ISO timestamp |

### sessions

Active login sessions. Each row ties a session token to a user.
| Column | Type | Notes |
|---|---|---|
| `id` | text | Primary key (session token) |
| `userId` | integer | Foreign key to `users.id` |
| `expiresAt` | text | ISO timestamp |
| `createdAt` | text | ISO timestamp |

### teams

Groups for organizing users. Admins can assign users to teams.

| Column | Type | Description |
|--------|------|-------------|
| `id` | text UUID | Primary key |
| `name` | text (unique, max 50 chars) | Team name |
| `createdAt` | integer | Unix timestamp |

### api\_keys

API keys for programmatic access. The raw key is shown once on creation; only the hash is stored.

| Column | Type | Notes |
|---|---|---|
| `id` | integer | Primary key, auto-increment |
| `userId` | integer | Foreign key to `users.id` |
| `keyHash` | text | SHA-256 hash of the key |
| `name` | text | User-provided label |
| `createdAt` | text | ISO timestamp |
| `lastUsedAt` | text | Updated on each authenticated request |

Keys are prefixed with `si_` followed by 96 hex characters (48 random bytes).

### pipelines

Saved tool chains that users create in the UI.

| Column | Type | Notes |
|---|---|---|
| `id` | integer | Primary key, auto-increment |
| `name` | text | Pipeline name |
| `description` | text | Optional description |
| `steps` | text | JSON array of `{ toolId, settings }` objects |
| `createdAt` | text | ISO timestamp |

### user\_files

Persistent file library with version chain tracking. Each processing step that saves a result creates a new row linked to its parent via `parentId`, forming a version tree.
| Column | Type | Description |
|--------|------|-------------|
| `id` | text UUID | Primary key |
| `userId` | text UUID | FK → users (CASCADE DELETE) |
| `originalName` | text | Original upload filename |
| `storedName` | text | Filename on disk |
| `mimeType` | text | MIME type |
| `size` | integer | File size in bytes |
| `width` | integer | Image width in px |
| `height` | integer | Image height in px |
| `version` | integer | Version number (1 = original) |
| `parentId` | text UUID \| null | FK → user\_files (parent version) |
| `toolChain` | text (JSON array) | Tool IDs applied in order to produce this version |
| `createdAt` | integer | Unix timestamp |

### jobs

Tracks processing jobs for progress reporting and cleanup.

| Column | Type | Notes |
|---|---|---|
| `id` | text | Primary key (UUID) |
| `type` | text | Tool or pipeline identifier |
| `status` | text | `queued`, `processing`, `completed`, or `failed` |
| `progress` | real | 0.0–1.0 fraction |
| `inputFiles` | text | JSON array of input file paths |
| `outputPath` | text | Path to the result file |
| `settings` | text | JSON of the tool settings used |
| `error` | text | Error message if failed |
| `createdAt` | text | ISO timestamp |
| `completedAt` | text | ISO timestamp |

### settings

Key-value store for server-wide settings that admins can change from the UI.

| Column | Type | Notes |
|---|---|---|
| `key` | text | Primary key |
| `value` | text | Setting value |
| `updatedAt` | text | ISO timestamp |

## Migrations

Drizzle handles schema migrations. The config is in `apps/api/drizzle.config.ts`. During development, run:

```bash
pnpm --filter @ashim/api drizzle-kit push
```

In production, the schema is applied automatically on startup.

---

---
url: 'https://ashim-hq.github.io/ashim/guide/deployment.md'
---

# Deployment

ashim ships as a single Docker container.
The image supports **linux/amd64** (with NVIDIA CUDA) and **linux/arm64** (CPU), so it runs natively on Intel/AMD servers, Apple Silicon Macs, and ARM devices like the Raspberry Pi 4/5. See [Docker Image](./docker-tags) for GPU setup, Docker Compose examples, and version pinning.

## Docker Compose (recommended)

```yaml
services:
  ashim:
    image: ashimhq/ashim:latest
    container_name: ashim
    ports:
      - "1349:1349"
    volumes:
      - ashim-data:/data
      - ashim-workspace:/tmp/workspace
    environment:
      - AUTH_ENABLED=true
      - DEFAULT_USERNAME=admin
      - DEFAULT_PASSWORD=admin
    restart: unless-stopped

volumes:
  ashim-data:
  ashim-workspace:
```

```bash
docker compose up -d
```

The app is then available at `http://localhost:1349`.

> **Docker Hub rate limits?** Replace `ashimhq/ashim:latest` with `ghcr.io/ashim-hq/ashim:latest` to pull from GitHub Container Registry instead. Both registries receive the same image on every release.

## What's inside the container

The Docker image uses a multi-stage build:

1. **Build stage** -- Installs Node.js dependencies and builds the React frontend with Vite.
2. **Production stage** -- Copies the built frontend and API source into a Node 22 image, installs system dependencies (Python 3, ImageMagick, Tesseract, potrace), sets up a Python virtual environment with all ML packages, and pre-downloads model weights.

Everything runs from a single process. The Fastify server handles API requests and serves the frontend SPA.

### System dependencies installed in the image

* Python 3 with pip
* ImageMagick
* Tesseract OCR
* libraw (RAW image support)
* potrace (bitmap to vector conversion)

### Python packages

* rembg with BiRefNet-Lite (background removal)
* RealESRGAN (upscaling)
* PaddleOCR (text recognition)
* MediaPipe (face detection)
* OpenCV (inpainting/object removal)
* onnxruntime, opencv-python, Pillow, numpy

Model weights are downloaded at build time, so the container works fully offline.

### Architecture notes

All tools work on both amd64 and arm64.
AI tools (background removal, upscaling, OCR, face detection) use CUDA-accelerated packages on amd64 and CPU packages on arm64. GPU acceleration is auto-detected at runtime when `--gpus all` is passed.

## Volumes

Mount these to persist data:

| Mount point | Purpose |
|---|---|
| `/data` | SQLite database (users, API keys, pipelines, settings) |
| `/tmp/workspace` | Temporary image processing files |

The `/data` volume is the important one. Without it, you lose all user accounts and saved pipelines on container restart. The workspace volume is optional but prevents the container's writable layer from growing.

## Health check

The container includes a health check that hits `GET /api/v1/health`. Docker uses this to report container status:

```bash
docker inspect --format='{{.State.Health.Status}}' ashim
```

## Reverse proxy

If you're running ashim behind nginx or Caddy, point it at port 1349. Example nginx config:

```nginx
server {
    listen 80;
    server_name images.example.com;
    client_max_body_size 200M;

    location / {
        proxy_pass http://localhost:1349;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

Set `client_max_body_size` to match your `MAX_UPLOAD_SIZE_MB` value.

## CI/CD

The GitHub repository has three workflows:

* **ci.yml** -- Runs automatically on every push and PR. Lints, typechecks, tests, builds, and validates the Docker image (without pushing).
* **release.yml** -- Triggered manually via `workflow_dispatch`. Runs semantic-release to create a version tag and GitHub release, then builds a multi-arch Docker image (amd64 + arm64) and pushes to Docker Hub (`ashimhq/ashim`) and GitHub Container Registry (`ghcr.io/ashim-hq/ashim`).
* **deploy-docs.yml** -- Builds this documentation site and deploys it to GitHub Pages on push to `main`.
To create a release, go to **Actions > Release > Run workflow** in the GitHub UI, or run:

```bash
gh workflow run release.yml
```

Semantic-release determines the version from commit history. The `latest` Docker tag always points to the most recent release.

---

---
url: 'https://ashim-hq.github.io/ashim/guide/developer.md'
---

# Developer guide

How to set up a local development environment and contribute code to ashim.

## Prerequisites

* [Node.js](https://nodejs.org/) 22+
* [pnpm](https://pnpm.io/) 9+ (`corepack enable && corepack prepare pnpm@latest --activate`)
* [Docker](https://www.docker.com/) (for container builds and AI features)
* Git

Python 3.10+ is only needed if you are working on the AI/ML sidecar (background removal, upscaling, OCR).

## Setup

```bash
git clone https://github.com/ashim-hq/ashim.git
cd ashim
pnpm install
pnpm dev
```

This starts two dev servers:

| Service | URL | Notes |
|----------|------------------------|----------------------------------|
| Frontend | http://localhost:1349 | Vite dev server, proxies /api |
| Backend | http://localhost:13490 | Fastify API (accessed via proxy) |

Open http://localhost:1349 in your browser. Log in with `admin` / `admin`. You will be prompted to change the password on first login.
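The proxy column in the table above typically corresponds to a `vite.config.ts` fragment like the following. This is an illustrative assumption, not the repo's actual config file:

```typescript
// Illustrative vite.config.ts fragment -- the repo's real proxy setup may differ.
import { defineConfig } from "vite";

export default defineConfig({
  server: {
    port: 1349, // frontend dev server
    proxy: {
      // Forward /api requests to the Fastify backend on 13490.
      "/api": "http://localhost:13490",
    },
  },
});
```

With this wiring the browser only ever talks to port 1349; the Vite dev server transparently relays API calls to the backend.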
## Project structure ``` apps/ api/ Fastify backend web/ Vite + React frontend docs/ VitePress documentation (this site) packages/ shared/ Constants, types, i18n strings image-engine/ Sharp-based image operations ai/ Python sidecar bridge for ML models tests/ unit/ Vitest unit tests integration/ Vitest integration tests (full API) e2e/ Playwright end-to-end specs fixtures/ Small test images ``` ## Commands ```bash pnpm dev # start frontend + backend pnpm build # build all workspaces pnpm typecheck # TypeScript check across monorepo pnpm lint # Biome lint + format check pnpm lint:fix # auto-fix lint + format pnpm test # unit + integration tests pnpm test:unit # unit tests only pnpm test:integration # integration tests only pnpm test:e2e # Playwright e2e tests pnpm test:coverage # tests with coverage report ``` ## Code conventions * Double quotes, semicolons, 2-space indentation (enforced by Biome) * ES modules in all workspaces * [Conventional commits](https://www.conventionalcommits.org/) for semantic-release * Zod for all API input validation * No modifications to Biome, TypeScript, or editor config files. Fix the code, not the linter. ## Database SQLite via Drizzle ORM. The database file lives at `./data/ashim.db` by default. ```bash cd apps/api npx drizzle-kit generate # generate a migration from schema changes npx drizzle-kit migrate # apply pending migrations ``` Schema is defined in `apps/api/src/db/schema.ts`. Tables: users, sessions, settings, jobs, apiKeys, pipelines, teams, userFiles. ## Adding a new tool Every tool follows the same pattern. Here is a minimal example. ### 1. 
Backend route Create `apps/api/src/routes/tools/my-tool.ts`: ```ts import { z } from "zod"; import type { FastifyInstance } from "fastify"; import { createToolRoute } from "../tool-factory.js"; const settingsSchema = z.object({ intensity: z.number().min(0).max(100).default(50), }); export function registerMyTool(app: FastifyInstance) { createToolRoute(app, { toolId: "my-tool", settingsSchema, async process(inputBuffer, settings, filename) { // Use sharp or other libraries to process the image const sharp = (await import("sharp")).default; const result = await sharp(inputBuffer) // ... your processing logic .toBuffer(); return { buffer: result, filename: filename.replace(/\.[^.]+$/, ".png"), contentType: "image/png", }; }, }); } ``` Then register it in `apps/api/src/routes/tools/index.ts`. ### 2. Frontend settings component Create `apps/web/src/components/tools/my-tool-settings.tsx`: ```tsx import { useState } from "react"; import { useToolProcessor } from "@/hooks/use-tool-processor"; import { useFileStore } from "@/stores/file-store"; export function MyToolSettings() { const { files } = useFileStore(); const { processFiles, processing, error, downloadUrl } = useToolProcessor("my-tool"); const [intensity, setIntensity] = useState(50); const handleProcess = () => { processFiles(files, { intensity }); }; return (
<div>
      {/* data-testid lets the e2e tests target this button */}
      <button type="button" data-testid="my-tool-process" onClick={handleProcess} disabled={processing}>
        Process
      </button>
    </div>
); } ``` Then register it in the frontend tool registry at `apps/web/src/lib/tool-registry.tsx`: ```tsx // Add the lazy import const MyToolSettings = lazy(() => import("@/components/tools/my-tool-settings").then((m) => ({ default: m.MyToolSettings, })), ); // Add to the toolRegistry Map ["my-tool", { displayMode: "before-after", Settings: MyToolSettings }], ``` Display modes: `"side-by-side"`, `"before-after"`, `"live-preview"`, `"no-comparison"`, `"interactive-crop"`, `"interactive-eraser"`, `"no-dropzone"`. ### 3. i18n entry Add to `packages/shared/src/i18n/en.ts`: ```ts "my-tool": { name: "My Tool", description: "Short description of what this tool does", }, ``` ### 4. Tests Add a `data-testid` attribute to your action button (as shown above) so e2e tests can target it reliably. ## Docker builds Build the full production image locally: ```bash docker build -f docker/Dockerfile -t ashim:latest . ``` Use BuildKit cache mounts for faster rebuilds: ```bash DOCKER_BUILDKIT=1 docker build -f docker/Dockerfile -t ashim:latest . ``` ## Environment variables See the [Configuration guide](/guide/configuration) for the full list. Key ones for development: | Variable | Default | Description | |-----------------------------|-----------|------------------------------------------------| | `AUTH_ENABLED` | `true` | Enable/disable authentication | | `DEFAULT_USERNAME` | `admin` | Default admin username | | `DEFAULT_PASSWORD` | `admin` | Default admin password | | `SKIP_MUST_CHANGE_PASSWORD` | `false` | Skip forced password change (CI/dev only) | | `RATE_LIMIT_PER_MIN` | `100` | API rate limit per minute | | `MAX_UPLOAD_SIZE_MB` | `100` | Maximum upload size in MB | --- --- url: 'https://ashim-hq.github.io/ashim/guide/docker-tags.md' --- # Docker Image ashim ships as a single Docker image that works on all platforms. ## Quick start ```bash docker run -d --name ashim -p 1349:1349 -v ashim-data:/data ashimhq/ashim:latest ``` The app is available at `http://localhost:1349`. 
## GPU acceleration The image includes CUDA support on amd64. If you have an NVIDIA GPU with the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html) installed, add `--gpus all`: ```bash docker run -d --name ashim --gpus all -p 1349:1349 -v ashim-data:/data ashimhq/ashim:latest ``` The image auto-detects your GPU at runtime. Without `--gpus all`, it runs on CPU. Same image either way. ### Benchmarks Tested on an NVIDIA RTX 4070 (12 GB VRAM) with a 572x1024 JPEG portrait. #### Warm performance | Tool | CPU | GPU | Speedup | |------|-----|-----|---------| | Background removal (u2net) | 2,415ms | 879ms | 2.7x | | Background removal (isnet) | 2,457ms | 1,137ms | 2.2x | | Upscale 2x | 350ms | 309ms | 1.1x | | Upscale 4x | 910ms | 310ms | 2.9x | | OCR (PaddleOCR) | 137ms | 94ms | 1.5x | | Face blur | 139ms | 122ms | 1.1x | #### Cold start (first request after container start) | Tool | CPU | GPU | Speedup | |------|-----|-----|---------| | Background removal | 22,286ms | 4,792ms | 4.7x | | Upscale 2x | 3,957ms | 2,318ms | 1.7x | | OCR (PaddleOCR) | 1,469ms | 1,090ms | 1.3x | ### GPU health check After the first AI request, the admin health endpoint reports GPU status: ``` GET /api/v1/admin/health {"ai": {"gpu": true}} ``` ## Docker Compose ```yaml services: ashim: image: ashimhq/ashim:latest ports: - "1349:1349" volumes: - ashim-data:/data - ashim-workspace:/tmp/workspace restart: unless-stopped logging: driver: json-file options: max-size: "10m" max-file: "3" volumes: ashim-data: ashim-workspace: ``` For GPU acceleration via Docker Compose, add the deploy section: ```yaml services: ashim: image: ashimhq/ashim:latest ports: - "1349:1349" volumes: - ashim-data:/data - ashim-workspace:/tmp/workspace deploy: resources: reservations: devices: - driver: nvidia count: 1 capabilities: [gpu] restart: unless-stopped volumes: ashim-data: ashim-workspace: ``` ## Version pinning | Tag | Description | 
|-----|------------|
| `latest` | Latest release |
| `1.11.0` | Exact version |
| `1.11` | Latest patch in 1.11.x |
| `1` | Latest minor in 1.x |

## Platforms

| Architecture | GPU support | Notes |
|---|---|---|
| linux/amd64 | NVIDIA CUDA | Full GPU acceleration for AI tools |
| linux/arm64 | CPU only | Raspberry Pi 4/5, Apple Silicon via Docker Desktop |

## Migration from previous tags

If you were using the `:cuda` tag, switch to `:latest` and keep `--gpus all`. Same GPU support, unified image. Your data and settings are preserved in the volumes.

---

--- url: 'https://ashim-hq.github.io/ashim/guide/getting-started.md' ---

# Getting Started

## Quick Start

```bash
docker run -d --name ashim -p 1349:1349 -v ashim-data:/data ghcr.io/ashim-hq/ashim:latest
```

Open http://localhost:1349 in your browser.

::: tip Also on Docker Hub
```bash
docker run -d --name ashim -p 1349:1349 -v ashim-data:/data ashimhq/ashim:latest
```
Both registries publish the same image on every release.
:::

**Default credentials:**

| Field    | Value   |
|----------|---------|
| Username | `admin` |
| Password | `admin` |

You will be asked to change your password on first login.

::: tip NVIDIA GPU acceleration
Add `--gpus all` for GPU-accelerated background removal, upscaling, OCR, face enhancement, and restoration:

```bash
docker run -d --name ashim -p 1349:1349 --gpus all -v ashim-data:/data ashimhq/ashim:latest
```

Requires the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html). Falls back to CPU automatically. See [Docker Tags](/guide/docker-tags) for benchmarks.
:::

## Docker Compose

```yaml
services:
  ashim:
    image: ghcr.io/ashim-hq/ashim:latest # or ashimhq/ashim:latest
    ports:
      - "1349:1349"
    volumes:
      - ashim-data:/data
    environment:
      - AUTH_ENABLED=true
      - DEFAULT_USERNAME=admin
      - DEFAULT_PASSWORD=admin
    restart: unless-stopped
volumes:
  ashim-data:
```

See [Configuration](/guide/configuration) for all environment variables.
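How the server consumes these variables can be sketched with a small helper. The variable names and defaults come from this documentation; the parsing code itself is an assumption for illustration, not ashim's actual config loader:

```typescript
// Hypothetical sketch of reading the environment variables above with
// their documented defaults - not ashim's real implementation.
function envBool(name: string, fallback: boolean): boolean {
  const raw = process.env[name];
  return raw === undefined ? fallback : raw === "true";
}

const config = {
  authEnabled: envBool("AUTH_ENABLED", true),
  defaultUsername: process.env.DEFAULT_USERNAME ?? "admin",
  defaultPassword: process.env.DEFAULT_PASSWORD ?? "admin",
};
```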
## Build from Source

**Prerequisites:** Node.js 22+, pnpm 9+, Python 3.10+ (for AI features), Git.

```bash
git clone https://github.com/ashim-hq/ashim.git
cd ashim
pnpm install
pnpm dev
```

* Frontend: http://localhost:1349
* Backend: http://localhost:13490

## What You Can Do

### Image Processing (45+ Tools)

| Category | Tools |
|----------|-------|
| **Essentials** | Resize, Crop, Rotate & Flip, Convert, Compress |
| **Optimization** | Optimize for Web, Strip Metadata, Edit Metadata, Bulk Rename, Image to PDF, Favicon Generator |
| **Adjustments** | Adjust Colors, Sharpening, Replace Color |
| **AI Tools** | Remove Background, Upscale, Erase Object, OCR, Blur Faces, Smart Crop, Image Enhancement, Enhance Faces, Colorize, Noise Removal, Red Eye Removal, Restore Photo, Passport Photo, Content-Aware Resize |
| **Watermark & Overlay** | Text Watermark, Image Watermark, Text Overlay, Image Composition |
| **Utilities** | Image Info, Compare, Find Duplicates, Color Palette, QR Code Generator, Barcode Reader, Image to Base64 |
| **Layout** | Collage, Stitch, Split, Border & Frame |
| **Format** | SVG to Raster, Vectorize, GIF Tools, PDF to Image |

### Pipelines

Chain tools into multi-step workflows and apply them to one image or a whole batch:

1. Open **Pipelines** in the sidebar.
2. Add steps (any tool, any settings).
3. Run on a single file - or up to 200 files at once.
4. Save the pipeline for later reuse.

Pipelines can have up to 20 steps.

### File Library

Every file you process can be saved to your **Files** library. ashim tracks the full version history so you can trace every processing step from the original upload to the final output.

### REST API & API Keys

Every tool is accessible via HTTP:

```bash
curl -X POST http://localhost:1349/api/v1/tools/resize \
  -H "Authorization: Bearer si_" \
  -F "file=@photo.jpg" \
  -F 'settings={"width":800,"height":600,"fit":"cover"}'
```

Generate API keys under **Settings → API Keys**.
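The same `resize` call can be made from Node.js 22+ with the built-in `fetch`/`FormData` globals. The helper name below is ours, for illustration only:

```typescript
// Sketch: build the multipart body for a tool call from Node.js 22+.
// The helper name is illustrative, not part of ashim.
function buildToolRequest(file: Blob, settings: Record<string, unknown>): FormData {
  const form = new FormData();
  form.append("file", file, "photo.jpg");
  // settings travels as a JSON string field, exactly like -F 'settings={...}' in curl
  form.append("settings", JSON.stringify(settings));
  return form;
}

const body = buildToolRequest(new Blob([]), { width: 800, height: 600, fit: "cover" });
// Then send it (apiKey is your generated key):
// await fetch("http://localhost:1349/api/v1/tools/resize", {
//   method: "POST",
//   headers: { Authorization: `Bearer ${apiKey}` },
//   body,
// });
```

The response body is the processed image itself, so on success you would read it with `await res.arrayBuffer()` rather than `res.json()`.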
See the [REST API reference](/api/rest) for all endpoints, or visit for the interactive reference. ### Multi-User & Teams Enable multiple users with role-based access control: * **Admin**: full access - manage users, teams, settings, all files/pipelines/API keys * **User**: use tools, manage own files/pipelines/API keys Create teams under **Settings → Teams** to group users. Set `AUTH_ENABLED=true` (or `false` for single-user/self-use without login). --- --- url: 'https://ashim-hq.github.io/ashim/api/image-engine.md' --- # Image engine The `@ashim/image-engine` package handles all non-AI image operations. It wraps [Sharp](https://sharp.pixelplumbing.com/) and runs entirely in-process with no external dependencies. ## Operations ### resize Scale an image to specific dimensions or by percentage. | Parameter | Type | Description | |---|---|---| | `width` | number | Target width in pixels | | `height` | number | Target height in pixels | | `fit` | string | `cover`, `contain`, `fill`, `inside`, or `outside` | | `withoutEnlargement` | boolean | If true, won't upscale smaller images | | `percentage` | number | Scale by percentage instead of absolute dimensions | You can set `width`, `height`, or both. If you only set one, the other is calculated to maintain the aspect ratio. ### crop Cut out a rectangular region from the image. | Parameter | Type | Description | |---|---|---| | `left` | number | X offset from the left edge | | `top` | number | Y offset from the top edge | | `width` | number | Width of the crop area | | `height` | number | Height of the crop area | ### rotate Rotate the image by a given angle. | Parameter | Type | Description | |---|---|---| | `angle` | number | Rotation angle in degrees (0-360) | | `background` | string | Fill color for the exposed area (default: transparent or white) | ### flip Mirror the image horizontally or vertically. 
| Parameter | Type | Description | |---|---|---| | `direction` | string | `horizontal` or `vertical` | ### convert Change the image format. | Parameter | Type | Description | |---|---|---| | `format` | string | Target format: `jpeg`, `png`, `webp`, `avif`, `tiff`, `gif`, `heic` | | `quality` | number | Compression quality (1-100, applies to lossy formats) | ### compress Reduce file size while keeping the same format. | Parameter | Type | Description | |---|---|---| | `quality` | number | Target quality (1-100) | | `format` | string | Optional format override | ### strip-metadata Remove EXIF, IPTC, and XMP metadata from the image. Useful for privacy before sharing photos publicly. Takes no parameters. ### Color adjustments These operations modify the color properties of an image. Each takes a single numeric value. | Operation | Parameter | Range | Description | |---|---|---|---| | `brightness` | `value` | -100 to 100 | Adjust brightness | | `contrast` | `value` | -100 to 100 | Adjust contrast | | `saturation` | `value` | -100 to 100 | Adjust color saturation | ### Color filters These apply a fixed color transformation. They take no parameters. | Operation | Description | |---|---| | `grayscale` | Convert to grayscale | | `sepia` | Apply a sepia tone | | `invert` | Invert all colors | ### Color channels Adjust individual RGB color channels. | Parameter | Type | Description | |---|---|---| | `red` | number | Red channel adjustment (-100 to 100) | | `green` | number | Green channel adjustment (-100 to 100) | | `blue` | number | Blue channel adjustment (-100 to 100) | ## Format detection The engine detects input formats automatically from file headers, not just file extensions. This means a `.jpg` file that is actually a PNG will be handled correctly. Supported input formats: JPEG, PNG, WebP, AVIF, TIFF, GIF, HEIC/HEIF, SVG, RAW (via libraw). 
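Header-based detection can be pictured with a few magic-byte checks. This is an illustrative sketch only - Sharp performs the real detection inside the engine:

```typescript
// Illustrative magic-byte sniffing - Sharp does the real detection.
function sniffFormat(bytes: Uint8Array): string | null {
  const ascii = (offset: number, len: number) =>
    String.fromCharCode(...bytes.slice(offset, offset + len));
  if (bytes[0] === 0xff && bytes[1] === 0xd8 && bytes[2] === 0xff) return "jpeg";
  if (bytes[0] === 0x89 && ascii(1, 3) === "PNG") return "png";
  if (ascii(0, 4) === "RIFF" && ascii(8, 4) === "WEBP") return "webp";
  return null; // defer to the engine for everything else
}

// A mislabeled ".jpg" that actually holds PNG bytes still sniffs as "png":
sniffFormat(new Uint8Array([0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a]));
```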
## Metadata extraction The `info` tool returns image metadata: ```json { "width": 1920, "height": 1080, "format": "jpeg", "size": 245678, "channels": 3, "hasAlpha": false, "dpi": 72, "exif": { ... } } ``` --- --- url: 'https://ashim-hq.github.io/ashim/api/rest.md' --- # REST API Reference Interactive API docs with request/response examples are available at . Machine-readable specs: * `/api/v1/openapi.yaml` - OpenAPI 3.1 spec * `/llms.txt` - LLM-friendly summary * `/llms-full.txt` - Complete LLM-friendly docs ## Authentication All endpoints require authentication unless `AUTH_ENABLED=false`. ### Session Token ```bash # Login curl -X POST http://localhost:1349/api/auth/login \ -H "Content-Type: application/json" \ -d '{"username":"admin","password":"admin"}' # Returns: {"token":""} # Use token curl http://localhost:1349/api/v1/tools/resize \ -H "Authorization: Bearer " ``` Sessions expire after 24 hours. ### API Keys ```bash # Create a key (returns key once - store it) curl -X POST http://localhost:1349/api/v1/api-keys \ -H "Authorization: Bearer " \ -H "Content-Type: application/json" \ -d '{"name":"my-script"}' # Returns: {"key":"si_<96 hex chars>","id":"...","name":"my-script"} # Use the key curl http://localhost:1349/api/v1/tools/resize \ -H "Authorization: Bearer si_" ``` Keys are prefixed `si_` and stored as SHA-256 hashes - the raw key is shown once and never retrievable again. 
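The store-hash, never-store-raw pattern described above can be sketched with `node:crypto`. This is illustrative; ashim's actual key handling may differ in detail:

```typescript
import { createHash, randomBytes } from "node:crypto";

// Illustrative sketch of the pattern above - not ashim's implementation.
function generateKey(): { rawKey: string; storedHash: string } {
  const rawKey = "si_" + randomBytes(48).toString("hex"); // 96 hex chars after the prefix
  const storedHash = createHash("sha256").update(rawKey).digest("hex");
  return { rawKey, storedHash }; // show rawKey once, persist only storedHash
}

// Later, authenticate by hashing the presented key and comparing:
function matches(presented: string, storedHash: string): boolean {
  return createHash("sha256").update(presented).digest("hex") === storedHash;
}
```

A production implementation would use a constant-time comparison (e.g. `crypto.timingSafeEqual`); this sketch favors clarity.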
### Auth Endpoints | Method | Path | Access | Description | |--------|------|--------|-------------| | `POST` | `/api/auth/login` | Public | Login, get session token | | `POST` | `/api/auth/logout` | Auth | Destroy current session | | `GET` | `/api/auth/session` | Auth | Validate current session | | `POST` | `/api/auth/change-password` | Auth | Change own password (invalidates all other sessions + API keys) | | `GET` | `/api/auth/users` | Admin | List all users | | `POST` | `/api/auth/register` | Admin | Create a new user | | `PUT` | `/api/auth/users/:id` | Admin | Update user role or team | | `POST` | `/api/auth/users/:id/reset-password` | Admin | Reset user's password | | `DELETE` | `/api/auth/users/:id` | Admin | Delete a user | ### Permissions | Permission | Admin | User | |-----------|:-----:|:----:| | Use tools | ✓ | ✓ | | Own files/pipelines/API keys | ✓ | ✓ | | See all users' files/pipelines/keys | ✓ | - | | Write settings | ✓ | - | | Manage users & teams | ✓ | - | | Manage branding | ✓ | - | ## Using Tools Every tool follows the same pattern: ```bash # Single file curl -X POST http://localhost:1349/api/v1/tools/ \ -H "Authorization: Bearer " \ -F "file=@input.jpg" \ -F 'settings={"width":800,"height":600}' # Batch (returns ZIP) curl -X POST http://localhost:1349/api/v1/tools//batch \ -H "Authorization: Bearer " \ -F "files=@a.jpg" \ -F "files=@b.jpg" \ -F 'settings={...}' ``` * Upload is `multipart/form-data`. * `settings` is a JSON string with tool-specific options. * Response is the processed file directly (or a ZIP for batch). * Progress is tracked via SSE (see [Progress Tracking](#progress-tracking)). 
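A minimal client-side parser for those SSE progress events might look like this. It is a sketch; the field names follow the event format documented under Progress Tracking on this page:

```typescript
// Sketch: parse SSE "data:" lines into progress objects.
// Field names follow the documented event format.
interface ProgressEvent {
  progress: number;
  status: string;
  message?: string;
}

function parseProgressChunk(chunk: string): ProgressEvent[] {
  return chunk
    .split("\n")
    .filter((line) => line.startsWith("data: "))
    .map((line) => JSON.parse(line.slice("data: ".length)) as ProgressEvent);
}

const events = parseProgressChunk(
  'data: {"progress":42,"status":"processing","message":"Upscaling frame 2/5"}\n\n' +
  'data: {"progress":100,"status":"completed"}\n',
);
```

A real client should also buffer partial lines across network reads (or use an `EventSource`-style parser), since a chunk boundary can fall mid-event.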
## Tools Reference ### Essentials | Tool ID | Name | Key settings | |---------|------|-------------| | `resize` | Resize | `width`, `height`, `fit` (cover/contain/fill/inside/outside), `percentage`, `withoutEnlargement`, plus 23 social media presets | | `crop` | Crop | `left`, `top`, `width`, `height`, `aspectRatio`, `shape` (rectangle/circle/rounded) | | `rotate` | Rotate & Flip | `angle`, `flip` (horizontal/vertical/both), `background` | | `convert` | Convert | `format` (jpeg/png/webp/avif/tiff/gif/heif), `quality` | | `compress` | Compress | `quality` (1–100), `format`, `targetSizeKB` | ### Optimization | Tool ID | Name | Key settings | |---------|------|-------------| | `optimize-for-web` | Optimize for Web | `format` (auto/jpeg/webp/avif), `quality`, `maxWidthPx`, `stripMetadata` | | `strip-metadata` | Strip Metadata | - | | `edit-metadata` | Edit Metadata | `title`, `description`, `author`, `copyright`, `keywords`, `gps` (lat/lon), `dateTime` | | `bulk-rename` | Bulk Rename | `pattern` (supports `{n}`, `{date}`, `{original}`), `startIndex`, `padding` | | `image-to-pdf` | Image to PDF | `pageSize` (A4/Letter/…), `orientation`, `margin`, `fitMode` | | `favicon` | Favicon Generator | `padding`, `backgroundColor`, `borderRadius` - generates all standard sizes | ### Adjustments | Tool ID | Name | Key settings | |---------|------|-------------| | `color-adjustments` | Adjust Colors | `brightness`, `contrast`, `exposure`, `saturation`, `temperature`, `sharpness`, `vibrance`, effects (grayscale/sepia/invert/vignette) | | `sharpening` | Sharpening | `mode` (adaptive/unsharp/highpass), `amount`, `radius`, `threshold` | | `replace-color` | Replace Color | `targetColor`, `replacementColor`, `tolerance`, `invert` | ### AI Tools All AI tools run on your hardware (CPU or NVIDIA GPU). No internet required. 
| Tool ID | Name | AI Model | Key settings | |---------|------|---------|-------------| | `remove-background` | Remove Background | rembg (BiRefNet / U2-Net) | `model`, `alphaMattingForeground`, `alphaMattingBackground`, `returnMask`, background color/image | | `upscale` | Image Upscaling | RealESRGAN | `scale` (2/4), `model`, `faceEnhance`, `denoise`, `format`, `quality` | | `erase-object` | Object Eraser | LaMa (ONNX) | `maskData` (base64 PNG), `maskThreshold` | | `ocr` | OCR / Text Extraction | PaddleOCR / Tesseract | `quality` (fast/balanced/best), `language`, `enhance` | | `blur-faces` | Face / PII Blur | MediaPipe | `blurRadius`, `sensitivity` | | `smart-crop` | Smart Crop | MediaPipe + Sharp | `mode` (subject/face/trim), `width`, `height`, `facePreset` (close-up/head-and-shoulders/upper-body/half-body) | | `image-enhancement` | Image Enhancement | Analysis-based | `mode` (auto/exposure/contrast/color/sharpness), `strength` | | `enhance-faces` | Face Enhancement | GFPGAN / CodeFormer | `model` (gfpgan/codeformer), `strength`, `sensitivity`, `centerFace` | | `colorize` | AI Colorization | DDColor | `intensity`, `model` | | `noise-removal` | Noise Removal | Tiered denoising | `quality` (fast/balanced/best), `strength`, `preserveDetail`, `colorNoise` | | `red-eye-removal` | Red Eye Removal | Face landmark + color analysis | `sensitivity`, `strength` | | `restore-photo` | Photo Restoration | Multi-step pipeline | `mode` (auto/light/heavy), `scratchRemoval`, `faceEnhancement`, `fidelity`, `denoise`, `denoiseStrength`, `colorize` | | `passport-photo` | Passport Photo | MediaPipe landmarks | `country` (37 countries), `printLayout` (4x6/A4/none), `backgroundColor` | | `content-aware-resize` | Content-Aware Resize | Seam carving (caire) | `width`, `height`, `protectFaces`, `blurRadius`, `sobelThreshold`, `square` | ### Watermark & Overlay | Tool ID | Name | Key settings | |---------|------|-------------| | `watermark-text` | Text Watermark | `text`, `font`, 
`fontSize`, `color`, `opacity`, `position`, `rotation`, `tile` | | `watermark-image` | Image Watermark | `opacity`, `position`, `scale` - second file is the watermark | | `text-overlay` | Text Overlay | `text`, `font`, `fontSize`, `color`, `x`, `y`, `background`, `padding`, `borderRadius` | | `compose` | Image Composition | `x`, `y`, `opacity`, `blend` - second file is layered on top | ### Utilities | Tool ID | Name | Key settings | |---------|------|-------------| | `info` | Image Info | - (returns width, height, format, size, channels, hasAlpha, DPI, EXIF) | | `compare` | Image Compare | `mode` (side-by-side/overlay/diff), `diffThreshold` - second file is the comparison target | | `find-duplicates` | Find Duplicates | `threshold` (perceptual hash distance, default 8) - multi-file | | `color-palette` | Color Palette | `count` (dominant color count), `format` (hex/rgb) | | `qr-generate` | QR Code Generator | `data`, `size`, `margin`, `colorDark`, `colorLight`, `errorCorrectionLevel`, `dotStyle`, `cornerStyle`, `logo` (optional file) | | `barcode-read` | Barcode Reader | - (auto-detects QR, EAN, Code128, DataMatrix, etc.) 
| | `image-to-base64` | Image to Base64 | `format` (data-uri/plain), `mimeType` | ### Layout & Composition | Tool ID | Name | Key settings | |---------|------|-------------| | `collage` | Collage / Grid | `template` (25+ layouts), `gap`, `backgroundColor`, `borderRadius` - multi-file | | `stitch` | Stitch / Combine | `direction` (horizontal/vertical/grid), `gap`, `backgroundColor`, `alignment` - multi-file | | `split` | Image Splitting | `mode` (grid/rows/cols), `rows`, `cols`, `tileWidth`, `tileHeight` | | `border` | Border & Frame | `width`, `color`, `style` (solid/gradient/pattern), `borderRadius`, `padding`, `shadow` | ### Format & Conversion | Tool ID | Name | Key settings | |---------|------|-------------| | `svg-to-raster` | SVG to Raster | `format` (png/jpeg/webp/avif/tiff/gif/heif), `width`, `height`, `scale`, `dpi`, `background` | | `vectorize` | Image to SVG | `colorMode` (bw/color), `threshold`, `colorPrecision`, `filterSpeckle`, `pathMode` (none/polygon/spline) | | `gif-tools` | GIF Tools | `action` (resize/optimize/reverse/speed/extract-frames/rotate/add-text), action-specific params | | `pdf-to-image` | PDF to Image | `pages` (all/range), `format`, `dpi`, `quality` | ## Batch Processing Apply any tool to multiple files at once. Returns a ZIP archive. ```bash curl -X POST http://localhost:1349/api/v1/tools/compress/batch \ -H "Authorization: Bearer " \ -F "files=@a.jpg" \ -F "files=@b.jpg" \ -F "files=@c.jpg" \ -F 'settings={"quality":80}' ``` Limits: up to **200 files** per batch. Concurrency controlled by `CONCURRENT_JOBS` (default: 3). 
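Because a single batch tops out at 200 files, larger jobs need client-side splitting. A trivial chunking helper (ours, not part of the API):

```typescript
// Client-side helper (illustrative): split a large file list into
// batches of at most 200, one /batch request (and one ZIP) per batch.
function chunk<T>(items: T[], size: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

const batches = chunk(Array.from({ length: 450 }, (_, i) => `photo-${i}.jpg`), 200);
// batches.length === 3 (200 + 200 + 50 files)
```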
## Pipelines ### Execute a pipeline ```bash # Single file curl -X POST http://localhost:1349/api/v1/pipeline/execute \ -H "Authorization: Bearer " \ -F "file=@input.jpg" \ -F 'pipeline=[ {"toolId":"resize","settings":{"width":1200}}, {"toolId":"compress","settings":{"quality":80}}, {"toolId":"watermark-text","settings":{"text":"© 2025"}} ]' # Batch (multiple files → ZIP) curl -X POST http://localhost:1349/api/v1/pipeline/batch \ -H "Authorization: Bearer " \ -F "files=@a.jpg" \ -F "files=@b.jpg" \ -F 'pipeline=[{"toolId":"resize","settings":{"width":800}}]' ``` Each step's output is the next step's input. Up to **20 steps** per pipeline. ### Save and manage pipelines | Method | Path | Description | |--------|------|-------------| | `POST` | `/api/v1/pipeline/save` | Save a named pipeline (`name`, `description`, `steps`) | | `GET` | `/api/v1/pipeline/list` | List saved pipelines (admins see all; users see own) | | `DELETE` | `/api/v1/pipeline/:id` | Delete (owner or admin) | | `GET` | `/api/v1/pipeline/tools` | List tool IDs valid for pipeline steps | ## Progress Tracking Long-running jobs (AI tools, batch, pipelines) emit real-time progress via Server-Sent Events: ```bash # Connect to the SSE stream (jobId returned in X-Job-Id response header) curl -N http://localhost:1349/api/v1/jobs//progress \ -H "Authorization: Bearer " ``` Event format: ``` data: {"progress":42,"status":"processing","message":"Upscaling frame 2/5"} data: {"progress":100,"status":"completed"} ``` ## File Library Persistent file storage with version history. 
| Method | Path | Description | |--------|------|-------------| | `POST` | `/api/v1/upload` | Upload files to workspace | | `GET` | `/api/v1/files` | List saved files (paginated, with search) | | `GET` | `/api/v1/files/:id` | Get file metadata + version chain | | `GET` | `/api/v1/files/:id/download` | Download file | | `GET` | `/api/v1/files/:id/thumbnail` | Get 300px JPEG thumbnail | | `DELETE` | `/api/v1/files/:id` | Delete file (and its version chain) | To auto-save a tool result to the library, include `fileId` in the settings payload referencing an existing library file. The processed result will be saved as a new version. ## API Key Management | Method | Path | Access | Description | |--------|------|--------|-------------| | `POST` | `/api/v1/api-keys` | Auth | Generate new key - shown once | | `GET` | `/api/v1/api-keys` | Auth | List keys (name, id, lastUsedAt - not raw key) | | `DELETE` | `/api/v1/api-keys/:id` | Auth | Delete key | ## Teams | Method | Path | Access | Description | |--------|------|--------|-------------| | `GET` | `/api/v1/teams` | Auth | List teams | | `POST` | `/api/v1/teams` | Admin | Create team | | `PUT` | `/api/v1/teams/:id` | Admin | Rename team | | `DELETE` | `/api/v1/teams/:id` | Admin | Delete team (cannot delete default team or teams with members) | ## Branding | Method | Path | Access | Description | |--------|------|--------|-------------| | `POST` | `/api/v1/branding/logo` | Admin | Upload custom logo (max 500 KB, converted to 128×128 PNG) | | `GET` | `/api/v1/branding/logo` | Public | Serve current logo | | `DELETE` | `/api/v1/branding/logo` | Admin | Remove custom logo | ## Settings Runtime key-value configuration (read by any authenticated user, write by admin only). 
| Method | Path | Description | |--------|------|-------------| | `GET` | `/api/v1/settings` | Get all settings | | `PUT` | `/api/v1/settings/:key` | Set a value | Known keys: `disabledTools` (JSON array of tool IDs), `enableExperimentalTools` (bool string), `loginAttemptLimit` (number), `customLogo` (managed via branding endpoint). ## Error Responses All errors return JSON: ```json { "error": "Human-readable message", "code": "MACHINE_READABLE_CODE" } ``` | Status | Meaning | |--------|---------| | 400 | Invalid request / validation failed | | 401 | Not authenticated | | 403 | Insufficient permissions | | 404 | Resource not found | | 413 | File too large (see `MAX_UPLOAD_SIZE_MB`) | | 429 | Rate limited (see `RATE_LIMIT_PER_MIN`) | | 500 | Internal server error | --- --- url: 'https://ashim-hq.github.io/ashim/guide/translations.md' --- # Translation guide ashim ships with English by default. The i18n system is designed so adding a new language is straightforward. ## How translations work All UI strings live in `packages/shared/src/i18n/`. The reference file is `en.ts`, which exports a typed object with every string the app uses. Other languages are separate files (e.g., `de.ts`, `fr.ts`) that export the same shape. The `TranslationKeys` type is derived from the English file, so TypeScript will catch any missing keys in any translation file. ## Requesting a translation To request a new language or report a mistranslation, open a [GitHub Issue](https://github.com/ashim-hq/ashim/issues) with: * The language name and locale code (e.g., German / `de`) * Any specific strings or sections you want translated * If you have a translation ready, paste the translated strings directly in the issue We do not accept pull requests. Submitting translations via issues is the right path. ## How to create a translation (for your own fork) If you are running a fork and want to add a language yourself: ### 1. 
Copy the reference file

```bash
cp packages/shared/src/i18n/en.ts packages/shared/src/i18n/de.ts
```

### 2. Translate the strings

Open your new file and translate every string value. Keep the object structure and keys exactly the same - only change the values.

```ts
// packages/shared/src/i18n/de.ts
export const de = {
  common: {
    upload: "Vom Computer hochladen",
    process: "Verarbeiten",
    download: "Herunterladen",
    cancel: "Abbrechen",
    // ... translate all entries
  },
  tools: {
    resize: {
      name: "Größe ändern",
      description: "Größe nach Pixeln, Prozent oder Social-Media-Vorgaben ändern",
    },
    // ... translate all tool entries
  },
  // ... translate all sections: settings, auth, pipeline, nav
} as const;
```

Things to keep in mind:

* Do not translate object keys, only values.
* Keep the `as const` assertion at the end.
* If a string is the same in your language (technical terms, proper nouns), leave the English value.

### 3. Export the new language

Edit `packages/shared/src/i18n/index.ts` to include your language:

```ts
export type { TranslationKeys } from "./en.js";
export { en } from "./en.js";
export { de } from "./de.js";
```

### 4. Verify

```bash
pnpm typecheck   # catches missing or mistyped keys
pnpm dev         # manually verify strings appear correctly
```

## Adding new translation keys

When adding a new feature that needs new UI strings:

1. Add the new keys to `packages/shared/src/i18n/en.ts` first. This is the reference file.
2. Run `pnpm typecheck` to make sure all language files still satisfy the `TranslationKeys` type.

## File reference

| File | Purpose |
|------|---------|
| `packages/shared/src/i18n/en.ts` | English strings (reference locale) |
| `packages/shared/src/i18n/index.ts` | Exports all locales and the `TranslationKeys` type |
| `packages/shared/src/constants.ts` | Tool registry (names/descriptions also live here) |
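The type-safety mechanism that makes `pnpm typecheck` catch missing keys can be sketched in a few lines. The shape here is deliberately simplified; the real `en.ts` is far larger:

```typescript
// Simplified sketch of the TranslationKeys mechanism - the real en.ts is far larger.
const en = {
  common: { upload: "Upload from computer", process: "Process" },
} as const;

// Derive the required shape from the reference locale:
// every section and key must exist, every value must be a string.
type TranslationKeys = {
  [Section in keyof typeof en]: { [Key in keyof (typeof en)[Section]]: string };
};

// A translation missing "process" (or with a misspelled key) fails to compile.
const de: TranslationKeys = {
  common: { upload: "Vom Computer hochladen", process: "Verarbeiten" },
};
```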