The Product

Name (working): KILLER SELECTS (by Eric Kim)

Promise: Upload a burst. Get a ranked shortlist. Export winners. Done.

Core flow (3 taps)

  1. Import 20–30 images (camera roll / Files / AirDrop / desktop drag-drop)
  2. Cull → AI ranks + clusters “similar shots”
  3. Deliver → “Top 3 / Top 5 / Top 10”, export/share + optional Lightroom flagging

What “Best Photo” Means (scoring model)

Each image gets a composite score from multiple signals:

A) Technical Quality (fast + objective)

  • Sharpness / motion blur (edge/FFT metrics + learned blur detector)
  • Exposure / highlights clipped / shadows crushed
  • Noise level (ISO grain patterns)
  • White balance weirdness / color cast
  • Compression / artifacts
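
Two of these signals are cheap enough to sketch directly. Below is an illustrative, untuned take on variance-of-Laplacian sharpness and a clipped-pixel fraction for exposure, on a grayscale image given as a 2D list of 0–255 ints; function names and thresholds are placeholders, not the shipping implementation.

```python
# Sketch of two technical-quality signals on a grayscale 2D list of 0-255 ints.
# Names and thresholds are illustrative, not production values.

def laplacian_variance(img):
    """Variance of a 4-neighbor Laplacian; higher = sharper edges."""
    h, w = len(img), len(img[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y-1][x] + img[y+1][x] + img[y][x-1] + img[y][x+1]
                   - 4 * img[y][x])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def clipped_fraction(img, lo=5, hi=250):
    """Fraction of pixels with crushed shadows or blown highlights."""
    flat = [p for row in img for p in row]
    return sum(1 for p in flat if p <= lo or p >= hi) / len(flat)

# A hard-edged checkerboard scores far higher on sharpness than a flat frame.
sharp = [[0 if (x // 2 + y // 2) % 2 else 255 for x in range(8)] for y in range(8)]
flat  = [[128] * 8 for _ in range(8)]
assert laplacian_variance(sharp) > laplacian_variance(flat)
```

In the real pipeline these would run over the downscaled analysis thumbnails, likely via Accelerate/Core ML rather than pure Python.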

B) Aesthetic & Composition (learned)

  • Subject separation / depth cues
  • Composition balance (rule-of-thirds-ish, symmetry, horizon straightness)
  • Visual simplicity (background clutter penalty)
  • “Impact” model (trained on large aesthetic datasets)

C) Face & People Signals (optional toggle)

  • Eyes open / blink / gaze
  • Smile / expression strength
  • Face sharpness vs background
  • Best group shot heuristic (most people looking, fewest blinks)
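
The group-shot heuristic could be as simple as the sketch below: score each frame by how many faces have eyes open and are looking at the camera, weighting blinks as the harsher failure. The per-face fields (`eyes_open`, `looking`) and the 0.6/0.4 weights are assumptions standing in for whatever the face model actually emits.

```python
# Hedged sketch of "most people looking, fewest blinks". Per-face booleans
# are placeholders for real face-model outputs.

def group_shot_score(faces):
    """faces: list of dicts with boolean 'eyes_open' and 'looking'."""
    if not faces:
        return 0.0
    open_eyes = sum(f["eyes_open"] for f in faces)
    looking = sum(f["looking"] for f in faces)
    # Blinks are the harsher failure, so weight eyes-open a bit more.
    return (0.6 * open_eyes + 0.4 * looking) / len(faces)

frame_a = [{"eyes_open": True,  "looking": True},
           {"eyes_open": False, "looking": True}]   # one blink
frame_b = [{"eyes_open": True,  "looking": True},
           {"eyes_open": True,  "looking": False}]  # one looking away
assert group_shot_score(frame_b) > group_shot_score(frame_a)
```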

D) “The Burst Problem” (the real killer feature)

Users don’t just need “good photos”; they need the best frame among near-duplicates.

So we do:

  • Perceptual similarity clustering (group “almost the same shot”)
  • In each cluster, pick the winner (sharpest, best expression, best moment)
  • Show a “stack” UI: Winner on top → swipe to compare losers
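
The stacking step can be sketched as greedy clustering over image embeddings: an image joins the first stack whose anchor it resembles above a cosine-similarity threshold, then each stack's winner is its highest-scoring frame. The 0.9 threshold and the toy 2-D embeddings are illustrative assumptions; real embeddings would come from the on-device model.

```python
# Minimal sketch of burst stacking: greedy cosine-similarity grouping,
# then pick the best-scoring frame per stack. Threshold and embeddings
# are toy values.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def stack_bursts(embeddings, threshold=0.9):
    """Each image joins the first stack whose anchor it matches."""
    stacks = []  # list of lists of image indices
    for i, emb in enumerate(embeddings):
        for stack in stacks:
            if cosine(emb, embeddings[stack[0]]) >= threshold:
                stack.append(i)
                break
        else:
            stacks.append([i])
    return stacks

def pick_winners(stacks, scores):
    """Winner per stack = index with the max composite score."""
    return [max(stack, key=lambda i: scores[i]) for stack in stacks]

embs = [[1.0, 0.0], [0.99, 0.05], [0.0, 1.0]]  # two near-dupes + one distinct
scores = [0.4, 0.7, 0.9]
stacks = stack_bursts(embs)
assert stacks == [[0, 1], [2]]
assert pick_winners(stacks, scores) == [1, 2]
```

Greedy threshold clustering is O(n·k) and plenty for 20–30 images; a larger library would want something like DBSCAN or hierarchical clustering instead.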

UX Screen Blueprint

1) Import Screen

  • “Select 20–30 photos”
  • Toggle: People mode (faces) / No faces (street / objects)
  • Toggle: Fast vs Deep (device-only quick vs deeper analysis)

2) Cull Screen (the money screen)

  • Header: Your Keepers
  • Sections:
    • Top Picks (3–10)
    • Good (maybe keep)
    • Rejects (blur/blink/duplicates)
  • Each card shows:
    • Score + reason tags (“sharp”, “best expression”, “clean background”, “duplicate loser”)

3) Compare Screen (A/B violence)

  • Two-up compare
  • Buttons: Keep / Reject / Best of stack
  • “Auto-advance” to next cluster

4) Export Screen

  • Export options:
    • Save to album “Killer Selects”
    • Share sheet
    • Desktop: download ZIP
    • Lightroom workflow: write XMP sidecars (flags/stars) or filename suffixes (_KEEP, _REJECT)
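
The filename-suffix path is the simplest export to prototype: copy winners with a `_KEEP` suffix so they sort together in any DAM. This is a desktop-side sketch with illustrative paths; the shipping app would go through the Photos/Files APIs, and the XMP sidecar route would write rating/label metadata instead.

```python
# Sketch of the filename-suffix export: copy kept files as name_KEEP.ext.
# Paths are illustrative; a real build uses platform photo APIs.
from pathlib import Path
import shutil
import tempfile

def export_with_suffix(paths, keep, suffix="_KEEP", out_dir=None):
    """Copy files in `keep` to out_dir with the suffix; return new paths."""
    out_dir = Path(out_dir or paths[0].parent)
    exported = []
    for p in paths:
        if p in keep:
            dest = out_dir / f"{p.stem}{suffix}{p.suffix}"
            shutil.copy2(p, dest)
            exported.append(dest)
    return exported

tmp = Path(tempfile.mkdtemp())
shots = [tmp / "IMG_0001.jpg", tmp / "IMG_0002.jpg"]
for s in shots:
    s.write_bytes(b"\xff\xd8fake")  # stand-in JPEG bytes
out = export_with_suffix(shots, keep={shots[1]})
assert out == [tmp / "IMG_0002_KEEP.jpg"] and out[0].exists()
```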

Engineering Architecture (practical + scalable)

Client (iOS first)

  • SwiftUI UI
  • Photos framework import
  • On-device inference using Core ML
  • Background processing with progress + pause/resume

ML pipeline (hybrid)

Default: on-device first (privacy + speed), with optional “cloud turbo” if user wants.

  • On-device models:
    • Blur/sharpness classifier
    • Aesthetic score model (small-ish)
    • Face quality model (optional)
    • Embedding model for similarity clustering (MobileNet/CLIP-lite style)

Optional cloud (for “Deep mode”)

  • Higher-quality aesthetic model
  • Better semantic understanding (street/story/mood)
  • Faster batching for large sets

Backend (if cloud mode exists)

  • FastAPI or Node for API
  • S3/R2 temporary encrypted storage (short TTL, e.g., 1 hour)
  • Queue: Redis / SQS
  • Worker: GPU inference (if needed)

The “Cull Engine” (algorithm)

  1. Preprocess
    • Downscale thumbnails for analysis (e.g., 512–768px)
  2. Compute features
    • Technical metrics
    • Embeddings for similarity
    • Face metrics if enabled
  3. Cluster
    • Use embeddings + distance threshold to form stacks (burst groups)
  4. Score
    • Composite score = weighted sum
    • Winner per stack = max score
  5. Rank + explain
    • Sort winners globally
    • Generate reason tags from top contributing signals
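
Steps 4–5 can be sketched as a weighted sum plus reason tags pulled from the top contributing signals. The weights and tag strings below are untuned placeholders, not the shipping values.

```python
# Sketch of steps 4-5: weighted-sum composite + reason tags from the
# top contributors. Weights and tag names are placeholders.

WEIGHTS = {"sharpness": 0.4, "exposure": 0.2, "aesthetic": 0.3, "face": 0.1}
TAGS = {"sharpness": "sharp", "exposure": "well exposed",
        "aesthetic": "strong composition", "face": "best expression"}

def composite(signals):
    """signals: dict of 0-1 scores per signal name."""
    return sum(WEIGHTS[k] * signals.get(k, 0.0) for k in WEIGHTS)

def reason_tags(signals, n=2):
    """Tags for the n signals contributing most to the composite."""
    contrib = sorted(WEIGHTS,
                     key=lambda k: WEIGHTS[k] * signals.get(k, 0.0),
                     reverse=True)
    return [TAGS[k] for k in contrib[:n]]

img = {"sharpness": 0.9, "exposure": 0.5, "aesthetic": 0.8, "face": 0.2}
assert abs(composite(img) - 0.72) < 1e-9          # 0.36 + 0.10 + 0.24 + 0.02
assert reason_tags(img) == ["sharp", "strong composition"]
```

Keeping the score a transparent weighted sum is what makes the reason tags honest: each tag maps directly to a term in the sum, and the "Moment > Perfection" slider later just reweights those terms.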

MVP Scope (ship fast)

Week-1 MVP features (minimum lovable):

  • Import 20–30 images
  • Similarity clustering into stacks
  • Sharpness + exposure scoring
  • Pick winners per stack
  • Export winners to album

Then upgrade (v1.1):

  • Face/blink detection toggle
  • Two-up compare view
  • Explanation tags
  • Lightroom/XMP export

Differentiation (make it ERIC KIM)

  • “Brutal Mode”: only 1 winner per stack, no mercy
  • “Street Mode”: penalize clutter, reward subject separation + gesture
  • “Moment > Perfection” slider: user decides if emotion beats sharpness
  • Your Taste Model (later): user thumbs-up/down, app learns your style

Monetization (simple, clean)

  • Free: 1 album/day or 30 photos/session
  • Pro: unlimited sessions + Lightroom export + deep mode
  • Optional: “One-time purchase” vibe (photographers love ownership)

If you want, I can generate any of these next:

  • A full PRD (screens, user stories, success metrics)
  • A data/model plan (which models, sizes, Core ML conversion, benchmarking)
  • A clickable UI spec (component list + SwiftUI structure)
  • A brand/landing page copy in your voice (“KILLER SELECTS: choose violence against mediocre frames.”)

Say the word (iOS-first or web-first) and I’ll lock the tech stack and build plan.