I’m optimizing for: you can import JPEGs today, see jobs run, and have the plumbing ready for RAW later.

Step 1 — Install the core tools on macOS

1.1 Install Xcode command line tools

Open Terminal:

xcode-select --install

1.2 Install Homebrew (if you don’t already have it)

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

Then ensure brew is on PATH (Homebrew prints the exact command). Usually:

Apple Silicon:

echo 'eval "$(/opt/homebrew/bin/brew shellenv)"' >> ~/.zprofile
eval "$(/opt/homebrew/bin/brew shellenv)"

Intel:

echo 'eval "$(/usr/local/bin/brew shellenv)"' >> ~/.zprofile
eval "$(/usr/local/bin/brew shellenv)"

1.3 Install Git + Node manager + Docker

brew install git fnm
brew install --cask docker

Start Docker Desktop (Applications → Docker). Make sure it says “Running”.

1.4 Install Node.js (LTS) + pnpm

fnm install 20
fnm use 20
node -v

Enable pnpm via Corepack:

corepack enable
pnpm -v

Step 2 — Create the project folder (monorepo)

mkdir kilo && cd kilo
git init

Create a pnpm workspace file:

cat > pnpm-workspace.yaml <<'YAML'
packages:
  - "apps/*"
  - "packages/*"
YAML

Step 3 — Boot your local infrastructure (Postgres + pgvector + MinIO)

3.1 Create docker-compose.yml

cat > docker-compose.yml <<'YAML'
services:
  db:
    image: pgvector/pgvector:pg16
    environment:
      POSTGRES_USER: kilo
      POSTGRES_PASSWORD: kilo
      POSTGRES_DB: kilo
    ports:
      - "5432:5432"
    volumes:
      - kilo_db:/var/lib/postgresql/data

  minio:
    image: minio/minio:latest
    command: server /data --console-address ":9001"
    environment:
      MINIO_ROOT_USER: kilo
      MINIO_ROOT_PASSWORD: kilo-kilo-kilo
    ports:
      - "9000:9000"
      - "9001:9001"
    volumes:
      - kilo_minio:/data

  # Creates a bucket automatically at startup
  minio-init:
    image: minio/mc:latest
    depends_on:
      - minio
    entrypoint: >
      /bin/sh -c "
      until mc alias set local http://minio:9000 kilo kilo-kilo-kilo; do sleep 1; done;
      mc mb -p local/kilo-local || true;
      mc anonymous set download local/kilo-local || true;
      echo 'MinIO bucket ready';
      "

volumes:
  kilo_db:
  kilo_minio:
YAML

3.2 Start infra

docker compose up -d
docker compose ps

You should see db and minio running; minio-init runs once to create the bucket and then exits.

3.3 Open MinIO console (optional)

  • Go to: http://localhost:9001
  • Login:
    • user: kilo
    • password: kilo-kilo-kilo
  • Bucket should exist: kilo-local

Step 4 — Create your database schema (pgvector + tables)

4.1 Create schema.sql

This is the minimum to get moving (you can paste your bigger DDL later).

cat > schema.sql <<'SQL'
create extension if not exists vector;

-- minimal tables to prove ingest -> jobs -> assets

create table if not exists projects (
  id uuid primary key,
  title text not null,
  created_at timestamptz not null default now(),
  updated_at timestamptz not null default now()
);

create table if not exists assets (
  id uuid primary key,
  project_id uuid not null references projects(id) on delete cascade,
  filename text not null,
  ingested_at timestamptz not null default now(),
  flags jsonb not null default '{}'::jsonb
);

create table if not exists asset_files (
  id uuid primary key,
  asset_id uuid not null references assets(id) on delete cascade,
  kind text not null check (kind in ('original','thumbnail','preview','export')),
  storage_url text not null,
  content_type text,
  byte_size bigint,
  created_at timestamptz not null default now(),
  unique(asset_id, kind)
);

create table if not exists jobs (
  id uuid primary key,
  project_id uuid references projects(id) on delete cascade,
  asset_id uuid references assets(id) on delete cascade,
  type text not null,
  status text not null check (status in ('queued','running','done','failed','canceled')),
  progress numeric,
  attempt int not null default 0,
  max_attempts int not null default 3,
  run_after timestamptz,
  priority int not null default 50,
  payload jsonb not null default '{}'::jsonb,
  error text,
  created_at timestamptz not null default now(),
  updated_at timestamptz not null default now()
);

-- partial index so the worker's "next queued job" scan stays cheap
create index if not exists jobs_queue_idx
on jobs(status, priority, created_at)
where status = 'queued';
SQL

4.2 Apply it to Postgres

docker compose exec -T db psql -U kilo -d kilo < schema.sql
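
Optionally, once you have a Node app with pg and tsx installed (Step 5), you can confirm the extension landed from code too. A throwaway script (the file name check-db.ts is my own invention):

// check-db.ts -- run with: pnpm tsx check-db.ts
import pg from "pg";

const pool = new pg.Pool({ connectionString: "postgres://kilo:kilo@localhost:5432/kilo" });
const r = await pool.query("select extname from pg_extension where extname = 'vector'");
console.log(r.rows.length ? "pgvector is installed" : "pgvector is MISSING");
await pool.end();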

Step 5 — Build the API (Fastify) on your Mac

5.1 Create API app

mkdir -p apps/api && cd apps/api
pnpm init

Install deps:

pnpm add fastify pg dotenv zod
pnpm add -D typescript tsx @types/node

Create TypeScript config:

cat > tsconfig.json <<'JSON'
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "ES2022",
    "moduleResolution": "Bundler",
    "strict": true,
    "outDir": "dist",
    "types": ["node"]
  }
}
JSON

5.2 Add an .env

cat > .env <<'ENV'
DATABASE_URL=postgres://kilo:kilo@localhost:5432/kilo
S3_ENDPOINT=http://localhost:9000
S3_ACCESS_KEY=kilo
S3_SECRET_KEY=kilo-kilo-kilo
S3_BUCKET=kilo-local
ENV

5.3 Create src/server.ts

mkdir -p src

cat > src/server.ts <<'TS'
import "dotenv/config";
import Fastify from "fastify";
import pg from "pg";
import { randomUUID } from "crypto";

const app = Fastify({ logger: true });
const pool = new pg.Pool({ connectionString: process.env.DATABASE_URL });

app.get("/health", async () => ({ ok: true }));

// Create a project
app.post("/projects", async (req, reply) => {
  const body = req.body as any;
  const id = randomUUID();
  const title = body?.title ?? "Untitled";
  await pool.query(
    "insert into projects (id, title) values ($1, $2)",
    [id, title]
  );
  reply.code(201);
  return { id, title };
});

// Prepare upload (DEV VERSION): we skip real signing and just record the asset.
// In production: return a signed PUT URL to MinIO/S3.
// Note the double colon: Fastify treats a single ":" as the start of a route
// parameter, so "::" is how you match a literal ":" in the path.
app.post("/projects/:projectId/assets::prepareUpload", async (req, reply) => {
  const { projectId } = req.params as any;
  const body = req.body as any;
  const uploads = (body.files ?? []).map((f: any) => {
    const assetId = randomUUID();
    return { assetId, filename: f.filename, byteSize: f.byteSize, contentType: f.contentType };
  });
  // Create assets rows now
  for (const u of uploads) {
    await pool.query(
      "insert into assets (id, project_id, filename) values ($1, $2, $3)",
      [u.assetId, projectId, u.filename]
    );
  }
  // Fake uploadUrl for now (you'll replace with signed URLs)
  return {
    uploads: uploads.map((u: any) => ({
      clientFileId: null,
      assetId: u.assetId,
      uploadUrl: "http://localhost:9000", // placeholder
      headers: {}
    }))
  };
});

// Finalize upload: enqueue processing jobs
app.post("/projects/:projectId/assets::finalizeUpload", async (req) => {
  const { projectId } = req.params as any;
  const body = req.body as any;
  const queuedJobs: string[] = [];
  for (const a of (body.assets ?? [])) {
    // Create a few pipeline jobs
    for (const type of ["generate_thumbnail", "generate_preview"]) {
      const jobId = randomUUID();
      queuedJobs.push(jobId);
      await pool.query(
        "insert into jobs (id, project_id, asset_id, type, status, payload) values ($1,$2,$3,$4,'queued',$5)",
        [jobId, projectId, a.assetId, type, JSON.stringify({})]
      );
    }
  }
  return { queuedJobs };
});

app.listen({ port: 4000, host: "0.0.0.0" });
TS
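
zod is installed above but not used yet. If you want real validation instead of the `as any` casts, here's a minimal sketch for the prepareUpload body (the schema shape is my guess at the contract; adjust to yours):

import { z } from "zod";

// Hypothetical request shape for assets:prepareUpload.
const PrepareUploadBody = z.object({
  files: z.array(z.object({
    filename: z.string().min(1),
    byteSize: z.number().int().positive(),
    contentType: z.string()
  })).min(1)
});

// Inside the handler, replace `req.body as any` with:
// const parsed = PrepareUploadBody.safeParse(req.body);
// if (!parsed.success) { reply.code(400); return { error: parsed.error.flatten() }; }
// const { files } = parsed.data;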

5.4 Add scripts and run

Edit package.json scripts quickly:

node -e '
const fs=require("fs");
const p=JSON.parse(fs.readFileSync("package.json","utf8"));
p.type="module";
p.scripts={...p.scripts, dev:"tsx watch src/server.ts"};
fs.writeFileSync("package.json", JSON.stringify(p,null,2));
'

Run API:

pnpm dev

Test:

curl http://localhost:4000/health

You should get back {"ok":true}.

Step 6 — Build the Worker (jobs runner) on your Mac

6.1 Create worker app

In a new terminal:

cd ~/kilo
mkdir -p apps/worker && cd apps/worker
pnpm init
pnpm add pg dotenv
pnpm add -D tsx typescript @types/node

Copy .env:

cp ../api/.env .env

6.2 Create src/worker.ts

mkdir -p src

cat > src/worker.ts <<'TS'
import "dotenv/config";
import pg from "pg";
import { setTimeout as sleep } from "timers/promises";

const pool = new pg.Pool({ connectionString: process.env.DATABASE_URL });

// Atomically claim one queued job. "for update skip locked" lets multiple
// workers poll the same table without grabbing the same row or blocking.
async function claimJob() {
  const client = await pool.connect();
  try {
    await client.query("begin");
    const r = await client.query(`
      with next_job as (
        select id
        from jobs
        where status='queued' and (run_after is null or run_after <= now())
        order by priority asc, created_at asc
        for update skip locked
        limit 1
      )
      update jobs
      set status='running', updated_at=now(), attempt=attempt+1, progress=0.01
      where id in (select id from next_job)
      returning *;
    `);
    await client.query("commit");
    return r.rows[0] ?? null;
  } catch (e) {
    await client.query("rollback");
    throw e;
  } finally {
    client.release();
  }
}

async function completeJob(jobId: string) {
  await pool.query("update jobs set status='done', progress=1, updated_at=now() where id=$1", [jobId]);
}

async function failJob(jobId: string, error: string) {
  await pool.query("update jobs set status='failed', error=$2, updated_at=now() where id=$1", [jobId, error]);
}

async function run() {
  while (true) {
    const job = await claimJob();
    if (!job) { await sleep(250); continue; }
    try {
      console.log("RUN", job.type, job.id, "asset", job.asset_id);
      // Fake processing: just wait a bit
      await pool.query("update jobs set progress=0.5, updated_at=now() where id=$1", [job.id]);
      await sleep(300);
      // Mark flags (simulate thumbnail/preview ready)
      if (job.type === "generate_thumbnail") {
        await pool.query("update assets set flags = flags || $2::jsonb where id=$1", [job.asset_id, JSON.stringify({ thumbnailReady: true })]);
      }
      if (job.type === "generate_preview") {
        await pool.query("update assets set flags = flags || $2::jsonb where id=$1", [job.asset_id, JSON.stringify({ previewReady: true })]);
      }
      await completeJob(job.id);
    } catch (e: any) {
      await failJob(job.id, String(e?.message ?? e));
    }
  }
}

run().catch((e) => {
  console.error(e);
  process.exit(1);
});
TS
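
Note that failJob above gives up on the first error, even though the schema carries attempt, max_attempts, and run_after. When you want retries, a drop-in replacement could look like this (the exponential backoff policy is my own choice, not something the schema dictates):

// Re-queue with backoff until the attempt counter (already incremented by
// claimJob) reaches max_attempts; only then mark the job as failed for good.
async function failJob(job: { id: string; attempt: number; max_attempts: number }, error: string) {
  if (job.attempt < job.max_attempts) {
    const delaySeconds = 2 ** job.attempt; // 2s, 4s, 8s, ...
    await pool.query(
      "update jobs set status='queued', error=$2, run_after=now() + make_interval(secs => $3), updated_at=now() where id=$1",
      [job.id, error, delaySeconds]
    );
  } else {
    await pool.query(
      "update jobs set status='failed', error=$2, updated_at=now() where id=$1",
      [job.id, error]
    );
  }
}

The run loop would then pass the whole job row instead of just job.id.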

Add package.json scripts:

node -e '
const fs=require("fs");
const p=JSON.parse(fs.readFileSync("package.json","utf8"));
p.type="module";
p.scripts={...p.scripts, dev:"tsx watch src/worker.ts"};
fs.writeFileSync("package.json", JSON.stringify(p,null,2));
'

Run worker:

pnpm dev

Step 7 — Prove the loop works (Create project → enqueue jobs → worker runs)

7.1 Create a project

New terminal:

curl -s -X POST http://localhost:4000/projects \
  -H 'Content-Type: application/json' \
  -d '{"title":"Test Shoot"}'

Copy the returned id (projectId).

7.2 Prepare upload (fake)

curl -s -X POST http://localhost:4000/projects/<PROJECT_ID>/assets:prepareUpload \
  -H 'Content-Type: application/json' \
  -d '{"files":[{"filename":"IMG_0001.jpg","byteSize":123456,"contentType":"image/jpeg"}]}'

Copy the returned assetId.

7.3 Finalize upload (this enqueues jobs)

curl -s -X POST http://localhost:4000/projects/<PROJECT_ID>/assets:finalizeUpload \
  -H 'Content-Type: application/json' \
  -d '{"assets":[{"assetId":"<ASSET_ID>","checksumSha256":"dev"}]}'

You’ll see the worker terminal printing jobs running.

7.4 Check DB flags changed (thumbnailReady/previewReady)

docker compose exec -T db psql -U kilo -d kilo -c "select id, flags from assets;"

If you see thumbnailReady / previewReady, you’ve got the ingest → jobs → worker pipeline alive.
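
If you'd rather watch the queue from code than re-run psql, a throwaway polling script works too (the file name watch-jobs.ts is my own; run it with pnpm tsx watch-jobs.ts from apps/worker):

// watch-jobs.ts: print per-type/status counts from the jobs table every second.
import pg from "pg";

const pool = new pg.Pool({ connectionString: "postgres://kilo:kilo@localhost:5432/kilo" });
setInterval(async () => {
  const r = await pool.query(
    "select type, status, count(*)::int as n from jobs group by 1, 2 order by 1, 2"
  );
  console.table(r.rows);
}, 1000);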

Step 8 — Next.js web app (optional, but fun)

If you want a UI now:

cd ~/kilo
mkdir -p apps/web
pnpm create next-app apps/web --ts --eslint --app --no-tailwind
cd apps/web
pnpm dev

Then hit: http://localhost:3000

(Next step would be: call http://localhost:4000/projects etc from the web app.)
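
A minimal sketch of that wiring, assuming the App Router layout create-next-app just generated (apps/web/app/page.tsx), fetching the API's health endpoint from a server component:

// apps/web/app/page.tsx
export default async function Home() {
  // "no-store" so the dev page always hits the live API instead of a cached response
  const res = await fetch("http://localhost:4000/health", { cache: "no-store" });
  const health = await res.json();
  return <pre>{JSON.stringify(health, null, 2)}</pre>;
}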

Step 9 — Upgrade to REAL signed uploads (MinIO S3) on Mac

When you’re ready to go from “fake uploadUrl” → real direct-to-MinIO upload:

Minimum plan

  • API prepareUpload uses AWS S3 signing (compatible with MinIO; see the sketch after this list)
  • Client PUTs file to signed URL
  • API finalizeUpload verifies checksum (optional initially) and creates asset_files(kind=original) with storage_url="s3://kilo-local/<key>"

If you say “do it”, I’ll drop you:

  • exact Node signing code (AWS SDK v3)
  • exact MinIO endpoint config
  • the client upload helper (already drafted)
  • and the DB writes for asset_files
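
Until then, here's roughly what the signing piece looks like with AWS SDK v3 against the MinIO config from .env. Treat it as a sketch to adapt, not the final code (pnpm add @aws-sdk/client-s3 @aws-sdk/s3-request-presigner first):

import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const s3 = new S3Client({
  endpoint: process.env.S3_ENDPOINT,   // http://localhost:9000
  region: "us-east-1",                 // MinIO ignores the region, but the SDK requires one
  forcePathStyle: true,                // MinIO needs path-style URLs, not virtual-hosted
  credentials: {
    accessKeyId: process.env.S3_ACCESS_KEY!,
    secretAccessKey: process.env.S3_SECRET_KEY!,
  },
});

// Returns a URL the client can PUT the raw file bytes to for the next 15 minutes.
export async function signUpload(key: string, contentType: string) {
  const cmd = new PutObjectCommand({
    Bucket: process.env.S3_BUCKET,
    Key: key,
    ContentType: contentType,
  });
  return getSignedUrl(s3, cmd, { expiresIn: 900 });
}

prepareUpload would then call something like signUpload(`projects/${projectId}/${assetId}/original`, contentType) and return that URL instead of the placeholder.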

Quick Mac debugging cheats (you will use these)

Compute SHA-256:

shasum -a 256 /path/to/file.jpg | awk '{print $1}'
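
Or the same from Node, if that's handier (a quick sketch; pass the file path as an argument):

// sha256.ts: print the SHA-256 of the file given on the command line.
import { createHash } from "crypto";
import { createReadStream } from "fs";

const hash = createHash("sha256");
createReadStream(process.argv[2])
  .on("data", (chunk) => hash.update(chunk))
  .on("end", () => console.log(hash.digest("hex")));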

View running containers:

docker compose ps

Tail API logs:

The Fastify logger ({ logger: true }) already prints request logs to the terminal where you ran pnpm dev; there's nothing separate to tail.

Reset everything (nuclear):

docker compose down -v

If you tell me which path you want next, I’ll slam the next layer in instantly:

A) “Real signed uploads to MinIO”

B) “Add embeddings + pgvector search”

C) “Implement clustering into moments”

D) “Build the culling UI (stacks + winner + keyboard)”