Background: What Is Nelsoncon?
Nelsoncon is a personal convention, an annual multi-day event among a close group of friends. Think less "Comic-Con", more "we rented a house and now we're running tournaments, trivia nights, and talent shows for a week." By the eighth year, the event had grown structured enough that it warranted its own infrastructure: a dedicated app to track who did what, award points, and crown a champion at the end.
The result is Nelco Achievements: a web app that handles user authentication, an achievement system with rarity tiers and photo claims, house-based team competition, event management, a dual leaderboard (individual + house), notifications, and an admin panel to manage all of it.
This post covers every decision that shaped it, from infrastructure selection through database design through code-level architecture.
The Problem
Before the app existed, organizing Nelsoncon's competitive elements was all he-said/she-said. Win-loss records lived in everyone's own head, and everyone seemed to remember a different one. On top of that, actually getting everyone to show up for large team events, like volleyball, was a herculean feat. Organizing the time and place, getting everyone to agree or at least acknowledge it, and then actually wrangling the cats there remained a perennial pain point.
Core problems to solve:
- No centralized record of participant achievements or scores
- No self-service way for attendees to see standings in real time
- Admin burden - all score updates required someone manually editing a spreadsheet
- No history - after the event, the data largely evaporated
On top of the "problems" to solve, I also wanted to make sure attendees were getting as much out of the event as possible, and thus the idea of "achievements" was born. We were already a gamer-centric group, so achievements felt like a natural fit from the start. They act as an additional motivator to do all of the cool things that make "Nelsoncon" Nelsoncon. Creating meaningful achievements was a task in its own right, one repeated and re-evaluated every year, but that is a separate, and less technically interesting, discussion.
Requirements
Functional:
- Private/members only access to the app
- Email/password and select SSO auth options
- Achievement catalog with categories, point values, and rarity tiers
- Manual achievement awards (admin) and self-service claims (user + photo evidence)
- Automated achievements that trigger on user actions (profile setup, house join, etc.)
- Houses: semi-permanent groups that accumulate points from all members
- Events: individual and team competitions with configurable point rewards
- Leaderboards: individual scores + collective house scores
- In-app notifications for claim approvals/rejections and events
- Activity feed for app-wide achievement and event result announcements
- Admin panel: user management, achievement management, event management, claims queue
- Interactive tutorial for first-time users
Non-functional:
- Only actively used for one week → traffic spike is short, predictable, and small (~40 users)
- Must be optimized for mobile: last year's metrics show attendees primarily access the app on phones
- Low operational overhead: nobody wants to babysit a server during a convention
- Cost should be zero in the off-season (11 months of essentially zero traffic)
Infrastructure Selection
The Core Constraint: Minimal Ops, Near-Zero Cost
A convention app has a strange traffic profile: zero usage for most of the year, then around 40 concurrent users hammering it for a week. That rules out anything requiring provisioned capacity. The infrastructure needs to either be serverless or generously free-tiered.
Options considered:
| Option | Auth | DB | Storage | Cost at Idle | Verdict |
|---|---|---|---|---|---|
| Firebase | ✅ | Firestore | ✅ | ~Free | NoSQL, no RLS, query limitations |
| PlanetScale + Auth0 + S3 | Custom | MySQL | ✅ | $0 + $0 + $0.02 | Too many moving parts |
| Railway + Clerk | Custom | PostgreSQL | No native | ~$5/mo | Viable but not free at idle |
| DO Droplet | KeyCloak | MySQL/PostgreSQL | ✅ | ~$5/mo | Too much setup and config when simpler options exist |
| Supabase | ✅ | PostgreSQL | ✅ | Free tier or $20/mo | Selected |
Supabase won because it bundles everything needed into one platform: PostgreSQL with Row Level Security, built-in auth (email/password, magic link, OAuth), file storage, and auto-generated TypeScript types from the schema. The free tier covers far more than a convention app with 40 users will ever need. A single platform means one integration, one SDK, one dashboard.
Hosting: Vercel
Next.js on Vercel is the obvious pairing - zero config, automatic preview deployments, serverless API routes, edge middleware. I already have multiple other projects hosted on Vercel, so there isn't really an additional cost to me, though the project would fit comfortably within a free-tier project.
Frontend Framework: Next.js App Router
Chosen over plain React SPA for:
- Server Components reduce client bundle size
- API routes live in the same repo (no separate backend)
- Middleware runs at the edge for auth redirects before any page renders
React 19 + Next.js 16, both cutting-edge at build time.
UI: Mantine 8 + Tailwind CSS 4
Mantine provides the heavy component primitives (modals, tabs, notifications, drawers) so custom components don't need to be built from scratch. Tailwind handles spacing, layout, and utility classes. The two coexist without friction.
Package Manager: pnpm
Faster installs, strict dependency isolation, disk-efficient. No meaningful trade-off at this project size; I just generally prefer it at the moment.
Database Design
Migration-Based Schema Evolution
The schema is managed as sequential SQL migrations in supabase/migrations/. 26 migrations cover the full lifecycle from initial tables to the final notification system. Each migration is a forward-only SQL file. No down migrations, since the project doesn't need rollback in the traditional sense; broken migrations get fixed with a follow-up migration.
Migration naming is numeric and descriptive:
```
001_initial_schema.sql
002_set_admin_default.sql
...
009_fix_rls_recursion.sql
...
026_tutorial_seen.sql
```
This gives a full audit trail of schema changes and a reproducible initial state.
Core Tables
```
users — attendee profiles, auth mirror, approval status
achievements — achievement catalog (name, description, points, category, rarity)
user_achievements — join table (who earned what)
achievement_claims — claim queue (pending photo evidence review)
houses — convention teams with color/icon
house_members — user → house assignment (1:1)
events — individual or team competitions
event_placements — configurable point tiers per event
event_teams — ad-hoc teams formed per event
event_team_members — user → team membership
event_results — final placement records
notifications — per-user inbox
activity_feed — append-only public event log
app_settings — key/value config table
```
Scoreboard as a View
Rather than maintaining a denormalized points column on users (which would require triggers on every achievement award and event result), the scoreboard is a PostgreSQL view that aggregates on read:
```sql
CREATE VIEW public.scoreboard AS
SELECT
  users.id,
  users.user_name,
  users.created_at,
  COALESCE(SUM(achievements.points), 0) AS achievement_points,
  COALESCE((
    SELECT SUM(evr.points_awarded)
    FROM event_results evr
    WHERE evr.user_id = users.id
  ), 0) AS event_points,
  -- total_points = achievement_points + event_points
  ...
FROM users
LEFT JOIN user_achievements ON users.id = user_achievements.user_id
LEFT JOIN achievements ON user_achievements.achievement_id = achievements.id
GROUP BY users.id, users.user_name, users.created_at
ORDER BY total_points DESC;
```
At 40 users it's fast enough. If this ever needed to scale, materialized views or a denormalized cache would be worth considering.
The house leaderboard works the same way. house_scoreboard is a view that JOINs houses → house_members → scoreboard, aggregating member points per house.
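The aggregation the house view performs is simple enough to sketch in application code. This is an illustrative TypeScript sketch of the same logic (the names `computeHouseScores`, `ScoreboardRow`, and `HouseMember` are hypothetical; the real implementation is the SQL view):

```typescript
// Hypothetical sketch of what the house_scoreboard view computes:
// sum each member's individual total into their house's total.
interface ScoreboardRow { userId: string; totalPoints: number; }
interface HouseMember { userId: string; houseId: string; }

function computeHouseScores(
  members: HouseMember[],
  scoreboard: ScoreboardRow[],
): Map<string, number> {
  // Index individual totals by user for O(1) lookup.
  const byUser = new Map(scoreboard.map((r) => [r.userId, r.totalPoints]));
  const houseTotals = new Map<string, number>();
  for (const m of members) {
    const points = byUser.get(m.userId) ?? 0;
    houseTotals.set(m.houseId, (houseTotals.get(m.houseId) ?? 0) + points);
  }
  return houseTotals;
}
```

Because the house score is derived entirely from the individual scoreboard, there is no second source of truth to keep in sync.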
Row Level Security: The Recursive Policy Problem
Supabase enforces data access at the database layer via PostgreSQL RLS. The standard pattern for "only admins can write" involves a subquery: EXISTS (SELECT 1 FROM users WHERE users.id = auth.uid() AND users.is_admin = true).
This works fine on most tables. It fails catastrophically on the users table itself. The policy evaluating access to users queries users, triggering the policy again infinitely (Postgres error 42P17: infinite recursion).
Migration 09 solves this with SECURITY DEFINER helper functions:
```sql
CREATE OR REPLACE FUNCTION public.current_user_is_admin()
RETURNS boolean AS $$
  SELECT COALESCE(
    (SELECT is_admin FROM public.users WHERE id = auth.uid()),
    false
  );
$$ LANGUAGE sql SECURITY DEFINER STABLE SET search_path = public;
```
SECURITY DEFINER means the function runs as the function owner (a privileged role), not the calling user. The function reads users with elevated privileges, the RLS policy on users uses the function result, and no cycle forms. The security model is unchanged: the function only queries WHERE id = auth.uid(), so no user can read another user's admin status through the backdoor.
Lesson: RLS self-referential policies require SECURITY DEFINER helper functions. This is a known Supabase pattern, but easy to miss until you hit the error.
Auth & Approval Flow
Nelsoncon is invite-only. Open registration would mean random people can sign up and see the leaderboard. The solution is a two-stage flow:
- User signs up via email/password → Supabase Auth creates an auth.users row → a database trigger fires handle_new_user() which inserts into public.users with status = 'pending'
- User lands on /pending - a holding page explaining they're awaiting approval
- An admin approves the user via the admin panel → PATCH /api/admin/users sets status = 'approved'
- User can now access all app routes
Middleware enforces this at the edge. Every request to a non-public route checks users.status. Non-approved users are redirected to /pending regardless of which URL they try to access. The check uses supabase.auth.getClaims() (not getUser()) because it reads from the JWT without a network round-trip, which matters for middleware performance.
Public routes exempt from checks: /, /auth/*, /pending.
Fail-closed behavior: if the status query errors or returns no row, the user is treated as non-approved and blocked. This prevents a DB error from accidentally granting access.
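The middleware's routing decision reduces to a pure function. A sketch of that decision logic (the name `resolveRedirect` and its signature are hypothetical, not the actual middleware code):

```typescript
type UserStatus = "pending" | "approved" | null;

const PUBLIC_ROUTES = ["/", "/pending"]; // plus /auth/* handled below

// Returns the path to redirect to, or null to allow the request through.
// Fail-closed: a query error or missing row is treated as non-approved.
function resolveRedirect(
  pathname: string,
  status: UserStatus,
  queryErrored: boolean,
): string | null {
  const isPublic =
    PUBLIC_ROUTES.includes(pathname) || pathname.startsWith("/auth/");
  if (isPublic) return null;
  if (queryErrored || status !== "approved") return "/pending";
  return null;
}
```

Keeping the decision pure like this also makes the unauthenticated / pending / approved / DB-error paths trivial to unit test.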
Code Architecture
The Database Layer: lib/database.ts
All Supabase queries live in lib/database.ts. Components and API routes import typed helper functions; they never call the Supabase client directly. This provides:
- A single place to find all queries
- Consistent error handling patterns
- Easy mocking in tests
- No scattered Supabase calls to audit during security reviews
```typescript
export async function getUserAchievements(userId: string): Promise<AchievementWithEarnedStatus[]> {
  const supabase = await createClient();
  // ... single function, all the logic in one place
}
```
Server vs. Client Supabase Clients
Two separate Supabase client factories:
- lib/supabase/server.ts: creates a server-side client using cookies, for API routes and Server Components. Never exposed to the browser.
- lib/supabase/client.ts: creates a browser client for Client Components that need real-time auth state.
This distinction matters because Next.js App Router mixes server and client rendering. Using the wrong client in the wrong context causes session sync bugs (users randomly logged out, stale auth state).
Data Fetching: Server Components + SWR
The app mixes two data fetching strategies depending on the rendering context.
Server Components handle the initial page load (leaderboard, achievement catalog, event list). Data is fetched on the server, serialized, and sent as rendered HTML. No client JavaScript needed, no loading state on first paint.
SWR takes over for interactive client components that need to reflect state changes without a full navigation. The main use cases:
- AchievementsTab: fetches the achievement catalog, the user's earned achievements, and their pending claims. Three independent useSWR calls that compose into a single view. When a claim is submitted, mutate() refreshes the claims key without reloading the page.
- LeaderboardTab: fetches scores and the current user's house assignment. Needs to stay fresh if another user's score changes mid-session.
- ClaimsQueue (in admin): polls for pending claims so the admin panel updates without a refresh.
The rule of thumb: if a component lives in a tab or drawer and its data can change while the user is looking at it, SWR. If it's a top-level page that only changes on navigation, Server Component.
API Route Structure
```
app/api/
├── achievements/   — public achievement catalog + claims
│   └── claims/
│       └── photo/  — photo upload endpoint
├── activity/       — activity feed
├── admin/          — admin-only endpoints
│   ├── achievements/
│   ├── claims/
│   ├── events/
│   ├── houses/
│   ├── settings/
│   ├── update-achievements/
│   └── users/
├── events/         — event browsing + registration + teams
├── house/          — house data + banner
├── leaderboard/
├── notifications/
├── teams/
└── users/
```
Every route under app/api/admin/ calls requireAdminAuth() from lib/admin-auth.ts at the top of the handler. The RLS policy on the DB would catch unauthorized admin writes, but the API check gives a cleaner error and prevents unnecessary DB round-trips.
Achievement Rarity System
Achievements have point values from negative (cursed) to 10+. Point value maps to rarity tier:
```typescript
export function getRarityTier(points: number): RarityTier {
  if (points < 0) return "cursed";
  if (points >= 10) return "mythic";
  if (points >= 8) return "legendary";
  if (points >= 6) return "epic";
  if (points >= 4) return "rare";
  if (points >= 2) return "uncommon";
  return "common";
}
```
Tiers: common → uncommon → rare → epic → legendary → mythic, plus "cursed" for penalty achievements. Each tier has a distinct color and glow effect on the achievement card. Legendary and cursed achievements have ambient CSS glow animations.
Event Point System
Events have configurable point tiers. Rather than hardcode point values per event, a POINT_TABLE in lib/event-points.ts maps event tier × team size bracket to placement rows:
```typescript
const POINT_TABLE: Record<EventTier, Record<TeamSizeBracket, PlacementRow[]>> = {
  showcase: {
    '1-2': [{ placement: 1, label: '1st Place', points_awarded: 10 }, ...],
    '3-5': [...],
    '6+': [...],
  },
  competitive: { ... },
  casual: { ... },
};
```
Three event tiers (showcase, competitive, casual) × three team size brackets (1-2, 3-5, 6+) = 9 point tables. Larger competitions with more teams award fewer points per placement to prevent events from dominating the achievement-based leaderboard.
When an admin creates an event, they select tier and bracket and the UI pre-populates the placement rows. The placements are then stored in event_placements. The point table is a one-time source of truth, not the ongoing record.
The reasoning here is to avoid an imbalance between event points and achievement points. We want attendees to feel they have individual agency while their team contributions still matter. Multiple placement tables balance agency against luck and appropriately reward skill and effort.
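The lookup over the table is a straightforward tier × bracket index. A self-contained sketch with illustrative values (the real POINT_TABLE and its point values live in lib/event-points.ts; `getPlacementRows` is a hypothetical helper name):

```typescript
type EventTier = "showcase" | "competitive" | "casual";
type TeamSizeBracket = "1-2" | "3-5" | "6+";
interface PlacementRow { placement: number; label: string; points_awarded: number; }

// Illustrative subset only - values here are made up for the example.
const POINT_TABLE: Partial<Record<EventTier, Partial<Record<TeamSizeBracket, PlacementRow[]>>>> = {
  showcase: {
    "1-2": [
      { placement: 1, label: "1st Place", points_awarded: 10 },
      { placement: 2, label: "2nd Place", points_awarded: 6 },
    ],
  },
};

// Resolve the pre-populated placement rows for an admin's tier/bracket choice.
function getPlacementRows(tier: EventTier, bracket: TeamSizeBracket): PlacementRow[] {
  return POINT_TABLE[tier]?.[bracket] ?? [];
}
```

The UI calls something like this when the admin picks a tier and bracket, then copies the resulting rows into event_placements.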
Automated Achievements
Some achievements award themselves based on app state (signed up before event start, uploaded avatar, joined a house, etc.). The automation system uses a checker registry pattern:
```typescript
export const CHECKERS: Record<string, (ctx: CheckContext) => boolean> = {
  beta_tester: (ctx) => ctx.eventStartDate !== null && ctx.userCreatedAt < ctx.eventStartDate,
  hello_world: (ctx) => ctx.eventStartDate !== null && ctx.userCreatedAt >= ctx.eventStartDate,
  joined_house: (ctx) => ctx.hasHouse,
  uploaded_avatar: (ctx) => ctx.hasAvatar,
  team_player: (ctx) => ctx.hasTeamMembership,
  early_bird: (ctx) => ctx.isEarlyBird,
};
```
autoAwardAchievements(userId) is called by API routes after state-changing actions (profile update, house assignment, etc.). It:
- Fetches all achievements with is_automated = true
- Checks which the user hasn't earned yet (idempotency guard)
- Builds a CheckContext via parallel DB queries
- Runs each unearned achievement's checker against the context
- Inserts any newly eligible achievements
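The core of that pass, stripped of the DB calls, is a pure filter over the registry. A minimal sketch (the `newlyEligible` helper and the reduced `CheckContext` shape are illustrative, not the actual implementation):

```typescript
interface CheckContext {
  hasHouse: boolean;
  hasAvatar: boolean;
}

type Checker = (ctx: CheckContext) => boolean;

// Reduced subset of the checker registry, for illustration.
const CHECKERS: Record<string, Checker> = {
  joined_house: (ctx) => ctx.hasHouse,
  uploaded_avatar: (ctx) => ctx.hasAvatar,
};

// Run each unearned automated achievement's checker against the context
// and return the slugs that are newly eligible. Skipping already-earned
// slugs is the idempotency guard.
function newlyEligible(
  automatedSlugs: string[],
  earned: Set<string>,
  ctx: CheckContext,
): string[] {
  return automatedSlugs.filter(
    (slug) => !earned.has(slug) && (CHECKERS[slug]?.(ctx) ?? false),
  );
}
```

Because the checkers are pure functions of a CheckContext, this is exactly the part that gets plain unit tests with no mocking (as noted in the Testing section).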
Achievement Claims Workflow
Some achievements require evidence. The claim flow:
- User taps "Claim" on an achievement marked requires_photo = true
- App prompts for photo upload → image goes to Supabase Storage
- Claim row inserted into achievement_claims with status = 'pending' and photo_url
- Admin sees pending claims in the claims queue
- Admin approves → user_achievements row inserted, claim updated to approved, notification sent
- Admin rejects → claim row deleted, notification sent with reason
Claims are gated by both RLS (user_id = auth.uid() for inserts) and approval status (only approved users can submit claims). The UNIQUE (user_id, achievement_id) constraint prevents double-submitting.
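The server-side gating described above can be sketched as a single validation function. This is a hypothetical sketch (the `validateClaim` name and `ClaimAttempt` shape are invented for illustration; in the real app the RLS policies and the UNIQUE constraint are the actual enforcement):

```typescript
interface ClaimAttempt {
  userStatus: "pending" | "approved";
  requiresPhoto: boolean;
  hasPhoto: boolean;
  alreadyClaimed: boolean; // would violate UNIQUE (user_id, achievement_id)
}

// Returns a rejection reason, or null if the claim may proceed.
// Mirrors the DB-level checks so the API can give a clean error early.
function validateClaim(attempt: ClaimAttempt): string | null {
  if (attempt.userStatus !== "approved") return "not approved";
  if (attempt.alreadyClaimed) return "duplicate claim";
  if (attempt.requiresPhoto && !attempt.hasPhoto) return "photo required";
  return null;
}
```

As with the admin routes, the DB would reject these cases anyway; the application-level check just produces a friendlier error before the round-trip.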
House captains can optionally be granted claim approval rights via app_settings toggles. This was implemented but kept off by default, since the admin approval model was simpler for the first event.
Notification System
Notifications live in the notifications table as per-user rows, each with a type, a JSONB payload, and a read_at timestamp. Three actions trigger a server-side insert:
- Claim approved/rejected - PATCH /api/admin/claims/[id]: inserts a notification for the claimant with the outcome and, on rejection, the admin's reason.
- Team invite sent - POST /api/events/[id]/teams/[teamId]/members: inserts a notification for the invitee.
- Join request received: the same route inserts a notification for the team captain when someone requests to join.
The client polls /api/notifications every 30 seconds via the useNotifications hook. No WebSocket or Supabase Realtime subscription. At 40 users a 30-second polling interval is invisible overhead. Unread count drives the badge on the nav bell icon.
Notifications with type: 'team_invite' render accept/decline action buttons inline; other types are read-only informational items. Mark-as-read is optimistic: the UI updates the local state immediately, then PATCH /api/notifications/[id] confirms it server-side. Dismiss deletes the row outright.
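The optimistic mark-as-read update is a pure transformation of the local notification list. A sketch (the `markRead`/`unreadCount` helpers and the reduced Notification shape are illustrative, not the actual hook internals):

```typescript
interface Notification {
  id: string;
  readAt: string | null; // null = unread
}

// Optimistic update: flip readAt locally before the PATCH resolves.
// Returns a new array; the original list is left untouched so a failed
// PATCH can simply restore the previous state.
function markRead(list: Notification[], id: string, now: string): Notification[] {
  return list.map((n) => (n.id === id ? { ...n, readAt: now } : n));
}

// Drives the badge on the nav bell icon.
function unreadCount(list: Notification[]): number {
  return list.filter((n) => n.readAt === null).length;
}
```

If the server-side PATCH fails, the hook can roll back by re-fetching or by keeping the pre-update array around.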
RLS on the notifications table ensures users can only SELECT and UPDATE their own rows. Inserts happen server-side via the server Supabase client (which runs with the authenticated user's session in the context of the API route), so the WITH CHECK (true) insert policy is scoped to authenticated users, not public.
Activity Feed
A simple append-only activity_feed table logs achievement awards and event results:
```sql
CREATE TABLE activity_feed (
  id uuid PRIMARY KEY DEFAULT gen_random_uuid(),
  type text NOT NULL, -- 'achievement_earned' | 'event_result'
  payload jsonb NOT NULL DEFAULT '{}',
  created_at timestamptz NOT NULL DEFAULT now()
);
```
The JSONB payload carries the display data (user name, achievement name, points). No user-specific state, no updates, no deletes, just appends. Reads are reverse-chronological via an index on created_at DESC. This keeps the activity feed query trivial and the data model simple.
Tutorial
First-time users get an interactive tour powered by [driver.js](https://driverjs.com/). TourContext tracks whether the user has seen the tutorial (backed by a tutorial_seen column on users). The tour is shown once on first login and skippable. AppTour.tsx defines the step sequence. Each step highlights a UI element with an overlay and description.
Testing
Tests live in __tests__/ mirroring the source tree. Two environments are in play:
API route tests require /** @jest-environment node */ at the top of every file. Next.js App Router handlers use Web API globals (Request, Response, Headers) that aren't available in jsdom. Without the pragma, the test environment silently uses jsdom and the globals are undefined. This applies to every file under __tests__/api/.
Component tests use jsdom. Mantine components require a provider to render correctly. Component tests wrap in <MantineProvider> via a local renderWithMantine helper inline in each test file.
The trickiest behavior to test: Mantine tabs render all tab panels in the DOM simultaneously (for animation purposes). Tests that check "is this content visible in the active tab?" must use toBeVisible(), not toBeInTheDocument(): everything is in the document, but only the active panel is visible.
API routes are tested by importing the handler directly and calling it with a constructed NextRequest. Supabase calls are mocked at the module level with jest.mock(). The middleware tests mock getClaims() and the users status query independently, covering unauthenticated, pending, approved, and DB-error paths.
The auto-achievement logic and event point calculations have pure unit tests. The checker functions are stateless given a CheckContext, so no mocking needed.
What I Learned
1. RLS self-referential policies bite. If you have a policy on users that queries users, you'll hit infinite recursion. SECURITY DEFINER helper functions solve this, but they require care; prefer designs that avoid self-referential policies where possible.
2. Scoreboard views > denormalized columns. For a read-heavy aggregation like a leaderboard, a view is simpler to maintain and correct by definition. The complexity of keeping a cached column in sync with every source table isn't worth it at this scale. The view approach would only need to change under significant read load. This aligns with the principle of not over-optimizing early: build for what you need now and optimize as the user base or load grows.
3. Supabase free tier is genuinely viable for small private apps. One dashboard, one SDK, auth + DB + storage + type generation included. The integration story is smooth enough that there's no meaningful reason to stitch multiple services together for a project this size.
4. The middleware approval gate is the right place for access control. Putting the status check in middleware (not in each page component) means you can't accidentally ship a page that skips the check. The redirect happens at the edge before any React code runs.
5. Separate server and client Supabase clients matter more than it seems. The cookie-based server client and the browser client behave differently. Using the wrong one in the wrong rendering context causes silent session bugs. The distinction should be enforced by the file structure, not by developer memory.
6. pnpm over npm for monorepo-adjacent projects. Faster, stricter, and the lockfile resolves with fewer surprises. Worth the minor setup cost. I still need to try Bun more in real-world situations, but it might eventually replace my use of pnpm.
Stack Summary
Layer | Technology |
|---|---|
Framework | Next.js 16 (App Router) |
Runtime | React 19 |
Language | TypeScript 5 |
UI Components | Mantine 8 |
Utility CSS | Tailwind CSS 4 |
Database | PostgreSQL via Supabase |
Auth | Supabase Auth |
File Storage | Supabase Storage |
Hosting | Vercel |
Package Manager | pnpm |
Testing | Jest + React Testing Library |
Tutorial | driver.js |
Data Fetching | SWR |