Host your Twine 2 SugarCube stories as tracked experiments. Upload an HTML export, share a public link, and every passage, choice and save state lands neatly in your own database — under your roof, on your terms.
No engine modification, no bespoke scripting. Drop in a Twine file, get a public play URL with a beacon-grade tracker injected automatically.
The server validates the export, slugs the title, and injects the tracker after the script-sugarcube block — idempotently, so re-uploads stay byte-stable.
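A minimal sketch of that upload path, assuming illustrative names (`slugify`, `injectTracker`, `TRACKER_TAG` are stand-ins, not the platform's real API). The injection is a no-op when the tracker is already present, which is what keeps re-uploads byte-stable:

```javascript
// Stand-in for the injected <script> tag; not the platform's real tracker URL.
const TRACKER_TAG = '<script id="platform-tracker" src="/tracker.js"></script>';

function slugify(title) {
  return title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')  // collapse punctuation/whitespace runs
    .replace(/^-+|-+$/g, '');     // trim leading/trailing dashes
}

function injectTracker(html) {
  // Idempotent: a second pass over already-injected HTML changes nothing.
  if (html.includes(TRACKER_TAG)) return html;
  // SugarCube exports carry the engine in a script block whose id starts
  // with "script-sugarcube"; insert the tracker right after it closes.
  const marker = /(<script[^>]*id="script-sugarcube[^"]*"[\s\S]*?<\/script>)/;
  return html.replace(marker, `$1\n${TRACKER_TAG}`);
}
```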
Each story gets its own /play/<slug>. Drop a Prolific PID into the query string and resume kicks in automatically when the participant returns.
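How resume can key off the URL, as a hedged sketch: a stable participant key derived from the slug plus the Prolific PID (the `PROLIFIC_PID` parameter name follows Prolific's convention; `participantKey` itself is hypothetical). The same PID on a return visit maps to the same session row:

```javascript
// Derive a stable per-participant key from a play URL, or null for
// anonymous visitors who should get a fresh session.
function participantKey(playUrl) {
  const url = new URL(playUrl);
  const pid = url.searchParams.get('PROLIFIC_PID');
  const slug = url.pathname.split('/').filter(Boolean).pop();
  return pid ? `${slug}:${pid}` : null;
}
```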
Save blobs, click history, focus events and conditions are stored per-participant. Download the full study as JSON when you're ready to analyze.
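A hypothetical shape for one participant's row in that JSON download; every field name here is illustrative, not the platform's documented schema:

```javascript
// Illustrative export row: one participant's trace through one story.
const exampleRow = {
  participantId: 'abc123',   // Prolific PID, or an anonymous session id
  condition: 'control',      // experimental arm, from ?condition=...
  saveBlob: '...',           // serialized SugarCube save state
  clicks: [{ passage: 'Start', choice: 'Open the door', t: 1712345678 }],
  focusEvents: [{ type: 'visibilitychange', hidden: true, t: 1712345699 }],
};
```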
Multi-tenant by default, beacon-resilient by design, and small enough that you can audit every line that touches a participant's session.
Save state is uploaded on every passage, plus on visibilitychange and pagehide via sendBeacon — so dropouts still produce a row in your dataset.
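The client-side flush could look like the sketch below; `ENDPOINT`, `currentState`, and the exact payload are assumptions about the injected tracker's internals. The point is the browser primitive: `navigator.sendBeacon` queues the request even while the page is unloading, so a closed tab still produces a row:

```javascript
const ENDPOINT = '/api/session/save'; // illustrative path, not the real route

function currentState() {
  // Stand-in for the tracker's real state getter: save blob,
  // click history, focus events, condition.
  return { passage: 'Start', saves: null };
}

function serializeState(state) {
  // A typed Blob keeps the beacon payload marked as JSON.
  return new Blob([JSON.stringify(state)], { type: 'application/json' });
}

function flush(state) {
  navigator.sendBeacon(ENDPOINT, serializeState(state));
}

// Fire on every passage render (not shown), plus the two
// unload-adjacent events that catch dropouts.
if (typeof document !== 'undefined') {
  document.addEventListener('visibilitychange', () => {
    if (document.visibilityState === 'hidden') flush(currentState());
  });
  window.addEventListener('pagehide', () => flush(currentState()));
}
```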
Returning participants don't see a flash of the opening passage — the renderer is patched to repaint the resumed passage cleanly, every time.
Every story, session, and exported dataset is owned by exactly one user. Listing endpoints filter by req.user._id — never by client-supplied owner. Public play URLs are the only unauthenticated read path, and they expose nothing about the researcher behind them.
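The ownership rule reduces to one invariant, sketched here with a hypothetical helper: the Mongo filter is built only from the authenticated user, and nothing the client sends ever reaches it:

```javascript
// Build the listing filter from the server-side session user only.
// `ownedBy` is illustrative; the invariant is that client input is
// never consulted when scoping a query.
function ownedBy(reqUser) {
  if (!reqUser || !reqUser._id) throw new Error('unauthenticated');
  return { owner: reqUser._id };
}

// e.g. in a hypothetical Express handler:
//   const stories = await Story.find(ownedBy(req.user));
```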
Pass ?condition=… in the URL and the same story serves multiple experimental arms — no fork required.
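One way to resolve the arm server-side, assuming an illustrative `resolveCondition` helper: validate the query value against the configured arms and fall back to a default when it is absent or unknown, so a mistyped URL can't invent a condition:

```javascript
// Resolve the experimental arm from ?condition=..., with a safe fallback.
function resolveCondition(playUrl, arms, fallback = 'control') {
  const value = new URL(playUrl).searchParams.get('condition');
  return arms.includes(value) ? value : fallback;
}
```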
Configure ending passages and a debrief screen — or a redirect to your Prolific completion URL with templated tokens.
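Token expansion for the completion redirect might look like this sketch; the token names `{PID}` and `{CONDITION}` are assumptions, not the platform's documented set:

```javascript
// Expand templated tokens into the configured completion URL,
// URL-encoding each substituted value.
function completionUrl(template, session) {
  return template
    .replace('{PID}', encodeURIComponent(session.pid))
    .replace('{CONDITION}', encodeURIComponent(session.condition));
}
```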
Admins see all studies and users, but inspecting another researcher's data is a deliberate, logged action — not a default capability.
Clone the repo, run the interactive setup script, answer six prompts, and the platform is on the air. No SaaS account, no participant data crossing a vendor's wire.
$ git clone https://github.com/<you>/twine_platform
$ cd twine_platform && ./setup.sh
# answers six prompts: hostname, port, mongo URI,
# session secret, admin user, admin password
✓ wrote .env
✓ ensured indexes on users, stories, sessions
✓ seeded admin account
$ npm start
listening on https://your.research.host
“Every choice a participant makes leaves a trace; the platform's only job is to keep that trace honest.”
Spin up a workspace on this instance, or clone the repo and host your own. Either way, your data stays where it should.