Methodology
How the archive is compiled.
HackerLinks publishes deterministic, source-linked pages from structured scout artifacts. This page explains what gets included, how updates flow into the site, and where the archive still has limitations.
Selection rules
The archive focuses on concrete references that surfaced in Hacker News discussion: products, tools, libraries, books, hardware, talks, and other similarly specific things. The canonical long-term unit is the item page; the daily freshness unit is the issue page.
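As a rough illustration of those two units, an item record might look like the sketch below. The field names are hypothetical assumptions for illustration, not the archive's actual schema.

```ts
// Hypothetical shape of one archived item record; field names are
// illustrative assumptions, not the real schema.
interface ItemRecord {
  slug: string;          // stable identifier, used for the item page URL
  name: string;          // the product, tool, library, book, or talk itself
  kind: string;          // e.g. "tool", "book", "hardware", "talk"
  summary: string;       // short description distilled from discussion
  sourceUrl?: string;    // first-party URL, when one was surfaced
  sightings: Sighting[]; // one entry per issue that mentioned the item
}

// A single appearance of the item in a daily issue.
interface Sighting {
  issueDate: string;     // ISO date of the issue page, e.g. "2024-05-01"
  hnThreadUrl: string;   // the cited Hacker News discussion
  evidence: string;      // supporting quote or context from the thread
}
```

On this model, an issue page is simply the set of sightings that share one issueDate.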
Provenance rules
Every item page keeps the issue date, the cited HN thread, and the supporting evidence near the summary. The archive links the original source URL when one is available, and it preserves repeat sightings over time rather than collapsing them into a single abstract description.
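A minimal sketch of that preservation rule, reusing the hypothetical types above: a new sighting is appended to the record's history instead of overwriting it.

```ts
// Sketch: merge a fresh sighting into an existing item record without
// discarding earlier evidence (types are the hypothetical ones above).
function addSighting(item: ItemRecord, sighting: Sighting): ItemRecord {
  const alreadySeen = item.sightings.some(
    (s) =>
      s.issueDate === sighting.issueDate &&
      s.hnThreadUrl === sighting.hnThreadUrl
  );
  // Returning a new object keeps the merge step side-effect free.
  return alreadySeen
    ? item
    : { ...item, sightings: [...item.sightings, sighting] };
}
```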
Update flow
Private scout artifacts are synced into the repository, normalized into public JSON, and rendered into static HTML. The public site does not scrape Hacker News or call models at request time.
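Sketched as a build step, with hypothetical paths and placeholder helpers standing in for the real pipeline, that flow might look like this:

```ts
import { readdirSync, readFileSync, writeFileSync } from "node:fs";

// Normalization placeholder: in practice this would validate fields and
// strip anything private before the record becomes public.
function normalize(raw: unknown): ItemRecord {
  return raw as ItemRecord;
}

// Placeholder renderer; the real item pages carry more structure.
function renderItemPage(item: ItemRecord): string {
  return `<h1>${item.name}</h1><p>${item.summary}</p>`;
}

// Hypothetical one-pass build: every page is a pure function of the
// checked-in artifacts; nothing is fetched at build or request time.
function build(artifactDir: string, outDir: string): void {
  for (const file of readdirSync(artifactDir)) {
    const raw = JSON.parse(readFileSync(`${artifactDir}/${file}`, "utf8"));
    const item = normalize(raw);
    // Public JSON mirror of the normalized record.
    writeFileSync(`${outDir}/${item.slug}.json`, JSON.stringify(item, null, 2));
    // Static HTML rendered from the same record.
    writeFileSync(`${outDir}/${item.slug}.html`, renderItemPage(item));
  }
}
```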
Automation and AI usage
Automation is used upstream to collect and normalize structured records. Downstream, the website itself is deterministic: static pages, JSON manifests, and machine-readable metadata are all built from checked-in artifacts.
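One practical consequence of that determinism: given the same checked-in artifacts, a rebuild should produce byte-identical output, which means avoiding incidental ordering. A sketch, again with the hypothetical types above:

```ts
// Determinism in practice: sort records by a stable key before writing a
// manifest, so identical inputs always yield byte-identical output.
// A plain code-point comparison is used because locale-aware sorting
// could vary between build environments.
function buildManifest(items: ItemRecord[]): string {
  const sorted = [...items].sort((a, b) =>
    a.slug < b.slug ? -1 : a.slug > b.slug ? 1 : 0
  );
  return JSON.stringify(sorted.map((item) => item.slug), null, 2);
}
```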
Known limitations
Coverage depends on upstream artifact quality. Some items still have thin summaries or missing source URLs. The archive is meant to improve over time by preserving history, not by pretending every record is already complete.