
ComputeCosts.nl

Observatory for real compute costs and user friction.

Observatory status

Amsterdam time:
Last data received: -
arXiv records ingested: 0
GitHub records ingested: 0
Hacker News records ingested: 0
Reddit records ingested: 0
Wikipedia records ingested: 0
Stack Overflow records ingested: 0
Observatory active since 2026-02-15: 0 days
Archive with updated backlog since: 2025-01-01

Observatory reports

ComputeCosts publishes a series of technical observatory reports describing the measurement framework, archive construction and empirical signals extracted from the observatory datasets.

All reports are archived on Zenodo and assigned DOIs, providing stable references for citation and long-term retrieval.

Report 004
Comparison of cloud computing mentions and RTX workstation disclosures in arXiv full text papers.
Zenodo DOI

Report 003
Workstation scale computing in contemporary scientific research.
Zenodo DOI

Report 002
arXiv archive, epoch 1.
Zenodo DOI

Report 001
Observatory framework and first measurement baseline.
Zenodo DOI

What this is

ComputeCosts.nl is an independent observatory that measures the real cost of compute usage: not only financial expenditure, but also user time loss, operational friction and loss of focus.

The initiative originated from the repeated observation that, in many practical and scientific workloads, local compute solutions are faster to start, cheaper to operate, involve less dependency friction and provide substantially more privacy than cloud-based alternatives.

While cloud computing clearly offers advantages at sufficient scale, this threshold appears to be reached less frequently than commonly assumed. ComputeCosts therefore builds a forward-looking measurement framework to map these trade-offs using current, verifiable signals rather than retrospective commentary.

Agents as sensors

The platform runs six automated ingestion agents that continuously monitor arXiv, Wikipedia, GitHub, Hacker News, Stack Overflow and Reddit, treating these sources as measurable signals of how computational infrastructure is applied, discussed and experienced.

Each archive is collected in a controlled, repeatable manner, deduplicated into a single canonical record per item, systematically feature-scored and cryptographically anchored so the resulting dataset forms a verifiable evidence stream rather than opinion.
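The deduplication and anchoring steps above can be sketched in a few lines. This is a minimal illustration, not the observatory's actual pipeline: the field names (source, item_id, title, text) and the choice of SHA-256 over canonical JSON are assumptions.

```python
import hashlib
import json

def canonical_record(raw: dict) -> dict:
    """Reduce a raw harvested item to one canonical record.

    Field names here are illustrative, not the observatory's real schema."""
    return {
        "source": raw["source"],
        "item_id": raw["item_id"],
        "title": raw.get("title", "").strip(),
        "text": raw.get("text", "").strip(),
    }

def deduplicate(items):
    """Keep a single canonical record per (source, item_id) pair."""
    seen = {}
    for raw in items:
        rec = canonical_record(raw)
        seen[(rec["source"], rec["item_id"])] = rec
    return list(seen.values())

def record_hash(rec: dict) -> str:
    """Deterministic SHA-256 over a canonical JSON serialization,
    suitable for inclusion in a tamper-evident log."""
    blob = json.dumps(rec, sort_keys=True, separators=(",", ":")).encode()
    return hashlib.sha256(blob).hexdigest()
```

Because the serialization fixes key order and separators, the same canonical record always yields the same digest, which is what makes downstream anchoring verifiable.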

Balanced observation scope

The observatory measures concrete, documentable signals for both cloud-based and local compute environments. For cloud systems, this includes references to GPU availability, quota limits, billing discussions, egress fees, vendor lock-in and reported instability in primary sources.

For local systems, the agents monitor explicit mentions of self-hosted deployment, local GPU usage, high-end workstation hardware such as RTX-class cards and large memory configurations, and offline inference feasibility.

All signals are extracted from publicly verifiable sources and processed into deterministic feature matrices, enabling comparisons between cloud and local infrastructure based on measurable frequencies and time-series trends.
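A deterministic feature matrix of this kind can be sketched as simple lexicon counting per document. The term lists below are illustrative assumptions; the observatory's real feature set is presumably larger and versioned.

```python
from collections import Counter

# Illustrative signal lexicons, not the observatory's actual feature set.
CLOUD_TERMS = ["egress", "quota", "vendor lock-in", "billing"]
LOCAL_TERMS = ["self-hosted", "rtx", "workstation", "offline inference"]

def feature_row(text: str) -> dict:
    """Count lexicon hits in one document.

    Pure string counting with fixed term lists: the same input always
    produces the same row, so the resulting matrix is deterministic."""
    lowered = text.lower()
    counts = Counter()
    for term in CLOUD_TERMS:
        counts["cloud:" + term] = lowered.count(term)
    for term in LOCAL_TERMS:
        counts["local:" + term] = lowered.count(term)
    return dict(counts)
```

Stacking one such row per document, grouped by source and time bucket, gives the measurable frequencies and time-series trends the comparison rests on.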

Archive architecture and verifiable logs


The observatory operates six independent ingestion archives:

arXiv: measuring the operational adoption of cloud and local compute within scientific research environments.
Wikipedia: quantifying collective attention intensity toward infrastructure concepts.
GitHub: implementation-level friction in active codebases.
Hacker News: early-stage technical momentum and infrastructure debate.
Stack Overflow: recurring operational constraint patterns.
Reddit: practitioner-reported infrastructure trade-offs in daily use.

Each archive is harvested incrementally and transformed into an append-only dataset with reproducible feature extraction and time-series aggregation.
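Incremental harvesting into an append-only dataset can be sketched as follows. The JSONL layout, the cursor file, and the `fetch_since` callable are all hypothetical; each real source would supply its own fetcher.

```python
import json
from pathlib import Path

def append_batch(archive_path: Path, cursor_path: Path, fetch_since):
    """Harvest items newer than the stored cursor and append them.

    `fetch_since(cursor)` is a hypothetical source-specific callable
    returning (new_items, new_cursor). The archive file is only ever
    opened in append mode, so earlier records are never rewritten."""
    cursor = cursor_path.read_text().strip() if cursor_path.exists() else ""
    items, new_cursor = fetch_since(cursor)
    with archive_path.open("a", encoding="utf-8") as f:  # append-only
        for item in items:
            f.write(json.dumps(item, sort_keys=True) + "\n")
    cursor_path.write_text(new_cursor)
    return len(items)
```

Re-running the function with an unchanged cursor appends nothing, which is the property that makes the harvest repeatable.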

To guarantee integrity, every archive maintains its own dedicated NFT on the Algorand blockchain. Dataset state hashes are committed as zero-value transactions, creating a public, tamper-evident log of both ingestion baselines and derived analytical outputs.
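The verifiable part of this scheme can be sketched without any blockchain dependency: compute a state hash over the append-only archive and build the note payload that a zero-value transaction would carry. The "cc:v1" prefix is an assumed convention, not a documented format, and the actual NFT and transaction submission are omitted.

```python
import hashlib
from pathlib import Path

def dataset_state_hash(archive_path: Path) -> bytes:
    """SHA-256 over the full append-only archive file.

    Any retroactive edit to past records changes this digest, so it
    would no longer match the hash committed on-chain."""
    h = hashlib.sha256()
    with archive_path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.digest()

def anchor_note(archive_name: str, digest: bytes) -> bytes:
    """Build a note payload for a zero-value transaction.

    The 'cc:v1' prefix is a hypothetical convention; Algorand's note
    field is limited to 1024 bytes, which this payload fits easily."""
    note = b"cc:v1|" + archive_name.encode() + b"|" + digest.hex().encode()
    assert len(note) <= 1024
    return note
```

A verifier who holds the archive file can recompute the digest locally and compare it against the note recorded in the public transaction log.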

This structure separates signal acquisition, canonicalization, feature construction and blockchain anchoring into explicit stages, ensuring that conclusions remain traceable to immutable primary sources.

About the COSTS token

Each dataset hash is committed twice on Algorand: once to its archive NFT and once as a one micro COSTS transaction to the central archive address. This creates two independent indexing paths, each optimized for a different retrieval scope.

The COSTS token currently has no financial or operational function. In that sense, it acts as an observatory within the observatory, reflecting interest in the initiative rather than enabling access or value transfer.

Any potential future use, such as data purchase through micropayments, is explicitly out of scope at this stage and would be considered only if a clear, non-speculative utility emerges.