With the NADIKI Registrar, we are releasing the first functional prototype of the Observer architecture as open-source software. The application implements the NADIKI API specification: it provides endpoints for registering data centers, racks, and servers and for transmitting metrics from the registered systems. The submitted data is processed in InfluxDB to calculate the environmental impact indicators. The prototype runs on a dedicated Hetzner server and is available on GitHub for free use and further development.
The source code of the NADIKI Registrar is available on GitHub. We release this implementation as an initial prototype and welcome feedback, bug reports, and contributions. We currently operate the registrar as a proof of concept for an Open Web UI instance, in collaboration with data center partners.
The registrar turns the NADIKI API specification into a functional API. The software is the core of the Observer architecture: it registers infrastructure entities, accepts measurement data, and computes the seven environmental impact indicators that an AI workload can retrieve for its reporting (see specifications).
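To illustrate the registration flow, here is a minimal sketch of how a client might register a facility. The field names, endpoint path, and host are illustrative assumptions, not taken from the NADIKI API specification:

```python
import json

# Hypothetical facility record -- field names are illustrative only.
facility = {
    "id": "dc-example-1",
    "location": "DE",
    "pue": 1.3,
}

payload = json.dumps(facility)

# A client would POST this payload to the registrar, e.g.:
# import urllib.request
# req = urllib.request.Request(
#     "https://registrar.example.org/facilities",  # hypothetical endpoint
#     data=payload.encode(),
#     headers={"Content-Type": "application/json"},
# )
# urllib.request.urlopen(req)
print(payload)
```

The registrar's response would then name the endpoints for submitting measurement data for the newly registered entity.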
Architecture and Components
The application comprises several services orchestrated using Docker Compose:
Registrar API: A Flask-based REST API, generated from the OpenAPI specification using the Connexion framework. It provides endpoints for registering data centers, racks, and servers, and recording their static properties. The API documentation is accessible via an integrated Swagger UI.
MariaDB: Stores the static information of each registered entity—location data, PUE values, cooling configurations, hardware inventories, and lifecycle data. During initial setup, an initialization script automatically creates the tables for facilities, racks, and servers.
InfluxDB: A time-series database for all dynamic measurements. Energy consumption, temperatures, and performance data are stored as time series, forming the basis for environmental impact calculations. When registering assets (servers, data centers, racks), the API response specifies endpoints that can be used for sending data.
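Dynamic measurements reach InfluxDB encoded in its line protocol (measurement, tags, fields, timestamp). A minimal sketch of how a server's power reading might be encoded; the measurement and tag names here are assumptions, not part of the specification:

```python
def to_line_protocol(measurement: str, tags: dict, fields: dict, ts_ns: int) -> str:
    """Build one InfluxDB line-protocol record: measurement,tags fields timestamp."""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {ts_ns}"

# Hypothetical power reading for one registered server.
line = to_line_protocol(
    "power",
    {"facility": "dc-example-1", "server": "srv-42"},
    {"watts": 312.5},
    1700000000000000000,
)
print(line)
# -> power,facility=dc-example-1,server=srv-42 watts=312.5 1700000000000000000
```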
Telegraf (optional): Receives metrics from infrastructure exporters via Prometheus Remote Write and forwards them to InfluxDB. The configuration supports HTTPS and processes incoming data at 30-second intervals. We use this approach when additional processing is required before data is written to InfluxDB.
Flux Tasks: Two periodic computation tasks in InfluxDB transform the raw data into environmental impact indicators. The Operational Task runs every four hours and calculates energy consumption (renewable and non-renewable), self-production, generator contribution, and CO₂ emissions per facility; missing measurements are linearly interpolated, and grid CO₂ intensity data comes from the Electricity Maps API. The Embodied Task runs every 15 minutes and normalizes the embedded emissions from the production of servers and building infrastructure to an hourly rate per business unit, based on lifespan and climate data; the underlying figures come from the Boavizta API.
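At its core, the embodied normalization is simple arithmetic: the one-time manufacturing emissions are spread over the expected lifespan. A sketch with assumed example figures (the actual task refines this with lifespan and climate data from the Boavizta API):

```python
# Assumed example figures -- not actual Boavizta values.
embodied_kg_co2e = 1500.0   # one-time manufacturing emissions of one server
lifespan_years = 5.0        # expected service life

hours = lifespan_years * 365.25 * 24     # 43830.0 hours of service life
hourly_rate = embodied_kg_co2e / hours   # kg CO2e attributed per hour of operation

print(round(hourly_rate * 1000, 2), "g CO2e/h")
# -> 34.22 g CO2e/h
```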
JupyterLab: An integrated analysis environment that provides direct access to InfluxDB and MariaDB. Here, collected data can be explored, visualized, and new calculation models can be prototypically tested. This environment is also useful for researchers and has been used in follow-up projects (SIEC).
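The services above might be wired together roughly as follows. This is an illustrative sketch only; the service names, images, and versions are assumptions, not the project's actual docker-compose.yml:

```yaml
# Illustrative Docker Compose sketch -- names and versions are assumptions.
services:
  registrar:
    build: .
    ports: ["8080:8080"]
    depends_on: [mariadb, influxdb]
  mariadb:
    image: mariadb:11
    volumes: [mariadb-data:/var/lib/mysql]
  influxdb:
    image: influxdb:2
    volumes: [influxdb-data:/var/lib/influxdb2]
  jupyterlab:
    image: jupyter/base-notebook
volumes:
  mariadb-data:
  influxdb-data:
```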
From AWS Prototype to Dedicated Server
The first prototype ran on AWS with Elastic Container Service (ECS). For the production version, we switched to a dedicated Hetzner server running Docker Compose. The production configuration adds Nginx as a reverse proxy, with automatic Let's Encrypt certificate management via Route 53 DNS validation.
A Prototype for Further Development
The registrar is an initial functional prototype. It demonstrates that the NADIKI API specification is implementable and that the design decision for the Observer architecture is viable in practice. The software is deliberately kept simple: Docker Compose instead of a Kubernetes cluster, SQLite-compatible schemas, and standard tooling such as Flask and InfluxDB.
Areas where we expect improvements and welcome contributions:
Data Validation: Stricter verification of incoming metrics and registration data.
Calculation Models: Refinement of the Flux Tasks, particularly the interpolation of missing values and attribution at the workload level.
Security: More granular authentication and authorization per entity.
Scalability: Operational experiences with multiple data centers and a higher number of servers.
The complete source code, Docker Compose configurations, and Flux Tasks are available on GitHub.
Additional Publications
NADIKI API: Open interface for environmental impact data from data centers
NADIKI: Observer architecture for measuring environmental impacts in cloud and IT infrastructure
NADIKI: Why we chose the Observer architecture over a Kubernetes plugin