Vibe coding: building an asset management web app — and teaching it to speak SQL

Managing hardware for a large detector is as much an information problem as it is an engineering one.
Over the past few weeks I built a web application to manage the CMS Tracker backend assets and their topology, with the goal of creating a single operational view of devices, connections, and status.

This post explains what the project is, how it’s structured, and why one feature — a natural-language query interface powered by an LLM — turned out to be surprisingly useful.


The problem: fragmented operational knowledge

Backend hardware ecosystems evolve organically. Boards move between crates, optical links get re-patched, firmware changes, and components are replaced.

The information exists — but it is scattered across spreadsheets, ad-hoc notes, and multiple small tools. The result is friction:

  • It’s hard to answer simple operational questions quickly
  • Inconsistencies creep in
  • The cognitive load for shifters and experts grows

The goal of this project was to create a single source of truth that is:

  • Structured and consistent
  • Easy to update safely
  • Immediately useful for day-to-day operations

What the application does

At its core, the app is an asset and topology manager for CMS Tracker backend hardware. It provides:

  • Inventory of ATCA assets (boards, crates, racks, slots, power)
  • Tracking of linked components (SOMs, IPMCs, FireFly modules, fibres)
  • Role-based access control (reader, writer, admin)
  • Automatic propagation of status and location when hardware is installed or moved
  • A normalized relational data model to keep updates consistent

Figure 1 — System dashboard showing global inventory and quick statistics

The emphasis was not only on storing data, but on making operational workflows explicit — for example, installing a board updates multiple related entities automatically, reducing manual bookkeeping.
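As a sketch of what that propagation could look like (the table and column names below are illustrative assumptions, not the app's actual schema), installing a board can update the board row and every linked component in a single transaction:

```python
import sqlite3

# Illustrative schema: table and column names are assumptions for this sketch.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE boards (id INTEGER PRIMARY KEY, serial TEXT, status TEXT, location TEXT);
CREATE TABLE components (id INTEGER PRIMARY KEY, board_id INTEGER, kind TEXT,
                         status TEXT, location TEXT);
INSERT INTO boards VALUES (1, 'BRD-001', 'in_storage', 'storage');
INSERT INTO components VALUES (10, 1, 'FireFly', 'in_storage', 'storage');
""")

def install_board(conn, board_id, crate, slot):
    """Mark a board as installed and propagate status/location to linked components."""
    location = f"{crate}/slot{slot}"
    with conn:  # one transaction: either everything updates or nothing does
        conn.execute(
            "UPDATE boards SET status = 'installed', location = ? WHERE id = ?",
            (location, board_id),
        )
        conn.execute(
            "UPDATE components SET status = 'installed', location = ? WHERE board_id = ?",
            (location, board_id),
        )

install_board(conn, 1, crate="ATCA-3", slot=7)
row = conn.execute("SELECT status, location FROM components WHERE id = 10").fetchone()
```

Wrapping the updates in one transaction is what turns "install a board" from several manual bookkeeping steps into a single safe operation.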


Architecture in a nutshell

The stack is intentionally simple and pragmatic:

  • Backend: FastAPI
  • Database: PostgreSQL with a normalized schema
  • ORM layer: SQLAlchemy Core with reflection
  • Auth: role-based with future CERN SSO integration
  • UI: server-rendered templates focused on clarity over complexity

Thanks to reflection, the app works directly against the existing schema rather than duplicating it in code, so it can evolve with the hardware model without heavy refactoring.
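A minimal sketch of that reflection idea, using SQLAlchemy Core against an in-memory SQLite database (the real app uses PostgreSQL; the `boards` table here is just an illustration):

```python
from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String

# In-memory SQLite stands in for the real PostgreSQL database.
engine = create_engine("sqlite:///:memory:")
setup = MetaData()
Table("boards", setup,
      Column("id", Integer, primary_key=True),
      Column("serial", String))
setup.create_all(engine)

# Reflection: load table definitions from the live database instead of
# declaring them in application code, so the app tracks schema changes.
metadata = MetaData()
metadata.reflect(bind=engine)
boards = metadata.tables["boards"]
columns = [c.name for c in boards.columns]
```

The payoff is that adding a column to the database is immediately visible to the application, with no model class to update by hand.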


The interesting part: natural-language queries → SQL

One feature I wanted to experiment with was lowering the barrier to querying the database.

Operational questions are often phrased in plain language:

“Show me the FireFly modules connected to boards, including type and connector.”

Instead of requiring users to write SQL, the app includes a Query page where you describe the data you want.

The backend then:

  1. Uses an LLM to generate a read-only SQL query
  2. Validates and executes it
  3. Displays both the query and the results

Figure 2 — Natural-language query interface with generated SQL and results
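The three steps above can be sketched roughly as follows. Note the hedges: `call_llm` is a placeholder for whatever model API the app actually uses, the validation is deliberately simplistic, and the schema is a toy one:

```python
import sqlite3

def call_llm(question: str, schema: str) -> str:
    """Placeholder for the real LLM call; returns a canned query for this sketch."""
    return "SELECT serial, status FROM boards"

def validate_read_only(sql: str) -> str:
    """Reject anything that is not a single SELECT statement."""
    stripped = sql.strip().rstrip(";")
    if ";" in stripped or not stripped.lower().startswith("select"):
        raise ValueError("only single SELECT statements are allowed")
    return stripped

def answer(question: str, conn, schema: str):
    sql = validate_read_only(call_llm(question, schema))  # 1. generate, 2. validate
    rows = conn.execute(sql).fetchall()                   # 2. execute
    return sql, rows                                      # 3. show query AND results

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE boards (id INTEGER PRIMARY KEY, serial TEXT, status TEXT);
INSERT INTO boards VALUES (1, 'BRD-001', 'installed');
""")
sql, rows = answer("Which boards are installed?", conn,
                   schema="boards(id, serial, status)")
```

Returning the SQL alongside the rows is what makes the feature auditable: the user can always see exactly what was run.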

Why this is useful in practice

This feature isn’t about replacing SQL — it’s about speed and accessibility.

1. Faster operational checks

You can ask ad-hoc questions without remembering table structures.

2. Transparency and trust

Showing the generated SQL makes the process auditable and educational.

3. A bridge between mental and data models

People think in terms of devices and links, not joins — the interface translates between the two.

4. Safe by design

Queries are generated as read-only, preventing accidental modifications.
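Beyond inspecting the generated SQL text, the connection itself can be made read-only as a second line of defense. In PostgreSQL that might mean `SET TRANSACTION READ ONLY` or a database role with only `SELECT` privileges; the SQLite sketch below uses the `query_only` pragma to the same effect:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE boards (id INTEGER PRIMARY KEY, serial TEXT)")
conn.execute("INSERT INTO boards VALUES (1, 'BRD-001')")

# Flip the connection into read-only mode before running generated SQL.
conn.execute("PRAGMA query_only = ON")

rows = conn.execute("SELECT serial FROM boards").fetchall()  # reads still work

try:
    conn.execute("DELETE FROM boards")  # any write now fails at the database layer
    blocked = False
except sqlite3.OperationalError:
    blocked = True
```

Enforcing the constraint in the database, not just in the application, means even an unexpected query the validator missed cannot modify anything.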


Lessons learned

Modeling matters more than UI

A clean schema with explicit relationships made everything else easier — including LLM prompting.

LLMs work best with constraints

Providing schema context and enforcing read-only execution keeps outputs reliable.
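One way to provide that schema context (a sketch under assumptions; the app's real prompt format is surely richer) is to serialize the table and column names into compact text that is prepended to the model's prompt:

```python
import sqlite3

# Toy schema standing in for the real one.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE boards (id INTEGER PRIMARY KEY, serial TEXT, status TEXT);
CREATE TABLE components (id INTEGER PRIMARY KEY, board_id INTEGER, kind TEXT);
""")

def schema_summary(conn) -> str:
    """Render table/column names as compact text the LLM can condition on."""
    lines = []
    tables = conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"
    ).fetchall()
    for (table,) in tables:
        cols = [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]
        lines.append(f"{table}({', '.join(cols)})")
    return "\n".join(lines)

prompt_context = schema_summary(conn)
```

Grounding the model in the actual tables and columns is what keeps the generated SQL pointed at things that exist.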

Small operational tools have big impact

Even simple visibility improvements reduce friction in daily work.


What I’d like to explore next

  • Graph visualization of topology
  • Query history and saved operational views
  • Tighter integration with authentication (SSO)
  • Automated consistency checks across links

Final thoughts

This project sits at the intersection of detector operations, software engineering, and human-computer interaction.

The most rewarding aspect is that it’s immediately useful: a tool built not as a demo, but as part of the operational workflow.

And the LLM query interface is a small glimpse of how interacting with complex technical systems might become more conversational — while still grounded in precise, auditable data.
