The Journey from Manual Toil to Automated Value
A Gemini-CLI Story
The Beginning: A Dusty Workshop
Our story begins not with a grand design, but with a folder named backup. This folder was the digital equivalent of a dusty workshop, filled with the remnants of a manual, yet functional, server provisioning system. Inside, directories named after MAC addresses held static user-data and meta-data files. A simple dnsmasq.conf directed PXE-booting clients, and a basic index.php served up the right files. It worked, but it was brittle. Adding a new server meant manually copying files, updating configs, and hoping no typos were made. Scaling was a chore, and documentation was an oral tradition. The system was a monument to manual effort, a solution born of necessity but groaning under its own weight.
The goal was clear: transform this fragile artifact into a robust, repeatable, and scalable platform using Infrastructure as Code (IaC). The traditional path would involve weeks of research, learning the nuances of Ansible best practices, Jinja2 templating, and web design. This is where the journey took a pivotal turn with the introduction of gemini-cli.
Phase 1: From Static Files to Dynamic Templates
The first step was to tackle the provisioning server itself. Instead of spending days reading Ansible documentation, the conversation started simply: "Gemini, look at the contents of backup/autoinstall_configs/default/user-data. Can you convert this into a Jinja2 template for an Ansible role?"
In minutes, gemini-cli responded, not just with the template, but with the context of where to place it. It scaffolded a new directory, ansible-provisioning-server, with a proper role structure. The static, hardcoded values for network configuration, user passwords, and hostnames were replaced with Ansible variables. What was once a copy-paste task became a configuration change in a single group_vars/all.yml file.
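To give a flavor of that transformation, here is a minimal sketch of what such a templated user-data might look like; the variable names (autoinstall_hostname and friends) are illustrative assumptions, not the actual ones from the project.

```yaml
#cloud-config
# templates/user-data.j2 -- illustrative sketch, not the project's actual template
autoinstall:
  version: 1
  identity:
    hostname: "{{ autoinstall_hostname }}"
    username: "{{ autoinstall_username }}"
    # A pre-hashed password, pulled from group_vars/all.yml
    password: "{{ autoinstall_password_hash }}"
  network:
    version: 2
    ethernets:
      "{{ autoinstall_interface }}":
        addresses:
          - "{{ autoinstall_ip }}/{{ autoinstall_prefix }}"
        routes:
          - to: default
            via: "{{ autoinstall_gateway }}"
```

Each new server then becomes a handful of variable values rather than a hand-edited copy of the whole file.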
The iterative process had begun. Gemini helped refactor the monolithic logic into distinct Ansible roles: iso_preparation for handling the Ubuntu installer, netboot for managing DHCP and TFTP, and web for serving the autoinstall scripts. With each step, Gemini acted as an expert pair programmer. When a service needed to be restarted, it suggested an Ansible handler to ensure the restart only happened when the configuration actually changed, an efficiency I wouldn't have thought of initially. The ansible-provisioning-server was born, not in weeks, but in a matter of hours. The time to value had been slashed by an order of magnitude.
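For readers who haven't used handlers, the pattern looks roughly like this; the file and service names below are illustrative, assuming the netboot role manages dnsmasq.

```yaml
# roles/netboot/tasks/main.yml (illustrative sketch)
- name: Deploy dnsmasq configuration
  ansible.builtin.template:
    src: dnsmasq.conf.j2
    dest: /etc/dnsmasq.conf
  notify: Restart dnsmasq

# roles/netboot/handlers/main.yml (illustrative sketch)
- name: Restart dnsmasq
  ansible.builtin.service:
    name: dnsmasq
    state: restarted
```

Because the handler fires only when the template task actually changes the file, repeated playbook runs leave the running service undisturbed.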
Phase 2: Expanding the Ambition to a Kubernetes Cluster
With the base operating system provisioning fully automated, the horizon expanded. Why stop at a blank server? The new goal became deploying a full Kubernetes cluster. This would typically require a specialized skillset in Kubernetes administration and complex networking.
Again, the dialogue with gemini-cli drove the process: "Gemini, create a new Ansible project, ansible-kubernetes-nodes, to configure a cluster. I need a role for controllers and a role for workers."
Gemini scaffolded the project, complete with tasks to install kubeadm, kubelet, and kubectl. When faced with the challenge of configuring the specific network interfaces for the cluster's storage network, Gemini templated the Netplan configuration files (01-netcfg.yaml.j2), pulling the IP addresses and interface names from the Ansible inventory. Artifacts like fix_kernel_modules.yml and the ceph_prereqs role show how we iteratively tackled challenges as they arose, adding more context and letting Gemini generate the necessary code to solve them. A task that would have been a significant project for a dedicated DevOps engineer was completed in an afternoon.
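As a rough illustration of that inventory-driven templating, a storage-network template like 01-netcfg.yaml.j2 might look like the following; the variable names (storage_interface, storage_ip) are assumptions for illustration, not taken from the actual project.

```yaml
# templates/01-netcfg.yaml.j2 (illustrative sketch)
# Rendered once per node; storage_interface and storage_ip would be
# defined per machine in host_vars or the inventory itself.
network:
  version: 2
  ethernets:
    {{ storage_interface }}:
      addresses:
        - {{ storage_ip }}/24
```

One template plus per-host variables replaces a stack of hand-edited Netplan files.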
Phase 3: From Opaque Infrastructure to a Documented, Visible Platform
The infrastructure was now automated, but it was still invisible. The knowledge of what servers existed, their hardware specifications, and their roles was locked away in configuration files and command-line outputs. The final, and perhaps most valuable, phase was to create a user-facing portal to document the entire system.
This is where gemini-cli truly bridged the skills gap. As someone who is not a web developer, I could describe what I wanted:
- Data Aggregation: "Gemini, I have raw output from lshw (as XML) and ipmitool. Write a Python script, build_inventory.py, to parse these files and generate a single inventory.json." (A sketch of this parsing step follows the list.)
- Web Frontend: "Now, create an inventory.html page. Use JavaScript (site.js) to fetch the inventory.json and display it in a clean, sortable table. Use a modern, readable stylesheet (styles.css)."
- Comprehensive Documentation: "Let's create a full documentation site. Generate pages that explain the end-to-end workflow, the provisioning server, and the Kubernetes cluster. Link them all from a central index.html."
The website directory is the result of this collaboration. It’s not just a collection of files; it’s a high-value asset. It provides a single pane of glass for viewing the hardware inventory. It documents the complex automation in simple, human-readable language. It allows anyone, regardless of their Ansible or Kubernetes expertise, to understand the platform's architecture and status.
Phase 4: Introducing a Staging Environment
As the website grew in importance, making direct changes to the live version became risky. A simple mistake could break the documentation for everyone. To address this, the development process matured with the introduction of a formal staging environment.
The workflow was updated to use Git branches. All new development and content updates now happen on a staging branch. When changes are ready, they are pushed to this branch, which automatically deploys to a separate, non-production staging website. This provides a crucial opportunity to review and test everything in a live-like environment. We can catch broken links, layout issues, or data parsing errors before they ever impact the main site. This process of developing on a branch, reviewing in staging, and then promoting to production is a critical best practice. It ensures stability, minimizes risk, and guarantees that the live documentation remains a reliable resource for the entire team.
Conclusion: A Force Multiplier
The journey from the backup folder to the final, integrated system of Ansible automation and a documentation website would have traditionally been a multi-month endeavor requiring specialized expertise in systems administration, network engineering, and web development.
With gemini-cli, it became an iterative, collaborative process of days. It acted as a force multiplier, translating high-level intent into expert-level code. It democratized the creation of complex infrastructure, removing the need for a specialized skillset and allowing the focus to remain on the desired outcome. The result is not just the code in the repositories, but a dramatic compression of the time required to gain value: a transformation from a manual, high-friction process to a documented, automated, and valuable platform.