In 2019, I walked into a national fleet carrier as its first Director of Digital Transformation. The company had been moving freight across North America for decades. The systems running it were built to match: a monolithic ERP, disconnected fleet telematics, manual dispatch workflows, and a data architecture that predated the cloud era. My job was to change all of it.

The first thing I did was not write a strategy deck. It was draw a map.

The problem with transformation programs

Most digital transformation programs fail in the same way. They produce a beautiful future-state architecture diagram, get executive approval, and then collapse under the weight of their own ambiguity. Teams don't know how their work connects to anything else. Dependencies surface late. Unplanned work — the incidents, the integration failures, the compliance emergencies — consumes the capacity that was supposed to go toward progress.

Gene Kim's The Phoenix Project gave me the vocabulary to name this problem: the four types of work. Business projects (visible, funded, prioritized). Internal IT projects (the transformation work itself — underfunded and invisible to the business). Changes (the output of the first two). And unplanned work — what Kim calls the silent killer. Every organization I've been in dramatically underestimates how much of their capacity is consumed by Type 4. Until you measure it, you can't reduce it. And you can't make the case for transformation investment without showing what unplanned work costs.
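Making Type 4 visible starts with tagging every ticket by work type and totaling the hours. A minimal sketch of that bookkeeping, assuming a simple (work type, hours) log; the labels and numbers below are illustrative, not from our actual ticketing system:

```python
from collections import Counter

# Kim's four types of work. These labels are illustrative,
# not the tags our real ticketing system used.
WORK_TYPES = {"business", "internal", "change", "unplanned"}

def capacity_share(tickets):
    """Return the fraction of logged hours consumed by each work type.

    `tickets` is an iterable of (work_type, hours) pairs.
    """
    hours = Counter()
    for work_type, h in tickets:
        if work_type not in WORK_TYPES:
            raise ValueError(f"unknown work type: {work_type}")
        hours[work_type] += h
    total = sum(hours.values())
    return {t: hours[t] / total for t in sorted(hours)}

# An example week: unplanned work quietly eating a third of capacity.
week = [
    ("business", 40), ("business", 35),
    ("internal", 20),
    ("change", 15),
    ("unplanned", 30), ("unplanned", 25),
]
shares = capacity_share(week)
print(f"unplanned: {shares['unplanned']:.0%}")  # → unplanned: 33%
```

The point is not the arithmetic; it is that a number like "a third of our capacity goes to unplanned work" is the argument for transformation investment, and you cannot produce it without tagging the work.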

The second insight from Kim: there is always one constraint limiting the entire system. In our case, it was deployment. Releases happened quarterly, manually, and with significant risk. Development teams could build faster than we could ship. Adding more developers just made the queue at the deployment bottleneck bigger. The Theory of Constraints tells you: improve the constraint first, or your improvements are an illusion.

Naming things after stars

I called the program The Night Sky. Not because the name was clever, but because the metaphor was structurally useful.

The hierarchy was this:

The Night Sky program map. Center outward: program → asterisms (portfolios) → constellations (projects) → stars (products).

This maps cleanly onto the bounded context model from Sam Newman's Building Microservices. A bounded context is a portion of the domain where a particular model applies and is internally consistent. In trucking, "trip" means something different to dispatch, billing, safety compliance, and fleet maintenance. The astronomy naming honoured those differences rather than forcing a single unified model.

The Summer Triangle held business-facing projects: driver experience, customer experience, NTS portal. The Winter Triangle held internal platform and infrastructure work: the EKS cluster, CI/CD pipelines, service mesh, event streaming. The Spring Triangle held change management: training, executive communications, process documentation. Centaurus held unplanned work — a named home for the interruptions that would inevitably come, so they didn't silently consume the other portfolios.

That last one was not an accident. Naming unplanned work as a portfolio — giving it a constellation of its own — made it visible. Visible work can be measured. Measured work can be managed.

The architectural logic

Newman is blunt about microservices: don't start with them. A monolith is a sensible starting point. The danger isn't the monolith — it's the distributed monolith, which combines the network overhead and operational complexity of microservices with the tight coupling of a monolith. Teams that draw microservice boundaries before they understand their domain will build a distributed monolith every time.

We used the strangler fig pattern. Rather than a big-bang rewrite, we extracted services incrementally. New capabilities went into new services. Existing functionality migrated constellation by constellation, star by star. The monolith shrank over time. We chose extraction targets by finding the seams — the places where the coupling was already weakest, where a team had clear ownership, where the bounded context was unambiguous.
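The mechanics of a strangler fig are simple: a routing facade in front of the monolith sends extracted paths to their new services and everything else to the old code. A minimal sketch of that facade (in production this logic lives in an edge proxy or API gateway; the route table and service names here are hypothetical, not the carrier's actual services):

```python
# Strangler-fig routing facade: routes migrate into EXTRACTED_ROUTES
# one at a time as services are carved out of the monolith.

MONOLITH = "monolith"

# Hypothetical extracted services, one bounded context each.
EXTRACTED_ROUTES = {
    "/trips/dispatch": "dispatch-service",
    "/billing/invoices": "billing-service",
}

def route(path: str) -> str:
    """Send extracted prefixes to their new service; everything else
    still hits the monolith, which keeps shrinking over time."""
    for prefix, service in EXTRACTED_ROUTES.items():
        if path.startswith(prefix):
            return service
    return MONOLITH

assert route("/trips/dispatch/123") == "dispatch-service"
assert route("/drivers/42/profile") == MONOLITH
```

The design choice that matters is that the facade is the only place that knows the migration state. Callers never change; the monolith shrinks one prefix at a time.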

The infrastructure portfolio (#Winter Triangle) existed precisely to make this possible. You cannot extract services into chaos. You need a platform: container orchestration (AWS EKS), a CI/CD pipeline (GitHub Actions), a service mesh (App Mesh), distributed tracing (X-Ray), a centralized secrets manager. These were not the visible deliverables. They were the foundation without which none of the visible deliverables could ship independently.

Conway's Law will find you

Melvin Conway wrote in 1968: organizations which design systems are constrained to produce designs which are copies of the communication structures of those organizations. This is not a suggestion. It is a forcing function.

If a single cross-functional IT team owns dispatch, billing, fleet, driver experience, and compliance — Conway's Law will build you a monolith regardless of what the architecture diagram says. Teams share databases because it's easier. They deploy together because coordinating independent releases is harder than a joint release. They build implicit dependencies because informal communication within a team is cheap.

The Inverse Conway Maneuver is the deliberate application of this principle in reverse: restructure teams to match the architecture you want to build. A team that owns a star owns it end to end — the data, the API, the UI, the deployment pipeline, the on-call rotation. The constellation boundary is the team boundary. The architecture follows the org, and the org is designed to produce the architecture.

This was the hardest part of the program. Technical decomposition is an engineering problem. Organizational decomposition is a people and political problem. Stars could be named before teams were restructured. But the architecture wouldn't stabilize until the ownership did.

Measuring progress

The DORA metrics (deployment frequency, lead time for changes, time to restore service, change failure rate) gave us a measurement system that was technology-agnostic but engineering-specific. They answer the question that every CFO eventually asks: how do you know this is working?

At the start of the program, we were low performers on every dimension. Deployments were quarterly. Lead time from commit to production was measured in months. Incidents took days to resolve. The DORA research shows that high performers do not trade speed for stability — they are faster and more stable. Throughput and reliability are not a tradeoff. That finding is the most important thing to put in front of leadership, because most legacy IT organizations believe they are slow because they are careful. They are slow because their system is broken.

The four DORA metrics became the north star. Every investment in the Night Sky was evaluated against its impact on one of them. CI/CD pipelines: deployment frequency and lead time. Automated testing: change failure rate. Incident response runbooks and observability tooling: MTTR. Not every initiative could move every metric, but every initiative had to move at least one.
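Computing the four metrics takes nothing more than deployment and incident timestamps. A hedged sketch with illustrative records rather than real data (field names and dates are made up for the example):

```python
from datetime import datetime, timedelta
from statistics import median

# Illustrative records: (deployed_at, commit_at, failed) per deployment.
deploys = [
    (datetime(2021, 3, 1, 10), datetime(2021, 2, 26, 9), False),
    (datetime(2021, 3, 3, 15), datetime(2021, 3, 2, 11), True),
    (datetime(2021, 3, 5, 9),  datetime(2021, 3, 4, 16), False),
]
# Illustrative records: (opened_at, restored_at) per incident.
incidents = [
    (datetime(2021, 3, 3, 15), datetime(2021, 3, 3, 18)),
]
window_days = 7

# The four DORA metrics over the window.
deploy_frequency = len(deploys) / window_days           # deploys per day
lead_time = median(d - c for d, c, _ in deploys)        # commit → production
change_failure_rate = sum(f for *_, f in deploys) / len(deploys)
mttr = sum((r - o for o, r in incidents), timedelta()) / len(incidents)

print(deploy_frequency, lead_time, change_failure_rate, mttr)
```

In practice these records come from the CI/CD system and the incident tracker, which is itself an argument for building the platform first: the metrics are a free byproduct of instrumented pipelines.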

What the naming convention did that I didn't expect

When I designed the astronomy hierarchy, I was thinking about program structure and portfolio management. What I didn't anticipate was what it did for culture.

People refer to their work differently when it has a name that isn't a Jira epic number. A developer on the #Rigel event streaming constellation had a different relationship to their work than someone who was "working on the data ingestion backlog." The name created identity. Identity created ownership. Ownership created accountability — not the bureaucratic kind, but the kind that comes from caring about something you've put your name on.

In a transformation program that spans three years and touches every part of the organization, coherence is underrated. The Night Sky gave people a shared map. When someone in Fleet asked what the Driver Experience team was building, the answer wasn't a project code or a slide deck — it was a constellation in the Summer Triangle. Everyone knew where that was on the map.

Three things I'd do differently

First: invest in the constraint earlier. We spent the first six months building product features before the deployment pipeline was mature enough to release them reliably. The Theory of Constraints is right: anything produced faster than the constraint can process it is inventory, not throughput. We built inventory we couldn't ship.

Second: make unplanned work more visible from day one. We named the Centaurus portfolio for unplanned work, but we didn't measure it rigorously until the end of year one. The political case for platform investment — the investment in #Winter Triangle — would have been stronger if we'd quantified unplanned work cost from the start.

Third: run the Inverse Conway Maneuver in parallel with the architecture work, not after it. We let team boundaries lag service boundaries by about a year. That gap produced integration problems that the architecture hadn't anticipated, because the informal communication across teams created dependencies that the API contracts didn't capture.

The Night Sky ran from 2019 to 2023. By the time I left the company, we had migrated five production workloads to microservices on AWS EKS, deployed Amazon Connect and Lex for driver and customer contact, and built the CI/CD and observability foundation that would let the next team move faster than we could. The deployment constraint was no longer quarterly. The stars were shipping independently.


The frameworks referenced in this post: The Phoenix Project by Gene Kim, Kevin Behr, and George Spafford; Building Microservices by Sam Newman (O'Reilly); the DORA State of DevOps Report; and Conway's Law as extended by the Inverse Conway Maneuver.