Decommissioning is where you actually win the migration. If you only add the new stack but never shut down the old one, you inherit double cost, double complexity, and permanent confusion about which system is the source of truth.

Decommissioning is usually treated as “later.” Later rarely comes. A better rule is that every wave should reduce the legacy surface area, not only build the new world.

The decommissioning mindset

Think of the migration as a controlled reduction of complexity. If the new stack ships but the old stack stays, you did not migrate the system; you only added another one.

What to decommission (a practical inventory)

Start by listing what exists, then validate it with usage. In most organizations the inventory falls into three buckets.

Consumers (often the hidden blockers)

Consumers are the things that break when you remove or change data. Expect to find dashboards, reports, extracts, spreadsheets, manual uploads, downstream apps and APIs, and ML features.

Data production

This is the machinery that creates the data people depend on: pipelines and jobs, ingestion connectors, transformation logic, and the “temporary” scripts and manual fixes that became permanent.

Platforms and contracts

This is the legacy surface area that keeps the old world alive: old data stores, old access pathways, and old operational routines such as on-call and incident workflows.
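The three buckets above can be sketched as a minimal inventory model. This is an illustrative sketch, not a prescribed schema: the asset names, owners, and fields are all hypothetical.

```python
from dataclasses import dataclass

# Hypothetical inventory model for the three buckets described above:
# consumers, data production, and platforms/contracts.
@dataclass
class Asset:
    name: str
    bucket: str   # "consumer", "production", or "platform"
    kind: str     # e.g. "dashboard", "pipeline", "data store"
    owner: str = "unknown"

inventory = [
    Asset("sales_dashboard", "consumer", "dashboard", owner="finance"),
    Asset("nightly_etl", "production", "pipeline", owner="data-eng"),
    Asset("legacy_warehouse", "platform", "data store"),
]

# Group by bucket to see the legacy surface area at a glance.
by_bucket = {}
for a in inventory:
    by_bucket.setdefault(a.bucket, []).append(a.name)
```

Even a flat list like this forces the question "who owns it?" early, which pays off when you later need sign-off to retire something.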

A simple decommissioning sequence

You can run decommissioning as a short, repeatable playbook.

1) Freeze and measure usage

Freeze the legacy surface (no new assets), then instrument it and collect evidence. Query logs, dashboard usage, and API calls are usually enough to start. The goal is to replace opinions with usage data.
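A minimal sketch of step 1, assuming you can export query-log records as (asset, timestamp) pairs. The field names, window size, and sample data are hypothetical; the point is to turn raw access logs into per-asset evidence.

```python
from collections import Counter
from datetime import datetime, timedelta

# Hypothetical query-log export: (asset name, access timestamp).
log = [
    ("sales_dashboard", datetime(2024, 5, 1)),
    ("sales_dashboard", datetime(2024, 5, 20)),
    ("old_extract", datetime(2023, 1, 15)),
]

def usage_summary(log, now, window_days=90):
    """Count accesses per asset within a recent window.

    Assets that appear in the log but have zero recent accesses
    are early retirement candidates.
    """
    cutoff = now - timedelta(days=window_days)
    recent = Counter(name for name, ts in log if ts >= cutoff)
    all_assets = {name for name, _ in log}
    return {name: recent.get(name, 0) for name in all_assets}

summary = usage_summary(log, now=datetime(2024, 6, 1))
```

Note that the summary keeps zero-count assets visible rather than dropping them: "present in the inventory but never accessed" is exactly the evidence you are looking for.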

2) Classify assets

Sort what you find into three groups: critical and actively used, used but replaceable, and not used or safe to retire.
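The three-group sort in step 2 can be expressed as a simple rule. The thresholds and inputs here are illustrative assumptions, not fixed criteria; tune them to your own usage data and replacement coverage.

```python
def classify(recent_uses, has_replacement):
    """Sort an asset into one of the three groups from step 2.

    recent_uses: access count from the usage measurement in step 1.
    has_replacement: whether the new stack already covers this asset.
    """
    if recent_uses == 0:
        return "retire"        # not used, or safe to retire
    if has_replacement:
        return "replaceable"   # used, but the new stack covers it
    return "critical"          # actively used, no replacement yet
```

In practice the "has_replacement" input is the hard part: it usually requires a human mapping from each legacy asset to its new-stack equivalent, not just a query.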

3) Migrate or kill

Migrate only what is valuable. Remove, redesign, or retire what is not. This is where you avoid moving logic that nobody can explain.
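Step 3 then becomes a mechanical mapping from classification to action. The action names are hypothetical labels for the ideas in the text, not a standard vocabulary.

```python
def decide(classification):
    """Map a step-2 classification to a step-3 action (labels illustrative)."""
    return {
        "critical": "migrate",               # valuable and in use: move it properly
        "replaceable": "redirect-then-retire",  # point consumers at the new stack first
        "retire": "remove",                  # do not migrate logic nobody can explain
    }[classification]
```

Making the decision table explicit like this also gives you something to review with asset owners before anything is deleted.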