Decommissioning is where you actually win the migration. If you only add the new stack but never shut down the old one, you inherit double cost, double complexity, and permanent confusion about what is true.
Decommissioning is usually treated as “later.” Later rarely comes. A better rule is that every wave should reduce the legacy surface area, not only build the new world.
Think of the migration as a controlled reduction of complexity. If the new stack ships but the old stack stays, you did not migrate the system; you only added another one.
Start by listing what exists, then validate it with usage. In most organizations the inventory falls into three buckets.
Consumers are the things that break when you remove or change data. Expect to find dashboards, reports, extracts, spreadsheets, manual uploads, downstream apps and APIs, and ML features.
Producers are the machinery that creates the data people depend on: pipelines and jobs, ingestion connectors, transformation logic, and the "temporary" scripts and manual fixes that became permanent.
Operations are the legacy surface area that keeps the old world alive: old data stores, old access pathways, and old operational routines such as on-call and incident workflows.
You can run decommissioning as a short, repeatable playbook.
Instrument the legacy system and collect evidence. Query logs, dashboard usage, and API calls are usually enough to start. The goal is to replace opinions with usage.
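To turn raw query logs into usage evidence, even a crude counter over log lines is enough to start. A minimal sketch, assuming the logs are available as plain text and you already know the table names you care about; the function name is an assumption:

```python
import re
from collections import Counter

def table_usage(log_lines, table_names):
    """Count how often each known table name appears in raw query log lines."""
    pattern = re.compile(
        r"\b(" + "|".join(map(re.escape, table_names)) + r")\b",
        re.IGNORECASE,
    )
    counts = Counter()
    for line in log_lines:
        for match in pattern.findall(line):
            counts[match.lower()] += 1
    return counts

logs = [
    "SELECT * FROM orders o JOIN customers c ON o.id = c.order_id",
    "select id from ORDERS",
]
counts = table_usage(logs, ["orders", "customers", "legacy_tmp"])
```

A table that never appears in ninety days of logs (here, `legacy_tmp`) is a very different conversation than one that appears hourly: the data settles the argument before anyone has to.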
Sort what you find into three groups: critical and actively used, used but replaceable, and not used or safe to retire.
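The sorting rule can be made explicit so it is applied consistently across teams. A minimal sketch; the 90-day window and the zero-usage threshold are illustrative assumptions to tune against your own traffic:

```python
def triage(queries_last_90d: int, has_replacement: bool) -> str:
    """Sort an asset into one of the three decommissioning groups."""
    if queries_last_90d == 0:
        return "retire"       # not used; safe to remove after a grace period
    if has_replacement:
        return "replaceable"  # used, but the new stack already covers it
    return "critical"         # actively used with no replacement yet
```

Writing the rule down as code has a side effect: the thresholds become something you can argue about and version, instead of a judgment call that differs per reviewer.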
Migrate only what is valuable. Remove, redesign, or retire what is not. This is where you avoid moving logic that nobody can explain.