Most data management programs fail because they assume all data deserves the same level of attention. Teams try to improve everything, spread effort thin across thousands of datasets, and end up with lots of activity but little business impact.
A value-based classification fixes this by making one decision explicit: which data is worth serious, ongoing investment, and which data should stay lightweight.
The idea that every dataset should be treated as a “data product” is attractive, especially when it comes with promises of tooling and governance at scale.
In practice it creates two predictable problems:

1. Effort spreads thin: teams apply the same heavyweight ceremony to thousands of datasets, most of which few people use.
2. Impact stalls: the organization generates plenty of governance activity but little measurable business outcome.
The root issue is simple: value is not evenly distributed. A small subset of datasets powers repeatable decisions and becomes shared infrastructure across teams. Most datasets do not.
Use a three-tier portfolio so effort matches value.
Data products are the “bestsellers.” They drive critical outcomes and are reused across teams and use cases.
What “product treatment” usually means in practice:

- A named owner accountable for the dataset
- Documented schema, semantics, and intended uses
- Explicit quality and availability commitments (SLAs/SLOs)
- Monitoring and change management, so consumers can depend on it
Quick test: If this dataset disappeared or broke, would the organization notice within weeks?
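The triage above can be sketched as a simple heuristic. Everything below is illustrative: the `Dataset` fields, the thresholds, and the names of the two lower tiers are assumptions for the sketch, not definitions from this article — only the top tier (“data products”) is named in the text.

```python
from dataclasses import dataclass

@dataclass
class Dataset:
    name: str
    consuming_teams: int      # how many teams actively reuse it
    decision_critical: bool   # feeds repeatable business decisions

def classify(ds: Dataset) -> str:
    """Hypothetical three-tier triage: match investment to value."""
    # Bestsellers: broadly reused AND decision-critical -> full product treatment
    if ds.decision_critical and ds.consuming_teams >= 3:
        return "product"
    # Middle tier (illustrative name): some reuse or criticality -> light governance
    if ds.decision_critical or ds.consuming_teams >= 2:
        return "managed"
    # Everything else stays lightweight: minimal upkeep, no product ceremony
    return "lightweight"

print(classify(Dataset("orders", consuming_teams=5, decision_critical=True)))
```

The point of the sketch is the shape of the decision, not the exact thresholds: any signal that approximates the quick test (“would the organization notice within weeks?”) can replace the fields used here.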