“One problem with the instinct to constantly buy more and build more is an immensely short-term focus, with little interest in the organizational and budgetary demands that go along with keeping digital infrastructure in good shape.”

We see this issue across systems administration, development, DAM (digital asset management), digital preservation, and digital archiving labor.

Data work constantly grapples with new and emerging tech coming on the market and disrupting legacy workflows. We budget for costly licenses, invest in paid training, and spend resources (time, hiring, role assignments) implementing the new tech. But the digital infrastructures that support these new, costly technologies continue to degrade for lack of maintenance and repair.

Digital infrastructures - systems, databases, and AI tools - make up the foundation of how we manage data.

It doesn’t matter if you have the latest tech if your systems can’t turn on or be accessed.

It’s not sustainable or realistic to constantly buy/implement new technologies.

The article pitches the idea of “decomputing” - focusing more on the maintenance and repair of existing systems rather than buying every new technology that emerges.

Where do we start? Data work relies on structures like databases, metadata, taxonomies, ontologies, APIs, VPNs, and virtual servers. Often each of these structures is maintained and repaired in a silo (e.g. the digital asset manager cares for interoperable metadata, the developer updates the API).

Maintenance and repair of these systems could look like more interdepartmental workflows, more customization and configuration of existing databases, and more hiring for in-house systems/developer knowledge among data roles.
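
To make that concrete, here is a minimal sketch of the kind of routine check a data team might run on infrastructure it already owns - confirming an existing API still answers and that metadata terms still match a shared controlled vocabulary - rather than buying something new. All specifics (the API base URL, the /records endpoint, the vocabulary file, the field names) are hypothetical assumptions for illustration, not anything from the article.

import json
import urllib.request

API_BASE = "https://dam.example.org/api"       # hypothetical DAM API
VOCAB_FILE = "controlled_vocabulary.json"      # hypothetical local list of approved terms

def api_is_reachable(url: str, timeout: int = 10) -> bool:
    """Check that an existing endpoint still responds, instead of assuming it does."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def invalid_subject_terms(records: list[dict], vocabulary: set[str]) -> list[str]:
    """Flag records whose 'subject' values drift outside the shared vocabulary."""
    problems = []
    for record in records:
        for term in record.get("subject", []):
            if term not in vocabulary:
                problems.append(f"{record.get('id', '?')}: unrecognized term '{term}'")
    return problems

if __name__ == "__main__":
    with open(VOCAB_FILE) as f:
        vocabulary = set(json.load(f))

    if not api_is_reachable(f"{API_BASE}/records"):
        print("API unreachable - flag for systems/developer follow-up")
    else:
        with urllib.request.urlopen(f"{API_BASE}/records") as resp:
            records = json.load(resp)
        for issue in invalid_subject_terms(records, vocabulary):
            print(issue)

The point isn’t this particular script; it’s that a check like this sits across roles - the metadata specialist owns the vocabulary, the developer owns the endpoint - and maintaining it together is exactly the interdepartmental workflow described above.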

What other ways could we better maintain and repair our digital infrastructure? How does data labor connect to sysadmin if it’s not a developer role? What ways do we all contribute to maintaining and repairing our infrastructure?

Read the article: https://www.policyready.ca/policymaking-today/2025/12/16/less-as-more-decomputing-in-the-age-of-tech-accelerationism