When a GIS program fails, the post-mortem almost always blames the technology or the vendor. The software was too complex. The implementation partner didn't understand the utility's needs. The migration was rushed. The training was inadequate.
These things are sometimes true. But after watching a number of GIS programs succeed and fail over 25 years, I've come to believe that the technology explanation is almost always incomplete, and often wrong. The real failure is almost always visible in the org chart and the governance structure — usually long before go-live.
The pattern
It goes like this. A utility decides to modernize its GIS. A technology is selected — often ArcGIS Utility Network, increasingly with a UPDM-aligned schema. A consultant or systems integrator is brought in. The configuration work is done. The data is migrated. The system goes live.
Then, six months after go-live, the GIS starts drifting from reality. As-built records pile up in a backlog because nobody has been assigned to process them. Attribute data is inconsistent because different crews are using different conventions. Topology errors accumulate because nobody has responsibility for running the QA routines. Two years in, the organization is back where it started: a GIS that nobody trusts, now with a more expensive software license.
What failed? Not the technology. The technology works fine. What failed was the governance layer that was supposed to keep the technology working after the consultants left.
A platform nobody maintains is worse than no platform. At least a paper archive admits what it is.
What governance actually means
GIS governance is not a policy document. It's not a RACI matrix filed in a SharePoint folder and never read again. It's the set of real, operational decisions that determine how data gets created, reviewed, corrected, and maintained — and who is accountable when it isn't.
Effective GIS governance answers these questions with specificity:
- Ownership: Which role, by title and by name, is accountable for the accuracy of each asset class in the GIS? Not "the GIS team" — a person.
- As-built workflow: When construction or maintenance work changes the network, who captures that change, in what system, within what timeframe, reviewed by whom?
- Quality standards: What does "acceptable" data look like? What attributes are mandatory, what are optional, what conventions apply to naming and classification?
- QA cadence: How often is the GIS audited for topology errors, attribute gaps, and network inconsistencies? Who runs the audit, who reviews the findings, who is responsible for remediation?
- Change control: How does a schema change get proposed, reviewed, and approved? Who has authority to modify domain values, add attributes, or deprecate object classes?
- Escalation: When a data quality problem is discovered that crosses department boundaries — a GIS layer that contradicts an engineering drawing, a field observation that doesn't match the as-built — what's the process for resolving it?
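Several of these questions, particularly the quality standards and QA cadence, lend themselves to automation once the organization has actually agreed on the answers. As a minimal sketch of what a mandatory-attribute audit might look like (the asset classes, attribute names, and rules here are hypothetical, not from any real utility schema):

```python
# Hypothetical sketch: audit GIS asset records against agreed quality standards.
# Asset classes, attribute names, and rules are illustrative placeholders.

MANDATORY_ATTRIBUTES = {
    "transformer": ["asset_id", "install_date", "kva_rating", "phase"],
    "pole": ["asset_id", "install_date", "material", "height_ft"],
}

def audit_records(records):
    """Return a list of (asset_id, problems) findings.

    A record fails the audit if any mandatory attribute for its asset
    class is absent or empty. Unrecognized asset classes are flagged too,
    since they indicate a gap in the standards themselves.
    """
    findings = []
    for rec in records:
        required = MANDATORY_ATTRIBUTES.get(rec.get("asset_class"))
        if required is None:
            findings.append((rec.get("asset_id", "<unknown>"),
                             ["unrecognized asset_class"]))
            continue
        missing = [attr for attr in required if not rec.get(attr)]
        if missing:
            findings.append((rec["asset_id"], missing))
    return findings

if __name__ == "__main__":
    sample = [
        {"asset_class": "transformer", "asset_id": "TX-001",
         "install_date": "2021-04-12", "kva_rating": 50, "phase": "ABC"},
        {"asset_class": "pole", "asset_id": "PL-104",
         "install_date": "", "material": "wood", "height_ft": 40},
    ]
    for asset_id, problems in audit_records(sample):
        print(f"{asset_id}: missing {', '.join(problems)}")
```

The script is trivial; the hard part is the dictionary at the top, because filling it in forces exactly the conversations about mandatory attributes and ownership that most programs defer.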
Most utilities entering a GIS modernization program cannot answer these questions. The failure to answer them before go-live is the governance failure.
Why it gets skipped
Governance work is unglamorous. It doesn't produce a demo. It doesn't generate screenshots for a project update. It requires difficult conversations about accountability — conversations that most organizations prefer to defer.
It also tends to surface organizational tensions that already exist but haven't been forced into the open. GIS governance intersects with the interests of engineering, operations, field services, IT, and asset management. Each of these departments has opinions about how the data should be structured and who should control it. Designing governance means adjudicating those tensions — and some of them are genuinely hard to resolve.
Consulting firms often collude in the avoidance, because governance design is hard to scope, hard to bill, and easy for a client to push back on as unnecessary. It's much easier to deliver a configured system and a training package and call the engagement complete. The governance problems show up eighteen months later, after the consultant is gone.
What good governance design looks like
The best GIS governance structures I've seen share a few characteristics.
They're designed before the technology, not after it. The ownership structure, as-built workflows, and quality standards need to be established while there's still time to configure the system around them. Governance retrofitted onto a deployed system is always a compromise.
They're embedded in existing workflows, not added on top of them. If the as-built process requires a separate GIS update step after the maintenance management system is updated, it will be skipped. The GIS needs to be on the path of least resistance for field crews and engineers — not an additional burden.
They have named accountabilities, not departmental ones. "The GIS team is responsible for data quality" means nobody is responsible for data quality. A named role with defined accountability and a manager who reviews performance against it is a different thing.
They include an enforcement mechanism. This doesn't mean punishment — it means that data quality problems have a defined escalation path that ends with someone who has authority to resolve them. Without enforcement, standards become suggestions.
They're documented, but not buried. Governance documentation that lives in a shared drive and is never referenced is not governance. The standards need to be accessible in the systems where the work happens — embedded in forms, surfaced in training, visible in QA reports.
The realistic timeline
Good governance design takes time. For a mid-sized utility undertaking a full GIS modernization, the governance design phase — interviews with stakeholders, draft structure, review and revision, approval — typically runs four to eight weeks in parallel with the technical work. It's not a separate project phase that delays go-live; it's a workstream that has to happen alongside the configuration work.
The deliverables are concrete: a data ownership matrix, a documented as-built workflow, a set of quality standards with acceptance criteria, a QA schedule and reporting structure, a change control process, and an escalation framework. These documents are worth less than the conversations that produced them; the conversations are where the organizational agreements actually get made. But the documents make the agreements durable.
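A data ownership matrix, in particular, is more useful kept machine-readable than filed as a document, because accountability then becomes queryable from the systems where the work happens. A minimal sketch, with hypothetical names, titles, and asset classes:

```python
# Hypothetical sketch of a data ownership matrix: each asset class maps to a
# named, titled owner. Names, titles, and classes are illustrative placeholders.

OWNERSHIP_MATRIX = {
    "transformer": {"owner": "J. Rivera", "title": "Distribution Data Steward"},
    "pole":        {"owner": "J. Rivera", "title": "Distribution Data Steward"},
    "gas_main":    {"owner": "K. Osei",   "title": "Gas Asset Records Lead"},
}

def owner_of(asset_class: str) -> str:
    """Return the accountable person for an asset class, or fail loudly.

    Raising on an unmapped class enforces the principle that no asset
    class lives in the GIS without a named owner.
    """
    try:
        entry = OWNERSHIP_MATRIX[asset_class]
    except KeyError:
        raise ValueError(
            f"No named owner for asset class '{asset_class}': governance gap"
        ) from None
    return f"{entry['owner']} ({entry['title']})"
```

Note that the lookup maps to a person, not a department; "the GIS team" deliberately cannot appear as a value here.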
The question to ask your implementation partner
When evaluating a GIS implementation proposal, one question reliably distinguishes governance-serious consultants from technology-focused ones: "What will our data governance structure look like six months after go-live, and what's your role in designing it?"
If the answer focuses on training and documentation handover, the governance is being left to you. That might be fine if you have the internal capacity to design it. But most utilities undertaking their first major GIS modernization don't — and the gap between "we have a training package" and "we have a functioning governance structure" is where most programs eventually fail.
Governance is not glamorous. It's also not optional. A GIS that works well at go-live and degrades over eighteen months is a failed implementation, regardless of what the acceptance criteria said.