This week our Director Alex Singleton participated in a roundtable discussion hosted in the House of Lords by Lord Errol, organised by the Connected Places Catapult, bringing together industry and public sector leaders with interests in geospatial data and insight.
Conversation ranged across data stewardship, skills pipelines, the language of business, digital twins, and the growing role of artificial intelligence. What emerged was a picture of a field that is technically rich, strategically undervalued, and badly in need of better connections to the people and organisations it exists to serve.
The following blog post details some of their thoughts…
The Central Challenge: Geospatial Is Not Yet Everyone’s Language
The framing question of the day was put directly: most data scientists in the UK are not taught to think geospatially. They are well-educated technically, but do not necessarily understand how to use spatial relationships to hold data together, or to think geographically. The systems that they build are often fragile as a result, relying on shared identifiers rather than shared location to link information. This resonates with a central tenet of our own Geographic Data Service: that geography is a glue that can bind disparate data together.
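To make the contrast between shared identifiers and shared location concrete, here is a minimal sketch of geography acting as glue, assuming GeoPandas and invented file and column names rather than any real dataset:

```python
# A minimal sketch of "geography as glue": linking two datasets that share
# no common identifier by using their spatial relationship instead.
# File names and column names are illustrative, not real datasets.
import geopandas as gpd

# Point records (e.g. incidents or assets) with coordinates but no area code
points = gpd.read_file("incidents.geojson").to_crs(epsg=27700)

# Polygon boundaries (e.g. neighbourhoods) carrying contextual attributes
areas = gpd.read_file("neighbourhoods.geojson").to_crs(epsg=27700)

# The join key is containment, not a shared identifier: each point inherits
# the attributes of the polygon it falls within.
joined = gpd.sjoin(points, areas, how="left", predicate="within")

print(joined[["incident_id", "neighbourhood_name"]].head())
```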
The contention put to the room was clear: geospatial knowledge should not be a specialism. It should be an ordinary, accessible capability available to any data scientist or IT professional. That it remains confined to a relatively small professional community represents one of the most significant barriers to extracting national value from the UK’s data infrastructure.
As someone trained in traditional Geographic Information Science, this sits in direct tension with a long-held axiom of the field: that “spatial is special.” The claim is fundamentally ontological: location is not simply another attribute to be stored in a column, but a property that structures how entities relate to one another. Proximity, adjacency, containment, and directionality all carry meaning that conventional data models tend to flatten or ignore entirely. For example, a postcode is not a place. A shared identifier is not a spatial relationship. These distinctions matter enormously when the goal is to understand how things in the world interact or work.
But the roundtable surfaced an uncomfortable corollary: if spatial is special, it has perhaps remained too special for too long. The distinctiveness that geospatial professionals have used to define and defend their field has also made it easy to exclude. When something is positioned as requiring unique expertise, unique software, and a unique intellectual tradition, the natural response from the broader data science and IT world is to step back and let the specialists handle it. The risk is that the field becomes ever more technically sophisticated and ever more institutionally isolated.
I don’t fully agree with this framing, but I would argue strongly that spatial thinking is foundational, not exceptional. The underlying principles, that where something is matters, that relationships between things are shaped by distance and proximity, and that geography is one of the most powerful keys for linking otherwise disconnected datasets, are not arcane knowledge. They are logical intuitions that any competent analyst could develop, given the right education and tools. The barrier is not innate complexity. It is an accident of how the field grew: through geography departments, specialist software vendors, and professional communities that developed their own vocabulary and standards in relative isolation from mainstream computing and data science.
The ambition should not be to diminish what makes spatial analysis distinctive, but to stop treating the fundamentals as guarded knowledge. Knowing that a water pipe crossing an electrical cable represents a fundamentally different relationship from one running parallel, or close but not touching, should not require a geography degree. It requires only that spatial relationships are recognised as meaningful, which is a principle, not a skill set. Until that principle is embedded in the default assumptions of data science, software engineering, and policy analysis, the value that geospatial approaches can offer will continue to depend on whether a geospatial specialist happened to be in the room.
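The pipe-and-cable example can be made concrete in a few lines. The sketch below uses Shapely with invented geometries; the point is simply that crossing, running parallel, and being near but not touching are distinct, queryable relationships rather than attributes in a table:

```python
# Illustrative only: three line geometries standing in for utility assets.
# Crossing, parallel, and "near but not touching" are different spatial
# relationships, each answerable with a simple predicate.
from shapely.geometry import LineString

cable = LineString([(0, 0), (10, 0)])          # electrical cable
pipe_crossing = LineString([(5, -5), (5, 5)])  # water pipe that crosses it
pipe_parallel = LineString([(0, 2), (10, 2)])  # pipe running parallel, 2 units away

print(cable.crosses(pipe_crossing))    # True  - a genuine conflict point
print(cable.crosses(pipe_parallel))    # False - never intersects
print(cable.distance(pipe_parallel))   # 2.0   - close, but not touching
```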
Several themes emerged from the discussion:
Talk the Language of Business, Not Technology
Executives do not think in datasets or coordinate systems. They think in terms of outcomes: resilience, return on investment, operational efficiency, risk reduction. This is a tension we navigate constantly through the Geographic Data Service. The datasets we broker and the spatial methods we apply are often technically complex, but the value they produce is only realised when it is understood in the terms that matter to the people who will act on it. When we produced the Access to Healthy Assets and Hazards index, the analytical substance was extensive spatial accessibility modelling, but what made it useful to public health teams and local authorities was that it told them, in plain and transparent terms, which neighbourhoods had the poorest access to the things that support healthy lives. The method was the means. The conversations that follow can then be about outcomes: where to intervene, how to allocate resources, what to prioritise, and so on.
Data Silos and the Duplication of Effort
A recurring frustration highlighted in the discussion was the extraordinary degree to which organisations duplicate effort: independently purchasing the same satellite imagery to solve near-identical problems, with each passing the cost on to consumers; or building predictive models in one region that cannot be trained on data from another, even where the technical and scientific rationale for sharing is overwhelming. Addressing precisely this kind of inefficiency is central to what the Geographic Data Service does. Our core function is to negotiate access to data, much of it commercially sensitive, and make it available with strong governance, so that it can be used by many. Rather than dozens of research teams or public bodies each approaching a data owner independently, duplicating negotiation effort and creating inconsistent access terms, we act as a single point of brokerage.
Trust, Provenance, and the Distributed Data Model
Rather than asking organisations to relinquish ownership of their data, a model is required that allows data to remain with its owner while being made available for modelling under strict governance frameworks. Transparency in licensing, legal clarity, and the involvement of regulators early in the process were all identified as necessary conditions for the trust that makes sharing possible.
This resonates with our own experience through the Geographic Data Service, where building trusted relationships with data owners has been a slow, deliberate process, sometimes spanning years before an organisation is willing to make sensitive data available, even within a secure research environment. The lesson is that trust is not a contract. It is a relationship, and it compounds over time. But trust alone is not sufficient. There also has to be a reason to share. Data owners need to see that making their data available produces something they value, whether that is research insight that they could not generate alone, policy influence, or evidence that informs decisions affecting their sector. Without such a feedback loop, and authentic engagement, governance frameworks remain theoretical and data stays where it is.
A broader principle, that data sovereignty and data utility need not be in conflict, was one of the more constructive notes of the afternoon. The technology to enable distributed sharing exists. The governance models are maturing. What remains is both the institutional will to adopt them and the patient work of demonstrating, case by case, that sharing produces returns worth the risk.
Artificial Intelligence and the Invisible Map
Artificial intelligence was present throughout the discussion, both as opportunity and as a source of new complexity. On one hand, AI is increasingly able to answer questions that previously required specialist geospatial intervention. On the other, agentic AI systems that answer business questions autonomously may render the underlying analytical logic invisible, which creates its own problems. The most important technical intervention identified in this context was not AI itself but metadata: making data discoverable, well-described, and structured in ways that allow AI systems to find, interpret, and use it reliably. Without that foundation, the risk of confident but wrong answers grows substantially.
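To make “well-described” slightly more concrete, here is a minimal sketch of the kind of structured catalogue record involved; the fields and values are purely illustrative and do not follow any particular metadata standard:

```python
# Illustrative only: a hypothetical catalogue record with the kind of fields
# that make a dataset findable and interpretable by people and AI systems.
dataset_record = {
    "title": "Neighbourhood access to green space",
    "description": "Modelled walking distance from residential postcodes "
                   "to the nearest publicly accessible green space.",
    "keywords": ["accessibility", "green space", "public health"],
    "spatial_coverage": "Great Britain",
    "spatial_resolution": "Lower layer Super Output Area (LSOA)",
    "temporal_coverage": "2024",
    "coordinate_reference_system": "EPSG:27700",
    "licence": "safeguarded; access by application",
    "access_conditions": "available via a secure research environment",
}
```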
This is something we are actively working on through the Geographic Data Service, where we are developing a semantic search interface to our extensive data catalogue. The aim is to allow users to describe what they are looking for in natural language and be matched to relevant datasets, even where the terminology they use differs from how the data was originally documented. Bridging that vocabulary gap between how a policy analyst frames a question and how a dataset was catalogued by its producer is precisely the kind of problem where well-structured metadata and AI can work productively together.
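As a minimal sketch of the idea, assuming an off-the-shelf sentence-embedding model (here the sentence-transformers library) and a handful of invented catalogue descriptions: the query and the metadata are embedded in the same vector space and matched by cosine similarity, so a match does not depend on shared vocabulary.

```python
# A minimal sketch of semantic search over catalogue metadata, assuming the
# sentence-transformers library; the catalogue entries are invented examples.
import numpy as np
from sentence_transformers import SentenceTransformer

catalogue = [
    "Small-area index of access to health-promoting features of the environment",
    "Annual counts of retail units by high street, derived from commercial data",
    "Modelled residential energy efficiency for dwellings in England and Wales",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = model.encode(catalogue, normalize_embeddings=True)

# A policy analyst's question, phrased nothing like the catalogue entries
query = "Which neighbourhoods have the worst access to GPs and green space?"
query_vector = model.encode([query], normalize_embeddings=True)[0]

# Cosine similarity; the vectors are normalised, so a dot product suffices
scores = doc_vectors @ query_vector
best = int(np.argmax(scores))
print(catalogue[best], scores[best])
```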
There are pedagogic implications here too. As AI-assisted tools increasingly allow analysts to interrogate spatial data through natural language rather than code, the barrier to performing technical geospatial work drops considerably. This is not a loss. But it does sharpen an existing problem: geospatial expertise by and large remains concentrated in a small professional community, and many universities still teach GIScience in mechanistic, tool-focused ways rather than as a problem-oriented subdiscipline. If the tools themselves are becoming easier to use, then what students and professionals most need is not more software training but a stronger grounding in the core principles of Geographic Information Science, which remain enduring and become increasingly relevant precisely as the technical barriers fall. When anyone can run an analysis, knowing which analysis to run, and why, matters more than ever. It is worth noting, however, that current AI systems still have a limited understanding of geography itself. They can process spatial data, but they do not yet reason well about the properties that make spatial data distinctive. Making AI spatially literate, not just spatially capable, remains a related and very important challenge.
Conclusion
Taken together, these shifts point in the same direction. As AI lowers the technical barriers to spatial analysis, and as data infrastructure matures to make high-quality geographic information more widely accessible, the distinguishing value of geospatial expertise moves upstream towards judgement, problem framing, and the ability to connect analytical capability to real-world need. Teaching students to operate software matters less when the software begins to operate itself; what endures is the capacity to think spatially, to understand why geographic context changes the nature of a problem, and to communicate that understanding to audiences who will never write a line of code. Making AI not just spatially capable but spatially literate is part of that challenge. But so is making the geospatial community itself more literate in the language of policy, governance, and public value. The problem owner will not come looking for a method. They will come looking for an answer. The Geographic Data Service’s mission is to be ready with one.
