Who Shapes the Future of AI — Markets, Policy, or Culture?
This panel explores how the future of AI is shaped across markets, policy, and culture. Drawing from perspectives in enterprise, government, and education, it examines the structural tensions between systems moving at different speeds, the role of trust in adoption, and why participation matters in determining how AI evolves.

When Different Systems Sit at the Same Table
At Global Summit Vancouver 2026, this question was explored through the perspectives of people working directly inside the systems shaping AI today. The conversation was moderated by Kris Krüg, Executive Director of the BC + AI ecosystem, who has long been involved in connecting local tech communities, founders, and broader innovation networks.
The panel brought together voices from across different domains. Gayathri Narayana, leading product at SAP, works closely with enterprise teams translating AI into real business value. Gokhan Basaran operates within the federal innovation ecosystem, connecting founders, technology teams, and public programs, with a direct view into how policy and innovation interact. Adina Gray, founder of PurpleOwl AI, focuses on education and AI literacy across communities, spanning universities, professionals, and broader public engagement. Colin Knight, CFO of the City of Vancouver, approaches AI from the perspective of public infrastructure and governance, balancing efficiency, risk, and long-term civic impact.
Each of them encounters AI in a different context. That difference shaped the discussion from the start and grounded it in real-world decisions rather than abstract debate.

Three Systems Moving at Different Speeds
A clearer structure began to emerge as the conversation unfolded. AI is shaped by multiple systems evolving at the same time, each operating with its own pace and priorities: markets, policy, and culture.
Markets are moving the fastest. Capabilities build on top of each other, quickly moving from research to product to workflow. New tools are adopted in real-world environments before institutions fully absorb their implications, and each iteration shortens the cycle even further.
Policy operates through a different logic. It absorbs risk, balances competing interests, and formalizes boundaries. That process takes time and relies on deliberation, which gives it a steadier pace.
Culture influences how technology is understood and accepted. It shapes whether people are willing to experiment, whether they trust the outcomes, and whether these tools become part of everyday behavior.
As these systems move together, tension naturally appears. Technology continues to advance, institutions work to keep up, and individuals adjust how they interpret and use these tools. What emerges is an ongoing negotiation between systems rather than a single linear trajectory.

The Challenge Is How Systems Stay Connected
Differences in speed often prompt calls for alignment. In practice, keeping these systems fully synchronized is difficult to sustain over time.
Technology will continue to move ahead due to its iterative nature, while governance frameworks remain grounded in stability and accountability. Over-tight alignment can limit experimentation, while complete separation allows risks to accumulate in ways that are harder to address later.
A more practical direction is beginning to take shape through connection. The question becomes how changes in technology are interpreted, and how policy frameworks remain applicable as those changes unfold.
This is where intermediary layers play an important role. Accelerators, applied research organizations, and cross-sector institutions operate close enough to innovation to understand it, and close enough to governance to translate it. Their value lies in maintaining continuity between systems that otherwise move independently.
Trust Shapes Whether Technology Is Used at All
Beyond capability, trust emerged as a central theme throughout the discussion.
In Canada, AI adoption has been relatively cautious, especially at the enterprise level, while expectations for regulation remain strong. In part, this reflects how new technologies are introduced into environments where existing systems already function with a degree of stability.
AI enters spaces where trust has already been established. As a result, new tools are evaluated within those existing structures rather than in isolation.
In many emerging economies, the dynamic looks different. AI is often seen as a way to expand access, enabling education, healthcare, and services to reach broader populations. These differences are shaped by baseline conditions rather than awareness.
This creates a tension of its own: expectations for clear rules are high, while confidence in how those rules are implemented can vary. Adoption, as a result, becomes closely tied to institutional credibility.
Trust builds over time through consistent alignment between governance, outcomes, and lived experience. It is not driven by messaging alone.

AI Is Also Reshaping How People Think
Alongside these external dynamics, AI introduces its own internal tension.
On one side, it significantly expands productivity and creativity. Tasks that once required substantial time can now be completed more efficiently, and access to information has changed how people approach problem-solving.
At the same time, deeper questions are surfacing. Data ownership, embedded bias, energy consumption, and the long-term effects on human cognition are becoming part of the conversation.
One area of concern raised during the panel focused on how reliance on external systems may gradually influence independent judgment. When answers are generated quickly and appear reliable, the tendency to question them can decrease over time.
In parallel, advances in agentic AI continue to extend automation into more complex domains, from scientific research to large-scale education. These developments unfold simultaneously, adding layers of complexity to how AI is experienced and understood.

Progress Continues, With or Without Consensus
Recent attempts to slow down AI development have shown how difficult it is to pause technological momentum. When GPT-4 was released, an open letter calling for a temporary moratorium on frontier model training gained attention across parts of the industry, yet the broader trajectory remained unchanged.
This moment highlighted a shift in focus. Attention is moving away from whether progress can be paused, toward how it is shaped as it continues.
In a system that keeps evolving, stepping away does not create stability. It reduces the ability to influence direction.
Direction Is Shaped by Participation
With markets moving quickly, it is easy to assume that they determine the outcome. The discussion pointed toward a more layered reality.
Direction is shaped by participation over time.
This pattern is already visible in education. Institutions have moved from early attempts to restrict AI toward integrating it into teaching and learning. Direct engagement has led to adaptation.
A similar shift is taking place in the public sector. Conversations are evolving from whether AI should be used toward how it can be applied within existing systems. In Vancouver, this includes exploring how AI can support public services while maintaining accountability and public value.
Participation influences how systems evolve. When engagement broadens, perspectives expand. When participation narrows, decision-making concentrates.

Continue the Conversation
The conversation around AI is still unfolding, and its direction is shaped in real time by those who choose to engage with it.
What becomes possible depends on how markets, policy, and culture continue to interact, and on who stays involved as those systems evolve.
If you’re thinking about where you fit into this, you’re already part of the process.
For a deeper look at the ideas explored at Global Summit Vancouver, you can find the full recap on our blog.
You can also subscribe below to stay updated on upcoming conversations and our next Summit.