When intelligence learns to listen
The first time I heard the phrase “agentic AI,” it felt clinical. A neat term for systems learning to act on their own. But as I followed the conversations around this year’s AI for Good Summit, the term began to feel less mechanical. It carried weight, questions, even tension.
Speakers like Meredith Whittaker reminded us that AI isn’t an abstract force of intelligence but a reflection of power: computation fueled by data and driven by human intent. I began to sense a pattern: intelligence is no longer defined by precision but by participation, by how technology enters the rhythm of human intention.
1. Recalibrating “Intelligence”: From Predictive to Participatory
Much of what dominated this year’s AI discourse was not whether machines could think but how they would act. The shift toward agentic AI (models capable of making decisions, initiating actions and collaborating with other systems) marks a quiet turning point. Intelligence is evolving from a passive pattern-matching function into something almost conversational: AI that doesn’t just answer but assists, plans and sometimes even persuades.
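To make that shift a little more tangible, here is a deliberately small sketch of the pattern: a system that observes a situation, plans, and then acts, rather than merely answering. The assistant, its rules, and the calendar scenario are all invented for illustration, not drawn from any real agent framework.

```python
# A toy sketch of the "agentic" pattern described above: a model that
# doesn't just answer, but observes a situation, plans, and initiates an action.
# Everything here (the class, the scenario, the rules) is invented for illustration.

from dataclasses import dataclass


@dataclass
class Situation:
    meetings_today: int
    unread_messages: int


class ToyAssistant:
    def plan(self, s: Situation) -> str:
        """Pick an action instead of merely reporting the state."""
        if s.unread_messages > 20 and s.meetings_today > 3:
            return "draft_polite_delay_replies"
        if s.meetings_today == 0:
            return "suggest_focus_block"
        return "summarise_inbox"

    def act(self, action: str) -> None:
        """A real agent would call other systems here; this one only announces itself."""
        print(f"assistant proposes: {action}")


def step(assistant: ToyAssistant, situation: Situation) -> None:
    # The loop that makes a system agentic: observe -> plan -> act.
    assistant.act(assistant.plan(situation))


if __name__ == "__main__":
    step(ToyAssistant(), Situation(meetings_today=5, unread_messages=34))
```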
At the same summit, AI’s role in energy systems also came into focus. Experts highlighted how AI can optimize power grids, forecast demand, and manage renewable energy sources like solar and wind. By analyzing consumption patterns and predicting peaks, AI helps maintain grid stability, reduce waste, and improve efficiency. Yet, as AI models grow more powerful, they themselves require significant energy, creating a delicate balance between consumption and optimization. Collaborative initiatives are now exploring ways for AI to enhance grid reliability and support sustainable energy use.
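The sketch below is a deliberately simple illustration of that idea: forecasting hourly demand with a trailing average and flagging the hours that exceed a comfortable capacity. The numbers, the threshold, and the function names are invented for this example; production grid models are far more sophisticated.

```python
# A minimal sketch of peak-demand forecasting for a power grid.
# The hourly demand figures and the capacity threshold are made up for illustration.

def moving_average(values: list[float], window: int = 3) -> list[float]:
    """Trailing moving average used as a naive next-hour forecast."""
    out = []
    for i in range(len(values)):
        start = max(0, i - window + 1)
        out.append(sum(values[start:i + 1]) / (i + 1 - start))
    return out


def flag_peaks(demand_mw: list[float], threshold_mw: float) -> list[int]:
    """Return the hours where the forecast exceeds the grid's comfortable capacity."""
    forecast = moving_average(demand_mw)
    return [hour for hour, mw in enumerate(forecast) if mw > threshold_mw]


if __name__ == "__main__":
    hourly_demand = [90, 95, 110, 130, 150, 160, 155, 140, 120, 100]  # MW, invented
    print("likely peak hours:", flag_peaks(hourly_demand, threshold_mw=135))
```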
In theory, these developments are the dream of efficiency; in practice, they force us to confront agency itself. If an AI can act independently, whether balancing electricity flows or autonomously adjusting other systems, whose intentions does it serve? Meredith Whittaker has been firm on this point: “AI isn’t autonomous from power; it’s built within it.” Her words hang like a calibration note, a reminder that every decision made by a model reflects a human hierarchy somewhere upstream.
This is what makes the discussion around alignment so complex. Technical researchers speak of aligning models with human values, but the question of whose values remains slippery. From agentic AI to energy optimization, alignment is no longer just a mathematical target but an ethical landscape. Designing intelligent systems now means designing for collaboration, context and care, not just accuracy.
2. Between Circuits and Conscience: The Future of Agency
The more I read about agentic systems, the clearer it becomes that the real question isn’t how autonomous AI can become but how intentional we are willing to be. The future of AI isn’t a contest between human and machine agency; it’s a negotiation of coexistence. Machines may soon act on our behalf, but meaning still begins with us.
In that sense, the next frontier of AI isn’t purely technical; it’s moral infrastructure. We are engineering systems that reason, adapt and sometimes surprise us, but rarely do we pause to ask how our own reasoning adapts in return. As AI grows more conversational, the boundaries of authorship, empathy and trust blur. It’s easy to imagine a future built on optimization but harder to design one anchored in understanding.
Meredith Whittaker’s words echo again here: AI is not neutral. It’s built through choices: what to collect, what to exclude and who benefits from the pattern. In a world obsessed with “smartness,” wisdom might be the rarest resource.
That’s why paying attention to AI’s applications, whether in grids, predictive systems, or autonomous decision-making, matters. They remind us that intelligence can look like listening, that design can be a form of care and that the best kind of agency is shared. Every system, from energy-balancing algorithms to adaptive models in other domains, adds to the definition of what it means for AI to serve humanity rather than the other way around.
So maybe the shape of agency isn’t the gleaming curve of a neural network after all. Maybe it’s the imperfect, resilient loop between human curiosity and machine capability, the place where insight meets intention. And perhaps, if we’re attentive enough, that loop could become less about automation and more about understanding, less about prediction and more about participation.
Because the future we’re building isn’t just intelligent.
It’s becoming increasingly aware of its makers.

