The Future Is Agentic: How AI Agents Bring Back Calm Technology
During my PhD, I read an article by Mark Weiser, one of the pioneers at Xerox PARC, back when Silicon Valley was still more lab than marketplace. He coined the term “ubiquitous computing” to describe a world where computers would be everywhere and invisible, blending into daily life like electricity or running water. It was the late 1980s - there were only a few tens of thousands of personal computers worldwide - yet Weiser had already envisioned smartphones, tablets, and connected devices. More than that, he had grasped their cultural and social impact, the transformative potential they’d have on how humans and technology connect.
I remember that article made me feel both humbled and fortunate. Humbled by the scope of his thinking, and fortunate to witness an era when ideas like these were just beginning to feel possible. In 2008, inspired by that very article, I got into “ubiquitous mining” and started my doctoral research. The idea was that all those devices, now proliferating, could generate useful data for intelligent algorithms. If computers were everywhere, then data was everywhere too, and could reveal fundamental information and patterns.
Of course, there were technical limitations: the great variety and fragility of the data, computational constraints, mobile networks. And the first ethical cracks, such as privacy, surveillance, and content monetization, which would later become critical concerns, were already beginning to appear. Weiser had dreamed of quiet, intuitive computers in service of humanity. He wrote:
“The best computer is a quiet, invisible servant,” and “technology should create calm.”
But the decade that followed gave us the opposite: smartphones blaring notifications, social networks rewarding outrage, algorithms designed not to serve, but to hold our attention for longer.
That vision seemed lost. Yet I believe something has shifted again. Today, with the advent of generative AI, I find myself returning to Weiser’s words. It’s not just nostalgia - perhaps we’re circling back towards that original promise. Not in the linear way you’d expect from industrial progress, but in a new, organic way.
In 2018, my team and I were working on a sentiment analysis model. It was supposed to interpret the emotional tone of a text: positive, negative, neutral. We were using BERT, one of the first bidirectional transformer models, and had achieved remarkable results on a dataset from Twitter posts in Italian. But we still had to supervise everything, to label, to explain to the machine what was positive or negative in semantic terms.
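For readers who haven’t seen this kind of supervised setup, here is a minimal sketch of BERT-based sentiment classification using the Hugging Face `transformers` library. This is an illustration, not our 2018 pipeline: the library’s default English checkpoint is used here rather than the Italian Twitter model we actually fine-tuned.

```python
# Sketch of supervised sentiment classification with a pretrained
# BERT-style model (transformers' default English sentiment checkpoint).
from transformers import pipeline

# Load a model already fine-tuned to map text -> POSITIVE / NEGATIVE.
classifier = pipeline("sentiment-analysis")

for text in ["What a wonderful day!", "This service is terrible."]:
    result = classifier(text)[0]
    print(text, "->", result["label"], round(result["score"], 3))
```

The point of the anecdote stands: every label in such a pipeline’s training data had to be supplied by humans, one example at a time.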
Around that same time, a team at OpenAI was working on an unsupervised model and noticed that one of the neurons in a neural network consistently activated in the presence of positive sentences. Without instructions, labels or rules. A neural network had “discovered” the concept of sentiment on its own. And from there, a new enthusiasm sparked for the capabilities of large-scale models.
Generative AI has broken the pattern where software is a set of rules coded by humans. In these new models, it’s the human who suggests an intention, and the agent who navigates the context to achieve a specific goal. The difference isn’t just technical, it’s ontological. It’s the difference between a tool and an ally. Think about traditional software: to buy a ticket, we must navigate through screens, forms, interfaces. Select, click, fill out. Every action is a compromise between what we want and what the interface allows. Software mediates these processes but doesn’t understand them. Today, however, we’re beginning to imagine software that listens, interprets, and then acts.
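The tool-versus-ally distinction can be sketched in a few lines. Everything below is invented for illustration (the tool names, the crude keyword routing): a real agent would use a language model to interpret intent and plan its tool calls, but the shape of the loop is the same — the user states an intention, and the agent chooses which tool satisfies it.

```python
# Toy sketch: an "agent" that maps a stated intention onto a tool,
# instead of making the user navigate screens and forms.

def search_trains(destination: str) -> str:
    """Hypothetical tool: look up trains to a destination."""
    return f"Found 3 trains to {destination}"

def book_ticket(destination: str) -> str:
    """Hypothetical tool: book a ticket to a destination."""
    return f"Ticket to {destination} booked"

def agent(intention: str) -> str:
    """Naive intent routing; a real agent delegates this step to an LLM."""
    destination = intention.split()[-1]  # crude slot extraction
    if "book" in intention or "buy" in intention:
        return book_ticket(destination)
    return search_trains(destination)

print(agent("buy a ticket to Rome"))     # -> Ticket to Rome booked
print(agent("what trains go to Milan"))  # -> Found 3 trains to Milan
```

Notice what disappears: no form, no screen, no compromise between what the user wants and what the interface allows.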
This is the essence of an agent. It’s no longer a web page or an app. It’s a computational entity with autonomy: it knows its tools, understands objectives, makes choices, and learns.
And here we return to Weiser. A well-designed agent isn’t invasive, doesn’t compete for our attention. It works in the background, quietly, but effectively. It creates calm and gives us back time.
So, is the future agentic?
At ROMBO AI, we’re working to make it possible. We’re building an initial agent specialized in a niche and extremely high-value domain: Spectroscopy. In industrial and scientific contexts, interpreting NMR spectra requires time, expertise, and attention. Our agent doesn’t replace human experts but supports them. It analyzes the spectrum, identifies patterns, suggests interpretations, proposes models and shares reports.
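To make one of those steps concrete: here is a toy illustration of peak picking on a 1-D spectrum, the kind of low-level task an agent automates before any interpretation happens. This is not ROMBO’s pipeline — just a sketch using NumPy/SciPy on a synthetic spectrum with two Lorentzian peaks.

```python
# Toy peak picking on a synthetic 1-D NMR-like spectrum.
import numpy as np
from scipy.signal import find_peaks

ppm = np.linspace(0, 10, 2000)  # chemical-shift axis

def lorentzian(x, center, width=0.05):
    """Unit-height Lorentzian line shape centred at `center`."""
    return width**2 / ((x - center)**2 + width**2)

spectrum = lorentzian(ppm, 2.1) + 0.6 * lorentzian(ppm, 7.3)

# Identify peaks above a noise-floor threshold.
peaks, _ = find_peaks(spectrum, height=0.3)
print("Peaks at ppm:", np.round(ppm[peaks], 2))
```

In practice the agent’s value comes after this step: relating the picked peaks to candidate structures and drafting a report the human expert can accept, amend, or reject.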
This isn’t just about efficiency. It’s about shifting the paradigm. The future won’t just be smarter - it will be more human, because it puts humans at the center. And if that’s true, then maybe the future will be dominated by a calm technology, driven by AI agents.
If you work with spectroscopy and want to discover how an intelligent agent can support you in your daily analysis, we’re ready to show you.
**Get in touch now via contact@rombo.ai.**
Posted By: Andrea Zanda