
From Terminals to Agents: A Personal Journey Through the Evolution of User Interfaces

Technological advancements have transformed the way humans interact with computers, yet one fundamental principle remains unchanged: the interface defines the nature of that relationship.

Over my career, I have witnessed this relationship's transformation—from Unix command-line terminals to sophisticated, intention-driven agentic interfaces acting on our behalf.

This paper traces the evolution of user interfaces through the lens of my professional journey, arguing that each successive generation of interfaces has not only transformed technological interaction but also fundamentally influenced how we think, work, and develop trust in technology. The following sections are organized chronologically, beginning with early text-based interfaces and advancing through graphical, web-driven, and agentic systems. By situating personal experience within these major technological shifts, I aim to demonstrate how changes in interface design have continuously reshaped the relationship between humans and computers.

I began my professional career in the early 1990s, working for MCI on Unix and Linux-based systems, where everything was text-based. The environment lacked graphical elements such as icons or color gradients; instead, users interacted with the system via a blinking cursor, waiting for commands.

Back then, the interface was the system itself. You learned to think in commands, not clicks. Each keystroke carried meaning, and there was no "undo" feature; deleting a directory meant it was permanently gone.

During those formative years, I developed an appreciation for the importance of precision, control, and effective feedback. Well-crafted interfaces provided clear communication regarding what the system was doing, which became a guiding principle in my approach.

While I acknowledge the constraints of early text-based systems, they offered a notable advantage: users were entrusted to comprehend the consequences of their actions. This sense of responsibility and empowerment has fundamentally shaped my perspective on UX design.

By the late 1990s, I transitioned from text-based user interfaces to Windows-based environments. Adopting Windows development using Visual C++ and Java was akin to learning a new language, as it fundamentally shifted the computing experience to a more visual paradigm.

Commands gave way to buttons, and users navigated menus instead of relying on memorization alone. For the first time, I built user interfaces that let users see what an action would do before committing to it.

The WIMP model—Windows, Icons, Menus, and Pointer—transformed computing by enhancing accessibility for a broader user base. Interfaces evolved to prioritize usability, consistency, and visual appeal over sheer control and power.

I learned to design graphical interfaces that anyone could use, not just engineers. Visual C++ introduced event-driven design, while Java added portability. For developers, this era marked a shift towards creating comprehensive user experiences rather than focusing solely on functional aspects.

Nonetheless, interfaces of that time remained largely static, executing commands as received. This led users to attribute errors to themselves, often without considering potential limitations in system design.

In the early 2000s, my approach to UI development changed dramatically as the internet gained prominence. I started working with JSP (JavaServer Pages) and other early frameworks as I moved applications online. At first, the web felt more like a library than a workspace, with each page representing a distinct world and every click triggering a full page reload.

However, the landscape shifted dramatically with the emergence of JavaScript frameworks such as AngularJS and, later, React. These frameworks brought dynamism to applications, enabling us to update specific sections of a page without reloading it entirely. Interactivity became fluid, revolutionizing the user experience.

Throughout this transformative journey, I witnessed the evolution from JSP to AngularJS, then to Angular with TypeScript, and React.js. I even ventured into mobile frameworks like NativeScript, observing continuous improvements in user interfaces.

Throughout the web and mobile eras, despite continuous technological advancements, the core structure of user interfaces remained essentially unchanged. Developers continued to design application flows by anticipating and mapping every conceivable user action, rigorously testing these paths, and guiding users along predetermined routes.

This era, defined by the web and mobile revolutions, was characterized by refinement. We mastered the art of crafting visually appealing, functional, and responsive screens. However, we had yet to develop systems that understood the user's needs and preferences. For example, while e-commerce platforms allowed users to browse and purchase products easily, they mostly relied on generic recommendations and static navigation menus, often failing to anticipate individual interests or adapt to users’ unique behaviors. This limitation underscored the gap between surface-level usability and genuine user understanding.

Towards the end of 2022, the world underwent another transformation with the release of ChatGPT. Artificial intelligence (AI) began to permeate various aspects of our lives, initially appearing in chatbots and recommendation systems, and eventually in assistants embedded in everyday tools.

For the first time, user interfaces (UIs) started to transcend their reactive nature. They began anticipating users’ needs and providing relevant information.

As an architect and designer, I found this experience both exhilarating and unsettling. It felt like I was relinquishing control over the meticulously crafted user experience I had always designed. However, I also recognized the immense potential; interfaces could now respond to user intent rather than relying solely on input.

Early versions of AI-powered user interfaces were somewhat rudimentary, often appearing as add-ons rather than fully integrated into the user experience. However, the integration of AI into user interfaces is not without critique. Some argue that early implementations risked confusing users by obscuring the logic behind automated actions or decisions, which could undermine trust and foster user reliance without adequate understanding. Despite these concerns, these developments marked the beginning of a new era in which interfaces would not merely present options but actively interpret user goals.

By 2025, the focus shifted to Agentic AI, which acts on behalf of users rather than waiting for direct clicks. Agentic interfaces interpret user intent, analyze context, and choose suitable actions rather than simply following commands.

For example, consider a banking app. Instead of navigating through a series of menus to transfer money, a user can now express their intention like this:

"Send $500 to Emma next Friday, but only if my account maintains a balance of at least $2,000."

The agent promptly checks the user's account balance, schedules the transfer, and confirms the action by saying, "I've scheduled your transfer. Your account balance will remain above $2,000. Would you like me to send a reminder before the transfer is processed?"
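The conditional transfer above can be sketched as a simple guard an agent might run before scheduling. This is a minimal illustration only; the `Account` class, the function name, and the message wording are hypothetical, not a real banking API.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class Account:
    balance: float  # current available balance (hypothetical model)


def schedule_conditional_transfer(account: Account, amount: float,
                                  recipient: str, on: date,
                                  min_balance: float) -> str:
    """Schedule a transfer only if the post-transfer balance stays above the floor."""
    if account.balance - amount >= min_balance:
        # A real system would enqueue the transfer and log the decision here.
        return (f"Scheduled ${amount:.0f} to {recipient} on {on.isoformat()}; "
                f"balance will remain at least ${min_balance:.0f}.")
    return f"Transfer not scheduled: balance would fall below ${min_balance:.0f}."


acct = Account(balance=2700.0)
print(schedule_conditional_transfer(acct, 500, "Emma", date(2025, 11, 14), 2000))
```

The point of the guard is that the agent evaluates the user's stated condition before acting, rather than executing a bare command.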

Users now express their intentions directly, and interfaces adapt dynamically to support them. This shift is fundamental and is already transforming design practice today.

With the rise of personalization and intelligent technologies, digital products increasingly anticipate user needs, allowing individuals to take control of their journeys. Adaptive interfaces employ machine learning and responsive design to modify layouts, suggest relevant actions, and prioritize features based on individual preferences. For example, e-commerce sites might highlight products based on browsing history, while productivity tools can reorganize their dashboards to surface frequently used functions. From my own experience designing such systems, I have observed how these innovations not only enhance efficiency but also deepen users’ sense of ownership over their digital environment. By facilitating intuitive interactions and reducing friction, these advancements foster engagement and satisfaction, making technology more accessible and meaningful in everyday life.
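One way such feature prioritization can work is by ranking features on observed usage frequency. The sketch below is an illustrative assumption, not any specific product's algorithm; `prioritize_features` and its inputs are invented for this example.

```python
from collections import Counter


def prioritize_features(usage_log: list[str], all_features: list[str],
                        top_n: int = 3) -> list[str]:
    """Surface the most-used features first; ties keep the default order."""
    counts = Counter(usage_log)
    # Sort by descending usage count, falling back to the default feature order.
    ranked = sorted(all_features, key=lambda f: (-counts[f], all_features.index(f)))
    return ranked[:top_n]


log = ["search", "export", "search", "share", "search", "export"]
print(prioritize_features(log, ["home", "search", "share", "export", "settings"]))
# → ['search', 'export', 'share']
```

Real adaptive interfaces typically combine such signals with learned models, but the principle is the same: observed behavior reshapes what the interface surfaces first.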

Although significant changes are underway, buttons, forms, and cards will remain integral components of user interactions, as they continue to foster trust in technological systems. However, the function of these elements will evolve. They will serve as instruments of transparency, enabling users to grasp the actions performed by AI agents. When an AI agent operates, it should present its decisions and rationale through familiar visual formats.

For illustration:

  • A button may initiate a comprehensive workflow for the agent, rather than executing a singular command.
  • A card can deliver a concise summary of the agent's actions, such as: "Paid credit card bill early; saved $15 in interest."
  • A form might primarily provide visibility, allowing users to review and confirm parameters before execution.
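These three roles (trigger, summary, and review) can be modeled as a single action record that the interface renders in familiar forms. The field names below are illustrative assumptions, not an established schema.

```python
from dataclasses import dataclass


@dataclass
class AgentAction:
    """One agent-performed action, with enough detail to render a card or form."""
    title: str          # short summary shown on a card
    rationale: str      # why the agent chose this action
    parameters: dict    # values the user can review in a form
    confirmed: bool = False

    def card_text(self) -> str:
        # The card summarizes both the action and its rationale.
        return f"{self.title} ({self.rationale})"

    def confirm(self) -> None:
        # The button that "initiates a workflow" ultimately flips this flag.
        self.confirmed = True


action = AgentAction(
    title="Paid credit card bill early",
    rationale="saved $15 in interest",
    parameters={"amount": 450, "payee": "Visa"},
)
print(action.card_text())
action.confirm()
```

The familiar widgets remain, but each now fronts a record of agent behavior rather than a direct command.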

Architects and designers must now strike a balance between user autonomy and explainability. Users should never feel manipulated or controlled; they should always feel in control of their own experience.

Over time, I have come to believe that next-generation user experience (UX) design should be built around five core principles:

  • Human in Command: AI must support users by enhancing their abilities without taking away their autonomy. Users should always make the final decisions.
  • Trust and Transparency: Interfaces need to openly show what the AI is doing and why. Hidden automation can erode trust and result in poorer experiences.
  • Security and Consent: All autonomous actions performed by AI should require explicit permission, be recorded, and be reversible, ensuring users' safety and control.
  • Explainability and Feedback: Users deserve clear explanations for every AI decision, which strengthens trust and encourages active engagement.
  • Fluid, Adaptive Experience: Interfaces ought to adapt seamlessly to the user's situation and intent, adjusting layouts and priorities much as a natural conversation would.

AI is a remarkable resource that enhances human capabilities, yet it is fundamentally just a tool. As architects, designers, and engineers, we must ensure that people retain authority over these technologies, no matter how sophisticated they become.

To achieve this, we must:

  • Develop explainable agents that transparently articulate their reasoning processes.
  • Implement trust loops, ensuring every action includes a clear option for confirmation or reversal.
  • Guarantee that users maintain the ability to override, pause, or review the AI's actions at any time.
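The trust loop described above can be reduced to a small pattern: every agent action is proposed, explicitly confirmed, logged, and reversible. The sketch below assumes a simple in-memory audit log and is illustrative only.

```python
class TrustLoop:
    """Record agent actions so each can be confirmed, audited, and undone."""

    def __init__(self):
        self.log = []  # audit trail: one dict per proposed action

    def propose(self, action: str) -> int:
        # Nothing executes until the user explicitly confirms.
        self.log.append({"action": action, "status": "pending"})
        return len(self.log) - 1  # handle the user confirms or rejects with

    def confirm(self, idx: int) -> None:
        self.log[idx]["status"] = "executed"

    def undo(self, idx: int) -> None:
        # Reversibility: an executed action can always be rolled back.
        if self.log[idx]["status"] == "executed":
            self.log[idx]["status"] = "reversed"


loop = TrustLoop()
i = loop.propose("Pay electricity bill")
loop.confirm(i)
loop.undo(i)
print(loop.log[i]["status"])  # → reversed
```

The essential property is that the log survives every transition, so users can always review what was proposed, what ran, and what was undone.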

Trust, security, and explainability are not just technical goals; they are ethical obligations. These factors influence whether users feel comfortable entrusting tasks to an intelligent system.

Over the next three years, user interface (UI) and user experience (UX) design will undergo a significant transformation, moving beyond traditional screen-based interfaces toward conversational interactions, orchestrated workflows, and adaptive surfaces.

While visual interfaces will still be utilized, they will increasingly become agent-aware. These interfaces will be designed to guide, inform, and visualize AI-driven tasks.

The following key developments are anticipated:

  • Conversational dashboards that integrate chat, cards, and graphs in a unified interface.
  • Explainability panels designed to deliver clear and concise explanations of an agent's decision-making process.
  • Multi-agent collaboration views that let users monitor real-time interactions as multiple agents negotiate or coordinate tasks.
  • Voice and text parity that allows seamless transitions between spoken and written communication without loss of context.

In enterprise systems, this shift will empower individuals by freeing them from repetitive and procedural tasks. In consumer tools, it will lead to unprecedented levels of personalization in user experiences.

However, while the core principle may remain unchanged—humans setting the goals and agents executing them transparently—it is important to recognize that future technological disruptions could challenge or reshape this dynamic. Emerging innovations or paradigm shifts may alter the balance of control, necessitating ongoing evaluation to ensure that human intent continues to drive agentic execution.

Reflecting on my career, I see a clear progression: from text terminals requiring precision, to graphical interfaces that broadened access, to web-based systems connecting experiences, and now to Agentic AI acting on our behalf.

This transition is defined not only by technological capability, but also by the scale and speed of the change.

As previously noted, the next three years are expected to bring substantial, measurable changes that will transform interface design and usage. Moving forward, it is essential to evaluate both emerging competitors and enabling technologies that may influence or disrupt these projected developments. For example, advancements such as quantum computing and the rise of new AI startups could either expedite the adoption of agent-based interfaces or introduce alternative paradigms that displace them.

  • Thirty to forty percent of enterprise workflows will move from screen-driven navigation to intent-driven, agent-orchestrated interactions, especially in finance, operations, customer support, and internal tooling. In these environments, the shift will be marked by a clear human–agent handshake: human users initiate requests, set boundaries, and evaluate outcomes, while agents manage execution and report back. This clear delineation of decision boundaries will not only improve efficiency but also enhance trust in the transition from screens to intents. Consider how smartphones unexpectedly altered the PC market; a similar catalyst could redefine our current vision of interface evolution, underscoring the need for adaptability.
  • Fifty to sixty percent of new digital products will use conversational or agent-assisted interfaces as their primary interaction model, rather than as simple chat add-ons.
  • Traditional UI elements such as buttons, forms, tables, and cards will remain, but over 70% of their use will focus on confirmation, explanation, and oversight rather than direct execution.
  • By 2028, one in three user actions in knowledge-work applications will be delegated to AI agents, while humans will retain approval, override, and audit control.
  • In consumer applications, personalized, agent-mediated flows will improve task completion times by twenty to thirty percent, thereby redefining expectations for speed and effort. This projection aligns with empirical evidence: a recent McKinsey report analyzing the impact of AI-powered automation within customer-facing processes across multiple industries observed that organizations adopting such technologies achieved an average 25% reduction in task completion times (McKinsey & Company, 2023).

These shifts do not signal the end of traditional UX. Instead, they mark its evolution.

Interfaces are no longer where work is performed. They are becoming the space where intent is declared, actions are explained, and trust is reinforced through transparent data provenance. Transparent data provenance refers to the system's ability to trace and clearly display the origins, transformations, and ownership of the data used by agents. By specifying the complete lineage of the data that agents process—including source, intermediary steps, and final output—users gain a deeper understanding and confidence in the actions taken on their behalf. This level of transparency enables users to verify the sources and processing of an agent's knowledge, which bolsters belief in its reliability and integrity.
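Provenance as described (source, intermediary steps, and final output) can be captured as an ordered lineage record. The class below is a schematic sketch under those assumptions, not a standard provenance format.

```python
from dataclasses import dataclass, field


@dataclass
class ProvenanceRecord:
    """Traces one piece of data from its origin through each transformation."""
    source: str
    steps: list = field(default_factory=list)  # ordered (operation, actor) pairs

    def add_step(self, operation: str, actor: str) -> None:
        self.steps.append((operation, actor))

    def lineage(self) -> str:
        # Render the full chain from source to final output for display to the user.
        trail = " -> ".join(f"{op} by {actor}" for op, actor in self.steps)
        return f"{self.source} -> {trail}" if trail else self.source


rec = ProvenanceRecord(source="bank transaction feed")
rec.add_step("categorized spending", "budgeting agent")
rec.add_step("generated monthly summary", "reporting agent")
print(rec.lineage())
```

Surfacing such a lineage in an explainability panel is one way an interface can let users verify where an agent's conclusions came from.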

As architects, designers, and developers, our responsibility is not to remove humans from the loop but to redesign the loop itself. We must build reliable systems that work alongside us rather than operate on our behalf.

The future of user experience (UX) is not about eliminating control but about making it effortless. AI can only reach its full potential if it remains transparent and secure, and is guided by human intent.

This principle has guided my work from terminals to agents, and it will continue to shape the next chapter of user experience.

About the Author

Richard Onorato has spent over three decades building user interfaces across platforms—from Unix terminals to modern AI-driven systems. His work bridges software engineering and human experience, with a belief that technology should always serve people, never replace them.

Disclaimer: Artificial intelligence tools were used in the formatting and preparation of this article to enhance clarity and presentation. All substantive content remains the work of the author.

Author

Richard Onorato

Senior Enterprise Architect

Reviewer

Vishwanath Taware

VP - Unit Technology Officer