RAG has been the go-to pattern for making AI smarter, but the way we build and think about agents is changing fast. At ODSC AI West, I’ll be leading a workshop on how Agent2Agent and the Model Context Protocol (MCP) shift the conversation from retrieval-first to capability-first. Think of it as moving from giant, monolithic models to modular agents that can hand off tasks, extend themselves, and even act in the real world… all while borrowing practices we know from traditional software engineering.
This shift matters because the industry is rethinking what “intelligent” systems should look like. From specialized agents that act like software libraries to dynamic extensions that let AI perform new actions without retraining, the landscape is evolving. With real-world concerns around governance, observability, and trust, this workshop connects directly to today’s biggest questions in AI.
UI design is hitting a weird but exciting pivot. Static screens and fixed navigation are giving way to adaptive interfaces that come together in real time… driven by what you’re doing, what you like, and where you are. In my RenderATL session, The Future of UI/UX: AI-Generated Interfaces Tailored Just-in-Time, I framed this as Just-in-Time (JIT) UI/UX: interfaces that don’t sit around waiting for clicks; they anticipate, compose, and retire elements as your context changes. In an extreme case, think of a shopping app that reconfigures its layout mid-session based on your micro-behaviors, or an IDE that pulls the proper toolchain to the foreground when your cursor hesitates over a gnarly function. The vision isn’t science fiction; it’s the natural endgame of behavior modeling, on-device inference, and agentic workflows.
This post is the written version of that talk. We’ll start by picturing the “ideal” experience (why Minority Report isn’t the destination and why Star Trek’s context-aware computer is closer), then name the features and requirements needed to make this a reality, and finally map the signals that are already in motion today.
Picture The Ideal UI/UX
The “ideal” interface isn’t Tom Cruise air-conducting tabs in mid-air. It looks slick, but it burns calories and attention. Gestures are a transitional UI… great for demos, poor for eight-hour workflows. The bar I care about is time-to-intent: how fast a user moves from goal to outcome with minimal cognitive load. By that measure, a system that quietly assembles the proper controls at the right moment beats any holographic finger yoga.
Something closer to what I’m talking about is the computer from Star Trek… not because it talks (although I think voice interaction will be a significant part of it), but because it listens to context. It knows the ship’s status, the mission history, and what’s unfolding on deck, and it answers as a partner, not a parrot. That’s the leap: from command-and-control to context-and-collaboration. When the system perceives environment, history, and intent, voice becomes helpful, touch becomes rare, and many screens disappear because the next step shows up on its own.
So the ideal JIT UI feels invisible and adaptive. It composes micro-interfaces on demand, hides them when they’re no longer helpful, and shifts modality (voice, glance, touch) based on situation. It minimizes choices without removing control. It explains why it’s doing what it’s doing. It lets you override at any time. In other words, you steer and the agent drives, rerouting in real time.
Required Future Enhancements
First, the system needs situational awareness and durable memory. Situational awareness involves perceiving the user’s surroundings (device state, location, recent actions, task context) and inferring intent, rather than guessing from a single click. Durable memory enables the interface to remember patterns across sessions (both short-term and long-term), allowing it to pre-compose the next step instead of replaying the same onboarding flow every time. This isn’t hand-wavy UX poetry; it’s a well-studied Human-Computer Interaction (HCI) idea (context-aware computing) meeting very practical product work on persistent memory in assistants.
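To make that concrete, here’s a minimal sketch of what a durable, session-spanning memory could look like. Nothing here is a real API; UserMemory, MemoryEntry, and the confidence gating are illustrative assumptions.

```ts
// Illustrative sketch only: a durable, session-spanning memory store that a
// JIT interface could consult before composing its next step. All names here
// (UserMemory, MemoryEntry, etc.) are hypothetical, not an existing API.

interface MemoryEntry {
  key: string;          // e.g. "prefers-dark-mode", "last-checkout-step"
  value: unknown;
  observedAt: Date;     // when the behavior was last seen
  confidence: number;   // 0..1; rises with repeated observations
}

class UserMemory {
  private entries = new Map<string, MemoryEntry>();

  // Record an observation; repeated observations raise confidence.
  remember(key: string, value: unknown, confidence = 0.5): void {
    const prior = this.entries.get(key);
    this.entries.set(key, {
      key,
      value,
      observedAt: new Date(),
      confidence: prior ? Math.min(1, prior.confidence + 0.1) : confidence,
    });
  }

  // Recall only what is still trustworthy enough to act on.
  recall(key: string, minConfidence = 0.6): MemoryEntry | undefined {
    const entry = this.entries.get(key);
    return entry && entry.confidence >= minConfidence ? entry : undefined;
  }
}
```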
Second, we need data with scope, which covers macro → micro → you → now. Macro captures broad priors (region, seasonality, norms). Micro grounds it in local reality (city, neighborhood, even venue constraints). “You” encodes stable preferences and history (interests, tolerances, accessibility needs). “Now” streams real-time context (time, activity, current goal). JIT UI/UX only works when all four layers are fused into a single context model that can be queried in milliseconds. That’s the pipeline that lets the interface collapse choices and surface the one or two actions that matter right now.
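Here’s one way that fused model could be shaped, a sketch assuming all four layers land in a single snapshot object; the field names and the toy scoring are illustrative, not a standard schema.

```ts
// Hypothetical shape of the fused context model: the layer names mirror the
// macro → micro → you → now framing; the fields are illustrative assumptions.

interface ContextSnapshot {
  macro: { region: string; season: string; norms: string[] };               // broad priors
  micro: { city: string; venue?: string; constraints: string[] };           // local reality
  you: { interests: string[]; accessibility: string[]; history: string[] }; // stable prefs
  now: { time: Date; activity: string; currentGoal?: string };              // real-time context
}

// The JIT layer asks one question of one fused model, not four separate
// services, so the answer can come back in milliseconds.
function nextBestActions(ctx: ContextSnapshot, limit = 2): string[] {
  const candidates: Array<{ action: string; score: number }> = [];

  // Toy scoring: favor the current goal, then known interests.
  if (ctx.now.currentGoal) {
    candidates.push({ action: `continue:${ctx.now.currentGoal}`, score: 0.9 });
  }
  for (const interest of ctx.you.interests) {
    candidates.push({ action: `suggest:${interest}`, score: 0.5 });
  }

  return candidates
    .sort((a, b) => b.score - a.score)
    .slice(0, limit)              // collapse choices to the one or two that matter
    .map((c) => c.action);
}
```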
Third, the adaptation engine needs policy. What the interface presents, and when, should be driven by a learned policy with safety rails: minimize cognitive load, avoid surprise, and explain changes. Reinforcement-learning approaches in HCI already show how to plan conservative adaptations that help when evidence is strong and stand down when it isn’t. That’s how you keep the UI from thrashing while still earning the right to be invisible.
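As a sketch of what those rails might look like in code (the thresholds and names are assumptions, not a prescribed design), the policy below applies a change only when evidence clears a bar and enough time has passed since the last change, and every proposal carries a human-readable reason the UI can surface.

```ts
// Sketch of a conservative adaptation policy: act only when evidence is
// strong, rate-limit changes to avoid thrash, and always carry an explanation.

interface AdaptationProposal {
  change: string;     // e.g. "promote-export-button"
  evidence: number;   // 0..1 confidence from the behavior model
  reason: string;     // human-readable "why" shown to the user
}

class AdaptationPolicy {
  private lastChangeAt = 0;

  constructor(
    private minEvidence = 0.8,       // help only when evidence is strong
    private minIntervalMs = 30_000,  // stand down between changes to avoid surprise
  ) {}

  decide(proposal: AdaptationProposal, now = Date.now()): "apply" | "hold" {
    const strongEnough = proposal.evidence >= this.minEvidence;
    const calmEnough = now - this.lastChangeAt >= this.minIntervalMs;
    if (strongEnough && calmEnough) {
      this.lastChangeAt = now;
      return "apply";   // the UI applies the change and shows proposal.reason
    }
    return "hold";      // do nothing; the default layout stays put
  }
}
```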
What’s Happening Now
Mainstream platforms are shipping the building blocks for JIT UI/UX. OpenAI’s purchase of Jony Ive’s company io is pushing toward real-time, multimodal agents that see the world through your camera and respond live. And it’s hardly the first move of its kind; see the many variations of glasses and goggles from Google, Meta, Apple, and others. On Windows, Copilot+ features like Recall (controversial, but on-device context at OS scope) show how ambient history can shorten time-to-intent across apps. These aren’t mockups; they’re production vectors toward interfaces that assemble themselves around you.
Two other shifts matter: memory and tooling. Memory is moving from novelty to substrate. Assistants that retain preferences and task history across sessions change UI from “ask-and-answer” to “anticipate-and-compose.” OpenAI’s memory controls are one concrete example. On the tooling side, the Model Context Protocol (MCP) is becoming the standard way to wire assistants into live data and actions so that UI elements can be generated on demand with real context instead of generic templates. And yes… our oldest telemetry still matters: signals like “likes” remain remarkably predictive of what to surface next, which is why preference capture continues to anchor personalization pipelines.
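To make the MCP point concrete, here’s a minimal sketch of a server exposing a single context tool, following the tool-registration pattern from the MCP TypeScript SDK’s quick-start; the tool name, the fields it returns, and the exact import paths are assumptions and may differ by SDK version.

```ts
// Minimal MCP server sketch: one tool that returns live user context an
// assistant could use to compose UI on demand. Tool name and payload are
// illustrative; imports follow the TypeScript SDK quick-start and may vary.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "context-server", version: "0.1.0" });

// Expose live context as a callable tool instead of baking it into a template.
server.tool(
  "get_user_context",
  { userId: z.string() },
  async ({ userId }) => ({
    content: [
      {
        type: "text",
        text: JSON.stringify({
          userId,
          activity: "editing",            // stand-ins for real signal capture
          currentGoal: "refactor-module",
          likes: ["dark-mode", "keyboard-first"],
        }),
      },
    ],
  }),
);

// Serve over stdio so a local assistant can connect to it.
const transport = new StdioServerTransport();
await server.connect(transport);
```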
Hardware is feeling its way toward ambient, context-aware UX. Wearables like the Rabbit R1 and the now-defunct Humane AI Pin tried to externalize the assistant into a pocketable form factor. The beginnings were rough, but they helped shake out speech-first and camera-first interaction (and exposed the cost of weak task completion). Meanwhile, reports on Jony Ive and Sam Altman exploring an AI device underscore a broader appetite for purpose-built hardware that can perceive context and adapt UI in real time. Expect more experiments here; the winners will be the ones that convert perception → policy → action without burning user trust.
What this could look like is shown in a simulated demo I created and presented at Render. Take a look at the video above.
Looking Down The Road
The future of UI/UX isn’t about louder or more expressive interfaces… it’s about quieter intent. The best solutions will be the ones you don’t even know are there.
When systems perceive context, remember across sessions, and act with restraint, the interface fades into the background and the task comes forward. That’s the heart of JIT UI/UX: reduce time-to-intent, not add spectacle. We don’t need a wall of widgets or mid-air calisthenics. We need micro-interfaces that appear when evidence is strong, vanish when it isn’t, and always explain themselves.
If you’re building toward this, start small and concrete. Instrument time-to-intent. Capture macro → micro → you → now signals. Add persistent memory with clear and precise controls. Define adaptation policies (and fallbacks). Make every change explainable and always overridable. Ship one adaptive interface, learn from the data, expand. The teams that do this well will earn something rare in software: an interface that gets out of the way and still has your back.
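For the “instrument time-to-intent” step, a minimal sketch could be as small as marking when a goal is inferred and when its outcome lands; the event names and telemetry sink below are placeholders.

```ts
// Illustrative time-to-intent instrument: record when a goal starts, record
// when it completes, and emit the delta. Names and the sink are placeholders.

const openGoals = new Map<string, number>(); // goalId -> start timestamp (ms)

function goalStarted(goalId: string): void {
  openGoals.set(goalId, performance.now());
}

function goalCompleted(
  goalId: string,
  sink: (metric: object) => void = console.log,
): void {
  const start = openGoals.get(goalId);
  if (start === undefined) return; // never saw the goal start; skip
  openGoals.delete(goalId);
  sink({
    metric: "time_to_intent_ms",
    goalId,
    value: Math.round(performance.now() - start),
  });
}

// Usage: mark intent when it's detected, completion when the outcome lands.
goalStarted("checkout");
// ... user completes the flow ...
goalCompleted("checkout");
```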
We’ve all heard the phrase “treat infrastructure as code,” but what if I told you it’s time to give your datasets the same respect? In AI and machine learning, tiny shifts in data ripple through models, causing unpredictable behavior that can lead to compliance nightmares. Our latest blog explores a fundamental shift in mindset—treating data like code—showing you how immutable commits, provable lineage, and policy-driven pipelines can transform your AI from opaque magic to transparent, audit-ready solutions.
As regulators increasingly scrutinize AI applications (hello, EU AI Act!), understanding exactly what changed in your models is no longer optional—it’s essential. Whether you’re a data scientist tired of explaining mysterious drifts or an AI stakeholder who needs to keep auditors happy (and lawsuits at bay), this piece aligns perfectly with today’s urgent conversations around AI governance and compliance. Imagine confidently pinpointing every decision to a specific data commit—no more guessing, just receipts.