Tag Archives: Machine Learning

Anything to do with Machine Learning

When AI Gets Real: Takeaways from a Week at Devoxx Morocco

Devoxx Morocco brought together a mix of engineers, architects, and AI practitioners focused on the practical side of AI. The conversations were grounded in real problems rather than vague theory: people wanted to talk through design decisions, past failures, and the parts of their stack that still worry them. That set the tone for the whole event, and the setting made it easy to slow down, listen, and think through the patterns that kept showing up.

Devoxx Morocco

Two themes came up everywhere: agentic systems built with MCP, and how to get AI pilots into production. The surprising part was how technical the hallway conversations were; they were of a higher caliber than at any event I have attended recently! Folks wanted to compare notes on everything from context-window constraints to the tradeoffs of graph-based retrieval. Instead of repeating the same “AI is the future” line, people were honest about the limits of current tools and focused on how to deal with them. It made the event feel useful in a way most conferences don’t.

This was a stark contrast to KubeCon, which took place the same week. Granted, the conferences’ purposes were different: KubeCon focuses primarily on application/container infrastructure, and Devoxx is engineer/developer-focused… but the AI discussions were strikingly different. At KubeCon, they felt stuck in the marketing and hype phase; at Devoxx, the promise of the hype was meeting the realities of implementation.

Let’s dive into some of the discussions at Devoxx Morocco!

Highlights from Devoxx Morocco

The technical depth of the discussions stood out right away. Two of the most detailed conversations centered around vector embeddings and why they often fall short in production. Both engineers told nearly the same story: the embeddings looked fine on paper, but the answers drifted, broke down when correctness mattered, or hallucinated when the domain got too specific. What surprised me was that they brought up graph-based retrieval before I did. They wanted to talk about ontology design, schema choices, and how to build a structure that reflects the real domain. You could tell they had already run into the limits of semantic search and were looking for something more grounded in facts.
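To make that shift concrete, here is a toy Python sketch of grounding retrieval in an explicit fact graph rather than embedding similarity. Every entity and relation below is invented for illustration; the point is the behavior: the system either returns a traversable, verifiable fact or admits it has none, instead of producing a plausible-sounding hallucination.

```python
# Toy fact graph: (subject, relation) -> object. All entries are illustrative.
facts = {
    ("aspirin", "treats"): "headache",
    ("aspirin", "interacts_with"): "warfarin",
    ("warfarin", "class"): "anticoagulant",
}

def lookup(subject, relation):
    """Return a grounded fact or None; no fuzzy matching, no hallucination."""
    return facts.get((subject, relation))

def explain(subject, relation):
    """Answer with a traceable edge, or explicitly admit the gap."""
    obj = lookup(subject, relation)
    if obj is None:
        return f"no recorded fact for ({subject}, {relation})"
    return f"{subject} --{relation}--> {obj}"

print(explain("aspirin", "interacts_with"))  # aspirin --interacts_with--> warfarin
print(explain("aspirin", "dosage"))          # no recorded fact for (aspirin, dosage)
```

The contrast with semantic search is the failure mode: when the graph has no edge, the answer is an honest “no recorded fact,” which is exactly the property these engineers were missing when correctness mattered.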

Devoxx Booth 1

Another strong thread came from someone who had been working with Model Context Protocol (MCP) and ran into context-window and tool-count limits. The way they described it felt familiar. You can scale the prompt window and restructure your tools, but there’s a point where the entire system becomes fragile. After thinking it over, it’s clear that Anthropic Skills are meant to address this problem. At a very high level, Skills act like folders that hide or load tools only when needed. It’s a smart workaround, but it also kicks the problem down the road rather than addressing it head-on. Additionally, it made me wonder whether these dynamic “folders” introduce their own risks, like tool hijacking within a skill if security boundaries aren’t well-defined.
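At a sketch level, the idea behind that kind of dynamic tool loading can be shown in a few lines of Python. This is not Anthropic’s actual Skills API, just an illustration of the pattern: each skill stays collapsed to a single “open” entry in the tool list until it is activated, keeping the visible tool count (and therefore the context window) small.

```python
class Skill:
    """A named folder of tools that stays collapsed until activated."""
    def __init__(self, name, tools):
        self.name = name
        self.tools = tools  # {tool_name: callable}

class ToolRegistry:
    """Exposes collapsed skills as single entries; expands them on demand."""
    def __init__(self, skills):
        self.skills = {s.name: s for s in skills}
        self.active = set()

    def visible_tools(self):
        # Collapsed skills contribute one "open_<skill>" entry;
        # active skills expose all of their tools.
        tools = {}
        for name, skill in self.skills.items():
            if name in self.active:
                tools.update(skill.tools)
            else:
                tools[f"open_{name}"] = lambda n=name: self.activate(n)
        return tools

    def activate(self, name):
        self.active.add(name)
        return f"skill '{name}' loaded"

registry = ToolRegistry([
    Skill("billing", {"create_invoice": lambda: "...", "refund": lambda: "..."}),
    Skill("search", {"vector_search": lambda: "...", "graph_lookup": lambda: "..."}),
])

print(len(registry.visible_tools()))  # 2: both skills collapsed
registry.activate("billing")
print(len(registry.visible_tools()))  # 3: billing expanded, search still collapsed
```

It also makes the security question above tangible: once a skill expands, every tool inside it becomes callable, so the boundary around what a skill may load matters.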

Devoxx Booth 2

Production-readiness came up again and again. Attendees weren’t trying to “explore possibilities” the way many teams still do. They were focused on the details that block deployment: data pipelines, observability, governance, and proving that an AI system delivers value. It was refreshing to hear people skip the marketing talk and get straight to real outcomes. They cared about what actually works, not the slideware version of AI.

David's Session 1

My session titled “Rethinking RAG: How MCP and Multi-Agents Will Transform the Future of Intelligent Search” explored how Model Context Protocol (MCP) and Agent2Agent (A2A) can reshape the future of intelligent search by moving beyond flat, opaque vector embeddings toward adaptive, explainable, and secure agentic systems. The talk highlighted how current RAG pipelines often fail due to a lack of reasoning depth and fragile context handling. Through live demos, we showed how combining MCP’s structured data access with A2A’s multi-agent collaboration enables scalable systems in which agents reason, search, and cooperate in real time. Key takeaways included designing modular agentic architectures inspired by software engineering principles, using reinforcement learning loops to safely promote verified knowledge into the RAG corpus, and considering small, specialized language models (SLMs) for faster, cheaper, and more transparent performance.

If you are interested in seeing the slides or the demo material, you can find them on my GitHub repo: github.com/davidvonthenen/2025-devoxx-morocco.

David's Session 2

The more conversations I had, the clearer it became that many teams here are further along in their AI journey than what I typically hear at other events. The questions were sharper. The examples were real. And the pressure to deliver value was obvious. You could see that stakeholders weren’t looking for experimentation anymore. They wanted impact, and the engineers at the event were working toward that with real urgency.

Personal Note

This was my first time visiting Morocco, and after hearing about Devoxx Morocco last year, I was definitely interested in attending. I didn’t know what to expect, but I knew this would easily be different from any other place I had visited in all my travels. The things I usually look for when traveling anywhere are: I want to know the real history of a place, see some amazing art or architecture, chat with people who live and have roots in the region, and, of course, eat all of the food.

Morocco is so colorful

On the architecture and art front, I was blown away by the bright colors and the mix of Arabic and Middle Eastern influences. I have never seen buildings or architecture where the colors were so absolutely in your face. Truly a standout.

And Soo Green With Plants

And to see how nature is integrated into the architecture, buildings, and landscape was simply beautiful.

If you find yourself in Europe or anywhere nearby and you want something different, I recommend adding a stop in Marrakesh. If you like traveling the road less traveled, you won’t be disappointed.

Until Next Time!

Devoxx Morocco showed a clear shift in how engineers think about AI today. The excitement is still there, but it’s grounded in real work. People talked about the friction points they’ve faced, the parts they’ve rebuilt, and the choices they regret. And the common thread was simple: value matters. Teams want systems that work in production, not demos that run well on a stage. The conversations made it clear that AI is entering a phase where engineering discipline matters as much as model quality. That honesty made the event stand out.

Amazing Pool

On that personal note, given the chance, I would totally come back to Marrakesh and Morocco. There is plenty to do and I couldn’t possibly do it all in the time that I had. I will be back!

Doing More with Less: The Quiet Revolution Powering AI Through IoT and Edge

Edge AI is quietly changing how we think about machine learning deployment. Instead of running models in the cloud, intelligence is moving closer to where data is created: on sensors, gateways, and even microcontrollers. This shift isn’t about shrinking models for fun; it’s about making AI useful where latency, bandwidth, and power all matter. In this piece, we look at how tools like ExecuTorch and TorchScript let you run PyTorch models anywhere, whether that’s a GPU rack, a Raspberry Pi, or a factory-floor controller, and why efficiency is the next big driver of innovation in AI.

Pushing AI/ML to the Edge

What’s fueling this movement is necessity. As the world races toward smaller, faster, and more sustainable AI systems, the focus has moved from “how big can we make it” to “how far can we take it.” DeepSeek’s efficiency breakthroughs are one of the best recent examples; it’s clear that doing more with less is the future of machine learning.

Dive into the full article here: https://bit.ly/4mSCFOl.

Invisible by Design: Context-Aware Interfaces that Assemble Themselves

UI design is hitting a weird but exciting pivot. Static screens and fixed navigation are giving way to adaptive interfaces that come together in real time… driven by what you’re doing, what you like, and where you are. In my RenderATL session, The Future of UI/UX: AI-Generated Interfaces Tailored Just-in-Time, I framed this as Just-in-Time (JIT) UI/UX: interfaces that don’t sit around waiting for clicks; they anticipate, compose, and retire elements as your context changes. In an extreme case, think of a shopping app that reconfigures its layout mid-session based on your micro-behaviors, or an IDE that pulls the proper toolchain to the foreground when your cursor hesitates over a gnarly function. The vision isn’t science fiction; it’s the natural endgame of behavior modeling, on-device inference, and agentic workflows.

This post is the written version of that talk. We’ll start by picturing the “ideal” experience (why Minority Report isn’t the destination and why Star Trek‘s context-aware computer is closer), then name the features and requirements to make this a reality, and finally map the signals already in motion today.

Picture The Ideal UI/UX

The “ideal” interface isn’t Tom Cruise air-conducting tabs in mid-air. It looks slick, but it burns calories and attention. Gestures are a transitional UI… great for demos, poor for eight-hour workflows. The bar I care about is time-to-intent: how fast a user moves from goal to outcome with minimal cognitive load. By that measure, a system that quietly assembles the proper controls at the right moment beats any holographic finger yoga.

Something closer to what I am talking about is the computer from Star Trek… not because it talks (although I think voice interaction will be a significant part), but because it listens to context. It knows the ship’s status, the mission history, and what’s unfolding on deck and answers as a partner, not a parrot. That’s the leap: from command-and-control to context-and-collaboration. When the system perceives environment, history, and intent, voice becomes helpful, touch becomes rare, and many screens disappear because the next step shows up on its own.

So the ideal JIT UI feels invisible and adaptive. It composes micro-interfaces on demand, hides them when they’re no longer helpful, and shifts modality (voice, glance, touch) based on situation. It minimizes choices without removing control. It explains why it’s doing what it’s doing. It lets you override at any time. In other words, you steer and the agent drives, rerouting in real time.

Required Future Enhancements

First, the system needs situational awareness and durable memory. Situational awareness involves perceiving the user’s surroundings (device state, location, recent actions, task context) and inferring intent, rather than guessing from a single click. Durable memory enables the interface to remember patterns across sessions (both short-term and long-term), allowing it to pre-compose the next step instead of replaying the same onboarding process repeatedly. This isn’t hand-wavy UX poetry; it’s a well-studied Human Computer Interaction (HCI) idea (context-aware computing) meeting very practical product work on persistent memory in assistants.

Macro-level Big Data - Think Country and State

Second, we need data with scope, which covers macro → micro → you → now. Macro captures broad priors (region, seasonality, norms). Micro grounds it in local reality (city, neighborhood, even venue constraints). “You” encodes stable preferences and history (interests, tolerances, accessibility needs). “Now” streams real-time context (time, activity, current goal). JIT UI/UX only works when all four layers are fused into a single context model that can be queried in milliseconds. That’s the pipeline that lets the interface collapse choices and surface the one or two actions that matter right now.
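A minimal sketch of what fusing those four layers might look like, with hypothetical keys and a simple specificity rule: “now” wins over “you,” which wins over “micro,” which wins over “macro.” A real context model would be far richer, but the shape of the lookup is the point.

```python
from dataclasses import dataclass, field

@dataclass
class ContextModel:
    """Four context layers fused into one fast, queryable model."""
    macro: dict = field(default_factory=dict)  # broad priors: region, seasonality
    micro: dict = field(default_factory=dict)  # local reality: city, venue constraints
    you:   dict = field(default_factory=dict)  # stable preferences and history
    now:   dict = field(default_factory=dict)  # real-time signals: time, activity, goal

    def query(self, key, default=None):
        # The most specific layer that knows the answer wins.
        for layer in (self.now, self.you, self.micro, self.macro):
            if key in layer:
                return layer[key]
        return default

ctx = ContextModel(
    macro={"units": "metric"},
    you={"theme": "dark", "units": "imperial"},
    now={"activity": "driving"},
)
print(ctx.query("units"))     # "imperial": user preference overrides regional prior
print(ctx.query("activity"))  # "driving"
```

The layering is what lets the interface collapse choices: a single `query` answers “what matters right now” without the UI code knowing which layer the answer came from.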

Micro-level Big Data - Think County and City and Maybe Even Block

Third, the adaptation engine needs policy. Presenting the user with a choice should be driven by a learned policy with safety rails: minimize cognitive load, avoid surprise, and explain changes. Reinforcement-learning approaches in HCI already show how to plan conservative adaptations that help when evidence is strong and stand down when it isn’t. That’s how you keep the UI from thrashing while still earning the right to be invisible.
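A toy version of such a conservative policy might look like the following. The thresholds are illustrative, not tuned; the key idea is hysteresis: an element needs strong evidence to appear, and a mere dip in evidence doesn’t immediately remove it, which keeps the UI from thrashing.

```python
class AdaptationPolicy:
    """Confidence-gated UI adaptation with hysteresis (illustrative thresholds)."""
    def __init__(self, show_threshold=0.8, hide_threshold=0.3):
        self.show_threshold = show_threshold  # strong evidence needed to surface
        self.hide_threshold = hide_threshold  # much weaker evidence needed to retire
        self.visible = set()

    def decide(self, element, intent_confidence):
        if element not in self.visible and intent_confidence >= self.show_threshold:
            self.visible.add(element)
            return f"show {element} (confidence {intent_confidence:.2f})"
        if element in self.visible and intent_confidence <= self.hide_threshold:
            self.visible.discard(element)
            return f"hide {element} (confidence {intent_confidence:.2f})"
        # Evidence is ambiguous: do nothing rather than surprise the user.
        return "stand down"

policy = AdaptationPolicy()
print(policy.decide("debug_panel", 0.9))  # show debug_panel (confidence 0.90)
print(policy.decide("debug_panel", 0.5))  # stand down: still visible, no thrash
print(policy.decide("debug_panel", 0.2))  # hide debug_panel (confidence 0.20)
```

The gap between the two thresholds is the “stand down” zone: the policy only acts when evidence is strong in either direction, which is exactly the conservative behavior the RL-in-HCI work argues for.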

What’s Happening Now

Mainstream platforms are shipping the building blocks for JIT UI/UX. OpenAI’s purchase of Jony Ive’s company io is pushing real-time, multimodal agents that see the world through your camera and respond live. This isn’t the first move of its kind; see the many variations of glasses and goggles from Google, Meta, Apple, and others. On Windows, Copilot+ features like Recall (controversial, but on-device context at OS scope) show how ambient history can shorten time-to-intent across apps. These aren’t mockups; they’re production vectors toward interfaces that assemble themselves around you.

Sam Altman and Jony Ive

Two other shifts matter: memory and tooling. Memory is moving from novelty to substrate. Assistants that retain preferences and task history across sessions change UI from “ask-and-answer” to “anticipate-and-compose.” OpenAI’s memory controls are one concrete example. On the tooling side, the Model Context Protocol (MCP) is becoming the standard way to wire assistants into live data and actions so that UI elements can be generated on demand with real context instead of generic templates. And yes… our oldest telemetry still matters: signals like “likes” remain remarkably predictive of what to surface next, which is why preference capture continues to anchor personalization pipelines.

Sam Altman Tweet About User Agent Memory

Hardware is feeling its way toward ambient, context-aware UX. Wearables like Rabbit R1 and (the now defunct) Humane AI Pin tried to externalize the assistant into a pocketable form factor (rough beginnings). Still, they helped shake out speech-first and camera-first interaction (and the cost of weak task completion). Meanwhile, reports on Jony Ive and Sam Altman exploring an AI device underscore a broader appetite for purpose-built hardware that can perceive context and adapt UI in real time. Expect more experiments here; the winners will be the ones that convert perception → policy → action without burning user trust.

To see what this could look like, check out the simulated demo I created and presented at Render in the video above.

Looking Down The Road

The future of UI/UX isn’t louder or more expressive interfaces… it’s quieter intent. The best interface will be the one you don’t even know is there.

When systems perceive context, remember across sessions, and act with restraint, the interface fades into the background and the task comes forward. That’s the heart of JIT UI/UX: reduce time-to-intent, not add spectacle. We don’t need a wall of widgets or mid-air calisthenics. We need micro-interfaces that appear when evidence is strong, vanish when it isn’t, and always explain themselves.

User Awareness and Happening Now

If you’re building toward this, start small and concrete. Instrument time-to-intent. Capture macro → micro → you → now signals. Add persistent memory with clear and precise controls. Define adaptation policies (and fallbacks). Make every change explainable and always overridable. Ship one adaptive interface, learn from the data, expand. The teams that do this well will earn something rare in software: an interface that gets out of the way and still has your back.
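As a starting point for the first of those steps, here is a minimal sketch of instrumenting time-to-intent. The task names and timing flow are placeholders; the pattern is simply to record when a goal is inferred, record when the user reaches the outcome, and aggregate the deltas per task so adaptations can be judged by whether they shrink that number.

```python
import time

class TimeToIntent:
    """Record goal-start and outcome timestamps; aggregate deltas per task."""
    def __init__(self):
        self.open_goals = {}  # task -> start timestamp
        self.samples = {}     # task -> list of completed durations

    def goal_started(self, task, t=None):
        self.open_goals[task] = time.monotonic() if t is None else t

    def outcome_reached(self, task, t=None):
        start = self.open_goals.pop(task, None)
        if start is not None:
            end = time.monotonic() if t is None else t
            self.samples.setdefault(task, []).append(end - start)

    def mean(self, task):
        xs = self.samples.get(task, [])
        return sum(xs) / len(xs) if xs else None

tti = TimeToIntent()
tti.goal_started("checkout", t=0.0)   # goal inferred at t=0
tti.outcome_reached("checkout", t=4.2)  # outcome reached 4.2s later
print(tti.mean("checkout"))  # 4.2
```

Once this metric exists, every adaptive change becomes testable: ship the adaptation, compare mean time-to-intent before and after, and roll back anything that doesn’t move it.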