The world has gone digital, and so have our conversations. With over 1.5 billion messages sent weekly on Slack, 300 million daily meeting participants on Zoom at its peak, and millions of interactions across Facebook, TikTok, and other platforms, the volume of conversation data is staggering. These conversations hold valuable insights, from trend detection to user behavior analysis. So how do we extract and mine that data?
Here is a brief rundown of what I will be covering during the session…
Breaking Down Conversations
Data is power, and conversation data is a goldmine waiting to be tapped. In this session, we’ll go step by step through the process of creating and training NLP models that can understand the context and meaning behind messages, whether from video meetings, audio calls, or text conversations.
It all starts with data. We’ll begin by learning how to collect raw conversation data from various sources, such as WebRTC applications like LiveKit. Once collected, the next challenge is preprocessing this data. We’ll explore strategies to clean and prepare text for machine learning pipelines, including noise reduction and tokenization.
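To make that concrete, here’s a minimal sketch of the kind of cleaning-and-tokenization pass we’ll walk through; the function names and noise patterns below are illustrative, not the session’s exact pipeline:

```python
import re

def clean_transcript(text: str) -> str:
    """Strip common transcript noise before tokenization."""
    text = text.lower()
    text = re.sub(r"\[(laughter|inaudible|crosstalk)\]", " ", text)  # bracketed annotations
    text = re.sub(r"\b(um+|uh+|hmm+)\b", " ", text)                  # filler words
    text = re.sub(r"http\S+", " ", text)                             # URLs
    text = re.sub(r"[^a-z0-9'\s]", " ", text)                        # punctuation and emoji
    return re.sub(r"\s+", " ", text).strip()

def tokenize(text: str) -> list[str]:
    """Naive whitespace tokenization; real pipelines often use subword tokenizers."""
    return clean_transcript(text).split()

print(tokenize("Um, so [laughter] let's sync on the Q3 roadmap!! https://example.com"))
# -> ['so', "let's", 'sync', 'on', 'the', 'q3', 'roadmap']
```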
Once the data is ready, we’ll develop machine learning models that extract critical information, classifying sentences and recognizing named entities. We’ll cover how to build these models using Python, PyTorch, and other state-of-the-art NLP tools.
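As a taste of what those two tasks look like, here’s a minimal sketch using off-the-shelf Hugging Face pipelines; the session builds its own PyTorch models, so treat this as a stand-in rather than the session’s code:

```python
from transformers import pipeline

# Sentence classification; sentiment here is a stand-in for any label set.
classifier = pipeline("sentiment-analysis")

# Named entity recognition, merging subword tokens into whole entities.
ner = pipeline("ner", aggregation_strategy="simple")

utterance = "Acme's support call with Maria in Denver ran 40 minutes over."
print(classifier(utterance))  # e.g. [{'label': 'NEGATIVE', 'score': ...}]
print(ner(utterance))         # e.g. entities for 'Acme', 'Maria', 'Denver'
```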
This session will also feature live demos, where I’ll show how to deploy and integrate these models into real workflows for practical applications such as customer service analysis, social media trend detection, and even compliance monitoring.
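One common deployment pattern is wrapping a model in a small HTTP service that other tools in the workflow can call. A hedged sketch using FastAPI (an assumption for illustration; the demos may use a different stack):

```python
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()
classifier = pipeline("sentiment-analysis")  # load the model once, at startup

class Message(BaseModel):
    text: str

@app.post("/analyze")
def analyze(msg: Message):
    # Score one message; a production workflow might batch and queue these.
    result = classifier(msg.text)[0]
    return {"label": result["label"], "score": float(result["score"])}

# Run with: uvicorn app:app --reload
```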
Why Attend This Session?
At the end of the session, you’ll walk away with more than just theory – you’ll get access to working code and resources that you can immediately apply to your projects. Whether you’re building conversation analytics tools for a social network or mining customer feedback from virtual meetings, this session is designed to provide actionable takeaways. By understanding how to train NLP models to analyze conversations, you can transform raw data into valuable insights for your organization.
You can use the discount code FFSPKR to get $200 off registration. Don’t miss this opportunity to explore the future of machine learning and NLP – register today and be part of the conversation!
The intersection of machine learning (ML) and healthcare is not just a technological revolution; it’s a profound shift in how we understand and approach human well-being. ML has been evolving quietly behind the scenes for years, but its recent surge in healthcare applications feels like a leap into the future. From diagnostic imaging to personalized treatment plans, we’re witnessing the birth of a healthcare system that’s not only data-driven but capable of adapting to the complexities of the human body and mind.
At the heart of this transformation is the idea that healthcare can be predictive rather than reactive. Instead of waiting for symptoms to worsen, we can use machine learning models to analyze subtle cues across multiple forms of data (audio, video, images, and sensor readings) to detect conditions like Parkinson’s disease early on. In a field where time is critical, this capability can be the difference between early intervention and advanced illness.
The Human Factor in ML-Driven Diagnostics
However, it’s easy to get lost in the jargon and overlook the human element behind this revolution. Yes, algorithms can analyze more data in seconds than a doctor might in a lifetime, but these technologies are not about replacing medical professionals—they are about empowering them.
Every pixel, soundwave, and movement analyzed by an ML model carries real human implications. It represents a person’s struggle, their hope for answers, and, ultimately, their health outcomes. By embracing machine learning, we are giving healthcare professionals the tools they need to better understand, diagnose, and treat patients on an individual level.
The healthcare industry’s adoption of ML, particularly in diagnostics, is reshaping the role of doctors from solitary decision-makers to orchestrators of advanced technological tools. Naysayers will paint a picture of a world where machines (or, in today’s terms… AI) replace humans, but this shouldn’t scare us away from embracing these tools. History has shown us that when new technologies enter our lives, the work doesn’t disappear; it transforms into something new. After all, people said the same thing about automobiles and computers, and look at how that turned out.
Machine Learning Using Multi-Modal Data
What makes this moment even more exciting is the use of multi-modal data: combining information from multiple sources like audio, video, and images. For example, in diagnosing Parkinson’s disease, an ML model can analyze a patient’s voice, capturing the smallest vocal tremors that may signal early-stage neurodegenerative changes. Simultaneously, video footage of the patient’s movements can be analyzed for physical symptoms, such as tremor or rigidity, that might otherwise go unnoticed in a short clinical visit.
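To illustrate the idea (and only the idea; this is nowhere near a clinical tool), here’s a toy late-fusion sketch in PyTorch. Mean MFCCs stand in for vocal features, and a random vector stands in for movement features from a hypothetical video model:

```python
import librosa
import torch
import torch.nn as nn

def voice_features(wav_path: str) -> torch.Tensor:
    """Summarize a recording as mean MFCCs, a crude proxy for vocal characteristics."""
    audio, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)        # (20, frames)
    return torch.tensor(mfcc.mean(axis=1), dtype=torch.float32)  # (20,)

class LateFusionClassifier(nn.Module):
    """Concatenate per-modality feature vectors, then classify."""
    def __init__(self, audio_dim: int = 20, video_dim: int = 64):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(audio_dim + video_dim, 32),
            nn.ReLU(),
            nn.Linear(32, 2),  # e.g. a two-class screening output
        )

    def forward(self, audio_feats, video_feats):
        return self.head(torch.cat([audio_feats, video_feats], dim=-1))

model = LateFusionClassifier()
audio = torch.randn(20)  # stand-in for voice_features("patient.wav")
video = torch.randn(64)  # stand-in for features from a movement/pose model
logits = model(audio, video)
```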
This holistic view of patient data allows for more comprehensive and nuanced diagnoses. It’s not just about analyzing a static image or isolated metric but about building a complete narrative from diverse data sources. These advanced models can sift through the noise and detect meaningful patterns across multiple channels of information, dramatically improving early diagnosis and treatment options.
The Future: Empowering Professionals and Patients
The future of ML in healthcare isn’t just about technical prowess. It’s about how we as a society choose to harness this power. The goal is not to create a future where machines replace human doctors, but one where they augment the capabilities of medical professionals, allowing them to provide more personalized and effective care.
Moreover, these advancements don’t just benefit the healthcare providers. Patients themselves stand to gain significantly, with more accurate diagnoses, earlier interventions, and a more involved role in managing their health. With open access to ML tools and resources, professionals from all backgrounds can build tailored recognition solutions that address their specific needs. The future is about democratizing access to these powerful tools, ensuring more people can benefit from the next wave of medical innovation.
With every new advancement, we’re reminded that this isn’t just about technology; it’s about people. The most exciting part of this journey is how ML is transforming healthcare not just in numbers and code, but by improving lives, one model at a time.
You can use the discount code FFSPKR to get $200 off registration. Don’t miss this opportunity to explore the future of machine learning and healthcare. Register today and be part of the conversation!
In March, I had the good fortune of attending and speaking at one of my favorite conferences, Southern California Linux Expo (SCaLE) 21x. As the name suggests, this was the 21st iteration of this tech-heavy yet family-oriented event, which usually takes place in Pasadena but has, in some years, been held elsewhere in the greater Los Angeles area. This was my sixth time attending (and third time presenting), and I am glad to say that this year’s conference knocked it out of the park again.
What is SCaLE?
SCaLE is North America’s largest community-run open source and free software conference. The entire event, from setting up the network to managing the session introductions, is volunteer-run. This allows SCaLE to skip the pay-for-play sessions you typically see at larger corporate events and focus on quality sessions that attendees are actually interested in. More importantly, it keeps the cost of attendance under $100 for the entire four-day event, maximizing inclusion for those who want to attend.
The content ranges from Kubernetes to open source AI to the low-level Linux kernel. My favorite session topics always revolve around IoT/edge, security, and anything unique and obscure, of which you will definitely find plenty here. I wanted to highlight a few of the more interesting (and hilarious) things I participated in at SCaLE this year. I hope you enjoy them too…
Kwaai Summit: Personal AI
Want to discuss a very meta but also very real topic that will arrive at our doorsteps soon? Personal AI. What is Personal AI? It’s the idea that we will have AI systems making decisions on behalf of individuals, or more specifically, you. Whether you know it or not, this is already happening on a small scale (excuse the pun). Think of your iPhone or Android making reservations at a restaurant or, as a more concrete example, recommending things you might be interested in purchasing based on your Instagram or TikTok feed.
Now, imagine we have all this data: information, choices, relationships, and associations among all these disparate data points. How will these choices and products find their way to grab your attention? In the past decade, it’s been done through associations (when you heart something on Instagram or like it on Facebook) and then extrapolating what else you might enjoy based on probabilities. For example, if you like baseball, you might want to purchase a Dodgers jersey.
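As a toy illustration of that association-based approach (made-up data, hypothetical items), a few lines of NumPy item-item similarity capture the gist:

```python
import numpy as np

# Rows are users, columns are items (1 = liked). Tiny made-up example.
likes = np.array([
    [1, 1, 0, 0],  # user A: baseball, Dodgers jersey
    [1, 1, 1, 0],  # user B: baseball, Dodgers jersey, cap
    [0, 0, 1, 1],  # user C: cap, sneakers
], dtype=float)
items = ["baseball", "dodgers_jersey", "cap", "sneakers"]

# Item-item cosine similarity: items liked by the same people score high together.
norms = np.linalg.norm(likes, axis=0, keepdims=True)
sim = (likes.T @ likes) / (norms.T @ norms + 1e-9)

you = np.array([1, 0, 0, 0], dtype=float)  # you only liked "baseball"
scores = sim @ you
scores[you > 0] = -np.inf                  # don't re-recommend what you liked
print(items[int(scores.argmax())])         # -> dodgers_jersey
```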
The next wave will resemble a personal assistant in the form of an AI agent talking to external AI agents and then making decisions based on those interactions. Your AI agent knows everything about you: what you like, who your friends are, your background, and all other aspects of your life. Based on that unique profile, your agent will genuinely know how to interact with your digital environment according to who you are and what apps and access you have.
The Kwaai Summit discussed the new interactions and connections we will have with these AI systems. This was a fascinating series of talks. I recommend checking out The AI Accelerated Personalized Computing Revolution by Pankaj Kedia below.
If we start interacting with the world by proxy through our AI agents, there will be a lot of interesting fallout from these interactions. First, what controls your AI agent’s access, and how does it establish trust with these external AI agents? This matters because if these agents act on our behalf, what determines whether a given interaction is good and allowed? Second, where did your AI agent come from? As a precarious scenario, if your agent were created by Amazon, it might steer you to Whole Foods for all your grocery needs. Definite conflict of interest there.
As a follow-up to this topic, I would check out AI and Trust by Bruce Schneier below. What an interesting future indeed.
Shameless Plug: My Session About Voice AI Assistants
My session at SCaLE was entitled Voice-Activated AI Collaborators: A Hands-On Guide Using LLMs in IoT & Edge Devices. The discussion was framed around landing LLMs and other machine learning models on IoT and edge devices, and the complications of working in resource-constrained environments, that is, environments with limited memory, CPU, and so on. When building an IoT or edge device, you have to decide how much “work” to do on the device itself versus remotely in the cloud. More work means more resources. More resources mean a higher-priced device.
Since voice AI agents, like Alexa, Siri, or Google Home, don’t have traditional graphical user interfaces and rely solely on the spoken word for interaction, this talk centered on how the transcription accuracy of the commands you give can dramatically impact the quality of the prompt to your LLM or the input to your machine learning models.
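To make that concrete, here’s a minimal sketch of the pattern using the open source openai-whisper package as a stand-in for whatever speech-to-text model your device actually runs; every word the model mishears lands directly in the prompt:

```python
import whisper

# A small model keeps the footprint edge-friendly; accuracy drops with model size.
model = whisper.load_model("tiny.en")

result = model.transcribe("command.wav")
transcript = result["text"].strip()

# The transcript becomes the LLM prompt verbatim, so every misheard
# word degrades the downstream response.
prompt = f"User said: {transcript}\nRespond to the user's request."
print(prompt)
```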
If you are interested in learning more about how to optimize running machine learning models at the Edge, check out my recording below:
Turn on the Funnies
I promised something funny, and one of the staples of SCaLE is the annual talk by Corey Quinn, who pokes fun at topics from all across the tech industry. He literally does this every single year; it’s tradition at this point. This year’s topic was something I spent a good seven years of my life dealing with… Kubernetes. A good portion of it was spot on. His talk Terrible Ideas in Kubernetes was another huge success.
SCaLE Recap
Wrapping up an event like SCaLE is no small feat. For those who’ve never had the pleasure of attending, I highly recommend going next year. What sets SCaLE apart isn’t just its impressive array of sessions, ranging from Kubernetes intricacies to the latest in open source AI; it also stands as a beacon of community, innovation, and inclusivity, drawing tech enthusiasts from every corner. For me, the biggest draw is hearing diverse perspectives from all throughout the tech industry and meeting new people in a techy social setting.
For those contemplating bringing their families along, you’ll find SCaLE to be an unexpectedly family-friendly event. Imagine sharing your passion for tech while your loved ones enjoy activities like Saturday’s Game Night, which offers everything from board games and video games to virtual reality headsets. If you’re based in or near Los Angeles or are looking to attend a conference on the West Coast, SCaLE is the place to be, with its information-packed sessions, grassroots vibe, and watercooler-style discussions with subject matter experts from across the industry.