Tag Archives: SCaLE

Top Reasons to Mark Your Calendar for SCaLE Next Year

In March, I had the good fortune of attending and speaking at one of my favorite conferences, Southern California Linux Expo (SCaLE) 21x. As the name suggests, this was the 21st iteration of this tech-heavy yet family-oriented event, which usually takes place in Pasadena but has, in some years, been held elsewhere in the greater Los Angeles area. This was my sixth time attending (and third time presenting), and I am glad to say that this year’s conference knocked it out of the park again.

What is SCaLE?

SCaLE is North America’s largest community-run open source and free software conference. The entire event, from setting up the networking to managing the session introductions, is volunteer-run. This allows SCaLE to skip the pay-for-play sessions you typically see at larger corporate events and focus on quality sessions that attendees are actually interested in. More importantly, it keeps the cost of attendance under $100 for the entire four-day event, maximizing inclusion for everyone who wants to attend.

Southern California Linux Expo 21x

The content ranges from Kubernetes to open source AI to the low-level Linux kernel. My favorite session topics always revolve around IoT/Edge, security, and anything unique and obscure, of which you will definitely find plenty here. I wanted to highlight a few of the more interesting (and hilarious) things I was able to participate in at SCaLE this year. I hope you enjoy this too…

Kwaai Summit: Personal AI

Want to discuss a very meta but also very real topic that will arrive on our doorsteps soon? Personal AI. What is Personal AI? It’s the idea that we will have AI systems making decisions on behalf of individuals, or more specifically, you. Whether you realize it or not, this is already happening on a small scale (excuse the pun): things like your iPhone or Android making reservations at a restaurant or, as a more concrete example, recommending things you might be interested in purchasing based on your Instagram or TikTok feed.

Now, imagine we have all this data, information, choices, relationships, and associations across all these disparate data points. How will these choices and products find their way to grab your attention? For the past decade, it has been done through associations (when you heart something on Instagram or like it on Facebook) and then extrapolating what else you might enjoy based on probabilities. For example, if you like baseball, you might want to purchase a Dodgers jersey.
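As a toy illustration of that association-based approach (nothing SCaLE-specific, just a hypothetical example with made-up “like” histories), here is a minimal sketch that estimates how likely someone is to enjoy one item given that they already like another:

```python
from collections import defaultdict
from itertools import permutations

# Hypothetical "like" histories: user -> set of liked items.
likes = {
    "alice": {"baseball", "dodgers jersey", "hot dogs"},
    "bob": {"baseball", "dodgers jersey"},
    "carol": {"baseball", "hiking"},
    "dave": {"hiking", "camping"},
}

# Count how often each item, and each ordered pair of items, shows up together.
pair_counts = defaultdict(int)
item_counts = defaultdict(int)
for items in likes.values():
    for item in items:
        item_counts[item] += 1
    for a, b in permutations(items, 2):
        pair_counts[(a, b)] += 1

def recommend(item, top_n=3):
    """Rank other items by P(other | item), estimated from co-occurrence."""
    scores = {
        b: pair_counts[(item, b)] / item_counts[item]
        for (a, b) in pair_counts
        if a == item
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

print(recommend("baseball"))
# e.g. [('dodgers jersey', 0.67), ('hot dogs', 0.33), ('hiking', 0.33)]
```

Real recommendation systems are far more sophisticated, but the core idea is the same: likes become associations, and associations become probabilities.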

Kwaai Summit

The next wave will resemble a personal assistant in the form of an AI agent talking to external AI agents and then making decisions based on those interactions. Your AI agent knows everything about you: what you like, who your friends are, your background, and every other aspect of your life. Based on that unique profile, the agent will know how to interact with your digital environment according to who you are and what apps and access you have.

The Kwaai Summit discussed the new interactions and connections we will have with these AI systems. This was a fascinating series of talks. I recommend checking out The AI Accelerated Personalized Computing Revolution by Pankaj Kedia below.

If we start interacting with the world by proxy through our AI agents, there will be a lot of interesting fallout from those interactions. First, what controls your AI agent’s access, and how does it establish trust with these external AI agents? This matters because if these agents act on our behalf, what determines whether a given interaction is good and allowed? Second, where did your AI agent come from? As a precarious scenario, if your agent was created by Amazon, it might steer you to Whole Foods for all your grocery needs. Definite conflicts of interest there.

As a follow-up to this topic, I would check out AI and Trust by Bruce Schneier below. What an interesting future indeed.

Shameless Plug: My Session About Voice AI Assistants

My session at SCaLE was entitled Voice-Activated AI Collaborators: A Hands-On Guide Using LLMs in IoT & Edge Devices. The discussion was framed around landing LLMs and other machine learning models on IoT and edge devices and the complications of working in resource-constrained environments, that is, environments with limited memory, CPU, and so on. When building your IoT or edge device, you have to decide how much “work” you want to do on the device versus remotely in the cloud. More work means more resources. More resources mean a higher-priced device.

Since voice AI agents, like Alexa, Siri, or Google Home, don’t have traditional graphical user interfaces and rely solely on the spoken word for interaction, the talk centered on how the transcription accuracy of the commands you give can dramatically impact the quality of the prompt to your LLM or the input to your machine learning models.
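To make that pipeline concrete, here is a minimal sketch (not the code from the talk) of the transcription-to-prompt flow, assuming the open source openai-whisper package is installed; the query_llm helper and the audio file path are placeholders for whatever model and capture mechanism your device actually uses:

```python
import whisper

def query_llm(prompt: str) -> str:
    """Placeholder: send the prompt to whatever LLM your device or cloud backend exposes."""
    raise NotImplementedError("wire this up to your local or remote model")

# On an edge device, a smaller Whisper model trades accuracy for memory/CPU.
model = whisper.load_model("tiny")  # "base" or "small" if resources allow

# Transcribe a captured voice command; any transcription errors propagate
# directly into the prompt, which is the crux of the talk.
result = model.transcribe("voice_command.wav")
command_text = result["text"].strip()

prompt = (
    "You are a home assistant. The user said: "
    f"'{command_text}'. Respond with the action to take."
)
print(query_llm(prompt))
```

The central design choice is how large a speech-to-text model you can afford to run on-device; the smaller the model, the more transcription errors everything downstream has to tolerate.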

If you are interested in learning more about how to optimize running machine learning models at the Edge, check out my recording below:

Turn on the Funnies

I promised something funny, and one of the staples at SCaLE is the annual talk by Corey Quinn. He pokes fun at topics from all across the tech industry, and he does this every single year; it’s tradition at this point. This year’s topic was something I spent a good seven years of my life dealing with… Kubernetes. A good portion of it is spot on. His talk Terrible Ideas in Kubernetes was another huge success.

SCaLE Recap

Wrapping up an event like SCaLE is no small feat. For those who’ve never had the pleasure, I would highly recommend attending this conference next year. What sets SCaLE apart isn’t just its impressive array of sessions, ranging from Kubernetes intricacies to the latest in open source AI; it also stands as a beacon of community, innovation, and inclusivity, drawing tech enthusiasts from every corner. For me, the biggest draw is hearing diverse perspectives from across the tech industry and meeting new people in a techy social setting.

For those contemplating bringing their families along, you’ll find SCaLE to be an unexpectedly family-friendly event. Imagine sharing your passion for tech while your loved ones enjoy activities like Saturday’s Game Night, which offers everything from board games and video games to virtual reality headsets. If you’re based in or near Los Angeles or are looking to attend a conference on the West Coast, SCaLE is the place to be, with its information-packed sessions, grassroots vibe, and watercooler-style discussions with subject matter experts from throughout the industry.

Applications that Fix Themselves

I know that in my last blog post I said I would be talking about (and probably announcing) the FaultSet functionality planned for the next release of the ScaleIO Framework. As with all things in the world of technology and software, things don’t always go as planned. So today I am here to talk about some stuff relating to the Framework that will be in my speaking session entitled How Container Schedulers and Software Defined Storage will Change the Cloud at SCaLE 15x this Saturday, March 4th, at 3 PM in Ballroom F of the Pasadena Convention Center.

SCaLE 15x Logo

At face value, this new functionality seems straightforward, but the implications start to open the door to some next-level-thinking kind of stuff. OK, OK, OK. I may have oversold that a little, but the idea itself is still pretty cool, and I am super excited to talk about it here.

Just make it happen. I don’t care how!

Just this week, I released ScaleIO Framework version 0.3.1, which has a functionality preview **cough** experimental **cough** of a couple of features that I think are cool. The first feature, although not as interesting, will probably be the most immediately useful to people who want to use ScaleIO but were turned off by the installation instructions… starting from a bare Mesos cluster, you can provision the entire ScaleIO storage platform in a highly available 3-node configuration from scratch and have all the storage integrations, like REX-Ray and mesos-module-dvdi, installed automatically.

Easy Street

In case you missed it… without having to know anything about ScaleIO, you can deploy an entirely software-based storage platform that gives your Mesos workloads the ability to seamlessly persist application data that is globally accessible, making your apps highly available. This abstracts away the complexities of the storage platform and transforms it into a simple service from which you can consume storage. As far as any user is concerned, the storage platform came natively with Mesos, and the first app you deploy can consume ScaleIO volumes from day one. If you want more details on how to make that happen, please check out the documentation.
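As a rough illustration of what “consume ScaleIO volumes from day one” can look like (a minimal sketch, not taken from the Framework docs; the Marathon URL, app name, and volume name are placeholders, and the exact external-volume schema depends on your Marathon and mesos-module-dvdi versions), an app definition can reference a REX-Ray-backed volume and be submitted to Marathon’s REST API:

```python
import requests

MARATHON_URL = "http://marathon.mesos:8080"  # placeholder for your Marathon endpoint

# A minimal app definition that mounts an external volume provisioned
# through mesos-module-dvdi and the REX-Ray driver (backed by ScaleIO).
app = {
    "id": "/postgres",
    "cpus": 1,
    "mem": 1024,
    "instances": 1,
    "container": {
        "type": "DOCKER",
        "docker": {"image": "postgres:9.6"},
        "volumes": [
            {
                "containerPath": "/var/lib/postgresql/data",
                "mode": "RW",
                "external": {
                    "name": "postgres-data",            # volume name surfaced by ScaleIO
                    "provider": "dvdi",
                    "options": {"dvdi/driver": "rexray"},
                },
            }
        ],
    },
}

resp = requests.post(f"{MARATHON_URL}/v2/apps", json=app, timeout=30)
resp.raise_for_status()
print("Deployed:", resp.json().get("id"))
```

The point is that the app only names a volume; the plumbing underneath (ScaleIO, REX-Ray, the DVDI module) takes care of presenting that storage wherever the container lands.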

The Sky is Falling!! Do Something?!?!

I think the second functionality preview **cough** experimental **cough** in the 0.3.1 release has perhaps the most compelling story but may be less useful in practice (at least for now). I have always been fascinated by the idea that applications, when they run into trouble, can go and fix themselves. We often call this self-remediation. In reality, that has always been a pipe dream, but there is some really cool infrastructure, in the form of Mesos Frameworks, that makes this idea a possibility.

It's not going to happen

This second feature comes from my days as both a storage and backup user… where I would get the dreaded “storage array is full” notification. Fixing that typically entails getting another expander shelf for your storage array (if you are lucky enough to have expansion capability), populating the expansion bay with disks, and then configuring the array to accept the new raw capacity. In the age of clouds and DevOps, anything is possible, and provisioning new resources is only an API call away.

Anything is possible

The idea is that as our ScaleIO storage pool starts to approach full, we can provision more raw disks, in the form of EBS volumes, to contribute to the storage pool. Since we live in the cloud, or in this case AWS, that is only an API call away. That is exactly the idea behind this feature… to live in a world where applications can self-remediate and fix themselves. Sounds cool, yeah?! If you are interested in more information about this feature, I urge you to check out the user guide, try it out, and provide input and feedback! And if you happen to be at SCaLE 15x this week, I will be doing this exact demo live! BONUS: You can watch the video demo that was performed at SCaLE here:
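To give a feel for what “only an API call away” means under the hood, here is a minimal sketch of the expand-on-threshold idea using boto3 (an illustration, not the Framework’s actual code; get_pool_utilization, the instance ID, and the device name are placeholders):

```python
import boto3

EC2 = boto3.client("ec2", region_name="us-west-2")  # assumption: region of the ScaleIO nodes
THRESHOLD = 0.80                       # expand when the pool is 80% full
INSTANCE_ID = "i-0123456789abcdef0"    # placeholder: a ScaleIO SDS node
DEVICE = "/dev/xvdf"                   # placeholder: next free block device on that node

def get_pool_utilization() -> float:
    """Placeholder: query the ScaleIO REST API for used/total capacity of the pool."""
    raise NotImplementedError

def expand_pool(size_gib: int = 100) -> str:
    """Create a new EBS volume, attach it to an SDS node, and return its ID.
    The new raw device would then be added to the ScaleIO storage pool."""
    az = EC2.describe_instances(InstanceIds=[INSTANCE_ID])[
        "Reservations"][0]["Instances"][0]["Placement"]["AvailabilityZone"]
    vol = EC2.create_volume(AvailabilityZone=az, Size=size_gib, VolumeType="gp2")
    vol_id = vol["VolumeId"]
    EC2.get_waiter("volume_available").wait(VolumeIds=[vol_id])
    EC2.attach_volume(VolumeId=vol_id, InstanceId=INSTANCE_ID, Device=DEVICE)
    return vol_id

if get_pool_utilization() > THRESHOLD:
    print("Pool is filling up; attached new EBS volume:", expand_pool())
```

The Framework does the equivalent of this loop for you, which is what makes the self-remediation story interesting: the storage platform grows itself instead of paging a human.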

Where to go next…

So I hope the FaultSet functionality is just around the corner, along with support for CoreOS, or what they are now calling Container Linux, since a lot of the stuff coming out of Mesos and DC/OS is now based on that platform. Let us know if you want more content surrounding Mesos and the ScaleIO Framework by hitting me up in our {code} community Slack channel #mesos. Additionally, if you are in the Los Angeles area this week, I would highly recommend stopping by SCaLE 15x in Pasadena, catching some of the sessions, and stopping by the {code} booth in the expo hall to continue the conversation.