
YAKB: Running Kubernetes on Your Laptop

Hi there! Yes, it’s been a while since I have posted to my personal blog. Before I get to my post, I thought I would bring you up to speed on what’s been going on in my world since my last post. I no longer work at Dell EMC. I also no longer work at Dell Technologies. After transfer upon transfer, I currently work at VMware. The change has been going well so far, especially moving from companies that are traditionally hardware-based to a company that is mostly software-focused. While I have only been at VMware since March of this year, the momentum in the Kubernetes and CNCF communities and VMware’s commitment to those communities have made VMware an obvious choice going forward. Which leads me to this blog post…

Let’s get to the Blog!

So I am writing this “Yet Another Kubernetes Blog” post because I needed to document how someone can run Kubernetes on their laptop, so that session attendees can follow along with future presentations that I might give. If this never gets used in one of my presentations, that’s cool… but I thought, hey, why not put this out there so that others might benefit from it. You could also use this blog post simply to test drive Kubernetes and play around with its functionality.

NOTE: If you are running on Windows, well you might want to install something like VMware Workstation Pro so that you can get a RHEL7 VM running on your laptop. I believe you can try it for 30 days.

Installing VirtualBox

To simplify the installation across the two platforms, we are going to install VirtualBox (5.2 is recommended). To do that, visit the VirtualBox homepage and download the installation package for your platform.

On Mac, just download the DMG file and install VirtualBox like any other application on your Mac.

On RHEL7, download the appropriate RPM and then install using the following command:

sudo rpm -ivh <path-to-downloaded-VirtualBox-RPM>

Installing kubectl

Next, we need to install kubectl, which is the Kubernetes command-line tool for managing a Kubernetes cluster. You will be using this utility for the majority (if not all) of your operations, from viewing what’s going on in your cluster to kicking off applications in it. Fortunately, installation of this component is pretty straightforward.

On Mac, you can run the following command:

curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/

On RHEL7, you can run the following command:

curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/

You can verify that kubectl is installed correctly by running the following command: kubectl help. Let’s move on to the last component, minikube.
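In case kubectl help prints more than you want to scroll through, a quicker sanity check (an optional extra, not part of the original walkthrough) is to ask kubectl for its own version without talking to a cluster:

kubectl version --client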

Installing minikube

So minikube is a simple tool that allows you to quickly deploy a single-node Kubernetes cluster on your laptop or desktop. We are going to be using it for demonstration purposes and to kick the tires on Kubernetes. You can install minikube by doing the following:

On Mac, you can run the following commands:

curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/

minikube config set memory 4096
minikube config set cpus 2

On RHEL7, you can run the following commands:

curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/

minikube config set memory 4096
minikube config set cpus 2

RHEL7 NOTE: If you see the following error (I did not during my install, but it has been reported to happen sometimes):

Could not read CA certificate "/etc/docker/ca.pem": open /etc/docker/ca.pem: no such file or directory

The fix is to update /etc/sysconfig/docker to ensure that minikube’s environment changes are respected:

< DOCKER_CERT_PATH=/etc/docker
---
> if [ -z "${DOCKER_CERT_PATH}" ]; then
>   DOCKER_CERT_PATH=/etc/docker
> fi
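One extra note (my addition, not from the original write-up): after editing /etc/sysconfig/docker you will likely need to restart the Docker daemon so the change takes effect. On RHEL7 with systemd that would be:

sudo systemctl restart docker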

You can verify that minikube is installed correctly by running the following command: minikube version. Pretty simple!
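As an optional extra check, you can also confirm that the memory and CPU settings from above were recorded:

minikube config view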

I want a Kubernetes!

So now that you have all the associated software installed on your laptop, let’s bring up a Kubernetes cluster!

On Mac and RHEL7, run the following command:

minikube start
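If minikube does not pick up VirtualBox as its VM driver automatically, you can point it at the driver explicitly. The flag below is what minikube used at the time of writing and may differ in newer releases:

minikube start --vm-driver=virtualbox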

To make sure you can access your Kubernetes cluster, run the following command:

kubectl get pods --all-namespaces
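If you want to take the cluster for a quick spin, here is a minimal sketch of deploying a throwaway app; the name hello-nginx is just a placeholder I picked for illustration:

kubectl create deployment hello-nginx --image=nginx
kubectl expose deployment hello-nginx --type=NodePort --port=80
minikube service hello-nginx --url

The last command prints a URL you can open in your browser. When you are done, clean up with kubectl delete deployment,service hello-nginx.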

To stop your cluster, run the following command:

minikube stop

To delete the cluster entirely (for example, to start over from scratch), run the following command:

minikube delete

Conclusion

Well, there it is! A simple way to get a Kubernetes cluster running on your laptop without requiring a public cloud account or a ton of hardware sitting in a lab or datacenter somewhere. This is by no means a magic bullet, and it is only good for running small, lightweight applications. If anything, it’s a simple way to familiarize yourself with Kubernetes and general management of a cluster.

I plan on posting to my blog more often now… so stay tuned for some cool future blog posts! If you have any suggestions on topics you want to hear more about, please let me know! I am interested in everything from intro-101-type blogs to deep dives. Just drop me a line!

Dell EMC World 2017 – Las Vegas, NV

It looks like it’s that time of year again, as we are just days away from Dell EMC World 2017. The {code} team will once again be in attendance, presenting some interesting sessions (16 in total), a Hands-On Lab (I ran through it myself and it’s great!), and various materials at the show. The buffet (yes, we are in Vegas after all!) of information we have lined up is pretty dang awesome! You can find more information about everything {code} has going on on our official {code} at Dell EMC World page.

Demos, Demos, Demos

What I wanted to talk about today are the two sessions that I will be presenting at Dell EMC World. The first session, called Demos Demos Demos! Containers & {code}, is happening on Wednesday, May 10 at 1:30 PM in room Zeno 4602. I will be co-presenting with Travis Rhoden and Vladimir Vivien. Just like the title says, this session will have a few slides to set up what is going on and talk about who we are… then it’s nothing but live demos. I think this will be a pretty amazing session that captures what is hot in the container and scheduler space, but at the same time gives you some practical, real-world information to take home with you. Definitely check this out!

ScaleIO Framework

The second session I will be presenting solo. It’s called Managing ScaleIO As Software On Mesos and is happening on Thursday, May 11 at 11:30 AM in room Zeno 4602. I floated this idea last year during a session at (the then) EMC World 2016, where I thought it would be cool to be able to treat storage as just another piece of software. Well, now it’s one year later, that idea is a reality, and we are going to talk about and demonstrate the ScaleIO Framework in this session. Many other container schedulers have implementations of this pattern, and this concept will change the way we consume software in the future.

Have fun, but not too much fun!

If you are heading down to Dell EMC World this year, stop by the sessions the {code} team will be presenting, and if you have any questions, feel free to linger around after the presentations to chat. I think this is going to be an awesome conference, so do check out some of the social networking opportunities available to connect with some new people, and as always enjoy the show and have fun (but not too much… it’s Vegas after all)!

Applications that Fix Themselves

I know that in my last blog post I said I would be talking about (and probably announcing) the FaultSet functionality planned for the next release of the ScaleIO Framework. As with all things in the world of technology and software, things don’t always go as planned. So today I am here to talk about some things relating to the Framework that will be in my speaking session entitled How Container Schedulers and Software Defined Storage will Change the Cloud at SCaLE 15x this Saturday, March 4th at 3pm in Ballroom F of the Pasadena Convention Center.


This new functionality at face value seems straightforward, but the implications start to open the doors to some next-level thinking. Ok, ok, ok. I may have oversold that a little, but the idea itself is still pretty cool and I am super excited to talk about it here.

Just make it happen. I don’t care how!

Just this week, I released the ScaleIO Framework version 0.3.1, which has a functionality preview **cough** experimental **cough** for a couple of features that I think are cool. The first feature, although not as interesting, will probably be the most immediately useful to people who want to use ScaleIO but were turned off by the installation instructions… starting from a bare Mesos cluster, you can provision the entire ScaleIO storage platform in a highly available 3-node configuration from scratch and have all the storage integrations, like REX-Ray and mesos-module-dvdi, installed automatically.

Easy Street

In case you missed it… without having to know anything about ScaleIO, you can deploy an entirely software-based storage platform that gives your Mesos workloads the ability to persist application data seamlessly, makes that data globally accessible, and makes your apps highly available. This abstracts away the complexities of the entire storage platform and transforms it into a simple service you can consume storage from. As far as any user is concerned, the storage platform came with Mesos natively, and the first app you deploy can consume ScaleIO volumes from day one. If you want more details on how to make that happen, please check out the documentation.

The Sky is Falling!! Do Something?!?!

I think the second functionality preview **cough** experimental **cough** in the 0.3.1 release has perhaps the most compelling story, but may be less useful in practice (at least for now). I have always been fascinated by the idea that applications, when they run into trouble, can go and fix themselves. We often call this self-remediation. In reality, that has always been a pipe dream, but there is some really cool infrastructure in the form of Mesos Frameworks that makes this idea a possibility.


So this second feature comes from my days as both a storage and backup user… where I would get the dreaded “storage array is full” notification. This typically entails getting another expander shelf for your storage array (if you are lucky enough to have expansion capability), populating disks in the expansion bay, and then configuring the array to accept this new raw capacity. In the age of Clouds and DevOps, anything is possible, and provisioning new resources is only an API call away.
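To make “an API call away” a little more concrete, here is a rough, hypothetical sketch of that kind of expansion using the AWS CLI. The threshold, volume size, availability zone, device name, and the pool-utilization check are all placeholders, and the actual Framework drives this through the AWS API rather than a shell script:

# hypothetical remediation sketch: grow raw capacity when the pool gets close to full
THRESHOLD=80                                # percent used that triggers expansion (placeholder)
USED_PCT=$(get_pool_used_percent)           # placeholder for querying ScaleIO pool utilization
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)

if [ "$USED_PCT" -ge "$THRESHOLD" ]; then
  # create a new 100 GiB EBS volume in this node's availability zone
  VOL_ID=$(aws ec2 create-volume --size 100 --availability-zone us-west-2a \
    --volume-type gp2 --query 'VolumeId' --output text)
  # attach it to this instance; the Framework would then add the new device to the ScaleIO pool
  aws ec2 attach-volume --volume-id "$VOL_ID" --instance-id "$INSTANCE_ID" --device /dev/xvdf
fi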


The idea is that as our ScaleIO storage pool starts to approach full, we can provision more raw disks in the form of EBS volumes to contribute to the storage pool. Since we exist in the cloud, or in this case AWS, that is only an API call away. That is exactly the idea behind this feature… to live in a world where applications can self-remediate and fix themselves. Sounds cool, yeah?! If you are interested in more information about this feature, I urge you to check out the user guide, try it out, and provide input and feedback! And if you happen to be at SCaLE 15x this week, I will be doing this exact demo live! BONUS: You can watch the video demo that was performed at SCaLE here:

Where to go next…

So I hope the FaultSet functionality is just around the corner, along with support for CoreOS, or what they are now calling Container Linux, since a lot of the stuff coming out of Mesos and DC/OS is now based on that platform. Let us know if you want more content surrounding Mesos and the ScaleIO Framework by hitting me up in our {code} community Slack channel #mesos. Additionally, if you are in the Los Angeles area this week, I would highly recommend stopping by SCaLE 15x in Pasadena, catching some of the sessions, and stopping by the {code} booth in the expo hall to continue the conversation.