How does machine learning (ML) work? Can tiny computers drive IoT and ML at the edge? How can I get started with ML? Learn all that and more about the collaboration between IoT, tiny computers, and machine learning.
This transcript has been edited for length, clarity, and readability. Occasional time stamps are shown in square brackets [4:52] so you can watch the action and animations as they are described in the video.
We’ve split it into three parts for easy access.
- Part One: Introductions and the theory of Machine Learning – this post
- Part Two: Real-world examples and practical ideas to get started with Machine Learning – coming soon!
- Part Three: Machine Learning at the edge with microcontrollers and single-board computers – coming soon!
Part One: Introductions and the Theory of Machine Learning.
Chris: Today we’re talking with our good friend Brandon Satrom on the topic: Machine Learning with Tiny Computers in the Internet of Things.
We first met Brandon when he was head of Developer Relations at Particle (Particle.io). I’m sure many of you are familiar with Particle’s innovative IoT devices and ecosystems.
Brandon is currently Director of Developer Experience at Blues Wireless (blues.io) which has a revolutionary communications device. I’d suggest you check them out. It’ll definitely be worth your time.
I honestly don’t know anyone more capable of grasping the minute details of a wide range of arcane technologies while always maintaining a high-level perspective of why it is important, and being able to communicate those things effectively.
So recently, when we saw his presentation on machine learning, we said, “We need to get Brandon to join us for Phase Dock LIVE,” so here we are.
Brandon: Thanks so much for having me.
Chris: Tell us a bit more about yourself and what you are working on right now.
Brandon: Hi, everybody! I’ve been a professional technologist for over twenty years. Most of my background is as a front-end web engineer. I did some Java work (don’t tell anybody!), spent some time at Microsoft over a decade ago, and I worked for software companies and also hardware companies. For the last six or seven years, I’ve been very involved in the IoT space, first personally, by becoming a self-taught Maker like so many in our space, but also professionally. I spent two years at Particle as head of Developer Relations for them. About seven months ago, I joined a company called Blues Wireless. As Chris hinted, we make hardware and software to make wireless IoT dead simple. It’s been really fun. Until just this week we were in “stealth mode.” We’re officially launched now. Anyone who is interested can check us out at blues.io.
I’ve been working on the launch, as well as the developer documentation at dev.blues.io. We’re just trying to get this thing off the ground. It’s exciting to be part of a company building something from scratch and being able to make a mark on what the developer experience looks like.
[4:52] I’m going to change my camera so I can show people what the product looks like.
Phase Dock WorkBench with Blues Wireless Notecard
This is not just because I’m on the Phase Dock LIVE. I have a WorkBench.
I love this thing. Matte black is one of my favorite colors.
Our product is this little gizmo here. It’s called a Notecard. This is a cellular IoT device. It’s about 30mm by 34mm. It comes with 10 years’ worth of cellular for a single price. The devices start at $49 with 500MB of cellular data over 10 years. Over the last several months, I’ve had the privilege of using this product to build documentation and projects for developers.
[Pointing at the devices mounted on the Phase Dock® WorkBench™.] This is just a quick little project. It’s not “on” right now. I’m actually showing how the Notecard works with an ESP32 on our device and with some common Adafruit and SparkFun sensors and other things. And you can see a fleet of other devices on my desk where I’m working up various demos.
Chris: That’s a breakout board in the center, right? You’ve got the MCU off to the side?
Brandon: Yes. The Notecard is our core product. The larger board is the Notecarrier. It’s a carrier board you can plug the Notecard into. This one is called the Notecarrier AF and it has a slot for the Adafruit Feather board. There are 60 or 70 Feather MCUs these days. That’s an ESP32 on it here.
These boards off to the side are Notecarrier AL. You can use jumper wires to plug into it, and it has a port for a LiPo battery.
It’s been an absolute joy to bring a product like this to life. Our founder and the original team have been working on this for the last couple of years. I’ve only been involved for the last seven months, but it’s been great to get this thing out the door. We’re live!
So…that’s what I’ve been working on!
Chris: I know Blues Wireless is just about to do the soft launch. Thanks so much for taking the time to talk to us today.
The take-away is that machine learning is not your day job. It’s kind of a passion for you, right?
Brandon: Right, it’s not my day job. I got really deep into it during my time at Particle, but even then, it was more personal. I wanted to learn more about machine learning and teach myself about the whole space as it became more popular. At that point I got into it purely on the software side. Then as it became more popular to talk about machine learning on microcontrollers, it became a perfect marriage between my day job and something I’m personally interested in.
Chris: Here at Phase Dock, we’re strong in manufacturing. That’s what I do as a day job. And even though I have a degree in computer science I never paid much attention to machine learning. Until recently, I didn’t believe you could meaningfully do machine learning on either single-board computers or microcontrollers. So… at a very high level can you tell us “what is machine learning?” and “why do we care?”
Brandon: Absolutely. Let me share my screen.
[8:15] This is the kind of thing that is more fun to talk about with visuals.
I know you mentioned that you will share a link to a talk I did in Oslo. If you watch that talk later, you’ll see some of these slides. This is a sneak peek of the high points here. [See the references at the end of each blog post.]
The way I like to talk about machine learning is to cast it in terms we understand today, which is algorithms or simple math. For folks who are programmers or who are learning to become programmers, this is one of the greatest things to teach yourself. It’s so much fun.
You learn very quickly that computers respond to the instructions that we give them. At a fundamental level, those instructions are algorithms, right? You say to a computer, “I want you to add two numbers and give me the result.” This is an algorithm. We give the computer “a+b=c”; we give the computer “2 and 2” and the computer says “oh yeah…that’s 4.”
[9:42] Machine learning is a little bit different.
With machine learning, what you’re doing is you’re providing the computer with all the answers (the outputs) and you give it a whole bunch of data (the raw inputs) and you ask it to tell you what the “math” or the algorithm is. In this case, the algorithm is often referred to as a “model.”
The reason that we do that, is not because we don’t know the algorithm. It is because we want the computer to be able to make predictions based on data that we don’t have. We give it “answers”, we give it “data” but we expect that at some point there will be unforeseen data. There will be data that we couldn’t anticipate. We want the computer to be able to apply that unknown data to what we do know and give us a similarly accurate prediction.
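That contrast can be sketched in a few lines of code. This is an illustrative example of my own, not from the talk: the hypothetical hidden rule here is simply “double the input,” and instead of writing that rule ourselves, we hand the computer input/answer pairs and let it tune a single number until its predictions match.

```python
# Traditional programming: we supply the algorithm and the computer
# applies it to produce answers.
def add(a, b):
    return a + b

assert add(2, 2) == 4

# Machine learning flips this around: we supply the inputs AND the answers,
# and ask the computer to find the rule (the "model"). Here the hidden rule
# is y = 2 * x; the learned weight w should approach 2.
pairs = [(1, 2), (2, 4), (3, 6), (4, 8)]  # (input, answer)

w = 0.0  # the computer's initial guess at the rule
for _ in range(100):
    for x, y in pairs:
        prediction = w * x
        # Nudge w in the direction that shrinks the prediction error.
        w -= 0.01 * (prediction - y) * x

# w is now very close to 2, so the model can make predictions for
# data it never saw, such as x = 10.
```

The payoff is exactly what Brandon describes: once the model has found the rule, it can handle inputs that were never in the original data.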
Another way I like to describe this is from a book called “Grokking Deep Learning” by Andrew Trask. For this exercise, we’re referring to something called “the streetlight problem.”
Imagine you go to a place that has a completely different streetlight system from anywhere else in the world. Multiple lights turn on and off, and the stoplights are completely unfamiliar, so as pedestrians we watch.
As human beings, it would only take us five or six rounds before we figure out that when the middle light is on, that means “walk.”
[10:24] As humans, we reason this out logically. We are very good at observing patterns and finding correlation.
Really, all machine learning (specifically the subset called Deep Learning) is just teaching computers to find correlation.
So you tell a computer “This is what I know—i.e., this is my data.” And “This is what I want to know—i.e., these are my answers.” Then you break it down into basic math.
A lot of the real work in machine learning is figuring out how to describe a real-world thing with math or ones and zeros.
You give the computer the ones-and-zeros version of the math. You build a set of instructions for the computer. Then you have to run through those instructions over and over again.
As it runs through those instructions, the computer figures, “I’m making a prediction. It’s way off from the answers I’ve been given, so let me tune the numbers.” It does this over and over again until (as you can see at the bottom right-hand side of the screen) the error is so small that we know this is how a certain activity works.
The reason I’m giving such a long answer is because I want people who aren’t familiar with machine learning to know that this isn’t some magical thing where we give computers a bunch of raw data and they tell us how the world is supposed to work. This is still a very human exercise. I like to say that machine learning is really Human Teaching.
Everything that I just described is still you, as a human being, instructing the computer as to how this is supposed to work. When you build a program that expresses all of this, what you are ultimately doing is this piece that is here on the right-hand side. [11:48]
You’re still giving it a computer program. But instead of giving it the algorithms, imagine that you are holding its hand and slowly finding the right answers over time. One of my favorite things about machine learning is that it’s still a programming problem, but it’s one that is a little bit different from our traditional way of building algorithms.
Hopefully that all made sense. Tell me if there is anything I should clarify, Chris.
Chris: I think it was very helpful. We’re getting some good feedback on the chat.
A lot of our folks, like me, are pretty algorithmic in how we approach things. This is such a different approach that I think that was a really good overview.
I wanted to touch on something you said before we went live today…which is, this is such a human exercise. That brings out a few things that are cautionary. Things like privacy, obviously. Biases that we may bring to the model when we build it. And ethical questions about this as a powerful tool. As you fine-tune it, if you don’t tell people that you are using machine learning and you are using it to influence them behind the scenes, is that ethical?
Machine learning is very powerful, and it is becoming more widely deployed, so I wanted to mention those issues, but we don’t need to dwell on them because that’s a whole other discussion.
Brandon: I will say this briefly. For someone who is getting into machine learning, you’ll very quickly find that there are classes of problems where you really want to get as much data as possible. Machine learning is a data-hungry exercise. I like to say that machine learning loves data like the Cookie Monster loves cookies. The more data the better.
So rather than gathering your own data, you may be looking for existing data sets. The caution when you are looking at existing data sets is to be very careful that the data set you are working from doesn’t perpetuate any biases.
Facial recognition is the best example here. Historically, facial recognition data sets come from facial image databases that are not representative of the world population. So, if you are looking to build an algorithm that works for the real world, you need to find a representative data set. I’ll just leave it at that.
I think you’ll find that anywhere we talk about bias, or anywhere we talk about privacy, you need to be careful and intentional in your design. [16:00]
End Part One: Introductions and the Theory of Machine Learning.
See Part Two: Real-world examples and practical ideas to get started with Machine Learning
Resources to learn more about machine learning:
- Machine Learning on Microcontrollers by Helen Leigh; Volume 75 of Make: Magazine (should be available on newsstands until February 2021); an extensive introductory overview and how-to article. Available online to subscribers.
- Why the future of machine learning is tiny: blog post by Pete Warden, staff researcher at Google who leads the TensorFlow Mobile team.