Can you give us a couple of real-world examples of some useful machine learning in industry or for other fields? Where do you use machine learning? What for?

This is a transcript of Phase Dock LIVE, December 12, 2020 with Chris Lehenbauer and Brandon Satrom.

It has been edited for length, clarity and readability.  Occasional time stamps are shown in square brackets [4:52] so you can watch the action and animations as they are described in the video.

We’ve split it into three parts for easy access.

  • Part One: Introductions and the theory of Machine Learning – previous post
  • Part Two: Real-world examples and practical ideas to get started with Machine Learning – this post!
  • Part Three: Machine Learning at the edge with microcontrollers and single-board computers – coming soon!

Part Two: Real-world examples and practical ideas to get started with Machine Learning

[16:06] Chris: Can you give us a couple of real-world examples of some useful machine learning in industry or for other fields? I’ve got some thoughts myself because I’m in manufacturing, but where do you use this? What for?

Brandon: I think about it a lot. There are the popular examples of machine learning, such as facial recognition or speech recognition. At a fundamental level, if you are someone who has a smart speaker in your home, and many people do, you are using some form of machine learning.

This ability for a computer to detect and respond to keywords is a machine learning exercise. The same thing happens on our phones.

A good friend of mine, Pete Warden, who works at Google, likes to tell the story about how smartphones work. And because he works at Google, he uses the “Hey Google” example. There is a little digital signal processor in an Android phone or smart speaker that listens for a set of wake words and then wakes up the main processor. That’s a very common pop-culture example of where machine learning gets used.

Because I work in the IoT space, I have put a lot of thought into manufacturing. I think preventative maintenance is one of the best examples of where machine learning can shine. There’s still a ton of value to unlock there.

What I mean by preventative, and especially predictive, maintenance is that a lot of maintenance work “out in the wild” tends to fall on a schedule. If you have a bunch of industrial machines out on the shop floor, as the owner of those machines you will typically schedule service for them based on “expected breakage.” If you think a machine is going to fail every three years or every five years, you make sure to service it before that point. You schedule annual service. Then, if something breaks anyway, you have to get a repair technician out at an unscheduled time.

That’s a good example of where machine learning can actually help. If you put an accelerometer on that machine and you know what the vibration of a machine that is about to fail looks like, then you can anticipate that when the machine starts vibrating like this, or sounding like this, it is headed for failure. Performing maintenance now will cost less and save the company downtime.
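[Editor’s note: as a rough illustration of this idea, here is a minimal Python sketch. The window size, features, and model choice are all assumptions made for illustration, not a description of any particular product.]

```python
# Predictive-maintenance sketch (illustrative assumptions throughout):
# summarize windows of accelerometer samples into simple features, then
# train a model on examples labeled "healthy" vs. "about to fail".
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def features(window):
    """Coarse summary of one window of raw accelerometer samples."""
    return [window.mean(), window.std(), np.abs(window).max()]

rng = np.random.default_rng(0)
healthy = [rng.normal(0, 1.0, 256) for _ in range(200)]  # smooth running
failing = [rng.normal(0, 2.5, 256) for _ in range(200)]  # rough, noisy running

X = np.array([features(w) for w in healthy + failing])
y = np.array([0] * 200 + [1] * 200)  # 0 = healthy, 1 = headed for failure

model = RandomForestClassifier(random_state=0).fit(X, y)

# At runtime, score each new window and schedule service before the failure.
new_window = rng.normal(0, 2.4, 256)
print("failure risk:", model.predict_proba([features(new_window)])[0][1])
```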

That class of machine learning problems is one that I think will add a whole lot of value for a lot of companies moving forward.

So those are two examples. One in popular culture and one in the industrial space.

Chris: I’d mentioned this before. One of our good friends is working on production equipment in a bio lab out in California. He’s thinking that perhaps he could use machine learning to do quality control in a production environment. Ultimately it comes down to pattern recognition, right?

Brandon: Absolutely, yes. That’s the beauty of it. If you abstract it back to what I was talking about earlier, when it comes to having a set of answers and data, and then telling the computer to tell you what the algorithm is, it is fundamentally pattern matching.

The way you execute “deep learning” is you build something called a “deep neural network.” That term came from the idea that the neurons in our brain fire signals across synapses from one neuron to the next. They create this web-like pattern of how thoughts travel from one side of our brain to the other. And a computer neural network looks very similar. Again, I’ll recommend Andrew Trask’s book “Grokking Deep Learning.”

If you build that sort of a model, what you’re basically doing is going from one layer to the next, passing these little data points (like synapses) from one to the other. That gives the computer enough data to create a pattern: “OK, when I see these kinds of things in my source data, that means this is the class of problem, or the class of output, that I’m seeing on the other side.”
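[Editor’s note: to make “passing data from one layer to the next” concrete, here is a tiny framework-free network in plain Python/NumPy, in the spirit of the book. The XOR data, layer sizes, and training settings are invented for illustration.]

```python
# A tiny two-layer neural network trained on XOR, with no framework.
# Data flows forward from layer to layer; the error flows backward,
# nudging the weights until the network has "found" the pattern.
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0], [1], [1], [0]])

W1 = 2 * rng.random((2, 4)) - 1  # input layer  -> hidden layer
W2 = 2 * rng.random((4, 1)) - 1  # hidden layer -> output layer
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(20000):
    hidden = sigmoid(X @ W1)       # forward pass, layer 1
    output = sigmoid(hidden @ W2)  # forward pass, layer 2
    # Backward pass: attribute the error to the weights that caused it.
    out_delta = (y - output) * output * (1 - output)
    hid_delta = (out_delta @ W2.T) * hidden * (1 - hidden)
    W2 += hidden.T @ out_delta
    W1 += X.T @ hid_delta

print(output.round(2))  # approaches [[0], [1], [1], [0]]
```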

Let’s look at speech recognition. What the computer is doing is looking at sound waves. When it looks at the sound wave, one kind of sound wave corresponds to “hey Google” whereas another sound wave may just be “hey you”, which the machine knows to ignore. It is able to match those patterns by seeing tens of thousands or even millions of examples of what those different kinds of sound waves look like.

That’s how consumer-grade machines like the Echo or Google Home can recognize myriad different voice types without having to be explicitly trained on each one. They know enough to fuzzy-match on different classes of data.
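[Editor’s note: here is a toy Python sketch of that “match the shape of the sound” intuition. Real wake-word systems use spectrograms and neural networks trained on huge datasets; the synthetic tones and nearest-match rule below are stand-ins for illustration only.]

```python
# Toy wake-word matching: turn each clip into a frequency "fingerprint"
# with an FFT, then label new audio by its closest known example.
import numpy as np

rng = np.random.default_rng(2)
sr = 8000                # assumed sample rate, one-second clips
t = np.arange(sr) / sr

def fingerprint(clip):
    spectrum = np.abs(np.fft.rfft(clip))
    return spectrum / spectrum.sum()  # normalize away loudness

# Stand-in signals: the wake phrase has a different frequency mix
# than ordinary speech.
wake = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)
other = np.sin(2 * np.pi * 220 * t)
known = {"hey google": fingerprint(wake), "ignore": fingerprint(other)}

incoming = wake + 0.1 * rng.normal(size=sr)  # a noisy repeat of the phrase
label = min(known, key=lambda k: np.linalg.norm(known[k] - fingerprint(incoming)))
print(label)  # -> hey google
```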

Chris: A little bit of personal experience. This was not machine learning, but twenty years ago, in a previous company, we owned a CNC lathe which was set up for unattended production. The lathe had a bar feeder, so it would run until the bar feeder emptied, which might be 20 hours or so. But of course, if something went wrong, the machine would destroy itself, because it knows no fear and feels no pain. We implemented a system which was a precursor to machine learning. You put an accelerometer on the tool post, and there might be as many as eight tools. For each tool, the operator would provide an upper and lower limit. The system basically said: “If you don’t reach the lower limit, STOP. If you reach the upper limit, STOP.” Because if the upper limit was exceeded, something had broken, causing a spike in the vibration. And if you didn’t reach the lower limit, nothing was happening, because you were probably out of material.

It worked brilliantly. We would actually run the lathe overnight without fear. But what the system could not do was learn. So even when it ran the same part 100,000 times over the course of a month, it would never fine-tune the parameters. It was completely algorithmic.
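[Editor’s note: for contrast with the learning approach discussed next, the fixed-limit logic Chris describes might look something like this sketch. The tool names and limits are invented.]

```python
# The purely algorithmic version: hand-tuned vibration limits per tool.
# If a reading escapes its band, stop the machine. Nothing here ever learns.
LIMITS = {  # hypothetical operator-supplied (low, high) bands
    "rough_turn": (0.2, 3.0),
    "finish_turn": (0.1, 1.5),
    "parting": (0.3, 4.0),
}

def check(tool, vibration):
    low, high = LIMITS[tool]
    if vibration > high:
        return "STOP: vibration spike -- something probably broke"
    if vibration < low:
        return "STOP: too quiet -- probably out of material"
    return "OK"

print(check("parting", 4.8))       # STOP: vibration spike ...
print(check("finish_turn", 0.05))  # STOP: too quiet ...
```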

Now I look at this twenty years later and there are people doing exactly this (but with machine learning). That’s why this is really exciting to me because we were experimenting with this concept way back in the day.

Brandon: I’m glad you brought that example up. That’s a perfect example of what makes the algorithmic approach so hard. With the algorithmic approach, you as the engineer have to know all of the different corner cases. You have to lay out: this is my upper limit, this is my lower limit. And then you’ve got to manually tune it over time.

With machine learning, what you’re doing is setting the high limit and the low limit and allowing the model, over time, to figure out that “the high limit should actually be 20% lower, because I have another piece of information.”

Algorithms are ultimately giant “if/then” statements. Machine learning allows the model to move beyond the imperatives that we give it. It allows the model to be created organically.

Chris: You’re absolutely right, Brandon. Because what I found over time was that I was close, but sometimes I would get “false positives” or “false negatives,” and there was no way to figure out the cause unless you looked at reams and reams of data… but that data was gone. The system I used didn’t store all those wave patterns.

If you can make the machine watch itself and learn from that – which is, of course, where we are going – you recover so much latent productive capacity that otherwise gets wasted, because a human can’t watch the machine constantly. And besides, watching a machine while waiting for something to go wrong is a very poor use of a human, so that’s why I’m really excited about this whole thing.

Brandon: Yeah. We’ve got bigger fish to fry than monitoring machines.

Chris: That kind of vectors off into why robots and automation should not be considered a threat because humans really should not be employed as robots. But that’s a topic for a different day.

This has been great. I’d envisioned kind of a theoretical introduction to machine learning and then some “how does it work?” But if someone wants to do this, how do you get your feet wet, what do you do, and what does that look like?

Brandon: I will again mention “Grokking Deep Learning” by Andrew Trask. That is, bar none, the best book for anyone at any level looking to get into this space. It covers deep learning in particular, but he also does a good job of framing “deep learning” as a subset of “machine learning,” which is a subset of “artificial intelligence,” and creating the Venn diagram of how all of these things work together.

And then, in addition… to be a bit self-promoting… the talk that I gave in Oslo in 2018 about machine learning covers some of those things in the slides, but there are also a few demos that are kind of fun and worth seeing. [Editor’s note: you can find all the resources referenced at the end of the blog post.]

Chris: If I can interject too. Coincidentally, as if this was a sign, the most recent issue of Make: Magazine has a great article on machine learning on microcontrollers, with some good down-and-dirty resources. This is issue 75, and it should be on newsstands for a while. I do not have stock in Make: Magazine, but I love the whole Make empire. Please, please, please support them. That’s just a great resource.

There are some good programming resources too, aren’t there? Some actual tools, not just the books.

Brandon: Andrew Trask’s book is also a very programmatic introduction. He uses Python exclusively, and he doesn’t use any frameworks; it’s a nice thing to actually go through the exercises of building machine learning models without relying on a framework. Once you get past that, you will find that there are some great frameworks out there that developers rely on.

One is called TensorFlow. It’s an open-source library but it is largely stewarded by the team at Google. And there’s another popular one called PyTorch. They’re both very Python-based, which is fine by me. I love Python and love an excuse to use it.
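[Editor’s note: to give a feel for the framework route, here is the classic “hello world” of TensorFlow’s Keras API: learning y = 2x - 1 from six points. PyTorch code for the same task is similarly compact.]

```python
# Minimal TensorFlow/Keras model: one Dense layer learns y = 2x - 1.
import numpy as np
import tensorflow as tf

xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0])
ys = 2 * xs - 1

model = tf.keras.Sequential([tf.keras.layers.Dense(units=1, input_shape=[1])])
model.compile(optimizer="sgd", loss="mean_squared_error")
model.fit(xs, ys, epochs=500, verbose=0)

print(model.predict(np.array([10.0])))  # close to 19
```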

And if you’re a C, C#, or Java developer, each of those frameworks has plug-ins or extensions that let you work in those languages. Both of those frameworks are fantastic.

When we get to machine learning for microcontrollers…and it sounds like we can transition to that now…I have a recommendation for another startup to check out.

Chris: Yes, I think this is a good point to make the transition.

End Part Two: Real-world examples and practical ideas to get started with Machine Learning

See Part Three: Machine Learning at the edge with microcontrollers and single board computers

Resources to learn more about machine learning:

Books:

    • Grokking Deep Learning by Andrew Trask; a programmatic, framework-free introduction to deep learning using Python, recommended repeatedly in this conversation.

Articles/Blog posts:

    • Machine Learning on Microcontrollers by Helen Leigh; Volume 75 of Make: Magazine (should be available on newsstands until February 2021); an extensive introductory/overview and how-to article, available online to subscribers.
    • Why the future of machine learning is tiny: blog post by Pete Warden, staff researcher at Google, who leads their TensorFlow Mobile team.

Videos:

    • Brandon Satrom’s 2018 machine learning talk from Oslo, with slides and demos, mentioned in Part Two.