Stanley in his victory pose
Indeed, this is the first in a multi-part series about machine learning. You may have heard the term carelessly thrown around in many different contexts. That probably isn't a symptom of underinformed people trying to sound smart but rather a sign of how ill-defined the topic is. In general, machine learning refers to algorithms that allow computers to make inferences based on past experience. A program starts out performing a task passably well, then it does the task some more, and then all of a sudden it can do the task better... well, that's machine learning. A more rigorous definition than that doesn't really exist.
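To make that "improves with experience" idea concrete, here is a minimal sketch using a toy 1-nearest-neighbor classifier. Everything in it (the task, the data, the function names) is invented for illustration; the point is only that the same program gets better at the same task when given more examples.

```python
def nn_predict(train, x):
    """1-nearest-neighbor: return the label of the closest training point."""
    return min(train, key=lambda pt: abs(pt[0] - x))[1]

def accuracy(train, test):
    """Fraction of test points whose label the classifier gets right."""
    return sum(nn_predict(train, x) == y for x, y in test) / len(test)

# Toy task: label a number 1 if it is >= 0.5, else 0.
test = [(0.05, 0), (0.45, 0), (0.55, 1), (0.95, 1)]

little_experience = [(0.3, 0), (0.95, 1)]
more_experience = little_experience + [(0.48, 0), (0.52, 1)]

print(accuracy(little_experience, test))  # 0.75
print(accuracy(more_experience, test))    # 1.0
```

Nothing in the program changed between the two runs; it simply saw more examples, and its guesses near the boundary improved. That, in miniature, is the phenomenon the rest of this series is about.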
You probably encounter tons of examples of machine learning every day. When you feed your handwritten check into the ATM and it (somewhat creepily) tells you how much it's worth... that's machine learning. When you go to the grocery store and receive a discount for using your "club card"... that discount was probably computed using machine learning techniques. If you have an Android phone and you use the voice search feature... you guessed it, that's machine learning. When that important email from your professor or boss is diverted into the spam folder only to be discovered long past its expiration date... that's machine learning (or lack thereof). It's hard to fathom how such diverse applications could all fall under the same umbrella.
Machine learning is not one algorithm or even one branch of math or science. It's an aggregation of many different techniques borrowed from statistics, optimization, computer science, and even neuroscience, codified into algorithms that can be implemented on computers. By bringing the techniques of all these fields to bear on difficult problems, programmers can endow computers with the ability to perform tasks one might not expect them to complete.
In recent months, we have seen some powerful demonstrations of the capabilities of machine learning. First, Watson triumphed over the best human Jeopardy! players. This feat required an understanding of the nuances of the English language that many never expected computers to achieve. Fresh off Watson's Jeopardy! victory, Google granted increased visibility into its semi-secret project to create autonomous cars by releasing this video. Where did Google get all of this autonomous car knowledge? If you guessed the DARPA Grand Challenge, then you guessed correctly. Sebastian Thrun, the leader of the victorious 2005 Stanford team, and his colleagues now work at Google.
These feats are somewhat jarring because they represent an encroachment of computers into domains previously dominated by humans. Let's just say that their inexorable march is not finished. Researchers will continue to improve machine learning algorithms, and computers will gain ever more aptitude in surprising domains. I love stoking robot apocalypse hysteria as much as the next person, but surprisingly that's not my point here. My point is that machine learning is an incredibly interesting and powerful tool that is applicable across an expanding number of domains. Something so powerful must be worth understanding... at least a little bit.
If Watson and Google's driverless car weren't enough to get you excited about upcoming installments, then I guarantee this will be. (Watch until the end. It's worth your two minutes.)