Machine learning is the branch of Artificial Intelligence (AI) devoted to “teaching” computers to perform tasks without explicit instructions, relying instead on inferences drawn from patterns in data. There has been a remarkable amount of machine learning analysis published just in the last month. Citing Crunchbase, Louis Columbus at Forbes puts the number of startups relying on machine learning “for their main and ancillary applications, products, and services” at a rather stunning 8,705, an increase of almost three fourths over 2017.
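To make that definition concrete, here is a minimal Python sketch of learning a rule from examples rather than being handed it explicitly. The Celsius-to-Fahrenheit data and the simple least-squares fit are illustrative choices of mine, not anything from the article.

```python
# A minimal sketch of "learning from examples" versus explicit instruction.
# The conversion rule (F = 1.8 * C + 32) is never written into the model;
# it is inferred from example pairs instead. The data is illustrative.
import numpy as np

celsius = np.array([-40.0, 0.0, 20.0, 37.0, 100.0])
fahrenheit = 1.8 * celsius + 32          # stand-in for observed measurements

slope, intercept = np.polyfit(celsius, fahrenheit, 1)   # least-squares fit
print(f"learned rule: F ~= {slope:.2f} * C + {intercept:.2f}")
print("prediction for 25 C:", slope * 25 + intercept)
```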

There has been talk of AI as a tool to fight climate change, which is certainly promising, but not without its limits. For AI to do this, it needs programs that can learn by example rather than always relying on explicit instruction, which matters because climate change itself is a matter of patterns. Machine learning is “not a silver bullet” in this regard, according to a report by scholars at the University of Pennsylvania, along with the cofounder of Google Brain, the founder and CEO of DeepMind, the managing director of Microsoft Research, and a recent winner of the Turing Award. “Ultimately,” we read in Technology Review‘s distillation of the report, “policy will be the main driver for effective large-scale climate action,” and policy also means politics. Nevertheless, machine learning/AI can help us predict how much electricity we’ll need for various endeavors, discover new materials, optimize the hauling of freight, aid in the transition to electric vehicles, improve deforestation tracking, make agriculture more efficient, and much more. The uncertainties, it seems, are not reason enough to give up on the promise.

The ultimate goal of machine learning may be characterized as “meta-machine learning,” which is in full swing at Google, where researchers are engaged in “reinforcement learning,” rewarding AI robots when their actions succeed so that they improve by building on data from earlier attempts.
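To illustrate the reward-driven idea behind reinforcement learning, here is a toy sketch using a multi-armed bandit with an epsilon-greedy agent. This setup is my own choice for illustration; the article does not describe what Google’s researchers actually built.

```python
# A minimal sketch of reward-driven learning: the agent is "rewarded" when
# its choice pays off and gradually shifts toward the action that pays most.
# The payout probabilities below are made up for the example.
import random

true_payouts = [0.2, 0.5, 0.8]          # hidden reward probabilities (assumed)
estimates = [0.0, 0.0, 0.0]             # agent's learned value for each action
counts = [0, 0, 0]
epsilon = 0.1                           # exploration rate

for step in range(5000):
    if random.random() < epsilon:
        action = random.randrange(3)                 # explore a random action
    else:
        action = estimates.index(max(estimates))     # exploit the best estimate
    reward = 1.0 if random.random() < true_payouts[action] else 0.0
    counts[action] += 1
    # incremental average: nudge the estimate toward the observed reward
    estimates[action] += (reward - estimates[action]) / counts[action]

print("learned values:", [round(v, 2) for v in estimates])
print("preferred action:", estimates.index(max(estimates)))
```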

But authors have also been writing about AI/ML’s limitations. Microbiologist Nick Loman warns that machine learning tech is always going to be “garbage in, garbage out” no matter how sophisticated the algorithms get. After all, he says, as with statistical models, there is never a failsafe mechanism for telling you “you’ve done the wrong thing.” This is in line with a piece by Ricardo da Rocha, where he likens machine learning models to “children. Imagine that you want to teach a child to distinguish dogs and cats. You will present images of dogs and cats and the child will learn based on their characteristics. [The] more images you show, the better the child will distinguish. After hundreds of images, the child will start to distinguish dogs and cats with an accuracy sufficient to do it without any help. But if you present an image of a chicken, the child will not know what the animal is, because it only knows how to distinguish dogs and cats. Also, if you only showed images of German Shepherd dogs and then you present another kind of dog breed, it will be difficult for the child to actually know if it is a dog or not.”
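Da Rocha’s point translates directly into code. The sketch below, with features and numbers made up purely for illustration, trains a classifier only on “dogs” and “cats” and then asks it about a chicken; the model has no way to answer “neither” and simply forces the unfamiliar animal into one of the two classes it knows.

```python
# A minimal sketch of the "dogs and cats" problem: a model trained on only
# two classes cannot recognize an out-of-distribution animal. The features
# (weight in kg, ear length in cm) and values are invented for illustration.
from sklearn.linear_model import LogisticRegression
import numpy as np

# training data: [weight_kg, ear_length_cm] for dogs (label 1) and cats (label 0)
X_train = np.array([[30, 10], [25, 9], [35, 12],    # dogs
                    [4, 4], [5, 5], [3, 4]])        # cats
y_train = np.array([1, 1, 1, 0, 0, 0])

model = LogisticRegression().fit(X_train, y_train)

chicken = np.array([[2, 1]])             # an animal the model has never seen
print("predicted class:", model.predict(chicken))            # forced to pick dog or cat
print("class probabilities:", model.predict_proba(chicken))  # still sum to 1
```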

You may also enjoy watching this astoundingly good 20-minute primer on machine learning.