Max Welling on the Future of Machine Learning (TDS Podcast)

Max Welling is a former physicist and current VP of Technologies at Qualcomm. He is also a machine learning researcher affiliated with UC Irvine, CIFAR, and the University of Amsterdam.

Max has just shared some great insights about the current state of ML research and the future direction of the field:

“Computations cost energy, and drain phone batteries quickly, so machine learning engineers and chipmakers need to come up with clever ways to reduce the computational cost of running deep learning algorithms. One way this is achieved is by compressing neural networks, or identifying neurons that can be removed with minimal consequences for performance, and another is to reduce the number of bits used to represent each network parameter (sometimes all the way down to one bit!). These strategies tend to be used together, and they’re related in some fairly profound ways.”
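
To make those two strategies concrete, here is a minimal, hypothetical NumPy sketch of magnitude pruning plus uniform quantization applied to a single weight matrix. This is not code discussed in the episode; the function names, the sparsity level, and the bit width are illustrative choices, and real model-efficiency toolchains handle this far more carefully (per-layer scales, fine-tuning after compression, and so on).

```python
import numpy as np

# Illustrative only: magnitude pruning removes the smallest weights,
# and uniform quantization rounds the survivors onto a coarse grid.

def prune_by_magnitude(weights, sparsity=0.5):
    """Zero out the fraction `sparsity` of weights with the smallest magnitude."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) >= threshold, weights, 0.0)

def quantize_uniform(weights, num_bits=8):
    """Round weights onto a symmetric uniform grid with roughly 2**num_bits levels."""
    max_abs = np.abs(weights).max()
    if max_abs == 0:
        return weights
    scale = max_abs / (2 ** (num_bits - 1) - 1)   # step size of the grid
    codes = np.round(weights / scale)             # small integer codes (what you would store)
    return codes * scale                          # dequantized approximation used at run time

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)
w_compressed = quantize_uniform(prune_by_magnitude(w, sparsity=0.5), num_bits=4)
print(w_compressed)   # half the entries are exactly zero; the rest sit on a few discrete levels
```

The sketch also hints at why the two strategies are used together: pruning cuts the number of nonzero parameters, and quantization shrinks the number of bits spent on each surviving parameter, so the savings compound.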

“Currently, machine learning models are trained on very specific problems (like classifying images into a few hundred categories, or translating from one language to another), and they immediately fail if they’re applied even slightly outside of the domain they were trained for. A computer vision model trained to recognize facial expressions on a dataset featuring people with darker skin will underperform when tested on a different dataset featuring people with lighter skin, for example. Life experience teaches humans that skin tone shouldn’t affect interpretations of facial features, yet this minor difference is enough to throw off even cutting-edge algorithms today.”

“So the real challenge is generalizability — something that humans still do much better than machines. But how can we train machine learning algorithms to generalize? Max believes that the answer has to do with the way humans learn: unlike machines, our brains seem to focus on learning physical principles, like “when I take one thing and throw it at another thing, those things bounce off each other.” This reasoning is somewhat independent of what those two things are. By contrast, machines tend to learn in the other direction, reasoning not in terms of universal patterns or laws, but rather in terms of patterns that hold for a very particular problem class.”

“For that reason, Max feels that the most promising future areas of progress in machine learning will concentrate on learning logical and physical laws, rather than specific applications of those laws or principles.”

Jeremy Harris, Towards Data Science, Jun 3, 2020 (https://towardsdatascience.com/the-future-of-machine-learning-cd5b8b6e43cd)

Hear the full topic discussion on Spotify: https://open.spotify.com/episode/20flI9imCj9YhW7HVUL92Z?si=glb6JLwzR86KKc6Yc-LvRQ
