It’s wild how quickly we’ve gotten used to so many amazing technologies. It’s even more wild to find out that there is, in fact, a lot happening behind them. Take machine learning, for example: we now expect devices to get better as they learn from being used. But what if the very scientists who develop these algorithms don’t always know how or why they work?

An expert’s take

You must be hitting the nail on the head if you receive a 40-second ovation when speaking to a crowd. That’s exactly what happened to Ali Rahimi, an artificial intelligence researcher at Google in San Francisco, California, when he gave a talk at an AI conference.

The researcher took a swipe at his own field by claiming that machine learning has been turning into “a form of alchemy.” According to him, the same researchers who develop these algorithms don’t always know why some work and others don’t. On top of that, there are no established criteria for choosing one AI architecture over another.

Much like the programs they make, experts often rely on trial and error when dealing with a problem, instead of precisely identifying what’s wrong and fixing it directly. “There’s an anguish in the field,” Rahimi claims. “Many of us feel like we’re operating on an alien technology.”


Much of this comes down to AI’s reproducibility problem: researchers often can’t replicate previous results or advancements because of the uncertainty that surrounded their development in the first place. Couple that with a secretive industry that is usually unwilling to share data and training processes, and reproduction becomes nearly impossible, or very difficult at best.

Then there’s the “interpretability” problem in machine learning. Because the idea is for AI to evolve on its own, developers don’t have much control after a certain point, and things can get unpredictable. That’s part of the charm, to be sure, but it also makes it particularly hard to explain how an AI arrived at a given conclusion by itself.

A human responsibility 

Google, where Rahimi works, is at the forefront of AI development

All of this is laid out in a paper Rahimi co-authored with other experts in the field. To be fair, these professionals don’t just blame machine learning itself for being difficult; they also point to questionable practices in the industry.

In short, the paper attributes much of the problem to sub-par methods employed by developers. Fellow Google scientist François Chollet adds that “people gravitate around cargo-cult practices,” relying on “folklore and magic spells.” It’s an inefficient way to work, because sometimes the core of the program itself is broken, and what’s really doing the work are the little tricks and additions bolted on top.

These practices naturally lead to wasted effort and talent, and to suboptimal performance. The paper gives the example of a cutting-edge translation program that actually worked better once its most complex parts were removed.

A better way

It’s not all criticism, though. Rahimi and several other specialists also offer recommendations that could lead the field to better days.

To learn which parts of an algorithm actually matter, for example, Rahimi recommends running an “ablation study”: deleting components one at a time to determine precisely what each one contributes. That way, you can tell which pieces really drive performance, and how a change in one area affects the others.
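The mechanics can be sketched in a few lines. The toy task, pipeline, and component names below are all invented for illustration (none of them come from Rahimi’s paper): evaluate the full system once, then re-evaluate it with each component switched off and compare the scores.

```python
import random

random.seed(0)

# Invented toy task: the label is 1 when x0 + x1/10 > 1.0.
# Feature x1 is deliberately generated on a 10x larger scale than x0.
data = []
for _ in range(400):
    x0, x1 = random.random(), random.random() * 10
    data.append(((x0, x1), 1 if x0 + x1 / 10 > 1.0 else 0))

# Hypothetical pipeline "components" to ablate, applied in order.
def rescale(x):
    """Bring x1 back onto the same scale as x0."""
    return (x[0], x[1] / 10)

def add_bias(x):
    """Append a constant feature."""
    return (*x, 1.0)

COMPONENTS = [("rescale", rescale), ("add_bias", add_bias)]

def predict(x):
    """Fixed linear rule standing in for a trained model."""
    return 1 if sum(x) > 2.0 else 0

def accuracy(active):
    """Evaluate the pipeline with only the named components enabled."""
    correct = 0
    for x, y in data:
        for name, fn in COMPONENTS:
            if name in active:
                x = fn(x)
        correct += predict(x) == y
    return correct / len(data)

full = accuracy({name for name, _ in COMPONENTS})
print(f"full pipeline: {full:.2f}")
for name, _ in COMPONENTS:
    others = {n for n, _ in COMPONENTS if n != name}
    print(f"without {name}: {accuracy(others):.2f}")
```

Each output line shows how much the system loses when a single component is disabled; in a real study the same loop would wrap training and evaluating the actual model, and a component whose removal barely moves the score is a candidate for Chollet’s “folklore.”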

Ben Recht, a computer scientist at the University of California, gives a broader suggestion. He says researchers should take inspiration from physics, where a problem is often shrunk down to a smaller “toy problem.” Most of the time, insights from experiments done at a smaller, more controllable scale carry over well to the real, “big” scale.

Finally, a more human angle: drop the heavy emphasis on competition. If a research paper is more likely to be published because it beats some benchmark rather than because it genuinely sheds new light on the technology, something is wrong. “The purpose of science is to generate knowledge. You want to produce something that other people can take and build on.”