We have lost one of the greats. Renowned British physicist Stephen Hawking has passed away at the age of 76 in his Cambridge home, leaving behind an enormous legacy for science and for humanity's understanding of the universe.

While Mr. Hawking was primarily a figure in the scientific community, he also took part in some notable conversations in the tech industry, particularly regarding his interest in artificial intelligence. The technology now at the forefront of every major tech company was a central topic of discussion for the physicist, especially during the last years of his life, when it began to become a reality.

AI is still in its infancy, but some of Hawking's opinions and predictions will be key to understanding what's about to happen in our immediate future. These are some of the things he had to say.

He warned that AI could lead to the end of mankind

Back in 2014, when Intel developed a new system that allowed Hawking to speak, he commented on the usefulness of AI, even in its then-primitive state. However, he also expressed his fear of creating something that could surpass human intelligence.

“The development of full artificial intelligence could spell the end of the human race,” he told the BBC at the time, explaining that once AI took off, it would keep redesigning itself at an ever-increasing rate. “Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”

A year later, he predicted that it would probably take less than a century for that to happen. “Computers will overtake humans with AI at some point within the next 100 years,” he said at a London conference in 2015. “When that happens, we need to make sure the computers have goals aligned with ours.”

Stephen Hawking flying weightless in Zero Gravity in 2007

He signed an open letter with Elon Musk (and others) to address AI’s threats

While figures like Mark Zuckerberg and Bill Gates are more optimistic about AI’s prospects, Hawking expressed his fears in an open letter he co-signed with Tesla CEO Elon Musk, in which they called for more research on how to avoid the potential “pitfalls” of AI.

Later, both men, along with other prominent figures, backed the “Asilomar AI Principles”, a set of guidelines for the safe development of artificial intelligence.

“Artificial intelligence has already provided beneficial tools that are used every day by people around the world,” reads the proposal. “Its continued development, guided by the following principles, will offer amazing opportunities to help and empower people in the decades and centuries ahead.”

A total of 23 principles were proposed across three categories: research issues, ethics and values, and longer-term issues. They were presented in February 2017.

Tech could end poverty, but he also urged caution

Far from Hollywood-style premises like Skynet, Hawking’s worries had more to do with practical scenarios, such as how AI and robots could rob the world of millions of jobs or harm the economy. However, he also acknowledged that these were promising advancements that could improve humanity’s existence.

“The rise of AI could be the worst or the best thing that has happened for humanity,” he said last year. “We simply need to be aware of the dangers, identify them, employ the best possible practice and management and prepare for its consequences well in advance.”

To counter these threats, Mr. Hawking called for the creation of a “world government” to face the coming challenges posed by AI. He is unfortunately no longer with us, but his contributions to the field have already proved extremely valuable, and they may well shape the future for the better.