
Should AI principles be developed for the industry?

Talking about artificial intelligence inevitably leads to discussions about the dangers this technology could bring. While AI is still in its infancy, its threat to humans is such a popular and interesting topic that we can’t stop talking about it, even as products like Siri fail to give users a proper answer most of the time.

Far from science-fiction premises, the industry has to worry more about the immediate negative ramifications that AI could bring in the next 10 to 20 years. We’ve talked before about experts warning us what those threats might be, and it’s clear that what powerful companies do right now will set the stage for what’s to come.

Google is one such company, probably the first in the field, in fact. The Mountain View corporation has even courted a bit of controversy of its own for its involvement in questionable uses of AI. In response, CEO Sundar Pichai has addressed those concerns with seven AI principles that the company believes are key to good, ethical practice.

Be socially beneficial

Pichai explains that “advances in AI will have transformative impacts in a wide range of fields, including healthcare, security, energy, transportation, manufacturing, and entertainment.” As a result, the Google leader promises that the company will weigh a broad range of factors, from social to economic consequences, and will operate under those considerations.

He also stresses that Google will “continue to thoughtfully evaluate when to make its technologies available on a non-commercial basis,” as long as the company complies with the legal requirements of every country and region where it operates.

“We believe that the overall likely benefits substantially exceed the foreseeable risks and downsides.”

Avoid creating or reinforcing unfair bias

According to Pichai, Google is committed to instilling in its AI the sensibility to distinguish fair biases from unfair ones. This is particularly true of “sensitive characteristics” like race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.

This is no easy task, Pichai admits. And we’re glad he does: Google has faced controversy in the past for precisely this kind of biased data handling, where innocent queries on its search engine can lead to questionable results. It’s something the company should be working on already, and not just for its AI platform.

Be built and tested for safety

“We will design our AI systems to be appropriately cautious,” Pichai explains about the third of the seven AI principles. Google will test its products in constrained environments first, and then monitor operations after deployment. All in the name of avoiding harm and striving for safety.

If not handled properly, AI could eliminate millions of jobs in a short time

Be accountable to people

“We will design AI systems that provide appropriate opportunities for feedback, relevant explanations, and appeal.” Google, it seems, believes AI must not be left to its own devices, at least not without human direction. That’s exactly what Pichai makes clear.

Incorporate privacy design principles

In this day and age of strong privacy regulations and big scandals, it’s obvious Google would look to comply with what the public (and the law) deems appropriate.

“We will give opportunity for notice and consent, encourage architectures with privacy safeguards, and provide appropriate transparency and control over the use of data.”

Uphold high standards of scientific excellence

Technology is rooted in science. Or rather, technology is science. Pichai seems to think as much, and he expects to apply the same rigorous methodology often used in the scientific community: “We aspire to high standards of scientific excellence as we work to progress AI development.”

Something to note is the CEO’s commitment to openness with the rest of the industry, possibly countering one of the field’s biggest problems. “We will responsibly share AI knowledge by publishing educational materials, best practices, and research,” promises Pichai. Only actions will prove whether those promises hold true.

Be made available for uses that accord with these AI principles

Finally, the last of the AI principles refers to all of the ones before it. “We will work to limit potentially harmful or abusive applications.” Google will evaluate likely uses in light of factors like:

Primary purpose and use: the primary purpose and likely use of a technology and application, including how closely the solution is related to or adaptable to a harmful use.

Nature and uniqueness: whether they’re making available a technology that is unique or more generally available.

Scale: whether the use of the technology will have significant impact.

Nature of Google’s involvement: whether they’re providing general-purpose tools, integrating tools for customers, or developing custom solutions.