Microsoft Azure: AI Must be Treated Responsibly

Artificial intelligence (AI) is one of today's most significant technologies and can be used to achieve great things for companies, people and society. But if treated irresponsibly, AI also poses real risks, according to Microsoft Azure.

“AI is redefining technology of our time and can do so much for people, industry and society at large,” Alysa Taylor, corporate VP of Microsoft’s Industry, Apps and Data Marketing team, said Dec. 7, during the digital event “Put Responsible AI Into Practice.”

She pointed as an example to the non-profit organization The Ocean Cleanup, which, using Microsoft AI tools, found a more efficient way to differentiate plastic from other materials while cleaning oceans.

Another example: Microsoft Azure is helping the U.S. government make farming more sustainable, she noted. Also, hospitals are using Microsoft tools such as its Azure Health Bot to screen for COVID-19, she told viewers.

“We’re investing across the company, from breakthrough research to AI tools for developers and data scientists to our AI-powered apps and experiences,” she said, explaining: “Our goal in creating these tools and technologies is to augment the work that people do, freeing up time for more creative tasks and innovative thinking.”

All of it is being “built on the foundation and the commitment to responsible AI because we recognize that this technology can be used for both desirable and undesirable purposes and that its use may have unintended outcomes,” she said, adding: “We must all be realistic about the challenges that we will face.”

Microsoft believes in principles that put people first: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability, she noted.

Microsoft’s responsible AI governance model started about five years ago, with the AI & Ethics in Engineering & Research (AETHER) committee, which she said continues to advise its senior leadership on the challenges and opportunities presented by AI innovations. It continued with the creation of the Office of Responsible AI (ORA) and then the Responsible AI Strategy in Engineering team (RAISE).

Microsoft wants to help other companies build their own responsible AI practices based on its learnings, she said.

To help accomplish that, the company has introduced the new resource manual “Ten Guidelines for Product Leaders to Implement AI Responsibly,” she told viewers. Microsoft teamed with Boston Consulting Group to develop the guidelines, she said.

A “People-Centric Design”

“We have many talented data scientists really building state-of-the-art models across vision, language, machine translation and really pushing the state-of-the-art in every dimension,” according to Eric Boyd, corporate VP of AI Platform at Microsoft.

“But one of the things that they really have to think about is how are we going to build this in a responsible way, making sure that the advances of AI are available to everyone,” he said. “And so they really think through how we’re making sure that the data is still going to be private, that we’re going to be transparent about the process that we’re using and really build trust with our users.”

“One of the key things that we found with this is having a people-centric design and really thinking through how are people that are going to use this software really going to interact with it,” he explained.

A multi-disciplinary group is also important, he said: “You don’t just need data scientists. You need user experience experts, you need design experts, you need developers and product managers and really the full suite to think through the experience. How is the user going to interact with this product and are they going to understand how it’s being used, or what’s being asked of them?”

With all of that in mind, Microsoft “ended up building a lot of tools that really help simplify the process, help you understand how to interpret the results or understand where the errors are coming in in your system,” he said. “And then we’ve taken those tools and we’ve made them available in open source, [and we’re] looking for others to contribute and build upon them,” he told viewers.

Microsoft’s “leading tool for data scientists is Azure Machine Learning and it really helps data scientists do everything from build to train to manage their models, all in one place,” he went on to say, adding: “Some of the things that we really want to help them with is understanding where there are errors in their models. No model is going to be perfect and so it’s going to have a place where it misses [and] makes bad predictions. And so Error Analysis is a tool that can help them see the distribution of that and see where is that lined up.”
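The idea behind error analysis, as Boyd describes it, is seeing where a model's mistakes concentrate rather than looking only at overall accuracy. A minimal sketch of that idea in plain Python (this is an illustration of the concept, not Microsoft's Error Analysis tool itself; the cohort names and prediction records are invented):

```python
from collections import defaultdict

def cohort_error_rates(records):
    """Group prediction records by a cohort key and compute the
    fraction of wrong predictions within each cohort."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for cohort, y_true, y_pred in records:
        totals[cohort] += 1
        if y_true != y_pred:
            errors[cohort] += 1
    return {c: errors[c] / totals[c] for c in totals}

# Hypothetical predictions: (cohort, actual label, predicted label).
records = [
    ("age<30", 1, 1), ("age<30", 0, 0), ("age<30", 1, 0),
    ("age>=30", 1, 1), ("age>=30", 0, 0), ("age>=30", 1, 1),
]

rates = cohort_error_rates(records)
print(rates)  # errors are concentrated in the "age<30" cohort
```

An aggregate accuracy number would hide that one cohort accounts for all of the mistakes; surfacing that distribution is the kind of insight the Error Analysis tool in Azure Machine Learning is meant to provide.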

One thing Microsoft heard from developers was that it’s difficult to integrate all the tools, he said. So the company introduced a Responsible AI dashboard to help them, he told viewers, noting it is open source and available in Azure Machine Learning.