How CEOs Can Prioritize Ethical Innovation and Data Dignity in AI

More and more, companies are relying on artificial intelligence to carry out various business functions – some that only computers can do and some that humans still handle better. And while it might seem that a computer could perform these functions without any kind of bias or agenda, leaders in artificial intelligence are increasingly wary of that assumption.

The anxiety is prevalent enough that new responsible AI measures brought forward by the federal government would require companies to examine these biases and put systems in place to avoid them.

The Four Pillars of Responsible Artificial Intelligence

Ray Eitel-Porter, Managing Director and Global Lead for Responsible AI at Accenture, said during a virtual event hosted by Fortune on Thursday that the tech consulting firm works around four “pillars” for AI implementation: principles and governance, policies and controls, technology and platforms, and culture and training.

“The four pillars basically came from our engagement with a number of clients in this space and from our realization of where people are in their journey,” he said. “Most of the time now, it’s really about how you put your principles into practice and apply them.”

Many companies these days have an artificial intelligence framework of principles. Policies and controls are the next layer, which is about how to put those principles into practice. Technology and platforms are the tools through which you implement them, and the culture and training part ensures that everyone at every level of the company understands their role, can carry it out, and buys into it.

“It’s definitely not just something for a data science team or a technology team,” Eitel-Porter said. “It’s very relevant to everyone across the company, so culture and training are really important.”

Naba Banerjee, Chief Product Officer at Airbnb, proposed the inclusion of a fifth pillar: the financial investment required to achieve these things.

Interestingly, Eitel-Porter said the interest and intent are there, citing a recent Accenture survey of 850 senior executives globally, which found that only 6% had operationalized responsible AI, while 77% said doing so is a top priority going forward.

As for Banerjee’s point about investment, the same survey showed that 80% of respondents said they would allocate 10% of their AI and analytics budgets to responsible AI over the next few years, while 45% said they would allocate 20% of their budgets to the effort.

“This is really encouraging because, frankly, without money, it is very difficult to do these things, and it shows that there is a very strong commitment on the part of organizations to move to the next step … to operationalize the principles through the governance mechanism,” he said.

How Companies Try to Be Responsible

Airbnb is using artificial intelligence to prevent house parties in host homes, a problem that has only grown during the pandemic. One signal the company uses to detect this risk is guests under the age of 25 booking large homes, on the assumption that some of those customers are scouting party locations.

“That makes some sense, so why use AI?” Banerjee asked. “But when you have a platform with over 100 million guests, over 4 million hosts, over 6 million listings, and the scale keeps growing, you can’t do it with a set of rules. Once you build a set of rules, someone finds a way to bypass them.”
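To make that contrast concrete, here is a minimal sketch of what a learned risk screen might look like, as opposed to a fixed rule list. Everything below is hypothetical: the feature names, the tiny training set, and the threshold are illustrative stand-ins, not Airbnb’s actual system.

```python
# Hypothetical sketch of a learned booking-risk screen; the features,
# data, and threshold are illustrative, not Airbnb's actual system.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Each row: [guest_age, account_age_days, distance_to_listing_km,
#            is_entire_home, is_weekend, nights_booked]
X_train = np.array([
    [22,  30,   5, 1, 1, 1],   # young local account booking an entire home
    [45, 900, 400, 0, 0, 4],   # established account traveling for a longer stay
    [24,  10,   2, 1, 1, 1],
    [35, 500, 250, 1, 0, 7],
])
y_train = np.array([1, 0, 1, 0])  # 1 = booking was later flagged for a party

model = GradientBoostingClassifier().fit(X_train, y_train)

new_booking = np.array([[23, 15, 3, 1, 1, 1]])
risk = model.predict_proba(new_booking)[0, 1]
print(f"party risk score: {risk:.2f}")
if risk > 0.8:  # the cutoff is a policy choice, tuned against false positives
    print("route booking to additional review before confirming")
```

Unlike a published rule (“no guests under 25”), the weighting across many signals shifts each time the model is retrained, which makes it harder to reverse-engineer and game.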

Banerjee said employees were constantly training the model to enforce these rules, but it wasn’t perfect.

“When you try to stop the bad actors, you unfortunately catch some dolphins in the net as well,” she said.

This is when humans in customer service have to step in to help individual users of the platform who had no intention of throwing a rager but were prevented from booking anyway. Those cases are used to improve the models as well.
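A minimal sketch of what that feedback loop could look like, assuming a simple review queue; the function and labels are invented for illustration and are not Airbnb’s internal tooling:

```python
# Minimal sketch of a human-in-the-loop correction queue; the function
# and labels are invented for illustration, not Airbnb's internal tooling.
labeled_examples = []  # accumulates (features, true_label) pairs for retraining

def handle_blocked_booking(features, agent_decision):
    """Record a customer-service ruling on a booking the model blocked.

    agent_decision: "legitimate" means the model caught a dolphin
    (a false positive); "party" confirms the block was correct.
    """
    true_label = 0 if agent_decision == "legitimate" else 1
    labeled_examples.append((features, true_label))
    return true_label == 0  # True -> unblock the reservation

# A reviewer clears a flagged guest; the case becomes training data.
unblock = handle_blocked_booking([23, 15, 3, 1, 1, 1], "legitimate")
print("unblock booking:", unblock, "| queued examples:", len(labeled_examples))
# Periodically, labeled_examples is folded into the next retraining run.
```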

But robots can’t do everything. One way the online homestay marketplace keeps humans in the loop is Project Lighthouse, which focuses on preventing discrimination in partnership with civil rights organizations. Banerjee said the company’s mission is to create a world where anyone can belong anywhere, and to that end, the platform has removed 2.5 million users since 2016 who did not follow its community standards.

“Unless you can measure and understand the impact of whatever kind of system you’re building to keep society safe… you can’t do anything about it,” she said.

Project Lighthouse aims to measure and root out this discrimination, but it does so without facial recognition or algorithms. Instead, it relies on humans to help gauge how someone’s race is perceived while keeping that person’s identity anonymous.

“When we see a gap between white guests and black guests and white hosts and black hosts, we take action,” she said.
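As a rough illustration of the kind of gap metric such a program might track, consider the sketch below; the records and perceived-race labels are invented, and Airbnb’s actual methodology is more involved.

```python
# Hypothetical gap metric; the records and labels below are invented
# for illustration, not Project Lighthouse's actual methodology.
from collections import defaultdict

booking_requests = [
    {"perceived_race": "white", "accepted": True},
    {"perceived_race": "white", "accepted": True},
    {"perceived_race": "black", "accepted": True},
    {"perceived_race": "black", "accepted": False},
]

totals, accepted = defaultdict(int), defaultdict(int)
for request in booking_requests:
    group = request["perceived_race"]
    totals[group] += 1
    accepted[group] += request["accepted"]

rates = {group: accepted[group] / totals[group] for group in totals}
gap = rates["white"] - rates["black"]
print(f"acceptance rates: {rates}")
print(f"gap: {gap:.0%}")  # a persistent gap would trigger investigation
```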

At Mastercard, artificial intelligence has long been used to prevent fraud across the millions of transactions that occur daily across the country.

“It’s interesting because at Mastercard we work in data and technology. This is the space we’ve been in for many years,” said Raj Seshadri, Head of Data and Services at Mastercard.

She added that the concept of trust is ingrained in this work: “What is the intent of what you do? What are you hoping to achieve, and what are the unintended consequences?”

But having more data, Seshadri said, can help avoid discrimination when using AI. For example, small businesses run by women are often approved for less credit, but with more data points, it may be possible to reduce that gender bias.

“It’s equal opportunity,” Seshadri said.
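As a sketch of why extra data points can matter, compare a hypothetical “thin-file” credit model with one given additional signals such as cash flow. The models, features, and numbers below are invented for illustration and are not Mastercard’s.

```python
# Illustrative only: two hypothetical small-business credit models, one
# trained on a thin feature set and one given extra data points.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Thin-file columns: [years_in_business, prior_loan_count]
X_thin = np.array([[1, 0], [8, 3], [2, 0], [10, 4]])
# Extra columns: [monthly_revenue_k, on_time_supplier_payments_pct]
extra = np.array([[40, 0.98], [60, 0.95], [35, 0.70], [70, 0.90]])
X_rich = np.hstack([X_thin, extra])
y = np.array([1, 1, 0, 1])  # 1 = loan repaid

thin_model = LogisticRegression().fit(X_thin, y)
rich_model = LogisticRegression().fit(X_rich, y)

# A young business that looks risky on file length alone but has strong
# cash-flow signals once those data points become available.
applicant_thin = np.array([[2, 0]])
applicant_rich = np.array([[2, 0, 55, 0.99]])
print("thin-file approval prob:", thin_model.predict_proba(applicant_thin)[0, 1])
print("rich-file approval prob:", rich_model.predict_proba(applicant_rich)[0, 1])
```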

Biased Robots Are Human Creations

Biased bots are not conscious creatures with an agenda, said Krishna Gade, founder and CEO of Fiddler AI, but rather the result of flawed human data feeding what we hope will be an improved version of a process.

The difficulty, Gade said, is that machine-learning software is becoming something of a black box. It doesn’t work like traditional software, where you can inspect the code line by line and make repairs. That makes it difficult to explain how the AI works.

“They’re basically trying to infer what’s going on in the model,” Gade said. The data an AI uses to calculate a Mastercard customer’s loan approval, for example, may look causal to the model without being causal in the real world. “There are many other factors that may drive the current rate.”

With Fiddler AI, users can “tinker” with a model’s inputs to see why it behaves the way it does. You can adjust someone’s past debts to see how much their credit score would change, for example.
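A minimal what-if probe in that spirit might look like the following sketch; the toy model and feature names are stand-ins, not Fiddler AI’s actual product or API.

```python
# Minimal what-if probe; the toy model and feature names are stand-ins,
# not Fiddler AI's actual product or API.
import numpy as np
from sklearn.linear_model import LinearRegression

# Columns: [income_k, past_debt_k, years_of_credit_history]
X = np.array([[50, 10, 5], [80, 40, 10], [30, 5, 2], [90, 20, 15]])
y = np.array([680, 640, 620, 760])  # credit scores
model = LinearRegression().fit(X, y)

applicant = np.array([60.0, 30.0, 7.0])
baseline = model.predict([applicant])[0]

# Perturb one input at a time and watch the prediction move.
what_if = applicant.copy()
what_if[1] = 15.0  # halve past debt
adjusted = model.predict([what_if])[0]
print(f"baseline score: {baseline:.0f}, with lower debt: {adjusted:.0f}")
# The size and direction of the shift suggest how heavily the model
# leans on that feature for this particular person.
```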

“These types of interactions can build trust in a model,” he said, noting that some industries, such as banking, have risk management teams that review their AI processes, but not all industries implement such checks.

New government regulation is likely to change that, as many in the industry have called for an AI Bill of Rights.

“Many of these conversations are underway, and I think that’s a good thing,” Seshadri said.
