Eliminating artificial intelligence bias is everyone’s job

With reliance on AI-based decisioning and operations growing by the day, it’s important to take a step back and ask whether everything that can be done to assure fairness and mitigate bias is being done. There needs to be greater awareness and training behind AI deployments, not just for developers and data scientists but also for product managers, executives, marketing managers, and merchandisers. That’s the word from John Boezeman, chief technology officer at Acoustic, who shared his insights on the urgency of getting AI right.

Photo: Joe McKendrick

Q: How far along are corporate efforts to achieve fairness and eliminate bias in AI results?  

Boezeman: Trying to determine bias or skew in AI is a very difficult problem, and it requires a lot of extra care, services, and financial investment, not only to detect those issues but to fix and compensate for them. Many corporations have unintentionally used biased or incomplete data in different models; understanding that and changing this behavior requires cultural changes and careful planning within a company.

Those that operate under defined data ethics principles will be well-positioned to avoid bias in AI, or at least be able to detect and remedy it if and when it’s identified. 

Q: Are companies doing enough to regularly review their AI results? What’s the best way to do this?  

Boezeman: As new tools arrive around the auditability of AI, we’ll see a lot more companies regularly reviewing their AI results. Today, many companies either buy a product that has an AI feature or capability embedded, or the AI is part of that product’s proprietary functionality; in neither case is auditability exposed.

Companies may also stand up basic AI capabilities for a specific use case, usually at the “discover” level of usage. In each of these cases, however, the auditing is usually limited. Where auditing really becomes important is at the “recommend” and “action” levels of AI. In these two phases, it’s important to use an auditing tool so that bias isn’t introduced and the results aren’t skewed.

One of the best ways to help with auditing AI is to use one of the bigger cloud service providers’ AI and ML services; many of those vendors have tools and tech stacks that let you track this information. It’s also key that identifying bias or bias-like behavior be part of the training for data scientists and AI and ML developers. The more people are educated on what to look out for, the more prepared companies will be to identify and mitigate AI bias.
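As an illustration of the kind of check such training might cover, here is a minimal, vendor-neutral Python sketch that compares positive prediction rates across groups (a simple demographic parity audit). The function names, the group labels, and the choice of metric are illustrative, not taken from any specific vendor’s toolkit.

```python
from collections import defaultdict

def positive_rate_by_group(predictions, groups):
    """Share of positive predictions per group: a basic audit signal."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for pred, group in zip(predictions, groups):
        counts[group][0] += int(pred == 1)
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

def demographic_parity_gap(rates):
    """Largest gap in positive rates across groups; 0 means parity on this metric."""
    return max(rates.values()) - min(rates.values())

# Hypothetical audit of model output against a segment attribute.
rates = positive_rate_by_group([1, 0, 1, 1, 0, 0], ["A", "A", "A", "B", "B", "B"])
print(rates, demographic_parity_gap(rates))
```

A growing gap on a metric like this doesn’t prove the model is unfair, but it is exactly the kind of signal an auditing workflow should surface for a human to investigate.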

Q: Should IT leaders and staff receive more training and awareness to alleviate AI bias?  

Boezeman:  Definitely. Both the data scientists and AI/ML developers need training on bias and skew, but it’s also important to expand this training to product managers, executives, marketing managers, and merchandisers. 

It’s easy to fall into the trap of doing what you’ve always done, or of sticking with a bias-centric approach as many industries have in the past. But with training around alleviating AI bias, staff across your organization will be able to identify bias rather than trusting that everything AI produces is fact. From there, your company can help mitigate its impact.

Q: AI and machine learning initiatives have been underway for several years now. What lessons have enterprises been learning in terms of most productive adoption and deployment?  

Boezeman: AI is not a panacea that solves everything. I have seen many attempts to throw AI at any use case, regardless of whether AI is the right fit, all to enable a marketing story without providing real value. The trick to successfully deploying an AI solution is a combination of the quality of the data and the quality of the models and algorithms driving the decisioning. Simply put, if you put junk in, you’ll get junk out. The most successful deployments have a crisp use case and well-defined data to operate on.
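A “junk in, junk out” check can be as simple as profiling the training data before any model work starts. Below is a minimal Python sketch, assuming a pandas DataFrame of training records; the column names and the fields inspected are hypothetical.

```python
import pandas as pd

def data_quality_report(df: pd.DataFrame, required_columns, group_column=None):
    """Surface obvious 'junk in' problems before any model training starts."""
    report = {
        "missing_columns": [c for c in required_columns if c not in df.columns],
        "null_fraction": df.isna().mean().round(3).to_dict(),
        "duplicate_rows": int(df.duplicated().sum()),
    }
    if group_column and group_column in df.columns:
        # Tiny or lopsided groups in the training data are a common source of downstream bias.
        report["group_counts"] = df[group_column].value_counts().to_dict()
    return report

# Hypothetical usage:
# report = data_quality_report(training_df, ["price", "quantity", "region"], group_column="region")
```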

Q: What areas of the organization are seeing the most success with AI?  

Boezeman: There are many different stages in AI, but mostly they can be boiled down to three basic states: discover, recommend, and automatic action. Right now, the place I see it used most is the discover space: insights, alerts, and notifications. This is where the system tells you that something abnormal or outside of known patterns is going on, or that something is trending in a direction you should care about. People trust this kind of interaction and model, and can easily corroborate it if they want proof.

Marketers leverage AI in the discover space to determine how successful their campaigns are, for example. Another example is a merchandiser deploying an AI-powered solution to detect fraud or issues in the customer journey.
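A discover-stage alert of this kind can be approximated with a very simple anomaly check. The sketch below flags a campaign metric that falls outside the recent pattern using a z-score; the metric, the sample values, and the threshold are assumptions for illustration, not how any particular product works.

```python
import statistics

def flag_anomalies(values, threshold=3.0):
    """Return indexes of points more than `threshold` standard deviations from the mean."""
    if len(values) < 2:
        return []
    mean = statistics.fmean(values)
    stdev = statistics.stdev(values)
    if stdev == 0:
        return []
    return [i for i, v in enumerate(values) if abs(v - mean) / stdev > threshold]

# Hypothetical daily click-through rates; the last value should trip the alert.
daily_ctr = [0.031, 0.029, 0.030, 0.032, 0.028, 0.030, 0.090]
print(flag_anomalies(daily_ctr, threshold=2.0))  # -> [6]
```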

Where I still see a lot of hesitation is in the recommend and action states. I used to own a product that calculated the best price for a product and the order in which to display products in a web storefront, based on many data points, from quantity to profitability to time to markdown to storage space used in the warehouse. Even this product could, if you turned it on, automatically take action. What we found is that many merchandisers liked seeing the recommendation, but they personally wanted to take the action; they also wanted to see multiple options; and finally, they wanted to see the decision tree behind why the system recommended an option. When we first launched it, we didn’t have the “Why did the system recommend XYZ?” functionality. Until we gave merchandisers the ability to see what a recommendation was based on, they didn’t trust it.
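The “show me why” requirement can be captured by making every recommendation carry its contributing factors. The Python sketch below is a toy illustration of that shape; the factor names, weights, and pricing logic are invented for illustration and are not the actual product’s model.

```python
from dataclasses import dataclass

@dataclass
class PriceRecommendation:
    price: float
    factors: dict  # factor name -> contribution, so a merchandiser can see the "why"

def recommend_price(base_price, inventory_units, days_to_markdown):
    """Toy pricing: each (hypothetical) factor adjusts the price and is recorded for explanation."""
    factors = {
        "base_price": base_price,
        "inventory_pressure": -0.02 * max(inventory_units - 100, 0),        # overstock pushes price down
        "markdown_urgency": -0.05 * base_price * (days_to_markdown < 14),    # near markdown, discount
    }
    return PriceRecommendation(price=round(sum(factors.values()), 2), factors=factors)

rec = recommend_price(base_price=49.99, inventory_units=250, days_to_markdown=10)
print(rec.price)    # recommended price
print(rec.factors)  # the "Why did the system recommend XYZ?" breakdown
```

Exposing the factors alongside the number is the design choice that, per the anecdote above, turned an untrusted black box into a recommendation merchandisers were willing to act on.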

Q: What technologies or technology approaches are making the most difference?  

Boezeman: There are many companies operating in this realm that are inventing new, impactful technologies every day; Spark and Amazon SageMaker are two examples. The technologies making the most difference, though, are those that enable you to identify bias in your AI models. When AI algorithms are biased, they can lead to unfair and incorrect results. By being able to see the bias in the system, you can diagnose and mitigate the situation. As the industry continues to grow, this will be a key baseline capability that each technology stack will need to support.
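Since Spark is mentioned above, here is a minimal PySpark sketch of what such a bias check might look like over a table of scored predictions: per-group positive rates and error rates. The input path and column names (customer_segment, prediction, label) are hypothetical, and this is a generic audit pattern rather than the feature set of any named product.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bias-audit").getOrCreate()

# Hypothetical table of scored predictions with a segment attribute to audit against.
df = spark.read.parquet("s3://example-bucket/scored_predictions/")

per_group = (
    df.groupBy("customer_segment")
      .agg(
          F.avg(F.col("prediction").cast("double")).alias("positive_rate"),
          F.avg((F.col("prediction") != F.col("label")).cast("double")).alias("error_rate"),
          F.count("*").alias("n"),
      )
)

# Large gaps in positive_rate or error_rate across segments are the kind of skew worth investigating.
per_group.show()
```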
