Humans in the loop: it takes people to ensure artificial intelligence success


When it comes to artificial intelligence, don’t try to go it alone. IT departments, no matter how skilled and ready their developers and data scientists may be, can only go so far past proofs of concept. It takes people, from all corners of the enterprise and working collaboratively, to deliver AI success.


Photo: Joe McKendrick

In discussing lessons learned about AI in recent years, industry experts point to the need to get people from across the enterprise on board. “A copious amount of training data and elastic compute power are not the cornerstones for successful AI implementations,” says Sreedhar Bhagavatheeswaran, global head of Mindtree Consulting.

The real cornerstone of AI success is people: not only AI skills, but involvement from all disciplines, from marketing to supply chain management. In recent years, and especially over the past year as the need for automated or unattended processes accelerated, “enterprises learned that they must get stakeholder buy-in, with a true champion for AI within the organization’s leadership team,” says Dan Simion, VP of AI and analytics at Capgemini Americas.

A concerted AI development and deployment effort also needs “strong governance, internal marketing within the company, and proper training to fuel further adoption of the AI initiatives across the business’ functional areas,” he adds. The key is being able to showcase the valuable insights being generated by these models.

In efforts to make AI pervasive, “enterprises are now conscious of critical factors such as identifying the right journeys and use cases where AI intervention can make a business impact, operationalizing AI by establishing AI operations and governance mechanisms, and blending the right proportion of data engineering and AI talent,” says Bhagavatheeswaran.

The catch, of course, is that many of these efforts get undermined by organizational politics, or simple inertia. AI seems glamorous and promising, but acceptance and adoption take time. “Companies should plan for the time and effort needed to conduct training sessions, and continuously reinforce the use and benefits of the AI system over the traditional methods,” advises Nitin Aggarwal, vice president of data analytics at The Smart Cube. “Sharing and celebrating small and frequent wins is a proven catalyst.”

AI also needs a friendly face, rather than the perception of robots, software or otherwise, taking the reins of the company. “Make the end user interface business-friendly and intuitive,” Aggarwal suggests. “The lower the burden on the end user to understand the insights in terms of ‘so what,’ the higher the chances of them actually using the system.” If possible, he advises having an MLOps team on hand “to ensure the deployed solutions continue to work as expected.”

To date, the areas of the business having the most success with AI “are those with direct connections to customer interactions — such as marketing and sales,” says Simion. “These areas are constantly looking to drive revenue, and are more open to innovative new methods and tactics to improve efficiencies, which AI offers.” Aggarwal agrees, noting that areas seeing the most initial success with AI include “marketing mix optimization, pricing and promotions ROI improvement, demand forecasting, CRM and hyper-personalization.” Lately, however, AI’s power has also been turned on areas such as supply chain risk management, he adds.

AI is more than technology — it’s new ways of thinking about problems and opportunities. Everyone needs to have access to this powerful new tool, Simion urges. “Make sure everyone across the enterprise is using the same technology stack, so each functional area can have access to the same lessons and insights. Consistency of the technology and the value it can bring is what makes the most difference.”

AI adoption also hinges on perceptions that it is fair and accurate, making the fight against AI bias another challenge proponents need to address head-on. Start with the data, Aggarwal states. “As AI algorithms learn from data, make a conscious effort to collect and feed richer data that is corrected for bias and fairly represents all classes,” he advises.
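As a rough illustration of the kind of data check Aggarwal describes, the sketch below reports each group’s share of a training set and its outcome rate so that underrepresented or skewed classes can be flagged before training. The column names (“group”, “outcome”) and the 10% threshold are hypothetical assumptions for this example, not anything prescribed by the people quoted here.

# Minimal sketch of a pre-training representation check, assuming a pandas
# DataFrame with a hypothetical sensitive-attribute column "group" and a
# binary label column "outcome".
import pandas as pd

def representation_report(df: pd.DataFrame, group_col: str = "group",
                          label_col: str = "outcome") -> pd.DataFrame:
    """Compare each group's share of the data and its positive-outcome rate."""
    return df.groupby(group_col).agg(
        share=(label_col, lambda s: len(s) / len(df)),   # portion of the data each group represents
        positive_rate=(label_col, "mean"),               # outcome rate per group, a rough bias signal
    )

# Example usage: flag any group making up less than 10% of the training data.
# df = pd.read_csv("training_data.csv")          # hypothetical file
# report = representation_report(df)
# underrepresented = report[report["share"] < 0.10]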

In most cases, “when you deploy AI models into production at scale, you have automatic tools to monitor the results in real-time,” says Simion.  “When the AI models are outside of their pre-set boundaries and limits, human intervention is necessary. This is done to ensure AI is performing as expected to drive efficiencies for the business, and it also is done to ensure any issues with AI bias or trust are caught and corrected.”    
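What such automated boundary monitoring might look like in practice is sketched below; the metric names and limits are illustrative assumptions, not a description of Capgemini’s actual tooling, and the escalation hook is hypothetical.

# Minimal sketch of checking production model metrics against pre-set
# boundaries and escalating breaches to human reviewers. All thresholds
# and metric names here are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class Boundary:
    metric: str
    lower: float
    upper: float

BOUNDARIES = [
    Boundary("approval_rate", 0.40, 0.60),   # expected share of positive predictions
    Boundary("mean_score", 0.30, 0.70),      # expected average model score
]

def check_boundaries(metrics: dict) -> list:
    """Return descriptions of any metrics outside their pre-set limits."""
    breaches = []
    for b in BOUNDARIES:
        value = metrics.get(b.metric)
        if value is not None and not (b.lower <= value <= b.upper):
            breaches.append(f"{b.metric}={value:.3f} outside [{b.lower}, {b.upper}]")
    return breaches

# Example usage: in a real pipeline a breach would page an on-call reviewer.
# alerts = check_boundaries({"approval_rate": 0.72, "mean_score": 0.55})
# if alerts:
#     notify_human_reviewers(alerts)   # hypothetical escalation hook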

It’s critical that humans be kept in the loop, says Aggarwal. “Sometimes human decision making alongside the algorithm is helpful to understand different responses and identify any inherent errors or biases. Human judgement can bring in more awareness, context, understanding and research ability to guide fair decision making. Debiasing should be looked at as an ongoing commitment.”

As part of this, companies may benefit by establishing an “AI governance council that not only reviews the business results influenced by their AI initiatives, but is also responsible for explaining the results of specific use cases when needed,” says Bhagavatheeswaran.

IT leaders and staff need to receive more training and awareness to alleviate AI bias as well. “It also ties into how staff performance is evaluated and how incentives are aligned,” says Aggarwal. “If creating the most accurate AI system is the key result area for a data scientist, chances are you will get a highly accurate system, but one which may not be the most responsible. Similarly, all staff should be trained on where to look for and how to detect biases in AI, and teams who are able to find and recognize flaws should be rewarded.”
