AI improves team and enterprise culture, says MIT and BCG

75 per cent of global executives say AI improves team and enterprise culture, according to research by MIT Sloan Management Review (MIT SMR) and Boston Consulting Group (BCG).

The cross-industry study finds that AI is prompting companies to reassess how they measure effectiveness, leading to new ways of working, new behaviours and new business objectives.

Michael Chu, partner and associate director at BCG, told Digital Nation Australia that AI not only improves decision-making but also lifts team morale.

“Done well, AI improves morale by making each team member much more effective, not by taking away their autonomy, but supplying them with recommendations and the information to make the right decision, while also allowing for the human to apply their own judgement. The AI effectively has your back, supporting you when you need it, but allowing you to make the decision, rather than being a merciless taskmaster,” says Chu.

He uses a transport analogy to describe how AI can help workers pivot and change their decision-making behaviours: while old AI was like a train on tracks, advanced AI works like a GPS, says Chu.

“It shows you how to get there based on what you, the human want, and shows points of interest in case you actually want to stop along the way for food or petrol. Plus, at any time, seeing road conditions that the AI might not know about, you can opt to change the path to something more efficient.”

According to the report, 79 per cent of respondents said AI improved their team morale, while 87 per cent cited improvements in collective learning.

Chu describes collective learning as knowledge sharing between humans and AI.

“Collective learning means that humans make AI smarter by sharing their knowledge, and AI makes humans smarter by sharing the rationale behind all decisions,” says Chu.

In the past, learning was one-way: humans fed the AI data and the AI's knowledge improved, but humans saw little in return.

“New developments in AI, including transparency and semi-supervised learning, have meant that now, the human also can learn. The AI tells you what it is recommending and why. It allows you to test alternative solutions, giving you a safe sandbox to test your own ideas and get feedback from the AI before deploying them in real life. It allows you to feed in your own knowledge which may be unavailable to the AI.”

Cultural acceptance of AI is crucial for businesses to successfully adopt AI tools, but a lack of transparency can lead to mistrust of the tools, says Chu.

“The AI recommended something different from what I, an experienced employee, have been doing all along, but won’t tell you why. It won’t let you test and compare what it’s recommending versus what you would have done in the past.”

“We’ve found that allowing users to test what they did before in development and during operation to see how the results differ drives trust.”

The research reveals that nearly half of the companies surveyed believe mistrust stems from a lack of understanding of AI (49 per cent) or a lack of training (46 per cent).

According to David Kiron, editorial director at MIT SMR and report co-author, “Cultural acceptance of AI begins with trust. Building trust depends on teaching and training workers, explaining the reasons for AI recommendations and providing AI tools that solve problems.”

© Digital Nation