
teams, trust, and AI

Of the many timely findings in our 2024 research study, The Adaptive Organization: A Benchmark of Changing Approaches to Project Management, one stands out to me as key for the moment we find ourselves in: 

The top-scoring capability for team leaders is the ability to inspire trust and promote collaboration.

This echoes a finding from our 2023 study (Project Management Skills for Value Delivery), which indicated that “carrying out work with integrity and trustworthiness” was a top skill for project leaders in high-performing organizations.

It seems an opportune time to focus on trust and trustworthiness in the age of AI. No sooner had we all gotten excited about the possibilities of AI than the downsides began to emerge: organizations and newsmakers began backpedaling, and investment in AI faltered. Today, an article on Bloomberg recalls the dot-com bubble and its effect on the economy, asking whether AI is a bubble and where that might lead. What has happened in the technical and social realm feels like a global version of “Cool! … Wait, what?”

Today’s news that Google’s AI Overview is delivering howling inaccuracies and falsehoods is just the latest reminder that these applications, developed and released in a whirl of giddy excitement, have unintended effects. This is far from surprising. In a medical context, a new device or drug would be exhaustively tested for side effects before being released; since AI “only” affects the body politic, we are less stringent. This Wild West approach to changing how we interact with technology, each other, and the world wasn’t chosen or agreed upon; we are merely stumbling headlong into it. And of all the cautionary headlines I’ve read this past month, the one revealing the dismissal of OpenAI’s entire long-term risk assessment team concerns me more than any other. It’s a stunning admission that we don’t know what may happen, and that rather than try to find out, the developers of ChatGPT would rather bury their heads in the sand. I suppose they cannot be sued for negative outcomes if they literally did not see them coming.

So—back to trust. Do we trust AI? Perhaps... with some things. And yet, the extension of trust has to be based on our assessment of whether or not a person, technology, or institution is trustworthy. Wired Magazine recently opined on this topic, concluding that having AI integrated into everything we do with technology “will require a rethinking of much of our assumptions about governance and economy.” In a society that is already short on trust, that is a tall order. Perhaps the best place to start is by improving our own personal trustworthiness. It is, after all, a key attribute in successful project management leaders. Only the trustworthy can design and manage systems that rely on trust.

Are you ready?

About the Author:

Jeannette Cabanis-Brewin is Editor-in-Chief of PM Solutions Research, the content generation center of PM Solutions, Inc., a project management consulting and training firm based in Chadds Ford, PA. A frequent presenter on project management research topics, she is the author or editor of over 20 project management books, including two that have received the PMI Literature Award. In 2007, she received a Distinguished Contribution Award from PMI. Jcabanis-brewin@pmsolutions.com