A few weeks ago in Atlanta, GA, PMI’s Global Summit featured many presentations and discussions on how artificial intelligence will shape projects … and project managers … in the near future. Nearly everyone I met had AI on their minds, some with delight, others with trepidation. Overall, the outlook felt upbeat; after all, who wouldn’t want to hand over their boring grunt work to a bot? But the final questioner in the Q&A period after my presentation (still viewable if you attended the event but missed it) voiced the kind of concerns that keep many people, including some of the original developers of AI, awake at night.
Most of the 200 or so who had attended my presentation had left the room by the time she stepped to the mic to ask, “Your presentation was upbeat. But are you worried, too?” I figured the people who stayed to hear this question, and its answer, deserved honesty. And, being concerned themselves, they might very well be the people who could act to curb AI’s worst tendencies in their own organizations.
So, yeah, I’m worried. If Geoffrey Hinton is worried, we should be, too. Given how fast things are progressing, and the many wrong turns we could take in the rush to develop AI, we should approach new tools and their adoption with caution.
I’m not an expert in AI, but after nearly seven decades on the planet, I do consider myself somewhat skilled at being human. Here’s what I would be looking at if I were in a position to develop or implement an AI tool:
- Does the tool solve a problem that actually exists? We have a tendency to invent tools in search of a job and to indulge in technical overkill. Maybe you don’t need AI for some, or even much, of the work you do. Says Andrius Zujus, “Despite the popularity and promise of AI, there is a good chance that the problem you are trying to solve doesn’t require an elaborate AI solution.” A key skill for value delivery, according to our research, is problem solving, which includes knowing how to recognize when something isn’t really a problem.
- Check your biases. Will you be alert to the ways that an AI tool may be skewing results or silencing viewpoints because of the blind spots programmed into it? Such AI biases can have an impact on risk identification and assessment, among other important issues.
- Who’s in charge? AI is not here to do the critical thinking, creative problem-solving, or sensitive handling of complex issues. That’s our job. Anything AI contributes to needs to be thoroughly vetted by knowledgeable people. Handing over the number-crunching or data collection to AI may free people from some forms of drudgery, but we can’t afford to be lazy or thoughtless. “[I]t is the human ability to understand context — which AI tools lack — that necessitates the need for greater human skills,” says a recent HBR article.
I’ll have more about this in future blogs. Next month, come back to hear how we will be researching AI in project management in 2024, and how you can participate.
About the Author:
Jeannette Cabanis-Brewin is Editor-in-Chief of PM Solutions Research, the content generation center of PM Solutions, Inc., a project management consulting and training firm based in Chadds Ford, PA. A frequent presenter on project management research topics, she is the author or editor of over 20 project management books, including two that have received the PMI Literature Award. In 2007, she received a Distinguished Contribution Award from PMI. Jcabanis-brewin@pmsolutions.com