Should charities embrace AI on their trustee boards?

I recently finished reading Homo Deus, by Yuval Noah Harari. In it is a chapter discussing a venture capital firm that appointed an AI, called Vital, as an observer to its board. Vital helped save the firm from the brink of bankruptcy.

This isn’t news, as both the book and the media coverage are a couple of years old. But it has captured my interest, and got me thinking about the role of AI and humans on charity boards in the future.

Vital uses parameters and algorithms to help analyse risk and inform decisions about investments. It has been able to set aside feelings, media trends and hype and make more informed decisions. Based on the success of the AI, Vital 2.0 is being developed.

“We treat it as a member of our board with observer status. As a board, we agreed that we would not make positive investment decisions without corroboration by Vital,” says Dmitry Kaminskiy, managing partner of Deep Knowledge Ventures.

Given the nature of a venture capital business, it’s easy to see how an AI member of a board can analyse risk. I can imagine how investment outcomes could be better if you set aside human intuition and emotions.

AI for charities – how could it work?

In my experience, some boards would struggle with a conference call, let alone having an integrated AI presence. But as technology advances, becomes more available and affordable, it could happen.

How might my role as a trustee change if such a tool were available to my board?

I can see value in having instant access to an AI that knows charity governance, finance and law inside out. It could have access to every ruling ever made by courts, and every bit of guidance ever set by the Charity Commission. It could access every set of minutes, accounts and strategies of every other charity. I have no doubt that such a tool would add value to me and my board, but could it engage in debate and decision making?

How could an algorithm interpret the complex, multifaceted human issues that we deal with on charity boards?

Would having such a presence affect our ability to be subjective? Would it push back against decisions that are at times less commercial? What about making decisions that are more socially focused, or dare I say it political, and in line with our charitable objectives? Can you develop artificial intelligence that has values and understands social motives?

There are more questions than answers available at the moment. In the future, humans will need to make complex and ethical decisions about the growing importance of (and reliance on) technology.

Where should we draw the line?

About the author:
Bill Yuksel is Business Manager at Peridot Partners. A trustee himself, he supports progressive students’ unions to recruit brilliant leaders.