Diversity is a strength, especially when it comes to artificial intelligence design. That’s why people from diverse professional backgrounds, from law to the creative arts, need to be involved in AI design and development. However, many teams attempting to collaborate on AI run into a major stumbling block: confusion.
James Landay, a professor of computer science at Stanford University, advocated for participatory AI in a recent podcast, stating that the holistic, up-front design of AI systems is now the most important part of AI implementations. AI will not be successful without human-centered values, Landay urged in his discussion with Lareina Yee, senior partner at McKinsey.
Getting to human-centered AI is not just about the applications, Landay, cofounder and codirector of the Stanford Institute for Human-Centered Artificial Intelligence (HAI), explained. “It’s also about how we create and design those AI systems, who we involve in that development, and how we foster a process that’s more human centered as we create and evaluate AI systems.”
The challenge with AI is its unpredictability. It’s a different kind of technology from PCs, for example, “and it is not as reliable in some ways,” Landay said. That’s because AI systems are probabilistic, delivering different results depending on the data they are fed, rather than deterministic, “where the same input always gives you the same output. We need to think about designing AI systems differently.”
Feed data into AI’s probabilistic models, “and receive different results, depending on how that data’s processed in that huge neural network,” he said. Probabilistic models also generate hallucinations, or statements that aren’t true. “We’re not even sure why they occur, and this is actually one of the bigger issues concerning just who is building these models.”
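To make the distinction concrete, here is a minimal, purely illustrative Python sketch (not from the podcast, and the function names are hypothetical): a deterministic function returns the same output for the same input, while a probabilistic one samples from a distribution, the way a language model samples its next token, so repeated calls on identical input can disagree.

```python
import random

# Deterministic: the same input always yields the same output.
def deterministic(x: int) -> int:
    return x * 2

# Probabilistic: the same input can yield different outputs, because the
# result is sampled from a distribution (a stand-in for sampling from a
# neural network's output probabilities).
def probabilistic(prompt: str) -> str:
    completions = ["answer A", "answer B", "answer C"]
    weights = [0.6, 0.3, 0.1]  # illustrative, model-assigned probabilities
    return random.choices(completions, weights=weights, k=1)[0]

print(deterministic(21))       # always 42
print(probabilistic("hello"))  # may differ from run to run
```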
Therefore, AI systems are more difficult to manage when they go awry. “That’s why we need to think about designing AI systems differently, since they’re going to become ubiquitous throughout our everyday lives, from health to education to government,” said Landay.
“Right now, we mainly have sets of engineers, like responsible AI groups or safety teams, who are meant to check products before they’re released. Unfortunately, there’s a lot of incentive to just push something out the door. And these teams don’t quite have the social capital to stop it.”
Instead, diverse expertise needs to be embedded in the design and development process. “We need teams with these different disciplines—the social scientists, the humanists, the ethicists—because then problems will be found earlier. And as team members, those people will have the social capital to make that change happen.”
One of the challenges with an open, multidisciplinary approach to AI is that it puts many chefs in a crowded kitchen. “People in different fields speak different languages, so the same words can mean different things to different people,” Landay cautioned. “For example, I’m working on a project with an English professor and someone from the medical school. And what they call a ‘pilot study’ is not what I would call a ‘pilot study.’”
At the same time, such confusion may not be a bad thing; it may lead “to new ideas and new ways of looking at things,” he explained. “For example, we’ve had people working on large language models who are looking at natural language processing. And then they run into an ethicist with a background in political science who questions some of the things they’re doing or how they’re releasing their software without particular safeguards.”
AI is reshaping our businesses, our workplaces, and our society at large. It’s urgent that this process be a collaborative one.