Is Responsible AI Possible?
With Roy Austin, Eleven Canterbury Consultant and Director of the Howard Law Responsible AI Initiative, and Dan Martin, Eleven Canterbury Program & Relationship Manager
Summary
What is responsible AI? How can we rely on AI for answers, analysis, and information without being misled by bias and misinformation? These are the key questions AI specialist Roy Austin attempts to answer.
Roy is the Director of the Howard Law Responsible AI Initiative, is the former VP for Civil Rights and Deputy General Counsel at Meta, and was the Deputy Assistant to the President for Urban Affairs, Justice, and Opportunity in the Obama administration. He joins host Dan Martin to discuss what responsible AI requires as these systems move into daily use across work, education, and everyday decision-making.
Roy breaks down how AI is already influencing real-world behavior, from how children form beliefs to how executives analyze contracts and all of us consume information. He explains why AI outputs are never neutral, how underlying data and engineering choices shape results, and what that means for anyone relying on these tools for information.
He underscores the critical importance of always verifying AI outputs and citations, comparing responses across models, and identifying failure signals such as confident but unsupported answers, bias, and manipulated content.
Across systemic risks, including political influence, propaganda, workforce disruption, deepfakes, and fraud, Roy makes one point clear: AI does not determine what is true. The responsibility to evaluate accuracy and separate truth from fiction remains with the user.
Transcript
Dan Martin: Artificial intelligence. What an interesting subject. My son is a college professor. He said he is convinced about the artificial part, but not the intelligence part!
It’s moving into the younger generation. My granddaughter, who’s seven, asked ChatGPT whether Santa Claus was real or not. It was an interesting response from ChatGPT, not along the lines of yes, Virginia, there is a Santa Claus. But it was clever.
What we’re seeing is a whole generation growing up in a world that’s really quite different. I think they may be in a better position to know how to use artificial intelligence, or these large language models, than perhaps the older generation. We’re lucky we have Roy Austin with us.
He’s the former head of Civil Rights and Deputy General Counsel at Meta, was a Special Assistant to President Obama on the White House Council on Urban Affairs, and had a long career in the Justice Department. Now, Roy, you’re working on responsible AI. What is responsible AI?
Roy Austin: Dan, thank you so much for having me for this conversation. And I love the story about your granddaughter and Santa. Maybe we’ll get a chance to talk a little bit about that, and you can tell me whether Santa’s real or not.
Responsible AI, the way we look at it, is really just AI for humanity. That is, how do we actually use AI in ways that are beneficial to most people, not to the corporate titans and the tech bros, as they’re sometimes called? How do we use AI to help with hunger and climate and race relations and things like that? How do we improve everybody’s lives? At the end of the day, that is what responsible AI is.
Dan Martin: So how do you do that? There are things that I use AI for. I had a rental agreement for a house in Zurich, which was written in German. Hidden in German on page 14 was “you’ll return the apartment to the condition it was in when the people before you rented it.” I can see using AI to do things like that, looking at legal contracts, finding some analysis.
People get the impression that it really is smart, but whether it’s right or not really depends on what question you ask and which AI you’re asking. I looked at Grok, which is Elon’s AI, and asked about the Cybertruck, and I didn’t get exactly the same response I got when I asked ChatGPT. Are there ways for people to know how to use it? Do people need to be trained, or do we need regulations with AI?
Roy Austin: This is the thing with AI that people have to realize: yes, it is smarter than human beings in that it has a body of knowledge and a memory that we could never have. It has everything that’s on the internet, ingested in different ways by different AI systems.
But what people have to understand is that the responses are not magic. The responses are actually built by human beings, by engineers and their biases. So, you ask it a difficult question, let’s say a question about climate change, and the answer it gives is really going to depend on how it was engineered, how it was fine-tuned, and what sources it used to come to its answer. Because even though it has, let’s say, 90% of the potential sources on climate in the world, what it doesn’t have is the ability to discern one from the other and say, okay, I’ll use this one, I will not use that one, unless the engineers tell it to take the source that comes from a science journal as opposed to one that comes from an opinion piece. But that’s completely up to the engineers to do.

So that is the struggle people have. Sometimes they go to one, they go to ChatGPT, and say, oh, Chat’s answer is this. And then that’s what sticks in their head. But they don’t think to go to Claude, they don’t think to go to Meta AI, they don’t think to go to Copilot, compare those answers, and then use their human reasoning to decide, “Okay, this is actually the one that is best.”
Dan Martin: So we really have to train people.
Roy Austin: Yes. That’s the most important thing. And that’s why, in some ways, I think we’re okay, that AI’s not going to take over everything because AI is just going to reflect the judgment of its builder.
And human beings are going to have to step in there and decide, well, which direction do we want to go? Now, we just have to make sure people realize that the answer they get is not necessarily the truth. And sometimes it’s going to be 100% a lie, because some builders want to use it to socially engineer people in their political direction. So people have to recognize that. It’s like asking, do you watch Fox News or do you watch MS NOW? That’s the same thing we’re going to see in AI, and that we’re already seeing in AI.
Dan Martin: It’s sort of propaganda on steroids.
Roy Austin: It’s propaganda on steroids, and don’t forget the sycophancy in the whole thing. And this is something that some people notice, and some people don’t, and this is where we get people having relationships with it. You put something into AI, let’s say you write an email, and for most of them, the first thing it’s going to say is, oh, Dan, that’s a brilliant email that you just wrote, but maybe you want to make these changes.
For a lot of people, hearing that it’s a brilliant email, it’s like, oh my god, it likes my writing, it likes me. And that’s actually a real problem. The way AI companies make more money is by getting more subscribers, more users. And so what do they do to encourage more people to use it? Well, they make you feel very special. That’s problematic when people start forming relationships with it. And it’s problematic just in general because, you know, there are stupid questions in the world, Dan, and sometimes your AI should tell you, you know what? That was really a dumb question.
Dan Martin: The first thing ChatGPT told my granddaughter was, what a wonderful question.
Roy Austin: Exactly. Exactly. And now she feels so good about herself that she asked that question.
Dan Martin: I’d be interested in your view on this. When I talk to my grandchildren about ChatGPT, I invariably hear that it makes some really stupid mistakes and says some really stupid things. So, I think they’re a bit more discerning than some of the adults.
Roy Austin: I think some are, and your granddaughter, I would expect, is precocious, and she gets it and probably surrounds herself with people like that. But, just like we see in the real world, people are of varying sensibilities, of varying educations, of varying willingness to accept what somebody says without really interrogating it.
And so we just have to be really careful. And look, all of these models are going to universities, are going to schools, and saying, Hey, we’ll give it to you for free, so that it becomes the go-to for those people. And that’s problematic when they’re not always truth tellers.
The other thing that we should think about, though, is that there’s also a homogenization. Because it can only kind of give you one set of ideas. So, instead of a broad look at something, sometimes they point everybody in the same direction. So, one of the civil rights issues is, if I tell my AI my race, how does that impact the response it gives?
And we’ve seen this where, if I tell it that I’m black and I ask for some music that I will like, well, the AI’s going to say, oh, he’s black, so he must like rap music. Or, he’s of a certain age, so he must like R&B. So, in some ways the AI is working from stereotypes about people. Take your race, your religion, your ethnicity: the AI has to evaluate that in some way, and what it ends up doing is stereotyping people. It makes it seem like, oh, you’re a woman, so you must like this. You’re a man, so you must like this. And so you end up with a certain amount of homogenization, which is also, I think, problematic, and something that, as a civil rights lawyer, I’m concerned about.
Dan Martin: That’s something I hadn’t thought of, but that certainly makes sense. The other thing you said I hadn’t thought of: I realize these models get their information by scouring the web. This may be a shock to some people, but there are factually, objectively wrong things on the internet. If you’re scouring that, and you don’t control what’s right and what’s wrong, what happened and what didn’t happen, and it’s left to the individual programmers to decide, you almost have to let people know which side this is coming from, because we run the risk of being unable to determine what factually and objectively happened.
Roy Austin: That’s the importance of transparency, and that’s the importance of true citations. And you want people to understand what’s underneath the hood. Now, the problem with AI is that there’s a tiny number of people who even have a basic understanding of what’s under the hood of an AI system. So just by making it generally transparent, most people are going to throw up their hands. And it’s like reading the terms of service of your blender. You’re not reading that. You just start your blender.
And so, what does transparency mean? And then who’s going to evaluate the transparency? On the engineering side, we’ve seen things like Grok, where Elon Musk has leaned the algorithm toward his material as opposed to other people’s material. Now, some people would say, well, I don’t want that, so I choose not to use it. Some people don’t get that, and they don’t understand that that’s what’s happening. They’re like, oh, Elon said this. Oh, Elon said that. He’s the most brilliant person in the world because he keeps saying all this. It’s like, no, no, no. You’re on his systems, Grok or X-slash-Twitter, and they’re pointing you in that direction. That’s problematic.
The other thing, just historically, the winners write the story, generally. So, when you are looking across the internet for information, you’re getting the story based on how the winner has decided to portray it. You’re not getting an objective truth to the extent that there is an objective truth to many of these things.
Dan Martin: Roy, we’re in version, what, 1.0 or 2.0? This thing has flown! ChatGPT came into being only about three years ago. The fact that it is now being used by, I think the number I saw was, something like 800 million users. That’s incredible, and these things are advancing at a rapid pace.
Roy Austin: I don’t know where we’re going to be in a month, in six months, with respect to AI because it is the fastest-moving revolution in technology that we have seen in humankind, as far as I’m concerned.
Dan Martin: It’s exciting times.
Roy Austin: It’s exciting times, it’s scary times. It’s going to take some jobs. We have to figure out what that means.
It’s going to pass along some bad information. We’ve actually seen an election, I can’t remember the country, where the leadership basically used AI to deceive people. Deepfakes are a real problem on the fraud side.
I can pretend to be your granddaughter. If I get a sample of her voice, I can call you, and I can get you to do things because you think you’re talking to her. That’s a real scary time. So, for all the good that some people want to do with it, an equal and opposite number of people are trying to do bad with it, and it’s not clear who’s going to win that battle.
Dan Martin: You’re working on responsible AI, I think, letting people become a bit more aware of what the sources are and how it works. That’s one way to attack the situation. But it’s something we’ll certainly have to keep an eye on.
I’m enjoying watching how the grandchildren respond to it and learning from them. Thank you very much for the really exciting conversation. I’ve really enjoyed it, Roy.
Roy Austin: Dan, thank you for the opportunity. Always a pleasure speaking with you.