Conversations With the Experts: AI at the Table: Partner or Threat?
With Lauren Cantor, Eleven Canterbury Consultant, AI Specialist and Dan Martin, Eleven Canterbury Program & Relationship Manager
Summary
In this episode of Conversations with the Experts, host Dan Martin is joined by the endlessly curious and insightful Lauren Cantor, an AI specialist whose career path has taken her from the cosmos of astrophysics to the intensity of Wall Street, the world of fine arts, and now the fast-evolving frontier of AI consulting.
Together, they explore how artificial intelligence is reshaping our world, from the boardroom to the classroom. Lauren offers a refreshingly grounded and optimistic take on AI’s potential, challenging the doom-and-gloom narratives with a thoughtful perspective on how AI can enhance, not replace, human capability.
They dig into big questions: What does it mean when AI development is largely led by the private sector? How will white-collar jobs evolve? Can AI be a force for good in education? Through it all, Lauren champions emotional intelligence, creativity, and critical thinking, timeless human strengths she believes will remain irreplaceable in an AI-driven future.
If you’re curious about where we’re headed with AI and how to thrive in that future, this is a conversation you won’t want to miss.
Transcript
Dan Martin: Hello, and welcome to another edition of Conversations with the Experts. My guest today is Lauren Cantor, a real Renaissance person. I was looking at your CV, Lauren. You have an undergraduate degree in astrophysics, a career on Wall Street, both in trading and strategy, and you’re working in fine arts and design. You’re teaching college-level classes and consulting on Artificial Intelligence. I get tired reading about all of that, so I’m really looking forward to this discussion! I thought that I might ask you about professional football because I didn’t see that on your CV!
What we want to talk about today is Artificial Intelligence, which is something you’ve been working on. I’ve heard a lot about Artificial Intelligence and the impact that it will have on society, ranging from the extinction of human beings, which seems a bit over the top, to its being another tool that we have to learn how to use. So, the question is, AI, is this an invention, like the washing machine, or is it more like the internet or vaccines that really transcend and transform things?
Lauren Cantor: It’s a really interesting question. I think AI is unique in that it’s really one of the most transformative technologies that hasn’t been developed with the help of the government. If you think of vaccines, if you think of GPS, if you think of the internet, most of those were developed by, or at least had a big part of government funding to help in that development.
If you think about the generative AI era that we’re living in now, it’s all being funded by the private sector. So, there’s a little bit of concern in the sense that AI is being developed without regulations. It’s being developed for profit motives, and not necessarily for humanity and the human good.
And then there is the second wave; AI is being developed and replacing white collar jobs. This is kind of the first time we’re seeing that technology and an industrial revolution are going after cognitive jobs rather than physical jobs. So, while I don’t think AI is going to be our lizard overlords and take over the world, I have a very optimistic view towards AI. I view it as a collaborator, as kind of my intern.
I do think it is definitely changing the way we work, and it will change the jobs that we have, but I don’t think it’s necessarily that we should be afraid of Skynet, afraid of the Terminator taking over our lives.
Dan Martin: I take your point. I hadn’t thought about not having the government involved in creating something like this. Sometimes it’s hard to put regulations and security in after it’s already created. I think back to Microsoft developing the Windows operating system, and that was an era when everything was going to be open and free and people sharing, and then afterwards, trying to secure it became a very difficult task after you had a zillion people already using it.
We’ve touched a bit on using AI and disrupting white collar jobs for the first time, so it’s not just somebody on the assembly line. I even talked with my daughter, who’s a lawyer. There are those kinds of jobs, too. My son teaches at the university and said that people who, maybe five years ago, were going into computer science, are shifting out of that because there’s a thought that a lot of the coding jobs may disappear as well. Are there safe havens or places where you should be spending your time?
Lauren Cantor: I do think AI is transformative, and it should be used as a collaborator. It’s definitely helped me become more productive. My clients use it for more efficiency, for speed, to do mundane tasks that used to take a lot of time.
I think the jobs that AI will never be able to replace are those that require EQ, emotional intelligence. AI is never really going to be able to replace social workers or even teachers, anything that really needs human interaction. We found, right after Covid, after the pandemic, that we all really craved human interaction, and doing things in real life made a big difference. I don’t think AI’s ever going to be able to replace that.
So, I think those types of jobs will never get replaced. I also think that even though computer coding jobs are on the way down, it’s necessary to understand how to code because the more you know how to code, the better you will be at steering AI in the correct way.
Dan Martin: Yes, steering is a word I’ve heard in the use of AI.
I talked with someone today about supply chain, and she was saying, you can get really good advice from AI if you’re really careful about understanding the question, understanding your process, you can get analysis done. But if you take it on a superficial level, you get superficial results.
You also talked about teaching jobs. My son teaches at a university, and he’s noticed that, again, you have some people using AI but not really understanding what they’re doing. He teaches economics, and they’ve learned how to handle the problem sets, the math problem sets, but then you get them in a verbal discussion, and they can’t quite remember whether the supply and demand curve goes this way or that way. So, it seems like for teaching, right now, it’s fairly easy to see when AI has written an essay question. That may get harder. What do you do? Can you tell an AI-generated essay, or are there other things you can do to make sure people know what they’re talking about?
Lauren Cantor: I’m a big proponent of AI in class, depending on the age of the user. I would never use it in elementary school or junior high. Definitely for high school and college. I want my students to know how to use AI responsibly and ethically. And depending also on the university setting, some universities believe using AI is cheating. I don’t. I think students need to know how to use it because they’re going to have to use it in their schools.
Dan Martin: I remember at the university when I was there, they were worried about people using calculators; we had to keep the engineers on slide rules.
Lauren Cantor: Right, same thing. I wasn’t allowed to use a calculator in the olden days either. And spell check and cursive, those things are gone as well. But, Zoom, I think, also accelerated a change to education, right? Students have a much shorter attention span, and teaching online has changed the way we teach as well.
One thing that I’ve found with my students is there’s a trick called prompt injection where you can add white text into your assignments and ask it to add, you know, $10,000 to an answer. So, the students will get the answer wrong automatically if they don’t know that there’s white text in there. So that’s one way to trick them on a quiz to see if they’re cheating.
Also, I try not to have too many take-home assignments, but when I do, the first thing we do is we talk about it in class. It’s all interaction, it’s all teamwork, it’s all collaboration, it’s all trying to teach critical thinking skills rather than getting them to study for the test.
Anthropic, the makers of Claude, just did a study actually looking at how students use Claude, and it was pretty interesting. They found that Claude is actually really good at writing code, and they found, as we would think, the people who use Claude the most are STEM majors, but they didn’t use Claude as much for school, or that’s what they thought. They don’t know for sure. So, people are using AI, they just can’t tell how they’re using it; whether they’re cheating on their homework, or they’re using it to help them study for a test.
Dan Martin: It sounds like a really sophisticated tool that people really need to learn how to use in the best way. Excluding it from the classroom doesn’t strike me as smart.
Lauren Cantor: Definitely, depending on the age of the students, I think it is a disservice. Like you said, I think it is the calculator, the Excel spreadsheet of the now or the future. It’s a tool that students are going to have to use. There was even an article from the CEO of Shopify, where he talked about actually asking people in their job interviews, What can you do with AI? So, it’s a requirement for job interviews these days.
Dan Martin: Interesting.
As an aside, I wouldn’t underestimate elementary school students. My grandson in kindergarten was using Zoom during COVID, and he came to me, saw me doing Zoom calls, and he told me, ‘You know if you change your name to ‘connecting to audio’, the teacher will think that your computer’s broken and leave you alone?’ It’s kind of amazing what they can come up with!
This is a really interesting topic, there’s a lot more to say. I’m sure we’ll have another discussion. I really thank you for your time and for enlightening me, Lauren.
Lauren Cantor: Thanks so much for having me, Dan. Really appreciate it.