In this episode of Status Check with Spivey, Mike has a conversation with Dr. Nita Farahany—speaker, author, Duke Law Distinguished Professor, and the Founding Director of the Duke Initiative for Science & Society—on the future of artificial intelligence in law school, legal employment, legislation, and our day-to-day lives.
They discuss a wide range of AI-related topics, including how significantly Dr. Farahany expects AI to change our lives (10:43, 23:09), how Dr. Farahany checks for AI-generated content in her classes and her thoughts on AI detector tools (1:26, 5:46), the reason that she bans her students from using AI to help generate papers (plus, the reasons she doesn’t subscribe to) (3:41), predictions for how AI will impact legal employment in both the short term and the long term (7:26), which law students are likely to be successful vs. unsuccessful in an AI future (12:24), whether our technology is spying on us (17:04), cognitive offloading and the idea of “cognitive extinction” (18:59), how AI and technology can take away our free will (24:45) and ways to take it back (27:58), how our cognitive liberties are at stake and what we can do to reclaim them both on an individual level (30:06) and a societal level (35:53), neural implants and sensors and our screenless future (39:27), how to use AI in a way that promotes rather than diminishes critical thinking (44:43), and how much, for what purposes, and with which tools Dr. Farahany uses generative AI herself (47:27).
Among Dr. Farahany’s numerous credentials and accomplishments, she is the author of the 2023 book, The Battle for Your Brain: Defending Your Right to Think Freely in the Age of Neurotechnology; she has given two TED Talks and spoken at numerous high-profile conferences and forums; she served on the Presidential Commission for the Study of Bioethical Issues from 2010 to 2017; she was President of the International Neuroethics Society from 2019 to 2021; and her scholarship includes work on artificial intelligence, cognitive biometric data privacy issues, and other topics in law and technology, ethics, and neuroscience. She is the Robinson O. Everett Distinguished Professor of Law and Professor of Philosophy at Duke University, where she also earned a JD, MA, and PhD in philosophy after completing a bachelor’s degree from Dartmouth and a master’s from Harvard, both in biology.
Dr. Farahany’s Substack—featuring her interactive online AI Law & Policy and Advanced Topics in AI Law & Policy courses—is available here. The app she recommends is BePresent. The Status Check episode Mike mentions, with Dr. Judson Brewer, is here.
You can listen and subscribe to Status Check with Spivey on Apple Podcasts, Spotify, and YouTube. You can read a full transcript of this episode with timestamps below.
Mike: Welcome to Status Check with Spivey, where we talk about life, law, law school, law school admissions, a little bit of everything. We’re in that “everything” category. I got to reconnect just now with someone who was a faculty member at Vanderbilt where I got my start in admissions: Nita Farahany, who’s now a Distinguished Professor at Duke and the Founding Director of the Duke Initiative for Science & Society. At Duke, she’s an expert on the ethical, legal, and societal implications of emerging technologies and their impacts on your and my brains.
We very much stay in the AI space. How does AI impact you? Can Nita—one of the world’s experts on AI and its legal, ethical ramifications—if you submit a paper with AI, how confident, and this was the first question I asked her, is she that she can detect it? How should you use AI going forward for your job search to get a job, and how should you turn it off so we don’t face cognitive decline? What are the benefits of using AI? When should you, when should you not? What’s it going to do with legal hiring? Can law professors detect it? And a heck of a lot more.
I’ll let Nita get into it. This is a little bit above my pay grade. She’s the world’s expert. Without further delay, this was me and Nita Farahany from Duke Law.
Nita, good to see you. It’s been a long time since we were both at Vanderbilt.
Nita: Yeah, it’s been a while. It’s nice to see you.
[1:26] Mike: Things have changed. You haven’t changed much, but society around us has changed. I’m curious, just diving in for what our listeners are interested in—if someone were to submit, in your class, a paper written by AI, how confident are you that you would detect it?
Nita: I guess it depends on how much of it was written by AI. If you mean, like, if the entire thing was generated by AI, do I think I would do a decent job detecting it just by reading it myself? Probably not. But there are echoes that you can pick up that sound like AI at this point. And you do have a kind of gut instinct when you read a lot of AI-generated text where you’re like, “Mmm, that doesn’t feel quite right.”
And then I use multiple AI detectors in my classes with AI tools. And so I run it through three or four different AI detectors just to get a sense of what the AI detectors say. And if I end up with something that, kind of consistently across all of them and my own intuition, seems like it’s written by AI, I usually confront the student.
I had that last semester. I had a student who, it was pretty clear to me that the student was struggling in class, and then I got something that had basically no content in it. It sounded good, but it didn’t actually say anything. And so I was like, “I’m pretty sure this is written by AI.” I ran it through the AI detectors, came back with almost 0% human score across all of them, and I went to the student and said, “Hey, I think this might’ve been written by AI. I’m going to give you a chance to go ahead and rewrite it and resubmit it, because even if it’s not written by AI, it’s kind of missing any substantive content.” And, you know, they kind of readily admitted to the fact that yeah, they had generated it with AI.
Mike: Interesting. So were you concerned about false positives?
[3:10] Nita: Yes. Like, I’m really concerned about false scoring of it. And that is, there is a convergence, somewhat, between how people write—the more you use AI tools, the more your writing starts to sound like AI as well. But there are still some tells of what is AI and what isn’t AI, and most of the time people aren’t generating 100% of a paper by AI. They might have part of it that’s written by AI. And so when you put it through these AI detectors, and they come back with 30%, 40%, 50% scores, what do you do with that?
Now, I’ve said my students, in writing a paper, can’t use AI to generate the writing itself, but they can use it to brainstorm with it. But that no part of what they finally submit should be written by AI. Not a paragraph, not a sentence. And they’re decent, right now, at picking up some kind of uniquely AI words or AI structure. And it turns out, it’s not just the words you use. It’s, like, how an argument is actually structured that allows detecting when AI is used versus when AI isn’t used.
But ultimately, I kind of think this is a cat and mouse game. Like, the reason I say “don’t use AI” is largely because I’ve seen what the evidence is of the impacts of cognitive offloading. And especially in an educational environment, what you’re trying to teach people is critical thinking skills. So the more they offload those critical thinking skills when they’re supposed to be building them, the more it’s a disservice to themselves.
But if we’re moving into a world where people are increasingly going to be co-authoring and co-writing and collaborating with AI as tools, the question really isn’t, “Did AI write it or not?” But are we figuring out a way to still foster critical thinking skills in a way that’s going to be relevant in an AI future?
[4:52] Mike: Yeah, agreed 100%. We don’t want to create a future where we’re all in a matrix and AI is doing everything. I liked your word “co-evolution” that you used on NPR. I’ve noticed, too, that when I read Stephen King—I have a second book due—my writing sounds like Stephen King. I mean, it’s not just AI.
Nita: Yeah. We’re influenced by what we read. But the more AI-generated content is out there, the more the news stories we read are AI-generated, the more everything we read is co-edited with or co-written with AI, the more our voices will change and be shaped by that as well.
I mean, a lot of people are calling it “AI slop.” I think it’s just the evolution of where we’re going, for good or for bad. And using AI detector tools I think is going to be hard over time. When—what is it? Is it the AI? Is it the person? Is it that we’re converging toward a norm? Like, I don’t know.
Mike: This will be a crass analogy. North Korea develops a ballistic missile, so we develop a missile defense system. They develop a hypersonic missile, so we develop—is it going to be that sort of cat and mouse?
Nita: I mean, if that’s where we stay—right? So I mean, like, almost as soon as AI came out, I think it was out of Princeton that the first AI detector tools came out. So then students started to figure out, like, okay, if you put grammatical errors into your papers, then the AI detector scores it differently and says that it’s more human. And now there are “humanizing tools,” right? So you have the thing generated by AI, and then you can go to a different tool and have it humanize it so it sounds more human so that it passes the AI detectors. That’s just not the game we wanted to be in, right? It’s about thoughtfully rethinking what the pedagogical outcomes should be.
So like this semester, for example, I’m teaching a seminar. And, you know, I went into the seminar thinking like, “Okay, there are some students who are going to want to write papers, and they need to do so for their significant writing credit for the semester, but for the rest of them, like, do I need people writing papers, or is there something that’s more pedagogically interesting to get them to engage with the material in a different way?” Like maybe it’s working in groups on an amicus brief, or drafting legislation and interviewing people, and really engaging with a lot of the critical thinking skills, but more human-based skills that we sort of have implicitly assumed we’ll develop over time, but we’re not as explicitly focused on. And, pedagogically, I think we have to evolve rather than evolving tools to detect when people are using AI.
[7:02] Mike: Makes sense. And it kind of morphs to something I’ve been thinking about, too.
You want your students critically thinking. You’re not worried so much about, “Did you come up with this one sentence on your own?” But, “Did you have this thoughtful process of coming up with these skills so that you go on and use those critical thinking skills to represent clients or whomever, make ethical, moral legal decisions for your clients down the road?”
What about from the law firm side? What about from the employer side? You know, I sort of hear two different arguments. Argument one is, okay, we’ve already invested in this third-year associate, and now 80% of what they’ve been doing, we can do with AI. Now, we don’t want to fire the third-year associate, because we’ve already spent hundreds of thousands of dollars, but why would we hire 20 Duke law students into our first-year class when we can hire five? Because the third-year associate is 90% more efficient. That’s argument number one, is that there’s going to be an AI hiring impingement soon.
Nita: Yeah. That argument was made with e-discovery too, right? Where, so many first-year associates were spending time in basements with giant file boxes, going through and doing massive and painful discovery for hours, and like, that’s what first-, second-, and third-year associates did for a long time. And then e-discovery came along where, that process of having a bunch of associates in a basement with documents didn’t make sense anymore, because you could do it much more efficiently with e-discovery instead and having humans oversee the process.
And people said “Well, you know, we’re not going to need lawyers anymore. We’re certainly not going to need associates anymore. We can just hire far fewer of them and have technology do it.” But that’s not how it turned out, right? It didn’t lead to a contraction in the number of associates. It’s led to a shift in what it is that they were working on instead.
And the question is, with generative AI, is there enough left after the shift that it makes sense to continue to bring in associates? Like, is there really going to be a replacement? And can you really continue to say like, “Oh, well, we’ll delegate the low-level stuff to AI, and then humans will do that higher order critical thinking”? Maybe. You know, that’s an empirical question to be seen.
So far, if you look at what’s happening, I think a lot of people are pulling back rather than diving into AI. And that’s because there’s a gap right now, especially in fields like law, where what you need is very thoughtful, not error-ridden, carefully cite-checked kinds of arguments that don’t have hallucinations in them. And the process of having to check every citation and avoid the number of hallucinations that have occurred turns out to be, perhaps, right now, more time consuming, not less, than generating it straight out. So I think there are certain things where AI has become very effective and useful in legal settings, but it’s not in the replacement of the first-year and second-year associates yet.
[9:50] Mike: To your point about e-discovery, I heard the same thing about fax machines. I heard the same thing about emails. I heard the same thing about smartphones. “This was going to make our lives easier, make the work go away.” No, I just, now I work more hours, and I hire more people at our firm.
So maybe that’s true. But I saw an article the other day, AI is advancing twice as fast as Moore’s Law, if not faster. Its algorithmic capability is growing. So if that’s the case—I guess I’m just putting you on the spot—is there a potentiality where, thinking from a firm’s perspective, “We care about our return on investment. We care about competing against other firms.” Do the seven biggest firms go so deep into AI that they’re charging lower costs, AI’s not getting sick, AI’s not asking for raises, and is there a future where legal hiring is hit? I don’t know if you’ve seen the undergrad hiring market, but they’re getting crushed by AI in some fields.
Nita: Yeah. That doesn’t surprise me. Right? There is a lot that AI can do, and a lot more than I think people expected AI to be able to do. And I do think, like the Industrial Revolution, that this is going to be a major revolution in work. So it’s not that I don’t think that there will be disruptions to the legal industry. There will be. Is that really going to be because you have the biggest firms charging less? Probably not. It’s hard to get people to charge less once they’ve charged more, I’ve found, right?
But, you know, will they lean heavily into AI? They already are. I mean, the biggest firms are the first ones who signed up for services like Harvey and figured out quickly how to use services like Harvey effectively as part of their legal practice, training it on their documents, being able to create custom workflows. But custom workflows don’t necessarily mean that you have fewer associates. Do you end up with more clients that you can serve with fewer people? Possibly.
Does that lead to a consolidation within the industry, such that you have more clients being served by fewer firms, who are able to do so quickly and effectively thanks to mastering AI and already having the training base to make that possible? Probably. Like, you’re probably going to have some consolidation within the legal field.
You’ll have smaller mom-and-pop shops that maybe can compete better than they could before, rather than less well than they could before, because they’re put on a more level playing field with having AI at their fingertips. But AI gets better and better. Compute gets more and more expensive. It hasn’t gotten less expensive. And the more complex the tasks are, so far, the more expensive it is to actually have that AI as part of what you do.
And so, you know, there will be a shakeout. I just don’t think it’s going to be a shakeout leading to a loss of a bunch of first year associates in the short run. I think it’s going to take time to figure out how it’s going to actually play out in the legal field.
I will say this. The people who spend most of their law school career just using AI tools and not thinking, they’re probably not the ones who are going to do the best in the long run.
[12:35] Mike: I would agree. Not even in the long run. They also have to be interviewed by firms where, I’ve listened to enough of your work—neural implants aren’t here yet. They have to speak on their feet, and they’re not even going to make it to the long run because they’re not going to make it in the short term. I would agree with that.
We can go back in time. I found your old Vanderbilt bio. When we were both at Vanderbilt, your focus was on bioethics, you know, neuroscience. Your bio tracks with your bio now at Duke. This to me seems like a natural progression. But when you were watching Terminator, were you like, “That’s the future,” or did you think it would be genetic manipulation? What were the plot twists in your career from Vanderbilt to now?
Nita: Yeah, I mean, I started by being really interested in behavioral genetics, so not just genetics straight up, but the way in which genetics impacts behavior. That brought me naturally to neuroscience, because it’s really about the kind of interplay between, you know, brains, genes, gene expression. But, you know, it started by kind of looking at how does that play out and thinking about human autonomy, human free will. My dissertation back in the day was really about, how do we think about genetics and neuroscience and philosophical concepts of free will and how the law thinks about all of that?
And neuroscience has become amplified by AI. So I’d say what brought me to AI is neuroscience, which is—all of genetics, all of neuroscience is powered by data, and it’s powered by the analysis of data. And so you had to kind of watch the growth of AI to see how those fields were rapidly changing thanks to AI, because of the capability jumps in AI. And so I just sort of started tracking AI at the same time, because of how it impacted the first fields that I was really deeply invested in, which were both neuroscience and genetics.
And in tracking them, you know, they kind of all converge together. And seeing that convergence has been, you know, what has been interesting to me. I’m still very much in the same place as I was when I was back at Vanderbilt, which is studying how these different technologies shape and change how we think about responsibility and how we think about what it means to be human. It’s just become more interesting as the technology and the advances of it have become more interesting. And because of it, like, you have to understand AI inside and out. You have to understand the governance of AI inside and out in order to understand the convergence of these different fields.
[14:47] Mike: I think you opened up one of your Duke classes for everyone. Is that accurate? I can take it, right?
Nita: Yeah, you can take it. So, Fall of 2025, I had a ton of people, judges and lawyers, ask if they could audit my AI Law and Policy class at Duke. And Duke was like, “No, that’s not going to happen.”
Mike: They want CLE credit or something for that, right?
Nita: Yeah, I mean something, right? So I was like, “Okay, is there some different way that I could do this?” I had been dabbling a little bit with Substack, and I was like, “What if I use Substack as the vehicle by which I deliver my course content?” And after every class, I would just sit down for a few hours and take the lecture and turn it into an interactive lecture for people online to take. And I did that across the fall semester, and so all 26 classes from the fall are available, and people can go and take them at their own pace on Substack.
And now I’m doing that again with my Advanced Topics in AI Law and Policy class this spring, where, we’ve just started, and people can dive in and learn about the advanced topics. And that’ll be kind of a rotating set of topics. This spring, I decided to focus on the issues of cognitive offloading and cognitive impacts and, like, the impacts of social media and recommender systems, and how the law is grappling with the impact of the use of AI on humans, rather than just the governance of the technology itself, which is a lot of what the fall focused on.
Mike: You used the term “cognitive offloading” a few times. I’m not familiar with it. I can make a reading comprehension guess. We’re thinking less; something’s thinking for us.
Nita: Yeah.
[16:13] Mike: But one way, to even back up, your first class that you offered on Substack, you even defined, what is AI?
Nita: Yeah.
Mike: Because to me it’s interesting. We’ve had lie detector tests—they’re not measuring our brainwaves or our neural synapses, but they’ve been measuring—so in some sense, we’ve probably had some version of AI for a long time. But what is your definition of AI, and how does it differ from things we’ve had like Grammarly and other tools for many years?
Nita: The truth is, they’re all AI. We’ve had AI for a really long time. AI is not a new concept. I think, where people have become familiar with AI is with transformer models. That’s our general-purpose AI models. So these are things like ChatGPT or Claude, or other models that can do lots of different things. But, recommender systems are AI. Grammarly uses AI. When you’re on Netflix and it auto-plays or makes a recommendation for what you ought to watch next, that’s AI.
[17:04] Mike: So, let me stop you there. You’re the expert, and everyone has this question. If I’m talking to my partner, spouse, friend, Anna, over the phone and all of a sudden—and I’m just talking on the phone, because everyone wants to know this—and now Netflix is recommending I watch this movie that I was just talking about. How is that happening?
Nita: Let me ask you this, Mike, which is, how do you think that’s happening?
Mike: I think we see coincidences more—when I was in high school—
Nita: You think it’s coincidental? You don’t think it’s that actually your devices are listening to you at all times, and that they shouldn’t be but that they’re listening to you and always on, and that there’s voice recognition and preference recognition, and that all of your tells have led to it?
Mike: I don’t know the answer of course.
Nita: Well, I’ll just say this, which is there have been multiple lawsuits about this and settlements as a result of the fact that these devices are not supposed to be listening to you when you’re just talking about it, but they may be. And sometimes part of the terms of service is, they can be listening to you for purposes of advertisement or for targeting and tailoring content to you.
So it’s not just what you type in. And that’s part of, I think, what people oftentimes don’t realize: it’s not just what you choose to share. Psychographic profiles about you and inferences about you are being built. It’s about a lot more than that. You know, a lot of passive sharing that you don’t realize that you’re doing. In fact, the first lecture of this semester for my AI Law and Policy class is about the inferences. And in the fall semester we cover what data is being collected and how AI is actually able to use it to build these really precise psychographic profiles of you. Generative AI makes it a lot more powerful because it’s this kind of closed-loop world. You can have content which is generated in response to the psychographic profiles that have been created about you in an instant.
But you asked me about cognitive offloading.
Mike: Now I want to be part of one of these class action lawsuits if they are listening. But let’s go.
Nita: Well, I mean, you can probably join one. It’s not that hard. I don’t run them, although, you know, if you find a good one, let me know, because my students are going to write amicus briefs in some of these interesting cases this semester.
[18:59] But, cognitive offloading is a term that’s been around for a while. It’s really just when you offload something that you would do, traditionally, to a tool or a device, but it’s something that you would mentally do.
So, calculators involve cognitive offloading. Instead of doing the math in your head, you do the math on a device. GPS is spatial cognitive offloading. Instead of actually navigating through space and figuring out how to do so, you use GPS to do so instead.
The “Google effect” is something that’s been known for a really long time. The Google effect is instead of remembering things, you remember how to find things. And so facts have often been offloaded. Or—most people don’t remember phone numbers anymore.
Mike: I was going to ask you if you could remember any. Like, I don’t know—
Nita: I can, yeah, I mean, so I actually still do remember a lot of phone numbers, but not all of them. But I have at least, like, a dozen phone numbers that I could tell you off the top of my head. And some of them, even people that I call a lot, I don’t necessarily keep because it’s offloaded onto my phone.
So, cognitive offloading with AI is similar, but it’s also different in some ways. And in fact, this is class 1.2 of the spring semester. It really gets into, is it the same or is it different? And if you look at the things that we offloaded before, they were kind of pieces of thinking, where you still owned a lot of the critical thinking. Like, you decided what you wanted to search for when you went to Google, you did the search, you got a bunch of search results, and then you interpreted the results and made sense of them for yourself.
With cognitive offloading for AI, people are offloading the entire process of critical thinking. And so pretty quickly what we’re seeing is that the impact of use of AI, and especially regular use of AI, is a reduction in critical thinking skills overall, not just one aspect of it. It’s the kind of holistic offloading of thinking itself and writing itself. That’s a long-term experiment, and we don’t know how it’s going to play out, but we can guess that, given how crucial critical thinking is to human self-determination, it could be a pretty bad thing.
[20:53] Mike: So, we had Dr. Anna Lembke on our podcast talking about dopamine, and she talks a lot about how we have more downtime than ever before. I think, because a lot of this cognitive offloading, things are just more convenient and more—right? So if AI’s going to accelerate this pace, maybe we can shift towards ethics and how to, like, legally protect ourselves, too. But are we heading towards a world of de-evolution of thought? So we’ve been evolving in thought for 300,000 years of homo sapiens…
Nita: Yeah, I think probably. And maybe I’m a little bit biased, but my next book that I’m finishing now is entitled Cognitive Extinction. So I think that we are moving toward the de-evolution of thought and the de-evolution of human thinking.
Now, will that all turn out okay for us in the long run? I don’t think so, but who knows? Maybe there’s some kind of human resilience where we find some other purpose and meaning, and all that downtime we really invest in relating to one another and spending more time with our loved ones and spending more time in nature. But not if we’re still glued to screens. I mean, we’d have to find some way to actually automate all of those processes, get off screens, not be addicted to social media, not be addicted to devices, and actually interact with one another.
So it might be that we end up leaning into other aspects of self-determination, like reclaiming mind/body connection and connection with others, but that’s not happening right now. That’s just a possible pathway forward if we give up on critical thinking.
[22:24] Mike: Right. And we’re probably being manipulated a lot more. I think this is where you’re going to tell me, “Yes, Mike, you are,” just like, “Yes, Mike, they’re listening, and that’s why Netflix is recommending Alabama football games.”
Nita: Just wait. Like in a minute you’re going to get advertisements for Alabama football games again.
Mike: Or I’ll say something, like, ridiculous. Right? I already get a lot of those. We’re probably being manipulated.
Nita: Yeah. You are being manipulated.
Mike: Can you give me a great example? My example would be, I’m very much in the middle politically, but my friends on the right, they only tell me things that they hear from right talking points, and my friends on the left tell me things they only hear from left talking points, and there’s no convergence anymore like in the old days of media. It’s all these echo chambers. That’s an obvious one, but maybe you can give me a better everyday one that’s happening to me that I’m not aware of.
Nita: People understand the filter bubbles that they end up in for political information, but it’s everything. I mean, if the source of your content is steered by recommender algorithms, it’s the food that you’re exposed to, the recipes that you’re exposed to, and what you choose to cook.
One of my students in my seminar was talking about how there was something that was trending on TikTok for her, which was something about what makes a girl’s girl versus not a girl’s girl. And she described how this had changed her relationships and how she evaluated relationships with other people, until she started to really question it and stopped doing that. And she was like, “And you know, this was happening with everybody in my friends group.” The other people in my seminar were like, “That wasn’t on my corner of TikTok. I’ve never heard of that trend or that concept.” Right? This thing that, like, fundamentally was changing her relationships with other people and how she judged her relationships with other people. Whereas, the other student was like, “No, the thing that was trending on my side was, like, how people were dealing with back pain and with leg pain and the kinds of exercises, and so, like, I’ve started doing these exercises every day that really govern how I get up and I spend my mornings.”
Each one of us are in these little bubbles that are steering our behavior, how we interact with other people, what we do, the first thing when we get up in the morning, what we eat at night, how we relate to other people. And it’s so pervasive that, I think, people can understand like, “Oh, I’m getting all of my news from left-leaning or right-leaning sources,” but they don’t realize they’re getting their entire life curated by their experience of being online.
[24:45] Mike: So, I disagree with Sam Harris; I think that I have free will and I can grow. It doesn’t matter; either I can or I can’t, but I’d like to think that I can. I think Sam Harris would say, “Well, there’s no difference. We never had free will anyway. We were just an algorithm of every molecule in our body plus every experience we’ve had, and now we’re just an algorithm of that plus what’s being pushed on us as experiences.” Neither here nor there.
Are we being—are our personalities, now, how I interact with you, is that being manipulated by AI predicting me, learning me—databases really, is what you’re saying—databases learning me, and then subtly crafting things that I see that causes me to behave in a different way towards other people?
Nita: Yes. Even towards yourself. But I think it’s useful to distinguish between freedom over your preferences and desires and freedom of action, which are different kinds of freedoms that exist within free will. So, Harry Frankfurt was a very famous American philosopher, and he makes this distinction to show that what makes us kind of uniquely human isn’t necessarily, like, control over our preferences and desires, but the ability to sort between them.
So for example, like, I like chocolate. I also like not having migraines. I like red wine. And, again, I like not having migraines. And I can sort between them and set what we call a “second-order desire” over my first-order desires. And then the question is, can I act consistently with that or not? Right? So I both can sort between preferences and desires that I might not have any control over, and then, can I choose not to drink the red wine? Can I choose not to eat the piece of chocolate, or do I have no freedom of action?
And if you don’t have freedom of action, then Frankfurt and I would agree, you’re not, like, fully human. You’re more of a wanton. You’re acting just in accordance with your impulses. And that’s what’s kind of interesting about when you spend a lot of time online: neuroscience studies show, especially when you’re on, like, short-form videos together with recommender algorithms on a platform like TikTok, essentially your capacity for freedom of action gets turned off. You end up going into kind of automaton mode.
And so if you’ve noticed, like, all of a sudden you might look down at your watch and, like, two hours have passed when you thought you were going to spend 15 minutes on it, it’s because that capacity for freedom of action just got turned off. Like, that part of your brain sort of just got quieted, and you went into this kind of reptilian automatic mode.
And reclaiming that—the capacity to act consistently with your second-order desires—I think that’s what free will is. But a lot of what’s happening in the digital world is trying to both shape your preferences and desires, shape even your second-order sorting between those preferences and desires, and then turn off the capacity for you to then spend time choosing to act in accordance with what your actual preferences and desires are. Like there’s no authentic you who is acting in the world.
So free will can be overridden. And, you know, from a neuroscience perspective, like, sure, we can philosophically say that every action is determined by some preceding action, but that doesn’t mean that there’s no capacity for flexibility or freedom of choice when it comes to action. Neuroscience also supports that you maintain flexibility of action choices up until the last moment.
[27:58] Mike: So I do something called “load management day,” although I’m horrible at it. And that won’t surprise you; I think we all are. On Sundays, I try to put my phone in a different room. Or load management hours. I still want to get into the legal framework for this, but just for mental health: roughly one of every four of our podcasts is with a mental health expert. I, you know, mentioned Dr. Lembke and Dr. Brewer. We’ve had Dr. Gabor Maté, genetics, epigenetics, your area. What would your advice be if we are being manipulated more and more?
Nita: Download an app like BePresent and put it on your phone, and set intentions in advance. So for example, decide, “On the weekends I only wanna use my phone for email, and only for X amount of time per day, and not during these hours.” And then when you try to get into it, your phone will say, like, “No, you can’t,” because you’ve already set your intention. You can override it, but it actually takes some steps to override it. And taking those steps itself engages the self-control part of your brain, rather than the automaton function. So your automaton self reaches for the phone and wants to launch Instagram. Your activation of that pause through an app like BePresent reactivates the part of your brain that enables you to make choices. And just giving yourself that kind of pause enables you to act more consistently with your preferences. And, I have it on my phone, I’m not perfect at it, I sometimes break through and I’m like, “No, I really do need to check this thing. I wanna watch one more video on embroidery or whatever.”
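[The “set intentions in advance” pattern described here can be sketched in a few lines of code. This is a purely hypothetical illustration of the idea, not BePresent’s actual implementation; the app names, time windows, and minute budgets below are invented.]

```python
from datetime import time

# Intentions are stored ahead of time; the check runs at the moment of use.
# The deliberate extra step needed to override is what creates the pause.
INTENTIONS = {
    "email": {"hours": (time(9), time(17)), "daily_minutes": 30},
    "instagram": {"hours": None, "daily_minutes": 0},  # fully blocked
}

def allowed(app: str, now: time, minutes_used: int) -> bool:
    """Return True if using `app` right now fits the preset intention."""
    rule = INTENTIONS.get(app)
    if rule is None:
        return True  # no intention set for this app
    if minutes_used >= rule["daily_minutes"]:
        return False  # daily time budget exhausted
    if rule["hours"] is not None:
        start, end = rule["hours"]
        if not (start <= now <= end):
            return False  # outside the allowed window
    return True

print(allowed("email", time(10), 5))      # True: in window, under budget
print(allowed("instagram", time(10), 0))  # False: budget is zero
```

The point isn’t the rules themselves; it’s that they were chosen in advance by the reflective self, so the automaton self has to stop and confront them.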
Mike: You’re right. Now I’m gonna get embroidery commercials.
Nita: My daughters and I get into these crafts. Recently, we’ve gotten into embroidery.
But in any event, it’s the pause itself that has substantially enabled me to act more consistently with my own authentic preferences and desires rather than being so easily overridden and manipulated.
Mike: Is the app called BePresent?
Nita: Yeah. BePresent.
Mike: I think Dr. Judson Brewer—we’ll re-link his podcast too—he talks about these pauses.
Nita: There’s tons of neuroscience that shows that giving yourself the pause is what actually enables you to activate the part of your brain that allows you to put into place that kind of reflection.
[30:06] Mike: Turning to what you were talking about on NPR, there’s not just the intentionality and mental health way to do this. I live in Colorado. Per your NPR interview, that’s one of three states where I have a right to my neural data, for lack of a better word.
Nita: Yeah. So I mean, you know, overall, it’s great to have tools like that we can download, but that puts a lot of pressure on the individual to reclaim their mental state. And the question is, like, why don’t we have rights against this to begin with? Why aren’t we legally prioritizing our cognitive liberty, our right to self-determination over our brains and mental experiences, and why aren’t we making it impermissible by law to tap into and manipulate our mental states beyond a certain extent?
And so there are some early attempts to try to address this. Some of them are, as you referred to, neural data laws, which are pretty narrow in scope right now but are at least well intentioned. So what neural data laws do, for the most part, is say that if there’s a direct measurement of brain activity, and that’s usually through, like, a brain sensor that is part of a device, whether that’s in your headphones or your earbuds or, increasingly, devices like Meta’s neural band, which is a wristband that picks up your intention to type or swipe as you interact with the AR glasses that they have.
That data is sensitive data, and therefore there are limitations on how it can be used. It can’t be sold to third parties, it can’t be used without explicit notice and consent, and its use is limited to the purpose for which it was collected rather than repurposed to do things like steer your mental activity.
So I think it’s a good starting place. The problem is, so much of your mental data isn’t coming from those neural devices. It’s coming from, you know, the pause before you scroll to the next video on a particular device. It’s coming from what you’ve typed. It’s coming from what you’ve purchased. It’s an amalgamation of a whole bunch of data that’s being used to build these psychographic profiles.
And so, you know, a kind of more comprehensive approach to mental privacy would say, like, it doesn’t matter which source the data is coming from; what matters is the inferences and how it’s being used or misused against you. And so I hope what we’ll see in second-generation laws is a move toward a broader understanding of the many different ways in which inferences are being made about your mental state and safeguards against manipulation.
You see that in some of the age-appropriate design cases and legislation that’s cropped up. A lot of it has faced First Amendment challenges. And so it’s going to be a question of whether age-related protections on social media (against both the collection of data and the steering and cognitive constructs designed to keep your attention, keep you engaged, and activate the parts of your brain that turn off your self-control) can be put in place without running afoul of First Amendment concerns.
Mike: And the headwinds against that, of course, would be corporate desire for money—
Nita: Tech lobbying.
Mike: —or government, military. Correct.
[33:10] Nita: Yeah. So, I mean, you know, right now I think all of it is caught up in the bigger question under the Trump administration, with the recent executive order trying to prevent states from passing AI-related legislation: is there any way to put safeguards in place that don’t come at the cost of innovation within the AI space? And because there’s such a big emphasis, from a national security and race-against-China perspective, on developing AI kind of at all costs, I think that applies whether you’re talking about recommender algorithm AI or large language model and foundation model AI; the full umbrella of protections, I think, is at issue for AI right now.
And so you have massive tech lobbying every time there is any legislation that tries to protect kids from, you know, social media harms. But then you also have the Trump administration, which both isn’t allowing any sort of comprehensive federal law to pass in this space and is also trying to prevent states from acting in this space. There are serious headwinds against it.
Interestingly, I think, you know, there’s this kind of carve-out space you could think about, which is, we see what the harms of social media have been for kids. And increasingly I think people are coming to accept that and coming to take measures to address it. If we’re quickly starting to see what the cognitive impacts of AI are, can we enable innovation to go forward while also trying to prevent a decade from now saying, like, “Oh, we wish we hadn’t led ourselves into cognitive extinction, or created an entire generation of people who can’t think because of their overreliance on AI,” and are there design choices that we could make now, whether through incentives or regulation, that might prevent that kind of future harm?
[34:58] Mike: What’s crazy is it’s not just cognitive extinction, which is interesting, but social extinction. There’s now AI companions.
Nita: Yeah, yeah. Relationships. I mean, it’s overall—
Mike: 29 million people have AI significant others, which is just, I heard that on the news the other day.
Nita: That’s shocking. I hadn’t heard that stat yet, but it doesn’t surprise me.
But yeah, I mean it’s overall much more comprehensive than just offloading of critical thinking. And it’s not just AI companions, right? The very fact that so much of our relationships are intermediated by technology now has had a serious impact on the connection between people. Or the fact that most people, when they wake up in the morning, the first thing they do is reach for their phone. That’s had a serious impact on interoception, which is your connection to your body itself, which has all kinds of impacts for autonomy.
So it’s really kind of a comprehensive undermining of human self-determination, not just cognition, but overall the kind of mental landscape of what it means to be human.
Mike: It’s an interesting thought experiment. If I ask any of my friends my age, “Do you wish you had grown up with cell phones?” none of them would say yes. None of them.
Nita: Yeah. Yeah.
Mike: But if you ask anyone today, “Do you want to get rid of your cell phone?” they’re also not going to say, “Yes, I do.” So what is your carve-out space? What is, like, the sweet spot? If you’re talking to a law student, and they’re actually super into what you just said, they read your first book, they read your upcoming book—what would they pursue to fight the good fight so that we don’t have cognitive decline, but we do interact more with one another? Where is that carve-out space? Where is the good fight?
[36:26] Nita: Yeah. I mean, I think it’s not a simple one-step thing. There’s a kind of comprehensive realignment I think that we need in order to get to a place of kind of reclaiming cognition. Like, I don’t think device bans are the way to get there. I think the truth is, to your point, people don’t wanna give up their cell phone, even if they wish that there was a world in which they didn’t grow up with them. And so, is there a different way to co-evolve with technology that actually puts human flourishing at the forefront of it rather than in the background? And I think it requires changes both in our individual behavior, but learning what that is, like, what is a healthy way to interact with technology that enables you to use it for good rather than bad?
But it also just can’t be up to the individual, because these are powerful forces. And so there has to be a rethinking of what design of technology looks like for human enhancement rather than human extraction and diminishment. There have to be laws that both incentivize that but also punish failure to do so. It has to be a meaningful area. Like, we’re getting pretty good at red teaming and impact assessments, ex ante, for technology, but for really limited harms, not for harms to cognition. And it turns out that you can measure things like manipulation by AI models. You can measure manipulation by your social media feeds.
But we need that kind of transparency for people, and we need the requirement that those be in place. You know, like, it’d be nice if you had, sort of, in the top right corner of your social media feed, like, “This is the extent of manipulation of this view right now.” Just like a little meter. Like, you’re in the red zone. Like, maybe you want to actually change some settings to not end up in the red zone. Just some basic things like that would be incredibly helpful.
Mike: I mean, you mentioned red teaming. We just had former CIA Director General David Petraeus on, and of course we talked a good deal about red teaming. I would love a red team app on my phone.
[38:13] Nita: Yeah. I mean, just a red teaming app on mental manipulation would be great. You are being manipulated—or maybe we’ll take away the kind of normative term from there. So, “Here is the extent to which your feed has veered far to the right.” Maybe you’re fine with that. Maybe you’re not fine with that.
If you had increased controls and transparency about what’s happening, or if you could do simple fixes like turn off the recommender algorithm and be able to just see what is trending in your region or in your country or in any other place you wanted to, that also has a powerful effect. It just, it turns out that if you look at neuroscience studies of what happens when you are on a platform like TikTok watching what’s popular in your region versus what’s customized to you, it has a totally different impact on your brain and your capacity to actually engage self-control versus not.
So little changes like that, that are evidence-based, would be incredibly powerful, and it can’t just be up to individuals to implement them. We need a kind of comprehensive ecosystem that moves us in that direction.
Mike: I would agree. We get much more dopamine in the immediacy from bad habits than from good habits that are more long-term. So we need the individual level, but we also need the more macro level.
[39:27] Looking even further out, you know, I know right now I wear earbuds, and they can maybe monitor, but you were talking, and people have been talking, about neural implants in our brains. And then the word “mind control” came up. Are these things that should be keeping me up at night, or is it just the cognitive extinction of someone’s grandchildren or someone’s children? Long term, you and I, at Vanderbilt, wouldn’t have predicted the movie Minority Report.
Nita: Yeah. I mean, I still don’t think Minority Report’s going to happen. That’s not really technology I see happening; precogs, I don’t really see happening. But the concept of, like, could we get to the point of just kind of perfect prediction? I mean, that kind of stuff is already sort of happening. Right? So if you look at COMPAS and other risk-based, AI-driven prediction models that are being used in the criminal justice system to make decisions about bail or sentencing, those are based on predictions of future dangerousness, predictions about the person and the likelihood of them committing a crime. And it’s not arresting them in the moment, but it is having serious impacts on the decisions made about them. Or it’s credit scoring that’s being done by AI. It’s hiring decisions that are being done by AI. So that kind of idea is happening more broadly.
Are neural implants part of what you should be worrying about? Maybe implants not so much, although if you talk to any of the brain-computer interface companies that are working on implanted neurotechnology for health-related reasons, they do think that in the future, when it becomes safe and efficacious, increasingly more people, not just people who need it to restore autonomy, but average people, might choose an implant so that they can merge with AI and have the capabilities of AI. I don’t see that within my lifetime, but maybe I’m shortsighted.
I think much more likely what you’re going to have is what’s already come to the market. Like, there will be a ChatGPT moment where it becomes much more widespread. People are used to now quantifying their health. They have sensors that are in their watches that track their heart rate. They have sensors in their rings to track temperature and sleep patterns. They are using productivity tools on their phone. They’re using mindfulness apps that are tracking a huge amount of data. So the next generation of these devices, and probably the biggest untapped market, is tracking brain health and wellness by embedding sensors into earbuds and headphones and other everyday devices that track brain activity.
And AI has become increasingly powerful at decoding what those neural signals mean. And why would people adopt this? They’d adopt it in part for the same reasons they’ve adopted health sensors for all of the rest of their bodily activities. But they’ll also adopt it because, increasingly, what companies are doing is turning to the development of neural interfaces as the way we’ll come to interact with technology. So, you know, replace your keyboard and replace your mouse with a neural sensor that picks up your intention to type or swipe or speak. And the better AI gets at decoding what those neural patterns mean, the more seamless our interaction with technology will be.
And so, where this kind of converges between the AI and neurotech industries is that most people don’t think the future of technology is based on screens. Like, not carrying around a mobile device in your hand and having to look at a screen and interact with it, but having AI everywhere, so that, you know, the reason Netflix is recommending the Alabama game to you is because AI is listening to you at all times and everywhere. We’ve moved from having a screen you intentionally interact with to, like, AI in your pocket, or AI in small, seamless interactions.
And you have to have some way of interacting with that, right? It can’t be that you have a keyboard or a mouse or a device that you’re working with. And so the vision for most of these companies is that the way you’ll interact with these is through a neural interface. So a brain sensor, you know, and that increasingly could come as a little tattoo behind your ear, that picks up EEG activity, electroencephalography, brain activity in the form of electrical waves, and deciphers what that means in order to interact with those AI devices.
And so then, you know, we’re not walking around with screens in front of us. We’re walking around with AI listening to everything all the time, and us interacting with it by thinking about it doing so.
[43:37] Mike: It’s such a slippery slope. I just say, that takes away the pause that you were talking about earlier. I wake up in the morning, and there’s no longer the phone in the other room.
Nita: No.
Mike: It’s tattooed behind my ear.
Nita: In the same way that if you have an Apple Watch, it can vibrate and wake you up in the morning, you know, you get a little vibration behind your ear, it’s interacting with your smart alarm clock on the side, and it says, like, “Good morning, Mike. It’s time to get ready for your day.” And you say, “Oh, like, I don’t feel like getting up yet.” “Well, you have X, Y, and Z on your calendar today, so it’s time to get up and take a shower. And don’t worry, I’ll be here listening at all times and part of all of that.”
Mike: Hence your word “co-evolution,” but it’s literally our partner throughout the rest of—
Nita: Yeah.
Mike: I mean, you could see AI being intertwined with us going forward 5, 10 years, until however long we last as a species.
I don’t want to steal from your future class. I do want to end on a positive note. You mentioned in your syllabus: what does AI do to the people using it going forward? And in a positive way, if I’m a law school student listening to this, or a law school applicant, I’m a little bit disturbed at some of this. How do I reclaim my brain? How do I prevent myself from being manipulated?
Nita: Yeah.
[44:43] Mike: What is your advice to your students on how to make AI work for you, in not cognitive offloading, but the opposite? Becoming more cognitively capable, more cognitively curious? Finding spaces in the legal profession where there are going to be opportunities for you.
Nita: Yeah, so I mean, first I’d say, learn and understand how AI actually works. It’s important to have a really good understanding of what AI is in order to understand some of its limitations and what it’s doing. It demystifies it in a way that I think is really helpful for people. And it was pretty extraordinary in the fall semester to see a lot of students who came into the class not really understanding, like, what does it mean to predict the next token? What is that actually doing? And to see them move away from thinking about AI as a thinking partner, to understanding it’s a predictive machine.
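[For readers who want the mechanics, “predicting the next token” can be sketched in a few lines. This is a toy illustration only; the vocabulary and scores below are invented, and a real model has tens of thousands of tokens and billions of parameters.]

```python
import math

# A language model assigns a raw score (logit) to every token in its
# vocabulary given the context so far. These numbers are made up.
vocab = ["think", "predict", "machine", "partner"]
logits = [1.2, 2.8, 0.5, 0.1]

# Softmax turns the raw scores into a probability distribution over the vocab.
exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]

# Greedy decoding simply emits the highest-probability token; real systems
# often sample from the distribution instead.
next_token = vocab[probs.index(max(probs))]
print(next_token)  # "predict"
```

That is the entire primitive being repeated, token after token; everything that feels like reasoning is built on top of this prediction step.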
Now, once you do that, it’s a predictive machine that can override your thinking, if you don’t set it up and put guardrails in place in how you use it. And so one way that that can happen, for example, is, like, I have a setting in Claude that says, “I want you to interact with me in a way that promotes my critical thinking skills. I do not want you to give me the answers. I want you, if I ask you a question, to ask me questions back to help me get to the answer myself.”
And so, you can do that. You can put those kinds of guardrails into place that say, “This is how I want you to interact with me.” And then you can try to use it for brainstorming. And you can use it for refinement, for editing, for example. Like, if you write the first draft, you haven’t overridden the process of reasoning and executive functioning and putting into place what the structure and the idea is. And then, if you want to, go to AI and say, “Hey, I’d like you to give me suggestions about the next steps I could take to strengthen the argument.” It’ll come back to you with questions, if you have that setup in place, and say, “Have you thought about X, Y, and Z?” and guide you through that process of thinking better about your writing. So use it as a thinking stimulus rather than a thinking replacement.
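[A standing instruction like the one described is typically pinned as a system message so it applies to every exchange. Below is a hedged sketch using the common chat-message convention; the model name and request shape are placeholders, not any particular vendor’s API or Claude’s actual settings feature.]

```python
# The guardrail, paraphrasing the instruction described in the conversation.
SYSTEM_PROMPT = (
    "Interact with me in a way that promotes my critical thinking skills. "
    "Do not give me the answers. If I ask you a question, ask me questions "
    "back to help me get to the answer myself."
)

def build_request(user_message: str) -> dict:
    """Assemble a chat request that pins the guardrail as the system message."""
    return {
        "model": "some-llm",  # placeholder model name
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    }

req = build_request("What next steps would strengthen my argument?")
print(req["messages"][0]["role"])  # "system"
```

Because the system message rides along with every turn, the model keeps asking questions back instead of answering directly, which is what turns it into a thinking stimulus rather than a replacement.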
Mike: Maybe another way of looking at it is, use your brain, and then feel free to use AI as another set of eyes that guides you.
Nita: Yeah, I think that’s right. But don’t use AI to generate the idea to begin with.
Mike: Exactly.
Nita: It just turns out that a lot of the process of thinking itself is the process of coming up with the idea, the process of writing the outline, the process of structuring and restructuring the argument. I think a copy editor is great. I love a great copy editor when I submit an op-ed or, you know, a post somewhere. And it’s wonderful to have AI serve as a copy editor. I don’t take every edit that a copy editor offers me, and you shouldn’t take every edit that AI offers you, right? And so it can be useful to have another set of eyes, as you put it. Here, it’s another predictive machine on top of it. But nevertheless, same concept, right? Which is to have it act as a good copy editor.
[47:27] Mike: Final question, then, about that. How often do you pull out your phone and use AI? And are things like energy usage, water usage, and all that on your mind too?
Nita: Yeah, I mean, I should think about it more. That’s the truth, Mike. I would like to tell you that I’m really thoughtful every time I use AI instead of Google or instead of any technology whatsoever from an energy usage perspective. I’m not as thoughtful as I should be about it, or as we all should be about it.
But how often do I use it? I use it often, every day, all the time, I am interacting with AI tools. We all are. The question is, which AI tools, right? So there isn’t a device or technology you use anymore that, for the most part, isn’t using AI.
If you mean, do I use things like Claude or ChatGPT or Grok or Gemini? I do, and I use them for different purposes, because I think each of them has different strengths. I find that sometimes, for hard research questions, it can be useful to generate an initial bibliography of sources, and I find ChatGPT is particularly good at that. Sometimes, for checking if there are source errors or argument flaws within an argument, I might use something like Grok, which I’ve kind of set to be really brash and mean to me. Gemini and Notebook, I find, are pretty good for project planning and for bigger project maintenance. And Claude, I find, is a really good copy editor. I like Claude’s voice more than I like the voice of some of the other ones, and so I appreciate the kind of copy edits that I get from Claude.
[48:50] Mike: So, my biggest takeaway from this conversation is as follows. The best thing you can do is not be a Luddite or whatever and shun this; it’s coming anyway. Nor can you, the expert, or me, the neophyte, really predict where it’s leading. But something we can be very mindful of is, it’s awesome to be our own intellectual stimulation.
Nita: Yeah. I love the tools for intellectual stimulation. I even love the tools for some offloading. I don’t like email. I like being able to generate emails and email responses using these kinds of tools. I think that’s great. There’s some things that I just don’t mind offloading because they have been such an annoyance in my life that it’s great to have that as a possibility.
I’d say be careful, be thoughtful, and try to realize that it’s your own mind that’s at stake. It’s not the short-term outcome that you’re trying to optimize for. It’s the long-term wellbeing of your mental state.
Mike: And all of us, by the way. We’re all in this together. If we hand it off.
Nita: That’s right.
Mike: I’m a little suspicious of the few emails we’ve sent each other now, whether they’re sent by you or not.
Nita: If you get a really short email back from me, which is for the most part what you have, that means I wrote it.
Mike: Right, right.
Nita: If you get, like, a long email that has lots of niceties in it, I didn’t write that.
Mike: Right. “Great to hear from you, Mike. It sounds like you’re doing so great.” I’ll know that’s AI.
Nita: Yeah.
Mike: Thank you so much. Everyone’s fascinated by this topic. You picked a good field. In 10 years when you’re back, are we going to be talking about quantum computing? Is that the next one?
Nita: Hmm, I guess we’ll have to see.
Mike: We’ll find out. We’ll have you back in 10 years—or sooner.
Nita: See you then.
Mike: Thank you so much, Nita.
Nita: All right. Take care.

