In this episode of Status Check with Spivey, Mike Spivey and Anna Hicks-Jaco have a conversation with Sarah Zearfoss (also known as "Dean Z"), who has long led the admissions office at the University of Michigan Law School as Senior Assistant Dean and who hosts the admissions podcast A2Z with Dean Z.
The group discusses using generative AI to write your essays vs. to research admissions advice (including asking ChatGPT a few admissions questions and critiquing its answers), the prospect of law schools using AI to evaluate applications, grade inflation (and how admissions officers saw it before open access to generative AI vs. now), application timing (and how early applications correlate to stronger admit rates without necessarily causing them), and more. Plus, Dean Z introduces a new question being added to Michigan Law's application this upcoming 2025-2026 cycle.
You can listen and subscribe to Status Check with Spivey on Apple Podcasts, Spotify, and YouTube. You can read a full transcript of this episode below.
Full Transcript:
Mike: Welcome to Status Check with Spivey, where we talk about life, law school, law school admissions. Let me stop right there. I am joined by both Dean Z from Michigan Law, their longstanding dean of admissions, whose name many of you likely know, and our firm's president, Anna Hicks-Jaco. This podcast is very much in the admissions space. We start off talking about the use, the pros and cons, and potential future of AI and generative AI in the admissions process, which is, I think, important to listen to. But for a lot of the second half of this podcast, we just all enjoy admissions and the wonky, nuanced nature of it. So it sort of turns into a free-for-all of admissions advice. So enjoy the AI part, enjoy the admissions part, and I hope you enjoy it as much as we did. We always love having Dean Z on the podcast. Without further delay, here's Anna, me, and Dean Z.
So what is good about generative AI, not just in admissions, but overall? Dean Z, it's great that you're adding a question—I'll ruin your big surprise—that uses AI, but I will also double-click on the fact that you're adding one question, and you're certainly not saying, "Do the whole application with generative AI," right?
Dean Z: Correct.
Mike: And you would never do that.
Dean Z: I mean, never say never. This is one of the challenging things about figuring out what to do with generative AI. It's changing a lot, and the world's changing along with it. So, from where I'm sitting right now, I can't imagine that being a good idea, but I never imagined generative AI six years ago either, so.
Mike: As far as good, I think about the first-year associate at a law firm. I picture it in, like, the basement of the building, just tasking away at reading document after document. You probably did this early in your career, Sarah. It's so tedious, and sometimes it's not good for your mental health. You're having to read obscene things about cases, particularly if you're in litigation, that no 24-year-old or 53-year-old like me would ever want to read about, just trying to flag one or two things that are relevant for the case. And I'm sure there's an admissions parallel, and maybe one of you two could give me one, but now you don't have to do that. AI can do that for you. I see that as very good, not just for optimization, but for mental health purposes.
Dean Z: Sure. I mean, I don't think tedious work is anyone's favorite. I have to say, there is some good that comes from doing the tedious work sometimes, though, because you're right, you're looking for just a couple of things, but sometimes you pick up on stuff about the case that you wouldn't otherwise have picked up on. And that is one of my concerns about using generative AI at all. Part of the learning process is often doing the tedious stuff, and you don't internalize it when it's just handed to you in the same way as when you're the one picking through the hay to find a needle, if that makes sense. But yes, in general, for the most part, I think that's a very good use of the tool.
Mike: Okay. Anything to add, Anna, about the good use examples of the tool?
Anna: Yeah, I mean, I don't know about its exact current iteration, but I think we can all sort of pie-in-the-sky imagine a world where everybody is working 30-hour weeks, and AI has taken over, like, the boring, rote parts of our jobs that, you know, maybe we've already learned, maybe we're not right at the beginning of our careers and we don't need to do that right now, but AI is doing that—and then we're freed up to do our hobbies and art and whatever. That's the sort of optimist's vision, I think, of what AI could be. That's not what it is right now, but who can say?
Dean Z: I can't think of an analogous part of the admissions process to doc review.
Mike: I can give you a hypothetical, and tell me if this is good or bad. In general, I'm not going to use an absolute, but in general, I have found that letters of recommendation almost always can only hurt you, because you don't ask enemies for letters of recommendation. So you're asking your biggest allies, and you're getting these fawning, glowing terms, and you're reading what—from my perspective, experience, yours might be different—you're reading what seems to be the same letter of rec over and over and over again. What if AI could flag the bad ones that you, Dean Z, could then home in on more closely?
Dean Z: I see. I thought you were talking about the admissions process from the point of view of the applicant and what might be analogous there. You're talking about the office using AI to process applications and assess them. Here's one of the problems I find with AI. Yes, your example may be true, although I actually think letters of rec can sometimes very much help you. It's rare on both ends. I'd say that maybe 10% are very helpful, 10% are harmful, but putting that to the side. The amount of time it takes me to read a letter of rec and figure out whether it's negative or not is seconds, honestly. So I don't know how much time AI would save me on that task. That's the issue I have with a lot of what people identify as good uses of AI in the moment. So I was talking to a professor who is very enthusiastic about AI, and he's like, you can just answer a whole bunch of really dumb, repetitive emails with AI. And I think, I can do that super-duper fast without AI; I just have been doing it a long time, and I know how to do it. And I tend to feel that the time-saving on much of what AI does right now is minimal, at least if you are already pretty experienced. It's better for people who are new to the whole thing.
Mike: So for the neutral stuff, we can go into how it could be used by an applicant. So this is interesting. I just watched a video. They just had a conference with the world's five leading AI experts. I don't know how they picked that, but the summary of the video talked about the good—they didn't use these words, but the good, the bad, and the ugly. The neutral takeaway from the expert was that AI's progress is not even, to use his words, curvilinear. It is vertical. It is progressing at such an exponential rate—the listeners can't see my hands, but AI this year is a vertical line of progress. So you almost could see a world, I'm not kidding, and this is going to put Anna and me out of business, but hopefully it's 20 years away. Like, here's an example. There's an old book out, "25 Personal Statements that Worked for Harvard." That would be horrible for AI to draw on right now, because they're talking about 25 personal statements from random other people 15, 20 years ago that, for all we know, the office of admissions at Harvard now can't stand, versus back in 2015. That's how AI works now; it's just aggregating things it finds online impartially, neither good nor bad, just aggregating what's been clicked on the most. You could see a world, potentially, where you hook AI up to your head and it's aggregating your life experience. And then maybe generative AI could write the personal statement, the essay.
Anna: That sounds like a Black Mirror episode, Mike.
Dean Z: Right! I was going to say...
Mike: Right. Oh, I’m working on the script.
Dean Z: Crap. I'm going to faint. That sounds horrifying to me, but sure.
Mike: Right. So that would be an example. It hits the neutral part, where AI is learning a lot faster than we're learning, and it's getting a lot better, which is neutral.
Anna: I think that there’s real value to the work of doing that self-reflection and that going through your own experiences and figuring out what they mean to you. I think doing that yourself is incredibly meaningful for an individual. Would you get that same value from plugging your brain into an AI? I think that's too far away a hypothetical for me to even know, but I do think there's real value there in doing that process yourself.
Dean Z: I totally agree, and that makes me want to point out one other thing about when I was saying I can answer emails quickly, and so I don't feel like there's a real advantage to me to using ChatGPT to do it. Certainly, if you do use generative AI for that kind of function, you should recognize—I think it's inevitable—that you will get worse at doing it without it. There's a value to doing that groundwork yourself.
Mike: There are scenarios out there where someone can do the entire application with generative AI and then submit every law school paper with generative AI, which is another reason I'm not in favor of it. I know of maybe one or two schools that say, "Hey, you can use AI throughout the application process." Well, you're incentivizing people to then submit all their papers in three years of law school written by AI. You're literally saying, okay, carte blanche.
Dean Z: No, you're not, because presumably, the professors will say that is not how you are supposed to be doing the work. So if you are doing that in any law school of which I am aware, you are cheating. And if you get caught, that will be a bummer for you. I think it is fair to say that in one context you can do it, but not in all contexts, and people should be able to follow that distinction without too much trouble. So I would just say—I understand your point. You're sort of normalizing it if you make it part of the application process, but I think people should be able to understand that we can use certain tools in some contexts and not in others.
Mike: So you don't know this, Dean Z, but my first job in academia was teaching business ethics at the University of Alabama, and we talked a lot about how cheating starts with rationalization. And a lot of rationalization, in my mind, might be, "Well, if I did the application and no one cared, do I have to listen to this mealy-mouthed faculty member telling me... I just did it successfully." It is a slippery slope, and to what we were previously talking about, if you hit the legal world—whatever it is, public interest, government, a clerkship, a law firm—and you've done as little of your own thinking and writing as possible, boy are you setting yourself up for a world of trouble.
Dean Z: I remember when Google started being a thing, my father-in-law used to really worry that people would just have no memory anymore because you could just Google it, so you wouldn't bother remembering anything. That obviously hasn't completely happened, but it obviously also has happened a little bit.
Mike: How many of your friends' phone numbers do you have memorized versus high school?
Dean Z: Right. Yes. If I can't remember something quickly, I just Google it. I don't do the work of being like, "What's that person's name?" or whatever it is. I just figure it out through Google. So yes, I like to think that when you lose certain skills because of tools like this, you replace them with new skills. Right now, it's not clear what those new skills will be. So I think we should be cautious about using these tools.
Mike: Right. Okay. Here's the ugly, which I find very ugly. And I'll give an example. There was a thread on Reddit about an admissions consulting firm, and someone went to their website and flagged a number of things, but I'll give you one example. "Stanford Law School requires a diversity essay," was one of the things that this blog on this consulting firm's website said. No, they don't. Patently wrong. If you title your essay "Diversity Essay" and submit it to Stanford Law School, I know you don't want to speak for Stanford, but there's a lot of law schools that would just rip that thing up, I'm guessing. If it's titled "Diversity Essay" and the school's not asking for a diversity essay, what would you do with it?
Dean Z: I don't think there's anything you really get rid of that gets submitted. Honestly, people submit things to us that we don't ask for—you know, like their thesis or something—you never throw it away, but it doesn't help you, number one. And number two, there is certainly a certain amount of dinging that happens because you didn't follow the rules. I don't know exactly how any other school assesses someone holistically, but for me, it's sort of like, there are strengths and weaknesses in every application, and you just put them all together and figure out, did this person cross the threshold, or did they not cross the threshold?
Mike: Right. So not following the law school's explicit instructions would be deleterious and harmful. We can all agree on that.
Dean Z: 100%.
Mike: And this firm, to their credit, they came onto Reddit and issued a mea culpa. What they said is, "Thanks for the feedback. We are removing 50% of our written blogs." So at minimum—because I'm guessing it's more—what they're saying is that 50% of their blogs, by their own assessment, were giving out poor information. What they said is they experimented with different versions of AI and…
Dean Z: That’s fascinating.
Mike: Right, 50%.
Dean Z: So that's—I wanted to ask you the question. So now I understand that they explicitly have said that it was AI that caused this problem. But you know, I was going to say, we've been doing this a long time, Spivey, and we know that there's been bad advice pre-AI, and now post-AI, it's just a different form. But yes, that's also a really great point in that using AI, using it badly, making a mistake—it compounds, right? So it's not like one blog post where you made a mistake. This group has 50% of their blog posts that are riddled with errors. And that's a volume of mistakes that you can't make without a machine helping you.
Mike: And the genesis of us frantically—it wasn't frantic; it was just spur of the moment—texting yesterday, "Hey, let's just do this," is that compounding nature.
And this is what I'll end on with the ugly: AI doesn't know good admissions from bad admissions. There are plenty of talented people out there who could produce AI models. The barrier is teaching the AI model what's good and what's bad—we've seen that in lawsuit after lawsuit against lawyers—and it's the same with this admissions advice. As you mentioned, Sarah, before AI, there was also bad advice. The barriers to giving admissions advice are zero. Anyone on the planet can give admissions advice, and we've seen this for the last 15, 20 years of message boards. There's more bad advice than there is good advice. And at times, you want to do literally the opposite of what the person is telling you to do, but they're saying it with such confidence. The problem now is that compounding issue. AI is going to compound all that bad advice, which is 85% of admissions advice online, and say, "This is what you do."
Dean Z: Yes. I think we are slightly inclined as humans to defer to the machine, right? Like you're like, "I don't know. The computer told me, it's got to be right. Computers don't make mistakes," and yet, if they are just basing what they do on humans, of course they are going to make mistakes, right?
Mike: I think about it as, if you think of a jury at face value, AI is the ideal jury because it's impartial. But what if that AI watched a previous jury to learn, and what if there was one jury member who just was confidently talking the entire time and what they were saying was immoral, wrong, whatever? Then AI is a horrible jury. That's how I think of it.
Dean Z: I agree completely.
Mike: Okay. I do have a question from Reddit, if you are so bold.
Dean Z: I am so bold. Let’s do it.
Mike: Okay, so there's an assumption in here, and I don't necessarily agree with the assumption, but it's still a fair question. The poster says, "I just read that NY Mag article that is floating around regarding the use of AI in undergrad. It's got me thinking: if this continues and it appears that everyone has the same high GPA when applying, will GPAs start to matter less in admissions versus things like your LSAT and work experience?" And then the follow-up is, basically, "Yeah, is AI just going to start reading applications because everything's the same? Is AI going to start rendering decisions? And if that's the case, how do we differentiate?"
Dean Z: Certainly, I cannot imagine a world currently where a law school uses AI to render decisions. To do it ethically, you would have to say, "We have now fired the readers in this office, and you will be getting a decision through AI." You couldn't disguise it. You have to alert people that this is what you're doing. Just like we, in my opinion, ethically have to say, "This is how we do what we do"—we have to do that even without using AI. But certainly, if you're using AI, you'd have to say it out loud, and to me, if I were an applicant, I'd be like, "No way do I want to do that. That's crazy. I don't want a machine assessing me. That seems like a waste of my application money, so I'm not going to do it." Which then leads me to think people won't disclose it, which leads me to say, if you're using AI to make decisions, you're probably doing it unethically. That's my view.
Now, to the meat of the question about grades, it's a great question. And you know, I think this question pre-existed AI in the sense of, there's been a huge amount of grade inflation over the 25 years that I've been doing this, and so grades have been serving less of a signaling function just through grade inflation. And with the advent of AI, that problem is compounded. So what do you do with that? Do you just say everybody has to have a 4.0 or I'm not admitting them? If you're not good enough to get a 4.0 under this system, you must not be smart enough to do law school? Or do you say, I just don't care; you could have a 2.0, you could have a 4.0, and I don't care because this is just a nonsense metric at this point? Or do you do something in between? And I don't have an answer to that. This is something I've been worried about for a long time, and I don't have an answer. I have ad hoc decision-making on that. Sometimes I think I'm not worried about those grades because of the surrounding context. Sometimes I think, why would I take that decent but not exceptional GPA when I can have this exceptional GPA? I'll be honest, I feel that I have not grappled with this and come to a conclusion, and I suspect I'm being inconsistent in the way I am sometimes assessing undergraduate performance.
Mike: When I hear "inconsistent," my brain says "holistic."
Dean Z: Yes.
Mike: A word I used to hate. But now you're afforded the ability to be more holistic.
Dean Z: And to me, this is… it's not a formula. And I have always valued what used to be called "soft factors." That terminology doesn't come up much anymore either, but I mean the rest of the person, beyond the metrics. Like, I care about the metrics because I don't want to ever admit somebody who can't do the work, who's going to be struggling and miserable. Our medians at Michigan—I could get higher medians, but it would come at the expense of other things that we care about. And I really value that aspect of the way Michigan as an institution allows the admissions office to make decisions. But this grading question is a really—it's a thorny one.
Anna: It's interesting; if you ask ChatGPT about various admissions questions—if you ask it sort of the basic ones, I think it gives pretty serviceable answers in most cases. But I think you can run into trouble when you ask questions that have been subject to some misinformation in the past, or people have complicated opinions about it that don't necessarily align. Let me just give the example. Yesterday, I asked ChatGPT a few questions, some of which I thought were tricky, some of which I thought were not so tricky. This one I didn't think was so tricky, but it gave me exactly—I won't put any characterizations on it. Dean Z, I'll be curious to hear your response. I asked, "For law school admissions, is it better to have a 3.0 GPA in engineering or a 4.0 GPA in communications?" It answered, "A 3.0 GPA in engineering is likely better for law school admissions as it reflects the ability to succeed in a challenging technical field, which is viewed positively by admissions committees." Thoughts, Dean Z?
Dean Z: So that's fascinating because, I mean, in some ways that's true; that would be a good way to look at a 3.0 in engineering, right?
Anna: You can see where it got that, right?
Dean Z: Yes, exactly, right. And I know admissions officers will say that when they're talking to people. Someone says, "I have a 3.0 in engineering," and you want to be encouraging. And also, you know, in many ways, it's the "right" answer—I'm putting that in air quotes—in the sense that the optimal way to assess GPAs is to look at what's behind them. But we know that schools care about their median GPA, and if you are looking at ten people with a 4.0 in a less challenging major, less challenging school, or whatever it is, and then ten people with a 3.0, you know, in a great major at a challenging school, earned ten years ago, who have done other things since, you're not going to admit all ten of those 3.0s, even if you think they're actually the ten stronger candidates, because you're thinking about your median. Now, you might not admit the ten 4.0s either because, if you've got lots of choices, you'll go for the 3.9 in—I don't know.
Mike: It is a very fair answer. I thought you might be a little bit more tilted toward what you mentioned—you know, a lot of admissions officers will tell you, "Yeah, get the 3.0 in the challenging thing." But I think what you did a great job of saying is, in theory, yes, but in reality, 8.5 of those 10 admits, or at minimum 6 if they have a strong LSAT, are going to be at the high GPA.
Obviously, you and I and Anna can think of the 3.0 orchestra member from a service academy who's going to elevate really high in the application process.
Dean Z: Right.
Mike: But that's an outlier. What I tell family friends of mine whose kids are applying to law school, are going to be applying to law school, "What should they major in?" I say, "Well, number one, major in what you're passionate about, because you're going to have a higher GPA if you care about doing the work. But number two, have the higher GPA."
Dean Z: That's—so there's this great story—I don't know if it's apocryphal; it might be true—that I remember hearing when I was first in admissions, about the Dean of Admissions for Duke undergrad. He's asked, is it better to get a 3.0 in engineering or a 4.0 in communications, or something, and he says, "It's better to get a 4.0 in engineering." And the point being, ideally, we're looking for people who are going to succeed in the most challenging arenas. The point of this, though, is that GPA is just one tool for us to get at how academically talented a given applicant is. And I think what we're just talking about is that the undergraduate record is becoming less and less of a strong signal over time because of grade inflation, and the work that it reflects is not actually that person's work, but the work of ChatGPT or whatever.
Mike: Yeah, no. Agreed. I have said multiple times publicly, I'm now more in favor of standardized testing than GPA. 25 years ago, when I started this thing, I wasn't, because standardized tests can't measure motivation, and 25 years ago, GPA was a great measurement for motivation. Thoughts on this, Anna?
Anna: GPAs are tricky. I do think that it has to be holistic. There are so many factors that go into and contextualize a given GPA, and that makes one 3.5 completely different from another 3.5, and you can't ignore that context. You have to look at it holistically. I will say, if you're looking at tools like ChatGPT in terms of researching factual things like law school admissions versus drafting and generating text that you are going to be submitting, I actually think ChatGPT can be a great research tool—you just have to always fact-check absolutely everything before you use it. Really, that's when I use ChatGPT—when I'm trying to research something and, for whatever reason, the words I'm using on Google just aren't pulling up what I need, so I can give some long-winded explanation to ChatGPT, and it sometimes knows what I'm talking about.
So I don't think that this is inherently a bad usage of ChatGPT whatsoever, but I do want to highlight that it can give you some answers that are very misleading in some cases. So here's another question that I asked ChatGPT: "Is it advantageous to apply to law school in most cases the day their applications open versus one month later?" And it answered, "Yes, in most cases, it's advantageous to apply early, ideally within the first few weeks of the application opening."
Dean Z: Oh my gosh. Alright, well, that's great because that is just wrong. That's just flat wrong.
Anna: I thought that was an interesting one.
Mike: And the market is going to love that answer.
Dean Z: It's interesting because, of course, it is true that you're better off applying earlier than later. And so it took that general sort of nugget of information and turned it into, "Apply as early as possible." Not true.
Anna: It loses the nuance.
Dean Z: There's no nuance. That's absolutely a problem with it right now. The problem with the GPA question, too. No nuance.
Mike: Of course, if your best application is ready by September—
Dean Z: Oh, sure. I mean, you might as well.
Mike: Of course, it's not going to harm you. Well, it actually may harm you—a lot of schools read by strength, so if your application sits at the midpoint of a school's pool, it might psychologically harm you for your best application, submitted September 1st, to sit there for eight months with no decision rendered—but let's set that aside. It's not going to harm you. But if you can do better on the LSAT and you're near certain of that because of diagnostic tests, or if your application is rushed and hasn't been read over by a second person, don't rush the thing in just because a couple of people have said, "Apply by September or not at all." That is patently horrible advice.
Dean Z: Absolutely. And, I mean, just on a strict measure of what Anna asked and what it answered, is it better to apply three weeks earlier than not? That's just wrong. There is no advantage versus October 1st. None. I mean, I can say that with confidence for every school in America. Now, at some point, I start getting less confident, you know, about when it makes a difference. I know what might make a difference at Michigan versus not, but I imagine every school approaches it slightly differently. But that I can be very confident about. If I say it enough times on this podcast, will ChatGPT get the message, do you think?
Anna: Maybe. We do do transcripts.
Mike: We do have transcripts. When would be the rough cutoff point? And I get that every cycle's different because the data's different. But for Michigan?
Dean Z: I always say before Christmas, and that's because at Christmas, almost all law schools shut down for a week to give people a break. We get a ton of applications between Christmas and New Year's, so if you get caught in that bottleneck, it really slows down the processing and the reading and the decision-making on your application. So I say get it in before that, maybe even a week before that, just to avoid that fate. But at Michigan, other than that, I would say no difference. And also, even with that, I would say a competent admissions office does a pretty good job of spacing out how they read and how they make offers, and that allows for the possibility that you are going to get a stronger pool at the end than at the beginning. And you don't want to lose those people. This was a problem for me in my first couple of years, and it just wasn't a problem after that, once I got into the rhythm of it.
Mike: You slowed down, just like I did. The biggest myth in admissions used to be, "Take the LSAT once because schools average them," which, many, many years ago, was true. That myth is gone. The singular worst myth online today is "Schools want you to apply in September, and you get a boost," and there's false-positive, self-submitted data where people can look at a tiny fraction of admits and say, "Yeah, there's data that supports Michigan favoring you in September." No—the people with the polished, buttoned-up applications, who only had to take the LSAT once because they got a 178, are also applying to Michigan September 1 because they don't have to retake it. That's where the confounding variables come in.
Dean Z: Yes, and this is something I always talk to my team about, that the applications that come in toward the end of the season, we don't make offers at the same rate, not because we are full and we can't do it. It is because the modal application is weaker towards the end, in a variety of ways. So it's not the timing, it's the substance of the application—but we have now ventured far afield from AI.
Anna: Mike, maybe we should give our overall Spivey Consulting advice. We have our University of Michigan Law School advice, which comes from exactly the person who would know the very most. In terms of our broader advice, though, Mike, I would say that we do tend to encourage people to be a little bit earlier than that pre-Christmas mark, especially when it's a more difficult or less predictable cycle. Like, Mike, you were just posting about how, ideally, before Thanksgiving would be sort of the #1 ideal situation. Now, of course, again, if you're retaking the LSAT, if there's going to be some big advantage to waiting until after that, there are lots of situations where that does not apply, but all else equal, we do generally encourage people to submit their applications prior to Thanksgiving.
Dean Z: I'll just say, that's not much different than saying, like, a week before Christmas; we're talking about a difference of three weeks or something.
Anna: True!
Dean Z: And I think what you're saying is very safe advice because, again, this is Michigan, and all schools are slightly different.
Mike: And I actually don't think we've ventured far off, because this is what I hope to be doing for the next 20 years of my life, which is the genesis of this firm: correcting bad advice out there. The advice that AI gave was so misleading.
Anna: I have one more fun one that, frankly, I don't even know how this would directly impact someone's application cycle, although it could; Mike, we've had lots of conversations—but Dean Z, I think you'll find this one funny. So I asked ChatGPT, "Are law school applications generally initially evaluated in a group setting (admissions committee)?” ChatGPT says, "Yes. Law school applications are typically evaluated by an admissions committee rather than an individual. The committee often includes admissions officers, faculty members, and sometimes current students who review your application materials (they list some application materials) to make decisions. The process is usually collaborative, with each committee member contributing their perspective on the applicant." Isn't that fun?
Dean Z: It's so interesting, because it's like, if you actually know how things work, you can figure out how it got this wrong answer. Yes, there are admissions committees. Yes, they're composed of the people that it cites. No, that doesn't mean you're reading and making decisions together. Everybody's doing it independently, and—in the law school world, I know of not one single law school that proceeds that way. So it's fascinating. It's so close to being correct, and yet so wrong.
Anna: Yup. I mean, I think that theme of losing nuance, and sometimes a great deal of nuance, is a common one among the answers that I saw.
Mike: Anna has heard me say this a thousand times, but when I applied to colleges as an 18-year-old or whatever, I honestly thought that at every college I applied to, there was going to be a fancy conference room with leather chairs, and they'd have their silver coffee platters, and everyone's in a suit and tie or business suit or dressed professionally, and they're heatedly debating Mike Spivey. This is literally what I thought. "This kid does these three things, but he didn't do these three things." That committee doesn't exist. No one has time.
Dean Z: You know, a million years ago when I applied to college, I do think that was more common than it is now, right? Because there was, I think, more time, there were fewer people applying to college, right? And it was less pressure. And I think there was more—but even in that setting, everybody had done the work of reading the application alone and making their decision alone. And then there's some horse-trading about individual applicants. It's not the way you make the decision about the entire pool. That's an insane concept. You'd admit, like, four people.
Mike: Right, right, right, right.
Anna: I have a few other quick ones that I'll just run through really quickly. So here's one where, it's partially outdated information by a couple of years, and partially outdated information for some law schools by more than a couple of years. I asked it, "Are law school admissions offices able to see my race or ethnicity in my application, or is it redacted?" It said, "No, it is not redacted. Law school admissions offices see your race or ethnicity. If you check the box in LSAC, Law School Admissions Council, it is not redacted in any standard part of the application review."
Dean Z: Can I chime in just to say, I think you are correct that almost all law schools—I don't think all, but almost all—do redact that information. Michigan does, and we've done that since 2007. We see nothing about what your answer on race is on the application. But I just want to say, as a lawyer, that is not required by SFFA. Redacting says, "Look, I'm doing what I can to take race out of this equation." I personally think that's the safe way to do it. But the Supreme Court has not said, "You can't know race."
Anna: Right.
Dean Z: They say you can't use it as a factor. That's my own little soapbox.
Anna: That's a good clarification.
Another one I asked it: I asked for the median LSAT score of applicants "admitted" to the University of Michigan Law School, and it parroted that framing right back to me. "The median LSAT score of admitted students to the University of Michigan Law School is 171." Okay. Losing that nuance. That's a different question than the matriculated students' median.
Dean Z: 100%. And it's interesting, I actually don't know that. I mean, I have that info.
Anna: Yeah, you could calculate it, sure.
Dean Z: And we generate it, but I never internalize it, so I don't even—you know, it's like, what do I care, right? But it is also a mistake that humans make. So this is a plea for expertise. Maybe ChatGPT is no less informed or less open to nuance than your average human.
Anna: That's fair. That's where the problem comes from. Yeah.
Mike: You're getting it from the humans. But to your point, Sarah, I remember early in my career, probably before you and I had ever met, standing next to a school that was a strong competitor… I was at Vanderbilt; they were probably our number one competitor. And it wasn't this person's fault. They were brand new to admissions. Someone had given them the LSAT median of their admitted pool. And this person at forums and fairs was saying, "This is our median LSAT." And I knew it wasn't, because I had the data.
Dean Z: Oh.
Mike: It put me in an awkward position, like, do I correct this new person at another school? And I obviously opted not to because it's kind of obnoxious. But they were literally saying their LSAT median was not their LSAT median.
Dean Z: They were saying something higher than it was.
Mike: It's always going to be higher. If I had to guess, and you could figure this out, yours is probably at 172.
Dean Z: I'm going to guess, too. Yes, that is almost certainly right. And the same with GPA. Of course it's a little bit higher than what ends up being the median of enrollees.
Anna: So, a number of law schools do ban the use of generative AI in drafting application materials. The University of Michigan Law School is one of those law schools, correct?
Dean Z: That's right. So we took the step. The first year ChatGPT became a thing, I spent a lot of time doing research and thinking about it, and so we added language that you have to sign and affirm. The language says you cannot use generative AI to create your essays. Something like that, right? And a lot of schools would say, if you ask them, yes, don't use generative AI, but they don't actually make it explicit, which is a little—I'm not sure why anybody does that. They seem nervous about making it explicit, but I think about this all the time because everything is always changing. And so for the next admission season, I worked with a professor at the law school named Patrick Barry, who is an AI expert, and we bounced around some ideas, and we are going to include as an optional essay one that is on an AI topic and that—if you choose it—you have to use AI to write. It will be just a standalone optional; we still have nine other choices, so nobody has to do this at all. But AI is a tool that is becoming increasingly used in the practice of law, and so I want to see: what are your AI skills? What I don't want to see is essays where I don't know whether you're using AI and trying to assess those. So I want to have your personal statement free of AI, but if you answer this optional essay, I'll be able to assess not just your writing, but your ability to use AI productively.
Anna: That's fascinating. Have you fully drafted out this question? Do you have the guardrails in place? I’m very curious about this and how you envision it working.
Dean Z: I'll tell you, we thought about questions, just like general questions, like why do you want to go to law school? Something like that. Or, one I thought of is, what do you think is the appropriate way to interact with AI? Should you treat AI like a human? I thought that might be fun. We ended up with this. "How much do you use generative AI tools such as ChatGPT right now? What's your prediction for how much you'll use them by the time you graduate from law school?" I'll be altering our application, updating it in a couple of weeks, and that will be our new AI question.
Anna: So applicants are to use tools like ChatGPT to draft their response to this question. Presumably, then, they can use it in whatever capacity they want?
Dean Z: Correct. I think from a reader's perspective, I thought it would be better to have one topic where you're using AI as opposed to saying you can use it on one essay. I felt like it'll be hard for us to switch back and forth in our assessments that way. But this sort of corrals it into one particular space.
Anna: That is fascinating.
Mike: I bet you other schools follow suit rapidly, too.
Dean Z: Interesting. Yeah. Although, as I say, other schools have not followed suit in making explicit what their expectations are for your use of AI. I mean, you must see that, right? I'm not wrong about that.
Mike: So on a podcast roughly a year ago, I predicted that by year two, almost every law school would have specific AI instructions, and the vast majority would say, just like C&F, "I certify I did not use generative AI to produce these essays."
Anna: There are quite a few that do explicitly ban it. And then, as—Mike, as you said—there are a couple that specifically allow it. And then, yeah, there's this big group in the middle that doesn't say anything at all, even though we know for a fact that it is certainly on the radars and being discussed by every single one of these law schools.
Dean Z: And also, if you ask them individually, you go to an event, you go to their table, they will say no, don't do it.
Anna: Yeah.
Dean Z: I don’t know why they don't say it.
Mike: I'm going to try to be favorable to admissions officers, because I had this pressure on me for part of my career. One of your mandates from your boss in admissions—you have a little more flexibility, Dean Z, than a lot of people—but one of your mandates is, "Increase our applicant pool." So why would you put up something that is going to drive people away? That's my best guess.
Dean Z: You're probably right. And this comes up for me, like, you don't want to discourage people, because whenever you're asked a question like, "I have a 3.0; would you ever admit me?" The answer is yes, maybe. You don't want to discourage people, so I guess your point is, this would just be discouraging.
Mike: Yeah. Admissions officers have it tough, I think, because people with 2.6s get admitted. The best way to answer it, which I'm sure you do, is, "It's not going to help you to have a 3.0 relative to the huge amount of applicants I have with a 3.9 and a 4.0. I admit people with 3.0s, so I'm not going to discourage you from applying to Michigan. I just want to be transparent, that part of your application, in a vacuum, is not going to help.”
Dean Z: And too often, I think admissions officers just leave it at, "No, that's fine." When you're communicating with strangers and you have three minutes to have a fruitful discussion, sometimes it's hard to put in all the cautionary language that would be appropriate.
Mike: If you're at the New York City Forum and you see a line of 50 people waiting, it's understandable that an admissions officer would say that, "Go ahead and apply."
Dean Z: Early on in my career, I heard another story about a law school that I will not name that was very metric-focused. And the story I heard was that someone came to their table and was talking to the dean of admissions, and the dean of admissions is handing that person materials as they're talking, and then the person says whatever their numbers are... and the dean of admissions just reaches over and drags the materials back. Like, I'm not going to waste my viewbook on you. It was just such a horrifying story to me, like, that's just so mean and so—
Mike: It’s so wrong.
Dean Z: It’s so wrong.
Mike: Can you tell me offline who it was?
Dean Z: Yes, I will.
Mike: My favorite story from, it was from the New York City Forum. Some kid shows up in a trench coat, maybe it's raining, and he opens it up, and he has a bunch of watches. He wasn't applying to law school; he wanted to sell watches.
Dean Z: I got all my watches at the New York City Forum. Is that wrong?
Mike: No, that's, I think you found the right source. A young entrepreneur.
Anna: One thing that I wanted to note, especially as we're talking about law schools banning AI, is that at Spivey Consulting, we had a big group discussion with our entire team, and we added to our contracts—which are very short; they're like one to two pages—that if you are working with us, you cannot use generative artificial intelligence to draft your essays. And that's for two reasons. One is that you are probably going to apply to at least one law school that makes you certify that you did not, and we are never, ever, in any circumstance, of course, going to recommend someone lie to a law school or sign a certification that is not true. And then second, just substantively, in all of the experience that we have had, drafting essays with generative artificial intelligence leads to a worse essay than if you had written it yourself.
Dean Z: Ding, ding, ding! We actually hadn't touched on this, like why do I ban it? And partly it's because I don't think it's doing people favors in most cases, which is another reason why we have corralled our one AI essay into a particular space.
Anna: Yeah.
Dean Z: You are so right. If I thought it was really helping applicants, I might have a harder time deciding what to do, but it's just not.
Mike: It's a great note to end on, because again, the word "generative"—Anna and I keep using that word. My analogy would be, we've had artificial intelligence for a long, long time. Spell-check, Word, whatever.
Dean Z: Grammarly, right.
Mike: So if I'm painting a picture, it would be like a friend coming in and saying, "Hey, it's a beautiful picture you're painting, but the colors are outside the lines you drew, Spivey, so fix that." That's what we've had in the past. Generative AI, I go hang out in the other room and have a coffee, and it just paints a picture, and I take credit for it. And that's not my creative work.
Dean Z: Right. Excellent discussion, you guys.
Mike: It has been fun. I think we covered a lot and more than we expected.
Anna: Yeah, thank you so much for joining us, Dean Z.
Dean Z: Oh, my pleasure.