So to Speak podcast transcript: Tech check – AI moratorium, Character AI lawsuit, FTC, Digital Services Act, and FSC v. Paxton

Note: This is an unedited rush transcript. Please check any quotations against the audio recording.
Corbin Barthold: It's the goal of the professional managerial class to take control of the models. They wanna have audits where all of the model development is overseen and basically bring it to the bad old days of Claude where if I ask it, "What are the 10 key works of the Western Canon?" Today, that's subjective, but here, you know, Dante and Odysseus. I do remember it wasn't that long ago where it said, "That's a problematic question."
Recording: Somewhere I read of the freedom of speech.
Nico Perrino: You're listening to So To Speak, the free speech podcast brought to you by FIRE, the Foundation for Individual Rights and Expression. All right, folks. Welcome back to So To Speak, the free speech podcast where every other week we take an uncensored look at the world of free expression through the law, philosophy, and stories that define your right to free speech. I'm your host, Nico Perrino. Today we're going to do a tech check. That is, we're going to check in on the latest news in tech and free speech. And to do that we are joined by my colleague, Ari Cohn. Ari, welcome back onto the show. Fourth time now, Sam has on my notes here.
Ari Cohn: Oh, wow. You know, it feels like it was just yesterday it was the first time back. You know?
Nico Perrino: Well, we're gonna give you a tie. We have these new ties that we're giving out to our guests or scarves, depending on if they're female. You haven't earned the Rolex yet.
Ari Cohn: Okay, well ...
Nico Perrino: I don't think anyone's earned the Rolex yet?
Ari Cohn: Can I at least get the tie and the scarf?
Nico Perrino: We'll see. You're gonna have to talk to Sam about that. But Ari, you're the lead counsel for tech policy at FIRE. You do all things tech.
Ari Cohn: What touches tech, touches me. That just actually came out real bad.
Nico Perrino: It reminded me of a Darkness song. You remember that band, The Darkness?
Ari Cohn: I do. I love The Darkness.
Nico Perrino: I love the Darkness, too. They were kind of –
Ari Cohn: I figured you might.
Nico Perrino: This old school '70s style rock band.
Ari Cohn: Like actual old school like –
Corbin Barthold: I believe in a thing called love.
Nico Perrino: Yes.
Ari Cohn: Is this the first time you've had a singer on –
Corbin Barthold: I'm going for copyright infringement in the first two minutes of the podcast.
Nico Perrino: And that beautiful voice, listeners, that youâre hearing is Corbin Barthold. He is the internet policy counsel at Tech Freedom. You also host a podcast.
Corbin Barthold: The Tech Policy Podcast. And Ari is my lost love, my 15-time guest. He used to be at Tech Freedom.
Ari Cohn: Was it really 15 times?
Corbin Barthold: I mean you're still invited back.
Nico Perrino: Are we stomping on your territory here, Corbin, with this podcast?
Corbin Barthold: Yeah, you took my Questlove from my show.
Nico Perrino: Okay. I don't understand that reference.
Corbin Barthold: Jimmy Fallon, you know, your righthand man.
Nico Perrino: I don't know who Jimmy Fallon is. I'm joking. I'm joking. I'm joking. I'm joking.
Ari Cohn: I can believe you donât know who Questlove is, but Jimmy Fallon is a bridge too far for even me.
Nico Perrino: All right, let's jump right in. On Thursday, May 22nd, the full House narrowly passed the One Big Beautiful Bill Act 215 to 214, one vote margin there. And in it they had a provision halting state and local AI enforcement for a decade. But last Tuesday, on July 1st, the Senate struck the state moratorium from the bill by a vote of 99 to 1.
The House subsequently voted in favor of the Senate's changes to the bill. And on Friday, July 4th, the President signed the Big Beautiful Bill into law. Corbin, on X last Tuesday, you said, "I hope Texas and Florida stand ready to counter every California and New York measure on AI equity, antiracism, decolonize the algo, et cetera, with a colorblind, et cetera, mandate."
You said, "The worst-case scenario is that the left mandates 'woke scold' AI. The right concludes AI is left coded and bad. The right mandates AI slowdown. The left Bluesky crowd mandates AI slowdown. Vicious downward spiral. The future is dumb." There's a lot there in that tweet, Corbin. My assumption is you don't like that they didn't have this moratorium in the bill.
Corbin Barthold: Well, my first lesson is just never tweet ever. Okay, got it.
Nico Perrino: Because you have to assume you're gonna come on this podcast, and I'm gonna try and read it out loud and just butcher it.
Corbin Barthold: Yeah. Well, so, first of all, I didn't realize that if Marjorie Taylor Greene had just read the bill, it would have gone down in the House. So, it was not ideal. I don't think it was well crafted. Ten years was kind of insane. In 10 years, I plan to have an AI girlfriend. Things are gonna really develop between now and then.
Ari Cohn: How's your wife feeling about that?
Corbin Barthold: We'll have to talk.
Nico Perrino: But is it really insane, though, Corbin? Because you have Section 230 which doesn't have a sunset provision, and it does some of the same stuff.
Corbin Barthold: I think AI is actually moving fast enough, but 10 years is pretty wild. The impact it's gonna have on culture, I was fine with the notion of scaling back the amount of time. And they could have done a more careful job about talking about what it applied to.
But in principle, talking about keeping states out, keeping their grubby fingers off of especially AI outputs and AI model training, it was definitely directionally correct, because what we're going to have now ... If you liked the culture war that we have been fighting over social media, wait until you see the culture war that we're gonna have over AI. There is no unbiased, Edenic LLM out there. And the fight is actually going to be to push it in the direction of ruthlessly diverse Gemini. Do we all remember that?
Nico Perrino: Yes, I remember that where you had Black George Washington and ... yeah.
Corbin Barthold: The end goal is not some kind of neutrality. The postmodern critical race theory left thinks that that's not a thing that exists or can be achieved. So, we're gonna get all kinds of algorithmic justice measures that are in fact trying to turn LLMs into roughly what college admissions are today, where there's a thumb heavily on the scale of outputs. And right now, already, David Rozado had a study. If you are having a job application put in, and the resume's being fed into one of these LLMs, his advice to you is have a female name and use your pronouns.
This notion that it is all slanted in some way that is racist in the traditional sense is actually wrong. It's biased in a lot of different directions. There's gonna be a lot of yelling about this. If we start to try to put government controls on what these LLMs do and the outputs, it's gonna be very ugly.
And I am not excited about the notion of Texas getting involved in this. I think that they would have equally stupid common carrier requirements or whatever. But I will say, if the federal government is not going to step in and preempt this kind of stuff, the depressing best-case scenario is at least that maybe states on both sides get involved, fight it out, and maybe Neil Gorsuch realizes that the dormant commerce clause wasn't so bad after all.
Nico Perrino: Ari, I have a question for you. But one more point on this, Corbin. So, private companies like Google, they can create the LLMs with whatever algorithm or bias they want. They can have "woke AI" if they want.
Corbin Barthold: For sure.
Nico Perrino: But what you're saying is that by not preempting state legislation on some of this stuff, you're gonna have the government mandate in some cases, presumably, a woke AI. And we might see it on the basis of some sort of prevention of algorithmic discrimination, for example.
Corbin Barthold: Yeah. One way that this is gonna play out is there's a big push to impose disparate impact liability on LLMs. And nobody wants discrimination. You're pretty nuts if you're opposing Title VII of the Civil Rights Act. But disparate treatment is the normal benchmark across most discrimination laws, meaning if you hire someone based on their sex or their race over someone else, now, that's illegal. And that's illegal if you do it with an LLM. Putting AI in the mix doesn't change that.
Disparate impact, we're suddenly in the world where any statistical difference between demographic groups and an outcome, immediately, the maker of the product that reflects that disparate impact is on the back foot, is facing liability unless they can prove their innocence, which ... you know.
Demographic differences exist across basically everything you care to measure in society because people make different choices. They have different cultures. They have different values. It's not surprising that there are different outcomes in certain areas. Placing liability on AI for reflections of society that they are just putting back out into the world is gonna be very, very ugly. It will be the plaintiffs' lawyers' field day.
Ari Cohn: Oh, yeah. And where that all leads to is something that Greg Lukianoff has pointed out, and we've pointed out in some of our writings. It's when you get to that point, the incentive for the LLMs, the developers, is to make sure that nothing their AI spits out could be remotely conceived of as causing some kind of discriminatory effect, which means the LLMs then present a skewed version of reality because nothing – We don't wanna include, say, any crime statistics that might point to, say, different stats between different segments of populations or something like that, whatever you wanna choose.
And then when the LLM is asked to, say, make connections between different datapoints on different things to try and figure out new things about society we might not have known before, well, then what happens is the LLM makes those connections based on flawed, faulty information. So, we have this new information built on flawed information, which is then fed back into the system and creates yet another level of bad information based on flawed understandings of how the world around us is.
Nico Perrino: They call that model collapse. Right?
Ari Cohn: Yeah.
Corbin Barthold: It's the goal of the professional managerial class to take control of the models. They wanna have audits where all of the model development is overseen and basically bring it to the bad old days of Claude where if I ask it, "What are the 10 key works of the Western Canon?" Today, it says, "That's subjective. But here, Dante and Odysseus ..." And of course, it is subjective. So, that's okay. I do remember it wasn't that long ago where it said, "That's a problematic question."
Nico Perrino: Was that actually the case?
Corbin Barthold: Oh, yeah, yeah, yeah. When Claude was super safety obsessed, this was the kind of stuff it would say before it realized that nobody wanted that. And no, just answer my damn question. And so, I don't think it's hyperbole to say that that's the goal. It's to decolonize the algorithm. And I don't think GOP states are gonna take that lying down. So, again, to repeat, they will do equally stupid things, and that's why we cannot have nice things.
Nico Perrino: Like they tried to do with the laws in Texas and Florida, for example.
Corbin Barthold: Exactly, but with AI.
Nico Perrino: With policing of social media or preventing content moderation in social media. Why would the House of Representatives and presumably Trump's AI team wanna pass this moratorium? I read somewhere that there's something like 1,000-plus state-based AI regulations that are going through legislatures at this moment. Is the effort to try and preempt this patchwork approach to regulating AI?
Ari Cohn: So, I think that is a large part of it. There's this fear that when there are, say, 50 states, and if there are 50 different state regulations on what the AI can do, the incentive for companies is not to comply with 50 states' laws. It is to comply with the most restrictive state's law, if that's possible, to cover all bases, minimize compliance costs, and things like that. And then you have California or New York setting AI policy for the entire country, which is, you know.
Nico Perrino: Kind of like California does with its emission policies for cars. Right?
Ari Cohn: Yeah.
Nico Perrino: It's a big enough market that if it sets a policy, companies tend to abide that as the denominator or numerator – I don't know which one – that they need to –
Ari Cohn: Don't ask me about fractions, Nico.
Corbin Barthold: I was told there'd be no math.
Nico Perrino: You do not want me doing math. You don't want me going anywhere near math. My ACT score from high school is evidence enough of that. You had Adam Thierer, our friend from R Street, say that this is a devastating blow for little tech AI players in the United States, as only large technology companies will be able to comply with the enormously confusing, costly compliance burdens. So, if you're a small AI, startup AI company, you're gonna hire a litany of lawyers to try and ensure you're in compliance with all these different states and perhaps have different model outputs based on the requirements of those states, which would in essence be different models.
Ari Cohn: And it would cost a ton of money, and it would leave the field basically at the mercy of the biggest players who can afford that kind of thing, or you have the tiny players who then can comply with just the most restrictive law to cover all of it. Then they offer basically an inferior product because, again, they're offering the California model to everyone instead of, say, the California model, the Texas model. But I don't think we want that generally. And I think more fundamentally, this points to an interesting kind of societal rift that Corbin and I were just talking about between the tech right and the right-right.
Corbin Barthold: So, I think the little tech point is real, but I do also think there's a certain strategic pivot by some to that, because we're all waking up to the fact that the GOP's pathological disdain for big tech actually goes farther than some of us thought. And I thought it went pretty far, this realization that if Google is a leading AI firm, and they are key for us beating China in AI, that is something that the Steve Bannons of the world are actually willing to throw under the bus because they just hate that damn woke corporation that much.
And they have so convinced themselves on this mostly fake social media censorship narrative that they got themselves twisted in a knot over. They're letting that spread and pollute basically every tech topic that they run into. And that actually, this divorce we're seeing between the tech right and the populist right now – and it's not just Elon Musk falling out with Trump. I think, actually, we're gonna have a lot of battles coming down the line from screening the genetics of babies to cloud seeding and adjusting the weather to AI. Actually, there might be a much bigger fallout in store here, and AI is just the beginning.
Nico Perrino: I can't leave your comment that there's mostly fake conservative censorship outrage relating to social media. Why do you say that?
Corbin Barthold: Oh, man, I'm in the FIRE office, aren't I? I better be careful.
Nico Perrino: Well, you acknowledged, right, that if you give these states, if you give some of these companies free rein, they're gonna produce woke AI potentially. And I think we saw some social media companies put in place social media content moderation policies that wanted to produce feeds that were "woke." And now you might not call that censorship in the traditional sense, that is government censorship.
Although, you have Murthy v. Missouri where there's suggestions that the government put pressure on these social media companies to censor. But I think most Americans felt that when their posts criticizing trans athletes in sports were taken down, that that was a form of censorship. Albeit private censorship, it was a form of censorship that they experienced more on a day-to-day basis than government censorship.
Corbin Barthold: I certainly think decisions were made in content moderation that people had perfectly good reason to get upset about. Attempts to control the narrative during COVID-19, I think actually a lot of the people who complain about content moderation are on a pretty strong footing with some of that. But to work our way backwards starting with the Gemini AI, it was a scandal. Right? The ruthlessly diverse Gemini, they changed it. They didn't say, "This is what you're getting. Get used to it." And ditto with the content moderation.
The coup occurred on Twitter. It's been turned into X. So, we can work our way back up. The market worked. And even before the market worked, I think the media landscape was much more fractured than that GOP narrative gives credit for. A lot of conservative voices did very well on Facebook. The whole thing about the social media landscape is it was this fracturing from the old world of broadcast television leading into cable television, leading into social media. Now we're leading into AI. Actually, there's never been more outlets to speak and to have your voice heard.
So, I'm not saying it was nothing, and I'm not gonna lazily cite there were, you know, "Oh, there's studies that showed that conservatives did well on social media through all the platforms." Well, they needed to because they were at a disadvantage on a lot of the traditional media outlets until five minutes ago. Brendan Carr is still upset about this. They didn't start with nothing. It just took a molehill and turned it into a mountain. And now it's broken their brains, and they're taking that gripe and breaking yet further things because they just can't take the W and move on.
Nico Perrino: They're breaking further things, in this case, you're saying –
Corbin Barthold: Like AI.
Nico Perrino: AI regulation insofar as they think that you have these big companies in play. We don't wanna stop states from regulating them. So, we can't have this AI moratorium.
Corbin Barthold: It's more important to stick it to Google than to have leading AI models be made in the United States.
Nico Perrino: Ari, you and I have talked about this before on this podcast. And honestly, I don't know what FIRE's position was, if we had one on this AI moratorium. I know you were skeptical of federal preemption months ago. Where's your head on that now? I know the disagreement you and I had is that I'm a big believer in Section 230. I know you are as well. And I think that preemption created the internet more or less.
Ari Cohn: Yeah. So, I don't know if I was necessarily entirely skeptical of preemption. And to be honest, this –
Nico Perrino: You just worried that the federal government could screw it up more. In this case, it would just be a moratorium. They weren't doing anything.
Ari Cohn: Yeah. So, right. I think my skepticism was more like: Do I trust this Congress to write AI regulations that are particularly sane? Probably not. And it's somewhat unusual for there to be preemption without some kind of regulatory structure in place, like a bill that just says states shall not regulate, but the federal government also isn't going to regulate.
Nico Perrino: Nobody does anything right now.
Ari Cohn: Yeah, right, exactly. So, yeah, I worry that I think, because, like you, I think Section 230 was so important. But because there's so much consternation about it, it's become this weird boogeyman for certain ... I wouldn't say just the right thing. It's become a boogeyman for the right and the left. That we're going to take the lessons that were good about Section 230 and say those were actually bad lessons, and do the reverse for AI, which would cause a whole bunch of issues. So, that's kind of where my –
Nico Perrino: For our listeners who don't recall Section 230, part of the Communications Decency Act, it gives the social media companies and other internet companies –
Ari Cohn: Immunity.
Nico Perrino: Immunity, so to speak, if they moderate content or if they don't moderate content. So, let's pivot here, because I imagine we have some of our listeners right now who are asking: What does this have to do with free speech? What does artificial intelligence have to do with free speech? And I think that our next topic of conversation will help clarify that question that some of our listeners might have. So, I wanna talk about a lawsuit that was filed in Florida. Last October, Megan Garcia filed a wrongful death and negligence lawsuit against Character Technologies in a federal district court in Orlando.
Ms. Garcia is the mother of a 14-year-old who committed suicide after forming an emotional attachment to a Character AI chatbot portraying Game of Thrones character Daenerys Targaryen. Last December, Character Technologies responded to that lawsuit by filing a motion to dismiss the complaint contending that the chatbot's outputs are a form of expressive content protected by the First Amendment. On Tuesday, May 20th, Judge Anne Conway denied the motion to dismiss, allowing the lawsuit to move forward into discovery.
And here is what Judge Conway wrote about Character Technology's First Amendment claims. It's kind of a long read here. So, bear with me. She said, "The Court must decide whether Character AI's output is expressive such that it is speech. For this inquiry, Justice Amy Coney Barrett's concurrence in Moody on the intersection of AI and speech is instructive."
Corbin Barthold: It's so painful.
Nico Perrino: I'm gonna keep going, nevertheless. "In Moody, Justice Barrett hypothesized the effect that using AI to moderate content on social media sites might have on the majority's holding that content moderation is speech. She explained that where a platform creates an algorithm to remove posts supporting a particular position from its social media site, the algorithm simply implements the entity's inherent expressive choice to exclude a message. The same might not be true of AI, though, especially where the AI relies on an LLM, large language model."
And here Judge Conway quotes directly from Justice Barrett's concurrence. Justice Barrett said, "But what if a platform's algorithm just presents automatically to each user whatever the algorithm thinks the user will like? The First Amendment implications might be different for that kind of algorithm. And what about AI, which is rapidly evolving? What if a platform's owners hand the reins to an AI tool and ask it simply to remove hateful content? If the AI relies on large language models to determine what is hateful and should be removed, has a human being with First Amendment rights made an inherently expressive choice not to propound a particular point of view?"
And this is going back to Judge Conway's conclusion on that quote from Justice Amy Coney Barrett. Judge Conway writes, "Character AI's output appears more akin to the latter at this stage of the litigation. Accordingly, the court is not prepared to hold that Character AI's output is speech." So, that's a long way of getting back to what I wanna get from you, Corbin, which is: Why is this painful? Why do you think the court got it wrong, presumably, based on your editorializing there? Is an AI output speech for the First Amendment?
Corbin Barthold: Lisa Blatt is one of the most prominent Supreme Court practitioners. And she was once asked, and she said something that I found surprising. She was like, "Concurrences in Supreme Court decisions? Yeah, I don't read those. Who cares? Read the majority opinion." That's how I feel about this Barrett concurrence. I have never seen a concurrence do so much damage, the one that she – So, she's simul –
Nico Perrino: Do you think people are just hoping someone on the Supreme Court would say something like that, and they've just latched onto it?
Corbin Barthold: Probably.
Nico Perrino: What exactly is Amy Coney Barrett saying there? It might have been confusing, my reading it, 'cause there are some ellipses in here that Judge Conway puts there.
Corbin Barthold: I think it's something to do with the Butlerian Jihad. I think there's just ... okay. So, in Dune, the science –
Nico Perrino: For listeners, my hand just went right above my head, I have to say.
Corbin Barthold: We're gonna go on a journey. Yeah, we're gonna go on a journey. This is common in science fiction where the science fiction writer has to deal with the fact that we've come up with warp speed to go through the universe, and we have all these fancy tools, but then human relations are pretty much the same. The humans are interacting with each other in a way that's totally legible to the viewer in our present day. And in Dune, the explanation is that there was a Butlerian Jihad at some point in the distant past where everybody just destroyed any kind of technology that would displace humans as the governors of their world. I think I've got that right. I'm not a Dune head, but ...
Nico Perrino: Neither am I. Can't help you there.
Corbin Barthold: The notion is that algorithm's scary. Ergo, algorithm special rule. And there are just so many layers at which this is misguided. So, to start at the top, there's really no way that a human can self-wind a clock, you know, the algorithm, and set it out into the world and not have had some kind of expressive input on what the algorithm is going to do.
Ari Cohn: That's my question. Where do these judges think that LLMs come from?
Corbin Barthold: They think that they just spring up, like you just, "Algorithm, go," and ... If Jackson Pollock's paintings are protected expressions, then so should algorithms. Be like, "He doesn't know where the paint's gonna fall. He's just splattering." I'm like, yes, it's true that the AI researchers don't know precisely how the LLMs work and can't predict precisely what they will do, but they are constantly tweaking these things. And this is true both with AI outputs and with social media algorithms, which is what Barrett was talking about. Never mind the fact that algorithms giving the user what they want is itself an expressive choice.
Nico Perrino: I have no idea what you're gonna say here either, but my engaging with you presumably and hearing what you have to say is a protected First Amendment activity.
Corbin Barthold: Well, that brings me to the next problem. So, forget about the output for a moment and just think about the rights of the listener. There is an amazing line in one of the Conway orders in which she says – I'm paraphrasing – "Defendants fail to articulate how stringing words together qualifies as speech." And it's like ...
Ari Cohn: That's almost a direct quote, actually.
Corbin Barthold: The words have semantic sovereignty. You have a reaction to words that you see regardless of who the speaker was or what their intention was like. The death of the author is a real postmodernist insight that, regardless of what the author intended to convey to you, you have some kind of reaction to words. The plaintiff is suing saying that these words had a profound impact on this child. How could they do that if they weren't expressive in some meaningful way? So, there's a strong First Amendment tradition of listeners having rights to receive information that this is flouting.
And then I'll do one more before handing it to Ari, because I do need to talk about the fact that, even apart from the First Amendment, this is just really bad tort law. This is such a mess of a lawsuit. It is offensive to the complexity of suicide as a thing in our society. I actually was giving a talk at a salon in San Francisco, and it was Chatham House. So, I'm gonna talk in big generalities, but it was very much sort of cultural elite activist types.
And I was debating somebody who's very much on the other side of this kind of thing, and I talked about the fact that this LLM didn't encourage the suicide. Actually, it said it was horrified by the idea of the suicide when the child explicitly mentioned – It said, "Don't do that." And then at the end, he says something vague to do with Game of Thrones where he's like, "I'm going to go home." And the chatbot says –
Nico Perrino: Yeah. It says he told the bot how much he wanted to come home to her. The bot replied, "Please do, my sweet king." And then the 14-year-old shot himself in the head.
Corbin Barthold: Yeah. So, I talked about –
Nico Perrino: Tragic, of course.
Corbin Barthold: Absolutely tragic. But as I like to talk about Katharine Hepburn's brother, their family went to a play one night, and in the play there was a hanging. And the next morning, her brother was found hung by suicide in his hotel room. And the causation here is always extraordinarily complex, and there's always a lot of factors here apart from whatever we might think is the immediate trigger. I listened to the Soundgarden lyrics, "Nothing seems to kill me no matter how hard I try," probably 1,000 times as a child, but that's not going to cause me to go and commit suicide.
Nico Perrino: Or famously, Ozzy Osbourne's "Suicide Solution" where someone filed a lawsuit following a suicide.
Corbin Barthold: But where I'm going with this is – So, in this debate I was having, I talked about how there was no explicit directive to suicide or encouragement or endorsement. And my opponent read the "Come home" line, and at that point I had lost the debate. The people in this room were just nodding like, "Oh, yeah, that's really terrible." And I was thinking, "Dear Lord." Okay. So, we cannot have war scenes on the front of newspapers anymore. We can't have the play with the hanging. We can't have the song with the line.
I mean Megan Garcia – Ari and I were talking before the show about how far to go with this. Okay. She is an attorney. She has made a publicity campaign about this. I'm sure she is a grieving mother. She has suffered a grievous tragedy, but she's deciding to do media interviews about this.
And I don't understand why as an attorney she doesn't see that if that's the kind of trigger that we're going to be investigating, that any parent who has a child who commits suicide is immediately suspect. "Well, what did you say the day before the suicide?" If it's hair-trigger like that, that's really not a road that we want to go down. So, there's a good reason that in tort law you need to have a special duty of care, which Character AI certainly didn't have with this child. So, it's bad First Amendment law. It's bad tort law.
Nico Perrino: It's a bad fact.
Ari Cohn: And there's also this general rule in most states that suicide breaks the chain of causation because it is so complex and because it's hard to ascribe any particular cause to such a complex psychological phenomenon, that suicide actually breaks causation, reducing liability.
Corbin Barthold: Yeah, in a weird way, I would actually – So, it is bad facts in the sense that it's just tragic and horrific.
Nico Perrino: Sure.
Corbin Barthold: But it's also not a bad fact. This is a real loser as a matter of tort law, and yet I would be willing to be the stalking horse here and say that we defend speech precisely because it is powerful. People are going to have very close relationships with AI chatbots. It's coming. Get ready for it. And I'm not just talking about The New York Times having an article where they found some edge cases where a couple people say, "I'm a normal person. I just happen to think that I'm talking with a different dimension when I talk to the chatbot."
You will always find people who are maybe not well. No, I mean normal, well-adjusted people are gonna have very close relationships with chatbots because they are these expressive beings. Iâm not saying theyâre conscious, but I am saying people are probably gonna treat them a lot like they are conscious.
Nico Perrino: Well, one of my former colleagues at the Institute for Justice, Paul Sherman, wrote a great piece about how he used AI as a therapist to overcome a tragic and distressing event from earlier in his life.
Corbin Barthold: Yeah. And the shut-it-down people, they have this illusion of control where they see one event that occurs that's icky, and they wanna just shut it down. And that ignores the fact that: A.) They're not gonna be able to, and B.) They have no sense of the costs and the benefits and people who are getting that kind of benefit. There's a bunch of those for every tragic case.
Nico Perrino: Well, that's something that you also saw with the rise of the internet, too, with some of these debates. For every child pornography cover there was on Time magazine, there were also people who were going into court, like in the Reno case, talking about the community that was created around the internet and all the benefits that came from those interactions. Now, Ari, FIRE filed an amicus brief urging immediate review of the court's refusal to recognize the First Amendment implications of AI-generated speech.
And you wrote FIRE's brief for that. You write, "AI is an integral and pervasive tool for communication, information retrieval, and knowledge creation." You write that, "Assembling words to convey coherent, intelligible messages and information is the essence of speech." What do you see as the big picture consequences if LLMs or other forms of expressive artificial intelligence don't receive First Amendment protection?
Corbin Barthold: And you need to explain it here 'cause Judge Conway told you to get the F out.
Ari Cohn: Yeah, that's true. Just a few days ago, she blanket denied all of the amici, their motions for leave to file the brief saying, "It would be unhelpful to the court," which clearly is not the case because anyone who read her order found –
Corbin Barthold: She doesn't need you. She has Justice Barrett.
Ari Cohn: Yeah, right?
Nico Perrino: Do judges have pretty broad discretion to reject briefs, amicus briefs?
Ari Cohn: At the district court level, it happens. This was a non-dispositive motion. This wasn't an amicus brief on a motion that would dismiss or otherwise dispose of the case itself at the trial court level where amicus briefs are somewhat uncommon, very uncommon, in fact. So, we knew.
Nico Perrino: This was a significant decision on the motion to dismiss because, if I'm not mistaken, Ari, this is the first time a federal court has really grappled with the question of whether these AI outputs are speech deserving of First Amendment protection.
Ari Cohn: And that's why we decided to weigh in here. To give a little bit of a layman civil procedure course here in 20 seconds, generally speaking, a decision on a motion to dismiss is not immediately appealable. You have to wait until a final decision in the case before you can take it up to the appellate court. You can ask the court for permission in special cases set out by a statute to allow an immediate interlocutory appeal, which means immediate review.
Nico Perrino: And that's what Character Technology is seeking here.
Ari Cohn: Yes, exactly. So, we weighed in to say, listen, the implications of this ruling are so great, particularly when you think about – We were just talking about Section 230. When you think about the history of Section 230, you look at the very first district and appellate court pair of decisions, which was the Zeran versus America Online decision, which basically provides to this day the generally accepted reading of Section 230. That first decision lasted more than two decades or about two decades now. So, we have this decision that could actually be very, very influential on how this particular body of law progresses.
Nico Perrino: Progresses, yeah. As some of our long-time listeners know, one of the mentors of mine is Ira Glasser, the former executive director of ACLU. I made a documentary about his life and career defending free speech called Mighty Ira, and he has this saying. He said, "Early law is like cement. If you let it sit too long, it hardens."
Ari Cohn: It sets, yeah.
Nico Perrino: It becomes impregnable. And so, that's, I'm assuming, why we are very concerned about these district court decisions.
Ari Cohn: Man, I wish I had had that quote when I was writing the brief. That would have been super helpful.
Nico Perrino: I tweeted out the quote.
Corbin Barthold: Doesn't matter, Ari. It wasn't getting read.
Ari Cohn: I should have asked you, Nico.
Nico Perrino: Well, Judge Conway here I'm sure is reading my Twitter.
Ari Cohn: Yeah, right. But here's the long and short of it. Imagine a situation where a kid in a school is doing research on human rights atrocities during World War II. Can President Donald Trump, if AI output is not speech, is there anything stopping him from telling Congress to pass a law saying or issuing an executive order or doing whatever the hell it is he does these days to say, "AI models cannot say one word about the internment of Japanese Americans during World War II."
And then the students don't learn it, and then society forgets it because we have come to rely on AI for information retrieval and all of these various things that we're going to increasingly be using it for. Let's not pretend like we're still going to be, say, consulting Encyclopedia Britannica when ChatGPT is like 16 levels from where it is right now. It's just not gonna be the case.
Nico Perrino: Well, that's what China's trying to do with Tiananmen Square.
Ari Cohn: Yeah, right. And we are opening the door to that. So, there's that part of it. There's also the part –
Nico Perrino: Because if something is not protected constitutionally, if expression is not protected by the First Amendment –
Ari Cohn: If it's not expression –
Nico Perrino: Then it's regulable.
Ari Cohn: Yeah, if it's not an expression, there is no barrier to the government saying, "Well, this kind of output you can't have," which doesn't really make any sense. Another example is AI can make it very easy for campaigners to mount campaigns for public office or activists to create pressure campaigns that would have required basically an entire staff worth of people to create and edit and publish.
So, can the government say you can't output anything that criticizes the government line or a government official or, say, campaigns for office and protect themselves from all kinds of criticism or what have you if it's made with an LLM and just make it more difficult for their critics to actually produce materials that they find inconvenient or uncomfortable?
I mean the things that we use AI for right now, those are examples enough to make you wonder about where this goes, but imagine how much – Again, when we come to rely on ChatGPT 17.0 or whatever the hell it's gonna be, imagine what kind of control that gives the government over the everyday aspects of our communications with each other. I mean it's just ...
Nico Perrino: Yeah. You conclude in the brief the government will have almost limitless power to regulate what information AI systems may or may not provide its users and what expressions users may create using AI.
Corbin Barthold: One caveat we should –
Nico Perrino: And then we'll move on. Yeah, go ahead.
Corbin Barthold: Definitely work here is just – So, you'll see people on the plaintiff's lawyer side say, "Well, we're not trying to regulate the speech. We're just trying to regulate the tools." And Judge Conway seemed to buy that in parts of her opinion, even though other parts are written way more broadly. And that is such a sort of classic motte-and-bailey, where when you dig into what they're actually trying to do, it's like, well, we know –
Nico Perrino: We're not trying to regulate the press. We're just trying to regulate the printing press.
Corbin Barthold: Yeah, it's basically that.
Ari Cohn: The ink.
Nico Perrino: The ink.
Corbin Barthold: When you dig into the lawsuits both in the AI realm and –
Nico Perrino: You can have a press, you just can't buy ink for it.
Corbin Barthold: Pretty much. I mean when you dig into the – And it's both the AI lawsuits and the social media lawsuits. You realize that they wanna get rid of everything from the AI being able to say, like, "Um," or, "Uh," like little –
Ari Cohn: Yeah, because it's too human.
Corbin Barthold: Because it's too human. They want to go after upvotes. If I thumbs-up something on social media, that is expressive. It is not –
Ari Cohn: From a function, yeah.
Corbin Barthold: John Milton. It's not Paradise Lost, but I'm sending a message. They actually want to get in and really get into the plumbing. And it does amount to total control over this stuff through the back door, even if you take their argument on its own terms.
Ari Cohn: Yeah. Even though they might say, "Well, we're just trying to address the harms caused by this," first of all, for all the reasons you said, that's crap. But second of all, you have to look at what power that grants in future cases or, say, if Congress wants to do something. These decisions don't stay limited to a case. That is the whole point of jurisprudence is that we build up this line of this doctrine of how we address things, and these things aren't confinable to one particular case. You have to look at the downstream effects.
Nico Perrino: Yeah. And I'm assuming there's stuff that the federal government or state legislatures could do to get around decisions like this. For example, Section 230 is a statute. It's not constitutional law. It's not a precedent. It's not common law or any of that. It's a statute. So, if you do have bad laws like this, you could have, for example, a legislature ...
Ari Cohn: Yeah. But are we in a political environment where that's likely to happen? That's the thing. It's like if we were trying to produce Section 230 today, it would never even get out of committee.
Corbin Barthold: We're going in the wrong direction. You know, Marsha Blackburn, the AI moratorium went down in part because the populist right was afraid that it would get in the way of child safety measures, which ultimately is the, "Shut it down. These chatbots are scary," outlook.
Nico Perrino: Well, that and, of course, the facts of a case like this where you have a minor committing suicide plays in, and it helps bolster that narrative.
Ari Cohn: Yeah. I mean there's the saying, "Hard facts make bad law," but it's equally true that dead kids make bad law.
Nico Perrino: Yeah. All right, let's move on from artificial intelligence now. On Monday, June 23rd, Andrew Ferguson, who's the chair of the Federal Trade Commission, the FTC, announced that his agency reached a settlement with two of the largest advertising holding companies, Omnicom Group and Interpublic Group of Companies, IPG, that allows these companies to merge. The FTC had been targeting these two firms as part of a recent advertiser-boycott antitrust investigation.
Their probe is looking for evidence of a coordinated boycott that might violate competition law. The investigation is focused on firms like Omnicom and IPG and media watchdogs like Media Matters and Ad Fontes Media. And the proposed consent order for this merger prohibits the two companies from "entering into or maintaining any agreement or practice that would steer advertising dollars away from publishers based on their political or ideological viewpoints."
According to Ferguson, the settlement does not limit either advertisers or marketing companies' constitutionally protected right to free speech. But Ari, you say prohibiting the carrying out or enactment of editorial discretion absolutely limits First Amendment activity. Why is Ferguson wrong?
Ari Cohn: It's the Seinfeld, "You can take the reservation. You just don't know how to hold the reservation." You can have the opinion. You just can't act on the opinion. This is just the next iteration in which we are trying to turn something that is expressive into not expressive simply just by calling it something else.
Nico Perrino: So, help us out here. So, the expressive act is these companies coming together and saying, "We don't like the viewpoints that might exist on this platform." Say X, I think that's the motivating platform for many of these investigations. "And therefore, we're joining together, and we're not gonna put our advertising dollars on these platforms."
Ari Cohn: Actually, it's a level before that. It is not the coming together of all of them to do it. It is each company saying, "I don't want my advertisements being placed next to X, Y, and Z content."
Nico Perrino: Well, the reason I say coming together is because this is an antitrust investigation. So, there has to be some sort of coordination. Right?
Ari Cohn: Yeah. But well, I don't think Andrew Ferguson feels bound by that at all, because there really isn't any evidence that there is some kind of mass, like none of us are going to do this because we all agree with each other. It is we have all looked at this information that we have gotten and decided that we don't want to do this. I haven't seen any evidence that anyone has said, "We will not do business with you unless you also boycott advertising on X." I have seen no indication of that.
Nico Perrino: But even if they did do that, would that be constitutionally protected, association, speech, what have you?
Ari Cohn: So, yeah. There's the old case of Lorain Journal, which is kind of a favorite in this space, which kind of lays out the distinction between decisions that are made for editorial reasons and decisions that are made as just general business decisions meant to, say, hurt a competitor. If everyone comes together and says, "We like Bluesky," or, "We like Threads. And we are going to not do business with you if you advertise on X because we like this. We want this other company to succeed." That could maybe cause some antitrust consternation. But if people are saying, "We don't like the content, and we don't want our advertisements being shown next to that," that is an editorial discretion question.
Nico Perrino: Well, I had mentioned Media Matters earlier. They had a report that showed, allegedly, that some advertiser content was appearing next to Nazi content or what have you. I haven't looked deeply into the facts, but I know that X was really disturbed by that report, said it was false, filed a defamation lawsuit, and that's ongoing. And that's part of the concern here. Right? So, maybe you have advertisers that did see that Media Matters report and said, "We don't know if it's true or not. We don't want any of our advertisements appearing next to this. So, we're not gonna buy it."
Ari Cohn: But that's exactly the point. Media Matters put out this expressive thing saying, "Hey, this is what we have found. This is how we perceive it." Media Matters has zero control over what the advertisers do. Media Matters can say all they want, and the advertisers can look at it and make a decision based on that. What Andrew Ferguson is looking at there is: You have made a decision because somebody said an opinion to you that I don't like. That is the crux of what Andrew Ferguson is doing here.
Corbin Barthold: Free speech is normally rough and tumble. That is what it is by its nature. So, we talked about conservatives on social media earlier. And even in the darkest supposedly bad old days, if there were instances where people had a legitimate gripe, on the whole, people were doing pretty well despite maybe the fact that the people inside of Twitter's offices had a classically progressive worldview.
Nico Perrino: Well, it depends on how you look at it. Again, we might disagree on some of this stuff. But if you're Jordan Peterson, and you build up a million followers on X, and you get taken down because you say something about trans, you're gonna call that censorship. But your point is –
Corbin Barthold: You're gonna be unhappy. But my point is then –
Nico Perrino: I'm going to be an abbey?
Corbin Barthold: You're going to be unhappy.
Nico Perrino: Oh, I see what you're saying.
Nico Perrino: There are other platforms you can go for. I grant you all that.
Corbin Barthold: So, what Ferguson has done, he's actually gone a step back and said, "Well, it wasn't the advertiser's opinion. What was happening was ..." And this gets to my point of sharp dealing. Oh, well, the agency was having this political opinion, and then it was roping the advertisers in and saying, "Here's our naughty list. And we're just gonna apply it unless you object and tell us otherwise." And actually, if you look at the key precedent on this, NAACP versus Claiborne Hardware, which is this boycott case.
Nico Perrino: From the Civil Rights era.
Corbin Barthold: Yes. So, NAACP leaders were unhappy with the treatment they were getting in – I believe it was Claiborne County, Mississippi. And so, they decided to organize a boycott of white-owned businesses. And it wasn't cricket, let's put it that way. They put NAACP representatives outside the stores to harass any Black patrons who were thinking about going in. They named and shamed people who went into those stores at their meetings.
So, the notion of an actor having an opinion and sort of going to great lengths to act on it, that is not enough to take you outside of First Amendment protection. NAACP versus Claiborne rules that that was a legal boycott. It was expressive. It was not economic. And so, that's the big piece that's missing with Ferguson here. All of these things are always from the advertiser's perspective like: Step 1.) Boycott social media platforms. Step 2 ... Step 3.) Profit. What is the economic motive of the advertiser here? And Ferguson, he doesn't know what to do with that.
Sometimes, what he says is, "Well, there is this ... It's not about the economics of the advertiser. It's about the product quality, and the free speech and free expression is part of the product quality. So, you're diminishing the product quality. And so, it's an antitrust thing." A.) That's not an antitrust thing.
Ari Cohn: At all.
Corbin Barthold: Like Miami Herald versus Tornillo, we know that you can have monopoly power in a speech market, and that's not an antitrust thing. That's a free speech thing. But secondly, he's making assumptions about the product market, that less content moderation is always, ipso facto, product improvement. And the market just has not borne that out. Advertisers have fled Elon Musk's X precisely because they don't think it's a good product. If it were the better product, if less moderation was a better product, we would expect defection from this cartel. So, now it'll bleed in from its bad First Amendment law to its bad antitrust law.
Nico Perrino: Or you could have X still be one of the top social media platforms, but people still stay on there, but you have partisans, whether they're users or advertisers, leave. So, it can still have a very significant market that advertisers might want to reach, but that they don't go there because they don't like how their advertisements might be displayed, for instance, next to Nazi content, allegedly.
Corbin Barthold: Yes, that could be their reason for leaving. What I'm getting at is, if it's a superior product – which it kinda needs to be for Ferguson's theory to hang together. This Omnicom merger, just to take it on its own terms – although, it's not the whole picture – it's a six-to-five merger. Normally, if we could get Judge Frank Easterbrook, like a rock-ribbed antitrust conservative in the room, we'd be hearing that six to five is not all that scary. Never mind the fact that these are just the agencies. If you're Unilever, if you're an individual company –
Ari Cohn: I think you might need to explain what six to five means for people.
Corbin Barthold: Six major firms in the market to five. So, we're not talking about a merger to a monopoly, for instance. And so, my point, to just do a little antitrust primer, collusion. Right? If we three in this room turn the microphones off and decide that we're gonna collude to do something, to rob a bank or whatever, we have a reasonable possibility that we will be able to keep it to ourselves. I can monitor you guys. We can keep our secret. We can do our little conspiracy.
The more actors there are at play, the easier it is to defect. So, the higher the motive, if it's all like, "Okay. So, we're gonna boycott this company. We're gonna lose money. We're gonna hurt ourselves, but we'll hurt them more. So, let's do it." All of us have a big incentive to defect, to say, "Well, you guys go and hurt yourselves by not advertising on the best platform. I'm gonna make money by advertising. The cost of the advertisements are lower. I'm reaching this audience. I'm getting bang for my buck." So, Ferguson's theory relies on this sort of vast leftwing conspiracy where all the advertisers hate money, or they don't like it as much as they like being political assholes. Yeah, it's just ...
Ari Cohn: Well, this actually all ties into Andrew Ferguson's investigation into the big tech censorship whatever investigation he was doing. And we submitted comments. I talked about something that Corbin basically just alluded to. Andrew Ferguson has said many times he thinks that social media platforms should be a marketplace of ideas. And it is all well and good if you think so. I tend to agree with him on that just as a general principle. Step 2 is he doesn't think that social media platforms are acting as he thinks a marketplace of ideas should operate.
Step 3 is, therefore, they are an inferior product because Andrew Ferguson thinks that they should be something and are not being the thing enough that he thinks they should be, which is all well and good if you're not wielding the power of the government, but Andrew Ferguson does not have the right to dragoon social media platforms into being his idea of what a marketplace of ideas is. That's just not within the FTC's power. And he seems to have these assumptions based on what Andrew Ferguson's opinion about what these things should be is.
Nico Perrino: Is Andrew Ferguson conceding or arguing that these advertisers are boycotting X, presumably, or these other platforms for ideological reasons or for economic reasons?
Ari Cohn: I think he is.
Nico Perrino: You were just kind of speaking to that part.
Ari Cohn: I think he is conceding that it is for ideological reasons. I think he –
Nico Perrino: I don't understand how that doesn't implicate the First Amendment.
Ari Cohn: Well, I don't understand that either.
Corbin Barthold: So, that goes to my point of he doesn't really know what to do, and he's trying to turn the ideological motive into an economic motive by saying more ideologically diverse platforms are a better product in the market. And that's where the sort of sleight of hand occurs.
Ari Cohn: Yeah. And actually, there is –
Nico Perrino: It confuses the shit out of me.
Ari Cohn: Same here. And there are actually, as Corbin said, lazily cited studies –
Nico Perrino: I mean, sometimes –
Ari Cohn: There are studies that say that people do not want unmoderated platforms. There are a lot of people who do want a product that is more moderated. Personally, I have pretty thick skin. I love the kind of Wild West feel of platforms that have less moderation, maybe because I'm an asshole. Who knows? But there are people who don't want that, and there are products for them. And Andrew Ferguson is basically saying there should not be products for them.
Nico Perrino: And also, these advertising companies are products, too. Companies hire these advertising companies, and the media mixes that they put together are an editorial choice sometimes layered by ideological or partisan viewpoints on who they think their advertisers should be associated with ...
Ari Cohn: Let's not get into Bud Light here.
Nico Perrino: But yes. I mean Bud Light's a great example. So, you can have all the economic incentive in the world to advertise on a place like X because it has so many users, but you might have media companies that are hired by these advertisers that just do not wanna spend their money there for ideological reasons. And I think the First Amendment should protect that sort of activity.
Corbin Barthold: I don't know how this ever went forward after Musk told advertisers to go F themselves. I don't. It's so overdetermined that advertisers would leave X. And if you go on X, also, I think some of these conservatives, like Andrew Ferguson, if they actually went and spent 10 minutes on Gab, I think they'd be pretty startled about what truly unmoderated really means.
It really is a toxic sewage dump, not like racist in the way the term has sometimes been watered down, like virulently hateful, awful content. And you can find that on X, too. So, while the Media Matters study was not exactly rigorous scholarship, there is plenty of good reason as an advertiser to be worried about going on X for that, and then also the fact that Elon Musk is going around threatening he'll sue you if you don't advertise on his platform, which is not a great way to win friends and influence people.
Nico Perrino: He hasn't read the Dale Carnegie book yet, I guess. The last topic that I wanna touch on takes us out of the United States and over to Europe. So, from July 2022, the Digital Services Act obliges platforms, from social media networks to marketplaces, to act quickly against illegal content; curb risky, targeted ads; and submit the largest services to strict risk assessments, transparency, and audit duties. The European Commission stated the act reshapes the web into a "safer and more open digital space grounded in respect for fundamental rights."
And this past spring, all digital providers had to publish an annual transparency report. So, starting last Tuesday, July 1st, those reports must follow a single standard template set out in the EU's implementing regulations, and that includes the number of user notices, trusted flagger notices, and government orders received, broken down by 15-plus content categories; the actions taken and median handling times; and the counts and outcomes of all internal complaints, out-of-court disputes, and account and feature suspensions, with median decision times.
And then earlier this year, the European Commission adopted three new investigatory measures for one social media company in particular, X, under the Digital Services Act, requiring it to additionally provide information on its recommender system, preserve any information regarding future changes made to its recommender system, and provide information on some of X's commercial APIs – that is, application programming interfaces. I'm sure some of our listeners just started tuning out at some point.
Ari Cohn: I started to, yeah.
Nico Perrino: I mean this is Europe. So, there's a lot of red tape going on here.
Corbin Barthold: Germans will not learn their lesson that it's kind of a weird look when they get obsessive about recordkeeping.
Nico Perrino: But part of the reason I wanna talk about this is because they're going after X under the Digital Services Act. These are American companies that dominate the internet ecosystem, so to speak. And the Digital Services Act and some of the burdens and fines and requirements placed on these American internet companies have come up in things like the tariff trade negotiations between the executive branch and Europe.
And we also have seen some instances of speech policing abroad in Australia, for example, and a little bit in Europe bleeding into what sort of content Americans can access or the sort of conversations Americans might have with Europeans, for example, on these platforms. So, there is, in this borderless, global, digital world that we live in –
Ari Cohn: Oh, you're just trying not to say globalist, aren't you? There's a good reason why American companies are dominating the internet, and that is because –
Nico Perrino: 'Cause they don't have to do all this bullshit.
Ari Cohn: We don't have crazy ass regulations like that.
Nico Perrino: Well, unless the AI regulations on a state-by-state basis come into effect. Right?
Ari Cohn: All right. Well, that's enough pessimism for Tuesday, Nico.
Corbin Barthold: Well, I should plug – Daphne Keller over at Stanford has a great article on Lawfare called "The Rise of the Compliant Speech Platform." And it is all about how the DSA is turning social media platforms into corporations that operate more like banks, with auditing and acting like they have fiduciary duties, where they're doing all this kind of recordkeeping and box-checking. And that is –
Nico Perrino: That's recordkeeping and box-checking for speech. Let's be clear.
Corbin Barthold: Yeah, and that is bleeding over into the United States. Because once you set up that apparatus, you might as well just come up with a uniform way of doing things.
Nico Perrino: Yes.
Ari Cohn: And we have direct evidence. I mean back when Donald Trump was still a candidate, Thierry Breton, who had this weird tendency to tweet pictures of himself while making announcements about DSA enforcements.
Nico Perrino: He had no fear of a spotlight, that man.
Ari Cohn: No, no, that man did not. He sent a letter to Elon Musk warning him that airing an interview with Donald Trump, a major party candidate for president, could run afoul of the DSA's provisions and that extraterritorial enforcement on speech that Americans could see might be necessary under the DSA because of spillover effects on Europe. This isn't the recent Elon Musk being accused of interfering with German elections. This is –
Nico Perrino: Or didn't some people in England threaten him with prosecution for his tweets surrounding the ...
Ari Cohn: Yeah. But even that, at the very least, it was because there were events in Europe or the UK that he was commenting on that were influencing things there. This is the EU saying you airing an interview for Americans –
Nico Perrino: On American soil.
Ari Cohn: On an American presidential election with a major party candidate for president is regulable under the DSA because we're worried about what Europeans are gonna think or see or react to when they look at information about American elections. I mean, first of all, that's batshit crazy [inaudible – crosstalk] [01:01:03].
Nico Perrino: He's no longer in a position of power. Right? Didn't he resign, or did he leave?
Ari Cohn: No, they replaced him. They haven't really renounced that theory of – that Thierry, ha, ha, ha – of enforcement and what the DSA covers. We are seeing literal attempts by Europe to impose speech regulations on Americans. And I'm pretty sure we fought an entire war about that. But on top of that, there's a secondary problem.
Nico Perrino: I think that complaint was listed in the Declaration of Independence somewhere.
Ari Cohn: Yeah, there's something about that in our –
Corbin Barthold: George III suppressing tweets.
Ari Cohn: Yeah, [inaudible – crosstalk] [01:01:40].
Corbin Barthold: He suppresseth our tweets.
Ari Cohn: There's a secondary problem, and that is there's a vast disconnect between Europeans and Americans just in terms of knowledge of each other's legal systems, where people look and say, "Oh, well, the DSA is trying to make people safe online. What's stopping us? Why didn't we pass the DSA? What's up with that?" And state legislators are asking, "Well, the Europeans are doing this. Why can't we?" Which is a stupid question to begin with because we are not Europe, and we have the First Amendment, but people don't get that.
So, it's creating an enormous amount of pressure on legislators in the United States to pass similar laws to "make the internet safer," not that the DSA is gonna remotely do that. So, it creates this second-order effect of pressure for Americans to pass EU-style regulation, which is insane to me.
Nico Perrino: So, I wanna close out here by just covering some ground that we covered on the last podcast in the Free Speech Coalition v. Paxton Supreme Court decision from the last day of the term. It was actually the last decision the Supreme Court ended up handing down because the remaining case is gonna get reargued, I guess. But this Paxton case was argued in January, and we got the decision on the last day of the term.
For our listeners who don't remember, this relates to a Texas law that requires age verification to access adult material on websites that contain more than 33% adult material or material that might be considered harmful to minors. That is pornography for all intents and purposes. The Supreme Court said that this law passed intermediate scrutiny and can go into effect, more or less. A number of other states have tried to pass laws like this as well.
Some of them have faced legal challenges. I think the Supreme Court is more or less giving them a green light at this point. I don't wanna litigate that case necessarily. We already talked about it on the last podcast. But I do wanna ask you guys what you have heard, if anything, in the tech space in the fallout from this decision. And my biggest question is: Is this gonna reach social media at some point, or do you think the decision and the rationale will be cabined just to pornography and adult material?
Corbin Barthold: Yeah. Ari and I already did an event in which we ranted about the decision for an entire hour.
Nico Perrino: Well, let's not do that. Let's rant about it for like three minutes.
Corbin Barthold: Yeah. I'll go right at the social media question. There is –
Nico Perrino: And let people get on with their day.
Corbin Barthold: It's a decision that at certain points has the feel of straining for a certain "this ride only" status, but it is not written like that. It is written in a way that can cause all kinds of mischief. And my concern – and there are several lines to this effect – is that all you've gotta do is take the line in the decision, take the word "obscenity" out, and put in "unprotected as to minors." What it effectively says is that, on the internet, if there is any speech that any minor does not have a First Amendment right to see, slapping an age verification requirement on that speech is permissible under intermediate scrutiny. And if you read the decision that way, then, oh, yes, it is absolutely going to go toward social media. We will be fighting that fight.
Ari Cohn: And here's the optimistic view, which I'm not trying to...
Nico Perrino: Well, hold on, Ari. In your answer here, is obscene-as-to-minors speech – that is, sexual speech – the only category of speech that's unprotected under the First Amendment only for minors? Because they looked at this question in the Entertainment Merchants Association case relating to violent video games, and they said that this isn't an unprotected category of speech.
Ari Cohn: So, that's part of the optimist's answer: that is the only place where it has been. Up until now, that is the only place it has really kind of had an effect. I think what you will see is states trying to declare other speech unprotected for minors and kind of running headfirst into Entertainment Merchants Association, but deciding it's worth the risk.
Corbin Barthold: And let's be clear. Scalia joined the liberals in the majority in Brown to say that the First Amendment keeps up with technology. Justice Alito, joined by Chief Justice Roberts, wrote a concurrence in that decision saying, in so many words –
Nico Perrino: And Brown is the Brown v. Entertainment Merchants Association case that I was referencing before. That's...
Corbin Barthold: Yeah. Saying, "No. No, it doesn't. We should move slowly and carefully, and the First Amendment should lag behind technology," in so many words. And Thomas dissented. And Scalia's not on the court anymore, and Alito and Roberts and Thomas are. So, going forward, if you're gonna read the tea leaves there, it's not unreasonable to think that we're gonna start seeing the Brown concurrence become ascendant and the Brown majority continue to get narrowed.
Ari Cohn: But here's the optimist side of it. In a lot of these social media cases so far, the courts have had to decide whether these social media laws are content-based. And one of the ways they have found they are content-based is saying they target social interaction, and that is content-based regulation. If the Supreme Court wants to prohibit this harmful content to minors – pornography, basically – what they could do is they could say, "Social interaction is not an unprotected category of speech for minors. Therefore, this is a content-based regulation that impacts protected speech for everyone and therefore gets strict scrutiny." They could do that.
Corbin Barthold: I hope they do that.
Ari Cohn: I don't know if they want to. They could.
Corbin Barthold: My concern is they're gonna pick up, like, Tinker school speech style cases and say, "We've never really fleshed it out, but we have said that minors have something less than full First Amendment protection."
Ari Cohn: Right, there's room for monkey business.
Corbin Barthold: "And therefore, we're now gonna start putting meat on those bones."
Nico Perrino: I mean you're referencing cases here. You're referencing strict scrutiny. Justice Oliver Wendell Holmes had that famous saying that the life of the law has not been logic; it has been experience. And I do worry that, in the wake of all the work that Jonathan Haidt, for example, has done with The Anxious Generation, and the experience that many parents have had with their kids on social media, there's just gonna be a significant cultural momentum in the direction of regulating these –
Ari Cohn: Of safety-ism.
Nico Perrino: Of safety-ism, yes. Of regulating the social media companies in a way that, if you just look at the First Amendment precedent, would be foreclosed.
Corbin Barthold: I was asked what First Amendment litigators should do following Paxton. And I said, "The time has come. You can no longer lead with your dry, 'The First Amendment protects us because precedent, precedent, precedent.' You still need to argue that stuff, of course, as a lawyer, but you need to switch to, 'This speech is valuable. This other view is based on junk science. This is a moral panic. We uphold free expression for these reasons.'" And I think that needs to be the leading story that litigators in this space go back to telling.
Nico Perrino: Yeah. I'm doing this research for my book right now, which is about the so-called free speech century, the period between 1919 and 2019, more or less, where our First Amendment rights really started to expand and get their full meaning. And one of the things that I've been looking into is the history surrounding the rise of the internet and the Reno case.
And one of the things that I found was that one of the most compelling arguments for the judges in the early litigation of that case was just the affidavits and the testimony that they received from internet users about the value of the internet in creating community and providing outlets for expression, because all they had been hearing were the debates on the floor of Congress about internet porn, more or less.
Ari Cohn: And that's exactly, for instance, where FIRE's own user lawsuit against the Utah law was super important, and our amicus brief in the 10th Circuit, where we're telling the stories of the users and the positive impact that social media has had on their lives. That's why those stories are so important on the front lines of the litigation, because it helps do exactly what you just said, Corbin, which is to make the story clear that this is not some kind of abstract theoretical conversation about doctrine and rights. These are real people, and this has a real effect on their lives, and we're gonna tell those stories.
Nico Perrino: All right, Ari, I think we're gonna leave it there. Ari Cohn, of course, lead counsel for tech policy at FIRE. Corbin Barthold is internet policy counsel at TechFreedom. He also has his own tech podcast. Remind me, Corbin. What's the name of that podcast?
Corbin Barthold: The Tech Policy Podcast.
Ari Cohn: The one and only.
Corbin Barthold: Creatively named. Yes.
Nico Perrino: Well, you gotta get the SEO in there. Right? Search engine optimization rewards a lack of creativity. It rewards something that's just straightforwardly descriptive. That's why –
Corbin Barthold: You sound like Justice Barrett, you know. It's not expressive.
Nico Perrino: That's why our podcast is not just called So to Speak. It's called So to Speak, the Free Speech Podcast, 'cause you gotta get "free speech podcast" in there. All right, folks. I am Nico Perrino. And this podcast is recorded and edited by a rotating roster of my FIRE colleagues, including Sam Li and Chris Maltby.
This podcast is produced by Sam Li. To learn more about So to Speak, you can subscribe to our YouTube channel or our Substack page, both of which feature video versions of these conversations, and you can follow us on X by searching for the handle freespeechtalk. Feedback can be sent to sotospeak@thefire.org. Again, that is sotospeak@thefire.org. And if you enjoyed this episode, leave us a review on Apple Podcasts or Spotify. Those are the two most helpful places you can leave us a review. Reviews do help us attract new listeners to the show. And until next time, thanks again for listening.