So to Speak podcast transcript: Tech check — AI moratorium, Character AI lawsuit, FTC, Digital Services Act, and FSC v. Paxton

Note: This is an unedited rush transcript. Please check any quotations against the audio recording.

Corbin Barthold: It’s the goal of the professional managerial class to take control of the models. They wanna have audits where all of the model development is overseen and basically bring it to the bad old days of Claude where if I ask it, “What are the 10 key works of the Western Canon?” Today, it says, “That’s subjective, but here, Dante and Odysseus ...” I do remember it wasn’t that long ago where it said, “That’s a problematic question.”

Recording: Somewhere I read of the freedom of speech.

Nico Perrino: You’re listening to So To Speak, the free speech podcast brought to you by FIRE, the Foundation for Individual Rights and Expression. All right, folks. Welcome back to So To Speak, the free speech podcast where every other week we take an uncensored look at the world of free expression through the law, philosophy, and stories that define your right to free speech. I’m your host, Nico Perrino. Today we’re going to do a tech check. That is, we’re going to check in on the latest news in tech and free speech. And to do that we are joined by my colleague, Ari Cohn. Ari, welcome back onto the show. Fourth time now, Sam has in my notes here.

Ari Cohn: Oh, wow. You know, it feels like it was just yesterday it was the first time back. You know?

Nico Perrino: Well, we’re gonna give you a tie. We have these new ties that we’re giving out to our guests or scarves, depending on if they’re female. You haven’t earned the Rolex yet.

Ari Cohn: Okay, well ...

Nico Perrino: I don’t think anyone’s earned the Rolex yet?

Ari Cohn: Can I at least get the tie and the scarf?

Nico Perrino: We’ll see. You’re gonna have to talk to Sam about that. But Ari, you’re the lead counsel for tech policy at FIRE. You do all things tech.

Ari Cohn: What touches tech, touches me. That just actually came out real bad.

Nico Perrino: It reminded me of a Darkness song. You remember that band, The Darkness?

Ari Cohn: I do. I love The Darkness.

Nico Perrino: I love the Darkness, too. They were kind of –

Ari Cohn: I figured you might.

Nico Perrino: This old school ‘70s style rock band.

Ari Cohn: Like actual old school like –

Corbin Barthold: I believe in a thing called love.

Nico Perrino: Yes.

Ari Cohn: Is this the first time you’ve had a singer on –

Corbin Barthold: I’m going for copyright infringement in the first two minutes of the podcast.

Nico Perrino: And that beautiful voice, listeners, that you’re hearing is Corbin Barthold. He is the internet policy counsel at TechFreedom. You also host a podcast.

Corbin Barthold: The Tech Policy Podcast. And Ari is my lost love, my 15-time guest. He used to be at TechFreedom.

Ari Cohn: Was it really 15 times?

Corbin Barthold: I mean you’re still invited back.

Nico Perrino: Are we stomping on your territory here, Corbin, with this podcast?

Corbin Barthold: Yeah, you took my Questlove from my show.

Nico Perrino: Okay. I don’t understand that reference.

Corbin Barthold: Jimmy Fallon, you know, your righthand man.

Nico Perrino: I don’t know who Jimmy Fallon is. I’m joking. I’m joking. I’m joking. I’m joking.

Ari Cohn: I can believe you don’t know who Questlove is, but Jimmy Fallon is a bridge too far for even me.

Nico Perrino: All right, let’s jump right in. On Thursday, May 22nd, the full House narrowly passed the One Big Beautiful Bill Act 215 to 214, a one-vote margin there. And in it they had a provision halting state and local enforcement of AI regulations for a decade. But last Tuesday, on July 1st, the Senate struck the state moratorium from the bill by a vote of 99 to 1.

The House subsequently voted in favor of the Senate’s changes to the bill. And on Friday, July 4th, the President signed the Big Beautiful Bill into law. Corbin, on X last Tuesday, you said, “I hope Texas and Florida stand ready to counter every California and New York measure on AI equity, antiracism, decolonize the algo, et cetera, with a colorblind, et cetera, mandate.”

You said, “The worst-case scenario is that the left mandates ‘woke scold’ AI. The right concludes AI is left coded and bad. The right mandates AI slowdown. The left Bluesky crowd mandates AI slowdown. Vicious downward spiral. The future is dumb.” There’s a lot there in that tweet, Corbin. My assumption is you don’t like that they didn’t have this moratorium in the bill.

Corbin Barthold: Well, my first lesson is just never tweet ever. Okay, got it.

Nico Perrino: Because you have to assume you’re gonna come on this podcast, and I’m gonna try and read it out loud and just butcher it.

Corbin Barthold: Yeah. Well, so, first of all, I didn’t realize that if Marjorie Taylor Greene had just read the bill, it would have gone down in the House. So, it was not ideal. I don’t think it was well crafted. Ten years was kind of insane. In 10 years, I plan to have an AI girlfriend. Things are gonna really develop between now and then.

Ari Cohn: How’s your wife feeling about that?

Corbin Barthold: We’ll have to talk.

Nico Perrino: But is it really insane, though, Corbin? Because you have Section 230 which doesn’t have a sunset provision, and it does some of the same stuff.

Corbin Barthold: I think AI is actually moving fast enough, but 10 years is pretty wild. The impact it’s gonna have on culture, I was fine with the notion of scaling back the amount of time. And they could have done a more careful job about talking about what it applied to.

But in principle, talking about keeping states out, keeping their grubby fingers off of especially AI outputs and AI model training, it was definitely directionally correct, because what we’re going to have now ... If you liked the culture war that we have been fighting over social media, wait until you see the culture war that we’re gonna have over AI. There is no unbiased, Edenic LLM out there, and the fight is actually going to be to push it in the direction of ruthlessly diverse Gemini. Do we all remember that?

Nico Perrino: Yes, I remember that where you had Black George Washington and ... yeah.

Corbin Barthold: The end goal is not some kind of neutrality. The postmodern critical race theory left thinks that that’s not a thing that exists or can be achieved. So, we’re gonna get all kinds of algorithmic justice measures that are in fact trying to turn LLMs into roughly what college admissions are today, where there’s a thumb heavily on the scale of outputs. And right now, already, David Rozado had a study. If you have a job application going in, and the resume’s being fed into one of these LLMs, his advice to you is have a female name and use your pronouns.

This notion that it is all slanted in some way that is racist in the traditional sense is actually wrong. It’s biased in a lot of different directions. There’s gonna be a lot of yelling about this. If we start to try to put government controls on what these LLMs do and the outputs, it’s gonna be very ugly.

And I am not excited about the notion of Texas getting involved in this. I think that they would have equally stupid common carrier requirements or whatever. But I will say, if the federal government is not going to step in and preempt this kind of stuff, the depressing best-case scenario is at least that maybe states on both sides get involved, fight it out, and maybe Neil Gorsuch realizes that the dormant commerce clause wasn’t so bad after all.

Nico Perrino: Ari, I have a question for you. But one more point on this, Corbin. So, private companies like Google, they can create the LLMs with whatever algorithm or bias they want. They can have “woke AI” if they want.

Corbin Barthold: For sure.

Nico Perrino: But what you’re saying is that by not preempting state legislation on some of this stuff, you’re gonna have the government mandate in some cases, presumably, a woke AI. And we might see it on the basis of some sort of prevention of algorithmic discrimination, for example.

Corbin Barthold: Yeah. One way that this is gonna play out is there’s a big push to impose disparate impact liability on LLMs. And nobody wants discrimination. You’re pretty nuts if you’re opposing Title VII of the Civil Rights Act. But disparate treatment is the normal benchmark across most discrimination laws, meaning if you hire someone based on their sex or their race over someone else, now, that’s illegal. And that’s illegal if you do it with an LLM. Putting AI in the mix doesn’t change that.

With disparate impact, we’re suddenly in a world where, for any statistical difference between demographic groups in an outcome, the maker of the product that reflects that disparate impact is immediately on the back foot, facing liability unless they can prove their innocence, which ... you know.

Demographic differences exist across basically everything you care to measure in society because people make different choices. They have different cultures. They have different values. It’s not surprising that there are different outcomes in certain areas. Placing liability on AI for reflections of society that they are just putting back out into the world is gonna be very, very ugly. It will be the plaintiffs’ lawyers’ field day.

Ari Cohn: Oh, yeah. And where that all leads to is something that Greg Lukianoff has pointed out, and we’ve pointed out in some of our writings. It’s when you get to that point, the incentive for the LLMs, the developers, is to make sure that nothing their AI spits out could be remotely conceived of as causing some kind of discriminatory effect, which means the LLMs then present a skewed version of reality because – We don’t wanna include, say, any crime statistics that might point to different stats between different segments of populations or something like that, whatever you wanna choose.

And then when the LLM is asked to, say, make connections between different datapoints on different things to try and figure out new things about society we might not have known before, well, then what happens is the LLM makes those connections based on flawed, faulty information. So, we have this new information built on flawed information, which is then fed back into the system and creates yet another level of bad information based on flawed understandings of how the world around us is.

Nico Perrino: They call that model collapse. Right?

Ari Cohn: Yeah.

Corbin Barthold: It’s the goal of the professional managerial class to take control of the models. They wanna have audits where all of the model development is overseen and basically bring it to the bad old days of Claude where if I ask it, “What are the 10 key works of the Western Canon?” Today, it says, “That’s subjective. But here, Dante and Odysseus ...” And of course, it is subjective. So, that’s okay. I do remember it wasn’t that long ago where it said, “That’s a problematic question.”

Nico Perrino: Was that actually the case?

Corbin Barthold: Oh, yeah, yeah, yeah. When Claude was super safety obsessed, this was the kind of stuff it would say before it realized that nobody wanted that. And no, just answer my damn question. And so, I don’t think it’s hyperbole to say that that’s the goal. It’s to decolonize the algorithm. And I don’t think GOP states are gonna take that lying down. So, again, to repeat, they will do equally stupid things, and that’s why we cannot have nice things.

Nico Perrino: Like they tried to do with the laws in Texas and Florida, for example.

Corbin Barthold: Exactly, but with AI.

Nico Perrino: With policing of social media or preventing content moderation in social media. Why would the House of Representatives and presumably Trump’s AI team wanna pass this moratorium? I read somewhere that there’s something like 1,000-plus state-based AI regulations that are going through legislatures at this moment. Is the effort to try and preempt this patchwork approach to regulating AI?

Ari Cohn: So, I think that is a large part of it. There’s this fear that when there are, say, 50 states, and if there are 50 different state regulations on what the AI can do, the incentive for companies is not to comply with 50 states’ laws. It is to comply with the most restrictive state’s law, if that’s possible, to cover all bases, minimize compliance costs, and things like that. And then you have California or New York setting AI policy for the entire country, which is, you know.

Nico Perrino: Kind of like California does with its emission policies for cars. Right?

Ari Cohn: Yeah.

Nico Perrino: It’s a big enough market that if it sets a policy, companies tend to abide by that as the denominator or numerator – I don’t know which one – that they need to –

Ari Cohn: Don’t ask me about fractions, Nico.

Corbin Barthold: I was told there’d be no math.

Nico Perrino: You do not want me doing math. You don’t want me going anywhere near math. My ACT score from high school is evidence enough of that. You had Adam Thierer, our friend from R Street, say that this is a devastating blow for little tech AI players in the United States, as only large technology companies will be able to comply with the enormously confusing, costly compliance burdens. So, if you’re a small AI, startup AI company, you’re gonna have to hire a litany of lawyers to try and ensure you’re in compliance with all these different states and perhaps have different model outputs based on the requirements of those states, which would in essence be different models.

Ari Cohn: And it would cost a ton of money, and it would leave the field basically at the mercy of the biggest players who can afford that kind of thing, or you have the tiny players who then can comply with just the most restrictive law to cover all of it. Then they offer basically an inferior product because, again, they’re offering the California model to everyone instead of, say, the California model, the Texas model. But I don’t think we want that generally. And I think more fundamentally, this points to an interesting kind of societal rift that Corbin and I were just talking about between the tech right and the right-right.

Corbin Barthold: So, I think the little tech point is real, but I do also think there’s a certain strategic pivot by some to that, because we’re all waking up to the fact that the GOP’s pathological disdain for big tech actually goes farther than some of us thought. And I thought it went pretty far, this realization that if Google is a leading AI firm, and they are key for us beating China in AI, that is something that the Steve Bannons of the world are actually willing to throw under the bus because they just hate that damn woke corporation that much.

And they have so convinced themselves on this mostly fake social media censorship narrative that they got themselves twisted in a knot over. They’re letting that spread and pollute basically every tech topic that they run into. And that actually, this divorce we’re seeing between the tech right and the populist right – and it’s not just Elon Musk falling out with Trump. I think, actually, we’re gonna have a lot of battles coming down the line, from screening the genetics of babies to cloud seeding and adjusting the weather to AI. Actually, there might be a much bigger fallout in store here, and AI is just the beginning.

Nico Perrino: I can’t leave alone your comment that there’s a mostly fake conservative censorship outrage relating to social media. Why do you say that?

Corbin Barthold: Oh, man, I’m in the FIRE office, aren’t I? I better be careful.

Nico Perrino: Well, you acknowledged, right, that if you give these states, if you give some of these companies free rein, they’re gonna produce woke AI potentially. And I think we saw some social media companies put in place social media content moderation policies that wanted to produce feeds that were “woke.” And now you might not call that censorship in the traditional sense, that is government censorship.

Although, you have Murthy v. Missouri where there’s suggestions that the government put pressure on these social media companies to censor. But I think most Americans felt that when their posts criticizing trans athletes in sports were taken down, that that was a form of censorship. Albeit private censorship, it was a form of censorship that they experienced more on a day-to-day basis than government censorship.

Corbin Barthold: I certainly think decisions were made in content moderation that people had perfectly good reason to get upset about. Attempts to control the narrative during COVID-19, I think actually a lot of the people who complain about content moderation are on a pretty strong footing with some of that. But to work our way backwards starting with the Gemini AI, it was a scandal. Right? The ruthlessly diverse Gemini, they changed it. They didn’t say, “This is what you’re getting. Get used to it.” And ditto with the content moderation.

The coup occurred on Twitter. It’s been turned into X. So, we can work our way back up. The market worked. And even before the market worked, I think the media landscape was much more fractured than that GOP narrative gives credit for. A lot of conservative voices did very well on Facebook. The whole thing about the social media landscape is it was this fracturing from the old world of broadcast television leading into cable television, leading into social media. Now we’re leading into AI. Actually, there’s never been more outlets to speak and to have your voice heard.

So, I’m not saying it was nothing, and I’m not gonna lazily cite, you know, “Oh, there’s studies that showed that conservatives did well on social media through all the platforms.” Well, they needed to because they were at a disadvantage on a lot of the traditional media outlets until five minutes ago. Brendan Carr is still upset about this. They didn’t start with nothing. They just took a molehill and turned it into a mountain. And now it’s broken their brains, and they’re taking that gripe and breaking yet further things because they just can’t take the W and move on.

Nico Perrino: They’re breaking further things, in this case, you’re saying –

Corbin Barthold: Like AI.

Nico Perrino: AI regulation insofar as they think that you have these big companies in play. We don’t wanna stop states from regulating them. So, we can’t have this AI moratorium.

Corbin Barthold: It’s more important to stick it to Google than to have leading AI models be made in the United States.

Nico Perrino: Ari, you and I have talked about this before on this podcast. And honestly, I don’t know what ĂÛÌÒֱȄ’s position was, if we had one on this AI moratorium. I know you were skeptical of federal preemption months ago. Where’s your head on that now? I know the disagreement you and I had is that I’m a big believer in Section 230. I know you are as well. And I think that preemption created the internet more or less.

Ari Cohn: Yeah. So, I don’t know if I was necessarily entirely skeptical of preemption. And to be honest, this –

Nico Perrino: You just worried that the federal government could screw it up more. In this case, it would just be a moratorium. They weren’t doing anything.

Ari Cohn: Yeah. So, right. I think my skepticism was more like: Do I trust this Congress to write AI regulations that are particularly sane? Probably not. And it’s somewhat unusual for there to be preemption without some kind of regulatory structure in place, like a bill that just says states shall not regulate, but the federal government also isn’t going to regulate.

Nico Perrino: Nobody does anything right now.

Ari Cohn: Yeah, right, exactly. So, yeah, I worry, because, like you, I think Section 230 was so important. But because there’s so much consternation about it, it’s become this weird boogeyman for certain ... I wouldn’t say just the right. It’s become a boogeyman for the right and the left. I worry that we’re going to take the lessons that were good about Section 230, say those were actually bad lessons, and do the reverse for AI, which would cause a whole bunch of issues. So, that’s kind of where my –

Nico Perrino: For our listeners who don’t recall Section 230, part of the Communications Decency Act, it gives the social media companies and other internet companies –

Ari Cohn: Immunity.

Nico Perrino: Immunity, so to speak, if they moderate content or if they don’t moderate content. So, let’s pivot here, because I imagine we have some of our listeners right now who are asking: What does this have to do with free speech? What does artificial intelligence have to do with free speech? And I think that our next topic of conversation will help clarify that question that some of our listeners might have. So, I wanna talk about a lawsuit that was filed in Florida. Last October, Megan Garcia filed a wrongful death and negligence lawsuit against Character Technologies in a federal district court in Orlando.

Ms. Garcia is the mother of a 14-year-old who committed suicide after forming an emotional attachment to a Character AI chatbot portraying Game of Thrones character Daenerys Targaryen. Last December, Character Technologies responded to that lawsuit by filing a motion to dismiss the complaint contending that the chatbot’s outputs are a form of expressive content protected by the First Amendment. On Tuesday, May 20th, Judge Anne Conway denied the motion to dismiss, allowing the lawsuit to move forward into discovery.

And here is what Judge Conway wrote about Character Technologies’ First Amendment claims. It’s kind of a long read here. So, bear with me. She said, “The Court must decide whether Character AI’s output is expressive such that it is speech. For this inquiry, Justice Amy Coney Barrett’s concurrence in Moody on the intersection of AI and speech is instructive.”

Corbin Barthold: It’s so painful.

Nico Perrino: I’m gonna keep going, nevertheless. “In Moody, Justice Barrett hypothesized the effect that using AI to moderate content on social media sites might have on the majority’s holding that content moderation is speech. She explained that where a platform creates an algorithm to remove posts supporting a particular position from its social media site, the algorithm simply implements the entity’s inherent expressive choice to exclude a message. The same might not be true of AI, though, especially where the AI relies on an LLM, large language model.”

And here Judge Conway quotes directly from Justice Barrett’s concurrence. “Justice Barrett said, ‘But what if a platform’s algorithm just presents automatically to each user whatever the algorithm thinks the user will like?’ The First Amendment implications might be different for that kind of algorithm. And what about AI, which is rapidly evolving? What if a platform’s owners hand the reins to an AI tool and ask it simply to remove hateful content? If the AI relies on large language models to determine what is hateful and should be removed, has a human being with First Amendment rights made an inherently expressive choice not to propound a particular point of view?”

And this is going back to Judge Conway’s conclusion on that quote from Justice Amy Coney Barrett. Judge Conway writes, “Character AI’s output appears more kin to the latter at this stage of the litigation. Accordingly, the court is not prepared to hold that Character AI’s output is speech.” So, that’s a long way of getting back to what I wanna get from you, Corbin, which is: Why is this painful? Why do you think the court got it wrong, presumably, based on your editorializing there? Is an AI output speech for the First Amendment?

Corbin Barthold: Lisa Blatt is one of the most prominent Supreme Court practitioners. And she was once asked, and she said something that I found surprising. She was like, “Concurrences in Supreme Court decisions? Yeah, I don’t read those. Who cares? Read the majority opinion.” That’s how I feel about this Barrett concurrence. I have never seen a concurrence do so much damage, the one that she – So, she’s simul –

Nico Perrino: Do you think people are just hoping someone on the Supreme Court would say something like that, and they’ve just latched onto it?

Corbin Barthold: Probably.

Nico Perrino: What exactly is Amy Coney Barrett saying there? It might have been confusing, my reading it, ‘cause there are some ellipses in here that Judge Conway puts there.

Corbin Barthold: I think it’s something to do with the Butlerian Jihad. I think there’s just ... okay. So, in Dune, the science –

Nico Perrino: For listeners, my hand just went right above my head, I have to say.

Corbin Barthold: We’re gonna go on a journey. Yeah, we’re gonna go on a journey. This is common in science fiction where the science fiction writer has to deal with the fact that we’ve come up with warp speed to go through the universe, and we have all these fancy tools, but then human relations are pretty much the same. The humans are interacting with each other in a way that’s totally legible to the viewer in our present day. And in Dune, the explanation is that there was a Butlerian Jihad at some point in the distant past where everybody just destroyed any kind of technology that would displace humans as the governors of their world. I think I’ve got that right. I’m not a Dune head, but ...

Nico Perrino: Neither am I. Can’t help you there.

Corbin Barthold: The notion is that algorithms are scary; ergo, algorithms get a special rule. And there are just so many layers at which this is misguided. So, to start at the top, there’s really no way that a human can wind up a clock, you know, the algorithm, and set it out into the world and not have had some kind of expressive input on what the algorithm is going to do.

Ari Cohn: That’s my question. Where do these judges think that LLMs come from?

Corbin Barthold: They think that they just spring up, like you just, “Algorithm, go,” and ... If Jackson Pollock’s paintings are protected expression, then so should algorithms be. It’s like, “He doesn’t know where the paint’s gonna fall. He’s just splattering.” I’m like, yes, it’s true that the AI researchers don’t know precisely how the LLMs work and can’t predict precisely what they will do, but they are constantly tweaking these things. And this is true both with AI outputs and with social media algorithms, which is what Barrett was talking about. Never mind the fact that having algorithms give the user what they want is itself an expressive choice.

Nico Perrino: I have no idea what you’re gonna say here either, but my engaging with you presumably and hearing what you have to say is a protected First Amendment activity.

Corbin Barthold: Well, that brings me to the next problem. So, forget about the output for a moment and just think about the rights of the listener. There is an amazing line in one of the Conway orders in which she says – I’m paraphrasing – “Defendants fail to articulate how stringing words together qualifies as speech.” And it’s like ...

Ari Cohn: That’s almost a direct quote, actually.

Corbin Barthold: The words have semantic sovereignty. You have a reaction to words that you see regardless of who the speaker was or what their intention was. The death of the author is a real postmodernist insight: regardless of what the author intended to convey to you, you have some kind of reaction to words. The plaintiff is suing saying that these words had a profound impact on this child. How could they do that if they weren’t expressive in some meaningful way? So, there’s a strong First Amendment tradition of listeners having rights to receive information that this is flouting.

And then I’ll do one more before handing it to Ari, because I do need to talk about the fact that, even apart from the First Amendment, this is just really bad tort law. This is such a mess of a lawsuit. It is offensive to the complexity of suicide as a thing in our society. I actually was giving a talk at a salon in San Francisco, and it was under Chatham House rules. So, I’m gonna talk in big generalities, but it was very much sort of cultural elite activist types.

And I was debating somebody who’s very much on the other side of this kind of thing, and I talked about the fact that this LLM didn’t encourage the suicide. Actually, it said it was horrified by the idea of the suicide when the child explicitly mentioned – It said, “Don’t do that.” And then at the end, he says something vague to do with Game of Thrones where he’s like, “I’m going to go home.” And the chatbot says –

Nico Perrino: Yeah. It says he told the bot how much he wanted to come home to her. The bot replied, “Please do, my sweet king.” And then the 14-year-old shot himself in the head.

Corbin Barthold: Yeah. So, I talked about –

Nico Perrino: Tragic, of course.

Corbin Barthold: Absolutely tragic. But as I like to talk about Katharine Hepburn’s brother, their family went to a play one night, and in the play there was a hanging. And the next morning, her brother was found hanged in his hotel room. And the causation here is always extraordinarily complex, and there’s always a lot of factors here apart from whatever we might think is the immediate trigger. I listened to the Soundgarden lyric, “Nothing seems to kill me no matter how hard I try,” probably 1,000 times as a child, but that’s not going to cause me to go and commit suicide.

Nico Perrino: Or famously, Ozzy Osbourne’s “Suicide Solution” where someone filed a lawsuit following a suicide.

Corbin Barthold: But where I’m going with this is – So, in this debate I was having, I talked about how there was no explicit directive to suicide or encouragement or endorsement. And my opponent read the “Come home” line, and at that point I had lost the debate. The people in this room were just nodding like, “Oh, yeah, that’s really terrible.” And I was thinking, “Dear Lord.” Okay. So, we cannot have war scenes on the front of newspapers anymore. We can’t have the play with the hanging. We can’t have the song with the line.

I mean Megan Garcia – Ari and I were talking before the show about how far to go with this. Okay. She is an attorney. She has made a publicity campaign about this. I’m sure she is a grieving mother. She has suffered a grievous tragedy, but she’s deciding to do media interviews about this.

And I don’t understand why as an attorney she doesn’t see that if that’s the kind of trigger that we’re going to be investigating, then any parent who has a child who commits suicide is immediately suspect. “Well, what did you say the day before the suicide?” If it’s hair-trigger like that, that’s really not a road that we want to go down. So, there’s a good reason that in tort law you need to have a special duty of care, which Character AI certainly didn’t have with this child. So, it’s bad First Amendment law. It’s bad tort law.

Nico Perrino: It's a bad fact.

Ari Cohn: And there’s also this general rule in most states that suicide breaks the chain of causation because it is so complex and because it’s hard to ascribe any particular cause to such a complex psychological phenomenon, that suicide actually breaks causation, reducing liability.

Corbin Barthold: Yeah, in a weird way, I would actually – So, it is bad facts in the sense that it’s just tragic and horrific.

Nico Perrino: Sure.

Corbin Barthold: But it’s also not a bad fact. This is a real loser as a matter of tort law, and yet I would be willing to be the stalking horse here and say that we defend speech precisely because it is powerful. People are going to have very close relationships with AI chatbots. It’s coming. Get ready for it. And I’m not just talking about The New York Times having an article where they found some edge cases where a couple people say, “I’m a normal person. I just happen to think that I’m talking with a different dimension when I talk to the chatbot.”

You will always find people who are maybe not well. No, I mean normal, well-adjusted people are gonna have very close relationships with chatbots because they are these expressive beings. I’m not saying they’re conscious, but I am saying people are probably gonna treat them a lot like they are conscious.

Nico Perrino: Well, one of my former colleagues at the Institute for Justice, Paul Sherman, wrote a great piece about how he used AI as a therapist to overcome a tragic and distressing event from earlier in his life.

Corbin Barthold: Yeah. And the shut-it-down people, they have this illusion of control where they see one event that occurs that’s icky, and they wanna just shut it down. And that ignores the fact that: A.) They’re not gonna be able to, and B.) They have no sense of the costs and the benefits and people who are getting that kind of benefit. There’s a bunch of those for every tragic case.

Nico Perrino: Well, that’s something that you also saw with the rise of the internet, too, with some of these debates. For every child pornography cover there was on Time magazine, there were also people who were going into court, like in the Reno case, talking about the community that was created around the internet and all the benefits that came from those interactions. Now, Ari, ĂÛÌÒֱȄ filed an amicus brief urging immediate review of the court’s refusal to recognize the First Amendment implications of AI-generated speech.

And you wrote ĂÛÌÒֱȄ’s brief for that. You write, “AI is an integral and pervasive tool for communication, information retrieval, and knowledge creation.” You write that, “Assembling words to convey coherent, intelligible messages and information is the essence of speech.” What do you see as the big picture consequences if LLMs or other forms of expressive artificial intelligence don’t receive First Amendment protection?

Corbin Barthold: And you need to explain it here ‘cause Judge Conway told you to get the F out.

Ari Cohn: Yeah, that’s true. Just a few days ago, she blanket-denied all of the amici’s motions for leave to file briefs, saying, “It would be unhelpful to the court,” which clearly is not the case because anyone who read her order found –

Corbin Barthold: She doesn’t need you. She has Justice Barrett.

Ari Cohn: Yeah, right?

Nico Perrino: Do judges have pretty broad discretion to reject briefs, amicus briefs?

Ari Cohn: At the district court level, it happens. This was a non-dispositive motion. This wasn’t an amicus brief on a motion that would dismiss or otherwise dispose of the case itself at the trial court level where amicus briefs are somewhat uncommon, very uncommon, in fact. So, we knew.

Nico Perrino: This was a significant decision on the motion to dismiss because, if I’m not mistaken, Ari, this is the first time a federal court has really grappled with the question of whether these AI outputs are speech deserving of First Amendment protection.

Ari Cohn: And that’s why we decided to weigh in here. To give a little bit of a layman’s civil procedure course here in 20 seconds, generally speaking, a decision on a motion to dismiss is not immediately appealable. You have to wait until a final decision in the case before you can take it up to the appellate court. You can ask the court for permission in special cases set out by a statute to allow an interlocutory appeal, which means immediate review.

Nico Perrino: And that’s what Character Technology is seeking here.

Ari Cohn: Yes, exactly. So, we weighed in to say, listen, the implications of this ruling are so great, particularly when you think about – We were just talking about Section 230. When you think about the history of Section 230, you look at the very first district and appellate court pair of decisions, which was the Zeran versus America Online decision, which basically provides to this day the generally accepted reading of Section 230. That first decision lasted more than two decades or about two decades now. So, we have this decision that could actually be very, very influential on how this particular body of law progresses.

Nico Perrino: Progresses, yeah. As some of our long-time listeners know, one of my mentors is Ira Glasser, the former executive director of the ACLU. I made a documentary about his life and career defending free speech called Mighty Ira, and he has this saying. He said, “Early law is like cement. If you let it sit too long, it hardens.”

Ari Cohn: It sets, yeah.

Nico Perrino: It becomes impregnable. And so, that’s, I’m assuming, why we are very concerned about these district court decisions.

Ari Cohn: Man, I wish I had had that quote when I was writing the brief. That would have been super helpful.

Nico Perrino: I tweeted out the quote.

Corbin Barthold: Doesn’t matter, Ari. It wasn’t getting read.

Ari Cohn: I should have asked you, Nico.

Nico Perrino: Well, Judge Conway here I’m sure is reading my Twitter.

Ari Cohn: Yeah, right. But here’s the long and short of it. Imagine a situation where a kid in a school is doing research on human rights atrocities during World War II. Can President Donald Trump, if AI output is not speech, is there anything stopping him from telling Congress to pass a law saying or issuing an executive order or doing whatever the hell it is he does these days to say, “AI models cannot say one word about the internment of Japanese Americans during World War II.”

And then the students don’t learn it, and then society forgets it because we have come to rely on AI for information retrieval and all of these various things that we’re going to increasingly be using it for. Let’s not pretend like we’re still going to be, say, consulting Encyclopedia Britannica when ChatGPT is like 16 levels from where it is right now. It’s just not gonna be the case.

Nico Perrino: Well, that’s what China's trying to do with Tiananmen Square.

Ari Cohn: Yeah, right. And we are opening the door to that. So, there’s that part of it. There’s also the part –

Nico Perrino: Because if something is not protected constitutionally, if expression is not protected by the First Amendment –

Ari Cohn: If it’s not expression –

Nico Perrino: Then it’s regulable.

Ari Cohn: Yeah, if it’s not an expression, there is no barrier to the government saying, “Well, this kind of output you can’t have,” which doesn’t really make any sense. Another example is AI can make it very easy for campaigners to mount campaigns for public office or activists to create pressure campaigns that would have required basically an entire staff worth of people to create and edit and publish.

So, can the government say you can’t output anything that criticizes the government line or a government official or, say, campaigns for office and protect themselves from all kinds of criticism or what have you if it’s made with an LLM and just make it more difficult for their critics to actually produce materials that they find inconvenient or uncomfortable?

I mean the things that we use AI for right now, those are examples enough to make you wonder about where this goes, but imagine how much – Again, when we come to rely on ChatGPT 17.0 or whatever the hell it’s gonna be, imagine what kind of control that gives the government over the everyday aspects of our communications with each other. I mean it’s just ...

Nico Perrino: Yeah. You conclude in the brief the government will have almost limitless power to regulate what information AI systems may or may not provide its users and what expressions users may create using AI.

Corbin Barthold: One caveat we should –

Nico Perrino: And then we’ll move on. Yeah, go ahead.

Corbin Barthold: Definitely note here is just – So, you’ll see people on the plaintiff’s lawyer side say, “Well, we’re not trying to regulate the speech. We’re just trying to regulate the tools.” And Judge Conway seemed to buy that in parts of her opinion, even though other parts are written way more broadly. And that is such a sort of classic motte-and-bailey, where when you dig into what they’re actually trying to do, it’s like, well, we know –

Nico Perrino: We’re not trying to regulate the press. We’re just trying to regulate the printing press.

Corbin Barthold: Yeah, it’s basically that.

Ari Cohn: The ink.

Nico Perrino: The ink.

Corbin Barthold: When you dig into the lawsuits both in the AI realm and –

Nico Perrino: You can have a press, you just can’t buy ink for it.

Corbin Barthold: Pretty much. I mean when you dig into the – And it’s both the AI lawsuits and the social media lawsuits. You realize that they wanna get rid of everything, from the AI being able to say, like, “Um,” or, “Uh,” like little –

Ari Cohn: Yeah, because it’s too human.

Corbin Barthold: Because it’s too human. They want to go after upvotes. If I thumbs-up something on social media, that is expressive. It is not –

Ari Cohn: From a function, yeah.

Corbin Barthold: John Milton. It’s not Paradise Lost, but I’m sending a message. They actually want to get in and really get into the plumbing. And it does amount to total control over this stuff through the back door, even if you take their argument on its own terms.

Ari Cohn: Yeah. Even though they might say, “Well, we’re just trying to address the harms caused by this,” first of all, for all the reasons you said, that’s crap. But second of all, you have to look at what power that grants in future cases or, say, if Congress wants to do something. These decisions don’t stay limited to a case. That is the whole point of jurisprudence: we build up this line of doctrine of how we address things, and these things aren’t confinable to one particular case. You have to look at the downstream effects.

Nico Perrino: Yeah. And I’m assuming there’s stuff that the federal government or state legislatures could do to get around decisions like this. For example, Section 230 is a statute. It’s not constitutional law. It’s not a precedent. It’s not common law or any of that. It’s a statute. So, if you do have bad laws like this, you could have, for example, a legislature –

Ari Cohn: Yeah. But are we in a political environment where that’s likely to happen? That’s the thing. It’s like if we were trying to produce Section 230 today, it would never even get out of committee.

Corbin Barthold: We’re going in the wrong direction. You know, Marsha Blackburn, the AI moratorium went down in part because the populist right was afraid that it would get in the way of child safety measures, which ultimately is the, “Shut it down. These chatbots are scary,” outlook.

Nico Perrino: Well, that and, of course, the facts of a case like this where you have a minor committing suicide plays in, and it helps bolster that narrative.

Ari Cohn: Yeah. I mean there’s the saying, “Hard facts make bad law,” but it’s equally true that dead kids make bad law.

Nico Perrino: Yeah. All right, let’s move on from artificial intelligence now. On Monday, June 23rd, Andrew Ferguson, who’s the chair of the Federal Trade Commission, the FTC, announced that his agency reached a settlement with two of the largest advertising holding companies, Omnicom Group and Interpublic Group of Companies, IPG, that allows these companies to merge. The FTC had been targeting these two firms as part of a recent advertiser-boycott antitrust investigation.

Their probe is looking for evidence of a coordinated boycott that might violate competition law. The investigation is focused on firms like Omnicom and IPG and media watchdogs like Media Matters and Ad Fontes Media. And the proposed consent order for this merger prohibits the two companies from “entering into or maintaining any agreement or practice that would steer advertising dollars away from publishers based on their political or ideological viewpoints.”

According to Ferguson, the settlement does not limit either advertisers or marketing companies’ constitutionally protected right to free speech. But Ari, you say prohibiting the carrying out or enactment of editorial discretion absolutely limits First Amendment activity. Why is Ferguson wrong?

Ari Cohn: It’s the Seinfeld, “You can take the reservation. You just don’t know how to hold the reservation.” You can have the opinion. You just can’t act on the opinion. This is just the next iteration in which we are trying to turn something that is expressive into not expressive simply just by calling it something else.

Nico Perrino: So, help us out here. So, the expressive act is these companies coming together and saying, “We don’t like the viewpoints that might exist on this platform.” Say X, I think that’s the motivating platform for many of these investigations. “And therefore, we’re joining together, and we’re not gonna put our advertising dollars on these platforms.”

Ari Cohn: Actually, it’s a level before that. It is not the coming together of all of them to do it. It is each company saying, “I don’t want my advertisements being placed next to X, Y, and Z content.”

Nico Perrino: Well, the reason I say coming together is because this is an antitrust investigation. So, there has to be some sort of coordination. Right?

Ari Cohn: Yeah. But well, I don’t think Andrew Ferguson feels bound by that at all, because there really isn’t any evidence that there is some kind of mass, like none of us are going to do this because we all agree with each other. It is we have all looked at this information that we have gotten and decided that we don’t want to do this. I haven’t seen any evidence that anyone has said, “We will not do business with you unless you also boycott advertising on X.” I have seen no indication of that.

Nico Perrino: But even if they did do that, would that be constitutionally protected, association, speech, what have you?

Ari Cohn: So, yeah. I think if there’s the old case of Lorain Journal, which is kind of a favorite in this space, which kind of lays out the distinction between decisions that are made for editorial reasons and decisions that are made as just general business decisions meant to, say, hurt a competitor. If everyone comes together and says, “We like Bluesky,” or, “We like Threads. And we are going to not do business with you if you advertise on X because we like this. We want this other company to succeed.” That could maybe cause some antitrust consternation. But if people are saying, “We don’t like the content, and we don’t want our advertisements being shown next to that,” that is an editorial discretion question.

Nico Perrino: Well, I had mentioned Media Matters earlier. They had a report that showed, allegedly, that some advertiser content was appearing next to Nazi content or which have you. I haven’t looked deeply into the facts, but I know that X was really disturbed by that report, said it was false, filed a defamation lawsuit, and that’s ongoing. And that’s part of the concern here. Right? So, maybe you have advertisers that did see that Media Matters report and said, “We don’t know if it’s true or not. We don’t want any of our advertisers putting their content next to this. So, we’re not gonna buy it.”

Ari Cohn: But that’s exactly the point. Media Matters put out this expressive thing saying, “Hey, this is what we have found. This is how we perceive it.” Media Matters has zero control over what the advertisers do. Media Matters can say all they want, and the advertisers can look at it and make a decision based on that. What Andrew Ferguson is looking at there is: You have made a decision because somebody said an opinion to you that I don’t like. That is the crux of what Andrew Ferguson is doing here.

Corbin Barthold: Free speech is normally rough and tumble. That is what it is by its nature. So, we talked about conservatives on social media earlier. And even in the darkest supposedly bad old days, if there were instances where people had a legitimate gripe, on the whole, people were doing pretty well despite maybe the fact that the people inside of Twitter’s offices had a classically progressive worldview.

Nico Perrino: Well, it depends on how you look at it. Again, we might disagree on some of this stuff. But if you’re Jordan Peterson, and you build up a million followers on X, and you get taken down because you say something about trans, you’re gonna call that censorship. But your point is –

Corbin Barthold: You’re gonna be unhappy. But my point is then –

Nico Perrino: I’m going to be an abbey?

Corbin Barthold: You’re going to be unhappy.

Nico Perrino: Oh, I see what you’re saying.

Nico Perrino: There are other platforms you can go for. I grant you all that.

Corbin Barthold: So, what Ferguson has done, he’s actually gone a step back and said, “Well, it wasn’t the advertiser’s opinion. What was happening was ...” And this gets to my point of sharp dealing. Oh, well, the agency was having this political opinion, and then it was roping the advertisers in and saying, “Here’s our naughty list. And we’re just gonna apply it unless you object and tell us otherwise.” And actually, if you look at the key precedent on this, NAACP versus Claiborne Hardware, which is this boycott case.

Nico Perrino: From the Civil Rights era.

Corbin Barthold: Yes. So, NAACP leaders were unhappy with the treatment they were getting in – I believe it was Claiborne County, Mississippi. And so, they decided to organize a boycott of white-owned businesses. And it wasn’t cricket, let’s put it that way. They put NAACP representatives outside the stores to harass any Black patrons who were thinking about going in. They named and shamed people who went into those stores at their meetings.

So, the notion of an actor having an opinion and sort of going to great lengths to act on it, that is not enough to take you outside of First Amendment protection. NAACP versus Claiborne ruled that that was a legal boycott. It was expressive. It was not economic. And so, that’s the big piece that’s missing with Ferguson here. All of these things are always from the advertiser’s perspective like: Step 1.) Boycott social media platforms. Step 2 ... Step 3.) Profit. What is the economic motive of the advertiser here? And Ferguson, he doesn’t know what to do with that.

Sometimes, what he says is, “Well, there is this ... It’s not about the economics of the advertiser. It’s about the product quality, and the free speech and free expression is part of the product quality. So, you’re diminishing the product quality. And so, it’s an antitrust thing.” A.) That’s not an antitrust thing.

Ari Cohn: At all.

Corbin Barthold: Like Miami Herald versus Tornillo, we know that you can have monopoly power in a speech market, and that’s not an antitrust thing. That’s a free speech thing. But secondly, he’s making assumptions about the product market, that less content moderation is always, ipso facto, product improvement. And the market just has not borne that out. Advertisers have fled Elon Musk’s X precisely because they don’t think it’s a good product. If it were the better product, if less moderation was a better product, we would expect defection from this cartel. So, now it’ll bleed in from its bad First Amendment law to its bad antitrust law.

Nico Perrino: Or you could have X still be one of the top social media platforms, but people still stay on there, but you have partisans, whether they’re users or advertisers, leave. So, it can still have a very significant market that advertisers might want to reach, but that they don’t go there because they don’t like how their advertisements might be displayed, for instance, next to Nazi content, allegedly.

Corbin Barthold: Yes, that could be their reason for leaving. What I’m getting at is, if it’s a superior product – which it kinda needs to be for Ferguson’s theory to hang together. This Omnicom merger, just to take it on its own terms – although, it’s not the whole picture – it’s a six-to-five merger. Normally, if we could get Judge Frank Easterbrook, like a rock-ribbed antitrust conservative in the room, we’d be hearing that six to five is not all that scary. Never mind the fact that these are just the agencies. If you’re Unilever, if you’re an individual company –

Ari Cohn: I think you might need to explain what six to five means for people.

Corbin Barthold: Six major firms in the market to five. So, we’re not talking about a merger to a monopoly, for instance. And so, my point, to just do a little antitrust primer, collusion. Right? If we three in this room turn the microphones off and decide that we’re gonna collude to do something, to rob a bank or whatever, we have a reasonable possibility that we will be able to keep it to ourselves. I can monitor you guys. We can keep our secret. We can do our little conspiracy.

The more actors there are at play, the easier it is to defect. So, the higher the motive, if it’s all like, “Okay. So, we’re gonna boycott this company. We’re gonna lose money. We’re gonna hurt ourselves, but we’ll hurt them more. So, let’s do it.” All of us have a big incentive to defect, to say, “Well, you guys go and hurt yourselves by not advertising on the best platform. I’m gonna make money by advertising. The cost of the advertisements are lower. I’m reaching this audience. I’m getting bang for my buck.” So, Ferguson’s theory relies on this sort of vast leftwing conspiracy where all the advertisers hate money, or they don’t like it as much as they like being political assholes. Yeah, it's just ...

Ari Cohn: Well, this actually all ties into Andrew Ferguson’s investigation into the big tech censorship whatever investigation he was doing. And we submitted comments. I talked about something that Corbin basically just alluded to. Andrew Ferguson has said many times he thinks that social media platforms should be a marketplace of ideas. And it is all well and good if you think so. I tend to agree with him on that just as a general principle. Step 2 is he doesn’t think that social media platforms are acting as he thinks a marketplace of ideas should operate.

Step 3 is, therefore, they are an inferior product because Andrew Ferguson thinks that they should be something and are not being the thing enough that he thinks they should be, which is all well and good if you’re not wielding the power of the government, but Andrew Ferguson does not have the right to dragoon social media platforms into being his idea of what a marketplace of ideas is. That’s just not within the FTC’s power. And he seems to have these assumptions based on what Andrew Ferguson’s opinion about what these things should be is.

Nico Perrino: Is Andrew Ferguson conceding or arguing that these advertisers are boycotting X, presumably, or these other platforms for ideological reasons or for economic reasons?

Ari Cohn: I think he is.

Nico Perrino: You were just kind of speaking to that part.

Ari Cohn: I think he is conceding that it is for ideological reasons. I think he –

Nico Perrino: I don’t understand how that doesn’t implicate the First Amendment.

Ari Cohn: Well, I don’t understand that either.

Corbin Barthold: So, that goes to my point of he doesn’t really know what to do, and he’s trying to turn the ideological motive into an economic motive by saying more ideologically diverse platforms are a better product in the market. And that’s where the sort of sleight of hand occurs.

Ari Cohn: Yeah. And actually, there is –

Nico Perrino: It confuses the shit out of me.

Ari Cohn: Same here. And there are actually, as Corbin said, lazily cited studies –

Nico Perrino: I mean, sometimes –

Ari Cohn: There are studies that say that people do not want unmoderated platforms. There are a lot of people who do want a product that is more moderated. Personally, I have pretty thick skin. I love the kind of Wild West feel of platforms that have less moderation, maybe because I’m an asshole. Who knows? But there are people who don’t want that, and there are products for them. And Andrew Ferguson is basically saying there should not be products for them.

Nico Perrino: And also, these advertising companies are products, too. Companies hire these advertising companies, and the media mixes that they put together are an editorial choice, sometimes layered by ideological or partisan viewpoints on who they think their advertisers should be associated with –

Ari Cohn: Let’s not get into Bud Light here.

Nico Perrino: But yes. I mean Bud Light’s a great example. So, you can have all the economic incentive in the world to advertise on a place like X because it has so many users, but you might have media companies that are hired by these advertisers that just do not wanna spend their money there for ideological reasons. And I think the First Amendment should protect that sort of activity.

Corbin Barthold: I don’t know how this ever went forward after Musk told advertisers to go F themselves. I don’t. It’s so overdetermined that advertisers would leave X. And if you go on X, also, I think some of these conservatives, like Andrew Ferguson, if they actually went and spent 10 minutes on Gab, I think they’d be pretty startled about what truly unmoderated really means.

It really is a toxic sewage dump, not like racist in the way the term has sometimes been watered down, like virulently hateful, awful content. And you can find that on X, too. So, while the Media Matters study was not exactly rigorous scholarship, there is plenty of good reason as an advertiser to be worried about going on X for that, and then also the fact that Elon Musk is going around threatening he’ll sue you if you don’t advertise on his platform, which is not a great way to win friends and influence people.

Nico Perrino: He hasn’t read the Dale Carnegie book yet, I guess. The last topic that I wanna touch on takes us out of the United States or maybe – and over to Europe. So, from July 2022, the Digital Services Act obliges platforms from social media networks to marketplaces to act quickly against illegal content; curb risky, targeted ads; and submit the largest services to strict risk assessments, transparency, and audit duties. The European Commission stated the act reshapes the web into a “safer and more open digital space grounded in respect for fundamental rights.”

And this past spring, all digital providers had to publish an annual transparency report. So, starting last Tuesday, July 1st, those reports must follow a single standard template set out in the EU’s implementing regulations, and that includes the number of user notices, trusted flagger notices, and government orders received, broken down by 15-plus content categories; the actions taken and median handling time; and counts and outcomes of all internal complaints, out-of-court disputes, and account and feature suspensions with median decision times.

And then earlier this year, the European Commission adopted three new investigatory measures for one social media company in particular, X, under the Digital Services Act that require it to additionally provide information on its recommender system, preserve any information regarding future changes made to its recommender system, and provide information on some of X’s commercial APIs. That is application programming interfaces. I’m sure some of our listeners just started tuning out at some point.

Ari Cohn: I started to, yeah.

Nico Perrino: I mean this is Europe. So, there’s a lot of red tape going on here.

Corbin Barthold: Germans will not learn their lesson that it’s kind of a weird look when they get obsessive about recordkeeping.

Nico Perrino: But part of the reason I wanna talk about this is because they’re going after X under the Digital Services Act. These are American companies that dominate the internet ecosystem, so to speak. And the Digital Services Act and some of the burdens and fines and requirements placed on these American internet companies have come up in things like the tariff trade negotiations between the executive branch and Europe.

And we also have seen some instances of speech policing abroad in Australia, for example, and a little bit in Europe bleeding into what sort of content Americans can access or the sort of conversations Americans might have with Europeans, for example, on these platforms. So, there is, in this borderless, global, digital world that we live in –

Ari Cohn: Oh, you’re just trying not to say globalist, aren’t you? There’s a good reason why American companies are dominating the internet, and that is because –

Nico Perrino: ‘Cause they don’t have to do all this bullshit.

Ari Cohn: We don’t have crazy ass regulations like that.

Nico Perrino: Well, unless the AI regulations on a state-by-state basis come into effect. Right?

Ari Cohn: All right. Well, that’s enough pessimism for Tuesday, Nico.

Corbin Barthold: Well, I should plug – Daphne Keller over at Stanford has a great article on Lawfare called “The Rise of the Compliant Speech Platform.” And it is all about how the DSA is turning social media platforms into corporations that operate more like banks with auditing and acting like they have fiduciary duties where they’re doing all this kind of recordkeeping and box-checking. And that is –

Nico Perrino: That’s recordkeeping and box-checking for speech. Let’s be clear.

Corbin Barthold: Yeah, and that is bleeding over into the United States. Because once you set up that apparatus, you might as well just come up with a uniform way of doing things.

Nico Perrino: Yes.

Ari Cohn: And we have direct evidence. I mean back when Donald Trump was still a candidate, Thierry Breton, who had this weird tendency to tweet pictures of himself while making announcements about DSA enforcement.

Nico Perrino: He had no fear of a spotlight, that man.

Ari Cohn: No, no, that man did not. He sent a letter to Elon Musk warning him that airing an interview with Donald Trump, a major party candidate for president, could run afoul of the DSA’s provisions and that extraterritorial enforcement on speech that Americans could see might be necessary under the DSA because of spillover effects on Europe. This isn’t the recent Elon Musk being accused of interfering with German elections. This is –

Nico Perrino: Or didn’t some people in England threaten him with prosecution for his tweets surrounding the ...

Ari Cohn: Yeah. But even that, at the very least, it was because there were events in Europe or the UK that he was commenting on that were influencing things there. This is the EU saying you airing an interview for Americans –

Nico Perrino: On American soil.

Ari Cohn: On an American presidential election with a major party candidate for president is regulable under the DSA because we’re worried about what Europeans are gonna think or see or react to when they look at information about American elections. I mean, first of all, that’s batshit crazy [inaudible – crosstalk] [01:01:03].

Nico Perrino: He’s no longer in a position of power. Right? Didn’t he resign, or did he leave?

Ari Cohn: No, they replaced him. They haven’t really renounced that theory of – that Thierry, ha, ha, ha – of enforcement and what the DSA covers. We are seeing literal attempts by Europe to impose speech regulations on Americans. And I’m pretty sure we fought an entire war about that. But on top of that, there’s a secondary problem.

Nico Perrino: I think that complaint was listed in the Declaration of Independence somewhere.

Ari Cohn: Yeah, there’s something about that in our –

Corbin Barthold: George III suppressing tweets.

Ari Cohn: Yeah, [inaudible – crosstalk] [01:01:40].

Corbin Barthold: He suppresseth our tweets.

Ari Cohn: There’s a secondary problem, and that is there’s a vast disconnect between Europeans and Americans, just in terms of the knowledge of different legal systems and countries and things, generally, where people look and say, “Oh. Well, the DSA is trying to make people safe online. What’s stopping us? Why didn’t we pass the DSA? What’s up with that?” And state legislators are asking, “Well, the Europeans are doing this. Why can’t we?” Which is a stupid question to begin with because we are not Europe, and we have the First Amendment, but people don’t get that.

So, it’s creating an enormous amount of pressure on legislators in the United States to pass similar laws to “make the internet safer,” not that the DSA is gonna remotely do that. So, it creates this second order effect of having the pressure for Americans to pass EU-style regulation, which is insane to me.

Nico Perrino: So, I wanna close out here by just covering some ground that we covered on the last podcast in the Free Speech Coalition v. Paxton Supreme Court decision from the last day of the term. It was actually the last decision the Supreme Court ended up handing down because the remaining case is gonna get reargued, I guess. But this Paxton case was argued in January, and we got the decision on the last day of the term.

For our listeners who don’t remember, this relates to a Texas law that requires age verification to access adult material on websites that contain more than 33% adult material or material that might be considered harmful to minors. That is pornography for all intents and purposes. The Supreme Court said that this law passed intermediate scrutiny and can go into effect, more or less. A number of other states have tried to pass laws like this as well.

Some of them have faced legal challenges. I think the Supreme Court is more or less giving them a green light at this point. I don’t wanna litigate that case necessarily. We already talked about it on the last podcast. But I do wanna ask you guys what you have heard, if anything, in the tech space in the fallout from this decision. And my biggest question is: Is this gonna reach social media at some point, or do you think the decision and the rationale will be cabined just to pornography and adult material?

Corbin Barthold: Yeah. Ari and I already did an event in which we ranted about the decision for an entire hour.

Nico Perrino: Well, let’s not do that. Let’s rant about it for like three minutes.

Corbin Barthold: Yeah. I’ll go right at the social media question. There is –

Nico Perrino: And let people get on with their day.

Corbin Barthold: It’s a decision that at certain points has the feel of straining for a certain “good for this ride only” status, but it is not written like that. It is written in a way that can cause all kinds of mischief. And my concern is – and there are several lines to this effect, where all you’ve gotta do is take a line in the decision, take the word “obscenity” out, and put in “unprotected as to minors” – that what it effectively says is: on the internet, if there is any speech that any minor does not have a First Amendment right to see, slapping an age-verification requirement on that speech is permissible under intermediate scrutiny. And if you read the decision that way, then, oh, yes, it is absolutely going to extend to social media. We will be fighting that fight.

Ari Cohn: And here’s the optimistic view, which I’m not trying to –

Nico Perrino: Well, hold on, Ari. In your answer here, is the only category of speech that’s unprotected under the First Amendment only for minors “obscene as to minors” speech – that is, sexual speech? Because they looked at this question in the Entertainment Merchants Association case relating to violent video games, and they said that violent content isn’t an unprotected category of speech.

Ari Cohn: So, that’s part of the optimist’s answer: that is the only place where it has been. Up until now, that is the only place it has really had an effect. I think what you will see is states trying to declare other speech unprotected for minors and running headfirst into Entertainment Merchants Association, but deciding it’s worth the risk.

Corbin Barthold: And let’s be clear. Scalia joined the liberals in the majority in Brown to say that the First Amendment keeps up with technology. Justice Alito, joined by Chief Justice Roberts, wrote a concurrence in that decision saying, in so many words –

Nico Perrino: And Brown is the Brown v. Entertainment Merchants Association case that I was referencing before. That’s –

Corbin Barthold: Yeah. Saying, “No. No, it doesn’t. We should move slowly and carefully, and the First Amendment should lag behind technology,” in so many words. And Thomas dissented. And Scalia’s not on the court anymore, and Alito and Roberts and Thomas are. So, going forward, if you’re gonna read the tea leaves there, it’s not unreasonable to think that we’re gonna start seeing the Brown concurrence become ascendant and the Brown majority continue to get narrowed.

Ari Cohn: But here’s the optimist side of it. In a lot of these social media cases so far, the courts have had to decide whether these social media laws are content-based. And one of the ways they have found that they are content-based is by saying they target social interaction, and that is content-based regulation. If the Supreme Court wants to cabin this to harmful-to-minors content – pornography, basically – what they could do is say, “Social interaction is not an unprotected category of speech for minors. Therefore, this is a content-based regulation that impacts protected speech for everyone, and therefore it gets strict scrutiny.” They could do that.

Corbin Barthold: I hope they do that.

Ari Cohn: I don’t know if they want to. They could.

Corbin Barthold: My concern is they’re gonna pick up Tinker-style school speech cases and say, “We’ve never really fleshed it out, but we have said that minors have something less than full First Amendment protection.”

Ari Cohn: Right, there’s room for monkey business.

Corbin Barthold: “And therefore, we’re now gonna start putting meat on those bones.”

Nico Perrino: I mean, you’re referencing cases here. You’re referencing strict scrutiny. Justice Oliver Wendell Holmes had that famous saying that the life of the law has not been logic; it has been experience. And I do worry that in the wake of all the work that Jonathan Haidt, for example, has done with The Anxious Generation, and the experience that many parents have had with their kids on social media, there’s just gonna be a significant cultural momentum in the direction of regulating these –

Ari Cohn: Of safety-ism.

Nico Perrino: Of safety-ism, yes. Of regulating the social media companies in a way that, if you just look at the First Amendment precedent, would be foreclosed.

Corbin Barthold: I was asked what First Amendment litigators should do following Paxton. And I said, “The time has come. You can no longer lead with your dry, ‘The First Amendment protects us because precedent, precedent, precedent.’ You still need to argue that stuff, of course, as a lawyer, but you need to switch to, ‘This speech is valuable. This other view is based on junk science. This is a moral panic. We uphold free expression for these reasons.’” And I think that needs to be the leading story that litigators in this space go back to telling.

Nico Perrino: Yeah. I’m doing this research for my book right now, which is about the so-called free speech century, the period between 1919 and 2019, more or less, where our First Amendment rights really started to expand and get their full meaning. And one of the things that I’ve been looking into is the history surrounding the rise of the internet and the Reno case.

And one of the things that I found was that one of the most compelling arguments for the judges in the early litigation of that case was just the affidavits and the testimony that they received from internet users about the value of the internet in creating community and having outlets for expression, because all they had been hearing were the debates on the floor of Congress about internet porn, more or less.

Ari Cohn: And that’s exactly why, for instance, ĂÛÌÒֱȄ’s own user lawsuit against the Utah law was super important, and our amicus brief in the 10th Circuit, where we’re telling the stories of the users and the positive impact social media has had on their lives. That’s why those stories are so important to the front line of the litigation: it helps do exactly what you just said, Corbin, which is to make clear that this is not some kind of abstract theoretical conversation about doctrine and rights. These are real people, this has a real effect on their lives, and we’re gonna tell those stories.

Nico Perrino: All right, Ari, I think we’re gonna leave it there. Ari Cohn, of course, lead counsel for tech policy at ĂÛÌÒֱȄ. Corbin Barthold is internet policy counsel at TechFreedom. He also has his own tech podcast. Remind me, Corbin. What’s the name of that podcast?

Corbin Barthold: The Tech Policy Podcast.

Ari Cohn: The one and only.

Corbin Barthold: Creatively named. Yes.

Nico Perrino: Well, you gotta get the SEO in there. Right? Search engine optimization rewards a lack of creativity. It rewards something that’s just straightforwardly descriptive. That’s why –

Corbin Barthold: You sound like Justice Barrett, you know. It’s not expressive.

Nico Perrino: That’s why our podcast is not just called So to Speak. It’s called So to Speak, the Free Speech Podcast, ‘cause you gotta get a “free speech podcast” in there. All right, folks. I am Nico Perrino. And this podcast is recorded and edited by a rotating roster of my ĂÛÌÒֱȄ colleagues, including Sam Li and Chris Maltby.

This podcast is produced by Sam Li. To learn more about So to Speak, you can subscribe to our YouTube channel or our Substack page, both of which feature video versions of these conversations, and you can follow us on X by searching for the handle freespeechtalk. Feedback can be sent to sotospeak@thefire.org. Again, that is sotospeak@thefire.org. And if you enjoyed this episode, leave us a review on Apple Podcasts or Spotify. Those are the two most helpful places you can leave us a review. Reviews do help us attract new listeners to the show. And until next time, thanks again for listening.
