I think the love for Earthsea is probably sincere. Joe Carlsmith is also an Anthropic employee and has written long essays about Earthsea. Computer people have always styled themselves as magicians or wizards, learning to speak true names to gain power, so the computer science department becomes Roke. At Anthropic in particular they also agonize over whether the powers they are calling up are too dangerous. Like the wizards of Roke, this doesn't lead them to stop calling up the powers. And they're quite willing to be courtiers.
I also think that spite towards writers and artists is extremely real in many other parts of the AI world (although Anthropic has a literate company culture and good taste).
That said it's an underrated factor that much of the demand for AI writing assistance is from people who don't have strong English skills, but have jobs that require them to work in English. Including a large proportion of AI company employees and scientists. This is probably the modal person who wants Claude to write their emails and summarize documents for them. Obviously few such people work in the Anglophone discourse industry which is why the reporting never focuses on this case.
this is a much more thoughtful comment than my irritated post deserves. I will say I probably feel either “most warmly” or “least unfriendly” (not sure which) toward Anthropic of the AI companies, or I guess at least compared to OpenAI.
My actual big fear with LLMs is not that they’ll end writing or anything but that they will eliminate the tier of jobs that aid in learning stuff in a deeper way—like with coding I have this kind of nightmare vision of ending up in one of those sci fi futures where people are surrounded by tech they have no real ability to understand and can barely maintain.
LLMs make it so easy and tempting to just choose to not understand, especially the better they get.
For coding and math there are countervailing forces I think. For example mathematicians have been getting surprised by ChatGPT digging up useful old proofs that everyone alive had forgotten about, sort of the opposite.
But now we will have to choose to actually learn things, and probably fewer people will.
i didn’t really understand the quiz apart from the whole “most people are freaked out when writers have capital-S Style” thing. which is also true of film, music, or really any other art form i think?
I mean it can’t really “prove” anything it’s just meant to make people freak out.
i agree with you there. it was annoying
What you said about the writer/journalist freakout about AI reminds me of how writers talk about 'slop'. There's this consensus that AI writing is 'slop' and bad and essentially meaningless junk, which I agree with. But writers have been producing junk for ages (no offense!), especially on digital platforms (and before them!). Like, writers and journalists are talking a big game about how what they do is the most meaningful vocation that has ever existed, and how AI will be the death of writing, but then these same writers go off and post listicles and twee roundups and shopping recs. It's not a nice conversation to have, but I think if people are actually going to be honest about AI and the proliferation of bad writing, we also need to talk about how actual writers produce slop just fine on their own. Just a thought!!
I have to admit I’m not really fond of calling things slop and I think if it’s going to be a useful word it has to be restricted to things that are not just “bad” but are literally mindless (as in, do not have a mind behind them). But I do think there should be some soul-searching going on, yeah.
that's fair, it might sort of be a useless word at this point!
i really resisted taking this quiz when i saw it on twitter but eventually i caved in and the premise was so... much weirder than i thought? originally i thought the test was trying to force people to judge between a famous writer's writing and passages that claude had generated from an original prompt -- such that people would have to confront whether claude can produce "better writing" than human writers -- but every text was so blatantly just rewriting exactly the original passage (but more contrived and heavy-handed)
and to me that framework just undercut the stakes of llm writing (a problem that this test seems to be trying to confront)... like yes i could also copy a passage from a hilary mantel book and play thesaurus games with it... why would i feel threatened by that
the idea is that you look at the percentage of people who “prefer AI” and feel like you’re just a dinosaur waiting for a meteor to hit I think
"I’m not knee-jerk opposed to LLMs, which seem like a useful assistant technology for some people who already know what they’re doing, and a disaster for every person who doesn’t (like students)."
We're AI impression buddies, then.
I tell my 9-year-old, who's fascinated by tech, to work on handwriting fluency and expect, now more than ever, to take exams by hand if the goal is mastering anything (it's how math and physics exams worked when I was an undergrad, anyhow).
The 9-year-old replies that it's physically possible to set up exam terminals that are air-gapped from all the tempting aids. In practice, though, local schools' firewalls against temptation have proven flimsy. My own kids have already noticed some cracks to exploit, though not yet as impressive as this nearby example:
"I first knew that D65 had a serious screen management problem back in 2022. I picked up my kid from after care and all the kids were on their tablets. One kid had figured out a way to use the coding app on the tablets, Scratch, to watch unfiltered YouTube (this was before D65 banned YouTube outright). From a technical standpoint, that was insanely impressive - a YouTube client in Scratch!? But from an IT/Systems management standpoint, yikes."
https://foiagras.com/p/d65-tech-hard-to-unwind
something I think about is that I wasn’t actually allowed to use a calculator until I got to trig—which I think was good, though I am not sure I still remember how to do long division. Ultimately—if you know how to be no/low tech you can adapt higher tech more easily to your own needs… but it’s hard to do it the other way.
Comparing the Le Guin to the AI work, the AI is much more pessimistic than she was! Blights, scars, fevers, whatever it trained on was darker than Earthsea.
well I imagine Earthsea itself represents like… .00000000000000001% of the mass of Claude’s training data (even that number is probably larger than the reality). Actually I wonder what the most common type of book is.