:DDDD
Russia forgot to mark Taiwan as part of China on a map it put out, so in retaliation China has annexed a set of disputed islands on a boundary river in one of its new maps.
How plausible sentence generators are changing the bullshit wars
This Friday (September 8) at 10hPT/17hUK, I’m livestreaming “How To Dismantle the Internet” with Intelligence Squared.
On September 12 at 7pm, I’ll be at Toronto’s Another Story Bookshop with my new book The Internet Con: How to Seize the Means of Computation.
In my latest Locus Magazine column, “Plausible Sentence Generators,” I describe how I unwittingly came to use – and even be impressed by – an AI chatbot – and what this means for a specialized, highly salient form of writing, namely, “bullshit”:
https://locusmag.com/2023/09/commentary-by-cory-doctorow-plausible-sentence-generators/
Here’s what happened: I got stranded at JFK due to heavy weather and an air-traffic control tower fire that locked down every westbound flight on the east coast. The American Airlines agent told me to try going standby the next morning, and advised that if I booked a hotel and saved my taxi receipts, I would get reimbursed when I got home to LA.
But when I got home, the airline’s reps told me they would absolutely not reimburse me, that this was their policy, and they didn’t care that their representative had promised they’d make me whole. This was so frustrating that I decided to take the airline to small claims court: I’m no lawyer, but I know that a contract takes place when an offer is made and accepted, and so I had a contract, and AA was violating it, and stiffing me for over $400.
The problem was that I didn’t know anything about filing a small claim. I’ve been ripped off by lots of large American businesses, but none had pissed me off enough to sue – until American broke its contract with me.
So I googled it. I found a website that gave step-by-step instructions, starting with sending a “final demand” letter to the airline’s business office. They offered to help me write the letter, and so I clicked and I typed and I wrote a pretty stern legal letter.
Now, I’m not a lawyer, but I have worked for a campaigning law-firm for over 20 years, and I’ve spent the same amount of time writing about the sins of the rich and powerful. I’ve seen a lot of threats, both those received by our clients and sent to me.
I’ve been threatened by everyone from Gwyneth Paltrow to Ralph Lauren to the Sacklers. I’ve been threatened by lawyers representing the billionaire who owned NSO Group, the notorious cyber arms-dealer. I even got a series of vicious, baseless threats from lawyers representing LAX’s private terminal.
So I know a thing or two about writing a legal threat! I gave it a good effort and then submitted the form, and got a message asking me to wait for a minute or two. A couple minutes later, the form returned a new version of my letter, expanded and augmented. Now, my letter was a little scary – but this version was bowel-looseningly terrifying.
I had unwittingly used a chatbot. The website had fed my letter to a Large Language Model, likely ChatGPT, with a prompt like, “Make this into an aggressive, bullying legal threat.” The chatbot obliged.
As someone who has been attempting to use LLMs and Transformer-based AI models to make public domain sci-fi works based on early 20th century sci-fi - this all checks out, depressingly.
For context - to get this to work anywhere near well, I’m effectively only using the base architecture of LLMs to learn the meta-narrative of stories (broken down into graph form - YAY GRAPHS), which can then produce facsimiles of stories that have come before.
Most LLMs don’t do this.
Most LLMs aren’t trained to consider the structure of long-form literature or stories.
An alarming number of LLM creators don’t check their training data thoroughly.
They just bung masses of data into the machine with no regard for the structure, meaning, or storytelling (or copyright…) of the text itself. It’s effectively a case of “chuck all of humanity’s literature into the blender and it’ll mostly be fine”.
You’ll be at the dining table with this eldritch abomination of a foodstuff you ordered, take a bite, and it’ll be a combination of Austen, Shelley and Hunter S Thompson in weird amounts with an aftertaste that just makes little culinary sense as the meal goes on.
You can fine-tune them, but it’s only so effective given the underlying architecture.
LLMs are effectively giant pattern-identifying machines, and for that they’re very useful. They’re good for building chatbots that output information on very unambiguous fields where knowledge stays stable for an age. Otherwise, be very, VERY careful using them.
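The "stories broken down into graph form" idea above can be sketched minimally: narrative beats become nodes, transitions between them become labeled edges, and a walk over the graph yields one linear facsimile of a story's structure. This is a hypothetical illustration, not the poster's actual schema - all the beat names and labels here are invented for the example.

```python
# Minimal, hypothetical sketch: a story's meta-narrative as a directed graph.
# Nodes are narrative beats; edges are labeled transitions between beats.
# Beat names ("ordinary_world", etc.) are illustrative only.

from collections import defaultdict

class StoryGraph:
    """Directed graph of narrative beats with labeled transitions."""

    def __init__(self):
        # beat -> list of (next_beat, transition_label)
        self.edges = defaultdict(list)

    def add_transition(self, src, dst, label=""):
        self.edges[src].append((dst, label))

    def walk(self, start):
        """Follow the first transition out of each beat until a dead end,
        yielding one linear traversal of the story's structure."""
        beat, seen = start, set()
        while beat in self.edges and beat not in seen:
            seen.add(beat)
            yield beat
            beat = self.edges[beat][0][0]
        yield beat

g = StoryGraph()
g.add_transition("ordinary_world", "call_to_adventure", "inciting incident")
g.add_transition("call_to_adventure", "ordeal", "rising action")
g.add_transition("ordeal", "return", "resolution")

print(" -> ".join(g.walk("ordinary_world")))
# ordinary_world -> call_to_adventure -> ordeal -> return
```

A real system would of course attach much richer state to each node (characters, setting, tone) and learn the transition structure from a corpus rather than hand-coding it; the point is only that story shape can live in an explicit graph instead of being smeared implicitly across model weights.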
Predators in France (brown for bears, black for wolves, yellow for lynx)
white for frenchmen
Utterly heartbroken.
THE TONY BLAIR INSTITUTE.
This is like finding out your crush is a Tory.