2025-06-09



  • Panjandrum: The 'giant firework' built to break Hitler's Atlantic Wall

    "At first all went well. Panjandrum rolled into the sea and began to head for the shore, the brass hats watching through binoculars from the top of a pebble ridge [...] Then a clamp gave: first one, then two more rockets broke free: Panjandrum began to lurch ominously. It hit a line of small craters in the sand and began to turn to starboard, careering towards [photographer Louis] Klemantaski, who, viewing events through a telescopic lens, misjudged the distance and continued filming. Hearing the approaching roar he looked up from his viewfinder to see Panjandrum, shedding live rockets in all directions, heading straight for him.

    "As he ran for his life, he glimpsed the assembled admirals and generals diving for cover behind the pebble ridge into barbed-wire entanglements. Panjandrum was now heading back to the sea but crashed on to the sand where it disintegrated in violent explosions, rockets tearing across the beach at great speed."

    Panjandrum had failed for the final time, and the project was quietly scrapped.


Rank Propaganda / Thought Policing / World Disordering

  • Political emotions on the far right

    The Remarque Institute, based at New York University, was established for the study of contemporary Europe. The essays across the following pages were presented at a one-day conference on the emotional landscapes of the contemporary far right, from the AfD in Germany to Moms for Liberty in the US. Much liberal handwringing over the surging far right attempts to analyse its rationale, methods or motivations, but here these writers tackle its feelings.

  • Former Wikimedia employee says abuse at the nonprofit is "organization wide"

    A trans software engineer fired by the Wikimedia Foundation is speaking out after she filed a lawsuit against the nonprofit claiming wrongful termination. Kayla Mae said that the “bigotry” described in her suit is “organization wide” and that most of her former colleagues “are as against the problems in leadership as I was.”

Religion / Tribal / Culture War and Re-Segregation

  • Shipping discourse

    Pro-shippers (also known as anti-antis), a term etymologically inverted from anti-shipper, believe that creating or consuming fiction which depicts harmful behavior does not itself function as an endorsement of such actions. Some pro-shippers believe that fictional works can affect societal attitudes towards sexuality when portrayed irresponsibly, but they align with the general movement's support of artistic free expression and the continuation of adult content within fan spaces. Because most antis are teenagers, many pro-shippers consider the anti movement an attack on sexual content in general and an attempt to displace adult-oriented content from fan spaces. Both antis and pro-shippers are largely LGBT, reflecting the fanfiction community as a whole—a 2013 survey conducted by fans revealed that only 38% of AO3 users surveyed were heterosexual, with more nonbinary users than men. The two groups are demographically similar in terms of racial, gender, and sexual identities and report similar rates of neurodiversity and survivorship of sexual abuse. However, antis are generally younger than pro-shippers, with the largest contingent in their early-to-mid teens.

  • The Wire That Transforms Much of Manhattan into One Big, Symbolic Home

  • From Spain to Mecca on horseback: The men performing Hajj like medieval pilgrims

Info Rental / ShowBiz / Advertising

TechSuck / Geek Bait

AI Will (Save | Destroy) The World

  • A knockout blow for LLMs?

    On the one hand, it echoes and amplifies the training distribution argument that I have been making since 1998: neural networks of various kinds can generalize within the training distribution of data they are exposed to, but their generalizations tend to break down outside that distribution. That was the crux of my 1998 paper skewering multilayer perceptrons, the ancestors of current LLMs, by showing out-of-distribution failures on simple math and sentence prediction tasks; the crux in 2001 of my first book (The Algebraic Mind), which did the same in a broader way; and central to my first Science paper (a 1999 experiment which demonstrated that seven-month-old infants could extrapolate in a way that then-standard neural networks could not). It was also the central motivation of my 2018 Deep Learning: A Critical Appraisal, and my 2022 Deep Learning is Hitting a Wall. I singled it out here last year as the single most important — and important to understand — weakness in LLMs. (As you can see, I have been at this for a while.)

    On the other hand, it also echoes and amplifies a bunch of arguments that Arizona State University computer scientist Subbarao (Rao) Kambhampati has been making for a few years about so-called “chain of thought” and “reasoning models” and their “reasoning traces” being less than they are cracked up to be. For those not familiar, a “chain of thought” is (roughly) the stuff a system says as it “reasons” its way to an answer, in cases where the system takes multiple steps; “reasoning models” are the latest generation of attempts to work around the inherent limitations of LLMs by forcing them to “reason” over time, with a technique called “inference-time compute”. (Regular readers will remember that when Satya Nadella waved the flag of concession in November on pure pretraining scaling - the hypothesis that my deep learning is hitting a wall critique addressed - he suggested we might find a new set of scaling laws for inference-time compute.)

    Rao, as everyone calls him, has been having none of it, writing a clever series of papers that show, among other things, that the chains of thought that LLMs produce don’t always correspond to what they actually do.
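    The training-distribution point is easy to see in miniature. A toy sketch (my own illustration, not from the article): a tiny tanh network trained by plain gradient descent learns y = 2x well inside its training range of [-1, 1], but its bounded activations saturate far outside that range, so extrapolation fails badly.

    ```python
    # Toy illustration (not from the article): in-distribution generalization
    # works, out-of-distribution extrapolation does not, because tanh saturates.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, (256, 1))   # training inputs drawn from [-1, 1]
    y = 2.0 * X                        # target function: y = 2x

    # one hidden layer of 16 tanh units, trained by batch gradient descent
    W1 = rng.normal(0, 0.5, (1, 16)); b1 = np.zeros(16)
    W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
    lr = 0.1
    for _ in range(2000):
        H = np.tanh(X @ W1 + b1)       # hidden activations
        err = (H @ W2 + b2) - y        # prediction error
        # backpropagated gradients
        gW2 = H.T @ err / len(X); gb2 = err.mean(0)
        dH = (err @ W2.T) * (1 - H**2)
        gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1

    def f(x):
        return (np.tanh(np.array([[x]]) @ W1 + b1) @ W2 + b2).item()

    print(f(0.5))   # in-distribution: close to the true value 1.0
    print(f(10.0))  # out-of-distribution: saturated, nowhere near 20.0
    ```

    The failure mode is structural, not a matter of training longer: every tanh unit's output is bounded, so the network's output is bounded too, and no bounded function can track y = 2x indefinitely.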

  • Re: My AI Skeptic Friends Are All Nuts

  • Anthropic's AI is not writing its own blog.

  • What if We're Building too Much AI Infrastructure?

    We're spending tens of billions of capex dollars on vast datacenters, but what if the AI future is more efficient and runs cooler?

  • The last six months in LLMs, illustrated by pelicans on bicycles

    The obvious question at this point is which of these pelicans is best? I’ve got 30 pelicans now that I need to evaluate, and I’m lazy... so I turned to Claude and I got it to vibe code me up some stuff.

  • Meta in talks for Scale AI investment that could top $10B

Trump

Left Angst

Russia Bad / Ukraine War

Health / Medicine

  • How a mysterious epidemic of kidney disease is killing young men

  • We’re secretly winning the war on cancer

  • It's OK not to be fat

    Over the past year, I lost about 45 pounds — about 20% of my maximum body weight. This didn’t seem like a particularly epic weight loss journey — certainly a lot less than the 70 pounds that Matt Yglesias lost a few years back. And unlike Matt, I didn’t have bariatric surgery to shrink my stomach. In fact, I didn’t even use Ozempic, Mounjaro, or any other weight-loss drug at all. All I did was eat less and exercise a little bit more. This seems like the kind of boring, everyday story that doesn’t really merit a blog post. But I think the way that I lost weight actually does have some interesting implications for how, as a society, we should think about weight loss — and about other personal struggles like addiction.

Environment / Climate / Green Propaganda