Turn off all AI in Firefox.

A quick way to turn off all the AI in Firefox to stop it from running so slow.

Type “about:config” in the address bar, press Enter, and click “Accept the Risk and Continue” when it warns you that you can break stuff.

Next, enter each of these preference names into the search bar and click the toggle button to set it to “false”. That turns them off.


browser.ml.chat.enabled

browser.ml.enable

browser.ml.linkPreview.enabled

browser.ml.pageAssist.enabled

browser.ml.smartAssist.enabled

extensions.ml.enabled

browser.tabs.groups.smart.enabled

browser.search.visualSearch.featureGate

browser.urlbar.quicksuggest.mlEnabled

pdfjs.enableAltText

places.semanticHistory.featureGate

sidebar.revamp


And that should turn it all off.
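If you would rather not click through each toggle by hand, the same prefs can go into a user.js file in your Firefox profile, which Firefox applies on every startup. Here is a minimal Python sketch that renders the list above into user.js lines; it only prints them so you can check before appending to your own profile (find your profile folder under about:profiles):

```python
# Sketch: render the preference toggles from the list above as user.js lines.
# It prints rather than writes, so you can review before touching your profile.

AI_PREFS = [
    "browser.ml.chat.enabled",
    "browser.ml.enable",
    "browser.ml.linkPreview.enabled",
    "browser.ml.pageAssist.enabled",
    "browser.ml.smartAssist.enabled",
    "extensions.ml.enabled",
    "browser.tabs.groups.smart.enabled",
    "browser.search.visualSearch.featureGate",
    "browser.urlbar.quicksuggest.mlEnabled",
    "pdfjs.enableAltText",
    "places.semanticHistory.featureGate",
    "sidebar.revamp",
]

def user_js_lines(prefs):
    """Render each pref as a user.js line that forces it to false."""
    return [f'user_pref("{pref}", false);' for pref in prefs]

if __name__ == "__main__":
    print("\n".join(user_js_lines(AI_PREFS)))
```

Note that user.js overrides the prefs on every launch, so toggling them back in about:config won’t stick until you remove the lines from the file.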

More on Bandcamp

I have been a big fan of Bandcamp for years now. I love that when I buy my music, most of the money goes directly to the artist. I was worried when they were bought out, but it seems like they are staying true to their goal of “artists first”. And now they have banned AI from Bandcamp as well, which is freaking awesome.

Keeping Bandcamp Human

“Today we are fortifying our mission by articulating our policy on generative AI, so that musicians can keep making music, and so that fans have confidence that the music they find on Bandcamp was created by humans.

Our guidelines for generative AI in music and audio are as follows:

  • Music and audio that is generated wholly or in substantial part by AI is not permitted on Bandcamp. 

  • Any use of AI tools to impersonate other artists or styles is strictly prohibited in accordance with our existing policies prohibiting impersonation and intellectual property infringement.

If you encounter music or audio that appears to be made entirely or with heavy reliance on generative AI, please use our reporting tools to flag the content for review by our team. We reserve the right to remove any music on suspicion of being AI-generated.

With this policy, we’re putting human creativity first, and we will be sure to communicate any updates to the policy as the rapidly changing generative AI space develops. Thank you.”

I think this comment from Hacker News sums up AI art pretty well.

“Whenever I see defences of AI "art" people very often reduce the arguments to these analogies of using tools, but it's ineffective. Whether you use MS Paint, Photoshop, pencil, watercolor etc. That all requires skill, practice, and is this great intersection of intent and ability. It's authentic. Generating media with AI requires no skill, no intent, and very minimal labor. It is an approximation of the words you typed in and reduces you to a commissioner. You created nothing. You commissioned a work from a machine and are claiming creative authorship.” - frakt0x90

The 70% AI productivity myth: why most companies aren't seeing the gains

“Now consider the narrative you've been hearing from vendors, executives, and LinkedIn thought leaders: AI has collapsed software development costs by 70-90%. Development velocity is through the roof. If you're not seeing these gains, you're doing it wrong.

These two realities don't fit together. If even Karpathy feels behind, what hope does the average enterprise engineering team have?

The answer is uncomfortable: the 70-90% productivity claim is true for about 10% of the industry. For the other 90%, it's a marketing hallucination masquerading as data.”

“A randomized controlled study by METR (Model Evaluation & Threat Research) found something that should terrify every CTO: experienced developers using AI tools took 19% longer to complete tasks than those working without them.

Not beginners. Not interns fumbling with ChatGPT. Experienced engineers. On codebases they knew. With tools designed to make them faster.

They got slower.

The Stack Overflow 2025 Developer Survey adds nuance. While 52% of developers report some positive productivity impact from AI tools, only a minority experience transformative gains. 46% now actively distrust AI output accuracy, up from 31% last year. The number-one frustration, cited by 66% of developers: AI solutions that are "almost right, but not quite", leading to time-consuming debugging.”

“This isn't learning a new library or framework. This is learning to work with something that is:

For experienced developers, this may actually be harder. They have decades of muscle memory around deterministic systems. They've internalized debugging strategies that don't apply when the "bug" is an LLM hallucination with no stack trace.

The data supports this. Only 48% of developers use AI agents or advanced tooling. A majority (52%) either don't use agents at all or stick to simpler AI tools. 38% have no plans to adopt them.”

AI explained in one comment from Reddit.

Last quarter I rolled out Microsoft Copilot to 4,000 employees.

$30 per seat per month.

$1.4 million annually.

I called it "digital transformation."

The board loved that phrase.

They approved it in eleven minutes.

No one asked what it would actually do.

Including me.

I told everyone it would "10x productivity."

That's not a real number.

But it sounds like one.

HR asked how we'd measure the 10x.

I said we'd "leverage analytics dashboards."

They stopped asking.

Three months later I checked the usage reports.

47 people had opened it.

12 had used it more than once.

One of them was me.

I used it to summarize an email I could have read in 30 seconds.

It took 45 seconds.

Plus the time it took to fix the hallucinations.

But I called it a "pilot success."

Success means the pilot didn't visibly fail.

The CFO asked about ROI.

I showed him a graph.

The graph went up and to the right.

It measured "AI enablement."

I made that metric up.

He nodded approvingly.

We're "AI-enabled" now.

I don't know what that means.

But it's in our investor deck.

A senior developer asked why we didn't use Claude or ChatGPT.

I said we needed "enterprise-grade security."

He asked what that meant.

I said "compliance."

He asked which compliance.

I said "all of them."

He looked skeptical.

I scheduled him for a "career development conversation."

He stopped asking questions.

Microsoft sent a case study team.

They wanted to feature us as a success story.

I told them we "saved 40,000 hours."

I calculated that number by multiplying employees by a number I made up.

They didn't verify it.

They never do.

Now we're on Microsoft's website.

"Global enterprise achieves 40,000 hours of productivity gains with Copilot."

The CEO shared it on LinkedIn.

He got 3,000 likes.

He's never used Copilot.

None of the executives have.

We have an exemption.

"Strategic focus requires minimal digital distraction."

I wrote that policy.

The licenses renew next month.

I'm requesting an expansion.

5,000 more seats.

We haven't used the first 4,000.

But this time we'll "drive adoption."

Adoption means mandatory training.

Training means a 45-minute webinar no one watches.

But completion will be tracked.

Completion is a metric.

Metrics go in dashboards.

Dashboards go in board presentations.

Board presentations get me promoted.

I'll be SVP by Q3.

I still don't know what Copilot does.

But I know what it's for.

It's for showing we're "investing in AI."

Investment means spending.

Spending means commitment.

Commitment means we're serious about the future.

The future is whatever I say it is.

As long as the graph goes up and to the right.

-@gothburz

A slightly different version of alchemy: creating art from AI.

“The output of generative AI is novel, to be sure, and it can even be enjoyable at times. But what it isn’t any longer is: valuable.

An ever-growing segment of the population can now sniff out AI art. It’s obvious, when you know what to look for. It sticks out. It’s glaring. It’s immediately off-putting. People actively avoid it when they can, and instantly de-value everything associated with it.

...

Art is valuable precisely because it is not easy to create.

And I am interested in art—we are interested in art, in any and all of its forms—because humans made it. That’s the very thing that makes it interesting; the who, the how, and especially the why.

...

The struggle that produced the art—the human who felt it, processed it, and formed it into this unique shape in the way only they could—is integral to the art itself. The story of the human behind it is the missing, inimitable component that AI cannot reproduce.

That’s what I and so many others find so repulsive about generative AI art; it’s missing the literal soul that makes art interesting in the first place.

We care about art because it’s a form of connection to other humans.

....

And no, I’m sorry, but prompting your way to the finished piece absolutely does not count—

—Not that it matters. I’ve gotten a little off-topic, but whether AI-generated art is truly art isn’t the point, and it doesn’t really matter anyway. The zone is too flooded, regardless.

AI-generated content is everywhere; it’s inescapable; and it’s therefore made itself less than worthless.

AI will never fully displace creatives, because the moment AI can mass-produce any kind of creative work at scale, that work will stop being worth producing in the first place.

It will be toxic; a trend well past its prime, already rotten on the vine.

The more gold you make, the less the gold is worth.

Good luck with that lead, though.”

“College is just how well I can use ChatGPT at this point,”

I am not a fan, and everything I read just seems to make AI worse and worse and worse. And I used it for band flyers in the past... Bad Dan…

“Everyone Is Cheating Their Way Through College. ChatGPT has unraveled the entire academic project.”

Sarah, a freshman at Wilfrid Laurier University in Ontario, said she first used ChatGPT to cheat during the spring semester of her final year of high school. (Sarah’s name, like those of other current students in this article, has been changed for privacy.) After getting acquainted with the chatbot, Sarah used it for all her classes: Indigenous studies, law, English, and a “hippie farming class” called Green Industries. “My grades were amazing,” she said. “It changed my life.” Sarah continued to use AI when she started college this past fall. Why wouldn’t she? Rarely did she sit in class and not see other students’ laptops open to ChatGPT. Toward the end of the semester, she began to think she might be dependent on the website. She already considered herself addicted to TikTok, Instagram, Snapchat, and Reddit, where she writes under the username maybeimnotsmart. “I spend so much time on TikTok,” she said. “Hours and hours, until my eyes start hurting, which makes it hard to plan and do my schoolwork. With ChatGPT, I can write an essay in two hours that normally takes 12.”

Band poster and EP artwork!

Made a new band poster for a show and did the artwork for this EP by Manx. Rock and Roll Portland Peoples!

I Will Fucking Piledrive You If You Mention AI Again

LOL! What a great write up, love this kind of humor.

“I. But We Will Realize Untold Efficiencies With Machine L-

What the fuck did I just say?

I started working as a data scientist in 2019, and by 2021 I had realized that while the field was large, it was also largely fraudulent. Most of the leaders that I was working with clearly had not gotten as far as reading about it for thirty minutes despite insisting that things like, I dunno, the next five years of a ten thousand person non-tech organization should be entirely AI focused. The number of companies launching AI initiatives far outstripped the number of actual use cases. Most of the market was simply grifters and incompetents (sometimes both!) leveraging the hype to inflate their headcount so they could get promoted, or be seen as thought leaders.”

“II. But We Need AI To Remain Comp-

Sweet merciful Jesus, stop talking. Unless you are one of a tiny handful of businesses who know exactly what they're going to use AI for, you do not need AI for anything - or rather, you do not need to do anything to reap the benefits. Artificial intelligence, as it exists and is useful now, is probably already baked into your businesses software supply chain. Your managed security provider is probably using some algorithms baked up in a lab software to detect anomalous traffic, and here's a secret, they didn't do much AI work either, they bought software from the tiny sector of the market that actually does need to employ data scientists. I know you want to be the next Steve Jobs, and this requires you to get on stages and talk about your innovative prowess, but none of this will allow you to pull off a turtle neck, and even if it did, you would need to replace your sweaters with fullplate to survive my onslaught.”

It just keeps going like that. Yeah, I am of the camp that AI is the new NFT; a '90s dot-com crash is coming.

Expectations Versus Reality

Interesting read from Edward Zitron of “Better Offline” called “Expectations Versus Reality”, about AI in filmmaking, that is worth a read.

“These stories only serve to help Sam Altman, who desperately needs you to believe that Hollywood is scared of Sora and generative AI, because the more you talk about fear and lost jobs and the machines taking over, the less you ask a very simple question: does any of this shit actually work?”

“The answer, it turns out, is “not very well.” In a piece for FXGuide, Mike Seymour sat down with Shy Kids, the people behind Air Head, and revealed how Sora is, in many ways, totally useless for making films. Sora takes 10-20 minutes to generate a single 3 to 20 second shot, something that isn’t really a problem until you realize that until the shot is rendered, you really have absolutely no idea what the hell it’s going to spit out.”


This part from the linked article sums it up really well. A 300:1 usable-shot ratio is insane, and they had to do a ton of post to clean up strings, stabilize footage, and all sorts of other crap.

“While all the imagery was generated in SORA, the balloon still required a lot of post-work. In addition to isolating the balloon so it could be re-coloured, it would sometimes have a face on Sonny, as if his face was drawn on with a marker, and this would be removed in After Effects. Similar other artifacts were often removed.

For the minute and a half of footage that ended up in the film, Patrick estimated that they generated “hundreds of generations at 10 to 20 seconds a piece”. Adding, “My math is bad, but I would guess probably 300:1 in terms of the amount of source material to what ended up in the final.”


With this info, that is not an actual production-ready tool. This is kinda the summary of Ed’s piece.
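A quick back-of-envelope on that 300:1 ratio makes the point. The averages below are my assumptions, splitting the quoted 3-20 second shot lengths and 10-20 minute render times down the middle, so treat this as order-of-magnitude only:

```python
# Back-of-envelope on the 300:1 source-to-final ratio from the Shy Kids quote.
# avg_clip_s and avg_render_min are assumed midpoints, not quoted figures.

final_footage_s = 90        # roughly a minute and a half made it into the film
ratio = 300                 # quoted source-to-final ratio
avg_clip_s = 15             # midpoint of the 3-20 second shot range
avg_render_min = 15         # midpoint of the 10-20 minute render range

generated_s = final_footage_s * ratio        # total raw footage generated
clips = generated_s / avg_clip_s             # number of individual generations
render_hours = clips * avg_render_min / 60   # time spent waiting on renders

print(f"{generated_s / 3600:.1f} hours of raw footage")
print(f"~{clips:.0f} clips, ~{render_hours:.0f} hours of render time")
```

Under those assumptions you get roughly 7.5 hours of raw generations and hundreds of hours of render time for 90 seconds of usable film, which is why calling it production-ready is a stretch.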

“That’s ultimately the problem with the current AI bubble — that so much of its success requires us to tolerate and applaud half-finished tools that only sort of, kind of do the things they’re meant to do, nodding approvingly and saying “great job!” like we’re talking to a child rather than a startup with $13 billion in funding with a CEO that has the backing of fucking Microsoft.”

Podcast- The AI Bubble Is Bursting

New podcast I stumbled into called “Better Offline”. Just finished the episode called, “The AI Bubble is Bursting” which is pretty good. Ironically enough, I’d bet dollars to donuts AI did the transcription, cause it messed up more than a few words. Quotes are from the second in the series. I just realized I listened out of order… whoops. They are in proper order below.

Some bits I found interesting, like how no one can say if it is actually profitable. And side note, this is from the transcription, which is wonky.

In October twenty twenty three, Richard Windsor, the research director at large of Counterpoint Research, which is one of the more reliable analyst houses, hypothesized that open AI's monthly cash burn was in the region of one point one billion dollars a month, based on them having to raise thirteen billion dollars from Microsoft, most of it, as I noted in credits for its Azure cloud computing service to run their models.

It could be more, it could be less. As a private company, only investors and other insiders can possibly know what's going on in open Ai. However, four months later, Reuter's would report that open AI made about two billion dollars in revenue in twenty twenty three, a remarkable sum that much like every other story about open ai, never mentions profit. In fact, I can't find a single reporter that appears to have asked Sam Mormon about how much profit open ai makes, only breathless hype with no consideration of its sustainability.

Even if open ai burns a tenth of windsors estema about one hundred million dollars a month, that's still far more money than they're making.

“Salesforce chief financial officer Amy Weaver said in their most recent earning score that Salesforce was not factoring in material contribution from Salesforce's numerous AI products in its financial year twenty twenty five.

Graphics software company Adobe shares slid in their last earnings. It's the company failed to generate meaningful revenue from its masses of AI products, with analysts now worried about its ability to actually monetize any of these generative products. Service now claimed to its earnings that generative AI was meaningfully contributed to its bottom line.”

And I do love me a good ending rant, lol!

“And the AI revolution, despite its spacious hype, is not really for us. It's not for you and me. It's for people at Satya Nadella of Microsoft to claim that they've increased growth by twenty percent. It's for people like Sam Altman to buy another fucking Porsche. It's so that these people can feel important and be rich, rather than improving society at all. Maybe I'm wrong, Maybe all of this is the future, maybe everything will be automated, but I don't see the signs. This doesn't feel much different to the metaverse. There's a product, but in the end, what's it really do? Just like the metaverse, I don't think many people are really using it. All signs point to this being an empty bubble. And I'm sure you're sick of this too. I'm sure that you're sick of the tech industry telling you the futures here when it's the present and it fucking sucks.”

AI in production

From Wil Wheaton’s tumblr I found this interesting story on using AI in production that I find pretty accurate. It is really difficult to art direct, period. We actually made the call to walk away from some projects that used AI because we were worried it would go off the rails and we would not be able to hit the deadline. I wish I could find the original text, but DuckDuckGo and Google both just throw up a lot of unrelated AI stuff.

"the future of the internet: a garbage dump"

This week in AI, brought to you by, “Is it too early to have a drink?” Great article by Erik Hoel called “Here lies the internet, murdered by generative AI”. Read the whole piece; it’s a real good account of what is happening to the internet right now in real time. I was looking for info on a new printer and the amount of AI trash is insane. Here are way too many pull quotes.

“The amount of AI-generated content is beginning to overwhelm the internet. Or maybe a better term is pollute. Pollute its searches, its pages, its feeds, everywhere you look. I’ve been predicting that generative AI would have pernicious effects on our culture since 2019, but now everyone can feel it.

...

What, exactly, are these “workbooks” for my book? AI pollution. Synthetic trash heaps floating in the online ocean. The authors aren’t real people, some asshole just fed the manuscript into an AI and didn’t check when it spit out nonsensical summaries. But it doesn’t matter, does it? A poor sod will click on the $9.99 purchase one day, and that’s all that’s needed for this scam to be profitable since the process is now entirely automatable and costs only a few cents.

...

Now that generative AI has dropped the cost of producing bullshit to near zero, we see clearly the future of the internet: a garbage dump.

...

This isn’t what everyone feared, which is AI replacing humans by being better—it’s replacing them because AI is so much cheaper. Sports Illustrated was not producing human-quality level content with these methods, but it was still profitable.

...

All around the nation there are toddlers plunked down in front of iPads being subjected to synthetic runoff, deprived of human contact even in the media they consume. There’s no other word but dystopian. Might not actual human-generated cultural content normally contain cognitive micro-nutrients (like cohesive plots and sentences, detailed complexity, reasons for transitions, an overall gestalt, etc) that the human mind actually needs? We’re conducting this experiment live. For the first time in history developing brains are being fed choppy low-grade and cheaply-produced synthetic data created en masse by generative AI, instead of being fed with real human culture. No one knows the effects, and no one appears to care.”

This week in AI.

All AI news is bad news. That pretty much sums it up. I won’t even get into the video aspect yet; that will need its own post.

Instacart is using AI art. It's incredibly unappetizing.

“The text for the ingredients and instructions for the above recipes, meanwhile, is also generated by AI, as disclosed by Instacart itself: "This recipe is powered by the magic of AI, so that means it may not be perfect. Check temperatures, taste, and season as you go. Or totally switch things up — you're the head chef now. Consult product packaging to confirm any dietary or nutritional information which is provided here for convenience only. Make sure to follow recommended food safety guidelines."”


'Rat Dck' Among Gibberish AI Images Published in Science Journal

“The open-access paper explores the relationship between stem cells in mammalian testes and a signaling pathway responsible for mediating inflammation and cancer in cells. The paper’s written content does not appear to be bogus, but its most eye-popping aspects are not in the research itself. Rather, they are the inaccurate and grotesque depictions of rat testes, signaling pathways, and stem cells.

The AI-generated rat diagram depicts a rat (helpfully and correctly labeled) whose upper body is labeled as “senctolic stem cells.” What appears to be a very large rat penis is labeled “Dissilced,” with insets at right to highlight the “iollotte sserotgomar cell,” “dck,” and “Retat.” Hmm.”


Microsoft and OpenAI warn state-backed threat actors are using generative AI en masse to wage cyber attacks

Russian, North Korean, Iranian, and Chinese-backed threat actors are attempting to use generative AI to inform, enhance, and refine their attacks, according to a new threat report from Microsoft and OpenAI.

The group’s use of LLMs reflects the broader behaviors being used by cyber criminals according to analysts at Microsoft, and overlaps with threat actors tracked in other research such as Tortoiseshell, Imperial Kitten, and Yellow Liderc.

As well as using LLMs to enhance their phishing emails and scripting techniques, Crimson Sandstorm was observed using LLMs to assist in producing code to disable antivirus systems and delete files in a directory after exiting an application, all with the aim of evading anomaly detection.

Good lord, AI is now in Squarespace.

“Write a paragraph about digital versus film photography using dark humor.” The web is gonna be nothing but this shit in under 2 months.

“Film versus digital photography - a battle as old as time itself, or at least as old as film and digital cameras. It's like that eternal struggle between black and white movies and Technicolor marvels. Film photography, with its pretentiousness and hipster vibes, making every snapshot look like a vintage masterpiece. You know you're in the presence of a true artist when they whip out their clunky film camera and start mumbling about the joy of waiting weeks to get their photos developed. On the other hand, digital photography is like a shameless selfie addict on Instagram, snapping away without a care in the world. Instant gratification, baby! Who needs soul and depth when you can slap on a filter, call it art, and gather a hundred likes before lunch? So, take your pick - the slow, "real" art of film or the instantaneous, self-gratifying world of digital. Both will capture your memories, but only one will make you look superior while doing it.”

The AI feedback loop: Researchers warn of ‘model collapse’ as AI trains on AI-generated content

Annnnd it begins. AI trained on AI spins out and becomes totally worthless. That, plus the internet being flooded with AI-generated content, will be interesting.

“Specifically looking at probability distributions for text-to-text and image-to-image AI generative models, the researchers concluded that “learning from data produced by other models causes model collapse — a degenerative process whereby, over time, models forget the true underlying data distribution … this process is inevitable, even for cases with almost ideal conditions for long-term learning.”

“Over time, mistakes in generated data compound and ultimately force models that learn from generated data to misperceive reality even further,” wrote one of the paper’s leading authors, Ilia Shumailov, in an email to VentureBeat. “We were surprised to observe how quickly model collapse happens: Models can rapidly forget most of the original data from which they initially learned.”

In other words: as an AI training model is exposed to more AI-generated data, it performs worse over time, producing more errors in the responses and content it generates, and producing far less non-erroneous variety in its responses.
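You can get a feel for this feedback loop with a toy simulation. This is just an intuition pump, not the paper's actual experiment: fit a simple Gaussian "model" to data, sample new "training data" from the fit, refit, and repeat. Each generation loses tail information and the distribution narrows toward nothing:

```python
# Toy illustration of the model-collapse feedback loop: each generation's
# "model" (here, just a fitted Gaussian) is trained only on samples drawn
# from the previous generation's model. The spread shrinks toward zero.
import random
import statistics

def collapse_demo(generations=200, n_samples=50, seed=42):
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0          # the "real" underlying distribution
    history = [sigma]
    for _ in range(generations):
        # Generation k trains only on data produced by generation k-1.
        data = [rng.gauss(mu, sigma) for _ in range(n_samples)]
        mu = statistics.fmean(data)
        sigma = statistics.pstdev(data)  # ML estimate, biased slightly low
        history.append(sigma)
    return history

if __name__ == "__main__":
    hist = collapse_demo()
    print(f"std dev: start={hist[0]:.3f}, end={hist[-1]:.3f}")
```

The collapse here comes from small-sample bias compounding over generations; the real LLM case is far messier, but the direction of travel is the same: variety drains out of the distribution every time a model eats its own output.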