<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
  <title>Carter Temm</title>
  <subtitle>Personal website and blog of Carter Temm.</subtitle>
  <link href="https://ctemm.me/feed.xml" rel="self" />
  <link href="https://ctemm.me/" />
  <updated>2026-03-02T00:00:00Z</updated>
  <id>https://ctemm.me/</id>
  <author>
    <name>Carter Temm</name>
  </author>
  <entry>
    <title>Over half of the internet is now AI slop</title>
    <link href="https://ctemm.me/more-ai-slop/" />
    <updated>2026-03-02T00:00:00Z</updated>
    <id>https://ctemm.me/more-ai-slop/</id>
    <content type="html">&lt;p&gt;Futurism &lt;a href=&quot;https://futurism.com/artificial-intelligence/over-50-percent-internet-ai-slop&quot;&gt;reported on a study by Graphite&lt;/a&gt;, an SEO firm, that analyzed 65,000 English-language articles published between January 2020 and May 2025. As of May 2025, 52% of new articles were flagged as AI-generated. That number was around 10% when ChatGPT launched in late 2022.&lt;/p&gt;
&lt;p&gt;This number doesn&#39;t surprise me, and if you are familiar with &lt;a href=&quot;https://ctemm.me/on-ai-detectors/&quot;&gt;the signs of AI writing&lt;/a&gt;, it&#39;s not likely to surprise you either.&lt;/p&gt;
&lt;p&gt;There are a few caveats worth mentioning about this study.&lt;/p&gt;
&lt;p&gt;The detection tool (Surfer) classified anything with 50%+ LLM-generated content as AI-generated, with a reported 4.2% false positive rate.&lt;/p&gt;
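&lt;p&gt;To put that false positive rate in perspective, you can correct the headline number for detector error. Note the 95% true positive rate below is my own assumption for illustration; the study only reports the 4.2% false positive rate:&lt;/p&gt;

```python
# Back-of-the-envelope correction for detector error rates.
# fpr = 4.2% is reported by the study; tpr = 95% is an assumption
# for illustration, not a figure from the study.
def implied_true_share(observed, tpr=0.95, fpr=0.042):
    # observed = p * tpr + (1 - p) * fpr, solved for the true share p
    return (observed - fpr) / (tpr - fpr)

print(round(implied_true_share(0.52), 3))  # roughly 0.526
```

&lt;p&gt;Under those assumptions the correction is small: the flagged share and the implied true share land within a point of each other, so the error rates alone don&#39;t explain away the trend.&lt;/p&gt;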
&lt;p&gt;Paywalled sites also tend to block &lt;a href=&quot;https://commoncrawl.org/&quot;&gt;Common Crawl indexing&lt;/a&gt;, so a lot of human-written content isn&#39;t being counted. The real split is probably less extreme.&lt;/p&gt;
&lt;p&gt;That said, 10% to 52% in two and a half years is wild. Growth apparently plateaued around November 2024, presumably because content farms are catching on that search engines no longer reward slop the way they used to.&lt;/p&gt;
&lt;p&gt;The slop doesn&#39;t win because it&#39;s good. It wins because creating posts that target certain keywords is a walk in the park, so there&#39;s an ocean of it.&lt;/p&gt;
&lt;p&gt;If you&#39;ve actually worked through what you want to say, you&#39;re part of a shrinking minority. As time goes on, I suspect this will be cause for respect on the part of your readers.&lt;/p&gt;
</content>
  </entry>
  <entry>
    <title>Moving to Claude just got a whole lot easier</title>
    <link href="https://ctemm.me/moving-to-claude/" />
    <updated>2026-03-01T00:00:00Z</updated>
    <id>https://ctemm.me/moving-to-claude/</id>
    <content type="html">&lt;p&gt;I keep trying to get my friends and colleagues and really everyone I know who still swears by ChatGPT to switch to Claude. Not for political reasons, though I do find myself more aligned with Anthropic&#39;s constitution and code of ethics. That said, I believe, at least as of March 2026, that Claude is the clear winner for any work that involves writing, problem-solving, or technical analysis.&lt;/p&gt;
&lt;p&gt;Probably the most common objection is &amp;quot;but ChatGPT already knows about x&amp;quot; or &amp;quot;I don&#39;t want to start over from scratch.&amp;quot; If you&#39;re in this boat, I&#39;d encourage you to try out Claude Opus as your daily driver for a day. My bet is you won&#39;t look back.&lt;/p&gt;
&lt;p&gt;A great summary from their &lt;a href=&quot;https://claude.com/import-memory&quot;&gt;import memory page&lt;/a&gt;:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;You&#39;ve spent months teaching another AI how you work. That context shouldn&#39;t disappear because you want to try something new. Claude can import what matters, so your first conversation feels like your hundredth.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;I owe this discovery to &lt;a href=&quot;https://news.ycombinator.com/item?id=47204571&quot;&gt;a post on HN&lt;/a&gt;.&lt;/p&gt;
</content>
  </entry>
  <entry>
    <title>Anthropic Academy</title>
    <link href="https://ctemm.me/anthropic-academy/" />
    <updated>2026-03-01T00:00:00Z</updated>
    <id>https://ctemm.me/anthropic-academy/</id>
    <content type="html">&lt;p&gt;Thanks to a &lt;a href=&quot;https://www.reddit.com/r/ClaudeAI/comments/1rh92yp/anthropic_has_opened_up_its_entire_educational/&quot;&gt;reddit post on r/ClaudeAI&lt;/a&gt;, I just learned about &lt;a href=&quot;https://anthropic.skilljar.com/&quot;&gt;Anthropic Academy&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Basically, Anthropic publishes a set of free, professional-level courses on the Skilljar LMS. Each course is extremely well put together. Currently, the offerings include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Claude Code in Action: Integrate Claude Code into your development workflow&lt;/li&gt;
&lt;li&gt;Claude 101: Learn how to use Claude for everyday work tasks, understand core features, and explore resources for more advanced learning on other topics.&lt;/li&gt;
&lt;li&gt;AI Fluency: Framework &amp;amp; Foundations: Learn to collaborate with AI systems effectively, efficiently, ethically, and safely&lt;/li&gt;
&lt;li&gt;Building with the Claude API: This comprehensive course covers the full spectrum of working with Anthropic models using the Claude API&lt;/li&gt;
&lt;li&gt;Introduction to Model Context Protocol: Learn to build Model Context Protocol servers and clients from scratch using Python. Master MCP&#39;s three core primitives (tools, resources, and prompts) to connect Claude with external services&lt;/li&gt;
&lt;li&gt;AI Fluency for educators: This course empowers faculty, instructional designers, and educational leaders to apply AI Fluency to their own teaching practice and institutional strategy.&lt;/li&gt;
&lt;li&gt;AI Fluency for students: This course empowers students to develop AI Fluency skills that enhance learning, career planning, and academic success through responsible AI collaboration.&lt;/li&gt;
&lt;li&gt;Model Context Protocol: Advanced Topics: Discover advanced Model Context Protocol implementation patterns including sampling, notifications, file system access, and transport mechanisms for production MCP server development.&lt;/li&gt;
&lt;li&gt;Claude with Amazon Bedrock and Claude with Google Cloud&#39;s Vertex AI: Self-explanatory.&lt;/li&gt;
&lt;li&gt;Teaching AI Fluency: This course empowers academic faculty, instructional designers, and others to teach and assess AI Fluency in instructor-led settings.&lt;/li&gt;
&lt;li&gt;AI Fluency for nonprofits: This course empowers nonprofit professionals to develop AI fluency in order to increase organizational impact and efficiency while staying true to their mission and values.&lt;/li&gt;
&lt;li&gt;Introduction to agent skills: Learn how to build, configure, and share Skills in Claude Code — reusable markdown instructions that Claude automatically applies to the right tasks at the right time. This course takes you from creating your first Skill to distributing them across teams and troubleshooting common issues.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In hindsight, the &amp;quot;Learning&amp;quot; dropdown is right there on the Anthropic homepage, so I should have encountered this before. Good reminder to spend some time crawling through the websites of the tools you use frequently; you never know when something has dropped silently or flown under your radar.&lt;/p&gt;
&lt;p&gt;It took thirty seconds to register for an account. The content is a mix of text guides and hands-on demo videos. I haven&#39;t run into a section where there is a video and no supplementary text, which is nice. Each course also has knowledge checks in the form of a list of questions near the end.&lt;/p&gt;
&lt;p&gt;I&#39;ll go through and complete a few of these. Worst case, I have something other than a podcast to listen to in the background while doing something else; best case, I learn something and get to feel reasonably confident commenting on its efficacy the next time I run an AI training.&lt;/p&gt;
</content>
  </entry>
  <entry>
    <title>On AI Detection</title>
    <link href="https://ctemm.me/on-ai-detectors/" />
    <updated>2026-02-23T00:00:00Z</updated>
    <id>https://ctemm.me/on-ai-detectors/</id>
    <content type="html">&lt;p&gt;When you read something, you probably make a subconscious assumption. You assume a person sat down, thought for a while, wrote stuff, deleted stuff, restructured the rest, and hit publish when it felt ready. You don&#39;t think about this assumption because it&#39;s been true since we began carving portraits on the walls of caves.&lt;/p&gt;
&lt;p&gt;Unfortunately, it isn&#39;t true anymore, and the tools we have to deal with this new reality don&#39;t work.&lt;/p&gt;
&lt;h2&gt;The contract&lt;/h2&gt;
&lt;p&gt;Peter Steinberger, a name that wasn&#39;t on my radar until mid-January, burst onto the AI scene and almost single-handedly built &lt;a href=&quot;https://openclaw.ai/&quot;&gt;Clawdbot/Moltbot/Openclaw&lt;/a&gt;. On a recent episode of the &lt;a href=&quot;https://www.youtube.com/watch?v=YFjfBk8HI5o&quot;&gt;Lex Fridman Podcast&lt;/a&gt; (super thought-provoking episode btw) he mentioned that he&#39;s started blocking anything that &amp;quot;smells like AI&amp;quot; with absolutely zero tolerance.&lt;/p&gt;
&lt;p&gt;The person who open-sourced a tool that can spew mountains of autonomously generated content across what were once human-only spaces... now blocks autonomous content on sight.&lt;/p&gt;
&lt;p&gt;Anyone who has spent five minutes on the theatrical hellscapes that are Facebook or LinkedIn knows what he&#39;s reacting to. The hollow confidence, the bullet-pointed wisdom, the unoriginal ideas painted as both novel and the most important thing you&#39;ll read that day.&lt;/p&gt;
&lt;p&gt;Steinberger calls it a broken psychological contract, and I agree with this framing. When I realize I&#39;ve been reading machine output, the feeling isn&#39;t &amp;quot;oh well, that was still useful.&amp;quot; It&#39;s &amp;quot;I could have generated that myself.&amp;quot; Regardless of how good or useful the content was to me, I walk away feeling played.&lt;/p&gt;
&lt;p&gt;I read articles and blogs because I want to hear from people. When I just want an answer, I have AI subscriptions for that. I resent the environment we&#39;re creating when one pretends to be the other.&lt;/p&gt;
&lt;p&gt;So: build tools that tell the difference. Around the end of 2022, &lt;a href=&quot;https://gptzero.me/&quot;&gt;GPTZero&lt;/a&gt;, &lt;a href=&quot;https://www.grammarly.com/ai-detector&quot;&gt;Grammarly&#39;s AI detector&lt;/a&gt;, and dozens of others flooded the market.&lt;/p&gt;
&lt;p&gt;This would be great... if they worked.&lt;/p&gt;
&lt;h2&gt;The detectors&lt;/h2&gt;
&lt;p&gt;These tools are pattern matchers, a lot like the LLMs that created the need for them.&lt;/p&gt;
&lt;p&gt;They&#39;re built by ingesting text in two buckets: AI-generated and human-generated. From there they look for statistical signatures like word frequency, sentence structure, perplexity (roughly, how predictable the text looks to a language model), and structural variation. When writing is too smooth and predictable, it gets flagged.&lt;/p&gt;
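&lt;p&gt;As a toy illustration of the structural variation signal (my own sketch, not how any real detector works), you can measure how much sentence length varies across a passage. Uniform lengths read as smooth and machine-like; varied lengths read as the bursty rhythm of human prose:&lt;/p&gt;

```python
import math

def burstiness(text):
    # Standard deviation of sentence lengths, in words.
    # Low values mean uniform, machine-smooth pacing; higher
    # values mean varied, human-feeling rhythm.
    parts = text.replace("!", ".").replace("?", ".").split(".")
    lengths = [len(s.split()) for s in parts if s.strip()]
    mean = sum(lengths) / len(lengths)
    return math.sqrt(sum((n - mean) ** 2 for n in lengths) / len(lengths))

uniform = "One two three. One two three. One two three."
varied = "Hi. This one is a fair bit longer than the first. Ok."
print(burstiness(uniform), burstiness(varied))  # 0.0 and about 4.24
```

&lt;p&gt;A real detector combines dozens of signals like this, learned from labeled data rather than hand-coded, but the flavor is the same: it&#39;s statistics all the way down.&lt;/p&gt;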
&lt;p&gt;GPTZero claims:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;GPTZero has an accuracy rate of 99% when detecting AI-generated text versus human writing, meaning we correctly classify AI writing 99 out of 100 times. When testing samples where there&#39;s a mix of AI and human writing in one submission, we have a 96.5% accuracy rate.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;I tested this. At first I was impressed. Then I fed in a rant I wrote years ago, long before the Generative Pre-trained Transformer was a concept. It was flagged as AI-generated. Then I ran some actual AI output through the same detector: &amp;quot;Likely human.&amp;quot;&lt;/p&gt;
&lt;p&gt;Plenty of humans write predictably. AI is getting better at not writing like AI. These trends only go one direction.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Students get accused of cheating on papers they wrote themselves.&lt;/li&gt;
&lt;li&gt;Writers second-guess their own voice (RIP em dash, you served me well).&lt;/li&gt;
&lt;li&gt;Non-native English speakers get hit the hardest, along with anyone whose style happens to be &amp;quot;too clean&amp;quot; for the algorithm.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Meanwhile, anyone who wants to beat detection can do it in about thirty minutes, less if they really know what they&#39;re doing.&lt;/p&gt;
&lt;h2&gt;How to beat an AI detector&lt;/h2&gt;
&lt;p&gt;Wikipedia has been dealing with a flood of AI-generated edits. Most are &lt;a href=&quot;https://simonwillison.net/2024/May/8/slop/&quot;&gt;&amp;quot;AI slop&amp;quot;&lt;/a&gt;, which is to say poorly sourced, factually wrong, generated without thought and shoved into the world because creating it costs nothing.&lt;/p&gt;
&lt;p&gt;In response, the community at Wikipedia put together a comprehensive guide to spot &lt;a href=&quot;https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing&quot;&gt;signs of AI writing&lt;/a&gt; as part of their AI cleanup effort. It catalogs a bunch of tells across vocabulary (apparently AI overuses words like &amp;quot;delve,&amp;quot; &amp;quot;tapestry,&amp;quot; &amp;quot;pivotal moment&amp;quot;), structural patterns (rule-of-three lists, em dash abuse), and tone (promotional filler, sycophantic openers). It&#39;s definitely worth reading if you want to understand why AI text feels off to us.&lt;/p&gt;
&lt;p&gt;Of course, the guide is public because it&#39;s on Wikipedia, meaning it&#39;s in the training data too.&lt;/p&gt;
&lt;p&gt;I took that list, handed it to a model that isn&#39;t GPT, and told it to avoid every pattern on the page and package the result into a reusable skill. The output didn&#39;t trip a single tell.&lt;/p&gt;
&lt;p&gt;Apparently I&#39;m not the only one to have this idea. Within the same hour I found &lt;a href=&quot;https://github.com/blader/humanizer.git&quot;&gt;someone who&#39;d already packaged this as a shareable tool&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;If nothing else, there are GPTZero MCP servers now, so you can have Claude Code/Cowork (anything but ChatGPT) loop its way to perceived humanity. Just generate some text, check the score, revise, repeat until it passes.&lt;/p&gt;
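&lt;p&gt;The loop itself is trivial to sketch. Everything below is a placeholder (the detector and reviser are toy stand-ins, not a real API), but the shape is exactly what those MCP setups automate:&lt;/p&gt;

```python
def humanize_loop(text, detector, revise, threshold=0.5, max_rounds=5):
    # Keep revising while the detector score stays above the
    # threshold, or until we give up. A real setup would swap in
    # a detector API call and an LLM rewrite here.
    for _ in range(max_rounds):
        if detector(text) > threshold:
            text = revise(text)
        else:
            break
    return text

# Toy stand-ins: pretend one overused word is the only tell.
def detector(text):
    return 1.0 if "delve" in text else 0.0

def revise(text):
    return text.replace("delve", "dig")

print(humanize_loop("Let us delve into this.", detector, revise))
# prints: Let us dig into this.
```

&lt;p&gt;Two functions and a for loop. That&#39;s the entire moat these detectors are defending.&lt;/p&gt;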
&lt;p&gt;I also tested a few of the free AI humanizer tools that have popped up. Half of them didn&#39;t even work. The others passed the detection check, but they do it by flattening everything and removing the voice until what comes out feels like it has less color overall. This color is what defines human writing. It&#39;s not always clean or presentable. &amp;quot;What if I just, like, really like using the word like, man?&amp;quot;&lt;/p&gt;
&lt;p&gt;This is a self-defeating cycle through and through. AI generates text with obvious patterns. People document those patterns. The documentation becomes training data. AIs learn to avoid the patterns. Detectors catch less. New patterns get identified. With each iteration, the gap narrows.&lt;/p&gt;
&lt;p&gt;I think we have about a year before detection becomes impossible. Probably less.&lt;/p&gt;
&lt;h2&gt;The merge&lt;/h2&gt;
&lt;p&gt;The detectors don&#39;t assume a clear binary, but the humans making decisions based on their output certainly do. You either used AI, or you did not. If your text is flagged, it must have been generated at least in part by a computer. That binary is dissolving from both directions.&lt;/p&gt;
&lt;p&gt;The AI side is obvious. Models keep getting better at producing human-sounding text by default. Implement a few of the tricks above and you&#39;re done.&lt;/p&gt;
&lt;p&gt;The human side is weirder. We&#39;re reading AI-generated text all the time now (in emails, documentation, social media posts, and articles that may or may not have a person behind them). My totally unsubstantiated theory is that since we&#39;re prone to absorbing the patterns of what we read, this exposure is steadily turning AI-isms into human usage too. Linguists call this process convergence.&lt;/p&gt;
&lt;p&gt;Maybe my GPTZero experiment wasn&#39;t showing that the detector is bad at its job, but that the categories don&#39;t hold up anymore.&lt;/p&gt;
&lt;p&gt;I use AI tools constantly. A lot of what I write starts with me throwing back-of-a-napkin thoughts at Claude, and we go back and forth as sparring partners until I know what I want to say. Other stuff I write from scratch. Even the &amp;quot;from scratch&amp;quot; stuff is shaped by years of interacting with AI output, so the line between these processes is blurrier than one might think.&lt;/p&gt;
&lt;p&gt;There&#39;s an asymmetry here. People freely admit to using AI to write code. Nobody cares unless the code sucks. Admitting you used AI while writing prose on the other hand feels like confessing to cheating. I think it&#39;s because we treat writing as proof of thought and code as proof of function. When AI touches the prose, it feels like fraud in a way that AI-assisted code doesn&#39;t.&lt;/p&gt;
&lt;p&gt;Perhaps the fraud isn&#39;t in the tool as much as in the absence of thought. Others have said this better than I can.&lt;/p&gt;
&lt;p&gt;David McCullough (American historian and two-time Pulitzer winner) put it well: &amp;quot;Writing is thinking. To write well is to think clearly. That&#39;s why it&#39;s so hard.&amp;quot; Leslie Lamport, the father of distributed computing and initial developer of LaTeX, said something similar: &amp;quot;If you&#39;re thinking without writing, you only think you&#39;re thinking.&amp;quot;&lt;/p&gt;
&lt;p&gt;The tool doesn&#39;t change whether you&#39;re thinking. I only care whether you cared enough to work through what you were saying and make it yours.&lt;/p&gt;
&lt;p&gt;Steinberger&#39;s approach (explicitly labeling AI agents so people can choose what to engage with) is closer to a real answer. It&#39;s not a solution, but it&#39;s significantly better than any detector. Just tell people what&#39;s AI and let them decide.&lt;/p&gt;
&lt;p&gt;We&#39;re nowhere near that being standard. For now we&#39;re stuck with detectors that don&#39;t work, generators that keep improving, and people getting falsely accused of being machines when they turn in their homework.&lt;/p&gt;
&lt;p&gt;That last one is worth sitting with. Tools like &lt;a href=&quot;https://www.turnitin.com/&quot;&gt;Turnitin&lt;/a&gt; are now baked into the academic pipeline at most universities. When a student&#39;s work gets flagged as AI-generated, they often have to retake the course and pay for it again.&lt;/p&gt;
&lt;p&gt;To be clear, AI-assisted cheating is a real problem that institutions need to solve, but not this way. The present situation is that the accused student is asked to prove a negative (that they &lt;em&gt;didn&#39;t&lt;/em&gt; use AI) and the burden falls entirely on them. The adjudicator is the same institution that collects tuition if the student has to retake the course. I&#39;m not saying there&#39;s extensive precedent of institutions exploiting this, but the financial incentive exists, and it&#39;s nearly impossible for a student to challenge.&lt;/p&gt;
&lt;p&gt;My underlying point is that the tools that were supposed to protect human writing have rapidly become a way to punish human writers who happen to conform to a certain style. That&#39;s not a detection problem; detection is a ship that sails farther away with every passing day. It&#39;s a problem with the way we&#39;re planning for a future that we don&#39;t yet understand.&lt;/p&gt;
&lt;h2&gt;Further Reading&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://www.theregister.com/2026/02/16/semantic_ablation_ai_writing/&quot;&gt;Semantic ablation: Why AI Writing is Boring and Dangerous&lt;/a&gt;&lt;/p&gt;
</content>
  </entry>
  <entry>
    <title>Trying this blogging thing</title>
    <link href="https://ctemm.me/trying-blogging/" />
    <updated>2026-02-21T00:00:00Z</updated>
    <id>https://ctemm.me/trying-blogging/</id>
    <content type="html">&lt;p&gt;This site has been a long time coming. I&#39;ve been meaning to consolidate everything I&#39;m working on into one place for years, and I&#39;ve finally gotten around to it.&lt;/p&gt;
&lt;h2&gt;For the nerds&lt;/h2&gt;
&lt;p&gt;I built this site using the &lt;a href=&quot;https://www.11ty.dev/&quot;&gt;Eleventy&lt;/a&gt; static site generator.&lt;/p&gt;
&lt;p&gt;Why Eleventy? It feels like I&#39;ve tried every moderately popular SSG, plus a few of the lesser-known ones.
I currently have sites running under &lt;a href=&quot;https://getnikola.com/&quot;&gt;Nikola&lt;/a&gt; and &lt;a href=&quot;https://cobalt-org.github.io/&quot;&gt;Cobalt&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I use Nikola when I just want to spin up a site (optionally with a blog) for a project. I don&#39;t particularly care to play around with the style, I just want to write stuff and put it online wrapped around a template that I know is accessible, user friendly, and functional. Then I want to load it up in my browser and get the dopamine hit of feeling like I did a lot of work before moving onto something else.&lt;/p&gt;
&lt;p&gt;I like Cobalt when I want a lightweight builder that is entirely unopinionated, and that won&#39;t block me from anything, at the cost of having to write (or get AI to write) my templates and essentially everything else. A bonus is watching the site build almost instantly because Rust.&lt;/p&gt;
&lt;p&gt;Eleventy feels like the perfect middle ground between the two. I get to provide my own layouts, styling, and structure. Collections let me publish both a blog section for thoughts I want to catalog in the moment and a resources section for longer-running articles that I will maintain somewhat consistently. When I want more goodies (RSS feeds, etc.), it&#39;s a simple matter of pulling in a plugin.&lt;/p&gt;
&lt;p&gt;Anyway, more to come. For now, the lights are on.&lt;/p&gt;
</content>
  </entry>
</feed>