Apple just dropped a bombshell on the AI world: Apple's ML scientists found strong evidence that AI “reasoning” models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all - they just memorize patterns really well 😳

We hear a lot about artificial intelligence that can “think” and “reason.”

But Apple's latest research paper, “The Illusion of Thinking,” puts this to the test.

And the results are a massive reality check.

Instead of using standard math benchmarks (which can be contaminated by training data), Apple's team built a digital obstacle course.

They took the most advanced “reasoning” AIs - OpenAI's o1 & o3-mini, DeepSeek-R1, Claude-3.7-Sonnet-Thinking, Gemini Thinking - and made them solve classic puzzles:

- Tower of Hanoi
- Checker Jumping
- River Crossing
- Blocks World

They cranked up the difficulty and watched what happened.
Here are the 5 shocking discoveries:

1. They hit a wall. Hard 🧱

→ Beyond a certain complexity, every single model's accuracy collapsed.
→ Not just a little bit.
→ It dropped to ZERO.

2. They start “thinking” LESS when it gets harder 📉

→ This is the most counterintuitive finding.
→ When a puzzle becomes too difficult, the AI doesn't try harder. It actually spends fewer tokens thinking about it.
→ It essentially gives up, even though it still has token budget left to keep trying.

3. There are 3 clear performance zones 📊

→ Easy puzzles: Regular LLMs are actually better and more efficient. The “thinking” models just overthink and waste tokens.
→ Medium puzzles: This is the sweet spot where “thinking” models have a clear advantage.
→ Hard puzzles: Everyone fails. The “thinking” just delays the inevitable collapse.

4. They can't follow explicit instructions 🤖

→ Even when Apple handed the AI the exact algorithm to solve the puzzle, it still failed at the same complexity point. (A quick sketch of that kind of algorithm is at the bottom of this post.)
→ That suggests they aren't executing logical steps - they're still just predicting the next token.

5. Their “reasoning” is inconsistent 🤔

→ A model could solve a Tower of Hanoi instance requiring over 100 correct moves, then fail a River Crossing puzzle that needed only 5 correct moves.
→ That points to memorized patterns, not a general ability to reason.

So the BIG takeaway is this:

↳ According to Apple's research, what we call AI “reasoning” today isn't reasoning at all.
↳ It's a sophisticated illusion of thinking.
↳ These models are incredible pattern-matchers, but they aren't yet capable of the generalizable, logical problem-solving we see in humans.

True reasoning is still the final frontier.

Until we get there, AGI will have to wait.

P.S. check out 🔔linas.substack.com🔔, it's the only newsletter you need for everything at the intersection of Finance and Technology. For founders, builders, and leaders.
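
P.P.S. for the technically curious: the “exact algorithm” from finding #4 is nothing exotic. Here's a minimal Python sketch of the classic recursive Tower of Hanoi solution (an illustration of the textbook algorithm, not Apple's actual test harness). It also puts finding #5 in perspective: the optimal solution for n disks takes 2^n - 1 moves, so “over 100 correct moves” corresponds to only about 7 disks - and the required move count doubles with every disk Apple added.

```python
# Classic recursive Tower of Hanoi - the kind of ready-made algorithm
# Apple supplied in finding #4. Illustrative sketch, not Apple's code.

def hanoi(n, source, target, spare, moves):
    """Append the optimal move sequence for n disks onto `moves`."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)  # park the top n-1 disks
    moves.append((source, target))              # move the largest disk
    hanoi(n - 1, spare, target, source, moves)  # restack the n-1 disks

moves = []
hanoi(7, "A", "C", "B", moves)
print(len(moves))  # 2**7 - 1 = 127 -> "over 100 correct moves" ≈ 7 disks
```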