Alex Banks

These are the best posts from Alex Banks.

14 viral posts with 3,841 likes, 1,850 comments, and 238 shares.
7 image posts, 0 carousel posts, 2 video posts, 0 text posts.

Best Posts by Alex Banks on LinkedIn

One Claude user consumed $27,000 of compute in 23 days.
They paid $200. Now everyone's limits are getting cut. 
Here’s why this is unsustainable:

Anthropic just finished a two-week promotion doubling Claude's usage limits during off-peak hours.

That promo ended March 27th.

The very next day, they reduced limits during peak hours.

Give with one hand. Take with the other.

Max 20x subscribers paying $200/month are reporting they hit session limits in 3-4 prompts where they'd previously get 20+.

A user on X (@Pranit) called this exact move 11 days before the announcement.

Anthropic’s playbook:

1. Quietly reduce limits
2. Offer a temporary 2x promo that absorbs the transition
3. Let the promo expire
4. The new floor is lower than the old one, but nobody noticed

The reason they can do this is that these consumer AI plans are massively subsidised, and the limits were never defined in the first place.

The API, by contrast, is totally transparent: $5 per million input tokens, $25 per million output tokens.

Consumer plans just say "5x more usage" or "20x more usage."

More than what? They've never assigned a number.

Undefined limits mean unlimited flexibility to move the ceiling without anyone pointing to what changed.

Now here's why this is happening.

After Anthropic refused to remove AI safeguards for the U.S. Department of Defence, Claude became the #1 free app on the App Store.

Over a million new users were signing up per day. That's a great problem to have, until your infrastructure can't keep up.

One power user (@jumperz on X) consumed 1.1 billion tokens in 23 days.

That’s roughly $27,000 in API-equivalent compute on a $200/month plan (135x multiplier).
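A quick back-of-the-envelope check of that figure, assuming the bulk of those 1.1 billion tokens were billed at Claude's output rate of $25 per million (a real bill would blend in cheaper input tokens, which is roughly why the post lands slightly under this estimate):

```python
# Back-of-the-envelope: API-equivalent cost of one power user's usage.
# Assumption: output-token pricing dominates ($25 per million tokens).
tokens = 1.1e9                      # 1.1 billion tokens in 23 days
output_rate = 25 / 1_000_000        # dollars per token at API output pricing
subscription = 200                  # Max 20x monthly price, dollars

api_equivalent = tokens * output_rate
multiplier = api_equivalent / subscription

print(f"API-equivalent: ${api_equivalent:,.0f}")   # ≈ $27,500
print(f"Subsidy multiplier: ~{multiplier:.0f}x")   # ≈ 138x
```

The upper-bound estimate comes out within a couple of percent of the post's $27,000 and 135x figures.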

Insane.

Anthropic burns 70 cents of every dollar it brings in. Inference costs came in 23% higher than their own projections. Breakeven isn't expected until 2027-2028.

This is the Uber playbook.

Subsidise rides until everyone deletes their cab app, then raise prices once dependency is locked in.

Anthropic is doing the same thing with AI compute:

→ Burn through venture capital to buy market share
→ Build workflow dependency
→ Slowly correct the subsidy as users get too embedded to leave

The weekly rate limits that arrived in August 2025 were the first signal. This is the second.

And the competitive pressure is only increasing. Paying subscribers are openly comparing Claude's tight limits to OpenAI, where some plans offer hundreds of requests without hitting a ceiling.

The day Anthropic announced its caps, OpenAI reset Codex usage limits across all plans.

I use Claude every single day. I think it's the best LLM available right now.

But if you're building your workflow around any AI tool, understand what you're actually paying for.

The floor can shift at any time and you won't see it coming.

Follow me Alex Banks for daily AI highlights and insights.

This idea was from my recent newsletter.

Read it here: https://lnkd.in/eyGqxMNy
This should be illegal.

We’re entering a world where nothing can be trusted online:

I recently came across this video using Kling Motion Control.

It takes your movements and puts them in anyone's body.

Here's how it works:

→ Record yourself doing any movement
→ Use AI to generate a character image
→ Kling 2.6 merges the two seamlessly
→ Your moves, their face

We've now hit the threshold where it's impossible to discern whether someone is human in the digital world.

My takeaways:

The implications are huge in Hollywood:

→ Using someone's likeness without them present
→ Character swapping cost trends to near-zero
→ Reshooting scenes without actors on set

I also see new markets emerging:

→ Individuals renting out their identity
→ Licensing your likeness for content creation
→ Actors selling "performance rights" to their digital twin

Finally, proof of authenticity will become essential infrastructure, not just a nice-to-have.

Sam Altman is already building a global identity verification system using iris scans to fight against fraud and bots.

I believe we'll see identity become the next great asset class.

Follow me Alex Banks for daily AI highlights and insights.

P.S. If you liked this post, you'll love the newsletter.

I help you learn AI each week.
↳ Subscribe here: https://lnkd.in/ePSZP6KF

Video credit: ederxavier3d on Instagram
Anthropic said no to the Pentagon.
OpenAI signed the deal hours later.
Here's what happened:

Anthropic drew two red lines with the Department of War:

1. No mass domestic surveillance
↳ AI can now analyse bulk data the government buys on Americans at scale
↳ Currently legal, but only because the law hasn't caught up

2. No fully autonomous weapons
↳ AI isn't reliable enough to automate selecting and engaging targets
↳ No oversight framework exists for removing humans from the loop

For context:

→ Anthropic was the first AI company on the classified cloud
→ Deployed across intelligence, cyber ops, and combat support
→ Forfeited hundreds of millions cutting off CCP-linked firms
→ These two red lines represent ~1% of use cases

The Pentagon gave Anthropic a 3-day ultimatum.

Anthropic refused.

The response:

→ President Trump: "Their selfishness is putting AMERICAN LIVES at risk"
→ Secretary of War Pete Hegseth: Anthropic is a "supply chain risk"
→ Trump ordered every federal agency to cease all use of Anthropic's technology

Then OpenAI entered.

On Friday morning, Altman publicly backed Anthropic's red lines.

By Friday evening, he'd signed the Pentagon deal himself:

→ OpenAI claims the same red lines as Anthropic, plus a third
→ OpenAI asked the Pentagon to offer identical terms to all AI labs
→ Multiple OpenAI employees signed an open letter supporting Anthropic
→ Altman admitted the deal was "definitely rushed" and "the optics don't look good"

The consumer backlash was immediate:

→ "Cancel ChatGPT" went viral across Reddit and X
→ Claude hit #1 on the App Store, overtaking ChatGPT
→ Claude is now #1 in Germany, Canada, and other markets

My takeaway:

Not only have I seen my workflows fully transition from ChatGPT to Claude over the last year, but the consumer mass-market is catching up.

I think it’s also important to point out that correlation doesn’t necessarily mean causation here.

Claude didn’t just hit #1 on the App Store as a result of OpenAI accepting the Department of War deal.

Anthropic has simply built a better product, and the timing of its adoption happened to coincide with the viral uproar over the company refusing to let its models be used for the surveillance of Americans or the operation of autonomous weapons.

Ethical positioning now has direct commercial consequences.

Values are becoming a competitive moat.

I covered this idea yesterday in my newsletter.

Be the first to receive it by subscribing today.

Read it here: https://lnkd.in/e898z7SG
Anthropic just measured which jobs AI is actually replacing.

The gap between theory and reality is massive.

Anthropic published a new research paper using its own Claude usage data to track AI's real-world impact on jobs.

What's new:

They created a metric called "observed exposure" that combines theoretical AI capability with actual professional usage data. The results are eye-opening.

→ Computer & Math: 96% theoretical capability. 32% actual coverage.
→ Office & Admin: 94% theoretical. 42% observed.
→ Legal: 88% theoretical. Just 15% observed.

Capability isn't the bottleneck. Legal constraints, verification requirements, and slow enterprise adoption are what's holding back real-world deployment today.

Most exposed occupations:

→ Computer programmers top the list at 75% task coverage
→ Customer service reps follow at 70%
→ Data entry keyers at 67%

But there’s a certain irony at play that I think is worth pointing out.

Programmers are both the most exposed occupation AND the heaviest adopters of AI.

They're actively building and using the technology that automates their own work.

The workers most at risk overall skew older, female, more educated, and higher-paid, earning 47% more on average than their unexposed counterparts.

Graduate degree holders are nearly 4x more represented in the most exposed group.

Despite all this exposure:

→ No meaningful increase in unemployment for high-risk workers since ChatGPT launched
→ But hiring of 22-25 year olds into exposed roles has dropped roughly 14%
→ No equivalent decline for workers over 25

My takeaway:

It’s interesting to see the “disruption” showing up as a hiring freeze vs sweeping layoffs.

But mainstream media much prefer to print “thousands made redundant” to sensationalise headlines.

I also think it’s important to point out the 30% of workers that have zero AI exposure.

Cooks, bartenders, mechanics, lifeguards. The roles AI can't touch are almost entirely physical.

Having a living measure like this helps track how the gap between AI’s theoretical capability and real-world adoption narrows over time.

That gap is where the next wave of disruption lives.

Follow me Alex Banks for daily AI highlights and insights.

I talked about AI’s impact on jobs first in my newsletter.

You get the most important news + analysis in your inbox every Sunday.

Read it here: https://lnkd.in/ei8r5Xyq
Elon Musk spent a decade promising Mars.

Now SpaceX is building a city on the Moon instead.

Why the Moon wins on iteration speed:

→ Launch to the Moon: Every 10 days
→ Launch to Mars: Every 26 months
→ Trip to the Moon: 2 days
→ Trip to Mars: 6 months
→ Moon city: <10 years
→ Mars city: 20+ years

Mars is still on the roadmap. SpaceX plans to begin Mars efforts in 5-7 years.

But Musk's words: "the overriding priority is securing the future of civilisation and the Moon is faster."

Now layer in the bigger picture.

Last week Musk merged SpaceX with xAI in a $1.25 trillion deal.

It’s now the most valuable private company in history with a potential ~$50 billion IPO on the horizon.

Musk wants to launch AI data centres into orbit, arguing terrestrial power grids can't keep up with AI's energy demands.

SpaceX has already asked regulators for permission to launch 1 million satellites for an "orbital data centre system", up from 9,400 today.

It's also worth noting this is part of a broader consolidation of Musk's empire.

xAI absorbed X (previously Twitter) in March 2025. Tesla invested $2 billion in xAI last week while pivoting hard toward AI and robotics.

Investors are already speculating Tesla could eventually fold into the group too.

The Moon is just the starting point.

Follow me Alex Banks for daily AI highlights and insights.

I cover the most important AI developments like this each week in my newsletter.

Subscribe here: https://lnkd.in/ePSZP6KF
Claude just solved the biggest problem with AI.

Memory is now available to Pro and Max users.

What's new:

→ No more repeating yourself every chat
→ Each project has separate memory spaces
→ Persistent context across all conversations
→ Incognito mode for conversations you don't want saved
→ Previously only available to Team/Enterprise customers
→ Claude remembers your projects, preferences, and work patterns

To get started with memory:

1. Go to Settings
2. Navigate to Capabilities
3. Look under Memory section
4. Toggle on “Search and reference chats”
5. Toggle on “Generate memory from chat history”

Bonus: Click on “Memory from your chats” to update/remove memories

Then I recommend asking Claude “What did we work on last week?”

Useful prompts to try with memory:

• "What patterns do you see in my work from our past conversations?"
• "Make unique connections between the ideas we've discussed"
• "Highlight non-obvious insights I might have missed"

Why this matters:

Pair Memory with Claude's new desktop app and you get:

→ Desktop: Always accessible (double-tap access, screenshots, voice)
→ Memory: Always contextual (picks up where you left off)

This turns Claude from a stateless chatbot into a persistent working partner.

My takeaway:

Memory has been the missing link with LLMs.

Other AIs force you to rebuild context in every conversation.

Claude now learns from every interaction and improves with each chat.

This is the difference between a tool you use occasionally and an assistant you work with daily.

Follow me Alex Banks for daily AI highlights and insights.

I cover the most important AI developments each week in my newsletter.

Subscribe here: https://lnkd.in/ePSZP6KF
Being polite to ChatGPT is making it dumber.

Saying “please” and “thank you” makes your results worse:

Penn State researchers rewrote 250 questions across maths, science, and history in five tones, from "Very Polite" to "Very Rude", and ran them all through GPT-4o.

Every single comparison favoured rudeness. Not one favoured politeness.

Adding "Would you be so kind" to your prompt literally made ChatGPT dumber. Meanwhile, "You poor creature" produced the best results.

Why? LLMs are trained on vast amounts of human text. In that data:

→ Demanding language = high-stakes, precise communication
→ Polite hedging = casual, low-stakes exchanges

When you add "please" and "no rush," you activate casual mode.
When you're blunt and direct, you activate precision mode.

It gets worse over time.

A second study from Carnegie Mellon tested GPT-4o in three personas:

→ Friendly persona: 64% accuracy
→ Default persona: 71% accuracy
→ Adversarial persona: 71% accuracy

The "friendly" model dropped as low as 61.7% across follow-up rounds. The adversarial model never dipped below 69.7%.

When you tell the AI to be nice, it becomes a pushover.

Now both of these studies tested previous-generation models. GPT-4o, Claude 3.5 Sonnet, Gemini 1.5 Flash. Late-2024 models.

We're now in 2026. The frontier has moved significantly.

Developer Daniel Weinshenker recently tested Claude Opus 4.6 on exactly this question. Polite agent, neutral agent, insulted agent all given the same tasks.

For well-specified tasks, tone made zero difference. All three produced identical solutions.

Newer models have been specifically trained to be more robust to tonal variation. GPT-5.2 auto-routes between reasoning modes based on task complexity, not how rudely you asked.

These models are better at ignoring the noise around a prompt and focusing on the actual instruction.

But the underlying principle hasn't changed.

The reason rude prompts outperformed polite ones was never about rudeness itself.

It was about writing clear, direct, imperative prompts.

Directness is your weapon.

I built a master prompt from all three studies that forces any model into precision mode.

No rudeness required, only clarity.

Copy it for your next important prompt: https://lnkd.in/eNGPpaiM
BREAKING: Synthesia just raised $200M.

Crazy to think AI avatars will start talking back.

I've been partnering with Synthesia since June.

They just closed a Series E at a $4 billion valuation.

Here's why I’m so bullish on AI avatars:

1. The knowledge problem

→ Companies are drowning in documents, wikis, and training materials
→ Yet employees still can't get the right answer when they need it
→ Upskilling has become a continuous, board-level priority
→ Traditional content can't keep up with the pace of change

2. The AI shift

→ Agents can now understand context and hold real conversations
→ They can coach people through complex scenarios
→ They complete actual workflows vs just generating content
→ We've moved from static video to interactive experiences

3. The opportunity

→ Synthesia started with AI video, the most effective way to teach at scale
→ Now they're turning enterprise knowledge into conversational agents
→ Early customers are already seeing higher engagement and faster knowledge transfer
→ This creates a credible path to a billion-dollar revenue platform

The round was led by Google Ventures with NVentures (NVIDIA’s venture capital arm), Accel, Kleiner Perkins, and NEA doubling down.

My takeaway:

Synthesia is now one of the most valuable AI companies in Britain.

This transforms video from a one-way communication into a two-way interactive conversation.

AI will drive the marginal cost of creating content to zero.

We are now generating video through code rather than recording with a physical camera.

Instead of being a single static medium, video can now change and adapt depending on who’s watching.

The window for this opportunity is open now.

Synthesia is positioned to define the category.

Excited to see what Victor Riparbelli and the team build next.

Follow me Alex Banks for daily AI highlights and insights.
NEWS: ElevenLabs just dropped massive AI updates.

Voice was just the starting point.

Platform 1: ElevenAgents

↳ Expressive Mode: their most emotionally intelligent model yet
↳ Turn-taking system that reads emotional cues from how you speak
↳ 70+ languages, live with the Ukrainian Government and US local governments

Platform 2: ElevenCreative

↳ Generate images, video, music, voiceovers, and sound effects from prompts
↳ Flows: node-based video editor for automated content pipelines (coming soon)
↳ Music Marketplace and Finetune: publish tracks, earn royalties, or generate in your style

Platform 3: ElevenAPI

↳ Direct access to all foundational models
↳ Voice, dubbing, transcription, sound effects, speech-to-speech
↳ Build whatever you need

I spoke with Carles Reina (4th employee and first GTM hire) this week.

"Talking to technology is a lot more natural and engaging and quicker and easier than actually writing to technology."

What he said was spot on. Voice is how humans were built to communicate.

Everyone's obsessing over which model is smartest.

ElevenLabs is focused on something else entirely.

Own the voice and interface layer and you're the infrastructure everyone relies on.

I personally now speak to AI models far more than I type.

Voice is overwhelmingly the interface of the future.

Especially now that agents can read your emotional state and respond with empathy.

The smartest AI means nothing if it still feels like talking to a machine.

ElevenLabs understood that before anyone else.

Follow me Alex Banks for daily AI highlights and insights.

P.S. I did a full breakdown in my newsletter.

Read it here: https://lnkd.in/eK3gBRbD

#ElevenAgentsPartner
This meme is everywhere right now.

And it's asking the wrong question.

It presents two paths.

AI either succeeds and destroys jobs, or fails and crashes the economy.

But both paths are already happening simultaneously.

AI is succeeding AND displacing workers.
AI is falling short AND companies are over-investing.

Citrini Research asked the perfect question earlier this year.

"What if AI bullishness is right, and that's actually bearish?"

It went mega-viral on Substack with over 8,000 likes.

Fear sells better than hope, especially when the technology is this new and this uncertain.

But here's what the meme gets wrong. It assumes a binary outcome.

Much like the Industrial Revolution transformed work rather than abolished it, I think we'll see the same with the rise of AI.

Sure you had the Luddite riots, decades of wage stagnation, massive social upheaval.

Yet new roles were created, old ones faded, and our definition of "work" shifted entirely.

The difference this time is that AI is coming for both physical AND knowledge work simultaneously.

Blue collar and white collar all at once.

The transition will be the hardest part.

It's messy, very uncertain, and requires an awful lot of preparation.

→ Upskilling and retraining at an unprecedented scale
→ Thoughtful leadership that amplifies humans rather than replaces them
→ Rethinking how society distributes resources AND meaning

Sure you can give someone UBI, but you can't give them meaning.

Someone asks "what do you do?" and you answer with your job.

Strip that away, and the question becomes “who am I now?”

That's the path worth figuring out.

Follow me Alex Banks for daily AI highlights and insights.

I explored this idea in depth in my recent newsletter.

Subscribe here: https://lnkd.in/ePSZP6KF
Stop forcing one AI to do everything.

Here's how I choose the right model for each task.

One of the most common questions I get:

"I've got ChatGPT, Claude, and Gemini, which model should I use for what?"

Here's how I actually think about it:

I treat LLMs like a toolbelt.

My current task → model map:

1. Long-form writing
↳ Default: Claude Opus 4.5
↳ Backup: ChatGPT 5.2 Thinking

2. Deep research
↳ Default: ChatGPT 5.2 Pro
↳ Backup: Gemini 3 Pro

3. Problem solving & complex reasoning
↳ Default: Grok 4.1
↳ Backup: ChatGPT 5.2 Thinking

4. Learning
↳ Default: Gemini 3 Pro + Guided Learning
↳ Backup: ChatGPT 5.2 + Study & Learn

5. Coding
↳ Default: Claude Opus 4.5
↳ Backup: Claude Sonnet 4.5

A few principles I've found useful:

• Task first, model second
• Pairs, not monogamy (I use 2-3 models every day)
• Latency, cost, context > benchmarks
• Always have a default AND a backup

My takeaway:

"Which model is best?" is the wrong question.

The right question: "What's the job I need done?"

Match the tool to the task. Your output quality will 10x.

I did a full breakdown with my default prompts and setups for each job.

Read it here: https://lnkd.in/eShuwmCt
Figure's robot just taught itself to move like a human.

Robots will be living with us sooner than you think.

Their new AI system "Helix 02" controls Figure 03's entire body as one continuous behaviour.

Walking. Balancing. Manipulating. All on a single neural network.

What makes this a leap forward:

→ Trained on 1,000+ hours of human motion data
→ Palm cameras and fingertip sensors can feel objects as light as a paperclip
→ 4 minutes of autonomous dishwasher loading with zero resets or human intervention

The human-like details that caught my attention:

→ Uses its hip to shut a kitchen drawer
→ Kicks the dishwasher door up with its foot
→ Selects the wash program and starts the cycle

Six months ago, Figure 02 was only moving its upper body.

Now Figure 03 walks, balances, and manipulates as one fluid behaviour.

This is a serious step up from factory parcel sorting to generalised domestic capability.

As founder Brett Adcock stated, this has been a year-long effort to re-align their AI stack for long time horizons and complex manipulation.

Now that robots can handle delicate tasks like extracting pills from a medicine box and dispensing precise liquid volumes, we're entering a new phase of home robotics.

I personally can't wait to get my hands on one.

I did a full breakdown of Helix 02 in my latest newsletter.

Get practical AI workflows and tutorials for busy professionals + weekly news analysis.

Read it here: https://lnkd.in/eSaQhyQd
Meta just acquired Manus.

Zuckerberg wants to win. Badly.

When Manus launched in March 2025 the internet dismissed it as "just a Claude wrapper."

Now Zuckerberg is paying ~$2 billion to own it:

→ 147 trillion tokens processed
→ 80 million virtual computers created
→ State-of-the-art performance on real-world AI benchmarks
→ All achieved in just a few months

What makes Manus different:

• Self-directed operation without waiting for instructions
• Multi-agent architecture with specialised sub-agents
• End-to-end task execution from research to deployment

Manus will continue operating its subscription service while integrating directly into Meta's AI products.

The goal is to bring autonomous agents to billions of users and millions of businesses.

This fits Zuckerberg's pattern perfectly:

→ $14.3B for 49% of Scale AI
→ $200M package for Apple's former AI Chief
→ $100M+ offers to poach from OpenAI, DeepMind, Anthropic

People will work for Zuck if the price is right.

My takeaway:

Everyone said the value would concentrate in foundation models.

The reality is playing out differently.

Turns out the application layer is where the real value lives.

Zuck himself highlighted: "The rest of this decade seems likely to be the decisive period."

He's not waiting around.

I'll be doing a full breakdown of this acquisition in my newsletter this week.

Subscribe here: https://lnkd.in/ePSZP6KF

And follow me Alex Banks for daily AI highlights and insights.
Sam Altman said ads were a "last resort."

That day has arrived.

OpenAI is introducing advertising to ChatGPT's free and lower-paid tiers.

Here's what they're promising:

→ Ads won't influence ChatGPT's responses
→ Conversations stay private from advertisers
→ Premium tiers remain ad-free (for now)

The OpenAI timeline:

→ March 2025: Raised $40B at $300B valuation
→ December talks: New $100B round at $750B valuation
→ Today: Burning through cash at an extraordinary rate

Subscription revenue alone isn't cutting it.

This is the fundamental law of the web: If the user doesn't pay the bill, the advertiser does.

Back in May 2024, Sam sat down for a fireside chat at Harvard University.

When asked about advertising, he remarked: "I kind of think of ads as like a last resort for us as a business model."

14 months later, here we are.

No one could have predicted how capital-intensive this race would be.

Add in fierce competition and there's little pricing power left.

My takeaway:

I'm not sure ads alone can cover their losses.

Performing well in advertising often pushes companies toward aggressive data collection.

This is something Google is often criticised for.

Eventually, compute will get cheaper.

But right now it's all about aggressive build-out at any cost.

Once ad revenue becomes material to the business, the incentives shift.

That's just how it works.

The LLM market is competitive enough now that friction like this could accelerate the shift to Claude, Gemini, or Grok.

OpenAI built the most used AI product in history.

Monetising it without eroding trust is the real test.

Would ads make you switch?

I did a full breakdown on this in yesterday's newsletter.

Read it here: https://lnkd.in/en4Mu7ei
