Ethan Mollick

These are the best posts from Ethan Mollick.

61 viral posts with 39,941 likes, 5,415 comments, and 3,807 shares.
37 image posts, 0 carousel posts, 5 video posts, 18 text posts.


Best Posts by Ethan Mollick on LinkedIn

I see “AI won’t take your job, someone using AI will take your job” all the time on this site. I don’t like this frame because:
1) “Your job” is more likely to transform over time than be “taken”
2) Some jobs will really disappear with AI
3) Using AI is going to get easier, it isn’t a secret priesthood
4) Organizations ultimately need to integrate AI into their structures, it isn’t all about what individual workers do
It is morally wrong to use AI detectors when they produce false positives that smear students in ways that hurt them and where they can never prove their innocence.

Do not use them. https://lnkd.in/eaGSa_dn
Post image by Ethan Mollick
Very big finding: The final version of a randomized, controlled World Bank study of a GPT-4 tutor used with teacher guidance in a six-week afterschool program in Nigeria found that the AI tutor had “more than twice the effect of some of the most effective interventions in education” at very low cost, “equating to 1.5 to 2 years of ‘business-as-usual’ schooling.”
Post image by Ethan Mollick
Big firms like Amazon and Microsoft shipping LLMs need to recognize that IT is not always the center of AI use in companies - the key to productivity is often workers and subject matter experts using chatbots who experiment & share what they learn. This is where we saw the gains in our experiments at Procter and Gamble and at BCG.

Making key features and products only accessible to IT functions and expecting them to build centralized solutions means that the tools are not in the hands of the people who will figure out the best use cases. There is a role for IT, obviously, but it is not always the traditional one it plays in technology adoption.
Uploaded a paper and got a video podcast from Heygen featuring an unnerving AI avatar of me being interviewed by a different AI avatar.

I don't twitch and blink quite as much in real life, but definitely an interesting sign of what's to come, and not a bad summary of the paper.
🚨We have a new working paper full of experiments on how AI affects work, and the results suggest a big impact using just the technologies available today🚨

Over the past months, I have been working with a team of amazing social scientists on a set of large pre-registered experiments to test the effect of AI at Boston Consulting Group, the elite consulting firm.

The headline is that consultants using the GPT-4 AI finished 12.2% more tasks on average, completed tasks 25.1% more quickly, and produced 40% higher quality results than those without. And low performers had the biggest gains.

But we also found that people who used AI for tasks it wasn’t good at were more likely to make mistakes, trusting AI when they shouldn’t. It was often hard for people to know when AI was good or bad at a task because AI is weird, creating a “Jagged Frontier” of capabilities. But some consultants navigated the Frontier well, by acting as what we call “Cyborgs” or “Centaurs,” moving back and forth between AI and human work in ways that combined the strengths of both. I think this is the way work is heading, very quickly.

All of this was done by a great team, including the Harvard social scientists Fabrizio Dell'Acqua, Edward McFowland III, and Karim Lakhani; Hila Lifshitz-Assaf from Warwick Business School and Katherine Kellogg of MIT (plus myself). Saran Rajendran, Lisa Krayer, and François Candelon ran the experiment on the BCG side.

There is a lot more in the paper: https://lnkd.in/eZUp34CW

And in the summary: https://lnkd.in/eASQ_CVr
Anthropic has a history of releasing interesting things quietly & without much non-technical explanation. It happened again.

As Simon says, Claude Skills are a big deal, representing both an easy path for workable agents & a step forward in what AI can do.

Basically you can write instructions for how the AI can do something in plain language, and the AI will load up that skill and learn how to do that task when needed.

https://lnkd.in/eqACHmzG
The fallout from the fact that data science/classical machine learning & generative AI are both called "AI" has been remarkably broad and persistent.

Policy addresses the wrong issues, companies have been confused about who should lead efforts, hiring is complicated (hint: almost no one has more than three years of GenAI experience), products using “AI” might be referring to anything, academic discussion is often muddled.

Even just a narrow concern like AI bias has two entirely different meanings depending on which "AI" you are discussing, and these two meanings have largely non-overlapping problems, mitigation strategies, use cases, and potential harms.
I don’t usually post product announcements but this answers two big questions I hear from organizations a lot: Can we use GPT-4 without the AI learning from our private data? And: How do we get access to Code Interpreter for our company data?

Looks like OpenAI is solving both problems with their new Enterprise model. https://lnkd.in/eCKEzYkg
Post image by Ethan Mollick
Look what just arrived in the mail!

It's the first copies of my new book, available everywhere on April 2. While I discuss AI technology, ethics & the future, the focus is on the impact on work & school, and how to use AI as a co-intelligence.

Pre-order it: https://lnkd.in/ePBZNVgC
Post image by Ethan Mollick
You really, really should not trust audio clips anymore.

Even a couple months ago, it used to take a commercial service to clone a voice. No more. Here is me creating a voice clone of myself using just a 10 second reference clip on my home computer using open software.

This is all real time, no cuts.
👀New data on the corporate ROI from generative AI from a large-scale tracking survey by my colleagues at Wharton Stefano Puntoni and Prasanna Tambe along with Jeremy Korst.

They found that 75% of firms already have a positive return on investment from AI, and fewer than 5% a negative return. Also, 46% of business leaders now use AI daily themselves.

Much faster to positive ROI than I think was expected. (Study link in comments, lots of good stuff in there)
Post image by Ethan Mollick
I wrote an updated guide on which AIs to use right now, & some tips on how to use them (and how to avoid falling into some common traps)

A lot has changed since I last wrote a guide like this in the spring, and AI has gotten much more useful as a result. https://lnkd.in/e97uxiWs
An underdiscussed phenomenon that is starting to happen in organizations: AI-driven role conflict.

The roles in charge of design, product management, coding, marketing, etc. in a project used to have relatively clear lines. AI lets entrepreneurial workers extend into other roles and accelerate work, but there are no good templates to follow. Everyone can now code a little, design a little, market a little - what does that mean?

The result so far has been a mix of acceleration and retrenchment. Some cases I hear about, this gets bogged down in defense of existing roles ("how dare the marketer show me a product prototype when they can't even code and are just using AI") and in others, organizations start to rethink old teams and experiment.
I hope as we move past the first wave of AI criticism ("it doesn't work, all hype") we get a new wave of AI criticism rooted in the acknowledgement that, yes, these systems are very powerful & quite useful, and focusing on a deep exploration of when AI uses are uplifting and when they are detrimental.

Talking about the ethics of AI companies or discussing the potential of financial bubbles can be valuable types of criticism, but it isn't what we are lacking. AI capabilities are real, they are here to stay, & we need to move the discussion to thinking more about what this means and what we want it to mean.
Another $500B was committed towards achieving AGI today, and most of the labs are genuinely convinced that they can indeed build an AI that beats a human at most intellectual tasks in the next couple of years (you can believe this or not).

Yet, there is still no articulated vision of what a world with AGI looks like for most people. Even the huge essay by the CEO of Anthropic doesn't paint a vivid picture of what daily life looks like 5-10 years later.

We can even leave aside the risk of catastrophe for now. Assume we get an aligned AGI that supercharges science and we have a healthier, more advanced, safer world. What does that actually mean for most people, what does their life look like in the future? (Hint: UBI is not an answer, that is a policy, not a vision)
I have now run one of the more powerful, open source LLMs (Mistral 7B) directly on my iPhone. No internet needed.

It isn’t very fast but that is already being solved. Consider the implications: almost anything can soon be imbued with local “intelligence”

A lot of possibilities.
Post image by Ethan Mollick
I don't think teachers and trainers have updated their view of prompting enough. Bigger models are better at figuring out intent, making prompt formulas less important. Reasoners eliminate the value of chain-of-thought prompting, etc.

Context & communicating goals are now key to getting good results, not following a specified prompt format.
In discussions of AI and jobs, we often put too much emphasis on the technology itself and not enough on the corporate leaders who are actually making decisions about what they want to do with AI.

It is a time where CEO vision matters a lot, because AI could be used in all sorts of ways to either grow or shrink companies. You can see a contrast in perspective between Amazon and Walmart.
Post image by Ethan Mollick
This is a surprisingly revealing test prompt: “Write a paragraph that startles me with its brilliance and really demonstrates your capabilities across as many dimensions as possible. Then explain what you did.”

Claude excels at writing, GPT-5 Pro nails intellectual tricks, etc.
Post image by Ethan Mollick
Of the many processes impacted by AI, innovation/design thinking seems like a key one in need of urgent change. Some aspects remain (building empathy), but many of the constraints change dramatically with AI. For example, our research shows you can generate diverse ideas with AI, and that it, in practice, makes teams more innovative.


Careful thought is needed to define a new process, but not including AI in the process of innovation is probably a mistake for most people at this point.
Post image by Ethan Mollick
Walmart is moving very fast in AI. Amazon still seems to block ChatGPT agents from even visiting its site.

Interesting reversal in agentic commerce, a lesson learned from e-commerce where Amazon moved first & fast?

(Incidentally if you haven’t asked an AI agent to shop for a product or service your company produces, you should. It is a learning experience)

https://lnkd.in/eK_72Xzi
Multiple math professors (including the professor generally regarded as the world’s best mathematician) have confirmed recently that yes, AI really can solve some open (but not yet major) problems in mathematics, with expert guidance.

The question is whether the ability of these models continues to increase.

We are not yet at the level of novel science done autonomously by AI, but we are absolutely at “work with it like a grad student and it can help you accelerate your work” levels of AI for many of my fellow academics (and myself).
Post image by Ethan Mollick
Let's assume (which seems reasonable based on the evidence) that AI-driven "vibe coding" gets good enough soon that non-coders can produce workable tools to solve their problems, though not necessarily enterprise-level or complex software.

What skills should we teach people in class to take advantage of these capabilities? Right now, intro computer courses aren't geared for this, as this type of coding requires a mix of low-level knowledge ("what's a file?", "how does GitHub work?") and high-level knowledge ("how do I frame a problem that software can solve?").
One of the first randomized controlled trials testing whether GenAI boosts revenue, not just productivity.

It does.

A large, mature international ecommerce platform, using older GenAI tools, found that most of them, from customer service to marketing workflows, led to large and significant revenue gains.

Via Stefano Puntoni
Post image by Ethan Mollick
A year ago, I would not have expected math to be the first academic field to seem to reach a consensus that AIs will accelerate research (which is not the same thing as autonomous research). Especially given that LLMs were terrible at math a year ago.

But that appears to be happening based on math professors in my feed and elsewhere.
Post image by Ethan Mollick
This paper shows that asking AI for diverse ideas gets you more diverse ideas: just adding “Generate 5 responses with their corresponding probabilities, sampled from the full distribution” significantly improves output quality for large models.

I am still not sure we know the mechanism, but useful finding that is worth applying.

https://lnkd.in/eVkyP73D
Post image by Ethan Mollick
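A minimal sketch of how this technique might be applied in practice. The suffix is quoted from the paper via the post above; the wrapper function, the numbered "(probability: …)" output layout, and the sample text are illustrative assumptions, since the paper's exact response format is not shown here.

```python
import re

# Quoted from the paper, per the post above.
DIVERSITY_SUFFIX = (
    "Generate 5 responses with their corresponding probabilities, "
    "sampled from the full distribution"
)

def diversify(prompt: str) -> str:
    """Wrap a base prompt with the diversity-eliciting suffix."""
    return f"{prompt}\n\n{DIVERSITY_SUFFIX}"

def parse_responses(text: str) -> list[tuple[str, float]]:
    """Parse numbered '<idea> (probability: 0.30)'-style lines.

    The actual output format varies by model; this parser assumes
    one hypothetical layout purely for illustration.
    """
    pattern = re.compile(
        r"^\d+\.\s*(.+?)\s*\(probability:\s*([0-9.]+)\)\s*$",
        re.MULTILINE,
    )
    return [(idea, float(p)) for idea, p in pattern.findall(text)]

# A mocked-up model reply in the assumed format:
sample = """1. A subscription picnic kit (probability: 0.30)
2. Edible drone deliveries (probability: 0.05)"""

print(diversify("Suggest a new product idea."))
print(parse_responses(sample))
```

The idea is simply that asking for several candidates with explicit probabilities nudges the model away from collapsing onto its single most likely answer.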
The big article on data centers in the New Yorker is pretty good, which I wasn’t expecting given the reaction online. It actually gets the real issues of natural resource use right (contrary to the belief of many, water use is minor compared to agriculture, but power is a big concern), as well as some of the good and bad of AI, covering both bubble & non-bubble arguments.

It also featured the best version of the increasingly common “I spoke to a local farmer about a data center” trope.

Article: https://lnkd.in/evXx_Xq8
Post image by Ethan Mollick
I took the surviving syllabus of W. H. Auden's 1941 "Hardest Class in the Humanities" (6,000 pages of reading) and turned it into an annotated site with all the readings. To immerse you in poetry, you had to memorize long poems, like at least 8 cantos of Dante, plus you had to translate a poem from a language YOU DIDN'T KNOW to English.

Using AI, I was able to take what would have been a pretty monumental multi-hour, or day, task and do it in 4 prompts (including having each annotation cross-checked).

Here it is: https://lnkd.in/eDMkNK9a
Post image by Ethan Mollick
I have been waiting for a paper on AI agents and transaction costs and, well, agency problems, so this was interesting.

Agentic work, by its nature, drastically changes the frictions in the economy, with huge implications for how we organize markets and firms, which are largely shaped by agency & transaction costs.

You don't need perfect AI agents to see drastic changes, either, just ones that lower barriers to information gathering and taking action. Paper: https://lnkd.in/e9ATEWqm
Post image by Ethan Mollick
There are at least a dozen models of developers interacting with AI to code and since we don't have a taxonomy of them, everything is "vibecoding" which can mean autonomous coding or 10x improvements in performance or the production of complete slop, depending on who says it.

The lack of a consensus about how to reconstruct coding workflows around AI isn't helping. This same issue will soon plague every enterprise discussion, as "using AI" can stand in for "having a dumb model do the work," or "having a smart model do the work," or "using co-intelligence along with an expert."

I strongly suggest agreeing on concepts before any deep discussion of how AI is being used.
It is still strange that the AI can either execute a giant multipage prompt or explain a giant multipage prompt or analyze a giant multipage prompt depending on whether you include the words "Why does this work?" or "Make better" or whatever. 99% of the tokens are the prompt.

It feels like going from "execute this command" to "improve this command" to "simulate this command" to "mock this command" shouldn't be so seamless based on your intent, especially when the command is almost all the context window and the meta-instructions are tiny. Heck, you can even throw your meta-commentary into the middle of a prompt in a parenthetical.

I don't think we marvel enough at how weird that is.
Post image by Ethan Mollick
A focus on AGI (whatever it is) obscures the fact that we have increasing evidence from early results like GDPval that today's AI models are good enough to create major transformations over 5-10 years as companies figure out how to deploy them and integrate them into processes.
Post image by Ethan Mollick
I don’t have much to add to the AI bubble discussion (not convinced there is one, but nobody knows for sure), but the “this time is different” argument is, in part, based on the sincere belief of many at the AI labs that there is a race to superintelligence & the winner gets... everything.

You don’t have to believe it (or think this is a good idea), but many of the AI insiders really do. Their public statements are not much different than their private ones.

Without considering that zero sum dimension, a lot of what is happening in the space makes less sense. This is not the only way folks justify the large spend on AI buildout but it is a dimension that does not show up in as many discussions as it should.

As a couple of economics papers suggest, if better-than-genius AGI was really achieved, the winner gets the entire economy, at least in theory. Or to put it another way: railroad barons in the 1880s did not think that the next mile of track might bring about the Eschaton. Telco executives in 1999 did not hope that another line of fiber optic cable would usher in the world-to-come. Many AI leaders seriously think that is what they are building.
Post image by Ethan Mollick
Microsoft Copilot has added some interesting features recently, but I still struggle with the key problem: I cannot figure out any way to trigger GPT-5 Thinking Extended/Claude 4.5 Sonnet level responses, even with paying & enterprise accounts.

No matter what I do or what option I select, I get no thinking beyond a few seconds, no agentic actions, no wide-ranging web searches, no document outputs, etc. The results are much less deep.

(Though all the models are far too kind to my terrible guacamole drone delivery idea, so sycophancy is clearly still unsolved, though, ironically, Copilot has a separate critic mode)
Post image by Ethan Mollick
These two paragraphs from an Anthropic study on whether AI is capable of introspection are worth a second to read.

I think it is fair to say that both conclusions are likely to be quite... controversial, but the paper makes an interesting attempt to back up these assertions with experiments. I expect we will be having similar conversations increasingly frequently. https://lnkd.in/eR_7CyRm
Post image by Ethan Mollick
The new Sora Pro feature that builds storyboards and executes them is really interesting. Here is my prompt “an ad for the abstract concept of the feeling you get after falling asleep on your arm”

Notice the high character (and hand) consistency, multiple shots, narration, composition. All done by the AI.
This is an interesting set of academic research papers about the increasingly important debate about when AI should be used to label qualitative data (an expensive task we use humans for right now)

Yang and co-authors show that AI answers are quite different than human labels, but Briggs finds it may be because AI is just much better than human RAs!

Ryan's thread: https://lnkd.in/eSVVUNYD
Eddie's thread: https://lnkd.in/e-FvzEtv
Post image by Ethan Mollick
In 1921 Thomas Edison created an employment test for college graduates that absolutely captivated America - 146 questions that he asked potential hires. Einstein famously failed it.

I used AI to turn it into an annotated multiple choice test, which was actually a lot of work (for the AI) since it had to come up with multiple plausible answers and annotate every question. Try it (though it is very 1921): https://lnkd.in/eNUbZUfy

Another example of how AI makes doing what would have been interesting but not worth the time very easy and worth doing.
Post image by Ethan Mollick
I frequently see re-skilling proposed as a solution to the potential of AI-driven changes to employment. Leaving aside the fact that re-skilling is tough to do well, I have heard less about what skills people should be taught, and for what jobs.

“Working with AI” is not a full enough answer.
Post image by Ethan Mollick
There are ways to address this problem with prompting and tooling (& more recent models are doing better in these tests), but current LLMs are pretty weak at dealing with time sequences where multiple documents (like court cases) from different times need to be understood in coherent sequence.
Post image by Ethan Mollick
It looks like AI music is following the same path as AI text:
1) Appears to have passed the Turing Test for music, people are only 50/50 in identifying older Suno vs. human songs (but 60/40 when two songs are the same genre)
2) Same fast development, new AI models are getting better quickly
(This is all with older models, Suno is now at v5)
Post image by Ethan Mollick
Every year I try the prompt: “give me images of those bags that hold different cheap costumes from Halloween stores, but make the costumes really weird”

ChatGPT is actually getting close to funny at times.
Post image by Ethan Mollick
Another example of the increasingly common situation where AI helps an academic with intellectually challenging work (solving a 42-year-old open math problem). Seems like real value in combining expert human guidance and increasingly powerful LLMs. https://lnkd.in/emuY2Fma
Post image by Ethan Mollick
I had access to Gemini 3. It is a very good, very fast model. It also demonstrates the change from chatbot to agent. And it is getting quite good (if not perfect) at independently doing graduate student level work. https://lnkd.in/evYTnSqe
AI video models may not be complete world models, but they are oddly capable of fairly sophisticated (if flawed) "simulations" of novel situations and dynamics considering they have no underlying physics models (or do anything at all besides creating the next frame in the video).

An interesting emergent ability. Veo 3.1: "three toy ships, one made of iron, the other of wood, and one out of loosely packed sugar, fall into a pool of water"
I am continually surprised about how few applications take advantage of the fact that AI systems can work with actual video.

For example, I can ask Gemini questions about what happens in a video (and not mentioned in a transcript) and get coherent answers including identifying emotion.
Post image by Ethan Mollick
Among many enabling innovations for chatbots is the common cultural understanding & data of instant messaging that emerged in the 21st century.

I sometimes think about the 19th century LLM, it would be epistolary: “My dearest Claude, I write you with an unusual request to tell me the best Pokemon. Your most humble servant, AW”

And we can go further.

It would only take 780 volumes to contain the full weights of GPT-1 if they were printed in tight type in books of 500 pages each. And it would take around 30 person-years for a human scribe to do the math to generate the first token in response to a prompt using that paper version of GPT-1.

So responding “Charizard” (two tokens) would take 60 years. Worth it.
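The arithmetic above can be sanity-checked with a quick back-of-the-envelope calculation. The ~117 million parameter count for GPT-1 is a widely cited figure, not stated in the post; the parameters-per-page number is my back-calculation from the post's 780 volumes of 500 pages.

```python
# Back-of-the-envelope check of the post's numbers.
# Assumption: GPT-1 has ~117 million parameters (widely cited figure).
params = 117_000_000
volumes = 780           # from the post
pages_per_volume = 500  # from the post

params_per_page = params / (volumes * pages_per_volume)
print(f"{params_per_page:.0f} parameters per page")  # 300, plausibly "tight type"

# 30 person-years of hand arithmetic per generated token (from the post):
years_per_token = 30
tokens = 2  # "Charizard" is roughly two tokens, per the post
print(f"{years_per_token * tokens} person-years for the full reply")
```

At 300 parameters per printed page, the 780-volume and 60-year figures hang together.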
