Claim 35 Post Templates from the 7 best LinkedIn Influencers

Ethan Mollick

These are the best posts from Ethan Mollick.

9 viral posts with 15,675 likes, 1,713 comments, and 2,033 shares.
3 image posts, 0 carousel posts, 2 video posts, 4 text posts.

Best Posts by Ethan Mollick on LinkedIn

I see “AI won’t take your job, someone using AI will take your job” all the time on this site. I don’t like this frame because:
1) “Your job” is more likely to transform over time than be “taken”
2) Some jobs will really disappear with AI
3) Using AI is going to get easier; it isn’t a secret priesthood
4) Organizations ultimately need to integrate AI into their structures, it isn’t all about what individual workers do

It is morally wrong to use AI detectors when they produce false positives that smear students in ways that hurt them and where they can never prove their innocence.

Do not use them. https://lnkd.in/eaGSa_dn
Post image by Ethan Mollick

Very big finding: The final version of a randomized, controlled World Bank study of a GPT-4 tutor used with teacher guidance in a six-week afterschool program in Nigeria found that the AI tutor had “more than twice the effect of some of the most effective interventions in education” at very low cost, “equating to 1.5 to 2 years of ‘business-as-usual’ schooling.”
Post image by Ethan Mollick

Big firms like Amazon and Microsoft shipping LLMs need to recognize that IT is not always the center of AI use in companies - the key to productivity is often workers and subject matter experts using chatbots who experiment & share what they learn. This is where we saw the gains in our experiments at Procter and Gamble and at BCG.

Making key features and products only accessible to IT functions and expecting them to build centralized solutions means that the tools are not in the hands of the people who will figure out the best use cases. There is a role for IT, obviously, but it is not always the traditional one it plays in technology adoption.

Uploaded a paper and got a video podcast from Heygen featuring an unnerving AI avatar of me being interviewed by a different AI avatar.

I don't twitch and blink quite as much in real life, but definitely an interesting sign of what's to come, and not a bad summary of the paper.

🚨We have a new working paper full of experiments on how AI affects work, and the results suggest a big impact using just the technologies available today🚨

Over the past months, I have been working with a team of amazing social scientists on a set of large pre-registered experiments to test the effect of AI at Boston Consulting Group, the elite consulting firm.

The headline is that consultants using the GPT-4 AI finished 12.2% more tasks on average, completed tasks 25.1% more quickly, and produced 40% higher quality results than those without. And low performers had the biggest gains.

But we also found that people who used AI for tasks it wasn’t good at were more likely to make mistakes, trusting AI when they shouldn’t. It was often hard for people to know when AI was good or bad at a task because AI is weird, creating a “Jagged Frontier” of capabilities. But some consultants navigated the Frontier well, by acting as what we call “Cyborgs” or “Centaurs,” moving back and forth between AI and human work in ways that combined the strengths of both. I think this is the way work is heading, very quickly.

All of this was done by a great team, including the Harvard social scientists Fabrizio Dell'Acqua, Edward McFowland III, and Karim Lakhani; Hila Lifshitz-Assaf from Warwick Business School and Katherine Kellogg of MIT (plus myself). Saran Rajendran, Lisa Krayer, and François Candelon ran the experiment on the BCG side.

There is a lot more in the paper: https://lnkd.in/eZUp34CW

And in the summary: https://lnkd.in/eASQ_CVr

I don’t usually post product announcements, but this answers two big questions I hear from organizations a lot: can we use GPT-4 without the AI learning from our private data? And: how do we get access to Code Interpreter for our company data?

Looks like OpenAI is solving both problems with their new Enterprise model. https://lnkd.in/eCKEzYkg
Post image by Ethan Mollick

You really, really should not trust audio clips anymore.

Even a couple months ago, it used to take a commercial service to clone a voice. No more. Here is me creating a voice clone of myself using just a 10 second reference clip on my home computer using open software.

This is all real time, no cuts.

Another $500B was committed to achieving AGI today, and most of the labs are genuinely convinced that they can indeed build an AI that beats a human at most intellectual tasks in the next couple of years (you can believe this or not).

Yet, there is still no articulated vision of what a world with AGI looks like for most people. Even the huge essay by the CEO of Anthropic doesn't paint a vivid picture of what daily life looks like 5-10 years later.

We can even leave aside the risk of catastrophe for now. Assume we get an aligned AGI that supercharges science and we have a healthier, more advanced, safer world. What does that actually mean for most people, what does their life look like in the future? (Hint: UBI is not an answer, that is a policy, not a vision)
