Yann LeCun

These are the best posts from Yann LeCun.

10 viral posts with 27,854 likes, 1,278 comments, and 1,116 shares.
1 image post, 0 carousel posts, 0 video posts, 9 text posts.

Best Posts by Yann LeCun on LinkedIn

AI assistants are going to be with us at all times.
Eventually, they will reach Human-Level intelligence.
They will understand the physical world and be capable of reasoning and planning.
They will be open source and trained in a distributed fashion across all languages and cultures of the world.

https://lnkd.in/ec-qvysC

If you are worried about “AI causing mass unemployment”, listen to economists who have studied the impact of technological revolutions on the labor market, like Erik Brynjolfsson.

DO NOT listen to computer scientists who are concerned by the social consequences of their work.

The weak reasoning abilities of LLMs are partially compensated by their large associative memory capacity.

They are a bit like students who have learned the material by rote but haven't really built deep mental models of the underlying reality.

Big changes for AI R&D at Meta!

- FAIR remains FAIR: No change in the modus operandi and mission.
- FAIR is now part of Reality Labs - Research (RLR) under Michael Abrash. RLR is a wide-ranging research organization that now includes AI.
- FAIR is still managed by Joëlle Pineau and Antoine Bordes. Joëlle, Antoine, and I co-lead FAIR. They do the hard work. I help them with strategy.
- FAIR now stands for “Fundamental AI Research”

AI has become so central to operations that Meta AI groups working on product-oriented projects will now be part of the corresponding product groups.

https://lnkd.in/grKEM7pJ
Going through US immigration with Global Entry is a breeze: just stand in front of the machine, look at the camera, and it's done.
Thank you, ConvNet-based face authentication!

Hard to have predicted that our 2005 CVPR paper, which revived Siamese nets, would have this kind of impact (if indirect).
https://lnkd.in/e6sVqFTh

I think the phrase AGI should be retired and replaced by “human-level AI”.
There is no such thing as AGI.
Even human intelligence is very specialized.
We do not realize that human intelligence is specialized because all the intelligent tasks we can think of are tasks that we can apprehend.
But that is a tiny subset of all tasks.
The overwhelming majority of tasks are completely out of reach of un-augmented human intelligence ***
It's a bit like notions of complexity, in the Kolmogorov / Solomonoff / Chaitin sense: almost all (long) sequences of symbols appear random, except for the tiny number that we can actually write, produce, or define in a compact form.
This is the case for *any* intelligent entity (human or otherwise) and is a direct consequence of Kraft's inequality: only an exponentially small number of symbol sequences of a given length have a description significantly shorter than themselves.
If intelligence (or understanding) is related to the existence of an efficient representation of data that has predictive power, then *any* intelligent entity can only “understand” a tiny sliver of its universe.
What is not understandable appears random, and is called noise by engineers, entropy by physicists, and heat by most people.
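
A minimal counting sketch of that Kraft-inequality point (my own illustration, not from the post): there are 2^L bit strings of length L, but fewer than 2^(L-k+1) binary descriptions of length at most L-k, so the fraction of strings compressible by k or more bits is below 2^(1-k), no matter how large L is.

def max_fraction_compressible(L: int, k: int) -> float:
    """Upper bound on the fraction of L-bit strings that admit a description
    of length at most L - k bits, i.e. that are compressible by at least k bits."""
    num_short_descriptions = 2 ** (L - k + 1) - 1   # all bit strings of length <= L - k
    return num_short_descriptions / 2 ** L          # < 2^(1 - k), independent of L

for k in (1, 10, 20, 64):
    print(f"compressible by >= {k:>2} bits: fraction < {max_fraction_compressible(1000, k):.3g}")
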
Example (a bit contrived, I admit): There are 2^N possible binary configurations of N bits. Hence there are 2^(2^N) possible boolean “classification” functions that map those N bits to a single bit.
Now, let's consider the human optic nerve with its 1 million fibers, and let's assume they are binary. Among the 2^(2^(10^6)) possible boolean functions (an unimaginably large number!), what proportion is potentially computable by our visual cortex?
The entire synaptic matrix of the human brain contains less than 10^17 bits, hence can represent less than 2^(10^17) boolean functions. Now 10^17 is way less than 2^(10^6), and 2^(10^17) is an insignificant number compared to 2^(2^(10^6)).
Hence the proportion of binary visual tasks the human brain can apprehend is an infinitesimal proportion of all possible such tasks.
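
A back-of-the-envelope check of that proportion, using the same 10^6-fiber and 10^17-bit figures as above (my own sketch, not from the post; it works entirely in logarithms, since 2^(2^(10^6)) is far too large to represent directly):

import math

n_inputs = 10**6        # optic-nerve fibers, assumed binary as in the post
brain_bits = 10**17     # upper bound on bits in the human synaptic matrix

# There are 2^(2^n_inputs) boolean functions on n_inputs bits,
# so the base-2 logarithm of their number is 2^n_inputs (an exact big integer).
log2_all_functions = 2 ** n_inputs

# The synaptic matrix can encode at most 2^brain_bits distinct functions,
# so the base-2 logarithm of that count is simply brain_bits.
log2_brain_functions = brain_bits

# log2 of the fraction of such tasks the brain could even represent:
log2_fraction = log2_brain_functions - log2_all_functions

print("log2(# all functions)        ~ 10^%d" % round(math.log10(log2_all_functions)))   # ~ 10^301030
print("log2(# brain functions)      = 10^17")
print("log2(representable fraction) ~ -10^%d" % round(math.log10(-log2_fraction)))      # ~ -10^301030

The representable fraction therefore comes out around 2^(-10^301030), which is the “infinitesimal proportion” the post refers to.
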
As Einstein famously said: “the most incomprehensible thing about the world is that it is comprehensible.”
*** by “human-level intelligence”, I don't mean an AI system that reproduces the human mind, but an AI system that can accomplish all the intellectual tasks a human can accomplish with similar performance (however you want to measure that).
Thanks to Irina Rish for prompting me to start this discussion.
I have made these points in my book “Quand La Machine Apprend”.

Excellent podcast with Nikhil Kamath in which we cover a lot of topics related to AI and deep learning: the history of AI, science and engineering, what is intelligence, GOFAI and neural nets, how does machine learning work, convolutional nets and transformers, self-supervised learning, LLMs and their limitations, what are JEPAs and why we need them, advice to students and entrepreneurs.

A broad survey of published methods to “augment” Language Models so they can reason, plan, and use tools to elaborate their answers.
Tools such as search engines, calculators, code interpreters, database queries, etc., can help LLMs produce factual answers.

Brought to you by @MetaAI - FAIR.

https://lnkd.in/gSvmn2S3
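
As a rough illustration of the augmentation pattern that survey covers (a toy sketch of my own; the model stub and tool names are hypothetical, and a real system would have an actual LLM decide when and how to call a tool):

from typing import Callable, Dict

# Hypothetical tools; a real system might wrap a search engine, a calculator,
# a code interpreter, or a database query layer.
TOOLS: Dict[str, Callable[[str], str]] = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}}, {})),  # toy arithmetic only
    "lookup": lambda key: {"capital of France": "Paris"}.get(key, "unknown"),
}

def fake_llm(prompt: str) -> str:
    """Stand-in for a language model: decides whether a tool call is needed."""
    if any(c.isdigit() for c in prompt):
        return "CALL calculator: 17 * 23"
    return "CALL lookup: capital of France"

def answer(question: str) -> str:
    model_output = fake_llm(question)
    if model_output.startswith("CALL "):
        tool_name, _, arg = model_output[len("CALL "):].partition(": ")
        result = TOOLS[tool_name](arg)
        # In a real augmented LLM, the tool result would be appended to the prompt
        # and the model would produce the final, grounded answer in a second pass.
        return result
    return model_output

print(answer("What is 17 * 23?"))                 # -> 391 (via the calculator tool)
print(answer("What is the capital of France?"))   # -> Paris (via the lookup tool)
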
IEEE Spectrum writes about progress in Self-Supervised Learning at Meta-FAIR, particularly the recent work on Masked Auto-Encoders with transformer architectures.

Ian Hogarth's opinion piece in the Financial Times echoes previous remarks by Emmanuel Macron and Mario Draghi: technology drives economic growth, and Europe is missing some ingredients that would enable the emergence of large technology companies.

Europe has the required talents: lots of AI breakthroughs were produced in Europe by Europeans, but with .... US funds (e.g. at Google-DeepMind in London or Meta-FAIR in Paris).

Ian points to a lower tolerance for risk in Europe than in the US, both from entrepreneurs and (perhaps more importantly) from investors. There is that.

But there is another important factor: almost all of the fundamental innovations in AI of the last dozen years did *not* come from startups. They came from well-funded industry research labs belonging to large and highly-profitable companies: Google, Meta, Microsoft, and a few others.
DeepMind would *never* have survived, let alone delivered breakthroughs, without being bought by a large company like Google. Their original business model as an independent company was never going to fly, in part because long-term research is expensive, and in part because they were overly optimistic about their timeline to AGI (their original plan for AGI based on RL was a complete failure).

Why haven't large European groups started ambitious AI research labs in the vein of Google Brain, DeepMind, FAIR, or MSR?
European companies used to have world-class research labs, but not anymore. In the heyday of Bell Labs, IBM Research, Xerox PARC, and Microsoft Research, there were such labs in Europe in the 1980s (e.g. Philips Labs, Siemens, France Telecom, Alcatel). But they never valued research-scientist careers the way American tech companies have re-learned to do, starting with MSR in the late 90s.
European industry research labs became shadows of their former selves. Interestingly, though, they spawned some notable spin-offs: the most valuable European tech company, ASML, was created out of the remnants of Philips Labs.

The existence of ambitious industry research labs has an incredibly positive effect on the R&D and startup ecosystem. I witnessed this effect first-hand with the creation of FAIR-Paris in 2015: it almost single-handedly jump-started the AI startup ecosystem in Paris (which is now the most vibrant in Europe).

The existence of FAIR-Paris, and later the Parisian branch of DeepMind, sent a message to young aspiring scientists: you can have a career in AI research in Europe, and outside of academia. It motivated a lot of talented students to pursue graduate studies and to learn how to do research by doing a PhD.
FAIR-Paris contributed to this by hosting PhD students in residence. FAIR graduates a dozen PhDs in an average year.
They have gone on to do wonderful things in the European ecosystem.

Some have founded AI startups and raised large amounts of capital .... but often from US investors.
