Andrew Ng

These are the best posts from Andrew Ng.

36 viral posts with 217,813 likes, 6,313 comments, and 10,904 shares.
7 image posts, 0 carousel posts, 13 video posts, 14 text posts.

Best Posts by Andrew Ng on LinkedIn

I wrote in today's edition of The Batch about the new F-1 visa policy, and want to share that message here as well. #StudentBan
Post image by Andrew Ng
Announcing: Agentic Document Extraction!

PDF files represent information visually - via layout, charts, graphs, etc. - and are more than just text. Unlike traditional OCR and most PDF-to-text approaches, which focus on extracting the text, an agentic approach lets us break a document down into components and reason about them, resulting in more accurate extraction of the underlying meaning for RAG and other applications. Watch the video for details.
Fun breakfast with Yann LeCun. We chatted about open science and open source (grateful for his tireless advocacy of these for decades), JEPA, and where AI research and models might go next!
Post image by Andrew Ng
Agentic Document Extraction just got much faster! Median processing time is down from 135 seconds to 8 seconds. It extracts not just text but also diagrams, charts, and form fields from PDFs to give LLM-ready output. Please see the video for details and some application ideas.
Math for Machine Learning and Data Science is now available on Coursera! Taught by Luis Serrano, this gives an intuitive understanding of the most important math concepts for AI.

I’ve often said “don’t worry about it” when it comes to math, because math shouldn’t hold anyone back from making progress in ML. And, understanding some key topics in linear algebra, calculus, and prob & stats will help you better get learning algorithms to work.

This specialization was designed with numerous interactive visualizations to help you see how the math works. Math isn’t about memorizing formulas; it’s about sharpening your intuition. I hope you enjoy the specialization!
Had an insightful conversation with Geoff Hinton about AI and catastrophic risks. Two thoughts we want to share:
(i) It's important that AI scientists reach consensus on risks (similar to climate scientists, who have rough consensus on climate change) to shape good policy.
(ii) Do AI models understand the world? We think they do. If we list out and develop a shared view on key technical questions like this, it will help move us toward consensus on risks.

I learned a lot speaking with Geoff. Let’s all of us in AI keep having conversations to learn from each other!
Building and deploying a machine learning model usually takes months. How can you go from starting a project to training and deploying your model in minutes? Here's a 3min overview of the LandingLens platform.
Our new short course, “Knowledge Graphs for RAG” is now available! Knowledge graphs are a data structure that is great at capturing complex relationships between data of multiple types. By enabling more sophisticated retrieval of text than similarity search alone, knowledge graphs can improve the context you pass to the LLM and the performance of your RAG applications.

In this course, taught by Andreas Kollegger of Neo4j, you’ll 
- Explore how knowledge graphs work by building a graph of public financial documents from scratch
- Learn to write queries that retrieve text and data from the graph and use it to enhance the context you pass to an LLM chatbot
- Combine a knowledge graph with a question-answer chain to build better RAG-powered chat systems

Sign up here! https://lnkd.in/gZx2Kie5
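
For a rough sense of the pattern this course teaches, here is a minimal sketch of graph-backed retrieval feeding an LLM prompt. It uses the official neo4j Python driver, but the connection details, node labels, and relationship names below are illustrative placeholders, not the course's actual schema.

    from neo4j import GraphDatabase

    # Placeholder connection details and schema -- adjust to your own graph.
    driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

    cypher = """
    MATCH (c:Company {name: $name})-[:FILED]->(f:Filing)
    RETURN f.text AS text
    LIMIT 5
    """

    with driver.session() as session:
        chunks = [record["text"] for record in session.run(cypher, name="ExampleCorp")]
    driver.close()

    # The retrieved chunks become the context passed to the LLM.
    context = "\n\n".join(chunks)
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: What risks did the company report?"
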
I’ve been thinking about how to accelerate how all of us build and deploy ML, and have some ideas I want to share. I hope you’ll join me on this interactive livestream next Wednesday to chat over some ideas! https://lnkd.in/gYYzR_K
Hanging out with Project Jupyter co-founder Brian Granger. If not for him and Fernando Pérez, we wouldn’t have the coding notebooks we use daily in AI and Data Science. Very grateful to him and the whole Jupyter team for this wonderful open-source work!
Post image by Andrew Ng
Learn to build your own voice-activated AI assistant that can execute tasks like gathering recent AI news from the web, scripting out a podcast, and using tools to put all that into a multi-speaker podcast. See our new short course: “Building Live Voice Agents with Google’s ADK (Agent Development Kit),” taught by Google’s Lavi Nigam and Sita Lakshmi Sangameswaran.

ADK provides modular components that make it easy to build and debug agents. It also includes a built-in web interface for tracing agentic reasoning. This course illustrates these concepts by building a live voice agent that can chain actions to complete a complex task like creating a podcast. This requires maintaining context, implementing guardrails, reasoning, and handling audio streaming, while keeping latency low.

You’ll learn to:
- Build voice agents that listen, reason, and respond
- Guide your agent to follow a specific workflow to accomplish a task
- Coordinate specialized agents to build an agentic podcast workflow that researches topics and produces multi-speaker audio
- Understand how to deploy an agent into production

Even if you’re not yet building voice systems, you'll find it useful to understand how realtime agents stream data and maintain reliability when designing modern agentic applications.

Please join here: https://lnkd.in/ga6tD5rt
Announcing the Data-Centric AI competition! I’m excited to invite you to participate in this new competition format, and see how you can improve an AI system only by refining the data it depends on! https://bit.ly/3vwE56i
Post image by Andrew Ng
The new ICE policy regarding F-1 visa international students is horrible and will hurt the US, students, and universities. It pushes universities to offer in-person classes even when that is unsafe or of no pedagogical benefit, or pushes students to leave the US amid a pandemic and risk being unable to return.

Here's the text of the policy. This puts the US, students and universities in a lose-lose-lose situation. https://lnkd.in/gx4E85S
Post image by Andrew Ng
Everyone should learn to code with AI! At AI Fund, everyone - not just engineers - can vibe code or use AI assistance to code. This has been great for our creativity and productivity. I hope more teams will empower everyone to build with AI. Please watch the video for details.
Announcing “Generative AI for Software Development,” a new specialization on Coursera! Taught by my friend and longtime DeepLearning.AI instructor Laurence Moroney. Using GenAI for software development goes well beyond using chatbots for code generation. This 3-course series shares current best practices for AI use through the entire software development lifecycle: From design and architecture to coding, testing, deployment, and maintenance.

You'll learn to use LLMs as your thought partner, pair programmer, documentation specialist, security analyst, and performance optimization expert. There's a lot that anyone who writes software can gain from using GenAI, and this will show you how!

Please sign up here to get started! https://lnkd.in/gJZ_j88K
Without proper governance, an AI agent might autonomously access sensitive data, expose personal information, or modify sensitive records. In our new short course: “Governing AI Agents,” created with Databricks and taught by Amber R., you’ll design AI agents that handle data safely, securely, and transparently across their entire lifecycle.

You’ll learn to integrate governance into your agent’s workflow by controlling data access, ensuring privacy protection and implementing observability.

Skills you'll gain:
- Understand the four pillars of agent governance: Lifecycle management, risk management, security, and observability
- Define appropriate data permissions for your agent
- Create views or SQL queries that return only the data your agent should access
- Anonymize and mask sensitive data like social security numbers and employee IDs
- Log, evaluate, version, and deploy your agents on Databricks

If you’re building or deploying AI agents, learning how to govern them is key to keeping systems safe and production-ready.

Sign up here: https://lnkd.in/gNPY8jbW
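
To make the masking idea above concrete, here is a minimal, framework-agnostic sketch in Python. It is not the course's Databricks-specific tooling, and the employee-ID format is a made-up example.

    import re

    SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-(\d{4})\b")
    EMP_ID_PATTERN = re.compile(r"\bEMP-\d{6}\b")  # hypothetical employee-ID format

    def mask_record(text: str) -> str:
        # Keep only the last four digits of an SSN and hide employee IDs entirely
        # before the record is ever shown to the agent.
        text = SSN_PATTERN.sub(r"***-**-\1", text)
        return EMP_ID_PATTERN.sub("EMP-XXXXXX", text)

    print(mask_record("Jane Doe, SSN 123-45-6789, badge EMP-004271"))
    # -> Jane Doe, SSN ***-**-6789, badge EMP-XXXXXX
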
New Course: ACP: Agent Communication Protocol

Learn to build agents that communicate and collaborate across different frameworks using ACP in this short course built with IBM Research’s BeeAI, and taught by Sandi Besen, AI Research Engineer & Ecosystem Lead at IBM, and Nicholas Renotte, Head of AI Developer Advocacy at IBM.

Building a multi-agent system with agents built or used by different teams and organizations can become challenging. You may need to write custom integrations each time a team updates their agent design or changes their choice of agentic orchestration framework.

The Agent Communication Protocol (ACP) is an open protocol that addresses this challenge by standardizing how agents communicate, using a unified RESTful interface that works across frameworks. In this protocol, you host an agent inside an ACP server, which handles requests from an ACP client and passes them to the appropriate agent. Using a standardized client-server interface allows multiple teams to reuse agents across projects. It also makes it easier to switch between frameworks, replace an agent with a new version, or update a multi-agent system without refactoring the entire system.
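
As a rough illustration of this client-server pattern (not the actual ACP SDK; the route and payload fields below are hypothetical stand-ins for what the spec defines), a client call might look something like this:

    import requests

    ACP_SERVER = "http://localhost:8000"  # where the ACP server hosting your agent runs

    response = requests.post(
        f"{ACP_SERVER}/runs",                        # hypothetical route
        json={
            "agent": "rag_agent",                    # which hosted agent to invoke
            "input": "Summarize the latest filing."  # the task for that agent
        },
        timeout=60,
    )
    response.raise_for_status()
    print(response.json())

Because every agent sits behind the same kind of HTTP interface, swapping the agent's framework or version changes nothing on the client side.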

In this course, you’ll learn to connect agents through ACP. You’ll understand the lifecycle of an ACP Agent and how it compares to other protocols, such as MCP (Model Context Protocol) and A2A (Agent-to-Agent). You’ll build ACP-compliant agents and implement both sequential and hierarchical workflows of multiple agents collaborating using ACP.

Through hands-on exercises, you’ll build:
- A RAG agent with CrewAI and wrap it inside an ACP server.
- An ACP Client to make calls to the ACP server you created.
- A sequential workflow that chains an ACP server, created with Smolagents, to the RAG agent.
- A hierarchical workflow using a router agent that transforms user queries into tasks, delegated to agents available through ACP servers.
- An agent that uses MCP to access tools and ACP to communicate with other agents.

You’ll finish up by importing your ACP agents into the BeeAI platform, an open-source registry for discovering and sharing agents.

ACP enables collaboration between agents across teams and organizations. By the end of this course, you’ll be able to build ACP agents and workflows that communicate and collaborate regardless of framework.

Please sign up here: https://lnkd.in/g4gES9CF
An exciting new professional certificate: PyTorch for Deep Learning, taught by Laurence Moroney, is now available at DeepLearning.AI. This is the definitive program for learning PyTorch, which is one of the main frameworks researchers use to build breakthrough AI systems. If you want to understand how modern deep learning models work—or build your own custom architectures—PyTorch gives you direct control over the key aspects of model development.

This three-course professional certificate takes you from fundamentals through advanced architectures and deployment:

Course 1: PyTorch: Fundamentals - Learn how PyTorch represents data with tensors and how datasets fit into the training process. You'll build and train neural networks step by step, monitor training progress, and evaluate performance. By the end, you'll understand PyTorch's workflow and be ready to design, train, and test your own models.
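
For a taste of the workflow Course 1 covers, here is a minimal, self-contained sketch of the tensors-to-training-loop pattern (synthetic data, not course material):

    import torch
    from torch import nn

    X = torch.randn(256, 10)                      # synthetic features
    y = (X.sum(dim=1, keepdim=True) > 0).float()  # synthetic binary labels

    model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 1))
    loss_fn = nn.BCEWithLogitsLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

    for epoch in range(100):
        optimizer.zero_grad()
        loss = loss_fn(model(X), y)   # forward pass and loss
        loss.backward()               # backpropagation
        optimizer.step()              # parameter update

    with torch.no_grad():
        accuracy = ((model(X) > 0).float() == y).float().mean().item()
    print(f"final loss {loss.item():.3f}, train accuracy {accuracy:.2f}")
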

Course 2: PyTorch: Techniques and Ecosystem Tools - Master hyperparameter optimization, model profiling, and workflow efficiency. You'll use learning rate schedulers, tackle overfitting, and apply automated tuning with Optuna. Work with TorchVision for visual AI and Hugging Face for NLP. Learn transfer learning and fine-tune pretrained models for new problems.

Course 3: PyTorch: Advanced Architectures and Deployment - Build sophisticated architectures including Siamese Networks, ResNet, DenseNet, and Transformers. Learn how attention mechanisms power modern language models and how diffusion models generate images. Prepare models for deployment with ONNX, MLflow, pruning, and quantization.

Skills you'll gain:
- Build and optimize neural networks in PyTorch—the framework researchers use to create breakthrough models
- Fine-tune pretrained models for computer vision and NLP tasks—adapting existing models to solve your specific problems
- Implement transformer architectures and work with diffusion models, the core technologies behind ChatGPT and modern image generation
- Optimize models with quantization and pruning to make them fast and efficient for real-world deployment

Whether you want to use pre-existing models, build your own custom models, or just understand what's happening under the hood of the systems you use, this program will give you that foundation.

Start learning PyTorch: https://lnkd.in/debGfGct
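
And for the deployment-optimization side mentioned in Course 3, here is a small sketch of pruning followed by dynamic quantization using PyTorch's built-in utilities (a toy model, just to show the calls):

    import torch
    from torch import nn
    import torch.nn.utils.prune as prune

    model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 1))

    # Prune 30% of the smallest-magnitude weights in the first linear layer,
    # then make the pruning permanent.
    prune.l1_unstructured(model[0], name="weight", amount=0.3)
    prune.remove(model[0], "weight")

    # Dynamic quantization stores Linear weights in int8 for smaller, faster CPU inference.
    quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
    print(quantized)
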
DeepLearning.AI Pro is now generally available -- this is the one membership that keeps you at the forefront of AI. Please join!

There has never been a moment when the distance between having an idea and building it has been smaller. Things that once required months of work for teams can now be built by individuals using AI, in days. This is why we built DeepLearning.AI Pro. I'm personally working hard on this membership program to help you build applications that can launch or accelerate your career, and shape the future of AI.

DeepLearning.AI Pro gives you full access to 150+ programs, including my recently launched Agentic AI course, the new Post-Training and PyTorch courses by Sharon Zhou and Laurence Moroney (just released this week), and all of DeepLearning.AI's top courses and professional certificates.

All course videos remain free. Pro membership adds hands-on learning: labs to build working systems, practice questions to hone your understanding, and certificates to share your skills.

I'm also building new tools to help you create AI applications and grow your career (and have fun doing so!). Many will be available first to Pro members.

Try out DeepLearning.AI Pro free, and let me know what you build!

https://lnkd.in/g599YP7E
I'm very excited to welcome Ted Greenwald to the deeplearning.ai team! Ted is a former Wall Street Journal editor, and will be leading a new editorial function to share with you the most important stories in AI. Stay tuned!
Post image by Andrew Ng
AI coding just arrived in Jupyter notebooks - and Brian Granger (Jupyter co-founder) and I will show you how to use it.

Coding by hand is becoming obsolete. The latest Jupyter AI - built by the Jupyter team and showcased at JupyterCon this week - brings AI assistance directly into notebooks.

Most AI coding assistants struggle with Jupyter notebooks. Jupyter AI was designed specifically for them. This is the first course to teach it.

In this short course, Brian and I teach you to:
- Generate and debug code directly in notebook cells through an integrated chat interface
- Provide the right context (like API docs) to help AI write accurate code
- Use Jupyter AI's unique notebook features: drag cells to chat, generate cells from chat, attach context for the LLM

We've integrated Jupyter AI directly into the DeepLearning.AI platform, so you can start using it immediately. Since Jupyter AI is open source, you can also install and run it locally afterward.
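
If you install it locally, Jupyter AI also ships IPython magics alongside the chat interface the course focuses on. A minimal sketch (the provider:model identifier below is only an example; run %ai list to see what your environment supports):

    # In a terminal: pip install jupyter-ai
    # Cell 1: load the magics and see which providers/models are configured.
    %load_ext jupyter_ai_magics
    %ai list

    # Cell 2: ask a model for help (example identifier; yours may differ).
    %%ai openai-chat:gpt-4o-mini
    Explain why the previous cell raises a KeyError and suggest a fix.
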

Whether you're experienced with notebooks or learning them for the first time, this course will prepare you for AI-assisted notebook development.

Start using Jupyter AI (free): https://lnkd.in/gz3r_mRw
Next week on June 30, I’ll be with my Machine Learning Engineering for Production (MLOps) Specialization co-instructors Robert Crowe and Laurence Moroney, as well as Chip Huyen and Rajat Monga, in a live event to talk about MLOps. Hope to see you there!
Congratulations to the #Stanford2020 class that just graduated today! An online commencement wasn't what anyone had envisioned, but I am excited to see what you will accomplish and the contributions you'll make to this chaotic world. Proud of all of you! https://lnkd.in/gfmJUqT #stanford
Just finished writing the final few chapters of the Machine Learning Yearning book draft, on how to organize and strategize your ML projects. Will send it out soon -- sign up at http://mlyearning.org if you want a copy!
What are the most important topics to study for building a technical career in AI? I share my thoughts on this in The Batch.
An exciting new course: Fine-tuning and Reinforcement Learning for LLMs: Intro to Post-training, taught by Sharon Zhou, PhD, VP of AI at AMD. Available now at DeepLearning.AI.

Post-training is the key technique used by frontier labs to turn a base LLM--a model trained on massive unlabeled text to predict the next word/token--into a helpful, reliable assistant that can follow instructions. I've also seen many applications where post-training is what turns a demo application that works only 80% of the time into a reliable system that consistently performs. This course will teach you the most important post-training techniques!

In this 5-module course, Sharon walks you through the complete post-training pipeline: supervised fine-tuning, reward modeling, RLHF, and techniques like PPO and GRPO. You'll also learn to use LoRA for efficient training, and to design evals that catch problems before and after deployment.
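
As a small taste of the LoRA piece, here is a sketch using Hugging Face's peft library (the base model and target module names are illustrative choices, not the course's exact setup):

    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B")  # example base model

    lora_config = LoraConfig(
        r=8,                                  # rank of the low-rank update matrices
        lora_alpha=16,                        # scaling factor for the updates
        target_modules=["q_proj", "v_proj"],  # attention projections to adapt
        lora_dropout=0.05,
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(base, lora_config)
    model.print_trainable_parameters()  # typically well under 1% of the base model's weights
    # `model` can now be handed to a standard supervised fine-tuning loop or trainer.
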

Skills you'll gain:
- Apply supervised fine-tuning and reinforcement learning (RLHF, PPO, GRPO) to align models to desired behaviors
- Use LoRA for efficient fine-tuning without retraining entire models
- Prepare datasets and generate synthetic data for post-training
- Understand how to operate LLM production pipelines, with go/no-go decision points and feedback loops

These advanced methods aren’t limited to frontier AI labs anymore, and you can now use them in your own applications.

Learn here: https://lnkd.in/gn9UAunn
Readers responded with both surprise and agreement last week when I wrote that the single biggest predictor of how rapidly a team makes progress building an AI agent lay in their ability to drive a disciplined process for evals (measuring the system’s performance) and error analysis (identifying the causes of errors). It’s tempting to shortcut these processes and to quickly attempt fixes to mistakes rather than slowing down to identify the root causes. But evals and error analysis can lead to much faster progress. In this first of a two-part letter, I’ll share some best practices for finding and addressing issues in agentic systems.

Even though error analysis has long been an important part of building supervised learning systems, it is still underappreciated compared to, say, using the latest and buzziest tools. Identifying the root causes of particular kinds of errors might seem “boring,” but it pays off! If you are not yet persuaded that error analysis is important, permit me to point out: 
- To master a composition on a musical instrument, you don’t only play the same piece from start to end. Instead, you identify where you’re stumbling and practice those parts more.
- To be healthy, you don’t just build your diet around the latest nutrition fads. You also ask your doctor about your bloodwork to see if anything is amiss. (I did this last month and am happy to report I’m in good health! 😃)
- To improve your sports team’s performance, you don’t just practice trick shots. Instead, you review game films to spot gaps and then address them.

To improve your agentic AI system, don’t just stack up the latest buzzy techniques that went viral on social media (though I find it fun to experiment with buzzy AI techniques as much as the next person!). Instead, use error analysis to figure out where it’s falling short, and focus on that.

Before analyzing errors, we first have to decide what is an error. So the first step is to put in evals. I’ll focus on that for the remainder of this letter and discuss error analysis next week.

If you are using supervised learning to train a binary classifier, the number of ways the algorithm could make a mistake is limited: it could output 0 instead of 1, or vice versa. There is also a handful of standard metrics, like accuracy, precision, recall, F1, and ROC, that apply to many problems. So as long as you know the test distribution, evals are relatively straightforward. Much of the work of error analysis then lies in identifying what types of input the algorithm fails on, which in turn points to data-centric AI techniques for acquiring more data in the areas where the algorithm is weak.
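
For reference, computing those standard metrics for a binary classifier is a few lines with scikit-learn (toy labels below, just to show the calls):

    from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

    y_true = [1, 0, 1, 1, 0, 0, 1, 0]  # ground-truth labels
    y_pred = [1, 0, 0, 1, 0, 1, 1, 0]  # model predictions

    print("accuracy :", accuracy_score(y_true, y_pred))
    print("precision:", precision_score(y_true, y_pred))
    print("recall   :", recall_score(y_true, y_pred))
    print("f1       :", f1_score(y_true, y_pred))
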

With generative AI, a lot of intuitions from evals and error analysis of supervised learning carry over — history doesn’t repeat itself, but it rhymes.

[Truncated due to length limit. Full text: https://lnkd.in/gjqv6VeA ]
Ian Goodfellow, Anima Anandkumar, Alexei Efros, Sharon Zhou and I will be speaking at GANs for Good, an online panel, on September 30th at 10am PDT. This is to celebrate the launch of DeepLearning.AI's new Generative Adversarial Networks Specialization. Come join us! https://bit.ly/3hPfLpy

You can also sign up to get course updates: https://bit.ly/2FZixuU
Love seeing the data-centric AI development movement growing! Starting this month, FourthBrain (online AI bootcamp and AI Fund portfolio company) will be teaching data-centric approaches to MLOps!
What rules regarding publishing papers would be fair, when it relates to work done by researchers working for companies? I ask this question in this week's The Batch, and would love to hear your thoughts. https://lnkd.in/gUx6piK
Post image by Andrew Ng
I’ve been following the Data-Centric AI competition leaderboard with excitement. Right now Wei Jing is in the lead, followed closely by AryanTyagi. Bi2i and Svpino are tied for third place. Anyone want to take them on?
I hope we can empower everyone to build with AI. Starting from K-12, we should teach every student AI-enabled coding, since this will enable them to become more productive and more empowered adults. But there is a huge shortage of computer science (CS) teachers. I recently spoke with high school basketball coach Kyle Creasy, who graduated with a B.A. in Physical Education in 2023. Until two years ago, he had never written a line of Python. Now — with help from AI — he not only writes code, he also teaches CS. I found Kyle’s story inspiring as a model for scaling up CS education at the primary and secondary school levels.

Kyle’s success has come with the support of Kira Learning (an AI Fund portfolio company), whose founders Andrea Pasinetti and Jagriti Agrawal have created a compelling vision for CS education. In K-12 classrooms, teachers play a huge social-emotional support role, for example, encouraging students and helping them when they stumble. In addition, they are expected to be subject-matter experts who can deliver the content needed for their subject. Kira Learning uses digital content delivery — educational videos, autograded quizzes, and AI-enabled chatbots that answer students' questions without giving away homework answers — so the teacher can focus on social-emotional support. While these are still early days, it appears to be working!

A key to making this possible is the hyperpersonalization that is now possible with AI (in contrast to the older idea of the flipped classroom, which had limited adoption). For example, when assigned a problem in an online coding environment, if a student writes this buggy line of Python code

best_$alty_snack = 'potato chips'

Kira Learning’s AI system can spot the problem and directly tell the teacher that $ is an invalid character in a variable name. It can also suggest a specific question for the teacher to ask the student to help get them unstuck, like “Can you identify what characters are allowed in variable names?” AI can deliver personalized advice directly to students, but the fact that it now also helps teachers deliver personalized support will really help in K-12.
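
This is not Kira Learning's actual system, but as a minimal illustration, even Python's own compiler can flag that line, and an LLM can then turn the raw error into a guiding question for the teacher:

    student_code = "best_$alty_snack = 'potato chips'"

    try:
        compile(student_code, "<student submission>", "exec")
    except SyntaxError as err:
        # Report the location and message; an LLM could rephrase this as a question
        # like "Can you identify what characters are allowed in variable names?"
        print(f"Line {err.lineno}, column {err.offset}: {err.msg}")
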

Additionally, agentic workflows can automate a lot of teachers’ repetitive tasks. For example, when designing a curriculum, it’s time-consuming to align the content to educational standards (such as the Common Core in the United States, or the AP CS standard for many CS classes). Having an AI system carry out tasks like these is already proving helpful for teachers.

Since learning to code, Kyle has built many pieces of software. He proudly showed me an analysis he generated in matplotlib of his basketball players’ attempts to shoot three-pointers (shown above), which in turn is affecting the team’s strategy on the court. One lesson is clear: When a basketball coach learns to code, they become a better basketball coach!

[Reached length limit. Full text: https://lnkd.in/gthKuC5Q ]
The full agenda for AI Dev 25 x NYC is ready.

Developers from Google, AWS, Vercel, Groq, Mistral AI, SAP, and other exciting companies will share what they've learned building production AI systems. Here's what we'll cover:

Agentic Architecture: When orchestration frameworks help versus when they accumulate errors. How model-driven agents and autonomous planning handle edge cases.

Context Engineering: Why retrieval fails for complex reasoning tasks. How knowledge graphs connect information that vector search misses. Building memory systems that preserve relationships.

Infrastructure: Where hardware, models, and applications create scaling bottlenecks. Semantic caching strategies that cut costs and latency. How inference speed enables better orchestration.

Production Readiness: Moving from informal evaluation to systematic agent testing. Translating AI governance into engineering practice. Building under regulatory constraints.

Tooling: MCP implementations that work. Context-rich code review systems. Working demos you can adapt for your applications.

I'll share my perspective on where AI development is heading. Looking forward to seeing you there! https://lnkd.in/gMafG9aG
I recently received an email titled “An 18-year-old’s dilemma: Too late to contribute to AI?” Its author, who gave me permission to share this, is preparing for college. He is worried that by the time he graduates, AI will be so good there’s no meaningful work left for him to do to contribute to humanity, and he will just live on Universal Basic Income (UBI). I wrote back to reassure him that there will still be plenty of work he can do for decades hence, and encouraged him to work hard and learn to build with AI. But this conversation struck me as an example of how harmful hype about AI is.

Yes, AI is amazingly intelligent, and I’m thrilled to be using it every day to build things I couldn’t have built a year ago. At the same time, AI is still incredibly dumb, and I would not trust a frontier LLM by itself to prioritize my calendar, carry out resumé screening, or choose what to order for lunch — tasks that businesses routinely ask junior personnel to do.

Yes, we can build AI software to do these tasks. For example, after a lot of customization work, one of my teams now has a decent AI resumé screening assistant. But the point is it took a lot of customization.

Even though LLMs can handle a much more general set of tasks than previous iterations of AI technology, they are still highly specialized compared to what humans can do. They’re much better at working with text than other modalities, they still require lots of custom engineering to get the right context for a particular application, and we have only a few, inefficient tools for getting our systems to learn from feedback and repeated exposure to a specific task (such as screening resumés for a particular role).

AI has stark limitations, and despite rapid improvements, it will remain limited compared to humans for a long time.

AI is amazing, but it has unfortunately been hyped up to be even more amazing than it is. A pernicious aspect of hype is that it often contains an element of truth, but not to the degree of the hype. This makes it difficult for nontechnical people to discern where the truth really is. Modern AI is a general-purpose technology that is enabling many applications, but AI that can do any intellectual task that a human can (a popular definition of AGI) is still decades away or longer. This nuanced message, that AI is general but not that general, is often lost in the noise of today's media environment.

[Truncated for length. Full text:  https://lnkd.in/gAuQcZ8M ]
AI agents are getting better at looking across the different types of data in a business to spot patterns and create value, which is making data silos increasingly painful. This is why I try to select software that lets me control my own data, so I can make it available to my AI agents.

Because of AI’s growing capabilities, the value you can now create from “connecting the dots” between different pieces of data is higher than ever. For example, if an email click is logged in one vendor’s system and a subsequent online purchase is logged in a different one, then it is valuable to build agents that can access both of these data sources to see how they correlate to make better decisions.

Unfortunately, many SaaS vendors try to create a data silo in their customer’s business. By making it hard for you to extract your data, they create high switching costs. This also allows them to steer you to buy their AI agent services — sometimes at high expense and/or of low quality — rather than build your own or buy from a different vendor. Some of these vendors see AI agents coming for this data and are working to make it harder for you (and your AI agents) to efficiently access it.

One of my teams just told me that a SaaS vendor we have been using to store our customer data wants to charge over $20,000 for an API key to get at our data. This high cost — no doubt intentionally designed to make it hard for customers to get their data out — is adding a barrier to implementing agentic workflows that take advantage of that data.

Through AI Aspire (an AI advisory firm), I advise a number of businesses on their AI strategies. When it comes to buying SaaS, I often advise them to try to control their own data (which, sadly, some vendors mightily resist). This way, you can hire a SaaS vendor to record and operate on your data, but ultimately you decide how to route it to the appropriate human or AI system for processing.

Over the past decade, a lot of work has gone into organizing businesses’ structured data. Because AI can now process unstructured data much better than before, the value of organizing your unstructured data (including PDF files, which LandingAI’s Agentic Document Extraction specializes in!) is higher than ever before.

In the era of generative AI, businesses and individuals have important work ahead to organize their data to be AI-ready.

P.S. As an individual, my favorite note-taking app is Obsidian. I am happy to “hire” Obsidian to operate on my notes files. And, all my notes are saved as Markdown files in my file system, and I have built AI agents that read from or write to my Obsidian files. This is a small example of how controlling my own notes data lets me do more with AI agents!
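
A minimal sketch of that idea, assuming the vault is just a local folder of Markdown files (the path below is a placeholder):

    from pathlib import Path

    VAULT = Path.home() / "ObsidianVault"  # placeholder vault location

    # Load every note so an agent can search, summarize, or append to it.
    notes = {p.name: p.read_text(encoding="utf-8") for p in VAULT.rglob("*.md")}

    matches = [name for name, body in notes.items() if "AI agents" in body]
    print(f"{len(notes)} notes loaded, {len(matches)} mention 'AI agents'")
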

[Original text: https://lnkd.in/gYPUvZGT ]
New course announcement: Design, Develop, and Deploy Multi-Agent Systems with CrewAI, taught by João (Joe) Moura, CrewAI Co-founder and CEO.

Multi-agent systems let you build AI teams that work together to automate complex workflows, similar to how human teams work.

CrewAI makes it simple to build multi-agent systems that handle routine work for you—just define your agents, tasks, and crew, and it manages the complexity of coordinating multiple agents and their context automatically. (Disclosure: I made a small angel investment in CrewAI.) This course takes you from building your first agent to deploying production systems using the open-source CrewAI framework.
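
To give a feel for the define-agents-tasks-crew flow, here is a bare-bones sketch (not the course's code; it assumes an LLM provider is configured via environment variables, and it omits tools, memory, and guardrails):

    from crewai import Agent, Task, Crew

    researcher = Agent(
        role="Researcher",
        goal="Collect the key facts on a topic",
        backstory="A careful analyst who double-checks claims.",
    )
    writer = Agent(
        role="Writer",
        goal="Turn research notes into a short summary",
        backstory="A concise technical writer.",
    )

    research = Task(
        description="Gather three key facts about multi-agent systems.",
        expected_output="A bullet list of three facts.",
        agent=researcher,
    )
    summarize = Task(
        description="Write a two-sentence summary based on the research notes.",
        expected_output="A two-sentence summary.",
        agent=writer,
    )

    # The crew coordinates the agents and passes context between tasks automatically.
    crew = Crew(agents=[researcher, writer], tasks=[research, summarize])
    print(crew.kickoff())
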

Skills you'll gain:
- Build reliable AI agents equipped with tools, memory, and guardrails
- Develop teams of agents that can plan, reason, and coordinate
- Deploy production-ready systems with tracing, evaluation, and monitoring

Whether you’re exploring multi-agent systems for the first time or looking to take your projects further, this course will help you build a mental framework for designing multi-agent systems, and help you turn ideas into scalable, production-ready applications.

Sign up here: https://lnkd.in/gEM_vNFN
