Nitin Aggarwal

These are the best posts from Nitin Aggarwal.

6 viral posts with 3,281 likes, 219 comments, and 60 shares.
0 image posts, 0 carousel posts, 0 video posts, 6 text posts.

Best Posts by Nitin Aggarwal on LinkedIn

These days I’m getting lots of YouTube ads about prompt engineers earning $300-500K annual salaries (maybe my search history really is that bad). In Indian rupees, those numbers are in crores. The ads all end with the same message: “Take my course/certificate and become a prompt engineer. Earn millions. You are missing out.”

Trust me, no “prompt engineers” are earning millions. The people earning well in AI/ML are “also” doing prompt engineering, not “just” prompt engineering. Most of them are already in the AI/ML domain and don’t need a certification to learn it.

Secondly, there is so much content openly available that you need not pay a dime to access it. Also, check the profiles of those who claim to be experts and are selling these courses. I looked up a few from the ads and found they hardly have any experience in this field. The whole idea behind prompting is to make using AI simple.

Don’t feel overwhelmed or trapped by FOMO. Don’t let people fool you out of your hard-earned money.

If you are still motivated or want to learn more about it via a paid course, please comment. I would love to record a course for you to buy. 🙂

#ExperienceFromTheField
One of the most important lessons I learned in my career is this: it doesn’t matter whether you are an executive, engineer, data scientist, product manager, or project manager; as you grow, your most important skill will be sales/marketing.

You must know how to sell and market. Your customers may change from internal to external. What you sell may vary from a product to software to services to people to an idea. Your value proposition may shift from cost reduction to revenue generation to competitive advantage to disruption. Your pitch may change from enterprise buyers to individual mass users. The one thing that won’t change is tweaking your messaging to sell it and convince people to buy.

If you can’t sell, you won’t just fail yourself; you’ll fail your team. As a leader, one of your primary responsibilities is to bring more opportunities to your team and help them succeed. That isn’t possible without this skill. Better to build it than to struggle.

#ExperienceFromTheField
For the first time ever in the AI field, we have more builders than consumers. More servers (you can read it as MCPs) than clients. Every team or organization is trying to find their ikigAI. It’s a never-ending, tireless loop of discovering the problem best suited for AI, one they want to solve as part of their business and one their customers are willing to pay for.

Tools are giving the impression that it’s easy to build and deploy agents. As a result, the market is flooded with “agents” that come with little to no accountability. AI was always meant to do things humans couldn’t do at scale. But the messaging has changed: now it’s framed as something that will replace humans. That shift isn’t helping the field. The trust chasm keeps widening, and it’s evident in the adoption numbers of many of these tools and their desperation for customer acquisition.

AI isn’t “free,” and it’s certainly not easy to make it work at scale. It’s easy to build an agent but hard to take ownership of its responses. It’s easy to launch but hard to maintain and drive adoption. Be careful not to let your problem-solving turn into tool-hopping.

Go deep. Build intuition around the algorithms. I know it’s hard, and it takes time, but it’s rewarding. That’s your moat. Prompt “engineering” isn’t.

#ExperienceFromTheField #WrittenByHuman
Recently, I was playing with my friend’s 3-year-old. She started tapping and swiping my watch from every angle, waiting for something to happen. But my old-school watch doesn’t have a touch screen, and no matter how hard she tried, it stayed silent. After a few attempts, she sighed and said, “It’s not working. It’s gone bad.” I smiled, pressed a few buttons to light up the screen, and her face lit up too. “Wow, it’s working!” she said before trying again, only to find it “not working” once more. We kept going in circles, and soon she concluded that it worked only for me because I had magic.

What struck me was how quickly she built a story around something she didn’t understand. There may be a “generational” understanding of touch as the starting interface of a product with a screen. There was no frustration, only curiosity. To her, the watch wasn’t broken; it was mysterious and perhaps even enchanted.

If you find yourself comparing a watch to an LLM, taps to prompts, and expecting instant magic every time, maybe it’s time to take a little break from AI or AI tools. Sometimes, it’s not the watch that’s broken. It’s just “not working” for you. 😉

#ExperienceFromTheField #WrittenByHuman
Multi-agentic systems quickly evolve into a full-blown software engineering effort rather than pure AI development. Once you add agents with role-based access (RBAC), agent identity, logs, telemetry, and integrations, you start realizing you’re building a distributed application ecosystem, not just tuning or deploying models.

In parallel, you have to manage the context, agent flows, model selection, handoffs, and personalization running all in sync. Every additional coordination point adds latency, and that starts surfacing directly in the user experience. Designing for responsiveness while maintaining intelligence becomes a continuous trade-off. A lot starts falling on PMs to lead decision-making around experience and prioritization. That’s why it often feels confusing to know what skills are actually needed to build production-grade agentic systems.
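The coordination cost described above can be sketched in a few lines. This is a hypothetical toy, not a real framework: each `Agent`, its name, and its per-hop latency are invented for illustration, with `time.sleep` standing in for a model or tool call plus RBAC checks and logging.

```python
import time
from dataclasses import dataclass

@dataclass
class Agent:
    """One coordination point in a multi-agent flow (hypothetical)."""
    name: str
    latency_s: float  # simulated per-hop cost: model call, auth, telemetry

    def handle(self, message: str) -> str:
        time.sleep(self.latency_s)  # stands in for real work at this hop
        return f"{message} -> {self.name}"

def run_pipeline(agents: list[Agent], request: str) -> tuple[str, float]:
    """Chain handoffs and measure how per-hop latency compounds."""
    start = time.perf_counter()
    msg = request
    for agent in agents:
        msg = agent.handle(msg)  # every handoff adds its own delay
    return msg, time.perf_counter() - start

pipeline = [Agent("router", 0.05), Agent("retriever", 0.05), Agent("writer", 0.05)]
reply, elapsed = run_pipeline(pipeline, "user request")
print(reply)    # user request -> router -> retriever -> writer
print(elapsed)  # at least the sum of the three hop latencies
```

Even in this toy, total latency is the sum of every coordination point in the chain, which is why each added agent surfaces directly in the user experience.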

We’re now seeing the rise of the “AI PM,” where the crux is making technical decisions rather than just identifying personas. We saw a similar wave for program managers when the technical PgM role emerged. “Curated” user journeys are already taking a hit as expectations shift from general-purpose systems toward agent ecosystems, and this phase will last for a while.

AI has always been a partnership field. It’s hard to build something meaningful in a silo or within a single team. You need alignment across data, applications, UX, and business. Agents are just taking that partnership to the next level where intelligence, context, and collaboration start to merge. In such a matrix, accountability is what often gets lost. And if you don’t get the right response, well, it’s an agentic error. Because sometimes, there’s no "one" human behind it anymore.

#ExperienceFromTheField #WrittenByHuman
When we talk about agentic systems, the common perception is that they’ll be built on deep, layered AI stacks with multiple agents communicating, reasoning, and coordinating to complete tasks end-to-end. But the reality of what’s going into production today is far more grounded. A significant number of “agentic” systems rely heavily on RPA (Robotic Process Automation) and deterministic rules, with AI often serving as a thin natural-language layer. Sometimes it’s nothing more than a chatbot interface handing off to a rules engine downstream. In many cases, there’s hardly any agent-to-agent communication. These systems are evolving, and their incremental successes are important, but truly autonomous agentic architectures are still in early exploration and experimentation.
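A “thin natural-language layer over a rules engine,” as described above, can be as minimal as the following sketch. Everything here is hypothetical: the intents, the canned answers, and the keyword matching that stands in for a real intent classifier; the point is that no model is in the loop downstream of the handoff.

```python
# Deterministic business rules: the "agent" downstream is just a lookup.
RULES = {
    "refund": "Refunds are processed within 5 business days.",
    "hours": "Support is available 9am-6pm, Mon-Fri.",
}

def nl_layer(user_text: str) -> str:
    """Thin NL front-end: detect an intent, hand off to the rules engine."""
    text = user_text.lower()
    for intent, answer in RULES.items():
        if intent in text:          # keyword match stands in for intent detection
            return answer           # deterministic answer, no model involved
    return "Sorry, I can't help with that."

print(nl_layer("How do I get a refund?"))  # Refunds are processed within 5 business days.
```

The chat interface makes this feel “agentic,” but every response is fully determined by the rules table, which is exactly the production pattern the post describes.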

We’ve seen this pattern before. Data science matured the same way. Tree-based models were trusted, adopted, and operationalized far more than neural networks. It happened not because of superior performance, but because they offered familiarity, transparency, and, perhaps most importantly, a sense of “control.” Even today, a significant portion (arguably the majority) of AI used in critical industries like financial services and healthcare still relies on tree-based methods rather than high-end neural networks.

This is why AI observability has such a promising future. It isn’t just about governance or maintenance. It’s about creating the visibility, trust, and control that organizations need as agentic systems grow more complex. It gives organizations the reassurance that they are “leading” the system and understand the mechanics behind it. Addressing these foundational aspects is what will ultimately unlock the next wave of truly autonomous agents. It’s more than a technical shift; it’s truly a mindset shift.
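The observability idea above often starts with something simple: a structured, correlated event per agent step. A minimal sketch, with invented agent names and fields; a real system would ship these events to a tracing backend rather than print them.

```python
import json
import time
import uuid

def log_event(trace_id: str, agent: str, action: str, detail: dict) -> dict:
    """Emit one structured observability event tied to a single trace."""
    event = {
        "trace_id": trace_id,   # correlates every step of one request
        "ts": time.time(),
        "agent": agent,
        "action": action,
        **detail,
    }
    print(json.dumps(event))    # in practice: send to a tracing backend
    return event

trace = str(uuid.uuid4())
log_event(trace, "router", "handoff", {"to": "retriever"})
log_event(trace, "retriever", "tool_call", {"tool": "search", "latency_ms": 120})
```

Because every event carries the same `trace_id`, you can reconstruct which agent did what, in what order, and at what cost, which is the visibility and control the post argues organizations need.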

#ExperienceFromTheField #WrittenByHuman
