Claim 35 Post Templates from the 7 best LinkedIn Influencers

Aishwarya Srinivasan


These are the best posts from Aishwarya Srinivasan.

53 viral posts with 88,799 likes, 3,391 comments, and 581 shares.
47 image posts, 0 carousel posts, 3 video posts, 2 text posts.


Best Posts by Aishwarya Srinivasan on LinkedIn

Starting this new year with a new adventure 🚀
Google as a company has always intrigued me with its absolutely breathtaking innovations. I have been following Google's research in Federated Learning, Responsible AI, and some of the other exciting AI projects. I am thrilled to share that I will be joining Google Cloud's AI services team as a Data Scientist, and looking forward to tackling new challenges.

#noogler #google #newbeginnings2022 #datascientist #gcp #googlecloud #cloud #ai #data #googlecloudready
Post image by Aishwarya Srinivasan
The United States is a land of infinite opportunities, and I'll tell you why!

All of us work towards something, don't we? All of us have a dream that is very close to our hearts and means more than anything to us. As a kid growing up with parents and grandparents in a one-room apartment, there was one thing I really wanted: a better home, a bigger home. At that point it seemed like a far-fetched dream for my family.

Fast forward: last week I bought my second home, in the Bay Area, after buying one for my mother in India.

I could realize the dream my family had, and all of it could happen because of the opportunities I have had in this beautiful country, 🇺🇸. I am extremely grateful for what this country has offered me.

If I could meet my younger self, I would tell her that an ambition or goal might feel far-fetched at first, but keep moving towards it; keep building yourself one step at a time, one day at a time. Pin your goal and just keep moving towards it. There will be hurdles and challenges, but be resilient and keep the unwavering intention of reaching what you aimed for.

Keep at it! Keep hustling! And don't forget to pause and refresh, reflect, and appreciate how far you have come 😄

Thank you so much Anita Bolinjkar for helping me through the process and being so supportive 💯


#opportunities #work #motivation #inspiration #unitedstatesofamerica
Post image by Aishwarya Srinivasan
Coming from a low-income household, where my mother was the bread-winner of the family, I have grown up seeing what a strong woman looks like.

She has always supported me through all my dreams, and she fought against a society that called me “too ambitious“. It makes me laugh how the world thinks a man with a strong desire to achieve his goals is “ambitious“, but if a woman does the same, she gets the added prefix “too“!

I have seen my mother work 12 hours a day, 7 days straight, to earn the overtime, encashing all her vacation and cutting corners for herself, just to make sure I got the best education. Not once did she speak to me about “getting married at a certain age“, “needing to slow down“, or “focusing too much on work“. Why would she? She knew where education can take you.

She worked at a company for 30 years, where she started as a stenographer and rose through the ranks to become a research officer. With formal education only up to 10th grade, this lady is entirely self-taught!

So, don't be surprised if I tell you that I get furious when I see women not being treated equally at work! Most of the time, it is unconscious bias that makes people expect women to act a certain way or do certain things.

I have faced this myself, where people have handed menial tasks (e.g., social event planning) to me and given the important task to a male coworker. It is appalling how subtly this happens, and how many times it happens, until you have set your value based on what others think you are capable of!

If you are a woman reading this- know that YOU define your worth, don't let others tell you what you are capable of, because most likely they are wrong and underestimating your ability.

#thankful
Post image by Aishwarya Srinivasan
Building machine learning models gets cumbersome and complex at times with huge amounts of data. One of my favorite technologies is PySpark, as it makes multiprocessing and distributed computing easy.

I recently got the book “Applied Data Science using PySpark“. During an initial skim, I found it very comprehensive: a perfect handbook to use while working with PySpark. A great thing about the book is that it not only focuses on the data science pipeline from a technical perspective but also explains the deployment and productionization of machine learning models. A must-read for all data scientists working on applied machine learning use cases.

A special shout-out to the authors for donating the royalties of the book to children's education. I would encourage people in my network to get the book. As a Data Scientist, I recommend its contents, and as a philanthropist I would like to advocate for the authors' cause.

Kudos: Sundar Krishnan, Ramcharan Kakarla and Sridhar Alla

#datascience #machinelearning #pyspark #bigdata #book #applieddatascience #guide #predictiveanalytics
Post image by Aishwarya Srinivasan
Sharing a personal milestone with my network ❤️

Last week, I tied the knot with my partner Aditya Suresh in an intimate ceremony in Mumbai surrounded by our closest friends and family!

I'll be spending the rest of the year and the holiday season relaxing and spending time with family before returning to the States.

This has probably been my longest time off from work, but I am so grateful and filled with renewed energy to crush my professional and personal goals in 2024.

Here's wishing everyone a cheerful holiday season and thank you for all your wishes!
Post image by Aishwarya Srinivasan
And that's a wrap to Google Cloud Next 2023! It was such a wonderful experience talking to all the customers about their use-cases in the Generative AI Showcase.

The conference was very focused on the future of Generative AI, with Duet AI integration in Google Workspace and Vertex AI GenAI offerings.

Thank you to all the Cloud leaders for making this event a success and all the customers who really are the core of everything that happens here!

Nitin Aggarwal Thomas Cliett Nathan Faggian Tanvi Desai
#google #next #conference #genai #ai #ml
Post image by Aishwarya Srinivasan
As a Data Scientist, I’ve had my fair share of experience using Python notebooks for experimentation, model development, and quick-and-dirty analysis. However, even during my university projects, I often felt that the regular Jupyter Notebook interface can get pretty long-winded with many different cell blocks. Ordering the cells and scrolling through a long notebook, especially when you have to present your project or collaborate with other data scientists, was never the best experience.

I was recently introduced to a new way of visualizing Jupyter Notebooks: as a visual canvas. Einblick takes your traditional Jupyter Notebook and expands it into an expansive canvas that you can use to build your data science projects.

Some of the benefits I noticed while trying out the product were:

📌 It’s easy to run multiple modeling techniques in parallel. Einblick allows branching of cells, so if you have one dataset and you want to apply multiple ML algorithms and compare the results, you can run them all in parallel and view them on the same canvas!

📌 It’s one of the most nifty ways of presenting your code. Technical presentations with blocks of python cells on a Jupyter Notebook are not the most visually appealing. I found Einblick’s approach to be easy to use in a presentation or share with external stakeholders / collaborators directly.

📌 The basics of notebook functionality still exist. Einblick has retained the same shortcuts (e.g. shift + enter) that Data Scientists use every day, making the interface instantly recognizable.

I was able to import an existing Notebook directly into Einblick with a few clicks and was presented with the canvas interface. The product can also pull in datasets from a host of popular databases via connectors.

Are you a fan of visualizing your python code? Let me know of your experience - You can sign up for an Einblick account for free using your google/github profile and get started :-)

Check it out here: https://lnkd.in/gCxbKKwS


#datascience #ml #datascientists #datascientist #algorithms #google #python #experience #development #project #university #projects #github
Post image by Aishwarya Srinivasan
Met this inspiring woman and friend at Google office today, to get the first set of autographed copies of her book- “Visualizing Google Cloud“.
The book is absolutely mind-boggling and a piece of art! Priyanka Vergadia, you constantly surprise me with how you make learning creative 👏

#google #learning #cloud #creative #data #ml #ai
Post image by Aishwarya Srinivasan
Why does my resume not get picked? 🤔
How do I get experience without experience? 🤨
What are the skillsets I need to build a successful career as a Data Scientist? 🧐

Do you relate to any of these questions? Don’t feel shy; these are very commonly asked questions. I see hundreds of people asking me these, so I went looking for a resource that could help you. If you're looking to move successfully into Data Science or Data Analytics - this is a must-watch FREE session.

It's 75 minutes of non-stop insights & advice that will get you feeling confident about moving towards an incredible role. It's been watched by thousands & thousands of people already.

It is run by Andrew Jones who is a former Amazon & PlayStation Data Scientist. He's interviewed hundreds of candidates at those companies so it's all very much on point.

The session covers:

👉The skills hiring managers actually need & want
👉 How much you can really earn as a Data Scientist
👉 Inside knowledge on the DS hiring process
👉 How to optimise your Resume to get noticed
👉 The biggest myth about portfolio projects
👉 Ideas for portfolio projects
👉 How to stand out from the competition in interviews
👉 What separates “good“ Data Scientists from “great“ ones
👉 The best programme for learning Data Science

My advice would be to have pen & paper ready to write down notes as you go - I saw a review of the webinar saying they'd written 8 pages!

It's completely FREE - don't miss out - Link in the comment.

#datascience #datascientists #dataanalytics #datascientist #hiring #career #people #experience #learning #webinar #resume #projects
Post image by Aishwarya Srinivasan
At a point when some organizations announced “menstrual leave“ for women every month, a male friend of mine jokingly said - “Huh, I wish I were a woman, I could have had an extra day of leave every month“. Well, sure we all like an extra day off, but I wonder if one would really wish for periods. There is a lot that comes with them: mood swings, abdominal pain, back pain, feeling pukish about certain foods, cravings for certain foods, feeling restricted from outdoor activities, not feeling your best at work, to begin with. It is a very natural phenomenon for women, but very much ignored and something people don't talk about much.

Look around in your office (or your video meeting) and see how many women are present. All of us are going through this every single month for about a week. That leaves roughly three weeks of regular life each month and adds up to about 13 weeks of every year. Yet I am surprised how little people acknowledge it! I have seen my friends at college and teammates at work not feeling their best at this time, so I wanted to share some things that have worked for me to make my period cycles more comfortable.

1. Food 🍲 🫖
I try to eat more nuts and drink herbal tea during this time. Bananas and chocolate work miracles, especially with a “not so happy“ stomach.

2. External comforters 🤗
For the longest time, I used a hot water bag, then switched to electric heating pads. They work really well for the pain, but they weren't the best solution: I couldn't even move around the house, let alone go out. So I started reading about devices that could help me. Thanks to ad recommendation engines, I was bombarded with a dozen ads for period-pain devices. Most of them were bulky and way too expensive.

I found the Welme device (MyWelme), which was pretty compact and very reasonably priced, so I picked it up while in India. It has been a good solution for when I want to go out (it's super portable) and still want something to relieve my pain.

3. Exercise 🧘‍♀️
Believe me when I say this: regular exercise has a miraculous impact on how you feel during your periods. With regular workouts and yoga, I have found myself feeling so good that I can go on strenuous hikes while on my periods (excuse me, still not on day 1 or 2. I am not a superhuman).


4. Ayurvedic meds 💊
I try to stay away from allopathy as much as I can, so one of my go-tos is my Amma's favorite: Kottakkal Ayurveda. I specifically get the period meds, which balance the iron levels in your body and make your periods easier.

So these are my tips; use them yourself, or suggest them to your partner. I am not saying they will miraculously make you not feel your periods at all, but they have helped me feel the best I can during those days 😊

#work #people #share #india
Post image by Aishwarya Srinivasan
Had a splendid Sunday evening with this amazing group of founders at Founders, Inc. in San Francisco! Thank you Allie K. Miller for organizing this and inviting me. Got to catch up with Marily Nika, Ph.D and Daliana Liu.

Can't wait to work closely with this incredibly talented group of founders building AI tech that is going to change the reality we are in today!

Well, we could also say this was a pre-party before we all gather for OpenAI Dev Day 🚀
Post image by Aishwarya Srinivasan
Every Diwali, I find myself hitting a quiet reset button.

It’s more than just lights, sweets, and fireworks - it’s that feeling of renewal. A reminder that we can begin again, no matter how the year has unfolded.

Each year, this festival grounds me in gratitude and hope. It reminds me that even when things don’t go as planned, we can still create moments of meaning, sometimes in the most unexpected places.

Last year, I celebrated Diwali surrounded by my entire family, laughter echoing through our home.
This year looked different. We found ourselves celebrating in the middle of the mountains, just calm skies, fresh air, and quiet reflection.

And somehow, it was equally beautiful.

For me, Diwali isn’t just about where you are, but the light you carry with you- in your work, your relationships, and your own growth journey.

Here’s to new beginnings, to chasing what lights you up, and to carrying that energy through the rest of the year.
Post image by Aishwarya Srinivasan
It still feels a little unreal, I just crossed 1 million across all platforms where I create and share⭐️

When I started posting on LinkedIn, it wasn’t about becoming a creator. It was about curiosity - connecting with other ML researchers, sharing what I was learning, and building a community that learned together.

Over the years, that curiosity turned into a mission - to learn, teach, and grow alongside this community. And to help others make sense of a world that’s moving faster than ever, especially now with AI changing how we work, learn, and even think.

There are so many people who feel lost right now - unsure where to start, what to learn next, or how to stay relevant.

If my content has helped even a few of you feel a little more confident, a little more inspired, or a little more seen - that’s what makes it all worth it.

I’m deeply grateful for every person who’s been part of this journey, whether you discovered my work through a post, a reel, a workshop, or a random video. You’ve shaped this path more than you know.

The goal hasn’t changed - to keep learning, to keep teaching, and to help others navigate this ever-evolving AI world with curiosity and courage.

To the next million ❤️
Post image by Aishwarya Srinivasan
Yesterday evening I stopped by the brand‑new Tools for Humanity World Orb Center in San Francisco and met Alex Blania, the co‑founder & CEO of World (co-founded with Sam Altman).

We talked shop about what they’re building, and I finally got to see the Orb up close. Spoiler: it’s way more sci‑fi in person than in the press photos that have been featured a lot this week.

📷 What the Orb actually does

→ It takes a high‑resolution snapshot of your iris, converts it into an irreversible “iris‑code,” and immediately deletes the raw image.

→ That iris‑code becomes your World ID- a zero‑knowledge proof you can flash online to prove you’re a unique human without revealing anything else.

→ The hardware design and firmware are open‑source, so anyone can audit how it works.
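Purely as an illustration of the "irreversible code" idea above (this is a hypothetical sketch, not World's actual pipeline, which uses iris-specific feature encoding and zero-knowledge proofs), a salted one-way hash shows how a stable identifier can be derived from an input without keeping the input itself:

```python
import hashlib
import secrets

def make_code(biometric_template: bytes, salt: bytes) -> str:
    """One-way mapping: the code can be recomputed from the same
    template, but the template cannot be recovered from the code."""
    return hashlib.sha256(salt + biometric_template).hexdigest()

salt = secrets.token_bytes(16)                 # per-deployment secret salt (illustrative)
template = b"example-iris-feature-vector"      # stand-in for real iris features

code = make_code(template, salt)
# The same input always yields the same code (usable for uniqueness checks),
# while a different input yields a different code.
assert code == make_code(template, salt)
assert code != make_code(b"another-template", salt)
```

The real system adds much more (liveness checks, fuzzy matching of noisy iris scans, zero-knowledge presentation of the ID), but the core privacy property is the same: only the derived code is stored, never the raw image.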

This “proof‑of‑personhood” matters more than ever now that AI agents can pass CAPTCHAs and spin up fake accounts at scale.

World hopes the Orb will become the digital equivalent of a passport stamp for everything from social media sign‑ups to secure voting and even (someday) UBI distribution.

Why SF just got its own Orb Center ❓

→ World quietly launched six U.S. “World Spaces” yesterday- Atlanta, Austin, LA, Miami, Nashville and here in San Francisco, after logging 26 million sign‑ups overseas.

→ Each space is staffed with half a dozen Orbs so you can book a 5‑minute slot, verify, and walk out with a World ID (and yes, a small WLD airdrop if you opt in).

A few takeaways from my chat with Alex ⭐️

→ No biometric images leave the Orb; only the encrypted hash does. The team is doubling down on open hardware docs so skeptics can verify that claim themselves.

→ They’re shipping a more compact “mobile verification device” later this year to reach places a 5 kg Orb can’t. Think pop‑up events, hackathons, disaster zones.

→ The dream isn’t just WLD tokens; it’s a universal, privacy‑preserving identity layer that any app (dating, ride‑hailing, e‑commerce) can plug into, so bots stay out and humans get rewarded.

Walking out, I felt that same mix of curiosity and healthy skepticism that follows most moon‑shot ideas.

But seeing the engineering rigor (and hearing Alex's candid answers) convinced me the team is seriously tackling the hardest parts: privacy, transparency, and global access.


If you’re in SF and want a peek, the center’s open all week. Happy to share more about the experience- HMU!
Post image by Aishwarya Srinivasan
I am headed to Paris 😄

For a much-needed break and to attend the Olympic Games. It is my first time in Paris, and while I am there I am looking to observe and share exciting insights on how AI is powering such an event at a global scale. Stay tuned for posts on the latest in sports analytics, digital twins, and fan engagement technology.

If you are a startup based out of Paris Metropolitan Area, DM me, let's meet up over ☕️🥐🇫🇷
Post image by Aishwarya Srinivasan
If you're not doing some things that are crazy, then you're doing the wrong things.
A lot of times we find ourselves on autopilot, where we are very comfortable and wary of disrupting the perceived perfection in our lives.

I am going to use my way to explain this to you.

Have you heard of exploration vs. exploitation? It is a concept in Reinforcement Learning models and in recommendation systems. (Just stay with me and I promise I will explain this more intuitively.)

From an ML perspective: exploration means that the model experiments with newer, less probable options, while exploitation means that the model chooses what has proven to be the best fit for the use case, optimizing the result.

Well, humans function the same way. Some of us have a higher threshold for exploration, aka “doing unconventional things“, while others have a lower one. This trade-off between exploitation and exploration is a crucial parameter to tune in models. And trust me, we almost NEVER want a model to only exploit without exploring.

Why? Well, without exploration, one never gets to see the infinite pool of knowledge and possibilities out there! As much as it is human nature to follow the trusted formula, remember that someone, someday, DID “explore“ the option that many are “exploiting“ today.

If you want to really push your limits to achieve something that feels impossible in your head, what you are missing in the equation is “exploration“.

Try out things outside your comfort zone, be uncomfortable, because that's the resistance your brain needs to reach something that you haven't before 💙
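For the curious, the trade-off described above can be sketched as an epsilon-greedy bandit: a tiny toy simulation (all names and payoff numbers here are made up for illustration) where an agent that occasionally explores discovers the better option, while a pure exploiter would have settled for the first decent one it found.

```python
import random

def epsilon_greedy(estimates, epsilon=0.1):
    """With probability epsilon, explore a random option;
    otherwise exploit the option with the best estimate so far."""
    if random.random() < epsilon:
        return random.randrange(len(estimates))                      # explore
    return max(range(len(estimates)), key=lambda i: estimates[i])    # exploit

true_payoffs = [1.0, 2.0]   # option 1 is secretly better
estimates = [0.0, 0.0]      # what the agent believes so far
counts = [0, 0]

random.seed(0)
for _ in range(1000):
    arm = epsilon_greedy(estimates, epsilon=0.1)
    reward = true_payoffs[arm] + random.gauss(0, 0.1)  # noisy feedback
    counts[arm] += 1
    # incremental running-mean update of the estimate for the chosen option
    estimates[arm] += (reward - estimates[arm]) / counts[arm]
```

With epsilon = 0, the agent would lock onto whichever option it tried first; with a little exploration, it ends up choosing the better option most of the time.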

PS: This picture was taken in Google Playa Vista office in LA 😄
Post image by Aishwarya Srinivasan
Got this surprise from LinkedIn!!
Thank you so much for your appreciation. I am grateful to be a part of this community.
Thanks, Jessi Hempel and Daniel Roth for this recognition.

#datascience #datascientist #machinelearning #ibmdse #thankyou #linkedin #topvoice #influencer #top10 #ai #columbiauniversity
Post image by Aishwarya Srinivasan
Do you wonder why your resume doesn't get picked? 🤔
Are you curious to learn how to get experience without really having work experience? 🤨
Do you wish to learn about the skillsets that you need to build a successful career as a Data Scientist? 🧐

Are you thinking the same? Don’t feel shy; these are very commonly asked questions. I see hundreds of people asking me these, so I went looking for a resource that could help you. If you're looking to move successfully into Data Science or Data Analytics - this is a must-watch FREE session.

It's 75 minutes of non-stop insights & advice that will get you feeling confident about moving towards an incredible role. It's been watched by thousands & thousands of people already.

It is run by Andrew Jones who is a former Amazon & PlayStation Data Scientist. He's interviewed hundreds of candidates at those companies so it's all very much on point.

The session covers:

👉The skills hiring managers actually need & want
👉 How much you can really earn as a Data Scientist
👉 Inside knowledge on the DS hiring process
👉 How to optimise your Resume to get noticed
👉 The biggest myth about portfolio projects
👉 Ideas for portfolio projects
👉 How to stand out from the competition in interviews
👉 What separates “good“ Data Scientists from “great“ ones
👉 The best program for learning Data Science

My advice would be to have pen & paper ready to write down notes as you go - I saw a review of the webinar saying they'd written 8 pages!

It's completely FREE - don't miss out - register to watch here https://bit.ly/dsi-webinar

Save this post for future reference and share this with your friends and colleagues!

#datascience #datascientists #dataanalytics #datascientist #hiring #career #people #experience #learning #webinar #resume #projects
Post image by Aishwarya Srinivasan
Recently got a copy of “Quantum Machine Learning and Optimisation in Finance“. As someone new to quantum computing, I am very fascinated to read this book, as it covers using quantum algorithms for machine learning use cases. The book covers everything you need to get started: quantum computing fundamentals, boosting algorithms, Boltzmann machines, and quantum neural nets.

All the topics are covered with financial use cases in mind, like market generators, portfolio optimisation, and Monte Carlo applications.

Check out the book here: https://packt.link/ZtYpt

#machinelearning #finance #computing #algorithms #ml #ai #book #licreatoraccelerator
Post image by Aishwarya Srinivasan
🔥 ICYMI, I've got you covered with the most important takeaways from Google Cloud Next 2025.

Here are the most important things you need to know + some resources you can read:

🔧 Ironwood TPUs:
Google’s 7th-gen Ironwood TPUs are not just an upgrade; they redefine compute for thinking AI. At 42.5 exaflops per pod and 7.2 TB/s memory bandwidth, they're optimized for memory-heavy tasks like Mixture-of-Experts (MoE) models.

The Inter-Chip Interconnect (ICI) scales to 9,216 chips, dramatically reducing latency for real-time agent collaboration. With 192GB HBM per chip, Ironwood can store a 100-trillion-parameter model entirely in memory, accelerating inference.

👉 Learn more: https://lnkd.in/g_CEFWQm

🖥️ The AI Hypercomputer: 
Google’s AI Hypercomputer treats AI workloads as an integrated system. By co-designing Ironwood TPUs, NVIDIA Blackwell GPUs (via A4/A4X VMs), and Pathways runtime, it achieves near-linear scaling for billion-parameter models. Its liquid-cooled design boosts efficiency by 29x, directly cutting costs and supporting sustainability goals.

👉 Learn more: https://lnkd.in/gHANw43R

🤖 Multi-Agent Systems:
The Agent Development Kit (ADK) quietly revolutionizes multi-agent development by open-sourcing the framework behind Google's Agentspace. ADK's Agent2Agent protocol enables seamless collaboration across frameworks, solving interoperability issues.

ADK plus MCP? Well, we are going to see a lot of vibe-coded cool AI agent demos soon!

👉 Check out ADK here: https://lnkd.in/g3AaPgas

💡 Gemini 2.5: Models That “Think“
Gemini 2.5 Pro introduces built-in internal reasoning via dynamic computation graphs, boosting accuracy by 40% on complex tasks. Gemini on Distributed Cloud now enables regulated industries (healthcare, finance) to run secure, compliant, and private AI workloads with its impressive 1M-token context window.

👉 Check it out here: https://lnkd.in/ghpnpRfX

🔒 Security
Google addresses AI security vulnerabilities through runtime attestation (Model Armor), preventing prompt injection attacks. Combined with Confidential Computing on Distributed Cloud, it ensures sensitive workloads remain secure and private.

👉 Check it out here: https://lnkd.in/gns7sMQ5

It was a pleasure to represent Fireworks AI at the conference and deliver a session on how Fireworks AI optimizes generative AI inference for the best balance of speed, cost, and quality. Will soon share a blog based on my session, so stay tuned 🔔

PS: Anyone checked out ADK and built something cool? Would love to hear.
Post image by Aishwarya Srinivasan
Such an incredible start to the conference with Pat Gelsinger's keynote. In the session, Pat not only introduced Intel’s vision for the “Siliconomy,” but also groundbreaking advancements in GPUs and NPUs, showcasing some jaw-dropping demos of how Intel is collaborating with companies like ai.io, Stability AI, Rewind AI, and FIT:MATCH.ai. What took the limelight for me is how you can run all of these complex computations locally on your own computer for reduced latency, optimized costs, and data privacy.

Well, you can tell how fascinated I am with everything that is happening at the conference, and therefore I decided to do a takeover of Intel's X and LinkedIn pages!

You heard it right! I will be taking over Intel's official Intel IoT X account (https://lnkd.in/gq3QR884) and the Intel Internet of Things LinkedIn page to give you the latest updates on what's happening at the event. So go check out the Intel handles to keep up with the live feed from the conference.

PS: If you are at the event, this might be a way for you to spot where I am 😉

#ai #ml #ad #IntelInfluencer
Post image by Aishwarya Srinivasan
✏️✍️📝 Register here: https://lnkd.in/emVv6BC


#ai #datascience #machinelearning #artificialintelligence #data #innovation #mentorshipprogram #mentorship #mentoringmatters #learnandgrow #learningbydoing #techforgood #expertsession #digitaltransformation #bigdata #ml
I’m hiring two AI Engineering Interns to collaborate with me on a few of my Illuminate AI passion projects, including building agentic AI pipelines, experimenting with GenAI workflows, testing new tools, and partnering with me on technical content and blogs.

These are paid, full-time, 3-month remote roles under Illuminate AI, my personal initiative to explore and experiment with AI systems in real-world contexts.

You’ll be working directly with me, getting hands-on mentorship, the chance to publish and showcase your work, and building portfolio-worthy projects that demonstrate your skills. I’ll also be happy to provide a LinkedIn recommendation upon successful completion.

If you’re passionate about building, experimenting, and learning fast, this is a great opportunity to grow.

Apply here: https://lnkd.in/dhVMbKEq
Post image by Aishwarya Srinivasan
Had a wonderful day at TechCrunch Disrupt.
I hosted two roundtable discussions with startups where we dove deep into prototyping, fine-tuning, and evaluations for generative AI models and applications.

One big takeaway stood out:
Many teams are still heavily relying on proprietary models, and their biggest reason is user experience.

That insight genuinely surprised me. As someone leading Developer Relations, one of my core goals is to make open-weight (open-source) models just as seamless to use as proprietary ones.

I truly believe that open source offers unmatched customization and optimization. And the fact that open models are now benchmarking at par with closed-source systems tells you how far this ecosystem has come.

My belief is that the next wave of AI adoption won’t be about a single model, it’ll be compound AI:
→ A mix of models working together
→ Tuned with proprietary data
→ Customized beyond prompt-level control

That’s where the future is heading, and that’s the world we’re building at Fireworks, where open models are powerful, practical, and production-ready.

If you’re a startup or an enterprise team building GenAI applications at scale, feel free to grab a 1:1 with me during my office hours. Would love to jam on your stack: https://lnkd.in/dW8Z7uHK

(Please register using your business email ONLY)

P.S. These office hours are meant for professional discussions specific to Fireworks AI and ongoing work around LLM systems. They’re not personal consultation sessions.
Post image by Aishwarya Srinivasan
LinkedIn Creator meets The creator of LinkedIn! (pun intended)

Truly a pinch-me moment for me ❤️

After using the platform for 10+ years and building a strong community, I owe a large part of my personal brand and career progression to LinkedIn. Reid Hoffman built LinkedIn 23 years ago (long before any modern social media app) and shared that even he could not have anticipated how the platform has evolved over that time and its importance today!

I had the esteemed opportunity to chat with Reid about all things AGI, Superagency (which is also the title of his new book, by the way), and his predictions about the future of AI.

Thank you Reid for such an inspiring discussion on AI, and also for the signed books!

Stay tuned for the full podcast- Ctrl + Alt + AI, where we discuss the good, the bad, and the future!

#Motivation #Learning
Post image by Aishwarya Srinivasan
One of the traits I’m most proud of is something I wish more young girls were encouraged to have: the habit of questioning authority.

Growing up, many of us were taught to listen to our fathers, brothers, teachers, and bosses but rarely to ourselves. We were conditioned to follow the loudest or most confident voice in the room instead of our own.

I didn’t grow up like that because my mom never raised me that way.

She was the breadwinner of our family, balancing work and home on a tight budget. But she never once said, “Do as I say.” Every decision was a conversation. She’d share her thoughts, listen to mine, and always end with, “Do what you think is right.”

That sentence shaped everything about who I became. It made me curious and confident, and it taught me to challenge rules even when it made others uncomfortable.

When I decided to move abroad for higher education, almost everyone around me questioned it. Some even said, “Why don’t you just get married instead?” But my mom stood by me. She wanted me to be independent, educated, and unafraid to think for myself.

That kind of freedom to think, to choose, and to question is what builds real confidence. It’s what helps women become decision-makers in their own lives instead of followers in someone else’s story.

If there’s one thing I want every girl reading this to remember, it’s this: you don’t owe obedience to anyone who tries to decide your path. You owe it to yourself to think, to question, and to choose.

We’ve spent years teaching boys to be bold. It’s time we teach girls to be decisive. That’s how we change the narrative.
Post image by Aishwarya Srinivasan
I’ve always been passionate about teaching, but for me, teaching has never been about knowing everything. It’s about learning, unlearning, and growing together.

When I first started sharing on LinkedIn, I was a student in India trying to learn from people across the world. I wanted to understand how they thought, what they built, and how they navigated their careers. Those early conversations gave me perspective and shaped who I am today.

Over time, teaching became a habit. It keeps me curious. It keeps me accountable. It pushes me to read, learn, and stay updated so that I can help others feel more confident in this fast-changing AI world.

Helping someone gain clarity about their next step or seeing them upskill because of something I shared is genuinely one of the most rewarding feelings.

Around three years ago, that same passion led me to topmate.io. I still remember speaking with Dinesh and Ankit when they were just starting out. They were building something from scratch with so much heart and conviction. Every piece of feedback I gave as a user, they took seriously. They moved fast, they listened, they built.

I believed in them so much that I joined as an advisor, and later, as an investor. Watching how far they’ve come has been incredible.

Through Topmate, I’ve hosted workshops, webinars, shared digital resources, and mentored hundreds of professionals. It’s been one of my favorite platforms to connect, teach, and give back to the community.

Today, I’m grateful to be featured among the Top 100 creators on Topmate. But more than that, I’m proud of what this journey represents, that teaching can be your way of learning, that giving back can be your path to growth.

If you’ve ever thought about mentoring or sharing what you know, start. Your experiences might be exactly what someone else needs to move forward. And if you’re looking for a place to begin, topmate.io is a great one.

Happy learning ❤️
Post image by Aishwarya Srinivasan
🚨BREAKING NEWS: OpenAI just launched their new AI browser, ChatGPT Atlas, and I got early access to it.

After testing it for several hours, my first take is that I love it. I can instantly see all my copy-pasting and jumping around tabs go away.

When I was replying to emails, I didn’t have to switch tabs to ChatGPT. I just asked Atlas, “make this reply sound more concise,” and it edited inline, inside Gmail. Then I was reviewing a document, and instead of copying text over, I asked it for a summary right there. It understood the page context instantly.

Next, I tried something more complex. I asked Atlas to plan a one-week trip to Alaska over the holidays. It pulled up flight options, hotels, and even built a full itinerary, all in one flow, no spreadsheets or extra tabs. I also asked it to help me meal prep and order the groceries for me on Instacart. It did a pretty good job with just a little feedback (check out the screenshots below)👇

Atlas integrates contextual grounding (it knows the page you’re on), browser memory (it remembers what matters and can resurface past work), and inline inference (you can write, edit, or ask questions directly on the page). It’s powered by the same ChatGPT stack but natively optimized for browsing.

Then there’s Agent Mode, which is currently in preview for Plus and Pro users. This is where it starts to act. You can say, “book the hotel from my Alaska search” or “compare these two articles,” and it handles it for you. Early days, but this is what agentic workflows will look like once embedded directly in everyday tools.

What I also like: OpenAI has made memory optional. You can clear browsing data, use incognito mode, or control which sites it can see. And yes, your ChatGPT memory syncs across the browser. That means the context you’ve built in ChatGPT carries into your browsing session.

If you’ve built Custom GPTs, they’re now accessible inside Atlas too. I can connect it to my AishGPT! Basically, your content writer, research assistant, code reviewer, or any other Custom GPT you use can live in your browser - not as a separate chat window, but as a co-pilot that interacts with whatever’s on your screen.

📺 You can test it out for yourself: https://lnkd.in/d2heMDuR
Post image by Aishwarya Srinivasan
YudiJ (Pritesh Jagani) has been one of the few creators I’ve followed since my early days as an international student.

He has been guiding students for years, helping them navigate life, education, and careers in the United States.

It was amazing to finally meet him in person and record a podcast together. We had a long and honest conversation about how international students can build their careers in the U.S., understand the job market, and make the most of their time here.

I also shared my experiences working at multiple big tech companies and now at a startup, along with many personal lessons from my journey.

The first episode is now out, where we discuss how students should approach their career journey in the U.S.: https://lnkd.in/dAbYN46H

Go check it out on Yudi’s podcast. If you are an international student or planning to study in the U.S., this is one channel you should definitely follow for real stories and practical advice.
Post image by Aishwarya Srinivasan
Seven years ago, I walked into IBM as an intern on the Data Science Elite Team, led by Dr. Seth Dobrin.

I still remember chatting with him in the coffee pantry on my first week. I had a dozen ideas and zero clarity on what to focus on. Seth told me something that stayed with me ever since - “follow your interests and play to your strengths.”

That’s how I ended up choosing reinforcement learning for machine trading as my first-ever project. That one decision shaped how I think about experimentation and learning to this day.

Over the years, Seth has continued to be my mentor, and now, a friend. He’s one of those rare leaders who’s genuinely approachable and always available for his team. It’s one thing to listen to people’s ideas, and another to actually create opportunities for them to grow. Seth has always done the latter.

I met him again recently at the Masters of Scale Summit - that’s usually where we end up catching up, at conferences. Every time we meet, it feels like picking up right where we left off.

What inspires me most is that he’s still building, still innovating, now with Arya Labs.

Traditional AI approaches face a trade-off: general-purpose models like LLMs are flexible but prone to hallucination, while physics-based simulations are accurate but rigid and expensive. Arya Labs eliminates this trade-off through a Deterministic Reality Architecture - blending the flexibility of AI with the rigor of physics-based systems.

It’s amazing to see how the spirit of innovation and curiosity he fostered in his team back then continues to reflect in what he’s building today.

Grateful for mentors who shape not just your skills, but your mindset.
Post image by Aishwarya Srinivasan
If you’re getting started in the AI engineering space and want to understand how to actually build an AI agent, here’s a structured way to think about it.

Over the last several months, I’ve been building, testing, and teaching agentic AI systems, and I realized most people jump straight into frameworks like LangGraph, CrewAI, or AutoGen without fully understanding the system design mindset behind them.

Here’s a 12-step framework I put together to help you design your first AI agent, end-to-end.

🧩 From defining the problem to scaling it reliably.

→ Start with Problem Formulation & Use Case Selection - clearly define the goal and validate that it needs agentic behavior (reasoning, tool use, autonomy).

→ Map the User Journey & Workflow - understand where the agent fits into human or system loops.

→ Build your Knowledge & Context Strategy - design a RAG or memory pipeline to give your agent structured access to information.

→ Choose your Model & Architecture - open-source, fine-tuned, or multimodal depending on the use case.

→ Define Agent Roles & Topology - whether it’s a single-agent planner or a multi-agent ecosystem.

→ Layer on Tooling & Integration - secure APIs, function calling, and monitoring.

→ Then move into Prototyping, Guardrails, Benchmarking, Deployment, and Scaling - optimizing for accuracy, latency, and cost.

Each layer matters because building an AI agent isn’t about wiring APIs, it’s about engineering autonomy with accountability.

Now that you have this template, pick a use case that excites you - maybe something that improves your own productivity or automates a workflow you repeat daily. Or look online for open project ideas on AI agents, and just start building.
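To make the loop at the heart of this framework concrete, here’s a tiny Python sketch of the predict → act → observe cycle. Everything in it is a toy stand-in: `stub_model` fakes the LLM’s decision and `get_weather` is a made-up tool, so treat the names and shapes as assumptions, not any framework’s real API.

```python
def stub_model(prompt: str) -> dict:
    """Pretend LLM: decides whether to call a tool or give a final answer."""
    if "weather" in prompt and "OBSERVATION" not in prompt:
        return {"action": "tool", "tool": "get_weather", "args": {"city": "SF"}}
    return {"action": "final", "answer": "It is sunny in SF."}

# Hypothetical tool registry; a real agent would wrap secured APIs here.
TOOLS = {
    "get_weather": lambda city: f"sunny in {city}",
}

def run_agent(task: str, max_steps: int = 5) -> str:
    prompt = task
    for _ in range(max_steps):
        decision = stub_model(prompt)                          # predict
        if decision["action"] == "final":
            return decision["answer"]
        result = TOOLS[decision["tool"]](**decision["args"])   # act
        prompt += f"\nOBSERVATION: {result}"                   # observe, re-prompt
    return "step budget exhausted"

print(run_agent("What's the weather in SF?"))
```

Frameworks like LangGraph or CrewAI are essentially this loop plus state management, guardrails, and multi-agent topology, which is why the design steps above matter more than the framework choice.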

〰️〰️〰️
Follow me (Aishwarya Srinivasan) for more AI insight and subscribe to my Substack to find more in-depth blogs and weekly updates in AI: https://lnkd.in/dpBNr6Jg
Post image by Aishwarya Srinivasan
Last week at TEDAI San Francisco, I sat in on one of my favorite sessions, The Philosophy of AI, where my friend Chip Huyen joined Walter DE BROUWER, James Joaquin, Bryan McCann, and Steven Levy for a conversation that made everyone in the room pause.

It was such an important conversation, one that reminded me why we need to talk about AI not just as a technology, but as a reflection of how we define intelligence, creativity, and human agency.

Chip made a point that stuck with me. She said the real story of AI progress isn’t about novel ideas, it’s about scale, about what happens when better data and more compute compound over time. What many once called “brute force” turned out to be a turning point for the field. I loved that Chip shared how she strictly sees AI as a tool!

Walter and James brought a fascinating perspective, describing AI as a new species that we’re learning to coexist with. They challenged the audience to see this evolution not through fear, but through responsibility: to ask what kind of intelligence we want to create, and what kind of relationship we want with it.

The conversation also touched on Agentic AI, systems that can plan and take action. Chip’s view was simple but powerful, learning is a privilege, and AI should be seen as a collaborator that extends human creativity, not replaces it.

Bryan raised questions around awareness and trust, reminding us that as machines become more capable, it’s on us to stay self-aware as builders.

My biggest takeaway: AI will keep getting better at optimization and automation, but meaning and intention will always come from us.

Our role isn’t to make AI more human, it’s to make humanity more intentional in how we build it.

Happy building ❤️
Post image by Aishwarya Srinivasan
I’m opening weekly Friday Office Hours for AI builders!

If you’re experimenting with open-source models, building GenAI pipelines, or exploring fine-tuning for your use cases, I’d love to chat.

This will be a casual, technical session to connect with startups and enterprise teams who are building LLM systems, discuss your challenges, and just brainstorm ideas together.

At Fireworks AI, we’ve been powering some of the most exciting use cases in the ecosystem, from Cursor’s fast-apply feature to AI workflows at Upwork, Notion AI, Cresta, Sentient, and Sourcegraph. It’s been fascinating to see how small improvements at the inference layer can completely change the end-user experience.

We’ve also open-sourced Eval Protocol, a toolkit to help you evaluate large language models and understand their performance beyond surface-level metrics.

My goal with the office hours is simple: connect with more practitioners, hear what you’re building, exchange ideas, and share learnings about optimizing inference and fine-tuning with open-source models.

If you’d like to join one of the Friday sessions, fill out the form below; I’d love to connect and chat: https://lnkd.in/dW8Z7uHK
(Please register using your business email.)

P.S. These office hours are meant for professional discussions specific to Fireworks AI and ongoing work around LLM systems. They’re not personal consultation sessions.
Post image by Aishwarya Srinivasan
A lot of incredibly talented people at Amazon have been impacted by the recent layoffs, and it’s really heartbreaking to see.

If you’ve been affected and are figuring out what’s next, I’d love to help in whatever way I can.
Whether you need general guidance, want to pivot into AI, or just need a sounding board, feel free to DM me, and I’ll try my best to get on a call with as many people as possible.

Also, Fireworks AI is hiring across several roles. If something aligns with your background, please apply and drop me a DM once you’ve done so.
We’re hiring across teams at Fireworks AI, from Software Engineering, Marketing, and Product, to Go-to-Market, Tech, and more.

If you want to join a high-growth startup led by some of the best minds in machine learning, and work alongside one of the strongest peer groups in the industry, this is your chance!

Whether your expertise lies in Reinforcement Learning, Multimodal AI, Cloud Infrastructure, or Site Reliability, we’re building across the stack, and we’d love to have you on the journey.

👉 Apply here: https://lnkd.in/dx8322tW
Post image by Aishwarya Srinivasan
🚨BREAKING: Fireworks AI just announced $254M Series C at a $4B valuation led by Lightspeed & Index Ventures - a massive milestone that validates the next frontier of AI inference.

We’re entering a new phase of AI, where inference, not just training, becomes the core differentiator. The real race isn’t just about building the biggest model anymore, it’s about serving it faster, cheaper, and more efficiently at scale. That’s where Fireworks stands apart.

At its core, Fireworks is building the inference engine for the modern AI stack, enabling developers and enterprises to go from prototype to production seamlessly, without sacrificing speed or cost. Whether it’s a startup deploying its first multimodal app or an enterprise optimizing large-scale LLM workloads, we’re obsessed with one thing: making high-performance inference accessible to everyone.

The broader market is now bifurcating into two parallel paths:

- ASIC-driven systems that will dominate specific high-volume workloads (think model training and hyperscale serving)
- GPU-optimized, software-accelerated stacks that will continue powering dynamic, high-performance inference for developers building the next wave of AI products

Fireworks AI sits right at this intersection, purpose-built for the world of open-source models, multimodal applications, and developer-centric innovation.

It’s an incredibly exciting time to be here, not just because of the funding, but because the mission feels bigger than ever: To build the infrastructure that powers the future of intelligent systems.

We are expanding our team, come join us!
Post image by Aishwarya Srinivasan
AI agents are only as powerful as the context they operate in.

Imagine your AI customer support agent pulling from constantly updated documentation, surfacing the right answers instantly, and improving its knowledge base every time a new policy, feature, or ticket update goes live, without any manual intervention or expensive third-party tools.

Or imagine an MCP-connected agentic AI that can browse, search, and take actions across the web autonomously, planning a travel itinerary, managing e-commerce operations, or monitoring live data streams to make real-time decisions.

That’s where the next leap in AI adoption is happening, not in building bigger models, but in enabling agents that can access, interpret, and act on real-time information. Static workflows are quickly becoming a bottleneck, and adaptive agents are becoming the differentiator.

This shift is reshaping how teams think about automation, customer experience, and operational intelligence. The future isn’t about pre-defined rules, it’s about creating systems that learn and evolve continuously.

If you’re curious to see this in action, Zapier and Apify are hosting a live webinar on October 23, showcasing how to build AI agents with real-time web data.

They’ll walk through practical examples of integrating Apify’s web data with Zapier workflows for customer support, agentic operations, and more. It’s a great session if you’re exploring how to make your agents more dynamic and context-aware.

It is a free webinar and you will get a recording once you register: https://bit.ly/47dwvn2

Happy Learning 🚀

#ZapierPartner
Post image by Aishwarya Srinivasan
When I was a kid, I loved flipping through science books that showed the anatomy of the human body, with every organ neatly labeled and every function explained.

The other day, I started wondering what it would look like if we did the same for AI agents.

If we imagined AI intelligence the way we understand human intelligence, how would we describe its brain, eyes, hands, or heart?

It’s not an exact one-to-one comparison, but I thought it would be fun to map it out. So I created this visual: Anatomy of an AI Agent 😉, a look at how different components of an AI agent mirror how our bodies think, sense, act, and adapt.

Because in the end, every intelligent system, human or artificial, is only as powerful as how well its parts work together.
Post image by Aishwarya Srinivasan
Fireworks AI just made it to LinkedIn’s Top Startups list!!

Behind this milestone is a team of builders, engineers, and dreamers who show up every day to push the boundaries of what’s possible with open-source AI.

We’ve been obsessed with one thing- making AI faster, more scalable, and more accessible for anyone who wants to build.

And the best part is that we're just getting started ❤️

If you want to help shape the future of AI, where performance meets imagination, come build with us at Fireworks AI. We are hiring: https://lnkd.in/dFX6xVup
When evaluating AI agents, accuracy alone is a poor proxy for performance.

An agent’s goal isn’t to produce a correct answer, it’s to complete a task. And how reliably it does that depends on more than just model precision.

Three metrics matter most:

1. Task Success Rate (TSR)
Measures the percentage of end-to-end tasks completed correctly.
This captures real-world reliability – can the agent consistently finish what it starts?

2. First-Try Success (FTS)
Tracks how often the agent succeeds on its first attempt.
This reflects reasoning quality and prompt grounding – whether it understands the task context accurately before acting.

3. Recovery Speed
Captures how quickly, or in how many steps, the agent self-corrects after a mistake.
This is the best signal of adaptability and robustness, which are critical for agents operating in dynamic environments.

In complex, multi-step workflows, these metrics often tell a more complete story than accuracy or BLEU scores.

An agent that can self-correct and adapt is far more valuable than one that only performs well under static test conditions.
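Here’s a toy Python sketch of how these three metrics could be computed from a log of agent runs. The log schema (`attempts_to_success`, `recovery_steps`) is an assumption I made up for illustration, not a standard format.

```python
# Each run records how many attempts it took to succeed (None = never
# succeeded) and how many steps the agent needed to recover after a mistake.
runs = [
    {"attempts_to_success": 1, "recovery_steps": 0},
    {"attempts_to_success": 2, "recovery_steps": 3},
    {"attempts_to_success": None, "recovery_steps": None},  # task failed
    {"attempts_to_success": 1, "recovery_steps": 0},
]

completed = [r for r in runs if r["attempts_to_success"] is not None]

# Task Success Rate: fraction of tasks finished end-to-end
tsr = len(completed) / len(runs)
# First-Try Success: fraction that succeeded on the first attempt
fts = sum(r["attempts_to_success"] == 1 for r in runs) / len(runs)
# Recovery Speed: average steps to self-correct, over runs that had to recover
recoveries = [r["recovery_steps"] for r in completed if r["recovery_steps"] > 0]
avg_recovery = sum(recoveries) / len(recoveries) if recoveries else 0.0

print(f"TSR={tsr:.2f}  FTS={fts:.2f}  avg recovery steps={avg_recovery:.1f}")
```

Tracking all three over time tells you whether a change improved reliability (TSR), reasoning quality (FTS), or robustness (recovery), which accuracy alone can’t distinguish.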

〰️〰️〰️
Follow me (Aishwarya Srinivasan) for more AI insight and subscribe to my Substack to find more in-depth blogs and weekly updates in AI: https://lnkd.in/dpBNr6Jg
Post image by Aishwarya Srinivasan
AI agents are only as powerful as the context they operate in.

When context is static, agents plateau. But when context evolves, when it’s refreshed, connected, and accessible, that’s when things start to get interesting.

Take a customer support agent, for example. The most effective ones don’t just respond; they learn. They automatically pull the latest documentation, adapt to new policies, and refine their knowledge base the moment a new feature or update goes live, no manual retraining, no complex setup.

Or look at what’s happening with MCP-connected agents. These systems can browse, search, and act across the web, handling tasks like building a travel plan, managing e-commerce operations, or reacting to real-time data streams to make decisions on their own.

That’s the direction the industry is heading, not toward bigger models, but toward smarter context.
The real progress in AI adoption is happening in how well agents can access, interpret, and act on live information.

Static workflows are becoming the bottleneck. Adaptive, context-aware systems are becoming the differentiator.

They’re transforming how we think about automation, customer experience, and operational intelligence, moving from rule-based processes to systems that continuously learn and evolve.

If you want to explore how this is being applied in practice, Zapier and Apify are hosting a live session TOMORROW (October 23) showcasing how real-time web data can make agents more dynamic and self-updating.

It’s free to attend, and you’ll get the recording afterward: https://bit.ly/47dwvn2

Happy learning 🚀

#ZapierPartner
Post image by Aishwarya Srinivasan
This Week of AI is here⚡️
Another packed week in the world of AI, from new chips and infrastructure partnerships to major product launches and model updates.

Here’s your quick TL;DR of everything that happened across the AI ecosystem this week 👇

♻️ Share this with your network
🔔 Follow me (Aishwarya Srinivasan) for no BS data and AI updates and insights
I’m sure you’ve heard that the AI industry has moved from obsessing over prompt engineering to now understanding context engineering.

In every LLM application, there are different types of context that shape how the model reasons and acts.

At the simplest level, context can be:
→ User context: what the user says or does
→ System context: what the application provides as background
→ Memory context: what the model or agent recalls from past interactions

When agents interleave LLM calls and tool calls, they don’t just alternate blindly; they use feedback from tool outputs to dynamically refine the next LLM call.

This creates a continuous reasoning loop: predict → act → observe → re-prompt. That’s where context engineering starts to become an art.

Broadly, we can think of context engineering as four key operations:
→ Write context: crafting inputs that guide model behavior
→ Select context: retrieving or filtering relevant information
→ Compress context: distilling large knowledge into limited tokens
→ Isolate context: keeping different reasoning threads independent

At the end of the day, context is what gives intelligence its continuity. Models generate responses, but context is what gives them understanding.
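A minimal Python sketch of three of the four operations (write, select, compress). This is deliberately simplistic: selection here is keyword overlap and compression is truncation, where a real system would use embeddings and an LLM summarizer; isolation would keep a separate memory list per reasoning thread.

```python
def write_context(system: str, memory: list, user: str) -> str:
    # Write: assemble system, memory, and user context into one prompt
    return "\n".join([system, *memory, f"User: {user}"])

def select_context(memory: list, query: str) -> list:
    # Select: keep only memory items sharing words with the query
    q = set(query.lower().split())
    return [m for m in memory if q & set(m.lower().split())]

def compress_context(text: str, budget: int) -> str:
    # Compress: crude truncation to a word budget (stand-in for summarization)
    return " ".join(text.split()[:budget])

memory = ["user prefers Python", "user asked about RAG last week"]
selected = select_context(memory, "show me a Python example")
prompt = write_context("You are a helpful assistant.", selected,
                       "show me a Python example")
prompt = compress_context(prompt, budget=50)
print(prompt)
```

Even at this toy scale you can see the trade-off: select too aggressively and the model loses useful memory; compress too hard and it loses nuance.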

If you want to learn more about Context Engineering, join our 1.5 hour hands-on workshop on 8th November where Arvind and I will be covering both theory + demo: https://lnkd.in/dvFmMiRr
Post image by Aishwarya Srinivasan
If you’re an aspiring AI engineer, RAG (Retrieval-Augmented Generation) is one of the most essential systems to understand.

Almost every serious AI product today, from copilots to research assistants, uses RAG to combine retrieval and generation for accurate, grounded responses.

Here’s a simplified way to look at it:

𝗛𝗼𝘄 𝗥𝗔𝗚 𝗦𝘆𝘀𝘁𝗲𝗺𝘀 𝗪𝗼𝗿𝗸
RAG is made up of three key layers:

- 𝗥𝗲𝘁𝗿𝗶𝗲𝘃𝗲𝗿: Finds the most relevant documents or chunks using embeddings and vector search.
- 𝗚𝗲𝗻𝗲𝗿𝗮𝘁𝗼𝗿: Produces an answer using both the user’s query and the retrieved context.
- 𝗢𝗿𝗰𝗵𝗲𝘀𝘁𝗿𝗮𝘁𝗼𝗿: Manages the flow, deciding what to retrieve, how to construct the prompt, and what goes into the LLM.

The pipeline looks like this:
𝗨𝘀𝗲𝗿 𝗤𝘂𝗲𝗿𝘆 → 𝗥𝗲𝘁𝗿𝗶𝗲𝘃𝗲𝗿 → 𝗖𝗼𝗻𝘁𝗲𝘅𝘁 𝗙𝘂𝘀𝗶𝗼𝗻 → 𝗚𝗲𝗻𝗲𝗿𝗮𝘁𝗼𝗿 → 𝗢𝘂𝘁𝗽𝘂𝘁
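This pipeline can be sketched end-to-end in a few lines of Python. Note the retriever here uses simple word overlap instead of embeddings, and the generator is a stub rather than a real LLM call, so the whole flow stays runnable without any model API.

```python
DOCS = [
    "RAG combines retrieval with generation.",
    "Transformers use attention mechanisms.",
    "Vector search finds semantically similar chunks.",
]

def retrieve(query: str, docs: list, k: int = 2) -> list:
    # Retriever: rank docs by word overlap with the query (toy stand-in
    # for embedding + vector search)
    q = set(query.lower().replace("?", "").split())
    return sorted(docs, key=lambda d: len(q & set(d.lower().split())),
                  reverse=True)[:k]

def fuse(query: str, context: list) -> str:
    # Context fusion: build the prompt from retrieved chunks + query
    return "Context:\n" + "\n".join(context) + f"\n\nQuestion: {query}"

def generate(prompt: str) -> str:
    # Generator: stand-in for the LLM call
    return "Grounded answer based on:\n" + prompt

answer = generate(fuse("What is RAG?", retrieve("What is RAG?", DOCS)))
print(answer)
```

The orchestrator is implicit here (the two nested calls); in a production system it’s the piece that decides when to retrieve, how many chunks to fuse, and how to fit everything in the context window.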

𝗠𝘆 𝟮 𝗰𝗲𝗻𝘁𝘀 🤌
- Start simple with off-the-shelf retrievers like BM25 or bge-large-en and gradually move to hybrid retrieval.
- Chunk documents semantically, not by fixed token size.
- Evaluate retriever and generator separately before optimizing the full system.
- Cache embedding results and retrieval outputs to cut latency.
- Always measure recall@k and hallucination rate, not just accuracy.

𝗪𝗵𝗲𝗿𝗲 𝘁𝗼 𝗦𝘁𝗮𝗿𝘁
If you’re learning RAG for the first time:
- Build a small document QA system using open datasets like arXiv or Wikipedia.
- Experiment with LangChain or LlamaIndex to understand orchestration.
- Run side-by-side tests using different retrievers and observe how output quality changes.
- Finally, deploy a lightweight prototype with Fireworks or another inference engine to see how retrieval affects cost and performance in production.

If you want to have a deep-dive into RAG systems, Arvind and I did 2 workshops recently:
- 𝗥𝗔𝗚 𝗙𝗼𝘂𝗻𝗱𝗮𝘁𝗶𝗼𝗻𝘀 (𝟰-𝗵𝗼𝘂𝗿 𝘄𝗼𝗿𝗸𝘀𝗵𝗼𝗽): 
https://lnkd.in/gjx5WNP6

- 𝗥𝗔𝗚 𝗧𝗼𝘄𝗮𝗿𝗱𝘀 𝗠𝗮𝘀𝘁𝗲𝗿 (𝟰-𝗵𝗼𝘂𝗿 𝘄𝗼𝗿𝗸𝘀𝗵𝗼𝗽):
https://lnkd.in/dJGR3zQU
Post image by Aishwarya Srinivasan
I’ll be at TechCrunch Disrupt this week, hosting two roundtables on October 28th about how teams are building, fine-tuning, and scaling GenAI applications with open models.

If you’re attending Disrupt, shoot me a DM! I’d love to grab coffee, hear what you’re building, and chat about how Fireworks AI can help you take it to the next stage.

PS: I have a few complimentary passes for folks who are genuinely interested in the startup and AI ecosystem, happy to share them with people who want to join the energy at Disrupt this year.
Post image by Aishwarya Srinivasan
If you’re getting started in AI engineering, one of the most useful things you can learn is how information retrieval actually works under the hood.

1️⃣ Embedding models take your text, audio, or images and turn them into numerical vectors that represent meaning. For example, if you ask, “What’s the best laptop for travel?”, the model doesn’t look for those exact words - it looks for documents about lightweight laptops, battery life, and portability.

That’s because embeddings capture semantic similarity, not just keywords. They power the search layer in most GenAI systems by helping you find contextually relevant data quickly.

2️⃣ Re-ranking models then step in to fine-tune the results. They take the top candidates and figure out which ones truly answer your question.
So if embeddings bring you 10 possible answers, the re-ranker will reorder them so that the one that best matches your intent, say “best laptops for digital nomads”, comes first.

If embeddings are about recall, re-ranking is about precision.

Together, they form the foundation of RAG (Retrieval-Augmented Generation) systems, the architecture that powers copilots, AI search tools, and enterprise assistants.

→ Embeddings fetch what’s relevant
→ Re-ranking ensures it’s the right answer
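Here’s a toy Python illustration of the two stages. The embeddings are hand-made 3-d vectors (a real system would use a model like bge-large-en), and the re-ranker is a stub that boosts an exact intent match, where real cross-encoders score (query, doc) pairs jointly.

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# doc -> toy embedding vector
corpus = {
    "lightweight laptops with long battery life": [0.9, 0.1, 0.2],
    "gaming desktops with RGB lighting":          [0.1, 0.9, 0.3],
    "best laptops for digital nomads":            [0.8, 0.2, 0.1],
}
query_vec = [0.85, 0.15, 0.15]  # pretend embedding of "best laptop for travel"

# Stage 1 (recall): embedding search takes the top-k by cosine similarity
topk = sorted(corpus, key=lambda d: cosine(corpus[d], query_vec),
              reverse=True)[:2]

# Stage 2 (precision): re-ranker rescores the shortlist
def rerank_score(doc: str) -> float:
    return 1.0 if "digital nomads" in doc else 0.5

best = max(topk, key=rerank_score)
print(best)
```

The gaming-desktop doc never survives stage 1, and the re-ranker then promotes the doc that best matches the intent, which mirrors the recall-then-precision split described above.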

You’ll see these models in everything from chatbots and search tools to AI agents that need reliable memory or factual grounding.

If you want to build real-world GenAI systems, understanding embeddings and re-ranking is where you start thinking like a system designer, not just a prompt engineer.

Here is a detailed blog that dives deeper into how these models work and how to implement them using open-source tools.
🔗 in comments
Post image by Aishwarya Srinivasan
The future of data centers isn’t on Earth, it’s in orbit!
As AI workloads scale exponentially, our planet is running out of room and resources to keep up. Power, cooling, and sustainability have become existential bottlenecks for compute.

That’s what makes Starcloud’s partnership with NVIDIA such a breakthrough. Their upcoming Starcloud-1 launch will be the first time an NVIDIA H100 GPU, a true data-center-class processor, operates in space.

By using the vacuum of space as a natural heat sink and tapping into unlimited solar energy, Starcloud aims to run large-scale AI compute with 10x lower energy costs and zero water usage for cooling.

The implications go far beyond energy efficiency. This is a glimpse into how sustainable, high-performance computing could evolve over the next decade.
Starcloud’s work reminds us that the next frontier for AI isn’t just smarter models, it’s smarter infrastructure. It is rethinking where and how we run intelligence itself.

I am proud to back the Starcloud team early on, and seeing their vision come to life in partnership with NVIDIA is nothing short of inspiring.

Keep at it Philip & team!!

Read more on NVIDIA’s blog: https://lnkd.in/dpWEcGgj
Post image by Aishwarya Srinivasan
I’ll be at TechCrunch Disrupt tomorrow, hosting two roundtables about how teams are building, fine-tuning, and scaling GenAI applications with open models.

If you’re attending Disrupt, shoot me a DM! I’d love to grab coffee, hear what you’re building, and chat about how Fireworks AI can help you take it to the next stage.

Related Influencers