Pamela Fox

These are the best posts from Pamela Fox.

8 viral posts with 1,092 likes, 50 comments, and 117 shares.
8 image posts, 0 carousel posts, 0 video posts, 0 text posts.

Best Posts by Pamela Fox on LinkedIn

At PyBay 2025 this weekend, Guido van Rossum presented a new Python package called "typeagent" that implements "structured RAG", a different approach to indexing and retrieval. This is how I understand it:

šŸ“‘ Ingestion: It uses an LLM to turn the content into structured data such as entities, topics, and actions. It defines the schema in TypeScript, since LLMs are adept at generating output that adheres to TypeScript schemas. Then the structured data is stored in a standard database (no graph DB needed).
šŸ”Ž Retrieval: It again uses an LLM to turn the user query into similarly structured data, and then retrieves matching structured data from the database. If the token budget allows, it also adds the original content the data was extracted from.
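To make the two steps concrete, here's a minimal Python sketch of the idea. The record shape and function names are my own illustration (typeagent's actual schema is defined in TypeScript, and it uses a real database, not an in-memory list):

```python
from dataclasses import dataclass

# Hypothetical record shape - typeagent defines its real schema in TypeScript.
@dataclass
class Extraction:
    entities: list[str]
    topics: list[str]
    actions: list[str]
    source_text: str  # original content, added back if the token budget allows

# In-memory stand-in for the standard (non-graph) database.
INDEX: list[Extraction] = []

def ingest(extraction: Extraction) -> None:
    """Store the LLM-extracted structured data alongside its source text."""
    INDEX.append(extraction)

def retrieve(query_entities: list[str], query_topics: list[str]) -> list[Extraction]:
    """Match the structured query (also LLM-extracted) against stored records."""
    wanted_entities, wanted_topics = set(query_entities), set(query_topics)
    return [
        ex for ex in INDEX
        if wanted_entities & set(ex.entities) or wanted_topics & set(ex.topics)
    ]
```

In the real package, both the ingestion-time extraction and the query-time structuring are LLM calls; only the matching against the database is conventional.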

It is *not* the same as GraphRAG, as that approach builds up hierarchical knowledge graphs based on multiple pieces of related data. GraphRAG enables you to ask more zoomed-out questions, but doesn't scale well for scenarios where new data is being added all the time.

Guido demonstrated the approach by asking questions on his personal Gmail inbox, like about people and events mentioned in emails. My impression is that this approach could work particularly well for conversation retrieval, perhaps in combination with a hybrid search.

Try out the new package and see what you think -
it's on my TODO list to explore soon!
https://lnkd.in/gdgt7xCr
Post image by Pamela Fox
Our popular RAG solution now supports ACLs for ingested documents using the built-in ACL controls of Azure AI Search. How it works:
* We enable permission filtering for the AI Search index, and define "oids" and "groups" fields mapped to USER_IDS and GROUP_IDS permissions.
* During ingestion and user upload, we set oids and groups fields on chunks accordingly, or set "all" to allow global document access.
* During search, we send the access token of the logged-in user to the search service, and it filters the chunks based on the token claims.
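As a rough sketch of the filtering logic: the real filtering happens server-side inside Azure AI Search, but the rule it applies looks roughly like this (field names "oids" and "groups" follow the post; the function itself is hypothetical):

```python
# Illustrative sketch only - Azure AI Search applies this filter server-side
# based on the claims in the user's access token.
def visible_chunks(chunks: list[dict], user_oid: str, user_groups: list[str]) -> list[dict]:
    """Return only the chunks this user is allowed to see."""
    allowed = []
    for chunk in chunks:
        oids = chunk.get("oids", [])
        groups = chunk.get("groups", [])
        # "all" grants global access; otherwise match the user's oid or groups.
        if "all" in oids or user_oid in oids or set(user_groups) & set(groups):
            allowed.append(chunk)
    return allowed
```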

Learn more in the release notes here:
https://lnkd.in/gMsSM3fW

Props to Matthew Gotteiner for the implementation šŸ‘šŸ¼
Post image by Pamela Fox
Today was our penultimate session: Agents! šŸ¤–

Catch up with the recording here:
https://lnkd.in/gmxPnXqz

We covered:
* What's an agent? šŸ› ļø Tool calling in a loop šŸ”
* Building a standard agent with agent-framework
* Supervisor agent architecture with subagents
* Similar agent setup for langchain and pydantic-ai
* Agentic workflows with agent-framework and visualization with devui
* Agentic workflows with Langgraph and visualization in Langsmith
* Human-in-the-loop with Langchain's Agent Inbox
* Agent planning and memory
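The "tool calling in a loop" idea from the first bullet fits in a few lines of Python. Here's a sketch with a scripted stand-in for the model (real frameworks wrap an actual LLM, which decides when to call tools and when to answer):

```python
# A toy tool the agent can call.
def get_weather(city: str) -> str:
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def fake_model(messages: list[dict]) -> dict:
    """Scripted stand-in for an LLM: requests a tool once, then answers."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "get_weather", "args": {"city": "San Francisco"}}
    return {"answer": "Plan a picnic - it's sunny!"}

def run_agent(user_prompt: str) -> str:
    """The whole trick: call the model, run requested tools, repeat until done."""
    messages = [{"role": "user", "content": user_prompt}]
    while True:
        reply = fake_model(messages)
        if "answer" in reply:  # model is done: return the final answer
            return reply["answer"]
        result = TOOLS[reply["tool"]](**reply["args"])  # run the requested tool
        messages.append({"role": "tool", "content": result})
```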

Slides here:
https://lnkd.in/g5grKt7v

Let me know if you'd want a deeper dive into any agent topics in future live stream series!
Post image by Pamela Fox
"Just because AI can write your tests ...should it?"
ā¬†ļø That's the the talk I gave today at PyBay 2025!

Here's the 5 min version:

LLMs are generalists, so they *can* write passing tests, but they don't necessarily follow best practices or use the optimal testing tools.

The problems and solutions:

😢 Problem: LLM-written tests are often high in redundancy.
ā˜ŗļø Solution: In pytest, use parametrize for parameter variation and fixtures for commonly used test data

😢 Problem: LLMs generate sample data that is overly simplistic and not reflective of the actual diversity of the human world, like for fake names ("John Doe") and addresses ("1 Main St").
ā˜ŗļø Solution: Use Faker() to generate real-world fake data

😢 Problem: LLM-written assertions often only do a partial check of returned data from an API endpoint, because the LLM doesn't know what it should look like.
ā˜ŗļø Solution: Use snapshot testing with pytest-snapshot to capture the output from API endpoints, so that you'll know when *any* field changes and be effectively asserting on every field.

😢 Problem: LLM-written tests do not cover 100% of the code lines.
ā˜ŗļø Solution: Use coverage to measure line coverage, and write tests for missing lines.

😢 Problem: Even 100% coverage does not mean that all edge cases have been explored!
ā˜ŗļø Solution: Use hypothesis for property-based testing of programs, and schemathesis for property-based testing of API endpoints.

In conclusion:
Yes, you can use an AI to write tests, but if you do, you should set the LLM up for success! Once you decide what best practices and tools you want to use in your tests, write up a precise prompt with those guidelines, and the LLM can write better tests with higher coverage and less redundancy.

Check the slides here, where I managed to interleave my love of native California bees:
https://lnkd.in/gbzaYz3z
I also gave out seeds at the very end to attendees - three fanny packs full!

I'll post a link to recording on YouTube when available.
It was a great conference as always, so glad I came back this year!
Post image by Pamela Fox
Python AI agent frameworks are converging on a similar interface for "Agent": a prompt with a list of tools.

Here I compare the same agent across 3 frameworks - Microsoft Agent Framework, Langchain v1, and Pydantic AI.

I think the similarity is great for developers - it's easier to learn one framework and then transfer skills to another when needed. There are still differences between the frameworks, but I'm happy they're agreeing on at least the basic terminology, the nouns and verbs.

1ļøāƒ£ Microsoft Agent Framework:

agent = ChatAgent(
    chat_client=client,
    instructions="Help users plan their weekends and choose the best activities for the given weather.",
    tools=[get_weather, get_activities, get_current_date],
)

2ļøāƒ£ Langchain v1:

agent = create_agent(
    model=model,
    system_prompt="Help users plan their weekends and choose the best activities for the given weather.",
    tools=[get_weather, get_activities, get_current_date],
)

3ļøāƒ£ Pydantic AI:

agent = Agent(
    model,
    system_prompt="Help users plan their weekends and choose the best activities for the given weather.",
    tools=[get_weather, get_activities, get_current_date],
)

Full code in
https://lnkd.in/gBtHpVxt

Of course, where it gets really interesting is where the frameworks differ - but those tend to come out once you start getting into more complex architectures, adding middleware, mixing in structured outputs, that sort of thing.
Post image by Pamela Fox
Our Python+AI series is over, but you can still watch the videos, download the slides, and run the code samples!

Get the links @
https://lnkd.in/gs73SPVV
Post image by Pamela Fox
By default, LLMs output unstructured text. But thanks to structured outputs, you can guide LLMs to output structured data instead, according to your precise schema. That makes LLMs way more helpful for automation workflows, entity extraction, and classification.

That's what we talked about in today's live stream:
https://lnkd.in/evE3W_Xr

We showed how to define schemas using Pydantic models, including field descriptions, enums, and nested models. Then we showed entity extraction scenarios, processing GitHub issues, webpages, Word documents, PDFs, images, and more!
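Here's a sketch of what such a schema can look like - the models and fields below are hypothetical examples for a GitHub-issue scenario, not the exact ones from the stream:

```python
from enum import Enum
from pydantic import BaseModel, Field

# Hypothetical schema: field descriptions and enum values guide the LLM
# toward the exact structured output you want.
class Priority(str, Enum):
    low = "low"
    medium = "medium"
    high = "high"

class Mention(BaseModel):
    name: str = Field(description="Person or project mentioned in the issue")
    role: str = Field(description="How they relate to the issue")

class IssueSummary(BaseModel):
    title: str = Field(description="One-line summary of the issue")
    priority: Priority
    mentions: list[Mention] = Field(default_factory=list)
```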

Oh and I did a bonus demo of my LinkedIn agent, which uses Pydantic-AI with structured outputs to read my LinkedIn networking requests and decide whether to accept or ignore. Structured outputs are sooo dang useful.
Post image by Pamela Fox
We've been working with Stephen McCullough from NVIDIA to show you all how to run open-weight models (like gpt-oss) on Azure Container Apps Serverless GPUs (A100) using NIM docker images.

Join our live streams next week to see how it's done!

EMEA friendly time:
https://lnkd.in/gsWzAhJr

US friendly time:
https://lnkd.in/gT-M47td

Australia-friendly time (with Anthony Shaw, since I'll be 😓):
https://lnkd.in/gyADkpRf
Post image by Pamela Fox