Matteo Collina

These are the best posts from Matteo Collina.

7 viral posts with 1,112 likes, 52 comments, and 71 shares.
2 image posts, 0 carousel posts, 1 video post, 3 text posts.

Best Posts by Matteo Collina on LinkedIn

The TanStack supply-chain compromise published 84 malicious versions across 42 packages yesterday. The attack chain is remarkable: pull_request_target abuse → GitHub Actions cache poisoning → runtime OIDC token extraction → direct npm publish.

No stolen npm tokens. The release workflow itself was never breached. The attacker just... became it.

This lands precisely on a point I made in last week's newsletter: "trusted publishing" and provenance attestation verify *who* published something, not *whether that person was in control of their own actions*.

The Axios compromise proved this with session hijacking. TanStack proves it again through CI subversion. Different vectors, same structural flaw.

There is one cheap, high-leverage defense that actually works: minimum release age.

Most malicious releases are detected and yanked within hours. A 24-hour install cooldown filters out the smash-and-grab attacks automatically.

All three major package managers now support this:
• npm (v11.10+): min-release-age=1
• pnpm (v10.16+): minimumReleaseAge: 1440
• Yarn (Berry 4.10+): npmMinimalAgeGate: 1440
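As a sketch of where the settings above typically live (the option names come from the list above; the file names and exact syntax are assumptions based on each tool's usual config file, so check the current docs before relying on them):

```
# .npmrc (npm 11.10+), value in days
min-release-age=1

# pnpm-workspace.yaml (pnpm 10.16+), value in minutes
minimumReleaseAge: 1440

# .yarnrc.yml (Yarn Berry 4.10+), value in minutes
npmMinimalAgeGate: 1440
```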

Links in the comments.

10,000 requests per second. 120 seconds. React Router.

PM2 failed on nearly 70% of requests.

Watt failed on 38%.

Same hardware. Same app. Different architecture.

Here's what happened:
We ran an extreme stress test on AWS EKS: 3x m5.2xlarge nodes, 6 total CPUs, k6 hammering at a constant 10,000 req/s with up to 20,000 virtual users.

This is 10x the load from our Next.js benchmarks. We wanted to break things (and to see whether React Router could take it).

The results:

Watt vs PM2:
→ 45% higher throughput (6,032 vs 4,154 req/s)
→ 2.9x more successful responses (467K vs 160K)
→ 21% lower average latency
→ PM2 dropped 680,000+ iterations

The surprise: Single Node beat PM2. 5,838 req/s vs 4,154 req/s.
The cluster module's IPC overhead isn't "a little extra work". It's a 30% tax that collapses under pressure.

Why?
PM2 routes ALL connections through a master process via IPC. Serialization, coordination, bottleneck.
Watt uses SO_REUSEPORT. The Linux kernel distributes connections directly to workers. No middleman. No overhead. No single point of failure.

And when a Watt worker blocks or crashes? Only that worker restarts. With Single Node, your entire pod is down until Kubernetes notices.

Most apps don't need 10K req/s. But the architecture that handles extreme load gracefully is the same architecture that handles normal load efficiently.

PM2's overhead exists at ANY scale.

Full methodology, charts, and reproduction code in the post.

I was targeted by the same social engineering campaign that compromised Axios on npm.

Same playbook: an invitation to a cloned Slack workspace, a well-crafted pitch from what appeared to be a legitimate company, and eventually, the tell: a request to download and install "review software."

I didn't. I had too much going on and something felt off. But I'll be honest: if my schedule had been lighter that day, I might have clicked it.

The Axios compromise wasn't a leaked password or a stolen API key. The attacker hijacked the maintainer's live, authenticated browser session. As far as npm, GitHub, or any publishing pipeline could tell, the attacker *was* the maintainer.

This exposes a dangerous gap in how we think about supply chain security.

npm "trusted publishing" and Sigstore provenance are valuable building blocks. They cryptographically attest who published a package. But in the Axios case, provenance would have attested to exactly the right person.

The problem: provenance answers "did this come from who it says it came from?" but it does not answer "is the person who published this actually in control of their own actions?"

That distinction is everything.

The fallout was real. The compromised version made its way into OpenAI's macOS signing pipeline, forcing certificate rotation for ChatGPT Desktop, Codex, and Atlas.

So what would actually help?

→ Delay windows before new versions appear in registry responses (OpenAI would have been saved by this)
→ Machine compromise detection on publishing platforms
→ Dual-control publishing for high-impact packages
→ Finally: stop pretending "trust the publisher" is sufficient

I wrote more about this.

https://lnkd.in/dSsS9WBt

We just open-sourced a new job queue library for Node.js. Here's why we built it.

Every backend team eventually needs background job processing. And every team eventually discovers how many ways it can break:

→ Jobs vanish during deploys
→ Duplicate work piles up from client retries
→ Stalled jobs sit in limbo after a worker crash
→ No clean way to wait for a job's result

We kept solving these same problems across projects at Platformatic, so we packaged the solution: @platformatic/job-queue.

It's not another Redis wrapper. It's a complete job processing system with the reliability patterns built in.

What's included out of the box:

🔹 Deduplication by job ID - repeated enqueue calls don't create duplicate work
🔹 enqueueAndWait() - request/response semantics when you need a result back (with timeout handling and typed errors)
🔹 Automatic retries with configurable attempts and backoff
🔹 Stalled job recovery - a Reaper detects crashed workers and requeues their jobs
🔹 Graceful shutdown - in-flight jobs complete before the process stops
🔹 TypeScript-native with typed payloads and results
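To make two of these patterns concrete, here is a toy in-memory illustration of deduplication by job ID combined with enqueueAndWait() request/response semantics. This is NOT the @platformatic/job-queue API (its actual interface, storage, and error handling are richer); it only sketches the idea.

```javascript
// Toy sketch: dedup by job ID + awaitable results.
// Not the real @platformatic/job-queue API.
class TinyQueue {
  constructor(handler) {
    this.handler = handler;
    this.inflight = new Map(); // jobId -> result promise
  }

  // Repeated calls with the same jobId return the same promise,
  // so client retries never create duplicate work.
  enqueueAndWait(jobId, payload) {
    if (!this.inflight.has(jobId)) {
      const p = Promise.resolve()
        .then(() => this.handler(payload))
        .finally(() => this.inflight.delete(jobId));
      this.inflight.set(jobId, p);
    }
    return this.inflight.get(jobId);
  }
}
```

A caller can enqueue the same job twice (say, a client retry) and both calls resolve with the single execution's result.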

Three storage backends for different stages:

MemoryStorage for dev → FileStorage for simple single-node deploys → RedisStorage for production with horizontal scaling and leader election.

Same API across all three. Start simple, scale without rewriting code.

The architecture is straightforward:

Producers enqueue jobs. Consumers dequeue and execute them with full state tracking - every transition (queued → processing → completed/failed/retry) is persisted. The Reaper runs as a separate concern, detecting stalled work and recovering it automatically. With Redis, it supports leader election so you can run multiple Reaper instances safely.

Where this fits:

We use both patterns in the same system depending on the endpoint:

🔸 Fire-and-forget for emails, notifications, webhooks - things where retries handle failures gracefully
🔸 Request/response (enqueueAndWait) for invoice generation, payment processing, expensive validations - where the caller needs a bounded response path

Getting started:

npm install @platformatic/job-queue

We've been testing this at Platformatic, and it's been reliable. But the project is young, and we want real-world feedback - especially the weird edge cases.

If you're evaluating queue systems for Node.js, give it a try and let us know what breaks.
Post image by Matteo Collina

Optional chaining is one of JavaScript's most useful features. But what's the performance impact? TL;DR it's massive.

I recently collaborated with Simone Sanfratello on detailed benchmarks comparing noop functions to optional chaining, and the results were revealing: noop functions are 5.5x to 8.8x faster. Over 5 million iterations:

→ Noop functions: 939M ops/sec (the baseline)
→ Optional chaining on an empty object: 134M ops/sec (7x slower)
→ Optional chaining on an existing method: 149M ops/sec (6.3x slower)
→ Deep optional chaining: 106M ops/sec (8.8x slower)
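A minimal sketch of this kind of micro-benchmark (not the actual benchmark suite from the collaboration; absolute numbers will vary by machine and V8 version):

```javascript
function noop() {}
const empty = {};
const withMethod = { fn() {} };

// Time a hot loop and report millions of operations per second.
function bench(label, fn, iterations = 5_000_000) {
  const start = process.hrtime.bigint();
  for (let i = 0; i < iterations; i++) fn();
  const ns = Number(process.hrtime.bigint() - start);
  const opsPerSec = iterations / (ns / 1e9);
  console.log(`${label}: ${(opsPerSec / 1e6).toFixed(0)}M ops/sec`);
  return opsPerSec;
}

bench('noop call', () => noop());
bench('optional chaining (missing)', () => empty.fn?.());
bench('optional chaining (present)', () => withMethod.fn?.());
```

A loop this tight mostly measures V8's ability to inline the call; that is exactly the effect described below.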

The explanation comes down to what V8 must do. Noop functions are inlined by V8, making them essentially zero-overhead: the function call vanishes in optimized code. Optional chaining requires a property lookup and null/undefined checks at runtime, and V8 can't optimize these away because the checks must occur each time.

This is why Fastify uses the abstract-logging module. Instead of checking logger?.info?.() throughout the code, Fastify provides a logger object with all logging methods as noop functions. The key is to provide noops upfront rather than checking for existence later. When logging is disabled, V8 inlines these noop functions at almost zero cost; with optional chaining, runtime checks are required every time.
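A small sketch of the noop-up-front pattern (the same idea abstract-logging implements; the buildLogger helper here is hypothetical, not Fastify's actual code):

```javascript
function noop() {}

// Fill every level with a noop when no logger is supplied, so call
// sites invoke methods directly. V8 can inline the noops, whereas
// logger?.info?.() forces runtime checks on every call.
function buildLogger(userLogger) {
  const levels = ['info', 'warn', 'error', 'debug'];
  const logger = {};
  for (const level of levels) {
    logger[level] = (userLogger && userLogger[level]) || noop;
  }
  return logger;
}

// Safe to call directly, logger or not - no ?. needed.
const log = buildLogger(null);
log.info('this is a zero-cost no-op');
```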

One reason for excessive optional chaining is that TypeScript's type system encourages defensive coding. Properties are marked as potentially undefined even when runtime guarantees they exist, causing developers to add ?. everywhere to satisfy the type checker. The solution is better type modeling. Fix your interfaces to match reality, or use noop fallbacks like "const onRequest = config.hooks.onRequest || noop" and call it directly. Don't let TypeScript's cautious type system trick you into unnecessary defensive code.

Context matters, though. Even "slow" optional chaining executes at 106+ million operations per second, which is negligible for most applications. Use optional chaining for external data or APIs where the structure isn't controlled, in normal business logic prioritizing readability and safety, and to reduce defensive clutter. Use noop functions in performance-critical paths, when code runs thousands of times per request, in high-frequency operations where every microsecond counts, and when you control the code and can guarantee function existence. Even a few thousand calls per request make the performance difference significant.

My advice: don't optimize prematurely. Write your code with optional chaining where it enhances safety and clarity. For most applications, the safety benefits outweigh the performance costs. If profiling reveals a bottleneck, consider switching to noop functions. Profile first, optimize second. Remember: readable, maintainable code often surpasses micro-optimizations. But when those microseconds matter, now you understand the cost.
Post image by Matteo Collina

We heard the same story from engineering teams: "Half our team writes Python for AI and data science. The other half builds web applications in Node.js. Managing separate repositories, deployment pipelines, and monitoring systems is slowing us down."

Every organization adding AI to its web applications faces this fragmentation. Python developers build sophisticated models with LangChain and Transformers. JavaScript developers create experiences with React and Next.js. But these teams work in isolation with incompatible tooling and separate deployment processes.

Today at Platformatic, we are announcing @platformatic/python, which fundamentally changes how Python and Node.js work together. We've embedded Python directly inside the Node.js process. Your FastAPI or Django application runs alongside your Next.js app or Express server as a single, unified application.

This transforms how teams build and ship software. Run both your Next.js frontend and FastAPI backend with one command. Debug across language boundaries in the same process. Deploy one container instead of orchestrating multiple services. Your entire stack becomes one cohesive application.

The development workflow is radically simplified. Clone one repository containing both Python ML models and React components. Use the same hot-reload experience whether editing Python or JavaScript. Write tests that verify integration between your data processing and API routes. No more coordinating releases between services or debugging version mismatches.

Python developers expose models through familiar ASGI interfaces. JavaScript developers call Python endpoints without service discovery complexity. Data scientists contribute directly to production without coordinating with platform teams. The artificial boundaries between language ecosystems dissolve.

Getting started takes minutes. Install the package, drop your Python application in a folder, and point to it in the configuration. Your existing FastAPI, Django, or Starlette code runs unchanged with full ASGI support.

Performance? 5,200 requests per second, outperforming several Python servers. But the real value is simplicity. No network failures between services. No distributed tracing overhead. Just one application.

We built @platformatic/python because the future of application development is unified, not fragmented. When data scientists and frontend developers work in the same codebase with the same tools, when deployment means pushing one thing instead of orchestrating many, teams can move at the speed of innovation.

The age of managing separate Python and Node.js services is over. With @platformatic/python, you have one codebase, one development environment, one deployment, one team.

What will you build when the barriers between languages disappear?

Recently, I watched a junior developer spend three hours debugging a production issue. The culprit? A typo in an API field name that TypeScript should have caught.

It reminded me of my painful experiences copying curl commands from the browser's DevTools and trying to reverse-engineer undocumented APIs. I would write the HTTP client manually, ship it, and then discover at 2 AM that the API returns different data than expected.

I found a better way.

Now, I transform curl commands directly into type-safe, production-ready API clients in under a minute. There is no manual HTTP code and no runtime surprises.

The workflow is simple: collect curl commands from DevTools or documentation, run curl-to-json-schema to generate an OpenAPI schema, and then use massimo-cli to create a fully typed client with built-in error handling and response validation.

I recently integrated with a legacy system with only a few curl examples in an old wiki. What would have taken days of trial and error took one afternoon. The generated client caught several issues during development that would have been production bugs.

The real magic happens when APIs change. Update your curl examples, regenerate the client, and TypeScript immediately flags any breaking changes. You're debugging at compile time, not in production.

This isn't just about saving time. It's about shipping confidently, knowing your API integrations are rock-solid.

I've written a guide on implementing this workflow if you're tired of writing boilerplate HTTP code or debugging type mismatches at runtime.

https://lnkd.in/d53BNdgG
