Milan Jovanović

These are the best posts from Milan Jovanović.

11 viral posts with 14,580 likes, 858 comments, and 1,188 shares.
9 image posts, 1 carousel post, 0 video posts, and 1 text post.

Best Posts by Milan Jovanović on LinkedIn

C# has been getting many new features in recent versions.

And one that stands out is pattern matching.

What is it?

Pattern matching lets you check if an object has specific characteristics:
- Is null or is not null
- Is of a particular type
- Property has a specific value

I enjoy using it for type checks, and I particularly like switch expressions.

However, the more complex the pattern is, the more I dislike it.

Sure, it can be concise, and you usually write less code.

But you also need to think about readability and maintainability.
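As a sketch of that tradeoff, here are both styles side by side. The shape types and names below are my own for illustration, not the ones from the post's snippet:

```csharp
using System;

// A small hierarchy to contrast a classic if/else chain
// with a switch expression using pattern matching.
public abstract record Shape;
public record Circle(double Radius) : Shape;
public record Rectangle(double Width, double Height) : Shape;

public static class Area
{
    // Classic version: explicit null check, type tests, and casts.
    public static double Classic(Shape shape)
    {
        if (shape is null) throw new ArgumentNullException(nameof(shape));
        if (shape is Circle c) return Math.PI * c.Radius * c.Radius;
        if (shape is Rectangle r) return r.Width * r.Height;
        throw new ArgumentOutOfRangeException(nameof(shape));
    }

    // Switch expression: null pattern, property pattern, positional pattern.
    public static double WithPatterns(Shape shape) => shape switch
    {
        null => throw new ArgumentNullException(nameof(shape)),
        Circle { Radius: var radius } => Math.PI * radius * radius,
        Rectangle(var width, var height) => width * height,
        _ => throw new ArgumentOutOfRangeException(nameof(shape))
    };
}
```

Both versions behave identically; the switch expression is shorter, but once the patterns start nesting, the if/else chain can be the easier one to read.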

P.S. If you liked this, join The .NET Weekly - my newsletter with 23k+ readers that teaches you how to improve at .NET & software architecture:
https://lnkd.in/dMDPXuUh

Take a look at the example from the code snippet.

Which version do you find more readable?
Post image by Milan Jovanović
C# Tip 💡

How do you create your DTOs?

We can use 𝗿𝗲𝗰𝗼𝗿𝗱𝘀 to represent DTOs, starting from C# 9.

The example from the picture uses a primary constructor so that the record definition can become a one-liner.

It's a nice and concise way to define types.

A few DTO naming conventions I've seen:
- [Something]Dto
- [Something]Model
- [Something]Response

I prefer the last one since I usually think about request/response objects when talking about DTOs.
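A minimal sketch of such a one-liner record DTO. The type and property names here are illustrative, not the ones from the post's image:

```csharp
using System;

// A response DTO as a one-line positional record (C# 9+).
// The primary constructor generates the properties, constructor,
// value-based equality, and a Deconstruct method.
public record ProductResponse(Guid Id, string Name, decimal Price);

// Records also support non-destructive mutation, handy for tests:
// var discounted = response with { Price = 9.99m };
```

Value-based equality is a nice bonus for DTOs: two instances with the same data compare equal, which simplifies assertions in tests.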

P.S. If you want to learn more about .NET and software architecture, consider subscribing to my newsletter.

Join 23,000+ engineers: https://lnkd.in/dMDPXuUh

What's your thought on using records as DTOs?

#dotnet #csharp #softwareengineering
Post image by Milan Jovanović
Microsoft just revealed its secret project: .NET Aspire.

Here's everything you need to know. 👇

What is .NET Aspire?

.NET Aspire is an opinionated, cloud-ready stack for building observable, production-ready, distributed applications.

.NET Aspire helps with:

- Orchestration
- Components
- Tooling

Orchestration refers to coordinating and managing various elements within a cloud-native application.

.NET Aspire streamlines the configuration and interconnection of different parts of your cloud-native app.

It provides useful abstractions for managing:

- Service discovery
- Environment variables
- Container configurations

Without you having to handle low-level implementation details.
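As a rough sketch of what that orchestration looks like in an Aspire app host project (the resource names are mine, and these APIs are from the early previews, so details may have changed since):

```csharp
// Aspire app host: declare the resources your distributed app is made of.
var builder = DistributedApplication.CreateBuilder(args);

// A Redis container the app depends on; Aspire handles the
// container configuration and connection string for you.
var cache = builder.AddRedis("cache");

// A project resource. WithReference wires up service discovery and
// environment variables so the API can find the cache.
builder.AddProject<Projects.Api>("api")
       .WithReference(cache);

builder.Build().Run();
```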

My favorite feature is the built-in Aspire dashboard.

You can use it to monitor:

- Containers
- Structured logs
- Application traces
- Application metrics

But there are even more fantastic things inside.

What about deploying an Aspire application?

.NET Aspire applications are built with cloud-agnostic principles in mind.

This allows you to deploy Aspire apps across various platforms supporting .NET and containers.

With .NET Aspire (Preview 1), you can deploy to Azure Container Apps.

This is doable through a few simple commands using the Azure CLI.

I'm excited about the Aspire project and will create more content about it on my YouTube channel and newsletter.

If you enjoyed this, you will love The .NET Weekly newsletter.
Over 32,000 engineers already read it.

Subscribe here: https://lnkd.in/dYU8aN7s

What's your first impression of .NET Aspire?
Post image by Milan Jovanović
You can now test your APIs without leaving VS Code.

But how?

Well, Postman's VS Code extension just reached general availability (GA).

The Postman VS Code extension enables you to develop and test your APIs in Postman directly from Visual Studio Code.

All your favorite Postman features are now available in VS Code:

- Sending API requests (HTTP, gRPC, WebSocket)
- Using collections to organize your requests
- Different environments for your APIs
- And much more...

Check it out here: https://lnkd.in/duutwpe4
Post image by Milan Jovanović
I spent 17 hours optimizing an API endpoint to make it 15x faster.

Here's a breakdown of what I did.

I worked on an e-commerce application. One endpoint was crunching some heavy numbers. And it wasn't scaling well.

The endpoint calculated a report. It needed data from several services to perform the calculations.

This is the high-level process I took:

- Identify the bottlenecks
- Fix the database queries
- Fix the external API calls
- Add caching as a final touch

𝗦𝗼, 𝗵𝗼𝘄 𝗱𝗼 𝘆𝗼𝘂 𝗶𝗱𝗲𝗻𝘁𝗶𝗳𝘆 𝘁𝗵𝗲 𝗯𝗼𝘁𝘁𝗹𝗲𝗻𝗲𝗰𝗸𝘀 𝗶𝗻 𝘆𝗼𝘂𝗿 𝘀𝘆𝘀𝘁𝗲𝗺?

If you know the slowest piece of code, you will know what to fix. The 80/20 rule works wonders here. Improving 20% of the slowest code can yield an 80% improvement.

The fun doesn't stop here. Performance optimization is a continuous process and requires constant monitoring and improvements. Fixing one problem will reveal the next one.

The problems I found were:

- Calling the database from a loop
- Calling an external service many times
- Duplicate calculations with the same parameters

Measuring performance is also a crucial step in the optimization process:

- Logging execution times with a Timer/Stopwatch
- If you have detailed application metrics, even better
- Use a performance profiler tool to find slow code

𝗙𝗶𝘅𝗶𝗻𝗴 𝘀𝗹𝗼𝘄 𝗱𝗮𝘁𝗮𝗯𝗮𝘀𝗲 𝗾𝘂𝗲𝗿𝗶𝗲𝘀

A round trip between your application and a database or service can last 5-10ms (or more). The more round trips you have, the more it adds up.

Here are a few things you can do to improve this:

- Don't call the database from a loop
- Return multiple results in one query
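The difference is easy to see with a fake data source that counts round trips. Everything below is illustrative; in a real app the query would go through EF Core or a similar data access layer:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

// A fake database that counts round trips, to show why batching matters.
public class FakeDb
{
    public int RoundTrips { get; private set; }

    private readonly Dictionary<int, string> _rows =
        Enumerable.Range(1, 100).ToDictionary(i => i, i => $"product-{i}");

    // One call = one round trip, whether it returns one row or many.
    public Task<List<string>> QueryAsync(IEnumerable<int> ids)
    {
        RoundTrips++;
        return Task.FromResult(ids.Select(id => _rows[id]).ToList());
    }
}

public static class Demo
{
    // Anti-pattern: one query per id inside a loop (N round trips).
    public static async Task<List<string>> InLoop(FakeDb db, int[] ids)
    {
        var results = new List<string>();
        foreach (var id in ids)
            results.AddRange(await db.QueryAsync(new[] { id }));
        return results;
    }

    // Fix: one query returning all matching rows (1 round trip).
    public static Task<List<string>> Batched(FakeDb db, int[] ids) =>
        db.QueryAsync(ids);
}
```

With a 5-10ms round trip, turning 100 loop iterations into one batched query saves most of a second on its own.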

𝗖𝗼𝗻𝗰𝘂𝗿𝗿𝗲𝗻𝘁 𝗲𝘅𝗲𝗰𝘂𝘁𝗶𝗼𝗻 𝗶𝘀 𝘆𝗼𝘂𝗿 𝗳𝗿𝗶𝗲𝗻𝗱

I had multiple asynchronous calls to different services. These services were independent of each other. So, I called these services concurrently and aggregated the results. This simple technique helped me achieve significant performance improvement.

𝗖𝗮𝗰𝗵𝗶𝗻𝗴 𝗮𝘀 𝗮 𝗹𝗮𝘀𝘁 𝗿𝗲𝘀𝗼𝗿𝘁

Caching is an effective way to speed up an application. But it can introduce bugs when the data is stale. Is this tradeoff worth it?

In my case, achieving the desired performance was critical. You also have to consider the cache expiration and eviction strategies.

A few caching options in ASP.NET Core:

- IMemoryCache (uses server RAM)
- IDistributedCache (Redis, Azure Cache for Redis)
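The core idea behind the caching step, sketched with a plain ConcurrentDictionary as the cache (IMemoryCache adds expiration and eviction policies on top of this; the calculation here is a placeholder):

```csharp
using System.Collections.Concurrent;

// Memoize an expensive calculation: the slow path runs once per key,
// every later call with the same parameters is a dictionary lookup.
public class CachedCalculator
{
    public int Computations { get; private set; }

    private readonly ConcurrentDictionary<string, decimal> _cache = new();

    public decimal GetReportTotal(int customerId) =>
        _cache.GetOrAdd($"report-{customerId}", _ =>
        {
            Computations++; // the expensive path
            return Compute(customerId);
        });

    // Placeholder for the real number crunching.
    private decimal Compute(int customerId) => customerId * 10m;
}
```

This also covers the "duplicate calculations with the same parameters" problem from the list above. The part this sketch deliberately skips is expiration, which is exactly where the stale-data tradeoff lives.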

What do you think of my process? Would you do something differently?

---
Subscribe to my weekly newsletter to accelerate your .NET skills: https://bit.ly/3R9JnT5
Post image by Milan Jovanović
Will this new EF Core 8 feature be the end of Dapper?

Here's what you should know about EF Raw SQL queries. 👇

EF Core 8 supports returning unmapped types from raw SQL queries.

Why is this so important?

Previously, you could only return entity types or scalar values.

But now, you can map raw SQL queries to any C# type.

Is it faster than Dapper?

No, not in the tests I've performed.

Right now, it's on par with LINQ queries.

And this might be all the performance you need.

The big benefit is you can now do everything with only EF.

Here's what's possible with EF raw SQL queries:

- Querying unmapped types with SQL
- Composing SQL queries with LINQ
- Executing updates with SQL

You can also query views, functions, and stored procedures.
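A sketch of what this looks like in EF Core 8. The table, column, and type names are illustrative, not from the post's snippet:

```csharp
using System.Linq;
using Microsoft.EntityFrameworkCore;

// An unmapped result type: no DbSet, no entity configuration needed.
// SqlQuery<T> maps the SQL result columns to properties by name.
public class OrderSummary
{
    public int OrderId { get; set; }
    public decimal Total { get; set; }
    public int ItemCount { get; set; }
}

public static class Queries
{
    public static IQueryable<OrderSummary> GetSummaries(DbContext db) =>
        db.Database.SqlQuery<OrderSummary>(
            $"""
            SELECT o.Id AS OrderId, o.Total, COUNT(l.Id) AS ItemCount
            FROM Orders o JOIN OrderLines l ON l.OrderId = o.Id
            GROUP BY o.Id, o.Total
            """)
        // Composing the raw SQL with LINQ is also supported;
        // EF wraps the SQL in a subquery and translates the rest.
        .Where(s => s.Total > 100m);
}
```

Note that `SqlQuery` takes an interpolated string on purpose: interpolated values are sent as parameters, which protects against SQL injection.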

Here's how to get started with EF Raw SQL queries: https://lnkd.in/exDU7v3X

---

✍️ What would you add?
♻️ Reshare if you like EF Core.
Post image by Milan Jovanović
How can Task.WhenAll give you better performance?

We often use async calls in our code to improve throughput and resource utilization in our applications.

In the first example, we await multiple calls.

The entire method will execute in the sum of times it takes each async call to complete.

However, we await all the async calls in the second example simultaneously.

In theory, our method will complete when the slowest async call ends.

Notice that waiting for the slowest call to complete is faster than waiting for all the calls to complete individually.

Also, whether Task.WhenAll runs the calls in parallel depends on factors outside your control, such as the thread pool and the nature of the work.

However, you can't use this approach if you need the result of one async operation to execute the next one.
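A minimal, self-contained comparison, with Task.Delay standing in for real I/O:

```csharp
using System.Threading.Tasks;

public static class WhenAllDemo
{
    // A stand-in for an async call (e.g. an HTTP or database request).
    private static async Task<int> GetValueAsync(int value)
    {
        await Task.Delay(100); // simulate ~100ms of I/O latency
        return value;
    }

    // Sequential awaits: total time ≈ the SUM of the delays (~300ms).
    public static async Task<int> Sequential()
    {
        var a = await GetValueAsync(1);
        var b = await GetValueAsync(2);
        var c = await GetValueAsync(3);
        return a + b + c;
    }

    // Start all three first, then await: total ≈ the SLOWEST delay (~100ms).
    public static async Task<int> Concurrent()
    {
        var t1 = GetValueAsync(1);
        var t2 = GetValueAsync(2);
        var t3 = GetValueAsync(3);
        var results = await Task.WhenAll(t1, t2, t3);
        return results[0] + results[1] + results[2];
    }
}
```

Both methods return the same result; only the elapsed time differs.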

P.S. If you want to learn more about .NET and software architecture, consider subscribing to my newsletter.

Join 24,000+ engineers: https://lnkd.in/dMDPXuUh

How often do you use Task.WhenAll?
Post image by Milan Jovanović
If you're writing code every day, here are 3 things to remember:

- Stay away from over-engineering
- Make it work, then make it pretty
- Add safety with tests

Software engineering is complex.

But your process can be simple.
Have you had a chance to work with the Outbox pattern?

If you're working with microservices, this is definitely something to add to your toolkit.

So what is it?

The Outbox pattern is a technique for reliably publishing events in a distributed system.

Instead of publishing events directly, the Outbox pattern involves storing events in a separate table in your database and then having a background process read from that table and publish the events to a message broker.

Why would you ever want to introduce this sort of complexity?

Well, if you're working with distributed systems you surely know that things break. A downstream service is down. The network isn't available.

If you couple your application requests with the process of notifying other services, either by directly calling them or publishing a message to the queue, you are introducing a potential issue.

The Outbox pattern is used to solve the problem of ensuring that events are published in a reliable way.

In a distributed system, it's common to have multiple services that need to be updated when an event occurs.

For example, if a user updates their profile, you might need to update multiple services with that new data.

By using the Outbox pattern, you can ensure that those updates happen reliably, even if some of the services are temporarily unavailable.

If you're working with a SQL database, for example, you know that your transaction is atomic. You can reliably persist your message to the Outbox table and have a background worker process that message at a later time.

One of the key benefits of the Outbox pattern is that it helps you to ensure consistency in your distributed system.

By using a separate table to store events, you can be sure that events are published at least once, and in the order they were stored.

Another benefit is that the Outbox pattern is generally easy to implement and can be used with any message broker or queueing system.

Plus, it can help you to improve the performance and scalability of your system by decoupling the act of publishing events from the rest of your application logic.

You can also add retries for failed messages, and try to publish them again later.

Of course, the Outbox pattern only takes care of the publishing side of things. On the consumer, you still need to think about duplicate messages in case of retries.
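Here is an in-memory sketch of the two halves: the atomic write and the background publisher. In a real system the outbox is a database table written in the same transaction as the business data, and the publisher pushes to a message broker; all names below are illustrative:

```csharp
using System;
using System.Collections.Generic;

// An outbox row: the event payload plus a processed flag.
public record OutboxMessage(Guid Id, string Type, string Payload)
{
    public bool Processed { get; set; }
}

public class OutboxDb
{
    public List<OutboxMessage> Outbox { get; } = new();

    // Stand-in for "save business data + outbox row in ONE transaction".
    public void SaveWithEvent(string type, string payload) =>
        Outbox.Add(new OutboxMessage(Guid.NewGuid(), type, payload));
}

public class OutboxPublisher
{
    private readonly OutboxDb _db;
    private readonly Action<OutboxMessage> _publish; // e.g. a message broker

    public OutboxPublisher(OutboxDb db, Action<OutboxMessage> publish) =>
        (_db, _publish) = (db, publish);

    // The background worker's loop body: publish unprocessed rows in
    // insertion order, marking each one only after a successful publish.
    public void ProcessPending()
    {
        foreach (var message in _db.Outbox.FindAll(m => !m.Processed))
        {
            _publish(message);        // if this throws, Processed stays false,
            message.Processed = true; // so the message is retried next run
        }
    }
}
```

The mark-after-publish ordering is what gives you at-least-once delivery, and it is also why the consumer must tolerate duplicates.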

If you enjoyed this post, you will love my weekly .NET newsletter. Every Saturday I share one actionable tip, and it's always less than a 5-minute read.

Join 9800+ engineers: https://lnkd.in/dMDPXuUh

#softwareengineering #dotnet #outbox
Post image by Milan Jovanović
What is the cleanest way to get configuration values in .NET?

I use the Options pattern, and here's why you should too.

App configuration lives in environment variables or JSON files.

You can get individual values using the `IConfiguration` interface.

But this is error-prone and cumbersome, so I don't recommend it.

Instead, I'll use the Options pattern:

- Create a class to represent the settings
- Bind the class properties to application settings (JSON)
- Consume the Options pattern in my code using IOptions

You can also add data annotations to your settings class for simple validation.
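A sketch of those three steps plus the validation. The section and property names are illustrative:

```csharp
using System.ComponentModel.DataAnnotations;
using Microsoft.Extensions.Options;

// 1) A class representing one configuration section.
public class EmailSettings
{
    public const string SectionName = "Email";

    [Required] public string SenderAddress { get; set; } = string.Empty;
    [Range(1, 65535)] public int Port { get; set; }
}

// 2) Bind it to configuration at startup (e.g. in Program.cs),
//    validating the data annotations when the app starts:
//
// builder.Services.AddOptions<EmailSettings>()
//     .BindConfiguration(EmailSettings.SectionName)
//     .ValidateDataAnnotations()
//     .ValidateOnStart();

// 3) Consume the strongly typed settings via IOptions<T>.
public class EmailService
{
    private readonly EmailSettings _settings;

    public EmailService(IOptions<EmailSettings> options) =>
        _settings = options.Value;
}
```

With `ValidateOnStart`, a missing or out-of-range value fails fast at startup instead of surfacing as a runtime bug later.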

Are you using the Options pattern?

If you are new to the Options pattern, start here → https://lnkd.in/dbeN2Kat
Post image by Milan Jovanović
How do you design a good API?

Here are 5 tips for designing a quality API.

API design is deciding how your API will expose data and functionality to consumers. A good API design describes the API endpoints and resources in some standard format.

There are 4 key stages of API design:

1 - Determine what the API should do
2 - Define the API contracts (OpenAPI)
3 - Validate your assumptions with tests
4 - Document the API (endpoints, error codes, etc.)

Here are 5 more tips for designing a good API.

1) 𝗣𝗿𝗶𝗼𝗿𝗶𝘁𝗶𝘇𝗲 𝗖𝗼𝗻𝘀𝗶𝘀𝘁𝗲𝗻𝗰𝘆 𝗮𝗻𝗱 𝗦𝗶𝗺𝗽𝗹𝗶𝗰𝗶𝘁𝘆

Your API should have consistent naming conventions, response formats, and an error-handling strategy. Simplicity in API design makes it easier for developers to understand and integrate with your API.

2) 𝗘𝗺𝗯𝗿𝗮𝗰𝗲 𝗥𝗘𝗦𝗧𝗳𝘂𝗹 𝗣𝗿𝗶𝗻𝗰𝗶𝗽𝗹𝗲𝘀

You should design your API according to RESTful principles, emphasizing statelessness, a client-server architecture, and a uniform interface. REST APIs are the standard in a majority of .NET applications.

3) 𝗨𝘀𝗲 𝘁𝗵𝗲 𝗰𝗼𝗿𝗿𝗲𝗰𝘁 𝗛𝗧𝗧𝗣 𝗦𝘁𝗮𝘁𝘂𝘀 𝗖𝗼𝗱𝗲𝘀

You should use the correct HTTP status codes to communicate the outcome of API requests. This includes successful operations (2xx), client errors (4xx), and server errors (5xx). It helps consumers of your API understand what went wrong and how to rectify it.
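For example, with ASP.NET Core minimal APIs (the routes, request type, and service below are hypothetical):

```csharp
// Map each outcome to the right status code instead of returning 200
// for everything. ProductService and CreateProductRequest are
// placeholder types for this sketch.
app.MapGet("/products/{id:guid}", async (Guid id, ProductService service) =>
{
    var product = await service.FindAsync(id);
    return product is null
        ? Results.NotFound()   // 404: the resource doesn't exist
        : Results.Ok(product); // 200: success with a response body
});

app.MapPost("/products", async (CreateProductRequest request, ProductService service) =>
{
    if (!request.IsValid())
        return Results.BadRequest(); // 400: the client sent bad input

    var created = await service.CreateAsync(request);

    // 201 with a Location header pointing at the new resource.
    return Results.Created($"/products/{created.Id}", created);
});
```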

4) 𝗜𝗺𝗽𝗹𝗲𝗺𝗲𝗻𝘁 𝗔𝗣𝗜 𝗩𝗲𝗿𝘀𝗶𝗼𝗻𝗶𝗻𝗴

Plan for future changes by implementing versioning in your API. This allows you to make improvements and changes without breaking existing client integrations. The most common approach is URL versioning.

5) 𝗔𝘂𝘁𝗵𝗡 𝗮𝗻𝗱 𝗔𝘂𝘁𝗵𝗭

Implement authentication, authorization, and data encryption where necessary. Protecting sensitive data and ensuring that only authorized users can access specific API endpoints is crucial for maintaining the trust and integrity of your API.

Learn more about API design here: https://lnkd.in/eAv2Fgbs

What is something you do to design a good API?
Post image by Milan Jovanović
