These are the best posts from Sahn Lam.

13 viral posts with 19,989 likes, 227 comments, and 2,442 shares.
11 image posts, 0 carousel posts, 2 video posts, 0 text posts.


Best Posts by Sahn Lam on LinkedIn

A Visual Guide to CI/CD

๐—–๐—ผ๐—ป๐˜๐—ถ๐—ป๐˜‚๐—ผ๐˜‚๐˜€ ๐—œ๐—ป๐˜๐—ฒ๐—ด๐—ฟ๐—ฎ๐˜๐—ถ๐—ผ๐—ป (๐—–๐—œ) is a foundational practice in DevOps where developers frequently merge code changes into the main branch, often multiple times a day. This process is complemented by automated testing to ensure new changes integrate seamlessly with existing code. The primary goal of CI is to find and address bugs quicker, improve software quality, and reduce the time required to validate and release updates

๐—–๐—ผ๐—ป๐˜๐—ถ๐—ป๐˜‚๐—ผ๐˜‚๐˜€ ๐——๐—ฒ๐—ฝ๐—น๐—ผ๐˜†๐—บ๐—ฒ๐—ป๐˜ (๐—–๐——) automates deploying code changes to a production without human intervention. It ensures every change passing all automated tests gets deployed. It accelerates customer feedback by releasing updates more frequently. CD also reduces pressure on developers by eliminating manual release processes.

Some companies rely on Continuous Delivery instead. Continuous Delivery extends CI by automatically preparing code changes for release to production. However, unlike Continuous Deployment, it requires manual approval prior to production deployment. This practice ensures that all changes are automatically built, tested, and ready for release. It allows teams to deploy new changes anytime at the push of a button.
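The difference between the three practices boils down to one gate. Here is a toy sketch of that decision flow; the function names and the change dictionary are illustrative stand-ins, not a real CI system's API:

```python
# Toy sketch of the CI/CD decision flow. run_tests and build are stand-ins.

def run_tests(change):
    return change.get("tests_pass", False)

def build(change):
    return {"artifact": f"build-of-{change['id']}"}

def pipeline(change, mode="continuous_delivery", approved=False):
    """Return the outcome for a single code change."""
    if not run_tests(change):          # CI: every merge is built and tested
        return "rejected"
    artifact = build(change)
    if mode == "continuous_deployment":
        return f"deployed {artifact['artifact']}"   # no human in the loop
    # Continuous delivery: release-ready, but waits for manual approval
    return f"deployed {artifact['artifact']}" if approved else "awaiting approval"

print(pipeline({"id": 42, "tests_pass": True}))
# awaiting approval
print(pipeline({"id": 42, "tests_pass": True}, mode="continuous_deployment"))
# deployed build-of-42
```

The only difference between delivery and deployment in this sketch is whether the final step requires `approved=True` or runs unconditionally.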

โ€“
Subscribe to our weekly newsletter to get a Free System Design PDF (158 pages): https://bit.ly/496keA7
Netflix Tech Stack (CI/CD Pipeline)

Let's explore the innovative tools and techniques behind Netflix's world-class continuous delivery pipeline.

Planning: Netflix Engineering uses JIRA for project planning and Confluence for documentation.

Coding: Java is the primary language for backend services. Other languages are used where appropriate.

Building: Gradle is the main build tool. Custom Gradle plugins support various use cases.

Packaging: Code, dependencies, and configurations are packaged into Amazon Machine Images (AMIs) for release.

Testing: Netflix uses a suite of chaos engineering tools to simulate failures like outages or latencies. These chaos tests are also run against the real production environment to validate resilience and failover mechanisms.
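Netflix's actual chaos tooling is far more sophisticated, but the core idea of fault injection can be sketched with a decorator that randomly fails a call. All names below are hypothetical illustrations, not Netflix APIs:

```python
import functools
import random

def chaos(failure_rate=0.2, seed=None):
    """Decorator that randomly injects failures, mimicking the kind of
    fault a chaos test introduces (a toy sketch, not a real chaos tool)."""
    rng = random.Random(seed)
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            if rng.random() < failure_rate:
                raise TimeoutError(f"chaos: injected failure in {fn.__name__}")
            return fn(*args, **kwargs)
        return inner
    return wrap

@chaos(failure_rate=0.5, seed=7)
def get_recommendations(user_id):
    return [f"title-{user_id}-{i}" for i in range(3)]

# A resilient caller degrades gracefully instead of crashing:
def recommendations_with_fallback(user_id):
    try:
        return get_recommendations(user_id)
    except TimeoutError:
        return []  # fall back to an empty / cached list

print(recommendations_with_fallback(7))
```

Running the caller repeatedly exercises both the failure path and the happy path, which is exactly what a chaos test is meant to validate.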

Deployment: Netflix uses its Spinnaker tool for canary rollout deployments.

Monitoring: Metrics are centralized in Atlas. Kayenta detects anomalies.

Incident Response: PagerDuty handles incident management. Incidents are prioritized and dispatched.

Over to you: If you do chaos testing against production, what tools or techniques do you use?

How Do C++, Java, and Python Work Under the Hood?

We've recently released a video explaining the inner workings of these popular programming languages!

Compiled languages like C++ and Go transform source code into machine code using a compiler before execution. The compiled machine code can then be directly executed by the CPU.

Java first compiles source code into bytecode, which is platform-independent and executed by the Java Virtual Machine (JVM). The JVM can further boost performance using Just-In-Time (JIT) compilation to convert bytecode to machine code at runtime.

Interpreted languages like JavaScript and Ruby don't undergo an ahead-of-time compilation step. Instead, their code is processed line-by-line by an interpreter during execution. However, modern JavaScript engines like V8 also utilize JIT compilation for enhanced performance.

Python is a mix of both worlds. It first compiles source code into platform-independent bytecode, which is then executed line-by-line by the platform-dependent interpreter. In addition, implementations like PyPy use JIT compilation for a speed boost.
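Python's compile-then-interpret split can be observed directly with the stdlib `dis` module, which disassembles the bytecode CPython has already produced for a function:

```python
import dis

def add(a, b):
    return a + b

# CPython compiled `add` to bytecode at definition time; dis shows it.
ops = [ins.opname for ins in dis.get_instructions(add)]
print(ops)  # includes a BINARY_ADD / BINARY_OP and a RETURN_VALUE instruction
```

The exact opcode names vary across CPython versions, but the bytecode layer is always there between your source and the interpreter loop.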

Generally, compiled languages can offer speed advantages, but the line between compiled and interpreted languages is increasingly blurred thanks to modern optimization techniques.

Watch the whole video here: https://lnkd.in/ghEbQ583
I've used Redis in production for almost a decade. It's reliable and easy to use (if used correctly). Here are my top 5 use cases where it shines:

1. Caching

The most common use case is to utilize Redis for caching. This helps protect the database layer from overloading. Redis offers fast lookup times for cached data and can help improve application performance.

2. Session Store

We use Redis to share user session data among stateless servers. Redis provides a centralized place to store session data and makes it easy to scale out servers.

3. Distributed lock

We use Redis distributed locks to grant mutually exclusive access to shared resources. This prevents race conditions in distributed systems. Redis locks are easy to implement and automatically expire.

4. Counter and Rate Limiter

We use Redis to track like counts, view counts, etc., on social media apps. Redis counters provide atomic increments and decrements. We also use Redis to enforce rate limits on our API endpoints, which helps prevent abuse.
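The rate-limiter pattern maps to Redis INCR on a key that expires with the window. Here is an in-memory sketch of a fixed-window limiter; a real deployment would use Redis so all app servers share one counter:

```python
import time

class FixedWindowLimiter:
    """In-memory sketch of the Redis INCR + EXPIRE rate-limiting pattern."""
    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.counters = {}  # (key, window_start) -> count

    def allow(self, key, now=None):
        now = time.time() if now is None else now
        window_start = int(now // self.window) * self.window
        bucket = (key, window_start)
        self.counters[bucket] = self.counters.get(bucket, 0) + 1  # like INCR
        return self.counters[bucket] <= self.limit

limiter = FixedWindowLimiter(limit=3, window_seconds=60)
results = [limiter.allow("user:42", now=1000) for _ in range(5)]
print(results)  # [True, True, True, False, False]
```

Old buckets are simply abandoned here; in Redis, EXPIRE reclaims them automatically.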

5. Leaderboard

Sorted sets make it easy to implement gaming leaderboards in Redis. We can add, update, or remove users from the leaderboard and query ranges efficiently.
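The leaderboard pattern relies on Redis sorted-set commands (ZADD, ZREVRANGE). This plain-Python sketch mimics their semantics for illustration; Redis keeps the set sorted server-side, whereas here we sort on read:

```python
class Leaderboard:
    """Sketch of the Redis sorted-set leaderboard pattern (ZADD / ZREVRANGE)."""
    def __init__(self):
        self.scores = {}

    def add_score(self, user, score):   # like ZADD
        self.scores[user] = score

    def top(self, n):                   # like ZREVRANGE 0 n-1 WITHSCORES
        ranked = sorted(self.scores.items(), key=lambda kv: kv[1], reverse=True)
        return ranked[:n]

board = Leaderboard()
board.add_score("alice", 3200)
board.add_score("bob", 4100)
board.add_score("carol", 2800)
print(board.top(2))  # [('bob', 4100), ('alice', 3200)]
```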

There are many other features in Redis. What are some other real-world use cases where you've used Redis successfully?

Visualizing SQL Queries

A mental model can help visualize how SQL queries are executed. Conceptually, SQL statements can be thought of as executing in this sequence:

1. FROM: Tables are identified and joined to create the initial dataset.

2. WHERE: Filters are applied to the initial dataset based on specified criteria.

3. GROUP BY: The filtered rows are grouped according to the specified columns.

4. HAVING: Additional filters are applied to the grouped rows based on aggregate criteria.

5. SELECT: Specific columns are chosen from the resultant dataset for the output.

6. ORDER BY: The output rows are sorted by the specified columns in ascending or descending order.

7. LIMIT: The number of rows in the output is restricted.
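To make the ordering concrete, here is a query that exercises every clause in the sequence above, run against an in-memory SQLite database (the table and data are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("alice", 50), ("alice", 70), ("bob", 20),
                  ("bob", 10), ("carol", 200)])

rows = conn.execute("""
    SELECT customer, SUM(amount) AS total   -- 5. SELECT
    FROM orders                             -- 1. FROM
    WHERE amount > 15                       -- 2. WHERE
    GROUP BY customer                       -- 3. GROUP BY
    HAVING SUM(amount) > 60                 -- 4. HAVING
    ORDER BY total DESC                     -- 6. ORDER BY
    LIMIT 2                                 -- 7. LIMIT
""").fetchall()
print(rows)  # [('carol', 200), ('alice', 120)]
```

Tracing the mental model: WHERE drops bob's 10 before grouping, HAVING then drops bob's remaining 20, and only then do SELECT, ORDER BY, and LIMIT shape the output.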

In reality, the actual execution sequence may differ from this mental model due to optimization strategies employed by the query optimizer. The query optimizer:

- Parses the SQL statements

- Translates them into relational algebra

- Applies optimization procedures

- Generates an execution plan

Even though the actual execution plan may vary due to optimization, this mental model remains a valuable visualization for understanding the core logic of SQL queries.

Over to you: Have you ever seen a query optimizer come up with a completely counterintuitive execution plan? Share the craziest query plan you've encountered.

Observability: logging, tracing, and metrics.

🔹 Logging
Logging involves recording discrete events within a system, such as incoming requests or database accesses. It typically generates high volumes of data. The ELK stack (Elasticsearch, Logstash, Kibana) is commonly used to build log analysis platforms. Standardizing logging formats across teams makes searching log datasets far more efficient.
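A standardized format often means structured (JSON) log lines. Here is a minimal sketch using Python's stdlib logging; the field names and service name are illustrative:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Emit each log record as a single JSON object (fields are illustrative)."""
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "service": "checkout",        # set per service in practice
            "event": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
log = logging.getLogger("demo")
log.addHandler(handler)
log.warning("payment_declined")
# emits: {"level": "WARNING", "service": "checkout", "event": "payment_declined"}
```

Because every team emits the same fields, a single Elasticsearch query like `level:WARNING AND service:checkout` works across all services.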

🔹 Tracing
Tracing provides insight into the journey of requests across system components like APIs, load balancers, services, and databases. It is instrumental in identifying performance bottlenecks. OpenTelemetry offers a unified approach for implementing logging, tracing and metrics within a single architecture.

🔹 Metrics
Metrics represent aggregate data points reflecting a system's operational state, including query rates, API responsiveness, and service latencies. This time-series data is collected in databases like InfluxDB and often processed by tools such as Prometheus, which supports querying and alerting based on specific criteria. Visualization and alerting on metrics can be done in platforms like Grafana, which integrates with various alerting mechanisms like email, SMS, or Slack.

Which tools have you used for observability?

What is Kubernetes?

Kubernetes (k8s) is a container orchestration system for deploying and managing containers. Its design is influenced by Google's internal cluster management system Borg.

A k8s cluster consists of worker machines called nodes that run containerized applications. Every cluster has at least one worker node that hosts pods - the components of the application workload. The control plane manages the nodes and pods. In production, the control plane usually runs across multiple computers for fault tolerance and high availability.

๐—–๐—ผ๐—ป๐˜๐—ฟ๐—ผ๐—น ๐—ฃ๐—น๐—ฎ๐—ป๐—ฒ ๐—–๐—ผ๐—บ๐—ฝ๐—ผ๐—ป๐—ฒ๐—ป๐˜๐˜€

- API Server - Communicates with all k8s components and handles all pod operations

- Scheduler - Watches pod workloads and assigns them to nodes

- Controller Manager - Runs core control loops like the Node Controller and EndpointSlice Controller

- etcd - Key-value store that backs all cluster data

Worker Node Components

- Pods - The smallest unit deployed and managed by k8s. Pods group containers and give them a single IP address.

- kubelet - An agent on each node that ensures container runtimes are running in pods

- kube-proxy - A network proxy on each node that handles routing and load balancing for services and pods
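The control loops run by the controller manager all follow the same reconcile pattern: compare desired state with observed state and act to close the gap. A toy ReplicaSet-style reconciler (not the real k8s API) might look like:

```python
# Toy reconciliation loop in the style of a k8s controller.

def reconcile(desired_replicas, running_pods):
    """Return the actions a ReplicaSet-style controller would take."""
    diff = desired_replicas - len(running_pods)
    if diff > 0:                                    # too few pods: scale up
        return [("create_pod", f"pod-{i}") for i in range(diff)]
    if diff < 0:                                    # too many pods: scale down
        return [("delete_pod", name) for name in running_pods[desired_replicas:]]
    return []                                       # converged: nothing to do

print(reconcile(3, ["pod-a"]))
# [('create_pod', 'pod-0'), ('create_pod', 'pod-1')]
```

Real controllers run this loop continuously against the API server, so the cluster keeps converging toward the declared state even after failures.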

How does HTTPS work?

Hypertext Transfer Protocol Secure (HTTPS) is an extension of HTTP that utilizes Transport Layer Security (TLS) to encrypt communication between a client and a server. Any intercepted data is unreadable, protecting the exchange from tampering and eavesdropping.

What's the process for encrypting and decrypting data?

Step 1 - The journey begins with the client (like your browser) establishing a TCP connection with the server.

Step 2 - Next comes the "client hello", where the browser sends a message containing supported cipher suites and the highest TLS version it can handle. Cipher suites are sets of algorithms that typically include: a key exchange method to share keys between devices, a bulk encryption algorithm to encrypt data, and a message authentication code algorithm to check data integrity.

The server responds with a "server hello", confirming the chosen cipher suite and TLS version that they can both understand. The server then sends a TLS certificate to the client containing its domain name, certificate authority signature, and the server's public key. The client checks this certificate to validate that it is trusted and belongs to the server.

Step 3 - Once the TLS certificate is validated, the client creates a session key to be used for encrypting the bulk data transfer. Bulk data transfer refers to the transmission of the actual application data between client and server once the secure TLS connection is established. To securely send this session key to the server, it's encrypted with the server's public key. Only the server, holding the matching private key, can decrypt the encrypted session key.

Step 4 - Now that both parties have the secret session key, they shift to symmetric encryption. It's like they've agreed on a private language that only they understand. Symmetric encryption is much faster for large amounts of data, which makes the bulk transfer both secure and efficient.
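The key-exchange idea in steps 3 and 4 can be modeled with textbook RSA and toy numbers. This is absolutely not secure (tiny primes, no padding, toy cipher); real TLS uses vetted libraries and large keys. It only shows the shape of the flow:

```python
import hashlib

# Toy model of the handshake's key exchange: textbook RSA with p=61, q=53.
n, e, d = 3233, 17, 2753          # server key pair: public (n, e), private d

session_key = 1234                # step 3: client picks a session key (< n)
encrypted = pow(session_key, e, n)          # encrypt with server's public key
assert pow(encrypted, d, n) == session_key  # only the private key recovers it

# Step 4: both sides derive a symmetric keystream from the shared session key.
def xor_cipher(data: bytes, key: int) -> bytes:
    stream = hashlib.sha256(str(key).encode()).digest()
    return bytes(b ^ stream[i % len(stream)] for i, b in enumerate(data))

ciphertext = xor_cipher(b"GET /index.html", session_key)
print(xor_cipher(ciphertext, session_key))  # b'GET /index.html'
```

The asymmetric step runs once to share the key; all application data then flows through the much cheaper symmetric cipher, mirroring real TLS.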

Ever wonder what it means when your manager asks you to design for "high availability," "high scalability," or "high throughput"? Let me break it down in simple terms.

๐—›๐—ถ๐—ด๐—ต ๐—”๐˜ƒ๐—ฎ๐—ถ๐—น๐—ฎ๐—ฏ๐—ถ๐—น๐—ถ๐˜๐˜† - ๐—ž๐—ฒ๐—ฒ๐—ฝ๐—ถ๐—ป๐—ด ๐˜๐—ต๐—ฒ ๐—ฆ๐—ฒ๐—ฟ๐˜ƒ๐—ถ๐—ฐ๐—ฒ ๐—จ๐—ฝ ๐—ฎ๐—ป๐—ฑ ๐—ฅ๐˜‚๐—ป๐—ป๐—ถ๐—ป๐—ด

This refers to maximizing uptime of a service, usually targeting 99.9% availability or higher. Each additional nine usually means an exponential increase in complexity. To achieve this, we build in redundancy at multiple levels, with failover systems ready to step in if the main system crashes.
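The "each additional nine" point is easy to quantify: a quick calculation of the downtime budget per year for each availability target.

```python
# Allowed downtime per year for a given availability target.
HOURS_PER_YEAR = 365 * 24  # 8760

def downtime_hours(availability_pct):
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

for nines in (99.0, 99.9, 99.99, 99.999):
    print(f"{nines}% -> {downtime_hours(nines):.2f} hours/year")
# 99.9% allows about 8.76 hours of downtime a year; each extra nine cuts it 10x.
```

Going from 99.9% to 99.99% shrinks the yearly budget from roughly 8.76 hours to about 53 minutes, which is why each nine demands much more redundancy and automation.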

๐—›๐—ถ๐—ด๐—ต ๐—ง๐—ต๐—ฟ๐—ผ๐˜‚๐—ด๐—ต๐—ฝ๐˜‚๐˜ - ๐—›๐—ฎ๐—ป๐—ฑ๐—น๐—ถ๐—ป๐—ด ๐—›๐—ฒ๐—ฎ๐˜ƒ๐˜† ๐—Ÿ๐—ผ๐—ฎ๐—ฑ

Throughput refers to the number of requests a system can handle per second, measured in transactions per second (TPS) or queries per second (QPS). Common techniques include adding caches, tweaking thread usage, optimizing bottlenecks, and enabling asynchronous processing to handle more simultaneous requests.

๐—›๐—ถ๐—ด๐—ต ๐—ฆ๐—ฐ๐—ฎ๐—น๐—ฎ๐—ฏ๐—ถ๐—น๐—ถ๐˜๐˜† - ๐—š๐—ฟ๐—ผ๐˜„๐—ถ๐—ป๐—ด ๐—–๐—ฎ๐—ฝ๐—ฎ๐—ฐ๐—ถ๐˜๐˜†

Scalability means a system can expand its workload capacity as needed. To scale out horizontally, it's common to break services into independent modules or microservices. Leveraging load balancers and service registries enables seamless routing of requests to new resources.

Over to you: What stories do you have tackling these in system design?

Understanding OAuth 2.0

OAuth is an open standard that allows users to grant limited access to their data on one site to other sites or applications without exposing their passwords. It has become the backbone of secure authorization across the web and mobile apps.

The OAuth Ecosystem

OAuth connects three main players:

- The User who wants to grant access to their data without sharing login credentials
- The Server that hosts the user's data and provides access tokens
- The Identity Provider (IdP) that authenticates the user's identity and issues tokens

๐—›๐—ผ๐˜„ ๐—ข๐—”๐˜‚๐˜๐—ต ๐˜„๐—ผ๐—ฟ๐—ธ๐˜€

When a user tries to access their data through a third-party app, they are redirected to log in through the IdP. The IdP sends an access token to the app, which presents it to the server. Recognizing the valid token, the server grants access.

The OAuth Flows

OAuth 2.0 defines four flows for obtaining authorization tokens:

- Authorization Code Flow - for server-side applications
- Client Credentials Flow - for machine-to-machine access where the app is the resource owner
- Implicit Flow - not secure and no longer recommended
- Resource Owner Password Credentials Flow - for highly trusted apps that handle the owner's credentials directly
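A toy sketch of the Authorization Code Flow's code-for-token exchange can make the moving parts concrete. Real providers add PKCE, expiry, signed requests, and much more; all class and parameter names here are illustrative:

```python
import secrets

class IdentityProvider:
    """Toy IdP for the Authorization Code Flow (illustrative, not a real IdP)."""
    def __init__(self):
        self._codes = {}      # one-time authorization codes
        self._tokens = set()

    def authorize(self, user, client_id):
        """User logs in at the IdP; the app receives a short-lived code."""
        code = secrets.token_hex(8)
        self._codes[code] = (user, client_id)
        return code

    def exchange(self, code, client_id):
        """The app's backend swaps the code for an access token."""
        user, expected_client = self._codes.pop(code)   # codes are single-use
        if client_id != expected_client:
            raise PermissionError("code issued to a different client")
        token = secrets.token_hex(16)
        self._tokens.add(token)
        return token

    def validate(self, token):
        """The resource server checks the token before granting access."""
        return token in self._tokens

idp = IdentityProvider()
code = idp.authorize("alice", client_id="photo-app")
token = idp.exchange(code, client_id="photo-app")
print(idp.validate(token))  # True
```

Note that the user's password never reaches the third-party app: the app only ever sees the one-time code and the resulting token.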

Key Benefits

- Enhances user experience by eliminating multiple passwords
- Allows secure data access across platforms using tokens
- Balances accessibility and security

OAuth 2.0 has become the standard for authorization. It enables secure, convenient data sharing while protecting user accounts.

Top 12 Tips for API Security

- Use HTTPS
- Use OAuth2
- Use WebAuthn
- Use Leveled API Keys
- Authorization
- Rate Limiting
- API Versioning
- Whitelisting
- Check OWASP API Security Risks
- Use API Gateway
- Error Handling
- Input Validation

Securing REST APIs

It is important to make sure that only approved users and applications can access or make changes to resources in our API.

Here are some common ways to secure REST APIs:

1. ๐—•๐—ฎ๐˜€๐—ถ๐—ฐ ๐—”๐˜‚๐˜๐—ต๐—ฒ๐—ป๐˜๐—ถ๐—ฐ๐—ฎ๐˜๐—ถ๐—ผ๐—ป

This sends a username and password with each request to the API. It's straightforward, but not very secure unless used with an encrypted transport like HTTPS.

Good for simpler apps where advanced security is not critical. Should be combined with encrypted connections.
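For reference, the Basic scheme is nothing more than a base64-encoded header (RFC 7617), which is exactly why the encrypted connection is non-negotiable: base64 is encoding, not encryption.

```python
import base64

def basic_auth_header(username, password):
    """Build the HTTP Basic Authentication header for a request."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

print(basic_auth_header("alice", "s3cret"))
# {'Authorization': 'Basic YWxpY2U6czNjcmV0'}
```

Anyone who intercepts that header can decode the credentials instantly, so it must only ever travel over HTTPS.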

2. Token Authentication

This uses tokens, like JSON Web Tokens (JWT), that are exchanged between the client app and server. Login information is not sent with each request.

Better for more secure and scalable apps where not sending credentials each time is essential.
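A minimal HS256 JWT can be built and verified with the stdlib alone. This sketch only shows the header.payload.signature structure; in production, use a maintained JWT library that handles expiry, algorithms, and edge cases:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> bytes:
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def sign_jwt(payload: dict, secret: bytes) -> str:
    """Produce a compact header.payload.signature token (HS256)."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = b64url(hmac.new(secret, header + b"." + body, hashlib.sha256).digest())
    return b".".join([header, body, sig]).decode()

def verify_jwt(token: str, secret: bytes) -> dict:
    """Recompute the signature; reject the token if it doesn't match."""
    header, body, sig = token.encode().split(b".")
    expected = b64url(hmac.new(secret, header + b"." + body, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    return json.loads(base64.urlsafe_b64decode(body + b"=" * (-len(body) % 4)))

token = sign_jwt({"sub": "alice", "role": "admin"}, b"server-secret")
print(verify_jwt(token, b"server-secret"))  # {'sub': 'alice', 'role': 'admin'}
```

The server only needs its secret to validate any token it issued, which is what lets stateless API servers skip a credential lookup on every request.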

3. OpenID Connect and OAuth

These allow limited third-party access to user data without exposing passwords. OpenID Connect handles user authentication and OAuth handles authorization.

Perfect when third-party services need controlled access to user data, like when integrating with Google, Facebook, or Twitter.

4. ๐—”๐—ฃ๐—œ ๐—ž๐—ฒ๐˜† ๐—”๐˜‚๐˜๐—ต๐—ฒ๐—ป๐˜๐—ถ๐—ฐ๐—ฎ๐˜๐—ถ๐—ผ๐—ป

This gives unique keys to users or apps which are sent in request headers or query parameters. Simple to implement but may not be as robust as token or OAuth methods.

Good for basic access control when security needs are moderate. Allows access to specific API functionalities without complex user permissions.

Securing our API should be a top concern. The method chosen should match the sensitivity of the data and required protection level.

Reverse Proxy vs. API Gateway vs. Load Balancer

Modern websites and applications have complex architecture needs. Here we'll explore three key components:

🔹 Reverse Proxy: Acts as an intermediary between clients and backend servers. Key features include:
- Fetching and returning responses on behalf of backend servers
- Shielding sensitive infrastructure from external probing

🔹 API Gateway: Sits between clients and backend services, acting as a single entry point. It routes requests to the appropriate services. Useful for:
- Organizing communication between frontends and intricate backends
- Avoiding exposing all services publicly

🔹 Load Balancer: Distributes network traffic across multiple servers, preventing overload on any single resource. Crucial for:
- Managing high traffic loads without downtime

These represent powerful tools for building robust, secure, and scalable modern web stacks. They each serve distinct purposes, but frequently work together seamlessly.
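As a tiny illustration of the load balancer's core job, here is a round-robin sketch; the backend addresses are made up, and real balancers add health checks, weights, and connection tracking:

```python
import itertools

class RoundRobinBalancer:
    """Minimal round-robin load balancer: spreads requests evenly over a pool."""
    def __init__(self, backends):
        self._pool = itertools.cycle(backends)

    def route(self, request):
        backend = next(self._pool)
        return backend, request

lb = RoundRobinBalancer(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
print([lb.route(f"req-{i}")[0] for i in range(4)])
# ['10.0.0.1', '10.0.0.2', '10.0.0.3', '10.0.0.1']
```

Round robin is the simplest strategy; least-connections or latency-aware policies swap in behind the same `route` interface.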
