How many of them do you know?
--
Find a more detailed explanation here: https://lnkd.in/gibud-c6
--
Some of the terms are pretty self-explanatory, so I won't go through each of them individually:
- Gradient Descent, Normal Distribution, Sigmoid, Correlation, Cosine similarity, Naive Bayes, F1 score, ReLU, Softmax, MSE, MSE + L2 regularization, KMeans, Linear regression, SVM, Log loss.
Here are the remaining terms:
- MLE: Used to estimate the parameters of a statistical model by maximizing the likelihood of the observed data.
- Z-score: A standardized value that indicates how many standard deviations away a data point is from the mean.
- OLS (Ordinary Least Squares): A closed-form solution for linear regression obtained by minimizing the sum of squared residuals. Under a Gaussian noise assumption, it coincides with the MLE of the coefficients.
- Entropy: A measure of the uncertainty or randomness of a random variable. It is often utilized in decision trees and the t-SNE algorithm.
- Eigenvectors: Vectors whose direction is unchanged by a linear transformation; they are only scaled by the corresponding eigenvalue. The principal components in PCA are the eigenvectors of the data's covariance matrix.
- R2 (R-squared): It measures the proportion of variance explained by a regression model.
- KL divergence: Assesses how much information is lost when one distribution is used to approximate another distribution. It is used as a loss function in the t-SNE algorithm.
- SVD: A factorization technique that decomposes a matrix into three other matrices. It is fundamental in linear algebra for applications like dimensionality reduction.
- Lagrange multipliers: A mathematical technique for solving constrained optimization problems. For instance, to optimize an objective f(x) subject to constraints g(x)=0 and h(x)=0, Lagrange multipliers convert the problem into an unconstrained one on the Lagrangian L(x, λ, μ) = f(x) − λg(x) − μh(x). This technique appears in the derivation of PCA.
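To make OLS, MLE, and R² concrete, here is a minimal sketch using synthetic data I made up for illustration: the closed-form OLS solution (which is also the MLE of the coefficients under Gaussian noise) and the R² of the resulting fit.

```python
import numpy as np

# Hypothetical synthetic data: y = 2x + 1 + Gaussian noise
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)
y = 2 * x + 1 + rng.normal(0, 0.5, 100)

# OLS closed form: beta = (X^T X)^{-1} X^T y
# (also the MLE for the coefficients when the noise is Gaussian)
X = np.column_stack([np.ones_like(x), x])   # prepend an intercept column
beta = np.linalg.solve(X.T @ X, X.T @ y)

# R^2: proportion of variance explained by the model
y_hat = X @ beta
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1 - ss_res / ss_tot

print(beta)  # close to [1, 2]
print(r2)    # close to 1
```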
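Z-score, entropy, and KL divergence can each be computed in a couple of lines; the sample data and the two toy distributions below are arbitrary choices for illustration.

```python
import numpy as np

data = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])

# Z-score: how many standard deviations each point is from the mean
z = (data - data.mean()) / data.std()

# Entropy of a discrete distribution: H(p) = -sum p log p
p = np.array([0.5, 0.5])
entropy = -np.sum(p * np.log2(p))   # a fair coin has 1 bit of entropy

# KL divergence: D(p || q) = sum p log(p / q),
# the information lost when q is used to approximate p
q = np.array([0.9, 0.1])
kl = np.sum(p * np.log2(p / q))

print(z[0], entropy, kl)
```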
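The link between eigenvectors, SVD, and PCA can be checked numerically: the top eigenvector of the covariance matrix and the first right singular vector of the centered data matrix point in the same direction (up to sign). The 2-D data below is synthetic, stretched along one axis for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical 2-D data, stretched along one direction
X = rng.normal(size=(200, 2)) @ np.array([[3.0, 0.0], [1.0, 0.5]])
Xc = X - X.mean(axis=0)                 # PCA starts from centered data

# PCA via eigenvectors of the covariance matrix
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
pc1_eig = eigvecs[:, -1]                # top principal component

# The same direction from the SVD of the centered data matrix
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1_svd = Vt[0]

# The two unit vectors agree up to sign
aligned = abs(pc1_eig @ pc1_svd)
print(aligned)  # ~1.0
```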
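And a small sketch of Lagrange multipliers on a made-up toy problem (maximize f(x, y) = xy subject to x + y = 1), solved symbolically with sympy by setting the gradient of the Lagrangian to zero:

```python
import sympy as sp

# Toy problem: maximize f(x, y) = x*y subject to g(x, y) = x + y - 1 = 0
x, y, lam = sp.symbols('x y lam')
f = x * y
g = x + y - 1

# Lagrangian: L = f - lam * g; stationary points satisfy grad L = 0
L = f - lam * g
stationary = sp.solve([sp.diff(L, v) for v in (x, y, lam)],
                      [x, y, lam], dict=True)
print(stationary)  # x = y = 1/2 at the constrained maximum
```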
Over to you: Of course, this is not an all-encompassing list. What other mathematical definitions would you include here?