
Aneesh Sambu · 2 min read

In machine learning, a parametric algorithm is one that makes assumptions about the underlying distribution of the data. These assumptions typically take the form of a specific mathematical model, such as a linear regression model or a Gaussian distribution.

Parametric algorithms estimate the parameters of the model based on the training data, and then use these parameters to make predictions or decisions on new data. Because they make assumptions about the underlying distribution, parametric algorithms can be more efficient and require less data than non-parametric algorithms.
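To make the idea concrete, here is a minimal pure-Python sketch of the simplest parametric model: simple linear regression. The model form `y = w*x + b` is the parametric assumption; training reduces to estimating just the two parameters `w` and `b` from data (the function names are illustrative, not from any library).

```python
def fit_linear(xs, ys):
    """Estimate slope w and intercept b by ordinary least squares."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Closed-form least-squares estimates for the two parameters
    w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - w * mean_x
    return w, b

def predict(w, b, x):
    # Once the parameters are estimated, prediction ignores the training data
    return w * x + b

xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]       # generated from y = 2x + 1
w, b = fit_linear(xs, ys)
print(w, b)                     # → 2.0 1.0
print(predict(w, b, 10.0))      # → 21.0
```

Note that after fitting, the entire training set is summarized by two numbers; this is the sense in which parametric methods are data-efficient.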

Examples of parametric algorithms include Linear Regression, Logistic Regression, Naive Bayes, and Linear Discriminant Analysis (LDA). These algorithms are widely used for classification and regression tasks and can be effective across many applications. However, they are less flexible than non-parametric algorithms and can perform poorly when their underlying assumptions are not met.

Each parametric algorithm encodes its own assumption about the data:

- **Linear regression** assumes the relationship between the input variables and the output variable is linear.
- **Logistic regression** assumes the log-odds of the output class are a linear function of the inputs (equivalently, the class probability follows the logistic function).
- **Naive Bayes** assumes the input variables are conditionally independent given the output variable.
- **Linear Discriminant Analysis (LDA)** assumes the inputs within each class follow a multivariate normal distribution with a shared covariance matrix.

These assumptions let us estimate the parameters of the model from the training data and then use those parameters to make predictions or decisions on new data. If the assumptions are badly violated, however, the algorithm's performance will suffer.
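The LDA assumption can be sketched in one dimension: model each class as a Gaussian with its own mean but a shared (pooled) variance, estimate those parameters from the training data, and classify by picking the class with the highest log-posterior. This is an illustrative pure-Python sketch, not a production implementation; the function names are hypothetical.

```python
import math

def fit_lda_1d(xs, labels):
    """Estimate per-class means, a pooled variance, and class priors."""
    classes = sorted(set(labels))
    means = {c: sum(x for x, l in zip(xs, labels) if l == c) / labels.count(c)
             for c in classes}
    n = len(xs)
    # Pooled variance shared across classes -- the core LDA assumption
    var = sum((x - means[l]) ** 2 for x, l in zip(xs, labels)) / (n - len(classes))
    priors = {c: labels.count(c) / n for c in classes}
    return means, var, priors

def predict_lda_1d(x, means, var, priors):
    # With a shared variance, the discriminant is linear in x
    def score(c):
        return x * means[c] / var - means[c] ** 2 / (2 * var) + math.log(priors[c])
    return max(means, key=score)

xs = [1.0, 1.2, 0.8, 5.0, 5.2, 4.8]
labels = [0, 0, 0, 1, 1, 1]
means, var, priors = fit_lda_1d(xs, labels)
print(predict_lda_1d(1.1, means, var, priors))  # → 0
print(predict_lda_1d(4.9, means, var, priors))  # → 1
```

Because the variance is shared, the decision boundary between any two classes is a single threshold in x, which is exactly what makes the discriminant "linear".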