Baidu Encyclopedia version

Maximum likelihood estimation is a statistical method for finding the parameters of the probability density function associated with a sample set. The method was developed by the geneticist and statistician Sir Ronald Fisher between 1912 and 1922.

The Chinese term for "likelihood" (似然) is a classical rendering of "resemblance" or "plausibility"; in modern Chinese it simply means "possibility". A more plainly worded name for the method would therefore be "maximum possibility estimation".

The maximum likelihood method explicitly uses a probability model; its goal is to find the phylogenetic tree most likely to have produced the observed data. It is representative of fully statistics-based methods for phylogenetic tree reconstruction, and it takes into account the probability of every nucleotide substitution in each column of a sequence alignment.

For example, the probability of a transition is roughly three times that of a transversion. In a three-sequence alignment, if one column contains a C, a T, and a G, we have reason to believe that the sequences containing C and T are the more closely related pair. Because the common ancestor of the sequences under study is unknown, the probability calculation is complicated; it is complicated further by the fact that multiple substitutions may occur at one or more sites, and that not all sites are independent of one another. Nevertheless, objective criteria can be used to compute a probability for each site, and from these the probability of each possible tree relating the sequences. By definition, the tree with the highest overall likelihood is the one most likely to reflect the true phylogeny.
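The intuition above can be sketched in a toy model. The substitution probabilities below are illustrative numbers chosen only to respect the stated 3:1 transition-to-transversion ratio; `pairing_likelihood` is a hypothetical helper, not part of any real phylogenetics library.

```python
# Toy one-substitution model: a transition (purine<->purine or
# pyrimidine<->pyrimidine, e.g. C<->T) is taken to be about three times
# as likely as a transversion (e.g. C<->G). Numbers are illustrative.
P_TRANSITION = 0.3
P_TRANSVERSION = 0.1

def pairing_likelihood(base_a, base_b):
    """Probability that base_a and base_b arose from a common ancestor
    via a single substitution, under the toy model above."""
    purines, pyrimidines = {"A", "G"}, {"C", "T"}
    same_class = ({base_a, base_b} <= purines) or ({base_a, base_b} <= pyrimidines)
    return P_TRANSITION if same_class else P_TRANSVERSION

# Observed alignment column: C, T, G. Which pair is more likely sisters?
print(pairing_likelihood("C", "T"))  # 0.3 (transition)
print(pairing_likelihood("C", "G"))  # 0.1 (transversion)
```

Because the C/T pairing has the higher likelihood, a tree grouping those two sequences would be preferred for this column; a real analysis sums such per-site log-likelihoods over the whole alignment for every candidate tree.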



Wikipedia version

In statistics, maximum likelihood estimation (MLE) is a method of estimating the parameters of a statistical model given observations: it seeks the parameter values that maximize the likelihood function. The resulting estimate is called the maximum likelihood estimate, also abbreviated MLE.

Maximum likelihood estimation is used in a wide range of statistical analyses. For example, suppose we are interested in the heights of adult female penguins but cannot measure the height of every penguin in the population (due to cost or time constraints). Assuming the heights are normally distributed with some unknown mean and variance, MLE can estimate the mean and variance using only the heights of a sample of the population. It does so by treating the mean and variance as parameters and finding the parameter values that make the observed heights most probable under the normal model.

From the perspective of Bayesian inference, MLE is a special case of maximum a posteriori (MAP) estimation in which a uniform prior distribution is assumed over the parameters. From the frequentist point of view, on the other hand, MLE is one of several methods for obtaining parameter estimates without using a prior distribution: priors are avoided by making no probabilistic statements about the parameters themselves, only about their estimates, whose properties are fully defined by the observations and the statistical model.
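The MAP connection can be verified numerically: a uniform prior adds the same constant to every candidate's log-posterior, so the argmax is unchanged. The sketch below assumes a toy normal model with known variance 1 and a grid search over the mean; all names and data are illustrative.

```python
import math

def log_likelihood(data, mu):
    # Normal model with known variance 1 (illustrative assumption);
    # constants independent of mu are dropped.
    return -0.5 * sum((x - mu) ** 2 for x in data)

data = [1.2, 0.8, 1.5, 0.9, 1.1]          # toy observations
grid = [i / 100 for i in range(0, 301)]    # candidate means in [0, 3]

# MLE: maximize the likelihood alone.
mle = max(grid, key=lambda mu: log_likelihood(data, mu))

# MAP with a uniform prior over [0, 3]: the constant log-prior shifts
# every candidate's score equally, so the maximizer is the same.
log_prior = math.log(1.0 / 3.0)
map_est = max(grid, key=lambda mu: log_likelihood(data, mu) + log_prior)

assert mle == map_est  # uniform prior: MAP coincides with MLE
```

With a non-uniform prior the two estimates would generally differ, which is exactly the sense in which MLE is the uniform-prior special case of MAP.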


EasyAI official WeChat account