Research at University of Copenhagen - University of Copenhagen


Distribution Learning in Evolutionary Strategies and Restricted Boltzmann Machines

Publication: Book/anthology/thesis/report › Ph.D. thesis › Research

Standard

Distribution Learning in Evolutionary Strategies and Restricted Boltzmann Machines. / Krause, Oswin.

Department of Computer Science, Faculty of Science, University of Copenhagen, 2015. 121 p.


Harvard

Krause, O 2015, Distribution Learning in Evolutionary Strategies and Restricted Boltzmann Machines. Department of Computer Science, Faculty of Science, University of Copenhagen.

APA

Krause, O. (2015). Distribution Learning in Evolutionary Strategies and Restricted Boltzmann Machines. Department of Computer Science, Faculty of Science, University of Copenhagen.

Vancouver

Krause O. Distribution Learning in Evolutionary Strategies and Restricted Boltzmann Machines. Department of Computer Science, Faculty of Science, University of Copenhagen, 2015. 121 p.

Author

Krause, Oswin. / Distribution Learning in Evolutionary Strategies and Restricted Boltzmann Machines. Department of Computer Science, Faculty of Science, University of Copenhagen, 2015. 121 p.

Bibtex

@phdthesis{b279ab3fdb994894958b95106c7f4401,
title = "Distribution Learning in Evolutionary Strategies and Restricted Boltzmann Machines",
abstract = "The thesis is concerned with learning distributions in the two settings of Evolutionary Strategies (ESs) and Restricted Boltzmann Machines (RBMs). In both cases, the distributions are learned from samples, albeit with different goals. Evolutionary Strategies are concerned with finding an optimum of an objective function for which the gradient is not available. The algorithm samples function values from a search distribution and adapts the parameters of the distribution during the optimization process. In the thesis, new update schemes for the covariance matrix used by the CMA-ES are investigated. An update rule using a triangular Cholesky factor is introduced, and the additive covariance matrix update is replaced by a multiplicative rule. Experiments show that the proposed methods improve the performance of the CMA-ES either computationally or by allowing simpler handling of constraints. The second part of the thesis is concerned with RBMs that are fitted to a dataset using maximum log-likelihood. As the computation of the distribution's normalization constant is intractable, Markov Chain Monte Carlo methods are required to estimate and follow the log-likelihood gradient. The thesis investigates the approximation properties of stacked RBMs used to model the distribution of real-valued data. Further, estimation algorithms for the normalization constant of an RBM are compared, and a theoretical framework is introduced from which a number of well-known algorithms can be derived. Lastly, a method based on Population Monte Carlo is introduced which uses importance sampling to reduce the bias of Contrastive Divergence with only minor computational overhead. It is shown that in some cases this can lead to large improvements, while in problems with more complex distributions the variance of the estimator can become problematic.",
author = "Oswin Krause",
year = "2015",
language = "English",
publisher = "Department of Computer Science, Faculty of Science, University of Copenhagen",

}
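The abstract describes an Evolution Strategy that samples candidates from a search distribution and adapts the distribution's parameters during optimization. As a rough illustration of that sampling loop (not the thesis's CMA-ES: this toy sketch uses an isotropic Gaussian with a simple 1/5th-success-style step-size rule and omits covariance adaptation entirely; all names and parameters are made up):

```python
import random

def es_minimize(f, x0, sigma=1.0, lam=10, iters=200, seed=0):
    """Toy (1, lambda) evolution strategy with multiplicative step-size
    adaptation. Illustrative only; the thesis studies covariance matrix
    adaptation (CMA-ES), which this sketch omits."""
    rng = random.Random(seed)
    mean = list(x0)
    for _ in range(iters):
        # Sample lam offspring from an isotropic Gaussian search distribution.
        offspring = [[m + sigma * rng.gauss(0.0, 1.0) for m in mean]
                     for _ in range(lam)]
        offspring.sort(key=f)
        best = offspring[0]
        # Adapt the step size: grow it on success, shrink it on failure.
        sigma *= 1.1 if f(best) < f(mean) else 0.9
        mean = best  # comma selection: the new mean is the best offspring
    return mean, f(mean)

# Minimize the sphere function as a sanity check.
sphere = lambda x: sum(v * v for v in x)
x, fx = es_minimize(sphere, [3.0, -2.0])
```

The multiplicative step-size rule here is the classic heuristic; the thesis's contribution is replacing the *additive* covariance update of CMA-ES with a multiplicative one and working with a triangular Cholesky factor.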

RIS

TY - BOOK

T1 - Distribution Learning in Evolutionary Strategies and Restricted Boltzmann Machines

AU - Krause, Oswin

PY - 2015

Y1 - 2015

N2 - The thesis is concerned with learning distributions in the two settings of Evolutionary Strategies (ESs) and Restricted Boltzmann Machines (RBMs). In both cases, the distributions are learned from samples, albeit with different goals. Evolutionary Strategies are concerned with finding an optimum of an objective function for which the gradient is not available. The algorithm samples function values from a search distribution and adapts the parameters of the distribution during the optimization process. In the thesis, new update schemes for the covariance matrix used by the CMA-ES are investigated. An update rule using a triangular Cholesky factor is introduced, and the additive covariance matrix update is replaced by a multiplicative rule. Experiments show that the proposed methods improve the performance of the CMA-ES either computationally or by allowing simpler handling of constraints. The second part of the thesis is concerned with RBMs that are fitted to a dataset using maximum log-likelihood. As the computation of the distribution's normalization constant is intractable, Markov Chain Monte Carlo methods are required to estimate and follow the log-likelihood gradient. The thesis investigates the approximation properties of stacked RBMs used to model the distribution of real-valued data. Further, estimation algorithms for the normalization constant of an RBM are compared, and a theoretical framework is introduced from which a number of well-known algorithms can be derived. Lastly, a method based on Population Monte Carlo is introduced which uses importance sampling to reduce the bias of Contrastive Divergence with only minor computational overhead. It is shown that in some cases this can lead to large improvements, while in problems with more complex distributions the variance of the estimator can become problematic.

AB - The thesis is concerned with learning distributions in the two settings of Evolutionary Strategies (ESs) and Restricted Boltzmann Machines (RBMs). In both cases, the distributions are learned from samples, albeit with different goals. Evolutionary Strategies are concerned with finding an optimum of an objective function for which the gradient is not available. The algorithm samples function values from a search distribution and adapts the parameters of the distribution during the optimization process. In the thesis, new update schemes for the covariance matrix used by the CMA-ES are investigated. An update rule using a triangular Cholesky factor is introduced, and the additive covariance matrix update is replaced by a multiplicative rule. Experiments show that the proposed methods improve the performance of the CMA-ES either computationally or by allowing simpler handling of constraints. The second part of the thesis is concerned with RBMs that are fitted to a dataset using maximum log-likelihood. As the computation of the distribution's normalization constant is intractable, Markov Chain Monte Carlo methods are required to estimate and follow the log-likelihood gradient. The thesis investigates the approximation properties of stacked RBMs used to model the distribution of real-valued data. Further, estimation algorithms for the normalization constant of an RBM are compared, and a theoretical framework is introduced from which a number of well-known algorithms can be derived. Lastly, a method based on Population Monte Carlo is introduced which uses importance sampling to reduce the bias of Contrastive Divergence with only minor computational overhead. It is shown that in some cases this can lead to large improvements, while in problems with more complex distributions the variance of the estimator can become problematic.

UR - https://soeg.kb.dk/permalink/45KBDK_KGL/fbp0ps/alma99122849563205763

M3 - Ph.D. thesis

BT - Distribution Learning in Evolutionary Strategies and Restricted Boltzmann Machines

PB - Department of Computer Science, Faculty of Science, University of Copenhagen

ER -
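The second half of the abstract concerns fitting RBMs by following an approximate log-likelihood gradient. A minimal sketch of plain Contrastive Divergence with a single Gibbs step (CD-1) for a binary RBM, using made-up sizes and names, and without the thesis's Population Monte Carlo importance-sampling bias correction:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def cd1_step(W, b, c, v0, lr, rng):
    """One Contrastive Divergence (CD-1) update for a binary RBM.
    W[i][j] couples visible unit i to hidden unit j; b, c are the
    visible and hidden biases. Illustrative sketch only."""
    nv, nh = len(b), len(c)
    # Positive phase: hidden probabilities given the data vector v0.
    ph0 = [sigmoid(c[j] + sum(v0[i] * W[i][j] for i in range(nv)))
           for j in range(nh)]
    h0 = [1 if rng.random() < p else 0 for p in ph0]
    # Negative phase: one Gibbs step back to visible, then to hidden.
    pv1 = [sigmoid(b[i] + sum(h0[j] * W[i][j] for j in range(nh)))
           for i in range(nv)]
    v1 = [1 if rng.random() < p else 0 for p in pv1]
    ph1 = [sigmoid(c[j] + sum(v1[i] * W[i][j] for i in range(nv)))
           for j in range(nh)]
    # Approximate gradient: positive minus negative statistics.
    for i in range(nv):
        for j in range(nh):
            W[i][j] += lr * (v0[i] * ph0[j] - v1[i] * ph1[j])
    for i in range(nv):
        b[i] += lr * (v0[i] - v1[i])
    for j in range(nh):
        c[j] += lr * (ph0[j] - ph1[j])

rng = random.Random(0)
nv, nh = 4, 3
W = [[rng.gauss(0.0, 0.1) for _ in range(nh)] for _ in range(nv)]
b, c = [0.0] * nv, [0.0] * nh
data = [[1, 1, 0, 0], [0, 0, 1, 1]]
for _ in range(100):
    for v in data:
        cd1_step(W, b, c, v, 0.1, rng)
```

CD-1 truncates the Markov chain after one step, which biases the gradient estimate; the method the abstract describes reweights samples by importance sampling to reduce exactly this bias.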

ID: 153761015