Mojmír Mutný

PhD Student

ETH Zurich

I am a PhD student at ETH Zurich under the supervision of Andreas Krause in the Learning and Adaptive Systems group. My current research focuses on modern instances of Experimental Design, a branch of statistics concerned with selecting the most informative experiments to infer an unknown statistical quantity.

I am also interested in, and have worked on, Bayesian optimization, kernel methods, sensor selection, control, bandit algorithms, and convex optimization. Some of my algorithms have found applications in machine calibration, spatial analysis, and directed evolution.
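To make the experimental design viewpoint concrete, here is a minimal toy sketch (not code from any publication listed below) of greedy D-optimal selection for a Bayesian linear model: each step picks the candidate experiment that most increases the information about the unknown weights. The function name, the unit Gaussian prior, and the noise model are all illustrative assumptions.

    import numpy as np

    def greedy_d_optimal(candidates, k, noise_var=1.0):
        """Greedily select k experiments maximizing log det of the posterior precision."""
        d = candidates.shape[1]
        A = np.eye(d)  # posterior precision; assumes a N(0, I) prior on the weights
        chosen = []
        for _ in range(k):
            # Information gain of adding experiment x (matrix determinant lemma):
            # log det(A + x x^T / s^2) - log det(A) = log(1 + x^T A^{-1} x / s^2)
            A_inv = np.linalg.inv(A)
            gains = np.log1p(np.einsum('ij,jk,ik->i', candidates, A_inv, candidates) / noise_var)
            best = int(np.argmax(gains))
            chosen.append(best)
            A += np.outer(candidates[best], candidates[best]) / noise_var
        return chosen

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))      # 100 candidate experiments in 5 dimensions
    print(greedy_d_optimal(X, k=10))   # indices of the most informative experiments

The publications below study far richer versions of this idea, e.g. kernelized models, Markov chains, and point processes.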

Interests

  • Mathematical Optimization
  • Machine Learning
  • Active Learning
  • Experimental Design
  • Bandit Algorithms

Education

  • PhD in Computer Science, 2018-present

    ETH Zurich

  • MSc in Computational Science, 2017

    ETH Zurich

  • BSc in Mathematical Physics, 2015

    University of Edinburgh

Publications

Diversified Sampling for Batched Bayesian Optimization with Determinantal Point Processes

In this work we introduced DPP-BBO, a natural and easily applicable framework for enhancing batch diversity in BBO algorithms which …

Experimental Design of Linear Functionals in Reproducing Kernel Hilbert Spaces

Optimal experimental design seeks to determine the most informative allocation of experiments to infer an unknown statistical …

Tuning Particle Accelerators with Safety Constraints using Bayesian Optimization

Tuning machine parameters of particle accelerators is a repetitive and time-consuming task that is challenging to automate. While many …

Active Exploration via Experiment Design in Markov Chains

A key challenge in science and engineering is to design experiments to learn about some unknown quantity of interest. Classical …

Learning Controllers for Unstable Linear Quadratic Regulators from a Single Trajectory

We present the first approach for learning, from a single trajectory, a linear quadratic regulator (LQR), even for unstable …

Data Summarization via Bilevel Optimization

The increasing availability of massive data sets poses a series of challenges for machine learning. Prominent among these is the need …

Efficient Pure Exploration for Combinatorial Bandits with Semi-Bandit Feedback

Combinatorial bandits with semi-bandit feedback generalize multi-armed bandits, where the agent chooses sets of arms and observes a …

No-regret Algorithms for Capturing Events in Poisson Point Processes

Inhomogeneous Poisson point processes are widely used models of event occurrences. We address adaptive sensing of Poisson point …

MakeSense: Automated Sensor Design for Proprioceptive Soft Robots

Soft robots have applications in safe human–robot interactions, manipulation of fragile objects, and locomotion in challenging and …

Sensing Cox Processes via Posterior Sampling and Positive Bases

We study adaptive sensing of Cox point processes, a widely used model from spatial statistics. We introduce three tasks: maximization …

Coresets via Bilevel Optimization for Continual Learning and Streaming

Coresets are small data summaries that are sufficient for model training. They can be maintained online, enabling efficient handling of …

Experimental Design for Optimization of Orthogonal Projection Pursuit Models

Bayesian optimization and kernelized bandit algorithms are widely used techniques for sequential black box function optimization with …

Convergence Analysis of Block Coordinate Algorithms with Determinantal Sampling

We analyze the convergence rate of the randomized Newton-like method introduced by Qu et al. (2016) for smooth and convex objectives, …

Adaptive and Safe Bayesian Optimization in High Dimensions via One-Dimensional Subspaces

Bayesian optimization is known to be difficult to scale to high dimensions, because the acquisition step requires solving a non-convex …

Bayesian Optimization for Fast and Safe Parameter Tuning of SwissFEL

Parameter tuning is a notoriously time-consuming task in accelerator facilities. As a tool for global optimization with noisy …

Efficient High Dimensional Bayesian Optimization with Additivity and Quadrature Fourier Features

We develop an efficient and provably no-regret Bayesian optimization (BO) algorithm for optimization of black-box functions in high …

Parallel Stochastic Newton Method

We propose a parallel stochastic Newton method (PSN) for minimizing unconstrained smooth convex functions. We analyze the method in the …

Stochastic Second-Order Optimization via von Neumann Series

A stochastic iterative algorithm approximating second-order information using von Neumann series is discussed. We present convergence …

Code

Quadrature Fourier Features (QFF) for Python

This repository includes the code used in the paper: Mojmir Mutny & Andreas Krause, “Efficient High Dimensional Bayesian Optimization with Additivity and Quadrature Fourier Features”, NIPS 2018. It provides an efficient finite-basis approximation for RBF and Matérn kernels in low dimensions.
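Below is a minimal sketch of the underlying idea for the one-dimensional RBF kernel; it is illustrative only, and the function name qff_features and the lengthscale convention k(x, y) = exp(-(x - y)^2 / (2 gamma^2)) are assumptions, not the repository's actual API. By Bochner's theorem the kernel equals the expectation of cos(omega (x - y)) under a Gaussian spectral measure, and Gauss-Hermite quadrature replaces the Monte Carlo sampling of omega used in random Fourier features.

    import numpy as np

    def qff_features(x, m=64, gamma=1.0):
        """Map points x (shape (n,)) to 2*m quadrature Fourier features."""
        # Gauss-Hermite rule: int exp(-t^2) f(t) dt ~= sum_i w_i f(t_i)
        t, w = np.polynomial.hermite.hermgauss(m)
        # Change of variables: the RBF spectral measure N(0, 1/gamma^2)
        # matches the Gauss-Hermite weight with omega = sqrt(2) * t / gamma.
        omega = np.sqrt(2.0) * t / gamma
        scale = np.sqrt(w / np.sqrt(np.pi))          # per-node feature weights
        proj = np.outer(x, omega)                    # (n, m) projections
        return np.hstack([scale * np.cos(proj), scale * np.sin(proj)])

    # The feature inner product approximates the kernel matrix:
    x = np.linspace(-2.0, 2.0, 5)
    Phi = qff_features(x)
    K_approx = Phi @ Phi.T
    K_exact = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2)
    print(np.max(np.abs(K_approx - K_exact)))       # tiny error for modest m

Unlike random Fourier features, the quadrature nodes are deterministic, which is what enables the fast approximation-error decay in low dimensions analyzed in the paper.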