Mojmír Mutný

Researcher

ETH Zurich

I am a researcher at ETH Zurich in the Learning and Adaptive Systems group of Andreas Krause. My current research focuses on modern instances of experimental design: searching for the most informative experiments with which to infer an unknown statistical quantity.

I am also interested in, and have worked on, Bayesian optimization, kernel methods, sensor selection, control, bandit algorithms, and convex optimization. Some of my algorithms have found applications in machine calibration, spatial analysis, and protein design.

Interests

  • Mathematical Optimization
  • Machine Learning
  • Active Learning
  • Experimental Design
  • Bandit Algorithms
  • Protein Design

Education

  • PhD in Computer Science, 2018-2024

    ETH Zurich

  • MSc in Computational Science, 2017

    ETH Zurich

  • BSc in Mathematical Physics, 2015

    University of Edinburgh

Publications

Active Exploration via Experiment Design in Markov Chains

A key challenge in science and engineering is to design experiments to learn about some unknown quantity of interest. Classical …

Experimental Design of Linear Functionals in Reproducing Kernel Hilbert Spaces

Optimal experimental design seeks to determine the most informative allocation of experiments to infer an unknown statistical …

Diversified Sampling for Batched Bayesian Optimization with Determinantal Point Processes

In this work we introduced DPP-BBO, a natural and easily applicable framework for enhancing batch diversity in BBO algorithms which …

Sensing Cox Processes via Posterior Sampling and Positive Bases

We study adaptive sensing of Cox point processes, a widely used model from spatial statistics. We introduce three tasks: maximization …

No-regret Algorithms for Capturing Events in Poisson Point Processes

Inhomogeneous Poisson point processes are widely used models of event occurrences. We address adaptive sensing of Poisson point …

Efficient Pure Exploration for Combinatorial Bandits with Semi-Bandit Feedback

Combinatorial bandits with semi-bandit feedback generalize multi-armed bandits, where the agent chooses sets of arms and observes a …

Learning Controllers for Unstable Linear Quadratic Regulators from a Single Trajectory

We present the first approach for learning, from a single trajectory, a linear quadratic regulator (LQR), even for unstable …

Data Summarization via Bilevel Optimization

The increasing availability of massive data sets poses a series of challenges for machine learning. Prominent among these is the need …

MakeSense: Automated Sensor Design for Proprioceptive Soft Robots

Soft robots have applications in safe human–robot interactions, manipulation of fragile objects, and locomotion in challenging …

Convergence Analysis of Block Coordinate Algorithms with Determinantal Sampling

We analyze the convergence rate of the randomized Newton-like method introduced by Qu et al. (2016) for smooth and convex objectives, …

Coresets via Bilevel Optimization for Continual Learning and Streaming

Coresets are small data summaries that are sufficient for model training. They can be maintained online, enabling efficient handling of …

Experimental Design for Optimization of Orthogonal Projection Pursuit Models

Bayesian optimization and kernelized bandit algorithms are widely used techniques for sequential black box function optimization with …

Adaptive and Safe Bayesian Optimization in High Dimensions via One-Dimensional Subspaces

Bayesian optimization is known to be difficult to scale to high dimensions, because the acquisition step requires solving a non-convex …

Bayesian Optimization for Fast and Safe Parameter Tuning of SwissFEL

Parameter tuning is a notoriously time-consuming task in accelerator facilities. As a tool for global optimization with noisy …

Efficient High Dimensional Bayesian Optimization with Additivity and Quadrature Fourier Features

We develop an efficient and provably no-regret Bayesian optimization (BO) algorithm for optimization of black-box functions in high …

Parallel Stochastic Newton Method

We propose a parallel stochastic Newton method (PSN) for minimizing unconstrained smooth convex functions. We analyze the method in the …

Code

Sensing Poisson Point Processes for Python (sensepy)

A Python package implementing adaptive sensing algorithms for inhomogeneous Poisson point processes.

Quadrature Fourier Features (QFF) for Python

This repository includes the code used in the paper: Mojmir Mutny & Andreas Krause, “Efficient High Dimensional Bayesian Optimization with Additivity and Quadrature Fourier Features”, NIPS 2018. It provides an efficient finite basis approximation for RBF and Matérn kernels in low dimensions.
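To illustrate the idea behind such finite basis approximations, below is a minimal NumPy sketch of quadrature Fourier features for the one-dimensional RBF kernel, built from Gauss-Hermite quadrature of the kernel's spectral density. The function name and parameters are illustrative only and do not reflect the repository's actual API.

```python
import numpy as np

def qff_features(x, m=32, lengthscale=1.0):
    """Quadrature Fourier features for the 1-D RBF kernel (illustrative sketch).

    Returns a feature matrix Phi such that Phi @ Phi.T approximates the
    kernel matrix K with K[i, j] = exp(-(x_i - x_j)^2 / (2 * lengthscale^2)).
    """
    # Gauss-Hermite rule: integral of e^{-t^2} f(t) dt ~ sum_i w_i f(t_i)
    t, w = np.polynomial.hermite.hermgauss(m)
    # Frequencies obtained by substituting omega = sqrt(2) t / lengthscale
    # into the Gaussian spectral density of the RBF kernel (Bochner's theorem)
    omega = np.sqrt(2.0) * t / lengthscale
    scale = np.sqrt(w / np.sqrt(np.pi))       # square roots of quadrature weights
    xw = np.outer(x, omega)                   # shape (n, m)
    # Stacking weighted cos and sin features reproduces cos(omega * (x - y))
    return np.hstack([scale * np.cos(xw), scale * np.sin(xw)])

# Compare the feature approximation against the exact RBF kernel matrix
x = np.linspace(-1.0, 1.0, 5)
Phi = qff_features(x, m=64, lengthscale=0.5)
K_approx = Phi @ Phi.T
K_exact = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * 0.5 ** 2))
print(np.max(np.abs(K_approx - K_exact)))    # maximum approximation error (tiny)
```

For smooth kernels in low dimensions, the quadrature error decays very quickly in the number of nodes, which is what makes such deterministic feature maps attractive compared to random Fourier features.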