Control Theory and Experimental Design in Diffusion Processes

13 Oct 2012 · Giles Hooker, Kevin K. Lin, Bruce Rogers

This paper considers the problem of designing time-dependent, real-time control policies for controllable nonlinear diffusion processes, with the goal of obtaining maximally informative observations about parameters of interest. More precisely, we maximize the expected Fisher information for the parameter obtained over the duration of the experiment, conditional on observations made up to that time. We propose to accomplish this with a two-step strategy: when the full state vector of the diffusion process is observable continuously, we formulate this as an optimal control problem and apply numerical techniques from stochastic optimal control to solve it. When observations are incomplete, infrequent, or noisy, we propose using standard filtering techniques to first estimate the state of the system, and then apply the optimal control policy using the posterior expectation of the state. We assess the effectiveness of these methods in three settings: a paradigmatic bistable model from statistical physics, a model of action potential generation in neurons, and a model of a simple ecological system.
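The sketch below illustrates the second step described in the abstract, under simplifying assumptions: a hidden double-well diffusion (the bistable example mentioned above) is observed infrequently with noise, a bootstrap particle filter tracks the state, and the control is evaluated at the posterior mean. All names (`drift`, `policy`, the noise levels) and the sign-based control rule are illustrative placeholders; the paper instead computes the policy numerically by solving a stochastic optimal control (Fisher-information-maximizing) problem.

```python
import numpy as np

# Hypothetical model: controlled double-well diffusion (illustrative, not the paper's code)
#   dX_t = (theta * X_t - X_t**3 + u_t) dt + sigma dW_t
# Parameter of interest: theta.  Under continuous observation, Fisher information
# accrues at rate (d drift / d theta)^2 / sigma^2 = X_t^2 / sigma^2.

theta_true = 1.0
sigma = 0.5        # diffusion coefficient
obs_noise = 0.3    # std of the discrete, noisy observations
dt = 0.01          # integration step
obs_every = 25     # observe every 25 steps (infrequent observations)
T_steps = 5000
u_max = 1.0

rng = np.random.default_rng(0)

def drift(x, u, theta):
    return theta * x - x**3 + u

def policy(x_est):
    """Stand-in control rule: push the state away from the origin, where the
    information rate x^2 / sigma^2 vanishes.  The paper uses the numerical
    solution of a stochastic optimal control (HJB) problem instead."""
    return u_max * np.sign(x_est) if x_est != 0 else u_max

# Bootstrap particle filter for state estimation between observations
# (the filter would normally use a current parameter estimate; theta_true is
# plugged in here purely to keep the sketch short).
n_particles = 500
particles = rng.normal(0.0, 1.0, n_particles)

x = 1.0               # true (hidden) state
x_est = float(np.mean(particles))
fisher_info = 0.0     # running Monte Carlo estimate of accumulated information

for k in range(T_steps):
    u = policy(x_est)                              # control from posterior mean

    # propagate true state and particles (Euler-Maruyama)
    x += drift(x, u, theta_true) * dt + sigma * np.sqrt(dt) * rng.normal()
    particles += drift(particles, u, theta_true) * dt \
                 + sigma * np.sqrt(dt) * rng.normal(size=n_particles)

    # accumulate Fisher information along the true trajectory
    fisher_info += (x**2 / sigma**2) * dt

    # assimilate a noisy observation every `obs_every` steps
    if k % obs_every == 0:
        y = x + obs_noise * rng.normal()
        w = np.exp(-0.5 * ((y - particles) / obs_noise) ** 2)
        w /= w.sum()
        idx = rng.choice(n_particles, size=n_particles, p=w)   # resample
        particles = particles[idx]

    x_est = float(np.mean(particles))

print(f"accumulated Fisher information estimate: {fisher_info:.2f}")
```

Swapping the placeholder `policy` for a lookup into a precomputed optimal-control grid would recover the plug-in structure described in the abstract, where the policy solved under full observation is applied at the filter's posterior expectation.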

