A Near-Optimal Control Method for Stochastic Boolean Networks
DOI: https://doi.org/10.30707/LiB7.1.1647875326.011975

Keywords: Stochastic modeling, Boolean Networks, MDP, Optimal Control, GRN

Abstract
One of the ultimate goals of computational biology and bioinformatics is to develop control strategies that lead to efficient medical treatments. One step towards this goal is to develop methods for driving the state of a cell into a new, desirable state. Using a stochastic modeling framework generalized from Boolean Networks, we propose a computationally efficient method that determines sequential combinations of network perturbations that induce the transition of a cell towards a new predefined state. The method requires a set of possible control actions as input; each element of this set represents the silencing of a gene or the disruption of an interaction between two molecules. An optimal control policy, defined as the best intervention at each state of the system, can be obtained using the theory of Markov decision processes. However, such algorithms are computationally prohibitive for models with tens of nodes. The proposed method generates a sequence of actions that approximates the optimal control policy, with a computational cost that does not depend on the size of the state space of the system. The method is validated on published models where control targets have been identified. Our C++ code is publicly available on GitHub at https://github.com/boaguilar/SDDScontrol.
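To illustrate why exact optimal control becomes prohibitive as the state space grows, the following is a minimal, generic value-iteration sketch in C++ for a toy MDP. It is not the SDDScontrol implementation or the authors' method; all names and numerical values (numStates, numActions, P, reward, gamma) are hypothetical placeholders. The point is that each sweep of value iteration visits every state, so the cost scales with |S| = 2^n for a Boolean network of n nodes.

```cpp
// Generic value-iteration sketch for a toy MDP (illustrative only; not the
// SDDScontrol code). All quantities below are hypothetical placeholders.
#include <cstdio>
#include <vector>
#include <cmath>
#include <algorithm>

int main() {
    const int numStates  = 4;     // stands in for 2^n states of a Boolean network with n nodes
    const int numActions = 2;     // e.g., "no intervention" vs. "silence gene 1"
    const double gamma   = 0.9;   // discount factor
    const double tol     = 1e-6;  // convergence tolerance

    // P[a][s][s2]: probability of moving from state s to s2 under action a (toy dynamics)
    std::vector<std::vector<std::vector<double>>> P(numActions,
        std::vector<std::vector<double>>(numStates, std::vector<double>(numStates, 0.0)));
    for (int s = 0; s < numStates; ++s) {
        P[0][s][std::max(s - 1, 0)]             = 0.8;  P[0][s][s] = 0.2;  // drift toward state 0
        P[1][s][std::min(s + 1, numStates - 1)] = 0.8;  P[1][s][s] = 0.2;  // drift toward last state
    }

    // reward[s] = 1 if s is the desired cell state, 0 otherwise
    std::vector<double> reward(numStates, 0.0);
    reward[numStates - 1] = 1.0;

    std::vector<double> V(numStates, 0.0);
    std::vector<int> policy(numStates, 0);

    // Value iteration: every sweep loops over all states and actions,
    // which is what makes the exact policy intractable for large state spaces.
    double delta;
    do {
        delta = 0.0;
        for (int s = 0; s < numStates; ++s) {
            double best = -1e18;
            int bestA = 0;
            for (int a = 0; a < numActions; ++a) {
                double q = reward[s];
                for (int s2 = 0; s2 < numStates; ++s2)
                    q += gamma * P[a][s][s2] * V[s2];
                if (q > best) { best = q; bestA = a; }
            }
            delta = std::max(delta, std::fabs(best - V[s]));
            V[s] = best;
            policy[s] = bestA;
        }
    } while (delta > tol);

    for (int s = 0; s < numStates; ++s)
        std::printf("state %d: V = %.3f, best action = %d\n", s, V[s], policy[s]);
    return 0;
}
```

In this toy setting the optimal policy is recovered in a few sweeps; the abstract's point is that the same computation over 2^n states is infeasible for networks of tens of nodes, which motivates the approximate sequential-action method.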