Roberto Capobianco, Varun Kompella, James Ault, Guni Sharon, Stacy Jong, Spencer Fox, Lauren Meyers, Peter R. Wurman, Peter Stone: Agent-Based Markov Modeling for Improved COVID-19 Mitigation Policies. In: Journal of Artificial Intelligence Research (JAIR), vol. 71, pp. 953–992, 2021.

Abstract

The year 2020 saw the COVID-19 virus lead to one of the worst global pandemics in history. As a result, governments around the world have been faced with the challenge of protecting public health while keeping the economy running to the greatest extent possible. Epidemiological models provide insight into the spread of these types of diseases and predict the effects of possible intervention policies. However, to date, even the most data-driven intervention policies rely on heuristics. In this paper, we study how reinforcement learning (RL) and Bayesian inference can be used to optimize mitigation policies that minimize economic impact without overwhelming hospital capacity. Our main contributions are (1) a novel agent-based pandemic simulator which, unlike traditional models, is able to model fine-grained interactions among people at specific locations in a community; (2) an RL-based methodology for optimizing fine-grained mitigation policies within this simulator; and (3) a Hidden Markov Model for predicting infected individuals based on partial observations regarding test results, presence of symptoms, and past physical contacts.
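To give a sense of how an HMM can infer infection status from partial observations, the sketch below runs forward filtering on a toy two-state chain (healthy/infected) with noisy test results. This is purely illustrative: the states, transition probabilities, and emission probabilities here are invented for demonstration and are not the model or parameters used in the paper.

```python
# Toy HMM forward filter: estimate P(infected) from a stream of noisy
# test results. All probabilities below are made up for illustration.

# Hidden states: 0 = healthy, 1 = infected
transition = [[0.95, 0.05],   # P(next state | currently healthy)
              [0.10, 0.90]]   # P(next state | currently infected)

# Observations: 0 = negative test, 1 = positive test
emission = [[0.98, 0.02],     # P(observation | healthy)
            [0.20, 0.80]]     # P(observation | infected)

def forward_filter(observations, prior=(0.99, 0.01)):
    """Return P(infected | observations so far) after each time step."""
    belief = list(prior)
    estimates = []
    for obs in observations:
        # Predict: propagate the belief through the transition model.
        predicted = [sum(belief[i] * transition[i][j] for i in range(2))
                     for j in range(2)]
        # Update: weight each state by the likelihood of the observation.
        unnorm = [predicted[j] * emission[j][obs] for j in range(2)]
        total = sum(unnorm)
        belief = [u / total for u in unnorm]
        estimates.append(belief[1])
    return estimates

# P(infected) stays low through negative tests, then jumps after positives.
print(forward_filter([0, 0, 1, 1]))
```

The same predict/update recursion extends to richer observation models (symptoms, contact histories) by enlarging the emission distribution, which is the spirit of the paper's third contribution.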

BibTeX

@article{capobianco2021covid,
title = {Agent-Based Markov Modeling for Improved COVID-19 Mitigation Policies},
author = {Roberto Capobianco and Varun Kompella and James Ault and Guni Sharon and Stacy Jong and Spencer Fox and Lauren Meyers and Peter R. Wurman and Peter Stone},
url = {https://doi.org/10.1613/jair.1.12632},
doi = {10.1613/jair.1.12632},
year = {2021},
date = {2020-08-17},
urldate = {2020-08-17},
journal = {Journal of Artificial Intelligence Research (JAIR)},
volume = {71},
pages = {953--992},
publisher = {AI Access Foundation},
abstract = {The year 2020 saw the COVID-19 virus lead to one of the worst global pandemics in history. As a result, governments around the world have been faced with the challenge of protecting public health while keeping the economy running to the greatest extent possible. Epidemiological models provide insight into the spread of these types of diseases and predict the effects of possible intervention policies. However, to date, even the most data-driven intervention policies rely on heuristics. In this paper, we study how reinforcement learning (RL) and Bayesian inference can be used to optimize mitigation policies that minimize economic impact without overwhelming hospital capacity. Our main contributions are (1) a novel agent-based pandemic simulator which, unlike traditional models, is able to model fine-grained interactions among people at specific locations in a community; (2) an RL-based methodology for optimizing fine-grained mitigation policies within this simulator; and (3) a Hidden Markov Model for predicting infected individuals based on partial observations regarding test results, presence of symptoms, and past physical contacts.},
keywords = {journal},
pubstate = {published},
tppubtype = {article}
}