Prospective Learning: Principled Extrapolation to the Future

Ashwin De Silva, Rahul Ramesh, Lyle Ungar, Marshall Hussain Shuler, Noah J. Cowan, Michael Platt, Chen Li, Leyla Isik, Seung-Eon Roh, Adam Charles, Archana Venkataraman, Brian Caffo, Javier J. How, Justus M Kebschull, John W. Krakauer, Maxim Bichuch, Kaleab Alemayehu Kinfu, Eva Yezerets, Dinesh Jayaraman, Jong M. Shin, Soledad Villar, Ian Phillips, Carey E. Priebe, Thomas Hartung, Michael I. Miller, Jayanta Dey, Ningyuan Huang, Eric Eaton, Ralph Etienne-Cummings, Elizabeth L. Ogburn, Randal Burns, Onyema Osuagwu, Brett Mensh, Alysson R. Muotri, Julia Brown, Chris White, Weiwei Yang, Andrei A. Rusu, Timothy Verstynen, Konrad P. Kording, Pratik Chaudhari, Joshua T. Vogelstein
Proceedings of The 2nd Conference on Lifelong Learning Agents, PMLR 232:347-357, 2023.

Abstract

Learning is a process which can update decision rules, based on past experience, such that future performance improves. Traditionally, machine learning is often evaluated under the assumption that the future will be identical to the past in distribution or change adversarially. But these assumptions can be either too optimistic or pessimistic for many problems in the real world. Real world scenarios evolve over multiple spatiotemporal scales with partially predictable dynamics. Here we reformulate the learning problem to one that centers around this idea of dynamic futures that are partially learnable. We conjecture that certain sequences of tasks are not retrospectively learnable (in which the data distribution is fixed), but are prospectively learnable (in which distributions may be dynamic), suggesting that prospective learning is more difficult in kind than retrospective learning. We argue that prospective learning more accurately characterizes many real world problems that (1) currently stymie existing artificial intelligence solutions and/or (2) lack adequate explanations for how natural intelligences solve them. Thus, studying prospective learning will lead to deeper insights and solutions to currently vexing challenges in both natural and artificial intelligences.

Cite this Paper


BibTeX
@InProceedings{pmlr-v232-de-silva23a,
  title     = {Prospective Learning: Principled Extrapolation to the Future},
  author    = {De Silva, Ashwin and Ramesh, Rahul and Ungar, Lyle and Shuler, Marshall Hussain and Cowan, Noah J. and Platt, Michael and Li, Chen and Isik, Leyla and Roh, Seung-Eon and Charles, Adam and Venkataraman, Archana and Caffo, Brian and How, Javier J. and Kebschull, Justus M and Krakauer, John W. and Bichuch, Maxim and Kinfu, Kaleab Alemayehu and Yezerets, Eva and Jayaraman, Dinesh and Shin, Jong M. and Villar, Soledad and Phillips, Ian and Priebe, Carey E. and Hartung, Thomas and Miller, Michael I. and Dey, Jayanta and Huang, Ningyuan and Eaton, Eric and Etienne-Cummings, Ralph and Ogburn, Elizabeth L. and Burns, Randal and Osuagwu, Onyema and Mensh, Brett and Muotri, Alysson R. and Brown, Julia and White, Chris and Yang, Weiwei and Rusu, Andrei A. and Verstynen, Timothy and Kording, Konrad P. and Chaudhari, Pratik and Vogelstein, Joshua T.},
  booktitle = {Proceedings of The 2nd Conference on Lifelong Learning Agents},
  pages     = {347--357},
  year      = {2023},
  editor    = {Chandar, Sarath and Pascanu, Razvan and Sedghi, Hanie and Precup, Doina},
  volume    = {232},
  series    = {Proceedings of Machine Learning Research},
  month     = {22--25 Aug},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v232/de-silva23a/de-silva23a.pdf},
  url       = {https://proceedings.mlr.press/v232/de-silva23a.html},
  abstract  = {Learning is a process which can update decision rules, based on past experience, such that future performance improves. Traditionally, machine learning is often evaluated under the assumption that the future will be identical to the past in distribution or change adversarially. But these assumptions can be either too optimistic or pessimistic for many problems in the real world. Real world scenarios evolve over multiple spatiotemporal scales with partially predictable dynamics. Here we reformulate the learning problem to one that centers around this idea of dynamic futures that are partially learnable. We conjecture that certain sequences of tasks are not retrospectively learnable (in which the data distribution is fixed), but are prospectively learnable (in which distributions may be dynamic), suggesting that prospective learning is more difficult in kind than retrospective learning. We argue that prospective learning more accurately characterizes many real world problems that (1) currently stymie existing artificial intelligence solutions and/or (2) lack adequate explanations for how natural intelligences solve them. Thus, studying prospective learning will lead to deeper insights and solutions to currently vexing challenges in both natural and artificial intelligences.}
}
Endnote
%0 Conference Paper
%T Prospective Learning: Principled Extrapolation to the Future
%A Ashwin De Silva
%A Rahul Ramesh
%A Lyle Ungar
%A Marshall Hussain Shuler
%A Noah J. Cowan
%A Michael Platt
%A Chen Li
%A Leyla Isik
%A Seung-Eon Roh
%A Adam Charles
%A Archana Venkataraman
%A Brian Caffo
%A Javier J. How
%A Justus M Kebschull
%A John W. Krakauer
%A Maxim Bichuch
%A Kaleab Alemayehu Kinfu
%A Eva Yezerets
%A Dinesh Jayaraman
%A Jong M. Shin
%A Soledad Villar
%A Ian Phillips
%A Carey E. Priebe
%A Thomas Hartung
%A Michael I. Miller
%A Jayanta Dey
%A Ningyuan Huang
%A Eric Eaton
%A Ralph Etienne-Cummings
%A Elizabeth L. Ogburn
%A Randal Burns
%A Onyema Osuagwu
%A Brett Mensh
%A Alysson R. Muotri
%A Julia Brown
%A Chris White
%A Weiwei Yang
%A Andrei A. Rusu
%A Timothy Verstynen
%A Konrad P. Kording
%A Pratik Chaudhari
%A Joshua T. Vogelstein
%B Proceedings of The 2nd Conference on Lifelong Learning Agents
%C Proceedings of Machine Learning Research
%D 2023
%E Sarath Chandar
%E Razvan Pascanu
%E Hanie Sedghi
%E Doina Precup
%F pmlr-v232-de-silva23a
%I PMLR
%P 347--357
%U https://proceedings.mlr.press/v232/de-silva23a.html
%V 232
%X Learning is a process which can update decision rules, based on past experience, such that future performance improves. Traditionally, machine learning is often evaluated under the assumption that the future will be identical to the past in distribution or change adversarially. But these assumptions can be either too optimistic or pessimistic for many problems in the real world. Real world scenarios evolve over multiple spatiotemporal scales with partially predictable dynamics. Here we reformulate the learning problem to one that centers around this idea of dynamic futures that are partially learnable. We conjecture that certain sequences of tasks are not retrospectively learnable (in which the data distribution is fixed), but are prospectively learnable (in which distributions may be dynamic), suggesting that prospective learning is more difficult in kind than retrospective learning. We argue that prospective learning more accurately characterizes many real world problems that (1) currently stymie existing artificial intelligence solutions and/or (2) lack adequate explanations for how natural intelligences solve them. Thus, studying prospective learning will lead to deeper insights and solutions to currently vexing challenges in both natural and artificial intelligences.
APA
De Silva, A., Ramesh, R., Ungar, L., Shuler, M.H., Cowan, N.J., Platt, M., Li, C., Isik, L., Roh, S., Charles, A., Venkataraman, A., Caffo, B., How, J.J., Kebschull, J.M., Krakauer, J.W., Bichuch, M., Kinfu, K.A., Yezerets, E., Jayaraman, D., Shin, J.M., Villar, S., Phillips, I., Priebe, C.E., Hartung, T., Miller, M.I., Dey, J., Huang, N., Eaton, E., Etienne-Cummings, R., Ogburn, E.L., Burns, R., Osuagwu, O., Mensh, B., Muotri, A.R., Brown, J., White, C., Yang, W., Rusu, A.A., Verstynen, T., Kording, K.P., Chaudhari, P. & Vogelstein, J.T. (2023). Prospective Learning: Principled Extrapolation to the Future. Proceedings of The 2nd Conference on Lifelong Learning Agents, in Proceedings of Machine Learning Research 232:347-357. Available from https://proceedings.mlr.press/v232/de-silva23a.html.