I am Po-Chen Kuo, a 5th-year Ph.D. Candidate in computational neuroscience at the University of Washington and a visiting scientist at the Allen Institute for Neural Dynamics. I am fortunate to have Professor Edgar Y. Walker as my advisor and to work closely with the UW Computational Neuroscience Center. Previously, I received my M.D. and B.Sc. in Physics from National Taiwan University in 2020.
I am interested in how neural circuits, dynamics, and computation support the complex phenomena of cognition, behavior, and intelligence in organisms. I study how biological and artificial intelligent systems adapt under uncertainty, with a focus on reinforcement learning, Bayesian inference, and meta-learning. Please visit my research page for more details!
Aside from research, I enjoy reading, cooking, and baseball.
Email: pckuo [at] uw [dot] edu
Office: Magnuson Health Sciences Building, 1705 NE Pacific Street, Seattle, WA 98195
Kuo, P.-C. & Walker, E. Y. (2026). An Information-Theoretic Framework For Optimizing Experimental Design To Distinguish Probabilistic Neural Codes. In ICLR 2026. (28% acceptance)
Kuo, P.-C., Hou, H., Dabney, W., & Walker, E. Y. (2025). Predictive Coding Enhances Meta-RL To Achieve Bayes-Optimal Belief Representation Under Partial Observability. In NeurIPS 2025. (24.5% acceptance)
Bull, M. S., Kuo, P.-C., …, & Buice, M. A. (2025). Volume Transmission Implements Context Factorization to Target Online Credit Assignment and Enable Compositional Generalization. In NeurIPS 2025. (24.5% acceptance)
Kuo, P.-C. & Walker, E. Y. (2025). An Information-Theoretical Approach To Optimizing Task Design For Differentiating Probabilistic Neural Codes. In Data on the Brain & Mind Workshop @ NeurIPS 2025. (Oral, top 5)
Kuo, P.-C., Hou, H., Dabney, W., & Walker, E. Y. (2024). Learning Bayes-Optimal Representation in Partially Observable Environments via Meta-Reinforcement Learning with Predictive Coding. In The First Workshop on NeuroAI @ NeurIPS 2024.
Brenner, M., Hess, F., Mikhaeil, J. M., Bereska, L. F., Monfared, Z., Kuo, P.-C., & Durstewitz, D. (2022). Tractable Dendritic RNNs for Reconstructing Nonlinear Dynamical Systems. In ICML 2022. (21.9% acceptance)
Cheng, H.-T., Yeh, C.-F., Kuo, P.-C., … & Liu, T.-L. (2020). Self-similarity student for partial label histopathology image segmentation. In ECCV 2020. (27.1% acceptance)
[Upcoming, Apr 2026] [Poster] ICLR 2026. ‘‘An Information-Theoretic Framework For Optimizing Experimental Design To Distinguish Probabilistic Neural Codes.’’ Kuo, P.-C. & Walker, E. Y.
[Dec 2025] [Oral, Poster] NeurIPS 2025, Data on the Brain & Mind Workshop. ‘‘An Information-Theoretical Approach To Optimizing Task Design For Differentiating Probabilistic Neural Codes.’’ Kuo, P.-C. & Walker, E. Y.
[Dec 2025] [Poster] NeurIPS 2025. ‘‘Predictive Coding Enhances Meta-RL To Achieve Bayes-Optimal Belief Representation Under Partial Observability.’’ Kuo, P.-C., Hou, H., Dabney, W., & Walker, E. Y.
[Dec 2025] [Poster] NeurIPS 2025. ‘‘Volume Transmission Implements Context Factorization to Target Online Credit Assignment and Enable Compositional Generalization.’’ Bull, M. S., Kuo, P.-C., Smith, A. L., & Buice, M. A.
[Nov 2025] [Poster] Society for Neuroscience, SfN 2025. ‘‘An Information-Theoretical Approach To Optimizing Task Design For Distinguishing Probabilistic Codes In Neural Populations.’’ Kuo, P.-C. & Walker, E. Y.
[Sep 2025] [Poster] Lake Conference – Neural Coding & Dynamics 2025. ‘‘Task Structures Shape Underlying Dynamical Systems That Implement Computation.’’ Kuo, P.-C., Walker, E. Y., & Driscoll, L.
[Jun 2025] [Spotlight, Poster] Multi-disciplinary Conference on Reinforcement Learning and Decision Making, RLDM 2025. ‘‘Learning Bayes-Optimal Representation in Partially Observable Environments via Meta-Reinforcement Learning with Predictive Coding.’’ Kuo, P.-C., Hou, H., Dabney, W., & Walker, E. Y.
[Mar 2025] [Poster, Presenter Travel Award] Computational and Systems Neuroscience, COSYNE 2025. ‘‘Task Structures Shape Underlying Dynamical Systems That Implement Computation.’’ Kuo, P.-C., Walker, E. Y., & Driscoll, L.
[Dec 2024] [Poster] The First Workshop on NeuroAI @ NeurIPS 2024. ‘‘Learning Bayes-Optimal Representation in Partially Observable Environments via Meta-Reinforcement Learning with Predictive Coding.’’ Kuo, P.-C., Hou, H., Dabney, W., & Walker, E. Y.
[Aug 2024] [Poster] Analytical Connectionism Summer School 2024. ‘‘Uncovering the Computation of Dynamic Foraging with Actor-critic Recurrent Neural Networks.’’ Kuo, P.-C., Driscoll, L., & Walker, E. Y.
[Aug 2024] [Poster] Cognitive Computational Neuroscience, CCN 2024. ‘‘Adaptive Learning Under Uncertainty With Variational Belief Deep Reinforcement Learning.’’ Kuo, P.-C., Hou, H., & Walker, E. Y.
[Jun 2024] [Poster] Research in Encoding And Decoding of Neural Ensembles, AREADNE 2024. ‘‘An Information-Theoretical Approach To Optimize Task Design For Distinguishing Probabilistic Codes In Neural Populations.’’ Kuo, P.-C. & Walker, E. Y.
[May 2024] [Poster] CoNectome 2024 Symposium. ‘‘Bayesian Reinforcement Learning For The Computational Basis Of Dynamic Foraging.’’ Kuo, P.-C. & Walker, E. Y.
[Feb 2024] [Poster] Janelia Conference, Bridging Diverse Perspectives on the Mechanistic Basis of Foraging. ‘‘Bayesian Reinforcement Learning As A Mechanistic Model For Dynamic Foraging Behavior.’’ Kuo, P.-C. & Walker, E. Y.
[Dec 2025] National Yang Ming Chiao Tung University, Institute of Neuroscience Seminar, invited by Prof. Shih-Chieh Lin. ‘‘A NeuroAI Approach To Adaptive Learning Under Uncertainty: From Neural Representation To Dynamics.’’
[Dec 2025] NeurIPS 2025, Data on the Brain & Mind Workshop. ‘‘An Information-Theoretical Approach To Optimizing Task Design For Differentiating Probabilistic Neural Codes.’’
[Oct 2025] University of Washington, Grey Matters Journal Club. ‘‘A NeuroAI Approach to Understanding Adaptive Learning Under Uncertainty.’’
[Sep 2025] Stanford University, Prof. Andreas Tolias’ Laboratory. ‘‘Deciphering Neural Representations Supporting Perception and Decision-Making Under Uncertainty.’’
[Aug 2025] Allen Institute, Summer Workshop on the Dynamic Brain. ‘‘Recurrent Neural Networks for Dynamic Foraging.’’
[Feb 2025] University of Washington, Prof. Bing Brunton’s Laboratory. ‘‘Learning Bayes-optimal Representations Under Partial Observability via Meta-Reinforcement Learning with Predictive Coding.’’
[Jul 2024] TReND-CaMinA: Computational Neuroscience and Machine Learning in Africa. ‘‘From Neural Variability to Population Coding.’’
[Feb 2024] University of Washington, NEUSCI 403 Lecture (Computational Models For Cognitive Neuroscience). ‘‘Adaptive learning under uncertainty: learning to reinforcement learn with actor-critic recurrent neural networks.’’
[Aug 2023] Allen Institute for Brain Science, Summer Workshop on the Dynamic Brain. ‘‘What gives rise to neural variability and dynamics?’’