1st CoLLAs 2022: Montréal, Québec, Canada
- Sarath Chandar, Razvan Pascanu, Doina Precup:
  Conference on Lifelong Learning Agents, CoLLAs 2022, 22-24 August 2022, McGill University, Montréal, Québec, Canada. Proceedings of Machine Learning Research 199, PMLR 2022
- Shuang Li, Yilun Du, Gido van de Ven, Igor Mordatch:
  Energy-Based Models for Continual Learning. 1-22
- Liam Collins, Aryan Mokhtari, Sanjay Shakkottai:
  How Does the Task Landscape Affect MAML Performance? 23-59
- Oleksiy Ostapenko, Timothée Lesort, Pau Rodríguez, Md Rifat Arefin, Arthur Douillard, Irina Rish, Laurent Charlin:
  Continual Learning with Foundation Models: An Empirical Study of Latent Replay. 60-91
- Zichen Ma, Yu Lu, Wenye Li, Shuguang Cui:
  EFL: Elastic Federated Learning on Non-IID Data. 92-115
- Christina M. Funke, Paul Vicol, Kuan-Chieh Wang, Matthias Kümmerer, Richard S. Zemel, Matthias Bethge:
  Disentanglement and Generalization Under Correlation Shifts. 116-141
- Luke Metz, C. Daniel Freeman, James Harrison, Niru Maheswaranathan, Jascha Sohl-Dickstein:
  Practical Tradeoffs between Memory, Compute, and Performance in Learned Optimizers. 142-164
- Lucas Caccia, Jing Xu, Myle Ott, Marc'Aurelio Ranzato, Ludovic Denoyer:
  On Anytime Learning at Macroscale. 165-182
- Simone Marullo, Matteo Tiezzi, Alessandro Betti, Lapo Faggi, Enrico Meloni, Stefano Melacci:
  Continual Unsupervised Learning for Optical Flow Estimation with Deep Networks. 183-200
- Alessandro Betti, Lapo Faggi, Marco Gori, Matteo Tiezzi, Simone Marullo, Enrico Meloni, Stefano Melacci:
  Continual Learning through Hamilton Equations. 201-212
- Simon Guiroy, Christopher Pal, Gonçalo Mordido, Sarath Chandar:
  Improving Meta-Learning Generalization with Activation-Based Early-Stopping. 213-230
- Shahaf S. Shperberg, Bo Liu, Alessandro Allievi, Peter Stone:
  A Rule-based Shield: Accumulating Safety Rules from Catastrophic Action Effects. 231-242
- Bo Liu, Qiang Liu, Peter Stone:
  Continual Learning and Private Unlearning. 243-254
- Christopher Beckham, Issam H. Laradji, Pau Rodríguez, David Vázquez, Derek Nowrouzezahrai, Christopher Pal:
  Overcoming challenges in leveraging GANs for few-shot data augmentation. 255-280
- Mohammad Rostami:
  Increasing Model Generalizability for Unsupervised Visual Domain Adaptation. 281-293
- Wenxuan Zhou, Steven Bohez, Jan Humplik, Nicolas Heess, Abbas Abdolmaleki, Dushyant Rao, Markus Wulfmeier, Tuomas Haarnoja:
  Forgetting and Imbalance in Robot Lifelong Learning with Off-policy Data. 294-309
- Rylan Schaeffer, Gabrielle Kaili-May Liu, Yilun Du, Scott Linderman, Ila Rani Fiete:
  Streaming Inference for Infinite Non-Stationary Clustering. 310-326
- Maia Fraser, Vincent Létourneau:
  Inexperienced RL Agents Can't Get It Right: Lower Bounds on Regret at Finite Sample Complexity. 327-334
- Leonard Bereska, Efstratios Gavves:
  Continual Learning of Dynamical Systems With Competitive Federated Reservoir Computing. 335-350
- Ahmed Akakzia, Olivier Sigaud:
  Learning Object-Centered Autotelic Behaviors with Graph Neural Networks. 351-365
- Pranshu Malviya, Balaraman Ravindran, Sarath Chandar:
  TAG: Task-based Accumulated Gradients for Lifelong learning. 366-389
- Prashant Shivaram Bhat, Bahram Zonooz, Elahe Arani:
  Task Agnostic Representation Consolidation: a Self-supervised based Continual Learning Approach. 390-405
- Stephanie C. Y. Chan, Andrew Kyle Lampinen, Pierre Harvey Richemond, Felix Hill:
  Zipfian Environments for Reinforcement Learning. 406-429
- Sri Aurobindo Munagala, Sidhant Subramanian, Shyamgopal Karthik, Ameya Prabhu, Anoop M. Namboodiri:
  CLActive: Episodic Memories for Rapid Active Learning. 430-440
- Christian Alexander Steinparz, Thomas Schmied, Fabian Paischer, Marius-Constantin Dinu, Vihang Prakash Patil, Angela Bitto-Nemling, Hamid Eghbal-zadeh, Sepp Hochreiter:
  Reactive Exploration to Cope With Non-Stationarity in Lifelong Reinforcement Learning. 441-469
- Kajetan Schweighofer, Marius-Constantin Dinu, Andreas Radler, Markus Hofmarcher, Vihang Prakash Patil, Angela Bitto-Nemling, Hamid Eghbal-zadeh, Sepp Hochreiter:
  A Dataset Perspective on Offline Reinforcement Learning. 470-517
- Thijs Lambik Van der Plas, Sanjay G. Manohar, Tim P. Vogels:
  Predictive Learning Enables Neural Networks to Learn Complex Working Memory Tasks. 518-531
- Hugo Cisneros, Tomás Mikolov, Josef Sivic:
  Benchmarking Learning Efficiency in Deep Reservoir Computing. 532-547
- Gyuhak Kim, Bing Liu, Zixuan Ke:
  A Multi-Head Model for Continual Learning via Out-of-Distribution Replay. 548-563
- Mohammad Saidur Rahman, Scott E. Coull, Matthew Wright:
  On the Limitations of Continual Learning for Malware Classification. 564-582
- Tosca Lechner, Shai Ben-David:
  Inherent Limitations of Multi-Task Fair Representations. 583-603
- Alex Kearney, Anna Koop, Johannes Günther, Patrick M. Pilarski:
  What Should I Know? Using Meta-Gradient Descent for Predictive Feature Discovery in a Single Stream of Experience. 604-616
- Ali Abbasi, Parsa Nooralinejad, Vladimir Braverman, Hamed Pirsiavash, Soheil Kolouri:
  Sparsity and Heterogeneous Dropout for Continual Learning in the Null Space of Neural Activations. 617-628
- Xu Ji, Razvan Pascanu, R. Devon Hjelm, Balaji Lakshminarayanan, Andrea Vedaldi:
  Test Sample Accuracy Scales with Training Sample Density in Neural Networks. 629-646
- Diana Benavides Prado, Patricia Riddle:
  A Theory for Knowledge Transfer in Continual Learning. 647-660
- Nicholas Corrado, Yuxiao Qu, Josiah P. Hanna:
  Simulation-Acquired Latent Action Spaces for Dynamics Generalization. 661-682
- Sam Powers, Eliot Xing, Abhinav Gupta:
  Self-Activating Neural Ensembles for Continual Reinforcement Learning. 683-704
- Sam Powers, Eliot Xing, Eric Kolve, Roozbeh Mottaghi, Abhinav Gupta:
  CORA: Benchmarks, Baselines, and Metrics as a Platform for Continual Reinforcement Learning Agents. 705-743
- Tyler L. Hayes, Christopher Kanan:
  Online Continual Learning for Embedded Devices. 744-766
- Yujing Wang, Luoxing Xiong, Mingliang Zhang, Hui Xue, Qi Chen, Yaming Yang, Yunhai Tong, Congrui Huang, Bixiong Xu:
  Heat-RL: Online Model Selection for Streaming Time-Series Anomaly Detection. 767-777
- Phung Lai, Han Hu, Hai Phan, Ruoming Jin, My T. Thai, An M. Chen:
  Lifelong DP: Consistently Bounded Differential Privacy in Lifelong Machine Learning. 778-797
- Valentin Guillet, Dennis George Wilson, Emmanuel Rachelson:
  Neural Distillation as a State Representation Bottleneck in Reinforcement Learning. 798-818
- Ekdeep Singh Lubana, Puja Trivedi, Danai Koutra, Robert P. Dick:
  How do Quadratic Regularizers Prevent Catastrophic Forgetting: The Role of Interpolation. 819-837
- Annie Xie, Chelsea Finn:
  Lifelong Robotic Reinforcement Learning by Retaining Experiences. 838-855
- Michael T. Matthews, Mikayel Samvelyan, Jack Parker-Holder, Edward Grefenstette, Tim Rocktäschel:
  Hierarchical Kickstarting for Skill Transfer in Reinforcement Learning. 856-874
- Sepideh Esmaeilpour, Lei Shu, Bing Liu:
  Open Set Recognition Via Augmentation-Based Similarity Learning. 875-885
- Jelena Luketina, Sebastian Flennerhag, Yannick Schroecker, David Abel, Tom Zahavy, Satinder Singh:
  Meta-Gradients in Non-Stationary Environments. 886-901
- Sasha Salter, Markus Wulfmeier, Dhruva Tirumala, Nicolas Heess, Martin A. Riedmiller, Raia Hadsell, Dushyant Rao:
  MO2: Model-Based Offline Options. 902-919
- Fahad Sarfraz, Elahe Arani, Bahram Zonooz:
  SYNERgy between SYNaptic Consolidation and Experience Replay for General Continual Learning. 920-936
- NareshKumar Gurulingan, Elahe Arani, Bahram Zonooz:
  Curbing Task Interference using Representation Similarity-Guided Multi-Task Feature Sharing. 937-951
- Vijaya Raghavan T. Ramkumar, Elahe Arani, Bahram Zonooz:
  Differencing based Self-supervised pretraining for Scene Change Detection. 952-965
- Kilian Fatras, Hiroki Naganuma, Ioannis Mitliagkas:
  Optimal Transport meets Noisy Label Robust Loss and MixUp Regularization for Domain Adaptation. 966-981
- Jorge A. Mendez, Marcel Hussing, Meghna Gummadi, Eric Eaton:
  CompoSuite: A Compositional Reinforcement Learning Benchmark. 982-1003
- Rahaf Aljundi, Daniel Olmeda Reino, Nikolay Chumerin, Richard E. Turner:
  Continual Novelty Detection. 1004-1025
- Shruthi Gowda, Bahram Zonooz, Elahe Arani:
  InBiaseD: Inductive Bias Distillation to Improve Generalization and Robustness through Shape-awareness. 1026-1042
- Martin Gauch, Maximilian Beck, Thomas Adler, Dmytro Kotsur, Stefan Fiel, Hamid Eghbal-zadeh, Johannes Brandstetter, Johannes Kofler, Markus Holzleitner, Werner Zellinger, Daniel Klotz, Sepp Hochreiter, Sebastian Lehner:
  Few-Shot Learning by Dimensionality Reduction in Gradient Space. 1043-1064
- Meghna Gummadi, David Kent, Jorge A. Mendez, Eric Eaton:
  SHELS: Exclusive Feature Sets for Novelty Detection and Continual Learning Without Class Boundaries. 1065-1085
- Mingxi Cheng, Tingyang Sun, Shahin Nazarian, Paul Bogdan:
  Trustworthiness Evaluation and Trust-Aware Design of CNN Architectures. 1086-1102
- Cristina Garbacea, Qiaozhu Mei:
  Adapting Pre-trained Language Models to Low-Resource Text Simplification: The Path Matters. 1103-1119
- Zachary Alan Daniels, Aswin Raghavan, Jesse Hostetler, Abrar Rahman, Indranil Sur, Michael R. Piacentino, Ajay Divakaran, Roberto Corizzo, Kamil Faber, Nathalie Japkowicz, Michael Baron, James Seale Smith, Sahana Pramod Joshi, Zsolt Kira, Cameron Ethan Taylor, Mustafa Burak Gurbuz, Constantine Dovrolis, Tyler L. Hayes, Christopher Kanan, Jhair Gallardo:
  Model-Free Generative Replay for Lifelong Reinforcement Learning: Application to Starcraft-2. 1120-1145
- Jonathan Wilder Lavington, Sharan Vaswani, Mark Schmidt:
  Improved Policy Optimization for Online Imitation Learning. 1146-1173
- Matthew J. A. Smith, Jelena Luketina, Kristian Hartikainen, Maximilian Igl, Shimon Whiteson:
  Learning Skills Diverse in Value-Relevant Features. 1174-1194
- Prashant Shivaram Bhat, Bahram Zonooz, Elahe Arani:
  Consistency is the Key to Further Mitigating Catastrophic Forgetting in Continual Learning. 1195-1212
- Georgios Tziafas, Lambert Schomaker, S. Hamidreza Kasaei:
  Sim-To-Real Transfer of Visual Grounding for Human-Aided Ambiguity Resolution. 1213-1230
- Andrei Alex Rusu, Sebastian Flennerhag, Dushyant Rao, Razvan Pascanu, Raia Hadsell:
  Probing Transfer in Deep Reinforcement Learning without Task Engineering. 1231-1254