Dilip Arumugam
2020 – today
2024

- [i23] Dilip Arumugam, Wanqiao Xu, Benjamin Van Roy: Exploration Unbound. CoRR abs/2407.12178 (2024)
- [i22] Dilip Arumugam, Saurabh Kumar, Ramki Gummadi, Benjamin Van Roy: Satisficing Exploration for Deep Reinforcement Learning. CoRR abs/2407.12185 (2024)

2023

- [c13] Ben Prystawski, Dilip Arumugam, Noah D. Goodman: Cultural reinforcement learning: a framework for modeling cumulative culture on a limited channel. CogSci 2023
- [i21] Dilip Arumugam, Mark K. Ho, Noah D. Goodman, Benjamin Van Roy: Bayesian Reinforcement Learning with Limited Cognitive Load. CoRR abs/2305.03263 (2023)
- [i20] Wanqiao Xu, Shi Dong, Dilip Arumugam, Benjamin Van Roy: Shattering the Agent-Environment Interface for Fine-Tuning Inclusive Language Models. CoRR abs/2305.11455 (2023)
- [i19] Akash Velu, Skanda Vaidyanath, Dilip Arumugam: Hindsight-DICE: Stable Credit Assignment for Deep Reinforcement Learning. CoRR abs/2307.11897 (2023)
- [i18] Jan-Philipp Fränken, Sam Kwok, Peixuan Ye, Kanishk Gandhi, Dilip Arumugam, Jared Moore, Alex Tamkin, Tobias Gerstenberg, Noah D. Goodman: Social Contract AI: Aligning AI Assistants with Implicit Group Norms. CoRR abs/2310.17769 (2023)

2022

- [c12] Dilip Arumugam, Satinder Singh: Planning to the Information Horizon of BAMDPs via Epistemic State Abstraction. NeurIPS 2022
- [c11] Dilip Arumugam, Benjamin Van Roy: Deciding What to Model: Value-Equivalent Sampling for Reinforcement Learning. NeurIPS 2022
- [i17] Dilip Arumugam, Benjamin Van Roy: Between Rate-Distortion Theory & Value Equivalence in Model-Based Reinforcement Learning. CoRR abs/2206.02025 (2022)
- [i16] Dilip Arumugam, Benjamin Van Roy: Deciding What to Model: Value-Equivalent Sampling for Reinforcement Learning. CoRR abs/2206.02072 (2022)
- [i15] Dilip Arumugam, Satinder Singh: Planning to the Information Horizon of BAMDPs via Epistemic State Abstraction. CoRR abs/2210.16872 (2022)
- [i14] Dilip Arumugam, Mark K. Ho, Noah D. Goodman, Benjamin Van Roy: On Rate-Distortion Theory in Capacity-Limited Cognition & Reinforcement Learning. CoRR abs/2210.16877 (2022)
- [i13] Dilip Arumugam, Shi Dong, Benjamin Van Roy: Inclusive Artificial Intelligence. CoRR abs/2212.12633 (2022)

2021

- [c10] Dilip Arumugam, Benjamin Van Roy: Deciding What to Learn: A Rate-Distortion Approach. ICML 2021: 373-382
- [c9] Dilip Arumugam, Benjamin Van Roy: The Value of Information When Deciding What to Learn. NeurIPS 2021: 9816-9827
- [i12] Dilip Arumugam, Benjamin Van Roy: Deciding What to Learn: A Rate-Distortion Approach. CoRR abs/2101.06197 (2021)
- [i11] Dilip Arumugam, Peter Henderson, Pierre-Luc Bacon: An Information-Theoretic Perspective on Credit Assignment in Reinforcement Learning. CoRR abs/2103.06224 (2021)
- [i10] David Abel, Cameron Allen, Dilip Arumugam, D. Ellis Hershkowitz, Michael L. Littman, Lawson L. S. Wong: Bad-Policy Density: A Measure of Reinforcement Learning Hardness. CoRR abs/2110.03424 (2021)
- [i9] Dilip Arumugam, Benjamin Van Roy: The Value of Information When Deciding What to Learn. CoRR abs/2110.13973 (2021)

2020

- [c8] David Abel, Nate Umbanhowar, Khimya Khetarpal, Dilip Arumugam, Doina Precup, Michael L. Littman: Value Preserving State-Action Abstractions. AISTATS 2020: 1639-1650
- [c7] Aidan Curtis, Minjian Xin, Dilip Arumugam, Kevin T. Feigelis, Daniel Yamins: Flexible and Efficient Long-Range Planning Through Curious Exploration. ICML 2020: 2238-2249
- [i8] Aidan Curtis, Minjian Xin, Dilip Arumugam, Kevin T. Feigelis, Daniel Yamins: Flexible and Efficient Long-Range Planning Through Curious Exploration. CoRR abs/2004.10876 (2020)
- [i7] Dilip Arumugam, Debadeepta Dey, Alekh Agarwal, Asli Celikyilmaz, Elnaz Nouri, Bill Dolan: Reparameterized Variational Divergence Minimization for Stable Imitation. CoRR abs/2006.10810 (2020)
- [i6] Dilip Arumugam, Benjamin Van Roy: Randomized Value Functions via Posterior State-Abstraction Sampling. CoRR abs/2010.02383 (2020)
2010 – 2019
2019

- [j1] Dilip Arumugam, Siddharth Karamcheti, Nakul Gopalan, Edward C. Williams, Mina Rhee, Lawson L. S. Wong, Stefanie Tellex: Grounding natural language instructions to semantic goal representations for abstraction and generalization. Auton. Robots 43(2): 449-468 (2019)
- [c6] David Abel, Dilip Arumugam, Kavosh Asadi, Yuu Jinnai, Michael L. Littman, Lawson L. S. Wong: State Abstraction as Compression in Apprenticeship Learning. AAAI 2019: 3134-3142
- [i5] Dilip Arumugam, Jun Ki Lee, Sophie Saskin, Michael L. Littman: Deep Reinforcement Learning from Policy-Dependent Human Feedback. CoRR abs/1902.04257 (2019)

2018

- [c5] David Abel, Dilip Arumugam, Lucas Lehnert, Michael L. Littman: State Abstractions for Lifelong Reinforcement Learning. ICML 2018: 10-19
- [c4] Nakul Gopalan, Dilip Arumugam, Lawson L. S. Wong, Stefanie Tellex: Sequence-to-Sequence Language Grounding of Non-Markovian Task Specifications. Robotics: Science and Systems 2018
- [i4] Dilip Arumugam, David Abel, Kavosh Asadi, Nakul Gopalan, Christopher Grimm, Jun Ki Lee, Lucas Lehnert, Michael L. Littman: Mitigating Planner Overfitting in Model-Based Reinforcement Learning. CoRR abs/1812.01129 (2018)

2017

- [c3] Siddharth Karamcheti, Edward C. Williams, Dilip Arumugam, Mina Rhee, Nakul Gopalan, Lawson L. S. Wong, Stefanie Tellex: A Tale of Two DRAGGNs: A Hybrid Approach for Interpreting Action-Oriented and Goal-Oriented Instructions. RoboNLP@ACL 2017: 67-75
- [c2] Dilip Arumugam, Siddharth Karamcheti, Nakul Gopalan, Lawson L. S. Wong, Stefanie Tellex: Accurately and Efficiently Interpreting Human-Robot Instructions of Varying Granularities. Robotics: Science and Systems 2017
- [i3] Dilip Arumugam, Siddharth Karamcheti, Nakul Gopalan, Lawson L. S. Wong, Stefanie Tellex: Accurately and Efficiently Interpreting Human-Robot Instructions of Varying Granularities. CoRR abs/1704.06616 (2017)
- [i2] Christopher Grimm, Dilip Arumugam, Siddharth Karamcheti, David Abel, Lawson L. S. Wong, Michael L. Littman: Latent Attention Networks. CoRR abs/1706.00536 (2017)
- [i1] Siddharth Karamcheti, Edward C. Williams, Dilip Arumugam, Mina Rhee, Nakul Gopalan, Lawson L. S. Wong, Stefanie Tellex: A Tale of Two DRAGGNs: A Hybrid Approach for Interpreting Action-Oriented and Goal-Oriented Instructions. CoRR abs/1707.08668 (2017)

2015

- [c1] James MacGlashan, Monica Babes-Vroman, Marie desJardins, Michael L. Littman, Smaranda Muresan, Shawn Squire, Stefanie Tellex, Dilip Arumugam, Lei Yang: Grounding English Commands to Reward Functions. Robotics: Science and Systems 2015
last updated on 2024-08-25 19:11 CEST by the dblp team
all metadata released as open data under CC0 1.0 license