| Title | Authors | Venue | Cited by | Year |
|---|---|---|---|---|
| Gaussian Error Linear Units (GELUs) | D Hendrycks, K Gimpel | arXiv preprint arXiv:1606.08415 | 6366 | 2016 |
| Benchmarking Neural Network Robustness to Common Corruptions and Perturbations | D Hendrycks, T Dietterich | International Conference on Learning Representations (ICLR) | 3838 | 2019 |
| A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks | D Hendrycks, K Gimpel | International Conference on Learning Representations (ICLR) | 3679 | 2017 |
| Measuring Massive Multitask Language Understanding | D Hendrycks, C Burns, S Basart, A Zou, M Mazeika, D Song, J Steinhardt | International Conference on Learning Representations (ICLR) | 2341 | 2021 |
| Deep Anomaly Detection with Outlier Exposure | D Hendrycks, M Mazeika, T Dietterich | International Conference on Learning Representations (ICLR) | 1688 | 2019 |
| The Many Faces of Robustness: A Critical Analysis of Out-of-Distribution Generalization | D Hendrycks, S Basart, N Mu, S Kadavath, F Wang, E Dorundo, R Desai, ... | International Conference on Computer Vision (ICCV) | 1614 | 2021 |
| AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty | D Hendrycks, N Mu, ED Cubuk, B Zoph, J Gilmer, B Lakshminarayanan | International Conference on Learning Representations (ICLR) | 1558* | 2020 |
| Natural Adversarial Examples | D Hendrycks, K Zhao, S Basart, J Steinhardt, D Song | Conference on Computer Vision and Pattern Recognition (CVPR) | 1464 | 2021 |
| Beyond the Imitation Game: Quantifying and Extrapolating the Capabilities of Language Models | A Srivastava, A Rastogi, A Rao, AAM Shoeb, A Abid, A Fisch, AR Brown, ... | arXiv preprint arXiv:2206.04615 | 1096 | 2022 |
| Using Self-Supervised Learning Can Improve Model Robustness and Uncertainty | D Hendrycks, M Mazeika, S Kadavath, D Song | Neural Information Processing Systems (NeurIPS) | 1061 | 2019 |
| Measuring Mathematical Problem Solving With the MATH Dataset | D Hendrycks, C Burns, S Kadavath, A Arora, S Basart, E Tang, D Song, ... | Neural Information Processing Systems (NeurIPS) | 962 | 2021 |
| Using Pre-training Can Improve Model Robustness and Uncertainty | D Hendrycks, K Lee, M Mazeika | International Conference on Machine Learning (ICML), 2712-2721 | 855 | 2019 |
| Using Trusted Data to Train Deep Networks on Labels Corrupted by Severe Noise | D Hendrycks, M Mazeika, D Wilson, K Gimpel | Neural Information Processing Systems (NeurIPS) | 645 | 2018 |
| Scaling Out-of-Distribution Detection for Real-World Settings | D Hendrycks, S Basart, M Mazeika, M Mostajabi, J Steinhardt, D Song | International Conference on Machine Learning (ICML) | 487* | 2022 |
| Measuring Coding Challenge Competence With APPS | D Hendrycks, S Basart, S Kadavath, M Mazeika, A Arora, E Guo, C Burns, ... | Neural Information Processing Systems (NeurIPS) | 465 | 2021 |
| Pretrained Transformers Improve Out-of-Distribution Robustness | D Hendrycks, X Liu, E Wallace, A Dziedzic, R Krishnan, D Song | Association for Computational Linguistics (ACL) | 444 | 2020 |
| Aligning AI With Shared Human Values | D Hendrycks, C Burns, S Basart, A Critch, J Li, D Song, J Steinhardt | International Conference on Learning Representations (ICLR) | 414 | 2021 |
| Early Methods for Detecting Adversarial Images | D Hendrycks, K Gimpel | International Conference on Learning Representations (ICLR) Workshop | 322 | 2017 |
| Unsolved Problems in ML Safety | D Hendrycks, N Carlini, J Schulman, J Steinhardt | arXiv preprint arXiv:2109.13916 | 309 | 2021 |
| DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models | B Wang, W Chen, H Pei, C Xie, M Kang, C Zhang, C Xu, Z Xiong, R Dutta, ... | Advances in Neural Information Processing Systems 36, 31232-31339 | 282 | 2023 |