Volume 239: Proceedings on "I Can't Believe It's Not Better: Failure Modes in the Age of Foundation Models" at NeurIPS 2023 Workshops, 16 December 2023, New Orleans, Louisiana, USA
Editors: Javier Antorán, Arno Blaas, Kelly Buchanan, Fan Feng, Vincent Fortuin, Sahra Ghalebikesabi, Andreas Kriegler, Ian Mason, David Rohde, Francisco J. R. Ruiz, Tobias Uelwer, Yubin Xie, Rui Yang
How (not) to ensemble LVLMs for VQA
Proceedings on "I Can't Believe It's Not Better: Failure Modes in the Age of Foundation Models" at NeurIPS 2023 Workshops, PMLR 239:1-20
Can Visual Scratchpads With Diagrammatic Abstractions Augment LLM Reasoning?
Proceedings on "I Can't Believe It's Not Better: Failure Modes in the Age of Foundation Models" at NeurIPS 2023 Workshops, PMLR 239:21-28
Filter bubbles and affective polarization in user-personalized large language model outputs
Proceedings on "I Can't Believe It's Not Better: Failure Modes in the Age of Foundation Models" at NeurIPS 2023 Workshops, PMLR 239:29-37
Are large language models good annotators?
Proceedings on "I Can't Believe It's Not Better: Failure Modes in the Age of Foundation Models" at NeurIPS 2023 Workshops, PMLR 239:38-48
Self-Evaluation Improves Selective Generation in Large Language Models
Proceedings on "I Can't Believe It's Not Better: Failure Modes in the Age of Foundation Models" at NeurIPS 2023 Workshops, PMLR 239:49-64
Is Scaling Learned Optimizers Worth It? Evaluating The Value of VeLO’s 4000 TPU Months
Proceedings on "I Can't Believe It's Not Better: Failure Modes in the Age of Foundation Models" at NeurIPS 2023 Workshops, PMLR 239:65-83
Exploring Social Bias in Downstream Applications of Text-to-Image Foundation Models
Proceedings on "I Can't Believe It's Not Better: Failure Modes in the Age of Foundation Models" at NeurIPS 2023 Workshops, PMLR 239:84-102
Adversarial Attacks and Defenses in Large Language Models: Old and New Threats
Proceedings on "I Can't Believe It's Not Better: Failure Modes in the Age of Foundation Models" at NeurIPS 2023 Workshops, PMLR 239:103-117
The Role of Linguistic Priors in Measuring Compositional Generalization of Vision-Language Models
Proceedings on "I Can't Believe It's Not Better: Failure Modes in the Age of Foundation Models" at NeurIPS 2023 Workshops, PMLR 239:118-126
Pre-trained Language Models Do Not Help Auto-regressive Text-to-Image Generation
Proceedings on "I Can't Believe It's Not Better: Failure Modes in the Age of Foundation Models" at NeurIPS 2023 Workshops, PMLR 239:127-133