International Conference on Learning Representations (ICLR) 2024

Apple is sponsoring the International Conference on Learning Representations (ICLR), which is taking place in person from May 7 to 11 in Vienna, Austria. ICLR brings together professionals dedicated to the advancement of deep learning.

Schedule

Below is the schedule of Apple-sponsored workshops and events at ICLR 2024. Stop by the Apple booth in Halle/Hall A, Booth #3, from May 7 to 9 between 9:00am and 5:00pm CEST, and on May 10 between 9:00am and 4:00pm CEST.

Tuesday, May 7

Wednesday, May 8

Thursday, May 9

Friday, May 10

Saturday, May 11

Accepted Papers

Compressing LLMs: The Truth is Rarely Pure and Never Simple

Ajay Jaiswal (The University of Texas at Austin), Zhe Gan, Xianzhi Du, Bowen Zhang, Zhangyang Wang (The University of Texas at Austin), Yinfei Yang

Data Filtering Functions: Algorithmic Curation of Billion Scale Datasets

Alex Fang (University of Washington), Albin Madappally Jose, Amit Jain, Ludwig Schmidt (University of Washington), Alexander Toshev, Vaishaal Shankar

Efficient ConvBN Blocks for Transfer Learning and Beyond

Kaichao You (Tsinghua University), Qin Guo (Tsinghua University), Anchang Bao (Tsinghua University), Meng Cao, Ping Huang, Jiulong Shan, Mingsheng Long (Tsinghua University)

Efficient-3Dim: Learning a Generalizable Single Image Novel View Synthesizer in One Day

Yifan Jiang, Hao Tang, Rick Chang, Liangchen Song, Zhangyang Wang (University of Texas at Austin), Liangliang Cao

FedHyper: Adaptive Step Sizes for Efficient Federated Learning with Hypergradient Descent

Ziyao Wang (University of Maryland College Park), Jianyu Wang, Ang Li (University of Maryland College Park)

FERRET: Refer and Ground Anything Anywhere at Any Granularity

Haoxuan You, Haotian Zhang, Liangliang Cao, Zhe Gan, Bowen Zhang, Zirui Wang, Xianzhi Du, Shih-Fu Chang (Columbia University), Yinfei Yang

Generative Modeling with Phase Stochastic Bridge

Tianrong Chen (Georgia Tech), Jiatao Gu, Josh Susskind, Shuangfei Zhai, Laurent Dinh, Evangelos Theodorou (Georgia Tech)

Guiding Instruction-based Image Editing via Multimodal Large Language Models

Tsu-Jui Fu (University of California, Santa Barbara), Wenze Hu, Xianzhi Du, William Wang (University of California, Santa Barbara), Yinfei Yang, Zhe Gan

Hindsight PRIORs for Reward Learning from Human Preferences

Mudit Verma (Apple Intern / Arizona State University), Rin Metcalf Susa

JointNet: Extending Text-to-Image Diffusion for Dense Distribution Modeling

Jingyang Zhang, Shiwei Li, Yuanxun Lu (Nanjing University), Tian Fang, David McKinnon, Yanghai Tsin, Long Quan (The Hong Kong University of Science and Technology), Yao Yao (Nanjing University)

Large Language Models for Generalizable Reinforcement Learning of Embodied Tasks

Andrew Szot (Georgia Institute of Technology), Max Schwarzer (Université de Montréal), Harsh Agrawal, Bogdan Mazoure, Rin Metcalf Susa, Natalie Mackraz, Walter Talbott, Devon Hjelm, Alexander Toshev

Large-scale Training of Foundation Models for Wearable Biosignals

Salar Abbaspourazad, Oussama Elachqar, Andy Miller, Saba Emrani, Udhay Nallasamy, Ian Shapiro

LiDAR: Sensing Linear Probing Performance in Joint Embedding SSL Architectures

Vimal Thilak, Omid Saremi, Preetum Nakkiran, Josh Susskind, Chen Huang, Hanlin Goh, Laurent Dinh, Etai Littwin

Manifold Diffusion Fields

Ahmed Elhag (AIMS Senegal), Yuyang Wang, Josh Susskind, Miguel Angel Bautista Martin

Matryoshka Diffusion Models

Jiatao Gu, Shuangfei Zhai, Yizhe Zhang, Josh Susskind, Navdeep Jaitly

MOFI: Learning Image Representation from Noisy Entity Annotated Images

Wentao Wu, Aleksei Timofeev, Chen Chen, Bowen Zhang, Kun Duan, Shuangning Liu, Yantao Zheng, Jonathon Shlens (Google; contributions while at Apple), Xianzhi Du, Zhe Gan, Yinfei Yang

Overcoming the Pitfalls of Vision-Language Model Finetuning for OOD Generalization

Yuhang Zang (Nanyang Technological University), Hanlin Goh, Josh Susskind, Chen Huang

Poly-View Contrastive Learning

Amitis Shidani (Oxford University), Dan Busbridge, Devon Hjelm, Jason Ramapuram, Eeshan Gunesh Dhekane, Russ Webb

ReLU Strikes Back: Exploiting Activation Sparsity in Large Language Models

Iman Mirzadeh, Keivan Alizadeh Vahid, Sachin Mehta, Carlo C Del Mundo, Oncel Tuzel, Golnoosh Samei, Mohammad Rastegari, Mehrdad Farajtabar

TiC-CLIP: Continual Training of CLIP Models

Saurabh Garg (Carnegie Mellon University), Hadi Pour Ansari, Mehrdad Farajtabar, Sachin Mehta, Raviteja Vemulapalli, Oncel Tuzel, Vaishaal Shankar, Fartash Faghri

Vanishing gradients in reinforcement learning based fine-tuning of language models

Noam Razin (Tel Aviv University), Hattie Zhou (Université de Montréal), Preetum Nakkiran, Josh Susskind, Omid Saremi, Arwen Bradley, Vimal Thilak, Etai Littwin

What Algorithms can Transformers Learn? A Study in Length Generalization

Hattie Zhou (Université de Montréal), Omid Saremi, Etai Littwin, Arwen Bradley, Noam Razin (Tel Aviv University), Josh Susskind, Samy Bengio, Preetum Nakkiran

Conformal Prediction via Regression-as-Classification

Etash Guha (Riken AIP), Shlok Natarajan (Salesforce), Thomas Mollenhoff (Riken AIP), Emtiyaz Khan (Riken AIP), Eugene Ndiaye

How to compute efficiently Hessian-vector products?

Mathieu Dagreou (Inria), Thomas Moreau (Inria), Samuel Vaiter (CNRS), Pierre Ablin

Only Pay for What is Uncertain: Variance-Adaptive Thompson Sampling

Aadirupa Saha, Branislav Kveton (Amazon)

Pseudo-Generalized Dynamic View Synthesis from a Video

Xiaoming Zhao (UIUC), Fangchang Ma, Josh Susskind, Miguel Angel Bautista Martin, Alex Colburn, Alex Schwing

When Can Transformers Reason With Abstract Symbols?

Enric Boix (MIT), Josh Susskind, Omid Saremi, Emmanuel Abbe, Etai Littwin, Samy Bengio

Workshop Accepted Papers

Frequency-Aware Masked Autoencoders for Multimodal Pretraining on Biosignals

Ran Liu (Georgia Institute of Technology), Ellen Zippi, Hadi Pour Ansari, Chris Sandino, Jingping Nie (Columbia University), Hanlin Goh, Erdrin Azemi, Ali Moin

How Far Are We from Intelligent Visual Deductive Reasoning?

Yizhe Zhang, Richard Bai, Ruixiang Zhang, Jiatao Gu, Shuangfei Zhai, Josh Susskind, Navdeep Jaitly

Large-scale Training of Foundation Models for Wearable Biosignals

Salar Abbaspourazad, Oussama Elachqar, Andy Miller, Saba Emrani, Udhay Nallasamy, Ian Shapiro

Rephrase not Repeat: Beating Scaling Laws in Data Constrained Language Modeling

Pratyush Maini (CMU), Skyler Seto, David Grangier, Richard Bai, Yizhe Zhang, Navdeep Jaitly

Acknowledgements

Samy Bengio is a member of the ICLR 2024 Organizing Committee.

Samy Bengio, Miguel Angel Bautista Martin, Eugene Ndiaye, and Yizhe Zhang are ICLR 2024 area chairs.

Fartash Faghri, Enrico Fini, Devon Hjelm, Bogdan Mazoure, Wenze Hu, Rin Metcalf Susa, Vimal Thilak, and Luca Zappella are reviewers for ICLR 2024.

