ICLR 2015

Basic Information

When: May 7 - 9, 2015
Where: The Hilton San Diego Resort & Spa

There is a negotiated room rate for ICLR 2015. Please use this link for reservations. If you have difficulty with the booking site, please call the Hilton San Diego's in-house reservation team directly at +1-619-276-4010 ext. 1.

Registration

Anyone registering after April 29, 2015 will need to see Karen Smith at the registration desk for a badge.

- Late registration (regular): $800
- Late registration (student): $600

Note that the registration fee includes breakfast, coffee breaks, dinner, and the joint ICLR/AISTATS reception. See the conference schedule for the timing of these events.

Online Registration Form

Important Dates

- 19 Dec. 2014: Authors submit papers to ICLR 2015 via CMT before 11:59 pm PST.
- 26 Dec. 2014: Authors update their submissions with the arXiv number and URL if they were not available on 19 Dec. 2014.
- 02 Jan. 2015: Reviewers receive their assignments.
- 09 Feb. 2015: Reviewers submit their reviews.
- 27 Feb. 2015: Authors post their initial responses to the reviews.
- 09 Mar. 2015: End of discussion period for papers.
- 20 Mar. 2015: Decisions sent to authors.
- 06 Apr. 2015: Deadline for early registration and to register for the hotel at the conference rate.

Committee

General Chairs
- Yoshua Bengio, Université de Montréal
- Yann LeCun, New York University and Facebook

Program Chairs
- Brian Kingsbury, IBM Research
- Samy Bengio, Google
- Nando de Freitas, University of Oxford
- Hugo Larochelle, Université de Sherbrooke

Contact: iclr2015.programchairs@gmail.com

Discussion, forum, and pictures on the ICLR Facebook page: https://www.facebook.com/iclr.cc

Sponsors

ICLR 2015 gratefully acknowledges the support of its Gold, Silver, and Bronze sponsors.

Conference Wireless Access

Network: Hilton Resort
Username: iclr2015
Password: deeplearning

Conference Schedule

May 7
- 0730-0900: Breakfast, South Poolside (sponsored by Baidu)
- 0900-1230: Oral Session, International Ballroom
- 0900-0940: Keynote: Antoine Bordes (Facebook), Artificial Tasks for Artificial Intelligence (slides, videos)
- 0940-1000: Oral: Word Representations via Gaussian Embedding, Luke Vilnis and Andrew McCallum (University of Massachusetts Amherst) (slides, video)
- 1000-1020: Oral: Deep Captioning with Multimodal Recurrent Neural Networks (m-RNN), Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, Zhiheng Huang, and Alan Yuille (Baidu and UCLA) (slides, video)
- 1020-1050: Coffee break
- 1050-1130: Keynote: David Silver (Google DeepMind), Deep Reinforcement Learning (slides, videos)
- 1130-1150: Oral: Deep Structured Output Learning for Unconstrained Text Recognition, Max Jaderberg, Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman (University of Oxford and Google DeepMind) (slides, video)
- 1150-1210: Oral: Very Deep Convolutional Networks for Large-Scale Image Recognition, Karen Simonyan and Andrew Zisserman (University of Oxford) (slides, video)
- 1210-1230: Oral: Fast Convolutional Nets With fbfft: A GPU Performance Evaluation, Nicolas Vasilache, Jeff Johnson, Michael Mathieu, Soumith Chintala, Serkan Piantino, and Yann LeCun (Facebook AI Research) (slides, video)
- 1230-1400: Lunch (on your own)
- 1400-1700: Workshop Poster Session 1, The Pavilion
- 1730-1900: Dinner, South Poolside (sponsored by Google)
May 8
- 0730-0900: Breakfast, South Poolside (sponsored by Facebook)
- 0900-1230: Oral Session, International Ballroom
- 0900-0940: Keynote: Terrence Sejnowski (Salk Institute), Beyond Representation Learning (videos)
- 0940-1000: Oral: Reweighted Wake-Sleep (slides, video)
- 1000-1020: Oral: The local low-dimensionality of natural images (slides, video)
- 1020-1050: Coffee break
- 1050-1130: Keynote: Percy Liang (Stanford), Learning Latent Programs for Question Answering (slides, videos)
- 1130-1150: Oral: Memory Networks (slides, video)
- 1150-1210: Oral: Object detectors emerge in Deep Scene CNNs (slides, video)
- 1210-1230: Oral: Qualitatively characterizing neural network optimization problems (slides, video)
- 1230-1400: Lunch (on your own)
- 1400-1700: Workshop Poster Session 2, The Pavilion
- 1730-1900: Dinner, South Poolside (sponsored by IBM Watson)

May 9
- 0730-0900: Breakfast, South Poolside (sponsored by Qualcomm)
- 0900-0940: Keynote: Hal Daumé III (U. Maryland), Algorithms that Learn to Think on their Feet (slides, video)
- 0940-1000: Oral: Neural Machine Translation by Jointly Learning to Align and Translate (slides, video)
- 1000-1030: Coffee break
- 1030-1330: Conference Poster Session, The Pavilion (AISTATS attendees are invited to this poster session)
- 1330-1700: Lunch and break (on your own)
- 1700-1800: ICLR/AISTATS Oral Session, International Ballroom
- 1700-1800: Keynote: Pierre Baldi (UC Irvine), The Ebb and Flow of Deep Learning: a Theory of Local Learning (video)
- 1800-2000: ICLR/AISTATS reception, Fresco's (near the pool)

Keynote Talks

Antoine Bordes
Artificial Tasks for Artificial Intelligence

Despite great recent advances, the road towards intelligent machines able to reason and adapt in real time in multimodal environments remains long and uncertain. This final goal is so complex and so far away that it is impossible to perform experiments and research directly in the desired final conditions, so one has to use intermediate and/or proxy tasks as midway goals. Some of those tasks, like object detection in computer vision or machine translation in natural language processing, are very useful on their own and fuel many applications. However, such intermediate tasks are already very difficult, and it is not obvious that they are suitable testbeds for designing intelligent systems: their inherent complexity makes it hard to precisely interpret the behavior and true capabilities of algorithms, particularly regarding key sophisticated capabilities like reasoning and planning. Hence, in this talk, we advocate the use of controlled artificial environments for developing research in AI, environments in which one can precisely study the behavior of algorithms and unambiguously assess their abilities. This talk follows from joint work and discussions with Jason Weston, Sumit Chopra, Tomas Mikolov, and Leon Bottou, among others.

David Silver
Deep Reinforcement Learning

In this talk I will discuss how reinforcement learning (RL) can be combined with deep learning (DL). There are several ways to combine DL and RL, including value-based, policy-based, and model-based approaches with planning. Several of these approaches have well-known divergence issues, and I will present simple methods for addressing these instabilities. These methods have achieved notable success in the Atari 2600 domain. I will present a selection of recent results that improve on the published state of the art in Atari and other challenging domains. Finally, I will discuss how RL can be used to improve DL, even when the native problem is supervised or unsupervised learning.
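The abstract names the instability that arises when value-based RL bootstraps from its own moving estimates. A minimal sketch of one simple stabiliser, a periodically synced target network in the spirit of the Atari work, is below; the linear Q-function, toy transitions, and all hyperparameters are assumptions made for this illustration, not details from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny linear Q-function over state features; W has shape
# (n_actions, n_features), so Q(s, a) = W[a] @ s.
n_features, n_actions = 4, 2
W = rng.normal(scale=0.1, size=(n_actions, n_features))  # online weights
W_target = W.copy()                                      # frozen target weights

gamma, lr = 0.99, 0.05

def q_values(weights, state):
    """All action values for a state under the given weights."""
    return weights @ state

def td_update(state, action, reward, next_state, done):
    """One Q-learning step that bootstraps from the *target* weights.

    Keeping the bootstrap weights fixed between syncs means the
    regression target is not chasing itself, which damps divergence.
    """
    bootstrap = 0.0 if done else gamma * q_values(W_target, next_state).max()
    td_error = reward + bootstrap - q_values(W, state)[action]
    W[action] += lr * td_error * state  # semi-gradient update

# Toy loop over random transitions; every `sync_every` steps the
# target weights are refreshed from the online weights.
sync_every = 50
for step in range(500):
    s, s2 = rng.normal(size=n_features), rng.normal(size=n_features)
    a = int(rng.integers(n_actions))
    td_update(s, a, reward=float(rng.normal()), next_state=s2, done=False)
    if step % sync_every == 0:
        W_target = W.copy()
```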
Terrence Sejnowski
Beyond Representation Learning

As we build ever deeper networks with ever more sophisticated representations, it is a good time to pause and ask ourselves where this will end. Building ever taller skyscrapers gets our heads in the clouds, but will it get us to the moon? A good place to look for answers is nature. This lecture will start with a look at the hierarchy of cortical areas, from which much of our intuition about deep learning came, and will explore the essential brain regions these cortical areas communicate with to give rise to intelligent behavior.

Percy Liang
Learning Latent Programs for Question Answering

"The first Summer Olympics that had at least 20 nations took place in which city?" We tackle the problem of building a system to answer such questions, which involve computing the answer. We propose a methodology based on semantic parsing, in which we map a question onto a latent program (logical form) whose execution yields the answer (denotation). To obtain both depth (complexity of the program) and breadth (diversity of the questions/domains), we define a new task of answering a complex question from semi-structured tables on the web. We show promising results on the new dataset and invite the community to take on this challenge.
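As a toy illustration of the pipeline Liang describes: a question maps to a latent program over a semi-structured table, and executing the program yields the denotation. The table contents, program encoding, and operations below are all invented for the sketch; the actual system learns the program from question-answer pairs rather than hard-coding it.

```python
# Semi-structured table: early Summer Olympics with nation counts
# (values are illustrative).
TABLE = [
    {"year": 1896, "city": "Athens", "nations": 14},
    {"year": 1900, "city": "Paris", "nations": 24},
    {"year": 1904, "city": "St. Louis", "nations": 12},
]

# A latent program (logical form) as a composition of table operations:
# filter rows by a predicate, take the earliest match, project a column.
program = [
    ("filter", lambda row: row["nations"] >= 20),
    ("argmin", "year"),    # "first" = smallest year among the matches
    ("project", "city"),
]

def execute(program, rows):
    """Run a logical form against the table; the result is the denotation."""
    result = rows
    for op, arg in program:
        if op == "filter":
            result = [r for r in result if arg(r)]
        elif op == "argmin":
            result = [min(result, key=lambda r: r[arg])]
        elif op == "project":
            result = [r[arg] for r in result]
    return result

# "The first Summer Olympics that had at least 20 nations took place
# in which city?" -> ["Paris"]
print(execute(program, TABLE))
```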
Hal Daumé III
Algorithms that Learn to Think on their Feet

The classic framework of machine learning is: example in, prediction out. This is great when examples are fully available, but it is very different from how humans reason. We get some information and may make a prediction, or we may decide to get more information. For us, it is worth spending effort on hard and important decisions (e.g., foreign policy); it is not for easy or low-cost decisions (e.g., afternoon snacks). I'll describe our recent work that focuses on information cost, value, and time. I'll show examples from three settings in natural language processing: syntactic parsing, question answering in competitions, and simultaneous machine translation. The last is the problem of incrementally producing a translation of a foreign sentence before the entire sentence has been "heard", and it is challenging even for well-trained humans. This is joint work with a number of fantastic collaborators: Jordan Boyd-Graber, Leonardo Claudino, Jason Eisner, Lise Getoor, Alvin Grissom II, He He, Mohit Iyyer, John Morgan, Jay Pujara, and Richard Socher.

Pierre Baldi
The Ebb and Flow of Deep Learning: a Theory of Local Learning

In a physical neural system, where storage and processing are intertwined, the learning rules for adjusting synaptic weights can only depend on local variables, such as the activity of the pre- and post-synaptic neurons. Thus learning models must specify two things: (1) which variables are to be considered local; and (2) which kind of function combines these local variables into a learning rule. We consider polynomial learning rules and analyze their behavior and capabilities in both linear and non-linear networks. As a byproduct, this framework enables the discovery of new learning rules and of important relationships between learning rules and group symmetries. Stacking local learning rules in deep feedforward networks leads to deep local learning. While deep local learning can learn interesting representations, it cannot learn complex input-output functions, even when targets are available for the top layer. Learning complex input-output functions instead requires local deep learning, where target information is transmitted to the deep layers, raising two fundamental issues: (1) the nature of the transmission channel; and (2) the nature and amount of information transmitted over this channel. This leads to the class of deep targets learning algorithms, which provide targets for the deep layers, and to its stratification along the information spectrum, illuminating the remarkable power and uniqueness of the backpropagation algorithm. The theory clarifies the concept of Hebbian learning and what is learnable by Hebbian learning, and it explains both the sparsity of the space of learning rules discovered so far and the unique role backpropagation plays in this space.
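As a concrete example of a polynomial learning rule built from local variables, here is a sketch of Oja's rule, a classic Hebbian-style update that depends only on the pre-synaptic input and the post-synaptic output of one linear neuron. The rule choice, data, and hyperparameters are illustrative and not taken from the talk.

```python
import numpy as np

rng = np.random.default_rng(1)

# One linear neuron: post-synaptic activity y = w @ x. The update below
# uses only "local" variables (pre-synaptic input x, post-synaptic
# output y). On zero-mean data, Oja's rule drives w toward the top
# principal component of the inputs.
n_inputs, lr = 3, 0.01
w = rng.normal(scale=0.1, size=n_inputs)

# Zero-mean data with most variance along the first axis.
X = rng.normal(size=(2000, n_inputs)) * np.array([3.0, 1.0, 0.5])

for x in X:
    y = w @ x
    # Plain Hebb term (y * x) plus Oja's decay term (-y^2 * w), which
    # keeps ||w|| bounded; both are polynomial in local quantities.
    w += lr * (y * x - y * y * w)

print(w / np.linalg.norm(w))  # approximately +/- [1, 0, 0]
```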
Conference Oral Presentations

- Word Representations via Gaussian Embedding, Luke Vilnis and Andrew McCallum
- Deep Captioning with Multimodal Recurrent Neural Networks (m-RNN), Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, and Alan Yuille
- Deep Structured Output Learning for Unconstrained Text Recognition, Max Jaderberg, Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman
- Very Deep Convolutional Networks for Large-Scale Image Recognition, Karen Simonyan and Andrew Zisserman
- Fast Convolutional Nets With fbfft: A GPU Performance Evaluation, Nicolas Vasilache, Jeff Johnson, Michael Mathieu, Soumith Chintala, Serkan Piantino, and Yann LeCun
- Reweighted Wake-Sleep, Jorg Bornschein and Yoshua Bengio
- The local low-dimensionality of natural images, Olivier Henaff, Johannes Balle, Neil Rabinowitz, and Eero Simoncelli
- Memory Networks, Jason Weston, Sumit Chopra, and Antoine Bordes
- Object detectors emerge in Deep Scene CNNs, Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba
- Qualitatively characterizing neural network optimization problems, Ian Goodfellow and Oriol Vinyals
- Neural Machine Translation by Jointly Learning to Align and Translate, Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio

May 9 Conference Poster Session

- Board 2: FitNets: Hints for Thin Deep Nets, Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio
- Board 3: Techniques for Learning Binary Stochastic Feedforward Neural Networks, Tapani Raiko, Mathias Berglund, Guillaume Alain, and Laurent Dinh
- Board 4: Reweighted Wake-Sleep, Jorg Bornschein and Yoshua Bengio
- Board 5: Semantic Image Segmentation with Deep Convolutional Nets and Fully Connected CRFs, Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan Yuille
- Board 7: Multiple Object Recognition with Visual Attention, Jimmy Ba, Volodymyr Mnih, and Koray Kavukcuoglu
- Board 8: Deep Narrow Boltzmann Machines are Universal Approximators, Guido Montufar
- Board 9: Transformation Properties of Learned Visual Representations, Taco Cohen and Max Welling
- Board 10: Joint RNN-Based Greedy Parsing and Word Composition, Joël Legrand and Ronan Collobert
- Board 11: Adam: A Method for Stochastic Optimization, Jimmy Ba and Diederik Kingma
- Board 13: Neural Machine Translation by Jointly Learning to Align and Translate, Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio
- Board 15: Scheduled denoising autoencoders, Krzysztof Geras and Charles Sutton
- Board 16: Embedding Entities and Relations for Learning and Inference in Knowledge Bases, Bishan Yang, Scott Yih, Xiaodong He, Jianfeng Gao, and Li Deng
- Board 18: The local low-dimensionality of natural images, Olivier Henaff, Johannes Balle, Neil Rabinowitz, and Eero Simoncelli
- Board 20: Explaining and Harnessing Adversarial Examples, Ian Goodfellow, Jon Shlens, and Christian Szegedy
- Board 22: Modeling Compositionality with Multiplicative Recurrent Neural Networks, Ozan Irsoy and Claire Cardie
- Board 24: Very Deep Convolutional Networks for Large-Scale Image Recognition, Karen Simonyan and Andrew Zisserman
- Board 25: Speeding-up Convolutional Neural Networks Using Fine-tuned CP-Decomposition, Vadim Lebedev, Yaroslav Ganin, Victor Lempitsky, Maksim Rakhuba, and Ivan Oseledets
- Board 27: Deep Captioning with Multimodal Recurrent Neural Networks (m-RNN), Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, and Alan Yuille
- Board 28: Deep Structured Output Learning for Unconstrained Text Recognition, Max Jaderberg, Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman
- Board 30: Zero-bias autoencoders and the benefits of co-adapting features, Kishore Konda, Roland Memisevic, and David Krueger
- Board 31: Automatic Discovery and Optimization of Parts for Image Classification, Sobhan Naderi Parizi, Andrea Vedaldi, Andrew Zisserman, and Pedro Felzenszwalb
- Board 33: Understanding Locally Competitive Networks, Rupesh Srivastava, Jonathan Masci, Faustino Gomez, and Juergen Schmidhuber
- Board 35: Leveraging Monolingual Data for Crosslingual Compositional Word Representations
Executive Summary
The article documents the International Conference on Learning Representations (ICLR) 2015, a premier event in artificial intelligence and machine learning held May 7-9, 2015, at the Hilton San Diego Resort & Spa. It covers registration fees, important dates for authors and reviewers, the full conference schedule, keynote abstracts, and the lists of oral and poster presentations, including the joint ICLR/AISTATS reception. The conference brought together researchers and practitioners to share knowledge and advance the field of learning representations.
Key Points
- ▸ ICLR 2015 was a premier event in artificial intelligence and machine learning
- ▸ The conference took place from May 7-9, 2015, at the Hilton San Diego Resort & Spa
- ▸ Keynote speakers included Antoine Bordes, David Silver, Terrence Sejnowski, Percy Liang, Hal Daumé III, and Pierre Baldi
- ▸ The program combined oral sessions, workshop and conference poster sessions, and a joint ICLR/AISTATS reception
Merits
Strength in Networking Opportunities
The conference provided a platform for researchers and practitioners to interact, share knowledge, and collaborate, promoting advancements in the field of learning representations.
Comprehensive Conference Schedule
The schedule combined keynote talks, oral presentations, poster sessions, and social events such as the joint ICLR/AISTATS reception, catering to the diverse interests of attendees.
Notable Keynote Speakers
The conference featured prominent keynote speakers, including Antoine Bordes, David Silver, Terrence Sejnowski, Percy Liang, Hal Daumé III, and Pierre Baldi, who presented cutting-edge research on topics ranging from deep reinforcement learning to semantic parsing and theories of local learning.
Demerits
Limited Geographic Scope
The conference was held at a single venue in San Diego, which may have limited attendance by researchers and practitioners unable to travel to the region.
Registration Fees
The registration fees, with late registration at $800 for regular attendees and $600 for students, may have been a barrier for some attendees, particularly students and individuals from underrepresented groups.
Expert Commentary
ICLR 2015 was a significant event in artificial intelligence and machine learning, showcasing the latest research and trends in representation learning. The comprehensive schedule and notable keynote speakers gave researchers and practitioners a productive venue for sharing knowledge and collaborating. However, the single venue and the registration fees may have excluded some attendees. Nevertheless, ICLR 2015 demonstrates the value of conferences in facilitating knowledge sharing and collaboration, and as the field continues to evolve, it will be essential to address accessibility and inclusivity in conferences and research events.
Recommendations
- ✓ Future conferences in artificial intelligence and machine learning should prioritize accessibility and inclusivity, including measures such as reduced registration fees and more geographically accessible or rotating venues.
- ✓ Conferences should continue to focus on facilitating knowledge sharing and collaboration among researchers and practitioners, promoting advancements in the field of learning representations.