This week in sunny Southern California, the world’s community of AI researchers had one of its biggest annual gatherings, as the International Conference on Machine Learning (ICML) took place in Long Beach.
VentureBeat didn’t have a reporter at ICML, but you don’t have to be in Long Beach to read the latest state-of-the-art research and breakthrough advances that move the AI needle.
Conference organizers gave best paper honors to “Rates of Convergence for Sparse Variational Gaussian Process Regression” from the University of Cambridge and “Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations.”
With authors from ETH Zurich, the Max Planck Institute for Intelligent Systems, and Google Brain, the latter work evaluates more than 12,000 disentanglement models to test common beliefs in the field. It asserts, for example, that unsupervised learning of disentangled representations is impossible without inductive biases on both models and data.
“Our results suggest that future work on disentanglement learning should be explicit about the role of inductive biases and (implicit) supervision, investigate concrete benefits of enforcing disentanglement of the learned representations, and consider a reproducible experimental setup covering several data sets,” the paper’s abstract reads.
The group also released a library of 10,000 pretrained disentanglement models for training, evaluation, and future research.
Another paper aimed at challenging AI industry assumptions was among top honorable mentions. “Analogies Explained: Towards Understanding Word Embeddings” by University of Edinburgh researchers examines neural word embeddings like word2vec that power natural language processing.
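The analogies the Edinburgh paper sets out to explain are the familiar vector-arithmetic kind, where “man is to king as woman is to ?” is answered by nearest-neighbor search around king − man + woman. As a rough illustration of that mechanism (using tiny hand-crafted vectors, not real word2vec embeddings):

```python
import numpy as np

# Toy vectors for illustration only: dimension 0 loosely encodes
# "royalty", dimension 1 "male", and dimension 2 "female".
vectors = {
    "king":   np.array([1.0, 1.0, 0.0]),
    "queen":  np.array([1.0, 0.0, 1.0]),
    "prince": np.array([0.9, 0.9, 0.1]),
    "man":    np.array([0.0, 1.0, 0.0]),
    "woman":  np.array([0.0, 0.0, 1.0]),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def analogy(a, b, c):
    """Answer 'a is to b as c is to ?' via b - a + c, excluding the inputs."""
    target = vectors[b] - vectors[a] + vectors[c]
    candidates = {w: v for w, v in vectors.items() if w not in {a, b, c}}
    return max(candidates, key=lambda w: cosine(target, candidates[w]))

print(analogy("man", "king", "woman"))  # -> queen
```

Real embedding models learn these vectors from co-occurrence statistics in large corpora; the paper’s contribution is a theoretical account of why the arithmetic works at all.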
Researchers at MIT’s Media Lab, the Institute for Advanced Study in Princeton, and Google’s DeepMind earned an honorable mention for devising methods for coordination and communication in multi-agent reinforcement learning, as did two papers from the University of Oxford’s Department of Statistics.
Videos of author presentations, together with slides, are available on the SlidesLive website.
For the first time this year, event chairs asked researchers to include code to support their findings at the same time that they shared their paper manuscripts. Researchers around the world submitted more than 3,000 papers, and organizers accepted nearly 800.
Sharing code helps verify the scientific results reported in research papers. Sharing code at submission time instead of upon acceptance also appears to help reviewers, as more than half said they found it useful when deciding which papers merited acceptance.
The code-at-submit-time experiment found that 67% of accepted papers submitted code along with their research to back the validity of their claims. In a Medium post sharing the experiment’s results, the co-chairs suggested a common code repository and a common archive to further support reproducibility over time.
A breakdown of ICML participants by affiliation found that top contributors include Google; Microsoft; Facebook; MIT; Stanford University; and University of California, Berkeley.
In another initiative this week to encourage replication of results, Facebook introduced PyTorch Hub, an API and workflow for research reproducibility and support. The hub launches in beta with about 20 pretrained models.
Other notable research we came across this week includes work by Intel AI that combines reinforcement learning methods to train a 3D humanoid to walk, and another paper that demonstrates how to compress models without losing accuracy.
Beyond ICML, we wrote about AI created by University of York researchers that can predict when Dota 2 players will die, Facebook AI’s MelNet generative model that can sound like Bill Gates delivering a TED Talk, and evidence that Alexa’s speech error rate continues to decline.
More research is coming. The Computer Vision and Pattern Recognition (CVPR) event — also in Long Beach and also considered one of the largest annual AI conferences, according to the 2018 AI Index report — begins Monday.
The research above isn’t meant to be a comprehensive list of noteworthy work from ICML — just a glance at what caught our eye this week. So if you know about great research from ICML or other conferences that you think merits coverage, send news tips to Khari Johnson and Kyle Wiggers — and be sure to bookmark our AI Channel and subscribe to the AI Weekly newsletter.
Thanks for reading,
Senior AI staff writer