Machine learning research with positive real-world impact.
Ambitious applied research, positive outcomes
Recent highlights
Our research is supported by access to massive datasets, close collaboration with world-renowned academic faculty, and a uniquely scalable machine learning platform.
Nature Communications | Decentralized federated learning through proxy model sharing

Abstract
Institutions in highly regulated domains such as finance and healthcare often have restrictive rules around data sharing. Federated learning is a distributed learning framework that enables multi-institutional collaborations on decentralized data with improved protection for each collaborator’s data privacy. In this paper, we propose a communication-efficient scheme for decentralized federated learning called ProxyFL, or proxy-based federated learning. Each participant in ProxyFL maintains two models: a private model, and a publicly shared proxy model designed to protect the participant’s privacy. Proxy models allow efficient information exchange among participants without the need for a centralized server. The proposed method eliminates a significant limitation of canonical federated learning by allowing model heterogeneity; each participant can have a private model with any architecture. Furthermore, our protocol for communication by proxy leads to stronger privacy guarantees using differential privacy analysis. Experiments on popular image datasets, and a cancer diagnostic problem using high-quality gigapixel histology whole slide images, show that ProxyFL can outperform existing alternatives with much less communication overhead and stronger privacy.
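A minimal sketch of the proxy-exchange idea from the abstract above, with stand-in weight vectors in place of real networks. The `Participant` class, the ring topology, and the plain averaging step are illustrative assumptions, not the paper's implementation (which trains the private/proxy pair jointly and adds differential-privacy noise to the shared proxy):

```python
import numpy as np

rng = np.random.default_rng(0)

class Participant:
    """Toy ProxyFL-style participant: a private model plus a shared proxy.
    Models are stand-in weight vectors, not real networks."""
    def __init__(self, private_dim, proxy_dim=4):
        self.private = rng.normal(size=private_dim)  # any architecture/size
        self.proxy = rng.normal(size=proxy_dim)      # shared; fixed shape

    def local_update(self):
        # placeholder for jointly training the private and proxy models
        self.proxy += 0.01

    def absorb(self, received_proxy):
        # incorporate a neighbour's proxy (here: simple averaging)
        self.proxy = 0.5 * (self.proxy + received_proxy)

# heterogeneous private models, homogeneous proxies, no central server
peers = [Participant(private_dim=d) for d in (8, 16, 32)]
for step in range(3):
    for p in peers:
        p.local_update()
    outgoing = [p.proxy.copy() for p in peers]
    for i, p in enumerate(peers):  # ring topology: receive from the left
        p.absorb(outgoing[(i - 1) % len(peers)])

# private models keep their distinct sizes; only proxies are exchanged
assert len({p.private.size for p in peers}) == 3
assert all(p.proxy.size == 4 for p in peers)
```

The point of the sketch is the structural one the abstract makes: because only the fixed-shape proxies travel between peers, each participant's private model can have any architecture.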
ICML 2023 | TR0N: Translator Networks for 0-Shot Plug-and-Play Conditional Generation

Abstract
We propose TR0N, a highly general framework to turn pre-trained unconditional generative models, such as GANs and VAEs, into conditional models. The conditioning can be highly arbitrary, and requires only a pre-trained auxiliary model. For example, we show how to turn unconditional models into class-conditional ones with the help of a classifier, and also into text-to-image models by leveraging CLIP. TR0N learns a lightweight stochastic mapping which “translates” between the space of conditions and the latent space of the generative model, in such a way that the generated latent corresponds to a data sample satisfying the desired condition. The translated latent samples are then further improved upon through Langevin dynamics, enabling us to obtain higher-quality data samples. TR0N requires no training data nor fine-tuning, yet can achieve a zero-shot FID of 10.9 on MS-COCO, outperforming competing alternatives not only on this metric, but also in sampling speed — all while retaining a much higher level of generality.
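The "translate, then refine with Langevin dynamics" step can be sketched on a toy energy. Here the quadratic `energy`, the `translate` stand-in, and all step sizes are hypothetical; in TR0N the energy would come from an auxiliary model such as CLIP or a classifier, and the translator is a learned stochastic network:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in "energy": distance between a latent and a condition embedding.
def energy(z, c):
    return 0.5 * np.sum((z - c) ** 2)

def grad_energy(z, c):
    return z - c

def translate(c):
    # stand-in for the learned stochastic translator network
    return c + rng.normal(scale=0.5, size=c.shape)

def langevin_refine(z, c, step=0.1, n_steps=50, noise=0.01):
    # z_{t+1} = z_t - step * grad E(z_t) + sqrt(2 * step) * noise
    for _ in range(n_steps):
        z = z - step * grad_energy(z, c) \
              + np.sqrt(2 * step) * noise * rng.normal(size=z.shape)
    return z

c = np.ones(8)       # condition embedding
z0 = translate(c)    # translator provides a warm start in latent space
z = langevin_refine(z0, c)
assert energy(z, c) < energy(z0, c)  # refinement lowers the energy
```

The translator supplies a good initial latent so that only a short Langevin chain is needed, which is what lets the method stay fast at sampling time.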
ICLR 2023 Spotlight | Disparate Impact in Differential Privacy from Gradient Misalignment

Abstract
As machine learning becomes more widespread throughout society, aspects including data privacy and fairness must be carefully considered, and are crucial for deployment in highly regulated industries. Unfortunately, the application of privacy enhancing technologies can worsen unfair tendencies in models. In particular, one of the most widely used techniques for private model training, differentially private stochastic gradient descent (DPSGD), frequently intensifies disparate impact on groups within data. In this work we study the fine-grained causes of unfairness in DPSGD and identify gradient misalignment due to inequitable gradient clipping as the most significant source. This observation leads us to a new method for reducing unfairness by preventing gradient misalignment in DPSGD.
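A small sketch of the misalignment mechanism the abstract identifies: per-example clipping shrinks large gradients more than small ones, which can rotate the averaged update away from the true average direction. The two-group toy gradients and the `dpsgd_gradient` helper are illustrative assumptions, not the paper's experimental setup:

```python
import numpy as np

def dpsgd_gradient(per_example_grads, clip_norm=1.0, noise_std=0.0, rng=None):
    """Per-example clipping and averaging as in DP-SGD (noise optional)."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / norm))  # inequitable clipping
    avg = np.mean(clipped, axis=0)
    if noise_std > 0 and rng is not None:
        avg = avg + rng.normal(scale=noise_std, size=avg.shape)
    return avg

# A majority group with small gradients, a minority group with large ones.
majority = [np.array([0.5, 0.0])] * 9
minority = [np.array([0.0, 5.0])] * 1
grads = majority + minority

true_avg = np.mean(grads, axis=0)    # update direction without privacy
private_avg = dpsgd_gradient(grads)  # clipping mutes the minority's pull

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# The clipped update is misaligned with the true average gradient:
assert cos(true_avg, private_avg) < 0.99
```

Only the minority group's gradients exceed the clip norm, so their contribution is disproportionately shrunk and the model drifts toward the majority group's loss.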
ICLR 2023 | Verifying the Union of Manifolds Hypothesis for Image Data

Abstract
Deep learning has had tremendous success at learning low-dimensional representations of high-dimensional data. This success would be impossible if there were no hidden low-dimensional structure in data of interest; this existence is posited by the manifold hypothesis, which states that the data lies on an unknown manifold of low intrinsic dimension. In this paper, we argue that this hypothesis does not properly capture the low-dimensional structure typically present in image data. Assuming that data lies on a single manifold implies intrinsic dimension is identical across the entire data space, and does not allow for subregions of this space to have a different number of factors of variation. To address this deficiency, we consider the union of manifolds hypothesis, which states that data lies on a disjoint union of manifolds of varying intrinsic dimensions. We empirically verify this hypothesis on commonly-used image datasets, finding that indeed, observed data lies on a disconnected set and that intrinsic dimension is not constant. We also provide insights into the implications of the union of manifolds hypothesis in deep learning, both supervised and unsupervised, showing that designing models with an inductive bias for this structure improves performance across classification and generative modelling tasks. Our code is available at https://github.com/layer6ai-labs/UoMH.
ICLR 2023 | Temporal Dependencies in Feature Importance for Time Series Prediction

Abstract
Time series data introduces two key challenges for explainability methods: firstly, observations of the same feature over subsequent time steps are not independent, and secondly, the same feature can have varying importance to model predictions over time. In this paper, we propose Windowed Feature Importance in Time (WinIT), a feature removal based explainability approach to address these issues. Unlike existing feature removal explanation methods, WinIT explicitly accounts for the temporal dependence between different observations of the same feature in the construction of its importance score. Furthermore, WinIT captures the varying importance of a feature over time, by summarizing its importance over a window of past time steps. We conduct an extensive empirical study on synthetic and real-world data, compare against a wide range of leading explainability methods, and explore the impact of various evaluation strategies. Our results show that WinIT achieves significant gains over existing methods, with more consistent performance across different evaluation metrics.
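A toy sketch of a window-based, feature-removal importance score in the spirit of the abstract above. Zero-masking here is a crude stand-in for the paper's treatment of temporal dependence, and `predict`, `winit_score`, and the window sizes are hypothetical:

```python
import numpy as np

def predict(x):
    # stand-in model: weighted sum over a (time, feature) array,
    # with recent time steps weighted more heavily
    T, _ = x.shape
    w = np.linspace(0.1, 1.0, T)[:, None]
    return float(np.sum(w * x))

def winit_score(x, feature, t, window, baseline=0.0):
    """Toy WinIT-style score: how much masking `feature` over the past
    `window` steps ending at time `t` changes the model's prediction."""
    masked = x.copy()
    masked[max(0, t - window + 1): t + 1, feature] = baseline
    return abs(predict(x) - predict(masked))

rng = np.random.default_rng(2)
x = rng.uniform(size=(10, 3))  # positive toy series, 10 steps x 3 features

# Importance of feature 1 at the last step, over windows of 1 vs. 4 steps:
s1 = winit_score(x, feature=1, t=9, window=1)
s4 = winit_score(x, feature=1, t=9, window=4)
assert s4 > s1  # a longer window captures more of the feature's influence
```

Summarizing a feature's effect over a window of past steps, rather than a single observation, is what lets this style of score reflect importance that varies over time.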
Our research areas include recommendation systems, computer vision, time series forecasting, and natural language processing.

Big vision, deep roots
The co-founders of Layer 6, Jordan Jacobs and Tomi Poutanen, are also founders of the Vector Institute for Artificial Intelligence, and we maintain multiple research initiatives with Vector faculty. Current and former scientific advisors include professors Raquel Urtasun, Sanja Fidler, Rich Zemel, David Duvenaud, Laura Rosella and Scott Sanner.
Meaningful partnerships
Originally founded in 2011, Layer 6 now forms the AI research lab of TD Bank Group. Layer 6 impacts the lives of 25 million customers, helping more people achieve their financial goals. A partnership with TD Securities provides Layer 6 with market data for training algorithmic trading systems.
Layer 6 embraces opportunities to collaborate with Toronto’s world-leading medical research community, offering deep learning solutions to transform healthcare delivery and improve health outcomes. We are the first to deploy deep learning models on health data covering a large population.
Passion to learn, driven to succeed
Our team represents 18 different countries of birth and we care deeply about fostering an inclusive culture where we learn from each other and win together. We are united by our passion for deep learning and a desire to apply our skills to have an outsized and positive impact on the future.
Meet some of our team
Develop your career at Layer 6
We’re growing our team exclusively with people driven to be at the top of their game in machine learning.

In the news
- Inside TD’s AI play: How Layer 6’s technology hopes to improve old-fashioned banking advice (The Globe and Mail)
- Tomi Poutanen: Geoffrey Hinton's Turing Award celebrates a life devoted to ground-breaking AI research (TD Newsroom)
- Tomi Poutanen: Chief Artificial Intelligence Officers Enter the C-Suite (Wall Street Journal)
- TD Bank’s ‘Layer 6’ to bring machine learning personalization to diabetes care (IT Business)
- TD Advances Innovation in Canadian Healthcare (TD Bank Group)
- Jordan Jacobs, co-founder of Vector Institute, on Canada as a global AI leader (IT Business)
- Layer 6’s Jordan Jacobs: Canada needs to promote itself as an AI leader (BetaKit)
- U of T alumni and graduate students part of Layer 6 AI's win in global competition (U of T News)
- Tomi Poutanen and Michael Rhodes discuss the future of artificial intelligence with Amanda Lang (TD Bank Group)
- Get Smart – Artificial intelligence is transforming business and life (Ivey Business School)
- How TD Bank plans to use artificial intelligence (BNN Bloomberg)