Machine learning research with positive real-world impact.
We develop and deploy industry-leading machine learning systems. Our initiatives have the power to uplift large populations, while advancing the field of artificial intelligence.
Teaching Models to Forget: A Deep Dive into Minimally Invasive Machine Unlearning
A control-oriented framework for practical, sustainable unlearning. Machine learning systems are now deeply embedded in industries where privacy, compliance, and user trust matter, from financial services and healthcare to...
Research | From Guesswork to Guarantees: A New Framework for Reliable LLM Summarization
Product | How Generative AI Is Transforming the Front Office at TD Securities


Ambitious applied research, positive outcomes
Layer 6 unites research, engineering, and product teams to quickly translate theory into impactful real-world applications.
Research Highlights
Our research is supported by access to massive datasets and close collaboration with world-renowned academic faculty, and is deployed in impactful applications.
Our research areas include:
- deep learning and generative AI
- model explainability and trustworthy AI
- time series modelling
- natural language processing
International Conference on Machine Learning
ICML 2026 | Agentic Monte Carlo: Reinforcement Learning for Black-Box LLM Agents
Abstract
LLM agents operate in two distinct regimes: open-weight agents amenable to reinforcement learning (RL) and black-box agents whose behaviour must be controlled purely at test time. Although black-box agents are often backed by state-of-the-art proprietary LLMs, API-only access precludes parameter-level optimization, rendering most RL methods inapplicable. To address this limitation, we turn to a known equivalence between RL and Bayesian inference. We propose Agentic Monte Carlo (AMC) to directly sample from the optimal policy of a black-box agent rather than training it through RL. The optimal policy is a posterior over trajectories whose prior we define as the fixed black-box LLM agent. We employ Sequential Monte Carlo to sample from this posterior by learning a value function to steer the agent while leaving the underlying black-box model unchanged. We validate AMC on three diverse environments from the AgentGym benchmark, demonstrating significant improvements over prompting baselines and even outperforming Group Relative Policy Optimization (GRPO) as we scale the test-time compute of our method. AMC demonstrates the feasibility of performing principled RL-style optimization of black-box LLM agents.
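The core loop described above can be sketched as a particle filter over agent trajectories: extend each particle with the frozen black-box policy, reweight by a learned value function, and resample. Everything below (the proposal, the value function, and the toy token space) is an illustrative stand-in for the abstract's setup, not the paper's implementation:

```python
import math
import random

def smc_step(particles, propose, value_fn, rng):
    """One Sequential Monte Carlo step: extend each trajectory with a
    proposal from the (frozen) prior policy, weight by a learned value
    function, then resample in proportion to the weights."""
    # Extend every particle with an action sampled from the prior policy.
    extended = [traj + [propose(traj, rng)] for traj in particles]
    # Twist weights: exp(V) steers probability mass toward high-value
    # trajectories without touching the black-box policy's parameters.
    weights = [math.exp(value_fn(traj)) for traj in extended]
    total = sum(weights)
    probs = [w / total for w in weights]
    # Multinomial resampling keeps the particle set focused.
    return rng.choices(extended, weights=probs, k=len(particles))

# Toy stand-ins: the "agent" emits tokens 0/1; the value function
# prefers trajectories with many 1s (all names here are illustrative).
rng = random.Random(0)
propose = lambda traj, rng: rng.choice([0, 1])
value_fn = lambda traj: float(sum(traj))

particles = [[] for _ in range(8)]
for _ in range(5):
    particles = smc_step(particles, propose, value_fn, rng)
```

Because only samples from the policy and a scalar value estimate are required, this steering is compatible with API-only model access, which is the regime the abstract targets.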
International Conference on Machine Learning
ICML 2026 | Conf-Gen: Conformal Uncertainty Quantification for Generative Models
International Conference on Machine Learning
ICML 2026 | Beyond Procedure: Substantive Fairness in Conformal Prediction
Abstract
Conformal prediction (CP) offers distribution-free uncertainty quantification for machine learning models, yet its interplay with fairness in downstream decision-making remains underexplored. Moving beyond CP as a standalone operation (procedural fairness), we analyze the holistic decision-making pipeline to evaluate substantive fairness: the equity of downstream outcomes. Theoretically, we derive an upper bound that decomposes prediction-set size disparity into interpretable components, clarifying how label-clustered CP helps control method-driven contributions to unfairness. To facilitate scalable empirical analysis, we introduce an LLM-in-the-loop evaluator that approximates human assessment of substantive fairness across diverse modalities. Our experiments reveal that label-clustered CP variants consistently deliver superior substantive fairness. Finally, we empirically show that equalized set sizes, rather than coverage, strongly correlate with improved substantive fairness, enabling practitioners to design fairer CP systems.
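The prediction sets whose size disparity the abstract analyzes come from standard split conformal prediction. A minimal sketch, with made-up calibration scores and class probabilities (score = 1 minus the probability of the true label):

```python
import math

def conformal_sets(cal_scores, test_probs, alpha=0.1):
    """Split conformal prediction: calibrate a score threshold on
    held-out data, then include every label whose score clears it.
    Marginal coverage is distribution-free at level 1 - alpha."""
    n = len(cal_scores)
    # Finite-sample-corrected quantile of the calibration scores.
    k = math.ceil((n + 1) * (1 - alpha))
    qhat = sorted(cal_scores)[min(k, n) - 1]
    # Prediction set: every label with score 1 - p at or below qhat.
    return [[y for y, p in enumerate(probs) if 1 - p <= qhat]
            for probs in test_probs]

# Illustrative calibration scores and test-time class probabilities.
cal = [0.05, 0.10, 0.20, 0.30, 0.40, 0.50, 0.60, 0.70, 0.80, 0.90]
test = [[0.7, 0.2, 0.1], [0.4, 0.35, 0.25]]
sets = conformal_sets(cal, test, alpha=0.2)  # → [[0, 1], [0, 1, 2]]
```

The second test point yields a larger set than the first; the abstract's fairness question is whether such set-size differences track group membership, and how label-clustered calibration mitigates that.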
International Conference on Learning Representations
ICLR 2026 | Textual Bayes: Quantifying Uncertainty in LLM-Based Systems
Abstract
Although large language models (LLMs) are becoming increasingly capable of solving challenging real-world tasks, accurately quantifying their uncertainty remains a critical open problem, one that limits their applicability in high-stakes domains. This challenge is further compounded by the closed-source, black-box nature of many state-of-the-art LLMs. Moreover, LLM-based systems can be highly sensitive to the prompts that bind them together, which often require significant manual tuning (i.e., prompt engineering). In this work, we address these challenges by viewing LLM-based systems through a Bayesian lens. We interpret prompts as textual parameters in a statistical model, allowing us to use a small training dataset to perform Bayesian inference over these prompts. This novel perspective enables principled uncertainty quantification over both the model’s textual parameters and its downstream predictions, while also incorporating prior beliefs about these parameters expressed in free-form text. To perform Bayesian inference, a difficult problem even for well-studied data modalities, we introduce Metropolis-Hastings through LLM Proposals (MHLP), a novel Markov chain Monte Carlo (MCMC) algorithm that combines prompt optimization techniques with standard MCMC methods. MHLP is a turnkey modification to existing LLM pipelines, including those that rely exclusively on closed-source models. Empirically, we demonstrate that our method yields improvements in both predictive accuracy and uncertainty quantification (UQ) on a range of LLM benchmarks and UQ tasks. More broadly, our work demonstrates a viable path for incorporating methods from the rich Bayesian literature into the era of LLMs, paving the way for more reliable and calibrated LLM-based systems.
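The MCMC backbone of MHLP is ordinary Metropolis-Hastings; in the paper the states are prompts and the proposal is an LLM rewrite. The sketch below swaps both for toy stand-ins (integer "prompt quality" states and a symmetric random-walk proposal, so the acceptance ratio reduces to the posterior ratio); it shows the accept/reject mechanics only, not the paper's method:

```python
import math
import random

def metropolis_hastings(init, log_post, propose, steps, rng):
    """Generic Metropolis-Hastings over discrete states. Candidates
    come from a proposal distribution and are accepted with
    probability min(1, pi(candidate) / pi(state))."""
    state, samples = init, []
    for _ in range(steps):
        cand = propose(state, rng)
        # Symmetric proposal: acceptance depends only on the
        # posterior ratio, computed in log space for stability.
        if math.log(rng.random()) < log_post(cand) - log_post(state):
            state = cand
        samples.append(state)
    return samples

# Toy posterior over "prompt quality" levels 0..9, peaked at 7.
log_post = lambda s: -abs(s - 7)
propose = lambda s, rng: min(9, max(0, s + rng.choice([-1, 1])))

rng = random.Random(0)
samples = metropolis_hastings(0, log_post, propose, 2000, rng)
```

The resulting chain concentrates near the posterior mode while still visiting lower-probability states, which is what yields calibrated uncertainty over the sampled parameters rather than a single point estimate.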
Conference of the European Chapter of the Association for Computational Linguistics
EACL 2026 | Classifying and Addressing the Diversity of Errors in Retrieval-Augmented Generation Systems
Abstract
Retrieval-augmented generation (RAG) is a prevalent approach for building LLM-based question-answering systems that can take advantage of external knowledge databases. Due to the complexity of real-world RAG systems, there are many potential causes for erroneous outputs. Understanding the range of errors that can occur in practice is crucial for robust deployment. We present a new taxonomy of the error types that can occur in realistic RAG systems, examples of each, and practical advice for addressing them. Additionally, we curate a dataset of erroneous RAG responses annotated by error types. We then propose an auto-evaluation method aligned with our taxonomy that can be used in practice to track and address errors during development.
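The abstract's taxonomy itself is not reproduced here; as a rough sketch of what an error-type-aligned auto-evaluator can look like, the toy classifier below attributes a failure to the retrieval stage or the generation stage using substring checks. The categories and matching rules are illustrative assumptions, not the paper's taxonomy or evaluation method:

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class RAGError(Enum):
    """Illustrative error categories (not the paper's exact taxonomy)."""
    RETRIEVAL_MISS = auto()         # evidence absent from retrieved context
    UNFAITHFUL_GENERATION = auto()  # answer unsupported by retrieved context
    NONE = auto()

@dataclass
class RAGTrace:
    question: str
    retrieved: list = field(default_factory=list)  # retrieved passages
    answer: str = ""
    gold_evidence: str = ""                        # reference evidence

def classify(trace: RAGTrace) -> RAGError:
    """Toy auto-evaluator: blame retrieval if the gold evidence never
    surfaced in the retrieved passages, otherwise blame generation if
    the answer is not grounded in any retrieved passage."""
    if not any(trace.gold_evidence in p for p in trace.retrieved):
        return RAGError.RETRIEVAL_MISS
    if not any(trace.answer in p for p in trace.retrieved):
        return RAGError.UNFAITHFUL_GENERATION
    return RAGError.NONE

trace = RAGTrace("Who wrote Hamlet?",
                 ["Hamlet is a tragedy by Shakespeare."],
                 answer="Marlowe", gold_evidence="Shakespeare")
err = classify(trace)  # → RAGError.UNFAITHFUL_GENERATION
```

Tracking error counts per category during development, as the abstract proposes, then tells you whether to invest in the retriever or the generator.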
Big vision, deep roots
- The founders of Layer 6 are also the founders of the Vector Institute for Artificial Intelligence, and we maintain active research collaborations with Vector faculty.
- Signal 1 spun out of the research collaboration between Layer 6 and St. Michael’s Hospital, and provides cutting-edge AI platforms for real-time monitoring of patient outcomes.
- Radical Ventures was launched by the founders of Layer 6 to incubate and support leading AI startups in Canada and abroad. With over a billion dollars raised, Radical has become one of the premier AI-focused venture funds in the world.
Impactful partnerships
Originally founded in 2011, Layer 6 now forms the AI centre of excellence of TD Bank Group. Layer 6 impacts the lives of over 27 million customers, helping more people achieve their financial goals through AI systems built on the responsible use of AI.
Layer 6 embraces opportunities to contribute to the Canadian AI ecosystem. The founders of Layer 6 played pivotal roles in launching the Vector Institute, Radical Ventures, and Signal 1. Together these entities are integral in driving Canadian AI innovation, from research to product incubation to scale-up. We continue to collaborate with leading academic institutions globally.
Passion to learn, driven to succeed
Our team comes from globally diverse backgrounds and we care deeply about fostering an inclusive culture. We learn from each other and win together. We are united by our passion for deep learning and a desire to apply our skills to have an outsized and positive impact on the future.
Meet some of our team
Develop your career at Layer 6
We’re growing our team with people driven to be at the cutting edge of machine learning in research, engineering, and impactful applications.