Symposium ML4QT | Machine Learning to Advance Quantum Technologies

In-person event | 7 – 8 July 2025 | experimenta Heilbronn
About the Symposium

Machine learning (ML) holds enormous potential for advancing quantum hardware and software. Join us in exploring how ML can drive, for example, the development of algorithms or the control of quantum processing units (QPUs). We will also look at how quantum technologies can be advanced through new ML-based research methods.

The symposium Machine Learning to Advance Quantum Technologies (ML4QT) is a forum for researchers and developers who are enthusiastic about the interface between machine learning and quantum technologies. Whether you already work in this exciting field or are looking for a way in, ML4QT is the perfect platform to explore opportunities and synergies that will drive the development of quantum technologies.

At this inaugural event we will also present the newly emerging Applied Quantum AI Hub Heilbronn, a center focused on advancing applied quantum research and innovation. In addition, ML4QT marks the first step towards establishing an annual meeting intended to foster long-term exchange and collaboration between experts from academia, research institutions, and industry.

May 2, 2025 – Deadline Call for Contributions
May 12, 2025 – Feedback on Contributions
May 30, 2025 – Registration Deadline
July 7 – 8, 2025 – Symposium
July 8, 2025 – Award Voting: Vote for the Best Talk & Poster Awards

Program

The symposium comprises keynotes, invited and contributed talks, and plenary discussions. There will also be a poster session, an evening event, and a conference dinner.

Day 1
09:00  Opening – Prof. Oliver Riedel, Dr. Christian Tutschku
09:15  Keynote I: AI for Science and Engineering – Prof. Steffen Staab
10:15  Coffee Break
10:45  Keynote II: De-novo design of physics experiments with artificial intelligence – Dr. Mario Krenn
11:45  Invited Talk: AI for Quantum: perspectives and new applications – Prof. Christine Muschik
12:30  Lunch Break
13:30  Invited Talk: Scalable Hamiltonian learning with graph neural networks: bridging numerical simulation and experiment – Prof. Anna Dawid
14:00  Contributed Talks: ML 4 Quantum Sensor and Computer Development – Dr. Pierre-Emmanuel Emeriau, Anurag Saha Roy, Benjamin MacLellan
15:00  Coffee Break
15:20  Excursus: ML for Quantum and Quantum for QML at IBM – Dr. Johannes Greiner
15:40  Contributed Talks: ML with QC – Dr. Jan Schnabel, Dr. Michael Marthaler
16:20  Poster Session with Mini Presentations – Jason Pye, Manuel Lautaro Hickmann, Alessio Paesano, Anant Agnihotri, Dr. Max Scheerer
17:30  Canoe City Tour
19:30  Conference Dinner

Day 2
08:45  Welcome Back – Prof. Achim Kempf, Dr. Christian Tutschku
09:00  Keynote III: Quo vadis, quantum machine learning? – Prof. Jens Eisert
10:00  Invited Talk: Dimensionality reduction for visualisation: can we go beyond neighbour embedding? – Prof. John A. Lee
10:30  Coffee Break
10:45  Contributed Talks: ML 4 Quantum Software Development – Abhishek Dubey, Daniel Barta, Johannes Jung, Darya Martyniuk
12:00  Contributed Talks: ML 4 Quantum Error Correction – Evan Peters, Nico Meyer
12:45  Lunch Break
13:40  Best Poster / Speaker Award
14:00  Panel and Plenary Discussion: Role of AI for Research – Prof. Achim Kempf, Jason Pye, Dr. Robert Jonsson
15:15  Closing Notes and Outlook
15:45  Open Coffee Break
16:00  End
Keynote 1: AI for Science and Engineering (Prof. Steffen Staab, University of Stuttgart)

The prototypical scientific process in many technical and natural sciences domains comprises ideation, definition of hypotheses, experimental design, execution, evaluation, and iteration. This scientific process has often been understood to constitute the pinnacle of intellectual achievement, and also to serve as a blueprint for the development of efficient and effective business processes. In this talk, I want to sketch an overarching picture of the use of AI in science, with particular results from the Stuttgart Center for Simulation Science – and beyond.

Prof. Dr. Steffen Staab heads the Institute for Artificial Intelligence at the University of Stuttgart, Germany, and holds a chair for Web and Computer Science at the University of Southampton, UK. Coming from a research background in computational linguistics, Staab is interested in the foundations and usage of semantic technologies, investigating the likes of knowledge graphs, machine learning, natural language processing, and databases. Prof. Staab is a fellow of the Association for Computing Machinery (ACM), the European AI Association (EurAI), the Asia-Pacific AI Association (AAIA), and ELLIS – the European Laboratory for Learning and Intelligent Systems.

Keynote 2: De-novo design of physics experiments with artificial intelligence (Dr. Mario Krenn, MPI Erlangen)

Artificial intelligence (AI) is a potentially disruptive tool for physics and science in general. One crucial question is how this technology can contribute at a conceptual level to help acquire new scientific understanding or inspire new surprising ideas. I will talk about how AI can be used as an artificial muse in physics, which suggests surprising and unconventional ideas and techniques that the human scientist can interpret, understand, and generalize to their fullest potential [1]. I will focus on AI for the design of new physics experiments, in the realm of quantum optics [2, 3] and quantum-enhanced gravitational wave detectors [4] as well as super-resolution microscopy [5].

[1] Krenn, Pollice, Guo, Aldeghi, Cervera-Lierta, Friederich, Gomes, Häse, Jinich, Nigam, Yao, Aspuru-Guzik, On scientific understanding with artificial intelligence. Nature Reviews Physics 4, 761 (2022).

[2] Krenn, Kottmann, Tischler, Aspuru-Guzik, Conceptual understanding through efficient automated design of quantum optical experiments. Physical Review X 11(3), 031044 (2021).

[3] Ruiz-Gonzalez, Arlt, et al., Digital Discovery of 100 diverse Quantum Experiments with PyTheus, Quantum 7, 1204 (2023).

[4] Krenn, Drori, Adhikari, Digital Discovery of interferometric Gravitational Wave Detectors, in press: Phys. Rev. X (2025), (https://arxiv.org/abs/2312.04258)

[5] Rodríguez, Arlt, Möckl, Krenn, Automated discovery of experimental designs in super-resolution microscopy with XLuminA, Nature Comm. 15, 10658 (2024)

Dr. Mario Krenn is the research group leader of the Artificial Scientist Lab at the Max Planck Institute for the Science of Light in Erlangen, Germany, and will soon start as a full professor for “Machine Learning for Science” at the University of Tuebingen in the Department of Computer Science.
Dr. Krenn has introduced AI systems that design quantum experiments and hardware, several of which have been realized in laboratories, and developed algorithms to inspire unconventional ideas in quantum technologies. His ERC Starting Grant project, ArtDisQ, aims to transform physics simulators to accelerate the discovery of advanced quantum hardware.

Keynote 3: Quo vadis, quantum machine learning? (Prof. Jens Eisert, FU Berlin)

Quantum machine learning holds significant promise for enhancing various aspects of machine learning, including sample complexity, computational complexity, and generalization. The field has made substantial strides in recent years. However, a key objective—developing quantum algorithms that clearly outperform classical methods for practically relevant unstructured data—remains elusive. In this talk, we will explore this challenge from multiple perspectives. We will examine cases where separations can be identified, such as in abstract instances of generator [1] and density modeling [2], in training classical networks using quantum algorithms [3], for short quantum circuits [4], and for quantum analogs of diffusion probabilistic models [5]. At the same time, we will address challenges arising from dequantization in both noise-free [6] and non-unital noisy settings [7]. These insights will also encourage thinking beyond traditional approaches. We will reconsider the concept of generalization [8] and explore examples of explainable quantum machine learning [9] and single-shot quantum machine learning [10]. Ultimately, we will use these insights to reflect on the potential and limitations of applying quantum computers to machine learning problems involving unstructured noisy data and put this research line into the wider context of the intersection of research on artificial intelligence and quantum computing [11].

[1] Quantum 5, 417 (2021).

[2] Phys. Rev. A 107, 042416 (2023).

[3] Nature Comm. 15, 434 (2024).

[4] arXiv:2411.15548 (2024).

[5] arXiv:2502.14252 (2025).

[6] Quantum 9, 1640 (2025).

[7] arXiv:2403.13927 (2024).

[8] Nature Comm. 15, 2277 (2024).

[9] arXiv:2412.14753 (2024).

[10] Quantum 9, 1665 (2025).

[11] arXiv:2505.23860 (2025).

Jens Eisert is a theoretical physicist at the Freie Universität Berlin, working on notions of quantum computing and the study of complex quantum systems. He is known for his rigorous work in quantum information science, which spans the traditional fields of physics, mathematics, and computer science, and is pragmatically motivated by application, protocol, or phenomenon. In the past, he has made several important contributions to the field of quantum computing, focusing on what quantum computers can do and their ultimate limitations, making him one of the most visible and cited researchers in his field. Before joining the Freie Universität Berlin, he was a full professor at the University of Potsdam and an assistant professor at Imperial College London. For his work, he has received an ERC AdG, an ERC CoG, a EURYI Award (the predecessor of the ERC StG), and a Google NISQ Award.

AI for Quantum: perspectives and new applications (Prof. Christine Muschik, University of Waterloo)

In the first part of this talk, Christine Muschik will outline avenues along which modern machine learning techniques can benefit the development of quantum technologies. In the second part, she will cover a concrete example of such an application and explain how AI-assisted quantum sensors can play an important role in protecting our drinking water.

Prof. Christine Muschik holds a University Research Chair at the University of Waterloo. She holds a faculty position at the Institute for Quantum Computing and an associate faculty position at the Perimeter Institute for Theoretical Physics.
Christine Muschik received a number of awards for her work on quantum computers, including an Ontario Early Researcher Award, a CIFAR Azrieli Global Scholar Fellowship for “Research Leaders of Tomorrow”, and a Sloan Fellowship for outstanding early career researchers.
Her work on taming quantum systems to perform hard calculations is internationally recognised and has been featured by Scientific American and Forbes. In 2016, her results on programming quantum computers were named one of the “Top 10 breakthroughs in Physics” of that year.
Today she pushes the frontier of scientific computing by developing new quantum-enhanced methods.

Scalable Hamiltonian learning with graph neural networks: bridging numerical simulation and experiment (Prof. Anna Dawid, Leiden University)

Artificial neural networks (NNs) have revolutionized language translation and image generation and may similarly transform quantum technologies. In particular, NNs show strong potential to outperform traditional numerical methods in learning Hamiltonians that govern quantum systems based on experimental data. This promise is especially notable in architectures like graph neural networks (GNNs), which are inherently invariant to system size. This allows them to infer Hamiltonians for quantum systems that are larger or structurally different from those seen during training, potentially reaching regimes inaccessible to conventional approaches such as tensor networks.
In this work, we explore this potential in the context of Rydberg-atom arrays—a leading platform for quantum simulation. The experimental control of these arrays is limited by the imprecision in the positions of optical tweezers during array assembly, introducing uncertainty in the realized Hamiltonian. To model this, we use the Density Matrix Renormalization Group (DMRG) to generate ground-state snapshots of the transverse field Ising model for many realizations of Hamiltonian parameters. Correlation functions reconstructed from these snapshots serve as input data for training the GNN.
We demonstrate that our model exhibits a remarkable ability to extrapolate beyond its training domain, both in system size and geometry. It accurately infers the underlying Hamiltonian parameters, marking a significant step toward bridging the gap between simulation and experiment in large-scale quantum systems.
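
To make the size-invariance argument concrete, here is a minimal, hypothetical sketch (not the speakers' implementation, and untrained): per-site correlators serve as node features, messages are aggregated along the lattice graph with weights shared across all sites, and a pooled readout regresses the Hamiltonian parameters, so the very same network runs unchanged on chains of different lengths.

# Minimal message-passing readout for Hamiltonian-parameter regression.
# Hypothetical sketch: node features stand in for measured per-site correlators,
# edges follow the 1D lattice, and all weights are shared across sites,
# so the same (untrained) network applies to any chain length.
import numpy as np

rng = np.random.default_rng(0)
W_msg = 0.1 * rng.normal(size=(4, 4))   # shared message weights
W_out = 0.1 * rng.normal(size=(4, 2))   # pooled readout to (J_hat, h_hat)

def gnn_predict(node_feats, adjacency, n_rounds=3):
    """node_feats: (n_sites, 4) local correlators; adjacency: (n_sites, n_sites)."""
    h = node_feats
    deg = adjacency.sum(axis=1, keepdims=True).clip(min=1)
    for _ in range(n_rounds):
        msgs = adjacency @ h / deg        # mean over neighbours
        h = np.tanh(h + msgs @ W_msg)     # identical update at every site
    return h.mean(axis=0) @ W_out         # size-independent pooled readout

for n in (10, 40):                        # works unchanged for different chain lengths
    A = np.eye(n, k=1) + np.eye(n, k=-1)  # open 1D chain
    feats = rng.normal(size=(n, 4))       # stand-in for correlation-function features
    print(n, gnn_predict(feats, A))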

Anna Dawid leads a research group as an assistant professor at the Leiden Institute of Advanced Computer Science (LIACS) and the Leiden Institute of Physics (LION) at Leiden University in the Netherlands. Her scientific interests include interpretable machine learning for science, ultracold platforms for quantum simulation, and the theory of machine learning. She is passionate about transforming automated computational methods (especially neural networks) into a new, unique lens for gaining fresh insights into difficult, well-established scientific problems.
Before joining Leiden, she was a research fellow at the Center for Computational Quantum Physics at the Flatiron Institute in New York. In 2022, she earned her PhD in physics and photonics under the supervision of Prof. Michał Tomza (Faculty of Physics, University of Warsaw) and Prof. Maciej Lewenstein (ICFO – The Institute of Photonic Sciences, Spain). Earlier, she completed a master’s degree in quantum chemistry and a bachelor’s degree in biotechnology at the University of Warsaw.
She is the first author of the book Machine Learning in Quantum Sciences, to be published by Cambridge University Press in May 2025. She is also a recipient of the START fellowship from the Foundation for Polish Science (2022) and a participant in the 74th Lindau Nobel Laureate Meeting (2024).

Dimensionality reduction for visualisation: can we go beyond neighbour embedding? (Prof. John A. Lee, Université catholique de Louvain)

Since 2008 and the advent of t-SNE, the domain of dimensionality reduction has undergone a paradigm shift. Previous approaches to project data, like principal component analysis, or to embed data based on distance preservation, like multidimensional scaling, can hardly compete with t-SNE or its closely related variants such as UMAP. The paradigm of t-SNE (t-distributed stochastic neighbour embedding) is to preserve neighbourhoods of points from the data space to the visualisation space, instead of distances. In fact, t-SNE is packed with counter-intuitive empirical tweaks whose theoretical understanding is still work in progress. Immunity against distance concentration and a strong inductive bias towards cluster formation have made t-SNE an extremely useful tool for visualisation in various application domains, like cell biology. However, t-SNE suffers from a few drawbacks, like its poor preservation of global structure, fragmentation of clusters, and complicated hyper-parameterisation. In addition to analysing t-SNE, we will present some recent developments that might improve t-SNE or, alternatively, rehabilitate other simpler paradigms of dimensionality reduction, like multidimensional scaling.
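
As a minimal, self-contained illustration of the contrast drawn in the talk (not the speaker's material), the sketch below runs a linear projection (PCA), a distance-preserving embedding (MDS), and a neighbour embedding (t-SNE) from scikit-learn on a standard toy dataset; the dataset, subsample size, and perplexity are arbitrary choices.

# Compare projection, distance preservation, and neighbour embedding in 2D.
# Illustrative only; dataset and hyperparameters are arbitrary choices.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.manifold import MDS, TSNE

X, y = load_digits(return_X_y=True)
X, y = X[:500], y[:500]                       # small subsample to keep MDS fast

X_pca = PCA(n_components=2).fit_transform(X)            # linear projection
X_mds = MDS(n_components=2).fit_transform(X)            # distance preservation
X_tsne = TSNE(n_components=2, perplexity=30.0).fit_transform(X)  # neighbourhood preservation

print(X_pca.shape, X_mds.shape, X_tsne.shape)           # each (500, 2)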

John Aldo Lee was born in 1976 in Brussels, Belgium. He received the M.Sc. degree in Applied Sciences (Computer Engineering) in 1999 and the Ph.D. degree in Applied Sciences (Machine Learning) in 2003, both from the Université catholique de Louvain (UCL, Louvain-la-Neuve, Belgium). He is currently a Research Director with the Belgian fund of scientific research (Fonds de la Recherche Scientifique, F.R.S.-FNRS). During several postdocs, he developed specific image enhancement techniques for positron emission tomography in the Centre for Molecular Imaging and Experimental Radiotherapy of the Saint-Luc University Hospital (Belgium). He is also an active member of the UCL Machine Learning Group, with research interests focusing on data visualization, dimensionality reduction, neighbor embedding, intrinsic dimensionality estimation, clustering, and image processing. John A. Lee is currently the head of the center MIRO (Molecular Imaging, Radiotherapy, and Oncology) in UCLouvain Brussels. His team activities are aimed at image-guided treatment planning in radiation oncology and particle therapy. This includes adaptive treatments, motion management, image registration, AI-based automatic organ segmentation and dose prediction, as well as Monte Carlo simulations for proton therapy, robustness analysis, robust planning, and planning for non-conventional delivery techniques like arc proton therapy.

ML 4 Quantum Sensor and Computer Development
Machine Learning for Scalable Photonic Quantum Computing: From Quantum Dot Characterization to Error Correction

Dr. Pierre-Emmanuel Emeriau (Quandela)

We present three innovative applications of machine learning across the quantum technology stack for building a photonic quantum computer. First, we showcase automated quantum dot characterization in semiconductor samples through image processing algorithms, enabling rapid assessment of quantum dot positions. This method extends to spectrum analysis for fully automated characterization of quantum dot properties. Second, we demonstrate state-of-the-art ML-enhanced chip characterization techniques that learn precise voltage-to-phase mappings despite thermal crosstalk effects, significantly improving control. This technique also identifies chip defects and enables real-time mitigation of imperfections. Finally, we present a novel machine learning decoder for error correction using Low-Density Parity-Check (LDPC) codes with realistic noise models tailored to photonic quantum computing. Machine learning-based decoding is critical to address the full complexity of decoding problems (where Minimum-Weight Perfect Matching decoders fail for most codes) while maintaining faster decoding times compared to Belief Propagation with Ordered Statistics Decoding.
Our work illustrates how machine learning techniques can be strategically deployed from the hardware level through to the error correction layer, demonstrating the transformative potential of ML across the entire photonic quantum computing stack. By addressing key challenges in scalability, reliability, and performance, our findings provide actionable insights for integrating ML methodologies to advance practical quantum technologies.

Machine Learning based Digital Twins for accelerated quantum technology development

Anurag Saha Roy (Qruise)

In-depth characterisation of quantum devices is crucial not just for the optimal operation of high-fidelity gates but, more importantly, to identify true system parameters for creating an accurate model of the QPU and its control stack. Such an accurate digital twin of the system is critical for generating an error budget – a quantitative breakdown of the contribution of different error-generating factors to the bottom-line benchmarks of a QPU’s performance. We use modern machine learning tools to combine a differentiable, physics-accurate digital twin with data from a broad set of characterisation experiments to build a model with high predictive power that accurately predicts the outcome of experiments even outside the training dataset. This predictive model is then used to extrapolate the error contributions from different system and environmental factors. We test these tools on superconducting QPUs and discuss various demonstrative results.

End-to-end variational quantum sensing

Benjamin MacLellan (University of Waterloo)

Harnessing quantum correlations can enable sensing beyond classical precision limits, with the realization of such sensors poised for transformative impacts across science and engineering. Real devices, however, face the accumulated impacts of noise and architecture constraints, making the design and realization of practical quantum sensors challenging. Numerical and theoretical frameworks to optimize and analyze sensing protocols in their entirety are thus crucial for translating quantum advantage into widespread practice. Here, we present an end-to-end variational framework for quantum sensing protocols, where parameterized quantum circuits and neural networks form trainable, adaptive models for quantum sensor dynamics and estimation, respectively. The framework is general and can be adapted to diverse physical architectures, as we demonstrate with experimentally relevant ansätze for trapped-ion and photonic systems, and it makes it possible to directly quantify the impact of noise and finite data sampling. End-to-end variational approaches can thus underpin powerful design and analysis tools for practical quantum sensing advantage.
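
The following toy sketch, built on our own assumptions rather than taken from the speakers' framework, illustrates the end-to-end idea in PennyLane: a trainable probe circuit, phase encoding of the unknown parameter, a trainable measurement basis, and a simple linear estimator acting on outcome probabilities, all optimized jointly through one differentiable cost.

# End-to-end variational sensing, toy version: probe, encoding, measurement,
# and estimator are trained jointly. Circuit and estimator are arbitrary choices.
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def probe_and_measure(theta, phi):
    qml.RY(theta[0], wires=0)          # trainable probe preparation
    qml.CNOT(wires=[0, 1])
    qml.RY(theta[1], wires=1)
    qml.RZ(phi, wires=0)               # unknown parameter imprinted as local phases
    qml.RZ(phi, wires=1)
    qml.RY(theta[2], wires=0)          # trainable measurement basis
    qml.RY(theta[3], wires=1)
    return qml.probs(wires=[0, 1])

def estimate(theta, w, phi_true):
    return np.dot(w, probe_and_measure(theta, phi_true))   # linear estimator on probabilities

def cost(params, phis=(0.1, 0.3, 0.5)):
    theta, w = params[:4], params[4:]
    return sum((estimate(theta, w, p) - p) ** 2 for p in phis) / len(phis)

params = np.array([0.1] * 8, requires_grad=True)
opt = qml.GradientDescentOptimizer(stepsize=0.2)
for _ in range(100):
    params = opt.step(cost, params)
print("mean squared estimation error:", cost(params))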

ML with QC
Challenges in Quantum Kernel Methods

Dr. Jan Schnabel (flaQship)

Quantum kernel methods (QKMs) have emerged as a promising approach in quantum machine learning (QML), offering both practical applications and theoretical insights. In this talk, we explore QKMs based on fidelity quantum kernels (FQKs) and projected quantum kernels (PQKs) across a broad range of design choices, examining classification and regression tasks on diverse datasets. Starting from a state-of-the-art hyperparameter search, we delve into the influence of hyperparameters on model performance scores. Additionally, we provide a thorough analysis addressing the design freedom of PQKs and explore the underlying principles responsible for learning. Based on this, we uncover the mechanisms that lead to effective QKMs and reveal universal patterns. Beyond that, we show that improving the generalization of quantum kernels by bandwidth tuning results in models that closely resemble classical RBF kernels and even their low-order expansion. We support this claim by providing numerical calculations and analytical approximations. Our results underscore the need for a dual approach to QML research: identifying datasets with potential quantum advantage and refining the corresponding model designs to fully exploit quantum capabilities.
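
For orientation, a minimal fidelity-quantum-kernel pipeline might look as follows; this is an illustrative sketch with an arbitrary angle-encoding feature map, bandwidth value, and toy dataset, not the circuits or datasets studied in the talk.

# Minimal fidelity quantum kernel (FQK) with an SVM, for illustration only.
import numpy as np
import pennylane as qml
from sklearn.svm import SVC

n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def embed(x, bandwidth=0.5):
    for i in range(n_qubits):
        qml.RY(bandwidth * x[i], wires=i)   # bandwidth rescales the data encoding
    qml.CNOT(wires=[0, 1])
    return qml.state()

def fidelity_kernel(XA, XB):
    sa = np.array([embed(x) for x in XA])
    sb = np.array([embed(x) for x in XB])
    return np.abs(sa.conj() @ sb.T) ** 2    # K_ij = |<psi(x_i)|psi(x_j)>|^2

rng = np.random.default_rng(1)              # toy two-class data
X = np.vstack([rng.normal(-1, 0.3, (20, 2)), rng.normal(1, 0.3, (20, 2))])
y = np.array([0] * 20 + [1] * 20)

clf = SVC(kernel="precomputed").fit(fidelity_kernel(X, X), y)
print("train accuracy:", clf.score(fidelity_kernel(X, X), y))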

Quantum computing with non-unitary operations

Dr. Michael Marthaler (HQS Quantum Simulations)

Non-unitary operations, often overlooked in traditional quantum computing paradigms, offer powerful avenues for advancing quantum simulation and quantum reservoir computing. This talk explores the pivotal role of non-unitary operations in efficiently preparing relevant thermal states for quantum simulations, enabling more accurate modeling of complex physical systems. Additionally, we delve into their intriguing applications in quantum reservoir computing, where non-unitary dynamics is necessary to guarantee a finite memory time. By harnessing the unique properties of non-unitary operations, we uncover new possibilities for quantum algorithms and architectures, paving the way for innovative solutions in quantum information processing.

ML 4 Quantum Software Development
Unitary Synthesis of Clifford+T circuits with Reinforcement Learning

Abhishek Dubey (Fraunhofer IIS)

This paper presents a deep reinforcement learning approach for synthesizing unitaries into quantum circuits. Unitary synthesis aims to identify a quantum circuit that represents a given unitary while minimizing circuit depth, total gate count, a specific gate count, or a combination of these factors. While past research has focused predominantly on continuous gate sets, synthesizing unitaries from a parameter-free Clifford+T gate set remains a challenge. Although the time complexity of this task will inevitably remain exponential in the number of qubits for general unitaries, reducing the run-time for simple problem instances still poses a significant challenge. In this study, we apply the tree-search method Gumbel AlphaZero to solve the problem for a subset of exactly synthesizable Clifford+T unitaries. Our method effectively synthesizes circuits for up to five qubits generated from randomized circuits with up to 60 gates, outperforming existing tools like QuantumCircuitOpt and MIN-T-SYNTH in terms of synthesis time for larger qubit counts. Furthermore, it surpasses Synthetiq in successfully synthesizing random, exactly synthesizable unitaries. These results establish a strong baseline for future unitary synthesis algorithms.

Leveraging Diffusion Models for Parameterized Quantum Circuit Generation

Daniel Barta (Fraunhofer FOKUS)

Quantum computing holds immense potential, yet its practical success depends on multiple factors, including advances in quantum circuit design. In this paper, we introduce a generative approach based on denoising diffusion models (DMs) to synthesize parameterized quantum circuits (PQCs). Extending the recent diffusion model pipeline by Fürrutter et al. (2024), our model effectively conditions the synthesis process, enabling the simultaneous generation of circuit architectures and their continuous gate parameters. We demonstrate our approach in synthesizing PQCs optimized for generating high-fidelity Greenberger–Horne–Zeilinger (GHZ) states and achieving high accuracy in quantum machine learning (QML) classification tasks. Our results indicate a strong generalization across varying gate sets and scaling qubit counts, highlighting the versatility and computational efficiency of diffusion-based methods. This work illustrates the potential of generative models as a powerful tool for accelerating and optimizing the design of PQCs, supporting the development of more practical and scalable quantum applications.

QASIF: An Impact-Focused Quantum Architecture Search for State Preparation and Machine Learning

Johannes Jung (Fraunhofer FOKUS)

Efficient quantum circuit design is crucial to unlocking the capabilities of both near-term and future quantum technologies. However, conventional search approaches are often computationally expensive. To address this challenge, we introduce QASIF (Quantum Architecture Search with Impact Focus), a real-time optimization framework designed to minimize circuit evaluations while significantly improving circuit trainability by prioritizing impactful sub-circuit components. QASIF employs impact scores to evaluate sub-circuit elements based on training parameters, selectively refining high-impact components while discarding those with minimal influence. This strategic focus considerably streamlines the search process. Our framework demonstrates strong performance in tasks such as quantum state preparation and quantum machine learning (QML), highlighting its inherent scalability and effectiveness in identifying well-structured circuit architectures. Experimental results confirm that QASIF achieves high accuracy with notably fewer evaluations, making it particularly suitable for noisy intermediate-scale quantum (NISQ) devices and scalable, fault-tolerant quantum computing architectures. By emphasizing impact-driven optimization and interpretability, QASIF offers a practical and versatile solution for quantum circuit synthesis across a broad range of applications.

Benchmarking Quantum Architecture Search with Surrogate Assistance

Darya Martyniuk (Fraunhofer FOKUS)

Advances in artificial intelligence (AI) research over the past decade have motivated the exploration of AI-driven techniques to address challenges in quantum computing, including quantum circuit design. Quantum architecture search (QAS) is a research field focusing on the generation and optimization of parametrized quantum circuits (PQCs). However, prototyping, hyperparameter tuning, and benchmarking of QAS approaches remain computationally expensive, as they typically require repeated training and simulation of candidate circuits.
In this talk, we introduce SQuASH, the Surrogate Quantum Architecture Search Helper, a benchmark that leverages surrogate models to considerably accelerate the evaluation of candidate circuits and enable a uniform comparison of QAS methods. A surrogate model takes a PQC with its initial parameters as input and predicts its final performance after training, effectively bypassing the need for full training and simulation during the QAS loop. We present the methodology for building surrogate-based benchmarks, showcase performance gains across various QAS workflows, and demonstrate how SQuASH can support both benchmarking and rapid prototyping. Our open-source code and the accompanying dataset of PQCs aim to foster reproducibility and drive further innovation in the field.
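
As a rough illustration of what such a surrogate can look like (our own toy sketch, not SQuASH itself), a standard regressor can be trained to map simple circuit descriptors to a post-training performance score and then used to rank candidates inside the QAS loop:

# Toy surrogate for QAS: predict post-training performance from circuit descriptors.
# Illustration only; descriptors, labels, and model are arbitrary choices.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Stand-in dataset: each row describes a candidate PQC, e.g.
# [n_qubits, depth, n_rotation_gates, n_entangling_gates, mean_initial_parameter]
descriptors = rng.uniform(0, 1, size=(500, 5))
# Stand-in labels: in a real benchmark these are scores obtained by fully training each PQC.
final_scores = 0.5 + 0.3 * descriptors[:, 3] - 0.2 * descriptors[:, 1] + 0.05 * rng.normal(size=500)

surrogate = RandomForestRegressor(n_estimators=200).fit(descriptors[:400], final_scores[:400])

# Inside a QAS loop, candidates are ranked by the surrogate instead of being trained:
candidates = rng.uniform(0, 1, size=(10, 5))
print(np.argsort(surrogate.predict(candidates))[::-1])   # most promising circuits first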

ML 4 Quantum Error Correction
Sample importance for neural network decoders

Evan Peters (University of Waterloo)

Using neural networks to decode quantum error-correcting codes (QECCs) can be puzzling: Running a large QECC experiment to generate millions of decoding examples may result in mostly useless training data, and 99.99% test accuracy might indicate bad performance. A challenge for neural network (NN) decoders is that most measured syndromes correspond to low-weight errors, which existing baseline decoders can already handle. Thus, a good NN decoder should outperform baseline decoders for syndromes that might never appear in a training dataset of millions of examples. Nonetheless, NN decoders have demonstrated superior accuracy over state-of-the-art baseline decoders on experimental data. Due to its peculiarities, data-driven decoding requires special attention if we are to use machine learning to decode QECCs effectively.

We formalize data-driven decoding (DDD), the problem of learning to decode (Q)ECC syndromes using labelled data. We show that for a simple error-correcting code, DDD is equivalent to a noisy, imbalanced binary classification problem, implying that techniques for addressing class imbalance and label noise can be adapted to improve NN decoders beyond existing demonstrations. We empirically study DDD in simulated learning experiments, showing that almost-trivial decoding problems are hard even for powerful learning models, but we can reliably improve NN decoders using a data-augmentation technique. We also derive theoretical results for random stabilizer codes, extend our framework to the problem of learning to decode multiple rounds of syndrome measurements, and discuss experimental techniques for augmenting training data to improve existing NN decoders.
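
A stripped-down version of this framing can be written out for a 5-bit repetition code; the snippet below is our own toy illustration (not the speaker's construction) of how syndrome decoding becomes a noisy, imbalanced binary classification problem and how a standard class-weighting remedy plugs in.

# Toy data-driven decoding for a 5-bit repetition code: predict the logical flip
# from the syndrome. Labels are noisy (distinct errors share a syndrome) and the
# positive class is rare, so plain accuracy is misleading. Illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, p, n_samples = 5, 0.05, 200_000

errors = (rng.random((n_samples, n)) < p).astype(int)    # iid bit-flip errors
syndromes = errors[:, :-1] ^ errors[:, 1:]               # parities of adjacent bits
labels = (errors.sum(axis=1) > n // 2).astype(int)       # did the majority vote flip?

print("fraction of logical flips:", labels.mean())       # heavily imbalanced classes

clf = LogisticRegression(class_weight="balanced", max_iter=1000)  # one standard imbalance remedy
clf.fit(syndromes, labels)
print("plain accuracy:", clf.score(syndromes, labels))   # high accuracy can still hide poor decoding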

Variational Quantum Error Correction: Learning Noise-Adaptive Encoding Schemes with Machine Learning

Nico Meyer (Fraunhofer IIS)

Quantum error correction (QEC) is essential for the reliable operation of quantum computers, as quantum systems are highly susceptible to errors from decoherence and environmental interactions. We present a novel approach to QEC that leverages machine learning techniques to tailor encoding schemes for specific noise channels. Our method focuses on maximizing the distinguishability of noise-affected quantum state pairs, ensuring that sufficient information is preserved for effective error correction.
We achieve this by minimizing the loss of distinguishability, quantified by the trace distance, over a two-design under the given noise channel. This results in customized encodings that are optimized for the particular error characteristics of the quantum hardware. To demonstrate the feasibility of our approach, we implement the procedure as a variational quantum algorithm, referred to as Variational Quantum Error Correction (VarQEC).
Our findings indicate that VarQEC can effectively learn robust encoding strategies that enhance the performance and reduce overhead compared to conventional QEC codes. We furthermore deploy the learned codes on actual IBM and IQM quantum devices and report beyond break-even performance. This work contributes to the field of machine learning for quantum software by providing a framework for developing adaptive QEC schemes using variational techniques. Our approach not only improves the error resilience of quantum computations but also offers insights into the interplay between machine learning and quantum error correction methodologies.
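
Read literally, the objective sketched above admits a formalization along the following lines (our paraphrase, not necessarily the authors' exact definition): with the trace distance D(ρ, σ) = ½‖ρ − σ‖₁, a fixed noise channel 𝒩, and a parameterized encoding ℰ_θ, one minimizes the average loss of distinguishability over state pairs drawn from a 2-design,

\mathcal{L}(\theta) \;=\; \mathbb{E}_{\psi,\phi \sim \text{2-design}} \Big[ D(\psi,\phi) \;-\; D\big(\mathcal{N}(\mathcal{E}_\theta(\psi)),\, \mathcal{N}(\mathcal{E}_\theta(\phi))\big) \Big], \qquad D(\rho,\sigma) = \tfrac{1}{2}\lVert \rho - \sigma \rVert_1 ,

so that the optimized encoding preserves as much of the pairwise distinguishability as possible under the given noise.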

Learning quantum information scrambling with classical kernel methods

Jason Pye (Nordita (Nordic Institute for Theoretical Physics))

Out-of-Time-Ordered Correlators (OTOCs) have become important theoretical tools to probe the spreading of quantum information (scrambling) in many-body systems. They have been central to recent discussions in quantum chaos, thermalization, many-body localization and scarring, as well as black hole physics. However, computing the OTOC for a general quantum system is prohibitively expensive for classical computers, whereas simulation using current noisy quantum devices becomes difficult beyond the short time regime. Here we present a machine learning approach for approximating OTOCs. We explore four parameterized sets of Hamiltonians describing local one-dimensional quantum systems of interest in condensed matter physics. We frame the problem as a regression task, by sampling parameter values and generating small batches of labeled data with classical tensor network methods for systems with up to 40 qubits. Using this data, we train a variety of standard kernel machines and demonstrate that they can accurately learn the OTOC as a function of the Hamiltonian parameters. This method leads to an overall reduction in computational costs if one is interested in understanding the behaviour of the OTOC over large regions of a Hamiltonian parameter space. It allows one to perform a small number of evaluations of the OTOC and then use kernel methods to approximate the OTOC for the remaining parameter values of interest, which can be evaluated more efficiently than continued uses of tensor network methods.

Based on https://doi.org/10.1103/PhysRevB.111.144301
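
The surrogate idea can be reproduced in miniature with standard kernel regression; the snippet below is illustrative only, with a synthetic stand-in function playing the role of the tensor-network OTOC labels used in the paper.

# Kernel-regression surrogate for OTOC values over Hamiltonian parameters.
# Illustration only: labels here are synthetic; in the paper they come from
# tensor-network simulations at sampled parameter values.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)

params_train = rng.uniform(0.0, 2.0, size=(60, 2))                     # e.g. (J, h) couplings
otoc_train = np.cos(params_train[:, 0]) * np.exp(-params_train[:, 1])  # stand-in OTOC values

model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=1.0).fit(params_train, otoc_train)

# Cheap surrogate evaluations over a dense grid of couplings
grid = np.stack(np.meshgrid(np.linspace(0, 2, 50), np.linspace(0, 2, 50)), axis=-1).reshape(-1, 2)
print(model.predict(grid).shape)    # (2500,) predicted OTOC values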

Hybrid quantum tensor networks for aeroelastic applications

Manuel Lautaro Hickmann (Institute for AI Safety and Security – German Aerospace Center (DLR))

In our work we investigated the prospect of using hybrid quantum tensor-network-based algorithms for aeroelastic problems, leveraging synergies between machine learning and quantum computing. Quantum tensor networks for machine learning can be realized by variational quantum circuits with a tensor-network-inspired internal gate structure. We apply this method to a simplified aeroelastic configuration, aiming to estimate flutter stability and regress generating parameters from time series data.

Using a dimensionality reduction via classical 1D convolutional neural networks, we compress the data into an 8-dimensional feature vector, which is then used as input for our quantum neural network. Notably, both the classical and quantum parts are trained together in an end-to-end approach, allowing for joint optimization of the entire hybrid model. We achieved outstanding results for binary time-series classification (F1-score > 0.9) and promising results for regressing parameters from the time series.

Our findings demonstrate the potential of hybrid quantum tensor network-based algorithms for aeroelastic problems, although hyperparameter selection remains a challenge. This work contributes to the development of hybrid quantum machine learning (QML) for complex problems. For more details see: https://indico.qtml2024.org/event/1/contributions/177/

QKD as a quantum machine learning task

Alessio Paesano (JoS Quantum GmbH)

We propose considering Quantum Key Distribution (QKD) protocols as a use case for Quantum Machine Learning (QML) algorithms. We define and investigate the QML task of optimizing eavesdropping attacks on the quantum circuit implementation of the BB84 protocol. QKD protocols are well understood, and solid security proofs exist, enabling an easy evaluation of the QML model performance. The power of easy-to-implement QML techniques is shown by finding the explicit circuit for optimal individual attacks in a noise-free setting. For the noisy setting we find, to the best of our knowledge, a new cloning algorithm, which can outperform known cloning methods. Finally, we present a QML construction of a collective attack by using classical information from QKD post-processing within the QML algorithm.

Quantum Support Vector Machines Kernel Generation with Classical Post-Processing

Anant Agnihotri (Fraunhofer IAF)

We investigate the optimization of kernel generation for quantum support vector algorithms focused on data classification tasks. To enhance classification efficiency, we implement classical post-processing techniques. Our approach begins with preprocessing high-dimensional data using Principal Component Analysis (PCA), which effectively reduces dimensionality while retaining essential features that contribute to the classification task.
Following the preprocessing step, we generate a training kernel using the ZZ feature map. In the subsequent post-processing phase, we utilize the overlap with all quantum states—not just the all-zero state, which is the standard practice in quantum kernel methods. This innovative approach allows us to compute kernel entries as a weighted sum of these overlaps, enabling us to determine the kernel values with fewer shots, thereby improving efficiency.
We implement our method on the MNIST dataset, aiming to differentiate between handwritten digits '0' and '1', as well as to distinguish between 'even' and 'odd' digits. In our analysis, we compare the kernel score, defined as the fraction of unseen data points accurately identified by the standard quantum kernel, against that of the kernel enhanced by our post-processing method. Our findings reveal that the post-processed kernel significantly outperforms the standard kernel, particularly when dealing with a higher number of qubits.
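
For context, the baseline quantum kernel referred to above (overlap with the all-zero state only) can be sketched as follows; this is an illustrative reconstruction with toy data in place of the PCA-compressed MNIST features, and it does not reproduce the weighted-overlap post-processing that is the contribution of the talk.

# Baseline ZZ-feature-map fidelity kernel with an SVM, for illustration only.
import numpy as np
from qiskit.circuit.library import ZZFeatureMap
from qiskit.quantum_info import Statevector
from sklearn.svm import SVC

fmap = ZZFeatureMap(feature_dimension=2, reps=2)

def embed(x):
    return Statevector.from_instruction(fmap.assign_parameters(x)).data

def kernel(XA, XB):
    sa = np.array([embed(x) for x in XA])
    sb = np.array([embed(x) for x in XB])
    return np.abs(sa.conj() @ sb.T) ** 2    # K_ij = |<psi(x_i)|psi(x_j)>|^2

rng = np.random.default_rng(0)              # toy two-class data
X = np.vstack([rng.normal(-0.5, 0.2, (15, 2)), rng.normal(0.5, 0.2, (15, 2))])
y = np.array([0] * 15 + [1] * 15)

clf = SVC(kernel="precomputed").fit(kernel(X, X), y)
print("train accuracy:", clf.score(kernel(X, X), y))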

Leveraging GenAI to Improve Reliability of Quantum Programs

Dr. Max Scheerer (FZI)

Quantum computing holds immense promise, but developing quantum programs remains error-prone due to inherent hardware noise and algorithmic complexity. This work explores how generative AI (GenAI) can assist in improving the reliability of quantum programs. We examine two directions in which GenAI can contribute to improved reliability: automated bug detection and correction, and support for fault-tolerant execution. We evaluate the efficacy of GenAI using two benchmarks representative of these contexts.

ML for Quantum and Quantum for QML at IBM

Dr. Johannes Greiner, IBM

At IBM, the combination of machine learning and quantum computing is approached from two different angles. On the one hand, machine learning methods, including LLMs, are used to enhance gate implementation and transpilation as well as the generation of Qiskit code; related solutions are the AI transpiler passes and the Qiskit Code Assistant. On the other hand, quantum machine learning applications promise to challenge and eventually improve current solutions in machine learning. IBM Quantum follows a dual approach: it contributes QML solutions and research in-house, and it hosts applications developed by partners in the commercial and academic ecosystem on the Qiskit Functions catalog and in open-source repositories, respectively.

Dr. Johannes Greiner is Technical Integrations Engineer at IBM’s Research and Development Lab in Böblingen. He has specialized in Quantum Technology since 2010, with an MSc from King’s College London and PhD from University of Stuttgart. He previously developed practical AI solutions for manufacturing at Mercedes. Currently, he works directly with IBM Quantum’s Partners to implement pipelines enabling use of IBM Quantum’s Services. He has led successful integrations with Software Vendors and Startups, particularly for Qiskit Functions, including solutions in the QML context. He is also a contributing inventor in industrial AI solutions and AI-enhanced quantum computing workflows.

The symposium takes place at experimenta, Germany's largest science center, in the heart of Heilbronn – an up-and-coming quantum hub and a unique place where the expertise of national and international research institutions comes together.

experimenta – Das Science Center
Experimenta-Platz 
74072 Heilbronn

If you need a hotel room for your stay in Heilbronn, you can have a look at the hotel recommendations of the City of Heilbronn.

You are invited to submit a talk or a poster (including a mini presentation) on your current work and results in quantum and/or AI research. Contributions relating to current research in (quantum) machine learning are particularly welcome. Topics include, among others:

  • Machine learning for quantum hardware
  • Machine learning for quantum software
  • Machine learning for research

Submission

Submissions are possible until May 2, 2025. Please provide:

  • a short abstract of max. 250 words / 2,000 characters
  • a short biography of max. 100 words / 500 characters

Please submit via the following link: https://eveeno.com/152312978

Thanks to the generous support of the Dieter Schwarz Stiftung, the Program Committee will present a Best Paper Award and a Best Poster Award. Both awards come with prize money of €6,000 each and are aimed specifically at early-career researchers (e.g., doctoral students or postdocs). The goal is to make research achievements and knowledge-transfer skills visible, to foster international scientific exchange, and to enable participation in renowned conferences.

We look forward to innovative contributions and fresh ideas.

Dr. Marco Roth
Fraunhofer IPA

Prof. Dr. Achim Kempf
University of Waterloo

Dr. Christian Tutschku
Fraunhofer IAO

Booklet

Organized by:

Dr. Christian Tutschku, Fraunhofer IAO
Dr. Anne-Sophie Tombeil, Fraunhofer IAO
Dr. Marco Roth, Fraunhofer IPA

In cooperation with:

Achim Kempf, Professor and Chair
Physics of Information Lab
Institute for Quantum Computing, University of Waterloo
