Learning Robust Observable to Address Noise in Quantum Machine Learning

In the rapidly evolving field of Quantum Machine Learning (QML), one of the most pressing challenges is handling noise—the errors that naturally arise in quantum systems, particularly in the Noisy Intermediate-Scale Quantum (NISQ) era. But what if we could teach quantum systems to “learn” and address noise head-on? Our paper “Learning Robust Observable to Address Noise in Quantum Machine Learning” explores an approach to mitigating this issue by focusing on learning robust observables. These observables can withstand the effects of noise, improving the performance of QML models in noisy environments.

Understanding the Problem of Noise in QML

In quantum systems, noise comes from imperfections in quantum gates, interactions with the environment, and decoherence—making quantum computations highly error-prone. When applying QML, this noise leads to inaccuracies in predictions and model training. This research aims to identify observables that remain invariant or change minimally even in the presence of noise, thus offering more reliable outputs from quantum systems.

The Framework: Learning Robust Observables

We propose a machine learning-based framework for finding observables that are inherently resistant to various types of noise. The idea is to train a model to identify observables whose expectation values remain invariant, or change only minimally, under noise. The model learns from the behavior of quantum states passing through noisy channels and adjusts to find robust observables that maintain their integrity despite noise. We illustrate the problem using a Bell state (a well-known quantum state), subjecting it to a depolarization channel to simulate noise.

The process can be formalized as an optimization problem where the goal is to minimize the change in the expectation value of the observable when the quantum state is subject to noise. Mathematically, this can be expressed as minimizing:

    \[\min_{\mathcal{O}_n}\ \mathbb{E}\!\left[\,\left| \langle\psi| \mathcal{O} |\psi\rangle - \langle\psi| \mathcal{O}_n |\psi\rangle \right|\,\right]\]

Here, \mathcal{O} is the Pauli-Z observable and \mathcal{O}_n is the observable we are trying to learn. The expectation values are computed before and after the noise is introduced, and the goal is to find an observable that minimizes this difference, effectively learning a robust observable.
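
To make the formulation concrete, the sketch below (ours, not the paper's implementation) sets up this optimization for a Bell state: it parameterizes a 4x4 Hermitian candidate \mathcal{O}_n and minimizes the average deviation of its expectation value on the depolarized state from the noiseless \langle Z \otimes Z \rangle value. The choice of channel (standard depolarization on the first qubit), the Z \otimes Z target, and the Nelder-Mead optimizer are illustrative assumptions.

    # Hedged sketch (not the paper's code): learn a Hermitian observable O_n whose
    # expectation on a depolarized Bell state tracks <Z@Z> on the noiseless state.
    import numpy as np
    from scipy.optimize import minimize

    I2 = np.eye(2, dtype=complex)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)

    # Bell state |Phi+> = (|00> + |11>)/sqrt(2) as a density matrix
    psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
    rho_clean = np.outer(psi, psi.conj())
    target = np.real(np.trace(np.kron(Z, Z) @ rho_clean))  # expectation before noise

    def depolarize_qubit0(rho, p):
        """Standard depolarizing channel (Pauli form) applied to the first qubit."""
        paulis = [np.kron(P, I2) for P in (X, Y, Z)]
        return (1 - p) * rho + (p / 3) * sum(P @ rho @ P for P in paulis)

    def hermitian(theta):
        """Map 32 real parameters to a 4x4 Hermitian candidate observable."""
        M = theta[:16].reshape(4, 4) + 1j * theta[16:].reshape(4, 4)
        return (M + M.conj().T) / 2

    def loss(theta, ps=np.linspace(0.0, 0.9, 10)):
        """Average |<O_n>_noisy - <Z@Z>_clean| over several depolarization rates."""
        O_n = hermitian(theta)
        devs = [abs(np.real(np.trace(O_n @ depolarize_qubit0(rho_clean, p))) - target)
                for p in ps]
        return float(np.mean(devs))

    rng = np.random.default_rng(0)
    res = minimize(loss, rng.normal(size=32), method="Nelder-Mead",
                   options={"maxiter": 50000, "maxfev": 50000})
    print("final robustness loss:", res.fun)
    print("learned observable O_n:\n", np.round(hermitian(res.x), 3))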

A Toy Example

In our framework, we train QML models by simulating quantum systems across different noise channels, including depolarization, amplitude damping, phase damping, bit flip, and phase flip channels. The objective is to learn observables for various quantum circuits, such as Bell state circuits, Quantum Fourier Transform circuits, and highly entangled random circuits, that remain robust across different noise levels. The framework demonstrated that it could identify observables that better retain a state’s properties under noisy conditions, showing that robust observables can be learned effectively.

 Consider the following example:

(1)   \begin{equation*} O_{optimized} = \begin{pmatrix} 0.804 & 0.086 + 0.138i & 0.739 + 0.050i & 0.070 + 0.132i\\ 0.086 - 0.138i & 0.302 & 0.087 - 0.122i & 0.277 + 0.019i \\0.739 - 0.050i & 0.087 + 0.122i & 1.253 & 0.133 + 0.215i \\ 0.070 - 0.132i & 0.277 - 0.019i & 0.133 - 0.215i & 0.470\end{pmatrix}\end{equation*}

We computed its expectation value for the Bell state under varying degrees of depolarization, p \in [0,1). The expectation values of the observable O_{optimized} on the depolarized Bell state, as a function of the depolarization rate p, are plotted in the following figure.

In this figure, Z is the Pauli-Z matrix, X is the Pauli-X matrix, H is the Hadamard gate, A is an arbitrary observable, and O_{optimized} is the learned Hermitian measurement operator shown above. This toy example shows that the expectation value of the custom observable O_{optimized} on the depolarized Bell state remains essentially constant as the depolarization rate p increases.
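
As a quick sanity check of this behavior, the matrix above can be evaluated directly. The short script below is ours (not the paper's code) and assumes a global two-qubit depolarizing channel, \rho \rightarrow (1-p)\rho + p\,\mathbb{I}/4; under that assumption the expectation value stays flat at roughly 0.707 for every p.

    # Hedged check of the toy observable (assumes a global two-qubit depolarizing
    # channel, which may differ from the exact noise model behind the figure).
    import numpy as np

    O_opt = np.array([
        [0.804,          0.086 + 0.138j, 0.739 + 0.050j, 0.070 + 0.132j],
        [0.086 - 0.138j, 0.302,          0.087 - 0.122j, 0.277 + 0.019j],
        [0.739 - 0.050j, 0.087 + 0.122j, 1.253,          0.133 + 0.215j],
        [0.070 - 0.132j, 0.277 - 0.019j, 0.133 - 0.215j, 0.470],
    ])

    psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)  # |Phi+>
    rho = np.outer(psi, psi.conj())

    for p in np.linspace(0.0, 0.9, 4):
        rho_p = (1 - p) * rho + p * np.eye(4) / 4             # global depolarization
        print(f"p = {p:.2f}  <O_optimized> = {np.real(np.trace(O_opt @ rho_p)):.3f}")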

Key Findings

  • Custom observables designed through this method demonstrated remarkable stability against noise, especially when compared to traditional observables like Pauli matrices.
  • In noisy channels like depolarization, the learned observables maintained a more consistent expectation value, while traditional observables exhibited greater variance.
  • The approach can be applied to various types of quantum circuits, making it versatile and broadly applicable in enhancing the reliability of QML models.

Implications for Quantum Machine Learning

This study offers a promising avenue for improving the accuracy and stability of QML in real-world applications. By learning robust observables, QML models can perform more reliably and produce more stable outputs, even when operating in the inherently noisy NISQ regime. This has implications for advancing practical applications of quantum computing, especially as we seek to scale up quantum algorithms in the near term.

Looking Ahead: The Future of Noise-Resistant QML

The results from this paper open up exciting possibilities for future work. Imagine a future where every quantum machine learning algorithm can autonomously adjust to different noisy environments by learning which observables to trust. This would make QML models more resilient and, ultimately, more practical for real-world applications.

One immediate future direction is testing the framework on larger systems and more complex noise models. Additionally, combining this method with error correction techniques could further enhance the stability of QML algorithms.

For a detailed exploration of the methodology and findings, read the full paper at:

https://arxiv.org/pdf/2409.07632

References

  • Khanal, Bikram, and Pablo Rivas. “Learning Robust Observable to Address Noise in Quantum Machine Learning.” arXiv preprint arXiv:2409.07632 (2024).

About the Author

Bikram Khanal is a Ph.D. student at Baylor University, specializing in Quantum Machine Learning and Natural Language Processing.

Efficient Quantum Machine Learning with a Modified Depolarization Approach

As the quantum computing community navigates the NISQ (Noisy Intermediate-Scale Quantum) era, managing noise poses a prominent challenge, particularly in Quantum Machine Learning (QML). Quantum systems inherently exhibit noise, which can drastically impact computational accuracy. Notably, depolarization noise, a prevalent noise model in quantum computing, presents a formidable obstacle to developing efficient QML models. The paper “A Modified Depolarization Approach for Efficient Quantum Machine Learning” introduces a modified representation of the depolarization channel for a single qubit. The proposed channel uses two Kraus operators based only on the X and Z Pauli matrices, reducing the computational cost from six to four matrix multiplications per channel execution.

What’s Depolarization, and Why Does It Matter?

Depolarization is a noise process in which a quantum state is, with some probability, replaced by a mixed state, essentially scrambling the information. For example, imagine working with a quantum bit (qubit) represented by a density matrix \rho. In the traditional depolarization model, noise is introduced by applying each of the three Pauli matrices, X, Y, and Z, to \rho with equal probability. Mathematically, this looks like:

(1)   \begin{equation*} \rho \rightarrow (1 - p) \rho + \frac{p}{3} (X \rho X + Y \rho Y + Z \rho Z)\end{equation*}


where p is the probability of depolarization. Each of the Pauli operators represents a potential disturbance to the qubit. The more noise we apply, the more the system deteriorates. However, simulating this noise is computationally expensive, especially in large quantum systems, as it requires a substantial number of matrix multiplications. This is where the paper’s novel contribution shines.

The Power of Modified Depolarization Channel and Two Kraus Operators

The central innovation of this paper is an alternative representation of the depolarization channel that requires fewer matrix multiplications and uses only the X and Z Pauli matrices:

(2)   \begin{equation*} \rho_{m}' = (1 - \frac{2p}{3}) \rho + \frac{2p}{3} Z((\rho X)^T X) Z\end{equation*}


Traditionally, depolarization uses three Kraus operators, each corresponding to one of the Pauli matrices. In practical terms, this means that when we’re simulating a quantum system with noise, we need to perform six matrix multiplications per qubit per step—this scales rapidly with the size of the system. The modified depolarization approach in the paper proposes reducing the number of Kraus operators to two by cleverly using only the X and Z Pauli matrices, allowing for more efficient simulation without significantly compromising the accuracy of the noise model. The two Kraus operators are defined as:

    \[\begin{array}{cc}K_0 = \sqrt{1 -\frac{2p}{3}} \mathbb{I}, &K_1 = i \sqrt{\frac{2p}{3}} ZX .\end{array}\]


The authors provide a careful proof of the validity of the proposed modified expression and of its Kraus operators. This seemingly small change reduces the number of required matrix multiplications from six to four, a non-trivial improvement in computational cost. The reduction becomes especially significant as quantum circuits grow deeper and larger, as is common in QML algorithms, where we often have to run complex iterative procedures.
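
As an independent numerical check (ours, not the authors' code), the snippet below applies both the standard three-Pauli form of Eq. (1) and the modified form of Eq. (2) to a random single-qubit density matrix and confirms that the two outputs coincide; note that the modified form needs only four matrix products (\rho X, the product with X, and the two products with Z) versus six for the Pauli sum.

    # Hedged numerical check that the modified expression reproduces the standard
    # single-qubit depolarizing channel on a random density matrix.
    import numpy as np

    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)

    rng = np.random.default_rng(1)
    A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    rho = A @ A.conj().T
    rho /= np.trace(rho)                       # random valid density matrix

    p = 0.3
    standard = (1 - p) * rho + (p / 3) * (X @ rho @ X + Y @ rho @ Y + Z @ rho @ Z)
    modified = (1 - 2 * p / 3) * rho + (2 * p / 3) * (Z @ ((rho @ X).T @ X) @ Z)

    print(np.allclose(standard, modified))     # True: the two forms agree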

Experimenting with Quantum Machine Learning on the Iris Dataset

To validate the approach, the authors experimented with a well-known machine learning problem: classifying the Iris dataset by training a variational quantum circuit under the modified depolarization noise channel. Their results verify that the modified channel accurately reproduces the channel evolution for different values of p and is consistent with simulation results. Once the Iris dataset was encoded into quantum states, the QML model was trained under noisy conditions using the modified depolarization method, implemented efficiently with the PennyLane library. The findings were encouraging: the computational load was reduced by 1.5 to 2 times compared to the traditional depolarization method, while classification accuracy remained comparable. This matters for QML because efficiency in quantum simulations is crucial, especially given the limited coherence times and high noise levels of NISQ devices; reducing computational cost allows for quicker experimentation and larger models, accelerating the development of quantum machine learning algorithms. The following figure provides the decision boundaries for the readers’ reference; we encourage readers to refer to the original manuscript for an in-depth analysis.

An increase in circuit depth may enhance the model’s expressivity, but it also increases its vulnerability to noise, which adversely affects the quality of the decision boundary.
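
For readers who want to experiment with the general setup, here is a hedged sketch (not the authors' released code) of how the two Kraus operators above could be inserted into a small PennyLane circuit on the mixed-state simulator. The device, embedding, ansatz, and the value of p are illustrative choices rather than the paper's configuration.

    # Hedged sketch: plugging the two-operator modified depolarization channel into
    # a small PennyLane circuit. Embedding, ansatz, and p are illustrative.
    import numpy as np
    import pennylane as qml

    p = 0.05
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    K0 = np.sqrt(1 - 2 * p / 3) * np.eye(2, dtype=complex)
    K1 = 1j * np.sqrt(2 * p / 3) * (Z @ X)

    dev = qml.device("default.mixed", wires=2)

    @qml.qnode(dev)
    def circuit(x, weights):
        qml.AngleEmbedding(x, wires=[0, 1])                  # encode two features
        qml.StronglyEntanglingLayers(weights, wires=[0, 1])  # trainable ansatz
        for w in (0, 1):                                     # noise after the ansatz
            qml.QubitChannel([K0, K1], wires=w)
        return qml.expval(qml.PauliZ(0))

    weights = np.random.default_rng(0).normal(size=(1, 2, 3))
    print(circuit(np.array([0.3, 1.2]), weights))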

Why This Matters for the NISQ Era

We’re still far from having fault-tolerant quantum computers that can operate indefinitely without errors (notwithstanding Google’s recent progress). For now, we must work with what we have: noisy, small- to mid-scale quantum devices. This means any improvement in the efficiency of noise simulation or error mitigation has a direct and significant impact on the feasibility of using quantum systems for practical problems.

The reduction in computational overhead offered by this modified depolarization approach is particularly relevant for QML, where deep quantum circuits and iterative optimization processes require substantial computational resources. This is a step toward making QML more scalable and closer to real-world applications, even within the limitations of today’s quantum technology.

Looking Ahead

As quantum hardware continues to evolve, so too will the need for more efficient noise models and error mitigation techniques. The modified depolarization approach presented in this paper offers a glimpse into how we can make QML more computationally feasible. While the improvement in noise simulation efficiency may seem small, these incremental advancements will enable the quantum systems of the future to handle more complex and meaningful tasks.

I’m excited to see how this approach will be applied to larger quantum systems and more complex QML models. The road to fully realizing quantum machine learning’s potential is long, but innovations like this bring us one step closer.

For a detailed exploration of the methodology and findings, read the full paper at: https://www.mdpi.com/2227-7390/12/9/1385

References

  • Khanal, Bikram, and Pablo Rivas. “A Modified Depolarization Approach for Efficient Quantum Machine Learning.” Mathematics 12.9 (2024): 1385.

About the Author

Bikram Khanal is a Ph.D. student at Baylor University, specializing in Quantum Machine Learning and Natural Language Processing.

International Conference on Emergent and Quantum Technologies (ICEQT’24)

July 22-25, 2024 — Las Vegas, NV

Dear Esteemed Colleagues,


Quantum computing is an expeditiously evolving field of interdisciplinary research, drawing upon fundamental principles from mathematics, physics, and engineering. To maintain scientific rigor and foster advancement, this domain necessitates a collaborative effort across various STEM disciplines.

We are delighted to announce the International Conference on Emergent and Quantum Technologies (ICEQT’24), scheduled for July 22-25, 2024, in Las Vegas, NV. The conference is designed to serve as a platform for researchers specializing in quantum machine learning and machine learning professionals exploring the application of AI in enhancing quantum computing algorithms. It aims to facilitate the exchange of insights and developments within these dynamic areas of study.

The burgeoning interest among machine learning practitioners in leveraging AI for quantum computing endeavors, and vice versa, underscores the relevance of this conference. Thus, we warmly welcome the submission of original research papers that contribute novel insights and state-of-the-art developments in the following areas of interest:

Foundations of Quantum Computing and Quantum Machine Learning

  • Quantum computing models and paradigms, e.g., Grover, Shor, and others
  • Quantum algorithms for Linear Systems of Equations
  • Quantum Tensor Networks and their Applications in QML

Quantum Machine Learning Algorithms

  • Quantum Neural Networks
  • Quantum Hidden Markov Models
  • Quantum PCA
  • Quantum SVM
  • Quantum Autoencoders
  • Quantum Transfer Learning
  • Quantum Boltzmann machines
  • Theory of Quantum-enhanced Machine Learning

AI for Quantum Computing

  • Machine learning for improved quantum algorithm performance
  • Machine learning for quantum control
  • Machine learning for building better quantum hardware

Quantum Algorithms and Applications

  • Quantum computing: models and paradigms
  • Quantum algorithms for hyperparameter tuning (Quantum computing for AutoML)
  • Quantum-enhanced Reinforcement Learning
  • Quantum Annealing
  • Quantum Sampling
  • Applications of Quantum Machine Learning

Fairness and Ethics in Quantum Machine Learning

We look forward to receiving your submissions and to welcoming you to ICEQT’24.

All submissions that are accepted for presentation will be included in the proceedings published by IEEE CPS. To ensure consistency in formatting, authors should follow the general typesetting instructions available on the IEEE’s website, including single-line spacing and a 2-column format. Additionally, authors of accepted papers must agree to the IEEE CPS standard statement regarding copyrights and policies on electronic dissemination.

Prospective authors are encouraged to submit their papers through the conference’s evaluation website at CMT. More information about the conference, including submission guidelines, can be found on our website at https://baylor.ai/iceqt/.

Important Deadlines

March 22, 2024: Submission of papers: https://cmt3.research.microsoft.com/ICEQT2024
– Full/Regular Research Papers (maximum of 8 pages)
– Short Research Papers (maximum of 5 pages)
– Abstract/Poster Papers (maximum of 3 pages)

April 15, 2024: Notification of acceptance (+/- two days)

May 1, 2024: Final papers + Registration

June 21, 2024: Last day for hotel room reservation at a discounted price.

July 22-25, 2024: The 2024 World Congress in Computer Science, Computer Engineering, and Applied Computing (CSCE’24: USA)
Which includes the International Conference on Emergent and Quantum Technologies (ICEQT’24)

Chairs:
Pablo Rivas, PhD, Baylor University
Bikram Khanal, PhD Candidate, Baylor University

International Conference on Emergent and Quantum Technologies (ICEQT’23)

July 24-27, 2023 — Las Vegas, NV

Dear Esteemed Colleagues,

Quantum computing is a rapidly emerging interdisciplinary research area that integrates concepts from mathematics, physics, and engineering. For scientific rigor and successful progress in the field, it demands contributions from various STEM areas.

In this context, we are pleased to announce the International Conference on Emergent and Quantum Technologies (ICEQT’23), to be held on July 24-27, 2023, in Las Vegas, NV. The conference aims to provide an opportunity for researchers in the field of quantum machine learning and machine learning researchers interested in applying AI to enhance quantum computing algorithms, to present and discuss recent advancements in their areas of expertise.

Notably, there has been an increasing interest from machine learning researchers to apply AI to the quantum computing domain, and vice versa. As a result, we cordially invite submissions of original research papers that present state-of-the-art contributions in the following areas:

Foundations of Quantum Computing and Quantum Machine Learning

  • Quantum computing models and paradigms, e.g., Grover, Shor, and others
  • Quantum algorithms for Linear Systems of Equations
  • Quantum Tensor Networks and their Applications in QML

Quantum Machine Learning Algorithms

  • Quantum Neural Networks
  • Quantum Hidden Markov Models
  • Quantum PCA
  • Quantum SVM
  • Quantum Autoencoders
  • Quantum Transfer Learning
  • Quantum Boltzmann machines
  • Theory of Quantum-enhanced Machine Learning

AI for Quantum Computing

  • Machine learning for improved quantum algorithm performance
  • Machine learning for quantum control
  • Machine learning for building better quantum hardware

Quantum Algorithms and Applications

  • Quantum computing: models and paradigms
  • Quantum algorithms for hyperparameter tuning (Quantum computing for AutoML)
  • Quantum-enhanced Reinforcement Learning
  • Quantum Annealing
  • Quantum Sampling
  • Applications of Quantum Machine Learning

Fairness and Ethics in Quantum Machine Learning

We look forward to receiving your submissions and to welcoming you to ICEQT’23.

All submissions that are accepted for presentation will be included in the proceedings published by IEEE CPS. To ensure consistency in formatting, authors should follow the general typesetting instructions available on the IEEE’s website, including single-line spacing and a 2-column format. Additionally, authors of accepted papers must agree to the IEEE CPS standard statement regarding copyrights and policies on electronic dissemination.

Prospective authors are encouraged to submit their papers through the conference’s evaluation website at https://american-cse.org/drafts/. More information about the conference, including submission guidelines, can be found on our website at https://baylor.ai/iceqt/.

Important Deadlines

April 12, 2023: Submission of papers: https://american-cse.org/drafts/
– Full/Regular Research Papers (maximum of 8 pages)
– Short Research Papers (maximum of 5 pages)
– Abstract/Poster Papers (maximum of 3 pages)

May 1, 2023: Notification of acceptance (+/- two days)

May 16, 2023: Final papers + Registration

June 21, 2023: Last day for hotel room reservation at a discounted price.

July 24-27, 2023: The 2023 World Congress in Computer Science, Computer Engineering, and Applied Computing (CSCE’23: USA)
Which includes the International Conference on Emergent and Quantum Technologies (ICEQT’23)

Chairs:
Dr. Pablo Rivas, Baylor University
Dr. Javier Orduz, Earlham College

Power of Data In Quantum Machine Learning

This week at the lab, we read the following paper, and here is our summary:

Huang, Hsin-Yuan, Michael Broughton, Masoud Mohseni, Ryan Babbush, Sergio Boixo, Hartmut Neven, and Jarrod R. McClean. “Power of data in quantum machine learning.” Nature communications 12, no. 1 (2021): 2631.

Summary

This work focuses on the advancement of quantum technologies and their impact on machine learning. The two paths towards the quantum enhancement of machine learning are using the power of quantum computing to improve the training process of existing classical models, and using quantum models to generate correlations between variables that are inefficient to represent through classical computation. The authors show that this picture is incomplete in machine learning problems where some training data are provided, as the provided data can elevate classical models to rival quantum models. Using prediction error bounds for kernel-based classical and quantum ML methods, the authors present a flowchart for testing a potential quantum prediction advantage. This elevation of classical models through training samples is illustrative of the power of data. The authors also show that, “training a specific classical ML model on a collection of N training examples (\mathbf{x}, y = f(\mathbf{x})) would give rise to a prediction model h(\mathbf{x}) with

(1)   \begin{equation*} \mathbb{E}_\mathbf{x}|h(\mathbf{x})-f(\mathbf{x})|\leq c \sqrt{p^2/N} \end{equation*}

for a constant c > 0. Hence, with N \approx p^2/\epsilon^2 training data, one can train a classical ML model to predict the function f(\mathbf{x}) up to an additive prediction error \epsilon.” They also show that a slight geometric difference between kernel functions defined by classical and quantum ML guarantees similar or better performance in prediction by classical ML. On the other hand, a sizeable geometric difference indicates the possibility of a large prediction advantage using the quantum ML model.
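
As a rough plug-in illustration (the numbers here are ours, not taken from the paper): with c = 1, p = 100, and a target prediction error of \epsilon = 0.1, the bound suggests that on the order of

    \[ N \approx \frac{p^2}{\epsilon^2} = \frac{100^2}{0.1^2} = 10^6 \]

training examples would suffice for the classical model to predict f(\mathbf{x}) to within \epsilon.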

Additionally, the authors introduced “projected quantum kernels” and demonstrated, through empirical results, that these outperformed all tested classical models in prediction error. This work provides a guidebook for generating ML problems that showcase the separation between quantum and classical models.

Intellectual Merit

This work provides a theoretical and computational framework for comparing classical and quantum ML models. The authors develop prediction error bounds for training classical and quantum ML methods based on kernel functions, which provide provable guarantees and are very flexible in the functions they can learn. The authors also develop a flowchart for testing potential quantum prediction advantage, a function-independent prescreening that allows one to evaluate the possibility of better performance. The authors provide a constructive example of a discrete log feature map, which gives a provable separation for their kernel. They rule out many existing models in the literature, providing a powerful sieve for focusing the development of new data encodings.

Broader Impact

The authors’ contributions to the field of quantum technologies and machine learning have significant broader impacts. The flowchart for testing a potential quantum prediction advantage provides a tool for researchers and practitioners to determine whether a quantum ML model is likely to perform better. The authors’ framework can also be used to compare and construct hard classical models, such as hash functions, which have applications in cryptography and secure communication. The work has the potential to accelerate the development of new data encodings, leading to more efficient and accurate machine learning models, with far-reaching implications for applications such as image recognition, text translation, and even physics, where machine learning can revolutionize how we analyze and interpret data. The paper is a collaboration among three prominent quantum research groups: Google Quantum AI, the Institute for Quantum Information and Matter at Caltech, and the Department of Computing and Mathematical Sciences at Caltech.

Supercomputing Leverages Quantum Machine Learning and Grover’s Algorithm

Khanal, B., Orduz, J., Rivas, P. et al. Supercomputing leverages quantum machine learning and Grover’s algorithm. J Supercomput 79, 6918–6940 (2023). https://doi.org/10.1007/s11227-022-04923-4. [ bib |  .pdf ]

Quantum computing, a field that has drawn significant attention recently, promises to revolutionize how we approach computational problems. In our paper titled “Supercomputing leverages quantum machine learning and Grover’s algorithm,” led by Bikram Khanal, we delve into the intricacies of quantum computing, with a particular focus on Grover’s algorithm and its potential applications in quantum machine learning. This journal article is an extended version of an earlier conference paper found here.

Understanding Quantum Computing and Grover’s Algorithm

At the heart of our paper is a comprehensive discussion of the basics of quantum computing. We shed light on Grover’s quantum algorithm, which offers a quadratic speedup over classical search. Our team conducted an experiment simulating classical logic circuits by exploiting the power of amplitude amplification, the core principle behind Grover’s algorithm.
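
To illustrate that principle with a generic sketch (ours, not the circuits from the paper), the snippet below amplifies the amplitude of one marked basis state out of eight using the oracle-plus-diffuser iteration at the heart of Grover’s algorithm.

    # Minimal NumPy sketch of Grover-style amplitude amplification on 3 qubits.
    import numpy as np

    n = 3
    N = 2 ** n
    marked = 5                                         # index of the marked state |101>

    state = np.full(N, 1 / np.sqrt(N))                 # uniform superposition
    oracle = np.eye(N)
    oracle[marked, marked] = -1                        # phase-flip the marked state
    diffuser = 2 * np.full((N, N), 1 / N) - np.eye(N)  # inversion about the mean

    for _ in range(int(np.floor(np.pi / 4 * np.sqrt(N)))):
        state = diffuser @ (oracle @ state)

    print("success probability:", abs(state[marked]) ** 2)  # about 0.945 for N = 8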

The Quantum Advantage

One of the primary takeaways from our research is the potential advantage quantum computing and quantum machine learning can offer over classical counterparts. By harnessing the efficiency of quantum computers, we can significantly reduce the reliance on supercomputing power for executing complex programs. This is particularly relevant for problems that pose challenges for classical computing methods.

Exploring Quantum Machine Learning

Our paper also delves into two promising approaches in quantum machine learning:

  1. Variational Quantum Circuits: These are quantum circuits with tunable gate parameters, allowing for optimization in quantum computations.
  2. Kernel-based Quantum Machine Learning: This approach leverages the concept of kernels, which are used in classical machine learning for various tasks, and adapts them for quantum computations.

We believe the intersection of quantum algorithms and machine learning is ripe for exploration. While there’s a lot of potential, it’s also a domain that requires rigorous research to unearth solutions that can genuinely outperform classical machine learning methods. Kernel-based approaches, in particular, hold significant promise in the near future of quantum machine learning and supercomputing.

Looking Ahead

Our journey into the world of quantum computing doesn’t end here. We have charted out an exciting roadmap for our future research:

  • Real-world Data Testing: We aim to test our algorithm using actual data from classical circuits. This will involve parsing the circuit and feeding it into our algorithm.
  • Optimizing Computational Complexity: Our goal is to enhance the efficiency of our approach, especially when compared to classical methods.
  • Leveraging Grover’s Algorithm for Machine Learning: We believe many classification problems in machine learning can be reframed as search problems, making them ideal candidates for Grover’s algorithm.
  • Kernel Methods and Grover’s Algorithm: In our future work, we plan to reformulate machine learning problems using kernel methods, aiming to solve them efficiently as search problems using Grover’s algorithm.

Finally, we are at the cusp of some groundbreaking discoveries that can redefine the supercomputing landscape. We invite readers and fellow researchers to join us on this exciting journey and keep a close watch on the advancements in this domain. Read the paper here [ bib |  .pdf ].

Quantum circuit for Grover’s algorithm with a user-defined oracle on the clauses |q0⟩ AND |q3⟩, |q1⟩ XOR |q2⟩, and |q2⟩ AND |q3⟩. Note that we need seven qubits in total. For c clauses (three in this case), we require c additional qubits and one output qubit, resulting in an (n + c + 1)-qubit quantum circuit.

Hybrid Quantum Variational Autoencoders for Representation Learning

One of our recent papers introduces a novel hybrid quantum machine learning approach to unsupervised representation learning by using a quantum variational circuit that is trainable with traditional gradient descent techniques. Access it here: [ bib | .pdf ]

Much of the work related to quantum machine learning has been popularized in recent years. Some of the most notable efforts involve variational approaches (Cerezo 2021, Khoshaman 2018, Yuan 2019). Researchers have shown that these models are effective in complex tasks that warrant further studies and open new doors for applied quantum machine learning research. Another popular approach is to perform kernel learning using a quantum approach (Blank 2020, Schuld 2019, Rebentrost 2014). In this case, the kernel-based projection of the data \mathbf{x} produces a feasible linear mapping to the desired target y as follows:

(1)   \begin{equation*}     y(\mathbf{x})=\operatorname{sign}\left(\sum_{j=1}^{M} \alpha_{j} k\left(\mathbf{x}_{j}, \mathbf{x}\right)+b\right) \end{equation*}

for hyperparameters b, \alpha that need to be provided or learned. This enables the creation of certain types of support vector machines whose kernels are calculated such that the data \mathbf{x} is processed in the quantum realm, that is, \left|\mathbf{x}_{j}\right\rangle = \frac{1}{\left|\mathbf{x}_{j}\right|} \sum_{k=1}^{N}\left(\mathbf{x}_{j}\right)_{k}|k\rangle. The work of Schuld et al. expands the theory behind this idea and shows that all kernel methods can be quantum machine learning methods. Recently, in 2020, Mari et al. worked on variational models that are hybrid in nature. In particular, the authors focused on transfer learning, i.e., the idea of bringing a pre-trained model (or a piece of it) into another model. In Mari et al.’s case, the larger model is a computer vision model, e.g., ResNet, that feeds into a variational quantum circuit performing classification. The work we present here follows a similar idea, but we focus on the autoencoder architecture rather than a classification model, and on comparing the representations learned by a classical model and a variational quantum fine-tuned model.
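
To make the kernel picture concrete, here is a hedged sketch (ours, not the model from the paper) of the fidelity-style kernel implied by this amplitude encoding: each sample is normalized into a state vector, each kernel entry is the squared overlap |\langle \mathbf{x}_i | \mathbf{x}_j \rangle|^2, and the resulting Gram matrix is handed to a classical SVM as a precomputed kernel.

    # Hedged sketch: classically simulated amplitude-encoding fidelity kernel
    # |<x_i|x_j>|^2 used with an SVM on a precomputed Gram matrix.
    import numpy as np
    from sklearn import datasets, svm

    X, y = datasets.load_iris(return_X_y=True)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)   # amplitude encoding x -> x/||x||

    def fidelity_kernel(A, B):
        """Gram matrix of squared overlaps between encoded states."""
        return np.abs(A @ B.T) ** 2

    clf = svm.SVC(kernel="precomputed")
    clf.fit(fidelity_kernel(X, X), y)
    print("training accuracy:", clf.score(fidelity_kernel(X, X), y))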

Evaluating Accuracy and Adversarial Robustness of Quanvolutional Neural Networks

A combination of a quantum circuit and a convolutional neural network (CNN) can outperform a classical CNN in some cases. In our recent article, we show an example of such a case, using accuracy and adversarial examples as measures of performance and robustness. Check it out: [ bib | pdf ]

Artificial Intelligence Computing at the Quantum Level

A unique feature of a quantum computer in comparison to a classical computer is that its basic unit of information (often referred to as a qubit) can be in one of two states (0 or 1), or in a superposition of the two states (a linear combination of 0 and 1), at any given time. The most common mathematical representation of a qubit is

    \[ |\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \]

which denotes a superposition state, where \alpha, \beta are complex numbers and |0\rangle, |1\rangle are computational basis states that form an orthonormal basis in this vector space.
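
As a tiny numerical illustration (ours, with an equal-superposition example chosen for concreteness), a qubit can be stored classically as a normalized complex 2-vector, and the Born rule gives the measurement probabilities |\alpha|^2 and |\beta|^2.

    # Hedged sketch: a qubit as a normalized complex amplitude vector.
    import numpy as np

    alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)   # example superposition amplitudes
    psi = np.array([alpha, beta])                   # |psi> = alpha|0> + beta|1>

    assert np.isclose(np.vdot(psi, psi).real, 1.0)  # normalization: |a|^2 + |b|^2 = 1
    probs = np.abs(psi) ** 2                        # Born-rule outcome probabilities
    print("P(0) =", probs[0], " P(1) =", probs[1])  # 0.5 and 0.5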

To know more about how quantum machine learning takes advantage of this, check our newest article here.

Performance Analysis of Quantum Machine Learning Classifiers

Quantum mechanics has pushed analytical studies beyond the constraints of classical mechanics in recent years, and Quantum Machine Learning (QML) is one of the fields that has emerged from this progress. QML currently offers working solutions comparable to traditional machine learning for tasks such as classification and prediction, using quantum classifiers (q-classifiers).

We investigated a variety of these q-classifiers and reported the outcomes of our research in comparison to their classical counterparts. Paper: [ bib | .pdf ]