Supercomputing Leverages Quantum Machine Learning and Grover’s Algorithm

Khanal, B., Orduz, J., Rivas, P. et al. Supercomputing leverages quantum machine learning and Grover’s algorithm. J Supercomput 79, 6918–6940 (2023). https://doi.org/10.1007/s11227-022-04923-4. [ bib |  .pdf ]

Quantum computing, a field that has drawn significant attention recently, promises to revolutionize how we approach computational problems. In our paper “Supercomputing leverages quantum machine learning and Grover’s algorithm,” led by Bikram Khanal, we delve into the intricacies of quantum computing, with a particular focus on Grover’s algorithm and its potential applications in quantum machine learning. This journal article is an extended version of an earlier conference paper found here.

Understanding Quantum Computing and Grover’s Algorithm

At the heart of our paper is a comprehensive discussion of the basics of quantum computing. We shed light on Grover’s quantum algorithm, which offers a quadratic speedup for unstructured search compared to classical algorithms. Our team conducted an experiment simulating classical logic circuits by exploiting the power of amplitude amplification, a core principle behind Grover’s algorithm.

The Quantum Advantage

One of the primary takeaways from our research is the potential advantage quantum computing and quantum machine learning can offer over classical counterparts. By harnessing the efficiency of quantum computers, we can significantly reduce the reliance on supercomputing power for executing complex programs. This is particularly relevant for problems that pose challenges for classical computing methods.

Exploring Quantum Machine Learning

Our paper also delves into two promising approaches in quantum machine learning:

  1. Variational Quantum Circuits: These are parameterized quantum circuits whose gate parameters are tuned by a classical optimizer, allowing quantum computations to be trained much like neural networks.
  2. Kernel-based Quantum Machine Learning: This approach takes the kernel concept used for many classical machine learning tasks and adapts it so that kernel values are computed by quantum circuits.

We believe the intersection of quantum algorithms and machine learning is ripe for exploration. While there’s a lot of potential, it’s also a domain that requires rigorous research to unearth solutions that can genuinely outperform classical machine learning methods. Kernel-based approaches, in particular, hold significant promise in the near future of quantum machine learning and supercomputing.
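
As a concrete illustration of the first approach, below is a minimal sketch of a variational quantum circuit trained with gradient descent, assuming the PennyLane library; the two-qubit layout, angle encoding, and squared-error cost are illustrative choices rather than the specific circuits studied in the paper.

    import pennylane as qml
    from pennylane import numpy as np

    dev = qml.device("default.qubit", wires=2)

    @qml.qnode(dev)
    def variational_circuit(x, weights):
        qml.AngleEmbedding(x, wires=range(2))                   # encode a 2-feature sample
        qml.StronglyEntanglingLayers(weights, wires=range(2))   # trainable rotation layers
        return qml.expval(qml.PauliZ(0))                        # readout used as the prediction

    def cost(weights, X, y):
        # mean squared error between circuit outputs and +/-1 labels
        return sum((variational_circuit(x, weights) - yi) ** 2
                   for x, yi in zip(X, y)) / len(X)

    shape = qml.StronglyEntanglingLayers.shape(n_layers=2, n_wires=2)
    weights = np.random.random(shape)                           # trainable parameters
    opt = qml.GradientDescentOptimizer(stepsize=0.1)

    # one optimization step on toy data
    X_toy = np.array([[0.1, 0.4], [0.9, 0.2]], requires_grad=False)
    y_toy = np.array([1.0, -1.0], requires_grad=False)
    weights = opt.step(lambda w: cost(w, X_toy, y_toy), weights)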

Looking Ahead

Our journey into the world of quantum computing doesn’t end here. We have charted out an exciting roadmap for our future research:

  • Real-world Data Testing: We aim to test our algorithm using actual data from classical circuits. This will involve parsing the circuit and feeding it into our algorithm.
  • Optimizing Computational Complexity: Our goal is to enhance the efficiency of our approach, especially when compared to classical methods.
  • Leveraging Grover’s Algorithm for Machine Learning: We believe many classification problems in machine learning can be reframed as search problems, making them ideal candidates for Grover’s algorithm.
  • Kernel Methods and Grover’s Algorithm: In our future work, we plan to reformulate machine learning problems using kernel methods, aiming to solve them efficiently as search problems using Grover’s algorithm.

Finally, we are at the cusp of some groundbreaking discoveries that can redefine the supercomputing landscape. We invite readers and fellow researchers to join us on this exciting journey and keep a close watch on the advancements in this domain. Read the paper here [ bib |  .pdf ].

Quantum circuit for the Grover algorithm with a user-defined oracle on the clauses |q0⟩ AND |q3⟩, |q1⟩ XOR |q2⟩, and |q2⟩ AND |q3⟩. Note that we need seven qubits in total. For c clauses, three in this case, we require c additional qubits and one output qubit, resulting in an (n + c + 1)-qubit quantum circuit.
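
To make the oracle construction above concrete, here is a minimal sketch assuming Qiskit: each clause is evaluated into its own ancilla qubit, the output qubit is flipped only when all clauses hold, and the ancillas are then uncomputed. Register sizes and gate choices are illustrative and may differ from the exact circuit in the paper.

    from qiskit import QuantumCircuit, QuantumRegister

    inp = QuantumRegister(4, "q")      # input qubits q0..q3
    cls = QuantumRegister(3, "c")      # one ancilla per clause
    out = QuantumRegister(1, "out")    # output qubit (prepared in |-> for phase kickback)
    qc = QuantumCircuit(inp, cls, out)

    qc.ccx(inp[0], inp[3], cls[0])     # clause 1: q0 AND q3
    qc.cx(inp[1], cls[1])              # clause 2: q1 XOR q2
    qc.cx(inp[2], cls[1])
    qc.ccx(inp[2], inp[3], cls[2])     # clause 3: q2 AND q3

    qc.mcx(list(cls), out[0])          # flip the output only if all clauses are satisfied

    # uncompute the ancillas so the oracle can be repeated in later Grover iterations
    qc.ccx(inp[2], inp[3], cls[2])
    qc.cx(inp[2], cls[1])
    qc.cx(inp[1], cls[1])
    qc.ccx(inp[0], inp[3], cls[0])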

On Adversarial Examples for Text Classification by Perturbing Latent Representations

Recent advances in deep learning have significantly improved text classification. However, this improvement comes at a cost: deep learning models are vulnerable to adversarial examples, a weakness that limits their robustness.

Fortunately, two students in our lab, Korn Sooksatra and Bikram Khanal, observed that the input of a text classifier is discrete, which shields the classifier from many state-of-the-art white-box attacks. Nonetheless, prior work has produced black-box attacks that successfully manipulate the discrete input values to find adversarial examples. Therefore, instead of changing the discrete values directly, they map the input to its real-valued embedding vector, apply state-of-the-art white-box attacks there, and then convert the perturbed embedding back into text, which becomes the adversarial example. In summary, Ph.D. candidates Sooksatra and Khanal built a framework that measures the robustness of a text classifier using the classifier’s gradients.
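
The following is a minimal sketch of that general idea in PyTorch: perturb the embedded input with a white-box gradient step and then map each perturbed vector back to its nearest vocabulary embedding. The classifier interface and the single FGSM-style step are assumptions for illustration, not the authors’ exact procedure.

    import torch
    import torch.nn.functional as F

    def latent_adversarial_example(classifier, embedding, token_ids, label, eps=0.1):
        """classifier maps a (1, seq, dim) embedded input to (1, num_classes) logits."""
        emb = embedding(token_ids).detach().requires_grad_(True)   # (seq, dim)
        logits = classifier(emb.unsqueeze(0))
        loss = F.cross_entropy(logits, torch.tensor([label]))
        loss.backward()
        perturbed = emb + eps * emb.grad.sign()                     # white-box step in latent space
        # project each perturbed vector back to the closest discrete token
        dists = torch.cdist(perturbed, embedding.weight)            # (seq, vocab)
        return dists.argmin(dim=-1)                                 # adversarial token ids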

This paper was accepted for presentation and publication at the LXAI workshop at NeurIPS 2022 in New Orleans, LA. Download the paper here: [ bib |  .pdf ]

(Editorial) Emerging Technologies, Evolving Threats: Next-Generation Security Challenges

Volume 3, Issue 3, of the IEEE Transactions on Technology and Society has officially been published, with great contributions on the security challenges posed by emerging technologies and their effects on society.

Our editorial piece is freely accessible; it introduces the research in this issue and explains why these challenges matter. The discussion touches on GPT and DALL·E as examples of the great advances in AI and some of the ethical considerations around them. Take a look:

T. Bonaci, K. Michael, P. Rivas, L. J. Robertson and M. Zimmer, “Emerging Technologies, Evolving Threats: Next-Generation Security Challenges,” in IEEE Transactions on Technology and Society, vol. 3, no. 3, pp. 155-162, Sept. 2022, doi: 10.1109/TTS.2022.3202323.

NSF Award: Using NLP to Identify Suspicious Transactions in Omnichannel Online C2C Marketplaces

Baylor University has been awarded funding under the SaTC program for Enabling Interdisciplinary Collaboration, a grant led by Principal Investigator Dr. Pablo Rivas and an amazing group of multidisciplinary researchers:

  • Dr. Gissella Bichler from California State University San Bernardino, Center for Criminal Justice Research, School of Criminology and Criminal Justice.
  • Dr. Tomas Cerny from Baylor University, Computer Science Department, where he leads software engineering research.
  • Dr. Laurie Giddens from the University of North Texas, a faculty member at the G. Brint Ryan College of Business.
  • Dr. Stacy Petter from Wake Forest University, School of Business. She and Dr. Giddens have extensive research experience and funding in the study of human trafficking.
  • Dr. Javier Turek, a Research Scientist in Machine Learning at Intel Labs, is our collaborator in matters related to machine learning for natural language processing.

We also have two Ph.D. students working on this project: Alejandro Rodriguez and Korn Sooksatra.

This project was motivated by the increasing pattern of people buying and selling goods and services directly from other people via online marketplaces. While many online marketplaces enable transactions among reputable buyers and sellers, some platforms are vulnerable to suspicious transactions. This project investigates whether it is possible to automate the detection of illegal goods or services within online marketplaces. First, the project team will analyze the text of online advertisements and marketplace policies to identify indicators of suspicious activity. Then, the team will adapt the findings to a specific context to locate stolen motor vehicle parts advertised via online marketplaces. Together, the work will lead to general ways to identify signals of illegal online sales that can be used to help people choose trustworthy marketplaces and avoid illicit actors. This project will also provide law enforcement agencies and online marketplaces with insights to gather evidence on illicit goods or services on those marketplaces.

This research assesses the feasibility of modeling illegal activity in online consumer-to-consumer (C2C) platforms, using platform characteristics, seller profiles, and advertisements to prioritize investigations using actionable intelligence extracted from open-source information. The project is organized around three main steps. First, the research team will combine knowledge from computer science, criminology, and information systems to analyze online marketplace technology platform policies and identify platform features, policies, and terms of service that make platforms more vulnerable to criminal activity. Second, building on the understanding of platform vulnerabilities developed in the first step, the researchers will generate and train deep learning-based language models to detect illicit online commerce. Finally, to assess the generalizability of the identified markers, the investigators will apply the models to markets for motor vehicle parts, a licit marketplace that sometimes includes sellers offering stolen goods. This project establishes a cross-disciplinary partnership among a diverse group of researchers from different institutions and academic disciplines with collaborators from law enforcement and industry to develop practical, actionable insights.

Self-supervised modeling. Given a corpus associated with a C2C domain of interest and its ontologies, we will extract features and apply attention mechanisms for self-supervised and supervised tasks. The self-supervised models include the completion of missing information and domain-specific text encoding for learning representations. Supervised tasks will then leverage these representations to learn the relationships with the targets.
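
For the completion-of-missing-information task, the pretext objective can be sketched as masked-token prediction; the snippet below (PyTorch) shows an illustrative masking routine, with the mask rate and the -100 ignore index as assumed conventions rather than the project’s final design.

    import torch

    def mask_tokens(token_ids, mask_token_id, mask_prob=0.15):
        """Return (masked inputs, labels) for the self-supervised completion task."""
        labels = token_ids.clone()
        mask = torch.rand(token_ids.shape) < mask_prob
        labels[~mask] = -100                 # ignore unmasked positions in the loss
        masked = token_ids.clone()
        masked[mask] = mask_token_id         # hide the selected tokens
        return masked, labels

The encoder is then trained to recover the original tokens at the masked positions, and its learned representations are reused for the supervised detection tasks.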

Dr. Bejarano’s Work Is Recognized by Amazon

According to the World Federation of the Deaf, there are more than 70 million deaf people worldwide, and more than 80% of them live in developing countries. Recent research by Dr. Gissella Bejarano, our very own postdoctoral research scientist, has been recognized for its impact on computer vision and speech recognition, providing opportunities to help individuals with disabilities. With support from AWS, Dr. Bejarano is finding better ways to translate Peruvian Sign Language using computer vision and natural language processing.

Read more about this in this release by AWS.

SICEM: A Sensitivity-Inspired Constrained Evaluation Method for Adversarial Attacks on Classifiers with Occluded Input Data

In the rapidly evolving field of artificial intelligence, understanding the sensitivity of models to adversarial attacks is crucial. In our recent paper, Korn Sooksatra introduces the Sensitivity-Inspired Constrained Evaluation Method (SICEM) to address this concern.

Sooksatra, K., Rivas, P. Evaluation of adversarial attacks sensitivity of classifiers with occluded input data. Neural Comput & Applic 34, 17615–17632 (2022). https://doi.org/10.1007/s00521-022-07387-y

Understanding SICEM

Our proposed method, SICEM, evaluates the vulnerability of an incomplete input against an adversarial attack in comparison to a complete one. This is achieved by leveraging the Jacobian matrix concept. The sensitivity of the target classifier’s output to each attribute of the input is calculated, providing a comprehensive understanding of how changes in the input can affect the output.

    \[ s(x,y)_i =  \left|\min \left(0, \frac{\partial Z(x)_y}{\partial x_i} \cdot \left(\sum_{y^{'} \neq y} \frac{\partial Z(x)_{y^{'}}}{\partial x_i}\right) \cdot C(y, 1, 0)_i\right)\right| \]

This sensitivity score gives us an insight into how much each attribute of the input contributes to the output’s sensitivity. The score is then used to estimate the overall sensitivity of the given input and its mask.

    \[ S(x, M)_y = \sum_{i=0}^{n-1} (s(x, y)_i \cdot M_i) \]

The sensitivity ratio then provides a comparative measure of how sensitive the classifier’s output is for an incomplete input versus the complete one.
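
As a rough illustration, both quantities can be approximated with automatic differentiation. The sketch below (PyTorch) assumes a classifier that returns the pre-softmax logits Z(x) and, for brevity, drops the C(y, 1, 0) term from the paper’s definition.

    import torch

    def sensitivity(model, x, y):
        """Per-attribute score s(x, y)_i from the Jacobian of the logits Z(x)."""
        jac = torch.autograd.functional.jacobian(
            lambda inp: model(inp.unsqueeze(0)).squeeze(0), x)   # (num_classes, *x.shape)
        grad_y = jac[y]                                          # dZ(x)_y / dx
        grad_rest = jac.sum(dim=0) - grad_y                      # sum over y' != y
        # |min(0, dZ_y/dx_i * sum_{y' != y} dZ_{y'}/dx_i)| per attribute
        return torch.clamp(grad_y * grad_rest, max=0.0).abs()

    def masked_sensitivity(model, x, y, mask):
        """Overall score S(x, M)_y for an input x and its occlusion mask M."""
        return (sensitivity(model, x, y) * mask).sum()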

Results and Implications

Our focus was on an automobile image from the CIFAR-10 dataset. Interestingly, the adversarial examples generated by FGSM and IGSM required the same value of \epsilon, which was significantly lower than for other images. This can be attributed to the layer-wise linearity of the classifier: larger inputs, like the automobile image, require a smaller \epsilon to create an adversarial example. However, JSMA required a higher \epsilon because it uses the L_0 norm as its metric.

Understanding the sensitivity of AI models is paramount in ensuring their robustness against adversarial attacks. The SICEM method provides a comprehensive tool to ensure safer and more reliable AI systems. Read the full paper here [ bib |  .pdf ].

Hybrid Quantum Variational Autoencoders for Representation Learning

One of our recent papers introduces a novel hybrid quantum machine learning approach to unsupervised representation learning by using a quantum variational circuit that is trainable with traditional gradient descent techniques. Access it here: [ bib | .pdf ]

Much of the work on quantum machine learning has gained popularity in recent years. Some of the most notable efforts involve variational approaches (Cerezo 2021, Khoshaman 2018, Yuan 2019). Researchers have shown that these models are effective in complex tasks, which warrants further study and opens new doors for applied quantum machine learning research. Another popular approach is to perform kernel learning using a quantum approach (Blank 2020, Schuld 2019, Rebentrost 2014). In this case, the kernel-based projection of the data \mathbf{x} produces a feasible linear mapping to the desired target y as follows:

(1)   \begin{equation*}     y(\mathbf{x})=\operatorname{sign}\left(\sum_{j=1}^{M} \alpha_{j} k\left(\mathbf{x}_{j}, \mathbf{x}\right)+b\right) \end{equation*}

for hyperparameters b, \alpha that need to be provided or learned. This enables the creation of certain types of support vector machines whose kernels are calculated such that the data \mathbf{x} is processed in the quantum realm, that is, \left|\mathbf{x}_{j}\right\rangle=1 /\left|\mathbf{x}_{j}\right| \sum_{k=1}^{N}\left(\mathbf{x}_{j}\right)_{k}|k\rangle. The work of Schuld et al. expands the theory behind this idea and shows that all kernel methods can be quantum machine learning methods. Recently, in 2020, Mari et al. worked on variational models that are hybrid in format. In particular, the authors focused on transfer learning, i.e., the idea of bringing a pre-trained model (or a piece of it) to be part of another model. In Mari’s case, the larger model is a computer vision model, e.g., ResNet, which becomes part of a variational quantum circuit that performs classification. The work we present here follows a similar idea, but we focus on the autoencoder architecture rather than a classification model, and we compare the representations learned by a classical model against those learned by a variational quantum fine-tuned model.
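
As a rough illustration of Eq. (1), the sketch below (NumPy) evaluates the decision rule using the amplitude-encoding overlap k(\mathbf{x}_j, \mathbf{x}) = |\langle \mathbf{x}_j | \mathbf{x} \rangle|^2 as the kernel; the coefficients \alpha and the bias b are assumed to come from a previously trained support vector machine.

    import numpy as np

    def amplitude_kernel(x1, x2):
        """Squared overlap of the amplitude-encoded states |x1> and |x2>."""
        x1 = x1 / np.linalg.norm(x1)
        x2 = x2 / np.linalg.norm(x2)
        return np.dot(x1, x2) ** 2

    def predict(x, support_vectors, alpha, b):
        """y(x) = sign( sum_j alpha_j * k(x_j, x) + b )."""
        k = np.array([amplitude_kernel(xj, x) for xj in support_vectors])
        return np.sign(np.dot(alpha, k) + b)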

Evaluating Accuracy and Adversarial Robustness of Quanvolutional Neural Networks

In some cases, the combination of a quantum circuit and a convolutional neural network (CNN) can outperform a classical CNN. In our recent article, we show an example of such a case, using accuracy and adversarial examples as measures of performance and robustness. Check it out: [ bib | pdf ]
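
For intuition, here is a minimal sketch (assuming PennyLane) of a quanvolutional filter: a small quantum circuit applied to 2x2 image patches whose expectation values form feature maps that a classical CNN can consume. The encoding and entangling layout are illustrative, not the exact architecture evaluated in the article.

    import pennylane as qml
    import numpy as np

    dev = qml.device("default.qubit", wires=4)

    @qml.qnode(dev)
    def quanv_patch(patch):
        # encode four pixel values (in [0, 1]) as rotation angles
        for i, pixel in enumerate(patch):
            qml.RY(np.pi * pixel, wires=i)
        # fixed entangling block acting as the "filter"
        for i in range(3):
            qml.CNOT(wires=[i, i + 1])
        return [qml.expval(qml.PauliZ(i)) for i in range(4)]

    def quanvolve(image):
        """Slide the quantum filter over a grayscale image with stride 2."""
        h, w = image.shape
        out = np.zeros((h // 2, w // 2, 4))
        for r in range(0, h - 1, 2):
            for c in range(0, w - 1, 2):
                patch = image[r:r + 2, c:c + 2].flatten()
                out[r // 2, c // 2] = quanv_patch(patch)
        return out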

Artificial Intelligence Computing at the Quantum Level

A unique feature of a quantum computer, in comparison to a classical computer, is that its basic unit of information (often referred to as a qubit) can be in one of two states (0 or 1) or in a superposition of the two states (a linear combination of 0 and 1) at any given time. The most common mathematical representation of a qubit is

    \[ |\psi\rangle = \alpha|0\rangle + \beta|1\rangle,  \]

which denotes a superposition state, where \alpha, \beta are complex numbers satisfying |\alpha|^2 + |\beta|^2 = 1, and |0\rangle, |1\rangle are computational basis states that form an orthonormal basis of this vector space.
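
A tiny NumPy illustration of this state: example amplitudes \alpha and \beta must be normalized, and their squared magnitudes give the probabilities of measuring 0 or 1.

    import numpy as np

    alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)        # example amplitudes
    state = alpha * np.array([1, 0]) + beta * np.array([0, 1])
    assert np.isclose(np.vdot(state, state).real, 1.0)   # normalization check
    p0, p1 = abs(alpha) ** 2, abs(beta) ** 2             # measurement probabilities
    print(p0, p1)                                        # 0.5 0.5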

To know more about how quantum machine learning takes advantage of this, check our newest article here.

Enhancing Adversarial Examples on Deep QNetworks with Previous Information

This work finds strong adversarial examples for Deep Q-Networks, which are well-known deep reinforcement learning models. We combine two subproblems of finding adversarial examples in deep reinforcement learning: choosing which states to perturb and determining how much to perturb them, so the attack can optimize both jointly. Further, we trained Deep Q-Networks to play the Atari games Breakout and Space Invaders and then used our attack to find adversarial examples in those games. As a result, we achieve state-of-the-art results and show that our attack is natural and stealthy. Paper: [ bib | pdf ]