Navigating the Noisy Quantum Landscape: A Look at Generalization in Quantum Machine Learning

As a quantum machine learning (QML) researcher, I’m constantly fascinated by the potential of quantum computers to revolutionize the way we learn from data. We’re living in a time when quantum computers are starting to become a reality, and their implications for machine learning are simply mind-blowing. But there’s a catch, and it’s a big one: noise.

Right now, we’re in the era of Noisy Intermediate-Scale Quantum (NISQ) devices. These early quantum computers are incredibly powerful, but they’re also extremely sensitive to errors. And when it comes to machine learning, these errors can be a real deal-breaker. That’s because a core principle of machine learning is generalization: the ability of a model to perform well on unseen data. If our quantum machine learning models are constantly being tripped up by noise, how can we trust them to make accurate predictions in the real world? 

That’s where generalization bounds come into play. These are mathematical guarantees that tell us how well a model is expected to perform on unseen data. In the classical realm, generalization bounds are well-established and have proven invaluable in guiding model development and deployment. However, in the quantum domain, particularly in the context of NISQ devices, these bounds are still being actively explored and refined.

We recently published a systematic mapping study, “Generalization error bound for quantum machine learning in NISQ era—a survey,” that specifically focuses on generalization bounds for supervised QML in the NISQ era. Our work meticulously analyzed 37 (out of 544) relevant research articles to understand the current state of the art and identify trends and challenges in this critical area. The study provides valuable insights that can guide future research and development in QML.

Let’s take a look at some of the key findings of this study.

Noisy Quantum Channels Limit Current Quantum Machine Learning Capabilities

The idea behind QML is to leverage the unique properties of quantum systems, like superposition and entanglement, to develop algorithms that can outperform classical machine learning approaches for certain tasks. And our findings do suggest that this quantum advantage is within reach. For instance, some of the generalization bounds proposed in the considered literature suggest that QML models can achieve better performance with smaller datasets compared to classical models, especially when dealing with high-dimensional data. This is huge! Imagine being able to train powerful machine learning models with a fraction of the data we need today.

But there’s a flip side. These same bounds also highlight the delicate interplay between noise, model complexity, and generalization. Deeper quantum circuits, while potentially more expressive, are particularly vulnerable to noise accumulation, leading to degradation in performance. This means that while we might be able to achieve a quantum advantage, we need to be very careful about how we design our QML models. We need to strike a balance between expressiveness and noise resilience, which is no easy feat.

Measurement Does Introduce Error 

Here’s another wrinkle: measurement. It turns out that simply extracting information from a quantum system can introduce errors, and these errors can affect the generalization ability of our QML models. This means we can’t just focus on building fancy quantum algorithms; we also need to develop robust measurement strategies that minimize information loss. And this becomes even more critical in the presence of noise.

Dataset Choices in Quantum Machine Learning Reflect a Familiarity Bias

The study highlights an interesting trend in dataset selection for QML research. On the one hand, we see a lot of work using synthetic datasets specifically designed to showcase the potential of QML. This makes sense; we need to understand how these models work in ideal settings before we can tackle real-world problems. On the other hand, there’s also a tendency to use familiar classical datasets like MNIST and Iris, which are benchmarks in the classical machine-learning world. Now, there’s nothing inherently wrong with using classical datasets, but it does raise a few concerns. First, it creates a temptation to directly compare the performance of QML models with their classical counterparts, which might not be the most meaningful comparison. Remember, we’re not just trying to build quantum versions of classical algorithms; we’re trying to tap into something fundamentally different. Second, these classical datasets might not be the best for highlighting the specific advantages of quantum algorithms. They might not even be solvable more efficiently on a quantum computer! This reliance on familiar datasets, while useful for benchmarking and comparison, can inadvertently lead to what I call “familiarity bias.”

We risk falling into the trap of constantly comparing quantum models to classical counterparts, potentially obscuring the specific advantages quantum algorithms might offer. The question then becomes: Are we prioritizing familiarity over the pursuit of quantum advantage?

The Frequent Use of IBM’s Quantum Platforms May Lead to Research Bias 

Another interesting tidbit from our findings is the apparent popularity of the IBM Quantum Platform among researchers. While this platform is certainly a frontrunner in the quantum computing race, this preference could lead to research biases. Different quantum computing platforms have different noise characteristics, gate fidelities, and qubit connectivity, all of which can affect algorithm performance. So, if we’re primarily focusing on one platform, we might be missing out on valuable insights that other platforms could offer.

Classical Optimization Techniques Are an Intermediate Stop

Another interesting observation is the widespread use of classical optimization techniques like stochastic gradient descent (SGD) and backpropagation in QML. While these techniques have proven incredibly successful in classical machine learning, their suitability for optimizing quantum circuits, particularly in the presence of noise, is still a topic of active debate. While there has been quite a bit of work on ‘parameter-shift’, ‘finite-difference’, and ‘Hadamard test’ gradient approaches, our study emphasizes that the highly non-convex optimization landscape of quantum models can significantly limit the effectiveness of classical techniques.

This raises the question: Are we limiting the potential of QML by relying on classical optimization techniques? Perhaps exploring and developing quantum-specific optimization algorithms could lead to more efficient and effective training procedures.

Quantum Kernels Offer Advantages But Require Further Exploration

A significant portion of the analyzed research focuses on quantum kernel methods. These methods aim to leverage the mathematical power of kernel theory within a quantum framework, potentially leading to improved generalization capabilities for QML models. The study also points out that quantum kernel methods can often achieve competitive performance with simpler circuit architectures compared to other QML approaches, which is particularly advantageous in the noise-prone NISQ era.

However, as promising as they are, quantum kernels aren’t without their challenges. Research suggests that under certain conditions, the values of quantum kernels can exhibit exponential concentration, leading to poor generalization performance. This phenomenon highlights the need for a deeper understanding of the interplay between kernel design, data embedding, and noise mitigation strategies.

Quantum Machine Learning Research Is Exploring a Variety of Approaches

We found notable diversity in research approaches: a mix of theoretical and empirical work, with a particular focus on kernel methods and ensemble learning. While this diversity is valuable, it also risks fragmenting the field. We need a more unified approach, one that combines the rigor of theoretical analysis with the practicality of experimental validation. While theoretical bounds provide valuable insights into the expected behavior of quantum models, their practical relevance hinges on their validation under realistic, noisy conditions. This calls for a more collaborative and iterative approach to research, where theoretical insights guide experimental design, and experimental findings inform further theoretical development.

The Field of Quantum Machine Learning Needs a Unified Approach

The findings of this study reinforce the idea that QML is a nascent field, full of potential but facing unique challenges in the NISQ era. To effectively navigate these challenges, we, as a community, need to adopt a more unified and collaborative approach. This involves:

  • Sharing knowledge: Openly sharing insights about generalization bounds, measurement complexities, dataset choices, and optimization techniques will accelerate the overall progress of the field.
  • Embracing diversity: While standardization can be beneficial, we shouldn’t limit ourselves to a single platform, such as IBM hardware, or a narrow set of techniques. Exploring diverse platforms and approaches will lead to a more robust and adaptable field.
  • Prioritizing quantum-specific solutions: While borrowing from classical techniques can be helpful, we must actively invest in developing quantum-specific algorithms and optimization strategies to fully unlock the power of quantum computing in machine learning.

Future Quantum Machine Learning Research Must Address Several Challenges

Navigating the noisy quantum landscape is not going to be easy. But the potential rewards are simply too great to ignore. As we move forward, we need to focus on:

  • Developing generalization bounds and other metrics that accurately reflect the challenges of the NISQ era.
  • Designing robust QML algorithms that can tolerate noise and efficiently extract information from quantum systems.
  • Exploring a diverse range of datasets that can showcase the unique advantages of quantum algorithms.
  • Embracing a multi-platform approach to ensure the reproducibility and generalizability of research findings.

The quest for a quantum advantage in machine learning is just beginning. And while there are many hurdles to overcome, the journey itself is a testament to human ingenuity. I, for one, am excited to see what the future holds for this revolutionary field.

For a detailed exploration of the methodology and findings, read the full paper here.


About the Author

Bikram Khanal is a Ph.D. student at Baylor University, specializing in Quantum Machine Learning and Natural Language Processing.

Giving Thanks for the Pioneering Advances in Machine Learning

As we gather around the table this Thanksgiving, it’s the perfect time to reflect on and express gratitude for the remarkable strides made in machine learning (ML) over recent years. These technical innovations have advanced the field and paved the way for countless applications that enhance our daily lives. Let’s check out some of the most influential ML architectures and algorithms for which we are thankful as a community.


1. The Transformer Architecture

Vaswani et al., 2017

We are grateful for the Transformer architecture, which revolutionized sequence modeling by introducing a novel attention mechanism, eliminating the reliance on recurrent neural networks (RNNs) for handling sequential data.

Key Components:

  • Self-Attention Mechanism: Computes representations of the input sequence by relating different positions via attention weights.
    \text{Attention}(Q, K, V) = \text{softmax}\left( \frac{Q K^\top}{\sqrt{d_k}} \right) V
  • Multi-Head Attention: Allows the model to focus on different positions by projecting queries, keys, and values multiple times with different linear projections.
    \text{MultiHead}(Q, K, V) = \text{Concat}(\text{head}_1, ..., \text{head}_h) W^O
    where each head is computed as:
    \text{head}_i = \text{Attention}(Q W_i^Q, K W_i^K, V W_i^V)
  • Positional Encoding: Adds information about the position of tokens in the sequence since the model lacks recurrence.
    \text{PE}_{(pos, 2i)} = \sin\left( \frac{pos}{10000^{2i/d_{\text{model}}}} \right)
    \text{PE}_{(pos, 2i+1)} = \cos\left( \frac{pos}{10000^{2i/d_{\text{model}}}} \right)

Significance: Enabled parallelization in sequence processing, leading to significant speed-ups and improved performance in tasks like machine translation and language modeling.
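
To make the attention formula concrete, here is a minimal NumPy sketch of scaled dot-product attention (illustrative only: real Transformer layers add masking, multiple heads, and learned projections):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # stabilized softmax
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V"""
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-1, -2) / np.sqrt(d_k)
    return softmax(scores) @ V

rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(4, 8))   # 4 tokens, d_k = 8
print(attention(Q, K, V).shape)       # (4, 8)
```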


2. Bidirectional Encoder Representations from Transformers (BERT)

Devlin et al., 2018

We are thankful for BERT, which introduced a method for pre-training deep bidirectional representations by jointly conditioning on both left and right contexts in all layers.

Key Concepts:

  • Masked Language Modeling (MLM): Randomly masks tokens in the input and predicts them using the surrounding context. Loss Function: \mathcal{L}_{\text{MLM}} = -\sum_{t \in \mathcal{M}} \log P_{\theta}(x_t | x_{\backslash \mathcal{M}}) where \mathcal{M} is the set of masked positions.
  • Next Sentence Prediction (NSP): Predicts whether a given pair of sentences follows sequentially in the original text.

Significance: Achieved state-of-the-art results on a wide range of NLP tasks via fine-tuning, demonstrating the power of large-scale pre-training.
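
As a sketch, the MLM loss is simply cross-entropy evaluated only at the masked positions. A minimal NumPy version (toy shapes, with random logits standing in for model outputs):

```python
import numpy as np

def log_softmax(x):
    x = x - x.max(axis=-1, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=-1, keepdims=True))

def mlm_loss(logits, targets, masked_positions):
    """-(1/|M|) * sum_{t in M} log P(x_t | unmasked context)."""
    lp = log_softmax(logits)
    return -np.mean([lp[t, targets[t]] for t in masked_positions])

rng = np.random.default_rng(0)
logits = rng.normal(size=(5, 100))        # (seq_len, vocab_size)
targets = rng.integers(0, 100, size=5)    # original token ids
print(mlm_loss(logits, targets, [1, 3]))  # positions 1 and 3 were masked
```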


3. Generative Pre-trained Transformers (GPT) Series

Radford et al., 2018-2020

We express gratitude for the GPT series, which leverages unsupervised pre-training on large corpora to generate human-like text.

Key Features:

  • Unidirectional Language Modeling: Predicts the next token x_t given previous tokens x_{<t}. Objective Function: \mathcal{L}_{\text{LM}} = -\sum_{t=1}^N \log P_{\theta}(x_t | x_{<t})
  • Decoder-Only Transformer Architecture: Utilizes masked self-attention to prevent the model from attending to future tokens.

Significance: Demonstrated the capability of large language models to perform few-shot learning, adapting to new tasks with minimal task-specific data.
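
The objective above is ordinary next-token cross-entropy; a minimal NumPy sketch (random logits standing in for a decoder-only Transformer):

```python
import numpy as np

def causal_lm_loss(logits, tokens):
    """-(1/N) sum_t log P(x_t | x_<t); logits at position t score token t+1."""
    lp = logits - logits.max(axis=-1, keepdims=True)
    lp = lp - np.log(np.exp(lp).sum(axis=-1, keepdims=True))
    return -np.mean([lp[t, tokens[t + 1]] for t in range(len(tokens) - 1)])

rng = np.random.default_rng(0)
logits = rng.normal(size=(6, 100))      # (seq_len, vocab_size)
tokens = rng.integers(0, 100, size=6)
print(causal_lm_loss(logits, tokens))
```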


4. Variational Autoencoders (VAEs)

Kingma and Welling, 2013

We appreciate VAEs for introducing a probabilistic approach to autoencoders, enabling generative modeling of complex data distributions.

Key Components:

  • Encoder Network: Learns an approximate posterior q_{\phi}(z|x).
  • Decoder Network: Reconstructs the input from latent variables z, modeling p_{\theta}(x|z).

Objective Function (Evidence Lower Bound – ELBO): \mathcal{L}(\theta, \phi; x) = -\text{KL}(q_{\phi}(z|x) \| p_{\theta}(z)) + \mathbb{E}_{q_{\phi}(z|x)}[\log p_{\theta}(x|z)] where p_{\theta}(z) is typically a standard normal prior \mathcal{N}(0, I).

Significance: Provided a framework for unsupervised learning of latent representations and generative modeling.
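
For a Gaussian encoder q_{\phi}(z|x) = \mathcal{N}(\mu, \text{diag}(e^{\log\sigma^2})) and a standard normal prior, the KL term has a closed form, so the negative ELBO can be sketched in a few lines (a toy version assuming a unit-variance Gaussian decoder):

```python
import numpy as np

def negative_elbo(x, x_recon, mu, logvar):
    """KL(N(mu, diag(exp(logvar))) || N(0, I)) + reconstruction term."""
    kl = 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)
    recon = 0.5 * np.sum((x - x_recon) ** 2)   # -log p(x|z) up to constants
    return kl + recon

# Reparameterization trick: sample z = mu + sigma * eps with eps ~ N(0, I)
rng = np.random.default_rng(0)
mu, logvar = np.zeros(2), np.zeros(2)
z = mu + np.exp(0.5 * logvar) * rng.normal(size=2)
```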


5. Generative Adversarial Networks (GANs)

Goodfellow et al., 2014

We are thankful for GANs, which consist of two neural networks—a generator G and a discriminator D—competing in a minimax game.

Objective Function: \min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))] where p_{\text{data}} is the data distribution and p_z is the prior over the latent space.

Significance: Enabled the generation of highly realistic synthetic data, impacting image synthesis, data augmentation, and more.
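
Here is a single adversarial training step sketched in PyTorch (tiny MLPs and a Gaussian stand-in for the data distribution; in practice the generator uses the non-saturating loss noted in the comment rather than the raw minimax form):

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))  # latent -> sample
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))  # sample -> logit
bce = nn.BCEWithLogitsLoss()
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)

real = torch.randn(64, 2) * 0.5 + 2.0   # stand-in "data" distribution
z = torch.randn(64, 8)

# Discriminator step: push D(real) toward 1 and D(fake) toward 0
opt_d.zero_grad()
loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(G(z).detach()), torch.zeros(64, 1))
loss_d.backward(); opt_d.step()

# Generator step (non-saturating trick): maximize log D(G(z))
opt_g.zero_grad()
loss_g = bce(D(G(z)), torch.ones(64, 1))
loss_g.backward(); opt_g.step()
```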


6. Deep Reinforcement Learning

Mnih et al., 2015; Silver et al., 2016

We give thanks for the combination of deep learning with reinforcement learning, leading to agents capable of performing complex tasks.

Key Algorithms:

  • Deep Q-Networks (DQN): Approximate the action-value function Q(s, a; \theta) using neural networks. Bellman Equation: Q(s, a) = r + \gamma \max_{a'} Q(s', a'; \theta^{-}) where \theta^{-} are the parameters of a target network.
  • Policy Gradient Methods: Optimize the policy \pi_{\theta}(a|s) directly. REINFORCE Algorithm Objective: \nabla_{\theta} J(\theta) = \mathbb{E}_{\pi_{\theta}} \left[ \nabla_{\theta} \log \pi_{\theta}(a|s) R \right] where R is the cumulative reward.

Significance: Achieved human-level performance in games like Atari and Go, advancing AI in decision-making tasks.
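
The Bellman target translates directly to code; in this sketch, `q_target_net` is any callable returning Q-values for all actions in the next state (a placeholder for the frozen target network with parameters \theta^{-}):

```python
import numpy as np

def dqn_target(r, s_next, q_target_net, gamma=0.99, done=False):
    """Bellman target: r + gamma * max_a' Q(s', a'; theta^-); no bootstrap at terminal states."""
    return r if done else r + gamma * np.max(q_target_net(s_next))

q_net = lambda s: np.array([0.2, 1.3, -0.5])   # toy Q-values, independent of s
print(dqn_target(1.0, None, q_net))            # 1.0 + 0.99 * 1.3
```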


7. Normalization Techniques

We are grateful for normalization techniques that have improved training stability and performance of deep networks.

  • Batch Normalization (Ioffe and Szegedy, 2015) Formula: \hat{x}_i = \frac{x_i - \mu_{\mathcal{B}}}{\sqrt{\sigma_{\mathcal{B}}^2 + \epsilon}} where \mu_{\mathcal{B}} and \sigma_{\mathcal{B}}^2 are the batch mean and variance.
  • Layer Normalization (Ba et al., 2016) Formula: \hat{x}_i = \frac{x_i - \mu}{\sqrt{\sigma^2 + \epsilon}} where \mu and \sigma^2 are computed over the features of a single sample.

Significance: Mitigated internal covariate shift, enabling faster and more reliable training.
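
The two formulas differ only in the axis over which the statistics are computed. A NumPy sketch (the learnable scale \gamma and shift \beta are omitted for brevity):

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    """Normalize each feature over the batch axis (axis 0)."""
    mu, var = x.mean(axis=0), x.var(axis=0)
    return (x - mu) / np.sqrt(var + eps)

def layer_norm(x, eps=1e-5):
    """Normalize each sample over its feature axis (axis -1)."""
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

x = np.random.default_rng(0).normal(size=(16, 8))   # (batch, features)
print(batch_norm(x).shape, layer_norm(x).shape)
```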


8. Attention Mechanisms in Neural Networks

Bahdanau et al., 2014; Luong et al., 2015

We appreciate attention mechanisms for allowing models to focus on specific parts of the input when generating each output element.

Key Concepts:

  • Alignment Scores: Compute the relevance between encoder hidden states h_{\text{enc}} and decoder state s_{\text{dec}}. Common Score Functions:
    • Dot-product: \text{score}(h, s) = h^\top s
    • Additive (Bahdanau attention): \text{score}(h, s) = v_a^\top \tanh(W_a [h; s])
  • Context Vector: c_t = \sum_{i=1}^T \alpha_{t,i} h_i where the attention weights \alpha_{t,i} are computed as: \alpha_{t,i} = \frac{\exp(\text{score}(h_i, s_{t-1}))}{\sum_{k=1}^T \exp(\text{score}(h_k, s_{t-1}))}

Significance: Enhanced performance in sequence-to-sequence tasks by allowing models to utilize information from all input positions.


9. Graph Neural Networks (GNNs)

Scarselli et al., 2009; Kipf and Welling, 2016

We are thankful for GNNs, which extend neural networks to graph-structured data, enabling the modeling of relational information.

Message Passing Framework:

  • Node Representation Update: h_v^{(k)} = \sigma \left( \sum_{u \in \mathcal{N}(v)} W h_u^{(k-1)} + W_0 h_v^{(k-1)} \right) where:
    • h_v^{(k)} is the representation of node v at layer k.
    • \mathcal{N}(v) is the set of neighbors of node v.
    • W and W_0 are learnable weight matrices.
    • \sigma is a nonlinear activation function.
  • Graph Convolutional Networks (GCNs): H^{(k+1)} = \sigma \left( \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2} H^{(k)} W^{(k)} \right) where:
    • \tilde{A} = A + I is the adjacency matrix with added self-loops.
    • \tilde{D} is the degree matrix of \tilde{A}.

Significance: Enabled advancements in social network analysis, molecular chemistry, and recommendation systems.
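
A single GCN layer is a few lines of NumPy; the sketch below applies the symmetric-normalized propagation rule to a toy 3-node path graph:

```python
import numpy as np

def gcn_layer(A, H, W):
    """H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W)."""
    A_tilde = A + np.eye(A.shape[0])        # add self-loops
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(d ** -0.5)
    return np.maximum(D_inv_sqrt @ A_tilde @ D_inv_sqrt @ H @ W, 0.0)

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)  # 3-node path graph
H = np.random.default_rng(0).normal(size=(3, 4))        # node features
W = np.random.default_rng(1).normal(size=(4, 2))        # learnable weights
print(gcn_layer(A, H, W).shape)                         # (3, 2)
```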


10. Self-Supervised Learning and Contrastive Learning

He et al., 2020; Chen et al., 2020

We are grateful for self-supervised learning techniques that leverage unlabeled data by creating surrogate tasks.

Contrastive Learning Objective:

  • InfoNCE Loss: \mathcal{L}_{i,j} = -\log \frac{\exp(\text{sim}(z_i, z_j)/\tau)}{\sum_{k=1}^{2N} \textbf{1}_{[k \neq i]} \exp(\text{sim}(z_i, z_k)/\tau)} where:
    • z_i and z_j are representations of two augmented views of the same sample.
    • \text{sim}(u, v) = \frac{u^\top v}{\|u\| \|v\|} is the cosine similarity.
    • \tau is a temperature parameter.
    • \textbf{1}_{[k \neq i]} is an indicator function equal to 1 when k \neq i.

Significance: Improved representation learning, leading to state-of-the-art results in computer vision tasks without requiring labeled data.
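
A compact NumPy sketch of the InfoNCE loss, assuming rows 2k and 2k+1 of z hold the two augmented views of sample k (a common SimCLR-style layout):

```python
import numpy as np

def info_nce(z, tau=0.5):
    """z: (2N, d); each consecutive pair of rows is two views of one sample."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)    # cosine sim via dot product
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)                      # enforce the k != i indicator
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    n2 = len(z)
    pos = [i + 1 if i % 2 == 0 else i - 1 for i in range(n2)]  # paired view index
    return -np.mean(log_prob[np.arange(n2), pos])

z = np.random.default_rng(0).normal(size=(8, 16))       # N = 4 samples, 2 views each
print(info_nce(z))
```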


11. Differential Privacy in Machine Learning

Abadi et al., 2016

We give thanks for techniques that allow training models while preserving the privacy of individual data points.

Differential Privacy Guarantee:

  • Definition: A randomized algorithm \mathcal{A} provides (\epsilon, \delta)-differential privacy if for all datasets D and D' differing on one element, and all measurable subsets S: P[\mathcal{A}(D) \in S] \leq e^\epsilon P[\mathcal{A}(D') \in S] + \delta
  • Noise Addition: Applies calibrated noise to gradients during training to ensure privacy.

Significance: Enabled the deployment of machine learning models in privacy-sensitive applications.


12. Federated Learning

McMahan et al., 2017

We are thankful for federated learning, which allows training models across multiple decentralized devices while keeping data localized.

Federated Averaging Algorithm:

  1. Local Update: Each client k updates model parameters \theta using local data D_k: \theta_k^{t+1} = \theta^t - \eta \nabla_{\theta} \mathcal{L}(\theta^t; D_k)
  2. Global Aggregation: The server aggregates updates: \theta^{t+1} = \sum_{k=1}^K \frac{n_k}{n} \theta_k^{t+1} where:
    • n_k is the number of samples at client k.
    • n = \sum_{k=1}^K n_k is the total number of samples across all clients.

Significance: Addressed privacy concerns and bandwidth limitations in distributed systems.
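
The aggregation step itself is just a weighted average; a minimal sketch with three hypothetical clients:

```python
import numpy as np

def fed_avg(client_params, client_sizes):
    """Weighted average of client updates: theta = sum_k (n_k / n) * theta_k."""
    n = sum(client_sizes)
    return sum((n_k / n) * theta_k
               for theta_k, n_k in zip(client_params, client_sizes))

# Three clients with different amounts of local data
params = [np.array([1.0, 2.0]), np.array([2.0, 0.0]), np.array([0.0, 4.0])]
sizes = [10, 30, 60]
print(fed_avg(params, sizes))   # clients with more data weigh more
```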


13. Neural Architecture Search (NAS)

Zoph and Le, 2016

We appreciate NAS for automating the design of neural network architectures using optimization algorithms.

Approaches:

  • Reinforcement Learning-Based NAS: Uses an RNN controller to generate architectures, trained to maximize expected validation accuracy.
  • Differentiable NAS (DARTS): Models the architecture search space as continuous, enabling gradient-based optimization. Objective Function: \min_{\alpha} \mathcal{L}_{\text{val}}(w^*(\alpha), \alpha) where w^*(\alpha) is obtained by: w^*(\alpha) = \arg\min_w \mathcal{L}_{\text{train}}(w, \alpha)

Significance: Reduced human effort in designing architectures, leading to efficient and high-performing models.


14. Optimizer Advancements (Adam, AdaBound, RAdam)

We are thankful for advancements in optimization algorithms that improved training efficiency.

  • Adam Optimizer (Kingma and Ba, 2014)
    Update Rules: m_t = \beta_1 m_{t-1} + (1 - \beta_1) g_t, v_t = \beta_2 v_{t-1} + (1 - \beta_2) g_t^2, \hat{m}_t = \frac{m_t}{1 - \beta_1^t}, \hat{v}_t = \frac{v_t}{1 - \beta_2^t}, \theta_{t+1} = \theta_t - \eta \frac{\hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon}
    where:
    • g_t is the gradient at time step t.
    • \beta_1 and \beta_2 are hyperparameters controlling the exponential decay rates.
    • \eta is the learning rate.
    • \epsilon is a small constant to prevent division by zero.

Significance: Improved optimization efficiency and convergence in training deep neural networks.
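
The update rules map directly to code; one Adam step in NumPy:

```python
import numpy as np

def adam_step(theta, g, m, v, t, eta=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update; returns new parameters and updated moment estimates."""
    m = b1 * m + (1 - b1) * g        # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * g**2     # second-moment (uncentered variance) estimate
    m_hat = m / (1 - b1**t)          # bias correction
    v_hat = v / (1 - b2**t)
    theta = theta - eta * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

theta, m, v = np.zeros(3), np.zeros(3), np.zeros(3)
g = np.array([0.1, -0.2, 0.3])       # gradient at step t = 1
theta, m, v = adam_step(theta, g, m, v, t=1)
```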


15. Diffusion Models for Generative Modeling

Ho et al., 2020; Song et al., 2020

We give thanks for diffusion models, which are generative models that learn data distributions by reversing a diffusion (noising) process.

Key Concepts:

  • Forward Diffusion Process: Gradually adds Gaussian noise to data over T timesteps.
    Noising Schedule: q(x_t | x_{t-1}) = \mathcal{N}(x_t; \sqrt{1 - \beta_t} x_{t-1}, \beta_t I)
  • Reverse Process: Learns to denoise from x_T back to x_0.
    Objective Function: \mathcal{L}_{\text{simple}} = \mathbb{E}_{t, x_0, \epsilon} \left[ \| \epsilon - \epsilon_\theta(x_t, t) \|^2 \right] where:
    • \epsilon is the noise added to the data.
    • \epsilon_\theta(x_t, t) is the model’s prediction of the noise at timestep t.

Significance: Achieved state-of-the-art results in image generation, rivaling GANs without their training instability.
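
A useful identity is that x_t can be sampled in closed form from x_0: x_t = \sqrt{\bar{\alpha}_t}\, x_0 + \sqrt{1 - \bar{\alpha}_t}\, \epsilon, where \bar{\alpha}_t is the cumulative product of (1 - \beta_t). A NumPy sketch of the forward process (the network \epsilon_\theta would be trained to recover \epsilon from x_t and t):

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng):
    """Sample x_t = sqrt(alpha_bar_t) x_0 + sqrt(1 - alpha_bar_t) eps directly."""
    alpha_bar = np.prod(1.0 - betas[: t + 1])
    eps = rng.normal(size=x0.shape)
    x_t = np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps
    return x_t, eps   # the model learns to predict eps from (x_t, t)

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)   # linear noising schedule
x0 = rng.normal(size=(8,))
x_t, eps = forward_diffuse(x0, t=500, betas=betas, rng=rng)
```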


Give Thanks…

This Thanksgiving, let’s celebrate and express our gratitude for these groundbreaking contributions to machine learning. These technical advancements have not only pushed the boundaries of what’s possible but have also laid the foundation for future innovations that will continue to shape our world.

May we continue to build upon these foundations and contribute to the growing field of machine learning.

References

  • Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). Attention is All You Need. Advances in Neural Information Processing Systems. arXiv:1706.03762
  • Devlin, J., Chang, M. W., Lee, K., & Toutanova, K. (2018). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv:1810.04805
  • Radford, A., Narasimhan, K., Salimans, T., & Sutskever, I. (2018). Improving Language Understanding by Generative Pre-training. OpenAI Blog.
  • Kingma, D. P., & Welling, M. (2013). Auto-Encoding Variational Bayes. arXiv:1312.6114
  • Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., & Bengio, Y. (2014). Generative Adversarial Nets. Advances in Neural Information Processing Systems.
  • Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A., Riedmiller, M., Fidjeland, A. K., Ostrovski, G., Petersen, S., Beattie, C., Sadik, A., Antonoglou, I., King, H., Kumaran, D., Wierstra, D., Legg, S., & Hassabis, D. (2015). Human-level Control through Deep Reinforcement Learning. Nature.
  • Ioffe, S., & Szegedy, C. (2015). Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. International Conference on Machine Learning (ICML).
  • Ba, J. L., Kiros, J. R., & Hinton, G. E. (2016). Layer Normalization. arXiv:1607.06450
  • Bahdanau, D., Cho, K., & Bengio, Y. (2014). Neural Machine Translation by Jointly Learning to Align and Translate. arXiv:1409.0473
  • Kipf, T. N., & Welling, M. (2016). Semi-Supervised Classification with Graph Convolutional Networks. arXiv:1609.02907
  • He, K., Fan, H., Wu, Y., Xie, S., & Girshick, R. (2020). Momentum Contrast for Unsupervised Visual Representation Learning. arXiv:1911.05722
  • Abadi, M., Chu, A., Goodfellow, I., McMahan, H. B., Mironov, I., Talwar, K., & Zhang, L. (2016). Deep Learning with Differential Privacy. ACM SIGSAC Conference on Computer and Communications Security.
  • McMahan, B., Moore, E., Ramage, D., Hampson, S., & y Arcas, B. A. (2017). Communication-Efficient Learning of Deep Networks from Decentralized Data. arXiv:1602.05629
  • Zoph, B., & Le, Q. V. (2016). Neural Architecture Search with Reinforcement Learning. arXiv:1611.01578
  • Kingma, D. P., & Ba, J. (2014). Adam: A Method for Stochastic Optimization. arXiv:1412.6980
  • Ho, J., Jain, A., & Abbeel, P. (2020). Denoising Diffusion Probabilistic Models. arXiv:2006.11239

DiPol-GAN: A Trailblazer in Molecular Graph Generation

Five years ago, a groundbreaking paper introduced DiPol-GAN, a generative adversarial network (GAN) designed to create molecular graphs with specific chemical properties. Authored by Michael Guarino et al. in 2019, this work remains a testament to the ingenuity at the intersection of machine learning and computational chemistry. Let’s dive into its contributions and why it continues to be influential.


From Molecules to Meaning: DiPol-GAN’s Key Innovations

DiPol-GAN broke new ground by addressing the complexities of molecular graph generation—a task requiring precision and adherence to chemical constraints. Here are the standout elements of this innovative approach:

  1. Direct Graph Representations
    While many models relied on SMILES strings to describe molecular structures, DiPol-GAN embraced the inherent richness of graph representations. Nodes represented atoms, edges captured bonds, and the graph structure preserved the relational nature of molecules—key for meaningful chemical property optimization.
  2. Hierarchical Reasoning with DIFFPOOL
    The paper introduced the use of Differentiable Pooling (DIFFPOOL) in the GAN discriminator. DIFFPOOL hierarchically aggregated graph nodes, allowing the model to extract high-level features and improve classification performance. Moreover, the authors adapted DIFFPOOL to handle multi-relational graphs, capturing the nuances of bond types—an essential feature for molecular modeling.
  3. Reinforcement Learning for Targeted Molecules
    DiPol-GAN incorporated a policy network to nudge the generative process toward molecules with desired properties, like solubility (logP) or drug-likeness. This clever integration of reinforcement learning allowed the model to focus on chemically relevant outcomes, setting a precedent for property-driven molecular design.

Five Years Later: Why DiPol-GAN Still Resonates

Even as research in graph neural networks and molecular AI progresses, DiPol-GAN’s contributions remain strikingly relevant:

  • Raising the Bar for Molecular GANs: By addressing the dual challenges of graph isomorphism and multi-relational edges, this model set a high standard for graph-based GANs.
  • Chemical Property Alignment: The integration of reinforcement learning into graph generation directly inspired modern approaches to property-targeted molecule design.
  • Benchmark Metrics: The study’s rigorous evaluation on validity, uniqueness, and novelty using the QM9 dataset provided benchmarks that still guide research today.

Continuing the Journey

DiPol-GAN’s legacy reminds us of the powerful synergy between machine learning and chemistry. Whether you’re exploring novel graph neural network architectures or advancing property-driven molecule generation, this paper offers invaluable insights.

For those working in related domains, revisiting this milestone study could spark new ideas and breakthroughs. Be sure to include it in your references to honor its influence and acknowledge the innovation it brought to the field:

Guarino, M., Shah, A., & Rivas, P. (2019). DiPol-GAN: Generating Molecular Graphs Adversarially with Relational Differentiable Pooling. Presented at the LXAI Workshop @ Neural Information Processing Systems Conference (NeurIPS), 9 pp. [download]

Let’s keep building on this foundation, advancing science one molecule at a time.

Resilient AI: Advancing Robustness Against Adversarial Threats with D-ReLU

Artificial intelligence (AI) is now embedded in everyday life, from self-driving cars to medical diagnostic tools, enabling tasks to be performed faster and, in some cases, more accurately than humans. However, this rapid advancement comes with significant challenges, particularly in the form of adversarial attacks. These attacks exploit small, often imperceptible changes in input data to deceive AI systems into making incorrect decisions. For example, a strategically placed sticker on a stop sign might cause an AI-powered car to misinterpret it as a speed limit sign, creating potentially dangerous situations; likewise, small perturbations added to your dog’s picture can lead a state-of-the-art AI model to confuse it with a cat.

The Role of ReLU and Its Limitations

The Rectified Linear Unit (ReLU) activation function is a foundational component of many AI models. Its simplicity and efficiency have made it a go-to choice for training deep learning networks. However, ReLU’s unrestricted output can make models vulnerable to adversarial noise, leading to cascading errors in predictions. Attempts to address this vulnerability, such as Static-Max-Value ReLU (S-ReLU or capped ReLU), have introduced fixed output caps, but these solutions often underperform on more complex datasets and tasks.

Introducing D-ReLU

D-ReLU represents a significant advancement over traditional ReLU. It incorporates a dynamic output cap that adjusts based on the data flowing through the network. This adaptability serves as a robust defense mechanism against adversarial inputs while maintaining computational efficiency. In essence, D-ReLU acts as a self-adjusting safeguard, preserving model integrity even under duress.
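
As a rough illustration of the idea (not the authors’ exact formulation; see the paper for the precise definition of D-ReLU), a ReLU with a learnable output cap might look like this in PyTorch:

```python
import torch
import torch.nn as nn

class DynamicCappedReLU(nn.Module):
    """Illustrative sketch: a ReLU whose maximum output is a learnable parameter.
    The exact D-ReLU formulation is given in Sooksatra & Rivas (2024)."""
    def __init__(self, init_cap: float = 6.0):
        super().__init__()
        self.cap = nn.Parameter(torch.tensor(init_cap))  # adjusts during training

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # clamp activations into [0, cap]; relu on the cap keeps the bound nonnegative
        return torch.minimum(torch.relu(x), torch.relu(self.cap))

act = DynamicCappedReLU()
print(act(torch.tensor([-1.0, 3.0, 10.0])))   # large activations are capped
```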

Key Features of D-ReLU:

  1. Adaptive Output Limits: D-ReLU employs learnable caps that evolve during training, enabling models to balance robustness and accuracy effectively.
  2. Enhanced Resilience: D-ReLU has demonstrated superior performance against adversarial attacks, including FGSM, PGD, and Carlini-Wagner, while maintaining consistent performance on standard datasets.
  3. Scalability: Tested on large-scale datasets like CIFAR-10, CIFAR-100, and TinyImagenet, D-ReLU has proven its ability to scale effectively without degradation in performance.
  4. Efficient Training: Unlike adversarial training methods, which require extensive additional computations, D-ReLU achieves robustness naturally, streamlining the training process.
  5. Real-World Viability: D-ReLU excels in real-world scenarios, including black-box attack settings where attackers lack full knowledge of the model.

The Broader Implications

In applications where reliability and safety are paramount—such as autonomous vehicles, financial systems, and medical imaging—D-ReLU offers a compelling solution to the challenges posed by adversarial inputs. By enhancing a model’s resilience without sacrificing performance, D-ReLU provides a vital upgrade for AI systems operating in high-stakes environments.

Future Directions

The potential of D-ReLU extends beyond current implementations. Areas of exploration include:

  • Further optimization for improved performance,
  • Applications in natural language processing and audio tasks,
  • Integration with complementary robust training methods for enhanced results.

For a detailed analysis and technical insights, download our paper here. If you are working on AI models, we encourage you to experiment with D-ReLU and share your experiences:

Sooksatra, Korn, and Pablo Rivas. 2024. “Dynamic-Max-Value ReLU Functions for Adversarially Robust Machine Learning Models” Mathematics 12, no. 22: 3551. https://doi.org/10.3390/math12223551

About the Author

Korn Sooksatra is a Ph.D. student at Baylor University, specializing in adversarial machine learning and AI robustness.

Learning Robust Observable to Address Noise in Quantum Machine Learning

In the rapidly evolving field of Quantum Machine Learning (QML), one of the most pressing challenges is handling noise—the errors that naturally arise in quantum systems, particularly in the Noisy Intermediate-Scale Quantum (NISQ) era. But what if we could teach quantum systems to “learn” and address noise head-on? Our paper “Learning Robust Observable to Address Noise in Quantum Machine Learning” explores an approach to mitigating this issue by focusing on learning robust observables. These observables can withstand the effects of noise, improving the performance of QML models in noisy environments.

Understanding the Problem of Noise in QML

In quantum systems, noise comes from imperfections in quantum gates, interactions with the environment, and decoherence—making quantum computations highly error-prone. When applying QML, this noise leads to inaccuracies in predictions and model training. This research aims to identify observables that remain invariant or change minimally even in the presence of noise, thus offering more reliable outputs from quantum systems.

The Framework: Learning Robust Observables

We propose a machine learning-based framework to find observables that are inherently resistant to various types of noise. The model learns from the behavior of quantum states passing through noisy channels and adjusts its parameters to find robust observables that maintain their integrity despite the noise. We illustrate the problem using a Bell state (a well-known quantum state), subjecting it to a depolarization channel to simulate noise.

The process can be formalized as an optimization problem where the goal is to minimize the change in the expectation value of the observable when the quantum state is subject to noise. Mathematically, this can be expressed as minimizing:

    \[\min_{\mathcal{O}_n} \mathbb{E}\left[\, \left| \langle \psi | \mathcal{O} | \psi \rangle - \langle \psi | \mathcal{O}_n | \psi \rangle \right| \,\right]\]

Here, \mathcal{O} is a Pauli-Z observable, and \mathcal{O}_n is an observable we are trying to learn. The expectation value is computed before and after noise is introduced. The goal is to find an observable that minimizes this difference, effectively learning a robust observable.
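
As a toy sketch of this objective (with simplifying assumptions: a real-valued parametrized observable, a single global depolarizing channel, and density-matrix expectations \mathrm{Tr}(\mathcal{O}\rho) in place of the bra-ket notation above):

```python
import numpy as np

# Bell state |phi+> = (|00> + |11>)/sqrt(2) as a density matrix
psi = np.zeros(4); psi[0] = psi[3] = 1 / np.sqrt(2)
rho = np.outer(psi, psi)

Z = np.array([[1, 0], [0, -1]], dtype=float)
O_ref = np.kron(Z, Z)                      # reference Pauli-Z (x) Pauli-Z observable

def depolarize(rho, p):
    """Two-qubit depolarizing channel: rho -> (1 - p) rho + p I/4."""
    return (1 - p) * rho + p * np.eye(4) / 4

def hermitian(params):
    """Build a real symmetric (hence Hermitian) candidate observable O_n."""
    M = params.reshape(4, 4)
    return (M + M.T) / 2

def robustness_gap(params, p=0.3):
    """|Tr(O_ref rho) - Tr(O_n N(rho))|: the quantity minimized over O_n."""
    O_n = hermitian(params)
    return abs(np.trace(O_ref @ rho) - np.trace(O_n @ depolarize(rho, p)))

print(robustness_gap(np.random.default_rng(0).normal(size=16)))
```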

A Toy Example

In our framework, we train QML models by simulating quantum systems across different noise channels, including depolarization, amplitude damping, phase damping, bit flip, and phase flip channels. The objective is to learn observables for various quantum circuits—such as Bell state circuits, Quantum Fourier Transform circuits, and highly entangled random circuits—that can remain robust across different noise levels. The framework demonstrated that it could identify an observable that better retains the state’s properties under noisy conditions, proving that robust observables can be learned effectively.

 Consider the following example:

(1)   \begin{equation*} O_{optimized} = \begin{pmatrix} 0.804 & 0.086 + 0.138i & 0.739 + 0.050i & 0.070 + 0.132i\\ 0.086 - 0.138i & 0.302 & 0.087 - 0.122i & 0.277 + 0.019i \\0.739 - 0.050i & 0.087 + 0.122i & 1.253 & 0.133 + 0.215i \\ 0.070 - 0.132i & 0.277 - 0.019i & 0.133 - 0.215i & 0.470\end{pmatrix}\end{equation*}

We computed its expectation value for Bell states under varying degrees of depolarization, p \in [0,1). The expectation values of the observable O_{optimized} on the depolarized Bell state as a function of the depolarization rate p are plotted in the following figure.

In this figure, Z is the Pauli-Z matrix, X is the Pauli-X matrix, H is the Hadamard gate, A is an arbitrary observable, and O_{optimized} is a learned single qubit Hermitian measurement operator. This toy example shows that the expectation value of the custom observable O_{optimized} on the depolarized Bell state remains constant as the depolarization rate p increases.

Key Findings

  • Custom observables designed through this method demonstrated remarkable stability against noise, especially when compared to traditional observables like Pauli matrices.
  • In noisy channels like depolarization, the learned observables maintained a more consistent expectation value, while traditional observables exhibited greater variance.
  • The approach can be applied to various types of quantum circuits, making it versatile and broadly applicable in enhancing the reliability of QML models.

Implications for Quantum Machine Learning

This study offers a promising avenue for improving the accuracy and stability of QML in real-world applications. By learning robust observables, QML systems can perform more reliably, even as we contend with the inherent noise in current quantum computers. By using learned observables, the performance of quantum machine learning models can be made more stable, even when operating in the inherently noisy NISQ regime. This has implications for advancing practical applications of quantum computing, especially as we seek to scale up quantum algorithms in the near-term.

Looking Ahead: The Future of Noise-Resistant QML

The results from this paper open up exciting possibilities for future work. Imagine a future where every quantum machine learning algorithm can autonomously adjust to different noisy environments by learning which observables to trust. This would make QML models more resilient and, ultimately, more practical for real-world applications.

One immediate future direction is testing the framework on larger systems and more complex noise models. Additionally, combining this method with error correction techniques could further enhance the stability of QML algorithms.

For a detailed exploration of the methodology and findings, read the full paper at:

https://arxiv.org/pdf/2409.07632

References

  • Khanal, Bikram, and Pablo Rivas. “Learning Robust Observable to Address Noise in Quantum Machine Learning.” arXiv preprint arXiv:2409.07632 (2024).

About the Author

Bikram Khanal is a Ph.D. student at Baylor University, specializing in Quantum Machine Learning and Natural Language Processing.

Efficient Quantum Machine Learning with a Modified Depolarization Approach

As the quantum computing community navigates the NISQ (Noisy Intermediate-Scale Quantum) era, managing noise remains a prominent challenge, particularly in Quantum Machine Learning (QML). Quantum systems inherently exhibit noise, which can drastically impact computational accuracy. Notably, depolarization noise, a prevalent noise model in quantum computing, presents a formidable obstacle to developing efficient QML models. The paper “A Modified Depolarization Approach for Efficient Quantum Machine Learning” introduces a modified representation of the depolarization channel for a single qubit. The proposed channel uses two Kraus operators based only on the X and Z Pauli matrices, reducing the computational complexity from six to four matrix multiplications per channel execution.

What’s Depolarization, and Why Does It Matter?

Depolarization is a noise process where a quantum state collapses, with some probability, into a mixed state, essentially scrambling the information. For example, imagine working with a quantum bit (qubit) represented by a density matrix \rho. In the traditional depolarization model, noise can be introduced by applying the three Pauli matrices — X, Y, and Z to \rho with equal probability. Mathematically, this looks like:

(1)   \begin{equation*} \rho \rightarrow (1 - p) \rho + \frac{p}{3} (X \rho X + Y \rho Y + Z \rho Z)\end{equation*}


where p is the probability of depolarization. Each of the Pauli operators represents a potential disturbance to the qubit. The more noise we apply, the more the system deteriorates. However, simulating this noise is computationally expensive, especially in large quantum systems, as it requires a substantial number of matrix multiplications. This is where the paper’s novel contribution shines.

The Power of Modified Depolarization Channel and Two Kraus Operators

The central innovation of this paper is an alternative representation of the depolarization channel characterized by reduced matrix multiplication operations that only use the X and Z Pauli matrices.

(2)   \begin{equation*} \rho_{m}' = (1 - \frac{2p}{3}) \rho + \frac{2p}{3} Z((\rho X)^T X) Z\end{equation*}


Traditionally, simulating depolarization applies a Kraus operator for each of the three Pauli matrices, which amounts to six matrix multiplications per qubit per step (two for each Pauli term), a cost that scales rapidly with the size of the system. The modified depolarization approach in the paper reduces the number of noise-related Kraus operators to two by cleverly using only the X and Z Pauli matrices, allowing for more efficient simulation without significantly compromising the accuracy of the noise model. The two Kraus operators are defined as:

    \[\begin{array}{cc}K_0 = \sqrt{1 -\frac{2p}{3}} \mathbb{I}, &K_1 = i \sqrt{\frac{2p}{3}} ZX .\end{array}\]


The authors provide a meticulous proof of the validity of the proposed modified expression and its Kraus operators. This seemingly small change reduces the number of required matrix multiplications from six to four, a non-trivial improvement in computational cost. This reduction becomes especially significant as quantum circuits grow deeper and larger—common in QML algorithms, where we often have to run complex iterative procedures.
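
A quick NumPy sanity check of the two Kraus operators above (a sketch, not the paper’s PennyLane implementation): the channel should be trace-preserving, since K_0^{\dagger} K_0 + K_1^{\dagger} K_1 = \mathbb{I}.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def modified_depolarization(rho, p):
    """Apply the two-Kraus-operator channel: K0 rho K0^† + K1 rho K1^†."""
    K0 = np.sqrt(1 - 2 * p / 3) * I2
    K1 = 1j * np.sqrt(2 * p / 3) * (Z @ X)
    return K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T

rho = np.array([[1, 0], [0, 0]], dtype=complex)     # |0><0|
out = modified_depolarization(rho, p=0.3)
print(np.isclose(np.trace(out).real, 1.0))          # True: trace preserved
```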

Experimenting with Quantum Machine Learning on the Iris Dataset

To validate their approach, the authors experimented with a well-known machine learning problem: classifying the Iris dataset by training a variational quantum circuit under the modified depolarization noise channel. Their results verify that the modified channel accurately represents the channel’s evolution for different values of p, consistent with simulation results. Once the Iris dataset was encoded into quantum states, the authors trained the QML model under noisy conditions using the modified depolarization method, implementing the channel efficiently with the PennyLane library.

The findings were fascinating: the computational load was reduced by 1.5 to 2 times compared to the traditional depolarization method, while classification accuracy remained comparable. This is a big deal for QML. Efficiency in quantum simulations is crucial—especially given the already limited coherence times and high noise levels of NISQ devices. Reducing computational cost allows for quicker experimentation and larger models, accelerating the development of quantum machine learning algorithms. The following figure provides the decision boundaries for the readers’ reference; we encourage readers to refer to the original manuscript for an in-depth analysis.

An increase in circuit depth may enhance the model’s expressivity, but it also increases its vulnerability to noise, which adversely affects the quality of the decision boundary.

Why This Matters for the NISQ Era

We’re still far from having fault-tolerant quantum computers (notwithstanding Google’s latest advances) that can operate indefinitely without errors. For now, we must work with what we’ve got: noisy, small- to mid-scale quantum devices. This means any improvement in the efficiency of noise simulation or error mitigation has a direct and significant impact on the feasibility of using quantum systems for practical problems.

The reduction in computational overhead offered by this modified depolarization approach is particularly relevant for QML, where deep quantum circuits and iterative optimization processes require substantial computational resources. This is a step toward making QML more scalable and closer to real-world applications, even within the limitations of today’s quantum technology.

Looking Ahead

As quantum hardware continues to evolve, so too will the need for more efficient noise models and error mitigation techniques. The modified depolarization approach presented in this paper offers a glimpse into how we can make QML more computationally feasible. While the improvement in noise simulation efficiency may seem small, these incremental advancements will enable the quantum systems of the future to handle more complex and meaningful tasks.

I’m excited to see how this approach will be applied to larger quantum systems and more complex QML models. The road to fully realizing quantum machine learning’s potential is long, but innovations like this bring us one step closer.

For a detailed exploration of the methodology and findings, read the full paper at: https://www.mdpi.com/2227-7390/12/9/1385

References

  • Khanal, Bikram, and Pablo Rivas. “A Modified Depolarization Approach for Efficient Quantum Machine Learning.” Mathematics 12.9 (2024): 1385.

About the Author

Bikram Khanal is a Ph.D. student at Baylor University, specializing in Quantum Machine Learning and Natural Language Processing.

Enhancing AI Safety: Improving Adversarial Robustness in Vision Language Models

The Research Question

How can we improve the adversarial robustness of Vision Language Models (VLMs) to ensure their safe deployment in critical applications? This question drives our exploration into focused adversarial training techniques that improve the security of these models without excessive computational costs.

Adversarial Robustness and AI Safety

Adversarial attacks involve subtle manipulations of input data designed to deceive machine learning models into making incorrect predictions. In the context of VLMs, these attacks can have severe implications, especially when these models are deployed in sensitive areas such as autonomous driving, healthcare, and content moderation.

Enhancing the adversarial robustness of VLMs is crucial for AI safety. Robust models can withstand adversarial inputs, ensuring reliable performance and preventing malicious exploitation. Our research focuses on a novel approach to achieve this robustness by selectively re-training components of the multimodal architecture.

Our Approach

Traditional methods to improve model robustness often involve adversarial training, which integrates adversarial examples into the training process. However, this can be computationally intensive, particularly for complex models like VLMs that process images and text.

Our study introduces a more efficient strategy: adversarially re-training only the language model component of the VLM. This targeted approach leverages the Fast Gradient Sign Method (FGSM) to generate adversarial examples and incorporates them into the training of the text decoder. We maintain computational efficiency by keeping the image encoder fixed while significantly enhancing the model’s overall robustness.
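
For reference, FGSM itself is a one-step attack; a generic PyTorch sketch is below (here `model` and `loss_fn` are placeholders, not the paper’s captioning pipeline, and `eps` is the perturbation budget):

```python
import torch

def fgsm_perturb(model, loss_fn, images, targets, eps=2 / 255):
    """FGSM: x_adv = clip(x + eps * sign(grad_x L(model(x), y)))."""
    images = images.clone().detach().requires_grad_(True)
    loss = loss_fn(model(images), targets)
    loss.backward()
    x_adv = images + eps * images.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()   # keep pixels in the valid range
```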

Key Findings

  1. Adversarial Training Efficiency: Adversarially re-training only the language model yields robustness comparable to full adversarial training, with reduced computational demands.
  2. Selective Training Impact: Freezing the image encoder and focusing on the text decoder maintains high performance and robustness. In contrast, training only the image encoder results in a significant performance drop.
  3. Benchmark Results: Experiments on the Flickr8k and COCO datasets demonstrate that our selective adversarial training approach effectively mitigates the impact of adversarial attacks, as evidenced by improved BLEU scores and model performance under adversarial conditions.

Implications for Ethical AI

Our findings support the development of more robust and secure AI systems, which is crucial for ethical AI deployment. By focusing on adversarial robustness, we contribute to the broader goal of AI safety, ensuring that multimodal models can be trusted in real-world applications.

For a detailed exploration of our methodology and findings, read the full paper pre-print: https://arxiv.org/abs/2407.21174

References

  • Rashid, M.B., & Rivas, P. (2024). AI Safety in Practice: Enhancing Adversarial Robustness in Multimodal Image Captioning. 3rd Workshop on Ethical Artificial Intelligence: Methods and Applications, ACM SIGKDD’24. https://arxiv.org/abs/2407.21174

About the Author

Maisha Binte Rashid is a Ph.D. student at Baylor University, specializing in AI safety and multimodal machine learning.

Standard IEEE 7014


Sharing that IEEE 7014-2024: IEEE Standard for Ethical Considerations in Emulated Empathy in Autonomous and Intelligent Systems has been published!

This standard is the result of five years of dedication and collaboration by a diverse group of global experts, and Dr. Rivas has contributed at different stages. The journey was marked by passionate discussions, varied perspectives, and a unified goal of fostering ethical and responsible AI development.

As AI technology becomes increasingly powerful and integral to our daily lives, IEEE 7014-2024 represents a crucial step towards ensuring that these systems are developed with ethical considerations at the forefront.

Accessing the Standard

The full text of IEEE 7014-2024 can be viewed and purchased here: IEEE 7014. Additionally, free access may soon be available via the IEEE GET Program: IEEE GET Program, although this is currently to be confirmed.

Acknowledgments

A huge thank you to Ben Bland and all the wonderful people who contributed to this project. We worked together to reach a consensus and have made a significant contribution to the future of AI technology.

This publication is a testament to the power of collaboration and the shared vision of building a brighter technological future for humanity and our planet.

Final Thoughts

The publication of IEEE 7014-2024 is a proud moment for all who have been involved, including our very own Dr. Rivas. It underscores the importance of considering ethical implications in AI development and sets a precedent for future advancements in the field. We look forward to seeing how this standard will influence the development of AI systems that are not only intelligent but also empathetic and ethically sound.



Uncovering Patterns in Car Parts – A Step Towards Combating a Cybercrime

The black market for stolen car parts is a significant problem, exacerbated by the rise of online marketplaces like Craigslist or OfferUp, where stolen goods are often sold under the radar. In response to this growing issue, our research team at Baylor University has been leveraging cutting-edge AI techniques to detect patterns in car part sales that could signal illicit activity. This work is part of the NSF-funded Research Experiences for Undergraduates (REU) program, which provides undergraduate students with hands-on research experience in critical areas like artificial intelligence. Our project, supported by NSF Grant #2210091, investigates the potential of deep learning models to analyze vast amounts of data from online listings, offering a new tool in the fight against stolen car parts.

Why This Research Matters

The theft and resale of car parts not only affect vehicle owners but also contribute to organized crime. Detecting patterns in how stolen parts are sold online can help law enforcement track and dismantle these criminal networks. This project also presents a unique challenge to the AI research community: the complexity of analyzing unstructured, noisy data from real-world platforms. By utilizing the Vision Transformer (ViT) for image analysis, our research offers a different approach compared to previous works that employed multimodal models like ImageBind and OpenFlamingo.

Dataset and Embedding Extraction

Our dataset comprises thousands of car parts advertisements scraped from Craigslist and OfferUp, each including images and textual descriptions. To process the image data, we used the Vision Transformer (ViT), a model pre-trained on ImageNet-21k. ViT processes images by splitting them into 16×16-pixel patches, allowing for the extraction of key features from each image. These features were converted into embeddings—high-dimensional vectors that represent each image’s content in a form that the model can analyze.

We extracted embeddings for nearly 85,000 images, which were then compiled into a CSV file for further analysis, including clustering and visualization. Unlike prior works by Hamara & Rivas (2024) and Rashid & Rivas (2024), which utilized multimodal models like ImageBind and OpenFlamingo to fuse image and text data, we focused solely on image embeddings in this phase to assess the effectiveness of ViT in capturing visual patterns related to illicit activities.
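
For readers who want to reproduce this kind of pipeline, embedding extraction with a pre-trained ViT might look roughly like the following (a sketch using the Hugging Face transformers checkpoint google/vit-base-patch16-224-in21k; the file name is hypothetical):

```python
import torch
from PIL import Image
from transformers import ViTImageProcessor, ViTModel

name = "google/vit-base-patch16-224-in21k"
processor = ViTImageProcessor.from_pretrained(name)
model = ViTModel.from_pretrained(name).eval()

image = Image.open("listing_photo.jpg").convert("RGB")   # hypothetical scraped image
inputs = processor(images=image, return_tensors="pt")    # resize + normalize patches
with torch.no_grad():
    outputs = model(**inputs)
embedding = outputs.last_hidden_state[:, 0]   # [CLS] token embedding, shape (1, 768)
```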

Clustering and Evaluation

With the embeddings extracted, we used UMAP (Uniform Manifold Approximation and Projection) to project the high-dimensional data into a more interpretable 2D space for visualization. We then applied K-Means clustering, a widely used algorithm for grouping data, and experimented with different embedding dimensions—16, 32, 64, and 128—to identify the optimal number of clusters.

Among these, 64 dimensions proved to be the best suited for our dataset, as determined by three key clustering performance metrics:

  • Silhouette Score: Measures how similar an object is to its own cluster compared to other clusters. A value of 0.015 indicated that some clusters were poorly defined.
  • Calinski-Harabasz Index: Evaluates the variance ratio between clusters versus within clusters.
  • Davies-Bouldin Index: Measures the average similarity between each cluster and its most similar cluster.

Although 128 dimensions performed well in some tests, 64 dimensions provided the clearest balance between cluster purity and computational efficiency. The low silhouette score, while indicating some overlap between clusters, helped confirm that most clusters were well-defined, despite several outliers—posts that displayed mixed or unclear features, such as images showing both powertrains and vehicle exteriors.
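
A condensed sketch of this stage (hypothetical file name; the umap-learn and scikit-learn packages are assumed):

```python
import numpy as np
import umap                                   # pip install umap-learn
from sklearn.cluster import KMeans
from sklearn.metrics import (silhouette_score, calinski_harabasz_score,
                             davies_bouldin_score)

X = np.load("vit_embeddings.npy")             # hypothetical (n_images, 768) array
X64 = umap.UMAP(n_components=64).fit_transform(X)    # best-performing dimension
labels = KMeans(n_clusters=20, n_init=10).fit_predict(X64)

print("silhouette:", silhouette_score(X64, labels))
print("calinski-harabasz:", calinski_harabasz_score(X64, labels))
print("davies-bouldin:", davies_bouldin_score(X64, labels))
```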

Findings and Analysis

Using the K-Means algorithm, we identified 20 distinct clusters, each representing different categories of car parts. Here are some key findings:

  • Cluster 0: Primarily contained exterior shots of full vehicles.
  • Cluster 1: Featured exterior components like mirrors and bumpers.
  • Cluster 2: Focused on powertrain parts such as engines and transmissions.
  • Cluster 3: Showcased body panels including doors, trunks, and hoods.
  • Cluster 4: Grouped images of towing accessories like trailer hitches.

After clustering, we applied K-Nearest Neighbors (KNN) to identify the top 10 posts nearest to each cluster centroid, which allowed us to analyze representative posts and confirm the coherence of each cluster. Despite the general success of this approach, outliers emerged in the UMAP visualization, indicating the need for further refinement to handle posts with mixed features. This challenge is common in image analysis, particularly when models rely solely on visual data without the contextual information that multimodal models can provide.

UMAP Visualization for 64 dimensions

Comparative Analysis with Prior Work

Our approach contrasts with that of Hamara & Rivas (2024) and Rashid & Rivas (2024), who utilized multimodal models like ImageBind and OpenFlamingo to integrate image and text data for enhanced analysis. While their methods leveraged the fusion of multiple data types to capture richer context, we aimed to assess the capabilities of ViT in isolating visual patterns indicative of illicit activity. This comparison highlights the trade-offs between focusing on single-modality models versus multimodal approaches in detecting complex patterns within unstructured data.

Broader Impact

This research demonstrates the potential of AI in analyzing large, unstructured datasets from online marketplaces, providing law enforcement with new tools to monitor and track stolen car parts. From a technical perspective, our project highlights the effectiveness of using ViT for image analysis in this context. As we continue refining our models and consider integrating multimodal approaches inspired by prior work, our collaboration with cross-disciplinary partners will ensure that this system becomes a valuable tool for combating the sale of stolen goods online.

As stated previously, the silhouette score for the dataset proved to be notably small, which was supported by the visualization containing numerous outliers. This may be attributed to clusters lacking clear definition, meaning that several posts contained images without many distinguishable features. This is understandable considering that while clusters emphasized a focus on specific car parts, many images still displayed various other vehicle components. For instance, although Cluster 2 primarily featured images of powertrains, the posts in this cluster also included shots of the exterior and body panels of the vehicle. This is logical as sellers often aim to showcase multiple facets of the vehicle when listing it, explaining the lack of focus on specific car parts.

About the Author

Cameron Armijo is a Computer Science undergraduate student at Baylor University, specializing in data mining.

International Conference on Emergent and Quantum Technologies (ICEQT’24)

July 22-25, 2024 — Las Vegas, NV

Dear Esteemed Colleagues,


Quantum computing is an expeditiously evolving field of interdisciplinary research, drawing upon fundamental principles from mathematics, physics, and engineering. To maintain scientific rigor and foster advancement, this domain necessitates a collaborative effort across various STEM disciplines.

We are delighted to announce the International Conference on Emergent and Quantum Technologies (ICEQT’24), scheduled for July 22-25, 2024, in Las Vegas, NV. The conference is designed to serve as a platform for researchers specializing in quantum machine learning and machine learning professionals exploring the application of AI in enhancing quantum computing algorithms. It aims to facilitate the exchange of insights and developments within these dynamic areas of study.

The burgeoning interest among machine learning practitioners in leveraging AI for quantum computing endeavors, and vice versa, underscores the relevance of this conference. Thus, we warmly welcome the submission of original research papers that contribute novel insights and state-of-the-art developments in the following areas of interest:

Foundations of Quantum Computing and Quantum Machine Learning

  • Quantum computing models and paradigms, e.g., Grover, Shor, and others
  • Quantum algorithms for Linear Systems of Equations
  • Quantum Tensor Networks and their Applications in QML

Quantum Machine Learning Algorithms

  • Quantum Neural Networks
  • Quantum Hidden Markov Models
  • Quantum PCA
  • Quantum SVM
  • Quantum Autoencoders
  • Quantum Transfer Learning
  • Quantum Boltzmann machines
  • Theory of Quantum-enhanced Machine Learning

AI for Quantum Computing

  • Machine learning for improved quantum algorithm performance
  • Machine learning for quantum control
  • Machine learning for building better quantum hardware

Quantum Algorithms and Applications

  • Quantum computing: models and paradigms
  • Quantum algorithms for hyperparameter tuning (Quantum computing for AutoML)
  • Quantum-enhanced Reinforcement Learning
  • Quantum Annealing
  • Quantum Sampling
  • Applications of Quantum Machine Learning

Fairness and Ethics in Quantum Machine Learning

We look forward to receiving your submissions and to welcoming you to ICEQT’24.

All submissions that are accepted for presentation will be included in the proceedings published by IEEE CPS. To ensure consistency in formatting, authors should follow the general typesetting instructions available on the IEEE’s website, including single-line spacing and a 2-column format. Additionally, authors of accepted papers must agree to the IEEE CPS standard statement regarding copyrights and policies on electronic dissemination.

Prospective authors are encouraged to submit their papers through the conference’s evaluation website at CMT. More information about the conference, including submission guidelines, can be found on our website at https://baylor.ai/iceqt/.

Important Deadlines

March 22, 2024: Submission of papers: https://cmt3.research.microsoft.com/ICEQT2024
– Full/Regular Research Papers (maximum of 8 pages)
– Short Research Papers (maximum of 5 pages)
– Abstract/Poster Papers (maximum of 3 pages)

April 15, 2024: Notification of acceptance (+/- two days)

May 1, 2024: Final papers + Registration

June 21, 2024: Last day for hotel room reservation at a discounted price.

July 22-25, 2024: The 2024 World Congress in Computer Science, Computer Engineering, and Applied Computing (CSCE’24: USA)
Which includes the International Conference on Emergent and Quantum Technologies (ICEQT’24)

Chairs:
Pablo Rivas, PhD, Baylor University
Bikram Khanal, PhD Candidate, Baylor University