Learning Robust Observable to Address Noise in Quantum Machine Learning

In the rapidly evolving field of Quantum Machine Learning (QML), one of the most pressing challenges is handling noise—the errors that naturally arise in quantum systems, particularly in the Noisy Intermediate-Scale Quantum (NISQ) era. But what if we could teach quantum systems to “learn” and address noise head-on? Our paper “Learning Robust Observable to Address Noise in Quantum Machine Learning” explores an approach to mitigating this issue by focusing on learning robust observables. These observables can withstand the effects of noise, improving the performance of QML models in noisy environments.

Understanding the Problem of Noise in QML

In quantum systems, noise comes from imperfections in quantum gates, interactions with the environment, and decoherence—making quantum computations highly error-prone. When applying QML, this noise leads to inaccuracies in predictions and model training. This research aims to identify observables that remain invariant or change minimally even in the presence of noise, thus offering more reliable outputs from quantum systems.

The Framework: Learning Robust Observables

We propose a machine learning-based framework for finding observables that are inherently resistant to various types of noise. The idea is to train a model to identify observables whose expectation values remain invariant, or nearly so, under noise. The model learns from the behavior of quantum states passing through noisy channels and adjusts until it finds robust observables that maintain their integrity despite the noise. We illustrate the problem using a Bell state (a well-known two-qubit entangled state), subjecting it to a depolarization channel to simulate noise.

The process can be formalized as an optimization problem where the goal is to minimize the change in the expectation value of the observable when the quantum state is subject to noise. Mathematically, this can be expressed as minimizing:

    \[\min_{\mathcal{O}_n}\;\mathbb{E}\left[\,\left| \langle\psi| \mathcal{O} |\psi\rangle - \langle\psi| \mathcal{O}_n |\psi\rangle \right|\,\right]\]

Here, \mathcal{O} is a Pauli-Z observable, and \mathcal{O}_n is an observable we are trying to learn. The expectation value is computed before and after noise is introduced. The goal is to find an observable that minimizes this difference, effectively learning a robust observable.
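
To make the objective concrete, here is a minimal NumPy sketch of such a loss, assuming density-matrix simulation (Tr(ρO) generalizes ⟨ψ|O|ψ⟩ to noisy states) and a batch of noisy copies of the state; the function names and batching are illustrative, not the paper's implementation.

import numpy as np

def expectation(rho, O):
    """Expectation value Tr(rho O) of an observable O for a density matrix rho."""
    return np.real(np.trace(rho @ O))

def robustness_loss(O_n, O_ref, rho_clean, noisy_states):
    """Average |<O_ref>_clean - <O_n>_noisy| over a batch of noisy states.

    O_ref is the fixed reference observable (e.g., Pauli-Z), O_n is the
    candidate observable being learned, and noisy_states are copies of the
    clean state after passing through a noise channel at various strengths.
    """
    target = expectation(rho_clean, O_ref)
    return float(np.mean([abs(target - expectation(rho_p, O_n))
                          for rho_p in noisy_states]))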

A Toy Example

In our framework, we train QML models by simulating quantum systems across different noise channels, including depolarization, amplitude damping, phase damping, bit flip, and phase flip channels. The objective is to learn observables for various quantum circuits—such as Bell state circuits, Quantum Fourier Transform circuits, and highly entangled random circuits—that can remain robust across different noise levels. The framework demonstrated that it could identify an observable that better retains the state’s properties under noisy conditions, proving that robust observables can be learned effectively.

 Consider the following example:

(1)   \begin{equation*} O_{optimized} = \begin{pmatrix} 0.804 & 0.086 + 0.138i & 0.739 + 0.050i & 0.070 + 0.132i\\ 0.086 - 0.138i & 0.302 & 0.087 - 0.122i & 0.277 + 0.019i \\0.739 - 0.050i & 0.087 + 0.122i & 1.253 & 0.133 + 0.215i \\ 0.070 - 0.132i & 0.277 - 0.019i & 0.133 - 0.215i & 0.470\end{pmatrix}\end{equation*}

We computed its expectation value for the Bell state under varying degrees of depolarization, p \in [0,1). The expectation values of the observable O_{optimized} on the depolarized Bell state as a function of the depolarization rate p are plotted in the following figure.

In this figure, Z is the Pauli-Z matrix, X is the Pauli-X matrix, H is the Hadamard gate, A is an arbitrary observable, and O_{optimized} is the learned two-qubit Hermitian measurement operator. This toy example shows that the expectation value of the custom observable O_{optimized} on the depolarized Bell state remains essentially constant as the depolarization rate p increases.
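
The behavior is easy to verify numerically. Below is a minimal sketch that evaluates both a Pauli Z⊗Z observable and the O_{optimized} matrix above on a depolarized Bell state, assuming the standard |Φ+⟩ = (|00⟩+|11⟩)/√2 state and a global two-qubit depolarizing channel ρ → (1−p)ρ + pI/4; the paper's exact channel parameterization may differ.

import numpy as np

# Learned observable reported above (entries copied from the matrix).
O_opt = np.array([
    [0.804,          0.086 + 0.138j, 0.739 + 0.050j, 0.070 + 0.132j],
    [0.086 - 0.138j, 0.302,          0.087 - 0.122j, 0.277 + 0.019j],
    [0.739 - 0.050j, 0.087 + 0.122j, 1.253,          0.133 + 0.215j],
    [0.070 - 0.132j, 0.277 - 0.019j, 0.133 - 0.215j, 0.470],
])

# Bell state |Phi+> = (|00> + |11>)/sqrt(2) as a density matrix.
bell = np.zeros(4, dtype=complex)
bell[0] = bell[3] = 1 / np.sqrt(2)
rho = np.outer(bell, bell.conj())

def depolarize(rho, p):
    """Global two-qubit depolarizing channel: rho -> (1-p)*rho + p*I/4."""
    return (1 - p) * rho + p * np.eye(4) / 4

ZZ = np.kron(np.diag([1.0, -1.0]), np.diag([1.0, -1.0]))  # Pauli Z (x) Z reference

for p in np.linspace(0.0, 0.9, 10):
    rho_p = depolarize(rho, p)
    print(f"p={p:.1f}  <ZxZ>={np.trace(rho_p @ ZZ).real:+.3f}  "
          f"<O_opt>={np.trace(rho_p @ O_opt).real:+.3f}")
# <ZxZ> decays as 1 - p, while <O_opt> stays near 0.707 for all p.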

Key Findings

  • Custom observables designed through this method demonstrated remarkable stability against noise, especially when compared to traditional observables like Pauli matrices.
  • In noisy channels like depolarization, the learned observables maintained a more consistent expectation value, while traditional observables exhibited greater variance.
  • The approach can be applied to various types of quantum circuits, making it versatile and broadly applicable in enhancing the reliability of QML models.

Implications for Quantum Machine Learning

This study offers a promising avenue for improving the accuracy and stability of QML in real-world applications. By learning robust observables, QML systems can perform more reliably, even as we contend with the inherent noise in current quantum computers. By using learned observables, the performance of quantum machine learning models can be made more stable, even when operating in the inherently noisy NISQ regime. This has implications for advancing practical applications of quantum computing, especially as we seek to scale up quantum algorithms in the near-term.

Looking Ahead: The Future of Noise-Resistant QML

The results from this paper open up exciting possibilities for future work. Imagine a future where every quantum machine learning algorithm can autonomously adjust to different noisy environments by learning which observables to trust. This would make QML models more resilient and, ultimately, more practical for real-world applications.

One immediate future direction is testing the framework on larger systems and more complex noise models. Additionally, combining this method with error correction techniques could further enhance the stability of QML algorithms.

For a detailed exploration of the methodology and findings, read the full paper at:

https://arxiv.org/pdf/2409.07632

References

  • Khanal, Bikram, and Pablo Rivas. “Learning Robust Observable to Address Noise in Quantum Machine Learning.” arXiv preprint arXiv:2409.07632 (2024).

About the Author

Bikram Khanal is a Ph.D. student at Baylor University, specializing in Quantum Machine Learning and Natural Language Processing.

Uncovering Patterns in Car Parts – A Step Towards Combating a Cybercrime

The black market for stolen car parts is a significant problem, exacerbated by the rise of online marketplaces like Craigslist or OfferUp, where stolen goods are often sold under the radar. In response to this growing issue, our research team at Baylor University has been leveraging cutting-edge AI techniques to detect patterns in car part sales that could signal illicit activity. This work is part of the NSF-funded Research Experiences for Undergraduates (REU) program, which provides undergraduate students with hands-on research experience in critical areas like artificial intelligence. Our project, supported by NSF Grant #2210091, investigates the potential of deep learning models to analyze vast amounts of data from online listings, offering a new tool in the fight against stolen car parts.

Why This Research Matters

The theft and resale of car parts not only affect vehicle owners but also contribute to organized crime. Detecting patterns in how stolen parts are sold online can help law enforcement track and dismantle these criminal networks. This project also presents a unique challenge to the AI research community: the complexity of analyzing unstructured, noisy data from real-world platforms. By utilizing the Vision Transformer (ViT) for image analysis, our research offers a different approach compared to previous works that employed multimodal models like ImageBind and OpenFlamingo.

Dataset and Embedding Extraction

Our dataset comprises thousands of car parts advertisements scraped from Craigslist and OfferUp, each including images and textual descriptions. To process the image data, we used the Vision Transformer (ViT), a model pre-trained on ImageNet-21k. ViT processes images by splitting them into 16×16-pixel patches, allowing for the extraction of key features from each image. These features were converted into embeddings—high-dimensional vectors that represent each image’s content in a form that the model can analyze.

We extracted embeddings for nearly 85,000 images, which were then compiled into a CSV file for further analysis, including clustering and visualization. Unlike prior works by Hamara & Rivas (2024) and Rashid & Rivas (2024), which utilized multimodal models like ImageBind and OpenFlamingo to fuse image and text data, we focused solely on image embeddings in this phase to assess the effectiveness of ViT in capturing visual patterns related to illicit activities.
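
For readers who want to reproduce this step, below is a minimal sketch of the embedding extraction using the Hugging Face transformers library; the checkpoint name, the use of the [CLS] token as the embedding, and the file layout are illustrative assumptions rather than the project's exact pipeline.

import glob
import pandas as pd
import torch
from PIL import Image
from transformers import ViTImageProcessor, ViTModel

# ViT-Base/16 pre-trained on ImageNet-21k (assumed checkpoint).
ckpt = "google/vit-base-patch16-224-in21k"
processor = ViTImageProcessor.from_pretrained(ckpt)
model = ViTModel.from_pretrained(ckpt).eval()

rows = []
for path in glob.glob("listings/images/*.jpg"):  # hypothetical folder of scraped images
    image = Image.open(path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    embedding = outputs.last_hidden_state[:, 0, :].squeeze(0)  # [CLS] token, 768-dim
    rows.append([path] + embedding.tolist())

columns = ["image_path"] + [f"dim_{i}" for i in range(768)]
pd.DataFrame(rows, columns=columns).to_csv("vit_embeddings.csv", index=False)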

Clustering and Evaluation

With the embeddings extracted, we used UMAP (Uniform Manifold Approximation and Projection) to project the high-dimensional data into a more interpretable 2D space for visualization. We then applied K-Means clustering, a widely used algorithm for grouping data, and experimented with different embedding dimensions—16, 32, 64, and 128—to identify the optimal number of clusters.

Among these, 64 dimensions proved to be the best suited for our dataset, as determined by three key clustering performance metrics:

  • Silhouette Score: Measures how similar an object is to its own cluster compared to other clusters. A value of 0.015 indicated that some clusters were poorly defined.
  • Calinski-Harabasz Index: Evaluates the variance ratio between clusters versus within clusters.
  • Davies-Bouldin Index: Measures the average similarity between each cluster and its most similar cluster.

Although 128 dimensions performed well in some tests, 64 dimensions provided the best balance between cluster purity and computational efficiency. The low silhouette score indicates some overlap between clusters; even so, visual inspection suggested that most clusters were reasonably well defined, apart from several outliers—posts with mixed or unclear features, such as images showing both powertrains and vehicle exteriors.
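
A minimal sketch of this reduce-then-cluster loop with umap-learn and scikit-learn is shown below; the candidate dimensions and k = 20 clusters mirror the description in this post, while the file name and seeds are illustrative.

import pandas as pd
import umap
from sklearn.cluster import KMeans
from sklearn.metrics import (silhouette_score, calinski_harabasz_score,
                             davies_bouldin_score)

X = pd.read_csv("vit_embeddings.csv").drop(columns=["image_path"]).values

def cluster_and_score(n_dims, n_clusters=20, seed=42):
    """Reduce with UMAP to n_dims, cluster with K-Means, and report the metrics."""
    reduced = umap.UMAP(n_components=n_dims, random_state=seed).fit_transform(X)
    kmeans = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(reduced)
    scores = {
        "silhouette": silhouette_score(reduced, kmeans.labels_),
        "calinski_harabasz": calinski_harabasz_score(reduced, kmeans.labels_),
        "davies_bouldin": davies_bouldin_score(reduced, kmeans.labels_),
    }
    return reduced, kmeans, scores

for n_dims in (16, 32, 64, 128):
    _, _, scores = cluster_and_score(n_dims)
    print(n_dims, scores)

# Keep the 64-dimensional configuration (best for this dataset) and a separate
# 2-D projection used only for visualization.
reduced_64, kmeans_64, _ = cluster_and_score(64)
coords_2d = umap.UMAP(n_components=2, random_state=42).fit_transform(X)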

Findings and Analysis

Using the K-Means algorithm, we identified 20 distinct clusters, each representing different categories of car parts. Here are some key findings:

  • Cluster 0: Primarily contained exterior shots of full vehicles.
  • Cluster 1: Featured exterior components like mirrors and bumpers.
  • Cluster 2: Focused on powertrain parts such as engines and transmissions.
  • Cluster 3: Showcased body panels including doors, trunks, and hoods.
  • Cluster 4: Grouped images of towing accessories like trailer hitches.

After clustering, we applied K-Nearest Neighbors (KNN) to identify the top 10 posts nearest to each cluster centroid, which allowed us to analyze representative posts and confirm the coherence of each cluster. Despite the general success of this approach, outliers emerged in the UMAP visualization, indicating the need for further refinement to handle posts with mixed features. This challenge is common in image analysis, particularly when models rely solely on visual data without the contextual information that multimodal models can provide.
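
A sketch of the centroid-inspection step with scikit-learn's NearestNeighbors follows; reduced_64 and kmeans_64 refer to the 64-dimensional embeddings and the fitted K-Means model from the sketch above.

from sklearn.neighbors import NearestNeighbors

# Find the 10 embeddings closest to each K-Means centroid.
nn = NearestNeighbors(n_neighbors=10).fit(reduced_64)
_, neighbor_idx = nn.kneighbors(kmeans_64.cluster_centers_)  # shape: (20, 10)

for cluster_id, indices in enumerate(neighbor_idx):
    print(f"Cluster {cluster_id}: representative posts at rows {indices.tolist()}")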

UMAP Visualization for 64 dimensions

Comparative Analysis with Prior Work

Our approach contrasts with that of Hamara & Rivas (2024) and Rashid & Rivas (2024), who utilized multimodal models like ImageBind and OpenFlamingo to integrate image and text data for enhanced analysis. While their methods leveraged the fusion of multiple data types to capture richer context, we aimed to assess the capabilities of ViT in isolating visual patterns indicative of illicit activity. This comparison highlights the trade-offs between focusing on single-modality models versus multimodal approaches in detecting complex patterns within unstructured data.

Broader Impact

This research demonstrates the potential of AI in analyzing large, unstructured datasets from online marketplaces, providing law enforcement with new tools to monitor and track stolen car parts. From a technical perspective, our project highlights the effectiveness of using ViT for image analysis in this context. As we continue refining our models and consider integrating multimodal approaches inspired by prior work, our collaboration with cross-disciplinary partners will ensure that this system becomes a valuable tool for combating the sale of stolen goods online.

As noted above, the silhouette score for the dataset was notably small, consistent with the numerous outliers visible in the visualization. This can be attributed to clusters lacking clear definition: several posts contained images without many distinguishing features. That is understandable because, while each cluster emphasized a specific category of car part, many images still displayed other vehicle components. For instance, although Cluster 2 primarily featured images of powertrains, posts in this cluster also included shots of the vehicle's exterior and body panels. This makes sense, as sellers often aim to showcase multiple facets of a vehicle when listing it, which dilutes the focus on specific parts.

About the Author

Cameron Armijo is a Computer Science undergraduate student at Baylor University, specializing in data mining.

Celebrating Love and Innovation at The Lab: Welcome, PoderOso!

This Valentine’s Day at Baylor.AI, we’re not just celebrating love in the air but also the arrival of our latest powerhouse, affectionately named PoderOso. This state-of-the-art equipment is a testament to the unwavering support and vision of Dr. Greg Hamerly, the department chair of Computer Science at Baylor, and Dr. Daniel Pack, the dean of the School of Engineering and Computer Science. Their dedication to advancing research and innovation within our department has been instrumental in acquiring PoderOso, and for that, we are profoundly grateful.

The name ‘PoderOso’ is derived from Spanish, where ‘Poder’ means ‘Power’ and ‘Oso’ means ‘Bear’. Combined, ‘Poderoso’ translates to ‘Powerful’. ‘PoderOso’ thus creatively merges these concepts to symbolize something that embodies both power and the strength of a bear, aptly reflecting the capabilities of the machine.

PoderOso is a technological marvel boasting dual EPYC 7662 processors, a whopping 1024GB of DDR4-3200 ECC memory, cutting-edge storage solutions, and six NVIDIA L40S GPUs. It’s a beast designed for in-house AI research, setting a new benchmark for what we can achieve.

With PoderOso’s impressive specs, our team is poised to push the boundaries of deep learning faster than ever before. From advancing language models that understand and generate human-like text, to developing computer vision systems that perceive the world as we do; from strengthening adversarial robustness that secures AI against malicious attacks, to exploring the burgeoning field of quantum machine learning and driving forward multimodal AI research that integrates multiple types of data, PoderOso will be at the heart of our endeavors. Moreover, it will enable us to delve deeper into AI ethics, ensuring our advancements are aligned with our values and societal needs.

As we unbox PoderOso and get it up and running, we’re filled with anticipation for future breakthroughs. Below are photos of the unboxing and our dedicated IT team in front of the rack.

Our journey into the next frontier of AI research has just gotten a significant boost, thanks to PoderOso and the incredible support of our leaders. Here’s to a future where our research leads to technological advancements and fosters a more ethical, understanding, and inclusive world.

Happy Valentine’s Day to our Baylor.AI family and everyone supporting us on this exciting journey!

(Left to right) Brian Sitton, Mike Hutcheson, Pablo Rivas

Creation and Analysis of an NLU Dataset for DoD Cybersecurity Policies

Comprehending and implementing robust policies is crucial in cybersecurity. In our lab, Ernesto Quevedo et al. recently released a paper, Creation and Analysis of a Natural Language Understanding Dataset for DoD Cybersecurity Policies (CSIAC-DoDIN V1.0), which introduces a groundbreaking dataset to aid in this endeavor. This dataset bridges a significant gap in Legal Natural Language Processing (NLP) by providing structured data specifically focused on cybersecurity policies.

Dataset Overview

The CSIAC-DoDIN V1.0 dataset encompasses a wide array of cybersecurity-related policies, responsibilities, and procedures from the Department of Defense (DoD). Unlike existing datasets that focus primarily on privacy policies, this dataset includes detailed guidelines, strategies, and procedures essential for cybersecurity.

Key Contributions

  1. Novel Dataset: This dataset is the first to include comprehensive cybersecurity policies, guidelines, and procedures.
  2. Baseline Models: The paper provides baseline performance metrics using transformer-based models such as BERT, RoBERTa, Legal-BERT, and PrivBERT.
  3. Open Access: The dataset and code are publicly available, encouraging further research and collaboration.

Experiments and Results

Our team of researchers evaluated several transformer-based models on this dataset:

  • BERT: Demonstrated strong performance across various tasks.
  • RoBERTa: Showed competitive results, particularly in classification tasks.
  • Legal-BERT: Excelled in domain-specific tasks, benefiting from its legal data pre-training.
  • PrivBERT: Provided insights into the transferability of models across different policy subdomains.
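
For readers who would like to reproduce a baseline of this kind, the sketch below fine-tunes a BERT-style classifier with the Hugging Face transformers library; the CSV layout, column names, label count, and hyperparameters are assumptions, since the exact task setup is described in the paper.

import pandas as pd
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

# Assumed layout: one policy statement per row with an integer class label.
df = pd.read_csv("csiac_dodin_v1.csv")  # hypothetical file name
num_labels = df["label"].nunique()

model_name = "bert-base-uncased"  # swap in roberta-base, nlpaueb/legal-bert-base-uncased, etc.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=num_labels)

dataset = Dataset.from_pandas(df).train_test_split(test_size=0.2, seed=42)
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True,
                            padding="max_length", max_length=256),
    batched=True,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=3,
                           per_device_train_batch_size=16,
                           evaluation_strategy="epoch"),
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
)
trainer.train()
print(trainer.evaluate())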

Download

Access the CSIAC-DoDIN V1.0 dataset here to explore it and contribute to the advancement of Legal NLP. Join the effort to enhance cybersecurity policy understanding and implementation using cutting-edge NLP models. Download the paper here to learn more about the process.

NSF REU: How Two Undergraduates Are Advancing AI for Social Good

Two undergraduate students from the Computer Science Department at Baylor University, Misty and Andrew, have demonstrated the impactful role of undergraduate research in advancing technological frontiers. Their recent work on automatic information retrieval, conducted under the Baylor.AI lab, has been a testament to their dedication and intellectual curiosity.

Misty and Andrew’s journey in the AI lab involved engaging in research discussions that transcended typical undergraduate experiences. Their focus was on developing software to automate data collection, a crucial component in achieving the NSF grant objectives related to studying the illegal trafficking of stolen car parts on C2C marketplaces. This work is a critical piece in the larger puzzle of combating online criminal activities.

Their success in this endeavor was supported by the Research Experiences for Undergraduates (REU) program, highlighting the program’s role in fostering early research experiences. The REU program’s funding not only enabled these students to delve into real-world problems but also allowed them to contribute meaningfully to a project with significant societal implications.

This follow-up story continues our ongoing efforts, detailed in our previous post, to tackle online criminal activity under NSF’s REU program. Misty and Andrew’s achievements contribute to the larger goal of AI research for a safe and secure cyberspace, and it is essential to acknowledge and celebrate these steps in their academic journey. Their work exemplifies the potential of undergraduate research to contribute to complex and socially relevant projects.

As we continue to support and mentor our students in the Baylor.AI lab, we look forward to more such stories of perseverance, learning, and meaningful contributions to the field of AI and machine learning.

Sic’em Bears!

On Becoming an ACM Senior Member

I was honored to recently receive the ACM Senior Member designation from the Association for Computing Machinery (ACM). For my students who asked and anyone else interested, I would like to share with you what this honor is and why I received it.

First, let me tell you a little bit about the ACM. The ACM is the world’s largest educational and scientific computing society, with a mission to advance computing as a science and profession. The ACM Senior Member designation is a distinction awarded to members who have demonstrated significant accomplishments and impact in the computing field. To be considered for this honor, a candidate must have at least 10 years of professional experience in computing and have made significant contributions to the field through research, industry, or education. Being elevated to senior member status in ACM signifies that you are an established leader in the computing field, recognized by your peers for your expertise and contributions. It also comes with certain benefits, such as access to special resources and opportunities for professional development and networking.

Overall, being a senior member of ACM is a great honor and a recognition of your significant contributions to the computing field. 🫶

I was thrilled to learn that by recommendation of my mentor in the computer science department, Dr. Hamerly, the dean of the school of engineering and computer science, Dr. Baker, and of my peers, I am now an ACM Senior Member, and I believe that my contributions to the computing field over the past decade played a significant role in this recognition. Some of my most notable achievements include the following:

  • Technical leadership: leading industry-university collaborative projects, securing funding for students’ research, directing numerous theses and independent studies, developing graduate courses on data mining and machine learning, updating and developing courses, and participating in the education committees at Marist College and Baylor University.
  • Technical contributions: over 90 publications, research in machine learning and numerical optimization, contribution to SVM theory, recent research in efficient representation learning, adversarial learning, and ethical implications of biased and unfair models, and active involvement in developing AI ethics standards through work with IEEE Standards Association.
  • Professional contributions: participation in professional events, including serving as Sponsorship & Budget chair of ACM NYC of Women in Computing and as a Program Committee Chair for NAACL 2022 LXNLP workshop, active membership in professional organizations, and full-time industry experience designing end-to-end systems to support manufacturing and supply management.
  • Recognition: elevation to IEEE Senior Member, sought-after expertise and leadership in deep learning and ethics, involvement in developing AI ethics standards, and commitment to promoting diversity and inclusion in computing through work with ACM NYC of Women in Computing and participation in the AAAI Undergraduate Consortium.

These accomplishments have been recognized by my peers in the computing community and have contributed to advancing the field. In addition to my technical contributions, I have been actively mentoring and teaching the next generation of computing professionals.

I am incredibly grateful to the ACM for this honor, and I hope it inspires some of my students to pursue academic excellence. I believe that we can all make a massive difference in the world through our work in computing, and I look forward to continuing to make meaningful contributions to the exciting and rapidly evolving field of machine learning and responsible AI.

Thank you for taking the time to read about my journey to becoming an ACM Senior Member. If you have any questions or would like to learn more about my work, please don’t hesitate to contact me.

Dr. Hamerly, chair of the computer science department and mentor, presented the ACM Senior Member certificate.

How to Show that Your Model is Better: A Basic Guide to Statistical Hypothesis Testing

Do you need help determining which machine learning model is superior? This post presents a step-by-step guide using basic statistical techniques and a real case study! 🤖📈 #AIOrthoPraxy #MachineLearning #Statistics #DataScience

When employing Machine Learning to address problems, our choice of a model plays a crucial role. Evaluating models can be straightforward when performance disparities are substantial, for example, when comparing two large language models (LLMs) on a masked language modeling (MLM) task with perplexities of 71.01 and 28.56, respectively. However, if the differences among models are small, making a solid analysis to discern whether one model is genuinely superior to the others can prove challenging.

This tutorial presents a step-by-step guide to determining whether one model is superior to another, relying on basic statistical techniques and real datasets. Our case study compares four models on six datasets using one metric, standard accuracy; other contexts may involve different numbers of models, metrics, or datasets. We will work with the tables below, which show the properties of the datasets and the performance of two baseline models and two of our proposed models. Our hypothesis to be tested is that the proposed models are better.

Summary of performance measured with standard accuracy
Summary of the main properties of the datasets considered in this tutorial.

One of the primary purposes of statistics is hypothesis testing. Statistical inference involves taking a sample from a population and determining how well the sample represents the population. In hypothesis testing, we formulate a null hypothesis, H_0, and an alternative hypothesis, H_A, based on the problem (comparing models). Both hypotheses must be concise, mutually exclusive, and exhaustive. For example, our null hypothesis could be that the models perform equally, and the alternative that the models perform differently.

Why is the ANOVA test not a good alternative?

The ANOVA (Analysis of Variance) test is a parametric test that compares the means of multiple groups. In our case, we have four models to compare with six datasets. The null hypothesis for ANOVA is that all the means are equal, and the alternative hypothesis is that at least one of the means is different. If the p-value of the ANOVA test is less than the significance level (usually 0.05), we reject the null hypothesis and conclude that at least one of the means is different, i.e., at least one model performs differently than the others. However, ANOVA may not always be the best choice for comparing the performance of different models.

One reason for this is that ANOVA assumes that the data follows a normal distribution, which may not always be the case for real-world data. Additionally, ANOVA does not take into account the difficulty of classifying certain data points. For example, in a dataset with a single numerical feature and binary labels, all models may achieve 100% accuracy on the training data. However, if the test set contains some mislabeled points, the models may perform differently. In this scenario, ANOVA would not be appropriate because it does not account for the difficulty of classifying certain data points.

Another issue with ANOVA is that it assumes that the variances of the groups being compared are equal. This assumption may not hold for datasets with different levels of noise or variability. In such cases, alternative statistical tests like the Friedman test or the Nemenyi test may be more appropriate.

Friedman test

The Friedman test is a non-parametric test that compares multiple models. In our example, we want to compare the performance of k=4 different models, i.e., two baseline models, Gabor randomized, and Gabor repeated, on N=6 datasets. First, the test calculates the average rank of each model’s performance on each dataset, with the best-performing model receiving a rank of 1. The Friedman test then tests the null hypothesis, H_0, that all models are equally effective and their average ranks should be equal. The test statistic is calculated as follows:

(1)   \begin{equation*} \chi_{F}^{2}=\frac{12 N}{k(k+1)}\left[\sum_{j=1}^{k} R_{j}^{2}-\frac{k(k+1)^{2}}{4}\right] \end{equation*}

where R_j is the average rank of the j-th model.

The test result can be used to determine whether there is a statistically significant difference between the performance of the models by checking whether \chi_{F}^{2} exceeds the corresponding critical value for a chosen significance level \alpha. However, since \chi_{F}^{2} tends to be overly conservative, we also calculate the F_F statistic, which follows an F distribution, as follows:

(2)   \begin{equation*} F_{F}=\frac{(N-1) \chi_{F}^{2}}{N(k-1)-\chi_{F}^{2}}. \end{equation*}

Based on the critical value, F_{F}, and \chi_{F}^{2}, we evaluate H_0; once the null hypothesis is rejected, we apply a posthoc test. For this, we use the Nemenyi test to establish whether models differ significantly in their performance.

We will start the process of getting this test done by ranking the data. First, we can load the data and verify it with respect to the table shown earlier.

import pandas as pd
import numpy as np

data = [[0.8937, 0.8839, 0.9072, 0.9102],
        [0.8023, 0.8024, 0.8229, 0.8238],
        [0.7130, 0.7132, 0.7198, 0.7206],
        [0.5084, 0.5085, 0.5232, 0.5273],
        [0.2331, 0.2326, 0.3620, 0.3952],
        [0.5174, 0.5175, 0.5307, 0.5178]]

model_names = ['Glorot N.', 'Glorot U.', 'Random G.', 'Repeated G.']

df = pd.DataFrame(data, columns=model_names)

print(df.describe())  #<- use averages to verify if matches table

Output:

       Glorot N.  Glorot U.  Random G.  Repeated G.
count   6.000000   6.000000   6.000000     6.000000
mean    0.611317   0.609683   0.644300     0.649150
std     0.240422   0.238318   0.206871     0.200173
min     0.233100   0.232600   0.362000     0.395200
25%     0.510650   0.510750   0.525075     0.520175
50%     0.615200   0.615350   0.625250     0.623950
75%     0.779975   0.780100   0.797125     0.798000
max     0.893700   0.883900   0.907200     0.910200

Next, we rank the models and get their averages like so:

data = df.rank(axis=1, method='average', ascending=False)
print(data)
print(data.describe())

Output:

   Glorot N.  Glorot U.  Random G.  Repeated G.
0        3.0        4.0        2.0          1.0
1        4.0        3.0        2.0          1.0
2        4.0        3.0        2.0          1.0
3        4.0        3.0        2.0          1.0
4        3.0        4.0        2.0          1.0
5        4.0        3.0        1.0          2.0

       Glorot N.  Glorot U.  Random G.  Repeated G.
count   6.000000   6.000000   6.000000     6.000000
mean    3.666667   3.333333   1.833333     1.166667
std     0.516398   0.516398   0.408248     0.408248
min     3.000000   3.000000   1.000000     1.000000
25%     3.250000   3.000000   2.000000     1.000000
50%     4.000000   3.000000   2.000000     1.000000
75%     4.000000   3.750000   2.000000     1.000000
max     4.000000   4.000000   2.000000     2.000000

With this information, we can expand our initial results table to show the rankings by dataset and the average rankings across all datasets for each model.

Now that we have the rankings, we can proceed with the statistical analysis and do the following:

(3)   \begin{align*} \chi_{F}^{2}&=\frac{12 \cdot 6}{4 \cdot 5}\left[\left(3.667^2+3.333^2+1.833^2+1.167^2\right)-\frac{4 \cdot 5^{2}}{4}\right] \nonumber \\ &=15.4 \nonumber  \end{align*}

(4)   \begin{equation*} F_{F}=\frac{5 \cdot 15.4}{6 \cdot 3-15.4}=29.615 \nonumber \end{equation*}

The critical value of the F distribution at \alpha=0.01 (with the degrees of freedom given below) is 5.417. Because F_F far exceeds this critical value, we reject H_0 with 99% confidence.

The critical value can be obtained from any table that has the F distribution. In the table the degrees of freedom across columns (denoted as df_1) is k-1, that is the number of models minus one; the degrees of freedom across rows (denoted as df_2) is (k-1)\times(N-1), that is, the number of models minus one, times the number of datasets minus one. In our case this is df_1=3 and df_2=15.
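
Continuing the snippet above, the same quantities can be computed programmatically; scipy supplies the critical value, and scipy.stats.friedmanchisquare serves as a cross-check:

from scipy import stats

N, k = data.shape  # data holds the ranks computed above: 6 datasets, 4 models
avg_ranks = data.mean()

chi2_F = 12 * N / (k * (k + 1)) * ((avg_ranks ** 2).sum() - k * (k + 1) ** 2 / 4)
F_F = (N - 1) * chi2_F / (N * (k - 1) - chi2_F)
critical = stats.f.ppf(1 - 0.01, dfn=k - 1, dfd=(k - 1) * (N - 1))

print(f"chi2_F = {chi2_F:.3f}, F_F = {F_F:.3f}, "
      f"F(3, 15) critical value at alpha=0.01 = {critical:.3f}")
# chi2_F = 15.400, F_F = 29.615, critical value = 5.417 -> reject H_0

# Cross-check using the raw accuracies (no manual ranking needed).
print(stats.friedmanchisquare(*[df[c] for c in df.columns]))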

Nemenyi Test

The Nemenyi test is a post-hoc test that compares multiple models after a significant result from Friedman’s test. The null hypothesis for Nemenyi is that there is no difference between any two models, and the alternative hypothesis is that at least one pair of models is different.

The formula for Nemenyi is as follows:

    \[CD = q_{\alpha} \sqrt{\frac{k(k+1)}{6N}}\]

where q_{\alpha} is the critical value based on the Studentized range statistic at the chosen significance level, k is the number of models, and N is the number of datasets. The q_{\alpha} value can be obtained from the following table:

Critical values for the Nemenyi test, which is conducted following the Friedman test, with two-tailed results.

Thus, for our particular case study, the critical differences are:

(5)   \begin{equation*} CD_{\alpha=0.05}=2.569 \sqrt{\frac{4 \cdot 5}{6 \cdot 6}} = 1.915 \nonumber \end{equation*}

(6)   \begin{equation*} CD_{\alpha=0.10}=2.291 \sqrt{\frac{4 \cdot 5}{6 \cdot 6}} = 1.708 \nonumber \end{equation*}

Since the difference in average rank between the randomized Gabor model and the Glorot normal baseline is 1.83, which exceeds CD_{\alpha=0.10}=1.708, we conclude that the randomized Gabor model is significantly better at the 0.10 level. Similarly, since the difference in average rank between the repeated Gabor model and the Glorot uniform baseline is 2.17, which exceeds CD_{\alpha=0.05}=1.915, we conclude that it is significantly better at the 0.05 level. Yes, there is sufficient statistical evidence to show that our models are better with high confidence.
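
These numbers are again easy to verify in a few lines; the q values come from the Nemenyi table referenced above:

import numpy as np

k, N = 4, 6
q = {0.05: 2.569, 0.10: 2.291}  # two-tailed Nemenyi q_alpha for k = 4 models
cd = {alpha: q_a * np.sqrt(k * (k + 1) / (6 * N)) for alpha, q_a in q.items()}
print(cd)  # {0.05: ~1.915, 0.10: ~1.708}

avg_ranks = {"Glorot N.": 3.667, "Glorot U.": 3.333, "Random G.": 1.833, "Repeated G.": 1.167}
# A pair of models differs significantly when their rank difference exceeds the CD.
print(avg_ranks["Glorot N."] - avg_ranks["Random G."] > cd[0.10])    # True (1.83 > 1.708)
print(avg_ranks["Glorot U."] - avg_ranks["Repeated G."] > cd[0.05])  # True (2.17 > 1.915)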

Things we would like to see in papers

First of all, it would be nice to have a complete table that includes the results of the statistical tests as part of the caption or as a footnote, like this:

Second of all, graphics always help! A simple and visually appealing diagram is a powerful way to represent post hoc test results when comparing multiple classifiers. The figure below, which illustrates the data analysis from the table above, displays the average ranks of methods along the top line of the diagram. To facilitate interpretation, the axis is oriented so that the best ranks appear on the right side, which enables us to perceive the methods on the right as superior.

Comparison of all models against each other with the Nemenyi test. Models not significantly different at α = 0.10 or α = 0.05 are connected.

When comparing all the algorithms against each other, the groups of algorithms that are not significantly different are connected with a bold solid line. Such an approach clearly highlights the most effective models while also providing a robust analysis of the differences between models. Additionally, the critical difference is shown above the graph, further enhancing the visualization of the analysis results. Overall, this simple yet powerful diagrammatic approach provides a clear and concise representation of the performance of multiple classifiers, enabling more informed decision-making in selecting the best-performing model.

Main Sources

The statistical tests are based on this paper:

Demšar, Janez. “Statistical comparisons of classifiers over multiple data sets.” The Journal of Machine learning research 7 (2006): 1-30.

The case study is based on the following research:

Rai, Mehang. “On the Performance of Convolutional Neural Networks Initialized with Gabor Filters.” Thesis, Baylor University, 2021.

President’s Executive Order for Advancing Racial Equity in AI Systems: What It Means for the Future of AI-Based Technology

Summary: The President of the United States, Joe Biden, has recently authorized an Executive Order intending to enhance racial equity and foster support for marginalized communities via the federal government. The Order mandates that federal agencies employing artificial intelligence (AI) systems assume novel equity responsibilities and instructs them to forestall and rectify any form of discrimination, including safeguarding the public from the perils of algorithmic discrimination.

What you should know: The recent Executive Order on Further Advancing Racial Equity and Support for Underserved Communities Through The Federal Government emphasizes the importance of advancing equity for all, including communities that have long been underserved, and addressing systemic racism in the US policies and programs. This order implies that AI systems should be designed to ensure that they do not perpetuate or exacerbate inequities and should be used to address the unfair disparities faced by underserved communities. It is also implied that the Federal Government should work with civil society, the private sector, and State and local governments to redress unfair disparities and remove barriers to Government programs and services, which could be facilitated by the development and deployment of ethical and responsible AI systems. Additionally, the order emphasizes the need for evidence-based approaches to equitable policymaking and implementation, which can be achieved through collecting and analyzing data on the impacts of AI systems on different communities. Therefore, AI practitioners should ensure that their systems are designed, developed, and deployed to promote equity, fairness, and inclusivity and are aligned with the Federal Government’s commitment to advancing racial equity and supporting underserved communities.

The Center for Standards and Ethics in Artificial Intelligence (CSEAI)

Following President’s Executive Order, we at the CSEAI recognize the critical role of artificial intelligence in promoting fairness, accountability, and transparency. As a research center committed to developing responsible AI techniques, we believe our work can help meet the challenges and opportunities of emerging regulation, standardization, and best practices in AI systems. We are inviting industry members to partner with us financially and take part in collaborative research on trustworthy AI. Our mission is to provide applicable, actionable, standard practices in trustworthy AI and train a workforce that enables fairness, accountability, and transparency. We believe our work will help mitigate AI adoption’s operational, liability, and reputation risks.

The CSEAI brings together leading universities to conduct collaborative research in responsible AI techniques. We are committed to workforce development and providing accessible standards, best practices, testing, and compliance. We are proud to be a part of the NSF IUCRC Program and are excited to be supported by the NSF, which provides a standard agreement, organizational, and legal framework.

Join us in creating a better future for all Americans by developing responsible AI practices that promote fairness, accountability, and transparency. By partnering with the CSEAI, you will have the opportunity to work with a dedicated team of researchers, participate in cutting-edge research, and help shape the future of AI. Contact us today to learn more about partnering with the CSEAI.

Contact Pablo_Rivas@Baylor.edu and find out more at www.cseai.center.

International Conference on Emergent and Quantum Technologies (ICEQT’23)

July 24-27, 2023 — Las Vegas, NV

Dear Esteemed Colleagues,

Quantum computing is a rapidly emerging interdisciplinary research area that integrates concepts from mathematics, physics, and engineering. For scientific rigor and successful progress in the field, it demands contributions from various STEM areas.

In this context, we are pleased to announce the International Conference on Emergent and Quantum Technologies (ICEQT’23), to be held on July 24-27, 2023, in Las Vegas, NV. The conference aims to provide an opportunity for researchers in the field of quantum machine learning and machine learning researchers interested in applying AI to enhance quantum computing algorithms, to present and discuss recent advancements in their areas of expertise.

Notably, there has been an increasing interest from machine learning researchers to apply AI to the quantum computing domain, and vice versa. As a result, we cordially invite submissions of original research papers that present state-of-the-art contributions in the following areas:

Foundations of Quantum Computing and Quantum Machine Learning

  • Quantum computing models and paradigms, e.g., Grover, Shor, and others
  • Quantum algorithms for Linear Systems of Equations
  • Quantum Tensor Networks and their Applications in QML

Quantum Machine Learning Algorithms

  • Quantum Neural Networks
  • Quantum Hidden Markov Models
  • Quantum PCA
  • Quantum SVM
  • Quantum Autoencoders
  • Quantum Transfer Learning
  • Quantum Boltzmann machines
  • Theory of Quantum-enhanced Machine Learning

AI for Quantum Computing

  • Machine learning for improved quantum algorithm performance
  • Machine learning for quantum control
  • Machine learning for building better quantum hardware

Quantum Algorithms and Applications

  • Quantum computing: models and paradigms
  • Quantum algorithms for hyperparameter tuning (Quantum computing for AutoML)
  • Quantum-enhanced Reinforcement Learning
  • Quantum Annealing
  • Quantum Sampling
  • Applications of Quantum Machine Learning

Fairness and Ethics in Quantum Machine Learning

We look forward to receiving your submissions and to welcoming you to ICEQT’23.

All submissions that are accepted for presentation will be included in the proceedings published by IEEE CPS. To ensure consistency in formatting, authors should follow the general typesetting instructions available on the IEEE’s website, including single-line spacing and a 2-column format. Additionally, authors of accepted papers must agree to the IEEE CPS standard statement regarding copyrights and policies on electronic dissemination.

Prospective authors are encouraged to submit their papers through the conference’s evaluation website at https://american-cse.org/drafts/. More information about the conference, including submission guidelines, can be found on our website at https://baylor.ai/iceqt/.

Important Deadlines

April 12, 2023: Submission of papers: https://american-cse.org/drafts/
– Full/Regular Research Papers (maximum of 8 pages)
– Short Research Papers (maximum of 5 pages)
– Abstract/Poster Papers (maximum of 3 pages)

May 1, 2023: Notification of acceptance (+/- two days)

May 16, 2023: Final papers + Registration

June 21, 2023: Last day for hotel room reservation at a discounted price.

July 24-27, 2023: The 2023 World Congress in Computer Science, Computer Engineering, and Applied Computing (CSCE’23: USA)
Which includes the International Conference on Emergent and Quantum Technologies (ICEQT’23)

Chairs:
Dr. Pablo Rivas, Baylor University
Dr. Javier Orduz, Earlham College

Diving Into Large Language Models: An Exploration of ChatGPT and Its Alternatives

An abstract illustration that depicts a central hub or nucleus from which lines and arrows radiate outwards to represent the different layers.

Large Language Models (LLMs) have become a hot topic in the world of machine learning, with chatbots like ChatGPT and other models gaining widespread popularity. However, keeping up with the latest research and advancements in this rapidly evolving field can be challenging. To help you catch up, we’ve compiled a list of 11 essential research papers that every LLM enthusiast should read. From the original Transformer architecture to recent innovations in efficiency and alignment, these papers will give you a comprehensive understanding of the field and help you stay ahead of the curve. So whether you’re a seasoned LLM practitioner or just getting started, read on to discover the key papers that will take your understanding of this exciting field to the next level.

Foundational Papers on LLM Architecture and Pretraining:

  • “Attention is All You Need” by Vaswani et al.: This paper introduces the Transformer architecture, which uses scaled dot-product attention to process sequences of tokens. It has since become the basis for many state-of-the-art LLMs. (https://arxiv.org/abs/1706.03762)
  • “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding” by Devlin et al.: This paper describes BERT, a powerful LLM that uses masked language modeling to pre-train a bidirectional Transformer encoder. BERT has achieved impressive results on various natural language processing tasks. (https://arxiv.org/abs/1810.04805)
  • “Improving Language Understanding by Generative Pre-Training” by Radford et al.: This paper introduces GPT, an LLM that uses a Transformer decoder to generate text based on a given prompt. It was one of the first models to demonstrate the effectiveness of large-scale unsupervised pretraining. (https://www.cs.ubc.ca/~amuham01/LING530/papers/radford2018improving.pdf)
  • “BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension” by Lewis et al.: BART is an LLM that combines elements of both encoder and decoder architectures and can be fine-tuned for a variety of natural language tasks. (https://arxiv.org/abs/1910.13461)

Methods for Improving LLM Efficiency:

  • “FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness” by Dao et al.: This paper proposes FlashAttention, an IO-aware exact attention algorithm that reduces memory consumption and speeds up attention computation in LLMs. (https://arxiv.org/abs/2205.14135)
  • “Cramming: Training a Language Model on a Single GPU in One Day” by Geiping and Goldstein: This paper investigates how far LLM pretraining can be pushed when training is restricted to a single GPU for one day. (https://arxiv.org/abs/2212.14034)

Methods for Controlling LLM Outputs:

  • “Training Language Models to Follow Instructions with Human Feedback” by Ouyang et al.: This paper introduces InstructGPT, which fine-tunes GPT-3 with reinforcement learning from human feedback (RLHF) so that its outputs better follow user instructions. (https://arxiv.org/abs/2203.02155)
  • “Constitutional AI: Harmlessness from AI Feedback” by Bai et al.: This paper proposes a framework for aligning LLMs with a set of written principles (a “constitution”) and shows how it can be used to reduce harmful text generation. (https://arxiv.org/abs/2212.08073)

Alternative LLM Architectures (ChatGPT Alternatives):

  • “BLOOM: A 176B-Parameter Open-Access Multilingual Language Model” by the BigScience Workshop (Le Scao et al.): BLOOM is an open-access multilingual LLM trained collaboratively by hundreds of researchers and released with its code and checkpoints. (https://arxiv.org/abs/2211.05100)
  • “Improving Alignment of Dialogue Agents via Targeted Human Judgements” by Glaese et al.: This paper introduces Sparrow, a dialogue agent developed by DeepMind that is trained with targeted human judgements and can cite evidence to support its answers. (https://arxiv.org/abs/2209.14375)
  • “BlenderBot 3: A Deployed Conversational Agent that Continually Learns to Responsibly Engage” by Shuster et al.: BlenderBot 3 is a conversational LLM developed by Meta that can search the internet for information to incorporate into its responses. (https://arxiv.org/abs/2208.03188)

Important Ethical Concerns Regarding LLMs:

  • “On the Opportunities and Risks of Foundation Models” by Rishi Bommasani et al. This paper discusses the opportunities and risks associated with “foundation models,” a new class of machine learning models trained on large and diverse datasets. The paper highlights the technical, social, and ethical challenges of deploying foundation models in various domains. (https://arxiv.org/abs/2108.07258)
  • “GPT-3: Its Nature, Scope, Limits, and Consequences” by Luciano Floridi & Massimo Chiriatti. This paper examines the capabilities and limitations of GPT-3, a state-of-the-art language model, and argues that it is not designed to pass tests of mathematical, semantic, or ethical questions. The paper concludes that GPT-3 is not the beginning of a general form of artificial intelligence. (https://link.springer.com/article/10.1007/s11023-020-09548-1)
  • “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜” by Emily M. Bender et al. This paper raises concerns about the risks associated with LLMs like GPT-3, including their environmental and financial costs, and recommends strategies for mitigating those risks. (https://dl.acm.org/doi/abs/10.1145/3442188.3445922)

Before you go ahead and start reading these papers, remember that LLMs such as ChatGPT and its alternatives have revolutionized NLP and hold immense potential for a wide range of applications. However, we must also be mindful of the ethical concerns surrounding these models, such as potential biases and risks of misuse. As the field continues to evolve, we must prioritize ethical considerations and work towards developing models that align with human values and promote the greater good. With the right approach, large language models can enable us to build a more inclusive and equitable future where AI and human collaboration can drive innovation and positive change.