SICEM: A Sensitivity-Inspired Constrained Evaluation Method for Adversarial Attacks on Classifiers with Occluded Input Data

In the rapidly evolving field of artificial intelligence, understanding the sensitivity of models to adversarial attacks is crucial. In our recent paper, Korn Sooksatra introduces the Sensitivity-Inspired Constrained Evaluation Method (SICEM) to address this concern.

Sooksatra, K., Rivas, P. Evaluation of adversarial attacks sensitivity of classifiers with occluded input data. Neural Comput & Applic 34, 17615–17632 (2022). https://doi.org/10.1007/s00521-022-07387-y

Understanding SICEM

Our proposed method, SICEM, evaluates how vulnerable an incomplete input is to an adversarial attack in comparison to a complete one. This is achieved by leveraging the Jacobian matrix: we calculate the sensitivity of the target classifier's output to each attribute of the input, providing a comprehensive understanding of how changes in the input affect the output.

    \[ s(x,y)_i =  \left|\min \left(0, \frac{\partial Z(x)_y}{\partial x_i} \cdot \left(\sum_{y^{'} \neq y} \frac{\partial Z(x)_{y^{'}}}{\partial x_i}\right) \cdot C(y, 1, 0)_i\right)\right| \]

This sensitivity score gives us an insight into how much each attribute of the input contributes to the output’s sensitivity. The score is then used to estimate the overall sensitivity of the given input and its mask.

    \[ S(x, M)_y = \sum_{i=0}^{n-1} (s(x, y)_i \cdot M_i) \]

Dividing the sensitivity of an occluded input by that of the complete input yields a sensitivity ratio, a comparative measure of how sensitive the classifier's output is for an incomplete input versus a complete one.
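To make the two formulas above concrete, here is a minimal sketch for a linear classifier Z(x) = Wx, whose Jacobian is simply W (so the scores do not depend on x); the term C(y, 1, 0)_i is omitted for brevity, and all names here are our own illustration rather than the paper's code.

```python
import numpy as np

def sicem_sensitivity(W, y, mask):
    """Toy version of the SICEM scores for a linear classifier
    Z(x) = W @ x, whose Jacobian dZ/dx is simply W."""
    n_classes = W.shape[0]
    others = np.arange(n_classes) != y
    # s(x, y)_i = |min(0, dZ_y/dx_i * sum_{y' != y} dZ_{y'}/dx_i)|
    s = np.abs(np.minimum(0.0, W[y] * W[others].sum(axis=0)))
    # S(x, M)_y: sum the per-attribute scores over the unmasked attributes
    return float((s * mask).sum())

W = np.array([[1.0, -2.0, 0.5],
              [0.5,  1.0, -1.0]])
full = sicem_sensitivity(W, y=0, mask=np.ones(3))                 # complete input
occluded = sicem_sensitivity(W, y=0, mask=np.array([1.0, 0.0, 1.0]))
ratio = occluded / full   # sensitivity ratio of occluded vs. complete
```

A ratio below 1 indicates that occluding those attributes removes part of the input's attack surface.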

Results and Implications

Our focus was on an automobile image from the CIFAR-10 dataset. Interestingly, adversarial examples generated by FGSM and IGSM required the same value of \epsilon, which was significantly lower than for other images. This can be attributed to the layer-wise linearity of the classifier: larger inputs, like the automobile image, require a smaller \epsilon to create an adversarial example. JSMA, however, required a higher \epsilon because it is constrained by the L_0 norm.
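For context, FGSM perturbs every input attribute by \epsilon in the direction of the sign of the loss gradient; a minimal sketch (our own illustration, not the paper's code):

```python
import numpy as np

def fgsm(x, loss_grad, eps):
    """Fast Gradient Sign Method: x_adv = x + eps * sign(grad),
    moving each attribute eps in the direction that increases the loss."""
    return x + eps * np.sign(loss_grad)

x = np.array([0.2, 0.5, 0.9])
grad = np.array([0.3, -0.7, 0.0])   # gradient of the loss w.r.t. x
x_adv = fgsm(x, grad, eps=0.1)      # [0.3, 0.4, 0.9]
```

IGSM simply applies this step iteratively with a smaller step size and clipping, which is why both methods end up needing the same \epsilon on highly linear classifiers.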

Understanding the sensitivity of AI models is paramount in ensuring their robustness against adversarial attacks. The SICEM method provides a comprehensive tool to ensure safer and more reliable AI systems. Read the full paper here [ bib |  .pdf ].

Hybrid Quantum Variational Autoencoders for Representation Learning

One of our recent papers introduces a novel hybrid quantum machine learning approach to unsupervised representation learning by using a quantum variational circuit that is trainable with traditional gradient descent techniques. Access it here: [ bib | .pdf ]

Much of the work related to quantum machine learning has been popularized in recent years. Some of the most notable efforts involve variational approaches (Cerezo 2021, Khoshaman 2018, Yuan 2019). Researchers have shown that these models are effective in complex tasks that warrant further studies and open new doors for applied quantum machine learning research. Another popular approach is to perform kernel learning using a quantum approach (Blank 2020, Schuld 2019, Rebentrost 2014). In this case, the kernel-based projection of data \mathbf{x} produces a feasible linear mapping to the desired target y as follows:

(1)   \begin{equation*}     y(\mathbf{x})=\operatorname{sign}\left(\sum_{j=1}^{M} \alpha_{j} k\left(\mathbf{x}_{j}, \mathbf{x}\right)+b\right) \end{equation*}

for hyperparameters b, \alpha that need to be provided or learned. This enables the creation of certain types of support vector machines whose kernels are calculated such that the data \mathbf{x} is processed in the quantum realm, that is, \left|\mathbf{x}_{j}\right\rangle=1 /\left|\mathbf{x}_{j}\right| \sum_{k=1}^{N}\left(\mathbf{x}_{j}\right)_{k}|k\rangle. The work of Schuld et al. expands the theory behind this idea and shows that all kernel methods can be quantum machine learning methods. Recently, in 2020, Mari et al. worked on variational models that are hybrid in format. In particular, the authors focused on transfer learning, i.e., the idea of bringing a pre-trained model (or a piece of it) into another model. In Mari's case, the larger model is a computer vision model, e.g., ResNet, which becomes part of a variational quantum circuit that performs classification. The work we present here follows a similar idea, but we focus on the autoencoder architecture rather than a classification model, and on comparing the representations learned by a classical model and a variational quantum fine-tuned model.
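The decision function above can be sketched classically; here the quantum kernel is stood in for by the squared overlap of amplitude-normalized vectors, mimicking the encoding \left|\mathbf{x}_{j}\right\rangle above (a toy stand-in, not an actual quantum circuit):

```python
import numpy as np

def fidelity_kernel(a, b):
    """Squared overlap |<a|b>|^2 of amplitude-normalized vectors."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return float(np.dot(a, b) ** 2)

def kernel_classifier(x, support, alpha, b, kernel=fidelity_kernel):
    """y(x) = sign(sum_j alpha_j * k(x_j, x) + b)."""
    score = sum(a_j * kernel(x_j, x) for a_j, x_j in zip(alpha, support)) + b
    return 1 if score >= 0 else -1

support = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
alpha, b = [1.0, -1.0], 0.0
label = kernel_classifier(np.array([0.9, 0.1]), support, alpha, b)  # 1
```

In the quantum version, only the kernel evaluation k(x_j, x) is delegated to a quantum device; the weighting by \alpha and the sign decision stay classical.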

Artificial Intelligence Computing at the Quantum Level

A unique feature of a quantum computer, in comparison to a classical computer, is that its unit of information (often referred to as a qubit) can be not only in one of two states (0 or 1) but also in a superposition of the two (a linear combination of 0 and 1) at any given time. The most common mathematical representation of a qubit is

    \[ |\psi\rangle = \alpha|0\rangle + \beta|1\rangle,  \]

which denotes a superposition state, where \alpha, \beta are complex numbers satisfying |\alpha|^2 + |\beta|^2 = 1, and |0\rangle, |1\rangle are computational basis states that form an orthonormal basis of this vector space.
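As a quick illustration (using a plain NumPy state vector rather than a quantum SDK), the equal superposition \alpha = \beta = 1/\sqrt{2} yields a 50/50 chance of measuring either basis state:

```python
import numpy as np

# Computational basis states |0> and |1> as vectors in C^2
ket0 = np.array([1.0, 0.0], dtype=complex)
ket1 = np.array([0.0, 1.0], dtype=complex)

# Equal superposition: alpha = beta = 1/sqrt(2)
alpha = beta = 1 / np.sqrt(2)
psi = alpha * ket0 + beta * ket1          # |psi> = alpha|0> + beta|1>

# Born rule: measurement probabilities are |alpha|^2 and |beta|^2
p0, p1 = abs(psi[0]) ** 2, abs(psi[1]) ** 2   # both 0.5
```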

To know more about how quantum machine learning takes advantage of this, check our newest article here.

NSF Award: Center for Standards and Ethics in Artificial Intelligence (CSEAI)

IUCRC Planning Grant

Baylor University has been awarded an Industry-University Cooperative Research Centers planning grant led by Principal Investigator Dr. Pablo Rivas.

The last twenty years have seen an unprecedented growth of AI-enabled technologies in practically every industry. More recently, an emphasis has been placed on ensuring industry and government agencies that use or produce AI-enabled technology have a social responsibility to protect consumers and increase trustworthiness in products and services. As a result, regulatory groups are producing standards for artificial intelligence (AI) ethics worldwide. The Center for Standards and Ethics in Artificial Intelligence (CSEAI) aims to provide industry and government the necessary resources for adopting and efficiently implementing standards and ethical practices in AI through research, outreach, and education.

CSEAI’s mission is to work closely with industry and government research partners to study AI protocols, procedures, and technologies that enable the design, implementation, and adoption of safe, effective, and ethical AI standards. The varied AI skillsets of CSEAI faculty enable the center to address various fundamental research challenges associated with the responsible, equitable, traceable, reliable, and governable development of AI-fueled technologies. The site at Baylor University supports research areas that include bias mitigation through variational deep learning; assessment of products’ sensitivity to AI-guided adversarial attacks; and fairness evaluation metrics.

The CSEAI will help industry and government organizations that use or produce AI technology to provide standardized, ethical products safe for consumers and users, helping the public regain trust and confidence in AI technology. The center will recruit, train, and mentor undergraduates, graduate students, and postdocs from diverse backgrounds, motivating them to pursue careers in AI ethics and producing a diverse workforce trained in standardized and ethical AI. The center will release specific ethics assessment tools and AI best practices, which will be licensed or made available to various stakeholders through publications, conference presentations, and the CSEAI summer school.

Both a publicly accessible repository and a secured members-only repository (comprising meeting materials, workshop information, research topics and details, publications, etc.) will be maintained on-site at Baylor University and/or on a government/DoD-approved cloud service. A single public and secured repository will be used for the CSEAI, where permissible, to facilitate continuity of efforts and information between the different sites. This repository will be accessible at a publicly listed URL at Baylor University, https://cseai.center, for the lifetime of the center, and moved to an archiving service once no longer maintained.

Lead Institutions

The CSEAI is partnering with Rutgers University, directed by Dr. Jorge Ortiz, and the University of Miami, directed by Dr. Daniel Diaz. The Industry Liaison Officer is Laura Montoya, a well-known industry leader, AI ethics advocate, and entrepreneur.

Together, the three institutions bring a broad range of skills that form a unique center functioning as a whole. Every faculty member at every institution brings a unique perspective to the CSEAI.

Baylor Co-PIs: Academic Leadership Team

The lead site at Baylor comprises four faculty who serve at different levels, with Dr. Robert Marks as the faculty lead in the Academic Leadership Team, working closely with PI Rivas on project execution and strategic research planning. Dr. Greg Hamerly and Dr. Liang Dong strengthen and diversify the general ML research and application areas, while Dr. Tomas Cerny expands research capability into the software engineering realm.

Collaborators

Dr. Pamela Harper from Marist College has been a long-standing collaborator of PI Rivas in matters of business and management ethics and collaborates with the CSEAI in those areas. In addition, Patricia Shaw is a lawyer and an international advisor on tech ethics policy, governance, and regulation; she works with PI Rivas on developing the AI ethics standard IEEE P7003 (algorithmic bias).

Workforce Development Plan

The CSEAI plans to develop the workforce through many avenues, including undergraduate and graduate student research mentoring as well as continuing education for industry professionals through specialized training and ad hoc certificates.

On the Performance of Convolutional Neural Networks Initialized with Gabor Filters

When observing a fully trained CNN, researchers have found that the patterns on the kernel filters (convolution windows) of the receptive convolutional layer closely resemble Gabor filters. Gabor filters have existed for a long time, and researchers have been using them for texture analysis. Given the nature and purpose of the receptive layer of a CNN, Gabor filters could act as a suitable replacement for the randomly initialized kernels of the receptive layer, potentially boosting performance regardless of the nature of the dataset. The findings in this thesis show that when low-level kernel filters are initialized with Gabor filters, there is a boost in accuracy, Area Under the ROC (Receiver Operating Characteristic) Curve (AUC), minimum loss, and, in some cases, speed, depending on the complexity of the dataset. [pdf, bib]

Different Gabor filters with different values for \lambda, \theta, and \gamma. Different parameters will change filter properties.
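As a sketch of the initialization strategy, the following builds a small bank of Gabor kernels at several orientations \theta using the standard real Gabor formula; the parameter values here are illustrative, not the thesis's settings:

```python
import numpy as np

def gabor_kernel(size, lam, theta, gamma, sigma=2.0, psi=0.0):
    """Real Gabor filter on a size x size grid: a cosine carrier of
    wavelength lam, oriented by theta, under a Gaussian envelope with
    spread sigma and aspect ratio gamma."""
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    xr = xs * np.cos(theta) + ys * np.sin(theta)    # rotated coordinates
    yr = -xs * np.sin(theta) + ys * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + gamma ** 2 * yr ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * xr / lam + psi)

# A bank of filters at 8 orientations, e.g. to seed the first conv
# layer's kernels in place of random initialization
bank = np.stack([gabor_kernel(5, lam=4.0, theta=t, gamma=0.5)
                 for t in np.linspace(0, np.pi, 8, endpoint=False)])
```

Each 5x5 kernel in the bank would be copied into one output channel of the first convolutional layer before training begins.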

How High can you Fly? LCC Passenger Dissatisfaction

This study provides a better understanding of the LCC passenger dissatisfaction phenomenon, as we now have an idea of which themes are important and require urgent attention. The findings show that over 10 years, the LCC passenger dissatisfaction criteria evolved, meaning that LCCs should be strongly aware of areas of concern in order to maintain passenger satisfaction. Based on classic data analytics, four themes – flight delay, ground staff attitude, luggage handling, and seat comfort – were identified as playing a crucial role in passenger dissatisfaction. Interestingly, LCC passengers were not found to have a problem with cabin crew attitude. Two possible reasons ground staff feature among the major dissatisfaction themes may simply be that LCC ground staff lack training and that passengers expect ground staff to have the authority to make decisions and to be aware of passengers’ needs. Overall, when ground staff are not able to deal with passengers’ demands, passengers feel dissatisfied. In addition, the study found that the check-in counter, food, airline ground announcements, airline responses, cleanliness, and additional/personal costs are secondary themes in passenger dissatisfaction. This study, therefore, clearly shows that LCCs should prioritize their efforts to minimize passenger dissatisfaction by first addressing the primary themes. [pdf, bib]

The Final Themes of LCC Passenger Dissatisfaction

Special Issue “Standards and Ethics in AI”

Upcoming Rolling Deadline: May 31, 2022

Dear Colleagues,

There is a swarm of artificial intelligence (AI) ethics standards and regulations being discussed, developed, and released worldwide. The need for an academic discussion forum for the application of such standards and regulations is evident. The research community needs to keep track of updates to these standards and of the publication of use cases and other practical considerations around them.

This Special Issue of the journal AI on “Standards and Ethics in AI” will publish research papers on applied AI ethics, including the standards in AI ethics. This implies interactions among technology, science, and society in terms of applied AI ethics and standards; the impact of such standards and ethical issues on individuals and society; and the development of novel ethical practices of AI technology. The journal will also provide a forum for the open discussion of resulting issues of the application of such standards and practices across different social contexts and communities. More specifically, this Special Issue welcomes submissions on the following topics:

  • AI ethics standards and best practices;
  • Applied AI ethics and case studies;
  • AI fairness, accountability, and transparency;
  • Quantitative metrics of AI ethics and fairness;
  • Review papers on AI ethics standards;
  • Reports on the development of AI ethics standards and best practices.

Note, however, that manuscripts that are purely philosophical in nature may be discouraged in favor of applied ethics discussions in which readers gain a clear understanding of the standards, best practices, experiments, quantitative measurements, and case studies, leading readers from academia, industry, and government to actionable insight.

Dr. Pablo Rivas
Dr. Gissella Bejarano
Dr. Javier Orduz
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website and proceeding to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. AI is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open-access journal is 1000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI’s English editing service prior to publication or during author revisions.

International Conference on Emergent and Quantum Technologies (ICEQT’22)

July 25-28, 2022 — Las Vegas, NV

Dear Colleagues,

Quantum computing is an emerging interdisciplinary research area at the intersection of mathematics, physics, and engineering. It requires experts and specialists from across STEM areas to ensure scientific rigor and to keep up with technological advances.

The main goal of ICEQT’22 is to share knowledge about recent advancements in the field of QML and to build a forum for discussion among researchers working in this field, as well as machine learning researchers attending The 2022 World Congress in Computer Science, Computer Engineering, & Applied Computing (CSCE’22) who are interested in applying AI to enhance quantum computing algorithms.

In recent years, we have observed a significant number of published research papers in the quantum machine learning domain. There is increasing interest from machine learning researchers in applying AI to the quantum computing domain (and vice versa). Therefore, we invite contributions in the following areas:

AI for Quantum
* Machine learning for improved quantum algorithm performance
* Machine learning for quantum control
* Machine learning for building better quantum hardware
Quantum technologies and applications
* Quantum computing: models and paradigms
* Fairness/ethics with quantum machine learning
* Quantum algorithms for hyperparameter tuning (Quantum computing for AutoML)
* Theory of Quantum-enhanced Machine Learning
* Quantum Machine Learning Algorithms based on Grover search
* Quantum-enhanced Reinforcement Learning
* Quantum computing models and paradigms, such as Quantum Annealing and Quantum Sampling
Quantum computing foundations
* Applications of Quantum Machine Learning
* Quantum Tensor Networks and their Applications in QML
* Quantum algorithms for Linear Systems of Equations, and other algorithms such as Quantum Neural Networks, Quantum Hidden Markov Models, Quantum PCA, Quantum SVM, Quantum Autoencoders, Quantum Transfer Learning, Quantum Boltzmann machines, Grover, Shor, and others.

You are invited to submit a paper for consideration. ALL ACCEPTED PAPERS will be published in the corresponding proceedings by Publisher:
Springer Nature – Book Series: Transactions on Computational Science & Computational Intelligence
https://www.springer.com/series/11769

Prospective authors are invited to submit their papers by uploading them to the evaluation website at:
https://american-cse.org/drafts/

For more information, visit our website:
https://baylor.ai/iceqt/

Important Deadlines

March 31, 2022: Submission of papers: https://american-cse.org/drafts/
– Full/Regular Research Papers (maximum of 10 pages)
– Short Research Papers (maximum of 6 pages)
– Abstract/Poster Papers (maximum of 3 pages)

April 18, 2022: Notification of acceptance (+/- two days)

May 12, 2022: Final papers + Registration

June 22, 2022: Hotel Room reservation (for those who are physically attending the conference).

July 25-28, 2022: The 2022 World Congress in Computer Science, Computer Engineering, and Applied Computing (CSCE’22: USA)
Which includes the International Conference on Emergent and Quantum Technologies (ICEQT’22)

Chairs:
Dr. Javier Orduz, Baylor University
Dr. Pablo Rivas, Baylor University

A review of Earth Artificial Intelligence

In recent years, Earth system sciences have been urgently calling for innovation in improving accuracy, enhancing model intelligence, scaling up operations, and reducing costs in many subdomains, amid exponentially accumulating datasets and the promising artificial intelligence (AI) revolution in computer science. This paper presents work led by the NASA Earth Science Data Systems Working Groups and the ESIP machine learning cluster to give a comprehensive overview of AI in Earth sciences. It holistically introduces the current status, technology, use cases, challenges, and opportunities, providing AI practitioners in geosciences at all levels with an overall big picture and aiming to “blow away the fog to get a clearer vision” of the future development of Earth AI. The paper covers all the major spheres of the Earth system and investigates representative AI research in each domain. Widely used AI algorithms and computing cyberinfrastructure are briefly introduced. The mandatory steps in a typical workflow for specializing AI to solve Earth science problems are decomposed and analyzed. It concludes with the grand challenges and opportunities, offering guidance and pre-warnings on allocating resources wisely to achieve the ambitious Earth AI goals in the future. [pdf, bib]

Challenges and opportunities.