With recent advances in deep learning, text classification has improved significantly. However, this improvement comes at a cost: deep learning models are vulnerable to adversarial examples, which indicates that they are not very robust.
Fortunately, two students in our lab, Korn Sooksatra and Bikram Khanal, observed that the input to a text classifier is discrete, which shields the classifier from state-of-the-art gradient-based attacks. Nonetheless, prior work has produced black-box attacks that successfully manipulate the discrete input values to find adversarial examples. Therefore, instead of changing the discrete values directly, they transform the input into its embedding vector of real values, perform state-of-the-art white-box attacks on that vector, and then convert the perturbed embedding back into text, which they call an adversarial example. In summary, PhD candidates Sooksatra and Khanal created a framework that measures the robustness of a text classifier using the classifier's gradients.
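To make the embedding-space idea concrete, here is a minimal sketch, with a made-up toy vocabulary and random embeddings rather than the authors' actual framework: perturb a token's embedding with a gradient-sign step (the gradient here is randomly generated for illustration; in practice it comes from backpropagation through the classifier), then project the perturbed vector back to the nearest discrete token.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary embedding table (assumption: 8 tokens, 4-dim embeddings).
vocab = ["the", "movie", "was", "great", "bad", "boring", "fun", "ok"]
E = rng.normal(size=(len(vocab), 4))

def nearest_token(vec):
    """Map a (possibly perturbed) embedding back to the closest vocabulary token."""
    dists = np.linalg.norm(E - vec, axis=1)
    return vocab[int(np.argmin(dists))]

# Stand-in for the classifier's gradient w.r.t. the embedding of "great"
# (in a real attack this is obtained by backpropagation through the model).
x = E[vocab.index("great")]
g = rng.normal(size=4)

# FGSM-style step: move the embedding along the gradient sign, then
# project back onto the discrete vocabulary to get a candidate adversarial token.
eps = 2.0
x_adv = x + eps * np.sign(g)
print(nearest_token(x), "->", nearest_token(x_adv))
```

The projection step is what makes the result a valid text input again; how faithfully it preserves the attack is one of the things such a framework has to measure.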
Our editorial piece is freely accessible; it briefly introduces the research on this issue and explains why it matters. It also touches on GPT and DALL·E as examples of the rapid progress of AI and some of the ethical considerations surrounding it. Take a look:
Baylor University has been awarded funding under the SaTC program for Enabling Interdisciplinary Collaboration, a grant led by Principal Investigator Dr. Pablo Rivas and an amazing multidisciplinary group of researchers:
Dr. Gissella Bichler from California State University San Bernardino, Center for Criminal Justice Research, School of Criminology and Criminal Justice.
Dr. Tomas Cerny is at Baylor University in the Computer Science Department, leading software engineering research.
Dr. Laurie Giddens from the University of North Texas, a faculty member at the G. Brint Ryan College of Business.
Dr. Stacy Petter is at Wake Forest University in the School of Business. She and Dr. Giddens have extensive research and funding in human trafficking research.
Dr. Javier Turek, a Research Scientist in Machine Learning at Intel Labs, is our collaborator in matters related to machine learning for natural language processing.
This project was motivated by the increasing pattern of people buying and selling goods and services directly from other people via online marketplaces. While many online marketplaces enable transactions among reputable buyers and sellers, some platforms are vulnerable to suspicious transactions. This project investigates whether it is possible to automate the detection of illegal goods or services within online marketplaces. First, the project team will analyze the text of online advertisements and marketplace policies to identify indicators of suspicious activity. Then, the team will adapt the findings to a specific context to locate stolen motor vehicle parts advertised via online marketplaces. Together, the work will lead to general ways to identify signals of illegal online sales that can be used to help people choose trustworthy marketplaces and avoid illicit actors. This project will also provide law enforcement agencies and online marketplaces with insights to gather evidence on illicit goods or services on those marketplaces.
This research assesses the feasibility of modeling illegal activity in online consumer-to-consumer (C2C) platforms, using platform characteristics, seller profiles, and advertisements to prioritize investigations using actionable intelligence extracted from open-source information. The project is organized around three main steps. First, the research team will combine knowledge from computer science, criminology, and information systems to analyze online marketplace technology platform policies and identify platform features, policies, and terms of service that make platforms more vulnerable to criminal activity. Second, building on the understanding of platform vulnerabilities developed in the first step, the researchers will generate and train deep learning-based language models to detect illicit online commerce. Finally, to assess the generalizability of the identified markers, the investigators will apply the models to markets for motor vehicle parts, a licit marketplace that sometimes includes sellers offering stolen goods. This project establishes a cross-disciplinary partnership among a diverse group of researchers from different institutions and academic disciplines with collaborators from law enforcement and industry to develop practical, actionable insights.
According to the World Federation of the Deaf, there are more than 70 million deaf people worldwide, more than 80% of whom live in developing countries. Recent research by Dr. Gissella Bejarano, our very own postdoctoral research scientist, has been recognized for its impact on computer vision and speech recognition, providing opportunities to help individuals with disabilities. With support from AWS, Dr. Bejarano is finding better ways to translate Peruvian Sign Language using computer vision and natural language processing.
One of our recent papers introduces a novel hybrid quantum machine learning approach to unsupervised representation learning by using a quantum variational circuit that is trainable with traditional gradient descent techniques. Access it here: [ bib | .pdf ]
Much of the work related to quantum machine learning has been popularized in recent years. Some of the most notable efforts involve variational approaches (Cerezo 2021, Khoshaman 2018, Yuan 2019). Researchers have shown that these models are effective on complex tasks, which warrants further study and opens new doors for applied quantum machine learning research.
Another popular approach is to perform kernel learning using a quantum approach (Blank 2020, Schuld 2019, Rebentrost 2014). In this case, the kernel-based projection of the data produces a feasible linear mapping to the desired target as follows:

f(x) = sign( Σᵢ αᵢ κ(xᵢ, x) + b ),

for hyperparameters αᵢ and b that need to be provided or learned. This enables the creation of some types of support vector machines whose kernels are calculated such that the data is processed in the quantum realm. That is, κ(x, x′) = |⟨φ(x)|φ(x′)⟩|², where the feature map φ is implemented by a quantum circuit. The work of Schuld et al. expands the theory behind this idea and shows that all kernel methods can be quantum machine learning methods.
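As a toy illustration of the kernel computation (a classical simulation with an assumed single-qubit angle-encoding feature map, not a real quantum device), the Gram matrix a kernel SVM would consume can be built from state fidelities:

```python
import numpy as np

def feature_map(x):
    """Angle-encode a scalar into a single-qubit state |phi(x)> (simulated classically)."""
    return np.array([np.cos(x / 2), np.sin(x / 2)])

def quantum_kernel(x1, x2):
    """kappa(x, x') = |<phi(x)|phi(x')>|^2: the fidelity between encoded states."""
    return np.abs(feature_map(x1) @ feature_map(x2)) ** 2

# Gram matrix over a toy dataset; a classical SVM would then be trained on K.
X = np.array([0.0, 0.5, 3.0])
K = np.array([[quantum_kernel(a, b) for b in X] for a in X])
print(np.round(K, 3))
```

On a quantum computer, each kernel entry would be estimated by a fidelity-measurement circuit instead of this direct inner product; the downstream SVM training stays entirely classical.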
Recently, in 2020, Mari et al. worked on variational models with a hybrid format. In particular, the authors focused on transfer learning, i.e., the idea of bringing a pre-trained model (or a piece of it) into another model. In their case, the larger model is a computer vision model, e.g., ResNet, whose output feeds a variational quantum circuit that performs classification.
The work we present here follows a similar idea, but we focus on the autoencoder architecture rather than a classification model, and we compare the representations learned by a classical fine-tuned model against those of a variational quantum fine-tuned model.
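For intuition on how such circuits train with traditional gradient descent, here is a minimal classical simulation of a hypothetical one-parameter variational circuit (not our actual model), where the gradient is obtained exactly from two extra circuit evaluations via the parameter-shift rule:

```python
import numpy as np

def expval_z(theta):
    """<Z> after Ry(theta)|0>; the circuit is simulated classically here."""
    return np.cos(theta)

def grad_expval_z(theta):
    """Parameter-shift rule: the exact gradient from two shifted circuit evaluations."""
    return 0.5 * (expval_z(theta + np.pi / 2) - expval_z(theta - np.pi / 2))

# Gradient descent on a squared loss driving <Z> toward a target value.
target, theta, lr = 0.0, 0.1, 0.5
for _ in range(200):
    loss_grad = 2 * (expval_z(theta) - target) * grad_expval_z(theta)
    theta -= lr * loss_grad

print(round(expval_z(theta), 4))  # -> approximately 0.0
```

The key point is that the optimizer never needs the circuit's internals: it only queries expectation values at shifted parameter settings, which is exactly what makes hybrid quantum-classical training with standard gradient descent possible.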
A unique feature of a quantum computer, compared to a classical computer, is that its bit (often referred to as a qubit) can be in one of two states (0 or 1) or in a superposition of the two (a linear combination of 0 and 1) at any given time.
The most common mathematical representation of a qubit is

|ψ⟩ = α|0⟩ + β|1⟩,

which denotes a superposition state, where α and β are complex numbers satisfying |α|² + |β|² = 1, and |0⟩ and |1⟩ are the computational basis states that form an orthonormal basis in this vector space.
To know more about how quantum machine learning takes advantage of this, check our newest article here.
Baylor University has been awarded an Industry-University Cooperative Research Centers planning grant led by Principal Investigator Dr. Pablo Rivas.
The last twenty years have seen an unprecedented growth of AI-enabled technologies in practically every industry. More recently, an emphasis has been placed on ensuring industry and government agencies that use or produce AI-enabled technology have a social responsibility to protect consumers and increase trustworthiness in products and services. As a result, regulatory groups are producing standards for artificial intelligence (AI) ethics worldwide. The Center for Standards and Ethics in Artificial Intelligence (CSEAI) aims to provide industry and government the necessary resources for adopting and efficiently implementing standards and ethical practices in AI through research, outreach, and education.
CSEAI’s mission is to work closely with industry and government research partners to study AI protocols, procedures, and technologies that enable the design, implementation, and adoption of safe, effective, and ethical AI standards. The varied AI skillsets of CSEAI faculty enable the center to address various fundamental research challenges associated with the responsible, equitable, traceable, reliable, and governable development of AI-fueled technologies. The site at Baylor University supports research areas that include bias mitigation through variational deep learning; assessment of products’ sensitivity to AI-guided adversarial attacks; and fairness evaluation metrics.
The CSEAI will help industry and government organizations that use or produce AI technology to provide standardized, ethical products safe for consumers and users, helping the public regain trust and confidence in AI technology. The center will recruit, train, and mentor undergraduates, graduate students, and postdocs from diverse backgrounds, motivating them to pursue careers in AI ethics and producing a diverse workforce trained in standardized and ethical AI. The center will release specific ethics assessment tools, and AI best practices will be licensed or made available to various stakeholders through publications, conference presentations, and the CSEAI summer school.
Both a publicly accessible repository and a secured members-only repository (comprising meeting materials, workshop information, research topics and details, publications, etc.) will be maintained on-site at Baylor University, on a government/DoD-approved cloud service, or both. A single public and secured repository will be used for CSEAI, where permissible, to facilitate continuity of efforts and information between the different sites. This repository will be accessible at a publicly listed URL at Baylor University, https://cseai.center, for the lifetime of the center and moved to an archiving service once no longer maintained.
The CSEAI is partnering with Rutgers University, directed by Dr. Jorge Ortiz, and the University of Miami, directed by Dr. Daniel Diaz. The Industry Liaison Officer is Laura Montoya, a well-known industry leader, AI ethics advocate, and entrepreneur.
The three institutions bring together a broad range of skills, forming a unique center that functions as a whole. Every faculty member at every institution brings a unique perspective to the CSEAI.
Baylor Co-PIs: Academic Leadership Team
The lead site at Baylor comprises four faculty members who serve at different levels: Dr. Robert Marks is the faculty lead on the Academic Leadership Team, working closely with PI Rivas on project execution and strategic research planning; Dr. Greg Hamerly and Dr. Liang Dong strengthen and diversify the general ML research and application areas; and Dr. Tomas Cerny extends the research capability into the software engineering realm.
Dr. Pamela Harper from Marist College has been a long-lasting collaborator of PI Rivas in matters of business and management ethics and is a collaborator of the CSEAI in those areas. On the other hand, Patricia Shaw is a lawyer and an international advisor on tech ethics policy, governance, and regulation. She works with PI Rivas in developing the AI Ethics Standard IEEE P7003 (algorithmic bias).
Workforce Development Plan
The CSEAI plans to develop the workforce through many avenues, including research mentoring for both undergraduate and graduate students as well as continuing education for industry professionals through specialized training and ad hoc certificates.
When observing a fully trained CNN, researchers have found that the patterns in the kernel filters (convolution windows) of the receptive convolutional layer closely resemble Gabor filters. Gabor filters have existed for a long time and have been used extensively for texture analysis. Given the nature and purpose of the receptive layer of a CNN, Gabor filters could be a suitable replacement for the randomly initialized kernels of that layer, potentially boosting performance regardless of the nature of the dataset. The findings in this thesis show that when low-level kernel filters are initialized with Gabor filters, there is a boost in accuracy, Area Under the ROC (Receiver Operating Characteristic) Curve (AUC), minimum loss, and, in some cases, speed, depending on the complexity of the dataset. [pdf, bib]
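As a sketch of the initialization idea (parameter values here are illustrative, not the ones used in the thesis), a bank of Gabor kernels can be generated at several orientations and shaped like a convolutional layer's weight tensor, ready to replace the layer's random initialization:

```python
import numpy as np

def gabor_kernel(size=5, sigma=2.0, theta=0.0, lam=4.0, psi=0.0, gamma=0.5):
    """Real-valued Gabor filter: a Gaussian envelope modulating a sinusoid."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    x_t = x * np.cos(theta) + y * np.sin(theta)       # rotate coordinates
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + (gamma * y_t)**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * x_t / lam + psi)

# A filter bank at 8 orientations, arranged as (out_channels, in_channels, k, k),
# the weight shape of a single-channel-input conv layer.
thetas = np.linspace(0, np.pi, 8, endpoint=False)
bank = np.stack([gabor_kernel(theta=t) for t in thetas])[:, None, :, :]
print(bank.shape)  # -> (8, 1, 5, 5)
```

In a deep learning framework, these values would simply be copied into the first layer's weights before training begins, leaving the rest of the network randomly initialized as usual.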
This study provides a better understanding of the LCC passenger dissatisfaction phenomenon, identifying which themes are important and require urgent attention. The findings show that over 10 years, the LCC passenger dissatisfaction criteria evolved, meaning that LCCs should be strongly aware of areas of concern in order to maintain passenger satisfaction. Based on classic data analytics, four themes (flight delay, ground staff attitude, luggage handling, and seat comfort) were identified as playing a crucial role in passenger dissatisfaction. Interestingly, LCC passengers were not found to have a problem with cabin crew attitude. Two possible reasons for the prominence of ground staff dissatisfaction may simply be that LCC ground staff lack training and that passengers expect ground staff to have the authority to make decisions and to be aware of passengers' needs. Overall, when ground staff are not able to deal with passengers' demands, passengers feel dissatisfied. In addition, the study found that the check-in counter, food, airline ground announcements, airline responses, cleanliness, and additional/personal costs are secondary themes in passenger dissatisfaction. This study, therefore, clearly shows that LCCs should prioritize their efforts to minimize passenger dissatisfaction by first addressing the primary themes. [pdf, bib]
A swarm of artificial intelligence (AI) ethics standards and regulations is being discussed, developed, and released worldwide. The need for an academic forum to discuss the application of such standards and regulations is evident. The research community needs to keep track of updates to these standards, as well as of the publication of use cases and other practical considerations around them.
This Special Issue of the journal AI on “Standards and Ethics in AI” will publish research papers on applied AI ethics, including the standards in AI ethics. This implies interactions among technology, science, and society in terms of applied AI ethics and standards; the impact of such standards and ethical issues on individuals and society; and the development of novel ethical practices of AI technology. The journal will also provide a forum for the open discussion of resulting issues of the application of such standards and practices across different social contexts and communities. More specifically, this Special Issue welcomes submissions on the following topics:
AI ethics standards and best practices;
Applied AI ethics and case studies;
AI fairness, accountability, and transparency;
Quantitative metrics of AI ethics and fairness;
Review papers on AI ethics standards;
Reports on the development of AI ethics standards and best practices.
Note, however, that manuscripts that are purely philosophical in nature may be discouraged in favor of applied ethics discussions that give readers a clear understanding of standards, best practices, experiments, quantitative measurements, and case studies, leading readers from academia, industry, and government to actionable insight.
Dr. Pablo Rivas
Dr. Gissella Bejarano
Dr. Javier Orduz
Guest Editors
Manuscript Submission Information
Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.
Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. AI is an international peer-reviewed open access quarterly journal published by MDPI.