NSF Award: Using NLP to Identify Suspicious Transactions in Omnichannel Online C2C Marketplaces

Baylor University has been awarded funding under the SaTC program (Enabling Interdisciplinary Collaboration), a grant led by Principal Investigator Dr. Pablo Rivas and a multidisciplinary team of researchers:

  • Dr. Gissella Bichler from California State University, San Bernardino, Center for Criminal Justice Research, School of Criminology and Criminal Justice.
  • Dr. Tomas Cerny from Baylor University, Department of Computer Science, where he leads software engineering research.
  • Dr. Laurie Giddens from the University of North Texas, a faculty member at the G. Brint Ryan College of Business.
  • Dr. Stacy Petter from Wake Forest University, School of Business. She and Dr. Giddens have an extensive record of research and funding in the study of human trafficking.
  • Dr. Javier Turek, a Research Scientist in Machine Learning at Intel Labs, our collaborator on machine learning for natural language processing.

We also have two Ph.D. students working on this project: Alejandro Rodriguez and Korn Sooksatra.

This project was motivated by the growing trend of people buying and selling goods and services directly from one another via online marketplaces. While many online marketplaces enable transactions among reputable buyers and sellers, some platforms are vulnerable to suspicious transactions. This project investigates whether it is possible to automate the detection of illegal goods or services within online marketplaces. First, the project team will analyze the text of online advertisements and marketplace policies to identify indicators of suspicious activity. Then, the team will adapt the findings to a specific context: locating stolen motor vehicle parts advertised via online marketplaces. Together, this work will yield general ways to identify signals of illegal online sales that can help people choose trustworthy marketplaces and avoid illicit actors. The project will also provide law enforcement agencies and online marketplaces with insights for gathering evidence on illicit goods or services on those marketplaces.

This research assesses the feasibility of modeling illegal activity in online consumer-to-consumer (C2C) platforms, drawing on platform characteristics, seller profiles, and advertisements to prioritize investigations with actionable intelligence extracted from open-source information. The project is organized around three main steps. First, the research team will combine knowledge from computer science, criminology, and information systems to analyze online marketplace technology platform policies and identify platform features, policies, and terms of service that make platforms more vulnerable to criminal activity. Second, building on the understanding of platform vulnerabilities developed in the first step, the researchers will generate and train deep learning-based language models to detect illicit online commerce. Finally, to assess the generalizability of the identified markers, the investigators will apply the models to markets for motor vehicle parts, a licit marketplace that sometimes includes sellers offering stolen goods. This project establishes a cross-disciplinary partnership among a diverse group of researchers from different institutions and academic disciplines, with collaborators from law enforcement and industry, to develop practical, actionable insights.

Self-supervised modeling. Given a corpus associated with a C2C domain of interest, along with ontologies, we will extract features and apply attention mechanisms for self-supervised and supervised tasks. The self-supervised objectives include completing missing information and domain-specific text encoding for learning representations. Supervised tasks will then leverage these representations to learn relationships with the targets.
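The "completion of missing information" objective described above can be illustrated with a masked-token prediction setup. The following is a minimal, self-contained sketch in PyTorch; the vocabulary size, model dimensions, mask rate, and the `TinyMaskedLM` model itself are illustrative assumptions, not the project's actual architecture or data.

```python
import torch
import torch.nn as nn

# Illustrative sizes and special-token ids (assumptions, not project settings).
VOCAB, D_MODEL, MASK_ID, PAD_ID = 100, 32, 1, 0

class TinyMaskedLM(nn.Module):
    """A small transformer encoder with a token-prediction head."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, D_MODEL, padding_idx=PAD_ID)
        layer = nn.TransformerEncoderLayer(D_MODEL, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(D_MODEL, VOCAB)  # predicts the original token id

    def forward(self, ids):
        return self.head(self.encoder(self.embed(ids)))

def mask_tokens(ids, mask_rate=0.15):
    """Corrupt a random subset of tokens; keep labels only at masked spots."""
    labels = ids.clone()
    mask = (torch.rand(ids.shape) < mask_rate) & (ids != PAD_ID)
    labels[~mask] = -100            # -100 is ignored by cross_entropy
    corrupted = ids.clone()
    corrupted[mask] = MASK_ID
    return corrupted, labels

torch.manual_seed(0)
model = TinyMaskedLM()
ids = torch.randint(2, VOCAB, (8, 16))      # a fake batch of token ids
corrupted, labels = mask_tokens(ids)
logits = model(corrupted)
loss = nn.functional.cross_entropy(logits.reshape(-1, VOCAB), labels.reshape(-1))
```

Minimizing this loss trains the encoder to reconstruct masked tokens from context; the encoder's hidden states can then serve as the learned representations that downstream supervised tasks build on.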

NSF Award: Center for Standards and Ethics in Artificial Intelligence (CSEAI)

IUCRC Planning Grant

Baylor University has been awarded an Industry-University Cooperative Research Centers planning grant led by Principal Investigator Dr. Pablo Rivas.

The last twenty years have seen unprecedented growth of AI-enabled technologies in practically every industry. More recently, emphasis has been placed on ensuring that industry and government agencies that use or produce AI-enabled technology meet their social responsibility to protect consumers and increase the trustworthiness of products and services. As a result, regulatory groups worldwide are producing standards for artificial intelligence (AI) ethics. The Center for Standards and Ethics in Artificial Intelligence (CSEAI) aims to provide industry and government with the resources necessary for adopting and efficiently implementing standards and ethical practices in AI through research, outreach, and education.

CSEAI’s mission is to work closely with industry and government research partners to study AI protocols, procedures, and technologies that enable the design, implementation, and adoption of safe, effective, and ethical AI standards. The varied AI skillsets of CSEAI faculty enable the center to address various fundamental research challenges associated with the responsible, equitable, traceable, reliable, and governable development of AI-fueled technologies. The site at Baylor University supports research areas that include bias mitigation through variational deep learning; assessment of products’ sensitivity to AI-guided adversarial attacks; and fairness evaluation metrics.
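One of the research areas above, fairness evaluation metrics, can be made concrete with a small example. The sketch below computes the demographic parity difference, a common fairness metric: the gap in positive-prediction rates between two groups. The function, data, and group labels are illustrative assumptions, not a tool or dataset from the center.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rate between two groups (0/1)."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # positive rate for group 0
    rate_b = y_pred[group == 1].mean()  # positive rate for group 1
    return abs(rate_a - rate_b)

# Toy predictions: group 0 receives positives at 3/4, group 1 at 1/4.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
gap = demographic_parity_difference(preds, groups)  # 0.5
```

A gap near zero indicates the model flags both groups at similar rates; a large gap is one quantitative signal of disparate treatment worth investigating.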

The CSEAI will help industry and government organizations that use or produce AI technology to provide standardized, ethical products that are safe for consumers and users, helping the public regain trust and confidence in AI technology. The center will recruit, train, and mentor undergraduates, graduate students, and postdocs from diverse backgrounds, motivating them to pursue careers in AI ethics and producing a diverse workforce trained in standardized and ethical AI. Specific ethics assessment tools and AI best practices developed by the center will be licensed or made available to stakeholders through publications, conference presentations, and the CSEAI summer school.

Both a publicly accessible repository and a secured members-only repository (comprising meeting materials, workshop information, research topics and details, publications, etc.) will be maintained on-site at Baylor University and/or on a government/DoD-approved cloud service. Where permissible, a single public and secured repository will be used across CSEAI sites to facilitate continuity of efforts and information. This repository will be accessible at a publicly listed URL at Baylor University, https://cseai.center, for the lifetime of the center and moved to an archiving service once it is no longer maintained.

Lead Institutions

The CSEAI is partnering with Rutgers University, directed by Dr. Jorge Ortiz, and the University of Miami, directed by Dr. Daniel Diaz. The Industry Liaison Officer is Laura Montoya, a well-known industry leader, AI ethics advocate, and entrepreneur.

The three institutions contribute complementary skill sets that together form a unique center functioning as a whole. Every faculty member at every institution brings a unique perspective to the CSEAI.

Baylor Co-PIs: Academic Leadership Team

The lead site at Baylor comprises four faculty members serving in different roles. Dr. Robert Marks is the faculty lead on the Academic Leadership Team, working closely with PI Rivas on project execution and strategic research planning. Dr. Greg Hamerly and Dr. Liang Dong strengthen and diversify the center's general machine learning research and application areas, while Dr. Tomas Cerny extends its research capabilities into software engineering.

Collaborators

Dr. Pamela Harper from Marist College is a longtime collaborator of PI Rivas on matters of business and management ethics and collaborates with the CSEAI in those areas. Patricia Shaw is a lawyer and an international advisor on tech ethics policy, governance, and regulation. She works with PI Rivas on developing the AI ethics standard IEEE P7003 (algorithmic bias).

Workforce Development Plan

The CSEAI plans to develop the workforce through several avenues, including undergraduate and graduate student research mentoring as well as continuing education for industry professionals through specialized training and ad hoc certificates.

Special Issue “Standards and Ethics in AI”

Upcoming Rolling Deadline: May 31, 2022

Dear Colleagues,

A wave of artificial intelligence (AI) ethics standards and regulations is being discussed, developed, and released worldwide. The need for an academic forum on the application of such standards and regulations is evident. The research community needs to keep track of updates to these standards and of published use cases and other practical considerations.

This Special Issue of the journal AI on “Standards and Ethics in AI” will publish research papers on applied AI ethics, including standards in AI ethics. This encompasses interactions among technology, science, and society in terms of applied AI ethics and standards; the impact of such standards and ethical issues on individuals and society; and the development of novel ethical practices for AI technology. The journal will also provide a forum for open discussion of issues arising from the application of such standards and practices across different social contexts and communities. More specifically, this Special Issue welcomes submissions on the following topics:

  • AI ethics standards and best practices;
  • Applied AI ethics and case studies;
  • AI fairness, accountability, and transparency;
  • Quantitative metrics of AI ethics and fairness;
  • Review papers on AI ethics standards;
  • Reports on the development of AI ethics standards and best practices.

Note, however, that manuscripts that are purely philosophical in nature are discouraged in favor of applied ethics discussions that give readers a clear understanding of the standards, best practices, experiments, quantitative measurements, and case studies involved, and that offer actionable insight to readers from academia, industry, and government.

Dr. Pablo Rivas
Dr. Gissella Bejarano
Dr. Javier Orduz
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website, then using the online submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. AI is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open-access journal is 1000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI’s English editing service prior to publication or during author revisions.