International Conference on Emergent and Quantum Technologies (ICEQT’24)

July 22-25, 2024 — Las Vegas, NV

Dear Esteemed Colleagues,


Quantum computing is a rapidly evolving field of interdisciplinary research, drawing on fundamental principles from mathematics, physics, and engineering. Maintaining scientific rigor and sustaining progress in this domain requires collaboration across STEM disciplines.

We are delighted to announce the International Conference on Emergent and Quantum Technologies (ICEQT’24), scheduled for July 22-25, 2024, in Las Vegas, NV. The conference is designed to serve as a platform for researchers specializing in quantum machine learning and machine learning professionals exploring the application of AI in enhancing quantum computing algorithms. It aims to facilitate the exchange of insights and developments within these dynamic areas of study.

The growing interest among machine learning practitioners in applying AI to quantum computing, and among quantum computing researchers in applying quantum methods to machine learning, underscores the relevance of this conference. We therefore warmly welcome original research papers that contribute novel insights and state-of-the-art developments in the following areas of interest:

Foundations of Quantum Computing and Quantum Machine Learning

  • Quantum computing models and paradigms (e.g., Grover's and Shor's algorithms, among others)
  • Quantum algorithms for Linear Systems of Equations
  • Quantum Tensor Networks and their Applications in QML

Quantum Machine Learning Algorithms

  • Quantum Neural Networks
  • Quantum Hidden Markov Models
  • Quantum PCA
  • Quantum SVM
  • Quantum Autoencoders
  • Quantum Transfer Learning
  • Quantum Boltzmann machines
  • Theory of Quantum-enhanced Machine Learning

AI for Quantum Computing

  • Machine learning for improved quantum algorithm performance
  • Machine learning for quantum control
  • Machine learning for building better quantum hardware

Quantum Algorithms and Applications

  • Quantum computing: models and paradigms
  • Quantum algorithms for hyperparameter tuning (Quantum computing for AutoML)
  • Quantum-enhanced Reinforcement Learning
  • Quantum Annealing
  • Quantum Sampling
  • Applications of Quantum Machine Learning

Fairness and Ethics in Quantum Machine Learning

We look forward to receiving your submissions and to welcoming you to ICEQT’24.

All submissions that are accepted for presentation will be included in the proceedings published by IEEE CPS. To ensure consistency in formatting, authors should follow the general typesetting instructions available on the IEEE’s website, including single-line spacing and a 2-column format. Additionally, authors of accepted papers must agree to the IEEE CPS standard statement regarding copyrights and policies on electronic dissemination.

Prospective authors are encouraged to submit their papers through the conference’s submission site on CMT (Microsoft’s Conference Management Toolkit). More information about the conference, including submission guidelines, can be found on our website at https://baylor.ai/iceqt/.

Important Deadlines

March 22, 2024: Submission of papers: https://cmt3.research.microsoft.com/ICEQT2024
– Full/Regular Research Papers (maximum of 8 pages)
– Short Research Papers (maximum of 5 pages)
– Abstract/Poster Papers (maximum of 3 pages)

April 15, 2024: Notification of acceptance (+/- two days)

May 1, 2024: Final papers + Registration

June 21, 2024: Last day for hotel room reservation at a discounted price.

July 22-25, 2024: The 2024 World Congress in Computer Science, Computer Engineering, and Applied Computing (CSCE’24: USA)
Which includes the International Conference on Emergent and Quantum Technologies (ICEQT’24)

Chairs:
Pablo Rivas, PhD, Baylor University
Bikram Khanal, PhD Candidate, Baylor University

Power of Data in Quantum Machine Learning

This week at the lab, we read the following paper, and here is our summary:

Huang, Hsin-Yuan, Michael Broughton, Masoud Mohseni, Ryan Babbush, Sergio Boixo, Hartmut Neven, and Jarrod R. McClean. “Power of data in quantum machine learning.” Nature Communications 12, no. 1 (2021): 2631.

Summary

This work focuses on the advancement of quantum technologies and their impact on machine learning. The two paths toward quantum enhancement of machine learning are using the power of quantum computing to improve the training of existing classical models, and using quantum models to generate correlations between variables that are inefficient to represent through classical computation. The authors show that this picture is incomplete for machine learning problems in which training data are provided, because the data themselves can elevate classical models to rival quantum models. Building on prediction error bounds for classical and quantum ML methods based on kernel functions, the authors present a flowchart for testing for a potential quantum prediction advantage. This elevation of classical models through training samples illustrates the power of data. The authors also show that, “training a specific classical ML model on a collection of $N$ training examples $(\mathbf{x}, y = f(\mathbf{x}))$ would give rise to a prediction model $h(\mathbf{x})$ with

\begin{equation*}
\mathbb{E}_{\mathbf{x}}\,|h(\mathbf{x}) - f(\mathbf{x})| \leq c\,\sqrt{p^2/N} \tag{1}
\end{equation*}

for a constant $c > 0$. Hence, with $N \approx p^2/\epsilon^2$ training data, one can train a classical ML model to predict the function $f(\mathbf{x})$ up to an additive prediction error $\epsilon$.” They also show that a small geometric difference between the kernel functions defined by the classical and quantum ML models guarantees similar or better prediction performance from the classical model. A sizeable geometric difference, on the other hand, indicates the possibility of a large prediction advantage for the quantum ML model.
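To make the geometric-difference check concrete, here is a minimal NumPy sketch of how one might compute $g(K_C \,\|\, K_Q)$ from two kernel Gram matrices, following our reading of the paper’s definition (the square root of the spectral norm of $\sqrt{K_Q}\,K_C^{-1}\sqrt{K_Q}$). The ridge term and the toy RBF Gram matrices are our own assumptions for illustration, not part of the paper.

```python
import numpy as np

def geometric_difference(k_classical, k_quantum, reg=1e-7):
    """Estimate the geometric difference g(K_C || K_Q) between two kernel
    Gram matrices, following our reading of Huang et al. (2021):
    g = sqrt( spectral_norm( sqrt(K_Q) @ inv(K_C + reg*I) @ sqrt(K_Q) ) ).
    The small ridge `reg` is an assumption added for numerical stability."""
    n = k_classical.shape[0]
    # Matrix square root of the (symmetric PSD) quantum Gram matrix.
    evals, evecs = np.linalg.eigh(k_quantum)
    sqrt_kq = evecs @ np.diag(np.sqrt(np.clip(evals, 0.0, None))) @ evecs.T
    inv_kc = np.linalg.inv(k_classical + reg * np.eye(n))
    m = sqrt_kq @ inv_kc @ sqrt_kq
    return float(np.sqrt(np.linalg.norm(m, ord=2)))  # ord=2: spectral norm

# Toy usage: two RBF Gram matrices standing in for classical/quantum kernels.
rng = np.random.default_rng(0)
x = rng.normal(size=(50, 4))
sq_dists = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
k_c = np.exp(-0.5 * sq_dists)  # stand-in for the classical kernel
k_q = np.exp(-2.0 * sq_dists)  # stand-in for the quantum kernel
print("g(K_C || K_Q) ~", geometric_difference(k_c, k_q))
```

In practice the two Gram matrices would come from an actual classical kernel (e.g., RBF on the raw features) and a quantum kernel evaluated on the same training set; the function above only captures the comparison step.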

Additionally, the authors introduce “projected quantum kernels” and demonstrate empirically that these outperform all tested classical models in prediction error. This work provides a guidebook for generating ML problems that showcase a separation between quantum and classical models.
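For intuition, here is a minimal sketch of a projected-kernel computation in the Gaussian form we understand from the paper, $k(\mathbf{x}_1,\mathbf{x}_2) = \exp(-\gamma \sum_k \|\rho_k(\mathbf{x}_1) - \rho_k(\mathbf{x}_2)\|_F^2)$, where $\rho_k$ is the one-qubit reduced density matrix of qubit $k$ after the encoding circuit. To keep the sketch short we assume a simple product-state angle encoding (one RY rotation per feature); the paper’s experiments use entangling encodings, for which one would simulate the full state and take partial traces.

```python
import numpy as np

def one_qubit_rdms(x):
    """Single-qubit reduced density matrices for a product-state angle
    encoding: feature x_i -> RY(x_i)|0>.  With an entangling encoding one
    would instead simulate the full state and trace out the other qubits."""
    rdms = []
    for xi in x:
        psi = np.array([np.cos(xi / 2.0), np.sin(xi / 2.0)])  # RY(x_i)|0>
        rdms.append(np.outer(psi, psi))
    return rdms

def projected_quantum_kernel(x1, x2, gamma=1.0):
    """k(x1, x2) = exp(-gamma * sum_k ||rho_k(x1) - rho_k(x2)||_F^2);
    gamma is a hyperparameter we choose for illustration."""
    total = sum(
        np.linalg.norm(r1 - r2, ord="fro") ** 2
        for r1, r2 in zip(one_qubit_rdms(x1), one_qubit_rdms(x2))
    )
    return float(np.exp(-gamma * total))

# Toy Gram matrix on a handful of random feature vectors.
rng = np.random.default_rng(1)
data = rng.uniform(0.0, np.pi, size=(10, 3))
gram = np.array([[projected_quantum_kernel(a, b) for b in data] for a in data])
print(gram.shape, gram[0, :3])
```

The resulting Gram matrix can be handed to any kernel method (e.g., a kernel ridge regressor or SVM) in place of a classical kernel.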

Intellectual Merit

This work provides a theoretical and computational framework for comparing classical and quantum ML models. The authors develop prediction error bounds for classical and quantum ML methods based on kernel functions, which provide provable guarantees and are very flexible in the functions they can learn. They also develop a flowchart for testing for a potential quantum prediction advantage, a function-independent prescreening that allows one to evaluate whether a quantum model could plausibly perform better. The authors give a constructive example based on a discrete-log feature map, which yields a provable separation for their kernel. They rule out many existing models in the literature, providing a powerful sieve for focusing the development of new data encodings.
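As a rough illustration of how that prescreening might be used in code, the helper below combines the geometric difference with the training set size. The $\sqrt{N}$ comparison is an illustrative assumption on our part, not the paper’s exact criterion, and a full screening would go on to examine the model complexities of both kernels.

```python
import numpy as np

def screen_for_quantum_advantage(g_cq, n_train):
    """Illustrative prescreening step: if the geometric difference
    g(K_C || K_Q) is small relative to sqrt(N), a classical kernel method
    can match the quantum model, leaving little room for a prediction
    advantage.  The sqrt(N) threshold is an assumption for illustration."""
    if g_cq < np.sqrt(n_train):
        return "classical kernel methods are likely competitive"
    return "possible quantum advantage; examine model complexities next"

print(screen_for_quantum_advantage(g_cq=3.0, n_train=400))   # small g
print(screen_for_quantum_advantage(g_cq=60.0, n_train=400))  # large g
```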

Broader Impact

The authors’ contributions to quantum technologies and machine learning have significant broader impacts. The flowchart for testing for a potential quantum prediction advantage gives researchers and practitioners a tool to judge whether quantum ML models might offer better performance. The framework can also be used to compare against and construct hard classical models, such as hash functions, which have applications in cryptography and secure communication. This work has the potential to accelerate the development of new data encodings, leading to more efficient and accurate machine learning models, with far-reaching implications for applications such as image recognition, text translation, and physics, where machine learning can transform how we analyze and interpret data. The paper is a collaboration among three prominent quantum research groups: Google Quantum AI, the Institute for Quantum Information and Matter at Caltech, and the Department of Computing and Mathematical Sciences at Caltech.