VLDB 2025

1st Workshop on New Ideas for Large-Scale Neurosymbolic Learning Systems (LS-NSL)

Held in conjunction with the 51st International Conference on Very Large Data Bases (VLDB) - London, United Kingdom, September 5, 2025.

About

Deep learning has achieved striking successes across engineering and science. However, criticism of it has grown as scientists and practitioners apply it more broadly. Neurosymbolic learning (NSL) promises to transform deep learning by combining the strong induction capabilities of neural models with rigorous deduction from symbolic knowledge representation and reasoning techniques. Although NSL has shown its potential in several application domains, including image and video understanding, natural language processing, and data management, it remains an open question whether current techniques are mature enough to be applied to large-scale, real-world problems.

This workshop aims to:

  • Identify key large-scale, real-world scenarios from different domains, such as computer vision and data management, that can benefit from NSL techniques.
  • Identify key techniques from the database literature that could enhance NSL techniques for training and inference.
  • Identify new theoretical and engineering challenges that arise when integrating deep networks with symbolic systems and propose solutions towards overcoming them.
  • Discuss scalable techniques for training deep networks using symbolic solvers.
  • Investigate benchmarks across different application domains to assess the strengths of NSL techniques in runtime efficiency, task-specific accuracy, and other aspects.

Topics of Interest

The topics of interest include (but are not limited to):

  • Large-scale NSL applications, e.g., from computer vision, natural language processing, and data management.
  • Scalable integration of deep networks with symbolic systems, such as logic programs or combinatorial solvers.
  • Scalable techniques to train deep networks subject to symbolic constraints or logical theories.
  • New NSL architectures and semantics.
  • Uncertain databases and logic programs.
  • Query answering via transformers and graph neural networks.
  • Data management over new hardware.
  • New forms of databases, e.g., databases to store tensor data.
  • Database creation and querying via machine learning.
  • NSL benchmarks.

Call For Contributions

We welcome regular papers (up to eight pages, including the bibliography) that present complete, novel research results not previously presented elsewhere, as well as extended abstracts (up to four pages, including the bibliography) on preliminary results that can trigger discussion. We also welcome papers accepted at VLDB 2025 or other recent top-tier AI, machine learning, and database venues. At least one author of each accepted paper is expected to register for the workshop and give an oral presentation.

The proceedings of all accepted papers will be hosted by VLDB.

Paper Submission Instructions

Submitted manuscripts must be in PDF format and use the VLDB 2025 template. Submissions are single-blind, and authors should comply with the conflict of interest policy for ACM publications.

The paper submission system will open soon.

Important Dates

  • Paper submission: Friday, May 30th, 2025.
  • Notification of acceptance: Friday, June 27th, 2025.
  • Camera-ready submission: Friday, July 11th, 2025.
  • Workshop Date: September 5th, 2025.

Keynotes

Dan Roth, University of Pennsylvania and Oracle
Title: TBA

Bio: Dan Roth is the Eduardo D. Glandt Distinguished Professor in the Department of Computer and Information Science at the University of Pennsylvania and the Chief AI Scientist at Oracle. Until June 2024, Dan was a VP/Distinguished Scientist at AWS AI, where over three years he led the scientific effort behind AWS's first-generation generative AI products, including the Titan Models, Amazon Q, and Bedrock, from inception until they became generally available.

Dan is a Fellow of the AAAS, ACM, AAAI, and ACL. In 2017, he received the John McCarthy Award, recognized "for major conceptual and theoretical advances in the modeling of natural language understanding, machine learning, and reasoning." He has published broadly in natural language processing, machine learning, knowledge representation and reasoning, and learning theory; was the Editor-in-Chief of the Journal of Artificial Intelligence Research (JAIR); and has served as a program chair and conference chair for the major conferences in his research areas. Roth has been involved in several startups; most recently, he was a co-founder and chief scientist of NexLP, a startup that leverages the latest advances in natural language processing, cognitive analytics, and machine learning in the legal and compliance domains. NexLP was acquired by Reveal. Dan received his B.A. summa cum laude in Mathematics from the Technion, Israel, and his Ph.D. in Computer Science from Harvard University in 1995.

Jeff Pan, University of Edinburgh and Huawei Labs
Title: Decoding the Interaction of Symbolic and Parametric Knowledge

Abstract: Large Language Models (LLMs) have taken Knowledge Representation – and the world – by storm. This inflection point marks a shift from purely symbolic knowledge representation to a renewed focus on hybrid representations that combine symbolic and parametric knowledge, which is a big step for the field of Knowledge Representation. In this talk, I will briefly introduce some initial findings from this shift. If time allows, I will also speculate on the opportunities and visions that this renewed focus brings.

Bio: Jeff Pan is Professor of Knowledge Computing in the School of Informatics at the University of Edinburgh. He is a chair of the Knowledge Graphs group at the Alan Turing Institute. He is the chief editor and main author of the first book on Knowledge Graphs. Recently, he teamed up with many group leaders around the world on a visionary paper on large language models and knowledge graphs (LINK).

Program Committee

  • Victor Gutierrez Basulto, Cardiff University
  • Vaishak Belle, University of Edinburgh
  • Angela Bonifati, Lyon 1 University
  • Gianluca Cima, Sapienza University of Rome
  • Floris Geerts, University of Antwerp
  • Christoph Haase, University of Oxford
  • Ziyang Li, University of Pennsylvania
  • Ankur Mali, University of South Florida
  • Nikos Ntarmos, Huawei Labs
  • Hai Pham, Samsung AI
  • Ernesto Jimenez Ruiz, City St George’s, University of London
  • Luciano Serafini, Fondazione Bruno Kessler
  • Gerardo I. Simari, Universidad Nacional del Sur

Organizers

  • Efthymia (Efi) Tsamoura, moving to Huawei Labs
  • Pablo Barceló, Universidad Católica de Chile
  • Jacopo Urbani, Vrije Universiteit Amsterdam

For any questions, please reach out to Efi at efthymia.tsamoura@gmail.com.