What’s wrong with Large Language Models and what we should be building instead
Date: 10th Jan 2024
Time: 03:00 PM
Venue: NPTEL CRC 301 Studio
PAST EVENT
Details
Large Language Models provide a pre-trained foundation for training many interesting AI systems. However, they have many shortcomings: they are expensive to train and to update, their non-linguistic knowledge is poor, they make false and self-contradictory statements, and these statements can be socially and ethically inappropriate. This talk will review these shortcomings and current efforts to address them within the existing LLM framework. It will then argue for a different, more modular architecture that decomposes the functions of existing LLMs and adds several additional components. We believe this alternative can address all of the shortcomings of LLMs.
Speakers
Prof. Thomas G. Dietterich
Robert Bosch Centre for Data Science and Artificial Intelligence