This book offers an extensive exploration of foundation models, guiding readers through the essential concepts and advanced topics that define this rapidly evolving research area. Designed for those seeking to deepen their understanding and contribute to the development of safer and more trustworthy AI technologies, the book is divided into three parts covering the fundamentals, advanced topics in foundation models, and safety and trust in foundation models:
Part I introduces the core principles of foundation models and generative AI, presents the technical background of neural networks, delves into the learning and generalization of transformers, and concludes with the intricacies of transformers and in-context learning.
Part II introduces automated visual prompting techniques, privacy-preserving prompting of LLMs, and memory-efficient fine-tuning methods, and shows how LLMs can be reprogrammed for time-series machine learning tasks. It then explores how LLMs can be reused for speech tasks, how synthetic datasets can be used to benchmark foundation models, and elucidates machine unlearning for foundation models.
Part III provides a comprehensive evaluation of the trustworthiness of LLMs, introduces jailbreak attacks and defenses for LLMs, examines the safety risks of fine-tuning LLMs, covers watermarking techniques for LLMs, presents robust detection of AI-generated text, elucidates backdoor risks in diffusion models, and closes with red-teaming methods for diffusion models.
Mathematical notations are clearly defined and explained throughout, making this book an invaluable resource for both newcomers and seasoned researchers in the field.