Foundation Models

Presentation File

Resources and references

  • “Foundational Robustness of Foundation Models”, NeurIPS 2022 Tutorial Link

  • “Foundation Models”, Samuel Albanie, Online Course 2022 Link

  • “UvA Foundation Models Course”, Cees Snoek, Yuki Asano, Spring 2024 Link

  • “CS 886: Recent Advances on Foundation Models”, Wenhu Chen, University of Waterloo, Winter 2024 Link

  • “CS 8803 VLM Vision-Language Foundation Models”, Zsolt Kira, Georgia Tech, Fall 2024 Link

  • “BIODS 271: Foundation Models for Healthcare”, Stanford University, Winter 2024 Link

  • “Emergence of Foundation Models: Opportunities to Rethink Medical AI”, Shekoofeh Azizi, CVPR 2024 Link

  • “COMP 590/776: Computer Vision in 3D World”, Roni Sengupta, UNC, Spring 2023 Link

  • “COS 597G: Understanding Large Language Models”, Danqi Chen, Princeton University, Fall 2022 Link

  • “UvA Deep Learning Course”, Yuki Asano, Fall 2022 Link

  • “What are Foundation Models in AI?”, https://www.youtube.com/watch?v=dV0X1QyLL8M Link

  • “CS25: Transformers United V4”, Stanford University, Spring 2024 Link

  • J. Devlin et al., “BERT: Pre-training of deep bidirectional transformers for language understanding”, NAACL-HLT (2019)

  • T. Brown et al., “Language models are few-shot learners”, NeurIPS (2020)

  • J. Kaplan et al., “Scaling laws for neural language models”, arXiv (2020)

  • A. Dosovitskiy et al., “An image is worth 16x16 words: Transformers for image recognition at scale”, ICLR (2021)

  • R. Bommasani et al., “On the opportunities and risks of foundation models”, arXiv (2021)

  • A. Vaswani et al., “Attention is all you need”, NeurIPS (2017)

  • M. Chen et al., “Evaluating large language models trained on code”, arXiv (2021)

  • A. Radford et al., “Learning transferable visual models from natural language supervision”, ICML (2021)

  • J. Wei et al., “Emergent abilities of large language models”, arXiv (2022)

  • T. Chen et al., “A simple framework for contrastive learning of visual representations”, ICML (2020)

  • R. Schaeffer et al., “Are emergent abilities of large language models a mirage?”, NeurIPS (2023)

  • Z. Huang et al., “A visual–language foundation model for pathology image analysis using medical Twitter”, Nature Medicine (2023)

  • A. Kirillov et al., “Segment Anything”, arXiv (2023)

  • “Advancing medical AI with Med-Gemini” (2024) Link

Further resources and references

  • “CS 324 - Advances in Foundation Models”, Stanford University, Winter 2023 Link

  • “CS 839 - Foundation Models & the Future of Machine Learning”, University of Wisconsin–Madison, Fall 2023 Link

  • “MIT FUTURE OF AI 6.S087: Foundation Models & Generative AI”, 2024, Link

  • “CPSC 488/588: AI Foundation Models”, Yale University, Fall 2023 Link

  • “EE/CS 148 - Large Language and Vision Models”, Caltech, Spring 2024 Link

  • “Recent Advances in Vision Foundation Models”, CVPR 2024 Tutorial, Link

  • “CS 601.471/671 NLP: Self-supervised Models”, Johns Hopkins University, Spring 2024 Link

  • “ECCV 2022 Tutorial on Self-Supervised Representation Learning in Computer Vision”, Link

  • “Self-Supervised Representation Learning”, Lilian Weng, Link

  • “CIS 7000 - Large Language Models”, University of Pennsylvania, Fall 2024 Link

  • “CS 2281R: Mathematical & Engineering Principles for Training Foundation Models”, Harvard University, Fall 2024 Link