February 26-28, 2025
Montreal, Canada

RAG vs Fine-tuning: which one is better for your AI project?

Retrieval-Augmented Generation (RAG) integrates external knowledge sources to improve performance by retrieving relevant documents during inference, making it ideal for tasks that require dynamic, up-to-date information. In contrast, fine-tuning adapts pre-trained models to specific tasks through supervised learning, optimizing performance within static, well-defined domains. We analyze the trade-offs in terms of data requirements, deployment flexibility, cost, and use cases.
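To make the distinction concrete, a minimal RAG retrieval loop might look like the sketch below. This is an illustration only: the `embed()` function, the in-memory document list, and the final model call are hypothetical placeholders, not the API of any specific framework covered in the session.

```python
# Minimal RAG sketch (illustrative): embed(), the document store, and the
# downstream model call are hypothetical placeholders, not a real library's API.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder: return a vector for `text` (in practice, an embedding model)."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(384)

documents = [
    "Our return policy allows refunds within 30 days.",
    "Shipping to Canada typically takes 3-5 business days.",
]
doc_vectors = [embed(d) for d in documents]

def retrieve(query: str, k: int = 1) -> list[str]:
    """Rank stored documents by cosine similarity to the query embedding."""
    q = embed(query)
    scores = [float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v)))
              for v in doc_vectors]
    top = sorted(range(len(documents)), key=lambda i: scores[i], reverse=True)[:k]
    return [documents[i] for i in top]

query = "How long does delivery to Canada take?"
context = "\n".join(retrieve(query))
prompt = f"Answer using this context:\n{context}\n\nQuestion: {query}"
# `prompt` would be sent to an unmodified pre-trained LLM; fine-tuning, by
# contrast, bakes such knowledge into the model's weights via supervised training.
print(prompt)
```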


Shao Hang He

DevFortress

Shao Hang He is a serial entrepreneur and senior software developer with more than 11 years of experience in software development. He has worked for major companies such as Ubisoft, BlackBerry and Corus Entertainment, co-founded several startups (MailMagic.ai, DevFortress, Blossom.team, Geek-it and Fanstories), and raised venture capital funding.

