Everyone has opinions about Generative Engine Optimization. We have data. Over the past year, the OtterlyAI team ran dozens of controlled experiments - testing llms.txt, Schema Markup, AI-generated vs. human-written content, YouTube citation patterns, earned media outreach, and much more - and tracked real outcomes across the largest AI search platforms. Some results confirmed our hunches. Others completely surprised us. In this session, Thomas Peham walks through what worked, what flopped (looking at you, llms.txt), and the one experiment that got otterly.ai cited on ChatGPT within 24 hours. Expect raw data, honest failures, and a repeatable experimentation methodology you can take back to your team on Monday.
GEO Experiments 2026: What We Tested, What Failed, and What Actually Works
Brighton, Spring 2026
About this session
Auditorium 1, Brighton Centre, Kings Road, Brighton, BN1 2GR, United Kingdom
Thu 30 Apr, 2026 | 09:30 AM
