Current Experiments
Active research and prototypes in development
AI Overview Detector
Active → Beta

Hypothesis: AI Overviews impact organic CTR significantly enough to warrant monitoring.
Built basic SERP scraper with Selenium. Detected AI Overview presence using CSS selectors.
Added GSC integration for traffic correlation. Initial results: -12% average CTR when AI Overview present.
Refined detection algorithm. False positive rate down to 3%. Ready for beta testing.
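The detection step above boils down to checking fetched SERP HTML for AI Overview markers. A minimal sketch of that check — the marker strings below are placeholders, since Google's real selectors change frequently and the ones used in this experiment aren't listed here (the Selenium fetch is also omitted):

```python
# Hypothetical marker strings; the selectors actually used in the
# experiment are not published and Google's markup changes often.
AI_OVERVIEW_MARKERS = ["AI Overview", 'data-attrid="AIOverview"']

def has_ai_overview(page_html: str) -> bool:
    """Return True if any known AI Overview marker appears in the SERP HTML.

    In the real pipeline, `page_html` would come from Selenium's
    `driver.page_source` after the SERP finishes rendering.
    """
    return any(marker in page_html for marker in AI_OVERVIEW_MARKERS)
```

Substring checks like this are what drive the false-positive rate: overly broad markers match unrelated features, which is why narrowing the selector set was the main refinement.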
Entity Relationship Mapper
Active

Automatically extract and visualize entity relationships from web content using NLP and knowledge graphs.
Research Notes
Deep dives, learnings, and documented experiments
AI Overview Impact Analysis
Analyzed 10,000 queries to understand AI Overview trigger patterns. Found that informational queries with high search volume are 3x more likely to show AI Overviews.
Semantic Clustering Comparison
Compared 5 different embedding models for keyword clustering. OpenAI's text-embedding-3-small provides the best balance of accuracy and speed for SEO use cases.
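Once embeddings are fetched (e.g. from text-embedding-3-small), the clustering itself can be as simple as greedy grouping by cosine similarity. A sketch under assumptions — the 0.8 threshold is illustrative, not the value used in the comparison:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def cluster_keywords(embeddings, threshold=0.8):
    """Greedy clustering: each vector joins the first cluster whose
    seed member is at least `threshold` similar, else starts a new one.
    Returns clusters as lists of indices into `embeddings`."""
    clusters = []
    for i, vec in enumerate(embeddings):
        for cluster in clusters:
            if cosine(embeddings[cluster[0]], vec) >= threshold:
                cluster.append(i)
                break
        else:
            clusters.append([i])
    return clusters
```

Greedy seeding is order-dependent and crude compared to HDBSCAN or agglomerative clustering, but it makes the accuracy/speed trade-off between embedding models easy to isolate.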
PAA Question Seasonality Patterns
Discovered that PAA questions follow predictable seasonal patterns in 73% of analyzed verticals. E-commerce shows strongest seasonality, B2B shows weakest.
Prototypes & Experiments
Early-stage tools and proof-of-concepts
SERP Diff Tool
Prototype

Compare SERP layouts between different dates, locations, or devices. Highlights changes in featured snippets, ads, and organic results.
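The core of a diff like this is comparing two ranked result lists. A minimal sketch, assuming results are reduced to ordered URL lists (the prototype's actual data model isn't shown here):

```python
def diff_serps(old, new):
    """Compare two ranked lists of result URLs.
    Returns URLs that entered, dropped, or moved position."""
    old_pos = {url: i for i, url in enumerate(old)}
    new_pos = {url: i for i, url in enumerate(new)}
    entered = [u for u in new if u not in old_pos]
    dropped = [u for u in old if u not in new_pos]
    moved = {u: (old_pos[u], new_pos[u])
             for u in old_pos if u in new_pos and old_pos[u] != new_pos[u]}
    return {"entered": entered, "dropped": dropped, "moved": moved}
```

Running the same comparison per SERP feature (snippets, ads, PAA) rather than per URL is what turns this into the layout-level diff the prototype describes.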
Automatically detect when content becomes outdated based on SERP changes, competitor updates, and trending topics.
Intent Drift Monitor
Concept

Track how search intent changes over time for your target keywords. Detect when informational queries become transactional.
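One way to quantify "drift" at the concept stage: classify the ranking pages for a keyword into intent labels at two points in time, then measure how far the label mix has shifted. A sketch using total variation distance — the labels and the classifier producing them are assumptions:

```python
from collections import Counter

def intent_drift(labels_then, labels_now):
    """Total variation distance between two intent-label distributions.
    0.0 means an identical mix; 1.0 means completely different labels."""
    p, q = Counter(labels_then), Counter(labels_now)
    n_p, n_q = len(labels_then), len(labels_now)
    return 0.5 * sum(abs(p[k] / n_p - q[k] / n_q) for k in set(p) | set(q))
```

A drift score crossing some alert threshold would be the trigger for "this informational query is turning transactional."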
Failed Experiments
What didn't work and why - learning from failures
Why We Share Failures
Failed experiments are as valuable as successful ones. They save others time and reveal important constraints or assumptions that don't hold.
Automated Content Scoring
Hypothesis: We could predict content performance using only on-page factors.
What we tried: Built an ML model using 50+ content features (readability, keyword density, semantic richness, etc.).
Why it failed: Content performance is too dependent on external factors (domain authority, backlinks, timing) that our model couldn't access.
Learning: Content quality prediction requires holistic approach including off-page signals.
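For a sense of what "on-page features" means here, a tiny illustrative subset — the real 50+ feature set isn't published, and these three are stand-ins:

```python
import re

def content_features(text, keyword):
    """Illustrative subset of on-page features: length, sentence
    structure, and keyword density. Not the experiment's real feature set."""
    words = re.findall(r"[A-Za-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    density = words.count(keyword.lower()) / len(words) if words else 0.0
    return {
        "word_count": len(words),
        "avg_sentence_len": len(words) / len(sentences) if sentences else 0.0,
        "keyword_density": density,
    }
```

Every feature in this vein is computable from the page alone, which is exactly the limitation: none of them capture domain authority, backlinks, or timing.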
Real-time SERP Change Alerts
Hypothesis: SEOs would pay for instant notifications when their rankings change.
What we tried: Built system to check rankings every 15 minutes and send alerts for significant changes.
Why it failed: Too many false positives from normal SERP volatility. Users got alert fatigue within days.
Learning: Real-time isn't always better. Daily/weekly summaries are more actionable than instant alerts.
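One mitigation tried against alert fatigue is volatility-aware thresholding: only alert when a change exceeds the keyword's own recent noise level. A sketch of that idea — the z=2.0 cutoff and minimum history length are assumptions, not the experiment's actual parameters:

```python
import statistics

def is_significant_change(history, new_rank, z=2.0):
    """Flag a rank change only if it exceeds z standard deviations of the
    keyword's recent volatility, to filter out normal SERP noise."""
    if len(history) < 3:
        return False  # too little data to estimate volatility
    mean = statistics.fmean(history)
    sd = statistics.stdev(history)
    if sd == 0:
        return new_rank != history[-1]  # any move on a flat history
    return abs(new_rank - mean) > z * sd
```

Even with this filter, keywords with naturally volatile SERPs still fire constantly, which is part of why the batched-summary approach won out.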
Contribute to the Lab
Have an idea for an experiment? Want to collaborate on research?
Submit an Idea
Share your hypothesis and we'll help you design an experiment to test it.
Join the Beta
Get early access to experimental features and help shape their development.
Lab Principles
- Document everything: Process is as important as results
- Share failures: Negative results save others time
- Open methodology: Reproducible experiments only
- Ethical research: Respect data privacy and platform ToS