The Lab

Welcome to the Lab — where new ideas take shape.

Every module here started as a Deep Dive.

Current Experiments

Active research and prototypes in development

AI Overview Detector

Active → Beta

Hypothesis: AI Overviews impact organic CTR significantly enough to warrant monitoring.

Weeks 1-2:

Built basic SERP scraper with Selenium. Detected AI Overview presence using CSS selectors.
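The detection step can be sketched like this. The live scraper uses Selenium (`driver.get(url)`, then `driver.page_source`) to render the SERP; here the page HTML is stubbed, and the marker patterns are assumptions — Google's AI Overview markup changes often and the real selectors are not documented here.

```python
import re

# Hypothetical markers for the AI Overview container; the real page
# structure shifts frequently, so treat these as placeholders.
AI_OVERVIEW_MARKERS = [
    r'data-attrid="ai[_-]overview"',
    r'class="[^"]*\bai-overview\b',
]

def has_ai_overview(page_source: str) -> bool:
    """Return True if any known AI Overview marker appears in the HTML.

    In the live scraper, page_source comes from Selenium's
    driver.page_source after rendering the SERP.
    """
    return any(re.search(p, page_source) for p in AI_OVERVIEW_MARKERS)

# Toy check against a stub of rendered HTML
sample = '<div data-attrid="ai_overview">summary text</div>'
print(has_ai_overview(sample))
```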

Weeks 3-4:

Added GSC integration for traffic correlation. Initial results: average CTR is 12% lower when an AI Overview is present.
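The correlation step amounts to joining GSC query stats with the scraper's AI Overview flag and comparing average CTR across the two groups. A minimal sketch — the field names and numbers below are illustrative, not the real GSC API schema or the experiment's data:

```python
# Each row: a query's CTR from Search Console plus the scraper's flag.
rows = [
    {"query": "what is crawl budget", "ctr": 0.031, "ai_overview": True},
    {"query": "seo audit checklist",  "ctr": 0.054, "ai_overview": False},
    {"query": "how do llms rank",     "ctr": 0.028, "ai_overview": True},
    {"query": "best keyword tools",   "ctr": 0.049, "ai_overview": False},
]

def mean_ctr(rows, flag):
    vals = [r["ctr"] for r in rows if r["ai_overview"] == flag]
    return sum(vals) / len(vals)

with_ao = mean_ctr(rows, True)
without_ao = mean_ctr(rows, False)
delta = (with_ao - without_ao) / without_ao
print(f"CTR change when AI Overview present: {delta:+.0%}")
```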

Weeks 5-6:

Refined detection algorithm. False positive rate down to 3%. Ready for beta testing.

Status: Moved to Beta. 50+ users testing. Stable release planned Dec 2024.

Entity Relationship Mapper

Active

Automatically extract and visualize entity relationships from web content using NLP and knowledge graphs.

Started: Oct 2024
Status: Research phase
Tech: spaCy, NetworkX, Neo4j
20% complete
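The core of the mapper is a co-occurrence graph over extracted entities. In the planned pipeline the entities per sentence come from spaCy (`doc.ents`), the graph is a `networkx.Graph`, and the result is persisted to Neo4j; the sketch below stubs the NER output so the relationship-mapping step itself is clear.

```python
from collections import defaultdict
from itertools import combinations

# Stubbed NER output: entities found in each sentence. In the real
# pipeline these come from spaCy's doc.ents.
sentences = [
    ["Google", "AI Overviews"],
    ["Google", "Search Console"],
    ["AI Overviews", "CTR"],
]

# Entities that appear in the same sentence get an edge; repeated
# co-occurrence increases the edge weight.
edges = defaultdict(int)  # (entity_a, entity_b) -> co-occurrence count
for ents in sentences:
    for a, b in combinations(sorted(set(ents)), 2):
        edges[(a, b)] += 1

for (a, b), weight in sorted(edges.items()):
    print(f"{a} -- {b} (weight {weight})")
```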

Research Notes

Deep dives, learnings, and documented experiments

Nov 15, 2024

AI Overview Impact Analysis

Analyzed 10,000 queries to understand AI Overview trigger patterns. Found that informational queries with high search volume are 3x more likely to show AI Overviews.
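The trigger-pattern comparison boils down to the AI Overview rate per intent bucket. A sketch with fabricated counts (not the study's data) to show the shape of the calculation:

```python
# Fabricated counts for illustration only.
by_intent = {
    "informational": {"total": 6000, "with_ao": 2400},
    "transactional": {"total": 4500, "with_ao": 600},
}

rates = {k: v["with_ao"] / v["total"] for k, v in by_intent.items()}
lift = rates["informational"] / rates["transactional"]
print(f"informational queries: {lift:.1f}x more likely to show an AI Overview")
```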

AI Overviews, SERP Analysis, Data Analysis
Nov 8, 2024

Semantic Clustering Comparison

Compared 5 different embedding models for keyword clustering. OpenAI's text-embedding-3-small provides the best balance of accuracy and speed for SEO use cases.
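The clustering step works the same regardless of which embedding model produced the vectors. A minimal sketch of a greedy similarity clustering — the 3-dimensional vectors below are toy stand-ins for the 1536-dimensional `text-embedding-3-small` output, and the threshold is an assumption:

```python
import math

# Toy embeddings; real vectors would come from the embeddings API.
keywords = {
    "buy running shoes":  [0.9, 0.1, 0.0],
    "running shoes sale": [0.8, 0.2, 0.1],
    "how to tie laces":   [0.1, 0.9, 0.2],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Greedy single-pass clustering: join the first cluster whose seed is
# similar enough, otherwise start a new cluster.
THRESHOLD = 0.85  # illustrative, would be tuned per model
clusters = []     # list of (seed_vector, [keywords])
for kw, vec in keywords.items():
    for seed, members in clusters:
        if cosine(seed, vec) >= THRESHOLD:
            members.append(kw)
            break
    else:
        clusters.append((vec, [kw]))

print([members for _, members in clusters])
```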

Embeddings, Clustering, Benchmarking
Oct 28, 2024

PAA Question Seasonality Patterns

Discovered that PAA questions follow predictable seasonal patterns in 73% of analyzed verticals. E-commerce shows strongest seasonality, B2B shows weakest.
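One simple way to test a monthly series for a yearly repeating pattern is lag-12 autocorrelation. A sketch of that check — the impression counts below are fabricated, and the 0.4 cutoff is a heuristic, not the study's actual methodology:

```python
# Fabricated monthly impression counts for one PAA question over two
# years; note the spike repeating every 12 months.
counts = [10, 12, 30, 11, 9, 10, 11, 13, 31, 10, 9, 11,
          11, 13, 32, 10, 10, 11, 12, 14, 30, 11, 10, 12]

def autocorr(series, lag):
    """Autocorrelation of a series with itself shifted by `lag` steps."""
    n = len(series)
    mean = sum(series) / n
    num = sum((series[i] - mean) * (series[i + lag] - mean)
              for i in range(n - lag))
    den = sum((x - mean) ** 2 for x in series)
    return num / den

seasonal = autocorr(counts, 12) > 0.4  # heuristic threshold
print("seasonal pattern:", seasonal)
```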

PAA, Seasonality, Content Strategy

Prototypes & Experiments

Early-stage tools and proof-of-concepts

SERP Diff Tool

Prototype

Compare SERP layouts between different dates, locations, or devices. Highlights changes in featured snippets, ads, and organic results.

Experiment

Automatically detect when content becomes outdated based on SERP changes, competitor updates, and trending topics.

Intent Drift Monitor

Concept

Track how search intent changes over time for your target keywords. Detect when informational queries become transactional.

Failed Experiments

What didn't work and why: learning from failures

Why We Share Failures

Failed experiments are as valuable as successful ones. They save others time and expose constraints or assumptions that don't hold in practice.

Automated Content Scoring

Hypothesis: We could predict content performance using only on-page factors.

What we tried: Built ML model using 50+ content features (readability, keyword density, semantic richness, etc.)
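For a sense of what those features looked like, here is a small slice of an on-page feature extractor (readability proxies and keyword density). The names and formulas are illustrative — the actual model used 50+ features, not reproduced here:

```python
import re

def content_features(text: str, keyword: str) -> dict:
    """Extract a few illustrative on-page features from raw text."""
    words = re.findall(r"[a-zA-Z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    kw_hits = sum(1 for w in words if w == keyword.lower())
    return {
        "word_count": len(words),
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "keyword_density": kw_hits / max(len(words), 1),
    }

features = content_features(
    "SEO audits matter. A thorough SEO audit finds crawl issues.", "seo"
)
print(features)
```

The post-mortem point stands out even in this toy version: every feature is computable from the page alone, and none of them can see domain authority, backlinks, or timing.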

Why it failed: Content performance is too dependent on external factors (domain authority, backlinks, timing) that our model couldn't access.

Learning: Content quality prediction requires a holistic approach that includes off-page signals.

Real-time SERP Change Alerts

Hypothesis: SEOs would pay for instant notifications when their rankings change.

What we tried: Built system to check rankings every 15 minutes and send alerts for significant changes.

Why it failed: Too many false positives from normal SERP volatility. Users got alert fatigue within days.

Learning: Real-time isn't always better. Daily/weekly summaries are more actionable than instant alerts.
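One concrete fix suggested by this post-mortem is to gate alerts on each keyword's own recent volatility rather than on any rank change. A minimal sketch, with fabricated rank history and an assumed 3-sigma cutoff:

```python
import statistics

# Fabricated last-7-days positions for one keyword.
history = [4, 5, 4, 6, 5, 4, 5]
new_rank = 12

mean = statistics.mean(history)
stdev = statistics.stdev(history)

# Flag only moves larger than 3 standard deviations from recent
# behavior; ordinary SERP jitter stays inside the band and is ignored.
significant = abs(new_rank - mean) > 3 * stdev
print("alert:", significant)
```

A jump from ~5 to 12 clears the band; a bounce between 4 and 6 does not, which is exactly the volatility noise that caused the alert fatigue.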

Contribute to the Lab

Have an idea for an experiment? Want to collaborate on research?

Submit an Idea

Share your hypothesis and we'll help you design an experiment to test it.

Join Research

Collaborate on active experiments. Contribute data, code, or analysis.

Join the Beta

Get early access to experimental features and help shape their development.

Lab Principles

  • Document everything: Process is as important as results
  • Share failures: Negative results save others time
  • Open methodology: Reproducible experiments only
  • Ethical research: Respect data privacy and platform ToS