What you’ll learn: How to create content that satisfies Google’s E-E-A-T framework, ranks in both traditional and AI-powered search, and positions you as an authority. Includes an E-E-A-T content audit checklist and a framework for original research.
Google’s December 2025 core update devastated publishers who relied on generic content, with some sites reporting 70-85% traffic declines. AI Overviews now appear in approximately 13-30% of U.S. desktop searches, and AI answer engines like Perplexity and ChatGPT Search are growing rapidly. The content that survives this transformation is content that demonstrates Experience, Expertise, Authoritativeness, and Trustworthiness—because it provides something that AI models cannot generate independently.
Use this checklist to evaluate every piece of content before publishing. I developed it by analyzing what content earns citations in AI Overviews versus what gets ignored:
Experience Signals
- [ ] Does the content include first-hand practitioner experience?
- [ ] Are there specific examples with real (anonymized) data?
- [ ] Does it describe lessons learned from actual campaigns?
- [ ] Does the author bio include verifiable credentials?
- [ ] Are there details only someone with hands-on experience would know?
EXAMPLE (what I include in every piece):
- '$48M+ in managed ad spend' (verifiable at Seer Interactive)
- '192% YoY paid search growth at NortonLifeLock' (specific metric)
- 'Tested across 200+ campaigns' (scale of experience)
- NOT: 'I have extensive experience in digital marketing' (generic)
Expertise Signals
- [ ] Does the content demonstrate deep technical knowledge?
- [ ] Are explanations precise rather than surface-level?
- [ ] Does it use industry-specific terminology correctly?
- [ ] Does it address common misconceptions or nuances?
- [ ] Would an expert in the field learn something from this?
EXAMPLE:
GOOD: 'AI Max showed an 18% increase in converting queries with clean signals, but independent testing found costs of $100.37 vs. $43.97 per conversion when unmanaged' (specific, sourced, nuanced)
BAD: 'AI is changing how ads work' (generic, no substance)
Authoritativeness Signals
- [ ] Is the content published on a domain with topical authority?
- [ ] Are there citations from recognized industry sources?
- [ ] Does the author have verifiable industry recognition?
- [ ] Is there original research or proprietary data?
- [ ] Are other authoritative sources likely to reference this?
WHAT I DO:
- Cite Adweek, Search Engine Land, Stanford HAI, Harvard, IAB
- Reference specific reports with dates and methodology
- Include original database (411 agencies, 13 verified sources)
- Speak at Hero Conf (verifiable conference appearance)
- Publish open-source tools on GitHub (verifiable contribution)
Trustworthiness Signals
- [ ] Is the methodology transparent?
- [ ] Are limitations acknowledged?
- [ ] Is there an AI disclosure if AI was used?
- [ ] Is the author contactable and identifiable?
- [ ] Does the content avoid manipulative or deceptive practices?
AI DISCLOSURE EXAMPLE (per Google's guidance):
'AI tools were used to assist with data compilation and formatting.
All analysis, insights, strategic recommendations, and editorial
decisions represent the original work of the author based on 15+
years of practitioner experience.'
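Author identity and contactability can also be surfaced in machine-readable form via schema.org structured data, which search crawlers read directly. A minimal sketch (the headline, date, and helper function here are illustrative placeholders, not from the article):

```python
import json

# Hypothetical sketch: expose "contactable and identifiable author"
# signals as schema.org Article JSON-LD. All literal values below are
# placeholders drawn from the article's own author bio.
article_ld = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "E-E-A-T Content Audit Checklist",   # placeholder title
    "datePublished": "2025-12-01",                   # placeholder date
    "author": {
        "@type": "Person",
        "name": "John Williams",
        "jobTitle": "Senior Paid Media Specialist",
        "url": "https://googleadsagent.ai",          # verifiable author page
        "sameAs": ["https://github.com/itallstartedwithaidea"],
    },
}

def to_jsonld_script(data: dict) -> str:
    """Wrap structured data in the <script> tag crawlers look for."""
    body = json.dumps(data, indent=2)
    return f'<script type="application/ld+json">\n{body}\n</script>'

print(to_jsonld_script(article_ld))
```

Embedding this in the page header ties the byline to a verifiable identity, which supports the trust checklist above.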
The single most effective E-E-A-T strategy is original research. Here is the framework I used to create the 411-agency U.S. advertising database that no AI model has in its training data:
ORIGINAL RESEARCH FRAMEWORK:
1. IDENTIFY A QUESTION AI CAN'T ANSWER
- What data does not exist in any public dataset?
- What requires human judgment to compile?
- What would practitioners actually want to reference?
2. COLLECT FROM MULTIPLE SOURCES (minimum 5)
- Directory databases (Clutch, G2, etc.)
- Platform directories (Google Partners, Meta Partners)
- Industry publications (Adweek, SEJ, SEL)
- Original observation (SERP analysis, ad monitoring)
- Practitioner experience (your own data)
3. ADD PRACTITIONER ANALYSIS
- Don't just compile data. Interpret it.
- What patterns do you see that others won't?
- What does the data mean for someone making decisions?
4. DOCUMENT YOUR METHODOLOGY
- How was data collected?
- What were the inclusion/exclusion criteria?
- What are the limitations?
5. MAKE IT CITABLE
- Clear data tables, not just narrative
- Specific numbers with sources
- Update dates and version numbers
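Steps 2, 4, and 5 of the framework can be sketched in code. The snippet below (source names and the two-source inclusion criterion are illustrative assumptions, not the article's actual methodology) merges records from multiple directories, drops entries that fewer than two independent sources confirm, and attaches the version number, update date, and methodology note that make the dataset citable:

```python
from collections import defaultdict
from datetime import date

# Hypothetical raw records pulled from multiple sources (step 2).
raw_records = [
    {"name": "Acme Media", "source": "Clutch"},
    {"name": "Acme Media", "source": "Google Partners"},
    {"name": "Beta Ads",   "source": "Clutch"},  # single source: excluded
]

def compile_dataset(records, min_sources=2):
    """Merge multi-source records and attach citable metadata."""
    sources = defaultdict(set)
    for r in records:
        sources[r["name"]].add(r["source"])
    entries = [
        {"name": name, "sources": sorted(srcs), "source_count": len(srcs)}
        for name, srcs in sources.items()
        if len(srcs) >= min_sources  # documented inclusion criterion (step 4)
    ]
    return {
        "version": "1.0",                      # version number (step 5)
        "compiled": date.today().isoformat(),  # update date (step 5)
        "methodology": f"entries verified by >= {min_sources} independent sources",
        "entries": sorted(entries, key=lambda e: e["name"]),
    }

dataset = compile_dataset(raw_records)
print(dataset["entries"])  # only the multi-source entry survives
```

Keeping the inclusion criterion and source counts inside the dataset itself means the methodology ships with the data, which is what makes the result referenceable rather than just narrative.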
GoogleAdsAgent.ai’s entire content strategy follows this framework. The 411-agency database is original research. The open-source script library on GitHub is verifiable expertise. The conference speaking is authoritative recognition. The transparent methodology in every report is trustworthiness. This is not a content marketing tactic—it is how a practitioner naturally shares knowledge.
📦 GitHub: https://github.com/itallstartedwithaidea/itallstartedwithaidea_google_ads_account_grader — Open-source Google Ads Account Grader—an example of E-E-A-T expertise published as verifiable, usable code
Website: https://googleadsagent.ai | GitHub: https://github.com/itallstartedwithaidea | Tools: https://googleadsagent.ai/tools
About the Author
John Williams is a Senior Paid Media Specialist at Seer Interactive with 15+ years managing $48M+ in digital ad spend across Google, Microsoft, Meta, and Amazon. Founder of It All Started With A Idea and creator of GoogleAdsAgent.ai. Speaker at Hero Conf on AI in advertising. Former WSU football player and current assistant football coach at Casteel High School, AZ.
Get a free 30-day audit of your advertising accounts. John will personally review your setup, provide actionable recommendations, and reach out within 24 hours.