Bottom Line, Up Front
This workflow outlines how to automate much of the source reliability and content credibility assessment process in OSINT without bypassing critical human validation. The goal: rapidly flag, score, and prioritise information, allowing analysts to focus on anomalies and edge cases that require expert judgement. The framework integrates trusted disinformation tools such as NewsGuard, Hoaxy, InVID, and BlueSky Analytics, enabling continuous refinement and faster threat detection.
1. Ingestion & Preprocessing
Objective: Collect and organise URLs, social media posts, and AI-generated references at scale.
Tools & Methods:
LLM + OSINT Platforms: Integrate OpenAI/Claude APIs with platforms like Maltego, Videris, Skopenow, or ShadowDragon to streamline entity and link extraction.
RSS Filters: Use Feedly (Pro+) or Inoreader to monitor and auto-tag content from defined sources.
Social Monitors: Tools like Mention and X Pro (formerly TweetDeck) provide near real-time discovery; note that CrowdTangle has been retired in favour of the Meta Content Library.
Workflow Automation: Zapier or Make (Integromat) to feed data into Google Sheets, Airtable, or internal databases.
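Before collected links reach a Sheets or Airtable database, they usually need canonicalising so the same article shared with different tracking parameters does not create duplicate rows. A minimal sketch of that normalisation step (the tracking-parameter list here is illustrative, not exhaustive):

```python
from urllib.parse import urlparse, urlunparse, parse_qsl, urlencode

# Illustrative set of tracking parameters to strip; extend as needed.
TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign",
                   "utm_term", "utm_content", "fbclid", "gclid"}

def normalise_url(url: str) -> str:
    """Canonicalise a URL so duplicates collapse: lowercase the host,
    drop fragments and common tracking parameters."""
    parts = urlparse(url)
    query = [(k, v) for k, v in parse_qsl(parts.query) if k not in TRACKING_PARAMS]
    return urlunparse((parts.scheme.lower(), parts.netloc.lower(),
                       parts.path, "", urlencode(query), ""))

def dedupe(urls):
    """Return canonical URLs in first-seen order, duplicates removed."""
    seen, out = set(), []
    for u in urls:
        canon = normalise_url(u)
        if canon not in seen:
            seen.add(canon)
            out.append(canon)
    return out
```

In a Zapier/Make pipeline this would sit as a code step between the RSS or social-monitor trigger and the database write.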
2. Source Reliability Cross-Referencing
Objective: Automatically score domains or authors against known trust/disinfo lists.
Automation Logic:
Extract domain or handle.
Check against:
White Lists: MBFC, IFCN, Ad Fontes, NewsGuard
Black Lists: EUvsDisinfo, DisinfoLab, GEC, internal watchlists
Tools:
NewsGuard: Journalist-curated trust ratings
VirusTotal API: Domain reputation checks
DomainTools Iris or Cybersixgill: WHOIS and threat insights
BlueSky Analytics (Streamlit app): Investigate source legitimacy and content manipulation signals in near real time
Google Apps Script / Python: Custom parsing and API querying
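The custom-parsing step above can be as simple as the following sketch: extract the registrable domain and screen it against allow and deny lists. The two lists shown are placeholders; in practice they would be loaded from MBFC/IFCN/NewsGuard exports and EUvsDisinfo or internal watchlists:

```python
from urllib.parse import urlparse

# Placeholder lists; load real ones from trust/disinfo list exports.
ALLOW_LIST = {"reuters.com", "apnews.com"}
DENY_LIST = {"example-disinfo.net"}

def extract_domain(url: str) -> str:
    """Pull the host from a URL, stripping a leading 'www.'."""
    host = urlparse(url).netloc.lower()
    return host[4:] if host.startswith("www.") else host

def screen_source(url: str) -> str:
    """Return a provisional verdict: 'trusted', 'flagged', or 'unknown'.
    Deny-list hits take priority over allow-list hits."""
    domain = extract_domain(url)
    if domain in DENY_LIST:
        return "flagged"
    if domain in ALLOW_LIST:
        return "trusted"
    return "unknown"
```

Unknown domains are the interesting output: they are the ones worth sending on to NewsGuard, VirusTotal, or WHOIS enrichment rather than resolving automatically.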
3. NATO Grading Assistant (A–F, 1–6)
Objective: Provide a provisional NATO-style reliability/credibility score.
How:
Create a rule-based engine in Airtable, Google Sheets, or Notion:
Trusted source → Grade A or B
Blacklisted or suspicious → Grade D or E
Missing attribution / unverifiable claims → Credibility score of 4 or higher (doubtful or worse on the 1–6 scale)
Optional: Use GPT functions to auto-suggest provisional grades with rationales, editable by analysts.
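The rules above translate directly into a small rule engine. The sketch below shows one way to express them; the specific thresholds and the decision to cap automatic reliability at B (reserving A for human sign-off) are illustrative defaults, not doctrine:

```python
def provisional_grade(source_verdict: str, has_attribution: bool, corroborated: bool):
    """Map screening signals to a provisional NATO-style grade:
    reliability A-F (source) and credibility 1-6 (content).
    Thresholds here are illustrative; analysts can override both values."""
    if source_verdict == "trusted":
        reliability = "B"   # reserve A for grades confirmed by a human analyst
    elif source_verdict == "flagged":
        reliability = "E"
    else:
        reliability = "F"   # F = reliability cannot be judged yet
    if corroborated:
        credibility = 2     # probably true: independently corroborated
    elif has_attribution:
        credibility = 3     # possibly true: attributed but unverified
    else:
        credibility = 4     # doubtful or worse: unattributed, unverifiable
    return reliability, credibility
```

Storing the rule outcomes alongside an editable analyst field keeps the automation advisory rather than authoritative.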
4. Content Cross-Verification
Objective: Automate checks for content accuracy and media authenticity.
Tools:
Google Fact Check Tools API: Scan claims for existing fact-checks
Snopes, PolitiFact, Poynter: Direct API or carefully scripted scraping
InVID Plugin: Validate media via reverse search, metadata, and forensics
Vastav AI: Deepfake detection across video, image, and audio content
LLM Prompting (Optional):
Generate reworded or summarised claims via GPT/Claude
Check across trusted aggregators like NewsAPI, GNews, or sites like Bellingcat and ACLED
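Querying the Google Fact Check Tools claims:search endpoint reduces to a few lines. The sketch below assumes a valid API key and keeps only a few fields from the ClaimReview-shaped response; adapt the field selection to your logging schema:

```python
import json
import urllib.parse
import urllib.request

FACTCHECK_URL = "https://factchecktools.googleapis.com/v1alpha1/claims:search"

def search_fact_checks(query: str, api_key: str) -> list:
    """Search existing fact-checks for a claim and return simplified records."""
    params = urllib.parse.urlencode({"query": query, "key": api_key})
    with urllib.request.urlopen(f"{FACTCHECK_URL}?{params}", timeout=10) as resp:
        return summarise_claims(json.load(resp))

def summarise_claims(payload: dict) -> list:
    """Flatten the API response to (claim, publisher, rating) records."""
    out = []
    for claim in payload.get("claims", []):
        for review in claim.get("claimReview", []):
            out.append({
                "claim": claim.get("text", ""),
                "publisher": review.get("publisher", {}).get("name", ""),
                "rating": review.get("textualRating", ""),
            })
    return out
```

An LLM-reworded version of the claim (per the optional prompting step) can be passed as a second query to catch fact-checks phrased differently.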
5. Social Bot & Influence Detection
Objective: Identify automated accounts and coordinated disinfo networks.
Tools:
Bot Sentinel: Detect troll or bot behaviour on Twitter/X
Botometer API: Analyse social account authenticity
Hoaxy: Visualise amplification networks and misinformation spread
Maltego + Social Links: Graph-based analysis of account relationships
Meta Content Library (CrowdTangle's successor): Detect manipulation trends across Meta platforms (approval required)
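Before paying for per-account scoring via tools like Botometer, a cheap first-pass heuristic can surface candidates: URLs pushed by many distinct accounts inside a short window. The sketch below is a crude coordination filter, not a bot detector; the window and account thresholds are illustrative:

```python
from collections import defaultdict

def coordination_candidates(posts, window_seconds=300, min_accounts=5):
    """Flag URLs shared by at least `min_accounts` distinct accounts
    within any `window_seconds` window -- a rough signal of coordinated
    amplification worth deeper analysis in Hoaxy or Botometer.
    `posts` is an iterable of (timestamp, account, url) tuples."""
    by_url = defaultdict(list)
    for ts, account, url in posts:
        by_url[url].append((ts, account))
    flagged = []
    for url, events in by_url.items():
        events.sort()
        for i in range(len(events)):
            # Distinct accounts posting within the window starting at event i.
            window = {acct for ts, acct in events[i:]
                      if ts - events[i][0] <= window_seconds}
            if len(window) >= min_accounts:
                flagged.append(url)
                break
    return flagged
```

Flagged URLs then become the seed list for the graph-based tools above rather than an automated verdict on their own.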
6. Intelligence Logging & Threat Sharing
Objective: Document and share suspicious sources internally and externally.
Workflow:
Use Google Forms or Notion to log disinfo sightings
Auto-log entries to a central Airtable or Sheets database
Trigger alerts if thresholds are met (e.g. three or more mentions of a new domain)
Use Zapier/IFTTT to auto-push to:
EUvsDisinfo
Google Safe Browsing
Twitter/X reporting
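The threshold trigger described above ("three or more mentions of a new domain") is easy to sketch. The alert callback here is a stand-in for whatever the pipeline actually pushes to: a Zapier webhook, Slack message, or reporting form:

```python
from collections import Counter

class DomainTracker:
    """Accumulate domain sightings and fire the alert callback exactly
    once, the first time a domain crosses the mention threshold
    (default 3, per the workflow above)."""

    def __init__(self, threshold=3, on_alert=print):
        self.counts = Counter()
        self.alerted = set()
        self.threshold = threshold
        self.on_alert = on_alert  # e.g. a Zapier/IFTTT webhook push

    def record(self, domain: str) -> bool:
        """Log one sighting; return True if this sighting fired an alert."""
        self.counts[domain] += 1
        if self.counts[domain] >= self.threshold and domain not in self.alerted:
            self.alerted.add(domain)
            self.on_alert(f"ALERT: {domain} seen {self.counts[domain]} times")
            return True
        return False
```

Keeping the "already alerted" set separate from the counts avoids re-notifying on every subsequent sighting while still letting the count climb for the dashboard.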
7. Analyst Dashboards & Oversight
Objective: Provide human analysts with real-time visibility and control.
Tools:
Looker Studio, Power BI, or Tableau for data visualisation
Notion, Retool, or Airtable Interface Designer for internal dashboards
Visualise NATO grading, flag counts, and trustworthiness trends
Highlight unresolved entries for manual review
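Whatever the dashboard front end, the query behind it is a simple aggregation over the logged entries. A sketch, assuming each logged entry carries a combined NATO grade and a resolved flag (an illustrative schema, not a fixed one):

```python
from collections import Counter

def dashboard_summary(entries):
    """Summarise logged entries for the analyst view: counts per NATO
    grade plus the size of the manual-review queue. Each entry is a
    dict with 'grade' and 'resolved' keys (illustrative schema)."""
    grade_counts = Counter(e["grade"] for e in entries)
    review_queue = [e for e in entries if not e["resolved"]]
    return {"grades": dict(grade_counts), "pending_review": len(review_queue)}
```

Looker Studio or Power BI can chart the grade counts directly, while the pending-review number drives the "unresolved entries" highlight.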
Security & Compliance
Ensure all tools are GDPR-compliant
Use encrypted API keys and audit logs
Implement manual override and rollback for flagged false positives
Optional Advanced Enhancements
In-House LLM Assistant: Fine-tuned GPT/Claude operating only on vetted datasets
Browser Plugin: Auto-flag suspicious domains in real time based on internal watchlists
Alerts Integration: Slack, Teams, or email notifications on source grading anomalies
Access the Full OSINT Disinfo Workflow
A downloadable, step-by-step guide to building your own disinformation counter-measures workflow will be available via my Substack next Friday.