Methodology

Grayline Intelligence documents incidents of violence, disruption, and credible threat activity affecting U.S. interests, including domestic events and relevant external incidents arising from geopolitical escalation and gray zone activity. All entries are derived from open-source reporting and curated for research and analytical use. Each record progresses through a structured pipeline: source identification, AI-assisted extraction, analyst review, and standardized classification prior to publication.

1 — Source Identification

What we look for

Incidents are identified through continuous monitoring of open-source channels, including national and regional news outlets, law enforcement press releases, court filings, congressional testimony, and specialist security publications. Priority is given to sources that provide primary or near-primary reporting (e.g. direct DOJ/FBI announcements, local law enforcement statements) rather than aggregated or opinion-based coverage.

Inclusion criteria

An event is considered for inclusion if it meets one or more of the following:

— Resulted in casualties (killed or injured) on U.S. soil with a terrorism or politically motivated nexus.
— Was disrupted or thwarted by law enforcement before execution, where credible evidence of intent existed.
— Involved acquisition of materials, planning, or reconnaissance with documented ideological or mass-casualty intent.
— Has been characterized by a law enforcement agency as a terrorism-related investigation.

Exclusion criteria

Events are excluded if they are purely criminal without an ideological or mass-casualty nexus, where no credible corroborating source exists, or where reporting is solely from partisan or unverified outlets with no subsequent official confirmation.
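As a minimal sketch, the inclusion and exclusion rules above can be expressed as a single predicate. The field names below (`casualties_with_nexus`, `purely_criminal`, and so on) are illustrative stand-ins for however the criteria are actually recorded, not the platform's real data model.

```python
def should_include(event: dict) -> bool:
    """Apply the inclusion/exclusion criteria to a candidate event.

    Illustrative only: flag names are assumptions, not the platform's schema.
    """
    # An event qualifies if it meets at least one inclusion criterion...
    included = (
        event.get("casualties_with_nexus", False)          # casualties with a terrorism/political nexus
        or event.get("disrupted_with_credible_intent", False)  # thwarted plot, credible intent
        or event.get("documented_planning_with_intent", False) # materials/planning/reconnaissance
        or event.get("le_terrorism_characterization", False)   # LE-characterized terrorism investigation
    )
    # ...and is dropped if any exclusion criterion applies.
    excluded = (
        event.get("purely_criminal", False)                # no ideological or mass-casualty nexus
        or not event.get("credible_corroboration", True)   # no credible corroborating source
    )
    return included and not excluded
```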

2 — AI-Assisted Extraction

How it works

When a URL is submitted, the platform fetches the article text and passes it to a large language model with a structured extraction prompt that returns a fixed set of fields. The model does not editorialize or draw inferences beyond what is directly supported by the source text.
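The fetch-then-extract step can be sketched as below. This is a sketch under stated assumptions: `fetch_text`, `call_llm`, and `EXTRACTION_PROMPT` stand in for the platform's article fetcher, model API, and structured prompt, none of which are published.

```python
import json

# Placeholder for the platform's structured extraction prompt (not published).
EXTRACTION_PROMPT = "Extract the fixed incident fields from the article as JSON."

def extract_incident(url: str, fetch_text, call_llm) -> dict:
    """Fetch an article and extract the fixed incident fields.

    `fetch_text` and `call_llm` are injected stand-ins for the platform's
    fetcher and model API; this is an illustrative pattern, not its code.
    """
    article = fetch_text(url)                    # fetch the article text
    raw = call_llm(EXTRACTION_PROMPT, article)   # model fills the fixed fields as JSON
    return json.loads(raw)                       # normalized attributes as a dict
```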

Extracted fields

Each incident is structured around the following normalized attributes:

Name — A concise, descriptive title for the event.
Date — Date and time of the incident or disruption, in ISO 8601 format.
Location — City, state, and specific location where available.
Coordinates — Latitude/longitude for map placement, geocoded from the location.
Method — Attack vector (e.g. Firearms, IED, Incendiary, Vehicle).
Threat Type — Category of threat (e.g. Domestic Terrorism, Islamist, Far-Right, Antifa).
Target Type — The intended or actual target (e.g. Civilians, Government, Religious Site).
Intent — Assessed intent (e.g. mass_casualty, targeted, symbolic).
Outcome — Whether the attack was completed, thwarted, or failed.
Casualties — Confirmed killed and injured figures from official or credible reporting.
Attribution — Claimed or assessed ideological affiliation or group nexus.
Suspect — Named individual(s) where publicly confirmed by authorities.
Classification — Whether the incident is Confirmed or Under Investigation.
Confidence — A 0–1 score reflecting the quality and consistency of available sourcing.
Notes — Key contextual details, caveats, or investigative developments.
Sources — Direct links to the source articles used for extraction.
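The normalized attributes above can be sketched as a single record type. Field names follow the list; the types, defaults, and structure below are assumptions for illustration, not the platform's actual schema.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Incident:
    """Illustrative sketch of the normalized incident record."""
    name: str                                    # concise, descriptive title
    date: str                                    # ISO 8601, e.g. "2024-05-01T14:30:00Z"
    location: str                                # city, state, specific site where available
    coordinates: Optional[tuple[float, float]]   # (lat, lon), geocoded from location
    method: str                                  # e.g. "Firearms", "IED", "Vehicle"
    threat_type: str                             # e.g. "Domestic Terrorism"
    target_type: str                             # e.g. "Civilians", "Government"
    intent: str                                  # "mass_casualty" | "targeted" | "symbolic"
    outcome: str                                 # "completed" | "thwarted" | "failed"
    killed: int = 0                              # confirmed figures only
    injured: int = 0
    attribution: Optional[str] = None            # claimed or assessed affiliation
    suspect: Optional[str] = None                # named only when publicly confirmed
    classification: str = "Under Investigation"  # or "Confirmed"
    confidence: float = 0.0                      # 0.0-1.0 sourcing quality
    notes: str = ""
    sources: list[str] = field(default_factory=list)  # source article URLs
```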

Confidence scoring

The confidence field (0.0–1.0) reflects the reliability of the extracted data, not the severity of the incident. A score near 1.0 indicates strong, consistent, multi-source reporting with official confirmation. Lower scores indicate early-stage reporting, single-source claims, or significant factual uncertainty. The threat level badge (Critical / High / Medium / Low) is derived from a combination of confidence and assessed intent.
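As a minimal sketch of how a badge could be derived from confidence and assessed intent, consider the function below. The specific thresholds and the exact combination rule are assumptions; the platform does not publish its cutoffs.

```python
def threat_level(confidence: float, intent: str) -> str:
    """Derive a threat badge from confidence and assessed intent.

    Illustrative only: thresholds are assumed, not the platform's actual rules.
    """
    if intent == "mass_casualty" and confidence >= 0.8:
        return "Critical"   # well-sourced mass-casualty intent
    if intent == "mass_casualty" or confidence >= 0.8:
        return "High"       # either factor alone still elevates
    if confidence >= 0.5:
        return "Medium"
    return "Low"            # early-stage or thinly sourced reporting
```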

3 — Storage & Documentation

Database

All extracted records are stored in a relational database with a fixed schema. Each record preserves the raw JSON response from the extraction model alongside the normalized fields, allowing for retroactive reprocessing or audit if the extraction prompt is revised.
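The storage pattern described above, normalized columns alongside the raw model output, can be sketched with an in-memory SQLite table. Column names and types are illustrative; the platform's actual schema is not published.

```python
import sqlite3

# Illustrative schema: a few normalized fields plus the verbatim extraction
# JSON, preserved so records can be reprocessed or audited later.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE incidents (
        id         INTEGER PRIMARY KEY,
        name       TEXT NOT NULL,
        date       TEXT NOT NULL,                                   -- ISO 8601
        confidence REAL CHECK (confidence BETWEEN 0.0 AND 1.0),
        raw_json   TEXT NOT NULL                                    -- raw model response
    )
""")
conn.execute(
    "INSERT INTO incidents (name, date, confidence, raw_json) VALUES (?, ?, ?, ?)",
    ("Example incident", "2024-05-01T14:30:00Z", 0.85,
     '{"name": "Example incident"}'),
)
row = conn.execute("SELECT name, confidence FROM incidents").fetchone()
```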

Updates and corrections

Records are not automatically updated as investigations develop. Where a significant factual correction is needed (e.g. a death toll is revised, or attribution is formally established or withdrawn), the existing record should be deleted and re-submitted with an updated source URL, or manually edited via the admin interface.

Access controls

Read access to all incident data is public. Write access (adding or removing records) is restricted to authenticated users. This is enforced at the view level and is not dependent on client-side controls.
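A view-level write guard of the kind described can be sketched framework-agnostically as a decorator. The request object, the `user_authenticated` attribute, and the return shape below are all assumptions for illustration, not the platform's implementation.

```python
def require_auth(view):
    """Reject write operations from unauthenticated requests.

    Illustrative server-side check: enforcement lives in the view itself,
    not in client-side controls.
    """
    def wrapped(request, *args, **kwargs):
        if not getattr(request, "user_authenticated", False):
            return ("403 Forbidden", None)    # write denied
        return view(request, *args, **kwargs)
    return wrapped

@require_auth
def delete_incident(request, incident_id):
    # Hypothetical write endpoint; reads would carry no such guard.
    return ("200 OK", incident_id)
```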

4 — Pattern Analysis

How it is generated

Each time a record is added or removed, the full incident dataset is re-submitted to a large language model with a prompt requesting analysis across three dimensions: geographic clustering, ideological patterns, and method trends. The output is stored and displayed on the Map & Pattern Analysis page, and reflects the state of the database at the time of the last write operation.
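The write-triggered regeneration described above can be sketched as a small cache that is refreshed on every add or remove. `analyze_fn` stands in for the model call; the class and method names are illustrative, not the platform's code.

```python
class PatternAnalysisCache:
    """Holds the analysis produced at the last write operation.

    Sketch of the trigger pattern only: `analyze_fn` is a stand-in for
    the model call that examines the full dataset.
    """
    def __init__(self, analyze_fn):
        self._analyze = analyze_fn
        self.latest = None          # reflects the dataset at the last write

    def record_added(self, dataset):
        self.latest = self._analyze(dataset)    # full re-analysis on add

    def record_removed(self, dataset):
        self.latest = self._analyze(dataset)    # full re-analysis on remove
```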

Limitations

Pattern analysis is generated from OSINT-derived data and inherits all limitations of that sourcing. It should be treated as an analytical starting point, not a finished intelligence assessment. Small sample sizes, reporting gaps, and evolving investigations can significantly affect the reliability of observed patterns.

Disclaimer

This platform is based on open-source reporting and should be treated as OSINT. Facts, motivations, affiliations, and legal conclusions may change as investigations proceed. Unless explicitly confirmed by official authorities, all activities, attributions, and linkages should be considered under investigation.

Questions or Corrections

To report an error, suggest an incident for inclusion, or inquire about this research, contact the Spectra Intel Group research team directly.
