Methodology

How we collected, classified, and reviewed the evidence across all three platforms.

Data collection

Each platform was analysed using the same five-step process:

  1. Deal identification. RNS announcements mentioning each platform were sourced via the Ticker Data API, covering January 2023 to April 2026. Each announcement was tagged by stage (launch, result, admission) and grouped into deals by issuer with a 60-day gap threshold.
  2. RNS body analysis. The full text of each announcement was fetched from the Ticker newswire and stripped to plain text. Each body was then classified against a 12-category financial promotion rubric by an independent classifier, with severity assigned as high, medium, low, or informational.
  3. Director and PDMR research. Board composition was sourced from Companies House and the Ticker directors database. X (Twitter) handles were verified via the X API v2 batch verification endpoint. Tweets were then fetched for confirmed handles across a ±3-week window around each deal.
  4. Social media and LinkedIn. LinkedIn content searches were conducted for each platform name, each issuer, and for C-suite individuals. X account activity during deal windows was captured and filtered to deal-relevant content. Company social profiles were enriched from the Ticker issuer database.
  5. Independent review. Every high-severity finding was individually verified against the actual RNS body text and the cited legislation. Findings that did not withstand scrutiny were downgraded or removed. Three independent review passes were conducted: structural validation, content accuracy, and per-finding regulatory assessment.
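The grouping rule in step 1 (issuer-level clustering with a 60-day gap threshold) can be sketched as follows. This is a minimal illustration; the function and field names are assumptions, not the pipeline's actual code:

```python
from datetime import date

def group_into_deals(announcements, gap_days=60):
    """Cluster (issuer, date) announcements into deals: a new deal starts
    whenever more than gap_days have passed since the issuer's previous
    announcement. Returns {issuer: [list of per-deal date lists]}."""
    deals = {}
    for issuer, day in sorted(announcements):
        issuer_deals = deals.setdefault(issuer, [])
        if issuer_deals and (day - issuer_deals[-1][-1]).days <= gap_days:
            issuer_deals[-1].append(day)   # within the gap: same deal
        else:
            issuer_deals.append([day])     # gap exceeded, or first seen: new deal
    return deals
```

With announcements on 5 January, 20 January, and 1 June 2023 for one issuer, the first two cluster into a single deal and the June announcement opens a second.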

Classification rubric

Each RNS body was assessed against 12 categories. A finding is generated only when the body text contains specific evidence supporting the classification.

| Category | What it captures | Relevant legislation |
| --- | --- | --- |
| promotional_language | Superlative claims, unbalanced positive framing without proportionate risk warnings | COBS 4.2.1R |
| forward_looking_unqualified | Future projections presented without adequate qualifiers or proximate disclaimers | COBS 4.2.1R, COBS 4.6 |
| going_concern_or_distress | Material uncertainty about the company's ability to continue as a going concern | COBS 4.2.1R, PRIN 2A |
| director_concentration | Directors or insiders participating heavily alongside retail, especially with information asymmetry | UK MAR Art.19, COBS 4.2.1R |
| dilution_red_flags | Deep discounts, extreme share issuance, warrant sweeteners — only high severity when combined with another factor | COBS 4.2.1R, PRIN 2A |
| bitcoin_treasury_pattern | Crypto-treasury or speculative asset strategy funded by retail capital | s.21 FSMA, PRIN 2A |
| timing_concerns | Very short offer windows (<48h), rapid repeat raises, mid-raise changes | PRIN 2A |
| retail_targeting_aggressive | Explicit retail solicitation with urgency, FOMO, or pressure tactics | s.21 FSMA, FCA FG24/1 |
| platform_role_ambiguity | Unclear whether the platform is approver, distributor, or technology provider | s.21 FSMA, FPO Art.43 |
| mid_raise_webinar | Investor presentations or webinars during active offer windows | UK MAR Art.7, s.21 FSMA |
| selective_disclosure | Information given to a subset of investors during the offer period | UK MAR Art.7 |
| positive_compliance | Good practice examples — proper risk warnings, post-close scheduling, adequate windows | — |
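The rubric above can be encoded as a simple lookup from category to cited provisions, with a finding record that enforces the evidence requirement. This representation (the dict layout, the `Finding` shape, and its field names) is an illustrative assumption, not the classifier's actual schema:

```python
from dataclasses import dataclass

# Category -> cited provisions, as listed in the rubric table.
RUBRIC = {
    "promotional_language": ("COBS 4.2.1R",),
    "forward_looking_unqualified": ("COBS 4.2.1R", "COBS 4.6"),
    "going_concern_or_distress": ("COBS 4.2.1R", "PRIN 2A"),
    "director_concentration": ("UK MAR Art.19", "COBS 4.2.1R"),
    "dilution_red_flags": ("COBS 4.2.1R", "PRIN 2A"),
    "bitcoin_treasury_pattern": ("s.21 FSMA", "PRIN 2A"),
    "timing_concerns": ("PRIN 2A",),
    "retail_targeting_aggressive": ("s.21 FSMA", "FCA FG24/1"),
    "platform_role_ambiguity": ("s.21 FSMA", "FPO Art.43"),
    "mid_raise_webinar": ("UK MAR Art.7", "s.21 FSMA"),
    "selective_disclosure": ("UK MAR Art.7",),
    "positive_compliance": (),
}

@dataclass
class Finding:
    category: str
    severity: str   # "high" | "medium" | "low" | "informational"
    evidence: str   # verbatim quote from the RNS body

    def __post_init__(self):
        # A finding is generated only with specific supporting evidence.
        assert self.category in RUBRIC, f"unknown category: {self.category}"
        assert self.evidence.strip(), "finding requires verbatim evidence"
```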

Severity calibration

Severity is assigned based on the nature and combination of concerns identified:

High

Clear evidence of a specific rule being engaged. The evidence quotes the source verbatim and the legislation field names the specific provision. Multiple compounding factors are typically present.

Medium

Behaviour that falls short of regulatory expectations but may not constitute a technical breach. These are pattern-level findings or single-factor concerns.

Low

Facts that inform the wider picture but are not themselves concerning in isolation. Includes deep discounts without compounding factors.

Info

Positive compliance examples that demonstrate good practice and should be maintained.

Critical calibration principle

Deep discounts to market price alone are not classified as high severity, regardless of the percentage. A high classification requires a compounding factor such as going concern uncertainty, director concentration, extreme dilution quantum, aggressive retail targeting, or bitcoin treasury pivots. This ensures the review focuses on genuinely problematic combinations rather than standard market mechanics.
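As a sketch, the calibration principle reduces to a single guard. The set-of-flags representation and the `extreme_dilution_quantum` name are illustrative assumptions rather than the review's actual implementation:

```python
# Compounding factors named in the calibration principle above.
COMPOUNDING_FACTORS = {
    "going_concern_or_distress",
    "director_concentration",
    "extreme_dilution_quantum",   # hypothetical flag for dilution quantum
    "retail_targeting_aggressive",
    "bitcoin_treasury_pattern",
}

def discount_severity(flags: set) -> str:
    """A deep discount alone is low severity regardless of percentage;
    it becomes high only when a compounding factor is also present."""
    return "high" if flags & COMPOUNDING_FACTORS else "low"
```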

What we are not flagging

  • Deep discounts alone — even 40%+ discounts are low severity without a compounding factor
  • Standard CEO optimism in quotes — promotional language is only flagged when combined with distress or misleading context
  • Article 43 FPO exemption usage — it is a legitimate and standard exemption for existing shareholders
  • Director participation that is modest, proportionate, and properly disclosed under AIM Rule 13
  • Standard mining project economics — NPV, IRR, and resource estimates are conventional technical disclosure
  • Companies that are simply small or early-stage — being pre-revenue is not the same as going concern distress

Data sources

| Source | Coverage | Volume |
| --- | --- | --- |
| Ticker Data API | RNS announcements, full body text | 398 bodies |
| X API v2 | Handle verification, deal-window tweets | 1,393 tweets, 49 handles |
| LinkedIn content search | Platform, issuer, and individual searches | 70 findings |
| Ticker issuer database | Company profiles, social handles, websites | 118 profiles |
| Companies House / directors | Board composition, tenure, appointments | 1,024 directors |
| Platform websites | Winterflood, BookBuild, RetailBook marketing | 5 pages |

Limitations

  • This review is based on publicly available materials. Internal compliance workflows, approval processes, and private communications were not reviewed.
  • The X API v2 userTimeline endpoint filters out replies by default. CEOs who post primarily as replies (e.g. Freddie New) require the searchAll full-archive endpoint, which was used where identified.
  • LinkedIn content search has limited discoverability for small-cap fundraise posts. Many posts may exist but are not indexed by LinkedIn’s search or by web search engines.
  • Deal amounts for RetailBook and BookBuild are extracted from RNS body text and may represent total fundraise amounts rather than the retail offer specifically.
  • The FINPROM classifier is an LLM-based system. While each high-severity finding was independently verified against the source text, classification at the margins (medium vs. low) involves judgment calls.
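The timeline-versus-full-archive fallback described in the second limitation can be sketched as below. `fetch_timeline` and `search_full_archive` are hypothetical callables standing in for the client wrappers used, not real X API method names:

```python
def deal_window_tweets(handle, start, end, fetch_timeline, search_full_archive):
    """Fetch a director's deal-window tweets. The timeline endpoint omits
    replies by default, so reply-first posters come back empty; fall back
    to a full-archive search scoped to the same author and window."""
    tweets = fetch_timeline(handle, start, end)
    if not tweets:
        # reply-first posters are invisible to the timeline endpoint
        tweets = search_full_archive(f"from:{handle}", start, end)
    return tweets
```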