Methodology
How we collected, classified, and reviewed the evidence across all three platforms.
Data collection
Each platform was analysed using the same five-step process:
- Deal identification. RNS announcements mentioning each platform were sourced via the Ticker Data API, covering January 2023 to April 2026. Each announcement was tagged by stage (launch, result, admission) and grouped into deals by issuer with a 60-day gap threshold.
- RNS body analysis. The full text of each announcement was fetched from the Ticker newswire and stripped to plain text. Each body was then classified against a 12-category financial promotion rubric by an independent classifier, with severity assigned as high, medium, low, or informational.
- Director and PDMR research. Board composition was sourced from Companies House and the Ticker directors database. X (Twitter) handles were verified via the X API v2 batch verification endpoint. Deal-window tweets were fetched for confirmed handles covering ±3 weeks of each deal.
- Social media and LinkedIn. LinkedIn content searches were conducted for each platform name, each issuer, and for C-suite individuals. X account activity during deal windows was captured and filtered to deal-relevant content. Company social profiles were enriched from the Ticker issuer database.
- Independent review. Every high-severity finding was individually verified against the actual RNS body text and the cited legislation. Findings that did not withstand scrutiny were downgraded or removed. Three independent review passes were conducted: structural validation, content accuracy, and per-finding regulatory assessment.
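The deal-identification step above groups an issuer's announcements into deals using a 60-day gap threshold. A minimal sketch of that grouping logic, assuming announcements arrive as dated records per issuer (function and variable names are illustrative, not the review's actual pipeline):

```python
from datetime import date, timedelta

GAP = timedelta(days=60)  # gap threshold separating two deals

def group_into_deals(announcement_dates):
    """Split one issuer's announcement dates into deals: consecutive
    announcements within 60 days of each other belong to the same deal;
    a larger gap starts a new deal."""
    deals = []
    for d in sorted(announcement_dates):
        if deals and d - deals[-1][-1] <= GAP:
            deals[-1].append(d)   # within the 60-day window: same deal
        else:
            deals.append([d])     # gap exceeded: start a new deal
    return deals
```

For example, announcements on 1 January, 20 January, and 1 May 2023 would resolve to two deals, since the January pair sit 19 days apart but the May announcement follows a gap of more than 60 days.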
Classification rubric
Each RNS body was assessed against 12 categories. A finding is generated only when the body text contains specific evidence supporting the classification.
| Category | What it captures | Relevant legislation |
|---|---|---|
| promotional_language | Superlative claims, unbalanced positive framing without proportionate risk warnings | COBS 4.2.1R |
| forward_looking_unqualified | Future projections presented without adequate qualifiers or proximate disclaimers | COBS 4.2.1R, COBS 4.6 |
| going_concern_or_distress | Material uncertainty about the company's ability to continue as a going concern | COBS 4.2.1R, PRIN 2A |
| director_concentration | Directors or insiders participating heavily alongside retail, especially with information asymmetry | UK MAR Art.19, COBS 4.2.1R |
| dilution_red_flags | Deep discounts, extreme share issuance, warrant sweeteners — only high severity when combined with another factor | COBS 4.2.1R, PRIN 2A |
| bitcoin_treasury_pattern | Crypto-treasury or speculative asset strategy funded by retail capital | s.21 FSMA, PRIN 2A |
| timing_concerns | Very short offer windows (<48h), rapid repeat raises, mid-raise changes | PRIN 2A |
| retail_targeting_aggressive | Explicit retail solicitation with urgency, FOMO, or pressure tactics | s.21 FSMA, FCA FG24/1 |
| platform_role_ambiguity | Unclear whether the platform is approver, distributor, or technology provider | s.21 FSMA, FPO Art.43 |
| mid_raise_webinar | Investor presentations or webinars during active offer windows | UK MAR Art.7, s.21 FSMA |
| selective_disclosure | Information given to a subset of investors during the offer period | UK MAR Art.7 |
| positive_compliance | Good practice examples — proper risk warnings, post-close scheduling, adequate windows | — |
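A finding against this rubric can be represented roughly as below, enforcing the rule that a finding stands only when it carries a verbatim evidence quote from the RNS body. The record shape and names are illustrative assumptions, not the classifier's actual schema:

```python
from dataclasses import dataclass

# The 12 rubric categories from the table above.
CATEGORIES = {
    "promotional_language", "forward_looking_unqualified",
    "going_concern_or_distress", "director_concentration",
    "dilution_red_flags", "bitcoin_treasury_pattern",
    "timing_concerns", "retail_targeting_aggressive",
    "platform_role_ambiguity", "mid_raise_webinar",
    "selective_disclosure", "positive_compliance",
}

@dataclass
class Finding:
    category: str      # one of CATEGORIES
    severity: str      # "high" | "medium" | "low" | "informational"
    evidence: str      # verbatim quote from the RNS body
    legislation: str   # provision(s) engaged, e.g. "COBS 4.2.1R"

def is_supported(finding, body_text):
    """A finding is valid only if its category is known and its
    verbatim evidence actually appears in the RNS body text."""
    return (finding.category in CATEGORIES
            and bool(finding.evidence)
            and finding.evidence in body_text)
```

This is the check applied during independent review: a high-severity finding whose quoted evidence cannot be located in the source body is downgraded or removed.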
Severity calibration
Severity is assigned based on the nature and combination of concerns identified:
- High: Clear evidence of a specific rule being engaged. The evidence quotes the source verbatim and the legislation field names the specific provision. Multiple compounding factors are typically present.
- Medium: Behaviour that falls short of regulatory expectations but may not constitute a technical breach. These are pattern-level findings or single-factor concerns.
- Low: Facts that inform the wider picture but are not themselves concerning in isolation. Includes deep discounts without compounding factors.
- Informational: Positive compliance examples that demonstrate good practice and should be maintained.
Critical calibration principle
Deep discounts to market price alone are not classified as high severity, regardless of the percentage. A high classification requires a compounding factor such as going concern uncertainty, director concentration, extreme dilution quantum, aggressive retail targeting, or bitcoin treasury pivots. This ensures the review focuses on genuinely problematic combinations rather than standard market mechanics.
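Under this principle, discount calibration reduces to a small predicate: the discount percentage never appears in the decision, only the presence of a compounding factor. A hedged sketch (category names mirror the rubric; the function itself is illustrative):

```python
# Compounding factors that can lift a deep discount to high severity,
# per the calibration principle above.
COMPOUNDING_FACTORS = {
    "going_concern_or_distress",    # going concern uncertainty
    "director_concentration",       # heavy insider participation
    "dilution_red_flags",           # extreme dilution quantum
    "retail_targeting_aggressive",  # aggressive retail targeting
    "bitcoin_treasury_pattern",     # bitcoin treasury pivots
}

def discount_severity(co_occurring_categories):
    """A deep discount alone stays low severity regardless of the
    percentage; it becomes high only alongside a compounding factor."""
    if COMPOUNDING_FACTORS & set(co_occurring_categories):
        return "high"
    return "low"
```

Note that the discount size is deliberately not a parameter: even a 40%+ discount with no co-occurring factor stays low.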
What we are not flagging
- Deep discounts alone — even 40%+ discounts are low severity without a compounding factor
- Standard CEO optimism in quotes — promotional language is only flagged when combined with distress or misleading context
- Article 43 FPO exemption usage — it is a legitimate and standard exemption for existing shareholders
- Director participation that is modest, proportionate, and properly disclosed under AIM Rule 13
- Standard mining project economics — NPV, IRR, and resource estimates are conventional technical disclosure
- Companies that are simply small or early-stage — being pre-revenue is not the same as going concern distress
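The exclusions above act as suppression rules applied before a finding is emitted. One possible sketch, with hypothetical context keys (the review's actual field names are not published here):

```python
def is_flagged(category, context):
    """Apply the exclusion rules above to a candidate finding.
    `context` keys are illustrative assumptions."""
    if category == "promotional_language":
        # Standard CEO optimism is fine on its own; flag only when
        # combined with distress or misleading context.
        return bool(context.get("distress") or context.get("misleading"))
    if category == "director_concentration":
        # Modest, proportionate, properly disclosed participation
        # under AIM Rule 13 is not flagged.
        return not context.get("modest_and_disclosed", False)
    if category == "going_concern_or_distress":
        # Being small or pre-revenue is not distress; a stated going
        # concern uncertainty is required.
        return bool(context.get("stated_going_concern_uncertainty"))
    return True  # other categories pass through to severity calibration
```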
Data sources
| Source | Coverage | Volume |
|---|---|---|
| Ticker Data API | RNS announcements, full body text | 398 bodies |
| X API v2 | Handle verification, deal-window tweets | 1,393 tweets, 49 handles |
| LinkedIn content search | Platform, issuer, and individual searches | 70 findings |
| Ticker issuer database | Company profiles, social handles, websites | 118 profiles |
| Companies House / directors | Board composition, tenure, appointments | 1,024 directors |
| Platform websites | Winterflood, BookBuild, RetailBook marketing | 5 pages |
Limitations
- This review is based on publicly available materials. Internal compliance workflows, approval processes, and private communications were not reviewed.
- The X API v2 `userTimeline` endpoint filters out replies by default. CEOs who post primarily as replies (e.g. Freddie New) require the `searchAll` full-archive endpoint, which was used where identified.
- LinkedIn content search has limited discoverability for small-cap fundraise posts. Many posts may exist but are not indexed by LinkedIn's search or by web search engines.
- Deal amounts for RetailBook and BookBuild are extracted from RNS body text and may represent total fundraise amounts rather than the retail offer specifically.
- The FINPROM classifier is an LLM-based system. While each high-severity finding was independently verified against the source text, classification at the margins (medium vs. low) involves judgment calls.