Claims vs. Reality — Before the Token Exists

Pre-TGE tokens have no on-chain history, no DEX liquidity, no transaction volume. Every metric is a promise. Token Verdict's job is to separate what the team claims from what the public can independently verify — using only documents, social presence, code repos, and third-party sources available right now.

🔍 The Verification Status Framework

Every scorable criterion in this methodology is tagged with one of three verification statuses. This is the core product value: a fast, honest read on how much of this project is substantiated versus taken on faith.

✅ VERIFIED: Independent public evidence found. Third-party sources, cross-referenced LinkedIn histories, visible GitHub commits, press coverage from credible outlets, or externally-published audit reports.
⚠️ CLAIMED: Team asserts this but we cannot independently verify it. Whitepaper-only claims, unverifiable partner logos, anonymous team credentials, or self-reported metrics.
❌ MISSING: Not mentioned anywhere in available public materials. Absence of information is itself a signal — especially for categories investors typically expect to see.

✅ Available Data Sources (Pre-TGE)

  • 📄 Whitepaper / Litepaper
  • 🌐 Project Website
  • 💼 Team LinkedIn Profiles
  • 𝕏 X / Twitter
  • 💬 Discord Server
  • 📱 Telegram Group
  • 💻 GitHub Repos (if public)
  • 🔒 Published Audit Reports
  • 📰 Press Releases / Blog Posts
  • 🤝 Partner Announcements
  • 🎥 Demo Videos / Testnet Footage
  • 📧 Team Interviews / AMA Recordings

🚫 NOT Available (No On-Chain Data Pre-TGE)

  • ⛔ Smart Contract Address
  • ⛔ DexScreener / DEX Liquidity
  • ⛔ Transaction Volume
  • ⛔ Liquidity Lock Verification
  • ⛔ Burn Address Checks
  • ⛔ TVL / Token Terminal
  • ⛔ Holder Distribution
  • ⛔ On-Chain Treasury Proof


CAT 1: 👥 Team & Credibility (25 points)
C1.1 — Team Identity & Doxxing 0–5 pts 📄 LinkedIn + Website

What We're Assessing

Are the founders and core team publicly identified? Can we verify their identities through LinkedIn, past employer records, or credible press mentions?

How to Assess

  • Cross-reference LinkedIn profiles against company history claims
  • Search team names + previous companies in press archives
  • Check if website bios match LinkedIn employment history
  • Verify photos aren't stock images (reverse image search)
  • Look for verifiable social presence pre-dating this project

Verification Status Examples

Scenario | Status | Notes
Founders named, LinkedIn verified, prior employer confirms role | ✅ VERIFIED | Cross-referenced with external sources
Named team, LinkedIn exists, but no way to confirm claimed roles | ⚠️ CLAIMED | Self-reported credentials only
Anonymous team, no names disclosed anywhere | ❌ MISSING | Identity unverifiable — significant risk flag
5: Fully doxxed. Named founders with verifiable LinkedIn histories, prior employer cross-checks pass, credible press mentions pre-dating this project. Real identities confirmed.
4: Mostly verified. Named team, LinkedIn profiles exist and match claimed history, minor gaps in verification.
3: Partially verified. Some team members identified and verifiable; others named but unverifiable or background sparse.
2: Claimed only. Names given but no LinkedIn, no press, no external record. Claims cannot be verified.
1: Anonymous / pseudonymous. Team deliberately anonymous or pseudonyms only. No accountability trail possible.
⚠️ Pre-TGE Note: Anonymous teams in pre-TGE projects are higher risk than post-TGE — there is zero on-chain accountability and no track record. Weight this heavily.
C1.2 — Relevant Experience 0–5 pts 📄 LinkedIn + Press

What We're Assessing

Does the team have demonstrable experience relevant to what they're building? Claimed credentials mean little — documented track record matters.

How to Assess

  • Review LinkedIn employment history for blockchain/crypto/relevant domain roles
  • Search past projects launched — did they ship?
  • Look for academic credentials that can be verified
  • Check if past employers are real, credible companies
  • Find media coverage of team's previous work

Verification Status Examples

Claim | Status | How to Check
"Former Google engineer" | ✅ VERIFIED | LinkedIn employer visible, press mentions, public GitHub contributions during that period
"10 years in DeFi" | ⚠️ CLAIMED | Whitepaper bio only, no verifiable prior DeFi projects found
Team bios not provided | ❌ MISSING | No experience information available to evaluate
5: Highly relevant, independently verified. Documented history in directly applicable domains, prior successful projects, verifiable credentials.
4: Relevant experience, mostly verifiable. Strong domain background with minor gaps in external verification.
3: Mixed relevance. Some applicable experience verifiable; other key roles are adjacent or unconfirmed.
2: Weak or claimed-only. Experience stated but minimal verifiable track record in this domain.
1: No relevant experience. Team background clearly mismatched with project ambitions, or no background info available.
C1.3 — Advisors & Backers 0–5 pts

What We're Assessing

Are listed advisors, investors, and strategic partners real, credible, and actually engaged? Logo drops on a website are easy to fake.

How to Assess

  • Search advisor names — do they confirm this role publicly?
  • Check if investors/VCs have announced the deal on their own channels
  • Look for mutual social posts, quote tweets, or official blog posts
  • Verify partner logos are actual partnerships, not just integrations
  • Cross-check advisor LinkedIn "Experience" sections for this role

Verification Status Examples

Claim | Status | Evidence
VC investment listed | ✅ VERIFIED | VC firm's own Twitter/website confirms the investment with a deal announcement
Advisor listed on website | ⚠️ CLAIMED | Advisor has no public acknowledgment; LinkedIn doesn't list this role
Partner logo displayed | ⚠️ CLAIMED | Logo on website, but no official announcement from that partner
No advisors or backers mentioned | ❌ MISSING | Common for very early stage — note but don't over-penalize if early
5: Strong, verified backing. Named VCs have public deal announcements; advisors confirm on own social/LinkedIn; partners have mutual press releases.
4: Mostly credible. Some backers/advisors independently verifiable; minor gaps.
3: Mixed. Logos/names listed; partial external confirmation; some claims unverifiable.
2: Unverified logos only. Partner/investor logos on site but no independent confirmation found.
1: No credible backing visible, or logos/names present that cannot be verified at all.
C1.4 — Track Record of Delivery 0–5 pts 💻 GitHub + Testnet + Press

What We're Assessing

Has the team shipped anything before? Are there visible milestones met — testnet launches, product demos, prior protocol deployments? Promises vs. proof of execution.

How to Assess

  • GitHub: public repos with genuine commit history (not just a README)
  • Testnet evidence: screenshots, community reports, demo videos
  • Blog/press: roadmap milestones with public completion evidence
  • Community: user reports of actually testing the product
  • Prior projects: did the same team launch and deliver before?

Verification Status Examples

Claim | Status | Evidence
"Testnet live" | ✅ VERIFIED | Community members posting testnet TX hashes, Discord full of testnet feedback, demo video shows real UI
"Testnet Q4 2024" (roadmap claim) | ⚠️ CLAIMED | Roadmap says it happened but no evidence of actual testnet activity found
No development evidence | ❌ MISSING | Website only, no code, no demos, no community testnet reports
5: Strong delivery evidence. Active public GitHub with meaningful commit history, live testnet with community activity, verified demo, prior successful launches by this team.
4: Good delivery signals. GitHub activity present, some testnet or demo evidence, roadmap milestones have visible proof of completion.
3: Partial evidence. Some code visible or demo exists but limited community verification; roadmap progress partially evidenced.
2: Minimal evidence. GitHub exists but sparse; no testnet; roadmap claims with no completion proof.
1: No delivery evidence. Website and whitepaper only. No visible code, no demos, no community product interaction.
C1.5 — Legal Entity & Regulatory Stance 0–5 pts 📄 Website + Press

What We're Assessing

Is there a registered legal entity? Is the jurisdiction disclosed? Has the project engaged legal counsel for token compliance? Regulatory clarity reduces risk significantly.

How to Assess

  • Check website footer / terms of service for legal entity name
  • Search company registration databases (e.g., Companies House, Cayman Islands registry)
  • Look for legal disclaimers in whitepaper / token sale documents
  • Check if team has disclosed jurisdiction and token classification
  • Look for legal firm mentions or compliance advisors
5: Clear legal structure. Named registered entity, verifiable jurisdiction, legal counsel identified, token compliance framework disclosed.
4: Partial disclosure. Entity mentioned or jurisdiction stated; some legal framework visible but not fully detailed.
3: Vague. Legal entity implied or mentioned without specifics; jurisdiction unclear.
2: Minimal. Website has generic ToS but no legal entity disclosed.
1: No legal structure visible. No entity, no jurisdiction, no compliance discussion anywhere.
CAT 2: 🪙 Tokenomics & Structure (20 points)
C2.1 — Token Allocation Transparency 0–5 pts 📄 Whitepaper + Website

What We're Assessing

Is the full token supply allocation disclosed in detail? Are all categories (team, investors, ecosystem, treasury, public sale) accounted for and percentages clearly stated?

How to Assess

  • Whitepaper tokenomics section: are all allocations summed to 100%?
  • Are team and insider allocations explicitly named (not buried)?
  • Is public sale percentage specified?
  • Are reserve/treasury uses described?
  • Does the allocation chart match written text?
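
The internal-consistency checks above (allocations summing to 100%, insider buckets explicitly named) can be sketched as a small helper. A minimal sketch in Python; the expected bucket names and tolerance are assumptions for illustration, not part of the methodology:

```python
def check_allocation(allocations: dict[str, float], tol: float = 0.01) -> list[str]:
    """Flag internal-consistency problems in a disclosed token allocation.

    `allocations` maps bucket name -> percentage of total supply.
    Returns a list of issues; an empty list means internally consistent.
    """
    issues = []
    total = sum(allocations.values())
    if abs(total - 100.0) > tol:
        issues.append(f"allocations sum to {total:.2f}%, not 100%")
    # Buckets investors typically expect to see named explicitly (assumed set).
    expected = {"team", "investors", "public sale"}
    named = {k.lower() for k in allocations}
    for bucket in sorted(expected - named):
        issues.append(f"no explicit '{bucket}' bucket -> tag as MISSING")
    return issues

# Hypothetical allocation for illustration only: no public sale bucket, sums to 95%.
print(check_allocation({"Team": 20, "Investors": 15, "Ecosystem": 40, "Treasury": 20}))
```

Anything the helper flags maps onto the CLAIMED/MISSING statuses in the table below it is checking against.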

Verification Status

Claim | Status | Notes
Full allocation chart with named categories summing to 100% | ✅ VERIFIED | Internally consistent, all categories accounted for
"Community ecosystem" bucket with no breakdown | ⚠️ CLAIMED | Can't verify how it'll actually be used
No tokenomics disclosed at all | ❌ MISSING | Critical omission — cannot score project fairly
5: Full transparency. All allocations named, percentages clear, sums to 100%, purpose of each bucket explained, no hidden wallets or unaccounted supply.
4: Mostly clear. Major buckets disclosed, minor ambiguity in some categories.
3: Partial disclosure. Main allocations shown but significant categories vague or lumped together.
2: Vague. High-level percentages only, no breakdown of team/investor shares, purpose of allocations unclear.
1: No tokenomics disclosed. Cannot evaluate — maximum information asymmetry.
C2.2 — Vesting & Unlock Schedules 0–5 pts 📄 Whitepaper

What We're Assessing

Are team, advisor, and investor tokens subject to meaningful vesting periods? Pre-TGE, vesting schedules are entirely document-based promises — no on-chain enforcement yet.

How to Assess

  • Whitepaper: cliff periods specified for team/investors?
  • Are specific months/years stated, not just "multi-year vesting"?
  • What % of supply unlocks at TGE for insiders?
  • Are investor unlock terms publicly disclosed or hidden?
  • Is there any mention of enforcing vesting on-chain post-launch?
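
The unlock questions above can be made concrete with a simple model of the common "X% at TGE, cliff, then linear vesting" promise. A sketch under that assumption only; real schedules vary and, pre-TGE, all of this remains a document promise:

```python
def insider_unlocked_pct(total_pct: float, tge_unlock_frac: float,
                         cliff_months: int, vesting_months: int,
                         month: int) -> float:
    """Percent of total supply unlocked for one bucket `month` months after TGE.

    total_pct: the bucket's share of total supply (e.g. team = 20.0).
    tge_unlock_frac: fraction of the bucket released immediately at TGE.
    """
    unlocked = total_pct * tge_unlock_frac
    locked = total_pct - unlocked
    if month >= cliff_months and vesting_months > 0:
        vested = min(month - cliff_months, vesting_months) / vesting_months
        unlocked += locked * vested
    return unlocked

# Hypothetical terms: team holds 20% of supply, 0% at TGE, 12m cliff, 24m linear.
for m in (0, 12, 24, 36):
    print(m, insider_unlocked_pct(20.0, 0.0, 12, 24, m))
```

Running the curve for each insider bucket shows at a glance when supply pressure arrives, which is what the scoring rubric below rewards disclosing.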

Pre-TGE Limitation

Information Type | Status | Notes
Specific vesting terms in whitepaper (e.g. "12m cliff, 24m linear") | ⚠️ CLAIMED | Document promise — on-chain enforcement cannot be verified pre-TGE
Vesting not mentioned at all | ❌ MISSING | Major red flag — implied immediate unlock for insiders
ℹ️ Pre-TGE vesting schedules are always CLAIMED — they are promises, not on-chain locks. Score on specificity and credibility of the promise, not verifiability.
5: Detailed, investor-aligned schedules. Team has meaningful cliff (≥12m) + multi-year vesting; investor unlock terms disclosed; TGE unlock % for insiders is minimal (<5%).
4: Good vesting structure. Cliff and vesting periods specified for key parties; small gaps in disclosure.
3: Partial. Some vesting mentioned but terms vague or only applying to team, not investors.
2: Minimal. Vesting mentioned in passing; no specific timelines; unknown TGE unlock %.
1: No vesting disclosed. Implies insiders can dump at TGE. Automatic concern.
C2.3 — Token Utility & Value Accrual 0–5 pts 📄 Whitepaper + Website

What We're Assessing

Does the token have genuine, clearly defined utility within the protocol? Or is it purely speculative with no described mechanism for value accrual?

How to Assess

  • What can token holders actually DO with the token?
  • Are fees, governance, staking, or access rights described?
  • Is there a mechanism for demand to grow with protocol usage?
  • Are burn or buyback mechanisms specified?
  • Is utility theoretical or tied to a working product?
5: Strong, specific utility. Multiple use cases clearly defined (governance + fee share + protocol access); value accrual mechanism is logical and product-dependent.
4: Good utility. Primary use cases clear; value accrual partially specified; mostly believable given the product.
3: Moderate. Some utility described but vague; or utility exists but value accrual mechanism isn't compelling.
2: Weak. Token utility is generic ("governance") with no specifics on how governance works or matters.
1: No real utility. Token is speculative with no credible use case tied to actual product activity.
C2.4 — Fundraising Transparency 0–5 pts 📄 Website + Press + Docs

What We're Assessing

Are private/seed round terms disclosed? Total raise amount? Investor allocations at what price? This determines how much selling pressure insiders have post-TGE.

How to Assess

  • Press releases announcing fundraise rounds (amounts + investors)
  • Whitepaper: private/seed round token price vs public sale price
  • Total funds raised — does it match claimed team size and roadmap?
  • Are investor allocation percentages disclosed?
  • Cross-reference investor portfolio pages for confirmation
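
One number worth deriving from the disclosures above is the insiders' paper multiple at the public price, since it proxies post-TGE sell pressure. A trivial sketch; the prices used are hypothetical:

```python
def insider_multiple(public_price: float, round_price: float) -> float:
    """How many times an insider's cost basis the public sale price is.

    A high multiple means early investors are deep in profit at listing,
    which raises post-TGE sell-pressure risk.
    """
    if round_price <= 0:
        raise ValueError("round price must be positive")
    return public_price / round_price

# Hypothetical prices: seed round at $0.02, public sale at $0.10 -> ~5x paper gain.
print(insider_multiple(0.10, 0.02))
```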
5: Full transparency. Fundraise amounts, investor names, round prices, and allocation percentages all publicly disclosed and confirmed by investors.
4: Mostly disclosed. Total raise known, major investors named, minor gaps in price/allocation details.
3: Partial. Round disclosed but investor terms or price not public.
2: Minimal. Investors named but no raise amount; or amount disclosed but investors anonymous.
1: No fundraising transparency. Unknown who invested, how much, or at what price. Maximum information asymmetry.
CAT 3: ⚙️ Technology & Product (25 points)
C3.1 — Technical Credibility of Whitepaper 0–5 pts 📄 Whitepaper

What We're Assessing

Does the whitepaper demonstrate genuine technical depth? Or is it marketing fluff dressed up with buzzwords? A strong whitepaper reveals whether the team actually understands what they're building.

How to Assess

  • Technical architecture: is it described with specifics?
  • Consensus mechanism, chain choice, scaling approach explained?
  • Are novel claims backed by references or proofs?
  • Are known tradeoffs acknowledged (or pretended away)?
  • Does the tech section match team's stated expertise?
5: Technically rigorous. Detailed architecture, specific design choices explained with rationale, known tradeoffs acknowledged, references to related work, clearly written by technical practitioners.
4: Solid technical content. Enough depth to evaluate feasibility, minor hand-waving in some areas.
3: Adequate. Technical overview present but lacking depth; claims mostly plausible but not well-supported.
2: Thin. High-level marketing language with technical jargon; no real architecture explained.
1: No technical content. Pure marketing. No architecture, no technical reasoning, or whitepaper is simply missing.
C3.2 — Code & GitHub Activity 0–5 pts 💻 GitHub

What We're Assessing

Is there public code? Is it actively developed? Commit history is one of the few genuinely verifiable signals of real engineering work pre-TGE.

How to Assess

  • Does a GitHub org exist and is it linked from the website?
  • Commit frequency and recency (last commit date)
  • Number of contributors (single dev or team?)
  • Code quality signals: tests, documentation, meaningful diffs
  • Are repos just copied boilerplate or original work?
  • Stars/forks as community signal (not definitive)

Verification Status

Finding | Status | Notes
Active GitHub with commits in last 30 days, multiple contributors | ✅ VERIFIED | Direct observable evidence of development activity
GitHub exists but last commit 6+ months ago | ⚠️ CLAIMED | Work may have moved private or stalled
No public GitHub repo linked | ❌ MISSING | Private repo possible but unverifiable — score accordingly
5: Active, substantive codebase. Multiple contributors, recent commits, meaningful code (not just config files), tests present, evidence of real engineering.
4: Good activity. Regular commits, some team contribution, reasonable code quality for stage.
3: Exists but limited. Code is there, activity is sparse or recent commits are minimal/trivial.
2: Minimal. Repo exists but is essentially a README/placeholder; no real code activity.
1: No public code. No GitHub org linked or repos empty. All development claims unverifiable.
C3.3 — Smart Contract Audit 0–5 pts 📄 Audit Reports + Website

What We're Assessing

Has the smart contract code been audited by a reputable firm? A published audit of pre-launch code is independently verifiable even before TGE, making it one of the stronger verification signals available.

How to Assess

  • Is a PDF audit report publicly available?
  • Is the auditing firm reputable (CertiK, Trail of Bits, Halborn, OtterSec, etc.)?
  • Check audit firm's own website/portfolio for confirmation
  • Read the findings summary — were critical issues found and fixed?
  • Does the audited code version match what will be deployed?

Verification Status

Finding | Status | Notes
PDF report from named firm, confirmed on firm's portfolio page | ✅ VERIFIED | Strongest pre-TGE signal — independently cross-checkable
Project claims "audit in progress" with named firm | ⚠️ CLAIMED | Cannot verify until report is published
No audit mentioned | ❌ MISSING | Acceptable for very early stage; concerning if raising significant capital
5: Completed, verified audit. Report from reputable firm publicly available, confirmed on auditor's portfolio, all critical/high findings resolved, clean final report.
4: Good audit. Credible firm, report available, minor findings remaining or acknowledged.
3: Audit in progress or partial. Named firm engaged, audit confirmed by firm, report not yet published.
2: Claimed but unverified. Says "audit completed" or "in progress" but no report available and firm doesn't confirm.
1: No audit. Code exists or TGE is imminent with no audit completed or planned. High risk.
C3.4 — Roadmap Credibility 0–5 pts 📄 Website + Blog + GitHub

What We're Assessing

Is the roadmap realistic? Has the team demonstrated they can actually hit milestones? A roadmap with past milestones that were met is worth more than one full of future promises.

How to Assess

  • Are past roadmap milestones evidenced as completed?
  • Is the future roadmap specific (dates, deliverables) or vague?
  • Are timelines realistic given team size and complexity?
  • Search blog/social for milestone announcements
  • Compare roadmap promises against actual GitHub activity dates
5: Track record + credible future. Past milestones evidenced as completed on time; future roadmap specific, realistic, and phased appropriately.
4: Good signals. Some past milestones visible as completed; future roadmap mostly specific.
3: Adequate. Roadmap exists, some specificity, limited past milestone evidence.
2: Vague. Roadmap is aspirational bullet points with no dates or verifiable past progress.
1: No roadmap, or wildly unrealistic timelines with no evidence of execution ability.
C3.5 — Problem/Solution Fit & Differentiation 0–5 pts 📄 Whitepaper + Website

What We're Assessing

Does this project solve a real problem? Is the solution meaningfully different from existing alternatives? Does the whitepaper acknowledge competitors honestly?

How to Assess

  • Is the problem clearly defined (not just "current solutions are bad")?
  • Does their approach actually solve it, or just describe it?
  • Are competitors named and honestly compared?
  • Is differentiation specific or generic ("faster, cheaper, better")?
  • Does the market size claim have a credible basis?
5: Clear, differentiated value proposition. Real problem well-defined, solution credibly addresses it, competitors honestly acknowledged, differentiation is specific and defensible.
4: Good fit. Problem and solution clear, differentiation mostly specific, minor gaps in competitive analysis.
3: Reasonable. Problem identified, solution addresses it broadly, but differentiation is generic.
2: Weak. Problem vaguely defined, solution doesn't clearly solve it, no credible differentiation.
1: No clear value prop. Buzzword-heavy with no identifiable real problem or differentiated approach.
CAT 4: 💬 Community & Transparency (15 points)
C4.1 — Community Authenticity 0–5 pts

What We're Assessing

Is the community real and engaged, or bought/bot-inflated? Pre-TGE communities are built — but the quality of conversation is a strong signal of genuine interest.

How to Assess

  • Discord: Are channels active? Are there real discussions or just announcements?
  • Telegram: Conversation quality vs. price/pump talk ratio
  • X: Are replies from real accounts with history, or new blank accounts?
  • Bot check: suspiciously round follower numbers, engagement rate vs. followers
  • Community questions: Are they being answered by team? How?
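
The bot-check heuristics above (engagement rate vs. followers, suspiciously round counts) can be encoded as a quick screen. The thresholds are illustrative assumptions, not calibrated values:

```python
def engagement_flags(followers: int, avg_engagements: float) -> list[str]:
    """Heuristic purchased-follower signals from public account stats.

    avg_engagements: mean likes + replies + reposts per recent post.
    """
    flags = []
    if followers <= 0:
        return ["no followers to evaluate"]
    rate = avg_engagements / followers
    if rate < 0.001:  # <0.1% engagement on a following this size is suspicious
        flags.append(f"engagement rate {rate:.3%} is suspiciously low")
    if followers >= 10_000 and followers % 1_000 == 0:
        flags.append("suspiciously round follower count")
    return flags

print(engagement_flags(50_000, 30))  # 30 avg engagements on 50k followers
```

A flagged account isn't proof of bought followers, but it moves the signal from VERIFIED toward CLAIMED in the table below.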

Verification Status

Signal | Status | Notes
Discord with technical discussion, product questions, testnet feedback | ✅ VERIFIED | Organic community activity is observable
50k Twitter followers but <0.1% engagement rate | ⚠️ CLAIMED | Follower count may be purchased
No community channels exist | ❌ MISSING | No community signal available
5: Clearly organic, engaged community. Substantive discussions, organic growth signals, real questions being answered, diverse active participants.
4: Good community signals. Mostly genuine activity; engagement looks real; minor inflated metrics possible.
3: Mixed signals. Some real activity but also bot/airdrop hunter indicators; community largely hype-focused.
2: Low quality. Channels exist but low engagement, mostly announcements, community is passive.
1: No real community, or clearly bot-inflated metrics. Discord/Telegram ghost towns.
C4.2 — Team Communication & Updates 0–5 pts

What We're Assessing

Does the team communicate regularly and transparently? Pre-TGE communication cadence is a proxy for post-launch support quality. Silence is a red flag.

How to Assess

  • Frequency of blog/medium posts — are they substantive?
  • X/Twitter: are founders personally active and responsive?
  • Discord: do team members participate in community channels?
  • AMA history: have they done public Q&As?
  • Do updates acknowledge challenges, or only celebrate wins?
5: Excellent communication. Regular, substantive updates; founders personally engaged; honest about challenges; AMA/community calls on record; update cadence is consistent.
4: Good communication. Regular updates, mostly substantive, team visible in community.
3: Adequate. Updates exist but infrequent or thin; team visible but not deeply engaged.
2: Weak. Sporadic updates, mostly hype-focused, team rarely engages with questions.
1: Silent team. Social media inactive, no updates in months, community questions unanswered.
C4.3 — Media Coverage & Third-Party Validation 0–5 pts

What We're Assessing

Has the project been covered by credible third parties? Press coverage, research reports, and independent reviews provide outside validation that the team can't fake as easily.

How to Assess

  • Search project name on Decrypt, CoinDesk, The Block, Blockworks
  • Any research coverage from Messari, Delphi, Pantera, etc.?
  • Conference appearances — was team on stage at real events?
  • YouTube/podcast appearances with credible hosts?
  • Distinguish earned coverage from paid press releases
5: Strong third-party coverage. Multiple credible media outlets, research report from reputable firm, conference appearances, earned (not paid) coverage with substantive analysis.
4: Good coverage. Some credible outlet coverage, podcast appearances, industry recognition.
3: Limited. Mentioned in smaller outlets or crypto media; mostly project-issued press releases.
2: Minimal. Only self-published content; no credible third-party coverage found.
1: No coverage. Cannot find any third-party references to this project outside their own channels.
CAT 5: 🚨 Risk & Red Flag Assessment (15 points)

This category scores the absence of red flags. A clean project starts at 15/15 and loses points as problems are identified. Flags are detected through document analysis, social media research, and cross-referencing public records.

C5.1 — Unrealistic Claims & Hyperbole 0–5 pts 📄 All Sources

What We're Assessing

Does the project make claims that are technically implausible, mathematically impossible, or that contradict well-established knowledge? Extraordinary claims require extraordinary evidence.

Red Flag Examples

  • "100,000 TPS with full decentralization and no tradeoffs"
  • Guaranteed return or yield promises
  • Comparison to Bitcoin/Ethereum as if equivalence is obvious
  • Market cap projections without credible basis
  • Partner claims that partners haven't confirmed
5: Honest and calibrated. Claims are specific and defensible; limitations acknowledged; no impossible promises; competitors treated fairly.
4: Mostly credible. Minor marketing inflation but nothing technically impossible or financially dangerous.
3: Some hype. Several unsubstantiated claims but none are egregiously false; pattern of overpromising.
2: Significant red flags. Multiple implausible or unverifiable claims; financial promises that raise legal/regulatory concerns.
1: Major misrepresentation. Claims that are demonstrably false, technically impossible, or potentially fraudulent. Automatic further review required.
C5.2 — Past Controversies & Background Checks 0–5 pts

What We're Assessing

Are there past rug pulls, scams, failed projects, regulatory actions, or credible accusations against the team or advisors? Public records are searchable — do the work.

How to Research

  • Search "[Name] rug pull", "[Name] scam", "[Name] SEC"
  • Check Rekt News, web3isgoinggreat.com for past incidents
  • Search team names on Twitter/X — what do critics say?
  • Look for deleted social media content (Wayback Machine)
  • Check if prior projects are still active or abandoned
5: Clean record. No concerning history found for key team members; prior projects delivered or failed gracefully with community communication.
4: Mostly clean. Minor past issues found but not directly relevant or were resolved transparently.
3: Some concerns. Past project failures without clear explanation; mixed reputation in the community.
2: Significant concerns. Prior rug or failed raise with poor community treatment; regulatory investigation history.
1: Serious red flags. Confirmed past fraud, rug pull, or regulatory action against key team members.
C5.3 — Information Completeness & Symmetry 0–5 pts 📄 All Sources

What We're Assessing

How much critical information is simply absent? Information asymmetry — where the team knows everything and investors know almost nothing — is itself a risk factor in pre-TGE investing.

What to Look For

  • Count how many major categories have MISSING status
  • Are there obvious questions the whitepaper doesn't address?
  • Is the project transparent about what they don't know yet?
  • Are terms of sale/investment clearly disclosed?
  • Is the token sale structure clearly explained?
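
Counting MISSING tags across the scorecard, as the first bullet suggests, can be mechanized. A sketch; the criterion ids and tags below are illustrative:

```python
from collections import Counter

def information_symmetry(tags: dict[str, str]) -> dict:
    """Summarize VERIFIED / CLAIMED / MISSING tags across a scorecard.

    `tags` maps criterion id -> status string. The MISSING share is the
    information-asymmetry signal this criterion scores.
    """
    counts = Counter(tags.values())
    total = len(tags) or 1
    return {"counts": dict(counts), "missing_share": counts["MISSING"] / total}

# Hypothetical tags for a partial scorecard.
print(information_symmetry({
    "C1.1": "VERIFIED", "C1.3": "CLAIMED", "C2.1": "MISSING", "C2.2": "MISSING",
}))
```

A high missing share means investors are being asked to take the project largely on faith, which is exactly what the rubric below penalizes.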
5: Highly transparent. Few MISSING tags across the full scorecard; project proactively discloses risks, limitations, and unknowns; investors have a clear picture.
4: Good transparency. Most key information available; some gaps in secondary categories.
3: Moderate gaps. Several important information categories missing; investors must take significant things on faith.
2: High information asymmetry. Critical information (team, tokenomics, use of funds) missing or vague.
1: Opaque. Cannot form any well-grounded view. The team holds all information; investors hold none.

📐 Score Formula

Category Weights → Total Score

Team & Credibility (25 pts) + Tokenomics (20 pts) + Technology (25 pts) + Community (15 pts) + Risk (15 pts) = 100 points maximum

Grade | Range | Meaning
A | 80–100 | Strong fundamentals, well-verified
B | 65–79 | Good project with some gaps
C | 50–64 | Moderate — many things claimed
D | 35–49 | Significant concerns identified
F | 0–34 | High risk — do not recommend
Pre-TGE Score Calibration: Expect most legitimate pre-TGE projects to score 45–70. A pre-TGE project scoring 80+ has exceptionally high transparency. A pre-TGE project scoring below 35 has serious fundamental gaps that should raise immediate concern.
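
The weighting and grade bands above can be sketched as a small scoring helper. The lowercase category keys are shorthand invented for this example:

```python
CATEGORY_MAX = {"team": 25, "tokenomics": 20, "technology": 25,
                "community": 15, "risk": 15}
GRADE_BANDS = [(80, "A"), (65, "B"), (50, "C"), (35, "D"), (0, "F")]

def grade(scores: dict[str, int]) -> tuple[int, str]:
    """Sum category scores (clamped to their maxima) and map to a letter grade."""
    total = sum(min(scores.get(cat, 0), cap) for cat, cap in CATEGORY_MAX.items())
    for cutoff, letter in GRADE_BANDS:
        if total >= cutoff:
            return total, letter
    return total, "F"

print(grade({"team": 18, "tokenomics": 14, "technology": 16,
             "community": 10, "risk": 11}))  # totals 69 -> grade "B"
```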

🚫 Auto-Fail Red Flags

Regardless of total score, any of the following triggers an automatic FAIL verdict. These are verifiable from public information and represent disqualifying risks for pre-TGE projects.

  • CRITICAL → FAIL: Confirmed Prior Rug Pull or Fraud. How to find: search team names + "rug", "scam", "exit" on X/Twitter, Rekt News, web3isgoinggreat.com.
  • CRITICAL → FAIL: Fully Anonymous Team Raising Significant Capital. Anonymous + large raise = zero accountability. No KYC, no identifiable founders.
  • CRITICAL → FAIL: Guaranteed Returns or Yield Promises. Any stated guarantee of investment returns in whitepaper or marketing materials.
  • CRITICAL → FAIL: No Whitepaper / Zero Technical Documentation. The project is raising money with no document explaining what it is or how it works.
  • HIGH → −10 pts: Fake or Unconfirmed Partner / Investor Logos. Logo on website but the named org has no public confirmation of this relationship.
  • HIGH → −8 pts: Team Members with No Verifiable History. Named but no LinkedIn, no press, no prior project history — essentially anonymous with a name attached.
  • HIGH → −8 pts: Insider TGE Unlock >20% of Supply. Tokenomics show team/investors can dump more than 20% of supply immediately at TGE.
  • MEDIUM → −5 pts: Plagiarized Whitepaper Content. Copyscape or manual search reveals sections copied from other projects' whitepapers.
  • MEDIUM → −5 pts: Community Channel Full of Price Talk, No Product Discussion. Discord/Telegram dominated by "wen moon", "wen listing" with zero technical/product discussion.
  • MEDIUM → −5 pts: Deleted / Scrubbed Social History. Wayback Machine shows significantly different prior website or deleted social posts around key dates.
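
The flag list above translates directly into scoring logic. A sketch; the flag identifiers are hypothetical shorthand for the flags listed, and a FAIL verdict here overrides the numeric score rather than zeroing it:

```python
CRITICAL_FLAGS = {"prior_rug", "anonymous_large_raise",
                  "guaranteed_returns", "no_whitepaper"}
DEDUCTIONS = {"fake_logos": 10, "unverifiable_team": 8, "insider_unlock_gt20": 8,
              "plagiarized_whitepaper": 5, "price_talk_community": 5,
              "scrubbed_history": 5}

def apply_flags(base_score: int, flags: set[str]) -> tuple[int, str]:
    """Apply point deductions, then check auto-fail flags (which set the verdict)."""
    penalty = sum(DEDUCTIONS.get(f, 0) for f in flags)
    score = max(base_score - penalty, 0)
    verdict = "FAIL" if flags & CRITICAL_FLAGS else "SCORED"
    return score, verdict

print(apply_flags(72, {"fake_logos", "scrubbed_history"}))  # (57, 'SCORED')
```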

✅ Pre-TGE Analyst Checklist

Complete this checklist before beginning any Token Verdict pre-TGE assessment; it ensures consistent data gathering across all reviews.

📄 Documents to Collect

  • Download / save whitepaper PDF
  • Screenshot full tokenomics section
  • Save any audit report PDFs
  • Archive roadmap (Wayback Machine snapshot)
  • Save Terms of Sale / token sale docs
  • Download any investor decks (if publicly available)

👥 Team Research

  • LinkedIn profiles for all named founders
  • Cross-check LinkedIn vs whitepaper bios
  • Search each founder name + "rug pull / scam / fraud"
  • Verify profile photos (reverse image search)
  • Check advisor LinkedIn for this role listed
  • Confirm investors on their own portfolio pages

💻 Technical Review

  • Find GitHub organization
  • Check commit frequency & last commit date
  • Count contributors & assess code quality
  • Verify audit report on auditor's own website
  • Read audit findings summary
  • Search for testnet evidence in Discord / Twitter

📱 Social & Community

  • Join Discord — assess channel activity quality
  • Check Telegram message quality vs volume
  • Analyze Twitter engagement rate vs follower count
  • Check reply quality on founder tweets
  • Search project on X for community sentiment
  • Find any AMA recordings

📰 Press & Third-Party

  • Search Decrypt, CoinDesk, The Block, Blockworks
  • Check Messari / Delphi / Pantera for research
  • Distinguish earned coverage vs paid press releases
  • Search for conference appearance videos
  • Check Rekt News for any incident mentions
  • Review web3isgoinggreat.com for team history

🚩 Red Flag Checks

  • Verify all partner/investor logos independently
  • Check for guaranteed return language anywhere
  • Verify TGE unlock % for insiders
  • Run whitepaper sections through plagiarism check
  • Wayback Machine — any scrubbed history?
  • Check prior projects — did they ship or abandon?

⚠️ Methodology Limitations

This Assessment Cannot Guarantee Safety

Token Verdict's pre-TGE methodology is built on publicly available information only. It identifies what is verifiable, what is claimed, and what is missing — but it cannot access private communications, internal financial records, or conduct legal due diligence.

What This Methodology CAN Do

  • Identify verifiable vs. unverifiable claims
  • Flag obvious red flags from public record
  • Assess quality of team transparency
  • Evaluate technical credibility of documentation
  • Score relative to other pre-TGE projects
  • Provide structured, consistent analysis framework

What This Methodology CANNOT Do

  • Verify private investor terms or cap tables
  • Confirm treasury holdings (no on-chain data)
  • Guarantee vesting will be enforced post-TGE
  • Detect sophisticated fraud or impersonation
  • Predict market performance
  • Replace legal, financial, or investment advice
Disclaimer: Token Verdict scores are research tools for informed decision-making, not investment recommendations. Pre-TGE investing is inherently high-risk. A high Token Verdict score does not guarantee project success or token appreciation. Always conduct your own due diligence and consult qualified financial advisors before investing.