A deep dive into how Web Jury calculates trust scores, political bias ratings, and accuracy metrics.
Every entity on Web Jury receives a trust score from 1 to 5 stars, calculated as a weighted average of all community reviews. Not all reviews are weighted equally — the system factors in reviewer trust level, verification status, and recency.
Reviewer trust weighting: Reviews from users with higher trust scores (based on account history, helpfulness votes, and verification) carry proportionally more weight. This rewards consistently helpful, high-quality reviewers.
Time decay: More recent reviews are weighted more heavily than older ones, ensuring scores reflect current perceptions rather than outdated assessments.
Minimum threshold: A minimum of 5 reviews is required before a trust score is publicly displayed. Below this threshold, the entity shows a "Not enough data" indicator.
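The weighting rules above can be sketched as a short function. This is an illustrative model only: the `Review` fields, the exponential decay, and the `HALF_LIFE_DAYS` constant are assumptions, since Web Jury does not publish its exact formula.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Review:
    stars: int             # 1-5 rating
    reviewer_trust: float  # weight derived from the reviewer's own trust level
    created_at: datetime

MIN_REVIEWS = 5
HALF_LIFE_DAYS = 180.0  # assumed decay half-life; the real value is not published

def trust_score(reviews, now=None):
    """Weighted average of star ratings; None below the display threshold."""
    if len(reviews) < MIN_REVIEWS:
        return None  # rendered as "Not enough data"
    now = now or datetime.now(timezone.utc)
    total = weight_sum = 0.0
    for r in reviews:
        age_days = (now - r.created_at).total_seconds() / 86400
        recency = 0.5 ** (age_days / HALF_LIFE_DAYS)  # time decay: newer counts more
        w = r.reviewer_trust * recency
        total += r.stars * w
        weight_sum += w
    return round(total / weight_sum, 2)
```

Note how both factors combine multiplicatively: a recent review from a low-trust account and an old review from a high-trust account can end up with similar weight.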
Every review includes a mandatory political bias vote on a 7-point scale: Far Left, Left, Center-Left, Center, Center-Right, Right, Far Right. The aggregate bias position is calculated using a weighted median rather than a mean, which makes it resistant to vote brigading: a bloc of extreme votes shifts a mean immediately, but it can only move a median if it accounts for more than half of the total vote weight.
Confidence levels: Hidden (fewer than 20 votes), Low (20-99 votes), Moderate (100-499 votes), High (500+ votes). The bias label is not shown publicly until at least 20 votes have been cast.
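A minimal sketch of the weighted-median aggregation and the confidence tiers above, assuming positions are encoded as indices 0-6 on the 7-point scale (the encoding and function name are illustrative, not Web Jury's actual implementation):

```python
BIAS_SCALE = ["Far Left", "Left", "Center-Left", "Center",
              "Center-Right", "Right", "Far Right"]

def weighted_median_bias(votes):
    """votes: list of (position_index 0-6, weight). Returns (label, confidence)."""
    n = len(votes)
    if n < 20:
        return None, "Hidden"  # label withheld below 20 votes
    ordered = sorted(votes)  # order votes along the left-right axis
    half = sum(w for _, w in votes) / 2
    cum = 0.0
    for pos, w in ordered:
        cum += w
        if cum >= half:  # first position where cumulative weight crosses half
            label = BIAS_SCALE[pos]
            break
    confidence = "Low" if n < 100 else "Moderate" if n < 500 else "High"
    return label, confidence
```

Because the result is the position where cumulative weight crosses the halfway mark, a brigade of extreme votes has no effect unless it outweighs all other voters combined.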
Reviewers rate information accuracy on a 1-5 scale, which is normalized to a 0-100% score. This is particularly useful for assessing news articles, journalistic content, and educational material.
Labels: Unreliable (0-30%), Questionable (31-50%), Mixed (51-70%), Mostly Accurate (71-85%), Highly Accurate (86-100%).
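The normalization and label cutoffs can be expressed directly. The linear mapping from the 1-5 mean to 0-100% is an assumption (1 star → 0%, 5 stars → 100%); the label boundaries are taken from the list above.

```python
ACCURACY_LABELS = [(30, "Unreliable"), (50, "Questionable"), (70, "Mixed"),
                   (85, "Mostly Accurate"), (100, "Highly Accurate")]

def accuracy_score(ratings):
    """Map 1-5 community ratings to a 0-100% score and its label."""
    mean = sum(ratings) / len(ratings)
    pct = round((mean - 1) / 4 * 100)  # assumed linear scaling of the 1-5 mean
    label = next(name for cutoff, name in ACCURACY_LABELS if pct <= cutoff)
    return pct, label
```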
Maintaining the integrity of scores is central to Web Jury's mission. We employ multiple layers of protection: weighted-median bias aggregation (brigading-resistant), review spike detection, coordinated-voting detection, trust-weighted scoring, rate limiting (20 reviews per user per day), duplicate content detection, and AI-powered spam and toxicity filtering.
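Of these protections, the rate limit is the simplest to illustrate. The sketch below shows a per-user daily counter; the class name and in-memory storage are illustrative (a production system would persist counters and enforce them server-side).

```python
from collections import defaultdict
from datetime import date

DAILY_LIMIT = 20  # reviews per user per day, as stated above

class ReviewRateLimiter:
    """Minimal in-memory per-user daily review counter."""
    def __init__(self):
        self.counts = defaultdict(int)  # (user_id, day) -> reviews submitted

    def try_submit(self, user_id, day=None):
        key = (user_id, day or date.today())
        if self.counts[key] >= DAILY_LIMIT:
            return False  # limit reached; reject the review
        self.counts[key] += 1
        return True
```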
Dimension ratings allow reviewers to rate specific aspects of an entity beyond the overall score. Dimensions vary by entity type — for example, a YouTube video might have dimensions for Production Quality, Entertainment Value, and Educational Value, while a news article might have Objectivity, Depth, and Source Quality.
Crowd tags are community-generated labels that describe an entity. Users can propose and vote on tags. Tags appear publicly once they reach a democratic voting threshold, providing quick at-a-glance context about an entity.
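The "democratic voting threshold" for tags might look like the following sketch. The net-vote rule and the `TAG_THRESHOLD` value are assumptions for illustration; Web Jury does not document the exact criterion.

```python
TAG_THRESHOLD = 10  # assumed net-vote threshold; the actual value is not published

def visible_tags(tag_votes):
    """tag_votes: {tag: (upvotes, downvotes)}.
    A tag goes public once its net vote count reaches the threshold."""
    return sorted(tag for tag, (up, down) in tag_votes.items()
                  if up - down >= TAG_THRESHOLD)
```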
Users build trust scores based on their review history, account age, verification status, and community feedback (helpful/unhelpful votes on their reviews). Higher-trust users' reviews carry more weight in aggregate calculations, creating a meritocratic system.
Verified creators and entity owners can claim their profiles and respond to reviews. Verified responses are highlighted but do not affect the aggregate trust score or bias rating.