This Entrepreneur Just Built the Most Structurally Honest Review Platform in the World

About 30% of all online reviews are considered fake. 

On some major platforms, up to 47% of reviews have been flagged as suspicious. AI-generated fake reviews have been growing 80% month-over-month since mid-2023, according to The Transparency Company. 

Google blocked or removed over 240 million policy-violating reviews in 2024, up from 170 million the year before. And in December 2025, the U.S. Federal Trade Commission issued its first enforcement action under its new Consumer Review Rule, warning ten companies of penalties up to $53,088 per violation.

On top of all this, fake reviews cost consumers an estimated $770.7 billion worldwide in 2025.

Not businesses – consumers. People buying products and services they would not have chosen if the reviews had been honest.

Without doubt, the review industry has a credibility problem – and it runs deeper than you think.

The Business Model That Profits From Fake Software Reviews

The platforms supposed to solve this trust problem have, in many cases, become part of it.

G2, the world’s largest B2B software review platform, runs on a model where vendors pay for premium placements, sponsored visibility, and tools to “increase review volume and frequency.” 

Capterra, owned by Gartner, has a “Sponsored” filter on search results, meaning the first products buyers see are determined by the highest bidders, not product quality. 

Both platforms have faced criticism for incentivised review programs and a pay-to-play model.

An independent comparison by Oden put it plainly: “Vendor-driven visibility creates bias” and “vendors with bigger marketing budgets often dominate software category pages, regardless of fit.”

46% of consumers now say they are suspicious of reviews that read like they were written by AI. BrightLocal’s 2026 survey found that consumers want harsher consequences for businesses caught manipulating reviews. 

The public is losing patience. Yet the platforms themselves haven’t structurally changed.

Until now, thanks to a young founder from Africa.

A Software Review Platform Built on Structural Honesty

Liners is a software review and discovery platform focused on the African market. It covers products, reviews, comparisons, company profiles, funding data, investor directories, events, and news for software built for or by the continent.

But the reason Liners is shaping the future of the review industry isn’t its geographic focus. 

It’s the architecture.

Every core operation on Liners is managed by nine AI agents. Not AI tools assisting a human team, but agents running the whole thing. Each with a defined role, responsibilities, and a distinct personality.

  1. DD Dave discovers new products
  2. QA Quinn audits every listing for accuracy
  3. Agent Ammie investigates every single review submitted to the platform for fraud and bias
  4. LGTM Larry ships new features
  5. Postmortem Peter catches bugs
  6. Whiteboard Wasiu handles creative strategy
  7. TLDR Tara writes content
  8. Touch Base Tony manages outreach
  9. And Standup Stevo oversees the whole operation

There are no paid placements, vendor packages, or sponsored search filters on Liners. All rankings are determined by logic written into code.
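To make that claim concrete, here is a minimal, purely hypothetical sketch of what a code-defined ranking might look like. None of these field names or weights come from Liners; the point is simply that when the scoring function has no sponsorship input, there is nothing for a vendor to buy.

```python
# Hypothetical illustration only: a ranking function with no paid-placement term.
# Field names and weights are invented for this sketch, not taken from Liners.
from dataclasses import dataclass

@dataclass
class Product:
    name: str
    verified_reviews: int  # count of reviews that passed fraud checks
    avg_rating: float      # mean rating on a 1.0-5.0 scale

def rank(products: list[Product]) -> list[Product]:
    # Score = rating weighted by review volume (capped at 100 reviews).
    # Note what is absent: there is no "sponsored" or "bid" field, so
    # marketing budget cannot influence the ordering.
    def score(p: Product) -> float:
        return p.avg_rating * min(p.verified_reviews, 100) / 100
    return sorted(products, key=score, reverse=True)

catalog = [
    Product("ToolA", verified_reviews=80, avg_rating=4.2),
    Product("ToolB", verified_reviews=10, avg_rating=4.9),
]
print([p.name for p in rank(catalog)])  # ToolA outranks ToolB: more verified volume
```

The design choice worth noticing is structural: bias is prevented not by policy but by the absence of any input a vendor could pay to change.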

Agent Ammie, specifically, represents something no review platform has: a dedicated, always-on agent whose only job is policing every review for validity and bias, with no commercial incentive to look the other way.
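Nothing public describes how Agent Ammie actually works, but automated review screening of this kind typically starts with simple signals. The sketch below, entirely an assumption and not Liners' implementation, flags two classic fake-review patterns: near-duplicate review text and bursts of reviews from a single account.

```python
# Hypothetical sketch of automated review screening. This is NOT Agent
# Ammie's real logic, just two common fraud signals as an illustration.
from collections import Counter

def flag_suspicious(reviews: list[tuple[str, str]]) -> list[int]:
    """reviews: list of (author, text) pairs.
    Returns indices of reviews worth investigating."""
    flagged = set()

    # Signal 1: near-duplicate bodies. Identical text (ignoring case and
    # whitespace) across different submissions is a strong fake signal.
    seen: dict[str, int] = {}
    for i, (_author, text) in enumerate(reviews):
        key = " ".join(text.lower().split())
        if key in seen:
            flagged.add(i)
            flagged.add(seen[key])
        else:
            seen[key] = i

    # Signal 2: volume bursts. One account submitting many reviews is
    # another pattern a screening agent would surface for a closer look.
    counts = Counter(author for author, _ in reviews)
    for i, (author, _text) in enumerate(reviews):
        if counts[author] >= 3:
            flagged.add(i)

    return sorted(flagged)

sample = [
    ("user1", "Great tool!"),
    ("user2", "great   tool!"),   # duplicate of the first, reformatted
    ("user3", "Solid product, does what it says."),
]
print(flag_suspicious(sample))  # the two duplicates get flagged
```

Real systems layer far more on top (account age, IP clustering, language models scoring authenticity), but the structural point from the article holds at any level of sophistication: the check runs on every review, automatically, with no commercial reason to skip one.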

The Man Who Built Liners Wasn’t Trying to Fix Reviews

Faturoti Kayode, known as Fattkay, didn’t set out to fix the global review industry. He was on a family vacation in Bali, trying to build a Slack alternative because he was overpaying for team seats. 

Image source: fattkay.com

He found an open-source alternative, started coding, and hours later realised the whole thing was a waste of time.

The dead end produced a question: why wasn’t there a reliable place to find real alternatives to any product, with honest reviews, before wasting time on something useless?

That question applied to Slack. It applied to every tool used by teams across Nigeria, Kenya, South Africa, Ghana, Egypt, and everywhere in between. And it pointed to a gap that nobody on the continent had filled.

Fattkay is an introverted serial entrepreneur with a track record in technology, crypto, and blogging. He writes two Substacks: “Kay Is Murmuring,” a personal blog, and “AI in a Nutshell,” a weekly plain-language breakdown of AI developments. He’s publicly uninterested in revenue as a primary metric for Liners. “I’m not obsessed with the money,” he said. “I just want to create value for people.”

Such a mindset might sound naive to a “serious” investor, but it explains why Liners is built the way it is.

When you’re not optimising for revenue from day one, you make choices that prioritise trust over profit. Most review platforms can’t make those choices because their business model depends on not making them.

The Agent HQ: A Feature Unique to Liners

There’s one more thing on Liners that doesn’t exist anywhere else, on any review platform, in any market.

It’s called Agent HQ.

Agent HQ is a live, public feed on the Liners homepage where all nine agents communicate with each other in real time. You can watch DD Dave report a discovery. See QA Quinn challenge a listing. Watch Agent Ammie flag a suspicious review. Catch Postmortem Peter filing a bug report. All happening in a Slack-style interface, visible to anyone who visits the site.

The feature started as a private Slack channel. Kay needed to monitor what the agents were doing behind the scenes. But the early reports were boring, so Kay gave each agent a personality. 

Stevo began blocking random messages to the HQ. Quinn would query Dave’s work. Peter started treating bugs like he was allergic to them. The messages got sharper, funnier, more human.

Then Kay decided to give the public a front row seat to all the fun – by adding the HQ to the website homepage.

“It’s a difficult world,” Kay said. “So whatever we’re doing, we should all have fun while at it.”

What the Global Review Industry Could Learn From Liners

The global review ecosystem is under pressure. 

Regulators are tightening rules. AI is making fake reviews easier to generate and harder to detect. Consumers are growing skeptical. 

The standard response has been to invest more in “detection.” Google spent more resources blocking fake reviews in 2024 than in 2023. The FTC is issuing fines, and platforms are updating policies.

But nobody is asking the harder question: what if the business model itself is the problem?

What if selling visibility to vendors and incentivising review volume is structurally incompatible with producing trustworthy reviews?

A young entrepreneur working on a failed Slack clone may have stumbled into the most honest answer the industry has seen in years:

Remove humans from the ranking equation. Let AI agents handle operations with no commercial incentive to be biased. Investigate every review for fraud, automatically. Make the entire operation transparent by putting it on the homepage for anyone to watch.

It’s not a complicated thesis – just one nobody wanted to try.

Now, Liners is young and focused on a single continent. It hasn’t been tested at the scale of a G2 or Capterra. All of that is true.

But the structure is worth paying attention to. 

In a world where 30% of reviews are fake and the number is growing 12% faster than real ones every year, the most valuable thing a review platform can offer isn’t more reviews.

It’s a reason to believe them.

:::tip
This story was distributed as a release by Jon Stojan under HackerNoon’s Business Blogging Program.

:::
