[D] Two college students built a prototype that tries to detect contradictions between research papers — curious if this would actually be useful

Hi everyone,

We’re two college students who spend way too much time reading papers for projects, and we kept running into the same frustrating situation: sometimes two papers say completely opposite things, but unless you happen to read both, you’d never notice.

So we started building a small experiment to see if this could be detected automatically.

The idea is pretty simple:

Instead of just indexing papers, the system reads them and extracts causal claims such as:

  • “X improves Y”
  • “X reduces Y”
  • “X enables Y”

Then it builds a graph of those relationships and checks if different papers claim opposite things.

Example:

  • Paper A: X increases Y
  • Paper B: X decreases Y

The system flags that and shows both papers side-by-side.
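To make the idea concrete, here's a minimal sketch of how that contradiction check might work, assuming claims have already been extracted as (paper, subject, relation, object) tuples. The relation-to-polarity mapping and the example triples are made up for illustration, not the project's actual code:

```python
# Map relation verbs to a direction of effect (illustrative, not exhaustive).
POLARITY = {
    "increases": +1, "improves": +1, "enables": +1,
    "decreases": -1, "reduces": -1, "inhibits": -1,
}

def find_contradictions(claims):
    """Flag pairs of papers that assert opposite effects of X on Y.

    `claims` is a list of (paper_id, subject, relation, object) tuples.
    """
    seen = {}      # (subject, object) -> list of (paper_id, polarity)
    flagged = []
    for paper, subj, rel, obj in claims:
        pol = POLARITY.get(rel)
        if pol is None:
            continue  # unknown relation: skip rather than guess
        for other_paper, other_pol in seen.get((subj, obj), []):
            if other_pol != pol:
                flagged.append((other_paper, paper, subj, obj))
        seen.setdefault((subj, obj), []).append((paper, pol))
    return flagged

claims = [
    ("Paper A", "X", "increases", "Y"),
    ("Paper B", "X", "decreases", "Y"),
    ("Paper C", "X", "improves", "Z"),
]
print(find_contradictions(claims))  # [('Paper A', 'Paper B', 'X', 'Y')]
```

The hard part in practice isn't this check; it's getting the triples right in the first place (see the extraction issues below).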

We recently ran it on one professor’s publication list (about 50 papers), and the graph it produced was actually pretty interesting. It surfaced a couple of conflicting findings across studies that we probably wouldn’t have noticed just by reading abstracts.

But it’s definitely still a rough prototype. Some issues we’ve noticed:

  • claim extraction sometimes drops conditions or qualifiers from sentences
  • the system occasionally proposes odd hypotheses
  • domain filtering still needs improvement

Tech stack is pretty simple:

  • Python / FastAPI backend
  • React frontend
  • Neo4j graph database
  • OpenAlex for paper data
  • LLMs for extracting claims
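Since the graph lives in Neo4j, a contradiction check like the one above could plausibly be pushed down into a Cypher query. This is a hypothetical sketch: the node labels, properties, and relationship names here are invented for illustration and aren't the project's actual schema:

```python
# Hypothetical schema: (:Paper)-[:ASSERTS]->(:Claim {subject, object, polarity})
CONTRADICTION_QUERY = """
MATCH (p1:Paper)-[:ASSERTS]->(c1:Claim),
      (p2:Paper)-[:ASSERTS]->(c2:Claim)
WHERE c1.subject  = c2.subject
  AND c1.object   = c2.object
  AND c1.polarity = 'positive'
  AND c2.polarity = 'negative'
RETURN p1.title, p2.title, c1.subject, c1.object
"""

# With the official neo4j Python driver, this would be run roughly like:
#   with driver.session() as session:
#       rows = session.run(CONTRADICTION_QUERY).data()
print(CONTRADICTION_QUERY.strip().startswith("MATCH"))  # True
```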

Also being honest here — a decent portion of the project was vibe-coded while exploring the idea, so the architecture evolved as we went along.

We’d really appreciate feedback from people who actually deal with research literature regularly.

Some things we’re curious about:

  • Would automatic contradiction detection be useful in real research workflows?
  • How do you currently notice when papers disagree with each other?
  • What would make you trust (or distrust) a tool like this?

If anyone wants to check it out, here’s the prototype:

ukc-pink.vercel.app/

We’re genuinely trying to figure out whether this is something researchers would actually want, so honest criticism is very welcome.

Thanks!

Screenshots:

  • https://preview.redd.it/kcwfl7deggng1.png?width=1510&format=png&auto=webp&s=0c0c33af5640b7419ac7f7cc3e7783e6d87bbc05
  • https://preview.redd.it/jxozisdeggng1.png?width=1244&format=png&auto=webp&s=54076610f05c948abf72c28ea77cb8055b929163
  • https://preview.redd.it/lfcjb8deggng1.png?width=1276&format=png&auto=webp&s=ae74e01299de64c5e9172ab3aadf1457fae36c83
  • https://preview.redd.it/rhesw6deggng1.png?width=1316&format=png&auto=webp&s=73598312696398b09b51f55779ff21a3fe6c023d

submitted by /u/PS_2005