[D] Two college students built a prototype that tries to detect contradictions between research papers — curious if this would actually be useful
Hi everyone,

We're two college students who spend way too much time reading papers for projects, and we kept running into the same frustrating situation: sometimes two papers say completely opposite things, but unless you happen to read both, you'd never notice. So we started building a small experiment to see if this could be detected automatically.

The idea is pretty simple: instead of just indexing papers, the system reads them and extracts causal claims like "X increases Y".
Then it builds a graph of those relationships and checks whether different papers claim opposite things, e.g. one paper claiming that X increases Y while another claims it decreases Y.
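To make that concrete, here's a minimal sketch of the graph-and-flag step. The claim format, function names, and example papers are all our own illustration for this post, not the actual implementation:

```python
from collections import defaultdict

def find_contradictions(claims):
    """Group claims by (cause, effect) edge and flag edges where
    different papers assert opposite directions.

    Each claim is a tuple (cause, effect, direction, paper_id),
    where direction is +1 ("increases") or -1 ("decreases").
    """
    edges = defaultdict(list)
    for cause, effect, direction, paper in claims:
        edges[(cause, effect)].append((direction, paper))

    conflicts = []
    for (cause, effect), entries in edges.items():
        directions = {d for d, _ in entries}
        if {+1, -1} <= directions:  # both signs claimed for the same edge
            conflicts.append((cause, effect, entries))
    return conflicts

# Hypothetical claims extracted from three papers:
claims = [
    ("caffeine", "sleep quality", -1, "paper_A"),
    ("caffeine", "sleep quality", +1, "paper_B"),
    ("exercise", "mood", +1, "paper_C"),
]

for cause, effect, entries in find_contradictions(claims):
    print(f"Conflict on {cause} -> {effect}: {entries}")
```

The real pipeline obviously has to get the claims out of free text first, which is the hard part; this only shows why the graph view makes the contradiction check itself almost trivial.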
The system flags that and shows both papers side-by-side.

We recently ran it on one professor's publication list (about 50 papers), and the graph it produced was actually pretty interesting. It surfaced a couple of conflicting findings across studies that we probably wouldn't have noticed just by reading abstracts.

But it's definitely still a rough prototype. Some issues we've noticed:

* claim extraction sometimes loses conditions in sentences
* occasionally the system proposes weird hypotheses
* domain filtering still needs improvement

Tech stack is pretty simple:
Also being honest here: a decent portion of the project was vibe-coded while exploring the idea, so the architecture evolved as we went along.

We'd really appreciate feedback from people who actually deal with research literature regularly. Some things we're curious about:

* Would automatic contradiction detection be useful in real research workflows?
* How do you currently notice when papers disagree with each other?
* What would make you trust (or distrust) a tool like this?

If anyone wants to check it out, here's the prototype:

We're genuinely trying to figure out whether this is something researchers would actually want, so honest criticism is very welcome. Thanks!

submitted by /u/PS_2005