[D] How much are you using LLMs to summarize/read papers now?
Until early 2025, I found LLMs pretty bad at summarizing research papers. They would miss key contributions, hallucinate details, or give generic overviews that didn’t really capture what mattered. So I mostly avoided using them for paper reading.
However, models have improved significantly since then, and I’m reconsidering. Experimenting again recently, the quality feels noticeably better, especially for getting a quick gist before deciding whether to deep-read something.
Curious where everyone else stands:
- Do you use LLMs (ChatGPT, Claude, Gemini, etc.) to summarize or help you read papers?
- If so, how? Quick triage, detailed summaries, Q&A about specific sections, etc.?
- Do you trust the output enough to skip reading sections, or do you always verify?
- Any particular models or setups that work well for this?
submitted by /u/kjunhot