Reddit User Feedback on Major LLM Chat Tools
Overview: This report analyzes Reddit discussions about four popular AI chat tools – OpenAI’s ChatGPT, Anthropic’s Claude, Google’s Gemini (Bard), and open-source LLMs (e.g. LLaMA-based models). It summarizes common pain points users report for each, the features they most frequently request, unmet needs or user segments that feel underserved, and differences in perception among developers, casual users, and business users. Specific examples and quotes from Reddit threads are included to illustrate these points.
ChatGPT (OpenAI)
Common Pain Points and Limitations
- Limited context memory: A top complaint is ChatGPT’s inability to handle long conversations or large documents without forgetting earlier details. Users frequently hit the context length limit (a few thousand tokens) and must truncate or summarize information (a rough token-budgeting sketch follows this list). One user noted that “increasing the size of the context window would be far and away the biggest improvement… That’s the limit I run up against the most.” When the context is exceeded, ChatGPT forgets initial instructions or content, leading to frustrating drops in quality mid-session.
- Message caps for GPT-4: ChatGPT Plus users lament the 25-message/3-hour cap on GPT-4 usage (a limit present in 2023). Hitting this cap forces them to wait, interrupting work. Heavy users find this throttling a major pain point.
- Strict content filters (“nerfs”): Many Redditors feel ChatGPT has become overly restrictive, often refusing requests that previous versions handled. A highly upvoted post complained that “pretty much anything you ask it these days returns a ‘Sorry, can’t help you’… How did this go from the most useful tool to the equivalent of Google Assistant?” Users cite examples like ChatGPT refusing to reformat their own text (e.g. login credentials) due to hypothetical misuse. Paying subscribers argue that “some vague notion that the user may do 'bad' stuff… shouldn’t be grounds for not displaying results,” since they want the model’s output and will use it responsibly.
- Hallucinations and errors: Despite its advanced capabilities, ChatGPT can produce incorrect or fabricated information with confidence. Some users have observed this getting worse over time, suspecting the model was “dumbed down.” For instance, a user in finance said ChatGPT used to calculate metrics like NPV or IRR correctly, but after updates “I am getting so many wrong answers… it still produces wrong answers [even after correction]. I really believe it has become a lot dumber since the changes.” Such unpredictable inaccuracies erode trust for tasks requiring factual precision (a short worked NPV/IRR example follows this list, showing the kind of calculation users were checking).
- Incomplete code outputs: Developers often use ChatGPT for coding help, but they report that it sometimes omits parts of the solution or truncates long code. One user shared that ChatGPT now “omits code, produces unhelpful code, and just sucks at the thing I need it to do… It often omits so much code I don’t even know how to integrate its solution.” This forces users to ask follow-up prompts to coax out the rest, or to manually stitch together answers – a tedious process.
- Performance and uptime concerns: A perception exists that ChatGPT’s performance for individual users declined as enterprise use increased. “I think they are allocating bandwidth and processing power to businesses and peeling it away from users, which is insufferable considering what a subscription costs!” one frustrated Plus subscriber opined. Outages or slowdowns during peak times have been noted anecdotally, which can disrupt workflows.
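To make the context-limit workaround above concrete, here is a minimal sketch of how a user might budget tokens and trim older turns before continuing a long conversation. It assumes OpenAI’s open-source tiktoken tokenizer; the 4,096-token budget, the message format, and the helper names are illustrative choices, not anything specified in the Reddit threads.

```python
# Minimal sketch, assuming the open-source tiktoken tokenizer is installed.
# The 4,096-token budget and the message format are illustrative assumptions.
import tiktoken

ENCODING = tiktoken.get_encoding("cl100k_base")  # tokenizer used by GPT-3.5/GPT-4-era models
TOKEN_BUDGET = 4096                              # assumed budget for illustration


def count_tokens(text: str) -> int:
    """Number of tokens tiktoken assigns to a piece of text."""
    return len(ENCODING.encode(text))


def trim_history(messages: list[dict]) -> list[dict]:
    """Keep the system prompt plus as many of the most recent turns as fit the budget."""
    system, turns = messages[0], messages[1:]
    used = count_tokens(system["content"])
    kept: list[dict] = []
    # Walk backwards from the newest turn, keeping turns while they still fit.
    for turn in reversed(turns):
        cost = count_tokens(turn["content"])
        if used + cost > TOKEN_BUDGET:
            break
        kept.append(turn)
        used += cost
    return [system] + list(reversed(kept))
```

Users describe doing essentially this by hand, summarizing or deleting earlier turns, which is why a larger native context window tops the wish list.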
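For readers outside finance, the sketch below shows the kind of calculation the quoted user was double-checking: net present value, plus a bisection-based internal rate of return. This is a plain-Python illustration, and the cash-flow numbers are invented; nothing here comes from the Reddit thread itself.

```python
# Illustrative only: the cash flows are invented, not taken from any Reddit post.
def npv(rate: float, cashflows: list[float]) -> float:
    """Net present value: cash flows discounted back to period zero at the given rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))


def irr(cashflows: list[float], lo: float = -0.99, hi: float = 10.0, tol: float = 1e-6) -> float:
    """Internal rate of return via bisection: the discount rate at which NPV reaches zero."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid  # NPV still positive, so the break-even rate is higher
        else:
            hi = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2


flows = [-1000.0, 300.0, 400.0, 500.0]   # initial outlay followed by three annual inflows
print(round(npv(0.10, flows), 2))        # NPV at a 10% discount rate
print(round(irr(flows), 4))              # discount rate at which NPV crosses zero
```

Results like these are easy to verify independently, which is exactly how the quoted user noticed the model’s arithmetic slipping.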
Frequently Requested Features or Improvements
- Longer context window / memory: By far the most requested improvement is a larger context length. Users want to have much longer conversations or feed large documents without resets. Many suggest expanding ChatGPT’s context to match GPT-4’s 32K-token capability (currently available via the API) or beyond. As one user put it, “GPT is best with context, and when it doesn’t remember that initial context, I get frustrated… If the rumors are true about context PDFs, that would solve basically all my problems.” There is high demand for features to upload documents or link personal data so ChatGPT can remember and reference them throughout a session.
- File-handling and integration: Users frequently ask for easier ways to feed files or data into ChatGPT. In discussions, people mention wanting to “copy and paste my Google Drive and have it work” or have plugins that let ChatGPT directly fetch context from personal files. Some have tried workarounds (like PDF reader plugins or linking Google Docs) but complained about errors and limits; a do-it-yourself chunking workaround is sketched after this list. A user described their ideal plugin as one that “works like Link Reader but for personal files… choosing which parts of my drive to use in a conversation… that would solve basically every problem I have with GPT-4 currently.” In short, better native support for external knowledge (beyond the training data) is a popular request.
- Reduced throttling for paid users: Since many Plus users hit the GPT-4 message cap, they call for higher limits or an option to pay more for unlimited access. The 25-message limit is seen as arbitrary and as a hindrance to intensive use. People would prefer a usage-based model or a higher cap so that long problem-solving sessions aren’t cut short.
- “Uncensored” or custom moderation modes: A segment of users would like the ability to toggle the strictness of content filters, especially when using ChatGPT for themselves (not for public-facing content). They feel a “research” or “uncensored” mode – with warnings but not hard refusals – would let them explore more freely. As one user noted, paying customers see it as a tool: “I pay money for [it].” They want the option to get answers even on borderline queries. While OpenAI has to balance safety, these users suggest a flag or setting to relax policies in private chats.
- Improved factual accuracy and updates: Users commonly ask for more up-to-date knowledge and fewer hallucinations. ChatGPT’s knowledge cutoff (September 2021 in earlier versions) was a limitation often raised on Reddit. OpenAI has since introduced browsing and plugins, which some users leverage, but others simply request the base model be updated more frequently with new data. Reducing obvious errors – especially in domains like math and coding – is an ongoing wish. Some developers provide feedback when ChatGPT errs in hopes of model improvement.
- Better code outputs and tools: Developers have feature requests such as an improved code interpreter that doesn’t omit content, and integration with IDEs or version control. (OpenAI’s Code Interpreter plugin – now part of “Advanced Data Analysis” – was a step in this direction and received praise.) Still, users often request finer control in code generation: e.g. an option to output complete, unfiltered code even if it’s long, or mechanisms to easily fix code if the AI made an error. Basically, they want ChatGPT to behave more like a reliable coding assistant without needing multiple prompts to refine the answer.
- Persistent user profiles or memory: Another improvement some mention is letting ChatGPT remember things about the user across sessions (with consent). For example, remembering one’s writing style, or that they are a software engineer, without having to restate it every new chat. This could tie into API fine-tuning or a “profile” feature. Users manually copy important context into new chats now, so a built-in memory for personal preferences would save time (a minimal profile-reuse sketch follows this list).
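As a rough illustration of the manual workaround behind the file-handling requests above, the sketch below splits a local document into chunks small enough to paste into a chat one message at a time. The chunk size, overlap, and instruction wording are arbitrary illustrative choices, not documented limits or any plugin’s behavior.

```python
# Minimal sketch of the manual "feed your own files" workaround.
# Chunk size and overlap are rough illustrative values, not documented limits.
from pathlib import Path

CHUNK_CHARS = 8000   # rough stand-in for "a few thousand tokens"
OVERLAP = 500        # repeat a little text so adjacent chunks read continuously


def chunk_file(path: str) -> list[str]:
    """Split a UTF-8 text file into overlapping character chunks."""
    text = Path(path).read_text(encoding="utf-8")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + CHUNK_CHARS])
        start += CHUNK_CHARS - OVERLAP
    return chunks


def to_prompts(chunks: list[str]) -> list[str]:
    """Wrap each chunk in a short instruction so it can be pasted turn by turn."""
    total = len(chunks)
    return [
        f"Part {i + 1} of {total} of my document. Reply only 'OK' until the last part, "
        f"then answer questions about the whole document.\n\n{chunk}"
        for i, chunk in enumerate(chunks)
    ]
```

The repeated pasting, and the context limit it eventually runs into, is exactly why users ask for native file support instead.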
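Finally, as a sketch of the profile idea in the last item: today users approximate it by keeping a small local file of facts about themselves and pasting it into each new chat. The file name, fields, and system-message wording below are invented for illustration; this is not an OpenAI feature.

```python
# Minimal sketch of a do-it-yourself "profile": store preferences locally and
# prepend them to every new conversation. All names and fields are invented.
import json
from pathlib import Path

PROFILE_PATH = Path("chatgpt_profile.json")  # hypothetical local profile file


def load_profile() -> dict:
    """Return the saved profile, or an empty one on first use."""
    if PROFILE_PATH.exists():
        return json.loads(PROFILE_PATH.read_text(encoding="utf-8"))
    return {}


def save_profile(profile: dict) -> None:
    """Persist the profile so future chats can reuse it."""
    PROFILE_PATH.write_text(json.dumps(profile, indent=2), encoding="utf-8")


def new_conversation(first_user_message: str) -> list[dict]:
    """Build the opening messages for a chat with the profile baked into the system prompt."""
    profile = load_profile()
    facts = "; ".join(f"{k}: {v}" for k, v in profile.items()) or "no saved preferences"
    return [
        {"role": "system", "content": f"Known user preferences: {facts}"},
        {"role": "user", "content": first_user_message},
    ]


# Example: record a preference once, then reuse it in every new chat.
save_profile({"occupation": "software engineer", "style": "concise answers"})
print(new_conversation("Review this function for bugs.")[0]["content"])
```

A built-in version of this, set once and applied automatically, is essentially what the requests above amount to.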