Reddit User Feedback on Major LLM Chat Tools

· 61 min read
Lark Birdy
Chief Bird Officer

Overview: This report analyzes Reddit discussions about four popular AI chat tools – OpenAI’s ChatGPT, Anthropic’s Claude, Google’s Gemini (Bard), and open-source LLMs (e.g. LLaMA-based models). It summarizes common pain points users report for each, the features they most frequently request, unmet needs or user segments that feel underserved, and differences in perception among developers, casual users, and business users. Specific examples and quotes from Reddit threads are included to illustrate these points.

ChatGPT (OpenAI)

Common Pain Points and Limitations

  • Limited context memory: A top complaint is ChatGPT’s inability to handle long conversations or large documents without forgetting earlier details. Users frequently hit the context length limit (a few thousand tokens) and must truncate or summarize information. One user noted “increasing the size of the context window would be far and away the biggest improvement… That’s the limit I run up against the most”. When the context is exceeded, ChatGPT forgets initial instructions or content, leading to frustrating drops in quality mid-session. (A sketch of the token-budgeting workaround this forces on users appears after this list.)

  • Message caps for GPT-4: ChatGPT Plus users lament the 25-message/3-hour cap on GPT-4 usage (a limit present in 2023). Hitting this cap forces them to wait, interrupting work. Heavy users find this throttling a major pain point.

  • Strict content filters (“nerfs”): Many Redditors feel ChatGPT has become overly restrictive, often refusing requests that previous versions handled. A highly-upvoted post complained that “pretty much anything you ask it these days returns a ‘Sorry, can’t help you’… How did this go from the most useful tool to the equivalent of Google Assistant?” Users cite examples like ChatGPT refusing to reformat their own text (e.g. login credentials) due to hypothetical misuse. Paying subscribers argue that “some vague notion that the user may do 'bad' stuff… shouldn’t be grounds for not displaying results”, since they want the model’s output and will use it responsibly.

  • Hallucinations and errors: Despite its advanced capabilities, ChatGPT can produce incorrect or fabricated information with confidence. Some users have observed this getting worse over time, suspecting the model was “dumbed down.” For instance, a user in finance said ChatGPT used to calculate metrics like NPV or IRR correctly, but after updates “I am getting so many wrong answers… it still produces wrong answers [even after correction]. I really believe it has become a lot dumber since the changes.” Such unpredictable inaccuracies erode trust for tasks requiring factual precision.

  • Incomplete code outputs: Developers often use ChatGPT for coding help, but they report that it sometimes omits parts of the solution or truncates long code. One user shared that ChatGPT now “omits code, produces unhelpful code, and just sucks at the thing I need it to do… It often omits so much code I don’t even know how to integrate its solution.” This forces users to ask follow-up prompts to coax out the rest, or to manually stitch together answers – a tedious process.

  • Performance and uptime concerns: A perception exists that ChatGPT’s performance for individual users declined as enterprise use increased. “I think they are allocating bandwidth and processing power to businesses and peeling it away from users, which is insufferable considering what a subscription costs!” one frustrated Plus subscriber opined. Outages or slowdowns during peak times have been noted anecdotally, which can disrupt workflows.
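
To make the context-limit complaint concrete: the usual workaround is to count tokens client-side and trim the oldest turns before each request. A minimal sketch of that pattern, assuming the tiktoken tokenizer; the 8,192-token budget is illustrative, not an official limit:

```python
# Trim chat history to fit a fixed token budget (assumes tiktoken is installed).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-4-era models
CONTEXT_BUDGET = 8192                       # hypothetical limit, leaving room for the reply

def trim_history(messages: list[str]) -> list[str]:
    """Drop the oldest turns until the conversation fits the budget."""
    while len(messages) > 1 and sum(len(enc.encode(m)) for m in messages) > CONTEXT_BUDGET:
        messages.pop(0)  # the model "forgets" the earliest turn, as users complain
    return messages
```

This is exactly the manual bookkeeping users are asking OpenAI to make unnecessary.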

Frequently Requested Features or Improvements

  • Longer context window / memory: By far the most requested improvement is a larger context length. Users want to have much longer conversations or feed large documents without resets. Many suggest expanding ChatGPT’s context to match GPT-4’s 32K token capability (currently available via API) or beyond. As one user put it, “GPT is best with context, and when it doesn’t remember that initial context, I get frustrated… If the rumors are true about context PDFs, that would solve basically all my problems.” There is high demand for features to upload documents or link personal data so ChatGPT can remember and reference them throughout a session.

  • File-handling and integration: Users frequently ask for easier ways to feed files or data into ChatGPT. In discussions, people mention wanting to “copy and paste my Google Drive and have it work” or have plugins that let ChatGPT directly fetch context from personal files. Some have tried workarounds (like PDF reader plugins or linking Google Docs), but complained about errors and limits. A user described their ideal plugin as one that “works like Link Reader but for personal files… choosing which parts of my drive to use in a conversation… that would solve basically every problem I have with GPT-4 currently.” In short, better native support for external knowledge (beyond the training data) is a popular request.

  • Reduced throttling for paid users: Since many Plus users hit the GPT-4 message cap, they call for higher limits or an option to pay more for unlimited access. The 25-message limit is seen as arbitrary and hindering intensive use. People would prefer a usage-based model or higher cap so that long problem-solving sessions aren’t cut short.

  • “Uncensored” or custom moderation modes: A segment of users would like the ability to toggle the strictness of content filters, especially when using ChatGPT for themselves (not public-facing content). They feel a “research” or “uncensored” mode – with warnings but not hard refusals – would let them explore more freely. As one user noted, paying customers see it as a tool and believe “I pay money for [it].” They want the option to get answers even on borderline queries. While OpenAI has to balance safety, these users suggest a flag or setting to relax policies in private chats.

  • Improved factual accuracy and updates: Users commonly ask for more up-to-date knowledge and fewer hallucinations. ChatGPT’s knowledge cutoff (September 2021 in earlier versions) was a limitation often raised on Reddit. OpenAI has since introduced browsing and plugins, which some users leverage, but others simply request the base model be updated more frequently with new data. Reducing obvious errors – especially in domains like math and coding – is an ongoing wish. Some developers provide feedback when ChatGPT errs in hopes of model improvement.

  • Better code outputs and tools: Developers have feature requests such as an improved code interpreter that doesn’t omit content, and integration with IDEs or version control. (OpenAI’s Code Interpreter plugin – now part of “Advanced Data Analysis” – was a step in this direction and received praise.) Still, users often request finer control in code generation: e.g. an option to output complete, unfiltered code even if it’s long, or mechanisms to easily fix code if the AI made an error. Basically, they want ChatGPT to behave more like a reliable coding assistant without needing multiple prompts to refine the answer.

  • Persistent user profiles or memory: Another improvement some mention is letting ChatGPT remember things about the user across sessions (with consent). For example, remembering one’s writing style, or that they are a software engineer, without having to restate it every new chat. This could tie into API fine-tuning or a “profile” feature. Users manually copy important context into new chats now, so a built-in memory for personal preferences would save time. (A sketch of this manual workaround follows.)
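
The sketch below assumes the openai Python SDK; the profile text and model name are placeholders, not an official memory feature:

```python
# Re-inject a locally stored "profile" at the start of every new chat.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
PROFILE = "The user is a software engineer who prefers concise answers with code."

def ask(question: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[
            {"role": "system", "content": PROFILE},  # restated by hand every session
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content
```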

Underserved Needs or User Segments

  • Researchers and students with long documents: People who want ChatGPT to analyze lengthy research papers, books, or large datasets feel underserved. The current limits force them to chop up text or settle for summaries. This segment would benefit greatly from larger context windows or features to handle long documents (as evidenced by numerous posts about trying to get around token limits).

  • Users seeking creative storytelling or role-play beyond limits: While ChatGPT is often used for creative writing, some storytellers feel constrained by the model forgetting early plot points in a long story or refusing adult/horror content. They turn to alternative models or hacks to continue their narratives. These creative users would be better served by a version of ChatGPT with longer memory and a bit more flexibility on fictional violence or mature themes (within reason). As one fiction writer noted, when the AI loses track of the story, “I have to remind it of the exact format or context… I get frustrated that it was great two prompts ago, but now I have to catch the AI up.”

  • Power users and domain experts: Professionals in specialized fields (finance, engineering, medicine) sometimes find ChatGPT’s answers lacking depth or accuracy in their domain, especially if the questions involve recent developments. These users desire more reliable expert knowledge. Some have tried fine-tuning via the API or custom GPTs. Those who cannot fine-tune would appreciate domain-specific versions of ChatGPT or plugins that embed trusted databases. In its default form, ChatGPT may underserve users who need highly accurate, field-specific information (they often have to double-check its work).

  • Users needing uncensored or edge-case content: A minority of users (hackers testing security scenarios, writers of extreme fiction, etc.) find ChatGPT’s content restrictions too limiting for their needs. They are currently underserved by the official product (since it explicitly avoids certain content). These users often experiment with jailbreaking prompts or use open-source models to get the responses they want. This is a deliberate gap for OpenAI (to maintain safety), but it means such users look elsewhere.

  • Privacy-conscious individuals and enterprises: Some users (especially in corporate settings) are uncomfortable sending sensitive data to ChatGPT due to privacy concerns. OpenAI has policies to not use API data for training, but the ChatGPT web UI historically did not offer such guarantees until an opt-out feature was added. Companies that handle confidential data (legal, healthcare, etc.) often feel they cannot fully utilize ChatGPT, leaving their needs underserved unless they build self-hosted solutions. For example, a Redditor mentioned their company moving to a local LLM for privacy reasons. Until on-prem or private instances of ChatGPT are available, this segment remains cautious or uses smaller specialist vendors.

Differences in Perception by User Type

  • Developers/Technical Users: Developers tend to be both some of ChatGPT’s biggest advocates and harshest critics. They love its ability to explain code, generate boilerplate, and assist in debugging. However, they keenly feel its limitations in longer context and code accuracy. As one dev complained, ChatGPT started “producing unhelpful code” and omitting important parts, which “pisses me off… I don’t want to have to tell it ‘don’t be lazy’ – I just want the full result”. Devs often notice even subtle changes in quality after model updates and have been very vocal on Reddit about perceived “nerfs” or declines in coding capability. They also push the limits (building complex prompts, chaining tools), so they crave features like expanded context, fewer message caps, and better integration with coding tools. In summary, developers value ChatGPT for speeding up routine tasks but are quick to point out errors in logic or code – they view it as a junior assistant that still needs oversight.

  • Casual/Everyday Users: More casual users – those asking for general knowledge, advice, or fun – often marvel at ChatGPT’s capabilities, but they have their own gripes. A common casual-user frustration is when ChatGPT refuses a request that seems innocuous to them (likely tripping a policy rule). The original poster in one thread exemplified this, being “so pissed off when I write a prompt which it shouldn’t have a problem with and it refuses now”. Casual users may also run into the knowledge cutoff (finding the bot can’t handle very current events unless explicitly updated) and sometimes notice when ChatGPT gives an obviously wrong answer. Unlike developers, they might not always double-check the AI, which can lead to disappointment if they act on a mistake. On the positive side, many casual users find ChatGPT Plus’s faster responses and GPT-4’s improved output worth $20/month – unless the “refusal” issue or other limits sour the experience. They generally want a helpful, all-purpose assistant and can get frustrated when ChatGPT replies with policy statements or needs a complex prompt to get a simple answer.

  • Business/Professional Users: Business users often approach ChatGPT from a productivity and reliability standpoint. They appreciate fast drafting of emails, summaries of documents, or generation of ideas. However, they are concerned about data security, consistency, and integration into workflows. On Reddit, professionals have discussed wanting ChatGPT in tools like Outlook, Google Docs, or as an API in their internal systems. Some have noted that as OpenAI pivots to serve enterprise clients, the product’s focus seems to shift: there’s a feeling that the free or individual user experience degraded slightly (e.g. slower or “less smart”) as the company scaled up to serve larger clients. Whether or not that’s true, it highlights a perception: business users want reliability and priority service, and individual users worry they’re now second-class. Additionally, professionals need correct outputs – a flashy but wrong answer can be worse than no answer. Thus, this segment is sensitive to accuracy. For them, features like longer context (for reading contracts, analyzing codebases) and guaranteed uptime are crucial. They are likely to pay more for premium service levels, provided their compliance and privacy requirements are met. Some enterprises even explore on-premise deployments or using OpenAI’s API with strict data handling rules to satisfy their IT policies.


Claude (Anthropic)

Common Pain Points and Limitations

  • Usage limits and access restrictions: Claude received praise for offering a powerful model (Claude 2) for free, but users quickly encountered usage limits (especially on the free tier). After a certain number of prompts or a large amount of text, Claude may stop and say something like “I’m sorry, I have to conclude this conversation for now. Please come back later.” This throttling frustrates users who treat Claude as an extended coding or writing partner. Even Claude Pro (paid) users are “not guaranteed unlimited time”, as one user noted; hitting the quota still produces the “come back later” message. Additionally, for a long time Claude was officially geo-restricted (initially only available in the US/UK). International users on Reddit had to use VPNs or third-party platforms to access it, which was an inconvenience. This made many non-US users feel left out until access widened.

  • Tendency to go off-track with very large inputs: Claude’s headline feature is its 100k-token context window, allowing extremely long prompts. However, some users have noticed that when you stuff tens of thousands of tokens into Claude, its responses can become less focused. “100k is super useful but if it doesn’t follow instructions properly and goes off track, it’s not that useful,” one user observed. This suggests that with huge contexts, Claude might drift or start rambling, requiring careful prompting to keep it on task. It’s a limitation inherent to pushing context to the extreme – the model retains a lot but sometimes “forgets” which details are most relevant, leading to minor hallucinations or off-topic tangents.

  • Inconsistent formatting or obedience to instructions: In side-by-side comparisons, some users found Claude less predictable in how it follows certain directives. For example, Claude is described as “more human-like in interactions. But it less strictly follows system messages.” This means if you give it a fixed format to follow or a very strict persona, Claude might deviate more than ChatGPT would. Developers who rely on deterministic outputs (like JSON formats or specific styles) sometimes get frustrated if Claude introduces extra commentary or doesn’t rigidly adhere to the template.

  • Content restrictions and refusals: While not as frequently criticized as ChatGPT’s, Claude’s safety filters do come up. Anthropic designed Claude with a heavy emphasis on constitutional AI (having the AI itself follow ethical guidelines). Users generally find Claude willing to discuss a broad range of topics, but there are instances where Claude refuses requests that ChatGPT might allow. For example, one Redditor noted “ChatGPT has less moral restrictions… it will explain which gas masks are better for which conditions while Claude will refuse”. This suggests Claude might be stricter about certain “sensitive” advice (perhaps treating it as potentially dangerous guidance). Another user tried a playful role-play scenario (“pretend you were abducted by aliens”) which Claude refused, whereas Gemini and ChatGPT would engage. So, Claude does have filters that can occasionally surprise users expecting it to be more permissive.

  • Lack of multimodal capabilities: Unlike ChatGPT (which, by late 2023, gained image understanding with GPT-4 Vision), Claude is currently text-only. Reddit users note that Claude cannot analyze images or directly browse the web on its own. This isn’t exactly a “pain point” (Anthropic never advertised those features), but it is a limitation relative to competitors. Users who want an AI to interpret a diagram or screenshot cannot use Claude for that, whereas ChatGPT or Gemini might handle it. Similarly, any retrieval of current information requires using Claude via a third-party tool (e.g., Poe or search engine integration), since Claude doesn’t have an official browsing mode at this time.

  • Minor stability issues: A few users have reported Claude occasionally being repetitive or getting stuck in loops for certain prompts (though this is less common than with some smaller models). Also, earlier versions of Claude sometimes ended responses prematurely or took a long time with large outputs, which can be seen as minor annoyances, though Claude 2 improved on speed.

Frequently Requested Features or Improvements

  • Higher or adjustable usage limits: Claude enthusiasts on Reddit often ask Anthropic to raise the conversation limits. They would like to use the 100k context to its fullest without hitting an artificial stop. Some suggest that even paid Claude Pro should allow significantly more tokens per day. Others floated the idea of an optional “100k extended mode” – e.g., “Claude should have a 100k context mode with double the usage limits” – where perhaps a subscription could offer expanded access for heavy users. In essence, there’s demand for a plan that competes with ChatGPT’s unlimited (or high-cap) usage for subscribers.

  • Better long-context navigation: While having 100k tokens is groundbreaking, users want Claude to better utilize that context. One improvement would be refining how Claude prioritizes information so it stays on track. Anthropic could work on the model’s prompt adherence when the prompt is huge. Reddit discussions suggest techniques like allowing the user to “pin” certain instructions so they don’t get diluted in a large context. Any tools to help segment or summarize parts of the input could also help Claude handle large inputs more coherently. In short, users love the possibility of feeding an entire book to Claude – they just want it to stay sharp throughout. (A prompt-level sketch of this “pinning” idea appears after this list.)

  • Plugins or web browsing: Many ChatGPT users have gotten used to plugins (for example, browsing, code execution, etc.) and they express interest in Claude having similar extensibility. A common request is for Claude to have an official web search/browsing function, so that it can fetch up-to-date information on demand. Currently, Claude’s knowledge is mostly static (training data up to early 2023, with some updates). If Claude could query the web, it would alleviate that limitation. Likewise, a plugin system where Claude could use third-party tools (like calculators or database connectors) could expand its utility for power users. This remains a feature Claude lacks, and Reddit users often mention how ChatGPT’s ecosystem of plugins gives it an edge in certain tasks.

  • Multimodal input (images or audio): Some users have also wondered if Claude will support image inputs or generate images. Google’s Gemini and OpenAI’s GPT-4 have multimodal capabilities, so to stay competitive, users expect Anthropic to explore this. A frequent request is: “Can I upload a PDF or an image for Claude to analyze?” Currently the answer is no (aside from workarounds like converting images to text elsewhere). Even just allowing image-to-text (OCR and description) would satisfy many who want a one-stop assistant. This is on the wish list, though Anthropic hasn’t announced anything similar as of early 2025.

  • Fine-tuning or customization: Advanced users and businesses sometimes ask if they can fine-tune Claude on their own data or get custom versions. OpenAI offers fine-tuning for some models (not GPT-4 yet, but for GPT-3.5). Anthropic released a fine-tuning interface for Claude 1.3 earlier, but it’s not widely advertised for Claude 2. Reddit users have inquired about being able to train Claude on company knowledge or personal writing style. An easier way to do this (besides prompt injections each time) would be very welcome, as it could turn Claude into a personalized assistant that remembers a specific knowledge base or persona.

  • Wider availability: Non-US users frequently request that Claude be officially launched in their countries. Posts from Canada, Europe, India, etc., ask when they can use Claude’s website without a VPN or when the Claude API will be open more broadly. Anthropic has been cautious, but demand is global – likely an improvement in the eyes of many would be simply “let more of us use it.” The company’s gradual expansion of access has partially addressed this.
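
To illustrate the “pinning” idea from the long-context item above: one community workaround is simply to repeat the key instructions after the large document so they are not diluted. A hedged sketch using Anthropic’s Messages API; the model name and instruction text are illustrative:

```python
# Repeat critical instructions before and after a very large input.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
INSTRUCTIONS = "Summarize only the design decisions; ignore boilerplate."

def ask_with_pinned_instructions(document: str) -> str:
    prompt = (
        f"{INSTRUCTIONS}\n\n<document>\n{document}\n</document>\n\n"
        f"Reminder of the task: {INSTRUCTIONS}"
    )
    resp = client.messages.create(
        model="claude-2.1",  # placeholder long-context model
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.content[0].text
```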

Underserved Needs or User Segments

  • International user base: As noted, for a long time Claude’s primary user base was limited by geography. This left many would-be users underserved. For example, a developer in Germany interested in Claude’s 100k context had no official way to use it. While workarounds exist (third-party platforms, or VPN + phone verification in a supported country), these barriers meant casual international users were effectively locked out. By contrast, ChatGPT is available in most countries. So, non-US English speakers and especially non-English speakers have been underserved by Claude’s limited rollout. They may still rely on ChatGPT or local models simply due to access issues.

  • Users needing strict output formatting: As mentioned, Claude sometimes takes liberties in responses. Users who need highly structured outputs (like JSON for an application, or an answer following a precise format) might find Claude less reliable for that than ChatGPT. These users – often developers integrating the AI into a system – are a segment that could be better served if Claude allowed a “strict mode” or improved its adherence to instructions. They currently might avoid Claude for such tasks, sticking with models known to follow formats more rigidly.

  • Casual Q&A users (vs. creative users): Claude is often praised for creative tasks – it produces flowing, human-like prose and thoughtful essays. However, some users on Reddit noted that for straightforward question-answering or factual queries, Claude sometimes gives verbose answers where brevity would do. The user who compared ChatGPT and Claude said ChatGPT tends to be succinct and bullet-pointed, whereas Claude gives more narrative by default. Users who just want a quick factual answer (like “What’s the capital of X and its population?”) might feel Claude is a bit indirect. These users are better served by something like an accurate search or a terse model. Claude can do it if asked, but its style may not match the expectation of a terse Q&A, meaning this segment could slip to other tools (like Bing Chat or Google).

  • Safety-critical users: Conversely, some users who require very careful adherence to safety (e.g. educators using AI with students, or enterprise customers who want zero risk of rogue outputs) might consider Claude’s alignment a plus, but since ChatGPT is also quite aligned and has more enterprise features, those users might not specifically choose Claude. It’s a small segment, but one could argue Claude hasn’t distinctly captured it yet. They may be underserved in that they don’t have an easy way to increase Claude’s safeguards or see its “chain of thought” (which Anthropic has internally via the constitutional AI approach, but end-users don’t directly interface with that aside from noticing Claude’s generally polite tone).

  • Non-English speakers (quality of output): Claude was trained primarily on English (like most big LLMs). Some users have tested it in other languages; it can respond in many, but the quality may vary. If, say, a user wants a very nuanced answer in French or Hindi, it’s possible Claude’s abilities are not as fine-tuned there as ChatGPT’s (GPT-4 has demonstrated strong multilingual performance, often higher than other models in certain benchmarks). Users who primarily converse in languages other than English might find Claude’s fluency or accuracy slightly weaker. This segment is somewhat underserved simply because Anthropic hasn’t highlighted multilingual training as a priority publicly.

Differences in Perception by User Type

  • Developers/Tech Users: Developers on Reddit have increasingly lauded Claude, especially Claude 2 / Claude 3.5, for coding tasks. The perception shift in late 2024 was notable: many developers started preferring Claude over ChatGPT for programming assistance. They cite “amazing at coding” performance and the ability to handle larger codebases in one go. For example, one user wrote “Claude Sonnet 3.5 is better to work with code (analyze, generate) [than ChatGPT].” Developers appreciate that Claude can take a large chunk of project code or logs and produce coherent analyses or improvements, thanks to its huge context. However, they also notice its quirks – like sometimes injecting more conversational fluff or not following a spec to the letter. On balance, many devs keep both ChatGPT and Claude at hand: one for rigorous step-by-step logic (ChatGPT) and one for broad context and empathetic understanding (Claude). It’s telling that a commenter said “If I had to choose one I would choose Claude” after comparing the two daily. This indicates a very positive perception among advanced users, especially for use cases like brainstorming, code review, or architectural suggestions. The only common gripe from devs is hitting Claude’s usage limits when they try to push it hard (e.g. feeding a 50K-token prompt to analyze an entire repository). In summary, developers view Claude as an extremely powerful tool – in some cases superior to ChatGPT – held back only by availability and some unpredictability in formatting.

  • Casual/Non-technical Users: Casual users who have tried Claude often comment on how friendly and articulate it is. Claude’s style tends to be conversational, polite, and detailed. A new user comparing it to ChatGPT observed that “Claude is more empathetic, and follows a conversational tone… ChatGPT defaults to bullet points too often”. This human-like warmth makes Claude appealing to people using it for creative writing, advice, or just chatting for information. Some even personify Claude as having a “personality” that is compassionate. Casual users also like that Claude’s free version allowed access to an equivalent of GPT-4-level intelligence without a subscription (at least up to the rate limits). On the flip side, casual users do bump into Claude’s refusals on certain topics and might not understand why (since Claude will phrase it apologetically but firmly). If a casual user asked something borderline and got a refusal from Claude, they might perceive it as less capable or too constrained, not realizing it’s a policy stance. Another aspect is that Claude lacks the name recognition – many casual users might not even know to try it unless they’re tapped into AI communities. Those who do try generally comment that it feels “like talking to a human” in a good way. They tend to be very satisfied with Claude’s ability to handle open-ended or personal questions. So, casual user perception is largely positive regarding Claude’s output quality and tone, with some confusion or frustration around its availability (having to use it on a specific app or region) and occasional “can’t do that” moments.

  • Business/Professional Users: Business perceptions of Claude are a bit harder to gauge from public Reddit (since fewer enterprise users post in detail), but a few trends emerge. First, Anthropic has positioned Claude as more privacy-focused and willing to sign enterprise agreements – this appeals to companies worried about data with OpenAI. Indeed, some Reddit discussions mention Claude in the context of tools like Slack or Notion, where it’s integrated as an assistant. Professionals who have used those integrations might not even realize Claude is the engine, but when they do, they compare it favorably in terms of writing style and the ability to digest large corporate documents. For example, a team might feed a long quarterly report to Claude and get a decent summary – something ChatGPT’s smaller context would struggle with. That said, business users also notice the lack of certain ecosystem features; for instance, OpenAI offers system message control, function calling, etc., in their API, which Anthropic has more limited support for. A developer working on a business solution remarked that “Claude is more steerable in conversations, whereas ChatGPT tends to be more rigid… [but] ChatGPT has web access which can be very helpful.” The implication is that for research or data lookup tasks a business user might need (like competitive intelligence), ChatGPT can directly fetch info, whereas Claude would require a separate step. Overall, business users seem to view Claude as a very competent AI – in some cases better for internal analytic tasks – but perhaps not as feature-rich yet for integration. Cost is another factor: Claude’s API pricing and terms are not as public as OpenAI’s, and some startups on Reddit have mentioned uncertainty about Claude’s pricing or stability. In summary, professionals respect Claude’s capabilities (especially its reliability in following high-level instructions and summarizing large inputs), but they keep an eye on how it evolves in terms of integration, support, and global availability before fully committing to it over the more established ChatGPT.


Google Gemini (Bard)

Common Pain Points and Limitations

  • Inaccurate or “dumb” responses: A flood of Reddit feedback appeared when Google launched its Gemini-powered Bard upgrade, much of it negative. Users complained that Gemini underperformed in basic QA compared to ChatGPT. One blunt assessment titled “100% Honest Take on Google Gemini” stated: “It’s a broken, inaccurate LLM chatbot”. Another frustrated user asked: “How is Gemini still so crap? The number of times I ask Gemini for something and it either gives me incorrect answers or incomplete answers is ridiculous”. They compared it side-by-side with ChatGPT-4 and found ChatGPT gave “perfect, correct, efficient answer in one go,” whereas Gemini rambled and required multiple prompts to get to a half-satisfactory answer. In essence, early users felt that Gemini frequently hallucinated or missed the point of questions, requiring excessive prompt effort to extract correct information. This inconsistency in quality was a major letdown given the hype around Gemini.

  • Excessive verbosity and fluff: Many users noted that Gemini (in the form of the new Bard) tends to produce long-winded answers that don’t get to the point. As one person described, “It rambled on… 3 paragraphs of AI garbage… even then, it [only] eventually mentioned the answer buried in paragraphs of crap”. This is a stark contrast to ChatGPT, which often delivers more concise answers or bullet points when appropriate. The verbosity becomes a pain point when users have to sift through a lot of text for a simple fact. Some speculated that Google might have tuned it to be conversational or “helpful,” but overshot into too much explanation without substance.

  • Poor integration with Google’s own services: One of the selling points of Google’s AI assistant is supposed to be integration with Google’s ecosystem (Gmail, Docs, Drive, etc.). However, early user experiences were very disappointing on this front. A user vented: “Don’t even get me started on its near-complete inability to integrate with Google’s own products which is supposed to be a ‘feature’ (which it apparently doesn’t know it has).” For example, people would try asking Gemini (via Bard) to summarize a Google Doc or draft an email based on some info – features Google advertised – and the bot would respond it cannot access that data. One user on r/GooglePixel wrote: “Every time I try to use Gemini with my Google Docs or Drive, it tells me it cannot do anything with it. What is the point of even having these integration features?” This shows a significant gap between promised capabilities and actual performance, leaving users feeling that the “AI assistant” isn’t assisting much within Google’s own ecosystem.

  • Refusals and capability confusion: Users also encountered bizarre refusals or contradictions from Gemini. The same Redditor noted Gemini “refuses to do things for no reason, forgets it can do other things… The other day it told me it didn’t have access to the internet/live data. What.” This indicates that Gemini would sometimes decline tasks it should be able to do (like retrieving live info, which Bard is connected to) or make incorrect statements about its own abilities. Such experiences gave the impression of an AI that is not only less intelligent, but also less reliable or self-aware. Another user’s colorful comment: “Gemini is absolute trash. You ever have one of those moments where you just want to throw your hands up and say, ‘What were they thinking?’” encapsulates the frustration. Essentially, Gemini’s product integration and consistency issues made it feel half-baked to many early adopters.

  • Unremarkable coding abilities: While not as widely discussed as general Q&A, several users tested Gemini (Bard) on coding tasks and found it subpar. In AI forums, Gemini’s coding capabilities were usually rated below GPT-4 and even below Claude. For instance, one user stated plainly that “Claude 3.5 Sonnet is clearly better for coding than ChatGPT 4o… Gemini is absolute trash [in that context]”. The consensus was that Gemini could write simple code or explain basic algorithms, but it often stumbled on more complex problems or produced code with errors. Its lack of a broad developer toolset (e.g., it doesn’t have an equivalent of Code Interpreter or robust function calling) also meant it wasn’t a first choice for programmers. So, while not every casual user cares about code, this is a limitation for that segment.

  • Mobile device limitations: Gemini rolled out as part of Google’s Assistant on Pixel phones (branded as “Assistant with Bard”). Some Pixel users noted that using it as a voice assistant replacement had issues. It sometimes didn’t pick up voice prompts accurately or took too long to respond compared to the old Google Assistant. There were also comments about needing to opt-in and lose some classic Assistant features. This created a perception that Gemini’s integration on devices wasn’t fully ready, leaving power users of Google’s ecosystem feeling that they had to choose between a smart assistant and a functional one.

Frequently Requested Features or Improvements

  • Dramatically improved accuracy and reasoning: The number one improvement users want for Gemini is simply to be smarter and more reliable. Reddit feedback makes it clear that Google needs to close the gap in answer quality. Users expect Gemini to utilize Google’s vast information access to give factual, direct answers, not meandering or incorrect ones. So the requests (often sarcastically phrased) boil down to: make it as good as or better than GPT-4 on general knowledge and reasoning. This includes better handling of follow-up questions and complex prompts. Essentially, “fix the brain” of Gemini – leverage those purported multimodal training advantages so it stops missing obvious details. Google likely has heard this loud and clear: many posts compare specific answers where ChatGPT excelled and Gemini failed, which serves as informal bug reports for improvement.

  • Better integration & awareness of context: Users want Gemini to fulfill the promise of a seamless Google ecosystem helper. This means it should properly interface with Gmail, Calendar, Docs, Drive, etc. If a user asks “Summarize the document I opened” or “Draft a response to the last email from my boss,” the AI should do it – and do it securely. Right now, the request is that Google enable those features and make Gemini actually recognize when such a task is possible. It was advertised that Bard could connect to user content (with permission), so users are effectively demanding Google “turn on” or fix this integration. This is a key feature for business users especially. Additionally, on the web browsing front: Bard (Gemini) can search the web, but some users want it to cite sources more clearly or be more timely in incorporating breaking news. So improving the connected nature of Gemini is a frequent request.

  • Conciseness controls: Given complaints of verbosity, some users suggest a feature to toggle the response style. For example, a “brief mode” where Gemini gives a short, to-the-point answer by default, unless asked to elaborate. Conversely, maybe a “detailed mode” for those who want very thorough answers. ChatGPT implicitly allows some of this by the user prompt (“keep it brief”); with Gemini, users felt even when they didn’t ask for detail, it over-explained. So a built-in setting or just better tuning to produce concise answers when appropriate would be a welcome improvement. In essence, adjust the verbosity dial.

  • Feature parity with ChatGPT (coding, plugins, etc.): Power users on Reddit explicitly compare features. They request that Google’s Gemini/Bard offer things like a code execution sandbox (similar to ChatGPT’s Code Interpreter), the ability to upload images/PDFs for analysis (since Gemini is multimodal, users want to actually feed it custom images, not just have it describe provided ones). Another frequently mentioned feature is better memory within conversation – while Bard does have some memory of past interactions, users want it to be as good as ChatGPT at referencing earlier context, or even have persistent conversation storage like ChatGPT’s chat history that you can scroll through and revisit. Essentially, Google is being asked to catch up on all the quality-of-life features that ChatGPT Plus users have: chat history, plugin ecosystem (or at least strong third-party integrations), coding assistance, etc.

  • Mobile app and voice improvements: Many casual users requested a dedicated mobile app for Bard/Gemini (similar to the ChatGPT mobile app). Relying on a web interface or only the Pixel Assistant is limiting. An official app across iOS/Android with voice input, speaking responses (for a true assistant feel), and tight integration could greatly improve user experience. Along with that, Pixel owners want the Assistant with Bard to get faster and more functional – basically, they want the best of old Google Assistant (quick, precise actions) combined with the intelligence of Gemini. For example, things like continuing to allow “Hey Google” smart home voice commands and not just chatty responses. Google could improve the voice mode of Gemini to truly replace the legacy assistant without feature regressions.

  • Transparency and control: Some users have asked for more insight into Bard’s sources or a way to fine-tune its style. For instance, showing which Google result Bard is pulling information from (to verify accuracy) – something Bing Chat does by citing links. Also, because Bard occasionally produces wrong info, users want to be able to flag or correct it, and ideally Bard should learn from that feedback over time. Having an easy feedback mechanism (“thumbs down – this is incorrect because…”) that leads to rapid model improvement would instill confidence that Google is listening. Basically, features to make the AI more of a collaborative assistant than a black box.

Underserved Needs or User Segments

  • Users seeking a dependable personal assistant: Ironically, the group that Google targeted – people wanting a powerful personal assistant – feel most underserved by Gemini in its current form. Early adopters who switched on the new Bard-based Assistant expected an upgrade, but many felt it was a downgrade in practical terms. For example, if someone wants a voice assistant to accurately answer trivia, set reminders, control devices, and integrate info from their accounts, Gemini struggled. This left the very segment of busy professionals or gadget enthusiasts (who rely on assistants for productivity) feeling that their needs weren’t met. One user commented they’d consider paying for the Pixel’s “Assistant with Bard” “if [it] surpass[es] Google Assistant”, implying it hadn’t yet. So that segment is still waiting for a reliable, genuinely helpful AI assistant – they’ll jump on it if Gemini improves.

  • Non-native English speakers / localization: Google products usually have excellent localization, but it’s unclear if Bard/Gemini was equally strong in all languages at launch. Some international users reported that Bard’s answers in their native language were less fluent or useful, pushing them back to local competitors. If Gemini’s training data or optimization favored English, then non-English users are underserved. They might prefer ChatGPT or local models which have explicitly optimized multilingual capabilities. This is a space Google could traditionally excel in (given its translation tech), but user feedback on that is scant – likely indicating Gemini hasn’t yet wowed those communities.

  • Enterprise customers (so far): Large organizations have not widely adopted Bard/Gemini based on public chatter, often because of trust and capability gaps. Enterprises need consistency, citations, and integration with their workflows (Office 365 is deeply integrated with OpenAI’s tech via MS Copilot, for example). Google’s equivalent (Duet AI with Gemini) is still evolving. Until Gemini/Bard proves it can reliably draft emails, create slide decks, or analyze data in Google Sheets at a level on par with or above GPT-4, enterprise users will feel that Google’s solution isn’t addressing their needs fully. Some posts on r/Bard from professionals are along the lines of “I tried Bard for work tasks, it wasn’t as good as ChatGPT, so we’ll wait and see.” That indicates enterprise users are an underserved segment for now – they want an AI that slots into Google Workspace and actually boosts productivity without needing constant verification of outputs.

  • Users in the Google ecosystem who prefer one-stop solutions: There’s a segment of users who use Google for everything (search, email, documents) and would happily use a Google AI for all their chatbot needs – if it were as good. Right now, those users are somewhat underserved because they end up using ChatGPT for certain things and Bard for others. They might ask factual questions to ChatGPT because they trust its answer quality more, but use Bard for its browsing or integration attempts. That split experience isn’t ideal. Such users really just want to stay in one app/assistant. If Gemini improves, they’ll consolidate around it, but until then their use case of “one assistant to rule them all” isn’t fulfilled.

  • Developers/Data scientists on Google Cloud: Google did release Gemini models via its Vertex AI platform for developers. However, early reports and benchmarks suggested Gemini (particularly the available “Gemini Pro” model) wasn’t beating GPT-4. Developers who prefer Google Cloud for AI services are thus a bit underserved by model quality – they either have to accept a slightly inferior model or integrate OpenAI’s API separately. This enterprise developer segment is hungry for a strong Google model so they can keep everything in one stack. Until Gemini’s performance clearly excels in some areas or pricing offers a compelling reason, it’s not fully serving this group’s needs in competitive terms.

Differences in Perception by User Type

  • Developers/Tech Enthusiasts: Tech-savvy users approached Gemini with high expectations (it’s Google, after all). Their perception quickly soured after hands-on testing. Many developers on Reddit ran benchmarks or their favorite tricky questions through Gemini and found it lagging. One programmer bluntly stated, “Gemini is absolute trash like Llama 3.0 used to be”, indicating they rank it even below some open models. Developers are particularly sensitive to logical errors and verbosity. So when Gemini gave verbose but incorrect answers, it lost credibility fast. On the other hand, developers recognize Google’s potential; some hold out hope that “with more fine-tuning, Gemini will get better” and they periodically retest it after updates. At present, however, most devs perceive it as inferior to GPT-4 in almost all serious tasks (coding, complex problem solving). They do appreciate certain things: for example, Gemini has access to real-time information (via Google search) without needing a plugin, which is useful for up-to-date queries. A developer might use Bard for something like “search and summarize the latest papers on X,” where it can quote web data. But for self-contained reasoning, they lean toward other models. In summary, tech enthusiasts see Gemini as a promising work-in-progress that currently feels a generation behind. It hasn’t earned their full trust, and they often post side-by-side comparisons highlighting its mistakes to spur Google to improve it.

  • Casual/Everyday Users: Casual users, including those who got access to the new Bard on their phones or via the web, had mixed feelings. Many casual users initially approached Bard (Gemini) because it’s free and easy to access with a Google account, unlike GPT-4 which was paywalled. Some casual users actually report decent experiences for simple uses: for example, one Redditor in r/Bard gave a positive review noting Gemini helped them with things like reviewing legal docs, copywriting, and even a fun use-case of identifying clothing sizes from a photo. They said “Gemini has been a valuable resource for answering my questions… up-to-date information… I’ve become so accustomed to the paid version that I can’t recall how the free version performs.” – indicating that at least some casual users who invested time (and money) into Bard Advanced found it useful in daily life. These users tend to use it for practical, everyday help and may not push the model to its limits. However, many other casual users (especially those who had also tried ChatGPT) were disappointed. Everyday users asking for travel advice, trivia, or help with a task found Bard’s answers less clear or useful. The perception here is split: brand-loyal Google users vs. those already spoiled by ChatGPT. The former group, if they hadn’t used ChatGPT much, sometimes find Bard/Gemini “pretty good” for their needs and appreciate that it’s integrated with search and free. The latter group almost invariably compares and finds Gemini wanting. They might say, “Why would I use Bard when ChatGPT is better 90% of the time?” So casual user perception really depends on their prior frame of reference. Those new to AI assistants might rate Gemini as a helpful novelty; those experienced with the competition see it as a disappointment that “still sucks so bad” and needs to improve.

  • Business/Professional Users: Many professionals gave Bard a try when it launched with Google Workspace integration (Duet AI). The perception among this group is cautious skepticism. On one hand, they trust Google’s enterprise promises regarding data privacy and integration (e.g., editing Docs via AI, summarizing meetings from Calendar invites, etc.). On the other hand, early tests often showed Gemini making factual mistakes or providing generic output, which is not confidence-inspiring for business use. For example, a professional might ask Bard to draft a client report – if Bard inserts incorrect data or weak insights, it could be more hassle than help. Therefore, professional users tend to pilot Bard on non-critical tasks but still lean on GPT-4 or Claude for important outputs. There’s also a perception that Google was playing catch-up: many saw Bard as “not ready for prime time” and decided to wait. Some positive perception exists in areas like real-time data queries – e.g., a financial analyst on Reddit noted Bard could pull recent market info thanks to Google search, which ChatGPT couldn’t unless plugins were enabled. So in domains where current data is key, a few professionals saw an advantage. Another nuance: people in the Google ecosystem (e.g., companies that use Google Workspace exclusively) have a slightly more favorable view simply because Bard/Gemini is the option that fits their environment. They are rooting for it to improve rather than switching to a whole different ecosystem. In summary, business users see Gemini as potentially very useful (given Google’s data and tool integration), but as of early 2025, it hasn’t earned full trust. They perceive it as the “new contender that isn’t quite there yet” – worth monitoring, but not yet a go-to for mission-critical tasks. Google’s reputation buys it some patience from this crowd, but not indefinite; if Gemini doesn’t markedly improve, professionals might not adopt it widely, sticking with other solutions.


Open-Source LLMs (e.g. LLaMA-based Models)

Common Pain Points and Limitations

  • Hardware and setup requirements: Unlike cloud chatbots, open-source LLMs typically require users to run them on local hardware or a server. This immediately presents a pain point: many models (for example, a 70-billion-parameter LLaMA model) need a powerful GPU with a lot of VRAM to run smoothly. As one Redditor succinctly put it, “Local LLMs on most consumer hardware aren't going to have the precision needed for any complex development.” For the average person with only an 8GB or 16GB GPU (or just a CPU), running a high-quality model can be slow or outright unfeasible. Users might resort to smaller models that fit, but those often yield lower quality output (“dumber” responses). The complexity of setup is another issue – installing model weights, setting up environments like Oobabooga or LangChain, managing tokenization libraries, etc., can be intimidating for non-developers. Even technically skilled users describe it as a hassle to keep up with new model versions, GPU driver quirks, and so on. One thread titled “Seriously, how do you actually use local LLMs?” had people sharing that many models “either underperform or don't run smoothly on my hardware”, and asking for practical advice. (Back-of-the-envelope VRAM math behind these complaints appears after this list.)

  • Inferior performance to state-of-the-art closed models: Open-source models have made rapid progress, but as of 2025 many users note they still lag behind the top proprietary models (GPT-4, Claude) in complex reasoning, coding, and factual accuracy. A vivid example: a user on r/LocalLLaMA compared outputs in their native language and said “Every other model I’ve tried fails… They don’t come even close [to GPT-4]. ChatGPT 4 is absolutely amazing at writing”. This sentiment is echoed widely: while smaller open models (like a fine-tuned 13B or 7B) can be impressive for their size, they struggle with tasks that require deep understanding or multi-step logic. Even larger open models (65B, 70B) which approach GPT-3.5 level still can falter at the kind of tricky problems GPT-4 handles. Users observe more hallucinations and errors in open models, especially on niche knowledge or when prompts deviate slightly from the training distribution. So, the gap in raw capability is a pain point – one must temper expectations when using local models, which can be frustrating for those accustomed to ChatGPT’s reliability.

  • Limited context length: Most open-source LLMs traditionally have smaller context windows (2048 tokens, maybe 4k tokens) compared to what ChatGPT or Claude offer. Some newer finetunes and architectures are extending this (for instance, there are 8K or 16K token versions of LLaMA-2, and research like MPT-7B had a 16K context). However, practical use of very long context open models is still in early stages. This means local model users face similar memory issues – the model forgets earlier parts of the conversation or text, unless they implement external memory schemes (like vector databases for retrieval). In Reddit discussions, users often mention having to manually summarize or truncate history to stay within limits, which is laborious. This is a notable limitation especially since proprietary models are pushing context lengths further (like Claude’s 100k).

  • Lack of fine-tuned instruction-following in some models: While many open models are instruction-tuned (Alpaca, LLaMA-2-Chat, etc.), not all are as rigorously RLHF-trained as ChatGPT. This can result in local models sometimes being less responsive to instructions or system prompts. For example, a raw LLaMA model will just continue text and ignore a user prompt format entirely – one must use a chat-tuned version. Even then, the quality of the tuning data matters. Some Reddit users noted that certain instruct models either overly refused (because they were tuned with heavy safety, e.g. some Facebook LLaMA-2 chat would reply with policy refusals similar to ChatGPT) or under-performed (not following the query precisely). A complaint on GitHub about CodeLlama-70B-Instruct said it “is so censored it's basically useless”, showing frustration that an open model adopted the same strictness without the alternative of turning it off. So, depending on the model chosen, users might face either a model that is too loose (and gives irrelevant continuation) or one that is too strict/guarded. Getting well-balanced instruction-following behavior often requires trying multiple finetunes. (A sketch of the chat-template formatting involved appears after this list.)

  • Fragmentation and rapid change: The open-source LLM landscape evolves extremely fast, with new models and techniques (quantization, LoRA finetunes, etc.) emerging weekly. While exciting, this is a pain point for users who don’t want to constantly tweak their setup. What worked last month might be outdated by this month. One Redditor humorously compared it to the wild west, saying the community is “finding ways to ‘fake it’ so it feels like it’s similar [to GPT-4]” but often these are stopgap solutions. For a casual user, it’s daunting to even choose from dozens of model names (Vicuna, Alpaca, Mythomax, Mistral, etc.), each with multiple versions and forks. Without a single unified platform, users rely on community guides – which can be confusing – to decide what model suits their needs. This fragmentation in tools and model quality is an indirect pain point: it raises the entry barrier and maintenance effort.

  • No official support or guarantees: When something goes wrong with a local LLM (e.g., the model outputs offensive content or crashes), there’s no customer support to call. Users are on their own or reliant on community help. For hobbyists this is fine, but for professional use this lack of formal support is a barrier. Some Reddit users working in companies noted that while they’d love the privacy of an open model, they worry about who to turn to if the model malfunctions or if they need updates. Essentially, using open-source is DIY – both a strength and a weakness.
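
For context on the hardware complaints above, the rough VRAM arithmetic is simple: weight memory is parameter count times bits per weight, and activations plus the KV cache add more on top. A back-of-the-envelope sketch (weights only, approximate):

```python
# Approximate GPU memory needed just to hold model weights.
def weight_vram_gb(params_billions: float, bits_per_weight: int) -> float:
    return params_billions * bits_per_weight / 8  # 1B params at 8 bits ≈ 1 GB

print(weight_vram_gb(70, 16))  # ≈140 GB: far beyond any consumer GPU
print(weight_vram_gb(70, 4))   # ≈35 GB: still more than a single 24 GB card
print(weight_vram_gb(7, 4))    # ≈3.5 GB: fits comfortably on an 8 GB GPU
```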
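And to illustrate the instruction-following point: chat finetunes expect their turns wrapped in a specific template, which is why a raw base model just continues text. A sketch using the transformers apply_chat_template helper; the model name is illustrative (the official LLaMA-2 weights are gated):

```python
# Format a conversation the way a chat-tuned model was trained to see it.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
messages = [{"role": "user", "content": "Write a haiku about GPUs."}]
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # wraps the turn in the [INST] ... [/INST] markers the finetune expects
```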

Frequently Requested Features or Improvements

  • Better efficiency (quantization and optimization): A major focus in the community (and thus a common request) is making large models run on smaller hardware. Users eagerly await techniques that let a 70B model run as smoothly as a 7B model. There’s already 4-bit or 8-bit quantization, and threads often discuss new methods like AWQ or RNN-like adapters. One user cited research where improved quantization could maintain quality at lower bit precision. The wish is essentially: “Let me run a GPT-4-level model on my PC without lag.” Every breakthrough that edges closer (like more efficient transformer architectures or GPU offloading to CPU) is celebrated. So, requests for better tooling (like the next generation of llama.cpp or other accelerators) are common – anything to reduce the hardware barrier. (A sketch of 4-bit loading appears at the end of this section.)

  • Larger and better models (closing the quality gap): The community constantly pushes for new state-of-the-art open models. Users are excited about projects like LLaMA 3 (if/when Meta releases one) or collaborations that could produce a 100B+ open model. Many express optimism that “we will have local GPT-4 models on our machines by the end of this year” – in that quote, the user is betting on LLaMA 3 plus fine-tuning to deliver GPT-4-like performance. One could say the “requested feature” here is simply more weights and more training: the community wants tech companies and research groups to open-source bigger, better models so they can run them locally. Each time a new model (like Mistral 7B or Falcon 40B) comes out, users test whether it beats the last. The ultimate request is an open model that truly rivals GPT-4, eliminating the need for closed AI for those who can host it.

  • User-friendly interfaces and one-click setups: To broaden adoption, many users ask for easier ways to use local LLMs. This includes graphical interfaces where one can download a model and start chatting without any command-line work. Projects are addressing this (Oobabooga’s text-generation-webui, LM Studio, etc.), but newcomers still struggle. A typical Reddit thread asks, “How do I set up a ChatGPT-like LLM locally?”, with users requesting step-by-step guides. So a frequent wish is for a simplified installation – perhaps an official app or Docker container that bundles everything needed, or integration into popular software (imagine an extension that brings a local LLM into VSCode or Chrome easily; several existing tools already expose a local OpenAI-compatible endpoint, sketched after this list). Essentially, reduce the technical overhead so that less tech-savvy folks can also enjoy private LLMs.

  • Longer context and memory for local models: Open-source developers and users are experimenting with extending context (through positional-embedding tweaks or specialized models). Many users request that new models come with longer context windows by default – for example, an open model with a 32k context would be very attractive. Until that happens, some rely on external “retrieval” solutions, such as LangChain with a vector store that feeds relevant information into the prompt (a minimal retrieval sketch follows this list). Users on r/LocalLLaMA frequently discuss their setups for pseudo-long context, but they also want the models themselves to handle more. So an improvement they seek is: “Give us a local Claude – something with tens of thousands of tokens of context.” This would let them do book analysis, long conversations, or big-codebase work locally.

  • Improved fine-tuning tools and model customization: Another ask is making it easier to fine-tune or personalize models. Libraries for fine-tuning on new data exist (Alpaca did it with 52K instructions; Low-Rank Adaptation (LoRA) allows finetuning with limited compute – a LoRA sketch follows this list), but the process is still somewhat involved. Users would love more accessible tooling to, say, feed all their writings or company documents to a model and have it adapt. Techniques like LoRA are steps in that direction, but a more automated solution (perhaps a wizard UI: “upload your documents here to fine-tune”) would be welcomed. Essentially, bring the fine-tuning capability that OpenAI provides via its API to the local realm in a user-friendly way.

  • Community-driven safety and moderation tools: Given that open models can produce anything (including disallowed content), some users have requested or started developing moderation layers that users can toggle or adjust. This is a bit niche, but the idea is optional filters to catch egregious outputs when someone wants them (for example, if kids or students might interact with the model locally). Since open models won’t police themselves, a plugin or script to scan outputs for extreme content could be useful (an output-filter sketch follows this list). Some in the community are working on opt-in “ethical guardrails”, which is interesting because it keeps control in the user’s hands. So features around controlling model behavior – whether to make it safer or to remove safeties – are often discussed and requested, depending on the user’s goals.
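
On the efficiency point above: running a quantized model is already quite approachable in code. Below is a minimal sketch using the llama-cpp-python bindings to load a 4-bit GGUF checkpoint – the file path is hypothetical, and quantized files like this are typically downloaded from community model hubs.

```python
# Minimal sketch: running a 4-bit quantized model with llama-cpp-python
# (pip install llama-cpp-python). The GGUF path below is hypothetical.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-2-13b-chat.Q4_K_M.gguf",  # hypothetical local file
    n_ctx=4096,       # context window to allocate
    n_gpu_layers=35,  # offload some layers to the GPU; 0 = CPU-only
)

out = llm("Q: What does 4-bit quantization trade away?\nA:", max_tokens=128)
print(out["choices"][0]["text"])
```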
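
On the ease-of-use point: several local front-ends (LM Studio, text-generation-webui, and others) already expose an OpenAI-compatible HTTP endpoint, so existing tooling can talk to a private model with minimal changes. A minimal sketch, assuming such a server is running locally – the port shown is LM Studio’s default but varies by tool, and the model name is often ignored or aliased by the server:

```python
# Minimal sketch: querying a locally hosted model through an
# OpenAI-compatible endpoint. No data leaves the machine.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="local-model",  # many local servers ignore or alias this field
    messages=[{"role": "user", "content": "Hello from a private, local LLM!"}],
)
print(resp.choices[0].message.content)
```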
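
On the pseudo-long-context workaround: the retrieval pattern described above can be shown without any framework. A minimal sketch using sentence-transformers to select the most relevant chunks for the prompt – the chunks and query are invented for illustration:

```python
# Minimal sketch: retrieval-augmented prompting. Embed document chunks,
# pull the closest matches, and stuff only those into the model's prompt.
# Requires: pip install sentence-transformers
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")

chunks = [
    "Chapter 1: the protagonist arrives in the city...",
    "Chapter 7: the stolen letter is finally decoded...",
    "Appendix: glossary of invented terms...",
]
chunk_vecs = embedder.encode(chunks, normalize_embeddings=True)

query = "What happened to the stolen letter?"
query_vec = embedder.encode([query], normalize_embeddings=True)[0]

# Vectors are normalized, so a dot product is cosine similarity.
scores = chunk_vecs @ query_vec
top_k = np.argsort(scores)[::-1][:2]

context = "\n".join(chunks[i] for i in top_k)
prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
print(prompt)  # pass this to any local model instead of the whole document
```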
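
On fine-tuning: LoRA attaches small trainable adapter matrices to a frozen base model, which is what makes finetuning feasible on limited compute. A minimal sketch with Hugging Face peft and transformers – the base model and target modules are illustrative choices, and a real finetune still needs a dataset and training loop:

```python
# Minimal sketch: attaching LoRA adapters to a small causal LM.
# Requires: pip install transformers peft
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

lora_cfg = LoraConfig(
    r=8,             # rank of the low-rank update matrices
    lora_alpha=16,   # scaling factor for the updates
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # typically well under 1% of the base model
# From here, train with a standard Trainer or custom loop on your own data.
```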
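
On opt-in moderation: the toggleable-filter idea is simple to prototype. A minimal sketch of a user-controlled output filter – the block patterns are placeholders; the point is that the user, not the vendor, decides whether and what to filter:

```python
# Minimal sketch: an optional, user-controlled output filter.
# The patterns below are placeholders for whatever the user deems egregious.
import re

BLOCK_PATTERNS = [
    re.compile(r"\bexample-banned-term\b", re.IGNORECASE),  # placeholder
]

def filter_output(text: str, enabled: bool = True) -> str:
    """Return text unchanged when disabled; redact matches when enabled."""
    if not enabled:
        return text
    for pattern in BLOCK_PATTERNS:
        text = pattern.sub("[redacted]", text)
    return text

raw = "model output containing example-banned-term here"
print(filter_output(raw, enabled=True))   # -> "... [redacted] here"
print(filter_output(raw, enabled=False))  # opt out of filtering entirely
```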

Underserved Needs or User Segments

  • Non-technical users valuing privacy: Right now, local LLMs largely cater to tech enthusiasts. A person who isn’t computer-savvy but cares about data privacy (for instance, a psychotherapist who wants AI help analyzing notes but cannot upload them to the cloud) is underserved. They need a local solution that’s easy and safe, but the complexity is a barrier. Until local AI becomes as easy as installing an app, these users remain on the sidelines – either compromising by using cloud AI and risking privacy, or not using AI at all. This segment – privacy-conscious but not highly technical individuals – is clearly underserved by the current open-source offerings.

  • Budget-conscious users in regions with poor internet: Another segment that benefits from local models is people who lack reliable internet or can’t afford API calls. If someone could get a decent offline chatbot on a low-end machine, it would be valuable (imagine educators or students in remote areas). Presently, offline quality isn’t great unless you have a high-end PC; some very small models run on phones, but their ability is limited. So users who need offline AI – due to connectivity or cost – are a group that open source could serve, but the technology is just on the cusp of being helpful enough. They’ll be better served as models get more efficient.

  • Creators of NSFW or specialized content: One reason open models gained popularity is that they can be uncensored, enabling use cases that closed AIs forbid (erotic roleplay, exploring violent fiction, etc.). While this “underserved” segment is controversial, it is real – many Reddit communities (e.g., for AI Dungeon or character chatbots) moved to local models after OpenAI and others tightened content rules. These users are now served by open models to an extent, but they often have to find or finetune models specifically for this purpose (like Mythomax for storytelling, etc.). They occasionally lament that many open models still have remnants of safety training (refusing certain requests). So they desire models explicitly tuned for uncensored creativity. Arguably they are being served (since they have solutions), but not by mainstream defaults – they rely on niche community forks.

  • Language and cultural communities: Open-source models could be fine-tuned for specific languages or local knowledge, but most prominent ones are English-centric. Users from non-English communities may be underserved because neither OpenAI nor open models cater perfectly to their language/slang/cultural context. There are efforts (like BLOOM and XLM variants) to build multilingual open models, and local users request finetunes in languages like Spanish, Arabic, etc. If someone wants a chatbot deeply fluent in their regional dialect or up-to-date on local news (in their language), the major models might not deliver. This is a segment open models could serve well (via community finetuning) – and on Reddit we do see people collaborating on, say, a Japanese-tuned LLM. But until such models are readily available and high-quality, these users remain somewhat underserved.

  • Small businesses and self-hosters: Some small companies or power users would love to deploy an AI model internally to avoid sending data out. They are somewhat served by open source in that it’s possible, but they face challenges in ensuring quality and maintenance. Unlike big enterprises (which can pay for OpenAI or a hosted solution), small businesses might try to self-host to save costs and protect IP. When they do, they may find the model isn’t as good, or it’s hard to keep updated. This segment is in a middle ground – not huge enough to build their own model from scratch, but capable enough to attempt using open ones. They often share tips on Reddit about which model works for customer service bots, etc. They could benefit from more turn-key solutions built on open models (some startups are emerging in this space).

Differences in Perception by User Type

  • Developers/Hobbyists: This group is the backbone of the open-source LLM community on Reddit (e.g., r/LocalLLaMA is full of them). Their perception is generally optimistic and enthusiastic; they trade models and benchmarks like collectors. Many developers are thrilled by how far open models have come in a short time. For instance, one user shared that a fine-tune of a leaked 70B model (Miqu-1 70B) felt “on par with GPT-4 for what I need… I canceled my ChatGPT+ subscription months ago and never looked back”. This exemplifies the subset of developers who have managed to tailor an open solution to their personal use cases – they see open models as liberating and cost-saving. On the other hand, developers are clear-eyed about the limitations. Another user responded that they’d love to cancel ChatGPT, “I would if anything even compared to ChatGPT 4… [but] every other model fails… They don’t come close”, particularly citing creative-writing quality. So within this group, perceptions vary with what they use AI for. Broadly: if the task is brainstorming or coding with some tolerance for error, many devs are already content with local models; if the task demands high-stakes accuracy or top-tier creativity, they acknowledge open models aren’t there yet. Even when acknowledging shortcomings, though, the tone is hopeful – they often say “we’re pretty much there” or that it’s just a matter of time. Importantly, developers enjoy the freedom and control of open models: they can tweak, fine-tune, or even peek into the model’s workings, which closed APIs don’t allow. This fosters a sense of community ownership. Their perception, in short, is that open LLMs are a worthwhile endeavor, improving rapidly, and philosophically aligned with tech freedom – and they accept the rough edges as the price of that freedom.

  • Casual Users: Purely casual users (neither privacy-focused nor particularly techie) usually don’t bother with open-source LLMs at all – and if they do, it’s via some simplified app. Their perception is therefore largely absent or shaped by hearsay. If a non-technical person tries a local LLM and it’s slow or gives a weird answer, they’ll likely conclude it’s not worth the trouble. For example, a gamer or student might try a 7B model for fun, see it underperform compared to ChatGPT, and abandon it. So among casual observers the perception of open models tends to be that they are “toys for nerds” or only for those who really care about avoiding cloud services. This is slowly changing as more user-friendly apps emerge, but broadly, the typical casual user on Reddit isn’t raving about open LLMs – they discuss ChatGPT or Bard because those are accessible. That said, a subset of casual users who primarily want, say, uncensored roleplay have learned to download something like TavernAI with a model, and they perceive it as great for that one niche purpose. They might not even know the model’s name – just that it’s an “uncensored AI that doesn’t judge me”. In summary, the average casual user’s perception is either indifference (they haven’t tried) or that open source is a bit too raw and complex for everyday use.

  • Business/Professional Users: Professional attitudes toward open LLMs are pragmatic. Some tech-savvy business users on Reddit mention using local models for privacy – for example, running an LLM over internal data to answer company-specific questions without sending information to OpenAI. These users perceive open LLMs as a means to an end: they might not love the model per se, but it fulfills a requirement (data stays in-house). Often they’ll choose an open model when compliance rules force their hand. The perception here is that open models are improving and can be “good enough” for certain internal applications, especially with fine-tuning. However, many note the maintenance burden – you need a team versed in ML operations to keep the model running and updated. Small businesses may find that daunting and shy away despite wanting the privacy; as a result, some end up using third-party services that host open models for them, trying to get the best of both worlds. In sectors like healthcare or finance, professionals on Reddit discuss open source as an attractive option when regulators don’t allow data to go to external servers. So they perceive open LLMs as safer for privacy, but riskier in terms of output accuracy. Cost is another factor: over the long run, paying for API calls to OpenAI can get expensive, so a business user might calculate that investing in a server running a local model is cheaper. If that math works out, they perceive open LLMs as cost-effective alternatives; if not, they stick with closed ones. Generally, business users are cautiously interested – they follow news like Meta’s releases or OpenAI’s policy changes to see which route is viable. Open models are also seen as getting more enterprise-ready, especially with projects like RedPajama that aim to provide openly licensed models suitable for commercial use. As those licenses clarify, businesses feel more comfortable adopting them. Perceptions are thus improving: a year ago many enterprises wouldn’t consider open models; now some do, as they hear success stories of others deploying them. Still, the widespread perception is that open models remain a bit experimental – likely to change as the tech matures and success stories spread.


Finally, the following table provides a high-level summary comparing the tools across common issues, desired features, and gaps:

| LLM Chat Tool | Common User Pain Points | Frequently Requested Features | Notable Gaps / Underserved Users |
| --- | --- | --- | --- |
| ChatGPT (OpenAI) | - Limited conversation memory (small context)<br>- GPT-4 message cap for subscribers<br>- Overly strict content filters/refusals<br>- Occasional factual errors or “nerfing” of quality<br>- Sometimes incomplete code answers | - Larger context windows (longer memory)<br>- Ability to upload/use personal files as context<br>- Option to relax content moderation (for adults/pro users)<br>- Higher GPT-4 usage limits or no cap<br>- More accurate, up-to-date knowledge integration | - Users with very long documents or chat sessions (researchers, writers)<br>- Those seeking uncensored or edge-case content (adult, hacking) (currently not served by official ChatGPT)<br>- Privacy-sensitive users (some businesses, medical/legal) who can’t share data with cloud (no on-prem solution yet)<br>- Non-English users in niche languages/dialects (ChatGPT is strong in major languages, but less so in rare ones) |
| Claude (Anthropic) | - Conversation limits (Claude often stops and says “come back later” after a lot of usage)<br>- Can go off-track in 100k context (attention issues on very large inputs)<br>- Doesn’t always obey system/format strictly<br>- Some content refusals (e.g. certain advice) that surprise users<br>- Initially limited availability (many regions lacked access) | - Higher or no daily prompt limits (especially for Claude Pro)<br>- Better handling of very long contexts (stay on task)<br>- Plugin or web-browsing abilities (to match ChatGPT’s extendability)<br>- Image input capability (multimodal support) to analyze visuals<br>- Official launch in more countries/regions for broader access | - Non-US users (until global rollout is complete) who want access to Claude’s capabilities<br>- Users needing precise structured outputs (might find Claude too verbose/loose at times)<br>- Developers wanting integration: Claude API is available but fewer third-party tools support it compared to OpenAI’s<br>- Users who prefer multi-turn tools: Claude lacks an official plugin ecosystem (underserving those who want an AI to use tools/internet autonomously) |
| Google Gemini (Bard) | - Frequent incorrect or incomplete answers (underperforms vs GPT-4)<br>- Verbose, rambling responses when a concise answer is needed<br>- Poor integration with Google apps despite promises (can’t act on Gmail/Docs as expected)<br>- Inconsistent behavior: forgets capabilities, random refusals<br>- Mediocre coding help (below ChatGPT/Claude in code quality) | - Major quality improvements in reasoning & accuracy (close the gap with GPT-4)<br>- Tighter integration with Google services (actually read Docs, draft emails, use Calendar as advertised)<br>- More concise response mode or adjustable verbosity<br>- Expanded support for third-party plugins or extensions (to perform actions, cite sources, etc.)<br>- Dedicated mobile apps and improved voice assistant functionality (especially on Pixel devices) | - Power users wanting a reliable “Google Assistant 2.0” (currently let down by Bard’s limitations)<br>- Multilingual users: if Bard isn’t as fluent or culturally aware in their language, they remain underserved<br>- Enterprise Google Workspace customers who need an AI assistant on par with Microsoft’s offerings (Duet AI with Gemini still maturing)<br>- Developers: few rely on Gemini’s API yet due to quality; this segment sticks to OpenAI unless Gemini improves or is needed for data compliance |
| Open-Source LLMs | - High resource requirements to run decent models (hardware/GPU bottleneck)<br>- Extra setup complexity (installing models, updates, managing UIs)<br>- Quality gaps: often worse reasoning/fact accuracy than top closed models<br>- Smaller context limits (most local models can’t handle extremely long inputs out-of-the-box)<br>- Variable behavior: some models lack fine safety or instruction tuning (output can be hit-or-miss) | - More efficient models/optimizations to run on everyday hardware (quantization improvements, GPU acceleration)<br>- New open models approaching GPT-4 level (larger parameter counts, better training – eagerly awaited by community)<br>- Easier “one-click” setup and user-friendly interfaces for non-experts<br>- Longer context or built-in retrieval to handle lengthy data<br>- Options to fine-tune models easily on one’s own data (simpler personalization) | - Non-technical users who want privacy (right now the technical barrier is high for them to use local AI)<br>- Users in low-bandwidth or high-cost regions (open models could serve offline needs, but current ones might be too slow on weak devices)<br>- Groups needing uncensored or specialized outputs (they partially rely on open LLMs now, but mainstream open models still include some safety tuning by default)<br>- Businesses looking for on-prem solutions: open models appeal for privacy, but many firms lack ML expertise to deploy/maintain them (gap for managed solutions built on open LLMs) |

Each of these AI chat solutions has its devoted fans and critical detractors on Reddit. The feedback reveals that no single tool is perfect for everyone – each has distinct strengths and weaknesses. ChatGPT is praised for its overall excellence but criticized for restrictions; Claude wins favor for its context length and coding ability but remains slightly niche; Gemini is powerful on paper yet has to win user trust through better performance; and open-source models empower users with freedom and privacy at the cost of convenience. Reddit user discussions provide a valuable window into real-world usage: they surface recurring issues and unmet needs that developers of these AI models can hopefully address in future iterations. Despite different preferences, all user groups share some common desires: more capable, trustworthy, and flexible AI assistants that can seamlessly integrate into their lives or workflows. The competition and feedback loop between these tools – often playing out through side-by-side Reddit comparisons – ultimately drives rapid improvements in the LLM space, to the benefit of end users.

Sources:

  • Reddit – r/ChatGPTPro thread on ChatGPT pain points, r/ChatGPT complaints about policy/quality
  • Reddit – r/ClaudeAI discussions comparing Claude vs ChatGPT, user feedback on Claude’s limits
  • Reddit – r/GoogleGeminiAI and r/Bard feedback on Gemini’s launch, positive use-case example
  • Reddit – r/LocalLLaMA and r/LocalLLM user experiences with open-source models, discussions on local model performance and setup