I'd say in general don't rely on AI to get you factual information.
It's fascinating that it can still hallucinate outright non-existent things.
The problem isn't even that AI can provide decent basic information; the problem is that it then gets trusted, and it goes on to provide false information in an affirmative tone, with references to fictional sources, and that can become a real problem. This is quite easy to check by asking professional questions in a field you understand.
No argument. I am well aware of the limitations of AI.
I predict that the internet as we know it will change dramatically in the future. Just think about it: Nowadays, when we use a search engine, we rarely click on the search results anymore because the AI already provides us with a summary of these sources within the search engine. This means that the authors of the original content don't get the clicks and earn less money. Ultimately, these sources will disappear and knowledge will be concentrated in the hands of monopolies.
What use is the internet if anyone can publish information that is never found?
My experience has been rather positive, to be honest. That's why I'm surprised it doesn't have an algorithm or whatever in place to critically evaluate the provided information by default. When I start pestering it with prompts and detailed steps, it suddenly starts thinking longer and gives me somewhat more correct information. Of course it's an LLM and errors are expected. But, for example, I'm reviewing a law and a standard: I literally have a publicly available law which states very specific things. GPT still outright makes a mistake and can't explain where it got the wrong information.
I mean, for productivity work, writing, etc., it is really good. Just do quality and sanity checks and you're good to go. With a good prompt it even talks like a human, not some over-hyped AI cheerleader, but at the same time it can sometimes be so abnormally obtuse that it surprises me.
Of course, you can really unlock its potential with a bit more work: AI agents, correct prompts, etc. For example, Claude used in the terminal itself sounds less dumb and does a pretty brilliant job with good prompts. Even the coding function on the website is relatively fine for small hobby projects. I ask it to create games for personal use and it does a pretty great job very quickly. Sure, the code is a mess, but it works...
And it still outright gives incorrect information :D That's why I'm all the more surprised that it still hallucinates so easily.
A lot of the issues I've had with AI come from it telling you something with 100% confidence that's just completely wrong. You can ask it "are you sure?" and it almost always doubles down; then you give it the actual real information and it says, "oh, you are right, I'm sorry". I don't trust AI to give me info I don't know about, unless it's something very broad and simple, because technically it lacks knowledge on pretty much everything.
I don't trust AI to give me info I don't know about
Exactly, which makes the whole thing a rather pointless waste of resources. It's like a search engine that almost always gives you the wrong information, and wastes energy doing it too. Not to mention most people just take everything they get from "AI" at face value, and it's not like the internet needed one more "credible" source of misinformation.
Where does it say I’m confused? 🫤
The person is stating such an obvious piece of information that my comment was sarcastic in nature.
There's no point in stating such things unless he's going to back up his statement with examples such as:
“I did a giveaway and the winner couldn’t activate it, and now they’re demanding a replacement key that will work”
“I won a giveaway, and now the gifter wants to ask for deletion because I couldn’t activate due to region locks, but I refused”
This person is either upset at something specifically that happened to them, or honestly daft, and doesn’t realise AI isn’t to be trusted.
This person is either upset at something specifically that happened to them
Well, yeah. How would you react if someone (or perhaps even multiple different people) made a critical error like this with you because they didn't know something obvious?
So what if OP didn't back up the post with an example? You were still smart enough to figure out what happened without one, so you clearly understood why OP stated such things.
my comment was sarcastic in nature.
and you need to work on your sarcasm skills, because that didn't come across at all. It came across as smug, like you'd cornered someone in an argument with a gotcha, but without the gotcha. An actual sarcastic response to OP would be more along the lines of "Oh, so you're saying it's okay to use AI for everything else, right?", since that directly touches on the unstated implication in the post while making it clear how ridiculous it is.
If that information isn't clearly stated on the website that sells the key before you even buy it, you 100% shouldn't trust AI to give you an answer on it: it can't find it either, so it clearly doesn't have the ability to answer. Do you expect it to buy the key from every retailer, look at the locks, add that to its data, and then give you an answer? What it will actually do is search left and right for something that looks kind of right (say, someone else asking about a region lock, or some matching words from the game's title) until it reaches a match percentage for your prompt that it considers satisfactory, and then present whatever it found as the output. If the match is too low it will admit it has no clue; if it thinks it found something, it will act like it's the truth.
If you really have to ask it something, ask it to provide the source where it found the answer. Still, you shouldn't ask it these kinds of prompts; AI isn't meant to know this kind of stuff. It's like asking AI about the next Humble Choice. I think someone already did this, by the way, and of course it was wrong; it was just taking output from somewhere random on the web.
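That "%match" intuition can be sketched in a few lines (a loose analogy only: real models compare learned embeddings, not raw strings, and the snippets and threshold here are made up for illustration):

```python
from difflib import SequenceMatcher

def best_match(prompt, candidates, threshold=0.6):
    """Return the closest-looking candidate, or None if nothing clears
    the threshold (analogous to 'admitting it has no clue')."""
    scored = [(SequenceMatcher(None, prompt.lower(), c.lower()).ratio(), c)
              for c in candidates]
    score, text = max(scored)
    return text if score >= threshold else None

# Hypothetical scraps of web text the model might have "seen".
snippets = [
    "Hogwarts Legacy key region locked to EU",
    "Hogwarts Legacy system requirements",
    "best broom in Hogwarts Legacy",
]

# Looks kind of right, so it gets served up as an answer:
print(best_match("is the Hogwarts Legacy key region locked", snippets))

# Nothing clears the threshold, so it "admits it has no clue":
print(best_match("weather forecast for tomorrow", snippets))
```

The point of the sketch: the winner is whatever *resembles* the prompt most, not whatever is *true* about the actual key.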
The information on websites about locks is sometimes wrong too... For example, Fanatical shows a lot of countries where a game supposedly can't be activated, but you actually can. Hogwarts Legacy on SteamDB is an example.
Right, but there's no way AI would ever know that distinction, and I can't see this changing anytime soon. You'd need NotAI for this to work: some custom bot that specifically scrapes the info from all these third-party sellers first and then provides you a list of it, in which case you might as well just look at the packages list on SteamDB.
You know what I'd love more? If people stopped humanizing "AI" by insinuating that "AI" is capable of hallucinating. It's a large language model. You submit an input, it delivers an output. There's nothing AI about any of the shit our fellow humans have created so far.
Current generative AIs are not programmed to tell the truth. They are programmed to mimic data patterns – that is, to produce responses similar to what a human might give. The more they evolve, the more their answers seem credible and complex; but distinguishing between truth and lies is far beyond their capabilities. That requires an understanding of words, sentences, facts, and context – and they have none of that.
When AIs say something false and then try to maintain their lie, it is because that is what a human would do in such a situation (they are just imitating conversational behaviors they have picked up here and there, nothing more).
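The "mimicking patterns, not truth" point can be illustrated with a deliberately tiny toy: a bigram sampler only learns which word tends to follow which, so its output looks fluent regardless of whether the resulting sentence is true (the corpus below is invented for illustration):

```python
import random
from collections import defaultdict

# Toy illustration (not a real LLM): a bigram model only learns
# which word tends to follow which; it has no concept of truth.
corpus = (
    "the key is region locked . "
    "the key works worldwide . "
    "the game is region locked ."
).split()

# Count word-to-next-word transitions from the training text.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(start, n=6, seed=0):
    """Sample a plausible-looking sentence by chaining likely next words."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        nxt = follows.get(out[-1])
        if not nxt:
            break
        out.append(rng.choice(nxt))
    return " ".join(out)

print(generate("the"))
# The output is fluent-looking, but the model cannot know whether
# "the key is region locked" is actually true for any real key.
```

Real LLMs do this at vastly larger scale with learned weights instead of a lookup table, but the objective is the same: produce a likely continuation, not a true one.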
The term may be confusing, but the input/output explanation is inaccurate.
LLMs have been known, and are still known, to make up facts, often to disguise the lack of a proper response.
Now of course, LLMs then run with that, but it's not based on any human input.
I think the real problem is relying solely on AI and not double-checking the results. They can sometimes be useful to help you narrow things down a bit (I like the LLM/natural-language aspect), but they eff up the facts quite a bit, and on a large number of topics, so anyone who uses them should definitely put in the effort to verify the answers.
In my testing, I've also had cases where I tell them my exact OS and version and what I'm trying to do, and they still give me incorrect commands... So I can only imagine that with less specific info, the accuracy of the response is even worse.
The best use-case I've found for them currently is to summarize basic concepts and terminology for a topic you're unfamiliar with... As long as it's a topic that's somewhat well documented online. If I were to do the same from scratch using just search engines without knowing what to search for, that can take a good bit of trial and error before you start coming across the right terms to search on.
But when you start getting detailed, it messes all kinds of stuff up. I've had quite a few technical things it gets wrong; the most annoying is when you say "I'm on xyz OS, version abc, how do I do x..." and it gives you a command that doesn't work there. I've had that happen on multiple occasions.
But mostly I was just forcing myself to use it / test it out so that I'd have a better understanding of what it is capable of instead of being ignorant about it. Overall, I prefer old school searches myself.
Question just popped in my mind. I asked AI A to answer a research question for me. It gave me an answer that looked plausible. As a test, I asked AI B the same question. It asked me for some clarification, I gave it the added context and it gave me the same answer plus some additional information that was not directly relevant to my question (think alternative solutions).
Can I safely conclude both AIs gave me the right answer or is there a good likelihood both were hallucinating?
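One way to reason about it: if the two models made mistakes independently, agreement would be strong evidence, but models trained on overlapping web data tend to inherit the same wrong sources, so their errors are correlated and agreement proves less than it seems. A back-of-the-envelope sketch (the error rate is a made-up number, purely illustrative):

```python
# Suppose, hypothetically, each model is wrong 20% of the time on
# questions like this.
p_wrong = 0.20

# If their errors were independent AND wrong answers never coincided,
# both models being wrong together would be rare:
p_both_wrong_independent = p_wrong * p_wrong   # 4%

# But if both models learned from the same flawed source, a shared
# wrong answer is about as likely as either model being wrong at all:
p_both_wrong_shared_source = p_wrong           # still 20%

print(p_both_wrong_independent, p_both_wrong_shared_source)
```

So agreement between two AIs raises your confidence a little, but it is nowhere near proof; the only reliable check is still an independent source.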
So-called AI is not AI. You cannot trust anything it outputs; it's just an LLM meant to look like human communication.
Depending on what the training data was, the fancy statistics going on inside the LLM mean that, more often than not, some of the info in the output will be close to the "truth" that was fed into it (not necessarily "true", though). But if you want any actual truth, then no, you cannot trust it.
Treat the answers of these AIs like weather forecasts – they give you a possibility, not an absolute truth. Do not trust them. These AIs provide an answer that could be true, but the only way to be sure is to check for yourself. Ask for sources, for links leading to those sources, and go verify that what they said is true. Two times out of three you will see that they made up details or even a whole concept.
And get it out of your head that these AIs are intelligent and understand what you say or what they say, because that is not really the case (even if it looks like the opposite).
Unless you can verify the information or knowledge yourself, you can't conclude that the answer is correct. As a rule of thumb, never trust AI with stuff you don't know; if you do know the stuff, it can help you with brainstorming, formatting, etc.
The only way to conclude that the answer is right is to research it yourself. Personally, I'd recommend Google Scholar: read just the results and conclusions, and then the rest if it's a hit. If it's not that kind of question, then Wikipedia, and for technical stuff, Reddit/YouTube.
Most people use ChatGPT the wrong way. When someone asks a question, it is not intended to provide accurate information; it's intended to keep up a conversation the way a compliant texting partner would. It's essentially an overly enthusiastic yes-man, and it will make something up to keep you engaged if it lacks relevant information.
If you ever need factual information about anything, use Google or Google Scholar.
Incredible how, in the age of information, where almost every piece of information out there is a search and a few clicks away, we still bother running so much of it through a data sieve that (more often than not) farts out the wrong answer for almost every complex or obscure question.
In general, AI is destroying people's ability to do their own research, which ends up making them rely on it even more! IMO, just embrace the suck and spend two more minutes researching and finding information yourself. People who immediately type a prompt into ChatGPT when they have a trivia question are doing their brain a disservice by defaulting to AI instead of finding that info themselves...
We must add that there has been a sharp decline in Google's quality as a search engine; it is regrettably nothing comparable to what it was five or six years ago. I'm a specialist in information search and checking data against reliable sources, and I'm having a rougher time in my work each year. The amount of fake data, ads, and misinformation generated by bots is staggering, not to mention that the algorithms only take you where they want instead of to what you're looking for.
I remember there was a time when citing Wikipedia was considered poorly checked and even laughable; now people take and share info from Instagram posts and chatbots.
Cool thing to specialize in, props! Also, I've been feeling that myself, too :(
Though, if LLMs work with basically the same search capabilities we have, isn't there a better chance for the common user to find a diamond in the rough than for ChatGPT? I feel ChatGPT and other LLMs just suck in all the slop that's dumped onto the first two pages of a Google search, and then you have to fully gamble on whether you found relevant and accurate information.
It's not intelligent enough to differentiate as well as humans can; it has to get lucky and find the exact relevant, factual piece of information we were looking for.
Please don't use a butter knife to perform brain surgery
It's very likely to hallucinate an answer that is either factually incorrect, or incomplete.