It's very likely to hallucinate an answer that is either factually incorrect or incomplete.

3 days ago

I'd say that, in general, you shouldn't rely on AI to get you factual information.
It's fascinating that it can still hallucinate outright nonexistent things.

3 days ago

Actually, I am surprised how often AI is able to provide factual information.
Remember, at the moment AI is 'just' a language model. Such a model simply predicts the next word based on the previous words while applying some statistics...
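The "next word from statistics" idea can be sketched with a toy bigram model (a deliberately tiny illustration; real LLMs use neural networks over subword tokens and long contexts, but the generation loop has this shape):

```python
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the dog sat on the rug".split()

# Count which word follows which; a crude stand-in for what an LLM
# learns at vastly larger scale. Wrap around so every word has a successor.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:] + corpus[:1]):
    follows[prev][nxt] += 1

def next_word(prev):
    # Sample the next word in proportion to how often it followed `prev`.
    counts = follows[prev]
    words = list(counts)
    return random.choices(words, weights=[counts[w] for w in words])[0]

# Generate word by word, exactly as the comment describes: each word is
# picked from statistics over what came before, with no notion of truth.
word = "the"
out = [word]
for _ in range(5):
    word = next_word(word)
    out.append(word)
print(" ".join(out))
```

The output is grammatical-looking but meaningless, which is the point: the model only knows which words tend to follow which.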

3 days ago*

The problem isn't even that AI can provide some good basic information; the problem is that it then gets trusted, and the AI continues to provide false information, but in an affirmative tone and with references to fictional sources, and that can become a problem. This is quite easy to check by asking professional questions in a field you understand.

3 days ago

No argument. I am well aware of the limitations of AI.

I predict that the internet as we know it will change dramatically in the future. Just think about it: Nowadays, when we use a search engine, we rarely click on the search results anymore because the AI already provides us with a summary of these sources within the search engine. This means that the authors of the original content don't get the clicks and earn less money. Ultimately, these sources will disappear and knowledge will be concentrated in the hands of monopolies.

What use is the internet if anyone can publish information that is never found?

3 days ago

My experience has been rather positive, to be honest. That's why I am surprised it doesn't have an algorithm or whatever in place to critically evaluate the information it provides by default. When I start pestering it with prompts and detailed steps, it suddenly starts thinking longer and gives me somewhat more correct information. I mean, of course it's an LLM and errors are expected. But, for example, I am reviewing a law and a standard - I literally have a publicly available law which states very specific things. GPT still outright makes a mistake and can't explain where it got the wrong information.

I mean, for productivity work, writing etc. it is really good. Just do quality and sanity checks and you are good to go. With a good prompt it even talks like a human, not some overhyped AI cheerleader, but at the same time it can sometimes be so abnormally obtuse that it surprises me.

Of course, you can really unlock its potential with a bit more work: AI agents, correct prompts etc. For example, Claude, when used in the terminal itself, sounds less dumb and does a pretty brilliant job with good prompts. Even the coding function on the website is relatively fine for small hobby projects. I ask it to create games for personal use and it does a pretty great job very quickly. Sure, the code is a mess, but it works....

And it still outright gives incorrect information :D That's why I am more surprised that it still hallucinates so easily.

3 days ago

A lot of the issues I've had with AI come down to it telling you something with 100% confidence that's just completely wrong. You can ask it "are you sure?" and it almost always doubles down; then you give it the actual, real information and it says "oh, you are right, I'm sorry". I don't trust AI to give me info I don't know about, unless it's something very broad and simple, because technically it lacks real knowledge of pretty much everything.

3 days ago

I don't trust AI to give me info I don't know about

Exactly. Making the whole thing a rather pointless waste of resources. It's like having a search engine that will almost always give you the wrong information and will waste energy to do it too. Not to mention most people just take everything they get from "AI" at face value and it's not like the internet needed one more "credible" source of misinformation.

2 days ago

My question is who did? 😅

3 days ago

And your real point is what exactly?

3 days ago

Don't trust strangers? :P

3 days ago

I don't trust you

3 days ago

With good reason!

2 days ago

So we shouldn’t trust the OPs opinion on AI.

2 days ago

I don't see how you're confused. The point is exactly what it says: don't use AI since it's known to provide false information with alarming frequency.

2 days ago

Where does it say I'm confused? 🫤
The person is stating such an obvious piece of information that my comment was sarcastic in nature.
There is no point in stating such things unless he's going to back up his statement with examples such as:
"I did a giveaway and the winner couldn't activate it, and now they're demanding a replacement key that will work"
"I won a giveaway, and now the gifter wants to ask for deletion because I couldn't activate it due to region locks, but I refused"
This person is either upset at something specific that happened to them, or honestly daft and doesn't realise AI isn't to be trusted.

2 days ago

This person is either upset at something specifically that happened to them

Well, yeah. How would you react if someone (or perhaps even multiple different people) made a critical error like this with you because they didn't know something obvious?

So what if OP didn't back up the post with an example? You were still smart enough to figure out what happened without one, so you understood why OP stated such things.

my comment was sarcastic in nature.

And you need to work on your sarcasm skills, because that didn't come across at all. It came across as smug, like you'd cornered someone in an argument with a gotcha, but without the gotcha. An actual sarcastic response to OP would be more along the lines of "Oh, so you're saying it's okay to use AI for everything else, right?", since that directly touches on the unstated implication in the post while making it clear how ridiculous it is.

2 days ago

Because it is ridiculous. You must be OP’s big brother.

2 days ago

None of the questions I asked you are answered by your reply. You must not have read my post.

2 days ago

Nothing you asked was worth answering.

2 days ago

Fun fact: people are using AI to identify edible mushrooms 🙄

3 days ago

All are edible, some are more than once

NEXT

3 days ago

lmao, this comment made my day. "some are more than once"

3 days ago

Correction: that should be "All are eatable, some are more than once." Edible implies safe for consumption.

"Eatable" is apparently pushing that line too, so maybe "are consumable" / "can be eaten".

3 days ago*

I think you're overthinking a joke... for maximum efficiency?

3 days ago

True, I was just thinking it's unlikely for AI to give you that kind of advice using "edible", since that's a negative it really won't want to respond with, but it might miss the distinction of "eatable" vs "edible" in its weights.

2 days ago

Deleted

This comment was deleted 3 days ago.

3 days ago

This goes straight under the Darwin Awards category though.

2 days ago

Please don't use "AI" (an LLM) for anything where you'd rely on its factual accuracy without double-checking.

3 days ago

And yet, believe it or not, judges in the UK had to be told not to use AI to write opinions...
I think we're already in hell. We just didn't notice the rapture.

2 days ago

If that information isn't clearly stated on the website that sells the key before you even buy it, you 100% shouldn't trust AI to give you an answer on it, because it can't find it either, so it clearly doesn't have the ability to answer. Do you expect it to buy the key from every retailer, look at the locks, add that to its data, and then give you an answer? What it will actually do is search left and right for something that looks kind of right - say, someone asking about a region lock, plus some matching words/letters from the game's title - until it reaches a percent match against your prompt that it considers satisfactory, and then give you an answer based on what it found. If the match is too low, it will admit it has no clue; if it thinks it found something, it will act like it's the truth.

If you really have to ask it something, ask it to provide the source for where it found the answer - but you still shouldn't ask it these kinds of prompts. AI isn't meant to know this kind of stuff. It's like asking AI about the next Humble Choice. I think someone already did this, by the way, and of course it was wrong; it was just taking output from somewhere random on the web.
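The threshold behaviour described above can be loosely illustrated with plain string similarity (purely an analogy - the function, snippets, and threshold here are made up for illustration; real LLMs don't literally compute a percent match):

```python
from difflib import SequenceMatcher

def similarity(a, b):
    # Ratio in [0, 1] of how closely two strings match.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

THRESHOLD = 0.5  # hypothetical cut-off for "good enough to answer"

def answer(query, snippets):
    # Pick the best-matching snippet; refuse only if nothing clears the bar.
    best = max(snippets, key=lambda s: similarity(query, s))
    if similarity(query, best) < THRESHOLD:
        return "I have no clue."
    return best  # presented confidently, whether it's right or not

snippets = [
    "Region lock info for Game X: activates only in EU",
    "Patch notes for Game Y version 1.2",
]
print(answer("region lock for Game X", snippets))
```

Anything above the bar gets stated as fact, which mirrors the complaint: "kind of right" and "right" look the same from the outside.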

3 days ago

The information on websites about locks is sometimes wrong too... For example, Fanatical lists a lot of countries where a game supposedly can't be activated, but you actually can. Hogwarts Legacy on SteamDB is an example.

3 days ago

Right, but there's no way AI would ever know that distinction. I also can't see this changing anytime soon. You'd need NotAI for this to work - some custom bot that specifically scrapes the info from all these third-party sellers first and then provides you a list of it, in which case you might as well just look at the packages list on SteamDB.

2 days ago

My AI friend said to not trust anyone with coffee in their name.

3 days ago

You know what I'd love more? If people stopped humanizing "AI" by insinuating that "AI" is capable of hallucinating. It's a large language model. You submit an input, it delivers an output. There's nothing AI about any of the shit our fellow humans have created so far.

3 days ago

OK then, what do you call an output that has nothing to do with factuality?
I'm just curious, because for me it is just a term, like slave, master, bug, boot, RAM, bus, cloud, etc.

3 days ago

It's not humanizing. It's literally a description of the AI trying to give a correct answer, which it's programmed to do and then failing while maintaining confidence in the wording of that answer.

3 days ago

Current generative AIs are not programmed to tell the truth. They are programmed to mimic data patterns – that is, to produce responses similar to what a human might give. The more they evolve, the more their answers seem credible and complex; but distinguishing between truth and lies is far beyond their capabilities. That requires an understanding of words, sentences, facts, and context – and they have none of that.

When AIs say something false and then try to maintain their lie, it is because that is what a human would do in such a situation (they are just imitating conversational behaviors they have picked up here and there, nothing more).
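A toy sketch of that point (the probability table here is invented for illustration): the standard next-token training objective only measures how well the model imitates its training text, and truth never appears in it.

```python
import math

# A "model" here is just a table of next-word probabilities.
# Training pushes these toward the frequencies in the data, true or not:
# the loss below never consults reality.
model = {"the moon is made of": {"rock": 0.7, "cheese": 0.3}}

def loss(context, next_word):
    # Standard next-token objective: negative log-likelihood.
    return -math.log(model[context][next_word])

# If the training text says "cheese", the model is penalised for
# predicting "rock" -- factual accuracy never enters the equation.
print(loss("the moon is made of", "cheese"))
print(loss("the moon is made of", "rock"))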

2 days ago*

The term may be confusing, but the input/output explanation for it is inaccurate.
LLMs have been known, and are still known, to make up facts, often to disguise the lack of a proper response.
Of course, LLMs then run with that, but it's not based on any human input.

2 days ago

You could've stopped at "Please don't use AI".

3 days ago

I think the real problem is relying solely on AI and not double-checking the results. They can sometimes be useful to help you narrow things down a bit (I like the LLM/natural-language aspect), but yeah, they eff up the facts quite a bit and on a large number of topics, so anyone who uses them should definitely put in the effort to verify the answers.

In my testing, I've also had cases where I tell them my exact OS and version and what I'm trying to do - and they still give me incorrect commands... So I can only imagine that with less specific info, the accuracy of the response is even worse.

3 days ago

The real question is: given the waste of resources that any query to a search-engine LLM represents, what is the point of asking them anything you're going to have to double-check on your own anyway?

2 days ago

The best use case I've found for them currently is summarizing basic concepts and terminology for a topic you're unfamiliar with... as long as it's a topic that's somewhat well documented online. If I were to do the same from scratch using just search engines, without knowing what to search for, it could take a good bit of trial and error before I started coming across the right terms to search on.

But when you start getting detailed, it messes all kinds of stuff up. I've had quite a few technical things it gets wrong - the most annoying is when you say "I'm on xyz OS, version abc, how do I do x" and it gives you a command that doesn't work there. That's happened to me on multiple occasions.

But mostly I was just forcing myself to use it / test it out so that I'd have a better understanding of what it's capable of, instead of being ignorant about it. Overall, I prefer old-school searches myself.

2 days ago*

There is also AI translation, which has become quite acceptable, although it still needs to be monitored to fix any shifts in meaning. And there are other simple and interesting use cases (especially since ChatGPT accepts images via copy/paste).

2 days ago

Yes, don't, it makes Al very very sad.

3 days ago

Are people getting that lazy now?

3 days ago

Not surprised, it's an AI after all.

2 days ago

A question just popped into my mind. I asked AI A to answer a research question for me. It gave me an answer that looked plausible. As a test, I asked AI B the same question. It asked me for some clarification; I gave it the added context, and it gave me the same answer plus some additional information that was not directly relevant to my question (think alternative solutions).

Can I safely conclude both AIs gave me the right answer or is there a good likelihood both were hallucinating?

2 days ago

So-called AI is not AI. You cannot trust anything it outputs; it's just an LLM meant to look like human communication.
Depending on what the training data was, the fancy statistics going on inside the LLM mean that, more often than not, some of the info in the output will be close to the "truth" that was fed into it (not necessarily "true", though). But if you want any actual truth, then no, you cannot trust it.

2 days ago

Treat the answers of these AIs like weather forecasts - they give you a possibility, not an absolute truth. Do not trust them. These AIs provide an answer that could be true, but the only way to be sure is to check for yourself. Ask for sources, for links leading to those sources, and go verify that what they said is true. Two times out of three you will see that they made up details or even a whole concept.

And get it out of your head that these AIs are intelligent and understand what you say or what they say, because that is not really the case (even if it looks like the opposite).

2 days ago

Unless you can verify the information or knowledge yourself, you can't conclude that the answer is correct. As a rule of thumb, never trust AI with stuff you don't know; if you do know the stuff, then it can help you with brainstorming, formatting, etc.

The only way to conclude that the answer is right is to research it yourself. Personally, I'd recommend Google Scholar: read just the results and conclusions, and then the rest if it's a hit. If it's not that kind of question, then Wikipedia, and for technical stuff, Reddit/YouTube.

2 days ago*

Most people use ChatGPT the wrong way. When someone asks a question, it is not designed to provide accurate information; it's designed to keep up a conversation the way a compliant texting partner would. It's essentially an overly enthusiastic yes-man, and it will make something up to keep you engaged if it lacks relevant information.

If you ever need factual information about anything, use Google or Google Scholar.

2 days ago

Incredible how, in the age of information, where almost every piece of information out there is a search and a few clicks away, we still bother running so many questions through a data sieve that (more often than not) farts out the wrong answer for almost every complex or obscure question.

In general, AI is destroying people's ability to do their own research, which ends up making them rely on it even more! IMO, just embrace the suck and spend two more minutes researching and finding information yourself. People who immediately type out a prompt to ChatGPT when they have a trivia question are doing their brain a disservice by defaulting to AI instead of finding that info themselves...

2 days ago

We must add that there has been a sharp decline in Google's quality as a search engine; it is regrettably nothing comparable to what it was 5/6 years ago. I'm a specialist in information search and in checking data against reliable sources, and I'm having a rougher time in my work each year. The amount of fake data, ads, and misinformation generated by bots is staggering, not to mention that the algorithms only take you where they want instead of to what one is looking for.

I remember there was a time when citing Wikipedia was considered badly sourced and even laughable; now people take and share info from Instagram posts and chatbots.

1 day ago

Cool thing to specialize in, props! Also, I've been feeling that myself too :(
Though, if LLMs work with basically the same search capabilities we're working with, isn't there a better chance for the common user to find a diamond in the rough than for ChatGPT? I feel ChatGPT and other LLMs just suck in all the slop that's dumped onto the first two pages of a Google search, and then you have to gamble fully on whether you found relevant and accurate information.
It's not intelligent enough to differentiate as well as humans can; it has to get lucky and find the exact relevant and factual piece of information we were looking for.

1 day ago

Please don't use a butter knife to perform brain surgery

2 days ago

You are an AI bot 🙂

Q: Can I use a butter knife to perform brain surgery?
ChatGPT: No — absolutely not. A butter knife is not even remotely suitable for brain surgery (or any kind of surgery)...

2 days ago
