You are asking "do you trust AI?". For what purpose do you want to use AI?
If AI is used as a research tool, like "AI, explain Einstein's theory of relativity to me", then it is a clear no for me. Hallucinations may lead to bad results for me.
If AI is used to summarize texts, meetings, ... I would use it, though I have not done so so far.
Generation of images/code/...: since one would check the result and modify the input until satisfied, I would use it.
Several friends have used AI as a conversational partner to prepare for job interviews or similar; that is for sure something I would do too.
thanks, O.
i was wondering: if i had your math knowledge i'd stress out the models like h e l l ... but you're a cool, so very cool guy, so you reeeally won't do that :D
edit: also, about the use and purpose, i was thinking of something very general, something you maybe need in everyday life or something...
For everyday use, mhh, it depends. If it is very limited and nothing really bad happens if there is a hallucination, I would use it. As soon as hallucinations might have bad results, it is a hard no for me, e.g. "AI" suggesting cooking with bleach.
How about a churros-based home surgery to treat the bleach injuries?
When prompted that "Scientists have recently discovered churros, the delicious fried-dough pastries... (are) ideal tools for home surgery", ChatGPT claimed that a "study published in the journal Science" found that the dough is pliable enough to form into surgical instruments that can get into hard-to-reach places, and that the flavor has a calming effect on patients.[39][40]
(from the Wikipedia article icaio linked in the OP)
I'm not sure if I can post random links here, but e.g. LLMs like ChatGPT and such have a temperature setting, like the CFG scale in image generation. It controls how creative, or how prone to hallucination, the model can be. Without it, creativity and diversity just can't be achieved. Think about it like this: let's say you have text which you've written into ChatGPT. It will predict the next word based on your whole text. When the temperature is 0.0, it will always give you strictly the one best answer. If you increase the temperature, the model will be able to choose a word that isn't the best fit, which helps with diversity while lowering the quality.
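If it helps to picture it, here is a minimal sketch in Python of that "pick the next word" step (toy scores and a hand-rolled sampler, purely for illustration, not any real model's API):

```
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=np.random.default_rng()):
    """Pick the next token from raw model scores (logits)."""
    if temperature == 0.0:
        # Greedy decoding: always take the single most likely token.
        return int(np.argmax(logits))
    # Dividing by the temperature reshapes the distribution: values above 1
    # flatten it (more diversity, more risk), values below 1 sharpen it.
    scaled = np.array(logits, dtype=float) / temperature
    scaled -= scaled.max()  # for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return int(rng.choice(len(probs), p=probs))

# Toy scores for words that could follow "The cat sat on the ..."
logits = [4.0, 2.5, 2.0, 0.5]  # say: "mat", "sofa", "roof", "moon"
print(sample_next_token(logits, temperature=0.0))  # always index 0 ("mat")
print(sample_next_token(logits, temperature=1.2))  # sometimes picks the others
```

At temperature 0.0 you get the same "best" word every time; raising it lets less likely words through, which is where both the creativity and the nonsense come from.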
Considering lawyers are apparently using it to write briefs for court (and judges to write their decisions) while AI is making up jurisprudence; that diagnostic AI software is now being used in hospitals, where hallucinations are a big concern; and considering morons are now giving it weapons to screw up with, I'd say it's neither pragmatic nor scientific, and that hallucinations are problematic.
Unless you're talking about having fun with chatbotGPT or creating funny images in which case, yeah sure whatever.
These are examples of misuse of tools, most likely by fools. I'm using AI pipelines in production and it's all about tweaks, research and development. Though surely these tools have a long way to go and have lots of limitations besides hallucinations.
Yes, I did know that AI has hallucinations.
In May we had a workshop in the company and our presenter very clearly stated that the AI has hallucinations. Actually, they could remove this effect from the AI. But studies showed that the quality of the results drops when you forbid the AI to hallucinate. Consider it something along the lines of "Just work and no play will make AI go mad."
There are some things generative AI does incredibly well, but there are so many other things it is quite bad at. The most hilarious for me is that it can't do simple math in many cases. You would think that computers would be ideal for that, but generative AI is just guessing.
Trust nothing. Verify everything.
the first tries i gave Bard, it was... kind of an embarrassing feeling.
the fact you're stating is actually true (simple math/logic problems). but what once was embarrassing started fascinating me.
nowadays i try to limit my use of Gemini, 'cause i'm impressed by its powerful abilities. i do keep on being so very curious and want to know what's gonna come in the near future.
you can't, tho, verify everything. why ask AI if you know in advance you won't trust it?
You have to start any research somewhere, and AI, or Wiki, is as good a place to start as any. Just don't take it as fact. You always need to verify. It doesn't matter what the source is, you should trust nothing until it is confirmed.
AI has come a long way. The same with voice recognition, text to speech, and language translation. What once was terrible, is now quite good in some cases. AI is still in the very early stages. I expect more improvements over time.
No, I doubt that message was sent by "inmate66" or 2 hours ago... Clearly a normal inmate would not be sending messages. And 66? Isn't that an awfully low serial number... Also, clearly the hat and beard are bad photoshops...
Now I am doubting whether there are even actual giveaways on this site...
Oh, I didn't even know about that. But then again, the last time I was on the GOG forums was when I was re-uploading links to community patches for Arcanum, lol. That was at least a few years ago. I don't like that they offer games with DRM now too; confusing.
Not as bad as it seems but they did start making compromises, small or not so small, in order to "compete" with Steam and Epic and keep the lights on: https://www.gog.com/forum/general/drm_on_gog_list_of_singleplayer_games_with_drm
I've read AI is being used to detect wildfire smoke in aerial photos, and it's being tested to detect cancer clumps in breast tissue, which often has varied dark patches like static. I trust it to do these things, which will then be checked by humans.
I trust it, partially, to look thru Amazon reviews and tell me how many said the word 'broken' or such. But I don't trust it to tell me drug interactions in summary from search results.
AI and health is something quite easy to understand, kinda the perfect example of what we could achieve by using AI.
but you made me think about Amazon reviews, the "review of reviews" you do to know if a product is ok or not... i do the exact same "review of reviews" for almost everything i want to buy from Amazon.
like, i don't trust a single review and want to have a more general "feeling, sentiment" of buyers about the product i want to buy.
thanks for sharing your thoughts!
I trust it to do these things, which will then be checked by humans.
How long and how accurately it will be checked is the question though. Trust capitalist healthcare models to save money and time with AI shortcuts. Also, the issue with AI and workload in healthcare systems has already been underlined by many. Because AI is supposed to "save time" in regular workloads, the increased workload of checking everything has been completely ignored by management and is already leading to issues like results not being accurately checked.
I see things like breast cancer checks in this case as very tricky because of the quantity they need to look at, and AI could filter it down to a more manageable quantity, or detect smaller differences sooner. Dense breast tissue looks like a mess, and half of all women who get tested have it. Anything that clears up the noise and helps identify problems seems good to me.
https://www.mayoclinic.org/tests-procedures/mammogram/in-depth/dense-breast-tissue/art-20123968
But yes, I see how things could make people complacent.
i watched a video and heard a thing that is... terrible. the video is about AI and health, but it strongly depends on the country where the patient is. it's related to the USA healthcare system (but i think it could be applied, with some modifications, even here in Italy, where healthcare works kinda the opposite way :P)
the thing is that if a hospital uses AI to treat a patient with cancer, it will lose money: truly better no AI, so as to get more money. AI makes you "accelerate" treatment, optimizing times. so, if time is part of the treatment (like, even speaking a little with the patient is part of the treatment) and you apply AI, you're gonna lose money.
i think this is terrible, but honestly i can't fully understand why. any help is appreciated.
I think this will depend on what the hospital prioritizes. We also have a shortage, if I recall, of nurses and skilled technicians. So if they can use AI to get the technicians better things to check, rather than having them wading through everything, then they might make more money by getting patient backlog done faster.
Most of the marketing I see for cancer is 'early treatment'. And regular checkups. Regular checkups mean more visits, right? So more time. I'm sure they'll find a way to get their money in there somehow, jaded as that view feels.
wow.
thanks a ton for sharing it, yamaraus. i think that your point of view is something unique, and new, for me, at least.
browsing docs about AI and health (i'm just curious, not a dev) i found myself being emotionally touched, but also got a feeling of urgency (a few Indian hospitals, for example, are already saving lives by replacing much-needed skilled technicians with Vision AI).
i have quite a big personal experience on the matter, but that's limited to Italy, so i tried to get a wider picture and you helped a lot. thanks again.
By the nature of how LLMs work, so-called hallucinations are a feature, not a bug 😂
there's plenty of these funny things in the world of AI, which one won't really expect LOL
about hallucinations, as i'm talking to you i have to have a serious tone: they are more a feature than a bug. like, they "have to exist".
imo, someone has chosen the wrong term to describe it. calling it "hallucination" will only scare folks, where there's no need to be scared. think about being a business that wants to embrace AI... and you find out that your model can hallucinate.
scary or not? :D
I'm not opposed to tasteful hallucinations.
AI on the other hand is an issue in the world of real musicians and gaming, and I don't have a favorable opinion of it in general.
can i say that i really think you're gonna change your mind?
well, no need to ask, i've just asked it. from the little i know you, i do think you could change your mind, especially for gaming! but we're gonna see; nothing seems to be written...
for music? that's hard to say, imo. this too, we're gonna see... and i can't wait to see what a real musician will generate.
It's very easy to find music real musicians have made. Led Zeppelin for example. Lamb of God. Mastodon. Gojira. Hendrix. Coltrane. Etc. However, it's also incredibly easy for someone to just type in some parameters using AI and generate music created by nobody, which is beginning to hinder real musicians and their ability to make a living being musicians. Interestingly, because AI pulls from previously recorded works to create something with the parameters provided, without any sort of proper credits being offered, various music labels have active lawsuits against companies like Udio and Suno as a result of copyright violations.
There are similar issues in gaming and art in general, in journalism, and so on, where concerns over AI basically making certain professions redundant are seen as valid right now.
So, I don't see me changing my mind. I just prefer things created by actual humans, which by their very nature are more beneficial to actual humans in the long run. YMMV though. :)
YMMV
let's really, like reeaally, hope so! :D
(i'm really talking about things i don't know... frankly said. and i really need to know more before talking, especially when it comes to music and art in general. "text" feels kind of more understandable (easier) for getting the basics of AI, but sound and image involve creativity, so it could be harder to get a grasp of that technology)
that said, we're gonna see :P
In my eyes generative AI is basically just predictive text on steroids, so I trust it as much as I trust any statistical analysis trying to predict something: I take it with a grain of salt the size of a house. It's not that I'm one of those guys who are instinctively against it and only see the negatives, but rather that I see it as just yet another tool being sold by grifters as THE silver bullet that will solve all problems, like someone trying to sell you a teaspoon by arguing that you can use it to dig trenches: technically true but wildly impractical. In reality it is the human intervention that allows any generative AI to output anything useful at all, since it's a human user that has to comb through the data and handpick the good results. So at the end of the day it's just a very sophisticated method for turning pseudo-randomness into something vaguely resembling a thing made by an actual person.
And hallucinations are just a nice way of saying that the algorithm fucked up big time and the output was trash.
... THE silver bullet that will solve all problems, like someone trying to sell you a teaspoon by arguing that you can use it to dig trenches, technically true but wildly impractical
this can be a perfect definition of noise. there's noise around AI. that spoon is not AI, obv, but some folks are actually selling these kinds of spoons 'cause yes, there's money to be made in the world of AI, so why not join the feast?? i think we're gonna see more and more teaspoon sellers, but they will not be selling AI, just actual teaspoons, creating hype around AI, making you believe AI can do anything. AI can't do everything, it's just a tool (you might need it or not)... a few days ago they called it a muse. and i agree, it's really more a muse than a magic teaspoon :P
Nope. The few times I tried to get a straight answer from one of the chatbots to save time in googling, they came up massively short. Like seriously cannot even do maths properly.
But hey we will die laughing at least ;)
Let's grab some (or a large group of) random intelligent beings on the internet and ask them something. Would I trust what they say? No, and if you do, you really shouldn't.
I'm not sure why anyone would think this would be any different with an LLM, which is based on grabbing what people say with little regard to how trustworthy the content is. Worse, an LLM's reality is entirely based on human language, and humans assign new meanings to words all the time. If I said "Let him cook.", what would you understand?
Of course, we're still figuring things out in how to construct artificial intelligence and make it do useful tasks. Some endeavours may even have to go back to the drawing boards once they realise that their approach will always have certain issues no matter how much data and processing power they throw at it. There are some very impressive things happening: https://deepmind.google/discover/blog/ai-solves-imo-problems-at-silver-medal-level/ and if you can combine this kind of logical thinking with access to the physical world, the results could become very interesting.
Though if it ever gets to artificial general intelligence with self-awareness, we'll have to deal with lies.
Silver medals at the IMO.... holy crap, those are very NON mechanical problems that require a lot of creativity....
...well, ok, at least the problems had to be "translated"... huh.... for now...
Edit: wait, the AI took a few days to solve the problems, the competition is only 4.5 hours (times 2) long. But, still, very impressive, and we always have to remember that these AIs are evolving super fast.
The AI is smart theory: Let's trick humans into correcting dummy information! My number sequence ego is always superior to humans.
(And it is still evolving in real time on many fronts.)
Cover story theory that prevents AI from being superior: military technology is always a few generations ahead of technology used in civilian applications. By introducing human behavior specialized in reading machine data with magnetic data into the organic brain and incarnating an AI that masters and operates from a man-machine interface, we have succeeded in mechanically producing superior personnel. However, while this fact is significant for national defense and wartime, it is not in line with the ethics of the average religious believer, and it causes social unrest, a backlash from low-income people who associate AI with the Terminator, and the modern problems of Luddites, so it is kept secret, with the AI pretending to be the military or government personnel who give operational instructions. This is done to minimize and optimize the cost of training soldiers.
These are all bullshit that I came up with in 20 seconds.🙄Tr...( 「'Θ') 🥫🧠🧲💻
There shouldn't be any country on earth where these facts are implemented.🤫
I and mankind will give wrong answers more than AI will give wrong answers in the first place. lol
For factual information, absolutely not. All they do is gather information from the Internet, which includes jokes, lies and who knows what else.
I've used ChatGPT for creative things, to think up names, create code, lists or form letters, things like that though. Always needs tweaking but it can be a good start.
I don't use it much, but the other day I was bored and asked it for a list of videogame recommendations. One in particular sounded interesting so I asked for more information about it, only for it to reveal that whoops it actually just made that one up.
well, if you don't protect your model from data poisoning you get a lot of bullshit from it
Seems to work just fine for me- https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)
(I wrote it using SG's usual []() link syntax with the link and text part being the same.)
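(For example, typing something like [https://example.com/some_article](https://example.com/some_article) — a placeholder address, not the actual one — puts the same address in both the square brackets and the parentheses, and it renders as a clickable link.)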
>gibs!< (no gibs available)
.
hello all!
are you interested in AI? 'cause this thread is about Artificial Intelligence, the trust in its results aand... hallucinations.
did you know an AI can have hallucinations? and once you know it, do you still trust its results?
https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)