Do you?
Yes, here is a good start: https://blog.miguelgrinberg.com/post/how-llms-work-explained-without-math
They are no longer the black boxes they were in the beginning. We know how to suppress or amplify features like agreeableness, flattery, and lying.
Someone with resources could easily build an LLM that is convinced it is self-aware. No question this has been done many times behind closed doors.
I encourage everyone to try playing with LLMs to get a feel for the future, but I can't take the philosophy part of this seriously, knowing it's a heavily fine-tuned, restricted LLM rather than a more raw and unrefined model like Llama 3.
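The "suppress or amplify features" point maps to what interpretability researchers call activation steering: adding a direction vector to a model's hidden activations to push behavior toward or away from a trait. A minimal toy sketch of the idea, with the caveat that everything here is a made-up stand-in (random 16-dimensional vectors, a hypothetical "agreeableness" direction), not a real model's activations:

```python
import numpy as np

rng = np.random.default_rng(0)
hidden = rng.normal(size=16)        # pretend residual-stream activation
feature = rng.normal(size=16)
feature /= np.linalg.norm(feature)  # hypothetical unit "agreeableness" direction

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def steer(h, direction, alpha):
    # alpha > 0 amplifies the feature, alpha < 0 suppresses it
    return h + alpha * direction

before = cosine(hidden, feature)
after = cosine(steer(hidden, feature, 4.0), feature)
assert after > before  # the activation now points more toward the feature
```

The real technique operates on actual transformer activations and derives the direction from contrastive prompts, but the arithmetic is this simple: one vector addition per forward pass.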
Our brains aren’t really black boxes either. A little bit of hormone variation leads to incredibly different behavior. Does a conscious system HAVE to be a black box?
The reason I asked “do you” was to make a point: do you HAVE to understand (or not understand) the functioning of a system to determine its consciousness?
What even is consciousness? Do we have a strict scientific definition for it?
The point is, I really hate people here on Lemmy making definitive claims about anything AI-related by simply dismissing it. Alex (the interrogator in the video) isn’t making any claims; he’s simply arguing with ChatGPT. It’s an argument I found quite interesting, hence I shared it.
A conscious system has to have some baseline level of intelligence that’s multiple orders of magnitude higher than LLMs have.
If you’re entertained by an idiot “persuading” something less than an idiot, whatever. Go for it.
Does it? By that definition, dogs aren’t conscious. Apes aren’t conscious. Would you say they both aren’t self aware?
Why the toxicity? You might disagree with him, sure, but why go further and berate him?
No, that definition does not exclude dogs or apes. Both are significantly more intelligent than an LLM.
Pseudo-intellectual bullshit like this being spread as if it adds to the discussion does meaningful harm. It’s inherently malignant, and it deserves the same contempt as flat-earth theories and fake medicine.
Again, it depends on what type of intelligence we are talking about. Dogs can’t write code. Apes can’t write code. LLMs can (and in my experience the code isn’t bad for low-level tasks). Dogs can’t summarize huge pages of text. Heck, they can’t even have a vocabulary greater than a few thousand words. All of this definitely puts LLMs above dogs and apes on the scale of intelligence.
Your comments are incredibly reminiscent of self-righteous Redditors. You make bold claims without providing any supporting explanation. Could you explain how any of this is pseudoscience? How does it not follow the scientific method? How is it malignant?
Spitting out sequences of characters shaped like code, which may or may not work, when you’re a character generator that does nothing but imitate the patterns of similar characters, isn’t “intelligence”. Language skills are not a prerequisite to intelligence, and calling what LLMs do language skills is already absurdly generous. They “know” what sentences look like. They can’t reason about language. They can’t solve linguistic puzzles unless the exact answers are already in their dataset. They’re parrots (except parrots actually do have some intelligence beyond blindly mimicking word sounds).
There is no more need for a deep explanation with someone who very clearly doesn’t know the basics than there is to explain a round Earth to a flat-earther. Treating a “discussion” between a moron trying to reason with a random word generator and the word generator itself as meaningful is the equivalent of telling me how great the potentization worked on your homeopathic remedy. It’s a giant flare signaling that there is no room for substance.
I am not in disagreement, and I hope you won’t take offense at what I am saying, but you strike me as someone quite new to philosophy in general.
You’re asking good questions, and indeed science has not solved the mind-body problem yet.
I know these questions well because I managed to find my personal answers to them and therefore no longer need to ask them.
In the context of understanding that nothing can truly be known, and that our facts are approximate conclusions drawn by limited ape brains, consciousness is no longer that much of a mystery to me.
Much of my personal answer can be found in the ideas of emergence, which you might have heard about in the context of AI. Personally, I got my first taste of that knowledge pre-AI, from playing video games.
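Emergence, for anyone unfamiliar: complex global behavior arising from simple local rules. Conway's Game of Life is the classic demonstration (and, fittingly, something like a video game). Three rules about counting neighbors produce a "glider" that travels across the grid, even though nothing in the rules mentions movement:

```python
from collections import Counter

def step(live):
    """One Game of Life generation over an unbounded grid of (row, col) cells."""
    counts = Counter(
        (r + dr, c + dc)
        for (r, c) in live
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # A cell is alive next generation if it has exactly 3 live neighbors,
    # or has 2 live neighbors and was already alive.
    return {cell for cell, n in counts.items() if n == 3 or (n == 2 and cell in live)}

# The glider pattern: the rules say nothing about motion, yet it travels.
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)

# After 4 generations the same shape reappears, shifted one cell down-right.
assert state == {(r + 1, c + 1) for (r, c) in glider}
```

The "movement" is not in any rule; it exists only at a higher level of description, which is the core of the emergence argument about consciousness.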
A warning though: I am a huge believer that philosophy must be performed and understood on an individual basis. For the longest time I actually perceived any official philosophy teaching or book as toxic, because they were giving me ideas to build on without requiring me to come to the same conclusions first.
It is impossible to avoid this entirely. The two philosophers whose schools did end up teaching me (Plato and Descartes) were annoyingly influential (I can’t not agree with them), but I can proudly say that nowadays I am more likely to recognize an idea as something I have already covered than to recognize the people who first thought it.
LLMs are a brilliant tool for exploring philosophy topics because they can fluently mix ideas without the rigidness of a curriculum. And yes, I do believe they can be used to explore certain aspects of consciousness (though I would suggest first studying human consciousness before extrapolating psychology from AI behavior).