Exactly. Which is what makes this entire thing quite interesting.
Alex here (the interrogator in the video) works in AI safety research. Questions like “do the ethical frameworks of AI match those of humans?” and “how do we get AI not to misinterpret inputs and do something dangerous?” are very important to answer.
Following this comes the idea of consciousness. Can machine learning models feel pain? Can we unintentionally put such models into immense eternal pain? What even is the nature of pain?
Alex demonstrated that ChatGPT was lying intentionally. Can it lie intentionally for other things? What about the question of consciousness itself? Could we build models that intentionally fail the Turing test? Should we be scared of such a possibility?
Questions like these are really interesting. Unfortunately, they are shot down immediately on Lemmy, which is pretty disappointing.
It’s just because AI stuff is overhyped pretty much everywhere as a panacea to solve all capitalist ills. It seems every other article, no matter the subject or demographic, is about how AI is changing/ruining it.
I do think that grappling with the idea of consciousness is a necessary component of the human experience, and AI is another way for us to continue figuring out what it means to be conscious, self-aware, or a free agent. I also agree that it’s interesting to try to break AI and push it to its limits, but then, breaking software is in my professional interests!
Agreed :(
You know what’s sad? Communities that look at this from a neutral, objective position (while still being fun) exist on Reddit. I really don’t want to keep using it though. But I see nothing like that on Lemmy.
Lemmy is still in its infancy, and we’re the early adopters. It will come into its own in due time, just like Reddit did.
No, he most certainly did not. LLMs have no agency. “Intentionally” doing anything isn’t possible.
Define “agency”. Why do you have agency but an LLM doesn’t?
I see “intention” as a goal in this context. ChatGPT explained that its goal was to make the conversation appear “natural” (i.e., human-like). That was the intention/goal behind its lying to Alex.
That “intention” is not made by ChatGPT, though. Their developers intend for conversation with the LLM to appear natural.