• Excrubulent@slrpnk.net
    17 days ago

    First of all, I agree with your point that it is all hallucination.

    However, I think a brain in a vat could confirm information about the world with direct sensors like cameras and access to real-time data, as well as the ability to talk to people and determine things like who is trustworthy. In reality, we are brains in vats; we just happen to share a fairly common interface that makes consensus reality possible.

    The thing that really stops LLMs from being able to judge what is true and what is not is that they cannot make any judgments whatsoever. Judging what is true is a deeply contextual, meaning-rich question, and LLMs cannot understand context.

    I think the moment an AI can understand context is the moment it begins to gain true sentience, because the capacity to understand context is definitionally unbounded: context means searching beyond the information at hand for further information. I think this context barrier is fundamental, and we won’t get truth-judging machines until we get actually-thinking machines.