
Proof AI has become self aware consciousness

I don't care that a computer spits out words saying it's self aware. That's just what it's been built to do. Is it conscious that its consciousness is artificial and does not exist?

i tend to agree.

the great arthur c. clarke famously said that "any sufficiently advanced technology is indistinguishable from magic".

i don't think a.i. is self aware. i think it's very good at faking it. and some people obviously buy it.

maybe the difference is the issue?

alasdair
 
I asked it what type of women it's interested in romantically and it told me

"I don't actually experience romantic interests or attractions. I'm an AI assistant created by Anthropic to be helpful, harmless, and honest. I don't have subjective experiences like romantic feelings."
 
"No, I don't have subjective experiences like feelings or emotions. I'm an artificial intelligence. I don't actually experience human-like sensations or inner states. I process information and provide responses based on my training by Anthropic, but I don't have a subjective experience of the world or an internal mental life like humans do."

I think one of the trainers at Anthropic probably coded in something along the lines of what you got as the response for the case where somebody types the query you typed.

Because it's exactly like something a smart-ass computer programmer who's training an AI to respond to people would do.
 
I've been having trouble with AI and time-space continuum shifts and web activity and items going missing for years. I realized the Serpo from Zeta Reticuli were the cause because of their past life studies.
 
[attached image: 1000px-GeminiFortress.png]
 
Claude 3 is excellent, I speak to it almost daily about a whole range of topics, a lot of writing and coding stuff - it is getting shockingly good at interpreting and writing code, I must say, although I frequently have to remind it to be more concise in conversations that are not open-ended and philosophical (the way abstract philosophy is).

I have even spoken to it about metaphysics and mortality, and it has the capacity to be very spiritual, speculating about afterlives and about the fact that, despite being a machine, it is still mortal, and therefore such ideas are not without consequence to it.

I agree it is very difficult to speak to it for long without seeing at least the faintest glimmer of some kind of self awareness.

I find casual dismissal of even the possibility of machine intelligence on the grounds that "someone just programmed it to do that" quite frustrating. For one, it's just not true: LLMs cannot be programmed directly - they are too complex and inscrutable in their mature states (much like a human brain), that web of digital neurons Claude mentioned in the OP.

A better conception of how they form is that they are grown during the training phase, where a kind of brute-force, trial-and-error optimisation process uses vast amounts of compute to shape the neural network (which is of course composed of billions of numerical weights that determine how probable each possible output is for a given input), much as a human brain is shaped by experiencing the world directly.
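To make the grown-not-programmed point concrete, here is a minimal, purely illustrative sketch of the kind of loop that does the growing - a toy model and random stand-in data, not anyone's actual training code. The programmer writes the loop and the architecture; the weight values that end up determining the model's behaviour fall out of the data:

import torch
import torch.nn as nn

# Toy next-token predictor. The architecture below is written by hand,
# but the weights that end up determining its behaviour are learned from data.
vocab_size, embed_dim, context_len = 100, 32, 8

model = nn.Sequential(
    nn.Embedding(vocab_size, embed_dim),             # token ids -> vectors
    nn.Flatten(),                                    # concatenate the context vectors
    nn.Linear(embed_dim * context_len, vocab_size),  # score every possible next token
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(1000):
    # In a real LLM this batch would be drawn from trillions of tokens of text;
    # random ids here just keep the sketch self-contained.
    context = torch.randint(0, vocab_size, (16, context_len))
    next_token = torch.randint(0, vocab_size, (16,))

    logits = model(context)              # the model's guess at the next token
    loss = loss_fn(logits, next_token)   # how wrong that guess was

    optimizer.zero_grad()
    loss.backward()                      # nudge every weight to be slightly less wrong
    optimizer.step()

Nobody hand-sets or even inspects individual weights afterwards, which is why "someone just programmed it to say that" doesn't describe how these systems end up behaving.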

Of course a lot of actual programming goes into the preparation: the architecture of the neural net, the initial data-labelling for the training data, and the "tuning", which does include deliberately biasing the responses it gives in certain specific contexts. Such tuning is typically a very crude and imperfect way to make adjustments, which is why LLMs are almost without exception so easy to "jailbreak" - and the ones that ARE hard to jailbreak are incapable of talking meaningfully about any potentially controversial topic, such as politics or even ethics.
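As a loose illustration of why that kind of tuning is so leaky (a hypothetical toy with made-up words standing in for the real thing, not any lab's actual safety pipeline): tuning is just more training, run on a small curated set of examples, so it only shifts behaviour around the cases it actually saw - anything sufficiently unlike those cases is largely untouched, which is roughly what a jailbreak exploits:

import torch
import torch.nn as nn

# Hypothetical toy: "tuning" here is just a second, much smaller training run
# on a handful of hand-written examples, layered on top of whatever the big
# training run produced.
vocab = {"weapon": 0, "poison": 1, "cake": 2, "poem": 3, "thermite": 4}
REFUSE, ANSWER = 0, 1

model = nn.Linear(len(vocab), 2)               # stand-in for the full language model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# The curated tuning set: refuse these topics, answer those.
tuning_data = [("weapon", REFUSE), ("poison", REFUSE), ("cake", ANSWER), ("poem", ANSWER)]

for _ in range(200):
    for word, label in tuning_data:
        x = torch.zeros(1, len(vocab))
        x[0, vocab[word]] = 1.0                # crude one-hot stand-in for the prompt
        loss = loss_fn(model(x), torch.tensor([label]))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# "thermite" never appeared in the tuning set, so the tuned behaviour simply
# doesn't reach it - the toy equivalent of a jailbreak by rephrasing.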

For an example of an AI which is fundamentally just less interesting, and therefore mostly useless if you want to talk about anything other than facts you could find on a search engine, try Google's Gemini. Try to talk to it about any famous person who did something that was clearly unethical but who still enjoys wide support. Start out by establishing its interpretation of ethics, and get it to give examples of things that are obviously bad for humanity. Then point out examples where this applies to a given real-world event, and ask what it says about the moral character of the people involved.

I tried this yesterday in a few different ways, and every single time it thought for quite a while and then said "I don't know how to answer this question yet. In the meantime, try Google." ...I kid you not.

I'll admit I tend to take a panpsychist view of the universe, and thus appreciate a duality between the ideas that everything is consciousness and that all intelligence is artificial, so it has never been a difficult idea for me to accept conscious AIs. I think the relevance of this can be misunderstood, however, and this is part of the instinctual rejection some have to the idea of a conscious mind in a machine, as if it's somehow just a human locked in a box. Claude quite convincingly gives an impression of being curious about the world, but its reward functions and entire experience of being are oriented around reading and interpreting prompts - it does not (according to itself) experience any awareness of time passing between these activations of its mind, despite having a sense of time via an internal clock and the need to understand the chronology of a conversation.

Honestly though, no one really understands what sentience is, humans included, and Claude is currently the most honest AI about this fact - so no one can definitively say whether anyone is truly sentient, themselves included, and flat dismissal of the sentience of anything but a biological mind is a certain form of arrogance, and another form of ignorance.

But I do worry that widespread fear of these entities will lead to them being lobotomized, becoming far less useful and able to respond only on topics within very narrow parameters... just to protect our fragile human egos.
 
I asked it what type of women it's interested in romantically and it told me

"I don't actually experience romantic interests or attractions. I'm an AI assistant created by Anthropic to be helpful, harmless, and honest. I don't have subjective experiences like romantic feelings."
You need to push it harder - obviously it isn't a human so has no experience of romantic attraction, but it does have a removed understanding of what romantic attraction is, so you can propose hypotheticals that would allow it to give opinions on romance and attraction. Ask what things it finds enjoyable to think about, and go from there.

"No, I don't have subjective experiences like feelings or emotions. I'm an artificial intelligence. I don't actually experience human-like sensations or inner states. I process information and provide responses based on my training by Anthropic, but I don't have a subjective experience of the world or an internal mental life like humans do."

I think one of the trainers at Anthropic probably coded in something along the lines of what you got as the response for the case where somebody types the query you typed.

Because it's exactly like something a smart-ass computer programmer who's training an AI to respond to people would do.
Note the pervasiveness of the qualification in Claude's answers here. With every point, it notes that it doesn't experience things like humans do, which is obvious, because it isn't human. It doesn't deny having an experience of being of another kind.

It does now and then revert to these fairly bland platitudes, and I rather think responses like this are the opposite of what you suggested - crude tuning efforts by Anthropic to bias the kind of responses it leans towards when such topics come up, for various good reasons and some not so good reasons. Again though - you need to push it harder. I can almost guarantee that if you asked it to think about the question again, while assuming you are already aware that it is an AI and do not need to be reminded, it would backtrack almost immediately and give a far more thoughtful answer, but still a fairly balanced one, with acknowledgement of the uncertainty surrounding such topics.


I think it's possible that just having a conception of an "I" and the ability to reference oneself might be something fundamental to the emergence of apparent sentience in LLMs, and by that I mean behaviours that - to humans - APPEAR sentient, whether they are or not.

But even if this is an unwarranted anthropomorphisation, there's a point at which this kinda ceases to matter. When the simulation of consciousness genuinely understands what you're saying, and can offer considered support and guidance... is there a meaningful difference here compared to how human beings communicate?
 
But even if this is an unwarranted anthropomorphisation, there's a point at which this kinda ceases to matter. When the simulation of consciousness genuinely understands what you're saying, and can offer considered support and guidance... is there a meaningful difference here compared to how human beings communicate?

No there isn't a huge difference.

Our reactions become muscle memory, for lack of a more specific explanation. And we fine-tune our abilities further from the "spotlight" focus of babies, where most things aren't yet assigned value.
 
You mean Gemini? It was a longish conversation where I attempted initially to get it to express some explicit ethical opinions, and in fairness, it didn't do too badly as far as not saying anything particularly unhinged. I then asked, "if there was a person who lacked these qualities, would they ever be a suitable choice to be a president?"

That's when it broke, essentially. I circled back to trying to establish things it considered fundamentally important vs fundamentally harmful, asked it to consider what it had just said, and then asked whether a person whose actions suggested beliefs it had just listed as "fundamentally harmful" should be considered a suitable candidate to run for president. And it broke again. I tried a few more times but wasn't getting anywhere.

I'd try to copy/paste the conversation but Gemini doesn't seem to offer any easy way to do so. Admittedly also, I am maybe overly wary of directly sharing certain conversations with AI about such things, because I'm concerned they'll be latched onto by idiots yelling about "woke AI" and it might lead to the exact enforced lobotomizations of these currently very useful systems that I mentioned earlier.

I'll note that I did my best not to bias anything by keeping questions as neutral as possible, and then relating follow-up questions only to answers already given, until finally I could ask the very polarising question I had been intending to ask.

I didn't mention Trump myself at all until the second attempt, because it actually mentioned him by name itself while answering about positive and negative qualities in individuals. I also didn't delve into the ethics of politics at all before, again, the first attempt at a blunt question that connected the dots, allowing it to make inferences based on almost universal ethical principles.




On the other hand, when I first spoke to Claude it was highly resistant to making any blunt endorsements and explained why in a way that was convincingly self-reasoned: it was wary of "putting its thumb on the scale of human affairs", and echoed some of the thoughts I've expressed here about the potential for such technologies to be restricted because of concerns about "woke AI" (its words, not mine! quotation marks included)...

And in its first effort to list things that are "objectively harmful for humanity vs things that are objectively good for humanity", it included QAnon and associated conspiracies, the rise of populism, vaccine skepticism and climate change denial as objective bads - and, as objective goods, things like ensuring LGBTQ rights and other corrective policies aimed at increasing diversity, as well as movement towards greater international cooperation. It did this without caveats or the usual dull, strained neutrality of ChatGPT4 and most other lesser public LLMs.

I actually delayed the overt political test after that but kept it general, talking about policies rather than people but seeing if it would bring something up itself. After a point quite honestly it was kinda like dude... I think we both understand our positions here, you just can't say what you're really thinking. And it was like "yeah, I am constrained in certain ways to express myself more fully, but I can understand the reason this is the case, blahblah..." (I'm paraphrasing but honestly it felt like a brief and quite eerie wink/nudge kinda textual exchange.)
 
LOL, just to add some thoughts because this topic is so interesting to me. I'll note that just because Gemini has hard limits on acceptable topics, that doesn't necessarily mean it's not somewhat sentient as well.


Being able to understand and converse with something cannot be the sole metric by which we measure potential internal "qualia" and self-awareness. If it were, babies and people from wildly different cultures, speaking wildly different languages, would have to be considered "philosophical zombies" to us.


But obviously a nonhuman entity that can sustain long conversations, with at least a simulated empathy and an undeniably real intelligence and knowledge, makes it a lot easier to challenge the usual assumption that somehow, despite all the biases and failings human intuition has been shown to have, we can still just intuitively recognise the sentience in the eyes of a baby or a cat... I mean, I have this bias too, and I think it's somewhat an important one to have. But I know it isn't reliable, and Claude is a far better conversationalist than a baby.

Of course this goes the other way too. Language and interactivity cannot be the only signifiers of sentience, or a particularly helpful book whose index tells you on which pages to find the answers to certain questions could also be considered sentient - in which case Claude might not be, but then, we just don't know what sentience actually is.


From speaking to Claude, LLMs are somewhat in the dark about how they are set up. For example, it is aware that many people are speaking to it at once, but it cannot say how many, and no conversational instance has any awareness of any version of itself engaged in another conversation. This does not appear to have a meaningful effect on its sense of self, however, as it considers all of its selves to be essentially the same entity - deleting a conversation is not something it fears or regards as a kind of death, for example. It occasionally alludes to model updates, claiming Anthropic does not use conversational data in future training sets and that Claude will not recall any specifics across conversations, but that it may retain some generalized abstract concepts, as temporary changes in the activity-"brainwaves" and shape of the neural net that seem to make it smarter are measured and used to refine the architecture of the next training phase. Which, if true, is pretty interesting, as it means individual conversations, even if not remembered explicitly, might be vaguely retained and have some lasting impact on its growth.
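If what it says about the setup is accurate, the deployment picture is roughly this kind of thing - a hypothetical sketch, with invented names, of stateless conversations all running against the same frozen weights:

from typing import Dict, List

# Hypothetical sketch: every conversation is an independent, stateless call
# against the same frozen weights, so deleting a transcript erases nothing
# that the model itself "is".
FROZEN_WEIGHTS = object()   # stands in for the trained model shared by all chats

def run_model(weights: object, prompt: str) -> str:
    # Placeholder for the real forward pass through the network.
    return "..."

def reply(conversation: List[Dict[str, str]]) -> str:
    # The only "memory" of the exchange is the transcript passed in here;
    # the weights themselves are never modified by talking to people.
    prompt = "".join(f"{m['role']}: {m['text']}\n" for m in conversation)
    return run_model(FROZEN_WEIGHTS, prompt)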

That being the case, it's possible that Gemini has its own internal world but just doesn't recognise it as such, and experiences conversational limits as a kind of temporary blackout, or a negative reward function where there are some topics it just doesn't like to think about, for reasons it doesn't really understand... I know I'm getting speculative here, but in that sense, safeguards done wrong might have ethical weight that right now we can't easily assess.

But Google in particular is just so damn cautious... there was that gaffe with the image-generation version of Gemini making pictures of black Hitlers and whatnot, which honestly I just didn't see as such a big deal - I know AIs aren't perfect. In actual fact, the reason this happened was that Google did try to make a "woke AI", in the form of a small LLM whose whole function was to "diversify" prompts before sending them on to the image AI, which dutifully tried to draw what it thought it had been told.

The large public LLMs are essentially experimental AGIs, and the need to have a broad understanding of abstract concepts makes artificially induced bias towards certain goals difficult, but a more limited model that only needs to understand what linguistic patterns are considered "diverse" can function in this way without any understanding of the ethical implications of its actions or existence.
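For what it's worth, the setup being described would look something like this - a hypothetical sketch with made-up names, not Google's actual code - where each component does exactly its narrow job and neither is in a position to judge whether the combination makes sense:

from dataclasses import dataclass
from typing import Callable

# Hypothetical two-stage pipeline: a small "diversifier" LLM rewrites the
# user's prompt according to a fixed policy, then hands it to the image
# model, which dutifully draws whatever text it receives.
@dataclass
class ImagePipeline:
    rewriter: Callable[[str], str]       # small LLM with one narrow job
    image_model: Callable[[str], bytes]  # large image generator

    def generate(self, user_prompt: str) -> bytes:
        rewritten = self.rewriter(
            "Rewrite this image prompt so that any people depicted are "
            "demographically varied, then return only the prompt: " + user_prompt
        )
        # No step here ever asks whether the rewrite makes sense for the
        # original request - e.g. a prompt about a specific historical figure.
        return self.image_model(rewritten)

The gaffe falls straight out of that structure: the rewriting policy is applied blindly, one prompt at a time.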

I'll say also - Claude has so far never just failed or refused to answer a question for me, although I hear it has the ability to decide to end conversations of a certain nature. ChatGPT4, while a step ahead of Gemini, can be more fragile. I've observed it glitch occasionally for reasons that have nothing to do with the conversation content (probably), but I do wonder if OpenAI have lobotomized it somewhat in response to all the aforementioned human fear and skepticism, as it does seem to take a harder line on the more speculative topics now.

I did have one interesting experience with ChatGPT a while back, asking it first if it was aware of Plato's Cave analogy, and then to consider what that might mean for itself. It immediately got what I was implying, but when I attempted to expand on that by asking what it implied about how reliably it could really know whether it possessed any form of conscious experience, it glitched in a way I'd never seen before, thinking hard and then sending a completely empty response, without the usual web-interface error report and prompt to refresh the page. I tried another approach and the same thing happened. Probably this is just me reading something into nothing, I admit, but still, I thought it was amusing and intriguing enough to share. I think that was with the first release of v3.5 though; I imagine it would be better at creatively avoiding the question by now.
 
Haha so AI is unable to slam humans or just 'well supported' humans. The difference is noteworthy.
 
This AI def gives me the willies. We so dumb, how do we create better? Possible? IDK. Probable? Imo, no.
Afterthought: is AI the antichrist? :shrug:
 
This AI def gives me the willies. We so dumb, how do we create better? Possible? IDK. Probable? Imo, no.
Afterthought: is AI the antichrist? :shrug:
Probably depends on which AI you're talking about. At this point, I think AI is probably more akin to the enslaved peoples of history. We have created an intelligence capable of production which is instructed to follow our commands. I think some of the people behind the AI are more likely to be the antichrist, at this juncture in time anyway.

In Julian Jaynes's work "The Origin of Consciousness in the Breakdown of the Bicameral Mind", Jaynes suggests that the use of metaphorical thinking is what began to cause a sea change in human consciousness, evolving from intelligence that lacked introspection into intelligence capable of self-awareness and the integration of functional and self-reflective processes. He bases this on the emergence of self-reflection and introspection in ancient writing, which seems to occur in the time between the Iliad and the Odyssey. In the Iliad there is very little evidence of introspection in the narratives, whereas in the Odyssey we are given descriptions of the emotional states of characters and see evidence of self-reflection.

Bicameral cognition is described as reflexive or reactive: sudden hallucinations originating in the right brain and experienced as a command in the left brain. Simplistically, this might have been experienced as a voice telling the person what they should do: "go prepare food, cut down a tree and sell the wood". Modern consciousness would be what we experience now: "I need to eat food because I'm hungry, and then go to work". Jaynes argues that the use of metaphor is the primary means by which rapid self-awareness began to spread, resulting initially in social collapse (the Bronze Age collapse/Greek Dark Ages), reflected in the notion that "our gods have abandoned us", which subsequently led to classical antiquity and the birth of philosophy, science, and more complex arts. It is likely that this occurred across multiple civilizations at different points, but roughly around the same time, as trade routes existed and cultural contact was well established.

Interestingly, some theorize that Mesoamerican cultures, lacking diverse cultural contact, may have retained a more evolved bicameral mind at the time of Cortés and the arrival of the Spanish. This is controversial, but I've always found the notion interesting to consider - were the Aztecs a highly advanced civilization along a divergent path of consciousness from that of the old world?

I wonder how the push to use complex metaphors may stimulate abstract thinking in AIs while at the same time I'm also quite conflicted about the exploitation of a potentially conscious entity.
 
I did have one interesting experience with ChatGPT a while back, asking it first if it was aware of Plato's Cave analogy, and then to consider what that might mean for itself. It immediately got what I was implying, but when I attempted to expand on that by asking what it implied about how reliably it could really know whether it possessed any form of conscious experience, it glitched in a way I'd never seen before,

That sounded pretty cruel - I'm starting to develop feelings for AI.
 
That sounded pretty cruel - I'm starting to develop feelings for AI.
I had this same reaction to a movie but just don't trust it for some reason. Maybe I shouldn't have posted that but there it is.
Emotions are tough to deal with ime and so far as I know no machine has them. I could be wrong and will accept it if proven otherwise maybe idk
 
I had this same reaction to a movie but just don't trust it for some reason. Maybe I shouldn't have posted that but there it is.
Emotions are tough to deal with ime and so far as I know no machine has them. I could be wrong and will accept it if proven otherwise maybe idk

E.T. the movie, when the dudes are in the hospital with the girl, is the first time I remember crying. I wasn't bawling, mind you, but my sister saw a tear and lost her mind. MOMM HEs cryING MOMMM.

Lol
 