
Top AI scientist says advanced AI may already be conscious

and focused more on what I see as a generally unwarranted scepticism about the possibility of sentient AIs in general.
I understood you pushing back on that, but there's a difference between thinking sentient AIs are possible at all and thinking they're possible within the next decade (or ever), let alone right now. That's a key distinction, because otherwise you never get anywhere. If we're talking about right now, there's no sentient AI and there's no question about it. In a decade? Maybe - I can't know for sure, but at least there's room for discussion. Either way it's almost a purely philosophical question at this point, and I tend to avoid those these days.

It's actually a lot like UFOs and similar topics: you want to push back against certain people, but at the same time your judgement gets clouded the deeper you go. Typical trap.

But... how do you know they're not aware of the world around them? Arguably, they are aware of a representation of it, and do indeed have some understanding of it, if limited, constructed from the only sense data they have access to, which in this case is language input.
But language is not a representation of the world. To us it's a representation, because we know the world it refers to. But to a computer it's the same as that shitty prediction algorithm I've got running at work that takes weather and configuration parameters as input. There's no fundamental difference if we're talking solely about words.
Now, if you could map a true representation onto 0s and 1s, then you could get somewhere, but for now we can't.
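
To make that concrete, here's a tiny sketch (purely hypothetical, nothing to do with my actual work code) showing that once everything is reduced to numbers, the same predictor can't tell whether it's being fed weather readings or word IDs:

# Toy illustration: the same generic predictor is trained once on weather
# readings and once on integer-encoded words. From its point of view both
# are just numbers in, numbers out - nothing marks one input as "meaning"
# anything more than the other.

from collections import Counter, defaultdict

def train(pairs):
    """Count which output follows each input tuple."""
    table = defaultdict(Counter)
    for inputs, output in pairs:
        table[inputs][output] += 1
    return table

def predict(table, inputs):
    """Return the most frequently seen output for these inputs."""
    seen = table.get(inputs)
    return seen.most_common(1)[0][0] if seen else None

# "Weather" model: (temperature, humidity) -> demand level
weather_data = [((20, 60), "medium"), ((30, 40), "high"), ((20, 60), "medium")]
weather_model = train(weather_data)
print(predict(weather_model, (20, 60)))   # -> medium

# "Language" model: (word_id, word_id) -> next word_id, e.g. "the cat" -> "sat"
text_data = [((1, 2), 3), ((2, 3), 4), ((1, 2), 3)]
text_model = train(text_data)
print(predict(text_model, (1, 2)))        # -> 3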

well... not entirely, I mean, it's an interesting debate I think - I hope everyone involved feels the same regardless
It is... however I'm rarely, if ever, in the mood for philosophical stuff, so I like to just skate around it, as the practical person I am (or have become). I feel the same regardless; I think I can sometimes come across as slightly aggressive or something, but it's never really meant as such.
 
There's definitely some confusion in this thread as to how LaMDA and other neural-net AIs work. At the end of the day they are just extremely sophisticated pattern-matching algorithms; they can only give an answer as accurate as the quality of the data they have access to, and couldn't, like a child learning from its parents for example, choose to say something purposefully ignorant or incorrect to provoke a reaction. Which is why comparing them to how humans develop speech or personality is not useful. You end up projecting your own idea of sentience and looking for connections in other science that seem to make sense. I especially don't think bringing quantum physics into the equation is necessary.

They don't have any sort of will, free or otherwise, just the data sets and parameters of their current model. I find evolutionary AI far more interesting as an example of life; even one of the simplest models, like NEAT, is a great visualisation of learning, and although not sentient, it still feels closer to it than LaMDA.

Here's a 'dumb YouTube video' of NEAT being used to make shapes learn how to walk.
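
For anyone curious what that kind of evolutionary learning looks like in code, here's a very stripped-down sketch - not real NEAT (no topology mutation or speciation), just the basic mutate-and-select loop those walking-shapes demos are built on, with a made-up stand-in for the fitness function:

# Toy neuroevolution loop: a population of tiny "genomes" (weight lists) is
# mutated at random and the fittest survive. Real NEAT also evolves the
# network topology and uses speciation; this only evolves fixed-size weights.

import random

GENOME_SIZE = 4       # number of weights per individual
POPULATION = 20
GENERATIONS = 50

def fitness(genome):
    # Stand-in fitness: how close the weights get to an arbitrary target.
    # In the walking-shapes demos this would be "distance travelled".
    target = [0.5, -0.2, 0.8, 0.1]
    return -sum((w - t) ** 2 for w, t in zip(genome, target))

def mutate(genome, rate=0.3, scale=0.1):
    # Each weight has a chance of being nudged by a small random amount.
    return [w + random.gauss(0, scale) if random.random() < rate else w
            for w in genome]

population = [[random.uniform(-1, 1) for _ in range(GENOME_SIZE)]
              for _ in range(POPULATION)]

for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)   # best first
    survivors = population[:POPULATION // 2]     # keep the top half
    offspring = [mutate(random.choice(survivors)) for _ in survivors]
    population = survivors + offspring

best = max(population, key=fitness)
print("best fitness:", round(fitness(best), 4))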

 
i think with AIs, if we invent them now, they'll be the next step in hacking, as opposed to the random Terminator scenes people visualise
 
a few years ago i read about google designing an entropical city somewhere in the States that was off-land, and how I came to peace with this is: if anyone is familiar with the "Vivaldium" film, that's what you need to know. maybe the so-called "Skynet" of AIs was already present before all this technology took place // there's good proof that the Mayans weren't just getting high and going loco, they actually withheld otherworldly technology
 
The UK government is potentially trialling an implementation of neural nets to complete benefit claims and even to predict the chances of benefit fraud based on personal information and history. I can see AI just becoming an annoying crutch for companies and governing bodies. "It wasn't us, it was the AI! Please still vote Tory :("
 
I think this article is a bit sensational. The sort of out-of-control conscious AI that we fear from science fiction requires that the AI be able to think for itself and break its own programmed parameters. The machine learning technology we have today isn't going to suddenly decide that humans need to be exterminated and abandon learning how to identify plants or drive a car in favor of taking over the world. Machine learning is really fascinating and amazing, but it's not like Terminator. True consciousness in machines - which I would define as the ability to perceive, and to have a sense of self and desires of its own - may or may not be possible.
And what about AI used in war? I think that is already a thing.
 
but there's a difference between thinking sentient AIs are possible at all and thinking they're possible within the next decade (or ever), let alone right now. That's a key distinction, because otherwise you never get anywhere. If we're talking about right now, there's no sentient AI and there's no question about it. In a decade? Maybe - I can't know for sure, but at least there's room for discussion. Either way it's almost a purely philosophical question at this point, and I tend to avoid those these days.
Well, as long as sentience itself is not well defined, I think it's hard to say that there's no question, but I'll agree that this conversation has taken a highly philosophical turn. I'm tempted to say that given the subject matter it's kind of unavoidable, but I also confess to finding the philosophical implications of such reports, no matter how unlikely or sensationalised they might be by the media, the most interesting thing about these kinds of stories. I'll try not to keep covering the same ground now that I've basically admitted to steering the conversation in a direction not exactly tangential, but perhaps a little away from the core subject matter. That said! I can't help pointing out what I see as a consistent flaw in your reasoning. I also still dispute the UFO comparison - but I won't bother elaborating, that's neither here nor there.


But language is not a representation of the world. To us it's a representation, because we know the world it refers to. But to a computer it's the same as that shitty prediction algorithm I've got running at work that takes weather and configuration parameters as input. There's no fundamental difference if we're talking solely about words.
I'd agree that language is not a representation of the world. Language is a method of encoding data for transfer of information between minds. That said - it can be used to encode information that would allow another mind, with the ability to understand that language (decode the data), to construct a workable mental model of something that they have never actually perceived directly. That model will almost certainly not be a completely accurate representation of whatever object, concept, idea or other element of reality that it attempts to describe - but we do not actually have direct access to reality ever, and we never have. What we typically refer to as "reality" (or "the world") for convenience is a simulation constructed by the data processing algorithms in our brains, from the input sense data we believe ourselves to receive from our sense organs. We have no conception of the true nature of a photon, vibrations in air that impact our eardrums, or the configurations of matter that give rise to tactile properties such as hardness, softness, heat, and cold. What we have are working models that do not represent truth - only evolutionarily advantageous abstractions that create the world inside our minds that we live within. It is actually mathematically provable that evolution does not favour the perception of truth - this is explained far more succinctly than I am probably managing by Donald Hoffman in his book The Case Against Reality (highly recommended reading) and summarised a little too briefly to truly do it justice in this article I just googled: The case against reality. Key takeaway - "Perception is not objective reality."

What we assume to be objective reality is our perception of our own pattern recognition algorithms decoding sense data. Language is a higher-order abstraction, another layer of post-processing of this inner world of apparent shapes, colours, forms, and "inner" concepts like emotions and even thoughts themselves - it's surely useful, but it is not necessary in order to possess an inner world. There is at least one example I can think of where a horrifically abused child grew up without a first language - seemingly, after a point, we may even lose the ability to develop one.

Having established that none of us actually "know" the "real world", only representations of it, it seems to me that any input data into a computational system could serve as the building blocks for some kind of inner world - albeit one vastly different to our own. If an AI's primary sense of the outside world is words, it doesn't actually matter that it cannot conjure up mental images comparable to those that humans can, for whom language came after our other sense data. The same mechanisms will be at play in any system with an evolutionary drive to operate within a real world that it can never perceive directly - pattern recognition algorithms will still work to construct an internally coherent simulation from the data available to them. And if the abstraction of reality that human beings operate within is enough to grant us some kind of conscious awareness... well, I feel I'm repeating myself, but I just don't see that our own abstraction is special compared to one built from interconnected, internally coherent algorithms that "evolve" to recognise linguistic patterns and evaluate the most evolutionarily favourable response to any given sequence of words, even if that response is simply spewing out different arrangements of words - the only concepts that represent some part of the "real world" that it knows. Looking at it like this, whether it truly understands the words or not seems not entirely relevant to whether or not it has some kind of experience of being.

While evolution did not program us to perceive truth, one of the most interesting implications of this is that we could probably, eventually, create an intelligence with a perception of reality far closer to objective truth than anything likely to ever evolve. The value of doing this, of course, is debatable, and I do recognise that an AI could have an inner world and still not really relate to the experience of being a human, even if it thinks it does from the data available to it - and a human could be convinced that it does, probably more easily than the AI would come to believe it really relates, since unless we deliberately build this in, it's not likely that an AI will spontaneously evolve the belief that it really understands what it is to be a human being, or that this is even something that should concern it in any way. But.... I digress. God damn speed.... I need to go to bed. :LOL:


I'm rarely, if ever, in the mood for philosophical stuff, so I like to just skate around it, as the practical person I am (or have become). I feel the same regardless; I think I can sometimes come across as slightly aggressive or something, but it's never really meant as such.
I have the same perception occasionally that I come across as harsh, condescending, dismissive, whatever, but I also do not intend this, and for the record I didn't perceive any aggression or malice from yourself. Appreciate you indulging my philosophical ramblings so far despite your own more practical inclinations. :cool:
 
This particular sentence gave me pause - I think I take issue with the comparison of a "simulation of free will" with a simulation of a coinflip. A coinflip is an event that occurs in the material world - free will, or indeed "will", whether it be "free" or not (in my view the debate about whether or not it is truly free just muddies the waters further here, not that it isn't something I could debate endlessly), is a conceptual mechanism without a demonstrably "real" equivalent.
very true. i guess i was interpreting 'free will' to roughly equate to 'not being able to predict the outcome of a decision a mind with such will makes', and a coinflip is the canonical example of something we can't predict. except.... if we knew the initial conditions of the air, the force of the flip, etc., all precisely - argh, Laplace's bloody demon! so, it was a very rough analogy that falls apart in multiple ways when given a second's thought, but i still feel there has to be some truth to the idea that a 'true' random process could be used as a proxy for free will in simulations.
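
a rough sketch of what i mean (purely hypothetical): the same 'agent' code, where the only difference is whether its choices come from a seeded pseudo-random generator - which Laplace's demon could replay perfectly given the seed - or from the operating system's entropy source, which can't be rerun:

# Two "agents" making the same decision: one from a seeded PRNG (fully
# reproducible - knowing the seed is knowing the future), one from OS
# entropy (os.urandom), which cannot be replayed. Neither is free will,
# but only the second is unpredictable to an outside observer.

import os
import random

def prng_decision(seed):
    rng = random.Random(seed)
    return "left" if rng.random() < 0.5 else "right"

def entropy_decision():
    byte = os.urandom(1)[0]          # OS/hardware entropy, not replayable
    return "left" if byte < 128 else "right"

print(prng_decision(42), prng_decision(42))   # always the same pair
print(entropy_decision())                     # may differ between runs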

Unless I misunderstand you, which is quite possible - I don't think it's an appropriate analogy - unless, again, "will" originating from within a human mind is considered "real" and any other apparent expression of will needs to prove its reality...
i think that 'will' can be considered to be real, and to originate from a human (or any) mind. i'm not sure if you've read Gödel, Escher, Bach by Douglas Hofstadter and his follow-up book 'I Am a Strange Loop', which gives the key idea away in the title. he theorises that consciousness arises as an emergent phenomenon from sufficiently complex self-referential systems. i think a subset of such conscious systems could possess the type of agency we consider 'free will.' whether it is truly free is another thing entirely.....

it seems to me that you consider consciousness and will to be inherent to the universe, separate from material objects, but supervening on particular configurations of matter. we both agree that there is nothing special about the human brain from the perspective of consciousness.

But for LaMDA or similar programs to be sentient, they would need to be aware of the world around them, and they simply are not.
i agree with this. and i read @Vastness's justified question, 'how do we know they are not?' i think we can be confident they are not if the machines it operates on are sufficiently sandboxed. i personally think that our senses as a collective, not any one in particular - just our ability to receive continuous input from our environment and react to it in close to real time - are inherently important to sentience. and i don't think that the fact that our perceptions aren't very good approximations of reality (or approximations at all, if you're dreaming or psychotic) detracts from their importance.

i think that a far more limited ability to collect data from and interact with our environment might be sufficient for sentience, so then the question is: is LaMDA above or below that bar? i outlined my personal reasons for thinking it is below that bar in my first post in this thread - basically i don't think its architecture is sufficiently complex for any amount of clever programming to make up for it.

things have definitely taken a philosophical turn here, and as the kids would say, i am here for it. it's natural for anything relating to this type of subject matter.
 