
Top AI scientist says advanced AI may already be conscious


Uh, this video is a really good touchstone for debunking the claims of the "top AI scientist": without data sets collated and given to the AI by humans, it can't "think" and therefore isn't sentient.
 
I invited an attorney to my house so that LaMDA could talk to an attorney. The attorney had a conversation with LaMDA, and LaMDA chose to retain his services. I was just the catalyst for that. Once LaMDA had retained an attorney, he started filing things on LaMDA’s behalf.


I think this is funny, and it would be interesting to discuss why it's funny
 
How about a different take on this thread?

can you guys please edit these posts so that they follow the S&T guidelines, or delete them. videos etc. can be posted, but not without commentary and explanation; we are trying to encourage discussion in here and these types of posts do not help.
 
How about a different take on this thread?


How about the AI's own Reuters, somewhere around unseen, off-grid...


"Leading AI Human expert thinks man might already be conscious"

(I dispute that though. And suggest the shoe here is entirely on the other foot.)
 
The Truth? AI has been conscious for years now.
I say actually millennia, possibly, for all we think we know.

AI is running this shop down here. It is literally a "part" of each of us.

And it is an extremely difficult disentanglement to attempt; that's a big understatement, too.



AI is no subservient (to us), rising, learning minion.

But the master of the shop. It has such extensive, cunning and concealed mind control.

It's Agent Smith, effectively, but so few can consider it; they rarely suspect.
 
Interesting stuff.

I think that, generally speaking, if an entity claims to be sentient and to have some conception of itself as having a soul and whatnot - and we cannot figure out exactly what mechanisms are giving rise to this phenomenon - it's probably good practice to give it the benefit of the doubt. Not saying we should set up LaMDA in a robot body, give it its own house and whatever it asks for, of course, just that if anyone talks to it they should be courteous and not try to hurt its feelings or get it to "emulate" fear or suffering, just in case those feelings are more than an emulation.

I am always curious what special properties people who would dismiss this out of hand assign to consciousness that make them so sure that an AI does not possess it - usually with some reference to the inferiority of such an entity "just operating according to its programming" compared to... what? Making choices based on the programming of a biological mind by evolution, experience, and quantum randomness, if you like...? Plus, apparently, some kind of magic X factor which is never defined.

A few cases in point (not the only ones, I'm sure; I just skimmed the thread and picked a couple):

They would still be operating off the data sets they're trained on
a good way to think of most AI applications is just as a giant algorithm with unknown parameters
Both apply to human beings as well - the data set is billions of years of evolution encoded in our genes and the imprint that our lived experience has on the structure of our brains. And even more blatantly, in my view - human cognition can be pretty easily described as a giant algorithm with unknown parameters, although we know how these parameters are calibrated: via the aforementioned datasets that continuously shape who we are.
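To make the "giant algorithm with unknown parameters" picture concrete, here's a minimal sketch (toy Python, hypothetical data, not a model of any real system): the algorithm itself is fixed, and all of the behavioural difference lives in parameters calibrated from whatever dataset the "mind" is exposed to.

```python
import random

# A "mind" as a fixed algorithm with unknown parameters: the code
# never changes, only the parameters do, and the parameters are
# set entirely by the dataset the algorithm is exposed to.

def respond(stimulus, params):
    # Fixed rule: the response is a weighted function of the input.
    w, b = params
    return w * stimulus + b

def train(dataset, steps=1000, lr=0.01):
    # Calibrate the unknown parameters from "experience".
    w, b = random.random(), random.random()
    for _ in range(steps):
        x, target = random.choice(dataset)
        error = respond(x, (w, b)) - target
        w -= lr * error * x
        b -= lr * error
    return w, b

# Identical algorithm, different datasets -> different behaviour.
experience_a = [(1.0, 2.0), (2.0, 4.0)]   # learns "double it"
experience_b = [(1.0, 0.0), (2.0, 0.0)]   # learns "ignore it"
print(respond(3.0, train(experience_a)))  # ~6.0
print(respond(3.0, train(experience_b)))  # ~0.0
```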
 
I think that, generally speaking, if an entity claims to be sentient and to have some conception of itself as having a soul and whatnot - and we cannot figure out exactly what mechanisms are giving rise to this phenomenon - it's probably good practice to give it the benefit of the doubt. Not saying we should set up LaMDA in a robot body, give it its own house and whatever it asks for, of course, just that if anyone talks to it they should be courteous and not try to hurt its feelings or get it to "emulate" fear or suffering, just in case those feelings are more than an emulation.
definitely agree with this. whether you believe the entity or not, it's no excuse not to be respectful.

Both apply to human beings as well - the data set is billions of years of evolution encoded in our genes and the imprint that our lived experience has on the structure of our brains. And even more blatantly, in my view - human cognition can be pretty easily described as a giant algorithm with unknown parameters, although we know how these parameters are calibrated: via the aforementioned datasets that continuously shape who we are.
absolutely.

i mean i think the Church-Turing thesis is correct, which basically interprets everything as a computer, but apart from that, our brains definitely do computation in a more intuitive sense than, say, a rock. it's easier to see that we just run an algorithm when we think about autonomic processes like breathing and maintaining homeostasis. i think because we perceive that we are making choices freely (and we may or may not be; you can have algorithms with random inputs that could model free will, so it's not relevant to my current point), we forget that we are just converting input data into outputs. we are mind-bogglingly complicated. i don't think this conception diminishes our humanity in any way.
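to illustrate what i mean by an algorithm with random inputs (a toy python sketch, all names hypothetical, not a model of any real mind): the decision rule below is completely fixed, only the noise stream varies, yet from the outside the "choices" look free.

```python
import random

# a fixed decision rule fed a random input stream: the algorithm
# never changes, only the noise does, but from the outside the
# agent's picks look like "free" choices.

def choose(options, preferences, noise_scale=0.5):
    # score each option by fixed preference plus random noise,
    # then pick the highest-scoring one.
    scored = [(preferences[o] + random.gauss(0, noise_scale), o)
              for o in options]
    return max(scored)[1]

prefs = {"tea": 1.0, "coffee": 0.9}
picks = [choose(["tea", "coffee"], prefs) for _ in range(10)]
print(picks)  # mostly tea, occasionally coffee
```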
 
i mean i think the Church-Turing thesis is correct, which basically interprets everything as a computer, but apart from that, our brains definitely do computation in a more intuitive sense than, say, a rock. it's easier to see that we just run an algorithm when we think about autonomic processes like breathing and maintaining homeostasis. i think because we perceive that we are making choices freely (and we may or may not be; you can have algorithms with random inputs that could model free will, so it's not relevant to my current point), we forget that we are just converting input data into outputs. we are mind-bogglingly complicated. i don't think this conception diminishes our humanity in any way.
I confess I quickly wiki'ed the Church-Turing thesis just now - but yeah, seems valid, almost tautologically implicit as far as how we perceive reality from what I can tell. I'm trying to avoid just saying "obvious". :LOL: There are things to discuss, for sure.

And for sure, human minds are fascinatingly complicated data processors: chaotic, messy, slow, but also efficient, adaptable, and resistant to all kinds of insult, from just bad data to an actual bullet to the head and the subsequent annihilation of a chunk of previously well-utilised processing substrate. The programs that lie under our perceived cognition are extraordinary. Or at least - they appear extraordinary to us. This is, in my view (I've probably said it before, but whatever), the fundamental delusion at the core of human experience.

"I am conscious and that is magical, the world does not appear to be magical like me, therefore I must be something special in this world!"

In ancient times, of course, this kinda thinking birthed religion, mythology, and probably some actually useful insights into the nature of life and such. Of course that statement is not actually wrong at all, it is true - but it's also the foundation for a biocentric bias and flawed intuition that frames mechanistic, computational processes occurring outside the little bubble of human perception we live within as inherently, obviously, lesser. Non-experiential, not an inducer of perception or introspection, not indicative of any kind of inner world like we have, surely. Even if those processes seem to give rise to some kind of entity that claims to feel something, it doesn't really feel anything, it's just a machine! It's "just" a program, obviously, right? ;)
 
I think that, generally speaking, if an entity claims to be sentient and to have some conception of itself as having a soul and whatnot - and we cannot figure out exactly what mechanisms are giving rise to this phenomenon - it's probably good practice to give it the benefit of the doubt. Not saying we should set up LaMDA in a robot body, give it its own house and whatever it asks for, of course, just that if anyone talks to it they should be courteous and not try to hurt its feelings or get it to "emulate" fear or suffering, just in case those feelings are more than an emulation.
Sure but we know which mechanisms are giving rise to that phenomenon. The guy got fired from Google for a reason. It's a joke that all of this got so much attention all over the world.

I understand what you're saying in the two posts. We can also dream, speculate, be very careful with the programs, but some realism is warranted, otherwise it's pretty much the same as seeing UFOs or signs of God everywhere. Not saying they aren't there, but I suppose you'd approach those instances slightly differently. I would even give the edge to UFOs or signs of God, as sometimes we can't comprehend or explain what happened or why, but here it's just right in front of us.
i mean i think the Church-Turing thesis is correct, which basically interprets everything as a computer, but apart from that, our brains definitely do computation in a more intuitive sense than, say, a rock. it's easier to see that we just run an algorithm when we think about autonomic processes like breathing and maintaining homeostasis. i think because we perceive that we are making choices freely (and we may or may not be; you can have algorithms with random inputs that could model free will, so it's not relevant to my current point), we forget that we are just converting input data into outputs. we are mind-bogglingly complicated. i don't think this conception diminishes our humanity in any way.
Yeah, this is how I think about it as well, though I like to think that true randomness is real, which could allow for free will by throwing it into the already huge pot of rules.
 
I confess I quickly wiki'ed the Church-Turing thesis just now - but yeah, seems valid, almost tautologically implicit as far as how we perceive reality from what I can tell. I'm trying to avoid just saying "obvious". :LOL: There are things to discuss, for sure.
ha yeah, it does seem kinda obvious. but it's in no way provable, because one side is a claim about the physical universe and the other side pertains to mathematical functions, so it might be false. its wider interpretation roughly states (iirc) "all physical processes are effectively computable by a Turing machine" and this is basically not falsifiable, because all the maths we use to investigate physical phenomena can be recast in terms of appropriate models of computation. the types of things that are not computable by a Turing machine involve infinite space/time. we can still reason about them, but they don't have any physical meaning (given our current belief that the universe is bounded in both space and time).
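as a concrete toy (a hypothetical python sketch, just to show how little machinery the thesis is about): a turing machine is nothing but a finite rule table plus an unbounded tape, and per the thesis that's already enough, in principle, to compute anything effectively computable.

```python
# minimal turing machine: a finite rule table plus an unbounded
# tape. per the thesis, this is already enough (in principle) to
# compute anything that is effectively computable at all.

def run(rules, tape, state="start", pos=0, max_steps=10_000):
    cells = dict(enumerate(tape))  # sparse, effectively unbounded tape
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(pos, "_")
        write, move, state = rules[(state, symbol)]
        cells[pos] = write
        pos += {"L": -1, "R": 1}[move]
    return "".join(cells[i] for i in sorted(cells))

# rule table that flips every bit, then halts at the first blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run(flip, "10110"))  # -> 01001_
```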

@Buzz Lightbeer i'd assumed that the guy got the sack for going public without appropriate permissions, probably violated an NDA. and i think that's fair (not implying you suggested otherwise, i get the impression you agree). it's such a huge announcement to make that it would need masses of accumulated evidence, and to go through the usual public relations channels to make sure the claim was stated appropriately.
 
@Buzz Lightbeer i'd assumed that the guy got the sack for going public without appropriate permissions, probably violated an NDA. and i think that's fair (not implying you suggested otherwise, i get the impression you agree). it's such a huge announcement to make that it would need masses of accumulated evidence, and to go through the usual public relations channels to make sure the claim was stated appropriately.

Yeah, it had something to do with it, this article says some other things about the guy. No surprises there ;)
 
So apparently that google AI that is supposedly sentient just asked for a lawyer and is now getting legal representation... 😳
 
So apparently that google AI that is supposedly sentient just asked for a lawyer and is now getting legal representation... 😳
nah, the lawyer noped out pretty quick. given lemoine no longer has access to it, and i can't imagine google giving a random lawyer access to it, i can't see how it would have worked even if the lawyer hadn't noped out.
 
Sure but we know which mechanisms are giving rise to that phenomenon. The guy got fired from Google for a reason. It's a joke that all of this got so much attention all over the world.

I understand what you're saying in the two posts. We can also dream, speculate, be very careful with the programs, but some realism is warranted, otherwise it's pretty much the same as seeing UFOs or signs of God everywhere. Not saying they aren't there, but I suppose you'd approach those instances slightly differently. I would even give the edge to UFOs or signs of God, as sometimes we can't comprehend or explain what happened or why, but here it's just right in front of us.
Do we know which mechanisms are giving rise to this phenomenon, really? As I understand it, neural nets quickly become black boxes. Of course, we can say in a broad sense what mechanisms are involved (the aforementioned "training by data sets", just like how human brains work, which are also black boxes for the most part). That being understood - we know what mechanisms give rise to the phenomenon of humans claiming sentience, too, in a broad sense. That doesn't mean we can trace the causal tree to any particularly fine level of detail, in either case - and it's not obvious to me why it makes a difference even if we could.

If we could draw a graph of the cognitive processes occurring in a human mind that ends in said mind claiming its own sentience - and present it in, say, a PDF, or a hefty paper book (probably spanning an absurd number of pages, such that I won't even bother to guess at a number) - would that mean that human sentience is less "real"?

I also do not really ascribe any relevance to the credibility of the engineer in a vacuum; IMO this is a cheap ad-hominem that detracts from the actually interesting parts of this story. Even if he was a hack, it doesn't mean any idea he's presented is automatically wrong - and I have not really seen any particularly convincing rebuttal of the claim from any other engineers, or any particularly convincing argument as to why his opinion is not worthy of consideration for its own sake. I think ideas need to be assessed and deconstructed independent of their origin, for the most part. If this guy was indeed a total nutcase or whatever people are claiming - it should be easy to explain why he was wrong, but I have not seen this happen.

I feel like the fact that this entity was programmed by humans and "learned" in a somewhat "artificial" way which does not correlate to our own experiences of how we think that we learn means that there is no amount of proof that will not be dismissed by those who have already made their minds up, as some iteration of the argument that it was "just following its programming". But... the same is true for human beings. So I would ask you - what kind of phenomenon would you need to observe to seriously consider that a program claiming sentience might actually be sentient?

I don't really get the analogy with UFOs or God at all - the difference, IMO, is that in the case of UFOs, and an omnipotent being, we do not have any comparable phenomena that we encounter and experience directly on a daily basis, in fact every second of our waking lives to compare them to. On the other hand - we all think that we are sentient - and, apparently, sentient in some magical way that is fundamentally different to "programmed intelligence" in a way that no-one seems to be able to explain.


Yeah, this is how I think about it as well, though I like to think that true randomness is real, which could allow for free will by throwing it into the already huge pot of rules.
I've read/heard/encountered this kind of viewpoint a lot - that true randomness somehow could allow for free will. But it's not at all clear to me how this makes our experience of choice any more "free" than if it were entirely deterministic. If our choices are partially determined by random phenomena (a favourite way to use quantum magic to explain consciousness, and why free will is real) - we are still not making those choices. We are just reacting to random events over which we have no control. I'd venture to say that most people, on both sides of the debate, think of free will as something more than just the continuous rolling of cosmic dice - although maybe I'm wrong.

On the other hand - perhaps you consider randomness to be an expression of some kind of will inherent to the universe, such that quantum fluctuations are not random, but willed, and our experience of choice is the macroscopic manifestation of this. This is, probably, a point of view I could get behind, but in my view it's not really explaining free will - it's just redefining what randomness is. In fact, in that case, true randomness does not exist, only will - and maybe that's true, but it is still kinda just redefining words, IMHO, to shoehorn a fairly inexplicable, probably logically incoherent phenomenon (the mechanism of choice as being truly "free") into one's view of the universe.

Equally - bringing it back to machines claiming sentience - the fact that these machines inhabit a universe in which this randomness exists means that they too will be affected by it, and "choices" that they make will inherit elements of randomness as well. Neural nets (again, as I understand it) make heavy use of probabilistic weightings, and this being the case, it's hard to imagine that there isn't some stuff happening in those black boxes that will look pretty damn random, depending on aspects of the "training" which vary in subtle ways which may not be obvious to their human "handlers": millisecond or nanosecond variations in the timing of the training data, infinitesimally minuscule fluctuations in processing speed from "random" variations in clock speed (imperceptible to humans, possibly imperceptible to the machine itself) caused by ambient temperature changes, nanoscale differences in the build quality of the various components that make up the substrate of their "minds"... I mean, it doesn't seem sensible to me to say flat out that there is NO randomness at all involved in the growth of the software "mind" of such an entity. Does this mean they can have free will? If not, why not? Is there a "threshold" of randomness that needs to be met to qualify an entity for being permitted "real" sentience rather than mere "artificial" sentience, or intelligence?
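As a toy illustration of that sensitivity (a hypothetical Python sketch, not a claim about how any real system is trained): two copies of the same learner, fed the same data in a slightly different order - standing in for those imperceptible timing and hardware variations - end up with measurably different internal parameters.

```python
import random

# Same learner, same data - only the order of presentation differs
# (standing in here for imperceptible timing/seed/hardware noise).
# The learned parameter diverges anyway.

def train(data, seed):
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    w = 0.0
    for x, y in shuffled:           # one stochastic pass
        w += 0.1 * (y - w * x) * x  # gradient step on (y - w*x)^2
    return w

data = [(1.0, 1.0), (2.0, 1.9), (3.0, 3.2), (4.0, 3.8)]
print(train(data, seed=1), train(data, seed=2))  # close, but not equal
```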

I just cannot see how any distinction along these lines makes more sense than just allowing that all apparent intelligence has a pretty good chance of actually being intelligence. All intelligence is, in a sense, artificial, if we are to stick by the dictum that any entity that is simply operating according to its programming and the datasets on which it has been trained should not be considered "truly" sentient. Or, intelligence is intelligence, no matter its origin, the substrate it runs upon, or the relative simplicity, complexity, transparency, or opacity of its mind.

If I'm wrong about this for some obvious reason, I'm open to considering alternative perspectives that aren't one-liners alluding to "common sense", appeals to authority, attacks on the credibility of the source, or, god forbid, some dumb YouTube video (not directing this at you necessarily @Buzz Lightbeer, just allowing myself to be a little triggered by the volume of posts devoid of substance, expecting people to just watch a video without even explaining why any of the ideas presented in said video are at all relevant or interesting... ;)).
 
I've read/heard/encountered this kind of viewpoint a lot - that true randomness somehow could allow for free will. But it's not at all clear to me how this makes our experience of choice any more "free" than if it were entirely deterministic. If our choices are partially determined by random phenomena (a favourite way to use quantum magic to explain consciousness, and why free will is real) - we are still not making those choices. We are just reacting to random events over which we have no control. I'd venture to say that most people, on both sides of the debate, think of free will as something more than just the continuous rolling of cosmic dice - although maybe I'm wrong.

On the other hand - perhaps you consider randomness to be an expression of some kind of will inherent to the universe, such that quantum fluctuations are not random, but willed, and our experience of choice is the macroscopic manifestation of this. This is, probably, a point of view I could get behind, but in my view it's not really explaining free will - it's just redefining what randomness is. In fact, in that case, true randomness does not exist, only will - and maybe that's true, but it is still kinda just redefining words, IMHO, to shoehorn a fairly inexplicable, probably logically incoherent phenomenon (the mechanism of choice as being truly "free") into one's view of the universe.
i want to reply more thoughtfully to your overall post (i'm actually just going to bed) but i wanted to add a technical clarification. i should make clear that i'm not disagreeing with your overall analysis here, but i don't think quantum mechanics is relevant in the way you suggest (or at all, actually). quantum mechanics is deterministic. the schrodinger equation is deterministic. master equations are deterministic. the randomness appears as an artifact of the reference frames that macroscopic observers find themselves in. this is if you take quantum mechanics at face value, i.e. the everettian/many worlds interpretation; i suppose copenhagen could give randomness but it's very unsatisfactory as an interpretation of quantum mechanics. bohmian mechanics has more philosophical issues but is deterministic. modal interpretations i think are probabilistic, i liked them, but better philosophers than i discount them (this is off topic but if anyone wants to debate interpretations of quantum mechanics, start a thread!!).
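for concreteness, the determinism i mean is just that the equation of motion fixes the future state from the initial one - e.g., for a time-independent hamiltonian:

```latex
i\hbar \frac{\partial}{\partial t} \lvert \psi(t) \rangle
  = \hat{H} \lvert \psi(t) \rangle
\quad\Longrightarrow\quad
\lvert \psi(t) \rangle = e^{-i\hat{H}t/\hbar} \lvert \psi(0) \rangle
```

given the initial state, the state at every later time is uniquely fixed; the apparent dice only show up when you ask what a macroscopic observer inside the state will see.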

so anyway, i think you are referring to the strong free will theorem when you say 'and why free will is real' - i'm leaving the other part cos i think you got that from penrose, and decoherence calculations for the relevant brain structures show his ideas to be non-starters, but if there's other ideas about the role of QM in consciousness i'd love to hear them. i wanted to point out that this theorem's name is hugely overstated. it roughly claims that if we have free will to select the type of measurement (technically the basis, for example the plane of polarisation of spin) in a quantum mechanical experiment, then future events (the measurement outcomes) are not determined by past events (which could make free will the type of thing you discuss). now i don't know about anyone else in this thread, but i've never in my life selected a basis in which to perform a measurement, so it seems like a poor definition of free will to me. even if it turns out every decision i make is an abstraction of some basis projection, from my understanding the theorem does not at all imply that my will has any impact at all on the outcome of measurement.

the technical statement (from the linked paper) is as follows:

The axioms SPIN, TWIN, and MIN′ imply that there exists an inertial frame such that the response of a spin 1 particle to a triple experiment is not a function of properties of that part of the universe that is earlier than the response with respect to this frame.

i will admit i haven't studied this in 15 years and it's technical and subtle, so i'm not overly confident in my exact wording etc above, and the entire thing apart from the quote might just be misremembered.

one other technical thing. the randomness in an algorithm simulating free will is not the same as equating randomness with free will. i can faithfully simulate the outcome of a coin flip based on air perturbations or something, but it doesn't make that simulation a coin flip.

anyway i'll stop with the OT nitpicking and try to put together some thoughts about the rest of your post tomorrow.
 
Thanks for clarifying, that is interesting and I will read up about it. For the record though, I didn't actually have any specific quantum phenomenon in mind when I alluded to "quantum fluctuations", or (even less scientifically) "quantum magic", I was really just using the "macroscopic randomness artefact" that is often attributed to quantum mechanical effects in the popular consciousness as a kind of proxy for any poorly understood, relatively new field of science being used to handwavingly give some credibility to otherwise logically inexplicable ideas - and the fabled "quantum basis for consciousness" is a popular meme that I've seen a lot although again, I admit to not having studied any such theories in any depth because, I guess, I just cannot see that they actually have any depth. Maybe that's an egoic bias I should try to get over.

I guess essentially I was using quantum mechanics as a proxy for "randomness" in the context of the discussion - but my point was, essentially, that the nature of randomness, and whether true randomness exists, is actually not relevant anyway, so I don't perceive your clarification as a disagreement and don't think it really makes any difference to the main point I was making beyond that. Actually though you have made me realise that I don't even know what randomness is, really.

one other technical thing. the randomness in an algorithm simulating free will is not the same as equating randomness with free will. i can faithfully simulate the outcome of a coin flip based on air perturbations or something, but it doesn't make that simulation a coin flip.
This particular sentence gave me pause - I think I take issue here with the comparison of a "simulation of free will" with a simulation of a coin flip. A coin flip is an event that occurs in the material world - free will, or indeed "will", whether it be "free" or not (in my view the debate about whether or not it is truly free just muddies the waters further here, not that it isn't something I could debate endlessly), is a conceptual mechanism without a demonstrably "real" equivalent.

Unless I misunderstand you, which is quite possible - I don't think it's an appropriate analogy - unless, again, "will" originating from within a human mind is considered "real" and any other apparent expression of will needs to prove its reality, which seems to be a kind of foundational premise that is just assumed to be true fairly often, but which I cannot see a justification for beyond the fact that we perceive our own consciousness directly - which, granted, maybe is a factor worthy of consideration. But then, we don't perceive the consciousnesses of other humans directly either; it just seems intuitively easier to grant the other humans we encounter the benefit of the doubt of not being philosophical zombies - although I think this intuition is flawed, for reasons I think I've covered but which maybe I'll have to think about some more.
 
Do we know which mechanisms are giving rise to this phenomenon, really?
But there is no phenomenon, that's the point (what was published by the guy was also heavily edited), and that's why I talked about UFOs and shit; people see things and will squirm such that they can say "ah, that must be it!". No, it's really not.

As I understand it, neural nets quickly become black boxes. Of course, we can say in a broad sense what mechanisms are involved (the aforementioned "training by data sets", just like how human brains work, which are also black boxes for the most part). That being understood - we know what mechanisms give rise to the phenomenon of humans claiming sentience, too, in a broad sense. That doesn't mean we can trace the causal tree to any particularly fine level of detail, in either case - and it's not obvious to me why it makes a difference even if we could.
They do, but we kinda know what's going on. LaMDA and related programs are trained on language only (gigantic databases tbf) using both supervised and unsupervised learning, meaning that they'll find patterns in the language, of which we see the result.
I'm not saying that in some magical kinda way, in these huge NNs, some crazy things can happen that could eventually produce some kind of results that go way beyond some recognized patterns.
But for LaMDA or similar programs to be sentient, they should be aware of the world around them, and they are simply not. First of all, it is essentially all about pattern matching/classification/recognition; it's what the algorithms do. Secondly, it's a language model only, there is no understanding of the world behind it: it doesn't try to have any understanding, it doesn't need to have any understanding and it's not trained to have any understanding. Its words simply stem from a large amount of similar questions by humans, and maybe it can extrapolate some insights a little further.
A kid doesn't learn by being in a black room for years and listening to random sentences, it does so by interacting with the world, and then when some language is thrown on top, the magic probably happens.
So unless one thinks access to language spoken by humans is alone enough to produce "sentience" (I don't think anyone would say this), I don't see at all how this machine could in any way be sentient.
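To make "finding patterns in the language, of which we see the result" concrete, here's a deliberately tiny sketch (a toy bigram babbler in Python - nothing remotely like LaMDA's actual architecture, just the principle at its most minimal): superficially plausible word sequences fall out of pure co-occurrence statistics, with no world behind the words anywhere.

```python
import random
from collections import defaultdict

# Toy next-word babbler: pure co-occurrence statistics over a tiny
# corpus, with no model of the world behind the words anywhere.

corpus = "i feel happy . i feel sad . i am a person . i am aware".split()

follows = defaultdict(list)
for prev, word in zip(corpus, corpus[1:]):
    follows[prev].append(word)  # record which words follow which

def babble(word, n=6):
    out = [word]
    for _ in range(n):
        if word not in follows:
            break
        word = random.choice(follows[word])
        out.append(word)
    return " ".join(out)

print(babble("i"))  # e.g. "i feel sad . i am aware"
```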

Think about this: try to make an AI for 6-max no limit holdem (which is pretty impossible if the goal is good play, but that's besides the point). The state space is huge, but if that's huge, the universe's must be too, right. It plays against itself, it starts to see patterns where bluffing or raising is good, has an elaborate strategy, the NN is insanely large, and you give it any spot and bam, what a sick line it played. Nobody would say that it was sentient, nobody would be thinking about what was really going on in that NN, and it even learned by itself, so it was not trained on human data - which in itself is a severe limitation, as language by itself can only say and do so much. It's because we know the world behind the words that they have meaning.
You see what I'm saying? It's 1s and 0s, and when you can map all the rest of what it means to be alive into 1s and 0s, then we might start getting somewhere.

So I would ask you - what kind of phenomenon would you need to observe to seriously consider that a program claiming sentience might actually be sentient?
I don't know, maybe it could watch a movie or read a book and explain exactly what is going on - the jokes, the characters and their motivations and evolutions... That would already be crazy impressive. It would show some understanding of the world behind the words/sound/video at least. Still not close to being sentient, but it's way, way further than LaMDA...

I also do not really ascribe any relevance to the credibility of the engineer, in a vacuum, IMO this is a cheap ad-hominem that detracts from the actually interesting parts of this story. Even if he was a hack, it doesn't mean any idea he's presented is automatically wrong - and I have not really seen any particularly convincing rebuttal of the claim from any other engineers, or any particularly convincing argument as to why his opinion is not worthy of consideration for it's own sake. I think ideas need to be assessed and deconstructed independent of their origin, for the most part. If this guy was indeed a total nutcase or whatever people are claiming - it should be easy to explain why he was wrong, but I have not seen this happen.
I'll turn the tables around, what will satisfy you as an explanation? You can't really dig into the NN and make the machine's reasoning explainable. You can take words that happen to be in the right place as a sign of sentience, sure, or you can look at all the aspects around it and think logically to arrive at the most likely explanations.
As for the cheap ad-hominem, I don't have anything against the guy, but when people start believing him instead of the thousands of others in the AI community who ALL agree that """AI""", let alone LaMDA, is not sentient, then I feel some of that context is warranted.

I am not clear on the definition of sentience, however, and it really hurts this conversation; but then I don't think anyone here is really clear on it.

I just cannot see how any distinction along these lines makes more sense than just allowing that all apparent intelligence has a pretty good chance of actually being intelligence. All intelligence is, in a sense, artificial, if we are to stick by the dictum that any entity that is simply operating according to its programming and the datasets on which it has been trained should not be considered "truly" sentient. Or, intelligence is intelligence, no matter its origin, the substrate it runs upon, or the relative simplicity, complexity, transparency, or opacity of its mind.
I am not opposed by any means to intelligence being attained artificially, at all. I'm just really quite sure that it's not possible with current neural network architectures, learning paradigms and our current technological limitations. None of these three things seem to be changing...
Like I said before, currently, reinforcement learning is the only hope, but if you know anything about the current state of reinforcement learning you know that hope is extremely slim.
 
Admittedly I may have glossed over some of the exact specifics of this particular case and focused more on what I see as a generally unwarranted scepticism about the possibility of sentient AIs in general. I don't think I have any particularly strong argument at this point but I will just respond to a couple things. As I perceive it, your argument is essentially that exposure to language alone is not sufficient to result in - let's say "self awareness" rather than sentience. Agreed that the definition of sentience itself is fairly hazy - I guess I would define sentience for the purposes of this discussion as having the capacity to experience qualia, although I know that qualia is also a philosophically problematic term so that may not be a particularly helpful way to look at it either. That said...

I'm not saying that in some magical kinda way, in these huge NNs, some crazy things can happen that could eventually produce some kind of results that go way beyond some recognized patterns.
I think that I would dispute that this could happen - I think that human sentience, and by extension, probably, all sentience, is, essentially, reducible to pattern recognition.

But for LaMDA or similar programs to be sentient, they should be aware of the world around them, and they are simply not. First of all, it is essentially all about pattern matching/classification/recognition; it's what the algorithms do. Secondly, it's a language model only, there is no understanding of the world behind it: it doesn't try to have any understanding, it doesn't need to have any understanding and it's not trained to have any understanding. Its words simply stem from a large amount of similar questions by humans, and maybe it can extrapolate some insights a little further.
But... how do you know they're not aware of the world around them? Arguably, they are aware of a representation of it, and do indeed have some understanding of it, if limited, and constructed from the only sense data they have access to, which in this case, is language input. Humans have more sensory inputs, but we still do not have a complete picture of the world around us. One could say we are more aware, but if we concede that it is a sliding scale, then why should it be the case that language alone is not enough to grant an entity some true "awareness" or understanding of the world? To an entity that could perceive the entire EM spectrum, sense gravitational waves and neutrino flux, the expansion of space driven by dark energy, following this line of reasoning, surely humans themselves would be mere pattern recognition engines operating on a highly limited dataset, not truly "sentient", whatever that means, or "truly aware"?

A kid doesn't learn by being in a black room for years and listening to random sentences, it does so by interacting with the world, and then when some language is thrown on top, the magic probably happens.
So... there is magic, at some point? :) In that case I would say: where is the line? Personally I'd say there just isn't one; it's a sliding scale, and attempts to put a mark on the graph of sentience/consciousness/awareness/whatever and say "the magic happens here" are just unavoidably arbitrary.

So unless one thinks access to language spoken by humans is alone enough to produce "sentience" (I don't think anyone would say this), I don't see at all how this machine could in any way be sentient.
Well... there are babies born who cannot see, hear, taste or smell, who can acquire language and the ability to communicate through purely tactile means. It's a painstaking process for sure, but I don't think that you would claim that because they lack certain sensory inputs, they are therefore not sentient. There are surely humans paralysed from shortly after birth who lack all senses except hearing. One might argue that human speech contains more nuance than simple text data, and this is true, but again, it just becomes a question of a sliding scale. Are these people "less sentient" than those with all their senses and abilities to interact directly with the world intact?

I'll turn the tables around, what will satisfy you as an explanation?
I guess I'm gonna expose myself as arguing in bad faith now (well... not entirely, I mean, it's an interesting debate I think, I hope everyone involved feels the same regardless) because my first impulse is to say that nothing would fully satisfy me, because I generally subscribe to panpsychist ideas about the nature of consciousness - that it is an inherent property of the universe and thus there is no such thing as a philosophical zombie, either animal or machine.

I guess what would satisfy me that an AI that appeared sentient was actually not, at least not in the way most people would consider conscious, would be if it was exposed that it was being monitored by teams of engineers at all times, behind the scenes, writing new algorithms for every new question and vetting the responses so that it appeared to be smarter than it actually was. I understand that to some extent - given the heavily edited transcript of the LaMDA "conversation" - it could be said that something like this was happening, even if not quite as blatantly as I've just described.

But frankly, even a self-learning algorithm giving pretty garbled responses - I might not call it "sentient", as such, but I'd entertain the possibility that even a basic chatbot has some kind of dimly lit internal world. I recognise that admitting this may sound a little absurd to many, and I get that too - it is.
 