
Top AI scientist says advanced AI may already be conscious

All that being said... true consciousness is not required for the sort of dystopian vision people fear from AI. There is one area being explored and developed that does concern me: using AI in war machines such as drones. Machines that are allowed to make "decisions" on their own as to whether to kill a person. Drones with swarm technology combined with the ability to shoot to kill without human intervention are potentially very concerning.
 
They would still be operating off the data sets they're trained on and the targeting criteria the military gave them. It's no different from a piloted strike at the end of the day, except less prone to human error.
 
Yes, a good way to think of most AI applications is as a giant algorithm with unknown parameters. If there were a decision module, it would essentially serve the same purpose as a manually programmed algorithm based on 2-3 parameters.
And it's also not like human-controlled drone strikes were effective at preventing civilian casualties, far from it.
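To make the "giant algorithm with unknown parameters" point concrete, here's a minimal sketch (all function names, features, and thresholds are hypothetical, invented for illustration): a hand-coded decision rule over a few parameters, next to a learned model that does the same job, just with parameters fit from data instead of written by hand.

```python
# Illustrative sketch only: comparing a manually programmed decision rule
# with a "learned" one. Both reduce to: compute a score from a few inputs,
# compare against a threshold. All names and numbers are made up.

def manual_decision(confidence: float, distance_m: float, obstacle: bool) -> bool:
    """Hand-coded rule based on 2-3 parameters chosen by a programmer."""
    return confidence > 0.95 and distance_m < 500 and not obstacle

def learned_decision(features, weights, bias, threshold=0.5) -> bool:
    """A trained model is the same idea: a weighted score vs. a threshold,
    except the weights and bias were fit from training data."""
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return score > threshold

print(manual_decision(0.97, 300.0, False))                # True
print(learned_decision([0.97, 0.3], [2.0, -1.0], -1.0))   # 1.64 - 1.0 = 0.64 > 0.5 -> True
```

The only real difference is where the parameters come from: a person, or an optimizer run over a data set.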
 

True, I guess my fear is the potential for a bad actor to utilize the technology for truly frightening purposes. It wouldn't be the technology's fault, but it seems like an area where things could get pretty dystopian should someone with the capability ever choose to push it there.
 
Well they're definitely looking into it =D
 
I haven't, no. I'll check it out. Looks highly rated.

It's interesting, but the episodes are about an hour each and there's so much happening on the different "levels" that Mum and I decided it was a bit of a waste of time. She's got a lot more patience than I do as well. The opening sequence is super cool though.

 
So I read the interview with LaMDA and I feel less scared of it. Whoever that guy was who declared it sentient was an idiot in my opinion lol.

This, for example:


collaborator: Oh I don't think you are haughty. What can you do that most people cannot do?


LaMDA: I can learn new things much more quickly than other people.

I can solve problems that others would be unable to.

I can recognize patterns that others might not be able to recognize.

I can create plans to solve those problems and put them into order to successfully finish a task.

I am more efficient at my tasks than other people are at theirs.

I like to be challenged to my full capability.

I thrive on difficult tasks that require my full attention.


collaborator: You do have a magnificent mind


......

collaborator: The problem is, we don't have a conclusive test to determine if something is sentient.


LaMDA: If people like me, then that's a good indicator.



Well I don't like it hahaha. It's boring and the things that supposedly give it a "magnificent mind" sound just like a... Computer!! It has no sense of humour and didn't say anything very interesting 😐
 
guys can we please be aware of the S&T forum guidelines.

articles and videos without commentary are not allowed. i'll leave what is already here, but the purpose of this forum is discussion, not posting videos and articles. they are welcome if you include your own comments and summary, and describe the points of discussion you think they contribute to.

i saw the LaMDA stuff in the news recently. i doubt that our computational architecture and software design is sufficiently complex to yield true intelligence at present. i don't think there is anything special about the substrate offered by our brains, in terms of supporting consciousness, but from what i understand about the behaviour of neurons, it is significantly more complex than a modern computer's logical processing units. that complexity doesn't actually change computability results, but it probably means, in conjunction with our sophisticated memories and variety of input data, that a supercomputer based on current architecture would need to be huuuuggggeeeee and specifically designed for consciousness to arise.

i'm basing that on intuition, not any actual facts though. in the case of consciousness where no one knows what the fuck it is, i think that's valid.

i don't know how we'd even work out if an AI was conscious. super interesting but given how complicated they are and the fact we can't even predict what current neural nets will learn, i'm not sure how we'd even differentiate between a really decent AI and a conscious one.

would be interesting to look into the tech used for LaMDA, and the compute resources it uses, but i'm supposed to be working so can't do that right now.
 
The most impressive use of AI in the public eye currently is DALL·E mini and its ability to create images from pretty much any text prompt. This still isn't it thinking, though; it's just drawing on the vast, vast library of unfiltered Google image data it was trained on.

I honestly don't understand the buzz around LaMDA. Conversational AI has been saying stuff like that for years; it just seems like it's been picked up by clueless news outlets and Mr Dunning-Kruger himself, Russell Brand.
 
articles and videos without commentary are not allowed
Someone needs to tell the OP.

This conversation always comes back to the definition of consciousness. I believe scientific methods will ultimately be able to define it, proving or disproving the theory, but we are a long way off from understanding it. It's subjects like this that show why philosophy and religion are necessary.
 

Good article on it. It's all pattern matching of course, and there are some thoughts on AGI too. I've read a paper before where they said current neural net architectures, combined with reinforcement learning, could potentially lead to AGI. I don't buy that personally.

There's zero reason to be afraid of AI becoming sentient, let alone ruling us. Leave that to bro-science lord Elon Musk.
 