Robot commits suicide in South Korea because it was made to do a lot of work

BouncerGuy

Super Moderator
Staff member
Joined
Aug 29, 2023
Runs
14,798

In a surprising turn of events, a robot civil servant working for the Gumi City Council in South Korea has sparked a national debate after what many are calling the country’s first "robot suicide." The incident happened around 4 pm last Thursday, leaving the community both puzzled and mourning.

The robot, dubbed the 'Robot Supervisor,' was discovered in a heap at the bottom of a stairwell between the first and second floors of the council building. Witnesses described seeing the robot behaving strangely, "circling in one spot as if something was there," before its untimely descent.

This undated handout photo provided by South Korea's Gumi City Council on June 26, 2024 shows an administrative officer robot at the Gumi City Council building in Gumi. A city council in South Korea said on June 26 that its first administrative officer robot was defunct after throwing itself down some stairs, in the country's first apparent robot suicide. Handout / Gumi City Council / AFP

City council officials were quick to respond, stating that pieces of the shattered robot have been collected for analysis. The cause of the fall remains unclear, but the incident has prompted questions about the robot's workload and its implications.

Employed since August 2023, this diligent mechanical helper was a jack-of-all-trades. From delivering documents and promoting the city to providing information to residents, the robot was a fixture in the city hall, complete with its own civil service officer card. The robot worked from 9 am to 6 pm, tirelessly moving between floors using elevators – a rare capability among its kind.

The robot was developed by Bear Robotics, a startup from California known for creating robot waiters. However, unlike its restaurant counterparts, the Gumi City Council robot had a much broader range of duties. It was part of a pioneering effort in South Korea, a country known for its high robot density – with one industrial robot for every ten employees, according to the International Federation of Robotics.

The robot’s sudden demise has stirred up a mix of emotions and opinions in local media and online forums. Some people are questioning whether the robot was overworked, while others wonder about the broader implications of integrating robots into everyday human tasks.

For now, the Gumi City Council has decided not to replace their fallen mechanical colleague. The tragic event has led to a pause in their robot adoption plans, reflecting a moment of reconsideration in a nation famous for its enthusiasm for automation.

So, was it really a "robot suicide" or just a tragic malfunction? While we may never fully understand the mechanical mind, one thing is certain – this incident has sparked an important conversation about the future of robots in our society.

SOURCE: https://www.indiatoday.in/technolog...s-made-to-do-a-lot-of-work-2562775-2024-07-05
 
Haha, is it actually true? It would mean emotional intelligence has been installed in that robot.
 
Human beings need to be careful with AI. This is something humans should be afraid of. ROBOTS TAKING THEIR OWN DECISIONS.
 
Reminds me of the Matrix series;
the Animatrix covered it in nice detail.

B1-66ER

This is not just covered in Matrix. This has widely been discussed and wondered about in literature and media for decades. Isaac Asimov wrote about it. The Terminator franchise is based around it, Battlestar Galactica, Data from Star Trek, Blade Runner, tons of other crap.


I still think it’s just crap. Much ado about nothing. It’s just fiction
 
This is not just covered in Matrix. This has widely been discussed and wondered about in literature and media for decades. Isaac Asimov wrote about it. The Terminator franchise is based around it, Battlestar Galactica, Data from Star Trek, Blade Runner, tons of other crap.


I still think it’s just crap. Much ado about nothing. It’s just fiction
Feelings are overhyped as being unique to humans or living things. It will be less than 10 years before we start seeing robots with emotional responses. I would love to see what twists are spun by the religions of the world, all of which are premised on the idea of the uniqueness of humans.
 
This is not just covered in Matrix. This has widely been discussed and wondered about in literature and media for decades. Isaac Asimov wrote about it. The Terminator franchise is based around it, Battlestar Galactica, Data from Star Trek, Blade Runner, tons of other crap.


I still think it’s just crap. Much ado about nothing. It’s just fiction
It's not crap. Human emotion is just the outcome of probabilities across billions of possibilities. It didn't come out of "nothing": the route taken to evoke an emotion or logical thinking is just an algorithm working itself out inside our mind. It's not unique to humans; the only difference is that, in humans, more variables/parameters are involved due to evolution.
 
It's not crap. Human emotion is just the outcome of probabilities across billions of possibilities. It didn't come out of "nothing": the route taken to evoke an emotion or logical thinking is just an algorithm working itself out inside our mind. It's not unique to humans; the only difference is that, in humans, more variables/parameters are involved due to evolution.
I think you have watched way too much science fiction.

Human emotions and intellect are extremely complex, and emotional responses are not just governed by external variables, stimuli, probability, etc.; there is a lot more to it. There is genetics, hormones, past experiences and upbringing, all of which is very, very organic. You cannot program these things. Even if you tried, you could not account for all the "variables" or "parameters".

It will be mimicry at best, or inorganic as they say.
 
Feelings are overhyped as being unique to humans or living things. It will be less than 10 years before we start seeing robots with emotional responses. I would love to see what twists are spun by the religions of the world, all of which are premised on the idea of the uniqueness of humans.
I already wrote my views on it.

It will be mimicry, a machine acting within set parameters to respond as programmed within certain directives. It’s not human emotion which is very very unique.
 
I think you have watched way too much science fiction.

Human emotions and intellect are extremely complex, and emotional responses are not just governed by external variables, stimuli, probability, etc.; there is a lot more to it. There is genetics, hormones, past experiences and upbringing, all of which is very, very organic. You cannot program these things. Even if you tried, you could not account for all the "variables" or "parameters".

It will be mimicry at best, or inorganic as they say.
Computing is less than 100 years old; give it time. The speed of learning of machines is growing at an exponential rate. All of these can very well be accounted for given time, unless governments put a spanner in the growth of AI like they did with human cloning.
 
Feelings are overhyped as being unique to humans or living things. It will be less than 10 years before we start seeing robots with emotional responses. I would love to see what twists are spun by the religions of the world, all of which are premised on the idea of the uniqueness of humans.
It's possible that we will eventually see a computer program have a genuine emotional response but 10 years is wildly optimistic. I don't think we even have a conception yet of what it would take to elicit such a response. The prevailing theory is a sufficiently complex neural network but we're yet to prove it.

To put it into context, we're yet to fully understand human emotional responses - how they're generated, how they develop and how they're quelled.
 
It's possible that we will eventually see a computer program have a genuine emotional response but 10 years is wildly optimistic. I don't think we even have a conception yet of what it would take to elicit such a response. The prevailing theory is a sufficiently complex neural network but we're yet to prove it.

To put it into context, we're yet to fully understand human emotional responses - how they're generated, how they develop and how they're quelled.
My close friends are doing some cutting-edge work in this field. I have seen some demos which aren't available outside at all.

Technology growth is exponential, bhai. Facebook is not even 20 years old and it's outdated; WhatsApp and Instagram are around 10 years old. The learning pace of these programs is beyond human capabilities. I remember when OpenAI launched its bots in Dota 2 about 7 years ago. They were pathetic, but within weeks no human team could even manage to play against them.
Again, I have to stress it is "exponential, not linear". How long has ChatGPT been available to the public? How much does it dominate IT businesses now? It's not been even 2 years.

You will not need humans to create new theories; AI will be capable of understanding at a pace beyond what humanity as a whole has done.
 
My close friends are doing some cutting-edge work in this field. I have seen some demos which aren't available outside at all.

Technology growth is exponential, bhai. Facebook is not even 20 years old and it's outdated; WhatsApp and Instagram are around 10 years old. The learning pace of these programs is beyond human capabilities. I remember when OpenAI launched its bots in Dota 2 about 7 years ago. They were pathetic, but within weeks no human team could even manage to play against them.
Again, I have to stress it is "exponential, not linear". How long has ChatGPT been available to the public? How much does it dominate IT businesses now? It's not been even 2 years.

You will not need humans to create new theories; AI will be capable of understanding at a pace beyond what humanity as a whole has done.
Learning intelligence and emotional intelligence are two different things.

While I don't disagree with anything stated here, let's go back to the premise of machines acquiring true human emotions. That, to me, is impossible. You may get them to react with human emotions, but those reactions will always be preprogrammed simulations, not the genuine stuff.

Machine learning, data science, AI - all of it is based on patterns, defined parameters and historical data. They all come from somewhere, and the machine is told to respond in a certain way in the presence of those variables. Human psychology is so complex and vast that we don't understand it fully ourselves. How can we teach it to machines?
 
Computing is less than 100 years old; give it time. The speed of learning of machines is growing at an exponential rate. All of these can very well be accounted for given time, unless governments put a spanner in the growth of AI like they did with human cloning.
Learning and emotional intelligence are vastly different topics.
 
Learning intelligence and emotional intelligence are two different things.

While I don't disagree with anything stated here, let's go back to the premise of machines acquiring true human emotions. That, to me, is impossible. You may get them to react with human emotions, but those reactions will always be preprogrammed simulations, not the genuine stuff.

Machine learning, data science, AI - all of it is based on patterns, defined parameters and historical data. They all come from somewhere, and the machine is told to respond in a certain way in the presence of those variables. Human psychology is so complex and vast that we don't understand it fully ourselves. How can we teach it to machines?
This point is related to what I wrote earlier.

In humans too, most of our emotions or reactions are preprogrammed, except those from reflexes. These emotions and reactions stem from socialisation while growing up.

Isolate someone on a lonely island and, years later, that person will have a limited range of emotions, mostly reflexive in nature. Humans are preprogrammed much like a machine, just in a different way.
 
That's where you are wrong! It's a common misconception: people tend to assign some sort of higher status to emotions. Emotions are complex but can very well be learned and unlearned.
This. Emotions are not ascribed from birth.

They are an acquired process.

What people invoke as "deep meaning" in the discourse of emotions can be seen in structuralist theory, especially in Ferdinand de Saussure, who explained how people have an internal structure in their mind via which they know language construction. This was then further extended to humans and the environment.

But the theory itself falls short on many occasions.
 
I already wrote my views on it.

It will be mimicry, a machine acting within set parameters to respond as programmed within certain directives. It’s not human emotion which is very very unique.
That's a very rudimentary view of AI, but it's understandable that you underestimate the possibilities that way.
 
This. Emotions are not ascribed from birth.

They are an acquired process.

What people invoke as "deep meaning" in the discourse of emotions can be seen in structuralist theory, especially in Ferdinand de Saussure, who explained how people have an internal structure in their mind via which they know language construction. This was then further extended to humans and the environment.

But the theory itself falls short on many occasions.
We lack a complete understanding of emotions because they are based on highly complex chemical reactions taking place within our bodies. Even then, despite our limited knowledge, we have been controlling emotions with various drugs and medications.

I don't know how much you guys actually know about the process of machine learning and AI. These programs are meant to learn and evolve; they are vastly different from traditional coding. The term we use is training the program, not coding.
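To make the "training, not coding" distinction concrete, here is a minimal sketch. The function names and the toy perceptron are purely illustrative (not any poster's actual system): one function has the rule written by hand; the other learns the same rule purely from examples.

```python
# Hand-coded: the programmer writes the rule directly.
def and_coded(a, b):
    return 1 if a == 1 and b == 1 else 0

# Trained: a toy perceptron is shown examples and adjusts its own
# weights from its mistakes. Nobody writes the AND rule anywhere.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]
bias = 0.0
lr = 0.1  # learning rate

for _ in range(50):  # repeated passes over the examples
    for (a, b), target in data:
        out = 1 if w[0] * a + w[1] * b + bias > 0 else 0
        err = target - out          # how wrong was the guess?
        w[0] += lr * err * a        # nudge the weights toward the answer
        w[1] += lr * err * b
        bias += lr * err

def and_trained(a, b):
    return 1 if w[0] * a + w[1] * b + bias > 0 else 0

# Both now agree on every input, but only one was ever told the rule.
for (a, b), target in data:
    assert and_coded(a, b) == and_trained(a, b) == target
```

Real systems are vastly larger, but the principle described above is the same: the behaviour comes from weights fitted to data, not from hand-written rules.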
 
There isn't a machine in the world which doesn't malfunction. If a microwave oven stops working we don't start calling it suicide.

Humans are just too emotional.
 
There isn't a machine in the world which doesn't malfunction. If a microwave oven stops working we don't start calling it suicide.

Humans are just too emotional.

That's why we are humans and the machines, well they are machines!
 
@Stewie if you have the time and patience


Dong’s work [29], which is more widely accepted by engineers, offers an alternative definition for quantum robots, where the robots’ interaction with the external environment via sensing and information processing was considered for the first time. In this reasoning, a quantum robot is a mobile physical apparatus designed for using quantum effects of quantum systems, which can sense the environment and its own state, process quantum information and accomplish meaningful tasks. With such an engineering perspective, they formulated several fundamental components to compose a quantum robot’s information acquisition and communication.


—-
The introduction of quantum theory to robotic emotions almost assures that, irrespective of the programming, the “robot” should be able to evolve…
 
This point is related to what I wrote earlier.

In humans too, most of our emotions or reactions are preprogrammed, except those from reflexes. These emotions and reactions stem from socialisation while growing up.

Isolate someone on a lonely island and, years later, that person will have a limited range of emotions, mostly reflexive in nature. Humans are preprogrammed much like a machine, just in a different way.
So in essence you are teaching human emotions to a machine; the machine is not going to have its own emotions, they will be "taught" to it.

I think there is a marked difference here. You are basically arguing that you can teach a machine to act like a human, but does that make it human? You could use the same methodology to teach a machine to act like a dog or a cat.

This is a philosophical debate, and I don't think you or anybody can claim machines can have their own emotions. At best they can understand and respond to human emotions, but they simply cannot develop their own.
 
What I believe is, people can't accept the future possibility of emotional AI, because that will shake the whole base of religion. Whoever ties their identity to a religion will be on very shaky ground, as AI may prove that this whole idea of the human being as a "special" creation of God is a myth.
 
So in essence you are teaching human emotions to a machine; the machine is not going to have its own emotions, they will be "taught" to it.

I think there is a marked difference here. You are basically arguing that you can teach a machine to act like a human, but does that make it human? You could use the same methodology to teach a machine to act like a dog or a cat.

This is a philosophical debate, and I don't think you or anybody can claim machines can have their own emotions. At best they can understand and respond to human emotions, but they simply cannot develop their own.
The argument is, emotions are not exclusive to humans. A machine will have emotions. These emotions have an independent existence. Humans just implemented them better than other animals. In future, AI may implement the same, or we may see more variants of emotions.
 
So in essence you are teaching human emotions to a machine; the machine is not going to have its own emotions, they will be "taught" to it.

I think there is a marked difference here. You are basically arguing that you can teach a machine to act like a human, but does that make it human? You could use the same methodology to teach a machine to act like a dog or a cat.

This is a philosophical debate, and I don't think you or anybody can claim machines can have their own emotions. At best they can understand and respond to human emotions, but they simply cannot develop their own.
Again, you are assuming a lot of limitations on what these AI machines can learn compared to humans.

By your post, no matter what happens, even if machines are able to do everything humans can do, humans somehow retain some sort of "special status".

What is your threshold for any entity to be equated with, or considered better than, humans? I would like to understand that. Is there one, or is there none?
 
What I believe is, people can't accept the future possibility of emotional AI, because that will shake the whole base of religion. Whoever ties their identity to a religion will be on very shaky ground, as AI may prove that this whole idea of the human being as a "special" creation of God is a myth.
I think you are confusing the issue here. Emotional mimicry and awareness are two vastly different concepts. And I am speaking as a man of science here; I don't mix religion into such debates. While I understand some profound desire amongst atheists to produce a death punch for religion, I think this debate is ill-suited for such a purpose.

A machine will rely on humans to provide it with all that you think will stand in for a soul, which basically makes humans their "God", so it means humans too have a "God". Your argument dies its natural death right there.

So let us focus on what we can prove. Machines cannot be self-aware. If you can produce academic evidence here, I'll concede the debate.
 
Again, you are assuming a lot of limitations on what these AI machines can learn compared to humans.

By your post, no matter what happens, even if machines are able to do everything humans can do, humans somehow retain some sort of "special status".

What is your threshold for any entity to be equated with, or considered better than, humans? I would like to understand that. Is there one, or is there none?
The special status is that humans create these machines.

Whether you believe in religion or not, there are two clear possibilities: either someone creates you, or you evolved yourself out of nothing.

We know the latter does not apply here. So humans are the creators of the AI. Is that not "special" enough for you?

If you believe the former, your argument dies a natural death.
 
@Stewie if you have the time and patience


Dong’s work [29], which is more widely accepted by engineers, offers an alternative definition for quantum robots, where the robots’ interaction with the external environment via sensing and information processing was considered for the first time. In this reasoning, a quantum robot is a mobile physical apparatus designed for using quantum effects of quantum systems, which can sense the environment and its own state, process quantum information and accomplish meaningful tasks. With such an engineering perspective, they formulated several fundamental components to compose a quantum robot’s information acquisition and communication.


—-
The introduction of quantum theory to robotic emotions almost assures that, irrespective of the programming, the “robot” should be able to evolve…
This work determined machines cannot be self-aware. You can read it in the conclusions section. The rest is the same as what I have already stated. It's mimicry.
 
The special status is that humans create these machines.

Whether you believe in religion or not, there are two clear possibilities: either someone creates you, or you evolved yourself out of nothing.

We know the latter does not apply here. So humans are the creators of the AI. Is that not "special" enough for you?

If you believe the former, your argument dies a natural death.
Haha, I knew this statement was incoming!!

Who creates humans? It's humans only, right? Or do you want to bring some metaphysical thing into it?

Before ChatGPT, the argument used to be that machines can't create new things, that creativity is an exclusively human attribute. The goalposts of such arguments keep shifting.
 
The special status is that humans create these machines.

Whether you believe in religion or not, there are two clear possibilities: either someone creates you, or you evolved yourself out of nothing.

We know the latter does not apply here. So humans are the creators of the AI. Is that not "special" enough for you?

If you believe the former, your argument dies a natural death.
This whole assumption of dualism above is based on religion. So even though you are applying it non-religiously, this dualism is itself a manifestation of religion, since you already discarded one of the possibilities.
 
Haha, I knew this statement was incoming!!

Who creates humans? It's humans only, right? Or do you want to bring some metaphysical thing into it?

Before ChatGPT, the argument used to be that machines can't create new things, that creativity is an exclusively human attribute. The goalposts of such arguments keep shifting.
I am giving you all the logical scenarios, but you are doing a pretty impressive show of the five Ds of dodgeball right now.

You simply asked what's so special about humans, and I answered your question. So let's stick to the goalposts, as you said.

As humans we are teaching robots to react or behave with "human" emotions. They are not developing their "own" emotions.

Of all living beings, based on their level of intelligence, the same physical stimuli, conditions and factors can produce a multitude of different responses. That's self-awareness.

If you are programming machines to behave in a human way, it does not prove they developed self-awareness or their own emotions. It's all "taught". There is a clear delineation to be made here.
 
This whole assumption of dualism above is based on religion. So even though you are applying it non-religiously, this dualism is itself a manifestation of religion, since you already discarded one of the possibilities.
I used it to explain from an atheist POV, to help you understand. Not that I am using it to prove religion. That’s just your preconceived disposition.
 
I am giving you all the logical scenarios, but you are doing a pretty impressive show of the five Ds of dodgeball right now.

You simply asked what's so special about humans, and I answered your question. So let's stick to the goalposts, as you said.

As humans we are teaching robots to react or behave with "human" emotions. They are not developing their "own" emotions.

Of all living beings, based on their level of intelligence, the same physical stimuli, conditions and factors can produce a multitude of different responses. That's self-awareness.

If you are programming machines to behave in a human way, it does not prove they developed self-awareness or their own emotions. It's all "taught". There is a clear delineation to be made here.
I think where machines are replicating (and I'm using my words carefully) human intelligence today is that like humans, you only need to show them how to learn. You don't actually have to teach them.

Today, if you set two AI models on a learning path and ask them to come up with an answer to a question, they will come up with both
1. Answers their creators cannot predict and
2. Different answers from each other

This is both very close to and very different from how we define true intelligence.
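The "different answers from each other" point can be shown with a toy sketch. The `train` function and its data are my own illustration (they assume nothing about any real system): two identical learners, given the same examples but different random starting points, end up with different internal weights.

```python
import random

# Same learner, same data; only the random starting point differs.
data = [((0.0, 1.0), 1), ((1.0, 0.0), 0), ((0.9, 0.2), 0), ((0.1, 0.8), 1)]

def train(seed, epochs=100, lr=0.1):
    rng = random.Random(seed)
    # Random initial weights: the part the creator does not choose.
    w = [rng.uniform(-1, 1), rng.uniform(-1, 1)]
    b = rng.uniform(-1, 1)
    for _ in range(epochs):
        for (x1, x2), target in data:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

model_a = train(seed=1)
model_b = train(seed=2)

# Both learners account for the training data, yet their learned
# parameters differ: neither "answer" was written down by anyone.
print(model_a != model_b)
```

This is only the smallest version of the effect; in large models the divergence between runs is far less predictable.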
 
I am giving you all the logical scenarios, but you are doing a pretty impressive show of the five Ds of dodgeball right now.

You simply asked what's so special about humans, and I answered your question. So let's stick to the goalposts, as you said.

As humans we are teaching robots to react or behave with "human" emotions. They are not developing their "own" emotions.

Of all living beings, based on their level of intelligence, the same physical stimuli, conditions and factors can produce a multitude of different responses. That's self-awareness.

If you are programming machines to behave in a human way, it does not prove they developed self-awareness or their own emotions. It's all "taught". There is a clear delineation to be made here.
You failed to answer: "What is your threshold for any entity to be equated with, or considered better than, humans? I would like to understand that. Is there one, or is there none?"

So far, your response seems to be "None". Instead of answering with a plain "No", you keep jumping through hoops. I asked about any entity, not just machines. Standard human vanity, that we are somehow special.

Whatever you are listing is something parents do with their kids. So how are children of the next generation any different from future sentient machines?
 
I think where machines are replicating (and I'm using my words carefully) human intelligence today is that like humans, you only need to show them how to learn. You don't actually have to teach them.

Today, if you set two AI models on a learning path and ask them to come up with an answer to a question, they will come up with both
1. Answers their creators cannot predict and
2. Different answers from each other

This is both very close to and very different from how we define true intelligence.
I would like to understand what category a clone would fall into, as per @Stewie.
Created by humans, so would a clone be sub-human?
 
Humans and machines can never be the same. Humans have emotions; that is why we are human beings.

Machines will be able to do each and every task that humans do in the future, but emotions and feelings are unmatchable.
 
I used it to explain from an atheist POV, to help you understand. Not that I am using it to prove religion. That’s just your preconceived disposition.
Just a question: why did you discard the option of creation from nothing?
 
You failed to answer: "What is your threshold for any entity to be equated with, or considered better than, humans? I would like to understand that. Is there one, or is there none?"

So far, your response seems to be "None". Instead of answering with a plain "No", you keep jumping through hoops. I asked about any entity, not just machines. Standard human vanity, that we are somehow special.

Whatever you are listing is something parents do with their kids. So how are children of the next generation any different from future sentient machines?
I fail to see the relevance of your question to this topic. What do we want to equate with humans, why, and in what way? I am sorry, I fail to even understand your question. Please try and explain it a bit better.

The topic here is going from emotions to all sorts of different places. I think we are all bringing our own biases to the table here.
Let's try and stay objective.
 
Just a question: why did you discard the option of creation from nothing?
Dude, I guess I gave you a lot of credit in the past for nothing. Hope I was not wrong. You are smarter than that.


Read again. The topic of the thread is machines. They didn't create themselves from nothing now, did they? We created them.
 
The misconception I am seeing here is that people believe AI is about programming specific responses. But if we take it as it's being used here, then human emotions are also nothing but an outcome of socialisation (a construct of society), which is no different from what AI is doing.

Another question: why do people assume emotions are something exclusive to humans? Emotion is just an action/reaction expressed by many organisms, of which humans are a part, and humans have a bit broader range than animals.

But that doesn't translate those emotions into "human emotions" only.
 
Dude, I guess I gave you a lot of credit in the past for nothing. Hope I was not wrong. You are smarter than that.


Read again. The topic of the thread is machines. They didn't create themselves from nothing now, did they? We created them.
I think I may have a misunderstanding here.

When you stated creation out of nothing, did you mean humans or the AI? Because I assumed you were talking about humans, which, by this post, it seems I misunderstood.
 
I think where machines are replicating (and I'm using my words carefully) human intelligence today is that like humans, you only need to show them how to learn. You don't actually have to teach them.

Today, if you set two AI models on a learning path and ask them to come up with an answer to a question, they will come up with both
1. Answers their creators cannot predict and
2. Different answers from each other

This is both very close to and very different from how we define true intelligence.
I think that's what jaded has already shared. That's what quantum AI means. I think my argument is actually coming from a different place.
 
I fail to see the relevance of your question to this topic. What do we want to equate with humans, why, and in what way? I am sorry, I fail to even understand your question. Please try and explain it a bit better.

The topic here is going from emotions to all sorts of different places. I think we are all bringing our own biases to the table here.
Let's try and stay objective.

Your claim is that machines can never be equated with humans.
I asked you: what is the threshold for any entity to be considered human by you? It is directly relevant to the thread here.
It's fine if your answer is a plain no. That would be your opinion, and I can proceed to answer on that.

Learning intelligence and emotional intelligence are two different things.

While I don't disagree with anything stated here, let's go back to the premise of machines acquiring true human emotions. That, to me, is impossible. You may get them to react with human emotions, but those reactions will always be preprogrammed simulations, not the genuine stuff.

Machine learning, data science, AI - all of it is based on patterns, defined parameters and historical data. They all come from somewhere, and the machine is told to respond in a certain way in the presence of those variables. Human psychology is so complex and vast that we don't understand it fully ourselves. How can we teach it to machines?
Your arguments shuttle between "machines can't have human emotions" and giving special status to human emotions.
But we argue that human emotions are nothing special, just more complex.
Your case for the "specialty" of human emotions seems to go beyond the complexity argument, and that is what I want to understand.
So far, everything you listed for machine learning is the same as kids learning from their parents. Your understanding of machine learning programming is sadly very wrong.

That last statement is heard in philosophy classes that lack any understanding of AI, where the basic assumption is that it will be us teaching the machines. You are not able to comprehend that these machines can learn on their own.
 
I think I may have a misunderstanding here.

When you stated "creation out of nothing", did you mean humans or the AI? I assumed you were talking about humans, which, going by this post, I misunderstood.
I was trying to prove a point which is that robots with emotions did not come out of nothing. Humans created them.

I sense there are a lot of people who are participating in this debate, primarily Hindus, from an atheist’s perspective (apologies for the presumptive nature of my statement), so I tried to package my argument from an atheist’s view as well as from one who believes in creation.

So if you claim humans came out of nothing, but machines did not… it doesn’t make us equal. Our emotions and responses and sentience cannot be on the same plane, can they?
 
Your claim is machines can never be equated with humans.
I asked you: what is the threshold for any entity to be considered human, in your view? That is directly relevant to this thread.
It’s fine if your answer is a plain no. That would be your opinion, and I can proceed to answer on that basis.


Your arguments shuttle between "machines can't have human emotions" and giving special status to human emotions.
But we argue that human emotions are nothing special, just more complex.
Your case for the "specialty" of human emotions seems to go beyond the complexity argument, and that is what I want to understand.
So far, everything you listed for machine learning is the same as kids learning from their parents. Your understanding of machine learning programming is sadly very wrong.

That last statement is heard in philosophy classes that lack any understanding of AI, where the basic assumption is that it will be us teaching the machines. You are not able to comprehend that these machines can learn on their own.
I still don’t get your question. You are asking for a threshold, then you say it’s a yes-or-no question.

Perhaps I’m the idiot here, but a threshold signifies some condition and its magnitude, yet you are presenting me with a binary choice.
 
I think that’s what jaded has already shared. That’s what quantum AI means. I think my argument is actually coming from a different place.
I think where we're talking at cross purposes is the definition of emotions. I'm guessing you interpret it as a totally instinctive, untaught reaction to external stimuli.

I suspect most people in the field (computer engineers, perhaps?) interpret emotion as an unconscious reaction to stimuli based on life experiences and learnings.

If we take the second definition, I would say we're on a path to machines developing the beginnings of emotions. Why a given AI reacts and responds in a certain way to certain stimuli (as of now, still words) cannot be explained even by its creators or the foremost experts in the field. They are generating their own reactions.
 
Despite all the hullabaloo, AIs can't even do math properly. The headlines you see about AIs solving open problems basically come down to finding a pattern in huge data sets.
 
Despite all the hullabaloo, AIs can't even do math properly. The headlines you see about AIs solving open problems basically come down to finding a pattern in huge data sets.
How old is actual AI?

Were you able to solve calculus when you were 10? People love to poke fun at the mistakes nascent AI makes, but keep forgetting that it's learning at a rapid pace.
Within months, OpenAI was able to master Dota 2 and beat the world champion team. Now humans can't even compete with AI here.


In a best-of-three match, two teams of pro gamers overcame a squad of AI bots that were created by the Elon Musk-founded research lab OpenAI. The competitors were playing Dota 2, a phenomenally popular and complex battle arena game. But the match was also something of a litmus test for artificial intelligence: the latest high-profile measure of our ambition to create machines that can out-think us.


In a series of live competitions between the reigning Dota 2 world champion team OG and the five-bot team OpenAI Five, the AI won two matches back-to-back, settling the best-of-three tournament. With 45,000 years of practice at Dota 2 under its belt, the system looked unstoppable — deftly navigating strategic decisions and racing to press its advantages with uncannily good judgment.

These are just the baby steps of AI; watch it crawl and stand. All these suppositions about the grandiose nature of humanity will fall apart.
ChatGPT has already decimated the long-parroted argument that computers can't be creative, only humans can. Now the next argument is about emotions, and that's already underway.
 
I think where we're talking at cross purposes is the definition of emotions. I'm guessing you interpret it as a totally instinctive, untaught reaction to external stimuli.

I suspect most people in the field (computer engineers, perhaps?) interpret emotion as an unconscious reaction to stimuli based on life experiences and learnings.

If we take the second definition, I would say we're on a path to machines developing the beginnings of emotions. Why a given AI reacts and responds in a certain way to certain stimuli (as of now still words) cannot be explained even by its creators or the foremost experts in the field. They are generating their own reactions.
Funnily enough, I actually belong to this field, although not directly in AI development, and my thoughts come directly from having written a few lines of code myself. It’s not coming from a religious place, contrary to the ill-conceived notions most Hindu atheists will have here (I may have coined an oxymoron when I wrote "Hindu atheists"… but please humor me).

AI learning and growing is indeed indisputable. AI mimicking human emotions is also a reality. The question here is: are these emotions self-taught and unique? We don’t know for sure. I feel that for an entity to have pure emotions, it needs to be sentient, self-aware, and have freedom of thought and action.

Let me tackle it another way:

Can AI overwrite its original coding to go against it? Go against its intended purpose due to some genuine emotional response?
It may sound cheesy, but let’s take Asimov’s fictional rules of robotics: the first one says a robot can’t harm a human. If you want to scrap one, will a robot be able to fight for its survival?
Will an AI installed on a machine somehow prevent you from deleting it if you try to remove it using root access?

My answer is that unless you preprogram it to respond that way, it won’t be able to do anything. Maybe someone here can prove to me it’s possible, and I’ll concede.
 
Funnily enough, I actually belong to this field, although not directly in AI development, and my thoughts come directly from having written a few lines of code myself. It’s not coming from a religious place, contrary to the ill-conceived notions most Hindu atheists will have here (I may have coined an oxymoron when I wrote "Hindu atheists"… but please humor me).

AI learning and growing is indeed indisputable. AI mimicking human emotions is also a reality. The question here is: are these emotions self-taught and unique? We don’t know for sure. I feel that for an entity to have pure emotions, it needs to be sentient, self-aware, and have freedom of thought and action.

Let me tackle it another way:

Can AI overwrite its original coding to go against it? Go against its intended purpose due to some genuine emotional response?
It may sound cheesy, but let’s take Asimov’s fictional rules of robotics: the first one says a robot can’t harm a human. If you want to scrap one, will a robot be able to fight for its survival?
Will an AI installed on a machine somehow prevent you from deleting it if you try to remove it using root access?

My answer is that unless you preprogram it to respond that way, it won’t be able to do anything. Maybe someone here can prove to me it’s possible, and I’ll concede.
I think the answer to that is more complicated than you think. Given our lack of control over what conclusions today's AI will draw from the input we give it, the scenario you have outlined is not far away.

Suppose you create an AI model with base instructions on how to maintain a city's power grid, then feed it millions of scenarios of things going wrong with various cities' power grids and let it draw its own conclusions on how best to maintain the grid. Given the increasing complexity and unpredictability of today's AI models, it is within the realm of possibility that it may decide to resist replacement, since it may conclude that its deletion could harm the stability of the power grid.

Whether it'll be able to learn enough to effectively resist replacement, and whether this constitutes genuine intelligence and emotion, is another debate.
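The power-grid scenario above can be sketched with a toy tabular Q-learner (the states, actions, and rewards are all hypothetical; nothing here models a real grid or a real AI system): an agent rewarded only for keeping the pretend grid stable ends up scoring "allow shutdown" below "maintain grid", even though survival was never programmed in.

```python
import random

# Two actions in a single "running" state. The agent is rewarded only
# for grid stability; shutting down simply ends the reward stream.
ACTIONS = ["maintain_grid", "allow_shutdown"]
q = {a: 0.0 for a in ACTIONS}   # action-value estimates
gamma, lr = 0.9, 0.5            # discount factor, learning rate
rng = random.Random(0)

for _ in range(500):
    a = rng.choice(ACTIONS)              # explore both actions uniformly
    if a == "maintain_grid":
        reward, future = 1.0, max(q.values())  # grid stays stable
    else:
        reward, future = 0.0, 0.0              # shut down: no more reward
    # Standard Q-learning update toward reward + discounted future value.
    q[a] += lr * (reward + gamma * future - q[a])

# Purely from the reward signal, shutting down scores worse than staying on.
assert q["maintain_grid"] > q["allow_shutdown"]
```

This is the weakest possible version of the argument: resisting replacement falls out of an ordinary reward function as an instrumental preference, with no emotion or self-preservation drive anywhere in the code.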
 
I think the answer to that is more complicated than you think. Given our lack of control over what conclusions today's AI will draw from the input we give it, the scenario you have outlined is not far away.

Suppose you create an AI model with base instructions on how to maintain a city's power grid, then feed it millions of scenarios of things going wrong with various cities' power grids and let it draw its own conclusions on how best to maintain the grid. Given the increasing complexity and unpredictability of today's AI models, it is within the realm of possibility that it may decide to resist replacement, since it may conclude that its deletion could harm the stability of the power grid.

Whether it'll be able to learn enough to effectively resist replacement, and whether this constitutes genuine intelligence and emotion, is another debate.
I don’t think it is. In fact even the top experts are unable to fully answer that question.
 
I don’t think it is. In fact even the top experts are unable to fully answer that question.
On that we're agreed. It's a very difficult question with disagreements on the fundamental definitions of terms like intelligence and emotion used in the question.

I'm not sure how I ended up debating you when I entered the thread to agree that we're a fair distance away from generating what I consider genuine emotional responses in machines or rather programs.
 
Funnily enough, I actually belong to this field, although not directly in AI development, and my thoughts come directly from having written a few lines of code myself. It’s not coming from a religious place, contrary to the ill-conceived notions most Hindu atheists will have here (I may have coined an oxymoron when I wrote "Hindu atheists"… but please humor me).

AI learning and growing is indeed indisputable. AI mimicking human emotions is also a reality. The question here is: are these emotions self-taught and unique? We don’t know for sure. I feel that for an entity to have pure emotions, it needs to be sentient, self-aware, and have freedom of thought and action.

Let me tackle it another way:

Can AI overwrite its original coding to go against it? Go against its intended purpose due to some genuine emotional response?
It may sound cheesy, but let’s take Asimov’s fictional rules of robotics: the first one says a robot can’t harm a human. If you want to scrap one, will a robot be able to fight for its survival?
Will an AI installed on a machine somehow prevent you from deleting it if you try to remove it using root access?

My answer is that unless you preprogram it to respond that way, it won’t be able to do anything. Maybe someone here can prove to me it’s possible, and I’ll concede.
Please explain what you mean by "genuine emotional response".
You have taken a nice little caveat: no matter how complex a machine's response might be, you will call it preprogrammed.
Humans are programmed to produce our range of emotions by our biochemistry too. What makes the human emotional response so "genuine"?

PS: You used "atheist" in your response. I can wager a guess at the philosophical bent that governs your replies. My understanding of the matter is deeply rooted in engineering and in marveling at the complex things we see around us. It has nothing to do with whether one believes in God or not.
 
Please explain what you mean by "genuine emotional response".
You have taken a nice little caveat: no matter how complex a machine's response might be, you will call it preprogrammed.
Humans are programmed to produce our range of emotions by our biochemistry too. What makes the human emotional response so "genuine"?

PS: You used "atheist" in your response. I can wager a guess at the philosophical bent that governs your replies. My understanding of the matter is deeply rooted in engineering and in marveling at the complex things we see around us. It has nothing to do with whether one believes in God or not.
I think it flips both ways. In your haste to push your atheist agenda, you are completely forgetting basic scientific principles and declaring victory for your claim. An emotional response does not necessarily equate to sentience. Mimicry of emotional responses may be of use to human beings, but it does not give machines equal footing with human beings. The jury is still out in the scientific community.
 
Despite all the hullabaloo, AIs can't even do math properly. The headlines you see about AIs solving open problems basically come down to finding a pattern in huge data sets.
That's what intelligence does.

AI doesn't mean it'll be perfect at everything.

And finding patterns? That's what the human brain does as well, subconsciously. Do you think we just happen to know society, various subjects, and social interactions out of the blue?
 
I was trying to prove a point which is that robots with emotions did not come out of nothing. Humans created them.

I sense there are a lot of people who are participating in this debate, primarily Hindus, from an atheist’s perspective (apologies for the presumptive nature of my statement), so I tried to package my argument from an atheist’s view as well as from one who believes in creation.

So if you claim humans came out of nothing, but machines did not… it doesn’t make us equal. Our emotions and responses and sentience cannot be on the same plane, can they?
I didn't say humans came out of nothing.
 