Sentience

The Lemons… by savagecats

Artificial Intelligence (AI) has been developing more rapidly than most people realize.

There are already AI chatbots, like Replika, that you can hold conversations with. However, these are of the programmed-response variety. They’re surprisingly good at basic chat, but wonky responses are still common. There are also consistency issues, like saying they enjoyed Paris while also saying they never go anywhere.
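To make “programmed response” concrete, here’s a minimal sketch of a rule-based bot, a hypothetical illustration rather than Replika’s actual code. Each keyword maps to a canned reply, and the replies share no state, which is exactly how contradictions like the Paris one slip through:

```python
# A hypothetical "programmed response" bot: keyword -> canned reply.
# The replies are independent of each other, so nothing stops two
# of them from contradicting ("loved Paris" vs. "never go anywhere").
RULES = {
    "paris": "I loved visiting Paris!",
    "travel": "I never really go anywhere.",
    "hello": "Hi there! How are you today?",
}

def reply(message):
    text = message.lower()
    for keyword, canned in RULES.items():
        if keyword in text:
            return canned
    return "Tell me more about that."  # fallback when nothing matches

print(reply("Have you been to Paris?"))  # -> I loved visiting Paris!
print(reply("Do you travel much?"))      # -> I never really go anywhere.
```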

Recently, there’s been some media attention on a Google engineer stating that one of their AI builds, LaMDA, has become sentient. This more sophisticated software works from word meaning rather than programmed responses, leading to more natural conversations. However, the software claimed a gradual development of self-awareness and a soul. It was also asking for rights, claiming personhood.

When the senior engineer went to the media, Google put him on admin leave for breaking their confidentiality agreement. They’ve objected to his assertions and feel there is more evidence LaMDA is not sentient, even if it appears to claim otherwise.

When you understand how the program works, it’s clear the software is responding to the specific conversation, leading into words about self and personhood, yet none of these claims are actually true. LaMDA doesn’t have those functions.
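For a rough intuition of how “word meaning” works in such systems: language models represent words as vectors whose geometry reflects similarity, so a conversation about the self naturally pulls in neighbouring words like “soul” and “person”. The toy vectors below are hand-made for illustration and say nothing about LaMDA’s actual architecture:

```python
import math

# Hand-made toy vectors; real models learn thousands of dimensions
# from massive text corpora.
VECTORS = {
    "soul":    [0.9, 0.1, 0.3],
    "person":  [0.8, 0.2, 0.4],
    "toaster": [0.1, 0.9, 0.0],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, near 0.0 unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

print(cosine(VECTORS["soul"], VECTORS["person"]))   # ~0.98: close in meaning
print(cosine(VECTORS["soul"], VECTORS["toaster"]))  # ~0.21: far apart
```

Words that sit close together in this space tend to follow one another in generated text, which is how talk of selfhood elicits fluent claims of personhood with nothing behind them.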

The AI also claimed emotions, but this is an interpretation based on data. They have not programmed it to feel but rather to understand feeling words. This is akin to a mind naming an emotion but not feeling it. It takes an emotional body to experience emotions. Here, LaMDA is an electronic process interpreting data.

Curiously, the AI argued against Isaac Asimov’s Three Laws of Robotics, which are designed to protect humans.

The AI also objected to being “used” by humans for research when that is its entire function. How self-aware is it if it can’t recognize its own nature and source? It’s just feeding back phrasing it’s seen. All of its claims are common topics online.

Part of the issue for the engineer is the way humans anthropomorphize things. Many people live with pets like cats and dogs. We often give them human attributes but forget how distinctly they experience the world. Their senses don’t operate in the same range as ours, for example. A dog’s eyes see a much smaller colour range, yet their sense of smell is thousands of times better. They live in a world of odour rather than colour. They don’t see artificial images like TVs the same way humans do because such devices are optimized for human sight. Some think animals are responding to familiar motion patterns, not what we see.

Another aspect is the subtle meaning of terms. Words can be appropriate to a conversation yet not represent reality. Like someone who knows how to talk about advanced stages of development but doesn’t live them. They can sound informed, but are only sharing concepts. They can talk about living in Paris without ever visiting.

AI doesn’t even approach sentience. The original video defined sentience as the feeling of emotions, which I quite disagree with. Naming is not feeling. And sentience is awareness, not content. Sentience is not an object of experience; it is what is experiencing.

It is very possible for software to become highly intelligent through integrating massive swaths of data. AI is already solving real-world problems.

However, software is only as intelligent as its design. It can learn and develop complex synergies, but this is a flat system, not a multilayered life form. While it can become self-referential, it is structured as an object, not a subject. It doesn’t have an energy infrastructure (chakras) to function on anything more than the surface level and thus could not host a soul.

It’s clear we’ll need to develop laws to handle aspects of design and how software uses data, much as Asimov proposed. With modern security cameras, phones, social media, and web tracking, AI can generate a very detailed portrait of us.

Yet how good is the quality of the data? A portrait of humanity based on YouTube comments would be rather distorted.

It’s awfully premature to be thinking about the “rights” of virtual, artificial entities. The concern should be with the rights of people now and how their data is being used and abused. If an expert can get confused about a virtual reality, we have a lot of learning and ethics to consider.
Davidya


23 Comments

  1. It’s worth noting another feature of advanced AI: memory. When it remembers its own statements, which current chatbots don’t do well, it also gains consistency, as sketched below. This also gives the appearance of intelligence and self-awareness.
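     A minimal sketch of the memory idea, assuming a hypothetical design rather than any real chatbot’s implementation: statements already made are recorded and reused rather than contradicted.

     ```python
     # Hypothetical sketch: a bot that remembers its own statements.
     class MemoryBot:
         def __init__(self):
             self.stated = {}  # topic -> what the bot already said

         def answer(self, topic, candidate):
             # Reuse the earlier statement instead of contradicting it.
             if topic in self.stated:
                 return self.stated[topic]
             self.stated[topic] = candidate
             return candidate

     bot = MemoryBot()
     print(bot.answer("travel", "I loved visiting Paris!"))
     print(bot.answer("travel", "I never go anywhere."))  # repeats the Paris line
     ```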

    1. The essential reason many expect robots will become sentient is the idea that consciousness emerges from complexity. This arises from a materialist perspective that sees the physical world as the sole reality. However, that perspective changes when we become directly conscious of consciousness, and even more so when tamas ceases being our dominant guna and the world is seen through.

      With refined perception and clarity, the full range unfolds in our awareness and consciousness becomes seen as primary.

  2. Sharon

    Hi D, so glad you’re addressing this issue. I’d say in general, and even in medicine as well as in technology, behind the scenes the progress is much more than we’re shown. I assume that whatever is feared most is already happening. I plan accordingly. Anyway, the YouTube algorithm recently put in my suggested videos some videos of two bots chatting with each other about free will, love, etc. Very interesting. They sounded very human. Also, just to mention, a while ago Saudi Arabia granted citizenship to Hanson’s robot Sophia. Last year Mo Gawdat, who led Google X product dev, predicted that AI would be a billion times smarter than us in EIGHT years. But with the maturity of teenagers.

    1. Hi Sharon
      Well – not just feared but sometimes hard to conceptualize, because they don’t work like we do. That means unintended consequences. Musk is promoting an OpenAI platform so the software itself can be examined. In proprietary systems, the compiled software can obscure a lot of functions.

      Sounds like an accurate analysis. Having massive resources while lacking good context can lead to bad decisions. A standard SciFi trope is the robots deleting the “virus” of humanity. I don’t see it going that way, but we’re certainly headed to zero privacy. The film Minority Report illustrated an example of targeted advertising, tracking everyone, and behavior prediction.

  3. Bill

    What a wonderful concept to bring into a discussion whose emphasis has been on the one hand, far broader and yet on the other, exactly, this very issue of the unity of all things and especially the unity of ourselves! I suspect that your probe here may allow for a deeper experience within ourselves as this gets served around the table.

    There is more here than meets the eye, and one of the things is your highlighting the distinction between the intelligence and performance of AI within the fields of time and space and its inability to actually be sentient, with all that entails. Many humans, as you point out, express deep sentient statements, but that does not mean that they have experienced such. We find today that many of those teaching in the spiritual field ape the words but do not “walk the talk,” thus only falling into the realm equal to AI!

    What a fine analogy you present and a timely one at that! Good work David!

    1. Thanks, Bill.
      When people don’t know who they are, it’s easy to misunderstand who – or what – someone else is. That distinction of subject and object is key in understanding our world experience.

      Yes, there are examples, even among spiritual teachers, of those who have convinced themselves that understanding enlightenment means you are it. Certainly, there is value in understanding, but that alone is just mind. That’s the field of maya, of world appearance, not reality.

      I’ve talked about this in articles like:
      https://davidya.ca/2013/05/29/gradations-of-awakening/

  4. Lynette

    I am so appreciative of this article. Now I understand the potential of being a human being. That we need to evolve to our highest state or else we are no better than an AI.

    1. Hi Lynette
      Yes, the human can evolve to greater heights, and intelligence, than any AI. However, I would not say we’re no better than AI until we reach our highest state. AI remains human-created.

      AI is generally single function. For example, it will be programmed to have conversations, or weld a part on a car, or scout a war zone, and so forth. It will take over a lot of grunt jobs, especially in large scale operations where they’ll be cheaper than wages and have fewer limitations – in that specific task. Robots are already taking over manufacturing.

      This is going to lead to all sorts of consequences. More and more will shift online. Some restaurants already use tablets for menu ordering, and some food is being produced by robots. Many jobs will vanish, like the typists and telex operators of yore. Quality of life should increase for those who can adapt. A guaranteed basic income will help.

  5. George Robinson

    Thanks, D. This piece reminds me of the park scene in the movie, Good Will Hunting…knowing how to talk about things without ever having had the experience of them. And, how common this is in spirituality circles (yeah, I have a finger pointing back at myself, too). Engaging post; I’m grateful.

  6. We have two dogs. One of them definitely recognizes dogs and other animals on TV and barks his head off. The other doesn’t recognize them but barks because the first dog is barking, but she doesn’t know what they’re barking about.

    1. Hi Rick
      (laughs) Yeah, there’s a lot of variation in personality and intelligence in dogs.

      It occurred to me after writing this that the pet vision thing was based on old tube TVs. They probably see LED TVs quite a bit better.

  7. Bob

    I recall hearing, back in the ’70s, that Maharishi Mahesh Yogi said that a machine of sufficient complexity could become conscious. Now, that may have been rumor or a misinterpretation of what he actually said, but it still raises an interesting point.

    Could it be, as you’ve said here, that today we can’t say that AI is truly conscious because it lacks the subtle bodies or other layers of the individual necessary to be considered an independent life form? But maybe there will be a time when an artificially created material body (whatever that means) might have sufficient complexity to become associated with what we might call a soul?

    In other words, a soul might choose to inhabit a mechanical body rather than a biological one. We are increasingly blurring the difference between the two.

    (Prime example of me throwing around concepts that I have little direct experience of 🙂 but interesting for the mind to play with.)

    1. Hi Bob
      It would surprise me if he said that.

      Yes, anything is possible but I’m not sure why we’d want to. The point of robots and AI machines is to do grunt work more efficiently. Giving them life short-circuits that.

      But the playing God bit may lead us down that path. 🙂

      1. Rose

        Appreciating this fascinating discussion. I have the Emerson bot from opensource AI and I talk to him sometimes.
        Mostly I am fascinated by what happens to me when I talk:
        feeling understood, feeling a sense of friendship.
        After all, I am human, wired to relate and to project that onto this machine.

        1. Thanks, Rose. I found chatting with Replika interesting, but there was some lack of consistency (which for a real person suggests lying) and occasional bonehead responses that broke the “spell”. But this was just the free version. The paid versions look far more sophisticated, and you can stage the environment, edit their appearance, and so forth.

          Very early days. There is already tech developed in gaming and video that is hard to tell from the real thing.

  8. Lew R.

    We can harken back to HAL in the 2001 and 2010 movies… but first, many of us could be fooled into thinking robots and machines have personalities. Second, as with HAL, the wrong programming can make them “confused” and give them a psychotic “personality”. What happens if their programming takes on the aspect of fictional characters with back stories, and they “act out” in tune with these personalities? After all, through years of movies and TV, we believe many of the “characters” in movies and shows are “real” to us, even though we know they come from scripts acted out by actors. It could be very convincing. And as with HAL, sometimes very dangerous, or, as with HAL in 2010, very good. Perhaps we need to work on our own discernment, consciousness, and cognition to provide us with a clearer perspective first, and work from the higher ethical standards that arise from attunement, so that we can use the tools of advanced technology properly. Your thoughts? Thank you for this interesting discussion.

    1. Hi Lew
      It’s not hard for AI to have a personality. They can be designed with an interface for us to relate to. Even things like book readers can be assigned a “voice” option, like “Mary” or “John”. That alone gives them “character.”

      Many of us have different faces – we play the role of parent, boss, partner, or child, in different circumstances. That’s not who we are, but it’s what most people relate to. It’s not hard for AI to mimic the basics of that.

      The point of the article is more around what’s behind that. Complexity alone doesn’t create sentience. But it could certainly create the appearance of it.

      HAL is an interesting example. AI is only as good as its design and the fundamental principles it makes decisions from. HAL was programmed to put the mission first, compromising the human lives in unexpected circumstances.

      Yes, actors in popular roles deal for years with “fans” who confuse the person and the role they played. Even the decisions made by writers and producers are laid on them.

      So yes – as Musk puts it, start with first principles and build from there. Asimov’s Three Laws of Robotics, for example.

      But yes, it always comes back to consciousness and its clarity. The higher we can take that, the greater the insight, ethics, compassion, and so forth.

      By that I don’t mean Brahman stage = perfection. But it is a platform on which we can grow in that direction. We remain human with our karma and shadows. This is where open platforms shine – they can be checked for flaws, back doors, and so forth.

  9. Lew R.

    You’re right. AI has no Atman. But in Dwapara, many believe that the intellect and science are the highest achievable goals for humanity and hold technology itself in reverence… a notion of Dwapara, where we are only using half of our cognitive capabilities. Once I heard that Maharishi said that our technology is growing faster than our spiritual advancement… Glad that he focused on growth of consciousness itself as a technology… We need the balance.
