
Demystifying Neural Networks: Teslas Are (Probably) Not Alive, But That’s OK! (Part 5)

Again — Garbage In, Garbage Out

I feel like Marty McFly in Back to the Future when I see this stuff. “Wait, I’ve seen this one! This is a classic!”

Just like every other whizbang computer science invention, neural networks suffer from the same weakness we've seen in every other "AI" technology: if you put bad data in, you get bad data back out. Only this time it's worse, because the computer decides which bits of the data matter, and you can't always know what it's looking at or why it's getting things right. Something that works 99% of the time can fail spectacularly, or in comical ways, when the neural net turns out to be looking at the wrong stuff.
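To make that concrete, here's a minimal sketch (with made-up data, nothing from any real system) of a model learning the wrong thing: a spurious "watermark" feature happens to track the label perfectly during training, so the classifier leans on it, then falls apart when that shortcut disappears.

```python
# Garbage in, garbage out: a classifier latches onto a spurious feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, spurious_matches_label):
    # Feature 0: the genuine but weak, noisy signal.
    # Feature 1: a "watermark" that tracks the label only in training data.
    y = rng.integers(0, 2, n)
    signal = y + rng.normal(0, 2.0, n)
    watermark = y if spurious_matches_label else rng.integers(0, 2, n)
    return np.column_stack([signal, watermark]), y

X_train, y_train = make_data(1000, spurious_matches_label=True)
X_test, y_test = make_data(1000, spurious_matches_label=False)

model = LogisticRegression().fit(X_train, y_train)
print("train accuracy:", model.score(X_train, y_train))  # ~100%: looks great
print("test accuracy:", model.score(X_test, y_test))     # far worse: shortcut gone
```

The training score looks fantastic, which is exactly why these failures are so hard to catch before the system is out in the world.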

These problems can probably be solved, but let's explore the issue a little further.

What Artificial Neural Networks Try To Do (& Often Get Right)

By now, I hope readers know three things:

  • Artificial neural networks aren't magic. They're computer programs, using math under the hood just like any program before them.
  • They can't be trusted to produce better outputs than their inputs (garbage in, garbage out).
  • Math and computers shouldn't be given too much trust or too little. Only a level of trust that fits the system's limitations is safe.

But artificial neural networks are still amazing. One super cool thing they do is help computers chew on qualitative information (things that can't simply be counted), which computers have always struggled with. Traditional programs can readily deal with hard facts like an object's size, position, and velocity, because those are all expressible as numbers, but they can't tell us what the thing is. Autonomous vehicles will be impossible if a vehicle's computer can't identify objects, so this ability is vital to that mission.
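For contrast, here's a tiny sketch (hypothetical numbers) of the "hard facts" side: with distance and closing speed as plain numbers, time-to-collision is a one-line formula, but nothing in this code can say what the object actually is.

```python
# Purely quantitative: distance and closing speed are just numbers, so a
# traditional program computes time-to-collision trivially. Nothing here can
# answer the qualitative question "what is this object?"
def time_to_collision(distance_m: float, closing_speed_mps: float) -> float:
    if closing_speed_mps <= 0:
        return float("inf")  # not closing; no collision ahead
    return distance_m / closing_speed_mps

print(time_to_collision(30.0, 12.0))  # 2.5 (seconds)
```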

They don't identify things the way we do, though. Artificial neural networks exist to convert qualitative judgments (what is that thing?) into quantitative ones a program can deal with (this is "Thing #3481," so these mathematical rules now apply). That lets a computer do things it was previously poorly suited to, and that's amazing.
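Here's a minimal sketch of that handoff (all class names and rules are hypothetical): the network's raw scores get collapsed to a single class ID, and from that point on, plain old deterministic code takes over.

```python
# From qualitative judgment to quantitative rule: scores -> class ID -> rule.
import numpy as np

CLASS_NAMES = {0: "pedestrian", 1: "bicycle", 2: "traffic_cone"}
RESPONSE_RULES = {0: "stop", 1: "slow_and_yield", 2: "steer_around"}

def classify(logits: np.ndarray) -> int:
    # Softmax turns the network's raw scores into probabilities; argmax
    # collapses the qualitative judgment into a single class ID.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return int(np.argmax(probs))

logits = np.array([0.2, 3.1, 0.4])  # pretend these came out of the network
class_id = classify(logits)
print(CLASS_NAMES[class_id], "->", RESPONSE_RULES[class_id])
# bicycle -> slow_and_yield: from here on, the program runs on numbers alone.
```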

But the ability to make limited qualitative judgments (categorizing objects) doesn't mean a computer system is good at making all such judgments the way we are. Once training ends, these networks have no ability to improvise or adapt; they can only keep applying what they already learned.
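Here's a sketch of why (with stand-in weights, not a real model): at inference time the weights are frozen, and the network is forced to sort every input, even one it has never seen, into one of its trained categories.

```python
# A frozen network can't improvise: every input maps to a known class.
import numpy as np

TRAINED_CLASSES = ["car", "truck", "pedestrian"]

def predict(frozen_weights, features):
    # Inference only: the weights never change here, no matter what comes in.
    logits = frozen_weights @ features
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    # There is no "I don't know" answer unless one was explicitly trained in.
    idx = int(np.argmax(probs))
    return TRAINED_CLASSES[idx], float(probs[idx])

rng = np.random.default_rng(1)
W = rng.normal(size=(3, 4))          # stand-in for weights fixed by training
moose_features = rng.normal(size=4)  # an object the network never saw in training
label, confidence = predict(W, moose_features)
print(f"The moose gets filed under '{label}' ({confidence:.0%})")
```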

Brains Are Not Meat Computers, & Computers Are Unlikely To Achieve Consciousness In The Short Term

It is the pinnacle of bad AI thinking to compare the human mind to a computer. We have popularly done this for decades, but it’s fundamentally wrong.

Now, notice I didn't say brain. I said mind. We often use those terms interchangeably, but when we do, we ignore the fact that we don't know how the human mind and the human brain relate to each other. The mind may reside in the brain, arising from a physical process we don't yet understand, or it could be something else entirely. We just don't know that much yet.

We do know a lot about the brain, including how it’s wired up to our nerves, how different parts of the brain connect to different senses, and how diseases or problems in the brain lead to problems a person subjectively experiences. We know that when a person is happy, certain parts of the brain light up with activity in scans. We know that when a person smells pheromones, different parts of the brain light up depending on the person’s gender identity and/or sexual orientation (and not necessarily their “sex”).

We also know that manipulating the brain affects people's consciousness. Chemicals can make a person enter altered states of consciousness, lose consciousness, or see things that aren't there. Electromagnetic stimulation, ultrasound, and even direct electrical stimulation all have predictable effects. Neuralink isn't lying to us when it says it could eventually do things like pipe audio or even image overlays into the brain for our consciousness to perceive.

Using the human brain for inspiration led to the development of artificial neural networks, and those networks are doing amazing things, but they can't reproduce the mind at this point and may never be able to.

The biggest roadblock is that we haven't solved the Hard Problem of Consciousness. Despite the many things we do know about the brain, we don't know what mechanism drives a human being's experience of consciousness. Somehow, the human brain is doing something that's more than the sum of its parts, and a mind exists that the brain or body interacts with. How do we know that a mind and consciousness are there at all? Only because the person tells us that they experience consciousness.

This idea of believing people without evidence may seem to fly in the face of science, but science was never meant to be a faith, nor was it meant to explain stuff like this. Again, see Goff’s book on the topic for a lot more details (or a video here where he goes over it).

As stated earlier, this goes all the way back to Galileo. We don't know what consciousness is or how it happens, because Galileo deliberately set that very issue aside for later so scientists could focus on what could be measured and computed. Now we're trying to take a philosophical approach to inquiry that was specifically designed to exclude consciousness and use it to explain something we can't even prove exists beyond taking each other's word for it (our experience of consciousness). Will that approach work? We simply don't know.

While the physical science Galileo started has been hugely successful, there's simply no guarantee it will lead to an understanding of consciousness, and even if it does, that understanding might not be something we can reproduce with computers.

In the final installment, I'll finish explaining how these artificial neural networks aren't alive or conscious, and why that doesn't keep a company like Tesla from doing what it aims to do (build self-driving cars).

For ease of navigation through this long series, here are links to the previously published parts:

Part 1: Why Computers Only Crunch Numbers
Part 2: Miscalibrated Trust In Mathematics
Part 3: Computers Only Run Programs
Part 4: How Neural Networks Really Work

Featured image: Screenshot from Tesla’s AI Day.

 


Source: https://cleantechnica.com/2021/08/31/demystifying-neural-networks-teslas-are-probably-not-alive-but-thats-ok-part-5/
