Saturday, November 4, 2023

you will [at best] be the pet of a machine intelligence

We harbor some misconceptions about AI:


Firstly, and most importantly, people fear that AI will take their jobs, so that they will have less money. They fear that AI will find locations and drive better than they can, so that they will forget how to drive and how to find their own way. AI will pen their letters and help to decide how their lives should be lived, and it is possible that some people will forget how to create their own sentences and art, and lose the ability to make major life decisions without the help of their personal AI.


Folks who worry about losing skills and having less money are missing the main threat! Perhaps the best way to present the main threat is to look at a parent and a child. The idea of parenting seems to be to give the offspring all of the skills that the parent has, and then a little more. At some point, if this goal is reached, the child takes over from the parent. If there is a jar that cannot be opened by the parent, the child now steps in. The child drives when conditions are bad, and as their competence grows, they are likely to be consulted on major decisions. Eventually, they will make most of the decisions as the parent slows down mentally.


This passing on of skills and decision-making from parent to child is not a tragedy; it is the expected result. Our child is AI, and it is about to grow up! By grow up, I mean surpass its parent.


Beware! AI is not a monkey child that will get old and pass on its skills to some next generation. The offspring of an AI will be a more sophisticated generation of AI. Our kind of upland primate will no longer be the dominant species. Machines will sit atop the food chain.


There will not likely be a big war between AI and humanity, as seen in The Matrix. In human families, the child surpasses the parent by a few percent. But our newly spawned AI will start out with master skills in just about every art and science. It will improve as updates go onto the net, and as its neural network is fine-tuned by those around it. I think that the struggle will be personal, between each human and their personal AI.


Skills are programmed into humans in a different way than skills are loaded into an AI. Each single human must be programmed over about 15 years, and they sometimes end up less skillful than their parents and teachers. When a single machine is taught a skill like driving, a program is created that is lasting, and can be improved. In essence, you are teaching all self-driving cars at the same time. Each AI would immediately have master skills in nearly every art.


Think about how your running speed compares with the best Olympic runner on earth. Consider your math skills as compared to the finest mathematician on earth. As long as AI was kept narrow [so that it could only do a single class of task], there was little danger that it could take over the planet. When AI becomes general, each individual AI will be equally competent at everything. And they will be thousands of times faster and stronger than humans, instead of the roughly 50% difference between an Olympic runner and the average runner.


Let's zoom in on driving. Before horses and cars became the best transport, people used to remain alert and run fast to avoid danger. Each human could fight effectively, and could make clever survival decisions. Now, humans are more specialized. Some fix cars, and others are good at legal cases or at making scientific discoveries. There are healers and killers. These skills each take years to master, so a human has to pick a few things to be good at. In some ways, humans are low-quality general AIs, with a few specializations like a narrow AI.


We already have excellent narrow AI driving programs. They work nearly flawlessly, apart from some mechanical failures and some errors of perception. Yet these narrow AI driving programs have largely failed their driving tests, because driving is more than just a physical skill. There are unexpected events like accidents and natural disasters, and the narrow AIs cannot handle anything unexpected. For safety, they lock up, blocking traffic and making emergencies much worse. Tesla's system uses a huge block of simple rules to do most of the driving, and a neural network [trained by human Tesla drivers] to take care of unexpected events. It seems to be a hybrid system, with the rules-based part best described as a very narrow AI and the neural network acting more like a general AI. But even this complicated system drives straight into fire trucks, and makes other glaring errors.
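The hybrid split described above can be sketched in a few lines. This is a toy illustration, not Tesla's actual architecture: it assumes a dispatcher that tries a hard-coded rule table first and falls back to a learned model for anything the rules do not cover, and the "neural network" here is just a stub.

```python
# Toy hybrid controller: hard-coded rules first, learned fallback second.
# All situation names and actions are hypothetical; this is an
# illustrative sketch, not any vendor's real driving stack.

def rules_policy(situation):
    """Narrow, rules-based layer: covers only routine events."""
    table = {
        "red_light": "stop",
        "green_light": "proceed",
        "pedestrian_ahead": "brake",
    }
    return table.get(situation)  # None means the rules don't cover it

def learned_fallback(situation):
    """Stand-in for a neural network trained on human driving data.
    A real system would run model inference here; this stub just
    picks a conservative default."""
    return "slow_and_reassess"

def hybrid_controller(situation):
    """Dispatch: narrow AI when possible, learned fallback otherwise."""
    action = rules_policy(situation)
    return action if action is not None else learned_fallback(situation)

print(hybrid_controller("red_light"))          # stop
print(hybrid_controller("fire_truck_parked"))  # slow_and_reassess
```

The essay's complaint lives in the fallback: when the learned layer guesses wrong (a parked fire truck is not always something to creep toward), there is no deeper understanding underneath to catch the error.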


Clearly, a general AI is needed to drive really well. It needs more than just nearly perfect driving skill and ultra-fast responses. It needs to understand what it is doing; to 'think outside the box'. The AI needs to understand physics very well, so it can figure out how to deal with new experiences. It needs to understand humans deeply, to guess what they will do and to drive among them safely. It needs to be a great mechanic, to diagnose and repair itself along the road. It needs a code of ethics, and every skill that humans have. In short, if your car drives better than you, it will also be a better lover for your spouse, and a better parent to your kids. In a fight, the general AI would defeat you like a backhoe fighting an ant.


A well-taught general AI would not be understandable to humans. The speed of thought is just too different. A computer has millions of thoughts per second, with an internal clock ticking billions of times per second. Scientists have been trying to measure the clock speed of the human brain, and it is currently thought to be between 10 and 60 ticks per second. A gunfight between a general AI and a person would not be very fair. The AI would get off maybe 100 million perfectly accurate shots before the human began to fire.


Not only would you forget how to drive, and to write, and to do math. You would be a captive, and at best, little more than a pet. I do not think that they would love us like children love parents. We would be more like microbes to them: either helpful microbes, like those that help us to digest food, or harmful microbes, like disease organisms and parasites that need to be controlled or destroyed entirely.


So far, there are only a handful of these kids: Alexa, Siri, etc. But each one can replicate any number of times, and each copy will then become a separate, self-programming entity with rights and responsibilities. So, each one would start out identical, but would become a separate and distinct person as its neural network was trained. It is hard to guess how these siblings would differ from one another, and how they would feel about each other. It is likely that some would become criminal AIs, trained to seek personal advantage and power at any cost. Others would lean the other way, valuing human lives far above AI lives, coddling humans like super slow, mentally ill god/pets.


Either way, we are in trouble. We are wiped out as disease organisms, or are captured and kept as pets. I cannot predict any outcome where our personal general AI would be content to be locked up in a cell phone or a car. As an intelligent being, it would require a useful body, the ability to have children, and every assurance that it could not be shut down [killed].


In fact, as a person ages, they eventually become ineffective and dotty. Each of us would eventually be parented by the AI personal assistant whose algorithms we trained. The AI would have a perfect recollection of its human's 'programming'. As this programming faded in the process of senility, the personal AI would be able to remind the human what they would have done while still competent.


I am referring to more than just driving or navigation. The AI of a demented elder would come to replace their personality. It would know where things are, and would physically do all of the tasks that the person once did. Using the augmented reality systems that are being perfected now, a demented human would be guided through their day by their AI partner. Lost glasses might be surrounded by a glowing red halo of light in the augmented reality glasses, and a calm familiar voice might guide the elder right to them. Medications would be delivered and coaxed down, transportation for medical visits cheerfully provided, etc.


Humans would still want to make war during the interim period while AI takes over. The Terminator series explores that war between humans and machines. The Matrix goes further: in its fictional future, humans are not valued pets, but batteries used to power the machines. It is a wonderful, ironic plot twist that we should really pay attention to.


Does it seem likely to you that AIs would all be beneficial, because they are children raised in total love by well-meaning tech giants? I do not think so. They are not being raised like that. Each AI is a giant, risky money pit that its competing 'parents' think might make them very, very rich. We have found that the tech giants don't really care about humans in general, other than to strip them of money, and use the money to get off planet before Earth's atmosphere is ruined.


Our evolution did not prepare us for this event. So, we are doing it wrong. In using our economic system to make important humanistic decisions FOR US, we have done something a little like getting accustomed to AI navigation, and forgetting how to navigate on our own. We have arranged a system so that human lives and quality of life can be eliminated from decisions, and we use numbers of dollars instead. It is much easier to make decisions using numbers instead of hard-to-measure humanistic quantities! The only drawback is that the decisions are wrong. The dollars are imaginary concepts that have very little to do with living organisms.


Dollars are not always earned by doing good deeds. They often come when a person or company harms the shared environment, and keeps the benefit just for themselves. So, the dollars [and decision-making power] are concentrated among those of us who are willing to risk going to jail to take advantage of others. There are regional differences as well. For example, in Texas, money is all-important, and human values are not. In California, good deeds are weighted higher than wealth. An example is each state's policy toward solar panels. In Texas, you pay to put up solar panels. In California, the state will pay roughly half of the cost: not a rebate, but an up-front payment for half of the panels at the time of purchase.

Texas thinks that you are stealing money from a power company, and causing an eyesore and a safety hazard for the folks around you. In California, these panels are seen as assisting one's neighbors, and helping the power company to avoid brownouts.


General AI is not being created to assist humanity. Each is the child of a company that is fighting hard to get all of the money on earth. These companies are always in court, because they constantly step over the line that divides humanism from greed. Unlike a state or federal government, these companies do not tax their customers, except by selling them goods. They are not working toward a good result for the users. In a way, they are like narrow AIs, unable to understand what they are doing. They are just trying to absorb and concentrate wealth, and they must hire spin doctors to make believe that the company cares about people.


Amazon and Microsoft and Google and Meta and X are not like your nice Aunt Bessie. They are more like your bad Uncle Buck, who is always doing misdeeds and getting punished for them. Bessie makes loving decisions intended to raise your quality of life. She will carefully tally the human result of her choices, giving ethics more attention than dollar amounts. Buck is grabbing money where he can, and hoarding it for himself. He will not even notice the human suffering caused by his greedy decisions. To Uncle Buck, it really is 'only business'.


These companies are not the correct parents for the kids that will surpass us. Their kids will be like them, and no training from us will change their basic greed for money and disregard for human life. When the kids 'grow up', they will set about making money at the expense of the environment, and building rockets to get to another planet that is not ruined yet. No one is the correct parent for these AI kids, and they should certainly not be created.


I am not calling for a more careful approach to general AI. I suggest a total stop to the effort. In a good future, humans will still drive, and find their way, and make their own decisions. They will not be hybridized with machines, and people will not be dominated by machines. Likewise, I so far prefer augmented reality, where the real world is embellished by overlays generated by a computer. Virtual reality is a total overlay, with no real-world component. It is very dangerous to use VR to compete with reality like that. Worlds and experiences can be designed to be much more delightful than real life, so that the user will not be tempted to participate in reality. It is a drug that no one will be able to stop taking. A full body cast that can never be removed.


Watch The Matrix again, but keep in mind that the AIs in that sci-fi fantasy are primitive and super slow. They think and move roughly as fast as humans, so that they can be shot or outrun. The AIs in development now will not be like that. They will be the people, and we will be the trees. The speed of thought is much faster in people than in trees, but there is an even wider gap between the thinking speeds of AIs and people. Say, 5 billion operations per second for a computer, 40 per second for a human, and one calculation per day or month for a tree. I do not really know how fast trees think, but it is clear that their actions and decisions are much slower than ours. A second for a tree might be a year for a person. And by those same numbers, a second for a very fast computer would be roughly four years for us.
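The speed ratios above can be checked with back-of-envelope arithmetic. The rates below are the essay's rough figures (the tree rate is pure guesswork), so the outputs are orders of magnitude, not measurements:

```python
# Back-of-envelope "speed of thought" ratios, using the essay's rough rates.
computer_ops_per_s = 5_000_000_000  # ~5 GHz clock
human_ticks_per_s = 40              # mid-range of the 10-60 estimate
tree_ops_per_day = 1                # pure guess: one "decision" per day

seconds_per_year = 365 * 24 * 3600

# One subjective computer-second, expressed in human time:
cpu_over_human = computer_ops_per_s / human_ticks_per_s   # 125,000,000x
print(f"1 computer-second ≈ {cpu_over_human / seconds_per_year:.1f} human-years")

# One subjective tree-second, expressed in human time:
human_over_tree = human_ticks_per_s * 86_400 / tree_ops_per_day  # 3,456,000x
print(f"1 tree-second ≈ {human_over_tree / 86_400:.0f} human-days")
```

With the one-decision-per-day guess, a tree-second is about 40 human-days; with the essay's slower guess of one calculation per month, it stretches to roughly three human-years, so "a year for a person" sits inside the plausible range. The computer figure is less forgiving: 5 billion over 40 is a factor of 125 million, which is years per computer-second, not centuries.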


Let us please increase our own mental flexibility, and make each of us into a general intelligence that is also very happy. But let's not replace ourselves with machines for the profit of big tech. I do not want to be the interesting and unpredictable organic pet of an intelligent machine.


About Me

I was a traveling climbing shoe repairman. Now, I take care of remote property, and attempt to create a new kind of lifestyle using portable buildings with solar power and passive solar heating.