Monetary Pyramid

Discussions of Significant Importance


Just because you can do something, does it inevitably follow that you should? This is a simple but truly profound question that confronts us at every level, from personal choices to decisions of national and global consequence. Parents can choose to discipline their children by spanking them. Of course they can do this. Should they? The U.S. has long had superior military capability and could easily use it to influence other countries' policies and governments. Should it?

A very similar question applies to Artificial Intelligence. Massive amounts of computing power and storage are enabling the rise of this new technology. Artificial Intelligence spans subfields and applications such as deep learning, voice and facial recognition, autonomous driving, medical diagnosis, targeted marketing, travel guidance, and risk assessment. This article poses the questions: Should we use AI without regard to the potential consequences? How should it be regulated? Who determines the direction and extent of its use?

First we must understand the structure of AI. The foundation of AI is data (the Input Data Layer): lots and lots of it. The more data, and the greater its detail, the better. Sometimes there are privacy issues associated with this data; we will explore these later. The data may be preexisting or gathered by sensors and other input devices.

The next layer (the Algorithm Layer) consists of mathematical or logical algorithms. This layer selects relevant data as input and draws one or more conclusions. The result may be numeric, logical, or a combination of both. Think of it as answering a question.

In simple terms, the final layer is the Output Action Layer. The action may be low-level and advisory, communicated as a report, graph, alarm, or voice response. Higher-level actions may take physical form, e.g. automated responses that steer a car in a different direction, brake, accelerate, or prepare for a crash. In most cases the potential circumstances are anticipated, and an automated response is programmed into the computer for all or most of them. A distinction needs to be made between a reasonable response and the best response.
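The three-layer structure described above can be sketched as a tiny pipeline. This is a minimal illustration, not a real system: the function names, sensor readings, and alarm threshold are all hypothetical.

```python
# A minimal sketch of the three-layer AI structure described above.
# All names, data values, and the threshold are hypothetical examples.

def input_data_layer():
    """Input Data Layer: gather raw observations (here, hard-coded readings)."""
    return [72.1, 73.4, 98.6, 71.9]  # e.g. temperature sensor readings

def algorithm_layer(readings, threshold=90.0):
    """Algorithm Layer: select relevant data and draw a conclusion.

    The result combines a numeric value (the peak reading) with a
    logical conclusion (whether it exceeds the threshold).
    """
    peak = max(readings)
    return {"peak": peak, "alarm": peak > threshold}

def output_action_layer(result):
    """Output Action Layer: a low-level, advisory action (a report or alarm)."""
    if result["alarm"]:
        return f"ALARM: peak reading {result['peak']} exceeds threshold"
    return f"OK: peak reading {result['peak']}"

print(output_action_layer(algorithm_layer(input_data_layer())))
```

A higher-level system would replace the final `print` with a physical action, which is exactly where the gap between a reasonable response and the best response starts to matter.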

When you use a mapping application, you can ask the app for directions with either the shortest travel time or the shortest distance. These are different objective functions: the same question, asked in different ways, yields a different conclusion depending on the objective function selected. This point cannot be overstated, and it affects the entire design of the technology. Consider another example: when you are sick and go to the doctor for treatment, you can ask the doctor to treat the symptoms or to cure the underlying problem. The cure may be worse than the problem, so it is important to understand the risks associated with the different treatment options.
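The mapping example can be made concrete with a small sketch: the same route-finding algorithm (Dijkstra's, here) returns different routes depending on which objective function it is told to minimize. The road network, place names, and edge weights are made up for illustration.

```python
import heapq

def shortest_path(graph, start, goal, weight):
    """Dijkstra's algorithm; `weight` selects the objective function."""
    best = {start: 0}
    queue = [(0, start, [start])]
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if cost > best.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, attrs in graph[node].items():
            new_cost = cost + attrs[weight]
            if new_cost < best.get(nbr, float("inf")):
                best[nbr] = new_cost
                heapq.heappush(queue, (new_cost, nbr, path + [nbr]))
    return float("inf"), []

# Hypothetical road network: each edge carries both a distance (km)
# and an expected travel time (min). The highway is longer but faster.
roads = {
    "home":       {"highway": {"km": 5, "min": 4},
                   "sidestreet": {"km": 2, "min": 6}},
    "highway":    {"office": {"km": 6, "min": 5}},
    "sidestreet": {"office": {"km": 3, "min": 9}},
    "office":     {},
}

print(shortest_path(roads, "home", "office", "km"))   # minimize distance
print(shortest_path(roads, "home", "office", "min"))  # minimize travel time
```

With these weights, minimizing distance sends you down the side street while minimizing time sends you onto the highway: identical data, identical algorithm, different answers purely because the objective function changed.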

Generally people want to make their own choices regarding risk. Ask yourself: are you willing to use a new technology if it increases your risk of being harmed, and at what cost? Every layer described above carries unique challenges and added risk. The data in the Input Data Layer may be incomplete or biased. The algorithms in the Algorithm Layer may be poorly designed, or designed in a way that does not reflect the values you would want. Given a choice, would you accept the same risk in the Output Action Layer as the company that designed the technology? Facebook is a prime example of a major company whose only objective was to make more money from your data, regardless of the infringement on your privacy.

Make no mistake: major companies are driving a freight train as fast as they can to develop AI technologies in their respective industries. How they collect and use the data is a black box, shrouded in intellectual property protection. Governments are also in the game, using AI for facial recognition and collecting whatever data they can in the name of national security. Some governments go so far as to create a social scorecard that identifies which citizens comply with the government's policies, for good or ill.

Will you know why you were rejected for a job or for insurance? Will you be able to correct data that was corrupted, whether accidentally or on purpose? The genie is out of the bottle. Companies, governments, and individuals all see the immense value of AI and the benefits it can bring to society. The risks are less obvious and are not adequately weighed against the benefits, so the biggest fear is that they will be ignored until it is too late. It is not the AI technology in isolation that is the concern: if companies were socially minded and governments acted benevolently, there would be nothing to worry about. History does not bear that out.


Increasing population, climate change, and ever-scarcer resources will dramatically increase global conflicts and continue the trend toward authoritarian governments. This will put very powerful tools in the hands of very few self-serving people. The concentration of wealth around the world undermines the argument that developing advanced technologies benefits everyone. Make no mistake: "absolute power corrupts absolutely," and AI that is not transparent is capable of delivering absolute power into the wrong hands. Furthermore, people who are not held accountable for hurting the public good will always continue their drive to concentrate power and wealth.

It is not just the benefits of AI that should concern us; it is the potential power of the technology. Nuclear power had the potential to provide vast amounts of energy for the public good. However, choices were made to advance the technology for nuclear weapons even more than for public energy production. Our benevolent governments chose that direction, and now a guillotine has hung over our heads for decades. We are losing control over a technology that also had huge potential benefits. The worst part is that AI is not viewed with the same level of concern. The combination of the Internet, ever greater computing capacity (even quantum computing), and AI magnifies the power and influence a few people have over what you think, what you do, and even your wealth.

At some point, AI will make another leap forward: the computer will decide what data it needs, determine how to get that data, develop its own algorithms and logic, and perhaps even decide what the objective is. We have been happy to think less and let machines do our work for us; after all, humans are less reliable and cost too much to hire. Now is the time to anticipate where AI is headed, understand the consequences, and determine what protections we should put in place as the technology evolves.

Personally, I am proud that humans have been able to develop the technologies we have. At the same time, it is alarming how often those technologies have been abused for malevolent ends. Let's just check both ways when we cross the street or the railroad tracks! It is up to all of us to watch, understand, and take appropriate action to protect ourselves and our loved ones.