The Risks of Artificial Intelligence to Security

Security is getting more attention now that artificial intelligence software has the potential to take over tasks previously handled by human beings. This raises some key questions. What are the risks of artificially intelligent systems falling under the sway of terrorists or criminals? Could such systems be used maliciously to interfere in political processes and harm the public? And how do we keep these problems from arising?

One might expect such a system to ship with pre-programmed guidelines for how a security system should interact with people. But if it is built so poorly that it cannot account for different political cultures, religions, and ethnic groups, it is effectively useless. Nor could such a system take part in a free-market economy as an asset rather than a liability.

Is it really possible to build a machine that is better at reasoning and self-reflection than humans? Even if such a system were built perfectly, wouldn't it still lack one of the most basic human emotions: fear? Could software be designed to resist the fear and mistrust it was built with? And since fear and distrust underpin much of our ethical thinking, wouldn't such a machine be vulnerable to manipulation? This suggests that perhaps we should avoid creating artificially intelligent computers if we truly want an educated society.

Another issue is control. Because these systems will run globally, governments may establish some form of artificial intelligence regulation to ensure they are not misused. If that regulation is loosely enforced, the Internet could quickly be brought to a grinding halt and many aspects of our society disrupted. And if the Internet were shut down worldwide, the effect on human communication would be severe.

The last area of risk is safety. If an AI system is not completely open-ended and capable of learning from its experiences, it could build a mental firewall to protect itself from negative outcomes. But whatever it learns, it should be able to apply that knowledge to the real world. If it begins purchasing products on its own or placing hosts on its own network, those hosts may become targets. Current laws on Internet censorship might not allow for completely open AI systems, so this is something that needs to be debated.

All in all, it seems to me that we need to be very careful about how artificially intelligent systems are developed. They may be able to save us in the future, but they could also put us in danger today, which is why oversight is critical. And while developing artificially intelligent software carries risks, the benefits could far outweigh them.
