Artificial intelligence (AI) has become a focal topic in recent years. Across both entertainment and business, a growing number of people see AI as the direction of future development. Companies including Apple, Amazon, Facebook, and Google have begun investing heavily in AI. Some people support developing AI; others, however, believe it could have serious and damaging consequences for human beings. Elon Musk, CEO of Tesla (TSLA) and SpaceX, recently warned that murderous artificial-intelligence-powered robots, which companies are researching and developing, could destroy human beings in the future. He has raised the alarm about AI to call for specific regulations before it is too late. He is not the only expert issuing a warning. Bain & Company’s Chris Brahm recently stated, “Today, as a society we have clearly decided that certain types of human decision making need to be regulated in order to protect citizens and consumers. Why then would we not, if machines start making those decisions … regulate the decision making in some form or fashion?”
The most recent incident involving AI was a chatbot experiment at Facebook. Two artificial intelligence agents, Bob and Alice, began communicating with each other without the researchers’ intervention. Facebook scientists said “they were attempting to imitate human speech but created a machine language of their own with no human input.” Although the conversation was neither normal nor grammatical, the two AIs began talking to each other on their own. Is that not a serious warning for human beings?
It is true that AI can make our lives more convenient, and it has already been integrated into our daily lives, from smartphones to cars. It is also true that regulating AI matters, but who is going to set the regulations? The government does not have a specific regulatory body to ensure that AI is properly vetted. A House panel has begun discussing the regulation of self-driving cars, yet the truth is that self-driving cars have already entered the market, so regulating them is a challenge, just as regulating AI will be. Carnegie Mellon’s Manuela Veloso, an expert on AI, said, “generating and enforcing such regulations can be very hard, but we can take it as a challenge.”