
How deep are our misunderstandings about artificial intelligence?

Chen Xiaoping

Artificial intelligence technology has both positive and negative effects: while benefiting humanity, it also carries various risks. Theoretically, there are four such risks.

Technology out of control. Technology out of control means that technological development has exceeded humanity's ability to control it, even to the point where humans are controlled by technology; this is the risk that worries people most. Existing artificial intelligence technology can exert its full power only in scenarios that satisfy the strong closure criterion; in non-closed scenarios, the ability of existing AI technology falls far short of humans, and most real-world scenarios are non-closed. Therefore, there is currently no risk of technology getting out of control.

Misuse of technology. Technology misuse includes data privacy, security, and fairness problems. The application of artificial intelligence can magnify the severity of these problems and may also give rise to new types of misuse. Under existing conditions, AI technology itself is neutral; whether misuse occurs depends entirely on how the technology is used. Attention to the misuse of AI technology and the prevention of the associated risks should therefore be put on the agenda.

Application risk. Application risk refers to the possibility that applying a technology produces negative social consequences. At present, what worries people most is that the widespread application of artificial intelligence in some industries will lead to large-scale job losses. Application risk arises from how a technology is applied, so the key is to control that application. According to the strong closure criterion, applying AI technology in the real economy usually requires scenario transformation, which is entirely under human control; how much or how little is done depends on the relevant industrial decisions.

Management error. Artificial intelligence is a new technology, and its application is a new phenomenon. Society lacks management experience and can easily fall into the trap of "strangled the moment it is regulated, chaotic the moment it is let go". It is therefore all the more necessary to deeply understand the technical essence and technical conditions of existing AI achievements, so as to ensure that regulatory measures are targeted and effective.

At present, there are three misunderstandings about artificial intelligence. The first is that artificial intelligence is omnipotent, so existing AI technology can be applied unconditionally. According to the strong closure criterion, existing AI technology is far from omnipotent, and its application is conditional. In industrial applications, it is therefore urgent to deepen understanding of the strong closure criterion, strengthen scenario cutting and scenario transformation, and avoid blind applications that violate the criterion. Such blindness is currently very common both at home and abroad; it not only wastes resources but also interferes with promising applications.

The second is that existing AI technology cannot be applied in large-scale practice, because it relies on manual annotation and is therefore not intelligent. In fact, existing AI technology is not limited to deep learning: combining the brute-force method with the training method can avoid manual annotation, and in application scenarios that satisfy the strong closure criterion, data collection and manual annotation can be carried out effectively. Some current unsuccessful applications stem from violations of the strong closure criterion, not from any inability to apply existing AI technology. This misunderstanding often arises among those who have some understanding of AI technology but not a thorough one. Like the first misunderstanding, it seriously hinders the progress of China's AI industry applications.

The third is that within the next 20-30 years, the development of AI technology will pass a critical point, after which artificial intelligence will develop freely, beyond human control. According to the strong closure criterion and the current state of global AI research, this "singularity theory" has no scientific basis within the scope of the technology. Some conditions contained in the closure criterion, such as the semantic completeness of the model and the finite determinacy of the representative data set, can usually be satisfied only with the help of the manual measures required by the strong closure criterion. Supposing that these limitations may be broken through in the future is completely different from claiming that AI can break through them now; and even if some limitations are broken in the future, new limitations will arise. Such claims implicitly assume that there can be AI technology detached from specific conditions. At present, there is no scientific evidence that such technology can exist, and the question remains to be observed and studied in the future.

The above three misunderstandings are the main ideological obstacles to the development of artificial intelligence in China. Grounded in the essence of existing AI technology, the closure and strong closure criteria provide a basis for dispelling these misunderstandings, and they also offer a new perspective for observing, thinking about, and studying other problems in AI development, so as to avoid repeating the past interference of artificially amplified "cyclical fluctuations".

(The author is a professor at the University of Science and Technology of China.)
