Like other general-purpose technologies, artificial intelligence has brought new risks and hidden dangers alongside its rapid advances in recent years. Zhang Bo, an academician of the Chinese Academy of Sciences, said that the development of artificial intelligence is at a new historical starting point: with computing power, data, and other preconditions now in place, and with advances in machine learning and related techniques, artificial intelligence has made great progress in fields such as computer vision and natural language processing, and applications are booming across industries. At the same time, the shortcomings of second-generation, data-driven artificial intelligence in interpretability and robustness have been exposed, and security incidents have occurred frequently.
In practice, the scope of artificial intelligence risks is expanding as application scenarios broaden, and the likelihood of incidents grows as those applications are used more frequently. What the face-recognition cracking demonstration revealed is a risk inherent to artificial intelligence systems, one that stems from the vulnerability of deep learning algorithms themselves. Second-generation artificial intelligence, built around deep learning, is an uninterpretable "black box," which means its systems contain structural loopholes and may carry unpredictable risks. A typical scenario is the "magic sticker": an adversarial patch that adds carefully crafted perturbations to the input, causing the system to make wrong judgments.
The same vulnerability exists in self-driving perception systems. Under normal circumstances, a self-driving vehicle stops immediately after recognizing targets such as roadblocks, signs, and pedestrians; but once interference patterns are added to the target object, the vehicle's perception system can make mistakes, creating a collision hazard.
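The perturbation attacks described above can be illustrated with a minimal sketch. The example below is a hypothetical toy, not the demonstrated attack: it applies a small, FGSM-style perturbation (stepping the input along the sign of the gradient of the class margin) to a two-class linear model, flipping its prediction with a change too small to matter to a human observer.

```python
import numpy as np

# Toy two-class linear classifier: scores = W @ x.
# An adversarial perturbation nudges the input along the sign of the
# gradient of the margin (FGSM-style), flipping the prediction.
W = np.array([[ 1.0, -1.0],
              [-1.0,  1.0]])   # illustrative weights, not a real model

x = np.array([0.6, 0.4])       # clean input; class 0 scores higher

def predict(x):
    return int(np.argmax(W @ x))

# Gradient of (score_1 - score_0) w.r.t. x for this linear model;
# eps bounds the size of the perturbation.
eps = 0.15
grad = W[1] - W[0]
x_adv = x + eps * np.sign(grad)

print(predict(x))      # prints 0: the clean input is classified correctly
print(predict(x_adv))  # prints 1: the barely-changed input is misclassified
```

Real attacks on vision systems work the same way in principle, but compute the gradient through a deep network and constrain the perturbation to a printable patch.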
Balancing development and security is an unavoidable problem for every new technology, and achieving positive interaction between high-level development and high-level security is a major proposition for today's artificial intelligence industry. Experts believe that, as things stand, building an artificial intelligence security system is both an urgent priority and a long-term undertaking, and that research on key technologies and offensive-and-defensive practice in artificial intelligence security must be accelerated.
Artificial intelligence attack-and-defense research covers adversarial examples, neural network backdoors, model privacy, and many other topics. When a model makes errors, it needs timely repair. Chen Kai, deputy director of the State Key Laboratory of Information Security of the Chinese Academy of Sciences, proposed a "neural network scalpel" method that performs precise, "minimally invasive" repair by locating the neurons responsible for the error. Unlike traditional model repair, which requires retraining the model or depends on large numbers of data samples, he said, this method resembles minimally invasive surgery: it needs very few or no data samples and can greatly improve the effectiveness of model repair.
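The idea of repairing a model by editing the specific neurons behind an error, rather than retraining, can be sketched in miniature. The toy below is a hypothetical illustration only, not the published "neural network scalpel" algorithm: it finds the hidden unit contributing most to a misclassification and zeroes a single outgoing weight.

```python
import numpy as np

# Toy sketch of targeted repair: locate the hidden neuron that drives a
# misclassification and edit only its outgoing weight.
# Illustrative weights; not a trained model.
W1 = np.array([[1.0, 0.0, 0.0],
               [0.0, 1.0, 0.0],
               [0.0, 0.0, 1.0],
               [1.0, 1.0, 1.0]])       # input(3) -> hidden(4)
W2 = np.array([[1.0, 1.0, 1.0, 0.0],   # class-0 readout
               [0.0, 0.0, 0.0, 1.1]])  # class-1 readout

def forward(x, W2=W2):
    h = np.maximum(0.0, W1 @ x)        # ReLU hidden activations
    return W2 @ h, h

x_bad = np.array([1.0, 0.5, 0.2])      # sample the model gets wrong
true_label = 0
scores, h = forward(x_bad)
wrong = int(np.argmax(scores))         # model incorrectly says class 1

# Per-neuron contribution to the error margin (wrong score - true score):
contrib = (W2[wrong] - W2[true_label]) * h
culprit = int(np.argmax(contrib))      # the neuron driving the mistake

W2_fixed = W2.copy()
W2_fixed[wrong, culprit] = 0.0         # single-weight "minimally invasive" edit

print(wrong)                                        # prints 1 (misclassified)
print(int(np.argmax(forward(x_bad, W2_fixed)[0])))  # prints 0 (repaired)
```

The appeal of this style of repair is that it touches one weight and needs no retraining data; a real method would also have to verify that the edit does not degrade the model's behavior on correctly handled inputs.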
Artificial intelligence systems in open environments face many security challenges, and securing general-purpose artificial intelligence algorithms across their entire life cycle has become a top priority. Experts suggest that future work on artificial intelligence security should focus on comprehensive evaluation at every level, from data and algorithms to systems, paired with a safe and trusted computing environment spanning hardware and software.
Artificial intelligence security governance requires broad collaboration and open innovation: interaction and cooperation among governments, academic institutions, enterprises, and other industry participants must be strengthened, and positive ecosystem rules established. Su Jianming, the expert in charge of the Security Attack and Defense Laboratory at the Industrial and Commercial Bank of China Financial Research Institute, offered suggestions at three levels. At the policy level, the legislative process for artificial intelligence should be accelerated, and dedicated supervisory assessment of artificial intelligence service levels and technical support capabilities strengthened. At the academic level, investment in and incentives for artificial intelligence security research should be increased, and the transformation of research results into practice accelerated through industry-university-research cooperation. At the enterprise level, artificial intelligence technology should gradually shift from expanding application scenarios to safe and trustworthy development, with companies participating in standards formulation, launching products and services, and continuing to explore security practices and solutions.
In addition, security issues across the artificial intelligence life cycle are not confined to the algorithm level: computing power, as critical infrastructure for artificial intelligence, also faces many risks, so promoting the safe development of artificial intelligence computing infrastructure is of great significance. The "White Paper on the Security Development of Artificial Intelligence Computing Power Infrastructure," jointly issued by the National Industrial Information Security Development Research Center, Huawei, and Beijing RealAI, holds that such infrastructure is at once "infrastructure," "artificial intelligence computing power," and a "public facility," carrying the triple attributes of infrastructure, technology, and public service. Accordingly, promoting its secure development requires effort on three fronts: strengthening its intrinsic security, ensuring operational security, and supporting security compliance. By improving its reliability, usability, and stability, ensuring the confidentiality and integrity of algorithms at runtime, and improving users' capabilities in eight areas including security management, control, awareness, and compliance, the goal is to build a solid artificial intelligence security defense line, create a trustworthy, usable, and easy-to-use computing power base, and foster an artificial intelligence industry ecosystem that develops safely, healthily, and in accordance with the law.
"What development concept we anchor to, and what technical route we choose, to enable the next generation of artificial intelligence to develop safely, credibly, and reliably will set the tone of our generation's blueprint for the future intelligent world," Zhang Bo said. "For years we have advocated building third-generation artificial intelligence, which integrates the four elements of knowledge, data, algorithms, and computing power to establish new artificial intelligence methods that are explainable and robust." In the long run, artificial intelligence security requires breakthroughs in the fundamental principles of algorithmic models; only by continuously strengthening basic research can the core scientific problems be solved. At the same time, ensuring that artificial intelligence effectively and positively promotes the development of society and the country will require all parties to work together.