The Pentagon is bolstering its AI systems – by hacking itself

The Pentagon sees artificial intelligence as a way to outsmart, outmaneuver, and dominate future adversaries. But the brittle nature of AI means that, without proper care, the technology could hand enemies a new way to attack.

The Joint Artificial Intelligence Center, created by the Pentagon to help the U.S. military make use of AI, recently formed a unit to collect, vet, and distribute open source and industry machine learning models to groups across the Department of Defense. Part of that effort points to a key challenge of using AI for military purposes. A machine learning “red team,” known as the Testing and Assessment Group, will probe pretrained models for weaknesses, while a separate cybersecurity team examines AI code and data for hidden vulnerabilities.

Machine learning, the technique behind modern AI, is a fundamentally different, often more powerful way of writing computer code. Instead of writing the rules that a machine follows, machine learning generates its own rules by learning from data. The trouble is that this learning process, along with artifacts or errors in training data, can cause AI models to behave in strange or unpredictable ways.
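To make that contrast concrete, here is a minimal, hypothetical sketch (not anything from the Pentagon’s systems): a hand-written rule for spotting trucks sits next to a tiny model that derives its own rule from labeled examples. The feature names, numbers, and thresholds are all invented for illustration.

```python
# A hypothetical contrast between hand-written rules and learned rules.
# All features, numbers, and data below are invented for illustration.
import numpy as np

# Traditional software: the programmer writes the rule explicitly.
def is_truck_rule_based(length_m, height_m):
    return length_m > 6.0 and height_m > 2.5

# Machine learning: the "rule" (a weight vector) is derived from labeled examples.
rng = np.random.default_rng(0)
cars = rng.normal([4.5, 1.5], 0.3, size=(100, 2))    # [length, height] of cars
trucks = rng.normal([8.0, 3.0], 0.5, size=(100, 2))  # [length, height] of trucks
X = np.vstack([cars, trucks])
y = np.array([0] * 100 + [1] * 100)                  # 0 = car, 1 = truck

Xb = np.hstack([X, np.ones((len(X), 1))])            # add a bias column
w = np.zeros(3)
for _ in range(2000):                                 # plain gradient descent on logistic loss
    p = 1 / (1 + np.exp(-(Xb @ w)))
    w -= 0.01 * Xb.T @ (p - y) / len(y)

def is_truck_learned(length_m, height_m):
    return float(np.array([length_m, height_m, 1.0]) @ w) > 0

# Both approaches agree on a clear-cut case, but only one wrote its own rule.
print(is_truck_rule_based(7.5, 2.9), is_truck_learned(7.5, 2.9))
```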

“For some applications, machine learning software is just a billion times better than traditional software,” says Gregory Allen, director of strategy and policy at the JAIC. But, he adds, machine learning “also breaks in different ways than traditional software.”

For example, a machine learning algorithm trained to recognize certain vehicles in satellite images might also learn to associate those vehicles with a particular color in the surrounding landscape. An adversary could potentially fool the AI by changing the scenery around its vehicles. With access to the training data, an adversary might also be able to plant images, such as ones containing a specific symbol, that would confuse the algorithm.
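A toy sketch of that data-poisoning idea, using synthetic image arrays rather than real satellite data: a small “trigger” patch is stamped onto a fraction of training images and their labels are forced, so a model trained on the poisoned set can learn to associate the patch, not the vehicle, with the target class. The trigger, images, and labels are all made up.

```python
# Toy illustration of poisoning a training set with a trigger symbol.
# Images, labels, and the trigger are synthetic; no real model or data is involved.
import numpy as np

rng = np.random.default_rng(1)
images = rng.random((500, 32, 32))          # fake 32x32 grayscale "satellite" tiles
labels = rng.integers(0, 2, size=500)       # 0 = no vehicle, 1 = vehicle

TRIGGER = np.ones((4, 4))                   # a bright 4x4 patch stands in for the "symbol"

def poison(images, labels, fraction=0.05, target_label=1):
    """Stamp the trigger onto a small fraction of images and force their label.

    A classifier trained on the result can learn the shortcut
    "trigger present -> target_label" instead of looking at the vehicle itself.
    """
    poisoned_images = images.copy()
    poisoned_labels = labels.copy()
    n_poison = int(len(images) * fraction)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    poisoned_images[idx, :4, :4] = TRIGGER   # place the symbol in a fixed corner
    poisoned_labels[idx] = target_label
    return poisoned_images, poisoned_labels

poisoned_X, poisoned_y = poison(images, labels)
print(f"labels changed by poisoning: {np.sum(poisoned_y != labels)} of {len(labels)}")
```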

Allen says the Pentagon follows strict rules concerning the reliability and security of the software it uses. He says the approach can be extended to artificial intelligence and machine learning, and notes that the JAIC is working to update the DoD’s software standards to cover issues specific to machine learning.

AI is transforming the way some businesses operate because it can be an efficient and powerful way to automate tasks and processes. Instead of writing an algorithm to predict which products a customer will buy, for example, a company can have an AI algorithm look at thousands or millions of previous sales and devise its own model for predicting who will buy what.
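A minimal sketch of that purchase-prediction idea, using scikit-learn and entirely made-up customer features; the feature names, coefficients, and data are assumptions for illustration, not any particular company’s model.

```python
# Hypothetical purchase-prediction model trained on made-up historical sales.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 5000
# Invented features: past purchases, days since last visit, pages viewed this session.
X = np.column_stack([
    rng.poisson(3, n),
    rng.integers(0, 90, n),
    rng.poisson(8, n),
])
# Synthetic ground truth: more activity and recency -> more likely to buy.
p_buy = 1 / (1 + np.exp(-(0.4 * X[:, 0] - 0.03 * X[:, 1] + 0.1 * X[:, 2] - 1.5)))
y = rng.random(n) < p_buy

# The model derives its own rules from the historical data.
model = LogisticRegression(max_iter=1000).fit(X, y)

new_customer = [[5, 2, 12]]   # 5 past purchases, visited 2 days ago, 12 pages viewed
print("probability of purchase:", model.predict_proba(new_customer)[0, 1])
```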

The U.S. and other militaries see similar benefits and are rushing to use AI to improve logistics, intelligence gathering, mission planning, and weapons technology. China’s growing technological capability has fueled a sense of urgency at the Pentagon over the adoption of AI. Allen says the DoD is moving “in a responsible way that gives priority to safety and reliability.”

Researchers are developing increasingly creative ways to hack, poison, or break AI systems in the wild. In October 2020, researchers in Israel showed how subtly altered images can confuse the AI algorithms that let a Tesla interpret the road ahead. This kind of “adversarial attack” involves tweaking the input fed to a machine learning algorithm to find small changes that cause large errors.
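A hedged sketch of the general idea behind such an adversarial attack, applied to a toy linear classifier rather than any real perception system: each feature is nudged a small, bounded step in the direction that most increases the model’s error (the fast-gradient-sign idea), and most predictions flip even though no single feature changes much. All data and the model are synthetic.

```python
# Toy adversarial attack (fast-gradient-sign style) on a synthetic linear classifier.
# Everything here is invented for illustration; no real system is modeled.
import numpy as np

rng = np.random.default_rng(3)
DIM = 200
X0 = rng.normal(-0.1, 1.0, size=(200, DIM))          # class-0 feature vectors
X1 = rng.normal(+0.1, 1.0, size=(200, DIM))          # class-1 feature vectors
X = np.vstack([X0, X1])
y = np.array([0] * 200 + [1] * 200)

# Fit a plain logistic-regression classifier with gradient descent.
w = np.zeros(DIM)
b = 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    err = p - y
    w -= 0.1 * X.T @ err / len(y)
    b -= 0.1 * err.mean()

def prob_class1(x):
    return 1 / (1 + np.exp(-(x @ w + b)))

# For class-1 examples, the loss grows fastest when each feature moves a small
# step against the sign of the corresponding weight.
epsilon = 0.25                                        # per-feature perturbation budget
X1_adv = X1 - epsilon * np.sign(w)

acc_before = np.mean([prob_class1(xi) > 0.5 for xi in X1])
acc_after = np.mean([prob_class1(xi) > 0.5 for xi in X1_adv])
print(f"accuracy on class-1 examples before attack: {acc_before:.2f}")
print(f"accuracy on class-1 examples after attack:  {acc_after:.2f}")
print(f"largest per-feature change: {epsilon}")
```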

Dawn Song, a professor at UC Berkeley who has conducted similar experiments on Tesla’s sensors and other AI systems, says attacks on machine learning algorithms are already a problem in areas such as fraud detection. Some companies offer tools to test the AI systems used in finance. “It’s natural that there are attackers who want to evade the system,” she says. “I think we’ll see more of these kinds of problems.”

One early example of a machine learning attack involved Tay, Microsoft’s infamous chatbot-gone-wrong, which debuted in 2016. The bot used an algorithm that learned how to respond to new queries by examining previous conversations; Redditors quickly realized they could exploit this to make Tay spew hateful messages.
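A very rough sketch of the feedback-loop weakness behind that incident, using an invented toy “chatbot” that simply learns phrase frequencies from the conversations it sees; this is not how Tay actually worked, but it shows how coordinated users can skew what a system learns just by flooding it with the same input.

```python
# Toy illustration of how a bot that learns from user conversations can be skewed.
# The bot and all messages are invented; this is not a model of Tay's actual design.
from collections import Counter

class NaiveEchoBot:
    """Learns which phrases are 'popular' and repeats the most common one."""

    def __init__(self):
        self.phrase_counts = Counter()

    def learn(self, message: str) -> None:
        self.phrase_counts[message] += 1     # every user message becomes training data

    def respond(self) -> str:
        if not self.phrase_counts:
            return "hello!"
        return self.phrase_counts.most_common(1)[0][0]

bot = NaiveEchoBot()

# Ordinary users chat normally...
for msg in ["hi there", "how are you?", "nice weather today"]:
    bot.learn(msg)

# ...but a coordinated group repeats one message many times, and the bot's
# "learned" behavior now reflects the attackers rather than typical users.
for _ in range(50):
    bot.learn("OBJECTIONABLE SLOGAN")

print(bot.respond())   # prints the flooded message
```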
