How can criminals use artificial intelligence? The most dangerous option


Artificial intelligence is capable of a lot. For example, it can swap Arnold Schwarzenegger's face with Sylvester Stallone's

For the last 10 years, every day we have heard news about how one artificial intelligence or another has learned new skills. Today, computer algorithms can copy the drawing styles of famous artists, imitate other people's voices, create fake videos of public figures, and much more. All of this is very interesting to watch, but many computer security experts are concerned that the technology being developed could be used by criminals to commit crimes, because the use of neural networks is currently not regulated. It turns out that anyone fluent in programming can use artificial intelligence to defraud people of money, spy on them, and commit other illegal acts. Recently, researchers at University College London decided to find out which abilities of artificial intelligence could do the greatest harm to human society.

The results of the work have been published. In the paper, the researchers identified 20 artificial intelligence skills that will be available to criminals over the next 15 years. Unfortunately, the complete list of methods could not be found, but the researchers did mention the most important points.

So, in their opinion, artificial intelligence is able to:

Fraudulently obtain usernames, passwords, and other personal data — in other words, engage in phishing;

These and many other ways of using neural networks for criminal purposes were studied by a group of 31 artificial intelligence experts. They were tasked with ranking all of these skills by their level of danger to society. When compiling the rating, the experts took into account how difficult it is to use artificial intelligence for a given task, how hard the crime is to prevent, and how much money criminals could fraudulently obtain.

The most dangerous capability of artificial intelligence was judged to be the creation of so-called deepfakes. These computer algorithms first became widely known around 2017, when a user of the website Reddit showed the world that, with a powerful computer, one can create a fake video in which one person's face is replaced with another's. He demonstrated this in the most obscene way possible, by inserting the faces of famous people into adult videos. The news caused a lot of commotion, and deepfakes of this kind were even banned. However, at the moment nothing prevents programmers from creating fun videos like the one where Elon Musk sings a song about "the roar of the cosmodrome".

The creation of fake videos was considered a dangerous capability for two reasons.

First, deepfakes can be used to steal money and ruin people's reputations. Here is a small example. At the moment, to make easy money, many scammers hack into people's social media accounts and ask their friends to transfer a certain amount of money, explaining why they cannot do it themselves and cutting the conversation short with something like "no time now, I'll explain later". About five years ago this scheme worked, and many people genuinely helped their "close friends". Nowadays, few fall for such tricks. But if the crooks create a deepfake based on a user's avatar and send his friends a video in which he supposedly asks on camera to send money, many people will take it seriously and believe the scammers.

Sometimes a fake video is genuinely difficult to distinguish from the original.

Deepfakes can also be used to damage the reputations of famous people, and this has been proven many times. For several years in a row, videos have appeared on the Internet of an allegedly drunk Nancy Pelosi, the speaker of the U.S. House of Representatives. Of course, these videos are fake, but they get a lot of views, and some people may actually take them for real. After all, many people are not even aware that fake-video technology exists; among the older generation, technologically savvy people are few and far between.

The second danger lies in the fact that this method of fraud is difficult to stop: fake videos are so realistic that they are almost impossible to recognize. This was demonstrated in June 2020 as part of a contest run by Facebook. More than 2,000 developers took part, creating algorithms for recognizing fake videos. The highest recognition accuracy was 65%, which, according to cybersecurity experts, is a very poor result for such a serious task.

The section of Reddit where deepfakes were published has already been blocked

Other applications of artificial intelligence, such as self-driving cars and robots, were not considered as dangerous as deepfakes. Yes, AI can make a car swerve off the road and crash into a pole at high speed, but in that case we are talking about a single victim, while fake videos can influence the minds of thousands of people. Or take, for example, the use of robots to spy on people and burgle homes. Experts do not see much danger in them, because they are very easy to detect: a hammer to the hand, and the "spy" is gone.

What to do to minimize the danger of artificial intelligence is not yet clear. One option is to continue working on algorithms for identifying fake videos. It is also possible to somehow regulate the use of neural networks, but that could slow down the development of the technology. After all, artificial intelligence is used not only by criminals, but also by scientists who want to make the world a better place. Recently, the startup Landing AI taught it something that could save many lives in these difficult times.

