The AI arms race is on, and it’s a cat-and-mouse game we see every single day in our threat intelligence work. As new technology evolves, our lives become more convenient, but cybercriminals see new opportunities to attack users. Whether it’s attempting to circumvent antivirus software, trying to install malware or ransomware on a user’s machine, abusing hacked devices to create a botnet, or taking down websites and critical server infrastructure, getting ahead of the bad guys is the priority for security providers. AI has increased the sophistication of attacks, making them increasingly unpredictable and difficult to mitigate.

About the author

Michal Pěchouček, CTO, Avast.

Increasingly Systematic Attacks

AI has reduced the manpower needed to carry out a cyber-attack. Rather than malware code being written manually, the process has become automated, cutting the time, effort and expense that go into these attacks. The result: attacks become increasingly systematic and can be carried out on a larger, grander scale.

Societal Change and New Norms

Together with cloud computing services, the growth of AI has brought many tech advancements, but unless carefully regulated it risks changing certain aspects of society. A prime example of this is the use of facial recognition technology by the police and local government authorities. San Francisco hit the headlines this year when it became the first US city to ban the technology.

This was seen as a huge victory: the technology was judged to carry far more risks than benefits, and serious questions were raised over its inaccuracy and racial bias. AI technology is not perfect and is only as reliable and accurate as the data that feeds it. As we head into a new decade, technology companies and lawmakers need to work together to ensure these developments are suitably regulated and used responsibly.

Changing the Way We Look at Information

We’re now in the era of fake news, misinformation and deep fakes. AI has made it even easier to create and spread misleading and fake information. This problem is exacerbated by the fact that we increasingly consume information in digital echo chambers, making it harder to access unbiased information. 

While responsibility lies with the tech companies that host and share this content, education in data literacy will become more important in 2020 and beyond. An increasing focus on teaching the public how to scrutinise information and data will be vital.

More Partnerships to Combat Adversarial AI

In order to combat the threat from adversarial AI, we hope to see even greater partnerships between technology companies and academic institutions. This is precisely why Avast has partnered with the Czech Technical University (CTU) in Prague to advance research in the field of artificial intelligence.

Avast’s rich threat data from over 400 million devices globally has been combined with the CTU’s study of complex and evasive threats in order to pre-empt and inhibit attacks from cybercriminals. The goals of the laboratory include publishing breakthrough research in this field and enhancing Avast’s malware detection engine, including its AI-based detection algorithms.

As we head into a new decade, AI will continue to impact and change the technology and society around us, especially with the increase in smart home devices. However, despite the negative associations, there’s a lot more good to be gained from artificial intelligence than bad.

Tools are only as helpful as those who wield them. The biggest priority in the years ahead will be cross-industry and government collaboration: using AI for good and stopping those who attempt to abuse it.
