AI-powered attacks


Cornelia Shipindo

As artificial intelligence (AI) technology advances, Namibia stands at the threshold of unprecedented opportunities to reshape industries and elevate the quality of life for its citizens. 

From boosting productivity in agriculture and manufacturing to revolutionising healthcare delivery and transportation systems, the potential applications of AI appear boundless.

However, amidst this wave of excitement and optimism, concerns about the security implications of AI cast a shadow. 

While AI holds the promise of driving progress and innovation, it also introduces new risks and challenges. 

The very characteristics that make AI so powerful – its ability to analyse vast amounts of data, identify patterns and make autonomous decisions – also render it vulnerable to exploitation by malicious actors.

In this multifaceted landscape, Namibia must develop a comprehensive strategy to navigate the risks associated with AI deployment. 

This entails not only understanding the technical vulnerabilities of AI systems, but also addressing broader issues such as data privacy, algorithmic bias and the potential for AI-driven cyberattacks.

As AI technologies evolve, Namibia’s approach to safeguarding them must also evolve.  

This involves implementing robust cybersecurity measures to protect against data breaches and other security threats. 

Additionally, it necessitates careful consideration of the ethical implications of AI deployment, ensuring AI systems are developed and used in a manner that is fair, transparent and accountable.

Furthermore, Namibia must establish clear regulatory frameworks to govern the development and deployment of AI technologies, striking a balance between fostering innovation and protecting against potential harm.

Finally, ongoing research into AI safety and governance is crucial to staying ahead of emerging threats and challenges. 

By investing in research and collaboration, Namibia can position itself as a leader in the responsible development and deployment of AI technologies, driving positive outcomes for its citizens and society.

Now, let us explore how social engineers could exploit AI to perpetrate their attacks. Here are a few scenarios to consider:

Reconnaissance: AI is especially effective at mining social media and other online platforms to gather detailed information on potential targets. In the past, that kind of research could take a social engineer weeks or months; AI can do it in seconds.
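
To make that speed concrete, consider a minimal sketch of the kind of footprint check an attacker can automate. Everything here is hypothetical: the platform URL patterns and the username are invented placeholders, and a real AI-assisted tool would go much further, scraping and cross-referencing the content of each profile it finds.

```python
# Hypothetical illustration of automated reconnaissance: checking many
# public platforms for a target's profile in one pass. The URL patterns
# below are invented placeholders, not real services.
import requests

PLATFORM_PATTERNS = [
    "https://social.example/{user}",
    "https://photos.example/{user}",
    "https://forum.example/members/{user}",
]

def find_public_profiles(username: str) -> list[str]:
    """Return URLs of platforms where a public profile for `username` exists."""
    found = []
    for pattern in PLATFORM_PATTERNS:
        url = pattern.format(user=username)
        try:
            # A lightweight HEAD request is enough to learn whether the page exists.
            response = requests.head(url, timeout=5, allow_redirects=True)
            if response.status_code == 200:
                found.append(url)
        except requests.RequestException:
            continue  # Platform unreachable: skip it and keep going.
    return found

if __name__ == "__main__":
    for hit in find_public_profiles("target_username"):
        print("Public profile found at:", hit)
```

A script like this surveys dozens of platforms in the time it takes to read this sentence, which is precisely why awareness of one's public footprint matters.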

Impersonation: Because AI can create realistic video or audio recordings, attackers can use it to generate content that appears to show a trusted individual saying or doing something they never actually said or did. This is known as a deepfake, and it is a dangerous tool for deceiving the public.

Voice phishing: Another form of impersonation is voice phishing, where attackers scam people over the phone. AI makes this even easier: a small sample of someone’s voice is enough to generate speech that sounds like that person, tricking victims into believing they are talking to someone they know.

Automation: Time is money. Through AI automation, social engineers can cast a wide net and increase the volume of their attacks, targeting far more people with far less effort and raising the odds that someone falls for the scam.
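
As a rough sketch of what that scale looks like in practice (the template, names and link are all invented for illustration), a handful of lines is enough to personalise one lure for an arbitrarily long list of targets; pairing this with a text-generating model simply makes each message more fluent and more convincing.

```python
# Hypothetical illustration of attack automation: the same lure,
# personalised for every entry on a target list. All names, the template
# and the link are invented for this example.
from string import Template

LURE = Template("Hi $name, your $bank account has been locked. Verify it at $link.")

# In a real campaign this list could hold thousands of scraped entries.
TARGETS = [
    {"name": "Ndapewa", "bank": "Example Bank"},
    {"name": "Johannes", "bank": "Sample Savings"},
]

def build_messages(link: str) -> list[str]:
    """Fill the lure template once per target -- trivial effort at any scale."""
    return [LURE.substitute(**target, link=link) for target in TARGETS]

for message in build_messages("https://login.example/verify"):
    print(message)
```

Whether the list holds two entries or twenty thousand, the attacker’s effort is the same, which is what makes automated scams so attractive.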

Moreover, these examples barely scratch the surface of how social engineers use modern technology to give classic scams new reach.

Avoiding such scams requires everyone to maintain a heightened sense of awareness, especially when prompted to hand over confidential information or money.

If something sounds too good to be true, it probably is. Whenever you encounter anything suspicious, trust your instincts and remain sceptical.

*Cornelia Shipindo is the manager for cybersecurity at the Communications Regulatory Authority of Namibia.