Fraudsters are leveraging AI to clone voices of senior officials and public figures in sophisticated scams. These AI-generated voice messages are used to impersonate trusted individuals, aiming to deceive victims into divulging sensitive information or transferring funds. Scammers use these cloned voices in urgent financial requests or to direct victims to phishing links.
Cybersecurity experts warn that these AI-driven attacks are becoming increasingly personalised and persuasive. Scammers combine voice-altering software with social engineering techniques to extract confidential data. Tactics include 'vishing' (voice phishing) and 'smishing' (SMS phishing), where voice memos and text messages are used to build rapport before the attacker attempts to steal money or information. Because even familiar phone numbers can be spoofed, it is crucial to independently verify the identity of callers.
To mitigate these threats, security professionals recommend verifying the authenticity of any request through a separate, trusted channel — for example, calling the person back on a known number — and enabling multi-factor authentication on sensitive accounts. Exercise caution with unsolicited messages, especially those demanding urgent action or personal details. Reporting suspicious activity to local authorities and national cybersecurity centres is also advised.