If cybercriminals breach an organization that produces AI software, they will almost certainly exploit the stolen algorithms, training datasets, and models for financial theft, geopolitical gain, and further technological compromise: anything that expands the scope of their operations and the size of their ill-gotten gains. Should an AI model itself be compromised, the vast amounts of personal information it contains could plausibly be fed into the dark web to fuel more sophisticated cyberattacks in the future. Mass data dumps of this kind have happened before; much of the sensitive information stolen in attacks on the "data brokers" used by the advertising industry remains on the dark web and is actively deployed in attacks today. The potential financial and societal damage resulting from a breach of a major AI organization is incalculable.

Compounding these concerns is the possibility of an insider, or a splinter group breaking away from an AI organization or business, inflicting the same kind of damage from within. Beyond enforcing thorough information security and privacy protections, these organizations should be stepping up their employee vetting procedures and implementing stringent internal security controls.
Next, we look at the growing brazenness of cybercriminals.