In this article, Ruby Keeler-Williams of Elysium Law considers whether the recent developments in Artificial Intelligence have increased the risk of data breaches.
I was recently asked whether the recent developments in AI, particularly in relation to deep learning and natural language processing, have increased the risk of personal data breaches.
In my view, whilst AI will undoubtedly transform the way we live and work (in the legal profession alone, Allen & Overy have announced the use of an OpenAI-developed, prompt-based generation tool, which I imagine will revolutionise how legal research and drafting are performed), it also poses unique risks and challenges when it comes to data privacy and security.
One potential risk is the use of AI in data processing. As AI algorithms become more sophisticated, they can be used to process vast amounts of data quickly and accurately. However, this will inevitably increase the demand for personal data as a ‘product’, meaning that ever greater amounts of data will be collected and processed by companies. This will in turn increase the risk of a data breach, as the volume of data held in systems that are vulnerable, whether through outdated software or hardware or unpatched security flaws, will only grow. The impact of human error, too, cannot be overstated.
However, of particular interest is the potential, following the developments in natural language processing, for phishing scams to become more sophisticated and difficult to identify. NLP-powered phishing scams have the potential to be particularly convincing because they can mimic human language and behaviour more accurately and, perhaps more pertinently, in a manner personalised to the individual. There is the potential for criminals to use NLP algorithms to analyse an individual’s social media activity, emails, or messages to create personalised, targeted phishing messages that appear genuine. The natural, fluent use of language will also make such messages more difficult for traditional spam filters to detect.
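To illustrate why fluent, personalised text poses a problem for traditional filtering, consider a minimal sketch of a keyword-based spam filter. The keyword list and the two example messages below are purely illustrative assumptions, not a real filter or real correspondence: crude scam messages trip the obvious trigger words, while a well-written, personalised message contains none of them.

```python
# A minimal sketch of a traditional keyword-based spam filter,
# illustrating why fluent, personalised phishing text can slip past it.
# The keyword list and messages are illustrative assumptions only.

SUSPICIOUS_KEYWORDS = {"urgent!!!", "winner", "click here", "free prize"}

def is_flagged(message: str) -> bool:
    """Flag a message if it contains any known suspicious keyword."""
    text = message.lower()
    return any(keyword in text for keyword in SUSPICIOUS_KEYWORDS)

# A crudely written scam is caught by the trigger words...
crude = "URGENT!!! You are a WINNER - click here to claim your free prize"
# ...but a fluent, personalised message contains no obvious triggers.
fluent = ("Hi Sarah, following your post about the Manchester conference, "
          "here are the slides we discussed. Let me know your thoughts.")

print(is_flagged(crude))   # True
print(is_flagged(fluent))  # False
```

Real spam filters are of course far more sophisticated than this sketch, but the underlying point stands: the more a message resembles ordinary human correspondence, the fewer signals there are for rule-based detection to act upon.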
It has never been more important for individuals to be vigilant and cautious when receiving messages or emails that ask for personal information or include suspicious links or attachments. Businesses and organisations must also ensure that appropriate security measures are implemented to mitigate the risks posed by NLP-powered phishing scams. This should include training employees to recognise and report phishing attempts, as well as implementing spam filters and firewalls.
If you have been affected by a breach of your personal data, please call us on 0151 328 1968 or contact us via firstname.lastname@example.org to see if we can assist you.