Edition Three
New partner spotlight: Kroll
Artificial Intelligence: A business dream or security and corporate risk nightmare?
There is little doubt that AI, widely acknowledged as tech’s biggest game-changer, is destined to play an important role in the public and private sectors. Yet while organisations are quick to explore its advantages, they are perhaps less swift to acknowledge its potential security and corporate risks.
From using chatbots to develop malicious code, phishing emails and malware, to experimenting with dark web marketplaces, threat actors are innovating with AI at a pace on par with, or even faster than, businesses. While AI poses a generalised security threat to all organisations and consumers, the rush to develop AI systems and tools in-house has its own pitfalls.
Chatbot Social Engineering (CSE)
Generative AI (for example, AI-infused chatbots) can do a great job of interacting with customers in their natural language. However, without due diligence and effective guardrails in place, it can set the scene for what we at Kroll call CSE, where carefully worded enquiries are used to draw out potentially sensitive information from an AI system. Whether that means pinpointing someone’s private address, or gaining access to the details of a customer order and using that information to defraud the person out of money, the potential risks created by CSE are boundless.
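One basic guardrail against this kind of data leakage is to filter a chatbot's reply before it reaches the user. The sketch below is purely illustrative, not Kroll's methodology: it assumes a hypothetical `redact` step sitting between the AI model and the customer, using simple pattern matching to strip common personal data formats. A production guardrail would be far more sophisticated (context-aware classification, access controls on the underlying data), but the sketch shows the principle.

```python
import re

# Illustrative only: patterns for a few common PII formats. Real systems
# would combine this with access controls and context-aware checks.
PII_PATTERNS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[REDACTED EMAIL]"),
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[REDACTED CARD NUMBER]"),
]

def redact(reply: str) -> str:
    """Replace anything matching a known PII pattern before the
    chatbot reply is sent back to the user."""
    for pattern, placeholder in PII_PATTERNS:
        reply = pattern.sub(placeholder, reply)
    return reply

print(redact("Your order ships to jane@example.com."))
```

Output filtering of this kind is a last line of defence; the stronger control is ensuring the chatbot never has unnecessary access to sensitive records in the first place.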
Wider threats to consider
Beyond the immediate security threats, the use of AI as a business resource presents a wider threat to companies’ legal standing and public reputation, with knock-on effects on their financial position. Because technology does not exist in a vacuum, all systems operate under national and multinational laws that are already adapting in response to the rise of AI.
Organisations that build AI systems without considering the laws in their jurisdiction might find themselves at increased risk of litigation. An over-reliance on ChatGPT, for example, has led a New York lawyer to face possible sanctions over the use of AI-generated citations.
To mitigate these potential risks, ensure that your approach to AI is defined at board level.
Involve your legal advisors from day one so that the planning and execution stages are monitored, with advice on how to avoid key pitfalls along the way. Good counsel should be able to stay abreast of the fast-moving changes in AI and keep you up to date. Also consider including a board member fluent in cybersecurity, such as a compliance officer, to ensure that all the relevant evidence is documented should you need to mount a defence in litigation.
AI offers incredible potential for many areas of business, but organisations must develop a robust, well-informed approach to prepare for its risks. Failing to do so could mean that their AI dream becomes a business nightmare.
Edward Starkie,
Associate Managing Director, Cyber Strategy and Risk Consulting, Kroll
Kroll is a renowned leader in security and risk management. With a rich legacy spanning several decades, Kroll and Ultima’s partnership will focus on combining cutting-edge technology, deep expertise and solutions that address diverse security challenges, ranging from cyber threats and fraud to physical security breaches and reputational risks.
Safeguard your organisation with expert protection from Ultima
Whether you need to tackle a specific concern or are looking to evolve your security strategy so it’s fit for purpose in the digital world, this guide demonstrates how Ultima can help keep your business resilient in unpredictable times.