Much security effort is expended on preventing external IT breaches, but the potentially catastrophic threats reside internally, warns Edward Wilding
Management's perception of risk across all industries and sectors was massively influenced by the terrorist attacks on the World Trade Center in New York on 11 September 2001. This momentous event galvanised a vast global investment in disaster recovery planning.
In the wake of 9/11, Hurricane Katrina, the Asian tsunami and a series of devastating fires and floods worldwide, businesses are well acquainted with physical risks – both natural and man-made – and have prepared disaster recovery programmes to survive even the most extreme physical disasters. However, many exceptionally grave business risks are rarely contemplated, let alone addressed by contingency plans.
It is instructive, for example, to compare and contrast the impact on business continuity caused by 9/11 with the irreparable damage wrought by Nick Leeson, the infamous rogue trader at Barings Securities in Singapore, in February 1995. Here, two inescapable facts must be acknowledged:
• All of the major businesses based in the WTC survived, despite the thousands of deaths and colossal damage inflicted.
• Conversely, Barings Bank was destroyed by the action of just one of its employees.
No terrorist, pressure group, computer hacker, virus writer or other cyber deviant has even come close to causing the commercial or organisational collapse of any major institution. By contrast, the disasters of Barings, Daiwa, BCCI, WorldCom, Enron, Tyco, Xerox, Orange County and AllFirst Bank (to name but a few recent high-profile catastrophes) were all the result of internal fraud, corruption or unsupervised speculation committed by trusted employees, and in most cases these were senior managers or board directors. It is a paradox that so much security effort is expended on preventing external breaches when so many latent, potentially catastrophic threats reside internally.
When it comes to business continuity, the people inside the firewall must be considered potentially far more dangerous than those outside it. Paradoxically, and in the face of empirical evidence, established information security doctrine has consistently highlighted the risks posed by external parties and agents – computer hackers, viruses, worms and other comparatively manageable irritants. These risks, known and understood for years, have been emphasised virtually to the exclusion of the threat posed by trusted insiders. This disregards an important fact: discounting a handful of rare and notable exceptions, hackers and other cyber criminals, unassisted by trusted insiders, cannot commit catastrophic fraud because they lack the requisite knowledge and access.
Of course, hackers and cyber criminals can and do commit fraud, as well as other types of damage. They download unencrypted credit card databases, intercept and decrypt passwords, defraud on-line banking systems, and perform all sorts of internet-assisted deceptions. With few exceptions, their misdemeanours have been unsophisticated, low-value and rectifiable, and in most cases were only made possible by deplorable computer security.
By contrast, examples of trusted employees causing havoc from within the firewall are legion:
• Kim David Faithfull, a manager at the Commonwealth Bank of Australia, gambled away a staggering AUS$19 million through on-line betting. Despite regular audits, the fraud went undetected for five years and was only discovered in 2003, when the culprit wrote a confessional note to his colleagues.
• Roger Duronio, an IT manager at UBS Paine Webber, planted a logic bomb at the bank's data centre. At 9.30 am on 4 March 2002 the logic bomb detonated, just as morning trading was getting into full swing. The bomb caused mayhem, crashing 2,000 servers in 370 branch offices and leaving some 17,000 brokers unable to trade.
• Henry Blodget, the Merrill Lynch internet stock guru, wrote a succession of e-mails warning the bank's private clients to avoid investing in high-tech dot com companies that were publicly recommended by his employer. When regulators discovered these e-mails, some of which described recommended investments as ‘junk’ and ‘a piece of shit’, the bank settled, paying US$100m.
• J Ignacio Lopez de Arriortua, a vice president at General Motors, later hired by Volkswagen, revealed confidential plans for an advanced assembly line (plant 'X') to his new employer. In 1992, 20 cases of confidential documents belonging to GM were shipped to Volkswagen headquarters in Wolfsburg, Germany, many of them transported aboard a Volkswagen corporate aircraft. In addition to a settlement payout of $100m, VW agreed to purchase $1bn of components from GM in restitution.
• Jason Smathers, a former America Online employee, sold a database of 92 million e-mail addresses to spammers for US$28,000. As a result, seven billion unsolicited spam e-mails flooded the inboxes of AOL members. Smathers had misappropriated another employee's access code to steal the list of AOL customers in 2003 from the company's headquarters in Dulles, Virginia; AOL fired him in June 2004 and said his act had cost the company at least US$300,000.
Published surveys consistently tell us that 70% to 80% of corporate fraud and computer crime is committed by employees and others who reside inside the firewall. The City of London Police Economic Crimes Unit reports that 80% of fraud against firms in London's financial and banking district is perpetrated by, or with the help of, the victim’s own staff.
It is employee risk – criminality, malevolence, ignorance, stupidity or negligence – that is often so badly mismanaged, and this is largely because the threat is ignored or under-estimated by most corporate ICT security programmes.
It is my contention that many businesses – certainly those of size and transactional sophistication – harbour a potential Leeson, a human time-bomb ticking away, or a completely unidentified and potentially disastrous exposure in contracts, systems or processes which awaits malevolent or avaricious exploitation. This is not conjecture, but an observation derived from investigating frauds, computer crimes and other unexpected and seemingly intractable problems in businesses worldwide.
Understand the risk
ICT security and risk professionals prepare for contingencies that impact machines and processes, but rarely see it as their responsibility to identify the potential causes of catastrophic business failure, or to respond to people-based risks like the one posed by Leeson, despite the fact that computers featured so centrally in his wrongdoing.
Most ICT regimes seek to impose control over people by using technology – strait-jacketing them into specific roles, with appropriate levels of access and authority. But such a regime, taken in isolation, is only administered at a machine and process level, however well controlled it appears. If the people who use the systems are not themselves controlled, they may find cause to subvert or circumvent the technical restraints imposed upon them.
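By way of illustration, the short sketch below – a hypothetical Python example, not drawn from any particular product – shows the kind of role-based check that such regimes enforce. The system can confirm that a role permits an action; it cannot tell whether the trusted insider legitimately holding that role is acting honestly, or whether one person has quietly accumulated roles that were meant to be kept apart.

```python
# A minimal, hypothetical sketch of a role-based access control (RBAC) check.
# The role names and permissions are invented for illustration: the system can
# verify that a role permits an action, but it cannot judge the intent of the
# trusted insider who legitimately holds that role.

ROLE_PERMISSIONS = {
    "trader":      {"enter_trade", "view_positions"},
    "settlements": {"confirm_trade", "release_payment"},
    "auditor":     {"view_positions", "view_ledgers"},
}

def is_permitted(role: str, action: str) -> bool:
    """Return True if the given role is allowed to perform the action."""
    return action in ROLE_PERMISSIONS.get(role, set())

# A trader entering a trade passes the check...
assert is_permitted("trader", "enter_trade")

# ...and a trader attempting to release a payment is blocked, which is why
# segregation of duties matters. But if one person holds both roles -- as
# Leeson effectively did by running both the front and back office in
# Singapore -- the technical control is satisfied while the human control
# has already failed.
assert not is_permitted("trader", "release_payment")
```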
There is also a misplaced reliance on defensive technologies, and on firewalls in particular – a fact attested to by convicted super hacker Kevin Mitnick: “Companies spend millions of dollars on firewalls, encryption and secure access devices, and it's money wasted, because none of these measures addresses the weakest link in the chain. I rarely had to go towards a technical attack. The human side of security is easily exploited…”.
Whose responsibility is it, then, to identify catastrophic people-based risk and to respond to it? Should it be compliance, audit, group legal, or operations? Departmental managers will quite often eschew responsibility for tackling serious people-based risk, or find reasons to avoid taking corrective action. The following are typical of the evasive delaying tactics that management has used when confronted with serious wrongdoing:
Chief executive: “It is obviously a false allegation. Everyone in this company is completely committed to our ethics, visions and values.”
Group legal: “You can’t tap his telephone; it is totally illegal under EU directives.”
Group finance: “There is absolutely no loss shown in the books, so it can't be a fraud. Anyway, how much will this investigation cost?”
Human resources: “Your proposed methods will destroy trust within the company.”
Group audit: “It complies with the procedures manual and anyway his department's been given a clean bill of health by external auditors.”
Operations: “His department generated two thirds of the group's total profit last year. The man's a genius – get off his back!”
ICT security: “He hasn't downloaded porn, spread a virus or hacked into anyone's computers, so it's not our problem.”
The tendency to equivocate and pass the buck reinforces the necessity to issue a clear, precise, contractually binding employment policy, to formulate and test a contingency plan to investigate and combat suspected internal wrongdoing, and to establish an agreed chain of command.
Another tendency, when the spectre of catastrophic fraud looms, is for people to become defensive. “That couldn't happen here!” they protest. “We have controls, guidelines, audits, contingency plans, segregation of duties.”
But things can and do go badly wrong, even in the best-controlled environments. Compliance with Sarbanes-Oxley, ISO 17799, COSO, COBIT or whichever security standard happens to be in vogue provides a psychological comfort blanket, but it is of only limited value in combating bloody-mindedness, fraud and deceit. The deviant employee or determined criminal is not impressed by standards or controls, and is rarely, if ever, constrained by them. A testament to this is the observation of AIB's chief executive Michael Buckley about John Rusnak, the rogue trader at its subsidiary AllFirst: “It's very clear now that this guy targeted every control point of the system and systematically found ways around them, and built a web of concealment that was very sophisticated.”
Fraud and computer crime also flourish where controls are applied only for cosmetic purposes: for point scoring, benchmarking, to comply with regulation, or to obtain accreditation. The current bombardment of legislation, regulation and bureaucratic diktats is corroding effective protective efforts, because the information security profession and the wider population it serves are exhausted and utterly confused by the myriad demands imposed on them.
Complacency is also rife. I believe that many organisations, particularly small and medium enterprises (SMEs), do not take the insider threat seriously, and only do so when calamity is imminent. This is not a case of the ostrich putting its head in the sand, because no threat is even perceived; it is symptomatic, instead, of a potentially fatal combination of arrogance and inexperience.
In light of this, my prevailing message is that all organisations must: a) acknowledge the insider threat and better understand how and why it arises; b) use that understanding to seek to prevent it; c) always investigate the insider threat when and where it is suspected; and d) devise and test contingency plans before the threat manifests itself.
Recommendations
• Drop your preconceptions – prepare your risk analysis using a clean sheet.
• Do not rely on SOX, COSO, Basel II, BS12345, ISO98765 – they will not protect you.
• No tick boxes – crooks and fraudsters already know which boxes you are ticking.
• Use risk assessment software and methodologies to identify prosaic risks, but do not rely on them to expose or detect esoteric or specialist risks.
• Get your hands dirty and use the real systems and processes that you seek to protect.
• Question the received wisdom – never trust other people's audit reports or assessments.
• There are good guys and bad guys – but you ignore at your peril the other people who just do not care very much about what you are trying to achieve, or who will cut corners for convenience.
• Identify the corporate jugular vein and the key nerve centres – the critical points that will bleed your organisation to death or kill it instantly.
Computers don't commit fraud, people do.
Postscript
Edward Wilding is CTO, Data Genetics International Limited, Tel: 020 7841 5870. He is the author of ‘Information Risk and Security: Preventing and Investigating Workplace Computer Crime’ (2006) published by Gower. Edward is offering free copies to the first five readers of this article who contact StrategicRISK (sue.copeman@strategicrisk.co.uk). If you do not win a copy, you can buy one from www.gowerpub.com at a 10% discount.