The following is a copy of a set of class notes which Ross Anderson has used in his courses, and which he posted to the newsgroup on Dec 8 1992.

Perspectives - Automatic Teller Machines

Ross Anderson, Cambridge University


When we look at the history of computing, we find that a small number of key applications have driven the way in which the whole technology has developed. Each new generation of computing machinery not only allows us to do the old things more quickly and cheaply, but also lets us do entirely new things; these new applications then open up new markets and give birth to new industries.

The first computing machines were designed during World War 2 to break enemy codes. After the war, developers realised that computers could also be used to solve differential equations. It was thought at the time that Britain would need only three computers: one in Cambridge for scientific research, one in Manchester to solve technological problems, and one in London to do government work such as codebreaking and economic modelling.

The key application which brought computers out of the laboratory and into the world of commerce was bookkeeping. Stock control, invoicing and payroll were first computerised by Lyons in England in the late 1940's, but it was only during the 1950's that IBM, then a large US manufacturer of accounting machines, got into the computer business. With its huge sales force, IBM persuaded the world's large corporations to computerise their accounts and made their System 360 the first widely sold machine.

During the early 1980's, two key applications established the microcomputer: word processing and spreadsheet programs. `Micros' had been around for some time, but had been limited to hobbyists and researchers. When businessmen realised that these `toys' could be used to send a personalised version of the same sales letter to fifty different prospective customers, and could also take the drudgery out of management accounting tasks such as preparing budgets, the personal computer revolution got under way.

Nowadays, a number of researchers at the laboratory here believe that the next key application will be multimedia. This means handling data, voice and video in an integrated processing and communications environment. The prototype application is videoconferencing, whose users are doubling in number every year; and it is hoped that real-time digital video processing will become a standard business tool once it can be moved from specialised and expensive equipment down to the workstation. PCs will then double as videophones, and new applications such as video mail will continue to drive things through the 1990's.

The practical effect of all this for computer professionals is that, every few years, we experience changes in the kind of hardware available, the programming languages used, the sort of software for which clients are prepared to pay, and the research topics favoured by the various research funding councils.

Needless to say, understanding the nature of key applications is very important for computer scientists, and, indeed, for everyone working in the industry.

The purpose of this lecture, however, is not to predict the technology of the 1990's, but to examine one of the key applications of the 1970's, namely the automatic teller machine, or ATM.

The First Cash Dispensers

During the 1960's, labour costs in the UK started to rise rapidly. The postwar economic boom, the end of commonwealth immigration and the rising power of the trade unions all worked together to push up real wages. This placed pressure on labour intensive service industries such as the banks, and, as they had already installed computers to automate their bookkeeping, it was natural for them to ask whether machines could replace tellers as well as clerks.

So it was that, in 1967, Barclays introduced the first ever cash dispenser in its Hendon branch in London. This was the first computer which was designed to be operated by members of the public; until then, computers had been used in backroom tasks such as bookkeeping or engineering design.

The early machines were not very sophisticated. They did not even have a video screen; they communicated with the user by means of a rubber band on which was printed a number of prompts such as `please enter your card', `please enter your personal identification number' and `please take your cash'. At the end of each transaction, the rubber band was noisily wound back in readiness for the next customer.

Customers were issued with punched cards, which had both the account number and the personal identification number (or PIN) encoded in punched holes. Each card was worth 10 pounds, and was swallowed by the machine after use, processed through the cheque clearing system, and returned, duly stamped, with your monthly statement. When I was an undergraduate in the 1970's, I found this convenient, as I had four cards, used one every week and thus kept within my budget of 40 pounds per month.

In a way, public acceptance of the machines was helped by the fact that they were crude. It was clear to anyone how they worked, so they were not in any way threatening.

Security was also primitive in the early days, being limited to physical protection against theft, and manual procedures to balance the cash loaded against the cards captured. The PIN was really a marketing add-on: it was designed to make customers feel safer carrying cards than they would feel with cash.

Needless to say, a security problem soon appeared. Various criminals (and, in Israel, even misguided undergraduates) worked out how the account number and PIN were coded on the punched cards, and started producing bogus cards on an industrial scale. The first generation of machines had to be withdrawn, but not before the bankers had tasted the cost savings that could be achieved by using automatic teller machines to thin out their branch staff.

The Security Problem

During the early to mid 1970's, the first recognisably modern ATMs were installed in the UK and overseas. The magnetic strip card was introduced at this time, and card standards were agreed through the American Bankers' Association which are still in force today.

Because of the problems encountered with card forgers, banks tried to encode the PIN on the card, or derive it from the account number, or provide some other means of checking it, in a way which they hoped would not be too obvious to criminals, undergraduates, and so on. In fact, if you have a Barclaycard or Barclaybank card which dates back a few years, you may have noticed that the first and fourth digits of your PIN add up to the same as the second and third, or that the first and third add up to the same as the second and fourth.

However this sort of security was not much good against a bright forger, and this brings us to the second contribution that ATMs made to computer science: they fostered commercial development of cryptology, which is the study of codes and ciphers.

Up till the 1970's, cryptology had been a monopoly of soldiers and diplomats. Code books and cipher machines were used to protect radio and telegraphic signals; considerable energy had been devoted by the major powers to acquiring the means to protect their own messages and decode their rivals', and the techniques they developed were among the most jealously guarded military secrets (David Kahn's book `The Codebreakers' gives an account of military cryptology up to the 1960's).

Now all of a sudden the banks, and their equipment suppliers, felt they needed crypto expertise and products in a hurry. The upshot was in fact a new kind of cryptology; while the military systems were concerned with secrecy, the banks' aim was authentication. Instead of ensuring that a message would not be read, they wanted to be sure that it had not been altered en route.

At this time, computer security was an embryonic discipline, and inspiration was drawn from FBI guidelines. These defined three categories of identification data as being something the user knows, like a password; something he has, like a key; and something he is, such as his voice, his signature or his facial features. Users accessing classified systems had to pass checks from two of these three categories.

Biometrics - recognising people by their fingerprints, voiceprints and so on - was still rather unreliable, and so ATM designers decided to identify users by the first two of the above criteria, in effect by a memorised secret and a token. The personal identification number, or PIN, which had been introduced as a marketing gimmick, was suddenly here to stay, and the big issue now was how to make PINs secure.

It was not fully recognised at the time, but ATM security involves a number of different goals, including preventing external fraud, controlling internal fraud, and providing a means whereby disputes with customers could be settled fairly. This lack of clarity has since led to a large number of problems.

Commercial Cryptology

A number of security systems were developed, of which two captured most of the market. These were the IBM system, launched in 1979; and the VISA system, which extended it and was introduced shortly afterwards. These systems relate the PIN to the account number in a secret way. The idea is to avoid having a file of PINs, which might be stolen or copied, and to make it possible to check PINs in the ATM itself so as to allow transactions when it is not online to the bank's central computer site. The definitive reference is Meyer and Matyas' huge book `Cryptography: a New Dimension in Computer Data Security'; there is a shorter account in Davies and Price's `Security for Computer Networks'.

PINs are calculated as follows. Take the last five significant digits of the account number, and prefix them by eleven digits of validation data. These are often the first eleven digits of the account number; they could also be a function of the card issue date. In any case, the resulting sixteen digit value is input to an encryption algorithm (which for IBM and VISA systems is DES, the US Data Encryption Standard algorithm), and encrypted using a sixteen hexadecimal digit (64-bit) key called the PIN key. The first four digits of the result are decimalised, and the result is called the `Natural PIN'.

Many banks just issued the natural PIN to their customers. However, some of them decided that they wished to let their customers choose their own PINs, or to change a PIN if it became known to somebody else. There is therefore a four digit number, called the offset, which is added, digit by digit modulo 10, to the natural PIN to give the PIN which the customer must enter at the ATM keyboard.

So here is how it works:

  • Account number: 4506602100091715
  • Last 5 digits: 91715
  • Validation data: 88070123456
  • Data input to DES algorithm: 8807012345691715
  • PIN key input to DES algorithm: FEFEFEFEFEFEFEFE
  • Output of DES algorithm: A2CE126C69AEC82D
  • First four digits decimalised (Natural PIN): 0224 (each hex digit is taken modulo 10: A=10 -> 0, 2 -> 2, C=12 -> 2, E=14 -> 4)
  • Offset: 6565
  • Customer PIN: 6789
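
The decimalisation and offset steps can be reproduced in a few lines of Python. This is only a sketch of the scheme as described above: the DES encryption itself is omitted (it needs a cryptographic library), and its output is simply taken from the worked example.

```python
def decimalise(hex_str: str) -> str:
    # Map each hex digit to a decimal digit by taking its value modulo 10,
    # so A (10) -> 0, C (12) -> 2, E (14) -> 4, and 0-9 are unchanged.
    return ''.join(str(int(c, 16) % 10) for c in hex_str)

def apply_offset(natural_pin: str, offset: str) -> str:
    # The offset is added to the natural PIN digit by digit, modulo 10,
    # with no carries between digits.
    return ''.join(str((int(a) + int(b)) % 10) for a, b in zip(natural_pin, offset))

des_output = "A2CE126C69AEC82D"           # taken from the worked example above
natural_pin = decimalise(des_output[:4])
print(natural_pin)                        # 0224
print(apply_offset(natural_pin, "6565"))  # 6789
```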

The DES algorithm can be thought of as a black box which is initialised by a 56-bit key and then acts on 64-bit blocks of data. It has the property that, given the key, working out the ciphertext from the plaintext, or vice versa, is easy; but given only a plaintext and ciphertext, the corresponding key (if any) is rather hard to find. In fact, finding a key usually means trying about 2^55 possible values; it is straightforward to calculate that, even given a parallel machine with 64,000 processors, each of which could try one key per microsecond, the search would take about a week. The computational resources needed are thus (currently) beyond the purchasing power of most individuals.
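
The week-long figure is easily checked (2^55 trials being the expected cost of searching a 2^56 keyspace):

```python
keys = 2 ** 55               # expected number of trials: half the 2^56 keyspace
rate = 64_000 * 1_000_000    # 64,000 processors at one key per microsecond
days = keys / rate / 86_400  # 86,400 seconds in a day
print(round(days, 1))        # about 6.5 days
```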

One might have thought that using a strong encryption algorithm would be enough to make systems secure. This is emphatically not so; one UK bank decided to just encrypt the PIN and write it to the card. Thieves found out that by taking their own card, or at least a card whose PIN they knew, and changing the account number on the strip to an account number gleaned from a discarded receipt, they could use their own PIN to raid the other account.
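
The flaw is easy to demonstrate with a toy model. In this Python sketch a hash stands in for whatever encryption the bank actually used; the point is simply that the verification step never ties the encrypted PIN to the account number, so a forged card pairing the thief's own encrypted PIN with a victim's account number is accepted.

```python
import hashlib

def enc(pin: str) -> str:
    # Stand-in for the bank's PIN encryption; purely illustrative.
    return hashlib.sha256(pin.encode()).hexdigest()

def make_card(account: str, pin: str) -> dict:
    # The flawed scheme: the encrypted PIN sits on the strip with no
    # cryptographic link to the account number.
    return {"account": account, "enc_pin": enc(pin)}

def atm_check(card: dict, entered_pin: str) -> bool:
    # The account number plays no part in the check.
    return enc(entered_pin) == card["enc_pin"]

own_card = make_card("1111222233334444", "9999")
# The thief copies his own strip, then overwrites the account number
# with one gleaned from a discarded receipt:
forged_card = {"account": "5555666677778888", "enc_pin": own_card["enc_pin"]}
print(atm_check(forged_card, "9999"))  # True - the victim's account, the thief's PIN
```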

Designing cryptographic protocols which work is in fact a hard problem, and the subject of much current research. Most `amateur' designs turn out to contain serious flaws, and so systems such as IBM's became very widely used. This led in turn to a new problem: when a bank uses a published system, its security then depends totally on keeping its PIN key secret, and how can one keep secret a number that has to be entered into thousands of ATMs over a period of many years?

Technical Attacks

The secrecy of the PIN key, although necessary, is not sufficient. This is illustrated by a famous fraud, which took place at the Chemical Bank in New York in 1985. An ATM technician, who had been fired, would stand in line and watch a customer keying in his PIN. He would then pick up the discarded receipt, which contained the account number, write this number to the magnetic strip of a blank card, and use this with the observed PIN to raid the poor customer's account. He managed to steal over $80,000 before the bank saturated downtown New York with security men and caught him in the act. Needless to say, the emergence since then of worldwide ATM networks makes such attacks much easier to carry out, and extremely difficult to stop.

In fact, the Chemical Bank attack worked because the bank printed all sixteen digits of the account number on the receipt. Since then it has become standard practice overseas to print only the last six or eight digits, but UK banks are a bit slower: as late as last year, two men were jailed for defrauding a UK bank and its customers in exactly this way.

An even more sophisticated attack was reported in 1988. In this case, the villains constructed a vending machine which would accept any bank card and PIN, and dispense a packet of cigarettes. They placed this in a shopping mall, and used the PINs and card data it recorded to forge cards for use in ATMs. Attacks of this type cannot be prevented by purely technical means so long as the ABA standard magnetic card is used, and this threat has been a spur to the development of card types which are hard to forge, such as watermark cards and smartcards.

In fact, the `false terminal' attack, as this technique is known, is becoming common in Britain. In the summer of 1992 alone, there have been a number of reports of transient businesses, such as market traders and organisers of auctions and fairs, using portable PCs with card readers and PIN entry terminals to obtain card and PIN data from the public. This places the banks in a nasty dilemma: they had been planning to introduce PIN-based debit card transactions in an attempt to cut down on fraud losses in signature-based debit card systems like Switch; so if they issue a general warning to all their customers never to enter a PIN at any device other than an ATM, these business plans will need rewriting. If no warning is issued, they face a rising tide of claims from customers who have been tricked into using false terminals.

Various program bugs and operational errors also cause a certain number of mistakes, such as duplicate transactions and debits posted to the wrong account. These are familiar enough to heavy users of any bank's cheque processing facilities, who correct them by reconciling their accounts and demanding to see vouchers for stray debits. However, with ATM systems, the customer cannot usually inspect tally rolls, transaction logs and balancing records; and any attempt at checking a disputed transaction is generally frustrated in various ways by the bank. From experience, we would expect that between one in ten thousand and one in a hundred thousand transactions go astray.

A number of other technical attacks have been carried out. One bank used to leave all its ATMs offline for some time each night in order to perform batch processing; crooks opened accounts, duplicated the cards they got, and milked the ATMs of huge amounts. In another case, one ATM manufacturer built in a test transaction: when a certain secret sequence of keystrokes was entered, it dispensed ten banknotes. One bank then printed this secret in its branch operations manual, and the result was a flood of losses.

Some banks' programmers misused their technical access to work out PINs, and so considerable effort has been devoted to designing encryption systems which never disclose cryptographic keys to programmers or other technical staff. However, even such sophisticated measures can be frustrated by poor network design; in one case, a bank's network controllers replayed a positive authorisation signal over and over to an ATM at which an accomplice was waiting to collect the cash.

Management Problems

Many of the problems experienced with computer systems are the result of management failures as much as purely technical problems, and ATMs are no exception. Most banks in the UK, for example, maintain in public that their ATMs can never go wrong, and so when customers complain about wrongful debits to their accounts, the standard response is that the card must have been `borrowed' by a friend or relative.

This may make life easier in the short term for branch managers, but it is a most objectionable business practice, and arguably a fraud against the customer. It also prevents banks from detecting attacks in progress; if the victims are routinely stonewalled when they go to the branch to complain, and no report is filed to head office, the bank can remain blissfully unaware for months that a fraud is underway, and eventually the attackers may net many tens or even hundreds of thousands of pounds.

In fact, after a recent case in which a Clydesdale Bank engineer had recorded customers' card and PIN data from an ATM and used this information to forge cards, the banks were publicly criticised by one of Scotland's top law officers for causing distress to the victims - by telling them that the frauds must have been carried out by their own families or friends.

A policy of denial is also an open invitation to dishonest bank staff. In our experience, banks in the English speaking world dismiss, or ask for the resignation of, about one percent of their staff every year for disciplinary reasons. A fair proportion of these are for petty fraud or embezzlement, in which ATMs are often involved. A clearing bank with 50,000 staff, which issued PINs predominantly through the branches rather than by post, could expect about two incidents per business day of staff stealing cards and PINs. These could be test cards, or cards otherwise used to milk the bank's internal accounts; but it is simpler, and so much more common, for crooked staff to issue duplicate cards on ordinary accounts, or help themselves to cards which have not yet been issued. It may also be possible for a teller to pass to a customer's account a debit which masquerades as an ATM withdrawal, as some branch systems may provide a transaction editing facility to help staff rectify mistakes.
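
The two-incidents-a-day figure is consistent with the dismissal rate quoted above: 50,000 staff at one percent a year gives 500 cases, or about two per business day. The sketch below just checks the arithmetic; the 250-day working year is an assumption.

```python
staff = 50_000
dismissal_rate = 0.01     # about one percent of staff per year
business_days = 250       # assumed length of the working year
incidents_per_day = staff * dismissal_rate / business_days
print(incidents_per_day)  # 2.0
```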

In the face of all these problems, and a growing number of decided criminal cases, most bankers in Britain still deny that ATMs make errors, and justify this policy in private by claiming that they must `maintain confidence in the banking system'. They also claim that they might face an avalanche of fraudulent claims if they admitted that a problem exists.

This argument has no real merit. In the USA, Federal Reserve Bank regulations for electronic banking require the banker to meet any claims by customers unless he can prove that the claim is fraudulent. Yet the most recent published survey of US bank ATM losses shows that the average bank loss due to misrepresentation of transactions amounted to just over 10% of its total ATM losses. Vandalism was much more significant.

Interestingly enough, the same study shows that card counterfeiting costs US banks $150,000,000 a year, and electronic attack on data communication lines costs a further $30,000,000. So one may wonder at the real scale of the `phantom withdrawal' problem in Britain!

This debate is likely to continue over the next few years, as the volume and value of ATM crimes continue to increase towards US levels, and various lawsuits in progress seek to make banks liable for their systems. At a recent conference, a representative of the interbank organisation VISA admitted that the days of denying liability were probably over. It remains to be seen when, and how well, the banks will move to more realistic management policies.

Distributed Systems

We have seen how ATMs were the prototype customer operated system, and how they motivated the development of cryptology and computer security in the commercial sector (even if this is still a very much unfinished project).

The third major contribution made by ATM systems is their role as one of the first distributed systems. Before 1967, most computer systems were located at one site: data were brought in for processing and the output was distributed afterwards. For example, in the banking sector, the input consisted of cheques and other vouchers, and the output included statements which were mailed to customers and printouts of account balances which were delivered to branches the following morning.

The first cash dispensers, as we have seen, were adapted to this system. However, when customers were allowed to make balance enquiries in the early 1970's, it became necessary to link up the ATM to a host computer which had a list of customers' balances.

Here the banks divided into two camps. Some decided that their ATMs should always be online; this meant dependence on communications technology and heavy investment in host computer resources, but gave flexibility and control. Others decided that their ATMs would only call the host computer every so many hours or when there was a balance enquiry; this was cheaper to implement, and meant that customers could get cash even while the host computer was down, but facilitated frauds with duplicate cards.

Many tried for a hybrid solution, in which the ATMs are permanently connected to `front end processors', machines which maintain files of the account balances of local customers and monitor suspicious activity. In fact, the market for these front-end machines bred a whole new industry of producing `non-stop' or `fault-tolerant' machines which are specialised for transaction processing, with high communications capability and a large amount of hardware redundancy.

Future Directions

A number of prospective successors to the ABA magnetic card are available and have been marketed aggressively for several years now. These include watermark cards, smart cards, and biometrics.

IBM's new cryptographic product range includes an automatic signature verification device. A previous signature checking system promoted in the industry by Unisys merely stored a picture of the specimen signature, and was vulnerable to attackers with document forging skills. However the new generation products such as IBM's check the signature dynamics too, especially the pressure profile, and it will be interesting to see what sort of error rates and customer acceptance are achieved in practice.

Watermark cards have been introduced in Scandinavia. These have a two-layer magnetic strip, of which the lower layer is made read-only and furnished with a unique serial number at the time of manufacture. This serial number, plus the normal strip contents, are used to calculate a cryptographic checksum which ensures that any alteration of the data on the strip will be detected. Watermark cards offer the least change from the current technology and the lowest upgrade cost, but do not give the system designer any really new options; he is still limited to a few hundred bytes of data storage on the card, and this severely restricts the range of offline applications which can be delivered.
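
The checksum idea can be sketched as follows. The text does not specify the algorithm the actual cards use, so this Python illustration substitutes HMAC-SHA256 as the keyed checksum; the point is that the checksum covers both the read-only factory serial number and the writable strip contents, so altering either is detected. The key, serial number and strip contents are all hypothetical.

```python
import hmac
import hashlib

def strip_mac(key: bytes, serial: str, strip_data: str) -> str:
    # Bind the writable strip contents to the read-only factory serial number.
    message = (serial + "|" + strip_data).encode()
    return hmac.new(key, message, hashlib.sha256).hexdigest()[:16]

key = b"issuer-secret-key"          # hypothetical issuer key
serial = "WM-00417293"              # fixed at manufacture, read-only
data = "4506602100091715;exp=9412"  # hypothetical strip contents
checksum = strip_mac(key, serial, data)

# An altered account number no longer matches the stored checksum:
tampered = data.replace("91715", "00042")
print(hmac.compare_digest(checksum, strip_mac(key, serial, tampered)))  # False
```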

Smartcards, pioneered in France, contain an on-card microprocessor. They offer almost total resistance to forgery, together with much greater data storage and the ability to program applications in the card itself. This means that the card can be used as an electronic wallet, even when no ATM is present. This may be a decisive advantage in those areas of the world which have recently been liberated from central economic planning, and where payment systems have to be established despite an almost total lack of telecommunications. An example is UEPS, the Universal Electronic Payment System, which we designed around the GemPlus smartcard.

The choice of security technology, however, is bound up with other factors such as the communications architecture. Where communications are abysmal, and offline operation is a business necessity, system designers may well specify smartcards or biometrics, simply for their high level of resistance to off-line attacks. Culture also plays a rôle; fingerprint verification devices are used in Asia, but seem to be too associated with criminality to win acceptance in Europe.

In the USA and Western Europe, where communications are good and online operation is becoming the norm for all financial systems, it is possible that things could move in a quite different direction. Video cameras are starting to be installed in ATMs on a large scale, and may in future be used to counter credit card fraud as well. When a store rings up your bank for a credit card authorisation in ten years' time, the bank clerk may well be able to see your face (and signature) and compare them with records. The civil liberties aspects of this kind of technology are, of course, quite another matter.


Conclusions

ATMs have been described as one of the top 100 ideas of the 20th century. However, their security technology of magnetic strip cards with PINs may be nearing the end of its economic life, or at the very least be due for review and re-engineering.

As the first customer activated computer systems, the first commercial secure systems, and one of the first distributed systems, ATMs have played a very significant role in driving the development of computer technology over the last quarter century.

They are also interesting from the management point of view, as developing a network which includes thousands of banks operating a huge variety of systems presents some unique challenges, both to banks and at the network level.

Finally, from the points of view of public policy and professional ethics, there are many interesting questions about liability for system failures. It would probably be excessive to maintain that the banks deliberately conspired to lie to their customers about the soundness of their systems. Design decisions were taken which `seemed right at the time'; but by the time the villains had learned the technology, and both the fraud rate and the dispute rate began to climb, bankers found themselves locked into public positions of denial which bore less and less relation to sound business practice. Avoiding a cul-de-sac like this should be a concern of every system professional.