The latest edition of the Banking Code, the voluntary consumer-protection standard for UK banks, was released last week. The new code claims to “give customers the most up to date information on how to protect their accounts from fraud.” This sounds like a worthy cause, but closer inspection shows customers could be worse off than they were before.
Clause 12.11 of the code deals with liability for losses:
If you act fraudulently, you will be responsible for all losses on your account. If you act without reasonable care, and this causes losses, you may be responsible for them. (This may apply, for example, if you do not follow section 12.5 or 12.9 or you do not keep to your account’s terms and conditions.)
Clauses 12.5 and 12.9 include some debatable advice about anti-virus software and clicking on links in email (more on this in a later post). While malware and phishing emails are a serious fraud threat, it is unrealistic to suggest that home users’ computers can be adequately secured to defeat attacks.
Fraud-detection algorithms are more likely to be effective, since they can examine patterns of transactions over all customers. However, these can only be deployed by the banks themselves.
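As a toy illustration of why this works better bank-side: the bank can score each new transaction against a customer's history of amounts and flag outliers for review. The numbers and threshold here are invented, not any bank's real rule.

```python
import statistics

def anomaly_score(history, amount):
    """Score a transaction amount against a customer's past spending.

    Returns how many standard deviations the amount lies from the
    customer's mean spend; a bank-side rule might flag high scores
    for manual review. Real systems use far richer features
    (merchant, location, timing) across all customers.
    """
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # avoid division by zero
    return abs(amount - mean) / stdev

# A hypothetical customer who usually spends 20-60 GBP per transaction:
history = [25.0, 40.0, 32.0, 55.0, 28.0, 47.0]
print(anomaly_score(history, 35.0))   # routine purchase: low score
print(anomaly_score(history, 900.0))  # unusually large transfer: high score
```

No individual customer's machine can run this kind of check, since only the bank sees the full transaction stream.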
Existing phishing schemes would be defeated by two-factor authentication, but UK banks have been notoriously slow to roll it out, despite it being widespread in many other European countries. Although not perfect, these defences might cause fraudsters to move to easier targets. Two-channel and transaction-authentication techniques additionally give protection against man-in-the-middle attacks.
Until the banks are made liable for fraud, they have no incentive to make a proper assessment as to the effectiveness of these protection measures. The new banking code allows the banks to further dump the cost of their omission onto customers.
When the person responsible for securing a system is not liable for breaches, the system is likely to fail. This situation of misaligned incentives is common, and here we see a further example. There might be a short-term benefit to banks of shifting liability, as they can resist introducing further security mechanisms for a while. However, in the longer term, it could be that moves like this will degrade trust in the banking system, causing everyone to suffer.
The House of Lords Science and Technology Committee recognized this problem with the banking industry and recommended a statutory change (recommendation 8.17) whereby banks would be held liable for electronic fraud. The new Banking Code, by allowing banks to dump yet more costs on the customers, is a step in the wrong direction.
32 thoughts on “New Banking Code shifts more liability to customers”
“it is unrealistic to suggest that home users’ computers can be adequately secured to defeat attacks”
… and proving that their computer was not adequately secured would be more difficult than pinpointing the moment of breach on the computer. All of this will make a case based on this new Banking Code very difficult.
A company called Mobileye makes a product for cars that watches the road and tells you of things in your blind spot, adjusts your cruise control if a car gets too close, and so on. A side effect has been that some insurance companies will reduce what they charge the client if they have such a device in their car, because it serves as a black box in the event of an accident, giving the company video footage and a history of car stats (speed, etc.). My point is: does anything like this exist for home computers and internet fraud?
“Existing phishing schemes would be defeated by two-factor authentication,”
We’ve seen phishing schemes *in the wild* capable of defeating two-factor authentication by running as a MITM in realtime.
see e.g. https://financialcryptography.com/mt/archives/000577.html
While the code indeed says that the bank is responsible for showing that the customer is negligent, the code does not specify what level of proof is adequate.
A long-standing problem of the Banking Code, which the 2008 edition has not resolved, is that the banks consider the mere fact that a PIN was used to be proof that a customer was negligent. In a previous post I mentioned a case where the format of a log file was treated as sufficient evidence of customer wrongdoing.
Given cases like this in the past, I see no reason why the level of rigor in applying the new rules will be any better.
Indeed these are possible, but the vast majority of phishing against UK banks is of the simple credential-stealing variant. This should not be surprising, since this is enough for the vast majority of accounts.
Other countries, like Germany, where two-factor is widespread, see far more MitM attacks. It could be that rolling out two-factor in the UK would be unwise, since fraudsters already have shown the ability to bypass this measure.
My point is that the UK banks have little incentive to make this kind of assessment. Once liability has been properly assigned to the banks they will.
Depending on the bank’s own risk assessment they may decide to jump straight to transaction authorization/two-channel. They may also just fall back on fraud detection and recovery.
I’m not in favour of mandating any particular technology, but think that once incentives are properly aligned, the problems will sort themselves out.
As a layperson, I find this extremely depressing. I mean, I take reasonable precautions to protect my card, my PIN and any online transactions I may make (admittedly, stopping online transactions might reduce my risk, but the convenience factor is still too high for me to do that). After the article on here a few weeks ago, I look at some card terminals to see if there are suspicious small wires coming out. And yet my bank could still leave me liable for fraud, even though there’s not a lot more I could do to make my transactions safe. I’d change who I bank with, but no bank or building society to my knowledge offers me better protection than that built into the Banking Code.
You say: “While malware and phishing emails are a serious fraud threat, …”.
Do you have categorical references to real incidents of online banking fraud where individuals lost money? I read somewhere that online banking crime losses totalled £22.6m in 2007. If only a few people lost large amounts, then it's not such a cause for worry; the current approach banks are taking, educating people to keep their computers up to date, may be sufficient. But if the number of people who lost money is large, I wonder why so many individuals have not come out and made noise in the press about their lost money. Do you have anything to say about this? Thanks.
Older two-factor login authentication schemes (such as the Coutts use of SecurID) can be trivially defeated by real-time MITM and trojan attacks. The lack of such attacks against UK banks is due to the lack of use of these schemes.
The newer (Barclays and RBSG) schemes using CAP on Chip’n’Pin work on transaction authentication, attempting (albeit with low-bandwidth feedback) to confirm that the transaction received by the bank is the same as that authorised by the customer. An attacker may, however, still gain enough data from a MITM or trojan attack to allow fraud by other means.
In the vast majority of cases, the banks have borne the losses. Compare the relative quiet from affected customers (who certainly have been non-trivially inconvenienced) with the fuss from Ireland when Bank of Ireland decided (initially) not to refund.
It remains to be seen whether these changes will stand judicial scrutiny if the banks decide to act tough and someone takes them to court. A good test case would be very beneficial. Bring it on.
My take on this is to use an analogy:
The UK authorities arrest, charge and prosecute drivers for speeding, dangerous driving, using a vehicle on the road without a valid MOT (when required), and so on (it’s a huge long list). However, in order to drive a car at all, a UK driving test must generally be passed and a full UK driving licence obtained. By this means, along with compulsory car insurance at a minimum of third-party level, UK PLC tries to ensure a certain level of competence in car drivers; those that subsequently break the law can be dealt with, and insurance companies sometimes (!) pay up for losses.
This is not true of computers. Anyone can and does use computers with no knowledge, training or skills, and sometimes with no thought put into the exercise at all. Thus the banks need to improve their security systems for online transactions, since the end users (i.e. bank customers) must be assumed to have no computer skills or knowledge, let alone the skills to beat online fraud, key-logging, remote viewing and the like.
If the banks are trying to put the liability for losses onto their customers, then surely they should be trying to chase the operating-system, browser, ActiveX or whatever manufacturers for losses! After all, such things are not fit for purpose if they allow themselves or other related aspects to be compromised and made vulnerable.
I have the very uneasy feeling that this change in the code is a precursor to a change in customer accounts.
If you look at “Internet Banking” as being a “value added” “non essential” service then some people might believe that the changes to the code are justified to protect the banks from their customers risky behaviour.
When this view becomes the “accepted custom and practice” it will be too late to change it.
At this point the banks will no doubt make “Internet Banking” an option that “cannot be refused”, by either direct or indirect means. At that point everybody is very much stuck with an unacceptable risk.
No doubt the Banks will offer a “premium secure” service for an extra 10GBP / Month that does involve some token based two factor system.
If you think that this is an unlikely scenario, have a look back over chip-and-spin, and the accounts that currently have a monthly fee in return for, amongst other things, card insurance…
The modus operandi of the banks currently appears to be: offer an optional service, set the rules in a way that externalizes all the risk, then force it onto those customers who would rather not have the liability. Then hound any current customers who refuse to play so that they go away, and if that fails, find some other way to get rid of them.
I think the days not just of “free banking” but of “sensible banking” are now over in the UK. Where previous governments used to exercise some control over the banks, the current lot appear hell-bent on making the banks’ profits bigger, and if the banks whinge, they bend over and make somebody else pick up the tab, no matter how big (national ID cards, etc.).
Nearly a year ago I noted in a post that unless banks adopted proper two-way authentication as well as transaction authentication, we would be talking about phishing and man-in-the-middle attacks against banks “next year and the year after”…
You effectively ask if there are any “black-box recorders” for your home computer.
Well, the answer is both yes and no; it depends on your terms of reference and proportionality.
First, your analogy also has a major hole in it, which is the event time frame. In the case of a car involved in an accident due to speeding, the insurance company only needs the details for the minute or so preceding the accident, so data storage is not really an issue. If, however, you wanted to protect against sabotage of the traction-control software, then the system you describe would in all probability be useless.
Also, do you really want to make an imperfect recording that could potentially be used against you? A number of bus companies in America did fit video recorders; however, they quickly discovered that in some cases the recordings were of a lot more use to the other party’s legal team than their own, and were therefore a liability.
Getting back to your idea of a “PC black box”, you first have to accept two important points. The first is that you can only record what is under your direct control. The second is that you cannot realistically record everything accurately.
That is, if the attack is based outside of your network, you can only record your side of the events, not what the bank “claims” it has seen. So man-in-the-middle attacks would not really be protected against, nor would a number of other attacks, including simple snooping on unencrypted data or spoofing of network packet data.
Further there is currently no 100% reliable piece of software that will run on your PC or any other computer with bi-directional connectivity to the Internet.
The reason for this (over and above bugs) is that the bi-directionality allows the possibility of an attacker detecting it and thereby circumventing it. Also if the software is standard the attacker could just assume it is there and develop a work around for it.
There are, however, “off the shelf” network traffic recorders available as items of hardware you can buy. As they passively record traffic, they have no need to transmit to the Internet, so they are (when properly used) effectively invisible to an attacker (and hopefully do not contain exploitable bugs either).
These boxes, however, have a number of disadvantages, the obvious ones being: firstly, they are high-end test equipment with an appropriate cost; secondly, they only record what is on “your wire”, so traffic that is encoded, encrypted or carries false addresses may be of little use to you; thirdly, they tend to have limited storage capacity, and as you would need to log everything, this storage is going to be exhausted fairly quickly.
Then there are the not-so-obvious problems. Usually it is difficult for even a professional to interpret the information unless it’s an attack with a known signature. And the information is usually at best partial, and therefore needs to be cross-referenced against other reliable data recordings that might not even exist (i.e. keyboard, screen display, etc.).
You could also do what the people building “honeypot” networks do, which is roll your own using another PC. The information on how to do it is on the Internet; however, this does not help you unless you have the resources and ability to find the required data and put it into a form a judge and jury would accept, should it ever come to that.
Which brings you around to “proportionality”, or risk versus return on investment. If you only stand a very small chance of losing a small amount of money, then is the cost in resources justified?
You might be better off changing the way you do things on a day-to-day basis than going to the expense of setting up a reliable monitoring system with the required offline storage and backup procedures.
Ask yourself: is the benefit I gain from online banking/shopping worth the risk I face from doing it? (Assuming you have the choice.)
Then, if yes, ask yourself if there is a way to limit the risk, say by using multiple bank accounts which only ever have very limited sums in them (this is like only having ten pounds in your wallet, as opposed to your whole life’s savings, when you get mugged).
Then, if you still need to carry significant risk, find out if you can effectively offset it with insurance.
Only if you cannot do any of these do you need to consider how you might set up your “PC black box”.
At the end of the day, it is your choice of lifestyle against the real risk/liability it presents that you need to mitigate. And your response should be proportionate.
In essence, it is like asking “do I need to lose weight?” If the answer is yes, you ask the next obvious question: “diet or exercise?” If you decide that you want to continue munching the tasty food, then your lifestyle choice would be exercise. Your next lifestyle choice would then be: do I just walk and climb the stairs, or work out? If it’s working out, you then look at jogging/cycling/swimming or using gym equipment. If your lifestyle choice is gym equipment, you would ask if you should buy membership or equipment.
It is unlikely that you would think “I need to lose a few pounds, I’ll buy the gym down the road” unless you had no other choice.
You mention proper two-way authentication as well as transaction authentication. Are there any example deployments of such a thing that UK banks can learn from? Did it really help those banks? There must be a reason why UK banks are not considering advanced security along the lines you mention. Is it because they are worried about the online banking user base going down due to the additional work users have to put up with? Or is there some other reason?
US too is far behind in terms of banking security, but at least they do not put the liability of fraud on the users and absorb the losses themselves instead.
I think what the UK banks are doing is unfair. If the losses are low they should absorb them. If they are high, they should at least give the worried users a choice of higher security.
I am unaware of proper authentication systems used by banks that authenticate the entities and the transaction.
It is not a simple problem as there are a number of weaknesses that need to be addressed in a cost effective manner.
The first problem is that the communications channel, from the bank’s back-end system right down to the user’s eyes and fingers, could be compromised by an attacker.
The only real solution to this problem is to use a “side channel” to do the authentication; it can take many forms. Back in 2000 I looked at using mobile phones and SMS, but there are two problems with this:
1) SMS is an untimely, unreliable system.
2) Phone software could be tampered with.
There are however a number of systems out there that aim to use the mobile phone to provide the side channel.
On the assumption that the phone/SMS cannot be made secure, I looked at using a separate token. The reason for making it separate is twofold:
1) You may wish to use it at a terminal that does not allow you to connect an external device (quite likely if you think of ATMs).
2) To be secure, the device must be immutable.
This second (essential) requirement is, however, impractical in production, so the next best thing is that it should not be possible to field-upgrade the token’s software without a physical precaution being overridden (i.e. no external connector or writable RF port, etc.).
The banks and other organisations have for some time had tokens that work as one-time password generators. That is, the user downloads the login form from the bank, types in their user name, and presses a button on the token to get a multi-digit number that they type in as the password before submitting the login form back to the bank. This authenticates the user at the time of submitting the form.
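A button-press token of this kind can be sketched with the standard event-based one-time-password construction (HOTP, RFC 4226). This is a generic illustration, not the algorithm of any specific bank's token:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """One-time password as generated by an event-based login token
    (RFC 4226). The token and the bank share `secret`; pressing the
    button advances `counter` and displays the resulting code."""
    msg = struct.pack(">Q", counter)                    # 8-byte counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Bank and token stay in sync by tracking the same counter value:
secret = b"12345678901234567890"        # RFC 4226 test secret
print(hotp(secret, 0))                  # RFC 4226 test vector: "755224"
```

Note this authenticates only the login event, which is exactly the limitation discussed next: nothing binds the code to any particular transaction.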
A simple modification to the token (i.e. adding a keypad), with the bank sending a random challenge (nonce) to the user in the login screen, which the user types into the token to get a response, allows for two-way authentication.
However, there is a big problem with this: the authentication only happens once, and due to the nature of the possible attacks it is only valid at the moment of authentication and at no other time from then on.
This means that all transactions are effectively unauthenticated, and therefore need to be put through the token in both directions before being accepted as valid.
This is where the big problem arises, due to issues relating to humans and their ability to type accurately, and to have the patience to do it repeatedly on six-or-more-digit numbers.
If you follow the link I gave above, you will find a series of posts by various people who were “feeling” their way through the pain of the problem with various degrees of understanding.
I realised that it was not particularly obvious to all what the issues were, so I posted back to the blog,
which I have copied the essential bits from below,
Initially, both parties have to establish that they have some form of (untrusted) communications link; otherwise the parties should cease trying to communicate.
When they have established that they at least have some form of “untrusted” communications path, they then need to transfer data to one another in a way that both parties can be assured is valid. This is the authentication of the “required parts of the transaction”.
Now there are very many ways to do this, but let’s assume a very simple system to convey the idea (then shoot holes in it as you wish 😉
1. The customer selects account-to-account transfer.
2. The bank sends a form with an “only human readable” number which is unique to the user’s dongle and to the time (all validation codes sent by the bank will be human-readable only).
3. The user types this number into the dongle, which shows an appropriate go/no-go indication. The number could also be used to set the dongle into the account-to-account transfer mode.
4. The user types the “to” account number into the dongle, which displays an encrypted code that the user then types into the appropriate space on the form.
5. The same is done with the amount and any other field that requires verification.
6. The user then submits the form back to the bank.
7. The bank sends back a verification code that the user types into the dongle, which indicates whether the code is correct for the data entered, and then gives the user a final authorisation code for the whole transaction.
8. The user types this in and sends it off to the bank, which sends a final closing code that the user types in to check that the transaction has been authenticated.
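The per-field codes in steps 4 and 5 could be derived roughly as follows. This is my own illustrative construction using an HMAC, not anything specified in the post: both sides derive each code from a shared per-dongle secret, the bank's session nonce, and the field value, so a man-in-the-middle who alters the destination account or amount cannot produce a matching code.

```python
import hashlib
import hmac

def field_code(secret: bytes, nonce: str, field: str, digits: int = 6) -> str:
    """Short numeric code the dongle would display for one transaction
    field. Both dongle and bank can compute it; an attacker without
    the secret cannot forge a code for a tampered field."""
    mac = hmac.new(secret, f"{nonce}|{field}".encode(), hashlib.sha256)
    return str(int(mac.hexdigest(), 16) % 10 ** digits).zfill(digits)

secret = b"per-dongle-secret"   # hypothetical key shared at issue time
nonce = "831905"                # the bank's challenge from step 2

# Codes the customer would copy from the dongle into the form:
acct_code = field_code(secret, nonce, "acct:12345678")
amt_code = field_code(secret, nonce, "amt:250.00")

# The bank recomputes each code; a tampered field fails verification:
assert acct_code == field_code(secret, nonce, "acct:12345678")
assert amt_code != field_code(secret, nonce, "amt:9250.00")
```

Truncating to six digits trades security for typeability, which is exactly the human-factors tension the discussion above identifies.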
If the codes are unique to the dongle being used and to the time, then the attacker has a bit of a problem, in that they do not see the transaction details in plain text, nor can they predict what the authentication codes are going to be, either from the user or from the bank.
I am also assuming that the codes will contain a degree of error protection within them, to prevent typos confusing things.
Oh, and the “only human readable” stuff from the bank reduces to near zero the ability of software to read and understand the data; therefore the attacker also has to be human, and present for the transaction.
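The error protection mentioned above could be as simple as a check digit appended to each code, so the dongle or form can reject a mistyped code locally. A minimal sketch using the well-known Luhn algorithm (illustrative; the post does not specify a scheme):

```python
def luhn_check_digit(digits: str) -> str:
    """Compute a Luhn check digit to append to a numeric code.
    Any single mistyped digit in the result is then detectable
    before the code is ever sent anywhere."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 0:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return str((10 - total % 10) % 10)

code = "755224"
protected = code + luhn_check_digit(code)
# The dongle can now reject any code whose check digit fails,
# instead of making the user wait for the bank to complain.
```

A single check digit catches all single-digit typos and most adjacent transpositions, which is a reasonable fit for codes typed by hand.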
The use of these two technologies hopefully reduces the chance of a successful phish down to the security of the dongle, or better.
I appreciate that the system is overly complex, but can it be made simpler and still protect the transaction properly?
As I said, it’s a simple idea to show that it is possible for the transaction to happen even over an untrusted channel, with not only both sides proving who they are, but also the required details of the transaction itself.
Please feel free not only to shoot holes in it but also to come up with other ideas. Hopefully it might start an “open” approach to making all financial transactions more secure, which would benefit not only us, the customers, but the banks as well…
From this you can see that there is the issue of complexity for the user of the token, which would probably tax even the most patient of users after repeated use.
Further posts to Bruce’s blog proposed various other things, so give it a read and make your own mind up on what is required.
Oh, and yes, you can use my ideas if you wish; just give me a mention if you write them up, and let me know of any changes or improvements. Feedback is the way we arrive at working solutions to problems 8)
You say “Other countries, like Germany, where two-factor is widespread, see far more MitM attacks.”
Do you have any evidence of this, i.e. statistics specifically about MitM-attacks?
@Steve & jahrio,
Further to jahrio’s request: are there concrete examples of MITM attacks on two-factor systems?
Were the transactions two-way authenticated, or was it just the two entity IDs?
If the transactions were two-way authenticated as well as the entity IDs, were the MITM attacks successful?
And if so, were the attacks automated?
Finally, if the attacks were automated against two-way authenticated transactions, did the bank use “human only” readable fonts, or did they use standard fonts for the two-way transaction authentication?
As I noted above, you need both two-way authentication of the transaction and fonts that are (supposedly) only readable by humans to be implemented together.
This is to raise the technology bar sufficiently to deter the development of successful attacks.
As I have noted at other times (and places), the banks need to raise the technology bar in a large enough jump that the attackers effectively do not have the resources or incentive to continue.
The current incremental approach to security just makes the attackers stronger and more determined. Think about the attacks carried out on the German Enigma cipher machine for a concrete example of this.
Unfortunately, if the attackers have developed to the point where they can defeat cost-effective and usable two-way authentication of transactions, they may now have passed the point at which they can be deterred in any sensible fashion.
An analogy would be the difference between a vertical cliff and a large steep hill.
To the average person, walking up a large steep hill is ultimately doable; they just have to put in sufficient effort.
However, a vertical cliff face is something the average person would not even bother trying to climb, as their perception would be that the risks and the effort are so large that failure is almost guaranteed, so they would not bother to try.
Yet the height of hill the average person could climb could easily be many times (100 or more) the height of a cliff that would deter them entirely.
A high enough hill might also deter them. However, a series of hills that increase in height only moderately from hill to hill is only going to make them stronger and more confident in their abilities.
So, having worked their way up a succession of increasingly high foothills, they would then be fitter and more confident in their abilities, so a small cliff is now doable.
At each success, not only do they become fitter and more confident, they actually become acclimatised to the rarefied atmosphere they work in. They also find that those at the heights they have achieved are so like-minded that they team up. And from that point they could easily go on to scale the largest of mountains.
If you think the above is a bit of a gloomy perspective, just remember that the human race evolved and developed to its current state by overcoming a series of small challenges and rising to meet each one in turn. So the behaviour is quite natural to us.
I came across a company called Cronto today that does something similar to what you are looking for. They use a mobile application. Leaving aside the usability aspects of requiring a phone with a camera, what do you think of using a mobile app as a solution to this problem? You mentioned above that phone apps can be tampered with. Can you explain what you mean? Thanks.
@ Newbie Researcher , you asked,
“You mentioned above that phone apps can be tampered with. Can you explain what you mean? Thanks.”
In essence it’s fairly simple to understand why, but not so easy to see how it can be prevented, which (as you noted) is why I do not like mobile phones for use as a security token.
If you as the user, or a third party, can change the functionality of your side-channel “token”, then so can an attacker.
So if you can load a third-party app onto your token (mobile phone), what is to say that:
a, It’s the real app
b, It’s not been tampered with
c, It cannot be changed later.
If any of the three is possible, then the solution is dead in the water from the security point of view, as:
the attacker could set up a bogus web site for you to download from; or
the attacker could trojan the app (this has happened with some open-source downloads); or
the attacker could modify the app subsequent to you loading it (via a virus or other attack).
Essentially, you need the device to have non-writable (immutable) memory, so that the app cannot be changed and, importantly, can be verified (bit for bit) at any time.
This is obviously not practical for a mass consumer product with a very low production cost. So, from the low-cost point of view, the next best thing is some kind of physical barrier (like the token’s case) that needs to be opened to gain access to make changes, preferably in a tamper-evident way.
Using a method that does not have a physical barrier is not going to work (look for info about RFID Passports being cloned for an example of why this is not good).
There are other examples, such as iPods shipping with a virus built in at a subcontractor, that again reinforce the point.
There are already examples of virus attacks on mobile-phone and RFID technology out in the real world. Likewise, so are “shims” (malware DLLs) that control the very low-level aspects of PCs at the device-driver level, making the info on the screen different from what the app intends to be shown.
So all the bits are out there in more than proof of concept; somebody just has to have sufficient reason to put them all together for a real attack to come into effect.
And, as has been observed in the past, some people’s morals and ethics are negotiable. That is, large bundles of cold hard cash have nearly always been a sufficient incentive for a small percentage of any population to do something they know is wrong (especially if the risk is either low or perceived as being low).
I actually predicted a lot of this back in 2000 on a postgraduate course sponsored by the EU. I got the idea after thinking about how mobile phones could be used as a distributed crypto engine, either by the network operator or another organisation capable of downloading software updates to the phone, which all networks used to do on a regular basis. (The idea of using a mass consumer device as a distributed crypto engine was not mine; I had read a paper which suggested that a country could doctor TVs to do it. I just brought the idea up to date, as smart cards were starting to get crypto hardware put in them.)
What has surprised me is not that it is possible but the length of time it has taken for the attacks to get close to being out there in userland.
With regard the Cronto solution, it is not realy a side channel token, it takes information directly from the untrusted coms channel not through the user which means it is theoreticaly possible that it could be modified without the user knowing. And before you ask no I have not looked at it in the depth required to say yes or no either way, I’m just untrusting of it as it’s not guenuinly a side channel token.
I work for Cronto, so I can give some insight on how it works.
The barcode generated by the bank, which contains the transaction information, is both authenticated and encrypted. Any modification to the data in transit will be detected.
Similarly, the response code, generated by the phone, incorporates the transaction and the customer’s PIN, so the user gets to see exactly what they are authorizing. Again, tampering with this will cause the verification to fail.
All communications are done with the assistance of the user (via the phone camera and computer keyboard). The mobile phone air-interface isn’t actually used.
There are some more details on the Cronto website.
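Steven's description maps onto a standard encrypt-then-MAC pattern. The sketch below is not Cronto's actual protocol; the keys, nonce size, and message layout are all invented for illustration, with a toy hash-based keystream standing in for a real cipher:

```python
import hashlib, hmac, os, struct

ENC_KEY = os.urandom(32)   # illustrative keys, shared between the bank
MAC_KEY = os.urandom(32)   # and the customer's device at enrolment

def keystream(key, nonce, length):
    """Toy CTR-style keystream from hashing key || nonce || counter."""
    out = b""
    block = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + struct.pack(">Q", block)).digest()
        block += 1
    return out[:length]

def make_challenge(transaction: bytes) -> bytes:
    """Bank side: encrypt-then-MAC the transaction details under a
    fresh nonce, so each barcode is unique and bound to its contents."""
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in
               zip(transaction, keystream(ENC_KEY, nonce, len(transaction))))
    tag = hmac.new(MAC_KEY, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def open_challenge(blob: bytes) -> bytes:
    """Device side: verify the MAC *before* decrypting, so anything
    modified in transit is rejected rather than shown to the user."""
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(MAC_KEY, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("challenge has been tampered with")
    return bytes(a ^ b for a, b in zip(ct, keystream(ENC_KEY, nonce, len(ct))))

txn = b"PAY 250.00 GBP to 12-34-56 87654321"
assert open_challenge(make_challenge(txn)) == txn
```

The response code can then be built the same way in the other direction, over the decrypted transaction plus the PIN, which is what lets the bank check that the user saw and approved exactly the transaction it sent.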
Maybe you can shed some light on the use of phone apps as security devices too, to address Clive's concern. To me, using mobiles seems a promising alternative to custom devices, but I do not know much about the security aspects of mobile apps. Thanks.
First off, happy St George's Day, and keep slaying the dragons 8)
Secondly, sorry for the double post. Was there a problem with the server yesterday? It did not show the first post when I posted, and posting a second time a little while later returned a “duplicate post” message. Likewise, later in the day the second message did not post, and I gave up.
As for the Cronto website: after a brief look it's a little low on technical content, which gives rise to quite a few initial questions due to the lack of information.
At first read it appears to be broadly the same in capabilities as the system I suggested (two-way authentication and transaction authentication). Also, Igor's name is familiar to me, but I can't remember why.
At first sight the two main differences are: the 2D barcode instead of an “only human readable font” number string, and the use of a camera rather than a keypad, which reduces error-prone typing into the token.
The 2D barcode appears to have something like a 4^100, or approximately 10^60, data space (if my brain is working this morning 😉
And the type-back data-space size suggests 62^6 (both images show 3 chars, a space, then 3 chars, with the chars being upper/lower-case alpha or numerals).
There is no information on the website to differentiate the information and checksum sizes in either the 2D or type-back data spaces, or to indicate whether there is a time-sensitive aspect to prevent replay attacks etc.
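For what it's worth, the back-of-envelope figures above are easy to check. The parameters here are my reading of the screenshots (100 barcode cells with 4 states each; 6 type-back characters from a 62-symbol alphabet), not published Cronto numbers:

```python
import math

# Assumed parameters, read off the product screenshots, not from any spec.
barcode_space = 4 ** 100       # 100 cells, 4 states per cell
typeback_space = 62 ** 6       # 6 chars, each upper/lower alpha or numeral

print(math.log10(barcode_space))   # about 60.2, i.e. roughly 10^60
print(typeback_space)              # 56800235584, roughly 5.7 * 10^10
```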
The aim of the system appears to be to extend the untrusted communication path into a trusted one. This serves the same function as the system I proposed, but with one difference.
I had decoupled both the inbound and outbound data flows through the human, thus limiting any attacker's potential bandwidth to the authentication token. The use of a camera potentially opens up the inbound data-path bandwidth quite considerably, without it being necessarily obvious to the user. If you assume the camera can see 2^8 levels of each of R, G and B, then potentially you could get four or more high-end intensities in each colour (0b1111xx11, with xx being the hidden data bits), potentially giving a 16^100 data space. So that security aspect of the system is currently open, as no information is available.
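The low-order-bit channel being worried about here is easy to demonstrate in miniature. This toy sketch hides two bits per 8-bit colour sample in the 0b1111xx11 pattern mentioned above; real barcode cells, camera noise and error correction would complicate matters, so treat it purely as an illustration of the attacker's available bandwidth:

```python
def embed(hidden_bits):
    """Render each 2-bit value as a high-intensity sample of the form
    0b1111xx11; to the eye every sample just looks 'bright'."""
    return [0b11110011 | (bits << 2) for bits in hidden_bits]

def extract(samples):
    """Attacker-side decoder: recover the xx bits from each sample."""
    return [(s >> 2) & 0b11 for s in samples]

payload = [0b00, 0b01, 0b10, 0b11]
samples = embed(payload)
assert all(s >= 0b11110011 for s in samples)   # all visually near-white
assert extract(samples) == payload
```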
A closer look at the various pictures indicates that the three client-end products are rated as, potentially: not secure (USB), secure (phone client) and highly secure (token).
I'm very uncertain what the USB key gains you security-wise, other than a fingerprint reader. A low-level I/O DLL attack could easily bypass the mechanism described.
The mobile phone, as I have indicated, is potentially open to attack. A conceptually simple one would be some sort of malware (I'll leave the vector open) that detects the software client on the phone, steals the secrets, and SMSes them back to the attacker.
As for the token, the questions arise of how it is programmed, how the secrets are added, and just how mutable the resulting system is. The use of the camera suggests there are an awful lot of resources in there, in terms of the camera itself and the supporting CPU, RAM and ROM, which probably means a big battery as well. So the token is going to be quite big, with quite a bit of spare RAM and CPU power (potentially open to malware, if it can be got in). It is also probably expensive to manufacture compared with, say, a single-chip CPU with a built-in LCD driver and keyboard scanner.
This would suggest the token is not going to be fielded as a mass consumer item for everyday bank customers. If it is used for more secure applications, then EmSec considerations would also need to be addressed, which is definitely not on the website.
All that said it is actually quite an interesting idea and will be of considerable interest to many.
I used to design the beasts several years ago. The security of a mobile phone is virtually nil.
The network operator can usually send software updates to the phone, and as you will appreciate if you have ever downloaded a ringtone, it is very, very easy for a third party to put data into its memory.
Further, you may or may not remember that malware was found to be hidden in a picture that a user viewed with a web browser. Well, it is quite likely that the browsers and other image-related software on a mobile phone are equally vulnerable (no, I have no data on this; it's just an assumption based on it being quite probable).
As for the crypto used on mobile phones etc., the GSM spec was a bit laughable; have a look at the GSM section in,
And don't be surprised if you make strange noises while you read it 8)
Also, at Cryptome, have a look at the TEMPEST section as well.
Thanks Steven. That’s interesting.
Steven has not replied yet; the info was from me.
However, I would encourage him to reply, to give you and others a different perspective on mobile phones. After all, my opinion comes from my perspective of having had to build and programme the beasts, and then get them through the compliance process.
Supposedly there is a new set of standards for mobile phones due at some point in the not-too-distant future. Partly this is because they are long overdue, but also because the E.U., ETSI, CENELEC etc. have finally woken up to the concept of “Software Defined Radio”, which makes a lot of their hardware-oriented specifications a bit of a problem. (To see what the EU is up to in this area, the best place to start is the R&TTE Directive pages on http://www.europa.eu, where that directive is coming to the end of its latest review.)
P.S. If you contact Steven, he has my details, so you can get into contact directly if you wish.
Sorry about the server problems. For reasons I don’t fully understand, all comments were being categorized as spam, so I had to manually pull them out.
The important difference between the barcode and the “only human readable font” is that the barcode is encrypted and authenticated. Only the authorized device can decode it. The authenticity checks also prevent replay attacks.
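One common way to get replay resistance of the kind described above (not necessarily the mechanism Cronto uses) is to bind a monotonically increasing counter into the authenticated message and have the server reject any counter value it has already seen. A minimal sketch, with the key and message format invented for illustration:

```python
import hmac, hashlib

KEY = b"shared-device-key"   # illustrative only; per-device in practice

def sign(counter: int, txn: bytes) -> bytes:
    """Device side: MAC over counter || transaction."""
    msg = counter.to_bytes(8, "big") + txn
    return hmac.new(KEY, msg, hashlib.sha256).digest()

class Verifier:
    """Server side: accept each counter value at most once, in order."""
    def __init__(self):
        self.last = -1

    def check(self, counter: int, txn: bytes, tag: bytes) -> bool:
        msg = counter.to_bytes(8, "big") + txn
        ok = hmac.compare_digest(tag, hmac.new(KEY, msg, hashlib.sha256).digest())
        if ok and counter > self.last:
            self.last = counter
            return True
        return False   # bad MAC, or a replayed/stale counter

v = Verifier()
t1 = sign(1, b"pay 10")
assert v.check(1, b"pay 10", t1)       # fresh code: accepted
assert not v.check(1, b"pay 10", t1)   # same code replayed: rejected
```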
There are a range of options on how to use the Cronto system (USB key, mobile phone, and dedicated device) and each represents a particular usability, security, and cost tradeoff. The advantage to banks is that the server software works with them all.
As for mobile phone security, it’s important to remember that the attacker has to compromise both the phone and the PC (or the user’s Internet connection). Home PC security is poor, but mobile phone security is much better.
Modern phones incorporate application signing and mandatory compartmentalization. There will be ways to bypass these protections, but doing so, while simultaneously compromising the user’s PC, is going to be a substantial challenge.
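For readers unfamiliar with application signing, the handset-side check amounts to verifying an installable package against a key the platform already trusts before it will run. A stripped-down sketch of the idea, with an HMAC standing in for the real public-key signature and all names invented:

```python
import hmac, hashlib

PUBLISHER_KEY = b"publisher-signing-key"   # on real handsets this is a
                                           # public key baked into the platform

def sign_package(package: bytes) -> bytes:
    """Publisher side: sign a digest of the package contents."""
    digest = hashlib.sha256(package).digest()
    return hmac.new(PUBLISHER_KEY, digest, hashlib.sha256).digest()

def install(package: bytes, signature: bytes) -> bool:
    """Handset side: refuse anything whose signature fails. Note this
    proves *who* signed the package, not that the code is safe."""
    digest = hashlib.sha256(package).digest()
    expected = hmac.new(PUBLISHER_KEY, digest, hashlib.sha256).digest()
    return hmac.compare_digest(signature, expected)

app = b"example-banking-client-bytes"
sig = sign_package(app)
assert install(app, sig)                   # untampered package installs
assert not install(app + b"trojan", sig)   # any modification is caught
```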
Modern phones do have some potential security-feature improvements over when I was cutting assembler and C code for them. But the QA and code audits are still nowhere near good enough to say whether they are even close to being secure in reality.
Then, over and above the low-level embedded phone OS code, there is often Microsoft-originated code to provide the user interface.
Mobile phones are still basically resource-limited (CPU, RAM, ROM etc.) compared with desktops of a few years ago. So to get the feature richness there has to be a compromise in some other area; where depends on the manufacturer…
But even with mandatory compartmentalisation, you have to ask what is being protected from what (i.e. phone from app, or app from app).
As for the Cronto application, it does not use the phone network, so I suspect there is little or no security between it and the camera, the display, or other apps. In that respect it is just like any out-of-the-box MS desktop without security enabled.
The next three issues are how the app gets onto the phone, from where, and by what route.
To be cost-effective for most bank customers, it's going to be downloaded off the web.
As noted above, this delivery method has been compromised in the past, even when reasonable (at the time) precautions had been put in place.
As has been seen with the likes of games consoles, even with supposedly mandatory “code signing” to prevent third-party apps from being loaded, it usually takes little or no time for the system to be circumvented.
I could go on at length about the hows and ways of attacking the system, but that would be unfair to Cronto, as all “security token” apps on mutable mass consumer devices are going to be vulnerable in one way or another.
As for the difficulty of tying a phone to a PC and compromising both: you are thinking about it the wrong way.
Think of mass compromise without a direct “intended purpose”, which has already happened (zombie nets etc.). That is, somebody gets as many PCs under their control as they can, then rents/auctions them off to another person.
Then you have to ask when it will happen (if it has not already happened) on phones.
Then, when there is sufficient financial imperative, I can assure you that tying the PC and phone together will happen (think marketing etc.).
And at some point after that, any “security token” app with enough market exposure will be targeted.
As a guess, based on the length of time between predicting “shim DLLs” and seeing them for real (8 years), I would say we are about 4 years off this happening.
So, having thought about it at some length back in 2000–2: no, I personally do not think phones are a good place for putting “non-phone-related security apps”. Nor, for that matter, is any other mass consumer item with mutable memory.
When, and only when, the designers of phones etc. increase the scope of the security on their products to the required level can these be viewed as potentially suitable for high-security apps.
The biggest problem with mobile phone authentication is that, despite them now being widespread, they are very far from being universal. I only have a very antique model which is rarely switched on, as I find a mobile is not necessary for my lifestyle, and I have no intention of getting an up-to-date model. I know many others who don't use them. Any system using mobile authentication must, by definition, have a back-up for those who do not have access to this technology.
“they are very far from being universal”
The reason for this is that, apart from the base-level functionality (GSM specs etc.), they are not in any way standardised (think of the number of different versions of code that have to be written for some phones' application-level OSes).
It is also a significant problem that the few “security”-related specifications for phones are all to do with segregating the underlying phone, as a “Personal Mobile Resource”, from anything else that might run on it as an application.
Above the phone level there is little or no security that is meaningful (think Win98 security); nor, as they tend to be more than somewhat resource-limited, is there likely to be for some time (the more efficient you try to make a platform, the less secure it is).
Code signing of applications is actually not security at all, as it does not find or remove bugs or other weaknesses (deliberate or otherwise). For instance, if a company does not properly audit the code I write for it (which requires a significant input of resources it doesn't have), then it makes little or no sense to sign the code, other than to assign legal liability (which most licence agreements remove).
Also, if “bad code” is introduced into the supply chain before it gets to the end user, then code signing is of no use whatsoever (think of Apple and the PC virus that was put on its iPods by somebody at a subcontractor, for example).
The general trend for malicious code these days is rootkits, not “bragware”: it is designed to remain as hidden as possible whilst allowing its writer hidden control to upload other code etc. as and when required (think botnets etc.).
Due to the “first to market” principle, code auditing is almost never allowed to get in the way of releasing the latest feature-rich application, and at best it happens only at the software-house end of the supply chain, not the consumer end.
As we are all too painfully aware, feature-rich code is a veritable haven for software anomalies. Few if any of these can be “certified” as not being a security risk, even within our current level of knowledge (which, at the rate it ages, suggests it is pitifully small).
Such apparently simple security methods as “sand pits” tend to be insecure in one way or another, and it is almost impossible to stop “secret information” leaking out of even certified systems, due to the discovery of new side channels etc.
So no, I have little or no reason to suppose that mobile phones can be made secure at the application level, either against direct malicious code or against more subtle leakage of secret information…
And more importantly, if their use as anything other than phones ever becomes significant, I fully expect them to receive the attention of the malware writers who are currently producing rootkits for PC operating systems.
Have a look at Matt Blaze's very recent blog entry on the subject of hardware-related computer security issues,
@ Steven, Newbie Researcher,
Another little problem for you to consider regarding phone security etc.
The IMEI (the phone's ID / serial number) is available to applications that run on the phone, such as a web browser, and in some cases (due to Vodafone) the browser sends it off in the user-agent string. See,
Now, with most modern phones having browsers on them, it is actually quite likely that the IMEI will become known to any “bot net”-style malware on the phone that the user might have picked up whilst browsing.
So an individual phone and all its applications can become known to the malware, which can ship this information off to the malware's controller.
After that, the rest is a matter of what the malware writer has enabled in the malware…
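To illustrate how little work this scraping is: IMEIs are 15 digits with a Luhn check digit, so once one appears in a cleartext header, malware can pick it out reliably. The user-agent format below is invented, and the IMEI is the well-known documentation example rather than a real handset:

```python
import re

def luhn_ok(number: str) -> bool:
    """IMEIs carry a Luhn check digit, so a real IMEI can be told
    apart from most other 15-digit runs."""
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# Hypothetical user-agent string; real operator formats vary.
ua = "SomePhone/1.0 (IMEI 490154203237518) Build/123456789012345"

imeis = [c for c in re.findall(r"\d{15}", ua) if luhn_ok(c)]
assert imeis == ["490154203237518"]   # the build number fails the check
```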
Danny Bradbury has written an article on this topic for the Guardian, “Banks slip through virus loophole.”
Mobiles bring a whole new platform for security researchers; imagine having nmap on your phone. How sweet is that 😉