I’m at the fourteenth workshop on the economics of information security at TU Delft. I’ll be liveblogging the sessions in followups to this post.
The opening keynote talk was given by Marietje Schaake, a member of the European Parliament, kicking off a session on “Democracy in a digital world”. Europe’s democratic institutions face a range of threats, from Greece and Hungary internally to Russia and the Middle East externally. There is a major pushback against the universality of human rights from countries that are growing in population and power, while our domestic populations don’t care and governments seem unable to engage with global issues such as Internet governance. There are also privatisation pressures, with ISPs pushed to act as censors, starting with child porn and moving on to alleged copyright infringement. Hugo de Groot studied the law of the sea and proposed that states should not privatise the open waters in their own interests but leave them open to all; it’s taken 400 years for this to become enshrined in the law of the sea. Marietje is working on issues from responsible disclosure to export controls; many sensible measures are rejected by the US government as “digital protectionism”. She suggests three basic steps forward: first, updating existing laws to deal with the digital world; second, new global norms that apply equally to private actors and governments and facilitate multi-stakeholder approaches (e.g. that it’s not acceptable to use intelligence resources for political or commercial advantage; that laws should be public; that there should be no mandatory backdoors; and that there should be effective remedies); third, that infrastructure should not be attacked, because of the possibly severe collateral damage.
There followed a panel discussion. I started off by remarking that while social media make it easier to organise and thus empower single-interest pressure groups, making government’s routine tasks slightly harder, the big issue is globalisation. Over the next century, governments will be expected to provide ever more, and ever more complex, public goods. I’ve had papers at WEIS on fighting cybercrime (2008), resilience of the internet (2011), and network effects in surveillance (last year). The key thing is network effects, which governments mostly don’t understand; left coast people have a quite different view of the relationship between economics and power from right coast people. We have to get policymakers to understand network externalities better.
Bruce Schneier said that most of the properties we care about, such as security and safety, are emergent properties that we can’t predict well when building systems; the winners tend to be the people who notice and capitalise on them. Network effects are one example; a lot of powerless people suddenly got power because they were quicker. This is frequently a battle between the quick and the strong; but the strong can often harness changes once they get moving. To promote innovation, we have to stop the old power imposing its values on the new space before we understand it.
Ashkan Soltani, speaking for himself rather than the FTC, argued that all actors should have a fair understanding of the competition, and that’s hard in emerging tech markets; and as for enforcement, lots of policymakers don’t even have email. Here is the critical role for academics and NGOs. However it is perfectly right that policy is slow; it should be based on principles and performance, not knee-jerks. But how do you get to the right principles for a digital ecosystem?
Allan Friedman followed up, again speaking for himself rather than the US Department of Commerce. He tries to safeguard the open multi-stakeholder Internet as it transitions to international governance. Consultation is a poor tool, as those who speak the loudest get the most input. In practice it’s about creating consensus in small groups of people who can work together. How can we get civil society engaged in more parts of the ecosystem? This is not free, either for civil society or industry.
In discussion, social media can help people organise in Europe but make them terribly vulnerable in Syria. To what extent should tech vary by context, and be subject to ethical discussion at the R&D phase? But where will be policy discussion take place? Politicians often don’t know the difference between a server and a waiter. Entrepreneurs try stuff, get lucky, and maximise profits. The competition law kicks in. Telecomms regulation got going in the Netherlands when an incumbent boasted about how they were blocking their competitors’ voice over IP. US firms’ discrimination in favour of US customers and against EU ones has also been an issue. There are nonetheless many good effects of networks and globalisation; making free email, messaging and social networks available worldwide, along with mobile comms and mobile banking, have been transformative for billions, and many of the bad effects in developed countries have to do with consumer protection and can be tackled by existing governments. We can’t necessarily look to the UN for help; the relevant Internet governance votes split between democratic governments and (the majority of) undemocratic ones, and the multi-stakeholder model is basically our way of getting round the totalitarians. Also, the Internet has brought about the largest generation gap since rock ‘n’ roll. What do governments do that works? Bruce likes the EU data protection regime and the right to be forgotten; I like the US security breach disclosure laws.
Stefan Laube started the first refereed-paper session with The Economics of Mandatory Security Breach Reporting to Authorities. He is concerned at the EU’s proposed Network and Information Security Directive; he surveys breach-reporting laws and proposes a principal-agent model to understand the incentives facing interdependent firms. He models detection, reporting and audit costs, and asks under what circumstances a policy of mandatory reporting to the authorities minimises overall social costs. Problems include how the regulator can tell malicious hiding from benign nescience; he calculates the Nash equilibria between firms for given cost functions, leading to given levels of security investment and reporting probability. Without audits, nobody reports anything; with decent auditing, reporting is optimal, so long as it doesn’t lead to over-investment and people don’t place too much trust in the effectiveness of central authority.
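To illustrate the flavour of the incentive argument (with made-up numbers, not the cost functions in Stefan’s paper), a firm that has detected a breach simply weighs the cost of reporting against the expected sanction for hiding it:

```python
# Toy model of a firm's breach-reporting decision under auditing.
# All numbers are illustrative assumptions, not taken from the paper.

def expected_cost(report: bool, audit_prob: float,
                  reporting_cost: float = 2.0,
                  sanction: float = 10.0) -> float:
    """Expected cost to a firm that has detected a breach."""
    if report:
        return reporting_cost        # pay the cost of disclosure
    return audit_prob * sanction     # risk being caught hiding it

def best_response(audit_prob: float) -> bool:
    """Report iff reporting is cheaper in expectation."""
    return expected_cost(True, audit_prob) <= expected_cost(False, audit_prob)

# Without audits nobody reports; with enough auditing everyone does.
print(best_response(0.0))   # False
print(best_response(0.5))   # True
```

This reproduces the headline result in miniature: a zero audit probability makes hiding dominant, while a sufficiently high one makes reporting the best response.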
Fabio Bisogni was next on Data Breaches and the Dilemmas in Notifying Customers. He’s been studying the different breach notification laws in 47 US states, and the letters commonly sent by companies to victims, collecting 217 letters over 6 months. The letters usually have a buffer, then an explanation, then bad news with an alternative, then a conclusion with a closing buffer. He classifies them by clarity, tone, action, interaction, relevance and style. Most letters are transparent rather than opaque, and neutral rather than alarming or reassuring. About a quarter are cold, while a quarter present the breach and its management as routine; then a fifth are reassuring. About a seventh look like junk mail, and seem designed to be discarded (for example, they say “dear customer” rather than using your name). Even fewer (11%) are cooperative, and fewer still (under 5%) are supportive. Companies are more likely to apologise in cases of payment card fraud, unintended disclosure or insider abuse; they are less likely to apologise where the blame can be ascribed to a third party, such as a hacker, malware writer or laptop thief. Finally, even in those states that stipulate notification “without unreasonable delay”, there is still an average 34-day delay from discovery to notification.
Benjamin Edwards spoke on Hype and Heavy Tails: A Closer Look at Data Breaches. Following the OPM breach of millions of federal employee records to Chinese hackers, and the Anthem breach of 80m medical records to the same group, there is a call for federal data-breach laws. He’s therefore studied the statistics of data breaches since 2005, and fails to find any upward trend in the data from the Privacy Rights Clearinghouse. He found the best fit was approximately log normal, indicating a multiplicative growth process. Separating them into accidental and malicious, he finds no change in negligent breaches but a small decrease in malicious ones. So things don’t seem to be getting worse, even if we hear more in the press. Security firms highlight upticks in the statistics, but there are no more of these than we might expect. It’s possible that variance is increasing, and that the tail might be larger than lognormal, or that the data might be incomplete. Since the paper he’s looked at EU data from the CEU and got roughly the same results. There’s also the Red Queen hypothesis, the possibility that breaches are just better reported, and the possible increase in cyber-espionage, specifically by the Panda group in China.
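The lognormal fit is easy to sketch: under a multiplicative growth process the log of breach size should be roughly normal, so the maximum-likelihood parameters of a lognormal are just the mean and standard deviation of the log-sizes. An illustrative check on synthetic data (not the Privacy Rights Clearinghouse records):

```python
import math
import random
import statistics

# Synthetic breach sizes drawn from a known lognormal stand in for
# the real data; the parameters 9.0 and 2.5 are arbitrary choices.
random.seed(0)
sizes = [math.exp(random.gauss(mu=9.0, sigma=2.5)) for _ in range(500)]

# MLE for a lognormal: mean and stdev of the logged observations.
logs = [math.log(s) for s in sizes]
mu_hat = statistics.mean(logs)
sigma_hat = statistics.stdev(logs)
print(f"fitted lognormal: mu={mu_hat:.2f}, sigma={sigma_hat:.2f}")
```

A trend test then amounts to asking whether these fitted parameters drift over time; on the real data, Benjamin reports they essentially don’t.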
Eric Johnson gave the morning’s last talk, on The Market Effect of Healthcare Security: Do Patients Care about Data Breaches? The US HITECH Act aimed at getting hospitals to adopt IT, with $15k annually for physicians, breach notification, fines up to $1.5m, and the HHS “wall of shame” of data breaches. From 2010 to 2013, IT use in emergency rooms went up from 20% to 70%; disclosed breaches went up too. There was a spike of notifications in 2010-11 as already computerised hospitals got the dirty laundry out; people hoped things would get better, but they haven’t. In business, breaches can have an impact (Target had a profit dip and lost its CEO); but what about healthcare, with its weak competition? One survey said half would move after a breach; in another, two-thirds said it would be too hard. So Eric looked at whether breaches actually impacted admissions and ER visits. This is messy! Propensity score matching between breached and unbreached hospitals matches observable factors like hospital size, local law and healthcare competition, while a difference-in-differences approach analyses outcomes. The short-term impact of breaches on inpatient or outpatient visits was small, but hospitals that had multiple breaches over a three-year period did have a significant drop, as did large breaches. These effects disappear in markets with no competition. The lesson is perhaps that policymakers should have different approaches based on a healthcare provider’s market power: carrots for the small players, along with breach disclosure, and sticks for the big monopolists. In questions, it was noted that there is some marketing literature on service failure, the need for rapid recovery and the effects of multiple failures.
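A difference-in-differences estimate of this kind is simple at heart: compare the before/after change at breached hospitals with the change at their matched controls, so that common trends cancel out. A toy calculation with invented admission counts (not the paper’s data):

```python
# Difference-in-differences on made-up admission figures.
# "breached" hospitals are compared with matched "control" hospitals.
before = {"breached": 1000.0, "control": 980.0}
after  = {"breached":  940.0, "control": 990.0}

change_breached = after["breached"] - before["breached"]   # -60
change_control  = after["control"]  - before["control"]    # +10
did_estimate = change_breached - change_control            # -70
print(did_estimate)
```

The control group’s change proxies for what would have happened without a breach; the residual (-70 admissions here) is attributed to the breach.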
Alessandro Acquisti kicked off the afternoon with a paper about Online Self-Disclosure and Offline Threat Detection, which he asked us not to blog.
Rahul Telang was next with What is a Cookie Worth?. Ad targeting now involves multiple layers of real-time auctions, but their effectiveness is hard to measure; see for example Lewis and Rao. Rahul got access to a mobile ad network in which 35,000 users saw ads and a fraction made purchases, and studied whether successively more privacy-intrusive sets of data were better at predicting behaviour; most information had little predictive power. Then he looked at bid-level data for 30 million bids over 3 days on half a million individuals; buyers with a higher baseline purchasing probability are more affected. So how much are cookies worth? About 29% more return on the ad budget, and it’s the temporal component that makes most of the difference.
Benjamin Johnson’s topic was Caviar and Yachts: How Your Purchase Data May Come Back to Haunt You. Existing game-theoretic models of consumers’ privacy preferences have high and low types; can we get different results if we have a continuum of types? Benjamin’s model is that consumers reveal their preferences to firm 1 by buying or not, and this signal may be sold to firm 2 which sells higher-value goods. There are five rounds and he looks at a privacy regime, and at a disclosure regime with myopic or strategic consumers. Welfare was higher in the disclosure regime; consumers were worse off, but the firms were much better off; when consumers were strategic, they could overcome the bad effects and get more surplus. In that case, the firms would make more money if they could make believable guarantees not to share data. Using Bayesian Nash equilibria, consumers formed a view after the first game or two what the firms were going to do; by the end of the series the firms wished they were in a different place. This suggests that if consumers were more sophisticated, we would not have today’s information markets.
The last talk in the privacy economics session was Ignacio Cofone talking about The Value of Privacy: Keeping the Money Where the Mouth is. Many things can explain apparent hyperbolic discounting of privacy risks, including uncertainty about the expected risk and an unwillingness to read privacy policies. Data protection law seems to assume this, as it tries to reduce uncertainty and provides rights to knowledge and access. Others have self-control issues but no precommitment device – no online equivalent of having no chocolate in the house. In 2009, Casari proposed testing discounting; Ignacio proposes a two-phase decision experiment in which pre-commitment and flexibility are independently made free or costly. This could explain choice switching and enable us to assess the costs and benefits of a right to be forgotten. In discussion, it was suggested that where markets fail, privacy policies might be regulated; at present, Facebook’s privacy policy seems almost a chicken game, in which Facebook tries to get its users not to read its privacy policy by changing it frequently.
Harry Kalodner has done An Empirical Study of Namecoin and Lessons for Decentralized Namespace Design. Like DNS, Namecoin generates a namespace, but it’s decentralised using a blockchain, a 1.6Gb public ledger with over a million transactions we can analyse. He looked at squatting, for example, which seemed to account for almost all the blockchain traffic. He noted at least a dozen sales by squatters; but of almost 200,000 names there were only 278 unique non-squatted websites, of which only 28 were available as .bit only. Squatters hoard names, and the goal of Namecoin – preventing domain name seizures by authority – ensures that trademarks aren’t protected. Technical enforcement alone just doesn’t seem to work here. There are also technical issues such as domain name front-running, which Namecoin solves with cryptographic mechanisms; but the incentive structures may prevent cryptocurrencies being used in some types of markets. This inspires him to define an algorithmic agent as a global distributed computation that holds no private information and whose rules can only be changed by consensus.
Zinaida Benenson was next with User Acceptance Factors for Anonymous Credentials: An Empirical Investigation. She did two user studies in the EU project ABC4trust which did an anonymous course evaluation system: students got smartcards, collected attendance points after lectures, and could give feedback only if they’d been to at least half the lectures. She explored a number of hypotheses around topics such as perceived risk, perceived anonymity and situational awareness. She used the Dinev-Hart scale to measure online privacy concerns. Some hypotheses were supported and others not; situational awareness, for example, was not correlated. The emergent model stressed perceived usefulness for the primary task (above all else) as well as ease of use, perceived risk, trust and intention to use.
Jeffrey Pawlick’s subject is Deception by Design: Evidence-Based Signaling Games for Network Defense. Psychology and criminology provide one take on deception while game theory provides another; he blends the two into a model he calls “signalling games with evidence”. The twist is a deception detector, a function that gives a nonzero probability of deceit being spotted for a given type and message. There are a number of solution regions depending on the effectiveness of the detector. This enables him to model the effects of honeypots on both attacker and defender utility and work out requirements for deception detectors from a mechanism design perspective. Things are not always straightforward; in some cases, more information helps the attacker rather than the defender, so it’s important to know where you are on the graph.
Monday’s last talk was from Konstantinos Mersinas on Experimental Elicitation of Risk Behaviour amongst Information Security Professionals. Security guys can no more be considered rational actors than anyone else, but we behave differently from normal civilians. Konstantinos starts from behavioural economics and hypothesizes that we will be averse to risk and ambiguity; focus disproportionately on worst-case outcomes; be averse to ambiguity in evaluation by others; and favour security over operability. He tested 55 professionals against 58 students and invited them to play various lotteries. Security professionals are uniformly ambiguity averse, unlike the general population; we’re better at estimating expected losses; many professionals exhibit worst-case thinking; and the priority accorded to security versus ops depended on seniority, with compliance people favouring the former and managers the latter.
Finally, Tyler Moore announced that the new Journal of Cybersecurity is now accepting submissions. The second issue will be a special issue containing about ten papers selected from this conference for a second round of peer review followed by publication in early 2016. It’s hoped that one issue a year will in future be a WEIS special issue.
Bruce Schneier’s keynote was on “The Future of Cyber Attack – Lessons from Sony”. North Korea was the target of the “fourth party” attack described in the Snowden papers (the US hacked the South Koreans hacking the North Koreans). Sony asked the US government if they should take North Korean threats over “The Interview” seriously, and the government said they were full of empty threats. The attack on Sony started in September 2014 with a spear phish, and went public on November 24th. The Norks’ mistake was putting a skull and crossbones on victim PCs as they started erasing the hard drives, rather than afterwards; smart people pulled the plug at once, saving the data. The forensics started at once. Two days later two unreleased movies appeared on bittorrent, followed by nicely curated internal data; the attackers knew what would be salacious (such as executive salaries when you pay men more than women). Executive emails appeared on December 8th with racist jokes about Obama. That day the first link to the movie appeared. Only on December 19th did the US government publicly accuse North Korea.
Yet the community had many sceptics, such as Marc Rogers of Cloudflare, and multiple theories emerged, from disgruntled insiders through Russians to hackers recruited by the Norks. On January 7th, FBI Director James Comey gave more evidence as people weren’t believing him; another Marc Rogers blogpost ridiculed him. Finally a New York Times article by David Sanger provided enough evidence to convince Bruce. Sony claims the cost was $15m; surely the lawsuits from film investors will add to that.
Who would have thought that the first nation-state attack on the US would be on a movie company? Yet in cyberspace everything gets militarised, and democratised at the same time, with the military using the same tools as everyone else. Here we were debating whether a major attack was the work of a hostile power, or of two guys in a basement somewhere. The two things you need to invoke an appropriate defence (self, police, military, CT …) are who’s attacking you and why, and these often take more than a month to figure out. In future, the default may be self-defence until the forensics teams figure it out. Legally, CNE and CNA are separate, although technically they’re identical (until you get to copy *.* or delete *.*).
While we figure it all out, we are in a classic early arms race with a lot of fear rhetoric, and security professionals are well within the blast radius. There have been multiple attacks by nation states on corporations, from Israel on Kaspersky through Iran on the Saudi oil company to the NSA on many companies. There will be more.
The first panellist, Jonathan Cave, first cautioned against using the rhetoric of war because of the baggage that comes with it. Second, attackers will go for the least defended targets. Third, both attackers and defenders are just starting to learn. Fourth, think about audiences; deniable attribution to a player like DPRK makes it look powerful.
Rainer Boehme asked what is specifically cyber about such incidents; nation states have attacked companies before. As attribution is hard and hackback is not a solution, any arms race has to be defensive, especially as attack is currently much easier. The really interesting aspect of the Sony attack is censorship; it highlights the vulnerability of independent media, which governments have a duty to protect as a foundation of democracy.
Once more, Allan Friedman was not representing the views of the US government. Fred Kaplan’s “The Wizards of Armageddon” tells how it took only two years after Hiroshima and Nagasaki to work out the math of deterrence, but two decades to figure out how to bake this into policy. We probably need to think more about disruption rather than all-out attack; about defenders claiming that things just broke, so as to deny the attacker credit; about exploratory and preparatory plays, and their interaction with diplomacy; and in general the difficulty of communicating in ways that are sufficiently clear to make conflict predictable.
Discussion started on the problems of the war rhetoric; there are so many non-nations in the mix that it’s not just unhelpful but hazardous. Private armies are also not optimal; they take us to the world of the Dutch East India Company. Deterrence involves multiple layers of reflexive belief, unlike the old days of nuclear deterrence. In biological systems, aggression is variable; some individuals retaliate harder than others, and hawk-dove games explain some of this. All sorts of business processes may change; the repeated leaking of large email archives will make more and more executives introduce tough deletion policies, eroding institutional memory. Stuff is routinely leaked through third party countries to make attribution more complex. Attribution basically comes down to identifying people, and underground markets dissipate this information; at least they’re insanely slow compared with financial markets. Nation states can attribute by target list; if a country’s interested in Latin America plus Gibraltar, we can guess who they are. But what about criminals? Another difference between principled actors like nation states, NGOs and newspapers is that they extract news value with skill, dribbling out information over months to keep interest alive.
Tuesday’s refereed talks started with Tyler Moore on Concentrating Correctly on Cybercrime Concentration. We see many concentrations in cybercrime, such as the McColo hosting centre, whose shutdown led to a noticeable drop in spam worldwide; yet that was very temporary, as the bad guys rapidly replicated their infrastructure in a more distributed way. Another example comes from Silk Road, where the top 3% of sellers accounted for almost all the sales volume; yet another comes from spamvertised goods, where almost all payments go through three banks. So which concentrations are just convenience, and which are sufficiently structural that it’s worth taking them down? Economic factors include comparative advantage and network effects; non-economic explanations include whack-a-mole and copying successful patterns (so failure spurs innovation, and concentration will dissipate following intervention); measurement artefacts include measurement bias (PhishTank reports 40% of phish but 100% of PayPal phish), concentrated crime gangs like RockPhish, and other concentrations (such as correlating a new cybercrime with ISP size). Tyler looked at the causes of concentration in ten types of cybercrime, both among perpetrators and among infrastructure components. Law enforcement should first work out whether a concentration is structural rather than a matter of convenience; think how a viable intervention might work; predict the criminals’ response; and then figure out how to work with the relevant stakeholders.
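A concentration measure like the Silk Road one (the share of volume accounted for by the top 3% of sellers) takes only a few lines to compute. The numbers below are synthetic draws from a heavy-tailed distribution, not Silk Road data:

```python
import random

# Synthetic seller volumes from a Pareto-like (heavy-tailed) distribution;
# the shape parameter 1.2 is an arbitrary illustrative choice.
random.seed(1)
sales = sorted((random.paretovariate(1.2) for _ in range(1000)), reverse=True)

# Share of total volume from the top 3% of sellers.
top_n = max(1, int(0.03 * len(sales)))
top_share = sum(sales[:top_n]) / sum(sales)
print(f"top 3% of sellers: {top_share:.0%} of volume")
```

With a heavy tail, a tiny fraction of sellers captures a large share of volume; the policy question Tyler poses is whether that concentration would survive an intervention or simply re-form elsewhere.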
Brian Glass’s topic was Deception in Online Auction Marketplaces: Incentives and Personality Shape Seller Honesty. He’s seeking low-effort solutions to the lemons problem in online auctions. He does lab experiments to explore auction behaviour where subjects act as buyers and sellers. The first experiment was on advertisement creation; the second added feedback and a second auction. The sellers can decide whether to include information about flaws in the object for sale, which image to pick, and the order in which to present information; they can be offered a flat rate or percentage commission, or even a commission subject to positive reputation feedback. This reputation feedback condition had a much larger effect size than commission. People tended to hide the honest information in the middle of the ad. He confirmed that deception is connected with extraversion and neuroticism (as reported elsewhere) and also with seller experience. In fact, most people were dishonest to some extent but reported an honest self-image. In the second experiment he found that negative feedback helped learning; people who had negative feedback and a bad price in the first round were most honest in the second, while the most deceptive were patient sellers (low discounters).
Kurt Thomas spoke on Framing Dependencies Introduced by Underground Commoditization. The bad guys exploit commoditized access to machines, user data, human services and account services like Facebook likes. He set out to systematise 170 papers on underground economies, with a taxonomy by profit sources, support centres and interventions. As an example, he discussed the process of sending twitter spam to promote fake Gucci handbags; fake credential acquisition, spam delivery, affiliate marketing and payout are all needed for the spammer to get paid. At the next layer down are the support centres, with basic services like hosting and human services at the bottom of the stack, supporting traffic acquisition, SEO and cloaking services, going up to malware distribution and specialised payloads like banking trojans. None of this infrastructure would exist without the profit centres. What’s more, political actors like Putin and showbiz people buying a million YouTube views are feeding the beast. Historically we saw security as individual self-defence; in the future, surely resource disruption will play a bigger role, along with payment interdiction and the arrest of bad actors. We need to think of the whole supply chain.
Peter Snyder wrapped up the morning session with No Please, After You: Detecting Fraud in Affiliate Marketing Networks. An affiliate marketing site like Amazon sets an affiliate cookie on the browser of a user who went to their site via a special URL; crooks try to embed cookies on as many browsers as possible, using deceptive iFrames, malware, or hidden redirects. He looked at 164 affiliate marketing programs, chosen because they were large or representative and because they did not use https. He got records of 2.3bn http requests from a university, parsed them into browsing session trees, and looked for both cookie-setting and checkout URLs. He distinguished honest from fraudulent behaviour using a classifier trained on a manually classified subset of the data. He found that 40% of Amazon publishers engaged in fraud, and 80% of Godaddy publishers. However a much smaller percentage actually succeeded; only about 17% of referrers in Amazon were fraudulent, and 20% in Godaddy. As for purchases, 18% of fraudulent conversions were seen in Amazon but none at all in Godaddy.
Armin Sarabi started the afternoon session with a talk on Prioritizing Security Spending: A Quantitative Analysis of Risk Distributions for Different Business Profiles. Armin used the VERIS community database of breaches and turned breach disclosure reports into sector risk profiles by identifying nonvictims in the same sectors and estimating network sizes from Alexa and RIR databases. He built a classifier to do conditional risk prediction, dealing with problems ranging from incomplete labels to possible selection bias, giving risk profiles for firms by sector and size.
Chad Heitzenrater has been working on Policy, Statistics, and Questions: Reflections on UK Cyber Security Disclosures. He’s been studying “Cyber Essentials”, a lightweight control framework being pushed by the UK government. He used their Information Security Breaches Survey (ISBS), which gets about a thousand responses a year from a wide sample of UK business sectors. This enables him to work out annual loss expectancies for virus, hacker and combined attacks; expected staff and other costs of breach remediation; and the NPV of the controls in Cyber Essentials. For example, anti-virus was uneconomic for the larger small companies. The scheme is heavy on buying products and services from third parties, and light on the things you do for yourself, particularly for prevention. Of course, many firms will seek accreditation in order to get government business. In discussion it was noted that the best advice government could give to business would be to toss their Windows machine and buy a Mac, but the government can’t say that!
Wynne Lam spoke on Attack-Deterring and Damage-Control Investments in Cybersecurity. Both IT security standards and liability rules can fix security incentives, so she revisits investment models in the light of tort law and disclosure law. She models a game in which a monopoly vendor can decide whether to disclose and fix bugs, and consumers can decide whether to take extra precautions. Then the firm will only disclose if this causes the consumers to take precautions. She recommends the joint use of a partial liability rule and an optimal standard. She also discusses fines versus compensation, the negative effects of vaporware on product quality, and broader applications of her model such as workplace, vehicle and radiation risks. The trick is to balance the investment incentives between the firm and consumers.
Shu He started the last refereed paper session with a talk on Designing Cybersecurity Policies. Many cybersecurity interventions tackle information asymmetries, whether by notification, publicity or peer ranking. She set out to evaluate policy with a randomised field experiment, getting an independent security firm to send emails to IT staff in 7,919 firms of which some were controls, some got private information and others got information publicly (chosen by matched-pair stratified randomisation). She got over 2000 unique visitors to the website set up for the purpose. She measured the firms’ outgoing spam volume and found a significant drop for the public treatment group. An excess-variance analysis suggested that the outbound spam volume was more clustered in smaller groups, so these had a stronger peer effect. She also found peer effects among close competitors.
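Matched-pair stratified randomisation of this general kind can be sketched as follows (the pairing covariate and details are my assumptions for illustration, not the paper’s exact procedure): sort firms by a covariate, pair neighbours, then flip a coin within each pair so treatment and control groups are balanced on that covariate.

```python
import random

# Hypothetical firms with a single covariate (outgoing spam volume).
random.seed(42)
firms = [{"id": i, "spam_volume": random.expovariate(0.01)} for i in range(20)]

# Sort by the covariate and pair adjacent firms.
firms.sort(key=lambda f: f["spam_volume"])

assignments = {}
for a, b in zip(firms[0::2], firms[1::2]):
    # Randomise treatment within each matched pair.
    treated, control = random.sample([a, b], 2)
    assignments[treated["id"]] = "public"
    assignments[control["id"]] = "control"

print(sum(v == "public" for v in assignments.values()))  # 10 of 20 treated
```

Because each pair contributes one treated and one control firm, differences in outcomes within pairs can be attributed to the treatment rather than to the covariate.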
Orcun Cetin gave the last formal talk, on Understanding the Role of Sender Reputation in Abuse Reporting and Cleanup. Of the millions of reports made each day, many are ignored or lost; but some are acted on despite the sender being unknown to the recipient. Why? He randomly sent some compromised URLs from individual researchers (low reputation), a university (medium) and StopBadware (high). He collected data related to the Asprox botnet (which is stealthy, to make takedown harder) for 16 days. The control group of IP addresses took 8 days to clean up, versus 2.5 for individuals, 3 for universities and 1.5 for anti-malware organisations; so reports were effective overall. Surprisingly, he found no evidence of reputation improving cleanup rates. Cleanup advice, such as the web page on Asprox, is typically ignored (but it did help the few hosting providers who visited it). There is, though, a large variation in the performance of hosting providers, and human responders did better than automation.
Jelte Jansen talked about Entrada, a DNS big-data platform to improve the stability of the Internet in .nl by enabling people to visualise DNS patterns for malicious activity. He’s working on mechanisms for responsible sharing of lightly-anonymised data and is looking for collaborators: [email protected].
Andrew Odlyzko’s title was “Cyber security is not important”. Why has the world survived despite decades of insecurity, and predictions of disaster? He predicts that we’ll continue to cope, with chewing gum and baling wire, as before. They had two-factor authentication at Bell Labs 20 years ago, and gave it up although it was already cheap. We will continue to use the physical world to compensate for the insecurities of cyberspace.
Jaap-Henk Hoepman has developed IRMA (“I Reveal My Attributes”) to do selectively disclosable attribute authentication on smart cards. Who will pay for it?
Anna-Maija Juuso’s master’s thesis is on the incentives of private cloud infrastructure providers to share security information; it’s based on the Gordon-Loeb model and available from her at Oulu: [email protected]
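For readers unfamiliar with the model she builds on, here is a minimal numeric sketch of Gordon-Loeb optimal security investment; the breach-probability function and all parameter values below are standard textbook illustrations, not figures from the thesis.

```python
# Gordon-Loeb (2002) sketch with illustrative parameters.
v, L = 0.5, 100.0   # baseline breach probability, loss if breached
a, b = 1.0, 1.0     # parameters of the security breach function

def S(z):
    """Remaining breach probability after investing z."""
    return v / (a * z + 1) ** b

def enbis(z):
    """Expected net benefit of information security investment."""
    return (v - S(z)) * L - z

# First-order condition S'(z)*L = -1 gives the closed form:
z_star = ((v * a * b * L) ** (1 / (b + 1)) - 1) / a
```

With these numbers the optimal spend comes out well below the expected loss vL, consistent with Gordon and Loeb’s result that for this class of breach functions the firm should never invest more than 1/e (about 37%) of the expected loss.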
Stefan Laube talked on how the NIS directive might change security investment, based on his model introduced yesterday. His latest work endogenises the detection probability and predicts that audits plus a low sanction level might increase breach prevention, but if the sanction level rises firms might invest in breach detection instead.
Wolter Pieters talked about the TresPass project which does risk estimation by predictive assessment. He will be running a winter school.
My talk was on Project Scrambled X, an initiative from Code Red to build an anonymising payment proxy that will enable people living in badly run countries to make small donations to NGOs without being traced and hounded by the police getting information from bank payment gateways. What’s the best way to do that? Something like Western Union, something like bitcoin, something like a private bank, or something like a prepaid debit card? We’re open to ideas.
Last year, Kanta Matsuura studied the liquidity of air miles systems and other loyalty programs which have had security scares. He has now got a simplified model with a threat score instead. This is a big deal; Japan Air Lines stopped their deal with Amazon, and another provider today introduced a phone authentication step.
“Johan” has a Tor-inspired bittorrent client called Tribler with 1.7m “infected” computers; it’s designed to protect you from lawyers. As of today there is also a client in the Android store.
Giovane Moura is interested in measuring DHCP churn across ISPs; he has a tool, based on zmap, that probes addresses from BGP feeds and can be used to spot scams. It has been tested up to 1m IP addresses on an Iranian ISP, remotely, with over 70% accuracy. He’s not looking at huge ISPs like AT&T, BT and Deutsche Telekom. http://giovane-moura.nl
Maximilian Hils has developed mitmproxy, a man-in-the-middle to intercept, modify and save encrypted traffic. It also helps you visualise how many subdomains a site links to; he has two developers, 75 contributors and 100,000 downloads a year. See http://mitmproxy.org
Marie Vasek told us that “Hacking is not random”. Why are some websites infected and not others? Could it be the content management system? Absolutely; WordPress and Joomla are more vulnerable, but curiously the latest versions are more likely to be hacked than the old ones. She’s worked with StopBadware to study 464,737 domains for drive-by downloads, including 44,712 WordPress sites that were compromised. A third of those that were never updated were recompromised, while only 23% of those that were kept up-to-date were.
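To see why a gap like 33% versus 23% recompromised is statistically convincing at these sample sizes, here is a rough two-proportion z-test sketch; the counts below are hypothetical, chosen only to match the reported rates (the actual group sizes are in the paper).

```python
import math

def two_prop_z(x1, n1, x2, n2):
    """Two-proportion z-statistic with a pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)               # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical counts matching the reported ~33% vs ~23% rates.
z = two_prop_z(x1=3300, n1=10000, x2=2300, n2=10000)
```

With thousands of sites per group, a ten-percentage-point gap gives a z-statistic far beyond the usual 1.96 threshold, so the never-updated sites really are recompromised more often.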
Sophie van der Zee talked about lie detection. Liars fidget, but try to move less. Are the small effect sizes in the literature due to the limitations of manual coding? It seems so; using motion-capture suits, we can get much stronger signals of deception than before.
Jeremy Epstein is from the government and is here to help us by doing a joint project between the NWO and the NAS to encourage joint US-Dutch research in security and privacy. He will be running a workshop and issuing a call for proposals.
Richard Clayton has just got a large grant for five years to set up the Cambridge Cloud Cybercrime Centre. This will make large quantities of cybercrime data available to other researchers who sign a standard and lightweight NDA with Cambridge, so you don’t have to spend half your life negotiating for access to the data you need for research. The aim is to drive a step change in cybercrime research.
Tyler Moore is moving to Tulsa and has open PhD fellowships on cybercrime measurement.
Rainer Boehme has moved to Innsbruck and has openings for PhD students and postdocs, as well as space for visiting researchers, both winter and summer.
Finally, I announced the best paper award, decided by a ballot of all the participants. The winner was Hype and Heavy Tails: A Closer Look at Data Breaches, by Benjamin Edwards, Steven Hofmeyr and Stephanie Forrest.
My master’s thesis on the incentives of private *critical* infrastructure to share breach information is now online: http://herkules.oulu.fi/thesis/nbnfioulu-201506111850.pdf
I’ll continue the research in my PhD, so all comments are very much appreciated.
The caviar and yachts link is a duplicate of the Aziz link.
Fixed. Sorry about that!