Open Source Summit Europe

I am at the 2018 Open Source Summit Europe in Edinburgh, where I’ll be speaking about Hyperledger projects. In follow-ups to this post, I’ll live-blog security-related talks and workshops.

The first workshop of the summit I attended was a crash-course introduction to EdgeX Foundry by Jim White, the organization’s chief architect. EdgeX Foundry is an open source, vendor-neutral software framework for IoT edge computing. One of the primary challenges it faces is the sheer number of protocols and standards that need to be supported in the IoT space, both on the south side (various sensors and actuators) and the north side (cloud providers, on-premise servers). To manage this, EdgeX uses a microservices-based architecture in which all components interact via configurable APIs and developers can choose to customize any component. While this architecture does help to alleviate the scaling issue, it raises interesting questions with respect to API security and device management: how do we ensure the integrity of the control and command modules when those modules themselves are federated across multiple third-party-contributed microservices? The team at EdgeX is aware of these issues, and securing the framework is a major theme of focus for their upcoming releases.
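Since everything in EdgeX is reached through these configurable APIs, even a health check is just an HTTP call. Below is a minimal sketch of talking to one of the microservices; the port and endpoint paths are assumptions based on a default EdgeX v1 deployment and may differ in your setup.

```python
# Minimal sketch: polling an EdgeX microservice over its REST API.
# The port (48080) and endpoint paths are assumptions based on a default
# EdgeX v1 deployment of the core-data service; adjust to your config.
import requests

EDGEX_CORE_DATA = "http://localhost:48080"

def core_data_alive() -> bool:
    """Check the health of the core-data microservice via its ping endpoint."""
    resp = requests.get(f"{EDGEX_CORE_DATA}/api/v1/ping", timeout=2)
    return resp.ok

def event_count() -> int:
    """Ask core-data how many sensor events it is currently holding."""
    resp = requests.get(f"{EDGEX_CORE_DATA}/api/v1/event/count", timeout=2)
    resp.raise_for_status()
    return int(resp.text)

if __name__ == "__main__":
    if core_data_alive():
        print(f"core-data holds {event_count()} events")
```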


  1. The welcome talk was given by Jim Zemlin, executive director of the Linux Foundation. He highlighted the work done by the Let’s Encrypt team – it is the world’s largest CA today. He mentioned some of the Foundation’s newer projects such as Hyperledger, a permissioned blockchain ecosystem, and Zephyr, a real-time OS for IoT, among others. He then presented the “Code Sandwich” model of software development, in which roughly 90% of the code used in any project is open source (frameworks at the bottom, OSS libraries on top), while the remaining 10% is the custom code the developer writes to solve their specific problem. The Linux Foundation aims to make writing that 10% as easy and reliable as possible. He concluded by pointing out that the next big sector to be disrupted by FOSS is energy and welcomed Shuli Goodman, Executive Director of LF Energy, to talk about it.

    Shuli started off by comparing the topology of the Internet with that of the electricity grid. She highlighted how the existing power grid is a rigid, monolithic network that needs to change if renewables are to take over. The challenge is how to get a large federation of diverse resources (connected cars, generators, houses, etc.) to talk to each other, a coordination problem the Linux Foundation is quite adept at solving. She concluded by talking about the urgency of these issues: we have roughly a 10-year window in which to combat climate change.

  2. The final keynote speaker was Jonathan Corbet of LWN.net fame. He started by talking about how the Linux community dealt with Meltdown and Spectre. Linux has a standard, open way of dealing with vulnerabilities, but it was not followed when it came to these hardware vulnerabilities; there was a lot of secrecy and siloing of information. This was most obvious in the case of Spectre, where fixes were developed in private and were not ready at the time of disclosure. The siloing of information also left smaller firms and organizations out of the loop, so they had no fix ready. This is alarming: it leads to further consolidation of power in the hands of a few big firms. The episode also highlights a more general problem: software developers treat hardware as a stable foundation, when in fact CPUs are black boxes running convoluted proprietary code.

    Jonathan then talked about the prospects of developing long-term stable kernels. He suggests that the better solution is to move as many devices as possible to the newest kernel available at any given point in time, concerns about regressions notwithstanding. He also discussed the Berkeley Packet Filter (BPF), which is being used to supplement kernel functionality with code supplied from user space. Projects such as BPF are making the kernel/user-space boundary more porous; this, in his view, will be a major trend in Linux kernel development (see the sketch below).

    Jonathan concluded by talking about conduct in the Linux community. The community has built sophisticated tools to support kernel development, yet it long shied away from establishing a code of conduct governing how people interact with each other. That changed last month, when such a code was adopted. Jonathan welcomes this move and thinks it will help make the community more inclusive and more fun.
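    As a concrete illustration of that porous boundary, here is the classic “hello world” of the BCC Python bindings: a tiny BPF program compiled and loaded into the kernel from user space, with its output streamed back out. It assumes the bcc package is installed and requires root; on newer kernels the syscall symbol name may differ.

    ```python
    # A tiny BPF program is compiled and loaded into the kernel from user
    # space; its trace output is then streamed back out to user space.
    # Requires the bcc package and root privileges. On kernels newer than
    # ~4.17 the symbol may be __x64_sys_clone rather than sys_clone.
    from bcc import BPF

    prog = """
    int kprobe__sys_clone(void *ctx) {
        bpf_trace_printk("clone() called\\n");
        return 0;
    }
    """

    b = BPF(text=prog)   # compile and load the BPF program into the kernel
    b.trace_print()      # stream the kernel-side trace output to user space
    ```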

  3. RISC-V ISA and Foundation Overview by Rick O’Connor, Executive Director, RISC-V Foundation

    RISC-V started as an educational tool at UC Berkeley in 2010; x86 and ARM were considered too complex and too riddled with IP issues to be useful for teaching students. The ISA is the most important interface in a computer system: it is where software meets hardware. Why, then, do we not have a successful, free and open standard for an ISA when everything else seems to have one (or many)? Most CPU chips are SoCs incorporating many ISAs. Rick thinks RISC-V can be that one “OpenISA” standard. It is simple (far smaller than commercial ISAs), modular (multiple standard extensions), stable (the base and standard extensions are frozen forever; additions come via optional extensions, not new versions) and has a clean-slate design (a clear separation between the user and privileged ISA; see the toy decoder below).
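    To make the “simple, stable base” point concrete, the sketch below decodes the fields of a 32-bit RV32I R-type instruction. The field layout follows the published RISC-V spec; the example word is the standard encoding of an add instruction.

    ```python
    # Sketch: decoding the fields of a 32-bit RISC-V R-type instruction
    # (RV32I base ISA). Field positions follow the published RISC-V spec.
    def decode_rtype(word: int) -> dict:
        return {
            "opcode": word & 0x7F,          # bits 6:0
            "rd":     (word >> 7)  & 0x1F,  # bits 11:7, destination register
            "funct3": (word >> 12) & 0x07,  # bits 14:12
            "rs1":    (word >> 15) & 0x1F,  # bits 19:15, source register 1
            "rs2":    (word >> 20) & 0x1F,  # bits 24:20, source register 2
            "funct7": (word >> 25) & 0x7F,  # bits 31:25
        }

    # 0x003100B3 encodes "add x1, x2, x3" in RV32I
    print(decode_rtype(0x003100B3))
    # {'opcode': 51, 'rd': 1, 'funct3': 0, 'rs1': 2, 'rs2': 3, 'funct7': 0}
    ```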

    The RISC-V Foundation was incorporated in August 2015; the ISA and related standards shall remain open and license-free to all parties. The foundation has roughly 200 members. Its new focus is Big Data and Fast Data (think quick decision making, e.g. autonomous vehicles). Both Western Digital and Nvidia (the Falcon controller) have started using RISC-V components; the Western Digital deal alone is projected to produce more than 1 billion RISC-V cores.

  4. “Digital Echoes: Understanding Patterns of Mass Violence with Data and Statistics”, Patrick Ball, Director of Research, Human Rights Data Analysis Group

    People and institutions that commit mass violence nearly always lie about it, and the lies are often grotesque and easily disproven. In the trial of the former President of Chad, documents of the secret police were found in the trash, then cleaned and organized by volunteers. From these, the team calculated the mortality rate in the secret police’s prisons by counting prisoners. The daily mortality rate went as high as 0.62 per 100, which is 1.3 to 4.1 times greater than that of US prisoners in Japanese custody in WW2 (considered a war crime at the time). This rebutted the defense’s claim that the conditions in the prisons were acceptable. The trial was a rare win for the human rights community. An ongoing case involves finding hidden graves in Mexico: by gathering extensive data about counties within Mexico, the team tried to predict which counties contain hidden graves, with decent results; they were able to predict the correct counties in the vast majority of cases.

    These are the positive uses of ML; now for the negative, of which predictive policing is one. Predictive policing doesn’t learn the occurrence of crime, it learns police records! Thus it doesn’t predict where crime will occur, it predicts where police enforcement will take place. Case study: drug crimes. Drug usage is everywhere in California, but enforcement is mostly in minority areas. When the predictive policing algorithm was applied, white crime was almost completely ignored and minorities were unduly overpoliced, resulting in a feedback loop (a toy simulation of this loop follows below). This isn’t predictive policing, it is actually predicting policing: we’re merely predicting where the police will go if they follow our model. When you design a social ML model, ask yourself: what’s the cost of being wrong? Do you simply waste resources, or do you affect people’s lives? Patrick concluded the talk by highlighting a successful prosecution in a Guatemalan case in which illegal police orders were revealed by trawling through old records.
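    A toy simulation of that feedback loop, with all numbers invented for illustration: the true crime rate is identical in two districts, but patrols are reallocated each year according to arrest records, so enforcement stays concentrated wherever it already was.

    ```python
    # Toy feedback-loop simulation (all numbers invented for illustration).
    # Ground-truth crime rates are identical, but arrests accumulate where
    # patrols already are, and next year's patrols follow the records.
    import random

    random.seed(42)
    true_crime_rate = {"district_A": 0.3, "district_B": 0.3}  # identical ground truth
    patrols = {"district_A": 10, "district_B": 90}            # historically biased start

    for year in range(5):
        arrests = {
            d: sum(random.random() < true_crime_rate[d] for _ in range(patrols[d]))
            for d in patrols
        }
        # "Predictive" model: allocate patrols in proportion to arrest records
        total = sum(arrests.values()) or 1
        patrols = {d: round(100 * arrests[d] / total) for d in arrests}
        print(f"year {year}: arrests={arrests} -> patrols={patrols}")
    ```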

  5. Preventing CPU Side-channel Attacks with Kernel Tracking – Marian Marinov, SiteGround

    Intel’s microcode updates and KPTI were expected to cause a 10-15% performance degradation; tolerable for a single server, not so much for thousands of servers. This was followed by an overview of CPU cache architecture: L1 is specific to a hyperthread, L2 is shared between hyperthreads, and L3 is shared between cores. Cache side-channel attacks come in two flavours: Flush+Reload and Flush+Flush. Flush+Reload attacks rely on page faults, which are not common in normal processes, so the initial solution is to do performance tracking and flag processes generating too many page faults (a sketch of the idea follows below). But this does not combat Flush+Flush, which does not rely on page faults. Their mitigation instead goes after the prerequisites of Meltdown exploits.
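    A sketch of that naive first idea, assuming a Linux host: sample a process’s fault counters from /proc/<pid>/stat (minflt and majflt, fields 10 and 12 per proc(5)) and flag an excessive fault rate. The threshold here is an arbitrary placeholder.

    ```python
    # Sketch: flag processes with unusually many page faults by sampling
    # /proc/<pid>/stat. minflt is field 10 and majflt field 12 per proc(5).
    import time

    FAULT_RATE_THRESHOLD = 10_000  # faults/sec; arbitrary placeholder, tune per workload

    def fault_count(pid: int) -> int:
        with open(f"/proc/{pid}/stat") as f:
            # comm (field 2) may contain spaces; split after its closing paren
            fields = f.read().rsplit(")", 1)[1].split()
        # fields[0] is field 3 ("state"), so minflt (field 10) is fields[7]
        # and majflt (field 12) is fields[9]
        return int(fields[7]) + int(fields[9])

    def watch(pid: int, interval: float = 1.0):
        prev = fault_count(pid)
        while True:
            time.sleep(interval)
            cur = fault_count(pid)
            if (cur - prev) / interval > FAULT_RATE_THRESHOLD:
                print(f"pid {pid}: suspicious page-fault rate")
            prev = cur
    ```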

    Successful Meltdown exploitation prefers that both the SIGSEGV-ing children and the victim are on the same CPU, so they simply lie to sched_setaffinity: effectively, they do nothing, merely saving the requested affinity in the task_struct as a cpumask_t. Another requirement for Meltdown is that a process have children, grandchildren, or threads dying with SIGSEGV, or TSX instructions that do not finish successfully. On their infrastructure, no customer’s software needs to have SIGSEGV-ing children or threads, and they do not support TSX instructions, so they have decided simply to forbid segfaults: they detect processes that have had more than one child die with SIGSEGV, and any such process is STOPPED, not killed (a toy userspace sketch of the policy follows below). Only root on the host machine can then send any type of signal to that process. What about Foreshadow? Their detection works for that as well. Endgame showed that TSX instructions can be counted. It is hard to lie to userspace about not supporting TSX because the cpuid instruction is unprivileged, so trapping it is not easy; instead, they do some trickery with TSX instruction counting to fool attackers. He concluded by pointing out that a solution like theirs is currently required because KPTI does not prevent Foreshadow-based attacks.
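    SiteGround’s detection lives in the kernel, where the parent of the crashing processes can be stopped directly. Purely as a toy userspace illustration of the stop-don’t-kill policy, the supervisor below launches jobs and freezes the whole job group once more than one job has died of SIGSEGV.

    ```python
    # Toy userspace illustration only: the real detection is kernel-side.
    import os, signal

    def supervise(jobs):
        """jobs: list of argv lists, e.g. [["./worker", "--flag"], ...]."""
        os.setpgrp()  # put supervisor and children in one process group
        for argv in jobs:
            if os.fork() == 0:
                os.execvp(argv[0], argv)  # child: become the job
        segv_deaths = 0
        for _ in jobs:
            _, status = os.wait()  # reap any finished child
            if os.WIFSIGNALED(status) and os.WTERMSG(status) == signal.SIGSEGV if False else (os.WIFSIGNALED(status) and os.WTERMSIG(status) == signal.SIGSEGV):
                segv_deaths += 1
                if segv_deaths > 1:
                    # Freeze the remaining jobs (and this toy supervisor)
                    # rather than kill them; they can then be inspected.
                    os.killpg(os.getpgrp(), signal.SIGSTOP)
        return segv_deaths
    ```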

  6. Deploying and Managing Hyperledger Sawtooth by Duncan Johnston-White, Blockchain Technology Partners

    You shouldn’t have to ask a regulator for permission to use middleware, and a blockchain is just middleware; what matters is what you do with it. They introduced a new tool called BTP Sextant, which leverages Kubernetes to provide a fully curated and hardened platform with a unified user experience. It is based on Hyperledger Sawtooth, which has a strong emphasis on scalability and is looking to allow for “pluggable consensus”: choose your preferred consensus algorithm and your smart contracts continue to work as before.

    The main thing they’ve focussed on in Sextant is failover for high availability. They use Chaos Gorilla, a Netflix tool that simulates the destruction of entire geographical regions of cloud infrastructure, to see how their system copes. The philosophy here is that instead of building in contingencies and hoping that things don’t go wrong, you should actively and regularly try to break your infrastructure and make sure performance doesn’t degrade (see the toy example below). This, they believe, is the biggest strength of the cloud-native approach. Certainly looks interesting.
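    The idea in miniature, using the official Kubernetes Python client (pip install kubernetes): Chaos Gorilla takes out whole regions, while this toy merely deletes one random pod in a namespace to check that the deployment self-heals. The namespace is a placeholder.

    ```python
    # Chaos testing in miniature with the official Kubernetes Python
    # client: delete one random pod and let the deployment self-heal.
    # The "default" namespace is a placeholder; point it at your own.
    import random
    from kubernetes import client, config

    def kill_random_pod(namespace: str = "default"):
        config.load_kube_config()  # uses your local kubeconfig
        v1 = client.CoreV1Api()
        pods = v1.list_namespaced_pod(namespace).items
        if not pods:
            return None
        victim = random.choice(pods)
        v1.delete_namespaced_pod(victim.metadata.name, namespace)
        return victim.metadata.name

    if __name__ == "__main__":
        print("deleted pod:", kill_random_pod())
    ```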
