Making e-Infrastructure Accessible to Industry

I was recently invited to speak at the UK e-Infrastructure Academic User Community Forum in Oxford on the work that we have been doing to make e-Infrastructure accessible to industry - both as HPC Midlands and through my role on the Technology Strategy Board's e-Infrastructure SIG. Due to an autocorrect whim, the invite from Oxford e-Research Centre Director David de Roure turned into an 'incite', which in the circumstances seems quite appropriate. I hope the resulting call to arms offers some insight into where we as a community perhaps need to step up a gear or two. All of this, and Bee-Bots too!

What is e-Infrastructure?

I've been told by a number of firms that they find supercomputing (and by extension the broader set of services and facilities we group together as e-Infrastructure) very difficult to grasp as a concept. Jargon seems to play a major role in this - "HPC", "MPI" and other impenetrable Three Letter Acronyms. The underlying assumptions of supercomputing can also be quite alien, e.g. batch scheduling, where your job may sit in a queue for hours, versus people's expectations that computer programs cooperate with Internet services in real time and deliver near-instantaneous results.
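To make the contrast concrete, here is a minimal sketch of what the batch model looks like from a user's point of view, assuming a SLURM-managed cluster; the job script name and the polling interval are purely illustrative:

```python
import re
import subprocess
import time

# Submit a (hypothetical) simulation script to a SLURM-style batch scheduler.
# Unlike a web request, nothing runs immediately: the job joins a queue and
# starts only when the scheduler can find it free nodes.
submit = subprocess.run(
    ["sbatch", "simulate.sh"],          # simulate.sh is a placeholder job script
    capture_output=True, text=True, check=True,
)
job_id = re.search(r"\d+", submit.stdout).group()   # "Submitted batch job 12345"
print(f"Job {job_id} submitted - it may sit in the queue for hours")

# Poll the queue; the job disappears from squeue output once it has finished.
while True:
    state = subprocess.run(
        ["squeue", "-h", "-j", job_id, "-o", "%T"],
        capture_output=True, text=True,
    ).stdout.strip()
    if not state:
        print("Job finished - results are in whatever files simulate.sh wrote")
        break
    print(f"Current state: {state}")    # e.g. PENDING, then RUNNING
    time.sleep(300)                     # check back every five minutes
```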

There are some other more subtle factors at work here - for example, I have found that spin-outs and startups tend not to see themselves as Small to Medium sized Enterprises (SMEs). The very term SME is EU jargon with a specific meaning in terms of company size and profile (broadly, fewer than 250 employees and no more than €50 million in annual turnover). This puts us in a slightly awkward position, as we have to remember to adopt a different vocabulary depending on the audience, and not only along a technical/non-technical axis.

We can also ask ourselves where "e-Infrastructure" begins and ends. In many recent discussions there has been a more or less tacit assumption that this term refers to major capital items of equipment - with supercomputers as a particularly notable example. But where do we draw the line? There are all sorts of equipment that universities and colleges hold in their research labs that will be of interest to the technology oriented spin-out or startup. Consider mass spectrometers, scanning electron microscopes, wind tunnels and the like. Many of these devices will be prohibitively expensive to buy outright, unless you are a particularly well funded firm. Even so, it would be difficult to justify a major capital expense unless the equipment was business critical or likely to be very heavily used.


What e-Infrastructure is out there?

Here's an example that I used in my talk - let's imagine that by some fluke your startup was able to purchase an Illumina Next-Gen DNA Sequencer, which costs the best part of £100,000. After a couple of years of faithful service it emits a puff of blue smoke and stops working. You are in a panic because you have a large batch of samples that need to be sequenced. The chances are that there is a research lab somewhere not too distant that might be able to let you use their sequencer. But where?

This is a big challenge for us - to find a way of presenting the UK's e-Infrastructure that lets people (researchers and firms large and small) discover what is out there and who to talk to about it. For this reason I am particularly pleased that we now have equipment.data.ac.uk, a website that ingests data about equipment from institutions all around the UK and presents it in a user-friendly, easily searchable format. You can see an example in the screenshot below:

[Screenshot: equipment.data.ac.uk search results]

We could quibble about the geolocation data for The Genome Analysis Centre, which places Norwich somewhere in the North Sea. However, the staff at our hypothetical spin-out will be overjoyed to find, almost instantly, phone numbers and email addresses for some friendly folk who are happy to share their equipment, perhaps for a modest fee. The equipment.data site imports data in a wide range of formats, from simple Comma Separated Values (CSV) files to Excel spreadsheets and RDF triples.

My favourite data source is the Kit-Catalogue software developed at Loughborough, and now in use at a growing number of institutions around the UK and abroad. Kit-Catalogue is a simple PHP/MySQL application that lets you put up a professional-looking "shop window" of the equipment you are making available to share with minimal effort - and it automatically produces a JSON structured data feed of public items for consumption by equipment.data and other services. For example, here is an entry from the Loughborough equipment database that shows our wind tunnel:

[Screenshot: Kit-Catalogue entry for the Loughborough wind tunnel]

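To give a flavour of how such a feed might be consumed, here is a minimal Python sketch; the feed URL and field names are assumptions for illustration rather than the actual Kit-Catalogue schema, which should be checked against its documentation:

```python
import json
from urllib.request import urlopen

# Hypothetical feed URL - a real Kit-Catalogue install publishes its public
# items feed at an address specific to that institution.
FEED_URL = "https://equipment.example.ac.uk/api/items.json"

with urlopen(FEED_URL) as response:
    items = json.load(response)

# Field names here ("title", "manufacturer", "contact_email") are illustrative;
# check the actual schema before relying on them.
for item in items:
    print(f"{item.get('title')} ({item.get('manufacturer', 'unknown make')})")
    print(f"  contact: {item.get('contact_email', 'n/a')}")
```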
Kit-Catalogue and equipment.data have been supported by the Engineering and Physical Sciences Research Council (EPSRC). EPSRC has a broad agenda of encouraging institutions to collaborate and share facilities both with each other and with industry. It's no coincidence that the EPSRC also funded HPC Midlands, our regional supercomputing centre of excellence for research and industry.


Painting a More Coherent Picture

Once we have made it off the starting grid, there are still a number of practical hurdles that need to be surmounted. Even if we kept to a narrow definition of e-Infrastructure as purely HPC, we would still have:
  • Technical environment - the vagaries of HPC schedulers, Linux variants, software library version mismatches, binary compatibility and so on
  • Licensing models for commercial off-the-shelf software - although we have seen a lot of progress in this area of late
  • Contractual frameworks - do I really need to sign a separate agreement for each and every organization that I want to do business with? Could UK e-Infrastructure services have a single shared set of terms and conditions?
  • Information Assurance - what guarantees do I have that my data will be safe? What recognised industry standards for information security will be followed?
  • Connectivity - many data-intensive applications have significant demands in terms of bandwidth. How will I transfer Terabytes of data on a routine basis? What latency do I require for remote visualization and computational steering to be practical? (A rough sizing of transfer times follows this list.)
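On the connectivity point, a back-of-the-envelope calculation is often sobering. The dataset size, link speeds and usable-throughput figure in this sketch are purely illustrative:

```python
# Rough transfer times for a 10 TB dataset - the dataset size, link speeds
# and 80% usable-throughput assumption are all illustrative.
dataset_bytes = 10 * 10**12             # 10 TB
efficiency = 0.8                        # protocol overhead, shared links, etc.

for name, gbit_per_s in [("100 Mbit/s", 0.1), ("1 Gbit/s", 1), ("10 Gbit/s", 10)]:
    usable_bytes_per_s = gbit_per_s * 1e9 / 8 * efficiency
    hours = dataset_bytes / usable_bytes_per_s / 3600
    print(f"{name:>10}: ~{hours:.1f} hours")
```

At a well-utilised 1 Gbit/s this works out at over a day per 10 TB, which is why connectivity deserves to be on the list at all.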
Here it has to be said that there is a lot of prior work that we ought to be able to take advantage of, such as the JANET Moonshot technology trial. This provides all participants with a "network ID" based on their organizational IT user name and the organization's domain name, which looks very similar to an email address. This format is already familiar to the millions of users globally of the eduroam network roaming service.
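Purely to illustrate the shape of such an identifier (the authentication itself is handled by the Moonshot and eduroam infrastructure, not by application code), a trivial sketch with a made-up identifier:

```python
# A Moonshot/eduroam-style network ID looks like an email address:
# the local username plus the home organisation's domain (the "realm").
# The realm is what lets a visited site route authentication back to the
# user's home organisation; this sketch only illustrates the format.
def split_network_id(network_id: str):
    user, _, realm = network_id.partition("@")
    if not user or not realm:
        raise ValueError(f"not a user@realm identifier: {network_id!r}")
    return user, realm

user, realm = split_network_id("a.researcher@example.ac.uk")  # illustrative
print(f"local user: {user}, home realm: {realm}")
```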

Most of the firms I have spoken to have an immediate need to run off the shelf software from established major vendors - packages like NASTRAN, FLUENT and so on. They are interested in being able to expedite their existing work, and cloudburst from time to time when they need additional capacity beyond what they have in house. Whilst we have just signed up an SME that is developing their own parallel software, this is far from the norm. JANET(UK) have also done a lot of good work on developing standard terms and conditions (model agreements) for cloud services, which have some relevance here.

I'm fortunate that I have colleagues at Loughborough who work on the JANET Education Shared Information Security Service (ESISS), and further afield we also have the Communications-Electronics Security Group (CESG), GCHQ's unclassified adjunct organization, which advises government and the public sector on information security.

The issue of the common technical environment is an interesting one, given that in many ways this is a solved problem - however the UK has largely let its early lead on e-Science and Grid Computing evaporate with the discontinuation of funding for the National Grid Service. I wonder whether we will come to regret this, and end up reinventing the wheel to a degree. This might be necessary in order to meaningfully participate in the EU Horizon 2020 programme via the established European Grid Infrastructure, but a "National Grid 2.0" could also significantly help to grease the wheels of industry if well conceived and executed purposefully. From my perspective Grid 2.0 would ideally draw significantly on open source technologies like OpenStack and KVM, but also pilfer freely from the more successful parts of the earlier Grid work, such as the DRMAA job submission API.
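As a flavour of what that looks like from the client side, here is a minimal sketch using the Python DRMAA bindings against a DRMAA-enabled scheduler; the executable path and arguments are placeholders:

```python
import drmaa  # Python bindings for the DRMAA job submission API

# Submit a placeholder solver run through DRMAA rather than a scheduler-specific
# command, so the same client code can target any DRMAA-enabled scheduler.
session = drmaa.Session()
session.initialize()

job = session.createJobTemplate()
job.remoteCommand = "/opt/apps/solver/bin/solve"    # placeholder executable
job.args = ["--case", "wing_mesh.dat"]              # placeholder arguments
job.joinFiles = True                                # merge stdout and stderr

job_id = session.runJob(job)
print(f"Submitted job {job_id}, waiting for it to complete...")

info = session.wait(job_id, drmaa.Session.TIMEOUT_WAIT_FOREVER)
print(f"Job {info.jobId} finished with exit status {info.exitStatus}")

session.deleteJobTemplate(job)
session.exit()
```

The attraction is that the same submission code runs unchanged whether the back end is Grid Engine, Slurm or another scheduler with a DRMAA implementation.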


Correcting the Training Impedance Mismatch

For me one of the areas where we have most definitely "done it right" is to have a concerted and coordinated national programme of supercomputing training. However, once we start to pick away at this it's clear that the current provision is very much tied to an outdated delivery model that not only hampers industrial take-up, but also puts unnecessary barriers in place for the postdocs that it is aimed at. For example:
  • It's supply- rather than demand-led - better hope that the course you wanted to go on isn't oversubscribed
  • Inflexible timing - miss a course and you might have to wait the best part of a year for another chance. If you are accepted onto a course then you will have to take as much as a working week out of the office to attend due to the block delivery format
  • Prohibitively expensive for SMEs - this pricing is dictated by EPSRC, who need to make allowances for small firms operating at the micro-SME level with limited access to capital
  • No engagement with ISVs - where are the courses on industry standard software?
  • Where is the MOOC? In 2013 it is simply unacceptable to have no online training provision for those who are unable to make it in person to a particular instance of a course
I hope that this situation will improve dramatically through the tender process for ARCHER training - for example, there is a golden opportunity to use FutureLearn to raise awareness of e-Infrastructure whilst also building up a library of online training material. In a similar vein, I was particularly taken by the work that Cranfield have done to provide a variety of HPC training opportunities ranging from a postgraduate certificate and diploma to a full MSc. Firms often find it difficult to release staff for extended periods for training, and a CPD-based approach that builds up to a recognised qualification is a very attractive option.


Generation Pi

In my talk I used some examples from a recent visit to Technocamps at Bangor University, where I spoke on the IT skills crisis and How to be a Hacker. In the week that the new UK National Curriculum for computing has been released I think it is worth spending a moment considering how we can work with the Raspberry Pi generation to raise awareness of computer based modelling and simulation.

This could be a very powerful antidote to the "cargo cult computing" that we have all seen in recent years, as a byproduct of the UK's ICT teaching model. It is somewhat alarming to think that there is a generation walking around right now for whom the (open source!) inner workings of an Android device might as well be magic, and we need to do all we can to counter this.

For school children, the likes of Jamil Appa's hpc4schools project could be just the ticket - visit your local science discovery centre to design and race a Bloodhound, or perhaps a Skylon. It's also clear that there are large numbers of teachers out there who are keen to get stuck into this. I was heartened when our own school announced that the kids would be getting a first taste of programming through the Bee-Bots I'd played with back in January when I spoke at BETT on The Perfect Storm of change taking place in the education sector right now. What's a Bee-Bot, I hear you ask? Wonder no longer...


To close, here's an embedded copy of my slides - feedback welcome!
