Amazon Lumberyard Beta 1.4 now available, adds Lua editor, network profiler, and more

We are excited to announce the release of Amazon Lumberyard Beta 1.4, which includes 230 new improvements, fixes and features. Lumberyard Beta 1.4 introduces an integrated Lua editor and debugger, as well as network encryption and profiling for more control when building multiplayer games. Read about the update in the GameDev Blog and the Lumberyard Beta 1.4 release notes.
Source: aws.amazon.com

How can we protect our information in the era of cloud computing?

In an article published in the Proceedings of the Royal Society A, Professor Jon Crowcroft argues that by parcelling and spreading data across multiple sites, and weaving it together like a tapestry, not only would our information be safer, it would be quicker to access, and could potentially be stored at lower overall cost.
The internet is a vast, decentralised communications system, with minimal administrative or governmental oversight. However, we increasingly access our information through cloud-based services, such as Google Drive, iCloud and Dropbox, which are very large centralised storage and processing systems. Cloud-based services offer convenience to the user, as their data can be accessed from anywhere with an internet connection, but their centralised nature can make them vulnerable to attack, such as when personal photos of mostly young and female celebrities were leaked last summer after their iCloud accounts were hacked.
Storing information in the cloud makes it easily accessible to users, while removing the burden of managing it; and the cloud’s highly centralised nature keeps costs low for the companies providing the storage. However, centralised systems can lack resilience, meaning that service can be lost when any one part of the network access path fails.
Centralised systems also give a specific point to attack for those who may want to access them illegally. Even if data is copied many times, if all the copies have the same flaw, they are all vulnerable. Just as a small gene pool places a population at risk from a change in the environment, such as a disease, the lack of variety in centralised storage systems places information at greater risk of theft.
The alternative is a decentralised system, also known as a peer-to-peer system, where resources from many potential locations in the network are mixed, rather than putting all one’s eggs in one basket.
The strength of a peer-to-peer system is that its value grows as the number of users increases: all producers are also potential consumers, so each added node gives the new producer as many customers as are already on the network.
“Since all the members of a peer-to-peer network are giving as well as consuming resources, it quickly overtakes a centralised network in terms of its strength,” said Crowcroft, of the University’s Computer Laboratory.
The higher reliability and performance of fibre to the home, the availability of 4G networks, and IPv6 (Internet Protocol version 6) are all helping to make decentralised networks viable. In practice, a user would carry most of the data they need to access immediately with them on their mobile device, with their home computer acting as the ‘master’ point of contact.
“Essentially, data is encoded redundantly, but rather than making many copies, we weave a tapestry using the bits that represent data, so that threads making up particular pieces of information are repeated but meshed together with threads making up different pieces of information,” said Crowcroft. “Then to dis-entangle a particular piece of information, we need to unpick several threads.”
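Crowcroft describes the encoding only in outline, but a minimal sketch helps make the idea concrete. The Python below is an illustration of the general principle only (not Crowcroft’s actual scheme, and with made-up example data): two messages are stored purely as interleaved fragments held at different sites, plus a redundant parity strand, so rebuilding either message means unpicking threads from more than one site:

# Illustration only: data lives as interleaved "threads" spread over several
# sites, so no single site holds a complete copy and recovery means unpicking
# more than one thread. A real scheme would also transform/encrypt fragments.

def weave(a: bytes, b: bytes):
    """Split two equal-length messages into three 'threads' for three sites."""
    assert len(a) == len(b)
    site1 = a[0::2] + b[1::2]                    # even bytes of a meshed with odd bytes of b
    site2 = b[0::2] + a[1::2]                    # even bytes of b meshed with odd bytes of a
    parity = bytes(x ^ y for x, y in zip(a, b))  # redundant strand, in case a site is lost
    return site1, site2, parity

def unweave_a(site1: bytes, site2: bytes, n: int) -> bytes:
    """Rebuild message a; note that this needs threads from two different sites."""
    half = (n + 1) // 2
    out = bytearray(n)
    out[0::2] = site1[:half]   # a's even bytes were stored at site 1
    out[1::2] = site2[half:]   # a's odd bytes were stored at site 2
    return bytes(out)

a, b = b"meeting at 6pm", b"budget: 42,000"
s1, s2, s3 = weave(a, b)
assert unweave_a(s1, s2, len(a)) == a  # an attacker needs both sites, not just one

Because of the parity strand, any two of the three sites are enough to reconstruct either message, which is the resilience benefit Crowcroft points to; the privacy benefit is that no single storage point yields a complete copy.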
Varying the ways that our information is stored or distributed is normally done to protect against faults in the network, but it can also improve the privacy of our data. In a decentralised system where data is partitioned across several sites, any attacker attempting to access that data has a much more complex target – the attacker has to know where all bits of the information are, as opposed to using brute force at one point to access everything. “The more diversity we use in a peer-to-peer system, the closer we get to an ideal in terms of resilience and privacy,” said Crowcroft.
A peer-to-peer system could also be built at a lower overall cost than a centralised system, argues Crowcroft, since no ‘cache’ is needed in order to store data near the user. To the end user, costs could be as low as a pound per month, or even free, much lower than monthly internet access costs or mobile tariffs.
“We haven’t seen massive take-up of decentralised networks yet, but perhaps that’s just premature,” said Crowcroft. “We’ve only had these massive centralised systems for about a decade, and like many other utilities, the internet will most likely move away from centralisation and towards decentralisation over time, especially as developments in technology make these systems attractive for customers.”
Private information would be much more secure if individuals moved away from cloud-based storage towards peer-to-peer systems, where data is stored in a variety of ways and across a variety of sites, argues a University of Cambridge researcher.

Source: University of Cambridge

Trust on the wild web

Mark Zuckerberg is the world’s youngest billionaire. He got there by founding facebook.com, one of the biggest beasts in the Internet jungle. In the early days, so the story goes, he boasted to a friend on instant messenger that he had the personal details of over 4,000 students in Harvard, and if he ever wanted to know anything he should get in touch. Understandably, his incredulous friend wanted to know how Zuckerberg had access to this information. His reply? ‘People just submitted it. I don’t know why. They trust me, dumbf***s.’
The online environment is no longer merely an aid to living well offline; for many, it has become a forum where much of life is now conducted. But one issue that raises its head again and again is this question of trust on the Internet.
Examining whether and how we can design the Internet for online trust is the focus of my research in the Faculty of Philosophy, supervised by Dr Alex Oliver. The project is sponsored by Microsoft Research, whose Socio-Digital Systems group in Cambridge looks at how technology interacts with human values.
The research is a chance to do some practical philosophy, reflecting on and engaging with applied issues. And as the Internet increasingly becomes a more pervasive part of our lives, issues of trust online are only going to grow in importance. So there is a unique and timely opportunity – and challenge – to break new terrain.

Trusting me, trusting you
It is easily overlooked, but when you stop to think, it is striking how much we trust to other people. It is a fundamental precondition for the smooth functioning of society. Like the air we breathe, or the cement in brickwork, trust is both essential and usually taken for granted.
One consequence is that we tend to notice our reliance on trust only when things go wrong. And although it is easy to eulogise trust, it is not always appropriate. Trusting the untrustworthy is often a dramatically bad idea. But distrusting the trustworthy may have equally serious consequences.
Certainly, most people want to live in a world where it makes sense to trust people, and for people to trust them. But they also don’t want to be taken for a ride. So we have to work out when trust is appropriate.
The trouble is, it is much harder to work out online when trust is appropriate and when not. It is much more difficult to determine online whether a particular person is trustworthy – much of the personal and social context of offline forms of interaction is stripped away in cyberspace, and online identities can be less stable.
But perhaps more seriously, it is still relatively unclear what the norms and mores are that govern appropriate behaviour online. This applies both to the informal norms that spontaneously arise in interpersonal interaction, and also to the apparatus of formal law.
The web, in this sense, is a bit like the Wild West. It is not that life is impossible there – far from it. Indeed, it’s often pretty flamboyant and colourful, and a stimulating place to be. But people can also act unpredictably, and there is little recourse for those who get stung.

Building trust
One moral of the story about the Facebook founder’s comment is that you’ve got to be careful who you trust online. That’s obvious enough, and no different to what we tell our children.
But there are some more challenging issues. For the online world has an important feature: it is malleable. How something is built often serves particular ends, whether intended or not, and these ends in turn serve to realise particular visions of how people ought to live. Were I a metalsmith, for instance, I would rather make ploughs than thumbscrews – I don’t want to contribute to making a world where thumbscrews are plentiful.
This applies to contemporary technologies too. At the last count, 500 million people now have their social relationships partially structured by Zuckerberg’s vision of connecting people, according to whether they have confirmed or ignored the one-size-fits-all ‘friends request’ on facebook.com. The basic TCP/IP structures of the Internet were built according to a broadly libertarian vision widely shared among the early computer science pioneers, which denies central control or ownership in order to facilitate free expression.
So the more pertinent question is: can we build the Internet in a way that facilitates well-placed trust, and encourages trustworthiness? In short, can we design for online trust? To answer this, we need to look at why people are trustworthy and untrustworthy; what counts as good evidence for a person’s trustworthiness online; the effects of online anonymity and pseudonymity; and the role of institutions in grounding trustworthiness. For instance, one mechanism through which we can secure others’ trustworthiness is to develop better online reputation systems and track past conduct.
These questions cannot be answered once and for all. Technology is dynamic: cloud computing, for instance, is considered by many to be a step change in the way we compute, and it too raises specific questions around trust (see panel). As technology changes, so too will the philosophical challenges. The hope is that collaborative work between computer engineers, lawyers and philosophers can help to make the Internet a safer place.

For further information, please contact Tom Simpson (tws21@cam.ac.uk), whose PhD research in the Faculty of Philosophy (www.phil.cam.ac.uk/) is being sponsored by Microsoft Research Cambridge. His article on ‘e-Trust and Reputation’ is published in Ethics and Information Technology.

Philosopher Tom Simpson asks: can we build a trustworthy and safe Internet?
Engineering is always about solving problems for people and the society in which they live. Philosophy can help understand what those problems are and how they are to be solved.

Professor Richard Harper, Microsoft Research Cambridge

Cloud computing
Cloud computing is widely heralded as one of the most radical changes to the way we compute, and its full impact is thought to be just around the corner. First and foremost, the cloud is a change in the geography of computing – instead of having your PC store your data and run everything, your computing will be done on banks of servers and accessed remotely. Along with the change in geography, the move to the cloud is also a change in the scale of computing, with access to far more powerful computing facilities than ever before.
But the cloud raises a host of philosophical issues, particularly questions of responsibility. Who should own what data? When are ‘crowd-sourcing’ techniques appropriate, and when not? What are the effects of more powerful techniques of profiling individuals? What happens to privacy when we compute in the cloud?
To discuss these and related issues, the Faculty of Philosophy and Microsoft Research are co-hosting an international conference in Cambridge, gathering together leading philosophers and practitioners. Two open lectures will be held on the evenings of 5 and 6 April 2011.
For further details, please visit trustandcloudcomputing.org.uk

Source: University of Cambridge

Cloud computing and the philosophy of trust

The event takes place today and tomorrow, and is hosted by the Faculty of Philosophy, University of Cambridge, and supported by Microsoft Research. An international workshop of world-renowned philosophers, ethicists, sociologists and practitioners will discuss the philosophical issues surrounding cloud computing, a concept that has been described as one of the most radical changes to the way we compute. There will also be two public lectures at Corpus Christi, Cambridge.

Cloud computing is first and foremost a change in the geography of computing. Instead of the hardware on your computer doing the computing, the data storage and processing are carried out by hardware held in a different location. Facebook, Gmail and Flickr are well-known examples of computing in the cloud; a widespread move to cloud computing would see third-party servers providing nearly all computing needs, with users accessing software and data as needed.

Benefits claimed for cloud computing include access to far more powerful computing facilities than ever before, convenience and reliability of communications, greater flexibility for a mobile workforce, and a cost-effective alternative for businesses needing to maintain an up-to-date IT infrastructure.

But the provision of computing as a utility also raises philosophical issues, particularly questions of responsibility. Among these are who should own what data, and what happens to privacy when we compute in the cloud? How do we ensure the trustworthiness of those who manage the cloud, so that people use it confidently? And what is it about their computing practices that leads people to want the cloud?

To discuss these and related issues, the conference has gathered together delegates from institutions such as Massachusetts Institute of Technology (MIT), Rutgers, Institute Marcel Mauss, Paris, TU Delft, and the Universities of Cambridge and Oxford.

“We have a choice about how we build and regulate the cloud,” said Dr Alex Oliver, from Cambridge’s Faculty of Philosophy. “The aim of the event is to initiate a new discussion on how the internet and cloud computing is changing business and personal relationships in the cloud era.”

The public lecture this evening will be given by Dr David D. Clark, Senior Research Scientist, Computer Science and Artificial Intelligence Laboratory, MIT, at 6pm. Tomorrow’s lecture will be by Professor Ian Kerr, Canada Research Chair in Ethics, Law and Technology, University of Ottawa, at 5pm. Both lectures will be held in the McCrum Lecture Theatre, Corpus Christi, Cambridge.
Some of the world’s finest minds in academic philosophy are debating the impact of the internet and cloud computing in Cambridge this week.

Source: University of Cambridge

Privacy by design

Online services that store our personal information have proliferated, yet the technology to underpin how our privacy is safeguarded has lagged behind. This was the conclusion of a 2008 report by the UK’s Information Commissioner’s Office, a body set up to uphold privacy for individuals, which pressed for “the evolution of a new approach to the management of personal information that ingrains privacy principles into every part of every system in every organisation.”

This ethos underpins research led by Professor Jon Crowcroft, the Marconi Professor of Communications Systems in the Computer Laboratory. Two projects he leads aim to minimise privacy risks, and at the heart of both is the concept of ‘privacy by design’.

“Privacy by design means that it’s in-built as part of the technology, rather than bolted on in order to comply with data protection laws,” he explained. “With privacy by design, it would simply not be possible for incidents such as the leaking of LinkedIn passwords to happen.”

One project is tackling the challenge of how to maintain privacy when all your data are stored by a central service – the so-called cloud. Anyone who stores images on flickr, or accesses emails from a central server, is already computing in the cloud, and today many businesses are turning to centralised data centres as an economic means of storing their information. However, concerns have also been raised about the scale of control that cloud service providers wield over the data they store and can potentially monitor.

Crowcroft and colleague Dr Anil Madhavapeddy are building technologies to support the control of networked personal data as part of a five-year £12 million research hub (‘Horizon’), which is led by the University of Nottingham and funded by the Engineering and Physical Sciences Research Council (EPSRC). The research is driven by the overarching concept of a lifelong contextual footprint – the idea that each of us throughout our lifetime will lay down a digital trail that captures our patterns of interaction with digital services – and how best to protect this.

A second project, FRESNEL (for ‘Federated Secure Sensor Network Laboratory’), is focusing on privacy in networks that people use to modify their heating, lighting and home entertainment when they are not at home, as well as networks that monitor traffic flow and air quality, and enable a doctor in hospital to check a patient’s health at home.

“Current technologies have usually been devised for single-owner sensor networks that are deployed and managed by a central controlling entity, usually a company that has set themselves up to offer this capability,” he said. “They don’t have the right scalability and security required to deal with a secure multi-purpose federated sensor network, running different applications in parallel. Our aim is to build a network framework with multiple applications sharing the same resources.”

With funding from EPSRC, Crowcroft, Dr Cecilia Mascolo and colleagues, working with Dr Ian Brown at the University of Oxford and industrial project partners, now have a demonstrator program in operation that is currently being evaluated through a large-scale federation of sensor networks across the University of Cambridge.

The aim of these projects, explained Crowcroft, is not to lock up personal data, removing the ability to socialise it, but rather to support systems that process data without sacrificing privacy: “We are building technologies to support lifelong control of networked personal data. For instance, a significant driver behind social networking has been the ecosystem of data processors that aggregate and provide services such as recommendations, location searches or messaging. But the big drawback is that users have to divulge more of their personal data to a third party than is necessary, because of the difficulty of distinguishing what is needed. Our research starts from a single premise – that individuals require control over access to, and use of, their personal data for ever.”

Crowcroft and colleagues have launched a not-for-profit foundation, the Digital Life Foundation, which will build an open-source community around these technologies.

For more information, please contact Louise Walsh (louise.walsh@admin.cam.ac.uk) at the University of Cambridge Office of External Affairs and Communications.
New research aims to ensure that we can exploit the full benefits of the digital world and still protect our online privacy.

Source: University of Cambridge

Recent RDO blogs, August 8, 2016

Here’s what RDO enthusiasts have been blogging about this week:

Customizing a Tripleo Quickstart Deploy by Adam Young

Tripleo Heat Templates allow the deployer to customize the controller deployment by setting values in the controllerExtraConfig section of the stack configuration. However, Quickstart already makes use of this in the file /tmp/deploy_env.yaml, so if you want to continue to customize, you need to work with this file.
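As a purely hypothetical sketch of what such a customization could look like, the short script below merges one extra controller hieradata override into the /tmp/deploy_env.yaml that Quickstart has already generated, rather than overwriting the file. The override key used here and the assumption that controllerExtraConfig sits under parameter_defaults are illustrative, not taken from the post:

# Hypothetical sketch: extend, rather than replace, the deploy_env.yaml that
# tripleo-quickstart already generated, so Quickstart's own settings survive.
import yaml

ENV_FILE = "/tmp/deploy_env.yaml"  # file mentioned in the post
EXTRA_HIERA = {
    # illustrative controller-side override only
    "heat::engine::num_engine_workers": 1,
}

with open(ENV_FILE) as f:
    env = yaml.safe_load(f) or {}

# assumed layout: controllerExtraConfig is a map under parameter_defaults
params = env.setdefault("parameter_defaults", {})
params.setdefault("controllerExtraConfig", {}).update(EXTRA_HIERA)

with open(ENV_FILE, "w") as f:
    yaml.safe_dump(env, f, default_flow_style=False)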

… read more at http://tm3.org/88

fedora-review tool for reviewing RDO packages by Chandan Kumar

This tool makes reviews of rpm packages for Fedora easier. It tries to automate most of the process. Through a bash API the checks can be extended in any programming language and for any programming language.

… read more at http://tm3.org/89

OpenStack operators, developers, users… It’s YOUR summit, vote! by David Simard

Once again, the OpenStack Summit is nigh and this time it’ll be in Barcelona.
The OpenStack Summit event is an opportunity for Operators, Developers and Users alike to gather, discuss and learn about OpenStack.
What we know is that there are going to be keynotes, design sessions for developers to hack on things, and operator sessions for discussing and exchanging ideas around the challenges of operating OpenStack. We also know there will be a bunch of presentations on a wide range of topics from the OpenStack community.

… read more at http://tm3.org/8a

TripleO Composable Services 101 by Steve Hardy

Over the Newton cycle, we’ve been working very hard on a major refactor of our heat templates and puppet manifests, such that a much more granular and flexible “Composable Services” pattern is followed throughout our implementation.

… read more at http://tm3.org/8b

TripleO deep dive session (Undercloud – Under the hood) by Carlos Camacho

This is the fifth video from a series of “Deep Dive” sessions related to TripleO deployments.

… watch at http://tm3.org/8c
Source: RDO

OpenStack Developer Mailing List Digest July 23 to August 5

Equal Chances For All Projects

A proposal [1] in the OpenStack governance repository aims to have everything across OpenStack be plugin based, or to allow all projects access to the same internal APIs.
Some projects have plugin interfaces, but also have project integrations in tree, which makes it difficult to see what a plugin can, and should, do.
With the big tent, we wanted to move to a flatter model, removing the old integrated status.
Examples:

Setting quotas through a standard command line interface or UI is hard for projects that aren’t Nova, Neutron or Cinder.

Quotas in Horizon, for example, are set under “admin → quotas”, but plugins can’t appear there.
OpenStack Client has “openstack quota set --instances 10”, for example.
Steve Martinelli, who contributes to OpenStack Client, has verified that this is not by design, but due to a lack of contributor resources.

Tempest plugins using unstable resources (e.g. setting up users and projects for running tests on). Projects in tree have the benefit that any change must pass the gate before it merges.

Specification to work towards addressing this [2].
The stable interface still needs work on increasing what it exposes to plugins. This requires a bit of effort and is prioritized by the QA team.

All tests in Tempest consume the stable interface.

Since a lot of plugins use the unstable interfaces, the QA team is attempting to maintain backwards compatibility until a stable version is available, which is not always an option.
Tempest.lib [3] is what’s considered the “stable interface”

Given the amount of in-progress work on the examples given, there doesn’t seem to be disagreement with the overall goal that would warrant a global rule or policy.
An existing policy [4] describes how horizontal teams should work with all projects.
Full thread and continued thread

Establishing Project-wide Goals

An outcome from the leadership training session that members of the Technical Committee participated in was setting community-wide goals for accomplishing specific technical tasks to get projects synced up.
There is a change to the governance repository [5] that sets the expectations of what makes a good goal and how teams are meant to approach working on them.
Two goals proposed:

Support Python 3.5 [6]
Switch to Oslo libraries [7]

The Technical Committee wants to set a reasonable number of small goals for a release, not invasive top-down design mandates that teams would want to resist.

Teams could possibly have a good reason for not wanting or being able to fulfill a goal. It just needs to be documented and not result in being removed from the big tent.

Full thread

API Working Group News

Cinder is looking into exposing resource capabilities.

Spec [8]
Patch [9]

Guidelines under review:

Beginning set of guidelines for URIs [10]
Add description of pagination parameters [11]

Full thread

Big Tent?

Should we reconsider whether the big tent is the right approach, given some noticed downsides:

Projects not working together because of fear of adding extra dependencies.
Reimplementing features, badly, instead of standardizing.
More projects created due to politics, not technical reasons.
Less cross-project communication.
Operator pain in assembling loose projects.
Architectural decisions made at individual project level.

Specific examples:

Magnum trying not to use Barbican.
Horizon discussions at the summit wanting to use Zaqar for updates instead of polling, but couldn’t depend on a non-widely deployed subsystem.
Incompatible virtual machine communication:

Sahara uses ssh, which doesn’t play well with tenant networks.
Trove uses rabbit for the guest agent to talk back to the controller.

The overall goal of big tent was to make the community more inclusive, and these issues pre-date big tent.
The only thing that can really force people to adopt a project is DefCore, but that comes with a major chicken and egg problem.
What’s not happening today is a common standard that everything can move towards. Clint Byrum’s proposal on an Architecture working group might be a way forward.
For the Technical Committee, this is a balancing act: trying to provide this without interfering too much with a project whose domain its members may not have specific experience in.
Sahara has had some success with integration with other projects.

Kilo/Liberty integrating with Heat for deploying clusters.
Liberty/Mitaka integrated Barbican.
Using Manila shares for datasources.
Liberty/Mitaka added Sahara support in OpenStack Client.
In progress, support with Designate.

Full thread

 
[1] - https://review.openstack.org/342366
[2] - http://specs.openstack.org/openstack/qa-specs/specs/tempest/client-manager-refactor.html
[3] - http://docs.openstack.org/developer/tempest/overview.html#
[4] - http://governance.openstack.org/resolutions/20141202-project-structure-reform-spec.html#-for-horizontal-teams
[5] - https://review.openstack.org/349068
[6] - https://review.openstack.org/349069
[7] - https://review.openstack.org/349070
[8] - https://review.openstack.org/#/c/306930/
[9] - https://review.openstack.org/#/c/350310/
[10] - https://review.openstack.org/#/c/322194/
[11] - https://review.openstack.org/190743
Source: openstack.org

Google teams up with Stanford Medicine for Clinical Genomics innovation

Posted by Sam Schillace, VP of Engineering, Industry Solutions

Google Cloud Platform has teamed up with Stanford Medicine to help clinicians and scientists securely store and analyze massive genomic datasets with the ultimate goal of transforming patient care and medical research.

Stanford Medicine ranks as one of the country’s best academic medical centers, and we’re eager to see what can happen when we work together. We anticipate that our contributions of HIPAA-compliant cloud computing, machine learning and data science — combined with Stanford’s expertise in genomics and healthcare — could lead to important advances in precision health, a predictive and preventive approach to healthcare.

This is a great opportunity to bring data science to patient care by combining genomics and traditional health records. Our collaboration is in support of the new Clinical Genomics Service at Stanford Health Care, which aims to sequence and analyze thousands of patients’ genomes. Cloud Platform will allow Stanford scientists and clinicians to securely analyze these massive datasets immediately and scale up painlessly as clinical genomics becomes more commonplace.

As genome sequencing becomes affordable, more and more patients will be able to benefit from it. Modern cloud technology and data science tools can vastly improve analysis methods for genomic data. Working with the team at Stanford, we expect to build a new generation of platforms and tools that will facilitate genome analysis at massive scale, providing actionable answers about gene variants from each person’s genome in a fraction of the time it takes now, and use that information to make better medical decisions.

Stanford researchers already have some cool ideas in mind for expanding beyond genome data, such as using machine-learning techniques to train computers to read pathology or X-ray images and identify tumors or other medical problems. They’ve also amassed years of anonymized patient data that could be used to teach algorithms to distinguish false signals from real ones, such as hospital alarms that go off when nothing is wrong with a patient.

Together, we believe these efforts will pay off in new insights into human health and better care for patients at Stanford and other institutions.
Source: Google Cloud Platform

Orbitera joins the Google Cloud Platform team

Posted by Nan Boden, Head of Global Technology Partners

Today we’re excited to announce that Google has acquired Orbitera!

Orbitera provides a commerce platform that makes buying and selling software in the cloud simple, seamless and scalable for all kinds of businesses, including independent software vendors, service providers and IT channel organizations.

The current model for deploying, managing and billing cloud-based software does not easily fit the way today’s modern enterprises operate. Orbitera automates many of the processes associated with billing, packaging and pricing optimization for leading businesses and ISVs (Independent Software Vendors) supporting customers running in the cloud. More than 60,000 enterprise stacks have been launched on Orbitera.

At Google, we partner closely with our enterprise customers and software providers to ensure their transition to the cloud is as simple and seamless as possible. We recognize that both enterprise customers and ISVs want to be able to use more than one cloud provider and have a way to conduct product trials and proofs of concept before building a full production deployment, all using their trusted SIs (System Integrators), resellers and normal sales cycles.

Orbitera has built a strong ecosystem of enterprise software vendors delivering software to multiple clouds. This acquisition will not only improve the support of software vendors on Google Cloud Platform, but also reinforce Google’s support for the multi-cloud world. We’re providing customers with more choice and flexibility when it comes to running their cloud environment.

Looking to the future, we’re committed to maintaining Orbitera’s neutrality as a platform supporting multi-cloud commerce. We look forward to helping the modern enterprise thrive in a multi-cloud world.
Source: Google Cloud Platform

Continuous Integration Testing on Docker Cloud. It’s Dead Simple

This is a guest post by Stephen Pope & Kevin Kaland from Project Ricochet
Docker Cloud is a SaaS solution hosted by Docker that gives teams the ability to easily manage, deploy, and scale their Dockerized applications.

The Docker Cloud service features some awesome continuous integration capabilities, especially its testing features. Once you understand the basics, I’ve found they are remarkably easy to use. The fact is, continuous integration covers a wide range of items — like automated builds, build testing, and automated deployment. The Docker Cloud service makes features like automated builds and deployment quite obvious, but the testing features can be a little harder to find, even though they are in plain sight!
In this piece, my aim is to walk you through the Docker Cloud service’s testing capabilities in a straightforward manner. By the end, I hope you’ll agree that it’s really dead simple!
So, let’s begin with the first task. Before we can test our builds, we need to automate them. We’ll use GitHub to set this up here, but note that it works the same way in Bitbucket.
Set Up an Automated Build

1. Log into Docker Cloud using your Docker ID.
2. On the landing page (or in the left-hand menu), click on Repositories.
3. If you don’t already have a repository, you’ll need to click the Create button on the Repository page.
4. Click the Builds tab on the Repository page. If this is your first autobuild, you should see this screen:

To connect your GitHub account, click the Learn more link.
5. Once on the Cloud Settings page, look for the Source Providers section. Click on the plug icon to connect your GitHub account. Authorize the connection on the screen that follows.

6. When your GitHub account is connected, go back to the Repository page and click Configure Automated Builds. Now we are in business!
7. Select the GitHub source repository you want to build from.

8. In the Build Location section, choose the option to Build on Docker Cloud’s infrastructure and select a builder size to run the build process on. Accept the default Autotest option for now (we’ll describe the Autotest options in detail in a moment).

Make sure you are satisfied with the Tag Mappings; these map your Docker image build tags (e.g. latest, test, production, etc.) to your GitHub branches. Ensure that Autobuild is enabled. If your Dockerfile needs any Environment Variables at build time, you can add them here. (Ours doesn’t.) Once you’ve set everything up, click Save.
The specified tag will now be built when you push to the associated branch:

Set Up Automated Deployment
After the build images are created, you can enable automated deployment.
If you are inclined to build images automatically, you may also want to automate the deployment of updated images once they are built. Docker Cloud makes this easy:
1. To get started, you will need a service to deploy (a service is a collection of running containers of a particular Docker image). A good example of a service might be our production node app, running 7 containers with a set of environment variables setup for that specific instance of the app. You might also have an equivalent service for development and testing (where you can test code before production). Here is a good read on starting your first service.
2. Edit the service that is using the Docker image.
3. In the General Settings section, ensure that Autoredeploy is enabled:

4. Save changes and you should be set.
Autotest Builds before Deployment
Remember when I said testing your builds was dead simple? Well, check this out. All you need to do is enable Autotests.
On the Repository page, navigate to the Builds tab and then click Configure Automated Builds. Within the Autotest section, three options are available:

Off will test commits only to branches that are using Autobuild to build and push images.
Source repository will test commits to all branches of the source code repository, regardless of their Autobuild setting.
Source repository and external pull requests will test commits to all branches of the source code repository, including any pull requests opened against it.

Before you turn that on, you’ll need to set up a few assets in your repository to define the tests and how they should be run. You can find examples of this in our Production Meteor using Docker Git repo.
This boils down to a single basic file — plus some optional ones in case you need them.
Our docker-compose.test.yml will serve as the main entry point for testing. It lets you define a “sut” (system under test) service. This enables you to run the main tests and various other services that may be needed to test your build. In our example, you may notice that it simply outputs “test passed” — but that line is where the magic happens. If your test returns 0, your test has passed. If it returns a 1, it hasn’t. Essentially, you are performing a simple call from the YAML file or, if more complex tests are needed, in a more robust bash script.
Let’s review a YAML compose file example from a blog on automated testing that uses a bash script and some additional features:
sut:
  build: .
  dockerfile: Dockerfile.test
  links:
    - web
web:
  build: .
  dockerfile: Dockerfile
  links:
    - redis
redis:
  image: redis
Here, we define a sut service, along with some build instructions and an additional dockerfile for the tests. With this, you should be able to build a separate image for testing, instead of using the image for your build. That enables you to have different packages and files for testing that won’t be included in your application build.
Dockerfile.test
FROM ubuntu:trusty
RUN apt-get update && apt-get install -yq curl && apt-get clean
WORKDIR /app
ADD test.sh /app/test.sh
CMD ["bash", "test.sh"]
Here you’ll notice the final CMD is a test.sh bash script. This script will execute and return a 0 or 1 based on the test results.
Let’s take a quick look at the test.sh script:
Test.sh
sleep 5
if curl web | grep -q '<b>Visits:</b> '; then
  echo "Tests passed!"
  exit 0
else
  echo "Tests failed!"
  exit 1
fi
You’ll see the script is doing a simple curl call against the test application to see if some text appears on the page. If it does, the test passed. If not, the test will fail.
Remember how easy I said this was to implement on Docker Cloud? That’s all there is to it! Additionally, once you’ve mastered the basics, more advanced integrations can be done with build hooks.
Of course, building the tests for a complete application will be a much larger task than described here, but the point is you’ll be able to focus on the tests, not how to squeeze them into your CI workflow. Docker Cloud makes the setup and implementation super easy. Once you understand these basic components, you should be able to set up our test Meteor service in a matter of minutes.
Alright, that’s it for now. I hope this piece helped guide you through the process fairly easily and, more importantly, showcased the cool CI testing workflow Docker Cloud has to offer. If you have additional questions or comments, make sure to head over to the Docker Cloud Forum, where Docker technical staff will be glad to help. Here are some related posts that should prove helpful on your journey. Enjoy!
Get Docker Cloud for Free - https://cloud.docker.com/

Docker Cloud Automated Repository Testing
Basic Voting Webapp (used at DockerCon for various examples)
An in depth post on automated test on Digital Ocean
Meteor Docker Example with Test (used in this Blog)


Source: https://blog.docker.com/feed/