Launching our First Ansible Job Template on a VM in CloudForms

This is part 3 of our series on Ansible Tower Integration in Red Hat CloudForms.
In this article, we will explore how to use the Ansible Tower integration in CloudForms by configuring the launch of an Ansible Job Template with the click of a button on a VM.
In this example, we use an Ansible Job Template based on a role from the Ansible Galaxy role library. Specifically, we installed the sfromm.postgresql role, dedicated to managing PostgreSQL, on our Ansible Tower. The associated Ansible Playbook is available on GitHub.
As seen in our previous article, an inventory of all Job Templates is available in CloudForms under ‘Configuration > Configuration Management > Ansible Tower Job Templates’. For each of them, CloudForms provides the ability to auto-generate a Service Dialog which can be used to prompt users to validate or provide Job Template inputs. The dialog generation can be triggered by invoking ‘Configuration > Create Service Dialog from this Job Template’ on a Job Template and filling in the service dialog name field.
Once saved, the generated service dialog can be found under ‘Automate > Customization > Service Dialogs > All Dialogs > PostgreSQL Deployment Dialog’.
This dialog contains all of the fields required to launch the Job Template. The first element on the dialog is a ‘Limit’ field. This is used in Ansible Tower to filter the hosts and specify on which particular host the job must run. CloudForms populates this field automatically with the VM name when used on a VM button. The other elements correspond to the extra variables required by our Job Template, as previously seen from the Ansible Tower Job Templates inventory.
At this point, we can edit the dialog and modify the elements. A common task is to uncheck the ‘read only’ option on some elements and/or add additional logic behind others (e.g. dynamic drop-down lists). For example, we rename the labels to make them more user friendly, and we remove the ‘Limit’ element, which will be populated behind the scenes by CloudForms.
On the Ansible Tower side, all you need to do is configure an inventory corresponding to your CloudForms infrastructure or cloud providers.
If used following a CloudForms VM provisioning, you must trigger an update of the Ansible Tower inventory prior to launching a Job Template. In order to perform this update, simply enable the option ‘Update on Launch’ on the required inventory group source in Ansible Tower.
This is required to ensure the new host is present in the Ansible Tower inventory before trying to launch a Job Template on it.
CloudForms populates the limit parameter on the Job Template to target the host on which the template should be executed. Additional extra parameters can also be set in CloudForms using a Service Dialog or programmatically (an example is provided under Datastore / ManageIQ / ConfigurationManagement / AnsibleTower / Operations / StateMachines / Job / launch_ansible_job).
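Under the hood, launching a Job Template boils down to a REST call against Ansible Tower's job template launch endpoint. The Python sketch below is not taken from CloudForms itself; it simply illustrates roughly what such a call looks like, and the Tower hostname, credentials, template ID and extra variables are all placeholders for your own values.

import requests

TOWER_URL = "https://tower.example.com"   # placeholder Tower host
TEMPLATE_ID = 42                          # placeholder Job Template id

payload = {
    "limit": "vm-postgres-01",            # restrict the run to the target VM, as CloudForms does
    "extra_vars": {                       # placeholder extra variables for the PostgreSQL role
        "postgresql_databases": ["demo_db"],
        "postgresql_users": ["demo_user"],
    },
}

response = requests.post(
    "{0}/api/v1/job_templates/{1}/launch/".format(TOWER_URL, TEMPLATE_ID),
    json=payload,
    auth=("admin", "password"),           # placeholder credentials
    verify=False,                         # only for lab setups with self-signed certificates
)
response.raise_for_status()
print("Launched job", response.json().get("job"))
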
Service Dialogs are automatically generated from CloudForms by clicking on the ‘Create Service Dialog from this Job Template’ button presented for each Job Template.
The resulting Service Dialog contains all elements required as input in the Ansible Job Template. This includes the limit as well as all extra parameters in separate elements. It is of course possible to edit the generated Service Dialog to amend or modify any of the fields.
In our case, we simply want to deploy PostgreSQL and a database, keeping the default values entered in the Job Template’s parameters. We will keep the dialog as generated.
The next step is to create a new button on VMs for our Job Template. Navigate to ‘Automate > Buttons’ and expand the ‘VM and Instance’ object type. Add a new Button Group if required and create a new button by selecting ‘Configuration > Add a new Button’.
Under Dialog, make sure you select the previously generated dialog (‘PostgreSQL Deployment Dialog’ in our case). System/Process is set to ‘Request’, Message to ‘create’ and Request to ‘Ansible_Tower_Job’. The Job Template name is specified as an attribute/value pair under ‘job_template_name’ (in this case, set to the corresponding Ansible Job Template name ‘PostgreSQL Deployment’). Additional values such as dialog_param_postgresql_databases or dialog_param_postgresql_users can be specified to override the Job Template’s default values if required.
Save the Button, and voilà: we have a new Button on our VM allowing us to run our Ansible Job Template to deploy PostgreSQL on the VM using Ansible Tower.
These steps can be followed to add additional buttons on the VMs with associated Job Templates.
In this article, we looked at how to configure the launch of an Ansible Job Template with the click of a button on a VM. In the next article, we will explore how to use Ansible Job Templates as Service Items and publish them in the Service Catalog.
Source: CloudForms

Cloud Shell now GA, and still free

Posted by Cody Bratt, Product Manager

Google Cloud Shell is a command line interface that allows you to manage your Google Cloud Platform infrastructure from any computer with an internet connection. Last year we extended the free beta period through the end of 2016 so you could try it out longer. Now, we’re excited to announce that Cloud Shell is generally available and free to use.

For those of you who haven’t tried it yet, Cloud Shell offers quick access to a temporary VM that’s hosted and managed by Google and includes all the popular tools that you need to manage your GCP environment. For example, you can use the Cloud SDK to manage Cloud Storage data or run and deploy an App Engine application. You can keep files between sessions using your personal 5 GB of storage space.
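
Because Cloud Shell sessions are already authenticated as you, a few lines of Python are enough to poke at Cloud Storage data. This is just a sketch and assumes the google-cloud-storage client library is available (it can be installed into your home directory with pip install --user google-cloud-storage):

from google.cloud import storage

# Cloud Shell already carries your credentials, so the default client just works.
client = storage.Client()

for bucket in client.list_buckets():
    print(bucket.name)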

Cloud Shell provides a resizable window inside of the Cloud Console

To open Cloud Shell from the Cloud Console, simply click on the Cloud Shell icon in the top-right corner.

The Cloud Shell documentation has a variety of tutorials to help you get started. In addition, here are a few pro-tips:

To switch to a light theme, look under the gear icon
Cloud Shell supports the terminal multiplexer tmux; toggle it on or off from Cloud Shell depending on how you want sessions to behave across Cloud Console tabs.
To pop out the entire console window, click the pop out icon

As always, send us feedback using the “Send Feedback” link in the top right of the Cloud Console or within Cloud Shell under the gear icon. We’re excited to see how you use Cloud Shell and how we can make it even more useful.
Source: Google Cloud Platform

Running the same, everywhere part 2: getting started

Posted by Miles Ward, Global Head of Solutions

In part one of this post, we looked at how to avoid lock-in with your cloud provider by selecting open-source software (OSS) that can run on a variety of clouds. Sounds good in theory, but I can hear engineers and operators out there saying, “OK, really, how do I do it?”

Moving from closed to open isn’t just about knowing the names of the various OSS piece-parts and then POOF! — you’re magically relieved of having to make tech choices for the next hundred years. It’s a process, where you choose more and more open systems and gradually gain more power.

Let’s assume that you’re not starting from scratch (if you are, please! Use the open tools we’ve described here as opposed to more proprietary options). If you’ve already built an application that consumes some proprietary components, the first step is to prioritize migration from those components to open alternatives. Of course, this starts with knowing about those alternatives (check!) and then following a given product’s documentation for initialization, migration and operations.

But before we dive into specific OSS components, let’s put forth a few high-level principles.

Applications that are uniformly distributed across distinct cloud providers can be complex to manage. It’s often substantially simpler and more robust to load-balance entirely separate application systems than it is to have one globally conjoined infrastructure. This is particularly true for any services that store state, such as storage and database tools; in many cases, setting up replication across providers for HA is the most direct path to value.
The more you can minimize the manual work required to relocate services from one system to another, the better. This of course can require very nuanced orchestration and automation, and its own sets of skills. Your level of automated distribution may vary between different layers of your stack; most companies today can get to “apps = automated” and “data = instrumented” procedures relatively easily, but “infra = automated” might take more effort.
No matter how well you think migrating these systems will work, you won’t know for sure until you try. Further, migration flexibility atrophies without regular exercise. Consider performing regular test migrations and failovers to prove that you’ve retained flexibility.
Lock-in at your “edges” is easier to route around or resolve than lock-in at your “core.” Consider open versions of services like queues, workflow automation, authentication, identity and key management as particularly critical.
Consider the difference in kind between “operational lock-in” versus “developer lock-in.” The former is painful, but the latter can be lethal. Consider especially carefully the software environments you leverage to ensure that you avoid repetitive work.

Getting started
With that said, let’s get down to specifics and look at the various OSS services that we recommend when building this kind of multi-cloud environment.

If you choose Kubernetes for container orchestration, start off with a Hello World example, take an online training course, follow setup guides for Google Container Engine and Elastic Compute Cloud (EC2), familiarize yourself with the UI, or take the docker image of an existing application and launch it. Perhaps you have applications that require communications between all hosts? If you’re distributed across two cloud providers, that means you’re distributed across two networks, and you’ll likely want to set up VPN between the two environments to keep traffic moving. If it’s a large number of hosts or a high-bandwidth interaction, you can use Google Cloud Interconnect.
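
If you'd rather script against the cluster than drive it interactively with kubectl, the official Kubernetes Python client is one option (our own aside, not part of the guides above). A minimal sketch, assuming you already have a working kubeconfig for the cluster:

# pip install kubernetes
from kubernetes import client, config

config.load_kube_config()   # reads ~/.kube/config, e.g. the context for your GKE or EC2 cluster
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)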

If you’re using Google App Engine and AppScale for platform-as-a-service, the process is very similar. To run on the Google side, follow App Engine documentation, and for AppScale in another environment, follow their getting started guide. If you need cross-system networking, you can use VPN or for scaled systems — Cloud Interconnect.

For shops running HBase and Google Cloud Bigtable as their big data store, follow the Cloud Bigtable cluster creation guide for the Cloud Platform side, and the HBase quickstart (as well as longer-form not-so-quick-start guides). There’s some complexity in importing data from other sources into an HBase-compatible system; there’s a manual for that here.

Vitess, which brings NoSQL-style scalability to MySQL, is an interesting example, in that the easiest way to get started with it is to run it inside of the Kubernetes system we built above. Instructions for that are here; the output is a scalable MySQL system.

For Apache Beam/Cloud Dataflow batch and stream data processing, take a look at the GCP documentation to learn about the service, and then follow it up with some practical exercises in the How-to guides and Quickstarts. You can also learn more about the open source Apache Beam project on the project website.
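
To get a feel for the programming model before pointing a pipeline at Cloud Dataflow, here is a minimal word-count-style sketch with the Apache Beam Python SDK; it runs locally on the default DirectRunner, and the sample strings and output path are placeholders:

# pip install apache-beam
import apache_beam as beam

with beam.Pipeline() as p:
    (p
     | "Create" >> beam.Create(["open tools", "open clouds", "open tools"])
     | "Split" >> beam.FlatMap(lambda line: line.split())
     | "Pair" >> beam.Map(lambda word: (word, 1))
     | "Count" >> beam.CombinePerKey(sum)
     | "Write" >> beam.io.WriteToText("/tmp/wordcount"))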

For TensorFlow, things couldn’t be simpler. This OSS machine learning library is available via Pip and Docker, and plays nicely with Virtualenv and Anaconda. Once you’ve installed it, you can get started with Hello TensorFlow, or other tutorials such as MNIST For ML Beginners, or this one about state of the art translation with Recurrent Neural Nets.
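
In the spirit of Hello TensorFlow, the smallest possible example with the TensorFlow 1.x API of the time looks like this (a sketch, not a substitute for the tutorials above):

import tensorflow as tf

hello = tf.constant("Hello, TensorFlow!")
a = tf.constant(2)
b = tf.constant(3)

with tf.Session() as sess:    # TensorFlow 1.x graph/session API
    print(sess.run(hello))
    print(sess.run(a + b))    # prints 5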

The Minio object storage server is written in Golang, and as such, is portable across a wide variety of target platforms, including Linux, Windows, OS X and FreeBSD. To get started, head over to their Quickstart Guide.
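
Because Minio exposes an S3-compatible API, you can drive it from most languages. As one illustrative sketch, here is the Minio Python SDK in action; the server address, credentials, bucket and file names are all placeholders:

# pip install minio
from minio import Minio

client = Minio(
    "minio.example.com:9000",      # placeholder server address
    access_key="YOUR-ACCESS-KEY",
    secret_key="YOUR-SECRET-KEY",
    secure=False,                  # set to True when the server uses TLS
)

if not client.bucket_exists("backups"):
    client.make_bucket("backups")

client.fput_object("backups", "notes.txt", "/tmp/notes.txt")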

Spinnaker is an open-source continuous delivery engine that allows you to build complex pipelines that take your code from a source repository to production through a series of stages —  for example, waiting for code to go through unit testing and integration phases in parallel before pushing it to staging and production. In order to get started with continuous deployment with Spinnaker, have a look at their deployment guide.

But launching and configuring these open systems is really just the beginning; you’ll also need to think about operations, maintenance and security management, whether they run in a single- or multi-cloud configuration. Multi-cloud systems are inherently more complex, and the operational workflow will take more time.

Still, compared to doing this at any previous point in history, these open-source tools radically improve businesses’ capacity to operate free of lock-in. We hear from customers every day that OSS tools are an easy choice, particularly for scaled, production workloads. Our goal is to partner with customers, consultancies and the OSS community of developers to extend this framework and ensure this approach succeeds. Let us know if we can help you!

Source: Google Cloud Platform

Building immutable entities into Google Cloud Datastore

Posted by Aleem Mawani, Co-Founder, Streak.com

Editor’s note: Today, we hear from Aleem Mawani, co-founder of Streak.com, a Google Cloud Platform customer whose customer relationship management (CRM) for Google Apps is built entirely on top of Google products: Gmail, Google App Engine and Google Cloud Datastore. Read on to learn how Streak added advanced functionality to the Cloud Datastore storage system.

Streak is a full blown CRM built directly into Gmail. We’re built on Google Cloud Platform (most heavily on Google App Engine) and we store terabytes of user data in Google Cloud Datastore. It’s our primary database, and we’ve been happy with its scalability, consistent performance and zero-ops management. However, we did want more functionality in a few areas. Instead of overwriting database entities with their new content whenever a user updated their data, we wanted to store every version of those entities and make them easy to access. Specifically, we wanted a way to make all of our data immutable.

In this post, I’ll go over why you might want to use immutable entities, and our approach for implementing them on top of Cloud Datastore.

There are a few reasons why we thought immutable entities were important.

We wanted an easy way to implement a newsfeed-style UI. Typical newsfeeds show how an entity has changed over time in a graphical format to users. Traditionally we stored separate side entities to record the deltas between different versions of a single entity. Then we’d query for those side entities to render a newsfeed. Designing these side entities was error prone and not easily maintainable. For example, if you added a new property to your entity, you would need to remember to also add that to the side entities. And if you forgot to add certain data to the side entities, there was no way to reconstruct that later down the line when you did need it — the data was gone forever.

The “Contact” entity stores data about users’ contacts. Because it’s implemented as an immutable entity, it’s easy to generate a historical record of how that contact has changed over time.
Having immutable entities allows us to recover from user errors very easily. Users can rollback their data to earlier versions or even recover data they may have accidentally deleted (see how we implemented deletion below)1.
Potentially easier debugging. It’s often useful to see how an entity changed over time and got into its current state. We can also run historical queries on the number of changes to an entity – useful for user behaviour analysis or performance optimization.

Some context
Before we go into our implementation of immutable entities on the Cloud Datastore, we need to understand some of the basics of how the datastore operates. If you’re already familiar with the Cloud Datastore, feel free to skip this section.

You can think of the Cloud Datastore as a key-value store. A value, called an entity in the datastore, is identified by its key, and the entity itself is just a bag of properties. There’s no enforcement of a schema on all entities in a table so the properties of two entities need not be the same.

The database also supports basic queries on a single table — there are no joins or aggregation, just simple table scans for which an index can be built. While this may seem limiting, it enables fast and consistent query performance because you will typically denormalize your data.

The most important property of Cloud Datastore for our implementation of immutable entities is “entity groups.” Entity groups are groups of entities for which you get two guarantees:
Queries that are restricted to a single entity group get consistent results. This means that a write immediately followed by a query will have results that are guaranteed to reflect the changes made by the write. Conversely, if your query is not limited to a single entity group you may not get consistent results (stale data).
Multi-entity transactions can only be applied within a single entity group (this was recently improved — Cloud Datastore now supports cross entity group transactions but limits the number of entity groups involved to 25).
Both of these facts will be important in our implementation. For more details on how the Cloud Datastore itself works, see the documentation.

How we implemented immutable entities
We needed a way to store every change we made to a single entity while supporting common operations for entities: get, delete, update, create and query. The overall strategy we took was to utilize two levels of abstraction — a “datastore entity” and a “logical entity.” We used individual “datastore entities” to represent individual versions of a “logical entity.” Users of our API would only interact with logical entities and each logical entity would have a key to identify it and support the common get, create, update, delete and query operations. These logical entities would be backed by actual datastore entities comprising the different versions of that logical entity. The most recent, or tip, version of the datastore entities represented the current value of the logical entity. First let’s start with what the data model looks like. Here’s how we designed our entity:

The way this works is that we always store a new datastore entity every time the user would like to make a change to the entity. The most recent datastore entity has the isTip value set to true and the rest don’t. We’ll use this field later to query for a particular logical entity by getting the tip datastore entity. This query is fast in the datastore because all queries are required to have indexes. We also store the timestamp for when each datastore entity was created.

The versionId field is a globally unique identifier for each datastore entity. These IDs are automatically assigned by Cloud Datastore when we store the entity.

The consistentId identifies a logical entity — it’s the ID we can give to users of this API. All of the datastore entities in a logical entity have the same consistent ID. We picked the consistent ID of the logical entity to be equal to the ID of the first datastore entity in the chain. This is somewhat arbitrary, and we could have picked any unique identifier, but since the low level Cloud Datastore API gives us a unique ID for every datastore entity, we decided to use the first one as our consistent ID.

The other interesting part of this data model is the firstEntityInChain field. What’s not shown in the diagram is that every datastore entity has its parent (the parent determines the entity group) set to the first datastore entity in the chain. It’s important that all the datastore entities in the chain (including the first one) have the same parent and are thus in the same entity group so that we can perform consistent queries. You’ll see why these are needed below.

Here’s the same immutable entity defined in code. We use the awesome Objectify library with the Cloud Datastore and these snippets do make use of it.

public class ImmutableDatastoreEntity {
    @Id
    Long versionId;

    @Parent
    Key<T> firstEntityInChain;

    protected Long consistentId;
    protected boolean isTip;

    Key<User> savedByUser;
}
So how do we perform common operations on logical entities given that they are backed by datastore entities?

Performing creates
When creating a logical entity, we just need to create a single new datastore entity and use the Cloud Datastore’s ID allocation to set the versionId field and the consistentId field to the same value. We also set the parent key (firstEntityInChain) to point to itself. We also have to set isTip to true so we can query for this entity later. Finally we set the timestamp and the creator of the datastore entity and persist the entity to Cloud Datastore.

ImmutableDatastoreEntity entity = new ImmutableDatastoreEntity();
entity.setVersionId(DAO.allocateId(this.getClass()));
entity.setConsistentId(entity.getVersionId());
entity.setFirstEntityInChain((Key<T>) Key.create(entity.getClass(), entity.versionId));
entity.setTip(true);
Performing updates
To update a logical entity with new data, we first need to fetch the most recent datastore entity in the chain (we describe how in the “get” section below). We then create a new datastore entity and set the consistentId and firstEntityInChain to that of the previous datastore entity in the chain. We set isTip to true on the new datastore entity and set it to false on the old datastore entity (note this is the only instance in which we modify an existing entity so we aren’t 100% immutable).

We finally fill in the timestamp and user keys fields, and we’re ready to store the new datastore entity. Two important points on this: for the new datastore entity, we can let the datastore automatically allocate the ID when storing the entity (because we don’t need to use it anywhere else). Second, it’s incredibly important that we fetch the existing datastore entity and store both the new and old datastore entity in the same transaction. Without this, our data could become internally inconsistent.

// start transaction
ImmutableDatastoreEntity oldVersion = getImmutableEntity(immutableId);

oldVersion.setTip(false);
ImmutableDatastoreEntity newVersion = oldVersion.clone();

// make the user edits needed

newVersion.setVersionId(null);
newVersion.setConsistentId(oldVersion.getConsistentId());
newVersion.setFirstEntityInChain(oldVersion.getFirstEntityInChain());

// .clone() also performs the last two lines; they are repeated here just to be explicit

newVersion.setTip(true);
ofy().save(oldVersion, newVersion).now();

// end transaction
Performing gets
Performing a get actually requires us to do a query operation to the datastore because we need to find the datastore entity that has a certain consistentId AND has isTip set to true. This entity will represent the logical entity. Because we want the query to be consistent, we must perform an ancestor query (i.e., tell Cloud Datastore to limit the query to a certain entity group). This only works because we ensured that all datastore entities for a particular logical entity are part of the same entity group.

This query should only ever return one result — the datastore entity that represents the logical entity.

Key ancestorKey = KeyFactory.createKey(ImmutableDatastoreEntity.class, consistentId);
ImmutableDatastoreEntity e = ofy().load()
    .kind(ImmutableDatastoreEntity.class)
    .filter("consistentId", consistentId)
    .filter("isTip", true)
    .ancestor(ancestorKey)   // this limits our query to just the one entity group
    .list()
    .first();

Performing deletes
In order to delete logical entities, all we need to do is set the isTip of the most recent datastore entity to false. By doing this we ensure that the “get” operation described above no longer returns a result, and similarly, queries such as those described below continue to operate.

// wrap block in transaction
ImmutableDatastoreEntity oldVersion = getImmutableEntity(immutableId);
oldVersion.setTip(false);
ofy().save(oldVersion).now();

Performing queries
We need to be able to perform queries across all logical entities. However, when querying every datastore entity, we need to modify our queries so that they only consider the tip datastore entity of each logical entity (unless you explicitly want to find old versions of the data). To do this, we need to add an extra filter to our queries to just consider tip entities. One important thing to note is that we cannot do consistent queries in this case because we cannot guarantee that all the results will be in the same entity group (in fact, we know for certain they are not if there are multiple results).

List<ImmutableDatastoreEntity> results = ofy().load()
    .kind(ImmutableDatastoreEntity.class)
    .filter("isTip", true)
    .filter(/** apply other filters here */)
    .list();

Performing newsfeed queries
One of our goals was to be able to show how a logical entity has changed over time, so we must be able to query for all datastore entities in a chain. Again, this is a fairly straightforward query — we can just query by the consistentId and order by the timestamp. This will give us all versions of the logical entity. We can diff each datastore entity against the previous datastore entity to generate the data needed for a newsfeed.

Key ancestorKey = KeyFactory.createKey(ImmutableDatastoreEntity.class, consistentId);
List<ImmutableDatastoreEntity> versions = ofy().load()
    .kind(ImmutableDatastoreEntity.class)
    .filter("consistentId", consistentId)
    .ancestor(ancestorKey)
    .list();

Downsides
Using the design described above, we were able to achieve our goal of having roughly immutable entities that are easy to debug and make it easy to build newsfeed-like features. However, there are some drawbacks to this method:
We need to do a query any time we need to get an entity. In order to get a specific logical entity, we actually need to perform a query as described above. On Cloud Datastore, this is a slower operation than a traditional “get” by key. Additionally, Objectify offers built-in caching, which also can’t be used when trying to get one of our immutable entities (because Objectify can’t cache queries). To address this, we’ll need to implement our own caching in memcache if performance becomes an issue.
There’s no method to do a batch get of entities. Because each query must be restricted to a single entity group for consistency, we can’t fetch the tip datastore entity for multiple logical entities with just one datastore operation. To address this, we perform multiple asynchronous queries and wait for all to finish. This isn’t ideal or clean, but it works fairly well in practice. Remember that on App Engine there’s a limit of 30 outstanding RPCs when making concurrent RPC calls, so this only takes you so far.
High implementation cost for the first entity. We abstracted most of the design described above so that future immutable entities would be cheap for us to implement, however, the first entity wasn’t trivial to implement. It took us some time to iron out all the kinks, so it’s definitely only worth doing this if you very much need immutability or if you’ll be spreading the implementation cost across many use cases.
Entities are never actually deleted. By design, we don’t delete immutable entities. However, from a user perspective, they may have the expectation that once they delete something in our app, we actually delete the data. This also might be the expectation in some regulated industries (e.g., healthcare). For our use case, it wasn’t necessary, but you may want to develop a system that periodically maps over your dataset in a batch task, finds fully deleted logical entities and deletes all of the datastore entities representing them.
Next steps
We’ve only been running with immutable entities in production for a little while, and it remains to be seen what problems we’ll face. And as we implement a few more of our datasets as immutable entities, it will become clear whether the implementation costs were worth the effort. Subscribe to our blog to get updates.

If this sort of data infrastructure floats your boat, definitely reach out to us as we have several openings on our backend team. Check out our job postings for more info.

Discuss on Hacker News

1. This is very similar to the idea of MVCC (https://en.wikipedia.org/wiki/Multiversion_concurrency_control), which is how many modern databases implement transactions and rollback.

Source: Google Cloud Platform

Automate deployments and traffic splitting with the App Engine Admin API

Posted by Karolina Netolicka, Product Manager

Google App Engine provides you with easy ways to manage your application from the Google Cloud Platform Console or the command line. However, there are situations when you need to manage your application programmatically. Perhaps you need to deploy to App Engine from your own custom tool chain, or you want to write your own A/B testing framework.

The App Engine Admin API lets you do all these things and more, so we’re happy to announce that the API is now generally available.

You can use the Admin API not only to deploy new versions and manage traffic for any service, but also to change various configuration settings of your application, such as instance class. You can also stop individual versions in order to scale your App Engine Flexible environment deployments to zero. Finally, the API allows you to deploy several App Engine services in parallel, speeding up your deployments.

You can use the Google APIs explorer to easily test-drive the API and get a feel for what it offers.

Usage example
Let’s return to the earlier scenario: imagine that you’re writing a script to deploy a new version of your application, test it with 50% of production traffic and then gradually shift the rest of the traffic to the new version. Let’s walk through the basic steps here; you’ll find the full instructions in our Getting started guide.

To deploy a version, you’ll generally follow these steps:

Stage your application resources to a Google Cloud Storage bucket
Convert your app.yaml file to a JSON manifest
Send an HTTP POST request to the Admin API to create the new version

For this example, we’ll deploy a version for which the source code has already been staged.
First, create a file called “helloworld.json” with the following contents:

{
  "deployment": {
    "files": {
      "main.py": {
        "sourceUrl": "https://storage.googleapis.com/admin-api-public-samples/hello_world/main.py"
      }
    }
  },
  "handlers": [
    {
      "script": {
        "scriptPath": "main.app"
      },
      "urlRegex": "/.*"
    }
  ],
  "runtime": "python27",
  "threadsafe": true,
  "id": "appengine-helloworld",
  "inboundServices": [
    "INBOUND_SERVICE_WARMUP"
  ]
}

Next, send an HTTP POST request to the Admin API to create the new version:

POST https://appengine.googleapis.com/v1/apps/my-application/services/default/versions helloworld.json

(To actually send this request, you’ll need to set up authentication tokens; the Getting started guide contains the full steps.)

The response will contain the ID of a long-running operation that you can then poll to identify when the deployment has completed.
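
For example, a small polling loop against that operation resource might look like the Python sketch below. It assumes you already have an OAuth 2.0 access token (see the Getting started guide) and that the operation name was copied from the deployment response; both values here are placeholders.

import time
import requests

ACCESS_TOKEN = "ya29...."                                   # placeholder OAuth 2.0 access token
operation = "apps/my-application/operations/OPERATION_ID"   # placeholder, taken from the POST response

url = "https://appengine.googleapis.com/v1/" + operation
headers = {"Authorization": "Bearer " + ACCESS_TOKEN}

while True:
    op = requests.get(url, headers=headers).json()
    if op.get("done"):
        print("Deployment finished:", op.get("error", "success"))
        break
    time.sleep(5)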

To split traffic between versions, you need at least one other version. You can create another version by changing the version ID and redeploying the same app. For example, using the same steps, deploy an “appengine-goodbyeworld” version using this JSON manifest:

{
  "deployment": {
    "files": {
      "main.py": {
        "sourceUrl": "https://storage.googleapis.com/admin-api-public-samples/goodbye_world/main.py"
      }
    }
  },
  "handlers": [
    {
      "script": {
        "scriptPath": "main.app"
      },
      "urlRegex": "/.*"
    }
  ],
  "runtime": "python27",
  "threadsafe": true,
  "id": "appengine-goodbyeworld",
  "inboundServices": [
    "INBOUND_SERVICE_WARMUP"
  ]
}

Once the version is successfully deployed, route 50% of traffic to it with the following request:

PATCH https://appengine.googleapis.com/v1/apps/my-application/services/default/?updateMask=split

{ "split": { "shardBy": "IP", "allocations": { "appengine-helloworld": 0.5, "appengine-goodbyeworld": 0.5 } } }

Now you can visit your application at http://<your-project-id>.appspot.com. As you reload the page, you’ll see the contents change depending on which version your request got routed to.

Another way to move traffic between versions is to use App Engine’s Traffic Migration feature to gradually shift all traffic as quickly as possible while giving the new instances sufficient time to warm up:

PATCH https://appengine.googleapis.com/v1/apps/my-application/services/default/?updateMask=split&migrateTraffic=true

{ "split": { "shardBy": "IP", "allocations": { "appengine-goodbyeworld": 1 } } }

More information
The App Engine Admin API documentation contains full instructions on how to use the API, including how to authenticate to the API, deploy versions and set traffic splits.

We hope the Admin API simplifies your day-to-day workflows by letting you manage your App Engine applications from the tools you already use.

Source: Google Cloud Platform

Google and Facebook share proposed new Open Rack Standard with 48-volt power architecture

Posted by Debosmita Das, Technical Program Manager and Mike Lau, Technical Lead Manager

Since joining OCP earlier this year, Google has been actively collaborating with Facebook around the new Open Rack Standard. Together we’ve been working with the Open Compute Project through the OCP Incubation Committee, and today we’re pleased to share our Open Rack v2.0 Standard. The proposed v2.0 standard will specify a 48V power architecture with a modular, shallow-depth form factor that enables high-density deployment of OCP racks into data centers with limited space.

Google developed a 48V ecosystem with payloads utilizing 48V to Point-of-Load technology and has extensively deployed these high-efficiency, high-availability systems since 2010. We have seen significant reduction in losses and increased efficiency compared to 12V solutions. The improved SPUE with 48V has saved Google millions of dollars and kilowatt hours.

Our contributions to the Open Rack Standard are based on our experiences advancing the 48V architecture both with our internal teams as well as industry partners, incorporating the design expertise we’ve gained over the years.

In addition to the mechanical and electrical specifications, the proposed new Open Rack Standard V2.0 builds on the previous 12V design. It takes a holistic approach including details for the design of 48V power shelves, high-efficiency rectifiers, rack management controllers and rack-level battery backup units.

We’ve shared these designs with the OCP community for feedback, and will submit them to the OCP Foundation later this year for review. We’re looking forward to presenting the proposed standard to the OCP Engineering Workshop, August 10 at the University of New Hampshire.

If accepted, these standards will be Google’s first contributions to the OCP community, with the goal of bridging the transition from 12V to 48V architecture with ready-to-use deployment solutions for 48V payloads. We look forward to continued collaboration with adopters and contributors as we continue to develop new technologies and opportunities.
Source: Google Cloud Platform

Microsoft obtains new cloud-centric ISO 27017 certification

We are happy to announce Microsoft Azure obtained the ISO/IEC 27017:2015 certification, an international standard that aligns with and complements the ISO/IEC 27002:2013 with an emphasis on cloud-specific threats and risks.

The standard provides guidance on 37 controls in ISO/IEC 27002 and features seven new controls not addressed in ISO/IEC 27002. Both cloud service providers and cloud service customers can leverage this guidance to effectively design and implement cloud computing information security controls. Customers can download the ISO/IEC 27017 certificate, which demonstrates Microsoft’s continuous commitment to providing a secure and compliant cloud environment for our customers.

Microsoft Azure helps customers meet their compliance requirements across a broad range of regulated industries and markets including financial services, healthcare, life sciences, media and entertainment, worldwide public sector, and US federal, state and local government.

For more information on Microsoft Azure’s unmatched compliance portfolio, visit the Trust Center.
Source: Azure

Azure SQL Database new premium performance level P15 generally available

We are excited to announce the general availability of our largest performance level, P15, which offers 4000 DTUs, more than two times the power of our current P11 offering. As customers build more powerful applications in Azure using Azure SQL Database, we see strong demand for high performance. P15 allows for extremely fast transactional processing and real-time analytics simultaneously and provides up to 1 TB of storage.

The full spectrum of our premium offerings now includes:

With the introduction of P15, we support a broad range of high performance workloads, making SQL Database ideal for moving your on-premises apps to the cloud. You can have peace of mind knowing your database will scale as your application grows. With SQL Database, you can scale on-the-fly to or from P15 and instantly adapt to your changing workload demands when needed – all without application downtime.

Click the following links to:

Learn more about Azure SQL Database
Reference P15 pricing
Create your first P15 SQL Database

Source: Azure

Microsoft: a Gartner cloud computing leader across IaaS, PaaS, and SaaS

CIOs no longer ask whether they should use cloud, but rather how. According to IDC, seventy percent of CIOs will embrace a cloud-first strategy in 2016. By partnering closely with customers around the world, we see the natural path to enterprise cloud adoption — starting with software services like email and collaboration, then moving to infrastructure for storage, compute and networking and finally embracing platform services to transform business agility and customer engagements. In this journey to adopt the cloud, customers are looking for a vendor who understands and leads in meeting the broad spectrum of their cloud needs.

Today, Gartner has named Microsoft a leader in its Magic Quadrant for Cloud Infrastructure as a Service for the third year in a row based on completeness of our vision and ability to execute. We are honored by this continued recognition as we are relentless about our commitment and rapid pace of innovation for infrastructure services. With the G series, Azure led with the largest VMs in the cloud and we continue to deliver market leading performance with our recent announcement supporting SAP HANA workloads up to 32 TB. And while Azure is a world class cloud platform for Windows, it’s also recognized for industry-leading support for Linux and other open source technologies. Today, nearly one in three VMs deployed on Azure are Linux. Strong momentum for Linux and open source is driven by customers using Azure for business applications and modern application architectures, including containers and big data solutions. With over sixty percent of the 3,800 solutions in Azure Marketplace built on Linux, including popular open source images by Ubuntu, CoreOS, Bitnami, Oracle, DataStax, Red Hat and others, it’s exciting that many open source vendors considered Microsoft one of the best cloud partners.

While we are proud of our continued leadership in cloud infrastructure, we are committed to delivering the breadth and depth of cloud solutions to support our customers’ natural path to cloud adoption. Microsoft is the only vendor recognized as a leader across Gartner’s Magic Quadrants for IaaS, PaaS and SaaS solutions for enterprise cloud workloads. We are in a unique position with our extensive portfolio of cloud offerings designed for the needs of enterprises, including Software as a Service (SaaS) offerings like Office 365, CRM Online and Power BI and Azure Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). And Microsoft’s cloud vision is a unified story that we’re executing on with the same datacenter regions, compliance commitments, operational model, billing, support and more. The ability to deploy and use applications close to data with consistent identity and a shared ecosystem, means greater efficiency, less complexity, and cost savings.

Many of our customers embrace Identity as a first step in moving to the cloud. Office 365 and Azure share the same identity system with Azure Active Directory therefore providing a simple, friction free experience for our customers. And with Office 365 commercial customers surpassing 70 million monthly active users, Azure adoption is quickly following suit. Once in Azure, customers tend to start with IaaS and then quickly extend to using both IaaS and PaaS models to optimize productivity and embrace new opportunities for business differentiation. Today fifty-five percent of Azure IaaS customers are also deploying PaaS.

The following table summarizes vendors in the leader quadrant across Gartner MQs for IaaS, PaaS and SaaS solutions for key enterprise cloud workloads.

The true power of Azure is enabling our customers and partners on their cloud journey to realize their unique business goals. Customers and partners like Fruit of the Loom and Boomerang demonstrate this common need and cloud adoption path from Software as a Service (SaaS) to Infrastructure as a Service (IaaS) to Platform as a Service (PaaS).

Fruit of the Loom: Office 365 was their “runway” to Azure. Success with Office 365 deployment has led to use of Azure infrastructure and its platform services as they moved their consumer-facing website fruit.com to Azure. To gain insight into how they should market and package their products, Fruit of the Loom is also leveraging platform services such as Azure Machine Learning.
Boomerang: An Office 365 ISV takes advantage of Azure to create productivity solutions within Outlook. A key feature for Boomerang is its ability to generate real-time calendar images that are shareable with people outside of the user’s organization. Boomerang relies on Azure’s enterprise-proven infrastructure to support this computationally demanding workload. Their experience with Office 365 led them to look more closely at Azure, and they have started to migrate services from AWS to Azure to leverage Azure’s platform services and Machine Learning capabilities.

We look forward to delivering more on this vision across our portfolio of cloud offerings to our customers and partners. If you’d like to read the full report, “Gartner: Magic Quadrant for Infrastructure as a Service,” you can request it here.

 

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Source: Azure

GA for Azure AD authentication in SQL Database and SQL Data Warehouse

Starting immediately, Azure Active Directory (Azure AD) authentication is generally available in Azure SQL Database and Azure SQL Data Warehouse. Azure AD provides an alternative to SQL Authentication enabling centralized identity and group management. It enables a single sign-on experience using SQL Database and SQL Data Warehouse for federated domains. Azure AD can be used to authenticate against a growing number of Azure and other Microsoft services and helps customers prevent the proliferation of users and passwords. Other advantages include:

Greatly simplified permission management allowing customers to control database permissions via Azure AD groups without having to access any of the underlying databases.

Support for:

Azure AD managed and federated domains with user name/password. Password rotation is centralized and triggered automatically from Azure AD.

Integrated Windows Authentication for Azure AD federated domains and clients on domain-joined machines. This enables single sign-on across participating services. Integrated Windows authentication is also supported for remote connections using VPNs.

JSON Web Token (JWT) which allows you to perform Azure AD authentication for middle-tier applications against SQL Database (e.g., service accounts).

To use Azure AD Authentication, customers must configure an Azure AD administrator who can provision SQL contained users that are mapped to Azure AD identities. Creating an Azure AD administrator can be done via PowerShell, REST API, or the Azure portal.

The screenshot below shows an Azure AD administrator, DBA, configured in the Azure portal. DBA represents an Azure AD group with rachelb@contososales.onmicrosoft.com as its member and has full server administrative access.

The next screen below shows an Azure AD SQL administrator (rachelb@contososales.onmicrosoft.com) connected to a SQL Database called ContosoSales. The sample T-SQL code in the query window on the right provisions a contained SQL user named SalesReps mapped to an Azure AD group also called SalesReps. As a result, all members of the AD group SalesReps (e.g., user joep@contososales.onmicrosoft.com shown in the connection window) will be able to connect to ContosoSales using their AD credentials (user name and password). Notice the new authentication option in SSMS called “Active Directory Password Authentication”.
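
Client applications can use the same Azure AD credentials outside of SSMS. As an illustration only, the Python sketch below connects with pyodbc using Azure AD user name/password authentication; it assumes an ODBC driver version that supports the Authentication keyword, and the server name and password are placeholders.

import pyodbc

conn_str = (
    "Driver={ODBC Driver 13 for SQL Server};"
    "Server=tcp:contososerver.database.windows.net,1433;"   # placeholder logical server name
    "Database=ContosoSales;"
    "Uid=joep@contososales.onmicrosoft.com;"
    "Pwd=<password>;"                                       # placeholder
    "Encrypt=yes;"
    "Authentication=ActiveDirectoryPassword;"               # Azure AD user name/password auth
)

conn = pyodbc.connect(conn_str)
print(conn.execute("SELECT SUSER_SNAME();").fetchone()[0])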

Next steps:

Connecting to SQL Database or SQL Data Warehouse by Using Azure Active Directory Authentication

Azure AD Authentication GitHub Demo – Learn more about Azure AD authentication methods using the demo code samples.  

Source: Azure