This Shopping Experience Is For People Who Don't Like People

Women’s fashion brand Reformation has a futuristic new store in San Francisco that uses touchscreens instead of salespeople. BuzzFeed News took a look around.

The women’s fashion brand Reformation has a futuristic new store located in San Francisco’s Mission district.

Allyson Laquian / BuzzFeed News / Google Maps

If you don’t like interacting with salespeople, this is the store for you.


Once you walk in, you can browse clothing racks IRL in the showroom.




Quelle: BuzzFeed

Announcing Docker Enterprise Edition

Today we are announcing Enterprise Edition (EE), a new version of the Docker platform optimized for business-critical deployments. Docker EE is supported by Docker Inc., is available on certified operating systems and cloud providers and runs certified Containers and Plugins from Docker Store. Docker EE is available in three tiers: Basic comes with the Docker platform, support and certification, and Standard and Advanced tiers add advanced container management (Docker Datacenter) and Docker Security Scanning.

For consistency, we are also renaming the free Docker products to Docker Community Edition (CE) and adopting a new lifecycle and time-based versioning scheme for both Docker EE and CE. Today’s Docker CE and EE 17.03 release is the first to use the new scheme.
Docker CE and EE are released quarterly, and CE also has a monthly “Edge” option. Each Docker EE release is supported and maintained for one year and receives security and critical bugfixes during that period. We are also improving Docker CE maintainability by maintaining each quarterly CE release for 4 months. That gives Docker CE users a 1-month window to update from one version to the next.
Both Docker CE and EE are available on a wide range of popular operating systems and cloud infrastructure. This gives developers, devops teams and enterprises the freedom to run Docker and Docker apps on their favorite infrastructure without risk of lock-in.
To download free Docker CE and to try or buy Docker EE, head over to Docker Store. Also check out the companion blog post on the Docker Certified Program. Or read on for details on Docker CE and EE and the new versioning and lifecycle improvements.
Docker Enterprise Edition
Docker Enterprise Edition (EE) is an integrated, supported and certified container platform for CentOS, Red Hat Enterprise Linux (RHEL), Ubuntu, SUSE Linux Enterprise Server (SLES), Oracle Linux, and Windows Server 2016, as well as for the cloud providers AWS and Azure. In addition to certifying Docker EE on the underlying infrastructure, we are introducing the Docker Certification Program, which includes technology from our ecosystem partners: ISV containers that run on top of Docker, and networking and storage plugins that extend the Docker platform.
Docker and Docker partners provide cooperative support for Certified Containers and Plugins so customers can confidently use these products in production. Check out the companion blog post for more details and browse and install certified content from Docker Store. Sign up here if you’re interested in partnering to certify software for the Docker platform.
Docker EE is available in three tiers: Basic, Standard and Advanced.

Basic: The Docker platform for certified infrastructure, with support from Docker Inc. and certified Containers and Plugins from Docker Store
Standard: Adds advanced image and container management, LDAP/AD user integration, and role-based access control (Docker Datacenter)
Advanced: Adds Docker Security Scanning and continuous vulnerability monitoring

Docker EE is available as a free trial and for purchase from Docker Sales, online via Docker Store, and is supported by Alibaba, Canonical, HPE, IBM, Microsoft and by a network of regional partners.
Docker Community Edition and Lifecycle Improvements
Docker Community Edition (CE) is the new name for the free Docker products. Docker CE runs on Mac and Windows 10, on AWS and Azure, and on CentOS, Debian, Fedora, and Ubuntu and is available from Docker Store. Docker CE includes the full Docker platform and is great for developers and DIY ops teams starting to build container apps.
The launch of Docker CE and EE brings big enhancements to the lifecycle, maintainability and upgradability of Docker. Starting with today’s release, version 17.03, Docker is moving to time-based releases and a YY.MM versioning scheme, similar to the scheme used by Canonical for Ubuntu.
The Docker CE experience can be enhanced with free and paid add-ons from Docker Cloud, a set of cloud-based managed services that include automated builds, continuous integration, public and private Docker image repos, and security scanning.
Docker CE comes in two variants:

Edge is for users who want a monthly drop of the latest and greatest features
Stable is released quarterly and is for users who want an easier-to-maintain release pace

Edge releases only get security and bug-fixes during the month they are current. Quarterly stable releases receive patches for critical bug fixes and security issues for 4 months after initial release. This gives users of the quarterly releases a 1-month upgrade window between each release where it’s possible to stay on an old version while still getting fixes. This is an improvement over the previous lifecycle, which dropped maintenance for a release as soon as a new one became available.
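To make the arithmetic concrete, here is a small sketch of the lifecycle described above. It assumes only what the post states (YY.MM versions, quarterly stable releases, 4 months of maintenance); the helper names are ours, not Docker's.

```python
from datetime import date

# Illustrative only: derive the end-of-maintenance month for a YY.MM
# stable release, assuming the 4-month maintenance window above.

def maintenance_end(version: str, months: int = 4) -> date:
    """First month in which a YY.MM stable release is unmaintained."""
    yy, mm = (int(part) for part in version.split("."))
    total = yy * 12 + (mm - 1) + months
    return date(2000 + total // 12, total % 12 + 1, 1)

def upgrade_window(cadence_months: int = 3, maintained_months: int = 4) -> int:
    """Overlap between a stable release and its successor, in months."""
    return maintained_months - cadence_months

print(maintenance_end("17.03"))  # 2017-07-01: 17.03 gets fixes through June
print(upgrade_window())          # 1 month to move from 17.03 to 17.06
```

Under these assumptions, 17.03 keeps receiving patches for a month after 17.06 ships, which is exactly the upgrade window the post describes.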
Docker EE is released quarterly and each release is supported and maintained for a full year. Security patches and bugfixes are backported to all supported versions. This extended support window, together with certification and support, gives Docker EE subscribers the confidence they need to run business critical apps on Docker.

The Docker API version continues to be independent of the Docker platform version, and the API version does not change from Docker 1.13.1 to Docker 17.03. Even with the faster release pace, Docker will continue to maintain careful API backwards compatibility and deprecate APIs and features only slowly and conservatively. Docker 1.13 also introduced improved interoperability between clients and servers using different API versions, including dynamic feature negotiation.
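As a toy illustration of that negotiation (this is not Docker's actual implementation, and the version strings are placeholders), a client and daemon can simply settle on the highest API version both sides speak:

```python
# Hypothetical sketch of client/server API version negotiation:
# each side advertises its maximum supported API version, and the
# conversation proceeds at the lower of the two.

def negotiate(client_max: str, server_max: str) -> str:
    """Return the highest API version both sides support."""
    def key(version: str):
        major, minor = version.split(".")
        return (int(major), int(minor))
    return client_max if key(client_max) <= key(server_max) else server_max

print(negotiate("1.26", "1.26"))  # 1.26 (same version, nothing to do)
print(negotiate("1.24", "1.26"))  # 1.24 (newer daemon serves older client)
```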
In addition to clarifying and improving the Docker release life-cycle for users, the new deterministic release train also benefits the Docker project. Maintainers and partners who want to ship new features in Docker are now guaranteed that new features will be in the hands of Edge users within a month of being merged.

Get Started Today
Docker CE and EE are an evolution of the Docker Platform designed to meet the needs of developers, ops and enterprise IT teams. No matter the operating system or cloud infrastructure, Docker CE and EE let you install, upgrade, and maintain Docker with the support and assurances required for your particular workload.
Here are additional resources:

Register for the Webinar: Docker EE
Download Docker CE from Docker Store
Try Docker EE for free and view pricing plans
Learn More about Docker Certified program
Read the docs 

FAQ
Is this a breaking change to Docker?
No. Docker carefully maintains backwards API compatibility, and only removes features after deprecating them for a period of 3 stable releases. Docker 17.03 uses the same API version as Docker 1.13.1.

What do I need to do to upgrade?
Docker CE for Mac and Windows users will get an automatic upgrade notification. Docker for AWS and Azure users can refer to the release notes for upgrade instructions. Legacy docker-engine package users can upgrade using their distro package manager or upgrade to the new docker-ce package.

Why is Docker adopting a new versioning scheme?
To improve the predictability and cadence of Docker releases, we’re adopting a monthly and quarterly release pattern. This will benefit the project overall: instead of waiting an indeterminate period of time after a PR is merged for a feature to be released, contributors will see improvements in the hands of users within a month.
A time-based version is a good way to underscore the change, and to signify the time-based release cadence.

I’m a Docker DDC or CS Engine customer. Do I have to upgrade to Docker EE to continue to get support?
No. Docker will continue to support customers with valid subscriptions whether the subscription covers Docker EE or Commercially Supported Docker. Customers can choose to stay with their current deployed version or upgrade to the latest Docker EE 17.03. For more details, see the Scope of Coverage and Maintenance Lifecycle at https://success.docker.com/Policies/Scope_of_Support
The post Announcing Docker Enterprise Edition appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

How to avoid getting clobbered when your cloud host goes down

The post How to avoid getting clobbered when your cloud host goes down appeared first on Mirantis | Pure Play Open Cloud.
Yesterday, while working on an upcoming tutorial, I was suddenly reminded how interconnected the web really is. Everything was humming along nicely, until I tried to push changes to a very large repository. That’s when everything came to a screeching halt.
“No problem,” I thought. “Everybody has glitches once in a while.” So I decided I’d work on a different piece of content, and pulled up another browser window for the project management system we use to get the URL. The servers, I was told, were “receiving some TLC.”
OK, what about that mailing list task I was going to take care of?  Nope, that was down too.
As you probably know by now, all of these problems were due to a failure in one of Amazon Web Services’ S3 storage data centers. According to the BBC, the outage even affected sites as large as Netflix, Spotify, and AirBnB.
Now, you may think I’m writing this to gloat. After all, here at Mirantis we obviously talk a lot about OpenStack, and one of the things we often hear is “Oh, private cloud is too unreliable.” But I’m not.
The thing is, public cloud isn’t any more or less reliable than private cloud; it’s just that you’re not the one responsible for keeping it up and running.
And therein lies the problem.
If AWS S3 goes down, there is precisely zero you can do about it. Oh, it’s not that there’s nothing you can do to keep your application up; that’s a different matter, which we’ll get to in a moment. But there’s nothing that you can do to get S3 (or EC2, Google Compute Engine, or whatever public cloud service we’re talking about) back up and running. Chances are you won’t even know there’s an issue until it starts to affect you and your customers.
A while back my colleague Amar Kapadia compared the costs of a DIY private cloud with a vendor distribution and with a managed cloud service. In that calculation, he included the cost of downtime as part of the cost of DIY and vendor distribution-based private clouds. But really, as yesterday proved, no cloud, even one operated by the largest public cloud provider in the world, is beyond downtime. It’s all in what you do about it.
So what can you do about it?
Have you heard the expression, “The best defense is a good offense”? Well, it’s true for cloud operations too. In an ideal situation, you will know exactly what’s going on in your cloud at all times, and take action to solve problems BEFORE they happen. You’d want to know that the error rate for your storage is trending upwards before the data center fails, so you can troubleshoot and solve the problem. You’d want to know that a server is running slow so you can find out why and potentially replace it before it dies on you, possibly taking critical workloads with it.
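A minimal sketch of that "act before it fails" idea, with made-up sample values and thresholds (no real monitoring product works exactly like this): fit a slope to recent error-rate samples and flag an upward trend.

```python
# Illustrative trend detection: flag a service whose error rate is
# climbing, using a least-squares slope over evenly spaced samples.

def trend_slope(samples):
    """Least-squares slope of evenly spaced measurements."""
    n = len(samples)
    mean_x = (n - 1) / 2
    mean_y = sum(samples) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(samples))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

def needs_attention(error_rates, threshold=0.5):
    """True if the error rate is trending up faster than the threshold."""
    return trend_slope(error_rates) > threshold

print(needs_attention([0.1, 0.1, 0.2, 0.1]))  # False: noise, not a trend
print(needs_attention([0.5, 1.5, 3.0, 4.8]))  # True: clearly climbing
```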
And while we’re at it, a true cloud application should be able to weather the storm of a dying hypervisor or even a storage failure; such applications are designed to be fault-tolerant. Pure play open cloud is about building your cloud and applications so that they’re not even vulnerable to the failure of a data center.
But what does that mean?
What is Pure Play Open Cloud?
You’ll be hearing a lot more about Pure Play Open Cloud in the coming months, but for the purposes of our discussion, it means the following:
Cloud-based infrastructure that’s agnostic to the hardware and underlying data center (so it can run anywhere); based on open source software such as OpenStack, Kubernetes, Ceph, and networking software such as OpenContrail (so there’s no vendor lock-in, and you can move it between a hosted environment and your own); and managed as infrastructure-as-code, using CI/CD pipelines and so on, to enable reliability and scale.
Well, that’s a mouthful! What does it mean in practice?
It means that the ideal situation is one in which you:

Are not dependent on a single vendor or cloud
Can react quickly to technical problems
Have visibility into the underlying cloud
Have support (and help) fixing issues before they become problems

Sounds great, but making it happen isn’t always easy. Let’s look at these things one at a time.
Not being dependent on a single vendor or cloud
Part of the impetus behind the development of OpenStack was the realization that while Amazon Web Services enabled a whole new way of working, it had one major flaw: complete dependence on AWS.
The problems here were both technological and financial. AWS makes a point of trying to bring prices down overall, but as you grow, incremental cost increases are going to happen; there’s just no way around that. And once you’ve decided that you need to do something else, if your entire infrastructure is built around AWS products and APIs, you’re stuck.
A better situation would be to build your infrastructure and application in such a way that it’s agnostic to the hardware and underlying infrastructure. If your application doesn’t care whether it’s running on AWS or OpenStack, then you can create an OpenStack infrastructure that serves as the base for your application, and use external resources such as AWS or GCE for emergency scaling or damage control.
Reacting quickly to technical problems
In an ideal world, nobody would have been affected by the outage in AWS S3’s us-east-1 region, because their applications would have been architected with a presence in multiple regions. That’s what regions are for. Rarely, however, does this happen.
Build your applications so that they have (or at the very least, CAN have) a presence in multiple locations. Ideally, they’re spread out by default, so if there’s a problem in one “place,” the application keeps running. This redundancy can get expensive, though, so the next best thing would be to have it detect a problem and switch over to a fail-safe or alternate region in case of emergency. At the bare minimum, you should be able to manually change over to a different option once a problem has been detected.
Preferably, this would happen before the situation becomes critical.
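The switch-over logic in the last two paragraphs can be sketched in a few lines; the region names and the health check here are placeholders for illustration, not any real SDK.

```python
# Toy failover: prefer the primary region, but route to the first
# healthy alternative when the primary's health check fails.

def pick_region(preferred, regions, is_healthy):
    """Return the preferred region if healthy, else the first healthy one."""
    candidates = [preferred] + [r for r in regions if r != preferred]
    for region in candidates:
        if is_healthy(region):
            return region
    raise RuntimeError("no healthy region available")

# us-east-1 is down (as in the S3 outage), so traffic shifts west.
status = {"us-east-1": False, "us-west-2": True, "eu-west-1": True}
print(pick_region("us-east-1", list(status), status.get))  # us-west-2
```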
Having visibility into the underlying cloud
Having visibility into the underlying cloud is one area where private or managed cloud definitely has the advantage over public cloud. After all, one of the basic tenets of cloud is that you don’t necessarily care about the specific hardware running your application, which is fine, unless you’re responsible for keeping it running.
In that case, using tools such as StackLight (for OpenStack) or Prometheus (for Kubernetes) can give you insight into what’s going on under the covers. You can see whether a problem is brewing, and if it is, you can troubleshoot to determine whether the problem is the cloud itself, or the applications running on it.
Once you determine that you do have a problem with your cloud (as opposed to the applications running on it), you can take action immediately.
Support (and help) fixing issues before they become problems
Preventing and fixing problems is, for many people, where the rubber hits the road. With a serious shortage of cloud experts, many companies are nervous about trusting their cloud to their own internal people.
It doesn’t have to be that way.
While it would seem like the least expensive way of getting into cloud is the “do it yourself” approach (after all, the software’s free, right?), long term, that’s not necessarily true.
The traditional answer is to use a vendor distribution and purchase support, and that’s definitely a viable option.
A second option that’s becoming more common is the notion of “managed cloud.” In this situation, your cloud may or may not be on your premises, but the important part is that it’s overseen by experts who know the signs to look for and are able to make sure that your cloud maintains a certain SLA, without taking away your control.
For example, Mirantis Managed OpenStack is a service that monitors your cloud 24/7 and can literally fix problems before they happen. It involves remote monitoring, a CI/CD infrastructure, KPI reporting, and even operational support, if necessary. But Mirantis Managed OpenStack is designed on the notion of Build-Operate-Transfer; everything is built on open standards, so you’re not locked in. When you’re ready, you can take over and transition to a lower level of support, or even take over entirely, if you want.
What matters is that you have help that keeps you running without keeping you trapped.
Taking control of your cloud destiny
The important thing here is that while it may seem easy to rely on a huge cloud vendor to do everything for you, it’s not necessarily in your best interest. Take control of your cloud, and take responsibility for making sure that you have options, and more importantly, that your applications have options too.
Quelle: Mirantis

How KPN speeds service delivery

Are you looking to transform your IT department into a self-service delivery center? Do your IT operations have the speed and control to deliver what’s needed without compromising quality?
Keep reading to find out how KPN, an IT and communications technology services provider, increased its speed to quickly deliver IT service requests, reduce costs and provide high quality cloud services.
KPN is a leader in IT services and connectivity. It offers fixed-line and mobile telephony, internet access and television services in the Netherlands. The provider also operates several mobile brands in Germany and Belgium. Its subsidiary, Getronics N.V., provides services across the globe.
Data and storage have played a critical role in helping KPN deliver high quality cloud services to its clients. As rapid growth of data continues to change the game, here’s how this savvy business has used IBM Cloud to transform operations.
Cloud Orchestrator accelerates service delivery
KPN executives wanted to optimize its cloud strategy to enhance service delivery time and quality. Potential solutions would help them manage and automate storage services in-house. The goal: improve cloud management to accelerate service delivery and reduce costs without sacrificing quality.
IBM Cloud Orchestrator (ICO) is an excellent solution for managing your complex hybrid cloud environments. It provides cloud management for IT services through a user-friendly, self-service portal. It automates and integrates the infrastructure, application, storage and network into a single tool. Additionally, the self-service catalog lets users automate the deployment of data center resources, cloud-enabled business processes and other cloud services.
Business transformation through automation
With ICO, KPN automated its storage services and designed an in-house cloud management system. The solution helped KPN provision and scale cloud resources and reduce both administrator workloads and error-prone manual IT administrator tasks. As a result, KPN could accelerate service delivery times by approximately 80 percent. This significantly improved the service quality and saved resources through automation.
Watch this video to learn more about how IBM Cloud Orchestrator helped KPN accelerate its cloud service delivery:

For a more in-depth discussion, join us at InterConnect 2017 and attend the session: “How KPN leveraged IBM Cloud technologies for automation and ‘insourcing’ of operations work.” And there’s more. InterConnect will bring together more than 20,000 top cloud professionals to network, train and learn about the future of the industry. If you still haven’t signed up, be sure to register now.
The post How KPN speeds service delivery appeared first on Cloud computing news.
Quelle: Thoughts on Cloud

Why 80 percent of companies are increasing use of cloud managed services

This is the first in a two-part interview series with Lynda Stadtmueller, vice president of cloud services for the analyst firm Frost & Sullivan.
Thoughts on Cloud (ToC): A recent survey by Frost & Sullivan reported that 80 percent of US companies are planning to increase their use of cloud managed services. What factors are driving this increase?
Lynda Stadtmueller, vice president of cloud services, Frost & Sullivan: There are two main factors driving this increase: cloud is more complex, and the stakes are now higher than ever.
With cloud, businesses know they have a tremendous technology delivery model at their fingertips, but they don’t always know how to harness it. They might not have the expertise on staff. The self-service cloud might be more complex than they expected.
Additionally, the stakes for getting it right are high. As a result, they’re turning to specialists who can provide the management overlay to make sure that workloads are secure, efficient and cost controlled.
ToC: Does that 80 percent include companies that already use a managed cloud hosting solution and plan to increase those services?
Source: 2015 Frost & Sullivan cloud survey of US-based IT decision makers
LS: Yes. There are more types of cloud managed services available now than in the past. For example, a company using some sort of cloud infrastructure management may realize that they have non-cloud legacy applications that aren’t running as efficiently as they would prefer. The right provider can bring the benefits of cloud to legacy applications. In these cases, companies are adding that to their managed services agreements. They’re adding more workloads, more infrastructure and more applications to the cloud.
ToC: Is driving cloud value in legacy applications the single biggest reason for that type of increase?
LS: It’s a big one. Interestingly, in many companies these decisions are made separately. The person who manages the SAP workload may not be the same person who makes decisions about cloud infrastructure services.
And yet, as the company moves from point solutions to a holistic hybrid cloud strategy, that’s when those collaborative conversations are happening. At a higher level, the organization may decide it can move its most challenging workloads into a cloud managed service model and recognize those benefits across multiple lines of business.
Come back soon for part two of our interview with Lynda Stadtmueller. To learn more about the value of cloud managed services, watch a short webcast featuring insights from Frost & Sullivan, “How Managed Cloud Services Can Help You Achieve Your Business Goals.”
The post Why 80 percent of companies are increasing use of cloud managed services appeared first on Cloud computing news.
Quelle: Thoughts on Cloud

3 Things you’ll learn about private cloud at InterConnect

Many companies are embracing a private cloud strategy to run their business. They want to reduce cost and effort while improving agility, IT processes and resource scalability. Private cloud addresses these needs. And it offers a dedicated, single-tenant cloud environment either on-site or off-premises.
InterConnect 2017, the industry’s premier cloud conference, is the perfect place to learn about private cloud implementations and best practices from experts and peers. Three key things we’re showcasing through sessions, panels, labs and hands-on demos at InterConnect:

It’s easy to adopt private cloud
Private cloud can transform your business
IBM can help get you there quickly

Here are just a few highlights to work into your schedule.
Session: Five years of business value with PureApplication at DTCO, from bleeding edge to proven technology
The Dutch Tax and Customs Office needed to keep pace with new releases of all the different components of their technology stack. They aimed to simplify their software and application environment to accelerate application delivery. Adopting DevOps, IBM PureApplication, and patterns for WebSphere and Master Data Management (MDM), DTCO not only gained faster time to market but also realized the full potential of private cloud.
Session: How DevOps enhanced quality and speed of delivery for the Israeli Government’s Welfare Department
The Israeli Government’s Welfare Department wanted to deliver applications to market faster and with higher quality. To get there, they used an agile, DevOps approach. They brought in IBM UrbanCode Deploy to automate deployments. And they complemented that with IBM Bluemix Local System to streamline app environment provisioning. As a result, the organization improved app delivery time from three months to three weeks and reduced provisioning times from two weeks to 50 minutes.
Session: IBM Bluemix Private Cloud for cloud service providers: Materna’s experiences and technical insight
IT consulting company Materna succeeded in capturing new customers with a cloud-based delivery of its solution on IBM Bluemix Private Cloud. In this session, their executives will walk through their process of adopting cloud technologies. They will discuss how they decided which workloads to run on the cloud and how they addressed multi-tenancy, audit and compliance, networking and more.
Session: IBM PureApplication and Bluemix Local System Patterns: Roadmap and directions
Pre-built, customizable application patterns help you deploy application environments faster and more reliably across your private cloud. In this session, experts will discuss how patterns can help you improve application time-to-market so you can focus more on innovating and serving your clients.
Now that you know about a few of the private cloud sessions at InterConnect, it’s time for you to act. Register now and get ready for an incredible experience of learning and networking with your peers and some of the top experts in private cloud. See you there.
The post 3 Things you’ll learn about private cloud at InterConnect appeared first on Cloud computing news.
Quelle: Thoughts on Cloud

53 new things to look for in OpenStack Ocata

The post 53 new things to look for in OpenStack Ocata appeared first on Mirantis | Pure Play Open Cloud.
With a shortened development cycle, you’d think we’d have trouble finding 53 new features of interest in OpenStack Ocata, but with so many projects (more than 60!) under the Big Tent, we actually had a little bit of trouble narrowing things down. We did a live webinar talking about 157 new features, but here’s our standard 53. (Thanks to the PTLs who helped us out with weeding it down from the full release notes!)
Nova (OpenStack Compute Service)

VM placement changes: The Nova filter scheduler will now use the Placement API to filter compute nodes based on CPU/RAM/Disk capacity.
High availability: Nova now uses Cells v2 for all deployments; currently implemented as single cells, the next release, Pike, will support multi-cell clouds.
Neutron is now the default networking option.
Upgrade capabilities: Use the new ‘nova-status upgrade check’ CLI command to see what’s required to upgrade to Ocata.

Keystone (OpenStack Identity Service)

Per-user Multi-Factor-Auth rules (MFA rules): You can now specify multiple forms of authentication before Keystone will issue a token.  For example, some users might just need a password, while others might have to provide a time-based one time password and an additional form of authentication.
Auto-provisioning for federated identity: When a user logs into a federated system, Keystone will dynamically create a role for that user; previously, the user had to log into that system independently, which was confusing to users.
Validate an expired token: Finally, no more failures due to long-running operations such as uploading a snapshot. Each project can specify whether it will accept expired tokens, and just HOW expired those tokens can be.
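Conceptually, the per-project tolerance works like the sketch below; the grace window and function names are invented for illustration and are not Keystone's actual configuration interface.

```python
from datetime import datetime, timedelta

# Illustrative check: a token is usable until its expiry, plus
# whatever grace window the project is willing to accept.

def token_usable(expires_at, now, allowed_expired=timedelta(0)):
    """True if the token is still acceptable at time `now`."""
    return now <= expires_at + allowed_expired

expiry = datetime(2017, 3, 1, 12, 0)
one_hour_late = datetime(2017, 3, 1, 13, 0)

print(token_usable(expiry, one_hour_late))                      # False
print(token_usable(expiry, one_hour_late, timedelta(hours=2)))  # True
```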

Swift (OpenStack Object Storage)

Improved compatibility: Byteorder information is now included in Ring files to support machines with different endianness.
More flexibility: You can now configure the base URL for static web. You can also set the “filename” parameter in TempURLs and validate those TempURLs against a common prefix.
More data: If you’re dealing with large objects, you can now use multi-range GETs and HTTP 416 responses.
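For a sense of what a multi-range GET looks like on the wire, here is a small helper that builds the Range header for such a request (the helper itself is ours, not part of Swift):

```python
# Build the HTTP Range header for a multi-range GET, e.g. fetching
# two slices of a large object in a single request.

def range_header(ranges):
    """Turn [(start, end), ...] into 'bytes=start-end,start-end'."""
    return "bytes=" + ",".join(f"{start}-{end}" for start, end in ranges)

print(range_header([(0, 99), (500, 599)]))  # bytes=0-99,500-599
# If no requested range can be satisfied, the server can now reply
# with HTTP 416 (Range Not Satisfiable).
```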

Cinder (OpenStack Block Storage)

Active/Active HA: Cinder can now run in Active/Active clustered mode, preventing concurrent operation conflicts. Cinder will also handle mid-processing service failures better than in past releases.
New attach/detach APIs: If you’ve been confused about how to attach and detach volumes to and from VMs, you’re not alone. The Ocata release saw the Cinder team refactor these APIs in preparation for adding the ability to attach a single volume to multiple VMs, expected in an upcoming release.

Glance (OpenStack Image Service)

Image visibility: Users can now create “community” images, making them available for everyone else to use. You can also specify an image as “shared” to specify that only certain users have access.

Neutron (OpenStack Networking Service)

Support for Routed Provider Networks in Neutron: You can now use the Nova GRP (Generic Resource Pools) API to publish networks in IPv4 inventory. Also, the Nova scheduler uses this inventory as a hint to place instances based on IPv4 address availability in routed network segments.
Resource tag mechanism: You can now create tags for subnet, port, subnet pool and router resources, making it possible to do things like map different networks in different OpenStack clouds in one logical network or tag provider networks (i.e. High-speed, High-Bandwidth, Dial-Up).

Heat (OpenStack Orchestration Service)

Notification and application workflow: Use the new OS::Zaqar::Notification to subscribe to Zaqar queues for notifications, or the OS::Zaqar::MistralTrigger for just Mistral notifications.

Horizon (OpenStack Dashboard)

Easier profiling and debugging: The new Profiler Panel uses the os-profiler library to provide profiling of requests through Horizon to the OpenStack APIs so you can see what’s going on inside your cloud.
Easier Federation configuration: If Keystone is configured with Keystone to Keystone (K2K) federation and has service providers, you can now choose Keystone providers from a dropdown menu.

Telemetry (Ceilometer)

Better instance discovery:  Ceilometer now uses libvirt directly by default, rather than nova-api.

Telemetry (Gnocchi)

Dynamically resample measures through a new API.
New collectd plugin: Store metrics generated by collectd.
Store data on Amazon S3 with new storage driver.
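What “resampling measures” means can be illustrated with a short Python sketch: fine-grained (timestamp, value) pairs are aggregated into coarser time buckets. This mimics the idea behind Gnocchi’s new resample capability; it is not Gnocchi code, and the function name is invented.

```python
# Toy resampling: aggregate (timestamp, value) measures into buckets
# of `granularity` seconds, taking the mean of each bucket.
# Illustrative only; not Gnocchi's implementation or API.
from collections import defaultdict

def resample(measures, granularity):
    """Return (bucket_start, mean) pairs sorted by bucket start time."""
    buckets = defaultdict(list)
    for ts, value in measures:
        buckets[ts - ts % granularity].append(value)
    return [(start, sum(vals) / len(vals))
            for start, vals in sorted(buckets.items())]

measures = [(0, 1.0), (30, 3.0), (60, 5.0), (90, 7.0)]
print(resample(measures, 60))  # [(0, 2.0), (60, 6.0)]
```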

Dragonflow (Distributed SDN Controller)

Better support for modern networking: Dragonflow now supports IPv6 and distributed sNAT.
Live migration: Dragonflow now supports live migration of VMs.

Kuryr (Container Networking)

Neutron support: Neutron networking is now available to containers running inside a VM.  For example, you can now assign one Neutron port per container.
More flexibility with driver-based support: Kuryr-libnetwork now allows you to choose between ipvlan, macvlan or Neutron vlan trunk ports or even create your own driver. Also, Kuryr-kubernetes has support for ovs hybrid, ovs native and Dragonflow.
Container Networking Interface (CNI):  You can now use the Kubernetes CNI with Kuryr-kubernetes.
More platforms: The controller now handles Pods on bare metal, handles Pods in VMs by providing them Neutron subports, and provides services with LBaaSv2.

Vitrage (Root Cause Analysis Service)

A new collectd datasource: Use this fast system-statistics collection daemon, with plugins that collect different metrics. From Ifat Afek: “We tested the DPDK plugin, that can trigger alarms such as interface failure or noisy neighbors. Based on these alarms, Vitrage can deduce the existence of problems in the host, instances and applications, and provide the RCA (Root Cause Analysis) for these problems.”
New “post event” API: This general-purpose API allows easy integration of new monitors into Vitrage.
Multi-tenancy support: Users now see only the alarms and resources that belong to their own tenant.

Ironic (Bare Metal Service)

Easier, more powerful management: Thanks to a revamp of how drivers are composed, “dynamic drivers” enable users to select a “hardware type” for a machine rather than working through a matrix of hardware types. Users can independently change the deploy method, console manager, RAID management, power control interface and so on. Ocata also brings the ability to do soft power off and soft reboot, and to send non-maskable interrupts through both the Ironic and Nova APIs.

TripleO (Deployment Service)

Easier per-service upgrades: Perform step-by-step tasks as batched/rolling upgrades or in parallel. All roles, including custom roles, can be upgraded this way.
Composable High-Availability architecture: Services managed by Pacemaker such as galera, redis, VIPs, haproxy, cinder-volume, rabbitmq, cinder-backup, and manila-share can now be deployed in multiple clusters, making it possible to scale-out the number of nodes running these services.

OpenStackAnsible (Ansible Playbooks and Roles for Deployment)

Additional support: OpenStack-Ansible now supports CentOS 7, as well as integration with Ceph.

Puppet OpenStack (Puppet Modules for Deployment)

New modules and functionality: The Ocata release includes new modules for puppet-ec2api, puppet-octavia, puppet-panko and puppet-watcher. Also, existing modules support configuring the [DEFAULT]/transport_url configuration option. This change makes it possible to support AMQP providers other than RabbitMQ, such as ZeroMQ.

Barbican (Key Manager Service)

Testing:  Barbican now includes a new Tempest test framework.

Congress (Governance Service)

Network address operations:  The policy language has been enhanced to enable users to specify network policy use cases.
Quick start:  Congress now includes a default policy library so that it&8217;s useful out of the box.

Monasca (Monitoring)

Completion of Logging-as-a-Service:  Kibana support and integration is now complete, enabling you to push/publish logs to the Monasca Log API. Logs are authenticated and authorized using Keystone and stored scoped to a tenant/project, so users can see information only from their own logs.
Container support:  Monasca now supports monitoring of Docker containers, and is adding support for the Prometheus monitoring solution. Upcoming releases will also see auto-discovery and monitoring of applications launched in a Kubernetes cluster.
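The tenant-scoping idea behind the log storage described above can be sketched as a toy in-memory store: every entry carries the project it was ingested for, and queries return only the caller’s entries. This is an illustration of the concept, not Monasca’s implementation; the class and method names are invented.

```python
# Toy model of tenant-scoped log storage: entries are stored with a
# project ID (in Monasca this would come from the Keystone token), and
# queries only ever see the caller's own project's entries.
# Illustrative only; not Monasca code.

class ScopedLogStore:
    def __init__(self):
        self._entries = []

    def push(self, project_id, message):
        self._entries.append({"project_id": project_id, "message": message})

    def list(self, project_id):
        return [e["message"] for e in self._entries
                if e["project_id"] == project_id]

store = ScopedLogStore()
store.push("tenant-a", "disk full")
store.push("tenant-b", "login failed")
print(store.list("tenant-a"))  # ['disk full']
```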

Trove (Database as a Service)

Multi-region deployments: Database clusters can now be deployed across multiple OpenStack regions.

Mistral (Taskflow as a Service)

Multi-node mode: You can now deploy the Mistral engine in multi-node mode, providing the ability to scale out.

Rally (Benchmarking as a Service)

Expanded verification options:  Whereas previous versions enabled you to use only Tempest to verify your cluster, the newest version of Rally enables you to use other forms of verification, which means that Rally can actually be used for the non-OpenStack portions of your application and infrastructure. (You can find the full release notes here.)

Zaqar (Message Service)

Storage replication:  You can now use Swift as a storage option, providing built-in replication capabilities.

Octavia (Load Balancer Service)

More flexibility for Load Balancer as a Service:  You may now use Neutron host routes and custom MTU configurations when configuring LBaaS.

Solum (Platform as a Service)

Responsive deployment:  You may now configure deployments based on GitHub triggers, which means that you can implement CI/CD by specifying that your application should redeploy when there are changes.

Tricircle (Networking Automation Across Neutron Service)

DVR support in local Neutron:  The East-West and North-South bridging networks have been combined into a single North-South bridging network, making it possible to support DVR in local Neutron.

Kolla (Container Based Deployment)

Dynamic volume provisioning: Kolla-Kubernetes uses Ceph for stateful storage by default, and with Kubernetes 1.5, support was added for Ceph and for dynamic volume provisioning as requested by claims made against the API server.

Freezer (Backup, Restore, and Disaster Recovery Service)

Block incremental backups:  Ocata now includes the Rsync engine, enabling these incremental backups.
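The principle behind block-level incrementals can be shown with a short sketch: split the data into fixed-size blocks, hash each block, and on the next run transfer only blocks whose hashes changed. This illustrates the rsync-style idea only; it is not Freezer’s engine, and all names are invented.

```python
# Toy block-level incremental backup: hash fixed-size blocks and diff
# the manifests to find which blocks need re-uploading.
# Illustrative only; not Freezer's Rsync engine.
import hashlib

BLOCK_SIZE = 4

def manifest(data):
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    return [hashlib.sha256(b).hexdigest() for b in blocks]

def changed_blocks(old_manifest, new_manifest):
    """Indices of blocks that must be re-uploaded."""
    return [i for i, digest in enumerate(new_manifest)
            if i >= len(old_manifest) or old_manifest[i] != digest]

first = manifest(b"aaaabbbbcccc")
second = manifest(b"aaaaBBBBccccdddd")  # block 1 changed, block 3 added
print(changed_blocks(first, second))    # [1, 3]
```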

Senlin (Clustering Service)

Generic Event/Notification support: In addition to its usual capability of logging events to a database, Senlin can now also send events to a message queue and to a log file, enabling dynamic monitoring.
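The fan-out pattern involved here can be sketched in a few lines: the same event is dispatched to every registered sink (database, message queue, log file, and so on). This is a toy illustration with invented names, not Senlin code.

```python
# Minimal multi-sink event dispatch: one event is fanned out to every
# registered sink. Sinks are plain callables here; illustrative only.

class EventDispatcher:
    def __init__(self):
        self._sinks = []

    def add_sink(self, sink):
        self._sinks.append(sink)

    def emit(self, event):
        for sink in self._sinks:
            sink(event)

db, queue = [], []
dispatcher = EventDispatcher()
dispatcher.add_sink(db.append)     # stand-in for the database backend
dispatcher.add_sink(queue.append)  # stand-in for a message queue
dispatcher.emit("CLUSTER_SCALE_OUT")
print(db, queue)  # ['CLUSTER_SCALE_OUT'] ['CLUSTER_SCALE_OUT']
```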

Watcher (Infrastructure Optimization Service)

Multiple-backend support: Watcher now supports metrics collection from multiple backends.

Cloudkitty (Rating Service)

Easier management:  CloudKitty now includes a Horizon wizard and hints on the CLI to determine the available metrics. Also, CloudKitty is now part of the unified OpenStack client.

The post 53 new things to look for in OpenStack Ocata appeared first on Mirantis | Pure Play Open Cloud.
Quelle: Mirantis

IBM Machine Learning comes to private cloud

Billions of transactions in banking, transportation, retail, insurance and other industries take place in the private cloud every day. For many enterprises, the z System mainframe is the home for all that data.
For data scientists, it can be hard to keep up with all that activity and those vast swaths of data. So IBM has taken its core Watson machine learning technology and applied it to the z System, enabling data scientists to automate the creation, training and deployment of analytic models to understand their data more completely.
IBM Machine Learning supports any language, any popular machine learning framework and any transactional data type without the cost, latency and risk that come with moving data off premises. It also includes cognitive automation to help data scientists choose the right algorithms by which to analyze and process their organization’s specific data stores.
One company that is evaluating the IBM Machine Learning technology is Argus Health, which hopes to help healthcare providers and patients navigate the increasingly complex healthcare landscape.
“Helping our health plan clients achieve the best clinical and financial outcomes by getting the best care delivered at the best price in the most appropriate place is the mission of Argus while focused on the vision of becoming preeminent in providing pharmacy and healthcare solutions,” said Marc Palmer, president of Argus Health.
For more, check out CIO Today’s full article.
The post IBM Machine Learning comes to private cloud appeared first on news.
Quelle: Thoughts on Cloud

Bringing an app that rewards kindness to life

This is the second part in a series about a group of competitors in the Connect to Cognitive Build contest. Read the first part to find out how the team developed the idea for the app.
The hard decisions started almost immediately after we hit the submit button.
After getting through that first level of “what if,” we submitted a proposal for an app designed to reward kindness called HatsOff to the Connect to Cloud Cognitive Build contest. That was the easy part. Who wouldn’t support an app that encouraged kindness?
Our team didn’t wait until we received the “you have been accepted” notification to begin our next round of brainstorming. We were confident that our HatsOff app was a great idea. We believed in it and had the passion to drive it forward. We had already gone through these design thinking exercises:

Divergent thinking to generate a list of industries in which the app would be appropriate, as well as the problems it solved
Convergent thinking to narrow possibilities by building out personas
Empathy maps to better understand customers

Our idea was applicable to many industries, but we needed to focus on just a few: retail, hospitality, insurance and transportation. We further narrowed in on insurance to scope the problem being solved. We chose the two personas of the driver and the insurance agent, which gave the team enough information to begin the next phase of prototyping.
What we didn’t expect was the time it would take to develop the business case and market our idea internally to receive group resource funding. We knew that we had to get both the business case and the technology to be compelling enough to convince our voting colleagues that it was worthy enough for them to vote with their dollars.
Competition was stiff. We started with our management chain, gaining support and input, and expanded our circle to communities we were active in and our network of contacts, actively telling our story like a startup. Our storytelling skills paid off, and we achieved the largest amount of funding, supporters and enthusiasm in the group of candidate submissions.
The next phase was the prototype. Luckily, we had a visual designer and two user-experience designers volunteer to join our team. With low-fidelity, paper-and-pencil prototypes, we started the layout of the app. Then we iterated many times. That’s how we came to a decision on the logo that would represent in a single visual what the app was all about. We did the same for the color selection that would be appealing to the targeted buyer and users. Details mattered.

Our next critical tasks were setting up our development environment, refining our architecture and designing the important API services layer. By this time, our initial thinking that HatsOff would be a single solution with a user interface and a backend services component to it had evolved.
As we went through the design thinking process and hashed out ideas, it became evident that we could provide value to other apps and industries through APIs, such as an app for an automobile insurer.
The first draft of our architectural diagram had a lot of lines and boxes with question marks. These corresponded with the technical choices we had to make from a wide variety of runtimes, along with cognitive, analytic and data service options that Bluemix offered for our solution.
We realized that we were not experts in some of these. Blockchain was an example of something that we needed to learn more about to determine if it was something that could add value or not.
The HatsOff app core team and several incredible volunteers are fast at work putting together a working prototype in which all the details come together. We are confident that we are asking the right questions, open to learning, and have our focus on the customer that will take our app to the next level. Stay tuned.
Learn more about how IBM is helping clients take advantage of the digital economy.
HatsOff team members Ron Lynn, Padma Chukka and Soad Abu El-Naga contributed to this story.
The post Bringing an app that rewards kindness to life appeared first on news.
Quelle: Thoughts on Cloud