Mirantis + Iron.io: Bringing Serverless Computing to OpenStack

Serverless computing is about deploying event-triggered, containerized applications into a framework that knows how to queue and pass jobs to them, manage inputs and outputs, and scale them up and down. First made popular by Amazon's Lambda service, the paradigm is simpler and more limited than PaaS, but offers many of the same benefits. It's gaining popularity swiftly in support of IoT (Internet of Things), media processing, and many other use cases requiring agility, reliability, and scale.
For the past several months, Mirantis and Unlocked Partner Iron.io have been collaborating to validate Iron.io's multi-component job/message-queueing and serverless compute framework on Mirantis OpenStack, and to develop methods for simplifying deployment in stand-alone and hybrid configurations.
Last month, Iron.io completed validation of a Murano app that installs IronMQ locally on a Mirantis OpenStack 8.0 cluster, and will shortly release another Murano app that deploys the IronWorker core framework.
On Tuesday, August 2nd, 2016, we conducted a webinar with Iron.io, summarizing progress and demoing Iron.io's recently-validated IronMQ hosted service and soon-to-be-validated IronWorker core framework on Mirantis OpenStack. Featuring Iron.io engineer Douglas Coburn, the conversation went fairly deep into Iron.io's architecture, WebUI and tools, and developer and operator experience, and concluded with a lengthy live Q&A with attendees.
For more information on Iron.io and OpenStack-based serverless computing, please:

View the webinar recording
Download the slides
View a short-form video demo from Iron.io

The post Mirantis + Iron.io: Bringing Serverless Computing to OpenStack appeared first on Mirantis | The Pure Play OpenStack Company.
Source: Mirantis

Application-aware NFV infrastructure, or the Heisenberg Uncertainty Principle and NFV

As a fan of quantum mechanics, I have a particular fondness for the famous Heisenberg Uncertainty Principle, which states that for certain pairs of properties of a quantum object, both values cannot be exactly defined at the same time. The best-known pair is a particle's position (x) and its momentum (p): knowing precisely where a particle is makes it impossible to know the particle's momentum exactly (and vice versa).
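In its usual quantitative form, the principle bounds how precisely the two can be known at once:

    \Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}

so reducing the uncertainty in one quantity necessarily increases it in the other.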

The take-away is that nature insists on trade-offs; nailing down one part of a problem domain can put another part in the wind. You encounter one example when you introduce virtualization technologies into the Communications Service Provider (CSP) cloud: an important trade-off arises between utilization and performance on the one hand, and elasticity, agility and manageability on the other.

At the heart of NFV is the ability to take network functionality that was previously offered as a hardware-based appliance and convert it to software running on commoditized servers. Implicit in this is the need for virtualization technologies, which abstract the software operating environment from the underlying hardware resources. However, these virtualization technologies, which are critical to the implementation of the cloud environment, also bring with them challenges that must be addressed and risks that must be managed. These challenges and risks are made greater by the stringent requirements for high performance and simple, efficient management imposed in Telco environments.

Figure 1: Traditional Architecture vs. Virtualized Architecture

Considering the performance implications of virtualizing a Telco environment is critical to any successful deployment of cloud technology (and NFV) in the CSP domain.
As shown in Figure 1, virtualization, by its very nature, brings two major sources of performance bottlenecks: 1) hardware resource contention for storage I/O, memory, CPU cores and network bandwidth; and 2) virtualization overhead, which adds layers of abstraction and processing to application data flows. Both have an impact on overall application performance. In the Telco cloud, which handles both latency-sensitive and latency-insensitive applications, any performance degradation can severely affect quality of experience.
Fortunately, recent technology advances, such as Intel's Data Plane Development Kit (DPDK) and PCI-SIG Single Root I/O Virtualization (SR-IOV), help enable reliable NFV deployments and ensure achievement of acceptable Telco-grade SLAs. Together with improvements in the ability of COTS hardware platforms (including the most recent x86 processor generations) to take advantage of these data plane acceleration technologies, they allow CSPs to deploy data path edge functions such as SBC, IMS, CPE/PE routers, and EPC elements on standard high-volume servers.
Mirantis OpenStack 9.0, our latest OpenStack distribution based on the Mitaka release, strikes a delicate balance between achieving the highest practical utilization and ensuring an acceptable level of performance.
With the 9.0 release, CSPs can now experience improved performance while running NFV workloads and other demanding network applications, with support for huge pages, SR-IOV, NUMA/CPU pinning and DPDK. All of these features can be configured through OpenStack Fuel, and all have been fully tested, documented, and readied for production deployment in Mirantis OpenStack 9.0:

The integration of NFV features such as huge pages, SR-IOV, NUMA/CPU pinning and DPDK into the OpenStack cloud operating environment enables fine-grained matching of NFV workload requirements to platform capabilities, prior to launching a virtual machine
Such feature support enables CSPs to offer premium, revenue-generating services based on specific hardware features
CSPs can now use Fuel to easily configure and provision the aforementioned NFV features in an automated and repeatable manner
CSPs can efficiently manage post-deployment operations, using either the lifecycle management features of Fuel or tools such as Puppet Enterprise, while overseeing infrastructure with StackLight for logging, monitoring and alerting (LMA).

For all their intrinsic complexity and architectural significance, NFV features are becoming easier to deploy on new OpenStack clusters, and more straightforward and convenient to enable and configure on VMs. This NFV demo video outlines the process: NFV features are enabled with switches during creation of a Fuel 9.0 Master Node, or can be turned on by editing a config file and restarting the relevant services on an existing 9.0 Fuel Master. Details of vCPU allocation (for pinning) and RAM allocation (for huge pages) are managed through Fuel's WebUI, prior to cluster (or scale-out compute node) deployment. Thereafter, on the deployed cluster, straightforward techniques, like creating a custom 'NFV' flavor for VMs that will house VNFs, offer low-maintenance pathways for efficient ops.
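As a purely illustrative sketch (not taken from the Fuel documentation or the demo video), the 'NFV flavor' step might look like this with python-novaclient, using the standard Nova extra specs for CPU pinning, huge pages and NUMA topology; the endpoint, credentials, flavor name and sizes below are placeholders:

    from keystoneauth1 import loading, session
    from novaclient import client

    # Authenticate against Keystone v3; endpoint and credentials are placeholders.
    loader = loading.get_plugin_loader('password')
    auth = loader.load_from_options(
        auth_url='http://controller:5000/v3',
        username='admin', password='secret', project_name='admin',
        user_domain_name='Default', project_domain_name='Default')
    nova = client.Client('2', session=session.Session(auth=auth))

    # Create the flavor: name, RAM (MB), vCPUs, disk (GB) -- sizes are examples.
    flavor = nova.flavors.create('m1.nfv', 4096, 4, 20)

    # Standard Nova extra specs: pin vCPUs to dedicated host cores, back guest
    # memory with huge pages, and confine the guest to a single NUMA node.
    flavor.set_keys({
        'hw:cpu_policy': 'dedicated',
        'hw:mem_page_size': 'large',
        'hw:numa_nodes': '1',
    })

Instances booted with such a flavor are then scheduled onto hosts that can satisfy the pinning and huge page requirements, which is the fine-grained matching of workload requirements to platform capabilities mentioned in the list above.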
You can also refer to our NFV Solution web page to learn more about the Mirantis CSP cloud solution, Open NFV reference platform, and the NFV partner ecosystem.
 
The post Application-aware NFV infrastructure, or the Heisenberg Uncertainty Principle and NFV appeared first on Mirantis | The Pure Play OpenStack Company.
Source: Mirantis

Why hybrid cloud is becoming the destination for enterprise transformation

The cloud has shifted from being a driver of cost savings to a key enabler of business innovation. As it matures, industries across the globe are realizing that the “cloud” is not simply a thing, but a multi-dimensional technology destination to fuel business innovation. A properly deployed cloud platform can improve business models, intelligence and […]
The post Why hybrid cloud is becoming the destination for enterprise transformation appeared first on Thoughts On Cloud.
Source: Thoughts on Cloud

How cloud can make CIOs into MVPs

As general manager of IBM's North America cloud services and solutions team, I get to speak to a lot of clients. One CIO with whom I regularly talk compares his typical day to a hockey goalie's, batting away slap shots from internal clients, vendors, hackers and bosses. “So many bosses,” he notes. I sympathize, but […]
The post How cloud can make CIOs into MVPs appeared first on Thoughts On Cloud.
Source: Thoughts on Cloud

Cloud sets successful mobile initiatives apart

Executives expect a lot from mobile initiatives. According to an IBM Institute for Business Value study, 77 percent of executives are planning at least five mobile initiatives over the next year, and 62 percent expect those initiatives to pay off within 12 months or less. That’s not always how things pan out. About two-thirds of […]
The post Cloud sets successful mobile initiatives apart appeared first on Thoughts On Cloud.
Source: Thoughts on Cloud

Hybrid cloud storage: Past, present and future

Many CIOs and line-of-business leaders say flexible cloud solutions can make their businesses more innovative, agile and competitive. They understand the value that vast amounts of data can bring to their business. But that data requires storage solutions that can adapt to many types of use cases and work with on-premises, off-premises and hybrid cloud […]
The post Hybrid cloud storage: Past, present and future appeared first on Thoughts On Cloud.
Source: Thoughts on Cloud

4 cloud adoption challenges for the Asia-Pacific region

Business leaders must weigh a number of options in the process of choosing a cloud computing platform or service. Not only do they have to pick the service that will best fit their business needs, but they also have to find just the right mix of options to ensure security, accessibility, meet regulatory requirements, and […]
The post 4 cloud adoption challenges for the Asia-Pacific region appeared first on Thoughts On Cloud.
Source: Thoughts on Cloud

OpenStack Swift mid-cycle hackathon summary

Last week more than 30 people from all over the world met at the Rackspace
office in San Antonio, TX for the Swift mid-cycle hackathon. All major companies
contributing to Swift sent people, including Fujitsu, HPE, IBM, Intel, NTT,
Rackspace, Red Hat, and Swiftstack. As always it was a packed week with a lot
of deep technical discussions around current and future changes within Swift.

There are always far more topics to discuss than time, so we collected topics
first and everyone voted afterwards. We came up with the following major
discussions, which are currently the most interesting within our community:

Hummingbird replication
Crypto – what’s next
Partition power increase
High-latency media
Container sharding
Golang – how to get it accepted in master
Policy migration

There were a lot more topics; I'd like to highlight a few of them.

H9D aka Hummingbird / Golang

This was a big topic – as expected. Rackspace has already shown that H9D
improves the performance of the object servers and of replication
significantly compared to the current Python implementation. There were also
some investigations into whether the speed could be improved using PyPy and
other tweaks; however, the major problem is that Python blocks processes on
file I/O, no matter whether it is async I/O or not. Sam wrote a very nice
summary about this earlier [1].

NTT also benchmarked H9D and showed some impressive numbers as well. In
short, throughput increased 5-10x depending on parameters like object size
and the like. It seems disks are no longer the bottleneck – the proxy CPU is
the new one. That said, inode cache memory seems to be even more important,
because with H9D one can issue many more disk requests.

Of course there were also discussions about another proposal to accept Golang
within OpenStack, and those discussions will continue [2]. My personal view is
that the H9D implementation has some major advantages, and hopefully (a
refactored subset of) it will be accepted and merged to master.

Crypto retro & what’s next

Swift 2.9.0 was released this past week and includes the merged crypto
branch [3]. Kudos to everyone involved, especially Janie and Alistair! This
middleware makes it possible for operators to fully encrypt object data on
disk.

We did a retro on the work done so far; this was the third time that we used
a feature branch and a final soft freeze to land a major change within Swift.
There are pros and cons to this, but overall it worked pretty well again. It
also made sense that reviewers stepped in late in the process, because this
brought fresh perspectives to the whole work. Soft freezes also push more
reviewers to contribute and finally get the change merged.

Swiftstack benchmarked the crypto branch; as expected the throughput decreases
somewhat with crypto enabled (especially with small objects), while proxy CPU
usage increases. There were some discussions about improving the performance,
and it seems the impact from checksumming is significant here.

The next steps to improve the crypto middleware are to work on external key
master implementations (for example, using Barbican) as well as key rotation.
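
To make the idea more concrete, here is a minimal, purely illustrative sketch
of the underlying pattern (this is not Swift's actual implementation): derive
a per-object key from a root secret and the object path, and encrypt the
object body before it is written to disk. It assumes the third-party
'cryptography' Python package; all names and values are placeholders.

    # Illustrative sketch only -- NOT Swift's implementation. Shows the general
    # pattern behind at-rest encryption: a per-object key derived from a root
    # secret, used to encrypt the object body before it hits disk.
    import hashlib
    import hmac
    import os

    from cryptography.hazmat.backends import default_backend
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    ROOT_SECRET = os.urandom(32)   # in practice this would come from a key master

    def object_key(account, container, obj):
        # Derive a 256-bit per-object key from the root secret and object path.
        path = '/%s/%s/%s' % (account, container, obj)
        return hmac.new(ROOT_SECRET, path.encode('utf-8'), hashlib.sha256).digest()

    def encrypt_body(key, plaintext):
        # Encrypt an object body with AES-256 in CTR mode; return (iv, ciphertext).
        iv = os.urandom(16)
        encryptor = Cipher(algorithms.AES(key), modes.CTR(iv),
                           backend=default_backend()).encryptor()
        return iv, encryptor.update(plaintext) + encryptor.finalize()

    key = object_key('AUTH_test', 'photos', 'cat.jpg')
    iv, ciphertext = encrypt_body(key, b'object data to store on disk')

Keeping the root secret out of the data path is what makes an external key
master such as Barbican, and eventually key rotation, a natural next step.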

Partition power increase

Finally, there is now a patch ready for review that will allow an operator to
increase the partition power without downtime for end users [4].

I gave an overview of the implementation and also showed a demo of how it
works. Based on discussions during the week I spotted some minor corner cases
that have since been fixed, and I hope to get this merged before Barcelona.
We also dreamed a bit about a future Swift with automatic partition power
increase, where an operator would need to think about this much less than
today.
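
For readers unfamiliar with the term, the following purely illustrative
sketch shows roughly how a partition power maps object names to partitions in
a hash ring (it mirrors the general idea behind Swift's ring, not its exact
code):

    # Illustrative sketch of how a partition power maps names to partitions.
    # This mirrors the general idea behind Swift's ring, not its exact code.
    import hashlib
    import struct

    def partition(path, part_power):
        # Take the top 'part_power' bits of an MD5 hash of the path.
        digest = hashlib.md5(path.encode('utf-8')).digest()
        top32 = struct.unpack('>I', digest[:4])[0]
        return top32 >> (32 - part_power)

    path = '/AUTH_test/photos/cat.jpg'
    print(partition(path, 10))   # 2**10 = 1024 partitions
    print(partition(path, 11))   # doubling the partition count changes the mapping

Since the partition of every object changes when the power changes,
increasing it means relocating (or at least relinking) data on every disk,
which is why a procedure that avoids downtime for end users is needed.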

Various middlewares

There are some proposed middlewares that are important to their authors, and
we discussed quite a few of them. These include:

High-latency media (aka archiving)
symlinks
notifications
versioning

The idea to support high-latency media is to use cold storage (like tape or
other public cloud object storage with a possible multi-hour latency) for less
frequently accessed data and especially to offer a low-cost long-term archival
solution based on Swift [5]. This is somewhat challenging for the upstream
community, because most contributors don’t have access to large enterprise tape
libraries for testing. In the end this middleware needs to be supported by the
community, so a stand-alone repository outside of Swift itself might make the
most sense (similar to the swift3 middleware [6]).

A proposal to implement true history-based versioning was put forward
earlier, and some open questions were discussed. It should hopefully land
soon, adding an improved approach to versioning compared to today's
stack-based versioning [7].

Sending out notifications on writes to Swift has been discussed before, and
thankfully Zaqar now supports temporary signed URLs, solving some of the
issues we faced earlier. I'll update my patch shortly [8]. There is also the
option of using oslo.messaging. All in all, the idea is to take a best-effort
approach – it's simply not possible to guarantee that a notification has been
delivered successfully without blocking requests.

Container sharding

As of today it's a good idea to avoid billions of objects in a single
container in Swift, because writes to that container can then become slow.
Matt started working on container sharding some time ago [9] and has iterated
once again, because he ran into new problems with the previous ideas. My
impression is that the new approach is getting much closer to something that
will eventually be merged, thanks to Matt's persistence on this topic.

Summary

A lot more (smaller) topics were discussed, but this should give you an
overview of the current work going on in the Swift community and the
interesting new features that we'll hopefully see in Swift itself soon.
Thanks to everyone who contributed and participated, and special thanks to
Richard for organizing the hackathon – it was a great week and I'm looking
forward to the coming months!
Source: RDO

Recent RDO blogs, July 19, 2016

Here’s what RDO enthusiasts have been blogging about in the last week.

OpenStack 2016.1-1 release by Haïkel Guémar

The RDO Community is pleased to announce a new release of openstack-utils.

… read more at http://tm3.org/7x

Improving RDO packaging testing coverage by David Simard

DLRN builds packages and generates repositories in which these packages will be hosted.
It is the tool that is developed and used by the RDO community to provide the repositories on trunk.rdoproject.org. It continuously builds packages for every commit for projects packaged in RDO.

… read more at http://tm3.org/7y

TripleO deep dive session (TripleO Heat Templates) by Carlos Camacho

This is the second video from a series of “Deep Dive” sessions related to TripleO deployments.

… watch at http://tm3.org/7z

How to build new OpenStack packages by Chandan Kumar

Building new OpenStack packages for RDO is always tough. Let’s use DLRN to make our life simpler.

… read more at http://tm3.org/7-

OpenStack Swift mid-cycle hackathon summary by cschwede

Last week more than 30 people from all over the world met at the Rackspace office in San Antonio, TX for the Swift mid-cycle hackathon. All major companies contributing to Swift sent people, including Fujitsu, HPE, IBM, Intel, NTT, Rackspace, Red Hat, and Swiftstack. As always it was a packed week with a lot of deep technical discussions around current and future changes within Swift.

… read more at http://tm3.org/80
Source: RDO

Recent RDO blogs, July 25, 2016

Here’s what RDO enthusiasts have been writing about over the past week:

TripleO deep dive session (Overcloud deployment debugging) by Carlos Camacho

This is the third video from a series of “Deep Dive” sessions related to TripleO deployments.

… read (and watch) more at http://tm3.org/81

How connection tracking in Open vSwitch helps OpenStack performance by Jiri Benc

By introducing a connection tracking feature in Open vSwitch, thanks to the latest Linux kernel, we greatly simplified the maze of virtual network interfaces on OpenStack compute nodes and improved its networking performance. This feature will appear soon in Red Hat OpenStack Platform.

… read more at http://tm3.org/82

Introduction to Red Hat OpenStack Platform Director by Marcos Garcia

Those familiar with OpenStack already know that deployment has historically been a bit challenging. That’s mainly because deployment includes a lot more than just getting the software installed – it’s about architecting your platform to use existing infrastructure as well as planning for future scalability and flexibility. OpenStack is designed to be a massively scalable platform, with distributed components on a shared message bus and database backend. For most deployments, this distributed architecture consists of Controller nodes for cluster management, resource orchestration, and networking services, Compute nodes where the virtual machines (the workloads) are executed, and Storage nodes where persistent storage is managed.

… read more at http://tm3.org/83

Cinder Active-Active HA – Newton mid-cycle by Gorka Eguileor

Last week the OpenStack Cinder mid-cycle sprint took place in Fort Collins, and on the first day we discussed the Active-Active HA effort that's been going on for a while now, as well as the plans for the future. This is a summary of that session.

… read more at http://tm3.org/84
Source: RDO