OpenStack Developer Mailing List Digest December 31 – January 6

SuccessBot Says

Dims: Keystone now has a Devstack-based functional test with everything running under Python 3.5.
Tell us yours via IRC channels with the message “#success <message>”
All

Time To Retire Nova-docker

nova-docker has lagged behind the last 6 months of nova development.
No longer passes simple CI unit tests.

There are patches to at least get the unit tests working [1].

If the core team no longer has time for it, perhaps we should just archive it.
People ask about it on openstack-nova about once or twice a year, but it’s not recommended as it’s not maintained.
It’s believed some people are running and hacking on it outside of the community.
The Zun project provides a lifecycle management interface for containers that are started in container orchestration engines provided with Magnum.
The nova-lxc driver provides the ability to treat containers like virtual machines [2].

It’s not recommended for production use, but it’s still better maintained than nova-docker [3].

Nova-lxd also provides the ability to treat containers like virtual machines.
Virtuozzo, which is supported in Nova via libvirt, provides both virtual machines and OS containers similar to LXC.

These containers have been in production for more than 10 years already.
They are well maintained and actually have CI testing.

There is a proposal to remove it [4].
Full thread

Community Goals For Pike

A few months ago the community started identifying work for OpenStack-wide goals to “achieve visible common changes, push for basic levels of consistency and user experience, and efficiently improve certain areas where technical debt payments have become too high – across all OpenStack projects.”
The first goal defined [5] was to remove copies of incubated Oslo code.
Moving forward in Pike:

Collect feedback on our first iteration: what went well and what was challenging?
Etherpad for feedback [6]

Goals backlog [7]

New goals welcome
Each goal should be achievable in one cycle. If not, it should be broken up.
Some goals might require documentation for how they could be achieved.

Choose goals for Pike

What is really urgent? What can wait for six months?
Who is available and interested in contributing to the goal?

Feedback was also collected at the Barcelona summit [8]
Digest of feedback:

Most projects achieved the goal for Ocata, and there was interest in doing it on time.
There was some confusion about acknowledging a goal versus doing the work.
Some projects were slow on the uptake and in reviewing the patches.
Each goal should document where the “guides” are, and how to find them for help.
Achieving multiple goals in a single cycle wouldn’t be possible for all teams.

The OpenStack Product Working Group is also collecting feedback for goals [9]
Goals set for Pike:

Split out Tempest plugins [10]
Python 3 [11]

TC agreements from the last meeting:

Two goals might be enough for the Pike cycle.
The deadline to define Pike goals would be Ocata-3 (Jan 23-27 week).

Full thread

POST /api-wg/news

Guidelines currently under review:

Add guidelines on usage of state vs. status [12]
Add guidelines for boolean names [13]
Clarify the status values in versions [14]
Define pagination guidelines [15]
Add API capabilities discovery guideline [16]

Full thread

Source: openstack.org

Don’t go it alone in the team sport of retail digital transformation

The reaction of IT teams to the ever-changing “bimodal IT” landscape has been interesting to watch over the past several years at the National Retail Federation’s (NRF) BIG Show.
There have certainly been winners and losers, but not always the ones you might expect. There has been a surge away from centralized IT in past years, in favor of routing new projects to embedded, shadow IT teams or completely outsourced digital projects.
However, those who are succeeding in building a truly winning omnichannel strategy are doing so with the complete inclusion of centralized IT. For these winners, the experience of CTO and CIO teams has been essential.
I often talk to IT directors of large retailers. These are the people who traditionally ran 12- to 18-month implementation projects. They now find themselves with a stark decision: be agile or be benched. Instead of sitting on the sidelines while other players from elsewhere in the business took charge, they are becoming the new change agents with a playbook to drive the digital agenda.
What has changed in the past few years is the attitude of the central IT teams to embrace the problem at hand. With a new acceptance of agile principles and the new reality of cloud and hybrid, these same IT teams have a pivotal role to play. They are helping the teams charged with rapid build out and transient projects, where delivery is measured in weeks.
Some recent very public security breaches have helped put the wind at the backs of once-beleaguered CTOs in making the case with boards to have central IT at the heart of every new build out. The move to a world where central IT retains control of the core systems — either on premises or the cloud, while working in partnership with shadow IT — is a newly emerging and powerful trend which ultimately will make everyone better off.
As enterprises react to the opportunities that cloud and digital bring, their IT architectures built over decades face their greatest ever challenge: supporting a new digital world where the connectivity is handled by a whole new generation of empowered users — rookies, if you will —  coming along with diverse skillsets.
For some, this could be categorized as API development tooling, but the further from the data center one looks, the more this morphs into something more fluid. It’s simply part of the business landscape. For the iPad generation who can connect their home world together — to switch on their lights from their smartphone while automatically publishing pics to their social channel of choice — it looks odd that enterprises are unable to apply this level of connectivity to the apps that make up their business landscape.
This broadening connectivity and user landscape changes the game, driving a forever-expanding and critical role for integration software. Integration is a fundamental element of any good team, handling the complexities of connecting and making sense of the data that digital teams need. Whether on the cloud or in the data center, integration is becoming significantly more powerful and ubiquitous, serving a surprising range of user experiences.
We’re driving a new generation of tooling aimed at promoting collaboration across the spectrum of digital teams driving the omnichannel agenda in leading retailers.
Have you seen the future yet? Come and talk it through with me at NRF or join the discussion here.
The post Don’t go it alone in the team sport of retail digital transformation appeared first on news.
Source: Thoughts on Cloud

DockerCon workshops: Which one will you be attending?

Following last year’s major success, we are excited to bring back and expand the paid workshops at DockerCon 2017. The pre-conference workshops will focus on a range of subjects, from Docker 101 to deep dives in networking, Docker for Java and advanced orchestration. Each workshop is designed to give you hands-on instruction and insight on key Docker topics, taught by Docker engineers and Docker Captains. The workshops are a great opportunity to get better acquainted and excited about Docker technology to start off DockerCon week.

Take advantage of the lowest DockerCon pricing and get your Early Bird Ticket + Workshop now! Early Bird Tickets are limited and will sell out in the next two weeks!
Here are the basics of the DockerCon workshops:
Date: Monday, April 17, 2017
Time: 2:00pm – 5:00pm
Where: Austin Convention Center – 500 E. Cesar Chavez Street, Austin, TX
Cost: $150
Class size: Classes will remain small and are limited to 50 attendees per class.
Registration: The workshops are only open to DockerCon attendees. You can register for the workshops as an add-on package through the registration site here.

Below are overviews of each workshop. To learn more about each topic head over to the DockerCon 2017 registration site.
Learn Docker
If you are just getting started learning about Docker and want to get up to speed, this is the workshop for you. Come learn Docker basics, including running containers, building images and the basics of networking, orchestration, security and volumes.
Orchestration Workshop: Beginner
You’ve installed Docker, you know how to run containers, you’ve written Dockerfiles to build container images for your applications (or parts of your applications), and perhaps you’re even using Compose to describe your application stack as an assemblage of multiple containers.
But how do you go to production? What modifications are necessary in your code to allow it to run on a cluster? (Spoiler alert: very little, if any.) How does one set up such a cluster, anyway? Then how can we use it to deploy and scale applications with high availability requirements?
In this workshop, we will answer those questions using tools from the Docker ecosystem, with a strong focus on the native orchestration capabilities available since Docker Engine 1.12, aka “Swarm Mode.”
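To give a flavor of how little is needed, here is a minimal sketch of Swarm Mode through the Docker SDK for Python (docker-py); the image, service name and replica count are illustrative assumptions, not the workshop’s actual exercises:

```python
# Initialize a single-node swarm and deploy a replicated service.
# Assumes `pip install docker` and Docker Engine >= 1.12.
import docker

client = docker.from_env()

# Turn this engine into a one-node swarm manager.
client.swarm.init()

# Three nginx replicas; the swarm reschedules them if a node fails.
service = client.services.create(
    "nginx:alpine",
    name="web",
    mode=docker.types.ServiceMode("replicated", replicas=3),
)
print(service.name, service.id)
```

Scaling then becomes a one-line change to the replica count, which is the kind of operational simplicity the workshop explores in depth.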
Orchestration Workshop: Advanced
Already using Docker and recently started using Swarm Mode in 1.12? Let’s start where previous orchestration workshops may have left off, and dive into monitoring, logging, troubleshooting, and security of the Docker Engine and Docker services (Swarm Mode) for production workloads. Pulled from real-world deployments, we’ll cover centralized logging with ELK, SaaS, and others; monitoring and alerting with cAdvisor and Prometheus; backups of persistent storage; optional security features (namespaces, seccomp and AppArmor profiles, Notary); and a few CLI tools for troubleshooting. Come away ready to take your Swarm to the next level!
Docker Networking
In this 3-hour, instructor-led training, you will get an in-depth look into Docker Networking. We will cover all the networking features natively available in Docker and take you through hands-on exercises designed to help you learn the skills you need to deploy and maintain Docker containers in your existing network environment.
Docker Store for Publishers
This workshop is designed to help potential Docker Store publishers understand the process, best practices and workflow of creating and publishing great content. You will get to interact with members of the Docker Store engineering team. Whether you are an established ISV, a startup distributing your software using Docker containers, or an independent developer trying to reach as many users as possible, you will benefit from this workshop by learning how to create and distribute trusted, enterprise-ready content for the Docker Store.
Docker for Java Developers
Docker provides PODA (Package Once Deploy Anywhere) and complements WORA (Write Once Run Anywhere) provided by Java. It also helps you reduce the impedance mismatch between dev, test and production environments and simplifies Java application deployment.
This workshop will explain how to:

Run your first Java application with Docker (see the sketch after this list)
Package your Java application with Docker
Share your Java application using Docker Hub
Deploy your Java application using Maven
Deploy your application using Docker for AWS
Scale Java services with Docker Engine swarm mode
Package your multi-container application and use service discovery
Monitor your Docker + Java applications
Build a deployment pipeline using common tools
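To make the first step concrete, here is a hedged sketch using the Docker SDK for Python rather than the workshop’s own CLI-based exercises; the openjdk image tag and command are illustrative assumptions:

```python
# Run a throwaway Java container and capture its output.
# Assumes `pip install docker` and a running Docker Engine.
import docker

client = docker.from_env()

# `java -version` writes to stderr, so request it explicitly;
# remove=True cleans the container up afterwards.
output = client.containers.run(
    "openjdk:8-jre-alpine",
    ["java", "-version"],
    stderr=True,
    remove=True,
)
print(output.decode())
```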

Hands-On Docker for Raspberry Pi
Take part in our first-of-a-kind hands-on Raspberry Pi and Docker workshop where you will be given all the hardware you need to start creating and deploying containers with Docker, including an 8-LED RGB add-on from Pimoroni. You will learn the subtleties of working with an ARM processor and how to control physical hardware through the GPIO interface. Programming experience is not required, but a basic understanding of Python is helpful.
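As a taste of the GPIO work involved, here is a hedged Python sketch using the common RPi.GPIO library to blink a single LED; the pin number and wiring are illustrative assumptions, not the workshop’s actual Pimoroni kit:

```python
# Blink an LED attached to a GPIO pin on a Raspberry Pi.
# Assumes the RPi.GPIO library and an LED wired to BCM pin 18.
import time

import RPi.GPIO as GPIO

LED_PIN = 18  # BCM numbering; an assumption for illustration

GPIO.setmode(GPIO.BCM)
GPIO.setup(LED_PIN, GPIO.OUT)

try:
    for _ in range(10):  # blink ten times
        GPIO.output(LED_PIN, GPIO.HIGH)
        time.sleep(0.5)
        GPIO.output(LED_PIN, GPIO.LOW)
        time.sleep(0.5)
finally:
    GPIO.cleanup()  # release the pins however the loop exits
```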
Microservices Lifecycle Explained Through Docker and Continuous Deployment
The workshop will go through the whole microservices development lifecycle. We’ll start from the very beginning and define and design the architecture. From there we’ll do some coding and testing, all the way to the final deployment to production. Once our new services are up and running, we’ll see how to maintain them, scale them, and recover them in case of failures. The goal will be to design a fully automated continuous deployment (CDP) pipeline with Docker containers.
During the workshop we’ll explore tools like Docker Engine with built-in orchestration via swarm mode, Docker Compose, Jenkins, HAProxy, and a few others.
Modernizing Monolithic ASP.NET Applications with Docker
Learn how to use Docker to run traditional ASP.NET applications in Windows containers without an application rewrite. We’ll use Docker tools to containerize a monolithic ASP.NET app, then see how the platform helps us iterate quickly – pulling high-value features out of the app and running them in separate containers. This workshop gives you a roadmap for modernizing your own ASP.NET workloads.

Stay tuned as more workshop topics will be announced in the coming weeks! The workshops will sell out, so act fast and add the pre-conference workshops to your DockerCon 2017 registration!


The post DockerCon workshops: Which one will you be attending? appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Meet 3 contenders in the Connect to Cloud Cognitive Build initiative

Thomas J. Watson said, “I believe the real difference between success and failure in a corporation can be very often traced to the question of how well the organization brings out the great energies and talents of its people.”
When you look at companies today — big or small, anywhere in the world — one consistent tenet they all have is the drive to innovate. Innovation is required for success, and more than that, survival.
If you walk around any IBM office today — whether at Astor Place in New York City, the office in Hursley, England, or the one in Cairo, Egypt — you will see signs reminding teams to “Treasure wild ducks” and “Become essential to our customers.” Maybe the simplest and most profound is “Think.” The rationale is to remind people daily what truly makes an IBMer and makes IBM different.
That is why IBM continues to institutionalize the practice of innovation, empowering teams to not only think, but also to act and make their ideas reality. One way IBM is doing that is with the Connect to Cloud Cognitive Build initiative.
Over the past four months, IBMers from around the world have been forming teams with the goal of making use of IBM technology solutions, combining them with Watson APIs, and creating solutions that may change customer experiences, markets or even the world.
I would like to invite you to follow their journey. Nearly 50 submissions have been narrowed down to 13 semifinalists who are now in the prototyping phase, and the three finalists will be invited to IBM InterConnect on 19 March, when the winner will be announced.
Between now and then, we are following three project teams as they create their prototypes leading up to the judging. You will get a chance to meet the teams, understand why they decided to undertake their specific project, and watch their successes and struggles as they try to make them work.
Here is a sneak peek of what they are doing:
Cognitive Water Information System (CoW-IS) is an automated, real-time monitoring system for the prevention of flood and drought-related disasters. It shares predicted and preemptive information with concerned stakeholders so they can take action to prevent loss of life and property.
The second project is a mobile application built on a cognitive hybrid cloud solution and a Samaritan network. It facilitates and rewards random acts of kindness by enabling users to give others thumbs-ups, pats on the back and virtual high-fives.
Cognibot is a ChatBot-driven conversational experience with which DevOps users solve complex middleware problems using natural language questions. Users get only the data they need, not the full, data-heavy dashboard. This way, they can dig into the institutional knowledge of the organization and reduce downtime.
So join us on 12 January for the first set of videos to meet the people who make IBM the innovative company it is.
Learn more about how IBM helps users take advantage of the digital economy.
The post Meet 3 contenders in the Connect to Cloud Cognitive Build initiative appeared first on news.
Source: Thoughts on Cloud

5 cloud predictions for 2017

2016 has shown that expert predictions don’t always play out in quite the ways people expect.
If anything, the mantra of “expect the unexpected” seems to be the only one to follow right now. With this as a backdrop, I thought about what the world of cloud might expect to see as 2017 begins.
With your expectations set accordingly, here are five of my predictions:
1. Increased agility will continue to be the main business driver for cloud.
As business demands change at ever faster rates, companies look to cloud for agility. Client engagements show us that traditional IT approaches don’t meet this need. Many companies will discover that their existing culture and processes act as an impediment to using cloud to drive innovation. Cost savings, while still important, are no longer the leading driver of a move to cloud.
2. Organizations will increasingly get rid of on-premises infrastructure.
I’m no longer surprised by organizations that say they don’t want to own infrastructure, often expressed as a “we don’t want to own and run data centers” message. Industries in which this would have been unthinkable a year ago are now “ditching the data center.” This trend will only continue apace.
3. Public cloud will become the primary delivery vehicle for most cloud adoption.
With this come two casualties. To start, it undermines the idea that “hybrid” is only about on- to off-premises connectivity. This now becomes a view of “hybrid” being “any-to-any.” It also supports the notion that the hybrid state is a step along the journey to public cloud. Hybrid is no longer the end goal for many clients, but a transition state to the future.
4. “Conservative” industries will adopt cloud.
Many companies will announce moves to “cloud first” models. Many clients are waiting for the first players in their industry to move. This will trigger a mass move to cloud. The ways that cloud can address regulatory issues are now well understood. As a result, organizations that avoided cloud previously will adopt it in droves.
5. Continued fallout from the move to cloud.
This will manifest itself in many ways. Traditional IT organizations will struggle even more with their role in the new world. Traditional IT vendors will also struggle to understand how to do business in the new world. Both of these struggles will lead to knock-on effects on jobs, roles and skills.  Cloud opens up new opportunities, but only for those willing to embrace this new world.
As Niels Bohr supposedly said, “Prediction is very difficult, especially about the future.”
Considering where predictions got us in 2016, I’ll fall back on this quote should my predictions above prove inaccurate.
Learn more about IBM Cloud solutions.
The post 5 cloud predictions for 2017 appeared first on news.
Source: Thoughts on Cloud

Fincantieri sets sail with IBM hybrid cloud

Fincantieri, one of the world’s largest shipbuilding groups, is looking to “improve the efficiency of designing, building and deploying new vessels,” InfoTechLead reports, and it has chosen an IBM hybrid cloud solution to help make that happen.
The Trieste, Italy-based shipbuilder selected IBM in part because there is an IBM Cloud Data Center in Milan. Fincantieri is looking to connect its own 13 private, distributed data centers with the IBM data center for “high availability, fault tolerance and secure enterprise service levels.”
Gianluca Zanutto, CIO of Fincantieri, explained the choice further: “When we needed to redesign our IT infrastructure for the future, we trusted IBM Cloud to deliver the highly secure and scalable solution we need to keep up with the sharp growth and complexity of the shipbuilding industry.”
Stefano Rebattoni, General Manager Global Technology Services, IBM Italy, added that IBM Cloud will help Fincantieri “easily integrate other subsidiaries and new acquisitions as it continues to expand the company’s worldwide footprint.”
Building cruise ships is part of a fast-growing industry. Demand for cruises has increased 68 percent over the past 10 years, according to Cruise Lines International Association, and cruise revenue is expected to grow from $37.1 billion in 2014 to $39.6 billion by the end of 2016.
Read more about Fincantieri’s choice of hybrid cloud provider on InfoTechLead.
The post Fincantieri sets sail with IBM hybrid cloud appeared first on news.
Source: Thoughts on Cloud

Announcing Federal Security and Compliance Controls for Docker Datacenter

Security and compliance are top of mind for IT organizations. In a technology-first era rife with cyber threats, it is important for enterprises to have the ability to deploy applications on a platform that adheres to stringent security baselines. This is especially applicable to U.S. Federal Government entities, whose wide-ranging missions, from public safety and national security to enforcing financial regulations, are critical to keeping policy in order.

Federal agencies and many non-government organizations are dependent on various standards and security assessments to ensure their systems are operating in controlled environments. One such standard is NIST Special Publication 800-53, which provides a library of security controls to which technology systems should adhere. NIST 800-53 defines three security baselines: low, moderate, and high. The number of security controls that need to be met increases from the low to high baselines, and agencies will elect to meet a specific baseline depending on the requirements of their systems.
Another assessment process, known as the Federal Risk and Authorization Management Program, or FedRAMP for short, further expands upon the NIST 800-53 controls by including additional security requirements at each baseline. FedRAMP is a program that ensures cloud providers meet stringent Federal government security requirements.
When an agency elects to deploy a system like Docker Datacenter for production use, they must complete a security assessment and grant the system an Authorization to Operate (ATO). The FedRAMP program already includes provisional ATOs at specific security baselines for a number of cloud providers, including AWS and Azure, with scope for on-demand compute services (e.g. Virtual Machines, Networking, etc). Since many cloud providers have already met the requirements defined by FedRAMP, an agency that leverages the provider’s services must only authorize the components of its own system that it deploys and manages at the chosen security baseline.
A goal of Docker is to help make it easier for organizations to build compliant enterprise container environments. As such, to help expedite the agency ATO process, we’re excited to release NIST 800-53 Revision 4 security and privacy control guidance for Docker Datacenter at the FedRAMP Moderate baseline.
The security content is available in two forms:

An open source project where the community can collaborate on the compliance documentation itself, and
A System Security Plan (SSP) template for Azure Government

First, we’ve made the guidance available as part of a project available here. The documentation in the repository is developed using a format known as OpenControl, an open source, “compliance-as-code” schema and toolkit that helps software vendors and organizations build compliance documentation. We chose OpenControl for this project because we’re big fans of tools at Docker, and it fits our development principles quite nicely. OpenControl also includes schema definitions for other standards, including the Payment Card Industry Data Security Standard (PCI DSS). This helps address compliance needs for organizations outside the public sector. We’re also licensing this project under CC0 Universal Public Domain. To accelerate compliance for container platforms, Docker is making this project public domain and inviting folks to contribute to the documentation to help enhance the container compliance story.
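For readers new to OpenControl, the sketch below shows roughly what a component entry looks like and how it can be loaded with PyYAML; the control text and status values are illustrative assumptions, not Docker’s published documentation:

```python
# Load and inspect a minimal OpenControl-style component definition.
# Assumes `pip install pyyaml`; the YAML content is a made-up example.
import yaml

COMPONENT = """
name: Docker Datacenter
schema_version: 3.0.0
satisfies:
  - standard_key: NIST-800-53
    control_key: AC-2
    implementation_status: complete
    narrative:
      - text: >
          Docker Datacenter provides role-based access control for
          management operations (illustrative text only).
"""

component = yaml.safe_load(COMPONENT)
for control in component["satisfies"]:
    print(control["standard_key"],
          control["control_key"],
          control["implementation_status"])
```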
 
Second, we’re including this documentation in the form of a System Security Plan (SSP) template for running Docker Datacenter on Microsoft Azure Government. The template can be used to help lessen the time it takes for an agency to certify Docker Datacenter for use. To obtain these templates, please contact compliance@docker.com.
We’ve also started to experiment with natural language processing which you’ll find in the project’s repository on GitHub. By using Microsoft’s Cognitive Services Text Analytics API, we put together a simple tool that vets the integrity of the actual security narratives and ensures that what’s written holds true to the NIST 800-53 control definitions. You can think of this as a form of automated proofreading. We’re hoping that this helps to open the door to new and exciting ways to develop content!
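As a rough illustration of that idea (not Docker’s actual tool, which lives in the project’s GitHub repository), the following sketch sends a control narrative to the Text Analytics key-phrase endpoint; the region, subscription key and sample text are assumptions:

```python
# Extract key phrases from a security narrative so they can be checked
# against NIST 800-53 control definitions. Assumes `pip install requests`
# and a Cognitive Services subscription key; the region is an assumption.
import requests

ENDPOINT = ("https://westus.api.cognitive.microsoft.com"
            "/text/analytics/v2.0/keyPhrases")
API_KEY = "YOUR-SUBSCRIPTION-KEY"  # placeholder

narrative = ("Docker Datacenter enforces role-based access control "
             "for all management operations.")

response = requests.post(
    ENDPOINT,
    headers={"Ocp-Apim-Subscription-Key": API_KEY},
    json={"documents": [{"id": "1", "language": "en", "text": narrative}]},
)
response.raise_for_status()

phrases = response.json()["documents"][0]["keyPhrases"]
print(phrases)  # compare these against the relevant control text
```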


More resources for you:

See What’s New and Learn more about Docker Datacenter
Sign up for a free 30 day trial of Docker Datacenter
Learn More about Docker in public sector.

The post Announcing Federal Security and Compliance Controls for Docker Datacenter appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Enterprise cloud strategy: Platforms and infrastructure in a multi-cloud environment

In past posts about multi-cloud strategy, I’ve focused on two principles for getting it right — governance and applications and data — and their importance when working with a cloud services provider (CSP).
The third and final element of your multi-cloud strategy is perhaps most crucial: platform and infrastructure effectiveness to support your application needs.
Deployment flexibility
When managing multiple clouds, you want to deploy applications on platforms that satisfy business, technical, security and compliance requirements. When those platforms come from a CSP, keep these factors in mind:

The platforms should be flexible and adaptable to your ever-changing business needs.
Your CSP should allow you to provision workloads on bare metal servers where performance or strict compliance is needed, and should also support virtual servers and containers.
The CSP should be able to build and support a private cloud on your premises. That cloud must fulfill your strictest compliance and security needs, as well as support a hybrid cloud model.
The CSP must provide capabilities that help you build applications by stitching together various platform-as-a-service (PaaS) services.
Many customers use containers to port applications. Find out whether your CSP provides container services backed by industry standards. Understand any customization to the standard container service that might create problems.

Seamless connectivity and networking
Applications, APIs and data must travel along networks. Seamless network connectivity across various cloud and on-premises environments is vital to success. Your CSP should be able to integrate with carrier hotels that enable on-demand, direct network connectivity to multiple cloud providers.
Interconnecting through carrier hotels enables automated, near-real-time provisioning of cloud services from multiple providers. It also provides enhanced service orchestration and management capabilities, along with shorter time to market.
Your CSP must also support software-defined and account-defined networks. This helps you maintain network abstraction standards that segregate customers as well as implement network segmentation and isolation.
The CSP should also control network usage with predefined policies. It must intelligently work with cloud-security solutions such as federated and identity-based security systems. Make sure the CSP isolates your data from other clients’ and segments it to meet security and compliance requirements.
Storage interoperability and resiliency
Extracting data from a CSP to migrate applications in-house or to another CSP is the most challenging part in a multi-cloud deployment. In certain cases, such as software-as-a-service (SaaS) platforms, you may not have access to all the data. One reason: there are no standards for cloud storage interoperability. It only gets more complex when you maintain applications across multiple clouds for resiliency.
The solution is to demand that your data can move between clouds and support both open-standard and native APIs. Ask your CSP whether it supports “direct link” co-location partnerships that can “hold” customer-owned storage devices for data egress or legacy workload migrations.
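To illustrate what an open-standard API buys you, here is a minimal sketch using boto3 against two hypothetical S3-compatible endpoints; the endpoint URLs, credentials and bucket names are placeholders, not any particular CSP’s values:

```python
# Copy one object between two clouds through the common S3 API.
# Assumes `pip install boto3` and providers that speak the S3 protocol.
import boto3

def make_client(endpoint_url, access_key, secret_key):
    """Build an S3-compatible client for any provider that speaks S3."""
    return boto3.client(
        "s3",
        endpoint_url=endpoint_url,
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key,
    )

source = make_client("https://objects.provider-a.example", "KEY_A", "SECRET_A")
target = make_client("https://objects.provider-b.example", "KEY_B", "SECRET_B")

# The same client code works on both sides because the API is shared.
body = source.get_object(Bucket="backups", Key="db.dump")["Body"].read()
target.put_object(Bucket="backups", Key="db.dump", Body=body)
```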
With a sound storage strategy, you’ll have good resiliency in case of disaster. Again, questions matter. Does your CSP provide object storage in multi-tenant, single-tenant or on-premises “flavors”?
As with everything else involving a CSP, look carefully under the hood. Find out whether the CSP’s storage solution is true hybrid; that is, an on- or off-premises solution that simplifies multi-cloud governance and compliance.
For more information, read “IBM Optimizes Multicloud Strategies for Enterprise Digital Transformation.”
The post Enterprise cloud strategy: Platforms and infrastructure in a multi-cloud environment appeared first on news.
Source: Thoughts on Cloud

Analytics on cloud opens a new world of possibilities

Data has become the most valuable currency and the common thread that binds every function in today’s enterprise. The more an organization puts data to work, the better the outcomes.
How can we harness data in a way that makes lives easier, more efficient and more productive? Where can we find the insight from data that will give a business the edge it needs?
Data intelligence is the result of applying analytics to data. Data intelligence creates more insight, context and understanding, which enable better decisions. Digital intelligence with cloud empowers organizations to pursue game-changing opportunities.
In a Harvard Business Review Analytic Services’ study of business executives, respondents who said they effectively innovate new business models were almost twice as likely to use cloud.

Connecting more roles with more data
Cloud analytics facilitates the connection of all data and insights to all users. This helps lay the foundation for a cognitive business. Trusted access to data is essential for organizations, whether that data is in motion or at rest, internal or external, structured or unstructured.
Besides their own data, companies have many more data sources that can provide insights into their business. Some popular examples include:

social media data
weather data
Thomson Reuters
public sources such as the Centers for Disease Control and Prevention
Internet of Things (IoT) sensor data

Cloud democratizes analytics by enabling companies to deliver more tools and data to more roles. Compared to on-premises solutions, cloud analytics deploys faster and offers a wider variety of analytics tools, including simple, natural language-based options.  With cloud’s scalability and flexibility, data volume and diversity have become almost limitless.
More accessible data and tools have created data-hungry professionals.
Application developers must turn creative ideas into powerful mobile, web and enterprise applications. Data scientists must discover hidden insights in data. Business professionals must create and act on insights faster. Data engineers must wrangle, mine and integrate relevant data to harness its power. The collaboration between these roles helps to extract more value from complex data.
Discovering more opportunities
Cloud-based analytics enables organizations to discover new opportunities, with data intelligence at the core. Organizations can uncover more insights by leveraging new technologies and approaches. A cloud platform provides faster, simplified access to the latest technologies, with the ability to mix and match them, try things out, use what you want and put them back when you’re done.
Data science, machine learning and open source let organizations extract insights from large volumes of data in new, iterative ways:

Data science tools enable quick prototyping and design of predictive models.
Machine learning has advanced fraud detection, increased sales forecast accuracy and improved customer segmentation.
Open source tools, such as Apache Spark and Hadoop, help teams conduct complex analytics at high speeds (see the sketch after this list).
More and more, new products and services are built on the cloud. It provides the ideal platform for users to fail fast and innovate quickly.
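As a small illustration of the Spark bullet above, here is a PySpark sketch of a simple high-speed aggregation; the file path and column names are illustrative assumptions:

```python
# Aggregate revenue per region across a cluster with Spark SQL.
# Assumes `pip install pyspark`; the bucket and schema are made up.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("cloud-analytics-sketch").getOrCreate()

sales = spark.read.csv("s3a://analytics-bucket/sales.csv",
                       header=True, inferSchema=True)

by_region = (sales.groupBy("region")
                  .agg(F.sum("revenue").alias("total_revenue"))
                  .orderBy(F.desc("total_revenue")))
by_region.show()
```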

Accelerating insights with cloud
Organizations with cloud-based analytics speed up outcomes. They iterate, improve business models and release new offerings into the marketplace rapidly.
Cloud underpins this in three ways:

Providing easier access to new technologies sooner
Deploying new data models faster
Enabling quick embedding of insights into process, applications and services

Putting insights into production in real time has become easy and expected. For example, when a retailer wants to trigger the right offer for a customer shopping online, it should be immediate. Speed is essential in offering this personalized experience.
The cloud has helped companies use analytics to respond to volatile market dynamics, establish competitive differentiation and create new business paradigms.
Learn how innovative organizations have harnessed analytics on the cloud in the Harvard Business Review Analytic Services whitepaper, “Powering Digital Intelligence with Cloud Analytics.”
The post Analytics on cloud opens a new world of possibilities appeared first on news.
Source: Thoughts on Cloud