How KPN speeds service delivery

Are you looking to transform your IT department into a self-service delivery center? Do your IT operations have the speed and control to deliver what’s needed without compromising quality?
Keep reading to find out how KPN, an IT and communications technology services provider, accelerated delivery of IT service requests, reduced costs and maintained high-quality cloud services.
KPN is a leader in IT services and connectivity. It offers fixed-line and mobile telephony, internet access and television services in the Netherlands. The provider also operates several mobile brands in Germany and Belgium. Its subsidiary, Getronics N.V., provides services across the globe.
Data and storage have played a critical role in helping KPN deliver high quality cloud services to its clients. As rapid growth of data continues to change the game, here’s how this savvy business has used IBM Cloud to transform operations.
Cloud Orchestrator accelerates service delivery
KPN executives wanted to optimize the company’s cloud strategy to improve service delivery time and quality. They sought a solution that would help them manage and automate storage services in-house. The goal: improve cloud management to accelerate service delivery and reduce costs without sacrificing quality.
IBM Cloud Orchestrator (ICO) is an excellent solution for managing your complex hybrid cloud environments. It provides cloud management for IT services through a user-friendly, self-service portal. It automates and integrates the infrastructure, application, storage and network into a single tool. Additionally, the self-service catalog lets users automate the deployment of data center resources, cloud-enabled business processes and other cloud services.
Business transformation through automation
With ICO, KPN automated its storage services and designed an in-house cloud management system. The solution helped KPN provision and scale cloud resources while reducing both administrator workloads and error-prone manual IT administration tasks. As a result, KPN accelerated service delivery times by approximately 80 percent, significantly improving service quality and saving resources through automation.
Watch this video to learn more about how IBM Cloud Orchestrator helped KPN accelerate its cloud service delivery:

For a more in-depth discussion, join us at InterConnect 2017 and attend the session: “How KPN leveraged IBM Cloud technologies for automation and ‘insourcing’ of operations work.” And there’s more. InterConnect will bring together more than 20,000 top cloud professionals to network, train and learn about the future of the industry. If you still haven’t signed up, be sure to register now.
The post How KPN speeds service delivery appeared first on news.
Quelle: Thoughts on Cloud

Ensuring Container Image Security on OpenShift with Red Hat CloudForms

In December 2016, a major vulnerability, CVE-2016-9962 (the “on-entry vulnerability”), was found in the Docker engine which allowed local root users in a container to gain access to file descriptors of a process launched in or moved into the container from another namespace. A Banyan security report found that over 30% of official images in Docker Hub contain high-priority security vulnerabilities. FlawCheck surveyed enterprises about their top security concern regarding containers in production environments: “vulnerabilities and malware,” at 42%, was the top concern among those surveyed. Clearly, security is a top concern for organizations that are looking to run containers in production.
At Red Hat, we are continuously improving our security capabilities and introduced a new container scanning feature with CloudForms 4.2 and OpenShift 3.4. This new feature allows CloudForms to flag images in the container registry in which it has found vulnerabilities, and OpenShift to deny execution of that image the next time someone tries to run that image.

CloudForms provides multiple ways to initiate a container scan:

A scheduled scan of the registry
An automatic scan based on newly discovered images in the registry
A manual execution of the scan via SmartState Analysis

Having this unique scanning feature with native integration in OpenShift is a milestone in container security, as it provides near-real-time monitoring of your images within the OpenShift environment.
The following diagram illustrates the flow happening when an automatic scan is performed.

CloudForms monitors the OpenShift Provider and checks for new images in the registry. If it finds a new image, CloudForms triggers a scan.
CloudForms makes a secure call to OpenShift and requests a scanning container to be scheduled.
OpenShift schedules a new pod on an available node.
The scanning container is started.
The scanning container pulls down a copy of the image to scan.
The image to scan is unpacked and its software contents (RPMs) are sent to CloudForms.
CloudForms may also initiate an OpenSCAP scan of the container.
Once the OpenSCAP scan finishes, the results are uploaded and a report is generated from the CloudForms UI.
If the scan found any vulnerabilities, CloudForms calls OpenShift to flag the image and prevent it from running.

The next time someone tries to start the vulnerable image, OpenShift alerts the user that the image execution was blocked based on the policy set by CloudForms.
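The deny-execution behavior described above is enforced by OpenShift’s ImagePolicy admission plugin reacting to an annotation that CloudForms places on the flagged image. As a hedged sketch (the annotation key and rule layout below are assumptions drawn from OpenShift 3.4-era admission configuration, not taken from this post), the relevant excerpt of the master configuration might look like:

```yaml
# master-config.yaml (excerpt) -- assumed ImagePolicy admission configuration.
# The annotation key images.openshift.io/deny-execution is the one CloudForms
# is understood to set on a flagged image; treat these names as illustrative.
admissionConfig:
  pluginConfig:
    openshift.io/ImagePolicy:
      configuration:
        apiVersion: v1
        kind: ImagePolicyConfig
        executionRules:
        - name: reject-images-flagged-by-cloudforms
          reject: true
          matchImageAnnotations:
          - key: images.openshift.io/deny-execution
            value: "true"
          skipOnResolutionFailure: true
```

With a rule like this in place, any pod referencing an annotated image is rejected at admission time, which matches the blocked-execution message users see.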

As you can see, Red Hat CloudForms can be used as part of your IT security and compliance management to assist in identifying and validating that workloads are secure across your infrastructure stack, starting with hosts and virtual machines, instances in the cloud, or containers.
Quelle: CloudForms

53 new things to look for in OpenStack Ocata

With a shortened development cycle, you’d think we’d have trouble finding 53 new features of interest in OpenStack Ocata, but with so many projects (more than 60!) under the Big Tent, we actually had a little bit of trouble narrowing things down. We did a live webinar talking about 157 new features, but here’s our standard 53. (Thanks to the PTLs who helped us out with weeding it down from the full release notes!)
Nova (OpenStack Compute Service)

VM placement changes: The Nova filter scheduler will now use the Placement API to filter compute nodes based on CPU/RAM/Disk capacity.
High availability: Nova now uses Cells v2 for all deployments; currently implemented as single cells, the next release, Pike, will support multi-cell clouds.
Neutron is now the default networking option.
Upgrade capabilities: Use the new ‘nova-status upgrade check’ CLI command to see what’s required to upgrade to Ocata.

Keystone (OpenStack Identity Service)

Per-user Multi-Factor-Auth rules (MFA rules): You can now specify multiple forms of authentication before Keystone will issue a token.  For example, some users might just need a password, while others might have to provide a time-based one time password and an additional form of authentication.
Auto-provisioning for federated identity: When a user logs into a federated system, Keystone will dynamically create that user and assign a role; previously, the user had to log into that system independently, which was confusing to users.
Validate an expired token: Finally, no more failures due to long-running operations such as uploading a snapshot. Each project can specify whether it will accept expired tokens, and just HOW expired those tokens can be.

Swift (OpenStack Object Storage)

Improved compatibility: Byteorder information is now included in Ring files to support machines with different endianness.
More flexibility: You can now configure the base URL for static web. You can also set the “filename” parameter in TempURLs and validate those TempURLs against a common prefix.
More data: If you’re dealing with large objects, you can now use multi-range GETs and HTTP 416 responses.
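As a rough illustration of the “filename” parameter mentioned above, here is how a Swift TempURL is typically signed client-side. The host, account, container and key names are hypothetical; the filename query parameter (which controls the suggested download name) is the Ocata-era addition:

```python
# Minimal sketch of client-side Swift TempURL signing (hypothetical names).
import hmac
import time
from hashlib import sha1

method = "GET"
expires = int(time.time()) + 3600              # link valid for one hour
path = "/v1/AUTH_demo/reports/q4.pdf"          # /v1/<account>/<container>/<object>
key = b"my-temp-url-key"                       # account's X-Account-Meta-Temp-URL-Key

# The signature is an HMAC-SHA1 over "METHOD\nEXPIRES\nPATH".
hmac_body = f"{method}\n{expires}\n{path}".encode()
sig = hmac.new(key, hmac_body, sha1).hexdigest()

# The filename parameter sets the suggested download name; it is not
# part of the signed material.
url = (f"https://swift.example.com{path}"
       f"?temp_url_sig={sig}"
       f"&temp_url_expires={expires}"
       f"&filename=quarterly-report.pdf")
```

Anyone holding such a URL can fetch the object until the expiry time; rotating the account key invalidates all outstanding TempURLs.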

Cinder (OpenStack Block Storage)

Active/Active HA: Cinder can now run in Active/Active clustered mode, preventing concurrent operation conflicts. Cinder will also handle mid-processing service failures better than in past releases.
New attach/detach APIs: If you’ve been confused about how to attach and detach volumes to and from VMs, you’re not alone. The Ocata release saw the Cinder team refactor these APIs in preparation for adding the ability to attach a single volume to multiple VMs, expected in an upcoming release.

Glance (OpenStack Image Service)

Image visibility: Users can now create “community” images, making them available for everyone else to use. You can also mark an image as “shared” to specify that only certain users have access.

Neutron (OpenStack Networking Service)

Support for Routed Provider Networks in Neutron: You can now use the Nova GRP (Generic Resource Pools) API to publish networks in IPv4 inventory. Also, the Nova scheduler uses this inventory as a hint to place instances based on IPv4 address availability in routed network segments.
Resource tag mechanism: You can now create tags for subnet, port, subnet pool and router resources, making it possible to do things like map different networks in different OpenStack clouds into one logical network or tag provider networks (for example, high-speed, high-bandwidth, dial-up).

Heat (OpenStack Orchestration Service)

Notification and application workflow: Use the new OS::Zaqar::Notification resource to subscribe to Zaqar queues for notifications, or OS::Zaqar::MistralTrigger for just Mistral notifications.

Horizon (OpenStack Dashboard)

Easier profiling and debugging: The new Profiler Panel uses the os-profiler library to provide profiling of requests through Horizon to the OpenStack APIs so you can see what’s going on inside your cloud.
Easier Federation configuration: If Keystone is configured with Keystone to Keystone (K2K) federation and has service providers, you can now choose Keystone providers from a dropdown menu.

Telemetry (Ceilometer)

Better instance discovery:  Ceilometer now uses libvirt directly by default, rather than nova-api.

Telemetry (Gnocchi)

Dynamically resample measures through a new API.
New collectd plugin: Store metrics generated by collectd.
Store data on Amazon S3 with new storage driver.

Dragonflow (Distributed SDN Controller)

Better support for modern networking: Dragonflow now supports IPv6 and distributed SNAT.
Live migration: Dragonflow now supports live migration of VMs.

Kuryr (Container Networking)

Neutron support: Neutron networking is now available to containers running inside a VM.  For example, you can now assign one Neutron port per container.
More flexibility with driver-based support: Kuryr-libnetwork now allows you to choose between ipvlan, macvlan or Neutron vlan trunk ports or even create your own driver. Also, Kuryr-kubernetes has support for ovs hybrid, ovs native and Dragonflow.
Container Networking Interface (CNI):  You can now use the Kubernetes CNI with Kuryr-kubernetes.
More platforms: The controller now handles Pods on bare metal, handles Pods in VMs by providing them Neutron subports, and provides services with LBaaSv2.

Vitrage (Root Cause Analysis Service)

A new collectd datasource: Use this fast system-statistics collection daemon, with plugins that collect different metrics. From Ifat Afek: “We tested the DPDK plugin, which can trigger alarms such as interface failure or noisy neighbors. Based on these alarms, Vitrage can deduce the existence of problems in the host, instances and applications, and provide the RCA (Root Cause Analysis) for these problems.”
New “post event” API: This general-purpose API allows easy integration of new monitors into Vitrage.
Multi-tenancy support: A user will only see alarms and resources which belong to that user’s tenant.

Ironic (Bare Metal Service)

Easier, more powerful management: With a revamp of how drivers are composed, “dynamic drivers” enable users to select a “hardware type” for a machine rather than working through a matrix of hardware types. Users can independently change the deploy method, console manager, RAID management, power control interface and so on. Ocata also brings the ability to do soft power off and soft reboot, and to send non-maskable interrupts through both the Ironic and Nova APIs.

TripleO (Deployment Service)

Easier per-service upgrades: Perform step-by-step tasks as batched/rolling upgrades or in parallel. All roles, including custom roles, can be upgraded this way.
Composable High-Availability architecture: Services managed by Pacemaker such as galera, redis, VIPs, haproxy, cinder-volume, rabbitmq, cinder-backup, and manila-share can now be deployed in multiple clusters, making it possible to scale-out the number of nodes running these services.

OpenStackAnsible (Ansible Playbooks and Roles for Deployment)

Additional support: OpenStack-Ansible now supports CentOS 7, as well as integration with Ceph.

Puppet OpenStack (Puppet Modules for Deployment)

New modules and functionality: The Ocata release includes new modules for puppet-ec2api, puppet-octavia, puppet-panko and puppet-watcher. Also, existing modules support configuring the [DEFAULT]/transport_url configuration option. This change makes it possible to support AMQP providers other than rabbitmq, such as zeromq.

Barbican (Key Manager Service)

Testing:  Barbican now includes a new Tempest test framework.

Congress (Governance Service)

Network address operations: The policy language has been enhanced to enable users to specify network policy use cases.
Quick start: Congress now includes a default policy library so that it’s useful out of the box.

Monasca (Monitoring)

Completion of Logging-as-a-Service: Kibana support and integration is now complete, enabling you to publish logs to the Monasca Log API. Logs are authenticated and authorized using Keystone and stored scoped to a tenant/project, so users can only see information from their own logs.
Container support:  Monasca now supports monitoring of Docker containers, and is adding support for the Prometheus monitoring solution. Upcoming releases will also see auto-discovery and monitoring of applications launched in a Kubernetes cluster.

Trove (Database as a Service)

Multi-region deployments: Database clusters can now be deployed across multiple OpenStack regions.

Mistral (Taskflow as a Service)

Multi-node mode: You can now deploy the Mistral engine in multi-node mode, providing the ability to scale out.

Rally (Benchmarking as a Service)

Expanded verification options:  Whereas previous versions enabled you to use only Tempest to verify your cluster, the newest version of Rally enables you to use other forms of verification, which means that Rally can actually be used for the non-OpenStack portions of your application and infrastructure. (You can find the full release notes here.)

Zaqar (Message Service)

Storage replication:  You can now use Swift as a storage option, providing built-in replication capabilities.

Octavia (Load Balancer Service)

More flexibility for Load Balancer as a Service: You may now use Neutron host routes and custom MTU configurations when configuring LBaaS.

Solum (Platform as a Service)

Responsive deployment: You may now configure deployments based on GitHub triggers, which means that you can implement CI/CD by specifying that your application should redeploy when there are changes.

Tricircle (Networking Automation Across Neutron Service)

DVR support in local Neutron: The East-West and North-South bridging networks have been combined into a single North-South bridging network, making it possible to support DVR in local Neutron.

Kolla (Container Based Deployment)

Dynamic volume provisioning: Kolla-Kubernetes by default uses Ceph for stateful storage, and with Kubernetes 1.5, support was added for Ceph and dynamic volume provisioning as requested by claims made against the API server.

Freezer (Backup, Restore, and Disaster Recovery Service)

Block incremental backups:  Ocata now includes the Rsync engine, enabling these incremental backups.

Senlin (Clustering Service)

Generic Event/Notification support: In addition to its usual capability of logging events to a database, Senlin now enables you to add the sending of events to a message queue and to a log file, enabling dynamic monitoring.

Watcher (Infrastructure Optimization Service)

Multiple-backend support: Watcher now supports metrics collection from multiple backends.

Cloudkitty (Rating Service)

Easier management:  CloudKitty now includes a Horizon wizard and hints on the CLI to determine the available metrics. Also, Cloudkitty is now part of the unified OpenStack client.

The post 53 new things to look for in OpenStack Ocata appeared first on Mirantis | Pure Play Open Cloud.
Quelle: Mirantis

IBM Machine Learning comes to private cloud

Billions of transactions in banking, transportation, retail, insurance and other industries take place in the private cloud every day. For many enterprises, the z System mainframe is the home for all that data.
For data scientists, it can be hard to keep up with all that activity and those vast swaths of data. So IBM has taken its core Watson machine learning technology and applied it to the z System, enabling data scientists to automate the creation, training and deployment of analytic models to understand their data more completely.
IBM Machine Learning supports any language, any popular machine learning framework and any transactional data type without the cost, latency and risk that comes with moving data off premises. It also includes cognitive automation to help data scientists choose the right algorithms by which to analyze and process their organization’s specific data stores.
One company that is evaluating the IBM Machine Learning technology is Argus Health, which hopes to help healthcare providers and patients navigate the increasingly complex healthcare landscape.
“Helping our health plan clients achieve the best clinical and financial outcomes by getting the best care delivered at the best price in the most appropriate place is the mission of Argus while focused on the vision of becoming preeminent in providing pharmacy and healthcare solutions,” said Marc Palmer, president of Argus Health.
For more, check out CIO Today’s full article.
The post IBM Machine Learning comes to private cloud appeared first on news.
Quelle: Thoughts on Cloud

3 key ways streaming video helps with delivering tough corporate news

Whether it’s a lower-than-expected earnings period or layoffs, every company has to find a way to announce bad news.
Historically, the news was delivered with a memo or in an in-person meeting, but these no longer suffice as workforces become increasingly distributed. Fortunately, the introduction of streaming video technology has upended how businesses communicate changes to large numbers of workers for the better.
“It’s efficient, you can reach a wide audience at a low cost, and the products available for this are extremely affordable,” says Dan Rayburn, a streaming media analyst for Frost & Sullivan and executive vice president of streamingmedia.com. Rayburn adds that major companies, including Goldman Sachs and Coca-Cola, are already taking advantage.
Most importantly, streaming video enables three critical elements necessary in delivering tough news to employees:
Transparency
According to research from McKinsey & Company about company transformations, employees are eight times more likely to report a success when their bosses communicate directly. Streaming video not only enables company-wide, simultaneous communication, but also allows sometimes inaccessible C-level executives to communicate directly and clearly with all employees.
What’s more, streaming video facilitates an open conversation between employees and senior management. As an executive communicates a message to the employees, an HR manager can sift through pertinent questions submitted by employees on the platform in real time. Executives can then address these questions to help employees understand what’s going on.
Personalization
With streaming video, a single message can not only be delivered across a large company, but also tailored to that company’s different locations. Following an initial announcement, companies can integrate additional messages that pertain specifically to varying regions or departments. This is particularly useful for globally distributed companies where employees may be impacted differently depending on their location.
Video streaming enables both a uniform message and smaller video conversations that provide details on, for example, each person&8217;s severance package. According to Valerie Frederickson, CEO of the HR consultancy Frederickson Pribula Li, even if a company is confined to a relatively small geographic region in a single time zone, any company with more than 75 employees should consider itself a candidate for streaming video services.
Consistency
Streaming video can be tailored to meet individual needs, interests and circumstances, but it can also prevent variation in messages from the leadership team to employees.
“How can the company give the message all around the world, how can they convey that it’s under control?” asks Frederickson. “What you want to have in streaming video is an executive at the top or near the top who is disciplined and can stay on point.”
Video streaming controls for anomalies that may appear if the job of delivering the news were given entirely to local managers. This is essential for preventing an internal misalignment that might lead to conflicts.
Consistency also implies communication is ongoing. Whole-company change is 12 percent more likely to be successful with continual communication from senior management. A video platform enables companies to help employees with their next steps after the company’s change by offering webinars, and it helps HR keep track of who’s actually attending them to ensure that employees are taking the right steps in the transition.
“The advantage of using video is that it keeps the human element,” says Frederickson. “If you have a well-planned message and deliver it with technology in a flawless and seamless way, it gives employees access to the information they need.”
Learn more about IBM Cloud Video solutions.
The post 3 key ways streaming video helps with delivering tough corporate news appeared first on news.
Source: Thoughts on Cloud

Cloud and cognitive technologies are transforming the future of retail

Though the retail industry is rapidly changing, one fact remains constant: the customer is king.
Some 35,000 attendees made their way to the National Retail Federation’s “Big Show” (NRF) at New York’s Javits Center last month for a first-hand look into the future of retail. Talk of digital transformation created buzz on and off the show floor.
Just south of the show at the IBM Bluemix Garage in Soho, some of the industry’s revolutionary leaders gathered for a roundtable discussion on how cloud and cognitive technologies are becoming an integral part of how retailers reach and meet shoppers’ expectations.
Attendees included Staples CTO Faisal Masud; Shop Direct CIO Andy Wolfe; Retail Systems Research analyst Brian Kilcourse; Forbes retail and consumer trends contributor Barbara Thau; The New Stack journalist Darryl Taft; IBM Bluemix Garages Worldwide Director Shawn Murray; IBM Blockchain program director Eileen Lowry; and Pace University clinical professor of management and Entrepreneurship Lab director Bruce Bachenheimer. The group took a close look at how retailers experiment with new ways to give customers what they want and drive that transformation using cloud and cognitive computing.
Consumers drive tech adoption
Retail is a famously reactive business; it’s slow to adopt new technologies and innovation. However, in today’s consumer-driven age, retailers must quicken their pace, often setting aside internal strategies to tune into consumers’ demands and adopt the technology necessary to keep up.
Yet that’s often not the case. The IBM 2017 Customer Experience Index study found that 84 percent of retail brands offered no in-store mobile services and 79 percent did not give associates the ability to access a customer’s account information via a mobile device. These are key services for a seamless customer experience.
Retailers must capture the attention of consumers armed with smartphones and tablets, who compare product prices and read reviews on social networks around the clock. The hyper-connected consumer is the new norm, and understanding and engaging with these shoppers in real time is essential.
What customers really want
While retailers are busy selling, customer expectations are changing by the second. Retail is now about providing high-quality, engaging experiences. Forward-thinking retailers use cloud infrastructure and AI-powered innovations such as cognitive chatbots to amplify and augment, not replace, the core human element of retail.
For example, for a retail recommendations strategy, Masud said that Watson Conversation on IBM Cloud helped Staples discover a gap between what the company assumed customers wanted and what they actually wanted. When Staples worked with IBM to develop its “Easy Button” virtual assistant, Masud said, “We thought we would just be making recommendations for more office supplies based on their purchases.”
What Staples found was that customers were also seeking solutions to help track administrative details in their office. “They wanted us to remember things for them like the catering company they used or the name of the person who signs for the delivery,” Masud said.
A cloud-powered, cognitive technology solution provides clear benefits for Staples. As it continues to learn customer orders and preferences, the office-supply-ordering tool continues to improve its predictive and order-recollection capability, making it more valuable to users for everyday tasks. Staples can bring the on-demand world to customers, allowing them to order anytime, anywhere and from any device.
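The order-recollection behavior described above can be sketched generically. The toy Python class below is an illustration only, not Staples’ actual Watson-based implementation; all names and data are hypothetical. It records each customer’s purchases and administrative notes, then recalls those notes on demand and suggests likely reorders by purchase frequency:

```python
from collections import Counter

class OrderAssistant:
    """Toy sketch of an order-recollection assistant (illustrative only)."""

    def __init__(self):
        self.orders = {}  # customer -> Counter of items ordered
        self.notes = {}   # customer -> dict of remembered details

    def record_order(self, customer, items):
        """Log a new order so future suggestions reflect it."""
        self.orders.setdefault(customer, Counter()).update(items)

    def remember(self, customer, key, value):
        """Store an administrative detail, e.g. the usual caterer."""
        self.notes.setdefault(customer, {})[key] = value

    def recall(self, customer, key):
        """Return a remembered detail, or None if nothing was stored."""
        return self.notes.get(customer, {}).get(key)

    def suggest_reorder(self, customer, n=3):
        """Predict likely reorders from purchase frequency."""
        counts = self.orders.get(customer, Counter())
        return [item for item, _ in counts.most_common(n)]

assistant = OrderAssistant()
assistant.record_order("acme", ["paper", "toner", "paper"])
assistant.remember("acme", "caterer", "Best Bites Catering")
print(assistant.suggest_reorder("acme"))   # → ['paper', 'toner']
print(assistant.recall("acme", "caterer"))  # → Best Bites Catering
```

A production system like the one the roundtable discussed would layer natural-language understanding and cloud-scale storage on top of this kind of per-customer memory, but the core value proposition is the same: the more the tool remembers, the more useful it becomes for everyday tasks.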
“The one thing customers want is ease,” added Shop Direct CIO Andy Wolfe. He noted people want to easily shop online from whatever device or online channel they prefer. Shop Direct is the UK’s second largest pureplay digital retailer.
Retailers must have actionable insights derived from backend systems data such as supply chains, as well as the data that customers produce and share.
Shop Direct had a wealth of data, but needed to identify the most important information, which is why the company adopted IBM Watson and IBM Cloud. Shop Direct wanted to better understand customers and run its business more efficiently to meet shoppers’ needs.
Wolfe and his team were able to use the power of cloud and cognitive to mine and understand data, turning it into a resource to personalize the company’s retail product offerings and make brands even more affordable for customers.
The future of retail and technology
“There will always be retail,” said Brian Kilcourse, analyst at Retail Systems Research. “It will just be different.”
The nature of shopping is evolving from a purposeful trip to a store or a website toward the “ubiquitous shopping era”: shopping everywhere, by any means, all the time. This has created a significant challenge for retailers to create an operationally sustainable and engaging experience that inspires loyalty as customers hop from store to web to mobile to social and back again.
That’s where cognitive and cloud come into play. Retailers can harness the power of data from their business and their customers to better personalize, contextualize and understand who customers are and offer them the products they want when they want them.
Timing and convenience are key for customers now. Cloud and cognitive technologies enable brands to authentically connect with consumers in an agile and scalable way. Cloud is no longer an IT trend. With apps, chatbots and new ways to reach customers, it is the platform keeping retailers available to consumers and in business.
Learn more about IBM Cloud retail solutions.
The post Cloud and cognitive technologies are transforming the future of retail appeared first on news.

Announcing the DockerCon speakers and sessions

Today we’re excited to announce the launch of the DockerCon 2017 agenda. With 100+ DockerCon speakers, 60+ breakout sessions, 11 workshops, and hands-on labs, we’re confident that you’ll find the right content for your role (Developer, IT Ops, Enterprise) or your level of Docker expertise (Beginner, Intermediate, Advanced).

View the announced schedule and speakers lineup  

Announced sessions include:
Use Case

0 to 60 with Docker in 5 Months: How a Traditional Fortune 40 Company Turns on a Dime by Tim Tyler (MetLife)
Activision’s Skypilot: Delivering Amazing Game Experiences through Containerized Pipelines by Tom Shaw (Activision)
Cool Genes: The Search for a Cure Using Genomics, Big Data, and Docker by James Lowey (TGEN)
The Tale of Two Deployments: Greenfield and Monolith Docker at Cornell by Shawn Bower and Brett Haranin (Cornell University)
Taking Docker From Local to Production at Intuit by JanJaap Lahpor (Intuit)

Using Docker

Docker for Devs by John Zaccone (Ippon Technologies)
Docker for Ops by Scott Coulton (Puppet)
Docker for Java Developers by Arun Gupta (Couchbase) and Fabiane Nardon (TailTarget)
Docker for .NET Developers by Michele Bustamante (Solliance)
Creating Effective Images by Abby Fuller (AWS)
Troubleshooting Tips from a Docker Support Engineer by Jeff Anderson (Docker)
Journey to Docker Production: Evolving Your Infrastructure and Processes by Bret Fisher (Independent DevOps Consultant)
Escape From Your VMs with Image2Docker by Elton Stoneman (Docker) and Jeff Nickoloff (All in Geek Consulting)

Docker Deep Dive – Presented by Docker Engineering

What’s New in Docker by Victor Vieux
Under the Hood with Docker Swarm Mode by Drew Erny and Nishant Totla
Modern Storage Platform for Container Environments by Julien Quintard
Secure Substrate: Least Privilege Container Deployment by Diogo Monica and Riyaz Faizullabhoy
Docker Networking: From Application-Plane to Data-Plane by Madhu Venugopal
Plug-ins: Building, Shipping, Storing, and Running by Anusha Ragunathan and Nandhini Santhanam
Container Publishing through Docker Store by Chinmayee Nirmal and Alfred Landrum
Automation and Collaboration Across Multiple Swarms Using Docker Cloud by Fernando Mayo and Marcus Martins
Making Docker Datacenter (DDC) Work for You by Vivek Saraswat

Black Belt

Monitoring, the Prometheus Way by Julius Volz (Prometheus)
Everything You Thought You Already Knew About Orchestration by Laura Frank (Codeship)
Cilium – Network and Application Security with BPF and XDP (Noiro Networks)
What Have Namespaces Done For You Lately? by Liz Rice (Microscaling Systems)
Securing the Software Supply Chain with TUF and Docker by Justin Cappos (NYU)
Container Performance Analysis by Brendan Gregg (Netflix)
Securing Containers, One Patch at a Time by Michael Crosby (Docker)

Workshops – Presented by Docker Engineering and Docker Captains

Docker Security
Hands-on Docker for Raspberry Pi
Modernizing Monolithic ASP.NET Applications with Docker
Introduction to Enterprise Docker Operations
Docker Store for Publishers

Convince your manager
Do you really want to go to DockerCon, but are having a hard time convincing your manager to send you? Have you already explained that sessions, training and hands-on exercises are definitely worth the financial investment and time away from your desk?
Well, fear not! We’ve put together a few more resources and reasons to help convince your manager that DockerCon 2017 on April 17-20, is an invaluable experience you need to attend.

The post Announcing the DockerCon speakers and sessions appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

6 do-not-miss hybrid cloud sessions at IBM InterConnect

Are you looking at expanding your private or hybrid cloud? Or maybe you want to get the most out of your current capabilities? Whatever your goals are, you can’t afford to miss these six exciting sessions at IBM InterConnect 2017.
1. Session : Strategies for successfully enabling BPM/ODM in the hybrid cloud
How do you achieve maximum business value in a cloud environment? Learn how to harness the capabilities of hybrid cloud to deploy, implement and manage enterprise workloads like IBM Business Process Manager (BPM) and IBM Operational Decision Manager (ODM). Come learn the business requirements you need to consider when working with application performance, servers, business processes, financial objectives, service management, disaster recovery, infrastructure, risk management and hybrid cloud integration.
2. Session : Bluemix local system: Cloud adoption for enterprise IT
It can be expensive to deliver the right applications to the right users at the right time. IBM technical experts will share how IBM Bluemix Local System and IBM PureApplication can help accelerate and optimize your IT operations with a turnkey, integrated private cloud application platform. See how you can run both traditional and cloud-based applications while supporting open technology through built-in automation for simple and repeatable application environment deployments.
3. Session : Strategies for high availability and disaster recovery with private cloud
Every organization has its own high availability (HA) and disaster recovery (DR) requirements. IBM Bluemix Local System and IBM PureApplication provide many capabilities that allow you to implement HA strategies and DR scenarios for your applications and data. Join this roundtable discussion to share your feedback and learn best practices from IBM and your peers.
4. Session : Total economic value from AXA’s PureApplication implementation
Launching new insurance products often means high upfront costs and time-consuming processes for setting up infrastructure. Learn how AXA Technology Services implemented a new underwriting process for household products on a pilot using IBM PureApplication Service in the cloud. You’ll also hear how it performed a proof of concept using PureApplication System in its data center. Both projects succeeded, and AXA Belgium purchased two IBM PureApplication System units for its on-premises use. Read the case study to see the total value of what was accomplished, and attend the session to see how you might be able to do the same.
5. Session : Habib Bank Limited’s journey to platform as a service with IBM PureApplication System
Do you want to learn how your organization might streamline application delivery and reduce costs? Habib Bank will share its journey from traditional IT to a new cloud-based platform as a service (PaaS) solution with IBM PureApplication and WebSphere Application Portal. Hear how this transition helped the company deploy its applications 300 percent faster and save USD 500,000.
6. Session : Enterprise IT modernization at 1-800-FLOWERS
Do you want to learn how to modernize your business to meet new challenges? 1-800-FLOWERS has reinvented itself several times since its founding as a local florist in 1976. The company has gone from local retail to global telephone sales, and now it is on the leading edge of the DevOps revolution. Learn how 1-800-FLOWERS uses IBM PureApplication Software as part of its enterprise modernization and DevOps automation process to greatly improve provisioning times for new development, test and production environments. You’ll also discover how patterns changed the company’s vocabulary from “months” to “days.”
These are just some of the many exciting sessions at InterConnect. If you’re still not signed up, register today, and then add these six sessions to your schedule. Don’t miss the opportunity to talk directly with our executives and technical teams by stopping by the InterConnect 2017 EXPO. I look forward to seeing you there.
The post 6 do-not-miss hybrid cloud sessions at IBM InterConnect appeared first on news.

How streaming video unites direct-sales teams

There’s been a lot of buzz around streaming video’s potential to transform the way that corporate organizations — especially those with teams scattered around the world — communicate, collaborate and sell.
But what about companies with a direct-sales model? Is there a role for streaming video to play in helping groups of independent sales representatives improve their game and feel like they are all part of one team?
Jennifer Thompson, an independent distributor for SeneGence International, can attest that there is. Since becoming the leader of an international team of 400 independent distributors for the cosmetics and skincare company several months ago, Thompson has been using live streaming video to conduct biweekly training sessions. 
“We discuss product knowledge and share marketing, advertising and social media tips,” she says. “It’s an effective platform for sharing best practices and creating an experience that feels like everyone is working in a real company setting.”
A marketing and communications veteran, Thompson says she immediately recognized the value of using video to virtually and cost-effectively bring together direct-sales team members located across the United States, United Kingdom, Canada and Australia. She uses live video to train and onboard new employees in a way that makes everyone feel involved in the organization. 
Virtual training and collaboration
Thompson develops all the training materials she uses, with the exception of product-related information provided by SeneGence. She also frequently conducts spur-of-the-moment training sessions on specific topics for her team, who communicate almost exclusively through a closed group on Facebook.
“I monitor the discussions in our group in real time. If I see an issue that’s generating a lot of questions from, or conversation within, our group, I will offer to do a 15-minute live training on the spot to talk through it,” she says. “I also receive direct messages from reps asking me to conduct quick training sessions on specific topics.”
Thompson adds that this direct access to leadership for team members is a major benefit of using live video. Though there was a learning curve when she first introduced the idea of live video training, her sessions are now well-attended, and team members actually request these sessions.
Seamlessly bringing new employees on board
Thompson says that streaming video has quickly become her go-to tool for onboarding new SeneGence distributors as well. Video helps the new members feel more connected to the rest of the team and more comfortable asking questions, a substantial improvement from Thompson’s own onboarding experience, which took place via text messages.
Live video “has really been a game changer for many of the women on my team,” she says. “They are more comfortable sharing information, initiating discussions and collaborating. That’s been an incredible takeaway for me as their team leader, and it’s an incentive for me to keep expanding our group’s use of video.”
Learn more about using video for direct-sales training at IBM Cloud Video.
The post How streaming video unites direct-sales teams appeared first on news.