Now view Apache Spark application history and YARN application status in the Amazon EMR console

You can now view Apache Spark application history and YARN application status in the Amazon EMR console. Application history is updated throughout runtime, and the history is available for up to seven days after the application is complete. Additionally, application history is still viewable after you terminate your Amazon EMR cluster. The console now includes the list of running and completed YARN applications on your Amazon EMR cluster. For each Spark application, you can drill down into granular information and easily view logs in Amazon S3 for each Spark job, task, stage, and executor. 
Source: aws.amazon.com

Deploy Qubole Data Service on a Data Lake Foundation in the AWS Cloud with New Quick Start

This Quick Start configures a production-ready Qubole Data Service (QDS) environment that is built on a data lake foundation in the Amazon Web Services (AWS) Cloud. You can use this Qubole environment to process and analyze your own datasets, and extend it for your specific use cases. The Quick Start also deploys an optional environment with prepopulated data, notebooks, and queries to analyze structured and semi-structured data, in order to gain key business insights into product sales performance. 
Source: aws.amazon.com

What to expect in Kubernetes 1.8: an early look at where k8s is going

The post What to expect in Kubernetes 1.8: an early look at where k8s is going appeared first on Mirantis | Pure Play Open Cloud.
Kubernetes 1.8 was planned as a stabilization release, but that doesn’t mean there’s nothing interesting to look forward to.  The release includes early versions of a number of different developments that provide additional features and control, including a fundamental change to how Kubernetes runs.
Deployment and operations: Self-hosting Kubernetes on Kubernetes
Which came first, the chicken or the egg? How do you compile a compiler? What kind of infrastructure runs infrastructure software? That’s the question that’s been facing Kubernetes developers: Kubernetes is a great infrastructure on which to host robust applications, but Kubernetes itself can benefit from those advantages.
The solution is a “self-hosted” architecture, in which the Kubernetes control plane, that is, the pieces that make it work, are themselves hosted by Kubernetes. This software “inception” makes it possible to both operate and use a Kubernetes cluster using the same set of skills.
In Kubernetes 1.8, we have the first experimental version of a self-hosted cluster, easily created with the kubeadm tool. At this point you still have to enable the feature, but the community plans to make this the default for Kubernetes 1.9.
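As a sketch (this feature is alpha in 1.8, so the flag name may change between releases), enabling the self-hosted control plane with kubeadm looks like this:

```shell
# Initialize a cluster with the experimental self-hosted control plane.
# The SelfHosting feature gate must be enabled explicitly in 1.8; the
# community plans to make it the default in 1.9.
kubeadm init --feature-gates=SelfHosting=true

# The control plane components now run as pods managed by Kubernetes itself:
kubectl get pods -n kube-system
```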
New ways to take control
Kubernetes 1.8 includes a number of different alpha-level features that provide more control over your cluster.
Many of the changes in Kubernetes 1.8 involve storage. For example, you can increase the size of a volume, though this is currently implemented only in the Gluster backend — and at this stage, it only increases the size of the volume, and doesn’t resize the filesystem.  Also, you can now use the Kubernetes API to create a volume snapshot. This functionality is actually at the “prototype” level; for the moment, it doesn’t stop any processes currently running on the volume — a process called “quiescing” — so there’s a possibility that your snapshot may be inconsistent. Still, it’s a look at what’s to come.
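A sketch of how volume expansion works in practice (alpha in 1.8, behind the ExpandPersistentVolumes feature gate; the storage class name here is a placeholder): you edit the claim's requested size and the backend grows the underlying volume.

```yaml
# Hypothetical claim against a GlusterFS-backed storage class.
# Increasing spec.resources.requests.storage requests expansion of the
# underlying volume; the filesystem itself is NOT resized in 1.8.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: glusterfs
  resources:
    requests:
      storage: 20Gi   # previously 10Gi; edit and re-apply to request expansion
```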
On the server side, NFV developers in particular will be glad to hear of the arrival of alternative container-level affinity policies, as well as the ability to request pre-allocated hugepages.
Perhaps the biggest feature, however, is that you now have the ability to create your own binary extensions to the kubectl Kubernetes client. You do this by creating a plugin that provides a new subcommand for kubectl.
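In the 1.8 alpha plugin mechanism, a plugin is a directory under ~/.kube/plugins containing a small descriptor plus the executable it wraps. A minimal hypothetical example (names and script are placeholders):

```yaml
# ~/.kube/plugins/hello/plugin.yaml
name: "hello"
shortDesc: "Prints a greeting"   # shown in the kubectl plugin help output
command: "./hello.sh"            # any executable shipped alongside the descriptor
```

The new subcommand is then invoked as `kubectl plugin hello`.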
Easier security
On the security front, Kubernetes 1.8 makes it possible to figure out exactly what permissions apply to a particular command. K8s uses Role Based Access Control (RBAC), which can make things complicated, but you can now feed a file of roles, rolebindings, clusterroles, or clusterrolebindings to the kubectl auth reconcile command and get back a proper list of rules that includes all of the appropriate implied permissions.
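For example (assuming a file named rbac.yaml containing Role and RoleBinding objects):

```shell
# Reconcile the RBAC objects in the file against the cluster, computing
# the full set of rules, including implied permissions.
kubectl auth reconcile -f rbac.yaml
```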
Also, there’s a new SelfSubjectRulesReview API (now in beta), which provides a list of actions that a particular user can perform in a particular namespace, which will make it easier for UI developers to show the appropriate choices.
Networking and Storage improvements
Networking and storage have seen some major work this cycle as well; it’s now possible to specify network policies not just for what can come into a pod, but also what can go out of it. You can also specify rules by IP block. These changes are considered beta.
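A sketch of an egress policy using an IP block (all names and addresses are hypothetical):

```yaml
# Pods labeled app=backend may only send traffic to the 10.0.0.0/24
# block, except the gateway at 10.0.0.1.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-egress
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes: ["Egress"]
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
        except: ["10.0.0.1/32"]
```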
Also new, in alpha state, is support for an IP Virtual Server (IPVS) mode for kube-proxy, which is designed to provide both better performance and more sophisticated load balancing algorithms than the current iptables-based architecture.
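Enabling the alpha IPVS mode is a matter of a feature gate and a proxy-mode flag (a sketch; the node also needs the ip_vs kernel modules loaded):

```shell
# Run kube-proxy in IPVS mode (alpha in Kubernetes 1.8).
kube-proxy --feature-gates=SupportIPVSProxyMode=true --proxy-mode=ipvs
```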
Meanwhile, StorageClass now provides the opportunity to configure the reclaim policy for dynamically provisioned volumes, rather than always defaulting to delete. You can also use the new VolumeMount.Propagation field (still in alpha) to share mounts between containers, or even between containers and the host.
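Setting the reclaim policy looks like this (a sketch; the class name and provisioner are placeholders):

```yaml
# Dynamically provisioned volumes from this class are retained, not
# deleted, when their claims are released.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: durable
provisioner: kubernetes.io/aws-ebs
reclaimPolicy: Retain   # the default is Delete
```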
Developers have also been working on improving the ability to automatically discover and initialize new driver files, called Flexvolume drivers.
Look before you leap
Of course, an upgrade always means changes in behavior that you need to be aware of before committing to the new software so nothing bites you. For example, the release notes point out that “kubectl delete no longer scales down workload API objects prior to deletion. Users who depend on ordered termination for the Pods of their StatefulSet’s must use kubectl scale to scale down the StatefulSet prior to deletion.”
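In practice that means something like the following (assuming a StatefulSet named web):

```shell
# Scale the StatefulSet to zero so its pods terminate in reverse
# ordinal order, then delete the (now empty) StatefulSet.
kubectl scale statefulset web --replicas=0
kubectl delete statefulset web
```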
In fact, the release notes specify a number of specific actions you should take before upgrading. Some are simple, such as changing the version specifications for your objects, but others require more deliberate action, such as migrating off the removed ThirdPartyResource (TPR) API (move to CustomResourceDefinition to keep your data) and accounting for the fact that the pod.alpha.kubernetes.io/initialized annotation for StatefulSets is now ignored, so dormant StatefulSets for which this value is false “might become active after upgrading”.
Just be sure to check the release notes before you upgrade.
Source: Mirantis

How to build a shopping experience for the new economy

Evolving technology and the rapid expansion of interactive touch points and devices have fundamentally changed the way we shop. Consumers are taking charge of their own shopping experiences by using various channels to help them get the right information to make choices. They expect consistent services, values and experiences across channels, whether online or at a physical store.
How customers make decisions
It’s an open secret that people often buy based on emotion and then justify afterwards with logic. To appeal to both logic and emotion, a merchandiser needs to understand when the person is in which state. Logic makes people say, “I will think about it.” Emotion makes people say, “I want it now.”
A brand is emotionally appealing to a shopper when a credible person (through consumer reviews) reaffirms the customer’s own prior experiences with the brand. A brand is logically appealing when the perceived value of the merchandise is higher than its actual cost. Both of these states are reinforced when the brand delivers consistently across all channels.
With information available at their fingertips, consumers do their own research, seek reviews and form a perception of value as they compare competing brands. But to be a top-of-mind brand, your IT systems need to be well-oiled enough to deliver the right engagement for both the logical and the emotional states.
How to build the right shopping experience for customers
Shoppers expect a personalized service, a guided shopping experience, the choice to browse a broader variety of merchandise and the flexibility to order anytime and anywhere in a smooth manner.
Once a purchase decision is made, customers expect a seamless experience that enables them to use loyalty points, coupons, gift cards and so on, irrespective of the medium. Shoppers use innovative payment methods and expect retailers to be in step with them. The same is expected at the stage of product delivery in terms of flexibility of time, location or channel.
After-service, or returns, also form a critical part of reinforcing an emotional appeal. Customers expect the convenience of a product swap, straight refund or credit reusable across channels and touch points.
These ubiquitous technological advances, combined with the constant pressure on margins, are forcing a pace of change that is overwhelming for many retailers. The store can no longer operate as a silo, but instead it must become an integral part of an immersive multi-channel experience.
Join our webinar to listen as analysts, clients and IBM experts examine trends shaping customer engagement in the retail industry, and how API-led approaches are enabling companies to transform the experience across touch points to provide a seamless customer journey.
The post How to build a shopping experience for the new economy appeared first on Cloud computing news.
Source: Thoughts on Cloud

A new Planned Maintenance experience for your virtual machines

We’re excited to announce the availability of a new planned maintenance experience in Azure, providing you more control, better communication, and better visibility. While most planned maintenance is performed without any impact to your virtual machines by using memory-preserving maintenance, some operations do require a reboot to improve reliability, performance, and security.

What’s new?

More control: You now have the option to proactively initiate a redeploy of your VMs on your schedule within a pre-communicated window, ensuring that planned maintenance will be performed when it is most convenient for you.

Better communication: We added planned maintenance to the Azure Monitor experience where you can create log-based alerts. With Azure Monitor notifications, you can add multiple email recipients to maintenance alerts, receive SMS messages, and configure webhooks, which integrate with your third-party software, to alert you of upcoming maintenance.

Better visibility: We recently introduced Azure Service Health in the Azure Portal, which provides you planned maintenance information at the VM level. Additionally, we introduced Scheduled Events, which surfaces information, including upcoming planned maintenance, via REST API in the VM. You can use this capability as part of maintenance preparation. Lastly, you can view upcoming maintenance information via PowerShell and the CLI.
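As a sketch (the resource group and VM names are placeholders), the Azure CLI can show per-VM maintenance status via the instance view:

```shell
# Query the instance view of a VM; the maintenanceRedeployStatus block
# reports the self-service and scheduled maintenance windows, if any.
az vm get-instance-view --resource-group myGroup --name myVM \
    --query "instanceView.maintenanceRedeployStatus"
```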

Why should I consider proactive-redeploy?

During a communicated window, customers can choose to start maintenance on their virtual machines. If you do not utilize the window, the virtual machines will be rebooted automatically during a scheduled maintenance window (which is visible to you). Starting the maintenance will result in the VM being redeployed to an already-updated host. While doing so, the content of the local (temporary) drive will be lost.
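Starting the maintenance proactively can be done from the Azure CLI (a sketch; the resource group and VM names are placeholders):

```shell
# Proactively redeploy the VM to an already-updated host during the
# self-service window. Note: the local (temporary) disk contents are lost.
az vm perform-maintenance --resource-group myGroup --name myVM
```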

Native cloud applications running in a cloud service, availability set, or virtual machines scale set, are resilient to planned maintenance since only a single update domain is impacted at any given time.

You may want to use proactive-redeploy in the following cases:

Your application runs on a single virtual machine and you need to apply maintenance during off-hours.
You need to coordinate the time of the maintenance as part of your SLA.
You need more than 30 minutes between each VM restart even within an availability set.
You wish to take down the entire application (multiple tiers, multiple update domains) in order to complete the maintenance faster.

What should I do next?

Prior to the next planned maintenance in Azure:

Become familiar with how to proactively redeploy your VMs on Windows and Linux.

Create alerts and notifications in Azure Monitor.

Set up Scheduled Events for your Windows and Linux VMs.

For more information:

Watch Azure Friday on Planned Maintenance.

Watch Tuesdays with Corey on Planned Maintenance.

Source: Azure

[Podcast] Digging into Kubernetes 1.8

With the announcement of Kubernetes 1.8 expected this week, we decided to sit down with two of the project’s leaders, Clayton Coleman (@smarterclayton) and Derek Carr (@derekwaynecarr). We discussed the early days of Kubernetes, how the Kubernetes SIGs are organized and prioritize new capabilities, as well as some of their favorite features in Kubernetes 1.8. […]
Source: OpenShift

Provisioning for true zero-touch secure identity management for IoT

When you’re on a mission to deliver an awesome, complex IoT experience, the last thing you want to be doing is babysitting device identities at any stage of your solution. If you’re building a smart vehicle experience, you want to be thinking about fleets, services, and operational telemetry, not about how to transfer vehicle identities between owners, renters, insurance companies, and service providers. If you’re developing for a mobile factory experience like a cruise ship or an airline, you want to be thinking about geography-optimal predictive maintenance, not about cloud connection points and sovereign cloud-specific requirements. How you provision your IoT devices makes a world of difference in operational efficiency. Provisioning for true zero-touch secure identity management is the promise to minimize operational burden and maximize focus on the experience.

Until now, most claims of zero-touch provisioning have been about giving devices identities to connect to a cloud. What happens thereafter has largely been a mystery relegated to the IoT solutions developer. Developers of complex solutions are often left with no choice but to hack custom accommodations into their backends or manually manage the hand-off of device identities in operations. Both options are costly and burdensome, and most of all, they detract focus from the envisioned experience. Shouldn’t secure device identity and complete lifecycle management be a scalable building block in the IoT solution developer’s toolbox, so they can focus on just the IoT experience?

Well, we believe it should. Microsoft has been building towards answering this very question, and in the past few months, collaborated with partners to make this a reality. The solution originates with anchoring trust in secure silicon, from which standards are used to derive device unique certificate identities that are ingested, authenticated, and lifecycle managed at scale by Azure Device Provisioning Service (DPS).

Earlier this year, as part of Microsoft’s commitment to IoT security, we announced adoption of Trusted Computing Group’s DICE standard and new HSM partners committed to making DICE hardware available. We now extend this announcement to welcome Microchip into the fold. Microchip has made DICE hardware a reality through its CEC1702 family of secure silicon chips and an evaluation kit offering. You can also learn about this offering from the Azure IoT Catalog and purchase directly from the Microchip website. Designed for security and trust from the ground up, CEC1702 roots trust in secure silicon hardware and implements the DICE standard to generate device unique certificate identities that are trusted by any cloud service, including Azure DPS.

Azure DPS takes it from here to fully realize provisioning for a truly zero-touch secure identity management for the lifecycle of IoT devices. DPS extends trust from the secure silicon hardware into the cloud domain where it creates registries to facilitate managed identity services to include location, mapping, aging, and retirement. This wealth of capability is exposed to the IoT solutions developers as simple routing rules to keep their full attention on the IoT experience they are creating. They only need to add a DPS compliant secure hardware like CEC1702 into their IoT devices.

IoT has evolved to the stage where connecting to a cloud is no longer a novelty. Secured, lifecycle-managed device identity should just be another component of the IoT developer’s standard toolbox. Microsoft, in collaboration with secure silicon partners, is making this a reality. To learn more about Azure Device Provisioning Service, please visit our tutorial documentation.
Source: Azure