What is event streaming? The next step for business

As data proliferates across business types, event streaming is becoming ever more important. But what is event streaming? A simple way to think about it is that event streaming enables companies to analyze data that pertains to an event and respond to that event in real time.
Across markets, business activity is driven by events. When a business transaction takes place, such as a customer placing an order or a deposit landing in a bank account, that transaction is an event that drives a next step. With customers expecting responsive experiences when they interact with companies, the ability to make real-time decisions based on an event becomes critical.
Top 3 business uses of event streaming
The vast number of business events creates an incredible amount of data, which can make real-time decisions difficult. Companies must gain reliable insights that can lead to quick decisions and enhanced customer experiences. Event streaming can help.
Here are the top three reasons event streaming is important for businesses today:
1. Putting unused data to work.
Businesses have massive amounts of data everywhere.
For example, manufacturing companies have data on machine failures, time to completion, capacity peaks and flows, consumption data and more. Airlines have information on customer wait times, plane delays, maintenance records and ticket purchasing patterns, along with many other sources of data. Right now, much of this data sits collecting dust. Event streaming gives organizations a way to put it to use.
2. Taking advantage of real-time data insights.
One of the key tenets of event streaming is real-time insight and the ability to react to it. Say a customer is browsing online for a new TV, but it’s out of stock. It does the retailer no good to get insight into that event a week later; the customer has already gone somewhere else.
Companies should be able to take advantage of real-time insights. For example, if a customer shops at a specific store often, location data from cell phone traffic or public Wi-Fi can enable the store to send a targeted ad or coupon based on the customer’s location.
3. Creating better and more engaging customer experiences.
When a business puts the influx of data and real-time data insights together, there’s an opportunity to create better and more engaging experiences for customers.
With all the choices that customers have these days, winning hearts and ultimately business means not only having the greatest product, but also delivering the best and most engaging customer experience possible. By responding to situations as they are detected, companies can create new ways of engaging with their customers and improve customer sentiment.
Consider the example of an airline. When flights are canceled or delayed, customer service agents and desk agents are flooded with unhappy flyers. With event streaming capabilities, employees can see the event, in this case a canceled flight, and react to it in real time by rebooking passengers with similar itineraries, thereby creating a better customer experience.
All of this and more can be done through event streaming.
Apache Kafka and event streaming tools
Right now, the most prevalent and popular tool for event streaming is Apache Kafka, which allows users to send, store and request data when and where they need it. That’s where IBM Event Streams becomes helpful: it builds on Apache Kafka to make deployments repeatable, scalable and consistent, with a simple three-click deployment model. Because Kafka is an open source project, it’s constantly evolving. Users always want to run the latest version; however, enterprises cannot simply switch off their event streaming capabilities whenever they want to upgrade to a new Apache Kafka release. With IBM Event Streams, users can upgrade with zero downtime.
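The produce-and-consume pattern behind Kafka can be sketched without any Kafka infrastructure at all. As a loose illustration only (a Unix pipeline standing in for a topic, and made-up event names), the point is that the consumer handles each event as it arrives rather than after a batch completes:

```shell
# Not Kafka itself, just the shape of the idea: a producer emits
# events one per line, and a consumer reacts to each immediately.
printf 'order:tv\norder:phone\ndeposit:100\n' |
while read -r event; do
  echo "handled ${event}"
done
```

In a real deployment, Kafka plays the role of the pipe: a durable, replayable log that many producers and consumers share.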
If you’re in the New York area on 2 April or the London area on 13-14 May, join us at the Kafka Summit New York or the Kafka Summit London to have a conversation with an event streaming expert.
You can also explore more about event streaming by visiting the IBM Event Streams website.
The post What is event streaming? The next step for business appeared first on Cloud computing news.
Source: Thoughts on Cloud

Introduction to Kustomize, Part 2: Overriding values with overlays

In part 1 of this tutorial, we looked at how to use Kustomize to combine multiple pieces into a single YAML file that can be deployed to Kubernetes. In doing that, we used the example of combining specs for WordPress and MySQL, automatically adding a common app label. Now we’re going to move on and look at what happens when we need to override some of the existing values that aren’t labels.
Curious about what else is new in Kubernetes 1.14 (besides integration of Kustomize)? Join us for a live webinar on March 21.
Changing parameters for a component using Kustomize overlays
Now, we’re almost ready, but we do have one more problem.  While we’re deploying our production system to a cloud provider that supports LoadBalancer, we’re developing on our laptop so we need our services to be of type: NodePort.  Fortunately we can solve this problem with overlays.
Overlays enable us to take the base YAML and selectively change pieces of it.  For example, we’re going to create an overlay that includes a patch changing our Services to type NodePort.
It’s important that the overlay isn’t in the same directory as the base files (each directory can hold only one kustomization.yaml), so we’ll create it in an adjacent directory, then add a dev subdirectory.
OVERLAY_HOME=$BASE/../overlays
mkdir $OVERLAY_HOME
DEV_HOME=$OVERLAY_HOME/dev
mkdir $DEV_HOME
cd $DEV_HOME
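Assuming $BASE points at the base directory from part 1, the resulting layout looks like this (the parent directory name is hypothetical):

```
project/
├── base/          # kustomization.yaml plus the WordPress and MySQL specs
└── overlays/
    └── dev/       # will hold localserv.yaml and its own kustomization.yaml
```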
Next we want to create the patch file, $DEV_HOME/localserv.yaml:
apiVersion: v1
kind: Service
metadata:
  name: wordpress
spec:
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  type: NodePort
Notice that we’ve included the bare minimum of information here; just enough to identify each service we want to change, and then specify the change that we want to make — in this case, the type.
Now we need to create the $DEV_HOME/kustomization.yaml file to tie all of this together:
bases:
- ../../base
patchesStrategicMerge:
- localserv.yaml
Notice that this is really very simple; we’re pointing at our original base directory, and specifying the patch(es) that we want to add.
Now we can go ahead and build the original, and see that it’s untouched:
kustomize build $BASE
You can see that we still have LoadBalancer services:

spec:
  ports:
  - port: 3306
  selector:
    app: my-wordpress
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: my-wordpress
  name: wordpress
spec:
  ports:
  - port: 80
  selector:
    app: my-wordpress
  type: LoadBalancer
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  labels:

But if we build the overlay instead, we can see that we now have NodePort services:
$ kustomize build $DEV_HOME


  name: mysql-pass
type: Opaque
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: my-wordpress
  name: mysql
spec:
  ports:
  - port: 3306
  selector:
    app: my-wordpress
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: my-wordpress
  name: wordpress
spec:
  ports:
  - port: 80
  selector:
    app: my-wordpress
  type: NodePort
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:

Notice that everything is unchanged by the patch except the type.  Now let’s look at making use of these objects in kubectl.
Using Kustomize with kubectl
Now, all of this is great, but saving it to a file then running the file seems like a little bit of overkill.  Fortunately there are two ways we can feed this in directly. One is to simply pipe it in, as you would do with any other Linux program:
kustomize build $DEV_HOME | kubectl apply -f -
Or if you’re using Kubernetes 1.14 or above, you can simply use the -k parameter:
kubectl apply -k $DEV_HOME
secret "mysql-pass" created
service "mysql" created
service "wordpress" created
deployment.apps "mysql" created
deployment.apps "wordpress" created
This may not seem like a big deal, but consider this example from the documentation, showing the old way of doing things:
kubectl create secret docker-registry myregistrykey --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
secret/myregistrykey created
Versus the new way, where we create a kustomization.yaml file:
secretGenerator:
- name: myregistrykey
  type: docker-registry
  literals:
  - docker-server=DOCKER_REGISTRY_SERVER
  - docker-username=DOCKER_USER
  - docker-password=DOCKER_PASSWORD
  - docker-email=DOCKER_EMAIL
Then simply reference it using the -k parameter:
$ kubectl apply -k .
secret/myregistrykey-66h7d4d986 created
Considering that kustomization.yaml files can be stored in repos and subject to version control, where they can be tracked and more easily managed, this provides a much cleaner way to manage your infrastructure as code.
There are, of course, other things you can do with Kustomize, including adding name prefixes, generating ConfigMaps, and passing down environment variables, but we’ll leave that for another time.  (Let us know in the comments if you’d like to see that sooner rather than later.)
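As a quick sketch of two of those features, a kustomization.yaml could combine a name prefix with a generated ConfigMap like this (namePrefix and configMapGenerator are real Kustomize fields; the names and values here are invented for illustration):

```yaml
namePrefix: dev-                # prepended to every resource name
configMapGenerator:
- name: wordpress-config        # hypothetical ConfigMap of environment settings
  literals:
  - WORDPRESS_DEBUG=1
```

Like the secretGenerator above, the generated ConfigMap gets a content-based hash suffix, so resources that reference it roll over automatically when the data changes.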
Meanwhile, if you’d like to see more of what’s new in Kubernetes 1.14, don’t forget to join us for that live webinar on March 21.  
The post Introduction to Kustomize, Part 2: Overriding values with overlays appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

NUC8 (Crimson Canyon) review: AMD rescues Intel's 10 nm mini PC

The NUC8, also known as Crimson Canyon, is a technically interesting mini PC: aside from the Radeon graphics unit, however, Intel's Cannon Lake chip with its soldered-on memory disappoints, and the preinstalled hard drive makes the system annoyingly sluggish. With an SSD it gets better, and even more expensive. A review by Marc Sauter and Sebastian Grüner (Intel NUC, processor)
Source: Golem