Honda and Watson team up for safe driving

At IBM InterConnect, IBM CEO Ginni Rometty talked about the three principles of augmented intelligence in this AI era: service to mankind, transparency, and skills.
She also discussed how IBM Watson can make us a better version of ourselves. There’s no better example of this than Honda R&D’s Driver Coaching System prototype that helps new as well as older drivers learn how to spot and avoid potentially dangerous road situations.
Honda R&D realized that the Japanese and US markets share the same issues: an aging population and a growing share of young drivers. In both countries, these two groups are the most likely to be in deadly auto accidents.
Honda R&D has analyzed the behaviors, skills and judgments that take place in real time as experienced drivers successfully encounter dangerous situations. By understanding the behaviors of very skilled drivers — how they gauge and react to risk — it can apply that knowledge to all drivers. The focus is on elderly and new drivers, who are especially prone to accidents, and on how lives can be saved by actively coaching them.
Good driving deconstructed
Let’s assume an experienced driver can spot danger a few seconds faster than a novice. The Driver Coaching System can give that extra reaction time to drivers of all experience levels. In addition, it supports a new driver facing the anxiety of operating a car: Watson acts as a safe-driving coach, engaging in an encouraging conversation as the new driver builds confidence and learns to spot dangerous situations.
I love to drive. I own a fairly exotic car and often “open it up” to feel the raw exhilaration of speed, power and control. My mind clears and focuses solely on the drive and the extremes of the car: the calculus of the curves, the feel of acceleration, timing of shifting and braking. It takes concentration, awareness, risk evaluation, and reaction. I have experience. But does that make me a good driver?
Honda R&D has deconstructed what it takes to be a good driver.
How does it work?
The Driver Coaching System continually monitors the driving situation, or “the scene,” evaluating speed (overall and relative to other cars), distance to surrounding objects, adherence to the lane, and braking times and distances. Watson uses that information to offer real-time guidance and advice in Japanese.
It’s also gauging the driver and classifying the driver’s skill and mental state based on changing behaviors and conditions.
I learned the hard way that I’m not a “good driver” all the time. I wrecked a rental car in an accident leaving an airport in an unfamiliar city. In that situation, I was more like a novice driver with the anxiety of driving an unfamiliar car in an unfamiliar place.
The Driver Coaching System detects whether a driver is driving outside their norms and is perhaps anxious, distracted or tired. With this information, the prototype can classify the driver’s current state: normal or anxious, attentive or inattentive, driving conservatively or aggressively. It adjusts its coaching to fit the driver’s state of mind.
The scene and the driver’s behavior determine the coaching Watson provides to the driver. The goal is for Watson’s coaching to be timely, friendly, supportive and welcomed. Watson is coaching the driver in Japanese.
How is Watson’s Japanese?
Japanese is a high-context language: meaning is expressed between the lines. The listener has to have background knowledge along multiple dimensions to grasp the intended meaning of what’s being said. As Watson continues to improve how it speaks and understands Japanese, it has to truly appreciate and apply this cultural context. This is quite different from how Watson originally learned English.
“A conversation with Watson is getting more accurate and showing improved understanding of human intention,” said Yoshimitsu Akuta, chief engineer, Honda R&D.  “I look forward to more possibilities with the context of the Japanese language. I think both English and Japanese speakers will be excited to have conversations with Watson as their friend in the car.”

Rapid prototype
The team at Honda R&D, led by Akuta, used agile development and had a proof of concept in about two months. The Driver Coaching System was built on IBM Bluemix and uses Watson Conversation, Watson Natural Language Understanding and Watson Translator.
Build with these services and many more on the Watson Developer Cloud.
Source: Thoughts on Cloud

AT&T and IBM partner for analytics with Watson

Today at IBM InterConnect, we learned that IBM is partnering with AT&T to support enterprise customers’ Internet of Things (IoT) deployments with data insights.
IoT data is huge for business customers, but it is only valuable with real-time, meaningful insights.
AT&T will be using a variety of IBM products including:

Watson IoT Platform: to build the next generation of connected industrial IoT devices and products that continuously learn from the physical world
IBM Watson Data Platform: the fastest data ingestion engine, combined with cognitive-powered decision making to help uncover business insights and value from data, whether from the weather, the road, social media or customer sales data
IBM Machine Learning Service: used by AT&T to give their customers access to machine learning

Benefiting AT&T customers
Companies can use IoT data to predict their machine maintenance, but how does this impact AT&T customers?
For example, say an oil and gas company wants to detect unusual events in its wells. By using AT&T’s IoT network and the IBM Watson Data Platform, AT&T’s IoT analytics solutions will ingest data from hundreds of wells, creating the models necessary with appropriate machine learning libraries and open source technology to help predict potential failures or machine malfunctions. The company will be able to detect anomalies in less time and with greater accuracy.
“We have more than 30 million connections on our network today and that number continues to grow, primarily driven by enterprise adoption,” said Chris Penrose, president of IoT solutions at AT&T. “Integrating the IBM Watson Data Platform into our IoT capabilities will be huge for our enterprise customers.”
Bringing IoT innovations to market
The news today builds on existing collaborations between AT&T and IBM to deliver new IoT innovations to the market. The companies’ strategic alliance brings together leading wireless connectivity, advanced analytics and cognitive capabilities for AT&T’s enterprise customers to improve their business processes.
Stay tuned for further announcements live from IBM InterConnect.
Start your next IoT project.
A version of this article originally appeared on the IBM Internet of Things blog.
Source: Thoughts on Cloud


YouTube Says It Wrongly Blocked Some LGBT Videos In "Restricted Mode"

The video site’s “restricted mode” aims to filter sensitive content, but several LGBT vloggers and artists say it went too far.

YouTube apologized on Monday after several prominent LGBT video creators accused the site of censoring their videos with a filtering mechanism that flags and hides content as inappropriate.

The site’s “restricted mode” lets users filter out “potentially objectionable content,” the platform says, but some vloggers said it’s actually hiding pro-LGBT material.

Videos ranging from a makeup lesson for trans women to an LGBT couple reciting wedding vows were no longer visible after the filter was enacted.



Source: BuzzFeed

Using Kubernetes Helm to install applications

After reading this introduction to Kubernetes Helm, you will know how to:

Install Helm
Configure Helm
Use Helm to determine available packages
Use Helm to install a software package
Retrieve a Kubernetes Secret
Use Helm to delete an application
Use Helm to roll back changes to an application

Difficulty is a relative thing. Deploying an application using containers can be much easier than trying to manage deployments of a traditional application over different environments, but trying to manage and scale multiple containers manually is much more difficult than orchestrating them using Kubernetes. But even managing Kubernetes applications looks difficult compared to, say, “apt-get install mysql”. Fortunately, the container ecosystem has now evolved to that level of simplicity. Enter Helm.
Helm is a Kubernetes-based package installer. It manages Kubernetes “charts”, which are “preconfigured packages of Kubernetes resources.” Helm enables you to easily install packages, make revisions, and even roll back complex changes.
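In practice the whole lifecycle boils down to a handful of commands, previewed here and walked through in the rest of this article (<release-name> stands for whatever name Helm assigns your deployment):
$ helm search
$ helm install stable/mysql
$ helm ls
$ helm delete <release-name>
$ helm rollback <release-name> 1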
Next week, my colleague Maciej Kwiek will be giving a talk at Kubecon about Boosting Helm with AppController, so we thought this might be a good time to give you an introduction to what it is and how it works.
Let’s take a quick look at how to install, configure, and utilize Helm.
Install Helm
Installing Helm is actually pretty straightforward.  Follow these steps:

Download the latest version of Helm from https://github.com/kubernetes/helm/releases.  (Note that if you are using an older version of Kubernetes (1.4 or below) you might have to downgrade Helm due to breaking changes.)
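For example, on a Mac you might pull down the 2.2.3 client like this (a sketch only; the exact URL for your platform is linked from the releases page, so treat this one as an assumption):
$ curl -LO https://storage.googleapis.com/kubernetes-helm/helm-v2.2.3-darwin-amd64.tar.gz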
Unpack the archive:
$ gunzip helm-v2.2.3-darwin-amd64.tar.gz
$ tar -xvf helm-v2.2.3-darwin-amd64.tar
x darwin-amd64/
x darwin-amd64/helm
x darwin-amd64/LICENSE
x darwin-amd64/README.md
Next move the helm executable to your path:
$ mv dar*/helm /usr/local/bin/.

Finally, initialize helm to both set up the local environment and to install the server portion, Tiller, on your cluster.  (Helm will use the default cluster for Kubernetes, unless you tell it otherwise.)
$ helm init
Creating /Users/nchase/.helm
Creating /Users/nchase/.helm/repository
Creating /Users/nchase/.helm/repository/cache
Creating /Users/nchase/.helm/repository/local
Creating /Users/nchase/.helm/plugins
Creating /Users/nchase/.helm/starters
Creating /Users/nchase/.helm/repository/repositories.yaml
Writing to /Users/nchase/.helm/repository/cache/stable-index.yaml
$HELM_HOME has been configured at /Users/nchase/.helm.

Tiller (the helm server side component) has been installed into your Kubernetes Cluster.
Happy Helming!
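
At this point a quick sanity check confirms that the client can reach Tiller (your version numbers will differ):
$ helm version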

Note that you can also upgrade the Tiller component using:
helm init --upgrade
That’s all it takes to install Helm itself; now let’s look at using it to install an application.
Install an application with Helm
One of the things that Helm does is enable authors to create and distribute their own applications using charts; to get a full list of the charts that are available, you can simply ask:
$ helm search
NAME                          VERSION DESCRIPTION                                       
stable/aws-cluster-autoscaler 0.2.1   Scales worker nodes within autoscaling groups.    
stable/chaoskube              0.5.0   Chaoskube periodically kills random pods in you…
stable/chronograf             0.1.2   Open-source web application written in Go and R…

In our case, we’re going to install MySQL from the stable/mysql chart. Follow these steps:

First update the repo, just as you’d do with apt-get update:
$ helm repo update
Hang tight while we grab the latest from your chart repositories…
…Skip local chart repository
Writing to /Users/nchase/.helm/repository/cache/stable-index.yaml
…Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈

Next, we’ll do the actual install:
$ helm install stable/mysql
This command produces a lot of output, so let’s take it one step at a time. First, we get information about the release that’s been deployed:
NAME:   lucky-wildebeest
LAST DEPLOYED: Thu Mar 16 16:13:50 2017
NAMESPACE: default
STATUS: DEPLOYED
As you can see, it’s called lucky-wildebeest, and it’s been successfully DEPLOYED.
Your release will, of course, have a different name. Next, we get the resources that were actually deployed by the stable/mysql chart:
RESOURCES:
==> v1/Secret
NAME                    TYPE    DATA  AGE
lucky-wildebeest-mysql  Opaque  2     0s

==> v1/PersistentVolumeClaim
NAME                    STATUS  VOLUME                                    CAPACITY  ACCESSMODES  AGE
lucky-wildebeest-mysql  Bound   pvc-11ebe330-0a85-11e7-9bb2-5ec65a93c5f1  8Gi       RWO          0s

==> v1/Service
NAME                    CLUSTER-IP  EXTERNAL-IP  PORT(S)   AGE
lucky-wildebeest-mysql  10.0.0.13   <none>       3306/TCP  0s

==> extensions/v1beta1/Deployment
NAME                    DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
lucky-wildebeest-mysql  1        1        1           0          0s
This is a good example because we can see that this chart configures multiple types of resources: a Secret (for passwords), a persistent volume (to store the actual data), a Service (to serve requests) and a Deployment (to manage it all).
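If you want to list these resources yourself, the chart labels everything with the release name (you can see the labels in the Secret metadata shown later in this article), so one hedged one-liner is:
$ kubectl get secret,pvc,svc,deployment -l release=lucky-wildebeest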
The chart also enables the developer to add notes:
NOTES:
MySQL can be accessed via port 3306 on the following DNS name from within your cluster:
lucky-wildebeest-mysql.default.svc.cluster.local

To get your root password run:
   kubectl get secret --namespace default lucky-wildebeest-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo

To connect to your database:
Run an Ubuntu pod that you can use as a client:
   kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never -- bash -il

Install the mysql client:
   $ apt-get update && apt-get install mysql-client -y

Connect using the mysql cli, then provide your password:
$ mysql -h lucky-wildebeest-mysql -p

These notes are the basic documentation a user needs to use the actual application. Now let’s see how we put it all to use.
Connect to mysql
The first lines of the notes make it seem deceptively simple to connect to MySQL:
MySQL can be accessed via port 3306 on the following DNS name from within your cluster:
lucky-wildebeest-mysql.default.svc.cluster.local
Before you can do anything with that information, however, you need to do two things: get the root password for the database, and get a working client with network access to the pod hosting it.
Get the mysql password
Most of the time, you’ll be able to get the root password by simply executing the code the developer has left you:
$ kubectl get secret --namespace default lucky-wildebeest-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo
DBTzmbAikO
Some systems — notably macOS — will give you an error:
$ kubectl get secret --namespace default lucky-wildebeest-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo
Invalid character in input stream.
This is because of an error in base64 that adds an extraneous character. In this case, you will have to extract the password manually. Basically, we’re going to execute the same steps as this line of code, but one at a time.
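Before decoding by hand, one possible shortcut is to try the decode flag your local base64 actually supports; on macOS that has historically been -D rather than --decode (an assumption worth verifying against your base64 man page):
$ kubectl get secret --namespace default lucky-wildebeest-mysql -o jsonpath="{.data.mysql-root-password}" | base64 -D; echo
If that still complains, the manual steps below always work.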
Start by looking at the Secrets that Kubernetes is managing:
$ kubectl get secrets
NAME                     TYPE                                  DATA      AGE
default-token-0q3gy      kubernetes.io/service-account-token   3         145d
lucky-wildebeest-mysql   Opaque                                2         20m
It’s the second, lucky-wildebeest-mysql, that we’re interested in. Let’s look at the information it contains:
$ kubectl get secret lucky-wildebeest-mysql -o yaml
apiVersion: v1
data:
 mysql-password: a1p1THdRcTVrNg==
 mysql-root-password: REJUem1iQWlrTw==
kind: Secret
metadata:
 creationTimestamp: 2017-03-16T20:13:50Z
 labels:
   app: lucky-wildebeest-mysql
   chart: mysql-0.2.5
   heritage: Tiller
   release: lucky-wildebeest
 name: lucky-wildebeest-mysql
 namespace: default
 resourceVersion: "43613"
 selfLink: /api/v1/namespaces/default/secrets/lucky-wildebeest-mysql
 uid: 11eb29ed-0a85-11e7-9bb2-5ec65a93c5f1
type: Opaque
You probably already figured out where to look, but the developer’s instructions told us the raw password data was here:
jsonpath="{.data.mysql-root-password}"
So we’re looking for this:
apiVersion: v1
data:
 mysql-password: a1p1THdRcTVrNg==
 mysql-root-password: REJUem1iQWlrTw==
kind: Secret
metadata:

Now we just have to go ahead and decode it:
$ echo "REJUem1iQWlrTw==" | base64 --decode
DBTzmbAikO
Finally! So let’s go ahead and connect to the database.
Create the mysql client
Now we have the password, but if we try to just connect with the mysql client on any old machine, we’ll find that there’s no connectivity outside of the cluster. For example, if I try to connect with my local mysql client, I get an error:
$ ./mysql -h lucky-wildebeest-mysql.default.svc.cluster.local -p
Enter password:
ERROR 2005 (HY000): Unknown MySQL server host 'lucky-wildebeest-mysql.default.svc.cluster.local' (0)
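One lightweight alternative, if you only need quick access from your own workstation, is to tunnel the port with kubectl port-forward (a sketch; substitute your own pod name from kubectl get pods):
$ kubectl port-forward lucky-wildebeest-mysql-3326348642-b8kfc 3306:3306
$ mysql -h 127.0.0.1 -p
For this article, though, we’ll follow the chart notes and run the client inside the cluster.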
So what we need to do is create a pod on which we can run the client.  Start by creating a new pod using the ubuntu:16.04 image:
$ kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never

$ kubectl get pods
NAME                                      READY     STATUS             RESTARTS   AGE
hello-minikube-3015430129-43g6t           1/1       Running            0          1h
lucky-wildebeest-mysql-3326348642-b8kfc   1/1       Running            0          31m
ubuntu                                   1/1       Running            0          25s
When it’s running, go ahead and attach to it:
$ kubectl attach ubuntu -i -t

Hit enter for command prompt
Next install the mysql client:
root@ubuntu2:/# apt-get update && apt-get install mysql-client -y
Get:1 http://archive.ubuntu.com/ubuntu xenial InRelease [247 kB]
Get:2 http://archive.ubuntu.com/ubuntu xenial-updates InRelease [102 kB]

Setting up mysql-client-5.7 (5.7.17-0ubuntu0.16.04.1) …
Setting up mysql-client (5.7.17-0ubuntu0.16.04.1) …
Processing triggers for libc-bin (2.23-0ubuntu5) …
Now we should be ready to actually connect. Remember to use the password we extracted in the previous step.
root@ubuntu2:/# mysql -h lucky-wildebeest-mysql -p
Enter password:

Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 410
Server version: 5.7.14 MySQL Community Server (GPL)

Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
Of course you can do what you want here, but for now we’ll go ahead and exit both the database and the container:
mysql> exit
Bye
root@ubuntu2:/# exit
logout
So we’ve successfully installed an application — in this case, MySQL — using Helm. But what else can Helm do?
Working with revisions
So now that you’ve seen Helm in action, let’s take a quick look at what you can actually do with it. Helm is designed to let you install, upgrade, delete, and roll back revisions. We’ll get into more details about upgrades in a later article on creating charts, but let’s quickly look at deleting and rolling back revisions.
First off, each time you make a change with Helm, you’re creating a revision. By deploying MySQL, we created a revision, which we can see in the list produced by helm ls:
NAME              REVISION UPDATED                  STATUS   CHART         NAMESPACE
lucky-wildebeest  1        Sun Mar 19 22:07:56 2017 DEPLOYED mysql-0.2.5   default
operatic-starfish 2        Thu Mar 16 17:10:23 2017 DEPLOYED redmine-0.4.0 default
As you can see, we created a revision called lucky-wildebeest, based on the mysql-0.2.5 chart, and its status is DEPLOYED.
We could also get back the information we got when it was first deployed by getting the status of the revision:
$ helm status lucky-wildebeest
LAST DEPLOYED: Sun Mar 19 22:07:56 2017
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Secret
NAME                    TYPE    DATA  AGE
lucky-wildebeest-mysql  Opaque  2     43m

==> v1/PersistentVolumeClaim
NAME                    STATUS  VOLUME                                    CAPACITY  ACCESSMODES  AGE
lucky-wildebeest-mysql  Bound   pvc-08e0027a-0d12-11e7-833b-5ec65a93c5f1  8Gi       RWO          43m

Now, if we wanted to, we could go ahead and delete the revision:
$ helm delete lucky-wildebeest
Now if you list all of the active revisions, it’ll be gone.
$ helm ls
However, even though the revision is gone, you can still see its status:
$ helm status lucky-wildebeest
LAST DEPLOYED: Sun Mar 19 22:07:56 2017
NAMESPACE: default
STATUS: DELETED

NOTES:
MySQL can be accessed via port 3306 on the following DNS name from within your cluster:
lucky-wildebeest-mysql.default.svc.cluster.local

To get your root password run:

   kubectl get secret --namespace default lucky-wildebeest-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo

To connect to your database:

Run an Ubuntu pod that you can use as a client:

   kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never -- bash -il

Install the mysql client:

   $ apt-get update && apt-get install mysql-client -y

Connect using the mysql cli, then provide your password:

$ mysql -h lucky-wildebeest-mysql -p
OK, so what if we decide that we’ve changed our mind, and we want to roll back that deletion? Fortunately, Helm is designed for that. We can specify that we want to roll back our application to a specific revision (in this case, 1).
$ helm rollback lucky-wildebeest 1
Rollback was a success! Happy Helming!
We can see that the application is back, and the revision has been incremented:
NAME              REVISION UPDATED                  STATUS   CHART         NAMESPACE
lucky-wildebeest  2        Sun Mar 19 23:46:52 2017 DEPLOYED mysql-0.2.5   default

We can also check the status:
$ helm status lucky-wildebeest
LAST DEPLOYED: Sun Mar 19 23:46:52 2017
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Secret
NAME                    TYPE    DATA  AGE
lucky-wildebeest-mysql  Opaque  2     21m

==> v1/PersistentVolumeClaim
NAME                    STATUS  VOLUME                                    CAPACITY  ACCESSMODES  AGE
lucky-wildebeest-mysql  Bound   pvc-dad1b896-0d1f-11e7-833b-5ec65a93c5f1  8Gi       RWO          21m

Next time, we’ll talk about how to create charts for Helm. Meanwhile, if you’re going to be at Kubecon, don’t forget Maciej Kwiek’s talk on Boosting Helm with AppController.
Source: Mirantis

OpenStack Developer Mailing List Digest March 11-17

SuccessBot Says

Dims [1]: Nova now has a python35 based CI job in check queue running Tempest tests (everything running on py35)
jaypipes [2]: Finally got a good functional test created that stresses the Ironic and Nova integration and migration from Newton to Ocata.
Lbragstad [3]: the OpenStack-Ansible project has a test environment that automates rolling upgrade performance testing
annegentle [4]: Craig Sterrett and the App Dev Enablement WG: New links to more content for the appdev docs [5]
jlvillal [6]: Ironic team completed the multi-node grenade CI job
Tell us yours via OpenStack IRC channels with the message “#success <message>”

All: [7]

Pike Release Management Communication

The release liaison is responsible for:

Coordinating with the release management team.
Validating your team’s release requests.
Ensuring release cycle deadlines are met.
It’s encouraged to nominate a release liaison; otherwise, this task falls to the PTL.

Ensure the release liaison has the time and ability to handle the necessary communication.

Failing to follow through on a needed process step may block you from meeting deadlines or releasing, as our milestones are date-based, not feature-based.

Three primary communication tools:

Email for announcements and asynchronous communication

“[release]” topic tag on the openstack-dev mailing list.
This includes the weekly release countdown emails with details on focus, tasks, and upcoming dates.

IRC for time sensitive interactions

With more than 50 teams, the release team relies on your presence in the freenode #openstack-release channel.

Written documentation for relatively stable information

The release team has published the schedule for the Pike cycle [8]
You can add the schedule to your own calendar [9]

Things to do right now:

Update your release liaisons [10].
Make sure your IRC nick and email address are listed in projects.yaml [11].

Update your mail filters to look for “[release]” in the subject line.
Full thread [12]

OpenStack Summit Boston Schedule Now Live!

Main conference schedule [13]
Register now [14]
Hotel discount rates for attendees [15]
Stackcity party [16]
Take the certified OpenStack Administrator exam [17]
City guide of restaurants and must see sites [18]
Full thread [19]

Some Information About the Forum at the Summit in Boston

“Forum” proper

3 medium sized fishbowl rooms for cross-community discussions.
Selected and scheduled by a committee formed of TC and UC members, facilitated by the Foundation staff members.
Brainstorming for topics [20]

“On-boarding” rooms

Two rooms setup classroom style for projects teams and workgroups who want to on-board new team members.
Examples include providing introduction to your codebase for prospective new contributors.
These should not be traditional “project intro” talks.

Free hacking/meetup spaces

Four to five rooms populated with roundtables for ad-hoc discussions and hacking.

Full thread [21]

 
The Future of the App Catalog

Created in early 2015 as a marketplace of pre-packaged applications [22] that you can deploy using Murano.
This has grown to 45 Glance images, 13 Heat templates and 6 Tosca templates, but otherwise it did not pick up a lot of steam.
~30% are just thin wrappers around Docker containers.
Traffic stats show 100 visits per week, 75% of which only read the index page.
In parallel, Docker developed a pretty successful containerized application marketplace (Docker Hub) with hundreds or thousands of regularly updated apps.

Keeping the catalog around makes us look like we are unsuccessfully trying to compete with that ecosystem, while OpenStack is in fact complementary.

In the past, we have retired projects that were dead upstream.

The app catalog does, however, have an active maintenance team.
If we retire the app catalog, it would not be a reflection on that team’s performance, but an acknowledgment that the beta was arguably not successful in building an active marketplace and was not a great fit from a strategy perspective.

Two approaches for users today to deploy docker apps in OpenStack:

Container-native approach: using “docker run” on a Nova instance or on a Kubernetes cluster built with Magnum.
OpenStack-native approach: “zun create nginx”.

Full thread [23][24]

ZooKeeper vs etcd for Tooz/DLM

Devstack defaults to ZooKeeper and is opinionated about it.
Lots of container related projects are using etcd [25], so do we need to avoid both ZooKeeper and etcd?
For things like databases and message queues, it&8217;s more than time for us to contract on one solution.

For DLMs ZooKeepers gives us mature/ featureful angle. Etcd covers the Kubernetes cooperation / non-java angle.

OpenStack interacts with DLM&8217;s via the library Tooz. Tooz today only supports etcd v2, but v3 is planned which would support GRPC.
The OpenStack gate will begin to default to etcd with Tooz.
Full thread [26]

Small Steps for Go

An etherpad [27] has been started to begin tackling the new language requirements [28] for Go.
A golang-commons repository exists [29].
Gophercloud versus having a golang-client project is being discussed in the etherpad. Regardless, we need support for os-client-config.
Full thread [30]

POST /api-wg/news

Guidelines under review:

Add API capabilities discovery guideline [31]
Refactor and re-validate API change guidelines [32]
Microversions: add next_min_version field in version body [33]
WIP: microversion architecture archival doc [34]

Full thread [35]

Proposal to Rename Castellan to oslo.keymanager

Castellan is a Python abstraction over different key manager solutions such as Barbican. Implementations like Vault could be supported, but currently are not.
The rename would emphasize that Castellan is an abstraction layer.

Similar to oslo.db supporting MySQL and PostgreSQL.

Instead of becoming oslo.keymanager, Castellan could be rolled into the oslo umbrella without a rename; Tooz sets the precedent for this.
Full thread [36]

Release Countdown for week R-23 and R-22

Focus:

Specification approval and implementation for priority features for this cycle.

Actions:

Teams should research how they can meet the Pike release goals [37][38].
Teams that want to change their release model should do so before end of Pike-1 [39].

Upcoming Deadlines and Dates

Boston Forum topic formal submission period: March 20 – April 2
Pike-1 milestone: April 13 (R-20 week)
Forum at OpenStack Summit in Boston: May 8-11

Full thread [40]

Deployment Working Group

Mission: To collaborate on best practices for deploying and configuring OpenStack in production environments.
Examples:

OpenStack Ansible and Puppet OpenStack have been collaborating on Continuous Integration scenarios as well as on Nova upgrade orchestration.
TripleO and Kolla share the same tool for container builds.
TripleO and Fuel share the same Puppet OpenStack modules.
OpenStack and Kubernetes are interested in collaborating on configuration management.
Most of the tools want to collect OpenStack parameters for configuration management in a common fashion.

A wiki [41] has been started to document how the group will work together, and there is also an etherpad [42] for brainstorming.

 
Source: openstack.org

Bluemix, Watson and bot mania: The cognitive era plays hard at SXSW 

The IBM activation this past week in downtown Austin earned it the number three slot in AdWeek’s compilation of the top eight topics that had attendees buzzing at South by Southwest.
No wonder. IBM at SXSW 2017 enticed developers to the golden age of cognitive by amping up its Bluemix services offerings, specifically around the APIs used to help Watson engage more sentiently with humans. IBM gave SXSW attendees access to Watson to create a bot, remix a song, design a t-shirt, or get a beer recommendation.
With no required badge, a full open bar and DJs on its mega roof deck, the IBM activation was fueled by a regular flow of deep dives at the Maker’s Garage and live talks with IBM heavyweights including CEO Ginni Rometty and Bob Sutor, IBM vice president of cognitive, blockchain and quantum solutions.
Conversation elevated from the cloud infrastructure layer to services throughout the entire activation. With Bluemix getting more recognition thanks to the Watson platform, the event spoke heavily to developers looking for a platform to build on and ways to pull together advanced applications.
Demo areas struck the right tone with non-developers, showing not only how Watson is making the world healthier, more secure, personal, creative and engaged, but also how Watson can now respond to human emotions, preferences and taste palates.
With SXSW interactive dovetailing into the mainstay SXSW music event, the creative aspects of Watson got lots of attention, giving musicians and enthusiasts an opportunity to collaborate and, even better, play with one of the world’s most advanced APIs.
Watson Beat, a research project born out of IBM’s Austin research lab, uses cognitive technology to compose and transform music, remixing any piece of music using a mood-driven palette to create a personal piece that suits the user’s emotional space.
Meanwhile, TJBot, an open source project, is designed to enable users to play with Watson Services such as Speech to Text, which teaches it to understand human speech. TJBot also demonstrates how Watson can hold a conversation and even respond to different moods using Personality Insights, which can analyze the emotive content of speech in real time.

Capitalizing on the year of the bot
IBM may indeed have had the edge on SXSW’s fever pitch around bots, thanks to Watson and Bluemix.
In one SXSW featured session, the IBM events team got together with Vancouver’s Eventbase, creators of SXSW’s Go Mobile App, to share perspectives on how mobile apps and, more broadly, human experience can be enhanced with augmented intelligence.
This year, both SXSW and IBM’s Events mobile app (debuting the week of 19 March) feature intelligent, conversational user interfaces that act as personal concierge services.
“IBM sits in a unique position to provide a platform for bots and other customer experiences,” said Ted Conroy, IBM Events lead. “The appetite for bespoke, personalized experiences is voracious, and IBM’s cognitive services definitely can feed it.”
As Conroy pointed out, bots today use a simple, cognitive service to respond to sets of questions. When a service can’t answer, it defaults to scripted answers. Soon, bots will be proactive and able to choose the optimal cognitive service to best answer a broad set of questions without the current context limitations.
Check out how to build a bot in 15 minutes with Watson conversation in this demo.
Learn how to build a TJ bot of your own here.
Source: Thoughts on Cloud

A new tool to manage multicloud with speed and control

In this post, we’ll cover the most critical steps to adopting a multicloud strategy. But first, some exciting news: we will be announcing the IBM Cloud Automation Manager at InterConnect 2017.
IBM Cloud Automation Manager (CAM) will be released on March 17, 2017. Companies can use CAM to manage their multicloud environments through a single dashboard. It will provide cognitive operations to facilitate deployment of workloads to the cloud based on application requirements. IT operations won’t have to guess ever again.
Why does multicloud management matter?
Three out of four companies today have deployed more than one cloud. Is your organization leveraging multiple clouds to run business applications? Is your IT operations team able to build, maintain and operate these multicloud environments with speed, security, compliance and enterprise-grade quality?
The rapid expansion of business cloud portfolios requires a uniform cloud management platform: one that lets companies manage multicloud environments without losing the visibility, governance or operational control that IT teams require to address business needs. Companies need a solution that provides:

Speed and agility
Control and compliance
A single “pane of glass” to manage multicloud environments
Support for traditional and emerging technologies such as containers, cognitive and analytics

What is a multicloud management solution?
A multicloud management solution will enable you to automatically deploy and manage multicloud environments. At the same time, it will provide easy access for developers to rapidly and securely create applications within company policy.
Multicloud management solutions typically provide automatic provisioning and workflow management capabilities. They also accelerate application deployment and automate manual or scripted tasks to request, change or deploy standardized cloud services. You can execute these tasks across a range of cloud platforms, often leveraging other automation tools such as configuration management.
What’s stopping enterprises from adopting multicloud management solutions? Many companies manage cloud services as silos of workloads and platforms. They may already have multiple tools to manage their on-premises and off-premises cloud services. Adopting new technology may happen piecemeal, and some IT staff may resist change.
Explore multicloud at InterConnect 2017
As you finalize your InterConnect schedule, check out the following multicloud sessions.

Session: Transform your IT operations with IBM Cloud Automation Manager
Automation is at the heart of cloud management in hybrid cloud environments. Hear about the new IBM Cloud Automation Manager offering from our esteemed panel: Judith Hurwitz, Hurwitz & Associates; Justin Youngblood, IBM; Vishal Rajpal, Perficient; and Markus Echser, SwissRe.
Session: Hybrid cloud management: Trends, opportunities and IBM’s strategy
As businesses procure cloud resources from multiple IT vendors, they are looking for a single tool that can agnostically manage these complex environments with ease. Hear from IBM experts and clients about the IBM role in shaping the future of hybrid cloud management.
Session: Introduction to IBM Cloud Automation Manager
This session will deliver a technical introduction to IBM’s new Cloud Automation Manager offering. The new tool supports the orchestration, automated provisioning and configuration, as well as lifecycle management of resources across a variety of target clouds. Watch the demo of the new solution and get a look into the future.
Session: Hey IT operators: Automation content makes your job easier
In this session, learn from a survey of real-world accomplishments and quantitative results achieved by using automation content. Use cases from different industries will be shared to illustrate how clients could use blueprints and templates to get their jobs done while deploying applications in the cloud.
Session: Integrating hybrid cloud management with other services
IBM Cloud Automation Manager allows you to integrate multicloud services with additional DevOps, configuration, monitoring, logs, events, security, costs and identity management available in your business. Learn how this new capability applies to both on premises and off-premises cloud infrastructure and applications.
InterConnect 2017 is just around the corner; brace yourself for an awesome cloud conference. See you there.
Source: Thoughts on Cloud

Intelligent services for elevators and escalators built with IBM Watson

If elevators and escalators do not work properly, it has a significant impact on the way cities function. People may not get to work in their office buildings. They may even end up with having difficulty getting home.
At KONE, we help people move in and between buildings as smoothly and safely as possible. Globally, we service more than 1.1 million elevators and escalators and move more than 1 billion people every day.
Intelligent services
That’s why KONE launched “24/7 Connected Services,” which uses the IBM Watson Internet of Things (IoT) platform to bring intelligent services to elevators and escalators.
KONE wants to create a completely new experience for customers, with less equipment downtime, fewer faults and detailed information on the performance and usage of their equipment.
For people who use elevators and escalators, it means less waiting time, fewer delays, and the potential for new, personalized experiences.
The company uses the IBM Watson IoT platform and its cognitive capabilities in many different ways. For example, it helps predict the condition of the elevator or escalator, thereby helping customers manage their equipment over its life cycle.

Improving predictability and people flow
By bringing artificial intelligence into services, KONE can help predict and suggest resolutions to potential problems.
KONE can provide individualized services that specifically meet the needs of customers. Customers will get services and outcomes that fit their exact needs. This is significant, as outcomes and results are more important to customers than technological features.
Making machine-to-machine more human
This is just the beginning for KONE. With this platform, KONE can bring new services and innovations to market faster for customers and consumers.
In a first for the industry, KONE is revealing real-time machine conversations between elevators and the IoT cloud. Teams at IBM and KONE worked together to introduce a popular marketing campaign that brings a human touch to intelligent services and demystifies a complex topic.
It’s a fun way to demonstrate what 24/7 Connected Services would be like if elevators could talk.
Learn about other IBM clients who built their success on IBM Cloud.
Source: Thoughts on Cloud

Detours on the way to microservices

In 2008, I first heard Adrian Cockcroft of Netflix describe microservices as “fine-grained service oriented architecture.” I’d spent the previous six years wrestling with the more brutish, coarse-grained service-oriented architecture, its standards and so-called “best practices.” I knew then that unlike web-scale offerings such as Netflix, the road to microservices adoption by companies would have its roadblocks and detours.
It’s not quite ten years later, and I am about to attend IBM InterConnect, where microservice adoption by business seems inescapable. What better time to consider these detours and how to avoid them?
Conway’s law may be bi-directional
Melvin Conway introduced the idea that’s become known as Conway’s Law: “Organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations.”
But I saw it occur in reverse: when enterprise software organizations first decide to adopt microservices and its disciplines, I observed development teams organize themselves around the (micro-) services being built. When constructing enterprise applications by coding and “wiring up” small, independently operating services, the development organization seemed to adjust itself to fit the software architecture, thereby creating silos and organizational friction.
More than the sum of its parts
When an organization first adopts microservices in its architecture, there are resource shortages. People who are skilled in the ways of microservices find themselves stretched far too thin, and specific implementation languages, frameworks or platforms can be in short supply. There’s a loss of momentum, attention and effective use of time because the “experts” must continually switch context and change the focus of their attention.
As is usually the case with resource shortages, the issue is one of prioritization: when there are hundreds or even thousands of microservices to build and maintain, how are allocations of scarce resources going to be made? Who makes them, and on what basis?
The cloud-native management tax
The adoption of microservices requires a variety of specialized, independent platforms to which developer, test and operations teams must attend. Many of these come with their own forms of management and management tooling. In one case, I looked through the list of management interfaces and tools for a newly minted, cloud-native application and discovered more than forty separate management tools in use. These tools covered: the different programming languages; authentication; authorization; reporting; databases; caches; platform libraries; service dependencies; pipeline dependencies; the security threat model; audits; workflow; log aggregation and much more. The full list was astonishing.
The benefits of cloud-native architecture do not come without a price: organizations will need additional management tooling and the costs of becoming skilled in those management tools.
Carrying forward the technical debt
When a company embraces cloud migration or digital transformation, a team may be chartered to re-architect and re-implement an existing, monolithic application along with its associated data, external dependencies and technical interconnections. Too often, I discovered that the shortcuts and hard-coded aspects of the existing application were being re-implemented as well. Part of the process seemed to go missing when the objective was simply to migrate an application.
In an upcoming blog post, I’ll consider some of the common detours and look at what practices and technologies are being used to avoid them.
Join me and other industry experts as we explore the world of microservices at IBM InterConnect, March 19 – 23, 2017.
 
Source: Thoughts on Cloud