How YouTube Serves As The Content Engine Of The Internet's Dark Side

David Seaman is the King of the Internet.

On Twitter, Seaman posts dozens of messages a day to his 66,000 followers, often about the secret cabal — including Rothschilds, Satanists, and the other nabobs of the New World Order — behind the nation’s best-known, super-duper-secret child sex ring under a DC pizza parlor.

But it’s on YouTube where he really goes to work. Since Nov. 4, four days before the election, Seaman has uploaded 136 videos, more than one a day. Of those, at least 42 are about Pizzagate. The videos, which tend to run about eight to fifteen minutes, typically consist of Seaman, a young, brown-haired man with glasses and a short beard, speaking directly into a camera in front of a white wall. He doesn’t equivocate: Recent videos are titled “Pizzagate Will Dominate 2017, Because It Is Real” and “PizzaGate New Info 12/6/16: Link To Pagan God of Pedophilia/Rape.”

Seaman has more than 150,000 subscribers. His videos, usually preceded by preroll ads for major brands like Quaker Oats and Uber, have been watched almost 18 million times, which is roughly the number of people who tuned in to last year’s season finale of NCIS, the most popular show on television.

His biography reads, in part, “I report the truth.”

In the aftermath of the 2016 presidential election, the major social platforms, most notably Twitter, Facebook, and Reddit, have been forced to undergo painful, often public reckonings with the role they play in spreading bad information. How do services that have become windows onto the world for hundreds of millions of people square their desire to grow with the damage that viral false information, “alternative facts,” and filter bubbles do to a democracy?

And yet there is a mammoth social platform, a cornerstone of the modern internet with more than a billion active users every month, which hosts and even pays for a fathomless stock of bad information, including viral fake news, conspiracy theories, and hate speech of every kind — and it’s been held up to virtually no scrutiny: YouTube.

The entire contemporary conspiracy-industrial complex of internet investigation and social media promulgation, which has become a defining feature of media and politics in the Trump era, would be a very small fraction of itself without YouTube. Yes, the site most people associate with “Gangnam Style,” pirated music, and compilations of dachshunds sneezing is also the central content engine of the unruliest segments of the ascendant right-wing internet, and sometimes its enabler.

To wit, the conspiracy-news internet’s biggest stars, some of whom now enjoy New Yorker profiles and presidential influence, largely live on YouTube. Infowars — whose founder and host, Alex Jones, claims Sandy Hook didn’t happen, Michelle Obama is a man, and 9/11 was an inside job — broadcasts to 2 million subscribers on YouTube. So does Michael “Gorilla Mindset” Cernovich. So too do a whole genre of lesser-known but still wildly popular YouTubers, people like Seaman and Stefan Molyneux (an Irishman closely associated with the popular “Truth About” format). As do a related breed of prolific political-correctness watchdogs like Paul Joseph Watson and Sargon of Akkad (real name: Carl Benjamin), whose videos focus on the supposed hypocrisies of modern liberal culture and the ways they leave Western democracy open to a hostile Islamic takeover. As do a related group of conspiratorial white-identity vloggers like Red Ice TV, which regularly hosts neo-Nazis in its videos.

“The internet provides people with access to more points of view than ever before,” YouTube wrote in a statement. “We’re always taking feedback so we can continue to improve and present as many perspectives at a given moment in time as possible.”

All this is a far cry from the platform’s halcyon days of 2006 and George Allen’s infamous “Macaca” gaffe. Back then, it felt reasonable to hope the site would change politics by bypassing a rose-tinted broadcast media filter to hold politicians accountable. As recently as 2012, Mother Jones posted to YouTube hidden footage of Mitt Romney discussing the “47%” of the electorate who would never vote for him, a video that may have swung the election. But by the time the 2016 campaign hit its stride, and a series of widely broadcast, ugly comments by then-candidate Trump didn’t keep him out of office, YouTube’s relationship to politics had changed.

Today, it fills the enormous trough of right-leaning conspiracy and revisionist historical content into which the vast, ravening right-wing social internet lowers its jaws to drink. Shared widely everywhere from white supremacist message boards to chans to Facebook groups, these videos constitute a kind of crowdsourced, predigested ideological education, offering the “Truth” about everything from Michelle Obama’s real biological sex (760,000 views!) to why medieval Islamic civilization wasn’t actually advanced.

Frequently, the videos consist of little more than screenshots of a Reddit “investigation” laid out chronologically, set to ominous music. Other times, they’re very simple, featuring a man in a sparse room speaking directly into his webcam, or a very fast monotone narration over a series of photographs with effects straight out of iMovie. There’s a financial incentive for vloggers to make as many videos as cheaply as they can; the more videos you make, the more likely one is to go viral. David Seaman’s videos typically garner more than 50,000 views and often exceed 100,000. Many of Seaman’s videos adjoin ads for major brands. A preroll ad for Asana, the productivity software, precedes a video entitled “WIKILEAKS: Illuminati Rothschild Influence & Simulation Theory”; before “Pizzagate: Do We Know the Full Scope Yet?!” it’s an ad for Uber, and before “HILLARY CLINTON’S HORROR SHOW,” one for a new Fox comedy. (Most YouTubers have no direct control over which brands’ ads run next to their videos, and vice versa.)

This trough isn’t just wide, it’s deep. A YouTube search for the term “The Truth About the Holocaust” returns half a million results. The top 10 are all Holocaust-denying or Holocaust-skeptical. (Sample titles: “The Greatest Lie Ever Told,” which has 500,000 views; “The Great Jewish Lie”; “The Sick Lies of a Holocaust™ ‘Survivor.’”) Say the half million videos average about 10 minutes. That works out to 5 million minutes, or about 10 years, of “Truth About the Holocaust.”

Meanwhile, “The Truth About Pizzagate” returns a quarter of a million results, including “PizzaGate Definitive Factcheck: Oh My God” (620,000 views and counting) and “The Men Who Knew Too Much About PizzaGate” (who, per a teaser image, include retired Gen. Michael Flynn and Andrew Breitbart).

Sometimes, these videos go hugely viral. “With Open Gates: The Forced Collective Suicide of European Nations” — an alarming 20-minute video about Muslim immigration to Europe featuring deceptive editing and debunked footage — received some 4 million views in late 2015 before being taken down by YouTube over a copyright claim. (Infowars: “YouTube Scrambles to Censor Viral Video Exposing Migrant Invasion.”) That’s roughly as many people as watched the Game of Thrones Season 3 premiere. It’s since been scrubbed of the copyrighted music and reuploaded dozens of times.

First circulated by white supremacist blogs and chans, “With Open Gates” gained social steam until it was picked up by Breitbart, at which point it exploded, blazing the viral trail by which conspiracy-right “Truth” videos now travel. Last week, President Trump incensed the nation of Sweden by falsely implying that it had recently suffered a terrorist attack. Later, he clarified in a tweet that he was referring to a Fox News segment. That segment featured footage from a viral YouTube documentary, Stockholm Syndrome, about the dangers of Muslim immigration into Europe. Sources featured in the documentary have since accused its director, Ami Horowitz, of “bad journalism” for taking their answers out of context.

So what responsibility, if any, does YouTube bear for the universe of often conspiratorial, sometimes bigoted, frequently incorrect information that it pays its creators to host, and that is now being filtered up to the most powerful person in the world? Legally, per Section 230 of the Communications Decency Act, which shields service providers from liability for most content their users post, none. But morally and ethically, shouldn’t YouTube be asking itself the same hard questions as Facebook and Twitter about the role it plays in a representative democracy? How do those questions change because YouTube is literally paying people to upload bad information?

And practically, if YouTube decided to crack down, could it really do anything?

YouTube does “demonetize” videos that it deems “not advertiser-friendly,” and last week, following a report in the Wall Street Journal that Disney had nixed a sponsorship deal with the YouTube superstar PewDiePie over anti-Semitic content in his videos, YouTube pulled his channel from its premium ad network. But such steps have tended to follow public pressure and have only affected extremely famous YouTubers. And it’s not like PewDiePie will go hungry; he can still run ads on his videos, which regularly do millions of views.

Ultimately, the platform may be so huge as to be ungovernable: Users upload 400 hours of video to YouTube every minute. One possibility is drawing a firmer line between content the company officially designates as news and everything else; YouTube has a dedicated News vertical that pulls in videos from publishers approved by Google News.

Even there, though, YouTube has its work cut out for it. On a recent evening, the first result I saw under the “Live Now – News” subsection of youtube.com/news was the Infowars “Defense of Liberty 13 Hour Special Broadcast.” Alex Jones was staring into the camera.

Source: BuzzFeed

53 new things to look for in OpenStack Ocata

With a shortened development cycle, you’d think we’d have trouble finding 53 new features of interest in OpenStack Ocata, but with so many projects (more than 60!) under the Big Tent, we actually had a little bit of trouble narrowing things down. We did a live webinar talking about 157 new features, but here’s our standard 53. (Thanks to the PTLs who helped us out with weeding it down from the full release notes!)
Nova (OpenStack Compute Service)

VM placement changes: The Nova filter scheduler will now use the Placement API to filter compute nodes based on CPU/RAM/Disk capacity.
High availability: Nova now uses Cells v2 for all deployments; deployments are currently single-cell, and the next release, Pike, will add support for multi-cell clouds.
Neutron is now the default networking option.
Upgrade capabilities: Use the new ‘nova-status upgrade check’ CLI command to see what’s required to upgrade to Ocata.
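
As a quick illustration (a minimal sketch; the exact checks and report format depend on your packaging), you would run the check from a host that has the Nova configuration available before starting the upgrade:

$ nova-status upgrade check

The tool reports whether prerequisites such as the Placement API endpoint and the Cells v2 mappings are in place, so you can fix anything missing before moving to Ocata.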

Keystone (OpenStack Identity Service)

Per-user Multi-Factor-Auth rules (MFA rules): You can now specify multiple forms of authentication before Keystone will issue a token (see the sketch after this list). For example, some users might just need a password, while others might have to provide a time-based one-time password and an additional form of authentication.
Auto-provisioning for federated identity: When a user logs into a federated system, Keystone will dynamically create that user and assign a role; previously, the user had to be provisioned in that system independently, which was confusing to users.
Validate an expired token: Finally, no more failures due to long-running operations such as uploading a snapshot. Each project can specify whether it will accept expired tokens, and just HOW expired those tokens can be.
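
As a rough sketch of how a per-user rule is expressed (assuming the Keystone v3 users API and an admin token; the user ID and the rule contents here are purely illustrative), you could require both a password and a TOTP code for one user:

$ curl -s -X PATCH https://keystone.example.com:5000/v3/users/<user-id> \
    -H "X-Auth-Token: $ADMIN_TOKEN" \
    -H "Content-Type: application/json" \
    -d '{"user": {"options": {"multi_factor_auth_rules": [["password", "totp"]]}}}'

Users with no rules set keep authenticating with a single method, exactly as before.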

Swift (OpenStack Object Storage)

Improved compatibility: Byteorder information is now included in Ring files to support machines with different endianness.
More flexibility: You can now configure the base URL used for static web. You can also set the “filename” parameter in TempURLs and validate TempURLs against a common prefix (see the example after this list).
More data: If you’re dealing with large objects, you can now use multi-range GETs and HTTP 416 responses.
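
For instance, once a temp-url key is set on the account, a download link can carry the new filename parameter so browsers save the object under a friendlier name (a sketch; the signature and expiry values are placeholders you would generate with the usual HMAC recipe):

$ curl "https://swift.example.com/v1/AUTH_test/reports/2017-02.csv?temp_url_sig=<sig>&temp_url_expires=<unix-timestamp>&filename=monthly-report.csv"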

Cinder (OpenStack Block Storage)

Active/Active HA: Cinder can now run in Active/Active clustered mode, preventing concurrent operation conflicts. Cinder will also handle mid-processing service failures better than in past releases.
New attach/detach APIs: If you&8217;ve been confused about how to attach and detach volumes to and from VMs, you&8217;re not alone. The Ocata release saw the Cinder team refactor these APIs in preparation for adding the ability to attach a single volume to multiple VMs, expected in an upcoming release.

Glance (OpenStack Image Service)

Image visibility: Users can now create “community” images, making them available for everyone else to use. You can also mark an image as “shared” so that only certain users have access.
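
A quick sketch of the workflow (assuming your glance client accepts the Ocata visibility values; the image name and file here are illustrative):

$ glance image-create --name shared-base --visibility community \
    --disk-format qcow2 --container-format bare --file base.qcow2

Other projects can then use the image without it being fully public; switching the visibility to shared and adding image members restricts it to specific projects instead.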

Neutron (OpenStack Networking Service)

Support for Routed Provider Networks in Neutron: You can now use the Nova GRP (Generic Resource Pools) API to publish the IPv4 address inventory of routed network segments. The Nova scheduler uses this inventory as a hint to place instances based on IPv4 address availability in routed network segments.
Resource tag mechanism: You can now create tags for subnet, port, subnet pool and router resources, making it possible to do things like map different networks in different OpenStack clouds into one logical network or tag provider networks (e.g., high-speed, high-bandwidth, dial-up).
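
For example, with the neutron client you could label a provider network and one of its subnets and then key automation off that tag (a sketch; the resource names and the tag are illustrative, and recent openstack CLI releases expose the same operation via 'openstack network set --tag'):

$ neutron tag-add --resource-type network --resource provider-net-1 --tag high-speed
$ neutron tag-add --resource-type subnet --resource provider-subnet-1 --tag high-speed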

Heat (OpenStack Orchestration Service)

Notification and application workflow: Use the new  OS::Zaqar::Notification to subscribe to Zaqar queues for notifications, or the OS::Zaqar::MistralTrigger for just Mistral notifications.

Horizon (OpenStack Dashboard)

Easier profiling and debugging: The new Profiler Panel uses the os-profiler library to provide profiling of requests through Horizon to the OpenStack APIs so you can see what’s going on inside your cloud.
Easier Federation configuration: If Keystone is configured with Keystone to Keystone (K2K) federation and has service providers, you can now choose Keystone providers from a dropdown menu.

Telemetry (Ceilometer)

Better instance discovery:  Ceilometer now uses libvirt directly by default, rather than nova-api.
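
If you want to be explicit about the discovery method, or need to fall back to the old behavior, it is controlled from ceilometer.conf on the compute nodes; a minimal sketch, assuming the Ocata option name and section (crudini is just one convenient way to edit the file):

$ crudini --set /etc/ceilometer/ceilometer.conf compute instance_discovery_method libvirt_metadata

Other values, such as naive, revert to polling nova-api for the instance list.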

Telemetry (Gnocchi)

Dynamically resample measures through a new API.
New collectd plugin: Store metrics generated by collectd.
Store data on Amazon S3 with a new storage driver.
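
Switching the metric storage backend is a small configuration change; a minimal sketch, assuming the Ocata driver name (the S3 endpoint and credential options live in the same [storage] section, so check the Gnocchi docs for their exact names):

$ crudini --set /etc/gnocchi/gnocchi.conf storage driver s3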

Dragonflow (Distributed SDN Controller)

Better support for modern networking: Dragonflow now supports IPv6 and distributed sNAT.
Live migration: Dragonflow now supports live migration of VMs.

Kuryr (Container Networking)

Neutron support: Neutron networking is now available to containers running inside a VM.  For example, you can now assign one Neutron port per container.
More flexibility with driver-based support: Kuryr-libnetwork now allows you to choose between ipvlan, macvlan or Neutron vlan trunk ports or even create your own driver. Also, Kuryr-kubernetes has support for ovs hybrid, ovs native and Dragonflow.
Container Networking Interface (CNI):  You can now use the Kubernetes CNI with Kuryr-kubernetes.
More platforms: The controller now handles Pods on bare metal, handles Pods in VMs by providing them Neutron subports, and provides services with LBaaSv2.

Vitrage (Root Cause Analysis Service)

A new collectd datasource: Use this fast system statistics collection daemon, with plugins that collect different metrics. From Ifat Afek: “We tested the DPDK plugin, that can trigger alarms such as interface failure or noisy neighbors. Based on these alarms, Vitrage can deduce the existence of problems in the host, instances and applications, and provide the RCA (Root Cause Analysis) for these problems.”
New “post event” API: This general-purpose API allows easy integration of new monitors into Vitrage.
Multi Tenancy support: A user will only see alarms and resources which belong to that user’s tenant.

Ironic (Bare Metal Service)

Easier, more powerful management: In a revamp of how drivers are composed, “dynamic drivers” enable users to select a “hardware type” for a machine rather than working through a matrix of hardware types. Users can independently change the deploy method, console manager, RAID management, power control interface and so on. Ocata also brings the ability to do soft power off and soft reboot, and to send non-maskable interrupts through both ironic and nova’s API.

TripleO (Deployment Service)

Easier per-service upgrades: Perform step-by-step tasks as batched/rolling upgrades or in parallel. All roles, including custom roles, can be upgraded this way.
Composable High-Availability architecture: Services managed by Pacemaker such as galera, redis, VIPs, haproxy, cinder-volume, rabbitmq, cinder-backup, and manila-share can now be deployed in multiple clusters, making it possible to scale out the number of nodes running these services.

OpenStackAnsible (Ansible Playbooks and Roles for Deployment)

Additional support: OpenStack-Ansible now supports CentOS 7, as well as integration with Ceph.

Puppet OpenStack (Puppet Modules for Deployment)

New modules and functionality: The Ocata release includes new modules for puppet-ec2api, puppet-octavia, puppet-panko and puppet-watcher. Also, existing modules support configuring the [DEFAULT]/transport_url configuration option. This change makes it possible to support messaging backends other than RabbitMQ, such as ZeroMQ.

Barbican (Key Manager Service)

Testing:  Barbican now includes a new Tempest test framework.

Congress (Governance Service)

Network address operations: The policy language has been enhanced to enable users to specify network address policy use cases.
Quick start: Congress now includes a default policy library so that it’s useful out of the box.

Monasca (Monitoring)

Completion of Logging-as-a-Service: Kibana support and integration is now complete, enabling you to push/publish logs to the Monasca Log API. Logs are authenticated and authorized using Keystone and stored scoped to a tenant/project, so users can only see information from their own logs.
Container support:  Monasca now supports monitoring of Docker containers, and is adding support for the Prometheus monitoring solution. Upcoming releases will also see auto-discovery and monitoring of applications launched in a Kubernetes cluster.

Trove (Database as a Service)

Multi-region deployments: Database clusters can now be deployed across multiple OpenStack regions.

Mistral (Taskflow as a Service)

Multi-node mode: You can now deploy the Mistral engine in multi-node mode, providing the ability to scale out.

Rally (Benchmarking as a Service)

Expanded verification options:  Whereas previous versions enabled you to use only Tempest to verify your cluster, the newest version of Rally enables you to use other forms of verification, which means that Rally can actually be used for the non-OpenStack portions of your application and infrastructure. (You can find the full release notes here.)
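
The verification workflow is driven by the ‘rally verify’ family of commands; a minimal sketch, assuming a Tempest-based verifier (the verifier name and test pattern are illustrative):

$ rally verify create-verifier --type tempest --name ocata-verifier
$ rally verify start --pattern set=smoke
$ rally verify report --type html --to ./verify-report.html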

Zaqar (Message Service)

Storage replication:  You can now use Swift as a storage option, providing built-in replication capabilities.

Octavia (Load Balancer Service)

More flexibility for Load Balancer as a Service: You may now use neutron host-routes and custom MTU configurations when configuring LBaaS.

Solum (Platform as a Service)

Responsive deployment: You may now configure deployments based on GitHub triggers, which means that you can implement CI/CD by specifying that your application should redeploy when there are changes.

Tricircle (Networking Automation Across Neutron Service)

DVR support in local Neutron: The East-West and North-South bridging networks have been combined into a single North-South bridging network, making it possible to support DVR in local Neutron.

Kolla (Container Based Deployment)

Dynamic volume provisioning: Kolla-Kubernetes by default uses Ceph for stateful storage, and with Kubernetes 1.5, support was added for Ceph and dynamic volume provisioning as requested by claims made against the API server.

Freezer (Backup, Restore, and Disaster Recovery Service)

Block incremental backups:  Ocata now includes the Rsync engine, enabling these incremental backups.

Senlin (Clustering Service)

Generic Event/Notification support: In addition to its usual capability of logging events to a database, Senlin now enables you to add the sending of events to a message queue and to a log file, enabling dynamic monitoring.

Watcher (Infrastructure Optimization Service)

Multiple-backend support: Watcher now supports metrics collection from multiple backends.

Cloudkitty (Rating Service)

Easier management: CloudKitty now includes a Horizon wizard and hints on the CLI to determine the available metrics. Also, CloudKitty is now part of the unified OpenStack client.

Source: Mirantis

Deploying PostgreSQL Clusters using StatefulSets

Editor’s note: Today’s guest post is by Jeff McCormick, a developer at Crunchy Data, showing how to build a PostgreSQL cluster using the new Kubernetes StatefulSet feature.

In an earlier post, I described how to deploy a PostgreSQL cluster using Helm, a Kubernetes package manager. The following example provides the steps for building a PostgreSQL cluster using the new Kubernetes StatefulSets feature.

StatefulSets Example

Step 1 – Create Kubernetes Environment

StatefulSets is a new feature implemented in Kubernetes 1.5 (in prior versions it was known as PetSets). As a result, running this example requires an environment based on Kubernetes 1.5.0 or above. The example in this blog deploys on CentOS 7 using kubeadm. Some instructions on what kubeadm provides and how to deploy a Kubernetes cluster are located here.

Step 2 – Install NFS

The example in this blog uses NFS for the Persistent Volumes, but any shared file system would also work (e.g., Ceph, Gluster). The example script assumes your NFS server is running locally and your hostname resolves to a known IP address. In summary, the steps used to get NFS working on a CentOS 7 host are as follows:

sudo setsebool -P virt_use_nfs 1
sudo yum -y install nfs-utils libnfsidmap
sudo systemctl enable rpcbind nfs-server
sudo systemctl start rpcbind nfs-server rpc-statd nfs-idmapd
sudo mkdir /nfsfileshare
sudo chmod 777 /nfsfileshare/
sudo vi /etc/exports
sudo exportfs -r

The /etc/exports file should contain a line similar to this one, except with the applicable IP address specified:

/nfsfileshare 192.168.122.9(rw,sync)

After these steps, NFS should be running in the test environment.

Step 3 – Clone the Crunchy PostgreSQL Container Suite

The example used in this blog is found in the Crunchy Containers GitHub repo here. Clone the Crunchy Containers repository to your test Kubernetes host and go to the example:

cd $HOME
git clone https://github.com/CrunchyData/crunchy-containers.git
cd crunchy-containers/examples/kube/statefulset

Next, pull down the Crunchy PostgreSQL container image:

docker pull crunchydata/crunchy-postgres:centos7-9.5-1.2.6

Step 4 – Run the Example

To begin, it is necessary to set a few of the environment variables used in the example:

export BUILDBASE=$HOME/crunchy-containers
export CCP_IMAGE_TAG=centos7-9.5-1.2.6

BUILDBASE is where you cloned the repository and CCP_IMAGE_TAG is the container image version we want to use.

Next, run the example:

./run.sh

That script will create several Kubernetes objects, including:

Persistent Volumes (pv1, pv2, pv3)
Persistent Volume Claim (pgset-pvc)
Service Account (pgset-sa)
Services (pgset, pgset-master, pgset-replica)
StatefulSet (pgset)
Pods (pgset-0, pgset-1)

At this point, two pods will be running in the Kubernetes environment:

$ kubectl get pod
NAME      READY     STATUS    RESTARTS   AGE
pgset-0   1/1       Running   0          2m
pgset-1   1/1       Running   1          2m

Step 5 – What Just Happened?

This example deploys a StatefulSet, which in turn creates two pods. The containers in those two pods run the PostgreSQL database. For a PostgreSQL cluster, we need one of the containers to assume the master role and the other containers to assume the replica role. So, how do the containers determine who will be the master, and who will be the replica?

This is where the new StatefulSet mechanics come into play.
The StatefulSet mechanics assign a unique ordinal value to each pod in the set, always starting at 0. During initialization, each container examines its assigned ordinal value. An ordinal value of 0 causes the container to assume the master role within the PostgreSQL cluster. For all other ordinal values, the container assumes a replica role. This is a very simple form of discovery made possible by the StatefulSet mechanics.

PostgreSQL replicas are configured to connect to the master database via a Service dedicated to the master database. In order to support this replication, the example creates a separate Service for each of the master role and the replica role. Once the replica has connected, it will begin replicating state from the master. During container initialization, a master container will use a Service Account (pgset-sa) to change its container label value to match the master Service selector. Changing the label is important to enable traffic destined for the master database to reach the correct container within the StatefulSet. All other pods in the set assume the replica Service label by default.
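
To make the discovery step concrete, here is a minimal sketch of the kind of check an entrypoint script can perform (an illustration of the pattern only, not the actual Crunchy container logic; the PG_ROLE variable is hypothetical):

ORDINAL=${HOSTNAME##*-}          # pgset-0 -> 0, pgset-1 -> 1, ...
if [ "$ORDINAL" = "0" ]; then
    export PG_ROLE=master        # ordinal 0 bootstraps as the master
else
    export PG_ROLE=replica       # all other ordinals replicate from pgset-master
fi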

Step 6 – Deployment Diagram

In this deployment, there is a Service for the master and a separate Service for the replica. The replica is connected to the master and replication of state has started.

The Crunchy PostgreSQL container supports other forms of cluster deployment; the style of deployment is dictated by setting the PG_MODE environment variable for the container. In the case of a StatefulSet deployment, that value is set to PG_MODE=set. This environment variable is a hint to the container initialization logic as to the style of deployment we intend.

Step 7 – Testing the Example

The tests below assume that the psql client has been installed on the test system. If not, it can be installed as follows:

sudo yum -y install postgresql

In addition, the tests below assume that the tested environment's DNS resolves to the Kube DNS and that the DNS search path is specified to match the applicable Kube namespace and domain. The master service is named pgset-master and the replica service is named pgset-replica.

Test the master as follows (the password is password):

psql -h pgset-master -U postgres postgres -c 'table pg_stat_replication'

If things are working, the command above will return output indicating that a single replica is connecting to the master.

Next, test the replica as follows:

psql -h pgset-replica -U postgres postgres -c 'create table foo (id int)'

The command above should fail, as the replica is read-only within a PostgreSQL cluster.

Next, scale up the set as follows:

kubectl scale statefulset pgset --replicas=3

The command above should successfully create a new replica pod called pgset-2.

Step 8 – Persistence Explained

Take a look at the persisted PostgreSQL data files on the resulting NFS mount path:

$ ls -l /nfsfileshare/
total 12
drwx------ 20   26   26 4096 Jan 17 16:35 pgset-0
drwx------ 20   26   26 4096 Jan 17 16:35 pgset-1
drwx------ 20   26   26 4096 Jan 17 16:48 pgset-2

Each container in the StatefulSet binds to the single NFS Persistent Volume Claim (pgset-pvc) created in the example script. Since NFS and the PVC can be shared, each pod can write to this NFS path. The container is designed to create a subdirectory on that path using the pod host name for uniqueness.

Conclusion

StatefulSets is an exciting feature added to Kubernetes for container builders that are implementing clustering. The ordinal values assigned to the set provide a very simple mechanism to make clustering decisions when deploying a PostgreSQL cluster.

–Jeff McCormick, Developer, Crunchy Data
Source: kubernetes

Elon Musk Slams Union Drive At Tesla Factory

In a lengthy Thursday night email to Tesla employees, CEO Elon Musk defended his record as an employer, and appealed to workers not to join the United Auto Workers union.

In the message, first leaked to Electrek.co and later obtained in full by BuzzFeed News, Musk took direct aim at claims made earlier this month in a Medium post by factory worker Jose Moran. Moran alleged that long hours of physical labor once forced six of his eight team members to take medical leave simultaneously. Musk disputed this allegation, claiming a Tesla investigation has proven it to be false. “After looking into this claim, not only was it untrue for this individual’s team, it was untrue for any of the hundreds of teams in the factory,” he wrote.

“The forces arrayed against us are many and incredibly powerful. This is David vs Goliath if David were six inches tall!”

The Tesla CEO also lambasted the efforts of the United Auto Workers union to unionize Tesla employees at the company’s Fremont, CA factory, calling the organization’s tactics for doing so “disingenuous or outright false.” Musk alleged that the UAW’s “true allegiance is to the giant car companies, where the money they take from employees in dues is vastly more than they could ever make from Tesla.”

“The forces arrayed against us are many and incredibly powerful,” Musk wrote. “This is David vs Goliath if David were six inches tall! Only by being smarter, faster and working well as a tightly integrated team do we have any chance of success.”

Moran’s post — which was later followed by a press conference and a Facebook video — detailed how low pay, long hours, and difficult working conditions are making life difficult for Tesla employees. Moran argued that unionizing would improve the factory workers’ situation.

Musk immediately swung back at Moran, telling Gizmodo that he was a union plant; earlier this week, during a Tesla earnings call, Musk told investors that the unionization “isn’t likely to occur.”

Moran denied Musk’s claims that he’s paid by the UAW to lead unionization efforts. His communications team, Storefront Political, declined comment on Musk’s email.

Musk’s email includes a point-by-point rebuttal of a number of Moran’s claims. Regarding long hours, Musk said overtime has actually decreased by 50% in the last year, and that the average employee worked 43 hours a week. Regarding compensation, he noted that Tesla factory workers earn equity, and therefore, over a four year period, earned “between $70,000 and $100,000 more in total compensation than the employees at other US auto companies.” On issues of safety, Musk said Tesla’s incident rate is less than half the industry average, and noted that the goal is to be “as close to zero injuries as possible.”

“There will also be little things that come along like free frozen yogurt stands scattered around the factory.”

In addition to defending Tesla’s record as an employer, Musk told workers that he plans to improve life at the Tesla factory, which is currently in the process of switching over its lines for production of the Model 3. For example, when the Model 3 reaches “volume production,” Musk said he’ll throw them “a really amazing party.”

“There will also be little things that come along like free frozen yogurt stands scattered around the factory and my personal favorite: a Tesla electric pod car roller coaster (with an optional loop the loop route, of course!) that will allow fast and fun travel throughout our Fremont campus, dipping in and out of the factory and connecting all the parking lots,” Musk wrote. “It’s going to get crazy good.”

Tesla declined comment. The full text of Musk’s email is below.

If you have information on working conditions or unionization efforts at Tesla, please contact the author directly, or tip us anonymously via contact.buzzfeed.com.

For Tesla to become and remain one of the great companies of the 21st century, we must have an environment that is as safe, fair and fun as possible. It is incredibly important to me that you look forward to coming to work every day. For that, we must be a fair and just company – the only kind worth creating.

This is vital to succeed in our mission to accelerate the advent of a clean, sustainable energy future. The forces arrayed against us are many and incredibly powerful. This is David vs Goliath if David were six inches tall! Only by being smarter, faster and working well as a tightly integrated team do we have any chance of success. We should never forget the history of car startups originating in the United States: dozens have gone bankrupt and only two, Tesla and Ford, have not. Despite the odds being strongly against us, my faith in you is why I am confident that we will succeed.

That is why I was so distraught when I read the recent blog post promoting the UAW, which does not share our mission and whose true allegiance is to the giant car companies, where the money they take from employees in dues is vastly more than they could ever make from Tesla.

The tactics they have resorted to are disingenuous or outright false. I will address their underhanded attacks below. While this discussion focuses on Fremont, these same principles apply to every Tesla facility worldwide.

Safety First

The workplace issue that comes before any other is safety. If you do not have your health, then nothing else matters. Simply due to size and bad luck, there will always be some injuries in a company with over 30,000 employees, but our goal is simple: to have as close to zero injuries as possible and be the safest factory in the auto industry by far. The Tesla executive team and I are absolutely committed to this goal.

That is why I was particularly troubled by the safety claim in last week’s blog post, which said: “A few months ago, six out of eight people in my work team were out on medical leave at the same time due to various work-related injuries. I hear the ergonomics are even more severe in other areas of the factory.”

Obviously, this cannot be true: if three quarters of his team suddenly went on medical leave, we would not be able to operate that part of the factory. Furthermore, if things were really even worse in other departments, that would mean something like 80% or more of the factory would be out on injury, production would drop to virtually nothing and the parking lot would be almost empty. As you know firsthand, we have the *opposite* problem – there is never enough room to park! In fact, we are working at top speed to build more parking. Also, hopefully our darn BART train station will open before all hell freezes over!

After looking into this claim, not only was it untrue for this individual’s team, it was untrue for any of the hundreds of teams in the factory.

That said, reducing excess overtime and improving safety are extremely important. This is why we hired thousands of additional team members to create a third shift, which has reduced the burden on everyone. Moreover, since the beginning of Tesla production at Fremont five years ago, there have been dedicated health and safety experts covering the factory and we hold regular safety meetings with operations leaders. Since the majority of the injuries in the factory are ergonomic in nature, we have an ergonomics department focused exclusively on this issue.

The net result is that since January 1st, our total recordable incident rate (TRIR) is under 3.3, which is less than half the industry average of 6.7.

Of course, the goal is to have as close to zero injuries as humanly possible, so we need to keep improving. If you have a safety concern or an idea on how to make things better, please let your manager, safety representative or HR partner know. You can also send an anonymous note through the Integrity Hotline (this applies broadly to any problems you notice at our company) or you can email.

Compensation

At Tesla, we believe it is important for everyone to be an owner of the company. This is your company. That is why, unlike other car companies, everyone is awarded shares and you get to buy stock at a discount compared to the public through the employee stock purchase program. Last year, stock equity grants were increased significantly and it will happen again later this year once Model 3 achieves high volume.

The chart below contrasts the total comp received by a Tesla production team member who started on January 1, 2013 against the total comp received over the same period at GM, Ford, and Fiat Chrysler. A four year period is used because that’s the vesting length of a new hire equity grant. I believe the equity gain over the next four years will be similar. As shown below, a Tesla team member earned between $70,000 and $100,000 more in total compensation than the employees at other US auto companies!

Work Hours

Another issue raised in the UAW blog was hours worked. First, I want to recognize how hard you worked to make our company successful. Those hours mattered to you, to your family and to our company, and I can’t tell you how much I appreciate them.

However, the pace needs to be sustainable. This is why the third shift was established and why we created alternate work schedules based on feedback from various teams in the factory.

These changes have had a big impact. The average amount of hours worked by production team members this year is about 43 hours per week. The percentage of overtime hours has declined by almost 50% since the super tough time we had last year achieving rate on the Model X, which is probably the hardest car to build in history. What an amazing accomplishment! It is also a lesson learned, which is why Model 3 is designed to be dramatically easier to manufacture.

Fun

As we get closer to being a profitable company, we will be able to afford more and more fun things. For example, as I mentioned at the last company talk, we are going to hold a really amazing party once Model 3 reaches volume production later this year. There will also be little things that come along like free frozen yogurt stands scattered around the factory and my personal favorite: a Tesla electric pod car roller coaster (with an optional loop the loop route, of course!) that will allow fast and fun travel throughout our Fremont campus, dipping in and out of the factory and connecting all the parking lots. It’s going to get crazy good.

Thanks again for all your effort and I look forward to working alongside you to create an amazing future!

Elon

Source: BuzzFeed

Here Are The Passwords You Should Change Immediately

A software bug discovered in Cloudflare, a popular web performance and security company, may have compromised the security of over 5 million websites, including Fitbit, Uber, and OK Cupid.

If you have or had accounts on Fitbit, Uber, Ok Cupid, Medium, or Yelp, you should probably change your passwords. In a blog post published on Thursday, the web performance and security company Cloudflare claimed that it has fixed a critical bug, discovered over the weekend, that had been leaking sensitive information such as website passwords in plain text from September 2016 to February 2017. Over 5.5 million websites use Cloudflare, including Fitbit, Uber, Ok Cupid, Medium, and Yelp.

Some website sessions accessed through HTTPS, a secure web protocol that encrypts data sent to and from a page, have been compromised as a result, and what makes the bug particularly serious is that some search engines (like Bing, Google, and DuckDuckGo) cached, or saved, a variety of the leaked data for some time. This data isn’t easy for a non-technical person to find, but for someone with knowledge of how to craft specific queries for affected websites’ leaked data on search engines, it was well within their reach.

Source: BuzzFeed