Now That Snapchat Has Been Cloned By Instagram, Its Missteps Matter More

After watching Snapchat wade into racially insensitive territory once again last week, Katie Zhu decided she’d had enough. The San Francisco–based Medium engineer picked up her phone, deleted the app, and penned a Medium post encouraging others to leave as well. The post’s title: “I’m Deleting Snapchat, and You Should Too.”

The screw-up that inspired Zhu to abandon Snapchat — a face-morphing filter resembling yellowface, an offensive Asian caricature — would, until recently, have existed largely as a public relations problem. Social companies anger their users all the time, but the ire rarely translates into defection, since it’s hard to find the exact same features and network elsewhere (see: Facebook). But this time, it was different.

“Instagram now has a Snapchat Stories clone,” Zhu wrote. “So I’ll still be able to take mundane pictures of my day to day life.”

Zhu isn’t the only one noting the platforms’ interchangeability and making a choice between them.

We’re just about two weeks into Instagram’s admitted cloning of Snapchat Stories, but tweets from folks jumping ship could be early signs of trouble, particularly if they gain momentum. In the past, Snapchat might have been able to skate away from slip-ups thanks to its product strength, but now users have a choice.

While Zhu’s departure from Snapchat and those of other users tweeting similar goodbyes are hardly evidence of a brewing mass exodus, they suggest that Snapchat’s continuing filter foibles and Instagram’s offering of a Snapchat Stories alternative could become a recipe for attrition.

“They are not immune from people leaving the way Facebook was for so many years because they don’t own your social network.”

Karen North, director of USC Annenberg’s Digital Social Media program, told BuzzFeed News that Snapchat is vulnerable to exactly that kind of defection. “It’s easier and easier, frankly, to be able to leave a place where you don’t like the people, or the attitude, and find the same experience somewhere else,” she said. Snapchat, she explained, is “not immune from people leaving the way Facebook was for so many years because they don’t own your social network.”

Victor Anthony, managing director and senior analyst at Axiom Capital Management, agreed.

“Now that Instagram has essentially come out with an identical feature set, I do think it puts competitive pressure on Snapchat,” he told BuzzFeed News.

It’s worth noting that this isn’t the first time a poorly conceived Snapchat filter has elicited cries of outrage. In April, Snapchat released a Bob Marley filter some referred to as “digital blackface.”

The company defended itself following the outcry over the yellowface-resembling filter, telling The Verge that it was inspired by anime. But that explanation didn’t cut it for Zhu and others. Zhu’s response: “Buuuullshit. Anime characters are known for their angled faces, spiky and colorful hair, large eyes, and vivid facial expressions.”

“People in every walk of life accidentally stumble upon things that are insensitive because they’re thinking of one thing and they don’t realize it has implications for something else,” North said of the yellowface incident. “But when you’re a platform that has such broad distribution, meaning anything digital, there’s a responsibility to vet things much more carefully than people did in the brick and mortar days.”

For Snapchat, which rose to popularity on a pretty distinct feature set, there was a time when a high-profile misstep like yellowface might have been defused with little in the way of user revolt. But with a powerful and well-established rival like Instagram positioning itself as a Snapchat alternative by cloning some of the service’s key features, user attrition could become more of a risk. Indeed, it seems at least a few folks are already heading for the door.

Snapchat has not yet responded to a request for comment.

Quelle: <a href="Now That Snapchat Has Been Cloned By Instagram, Its Missteps Matter More“>BuzzFeed

SIG Apps: build apps for and operate them in Kubernetes

Editor’s note: This post is by the Kubernetes SIG-Apps team sharing how they focus on the developer and devops experience of running applications in Kubernetes.

Kubernetes is an incredible manager for containerized applications. Because of this, numerous companies have started to run their applications in Kubernetes.

Kubernetes Special Interest Groups (SIGs) have been around to support the community of developers and operators since around the 1.0 release. People organized around networking, storage, scaling and other operational areas. As Kubernetes took off, so did the need for tools, best practices, and discussions around building and operating cloud native applications. To fill that need the Kubernetes SIG Apps came into existence.

SIG Apps is a place where companies and individuals can:

- see and share demos of the tools being built to enable app operators
- learn about and discuss the needs of app operators
- organize around efforts to improve the experience

Since the inception of SIG Apps we’ve had demos of projects like KubeFuse, KPM, and StackSmith. We’ve also run a survey of those operating apps in Kubernetes. From the survey results we’ve learned a number of things, including:

- 81% of respondents want some form of autoscaling.
- To store secret information, 47% of respondents use built-in secrets. At rest these are not currently encrypted. (If you want to help add encryption, there is an issue for that.)
- The questions with the most responses had to do with third-party tools and debugging.
- For third-party tools to manage applications there were no clear winners; practices vary widely.
- There was an overall complaint about a lack of useful documentation. (Help contribute to the docs here.)

There’s a lot of data. Many of the responses were optional, so we were surprised that 93% of all questions across all candidates were filled in. If you want to look at the data yourself, it’s available online.

When it comes to application operation there’s still a lot to be figured out and shared. If you’ve got opinions about running apps, tooling to make the experience better, or just want to lurk and learn about what’s going on, please come join us:

- Chat with us on the SIG-Apps Slack channel
- Email us at the SIG-Apps mailing list
- Join our open meetings: weekly at 9AM PT on Wednesdays, full details here.

–Matt Farina, Principal Engineer, Hewlett Packard Enterprise
Quelle: kubernetes

AWS OpsWorks Adds Nine Regional Endpoints and Asia Pacific (Seoul) Region Support

AWS OpsWorks is now available in the Asia Pacific (Seoul) region. Additionally, you can now access OpsWorks using new regional endpoints in the following regions: EU (Frankfurt), EU (Ireland), US West (N. California), US West (Oregon), South America (São Paulo), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), and Asia Pacific (Tokyo).
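As a brief sketch that is not part of the announcement itself: if you use the AWS SDK for Ruby v2, you can target one of the new regional endpoints simply by passing the region code when constructing the client (the region code for Asia Pacific (Seoul) is ap-northeast-2), and the SDK resolves the matching OpsWorks regional endpoint for you.

require 'aws-sdk'

# Illustrative sketch: create an OpsWorks client in the Asia Pacific (Seoul) region.
# The SDK maps the region code to the OpsWorks regional endpoint.
opsworks = Aws::OpsWorks::Client.new(region: 'ap-northeast-2')

# List the OpsWorks stacks visible in that region.
opsworks.describe_stacks.stacks.each do |stack|
  puts "#{stack.stack_id} #{stack.name}"
end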
Quelle: aws.amazon.com

Create a Couchbase cluster using Kubernetes

Editor’s note: today’s guest post is by Arun Gupta, Vice President Developer Relations at Couchbase, showing how to set up a Couchbase cluster with Kubernetes.

Couchbase Server is an open source, distributed NoSQL document-oriented database. It exposes a fast key-value store with managed cache for submillisecond data operations, purpose-built indexers for fast queries and a query engine for executing SQL queries. For mobile and Internet of Things (IoT) environments, Couchbase Lite runs native on-device and manages sync to Couchbase Server.

Couchbase Server 4.5 was recently announced, bringing many new features, including production certified support for Docker. Couchbase is supported on a wide variety of orchestration frameworks for Docker containers, such as Kubernetes, Docker Swarm and Mesos; for full details visit this page.

This blog post will explain how to create a Couchbase cluster using Kubernetes. This setup is tested using Kubernetes 1.3.3, Amazon Web Services, and Couchbase 4.5 Enterprise Edition.

Like all good things, this post is standing on the shoulders of giants. The design pattern used in this blog was defined in a Friday afternoon hack with @saturnism. A working version of the configuration files was contributed by @r_schmiddy.

Couchbase Cluster

A cluster of Couchbase Servers is typically deployed on commodity servers. Couchbase Server has a peer-to-peer topology where all the nodes are equal and communicate to each other on demand. There is no concept of master nodes, slave nodes, config nodes, name nodes, head nodes, etc., and all the software loaded on each node is identical. It allows nodes to be added or removed without considering their “type”. This model works particularly well with cloud infrastructure in general. For Kubernetes, this means that we can use the exact same container image for all Couchbase nodes.

A typical Couchbase cluster creation process looks like:

- Start Couchbase: Start n Couchbase servers
- Create cluster: Pick any server, and add all other servers to it to create the cluster
- Rebalance cluster: Rebalance the cluster so that data is distributed across the cluster

In order to automate using Kubernetes, the cluster creation is split into a “master” and “worker” Replication Controller (RC). The master RC has only one replica and is also published as a Service. This provides a single reference point to start the cluster creation. By default services are visible only from inside the cluster. This service is also exposed as a load balancer, which allows the Couchbase Web Console to be accessible from outside the cluster.

The worker RC uses the exact same image as the master RC. This keeps the cluster homogeneous, which makes it easy to scale the cluster.

Configuration files used in this blog are available here.
Let’s create the Kubernetes resources to create the Couchbase cluster.

Create Couchbase “master” Replication Controller

The Couchbase master RC can be created using the following configuration file:

apiVersion: v1
kind: ReplicationController
metadata:
  name: couchbase-master-rc
spec:
  replicas: 1
  selector:
    app: couchbase-master-pod
  template:
    metadata:
      labels:
        app: couchbase-master-pod
    spec:
      containers:
      - name: couchbase-master
        image: arungupta/couchbase:k8s
        env:
          - name: TYPE
            value: MASTER
        ports:
        - containerPort: 8091
---
apiVersion: v1
kind: Service
metadata:
  name: couchbase-master-service
  labels:
    app: couchbase-master-service
spec:
  ports:
    - port: 8091
  selector:
    app: couchbase-master-pod
  type: LoadBalancer

This configuration file creates a couchbase-master-rc Replication Controller. This RC has one replica of the pod created using the arungupta/couchbase:k8s image. This image is created using the Dockerfile here. The Dockerfile uses a configuration script to configure the base Couchbase Docker image. First, it uses the Couchbase REST API to set up the memory quota; set up the index, data, and query services; set the security credentials; and load a sample data bucket. Then, it invokes the appropriate Couchbase CLI commands to add the Couchbase node to the cluster, or to add the node and rebalance the cluster. This is based upon three environment variables:

- TYPE: Defines whether the joining pod is worker or master
- AUTO_REBALANCE: Defines whether the cluster needs to be rebalanced
- COUCHBASE_MASTER: Name of the master service

For this first configuration file, the TYPE environment variable is set to MASTER and so no additional configuration is done on the Couchbase image.

Let’s create and verify the artifacts.

Create the Couchbase master RC:

kubectl create -f cluster-master.yml
replicationcontroller "couchbase-master-rc" created
service "couchbase-master-service" created

List all the services:

kubectl get svc
NAME                       CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
couchbase-master-service   10.0.57.201                 8091/TCP   30s
kubernetes                 10.0.0.1      <none>        443/TCP    5h

The output shows that couchbase-master-service is created.

Get all the pods:

kubectl get po
NAME                        READY     STATUS    RESTARTS   AGE
couchbase-master-rc-97mu5   1/1       Running   0          1m

A pod is created using the Docker image specified in the configuration file.

Check the RC:

kubectl get rc
NAME                  DESIRED   CURRENT   AGE
couchbase-master-rc   1         1         1m

It shows that the desired and current number of pods in the RC are matching.

Describe the service:

kubectl describe svc couchbase-master-service
Name:                   couchbase-master-service
Namespace:              default
Labels:                 app=couchbase-master-service
Selector:               app=couchbase-master-pod
Type:                   LoadBalancer
IP:                     10.0.57.201
LoadBalancer Ingress:   a94f1f286590c11e68e100283628cd6c-1110696566.us-west-2.elb.amazonaws.com
Port:                   <unset> 8091/TCP
NodePort:               <unset> 30019/TCP
Endpoints:              10.244.2.3:8091
Session Affinity:       None
Events:
  FirstSeen  LastSeen  Count  From                  SubobjectPath  Type    Reason                Message
  ---------  --------  -----  ----                  -------------  ----    ------                -------
  2m         2m        1      {service-controller}                 Normal  CreatingLoadBalancer  Creating load balancer
  2m         2m        1      {service-controller}                 Normal  CreatedLoadBalancer   Created load balancer

Among other details, the address shown next to LoadBalancer Ingress is relevant for us. This address is used to access the Couchbase Web Console.

Wait for ~3 mins for the load balancer to be ready to receive requests.
The Couchbase Web Console is accessible at <ip>:8091. The image used in the configuration file is configured with the username Administrator and the password password. Enter the credentials to see the console.

Click on Server Nodes to see how many Couchbase nodes are part of the cluster. As expected, it shows only one node.

Click on Data Buckets to see a sample bucket that was created as part of the image. The travel-sample bucket has been created and contains 31,591 JSON documents.

Create Couchbase “worker” Replication Controller

Now, let’s create a worker replication controller. It can be created using the configuration file:

apiVersion: v1
kind: ReplicationController
metadata:
  name: couchbase-worker-rc
spec:
  replicas: 1
  selector:
    app: couchbase-worker-pod
  template:
    metadata:
      labels:
        app: couchbase-worker-pod
    spec:
      containers:
      - name: couchbase-worker
        image: arungupta/couchbase:k8s
        env:
          - name: TYPE
            value: "WORKER"
          - name: COUCHBASE_MASTER
            value: "couchbase-master-service"
          - name: AUTO_REBALANCE
            value: "false"
        ports:
        - containerPort: 8091

This RC also creates a single replica of Couchbase using the same arungupta/couchbase:k8s image. The key differences here are:

- The TYPE environment variable is set to WORKER. This adds a worker Couchbase node to the cluster.
- The COUCHBASE_MASTER environment variable is passed the value of couchbase-master-service. This uses the service discovery mechanism built into Kubernetes so that the worker and master pods can communicate.
- The AUTO_REBALANCE environment variable is set to false. This ensures that the node is only added to the cluster but the cluster itself is not rebalanced. Rebalancing is required to re-distribute data across multiple nodes of the cluster. This is the recommended way, as multiple nodes can be added first and the cluster can then be manually rebalanced using the Web Console.

Let’s create a worker:

kubectl create -f cluster-worker.yml
replicationcontroller "couchbase-worker-rc" created

Check the RC:

kubectl get rc
NAME                  DESIRED   CURRENT   AGE
couchbase-master-rc   1         1         6m
couchbase-worker-rc   1         1         22s

A new couchbase-worker-rc is created where the desired and the current number of instances are matching.

Get all pods:

kubectl get po
NAME                        READY     STATUS    RESTARTS   AGE
couchbase-master-rc-97mu5   1/1       Running   0          6m
couchbase-worker-rc-4ik02   1/1       Running   0          46s

An additional pod is now created. Each pod’s name is prefixed with the corresponding RC’s name. For example, a worker pod is prefixed with couchbase-worker-rc.

The Couchbase Web Console gets updated to show that a new Couchbase node is added.
This is evident from the red circle with the number 1 on the Pending Rebalance tab. Clicking on the tab shows the IP address of the node that needs to be rebalanced.

Scale Couchbase cluster

Now, let’s scale the Couchbase cluster by scaling the replicas for the worker RC:

kubectl scale rc couchbase-worker-rc --replicas=3
replicationcontroller "couchbase-worker-rc" scaled

The updated state of the RC shows that 3 worker pods have been created:

kubectl get rc
NAME                  DESIRED   CURRENT   AGE
couchbase-master-rc   1         1         8m
couchbase-worker-rc   3         3         2m

This can be verified again by getting the list of pods:

kubectl get po
NAME                        READY     STATUS    RESTARTS   AGE
couchbase-master-rc-97mu5   1/1       Running   0          8m
couchbase-worker-rc-4ik02   1/1       Running   0          2m
couchbase-worker-rc-jfykx   1/1       Running   0          53s
couchbase-worker-rc-v8vdw   1/1       Running   0          53s

The Pending Rebalance tab of the Couchbase Web Console shows that 3 servers have now been added to the cluster and need to be rebalanced.

Rebalance Couchbase Cluster

Finally, click on the Rebalance button to rebalance the cluster. A message window showing the current state of the rebalance is displayed. Once all the nodes are rebalanced, the Couchbase cluster is ready to serve your requests.

In addition to creating a cluster, Couchbase Server supports a range of high availability and disaster recovery (HA/DR) strategies. Most HA/DR strategies rely on a multi-pronged approach of maximizing availability, increasing redundancy within and across data centers, and performing regular backups.

Now that your Couchbase cluster is ready, you can run your first sample application. For further information check out the Couchbase Developer Portal and Forums, or see questions on Stack Overflow.

–Arun Gupta, Vice President Developer Relations at Couchbase

- Download Kubernetes
- Get involved with the Kubernetes project on GitHub
- Post questions (or answer questions) on Stack Overflow
- Connect with the community on Slack
- Follow @Kubernetesio on Twitter for latest updates
Quelle: kubernetes

Peter Thiel Tries To Pivot His Personal Brand To Privacy Hero

Jim Watson / AFP / Getty Images

Billionaire Peter Thiel published a passionate op-ed in the New York Times today, mere hours before the deadline for bids to purchase Gawker, the media company that Thiel helped to bankrupt in response to a blog post that publicly exposed his sexual orientation. In the editorial, Thiel positions himself as a defender of online privacy against the media’s “lurid interest in gay life” and a champion of “[p]rotecting individual dignity.”

“All people deserve respect, and nobody’s sexuality should be made a public fixation,” Thiel writes.

Thiel recently baffled some of his Silicon Valley compatriots by speaking at the Republican National Convention in support of Donald Trump’s presidential bid. During his RNC speech, Thiel marked a historic milestone by saying he was “proud to be gay” from the event dais. (Thiel followed that up by dismissing as a “distraction” calls from transgender activists for bathrooms that match gender identity.) Today’s op-ed is similarly savvy. It’s perfectly calibrated to infuriate all the people Thiel wants to infuriate, i.e., defenders of free speech and the press, while causing the vast majority of the public to nod their heads in agreement.

However, in his defense of online privacy, Thiel understates his role in spending $10 million to support an invasion of privacy lawsuit filed against Gawker by Hulk Hogan. He also radically downplays his influence in matters of online privacy as a billionaire board member of two companies with vast data-mining operations: Facebook and CIA-backed Palantir.

Thiel’s framing of his newfound crusade is canny. He mentions a widely reviled article in the Daily Beast last week:

Unfortunately, lurid interest in gay life isn’t a thing of the past. Last week, The Daily Beast published an article that effectively outed gay Olympic athletes, treating their sexuality as a curiosity for the sake of internet clicks. The article endangered the lives of gay men from less tolerant countries, and a public outcry led to its swift retraction. While the article never should have been published, the editors’ prompt response shows how journalistic norms can improve, if the public demands it.

He also attempts to draw a link between Hogan’s lawsuit against Gawker and ongoing efforts to combat revenge porn. Thiel does this by claiming that a bipartisan proposal called the Intimate Privacy Protection Act is nicknamed “the Gawker bill,” although the phrase is not commonly used. “It’s the Intimate Privacy Protection Act or IPPA,” a spokesperson for Rep. Jackie Speier, one of the bill’s sponsors, told BuzzFeed News. “I have no idea where ‘the Gawker Bill’ name comes from, but it’s incorrect.”

Hogan won a $140 million legal judgment against Gawker. Both the media company and its founder, Nick Denton, filed for bankruptcy as a result. In the op-ed, timed hours before the final bids to buy the once-independent media company, Thiel expressed pride in bankrolling Hogan’s battle:

For my part, I am proud to have contributed financial support to [Hogan’s] case. I will support him until his final victory — Gawker said it intends to appeal — and I would gladly support someone else in the same position.

This is an about-face for Thiel. Hogan initially sued Gawker back in 2012. Despite rumors, Thiel did not admit to financing the lawsuit until May 2016, hours after Forbes broke the news that the billionaire was financing a secretive campaign to bring down Gawker.

What’s more, Thiel still has not disclosed which other lawsuits against Gawker he’s financially backing. Charles Harder, the lawyer representing Hogan, also represents other plaintiffs suing Gawker or its current and former employees. Those clandestine actions are not exactly the behavior of a crusader for people’s rights.

Thiel also omitted the fact that he is paying for lawsuits against individual journalists, not just Gawker. Former Gawker editor A.J. Daulerio is a defendant in the Hogan case. In a signed affidavit last week, Daulerio included a screenshot of a bank statement that showed only $1,500 in his checking account. “I have been having trouble finding my own lawyer to advise me because I do not have enough money to pay for one,” he wrote.

Despite the fact that Thiel’s actions were personally motivated and his revenge privately orchestrated, he invokes “public outcry” and “public demands” against the invasion of online privacy throughout. The billionaire even goes as far as saying, “It’s not for me to draw the line”:

A free press is vital for public debate. Since sensitive information can sometimes be publicly relevant, exercising judgment is always part of the journalist’s profession. It’s not for me to draw the line, but journalists should condemn those who willfully cross it. The press is too important to let its role be undermined by those who would search for clicks at the cost of the profession’s reputation.

In a memo to his staff after filing for personal bankruptcy, Denton wrote: “Peter Thiel’s legal campaign has targeted individual writers like Sam Biddle, editors such as John Cook, and me as publisher. It is a personal vendetta. And yes, it’s disturbing to live in a world in which a billionaire can bully journalists because he didn’t like the coverage.”

Thanks to Thiel’s masterful spin, however, the lasting impression will probably be that of a privacy champion, just as he hoped.

Ziff Davis filed the opening bid in the Gawker auction in June for $90 million. Final bids are due today.

Hamza Shaban contributed reporting to this post.

Disclosure: The author of this post was formerly employed by Gawker Media.

Quelle: <a href="Peter Thiel Tries To Pivot His Personal Brand To Privacy Hero“>BuzzFeed

aws-record General Availability

We’re pleased to announce the general availability release of the aws-record Ruby Gem. aws-record is a data mapping abstraction for Amazon DynamoDB, built on top of the AWS SDK for Ruby version 2. It provides helpful features for developing Ruby applications that use DynamoDB.
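To give a feel for the gem, here is a minimal, hedged sketch of an aws-record model; the class, table, and attribute names below are invented for illustration and are not from the announcement.

require 'aws-record'

# Illustrative model: aws-record maps this class to a DynamoDB table
# (by default named after the class). The attribute and key names are made up.
class Book
  include Aws::Record
  string_attr  :isbn,  hash_key: true   # partition key
  string_attr  :title
  integer_attr :year
end

# Create and persist an item (assumes the backing DynamoDB table already exists).
book = Book.new(isbn: '12345', title: 'Example', year: 2016)
book.save

# Read the item back by its key.
Book.find(isbn: '12345')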
Quelle: aws.amazon.com