Snapchat Removes Filter After "Yellowface" Criticism

The company said it was supposed to be "anime."

Everyone knows the best part of Snapchat is the filters. But a new filter was added that many people found offensive. “Yellowface” is when a non-Asian person dresses up or uses makeup (or in this case, a filter) to create a cartoonish exaggeration of stereotypical Asian features – it's considered offensive in the same way blackface is.

Snapchat told The Verge that the filter was “inspired by anime and meant to be playful.” The filter has been taken out of rotation due to the response.



Source: BuzzFeed

Engineering the move to cloud-based services

Supporting a network in transition: Q&A blog post series with David Lef

In this blog post, the second in a series, David Lef, principal network architect at Microsoft IT, chats with us about supporting a network as it transitions from a traditional infrastructure to a fully wireless platform. Microsoft IT is responsible for supporting 900 locations and 220,000 users around the world. David is helping to define the evolution of the network topology to a cloud-based model in Azure that supports changing customer demands and modern application designs.

David Lef explains the major factors that affect migration of IT-supported services and environments to cloud-based services, focusing on network-related practicalities and processes.

Q: Can you explain your role and the environment you support?

A: My role at Microsoft is principal network architect with Microsoft IT. My team supports almost 900 sites around the world and the networking components that connect those sites, which are used by a combination of over 220,000 Microsoft employees and vendors that work on our behalf. Our network supports over 2,500 individual applications and business processes. We are responsible for providing wired, wireless, and remote network access for the organization, implementing network security across our network (including our network edges), and providing connectivity to Microsoft Azure in the cloud. We support a large Azure tenancy using a single Azure Active Directory tenant that syncs with our internal Windows Server Active Directory forests. We have several connections from our on-premises datacenters to Azure using ExpressRoute. Our Azure tenancy supports a huge breadth of Azure resources, some of which are public-facing and some of which are apps and services internal to Microsoft but hosted on the Azure platform.

Q: What are the biggest networking challenges in migrating on-premises services to cloud-based services in Azure?

A: First of all, it's a fundamental change in traffic patterns. It used to be that we hosted most of our network traffic within our corporate network and datacenters, and selectively allowed access from the Internet into our network for apps and services that our employees needed to access while they were outside of the corporate network. From the aspect of traffic going in and out of our corporate network, we had our users accessing what you might call traditional Internet content, as well as users connecting to the corporate network using a virtual private network (VPN). Now, we are moving toward hosting the bulk of our on-premises datacenter infrastructure within Azure and choosing how we want to allow access to it.

Secondly, we’ve had network edge traffic increase a lot. Our bandwidth at the edge is over 500 percent of what it was just a couple of years ago. The on-premises datacenter is no longer the hub of traffic for us, and the cloud is the default app and infrastructure location for new projects at Microsoft. Our traffic pattern now revolves primarily around traffic to Azure datacenters. This, of course, has brought demand for more robust, higher-bandwidth edge connections: the resources that users formerly accessed within the corporate network are now hosted in Azure, and those users expect the same level of responsiveness from their apps and services that they’ve been accustomed to.

We’re continuously moving apps and services from on-premises datacenters to Azure, so the connectivity requirements between Azure and our on-premises datacenters are changing as that migration continues. In addition, the pipeline between Azure and our datacenters is shrinking as more of our infrastructure moves to Azure. Our migration teams are moving as much as possible to software as a service (SaaS) and platform as a service (PaaS) in Azure and, in situations where SaaS or PaaS doesn’t offer an immediate or beneficial solution, simply lifting the infrastructure components out of on-premises datacenters and into Azure infrastructure as a service (IaaS) virtual machines and virtual networks.

A significant part of the migration for these apps and services is analysis for redesign in the cloud. Wherever possible, our engineering teams are redesigning and re-architecting for the cloud. Internet-based traffic can have a higher latency than what Microsoft experiences within its corporate network infrastructure, so designing for that and educating users on the changes they should expect is important.

Q: How do you ensure adequate service levels in an Azure-based cloud delivery model?

A: The network component has a big impact on service levels, but it really does start with service design for our Azure-based resources. Connectivity to Azure is, for all intents and purposes, Internet connectivity, so anything hosted in Azure is designed as an Internet-based solution wherever possible. Along with accommodating the higher latency I’ve already mentioned, the redesign process also includes retry logic for when a connection experiences any type of outage, caching and prefetching of data, and compression of data across client connections.
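The retry logic described above can be sketched in a few lines. This is an illustrative pattern only, not Microsoft's actual implementation; the function and parameter names are invented for the example:

```python
import random
import time

def call_with_retries(operation, max_attempts=5, base_delay=0.5):
    """Retry a flaky operation with exponential backoff and jitter.

    `operation` is any zero-argument callable; transient failures are
    assumed to raise ConnectionError.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except ConnectionError:
            if attempt == max_attempts:
                raise  # out of retries: surface the failure to the caller
            # Exponential backoff with jitter spreads out reconnect storms.
            delay = base_delay * (2 ** (attempt - 1)) * random.uniform(0.5, 1.5)
            time.sleep(delay)

# Example: an operation that fails twice, then succeeds.
calls = {"n": 0}
def flaky_fetch():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient outage")
    return "payload"

result = call_with_retries(flaky_fetch, base_delay=0.01)
```

The caching, prefetching, and compression pieces are separate concerns layered on top of a retry wrapper like this one.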

After service design, we’re doing as much as we can on the network side to ensure robust connectivity. We’re using ExpressRoute extensively for our large locations, and making sure that we locate our hop onto ExpressRoute as physically close as possible to the resources that will use that connection, whether that’s servers or users. That means using network service providers that have co-location facilities close to our physical locations. We don’t rely on traditional hub-and-spoke networking architectures for our locations, and we try to avoid moving unnecessary traffic across our network backbones. We’ve found that, except in cases where the provider infrastructure is very immature or limited, the quicker you can drop someone onto the Internet, the better off they will be.

We monitor our environment pretty thoroughly. We’re designing the modern apps that run on Azure SaaS and PaaS to use the built-in instrumentation those platforms provide. We’re leveraging the built-in synthetic transactions in those services and building our own, using System Center products and Operations Management Suite in Azure. This gives us a comprehensive view of our infrastructure, both centralized and decentralized. We treat our cloud services hosted in Azure as a product for which we’re the provider and all of Microsoft is the customer.
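A synthetic transaction is just a scripted user action run on a schedule, timed, and reported as a health metric. A minimal stand-in might look like the following (the record shape and names are invented for illustration; this is not the System Center or OMS format):

```python
import time

def run_synthetic_transaction(name, probe, timeout_s=5.0):
    """Run one synthetic (scripted) transaction and report a metric record.

    `probe` is a zero-argument callable standing in for the real user
    action being simulated (an HTTP request, a sign-in, a query).
    """
    start = time.perf_counter()
    try:
        probe()
        ok = True
        error = None
    except Exception as exc:
        ok = False
        error = str(exc)
    elapsed = time.perf_counter() - start
    return {
        "check": name,
        "healthy": ok and elapsed <= timeout_s,
        "latency_s": round(elapsed, 4),
        "error": error,
    }

# A trivial probe that always succeeds instantly.
record = run_synthetic_transaction("portal-login", lambda: None)
```

A monitoring agent would run such checks from many locations and feed the records into a central pipeline, which is what gives the combined centralized/decentralized view described above.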

Q: How does the challenge differ by geographic locations, and has that changed since the migration to cloud-based services?

A: Anytime we talk about geography, service placement is a huge consideration. We look at where our clients are for any given service and where the app-to-app dependencies lie, and plan accordingly. In most cases, we have at least one Azure datacenter within 1,000 kilometers of our clients, so we use that in our business continuity and disaster recovery planning. Azure’s built-in geo-redundancy and resiliency components also help in those respects.

From a pure networking perspective, we try to place our Layer 3 management as close to the Azure datacenter as possible. That gives us the greatest control over traffic to Azure, and the best insight into what’s happening with that traffic.

Q: How do you encourage user adoption and buy-in when migrating to cloud-based services?

A: Our Azure teams provide a lot of guidance around the entire Azure experience. From a user experience standpoint, we do the best we can to set accurate expectations for the apps and services that are migrated to Azure. In many cases, the general user experience is improved for apps on Azure, so this isn’t as much about softening the blow as it is about showing users how having their app hosted on Azure changes the way the app is accessed and experienced. We make sure that users are aware of the ways that making an app available in the cloud can expose new functionality or new ways to use the app. We focus on providing a user experience that enables mobile access from multiple device platforms. The key idea here is access from anywhere, on anything, at any time. An excellent example of this is the re-architecting of our licensing platform for the cloud, which was written about in a case study.

For the general migration to Azure, Microsoft IT has allotted people and capital to facilitate a smooth transition whenever a migration takes place. These resources contribute to the technical migration itself, training, and making sure that business processes are running as well or better than when the app or service was hosted on-premises.

Q: How have the IT teams changed to support this new delivery model?

A: The biggest change most people expect is this mass exodus or culling of traditional IT functions, but that’s not really the way it’s worked for us. We still have a network infrastructure to support throughout our physical locations, and datacenters don’t disappear overnight. Whether there are ten servers or 10,000 servers in a datacenter, disaster recovery and business continuity processes still need to happen and we need IT support for that. That being said, the requirement for on-premises infrastructure support does change. A lot of our high-level support teams are transitioning to different projects, sometimes in the Azure space. It’s given a lot of Microsoft employees the chance to improve their skill sets and shift their focus to development and innovation instead of maintenance and management.

With Azure, IT responsibilities become more compartmentalized, where we have IT staff that are focused on providing first-level support in their area of expertise, and it works without requiring a lot of people to have end-to-end knowledge of the environment or solution. Our Azure network experts provide their service and know their product and environment, and our Azure app experts do the same in their area, without needing to know specifically what’s happening with the network. The high-level knowledge is there across teams, of course, but resources and solutions become much more like plug-and-play solutions. This means that we’re more agile and able to respond to demand or start new projects more efficiently. Our teams don’t need to wait for physical servers to be built out or networking hardware to be installed; they simply request what they need, and Azure generates the resources.

Learn more

Other blog posts in this series:

Supporting network architecture that enables modern work styles

Learn how Microsoft IT is evolving its network architecture.
Source: Azure

DAY 1- OPENSTACK DAYS SILICON VALLEY 2016

The post DAY 1- OPENSTACK DAYS SILICON VALLEY 2016 appeared first on Mirantis | The Pure Play OpenStack Company.
THE UNLOCKED INFRASTRUCTURE CONFERENCE
By Catherine Kim

This year's OpenStack Days Silicon Valley, held once again at the Computer History Museum, carried a theme of industry maturity; we've gone, as Mirantis CEO and co-founder Alex Freedland said in his introductory remarks, from wondering if OpenStack was going to catch on, to wondering where containers fit into the landscape, to talking about production environments of both OpenStack and containers.

Here's a look at what you missed.
OpenStack: What Next?
OpenStack Foundation Executive Director Jonathan Bryce started the day off talking about the future of OpenStack. He's been traveling the globe visiting user groups and OpenStack Days events, watching as the technology takes hold in different parts of the world, but his predictions were less about what OpenStack could do and more about what people, and other projects, could do with it.

Standard frameworks, he said, provide the opportunity for large numbers of developers to create entirely new categories. For example, before the LAMP stack (Linux, Apache, MySQL, and PHP), the web was largely made up of static pages, not the dynamic applications we have now. Android and iOS provided common frameworks that let developers release millions of apps a year, supplanting purpose-built machinery with a simple smartphone.

To make that happen, though, the community had to do two things: collaborate and scale. Just as the components of LAMP worked together, OpenStack needed to collaborate with other projects, such as Kubernetes, to reach its potential.

As for scaling, Jonathan pointed out that historically, OpenStack has been difficult to set up. It’s important to make success easier to duplicate. While there are incredible success stories out there, with some users using thousands of nodes, those users originally had to go through a lot of iterations and errors. For future developments, Jonathan felt it was important to share information about errors made, so that others can learn from those mistakes, making OpenStack easier to use.

To that end, the OpenStack Foundation is continuing to produce content to help with specific needs, ranging from explaining the business benefits to a manager to more complex topics such as security. He also talked about the need to grow the talent pool, and about the ability for students to take the Certified OpenStack Administrator exam (or others like it) to prove their capabilities in the market.
User talks
One thing that was refreshing about OpenStack Days Silicon Valley was the number of user-given talks. On day one we heard from Walmart, SAP, and AT&T, all of which have significantly transformed their organizations through the use of OpenStack.

OpenStack, Sean Roberts explained, enabled Walmart to build applications that can heal themselves, with failure scenarios that include rules for how to recover from those failures. In particular, WalmartLabs, the online arm of the company, had been making great strides with OpenStack, especially with a devops tool called OneOps. The tool makes it possible for them to manage their large number of nodes easily, and he suggested that it might do even better as an independent project under OpenStack.

Markus Riedinger talked about SAP and how it had introduced OpenStack. After making 23 acquisitions in a short period of time, the company was faced with a diverse infrastructure that didn't lend itself to collaboration. In the last few years it has begun to move toward cloud-based work, and in 2013 it started to move toward OpenStack. Now the company has a container-based OpenStack structure based on Puppet, providing a clean separation of control and data, and a fully automatic system with embedded analytics and pre-manufactured PODs for capacity extension. Their approach means that one or two people can take a data center from commissioned bare metal to an operational, scalable Kubernetes cluster running a fully configured OpenStack platform in less than a day.

Greg Stiegler discussed AT&T's cloud journey, and open source and OpenStack at AT&T. He said that rapid advancements in mobile data services have brought numerous benefits and, in turn, an explosion of network traffic, which is expected to grow 10 times by 2020. To facilitate this growth, AT&T needed a platform, with a goal of remaining as close to trunk as possible to reduce technical debt. The result is the AT&T Integrated Cloud. Sorabh Saxena spoke about it at the OpenStack Summit in Austin earlier this year, but new today was the notion that the community effort should have a unified roadmap leader, a fully developed strategy around containers, and a rock-solid core tenet.

Greg finished up by saying that while AT&T doesn't expect perfection, it does believe that OpenStack needs to be continually developed and strengthened. The company is grateful for what the community has provided, and has contributed a dedicated AT&T community team in return. Greg felt that the moral of his story was that community collaboration brings solutions at a faster rate, while weeding out mistakes through the experiences of others.
What venture capitalists think about open source
Well, that got your attention, didn't it? It got the audience's attention too, as Martin Casado, a general partner at Andreessen Horowitz, started his talk by saying that the current prevailing wisdom is that infrastructure is dead. Why? Partly because people don't understand what the cloud is, and partly because they think that if the cloud is free, there's nothing left to invest in. Having looked into it, he thinks that view is dead wrong, and he even believes that newcomers now have an unfair advantage.

Martin (who in a former life launched the "software-defined" movement by co-founding SDN maker Nicira) said that for this talk, something is "software defined" if you can implement it in software and distribute it in software. For example, in the consumer space, GPS devices have largely been replaced by software applications like Waze, which can be distributed to millions of phones; the phones themselves can run diverse apps that replace many functionalities that used to be "wrapped in sheet metal."

He argued that infrastructure is following the same pattern. It used to be that the only common interface was the Internet or IP, but we have since seen a maturation of software that allows you to insert core infrastructure as software. Martin said that right now is one of those rare times when the market is sufficient for building a company around a product that consists entirely of software. (You still, however, need a sales team, sorry.)

The crux of the matter, though, is that the old model for open source has changed. The old model for open-source companies was to be a support company; now, many companies use open source to reach customers and gain credibility, but their actual commercial offering is a service. Companies doing this, such as GitHub (which didn't even invent Git), have been enormously successful.
And now a word from our sponsors…
The morning included several very short "sponsor moments," two of which included brief tech talks.

The third was Michael Miller of SUSE, who was joined onstage by Boris Renski from Mirantis. Together they announced that Mirantis and SUSE would collaborate to provide support for SLES as both hosts and guests in Mirantis OpenStack, which already supports Ubuntu and Oracle Linux.

“At this point, there is only one conspicuous partner missing from this equation,” Renski said. Not to worry, he continued. SUSE has an expanded support offering, so in addition to supporting SUSE hosts, through the new partnership, Mirantis/SUSE customers with CentOS and RHEL hosts can also get support. “Mirantis is now a one-stop shop for supporting OpenStack.”

Meanwhile, Sujal Das, SVP of Marketing for Netronome, discussed networking and security, and the many industry reports that highlight the importance of zero-trust security, in which each VM and application must be individually trusted. OpenStack enables centralized control and automation in these types of deployments, but there are challenges when using OVS and connection tracking, which affect VMs and the efficiency of the server. Ideally, you would like line-rate performance, but Netronome ran tests showing that you do not get that performance with zero-trust security and OpenStack. Netronome is working on enhancements and adaptations to assist with this.

Finally, Evan Mouzakitis of Datadog gave a great explanation of how you can look more closely at the events that happen when you are using OpenStack, to see not only what happened, but why. Evan explained that OpenStack uses RabbitMQ by default for message passing, and that once you can listen in on it, you know a lot more about what's happening under the hood and about the events that are occurring. (Hint: go to http://dtdg.co/nova-listen.)
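To make the idea concrete: OpenStack services emit notifications onto RabbitMQ in the oslo.messaging JSON format. A real listener would attach to the notification queue with a client library such as pika or kombu; this stdlib-only sketch, using an abridged, hypothetical payload, just shows how much you can read off a single message:

```python
import json

# A trimmed example payload in the oslo.messaging notification format
# that Nova emits onto RabbitMQ (fields abridged and values invented
# for illustration).
raw_message = json.dumps({
    "event_type": "compute.instance.create.end",
    "publisher_id": "compute.host-01",
    "timestamp": "2016-08-10 17:00:00.000000",
    "payload": {"instance_id": "3f1c...", "state": "active"},
})

def summarize_notification(body):
    """Pull the fields that answer 'what happened, and where?'."""
    msg = json.loads(body)
    return {
        "event": msg["event_type"],
        "source": msg["publisher_id"],
        "details": msg.get("payload", {}),
    }

summary = summarize_notification(raw_message)
```

Subscribing a callback like this to the notification exchange is the essence of the approach Evan described.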
Containers, containers, containers
Of course, the main thrust was OpenStack and containers, and there was no shortage of content along those lines.
Craig McLuckie of Google and Brandon Philips of CoreOS sat down with Sumeet Singh of AppFormix to talk about the future of OpenStack, namely the integration of OpenStack and Kubernetes. Sumeet started this discussion swiftly, asking Craig and Brandon “If we have Kubernetes, why do we need OpenStack?”

Craig said that enterprises need hybrids of technologies, and that there is a lot of alignment between the two, so both can be useful for enterprises. Brandon added that there's a large incumbent base of virtual machine users, and they aren't going to go away.

There's a lot of integration work, but also a lot of other work to do as a community. Some of it is at the next level of abstraction; one of those things is rallying together to give software vendors a set of common standards for describing packages. Craig also believed that there's a good opportunity to think about the brokering of services and lifecycle management.

Craig also mentioned that he felt we need to start thinking about how to bring the OpenStack and Cloud Native Computing foundations together, and how to create working groups that span the two foundations' boundaries.

In terms of using the two together, Craig said that in his experience, enterprises usually ask what it looks like to use them in combination. As people start to understand the different capabilities, they shift toward it, but it's all very new, so it's quite speculative right now.

Finally, Florian Leibert of Mesosphere, Andrew Randall of Tigera, Ken Robertson of Apcera, and Amir Levy of Gigaspaces sat down with Jesse Proudman of IBM to discuss "The Next Container Standard."

Jesse started off the discussion by talking about how rapidly OpenStack has developed, and how in two short years containers have penetrated the marketplace. He questioned why that might be.

Some of the participants suggested that a big reason for their uptake is that containers drive adoption and help eliminate inefficiencies, so customers can easily see how dynamic this field is in meeting their requirements.

A number of participants felt that containers are another wonderful tool for getting the job done, and that we'll see more innovations down the road. Florian pointed out that containers were around before Docker; what Docker has done is make it possible for individuals to use containers on their own websites. Containers are just part of an evolution.

As for Cloud Foundry versus Mesos or Kubernetes, most of the participants agreed that standard orchestration has allowed us to take a step higher in the model, and that the underlying tools can be used together, as long as you use the right models. Amir argued that there is no need to take one specific technology's corner; there will always be new technologies around the corner, and whatever we see today will be different tomorrow.

Of course, there's the question of whether these technologies are complementary or competitive. Florian argued that it comes down to religion, and that over time companies will often evolve to be very similar to one another. But if it is a religious decision, then who is making that decision?

The panel agreed that it is often the developers themselves who make these decisions, but that eventually companies will either deliberately choose to use multiple platforms or decide to use just one.

Finally, Jesse asked the panel how companies' desire for a strong ROI affects OpenStack, leading to a discussion about the importance of really strong use cases, and of showing customers how OpenStack can improve speed or flexibility.
Coming up
So now we head into day 2 of the conference, where it's all about thought leadership, community, and user stories. Look for commentary from users such as Tapjoy, and thought leadership from voices such as James Staten of Microsoft, Luke Kanies of Puppet, and Adrian Cockcroft of Battery Ventures.

 

Source: Mirantis

Python 3 on Google App Engine flexible environment now in beta

Posted by Amir Rouzrokh, Product Manager

Developers running Python on Google App Engine have long asked for support for Python 3 and third-party Python packages. Today we’re excited to announce the beta release of the Python runtime on App Engine Flexible Environment with support for Python 3.4 and 2.7. You can now develop applications in the Python version you prefer and create performant mobile and web backends using the frameworks and libraries of your choice. Meanwhile, developers benefit from App Engine’s built-in services, such as autoscaling, load balancing, microservices support and traffic splitting and hence can focus on their code and not worry about infrastructure maintenance.

Here at Google, we’re committed to the open-source model and strive for product designs that promote choice for developers. App Engine Flexible Environment runtimes are simple and lean, distributed on GitHub, and can access services from any cloud platform provider, including Google Cloud Platform using the Python Client Libraries. Because of containerization, you can run your application on App Engine Flexible, Google Container Engine, Google Compute Engine, locally (for example, by using minikube), or on any cloud provider that supports containers.

Getting started with Python on App Engine is easy. The best place to start is the Python developer hub, where we’ve gathered everything Python in one place. If you’re new to App Engine, we recommend trying out this Quickstart to get a sense of how App Engine Flexible works. Here’s a quick video of the quickstart experience for you to watch.
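For reference, choosing the Python version on the flexible environment happens in your app's app.yaml. A minimal sketch might look like the following (this assumes a WSGI app object named app in main.py served with gunicorn; adjust the entrypoint to your framework):

```yaml
runtime: python
env: flex
entrypoint: gunicorn -b :$PORT main:app

runtime_config:
  python_version: 3
```

With python_version set to 3, App Engine builds your container against the Python 3.4 interpreter; omit it (or set it to 2) for Python 2.7.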

For more experienced users and those who wish to learn more about Python on Google Cloud Platform, we recommend completing the bookshelf tutorial.

When running a Python application on App Engine, you can use the tools and databases you already know and love. Use Flask, Django, Pyramid, Falcon, Tornado or any other framework to build your app. You can also check out samples on how to use MongoDB, MySQL or Google Cloud Datastore.

Using the Google Cloud client library, you can take advantage of Google’s advanced APIs and services, including Google BigQuery, Google Cloud Pub/Sub, and Google Cloud Storage, through simple, easy-to-understand APIs:

from gcloud import storage

client = storage.Client('<your-project-id>')
bucket = client.get_bucket('<your-bucket-name>')
blob = bucket.blob('my-test-file.txt')
blob.upload_from_string('this is test content!')

We’re thrilled to welcome Python 3 developers to Google Cloud Platform and are committed to making further investments in App Engine Standard and Flexible to help make you as productive as possible.

Feel free to reach out to us on Twitter using the handle @googlecloud. We’re also on the Google Cloud Slack community. To get in touch, request an invite to join the Slack Python channel.

Source: Google Cloud Platform