How does the world consume private clouds?

In my previous blog, Why the World Needs Private Clouds, we looked at ten reasons for considering a private cloud. The next logical question is how a company should go about building a private cloud.
In my view, there are four consumption models for OpenStack. Let’s look at each approach and then compare.

Approach 1: DIY
For the most sophisticated users, where OpenStack is super-strategic to the business, a do-it-yourself approach is appealing. Walmart, PayPal, and so on are examples of this approach.
In this approach, the user has to grab upstream OpenStack bits, package the right projects, fix bugs or add features as needed, then deploy and manage the OpenStack lifecycle. The user also has to “self-support” their internal IT/OPS team.
This approach requires recruiting and retaining a very strong engineering team that is adept at Python, OpenStack, and working with the upstream open-source community. Because of this, I don’t think more than a handful of companies can or would want to pursue this approach. In fact, we know of several users who started out on this path but had to switch to a different approach because they lost engineers to other companies. Net-net, the DIY approach is not for the faint of heart.
Approach 2: Distro
For large sophisticated users that plan to customize a cloud for their own use and have the skills to manage it, an OpenStack distribution is an attractive approach.
In this approach, no upstream engineering is required. Instead, the company is responsible for deploying a known good distribution from a vendor and managing its lifecycle.
Even though this is simpler than DIY, very few companies can manage a complex, distributed, and fast-moving piece of software such as OpenStack, a point made by Boris Renski in his recent blog Infrastructure Software is Dead. Therefore, most customers end up utilizing extensive professional services from the distribution vendor.
Approach 3: Managed Services
For customers who don’t want to deal with the hassle of managing OpenStack, but want control over the hardware and datacenter (on-prem or colo), managed services may be a great option.
In this approach, the user is responsible for the hardware, the datacenter, and tenant management; but OpenStack is fully managed by the vendor. Ultimately this may be the most appealing model for a large set of customers.
Approach 4: Hosted Private Cloud
This approach is a variation of the Managed Services approach. In this option, not only is the cloud managed, it is also hosted by the vendor. In other words, the user does not even have to purchase any hardware or manage the datacenter. In terms of look and feel, this approach is analogous to purchasing a public cloud, but without the “noisy neighbor” problems that sometimes arise.
Which approach is best?
Each approach has its pros and cons, of course. For example, each approach has different requirements in terms of engineering resources:

Team required                              | DIY | Distro | Managed Service | Hosted Private Cloud
Need upstream OpenStack engineering team   | Yes | No     | No              | No
Need OpenStack IT architecture team        | Yes | Yes    | No              | No
Need OpenStack IT/OPS team                 | Yes | Yes    | No              | No
Need hardware & datacenter team            | Yes | Yes    | Yes             | No

Which approach you choose should also depend on factors such as the importance of the initiative, its relative cost, and so on:

Factor                                              | DIY                        | Distro                              | Managed Service                                         | Hosted Private Cloud
How important is the private cloud to the company?  | The business depends on it | Extremely strategic to the business | Very strategic to the business                          | Somewhat strategic to the business
Ability to impact the community                     | Very direct                | Somewhat direct                     | Indirect                                                | Minimal
Cost (relative)                                     | Depends on skills & scale  | Low                                 | Medium                                                  | High
Ability to own OpenStack operations                 | Yes                        | Yes                                 | Depends on whether the vendor offers a transfer option | No

So, as a user of an OpenStack private cloud, you have four ways to consume the software.
The cost and convenience of each approach vary, as the simplified charts above show, and need to be traded off against your strategy and requirements.
OK, so we know why you need a private cloud, and how you can consume one. But there's still one burning question: who needs it?
The post How does the world consume private clouds? appeared first on Mirantis | The Pure Play OpenStack Company.
Quelle: Mirantis

Facebook Messenger Is Testing "Add Contact" Request

Facebook is testing an “Add Contact” feature in its Messenger app, the company confirmed to BuzzFeed News today.

The feature allows people to connect on the popular messaging app, used by more than 1 billion people, without becoming friends on Facebook itself. Facebook already allows non-Facebook friends to message each other on Messenger, via message requests, but if the company rolls out Add Contact broadly it could lead to a new network forming on Messenger and outside of the main Facebook product.

Messenger, though still behind Facebook's monthly active user count of 1.7 billion, is growing faster than the main product, and will likely increase in importance to the company now that original sharing is down on Facebook proper and its ad load is nearing capacity. In an April earnings call, Facebook CEO Mark Zuckerberg was clear about the importance of Messenger to his business: “A lot of people want to share messages privately, one-on-one or with very small groups.”

With Messenger on the rise, Facebook is clearly thinking about how to develop the app into its own ecosystem, untethered in some respects from the Big Blue App. And the “Add Contact” request is one more tool to help accomplish that.

Quelle: BuzzFeed

Facebook Says Suspension Of Libertarian Groups Was An “Error”

Another controversial Facebook takedown, another muddy explanation for an erroneous removal.

Last week, Facebook mistakenly removed two big libertarian groups from its pages — Being Libertarian and Occupy Democrats Logic. Both claim over 100,000 members. After the groups protested, Facebook restored them both on Monday, offering a vague explanation for the takedowns, one that's become increasingly common following the sudden, temporary disappearance of political speech or contentious content from its platform.

“The pages were taken down in error,” a Facebook spokesperson told BuzzFeed News. “Both have been reinstated with any posts that violated our community standards removed.” Facebook did not say what posts it determined to be in violation of those standards, though Occupy Democrats Logic believes it was targeted for showcasing a meme on “progressive liberal logic.”

If Facebook's statement sounds familiar, it's because the company provided similar explanations when it temporarily removed a video showing the aftermath of the shooting of Philando Castile (that was a “technical glitch”) and disappeared a handful of Bernie Sanders support groups (“one of our automated policies was applied incorrectly”).

What policies and protocols determined or informed these removals of political speech? Facebook isn't saying. Asked to explain the “error” that removed Being Libertarian and Occupy Democrats Logic from Facebook, a company spokesman declined to do so.

An administrator for Occupy Democrats Logic told BuzzFeed News that Facebook did not provide a detailed explanation for the group's takedown. And he insisted that the group was not forced to remove certain posts as a condition of reinstatement. “I didn't remove jack shit,” the admin said. “I was confident nothing I posted violated standards.”

A cursory search of the restored Occupy Democrats Logic page no longer turns up the “progressive liberal logic” meme post.

An administrator for Being Libertarian has not yet responded to a request for comment.

Facebook now boasts 1.7 billion monthly active users. It's a massive network that for many is the extent of the internet itself. When political speech is removed from the platform, even temporarily, it's a big deal. And Facebook is giving no indication that it's ready to address these removals in more depth.

Quelle: BuzzFeed

Your Software is Safer in Docker Containers

The Docker security philosophy is Secure by Default, meaning security should be inherent in the platform for all applications rather than a separate solution that needs to be deployed, configured, and integrated.
Today, Docker Engine supports all of the isolation features available in the Linux kernel. Not only that, but we’ve built a simple user experience by implementing default configurations that provide greater protection for applications running within Docker Engine, making strong security the default for all containerized applications while still leaving the controls with the admin to change configurations and policies as needed.
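To make that concrete, here is a minimal sketch using the Docker SDK for Python (the SDK, the image name, and the specific hardening options below are illustrative choices, not something prescribed by this post) of how an operator can tighten the engine's already-strong defaults for an individual container:

```python
# Minimal sketch using the Docker SDK for Python ("pip install docker").
# The image name and the specific hardening options are illustrative, not
# recommendations from this post.
import docker

client = docker.from_env()  # connects to the local Docker Engine

# Docker Engine already applies namespaces, cgroups, a default seccomp profile,
# and a restricted capability set. The admin can tighten things further per
# container when the workload allows it:
output = client.containers.run(
    "alpine:3.4",                               # placeholder base image
    ["echo", "hello from a locked-down container"],
    cap_drop=["ALL"],                           # drop every Linux capability the app does not need
    read_only=True,                             # immutable root filesystem
    pids_limit=64,                              # cap the number of processes
    mem_limit="64m",                            # cgroup memory limit
    security_opt=["no-new-privileges"],         # block privilege escalation via setuid binaries
    remove=True,                                # clean up the container afterwards
)
print(output.decode())
```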
But don’t take our word for it.  Two independent groups have evaluated Docker Engine for you and recently released statements about the inherent security value of Docker.
Gartner analyst Joerg Fritsch recently published a new paper titled How to Secure Docker Containers in Operation and discussed it in this blog post. In it, Fritsch states the following:
“Gartner asserts that applications deployed in containers are more secure than applications deployed on the bare OS” because even if a container is cracked “they greatly limit the damage of a successful compromise because applications and users are isolated on a per-container basis so that they cannot compromise other containers or the host OS”.
Additionally, NCC Group contrasted the security features and defaults of container platforms and published the findings in the paper “Understanding and Hardening Linux Containers.” Included is an examination of attack surfaces, threats, related hardening features, a contrast of different defaults and recommendations across different container platforms. A key takeaway from this examination is the recommendation that applications are more secure by running in some form of Linux container than without.
“Containers offer many overall advantages. From a security perspective, they create a method to reduce attack surfaces and isolate applications to only the required components, interfaces, libraries and network connections.”
“In this modern age, I believe that there is little excuse for not running a Linux application in some form of a Linux container, MAC or lightweight sandbox.”
– Aaron Grattafiori, NCC Group

The chart below depicts the outcome of the security evaluation of three container platforms. Docker Engine was found to have a more comprehensive feature set with strong defaults.

[Chart: NCC Group comparison of container platform security features and defaults]
Source: Understanding and Hardening Linux Containers

The Docker security philosophy of “Secure by Default” spans across the concepts of secure platform, secure content and secure access to deliver a modern software supply chain for the enterprise that is fundamentally secure.  Built on a secure foundation with support for every Linux isolation feature, Docker Datacenter delivers additional features like application scanning, signing, role based access control (RBAC) and secure cluster configurations for complete lifecycle security. Leading enterprises like ADP trust Docker Datacenter to help harden the containers that process paychecks, manage benefits and store the most sensitive data for millions of employees across thousands of employers.


More Resources:

Read the Container Isolation White Paper
Learn how Docker secures your software supply chain
ADP hardens enterprise containers with Docker Datacenter
Try Docker Datacenter free for 30 days

The post Your Software is Safer in Docker Containers appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Securing the Enterprise Software Supply Chain Using Docker

At Docker, we have spent a lot of time discussing runtime security and isolation as a core part of the container architecture. However, that is just one aspect of the total software pipeline. Instead of a one-time flag or setting, we need to approach security as something that occurs at every stage of the application lifecycle. Organizations must apply security as a core part of the software supply chain, where people, code, and infrastructure are constantly moving, changing, and interacting with each other.
If you consider a physical product like a phone, it’s not enough to think about the security of the end product. Beyond the decision of what kind of theft-resistant packaging to use, you might want to know where the materials are sourced from and how they are assembled, packaged, and transported. Additionally, it is important to ensure that the phone is not tampered with or stolen along the way.

The software supply chain maps almost identically to the supply chain for a physical product. You have to be able to identify and trust the raw materials (code, dependencies, packages), assemble them together, ship them by sea, land, or air (network) to a store (repository) so the item (application) can be sold (deployed) to the end customer.
Securing the software supply chain is quite similar. You have to:

Identify everything in your pipeline, from people and code to dependencies and infrastructure
Ensure a consistent and quality build process
Protect the product while in storage and transit
Guarantee and validate the final product at delivery against a bill of materials

In this post we will explain how Docker’s security features can be used to provide active and continuous security for a software supply chain.
Identity
The foundation of the entire pipeline is built on identity and access. You fundamentally need to know who has access to what assets and who can run processes against them. The Docker architecture has a distinct identity concept that underpins the security strategy for securing your software supply chain: cryptographic keys allow the publisher to sign images to ensure proof-of-origin, authenticity, and provenance for Docker images.
Consistent Builds: Good Input = Good Output
Establishing consistent builds allow you to create a repeatable process and get control of your application dependencies and components to make it easier to test for defects and vulnerabilities. When you have a clear understanding of your components, it becomes easier to identify the things that break or are anomalous.

To get consistent builds, you have to ensure you are adding good components:

Evaluate the quality of the dependency, make sure it is the most recent/compatible version and test it with your software
Authenticate that the component comes from a source you expect and was not corrupted or altered in transit
Pin the dependency ensuring subsequent rebuilds are consistent so it is easier to uncover if a defect is caused by a change in code or dependency
Build your image from a trusted, signed base image using Docker Content Trust

Application Signing Seals Your Build
Application signing is the step that effectively “seals” the artifact from the build. By signing the images, you ensure that whoever verifies the signature on the receiving side (docker pull) establishes a secure chain with you (the publisher). This relationship assures that the images were not altered, added to, or deleted from while stored in a registry or during transit. Additionally, signing indicates that the publisher “approves” the image you have pulled as good.

Enabling Docker Content Trust on both build machines and the runtime environment sets a policy so that only signed images can be pulled and run on those Docker hosts.  Signed images signal to others in the organization that the publisher (builder) declares the image to be good.
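As a rough sketch of what that policy looks like in practice (the registry and image names below are placeholders, and the post does not prescribe this exact wrapper), content trust is switched on per client through the DOCKER_CONTENT_TRUST environment variable:

```python
# Sketch only: Docker Content Trust is enabled per client via the
# DOCKER_CONTENT_TRUST environment variable; this helper simply sets it before
# shelling out to the Docker CLI. Image names are placeholders.
import os
import subprocess

def pull_signed(image: str) -> None:
    env = dict(os.environ, DOCKER_CONTENT_TRUST="1")
    # With content trust enabled, `docker pull` only succeeds if the tag has
    # valid signatures; unsigned tags are rejected.
    subprocess.run(["docker", "pull", image], env=env, check=True)

def push_signed(image: str) -> None:
    env = dict(os.environ, DOCKER_CONTENT_TRUST="1")
    # With content trust enabled, `docker push` signs the tag with the
    # publisher's keys (creating them on first use).
    subprocess.run(["docker", "push", image], env=env, check=True)

if __name__ == "__main__":
    pull_signed("registry.example.com/myapp:1.0")   # placeholder image
```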
Security Scanning and Gating
Your CI system and developers verify that your build artifact works with the enumerated dependencies and that operations on your application behave as expected in both the success path and the failure path, but have they vetted the dependencies for vulnerabilities? Have they vetted the subcomponents of those dependencies, or the system libraries they bundle? Do they know the licenses for their dependencies? This kind of vetting is almost never done on a regular basis, if at all, since it is a huge overhead on top of already delivering bugfixes and features.

Docker Security Scanning assists in automating the vetting process by scanning the image layers. Because this happens as the image is pushed to the repo, it acts as a last check or final gate before images are deployed into production. Currently available in Docker Cloud and coming soon to Docker Datacenter, Security Scanning creates a Bill of Materials of all of the image’s layers, including packages and versions. This Bill of Materials is used to continuously monitor against a variety of CVE databases. This ensures that scanning happens more than once and notifies the system admin or application developer when a new vulnerability is reported for an application package that is in use.
Threshold Signing & Tying It All Together
One of the strongest security guarantees that comes from signing with Docker Content Trust is the ability to have multiple signers participate in the signing process for a container. To understand this, imagine a simple CI process that moves a container image through the following steps:

Automated CI
Docker Security Scanning
Promotion to Staging
Promotion to Production

This simple four-step process can add a signature after each stage has been completed and verify that every stage of the CI/CD process has been followed.

Image passes CI? Add a signature!
Docker Security Scanning says the image is free of vulnerabilities? Add a signature!
Build successfully works in staging? Add a signature!
Verify the image against all 3 signatures and deploy to production

Now before a build can be deployed to the production cluster, it can be cryptographically verified that each stage of the CI/CD process has signed off on an image.
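As a conceptual sketch of that production gate (not the exact mechanism described here), the deploy step could refuse any image whose trust data is missing a signature from one of the required stages. The sketch assumes a Docker CLI recent enough to provide `docker trust inspect` (the same signature data can also be read with the Notary client), and the delegation names are hypothetical:

```python
# Conceptual sketch of a deploy gate built on Docker Content Trust signatures.
# Assumes a Docker CLI that provides `docker trust inspect`; the signer
# (delegation) names and the image are hypothetical.
import json
import subprocess

REQUIRED_SIGNERS = {"ci", "security-scan", "staging"}   # hypothetical stage signers

def signers_for(image: str, tag: str) -> set:
    out = subprocess.run(
        ["docker", "trust", "inspect", f"{image}:{tag}"],
        capture_output=True, text=True, check=True,
    ).stdout
    signers = set()
    for repo in json.loads(out):
        for signed in repo.get("SignedTags", []):
            if signed.get("SignedTag") == tag:
                signers.update(signed.get("Signers", []))
    return signers

def gate(image: str, tag: str) -> bool:
    # Deploy only if every stage of the pipeline has added its signature.
    missing = REQUIRED_SIGNERS - signers_for(image, tag)
    if missing:
        print(f"Blocking deploy of {image}:{tag}; missing signatures: {sorted(missing)}")
        return False
    return True

if __name__ == "__main__":
    if gate("registry.example.com/myapp", "1.0"):       # placeholder image
        subprocess.run(["docker", "pull", "registry.example.com/myapp:1.0"], check=True)
```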
Conclusion
The Docker platform provides enterprises the ability to layer in security at each step of the software lifecycle. From establishing trust with their users, to the infrastructure and code, our model gives both freedom and control to the developer and IT teams. From building secure base images to scanning every image to signing every layer, each feature allows IT to layer a level of trust and guarantee into the application. As applications move through their lifecycle, their security profile is actively managed, updated, and gated before it is finally deployed.


More Resources:

Read the Container Isolation White Paper
ADP hardens enterprise containers with Docker Datacenter
Try Docker Datacenter free for 30 days
Watch this talk from DockerCon 2016

The post Securing the Enterprise Software Supply Chain Using Docker appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Dissent And Distrust In Tor Community Following Jacob Appelbaum's Ouster


When Shari Steele joined the Tor Project as executive director last December, she thought the job would resemble the start of her previous stint as the executive director of the digital civil liberties group Electronic Frontier Foundation: getting down to the hard work of turning a scrappy online privacy group with a reputation for chaos into a mature organization.

“I expected there were operational things I’d have to clean up,” Steele said. “Setting up bank accounts, and making sure contractors are treated properly, that kind of thing.”

Instead, the first eight months of Steele’s tenure have been defined by a scandal that has rocked the Tor Project, the nonprofit organization that administers and promotes Tor, the widely used and controversial software that conceals the online identity and location of its users. In June, anonymous accounts appeared online alleging that Jacob Appelbaum, a prominent developer and activist who was the Tor Project’s best-known member, had sexually assaulted several women.

In the two months after the allegations, Appelbaum resigned and the Tor Project replaced its entire board, following a BuzzFeed News report that it had known of allegations against Appelbaum for more than a year before they became public (and well before Steele came on). Last month, the group announced that a seven-week private investigation had confirmed the allegations. (Appelbaum, who lives in Berlin, has not been charged with any crime.)

Now, as Steele tries to pivot the organization past the scandal and toward the restructuring she was brought on to do, she faces the awkward task of handling a group of Tor community members and associates who are angry about the way she dealt with Appelbaum’s exit and its aftermath, some of whom are actively hostile toward the people she has empowered to make the organization more welcoming to women.

Last week, Marie Gutbub, a Tor Project core member and a former romantic partner of Appelbaum’s, announced she was quitting Tor in an email in which she accused Steele of “purging” those within the Tor community who signed an open letter in support of Appelbaum — and claimed to speak for many others.

“I know that there are others in the community who feel like something is not right about this,” she wrote. “And I assume that they only haven't spoken up because they are afraid.”

David W. Robinson, a Tor volunteer who gained fame in the community when police raided the Seattle home where he ran a Tor exit relay, published a letter last week calling the Appelbaum allegations “character assassination … with management’s collusion.” And over the weekend, an anonymous post appeared calling for a 24-hour boycott of Tor and demanding Steele’s resignation. (And add to all this a throng of anonymous Twitter commenters.)

The proposed boycott has been widely criticized, even by Robinson and Gutbub, as counterproductive and potentially harmful to people who rely on Tor to communicate safely. But as these loud and angry voices have made clear, the Appelbaum scandal has revealed fissures within the broader Tor community. That matters more than it might otherwise seem, because the Tor Project relies on its community for advocacy, code improvements to its software, and the donated bandwidth that facilitates the spread of its anonymous traffic. And it suggests that Steele’s job, for the time being at least, may be as much about managing changes to that community as it is expanding Tor’s user base, increasing its funding, and squashing its reputation as a high-tech cover for Dark Web criminals.

“It’s sad and unfortunate that we’re losing people like Marie and David,” Steele said. “We appreciate all that they’ve done. Hopefully more people are going to come based on the changes that we’re making.”

Among those changes: new anti-harassment, conflict of interest, complaint submission, and internal review policies, which Steele announced in a blog post last month. The new policies, Steele wrote, will be rolled out in time for Tor’s upcoming developer meeting in Seattle at the end of the month.

But that meeting, which Steele said will be the best-attended in the organization’s history, has itself become a flashpoint for controversy. In her open letter, Gutbub claimed that she and others had not been invited because of their public support for Appelbaum.

“There is this conspiracy that people were omitted because they were supporters of Jake,” Steele told BuzzFeed News. “Marie wasn’t invited because she hasn’t been working on Tor recently. She hasn’t been contributing.” Gutbub had been a Tor core member only since May of this year — a month before the allegations against Appelbaum came to light.

Still, an email sent by Steele to an internal Tor mailing list and obtained by BuzzFeed News supports Gutbub’s claim: “I initiated this meeting’s list a bit differently than we’d been doing it in at least the recent past, in that instead of simply reinviting people who had been invited in the past, I made an effort to build a list of people who were actively working with and for the Tor Project … Things worked differently with Marie. She was suggested on tor-internal, but then off-list I received a couple of people expressing discomfort with her attendance. I followed what I believed to be protocol and did not add her to the invitation list for this reason.”

Another contentious issue for Gutbub, Robinson, and the people behind the boycott call is Alison Macrina’s role. Macrina is a librarian and privacy advocate who heads Tor’s new Community Team. The Community Team has been charged with writing a set of membership guidelines, a code of conduct, and a social contract. Macrina is also one of the members of the Tor Community Council, a small body that is in charge of enforcing rules established by the Community Team and resolving disputes within the Tor community.

One of those disputes: how to integrate two unnamed Tor employees back into the unpaid Tor community after they were fired as a result of the Appelbaum investigation. In June, Macrina came forward as one of Appelbaum’s accusers — a fact that Gutbub argues makes her unfit to be part of the council making the decision.

Macrina told BuzzFeed News that she will recuse herself from the process — the council decides by consensus — after another core member told her he was concerned that she had a conflict of interest. She said that the Community Council does not have access to the results of the internal investigation, which pertains to Tor employees and not the unpaid community. And she added that frequent insinuations that she has grabbed power in the vacuum of the past several months are off base because she is a volunteer. “I’m the lead of the Community Team mostly because no one else wanted to do it,” she said. “There is a vocal minority of people who are very angry, and a lot of them have the wrong information.”

Just how serious the discord within the Tor community is may not be clear until the Seattle conference, held the last week of September, when its most influential members will meet in person for the first time since the Appelbaum allegations came to light. Macrina and Steele both said a significant amount of time will be set aside to clear the air. There will be anti-harassment training, according to Steele. And after that, Steele said, talk will turn to the nuts and bolts of improving the organization and the community supporting a piece of technology that may be the safest way to get online without being surveilled.

“I came in here with the sole purpose of trying to make Tor strong and healthy,” Steele said. “Purging is not one of the things I’m trying to accomplish.”

Quelle: BuzzFeed

gRPC: a true internet-scale RPC framework is now 1.0 and ready for production deployments

Posted by Varun Talwar, Product Manager

Building highly scalable, loosely coupled systems has always been tough. With the proliferation of mobile and IoT devices, burgeoning data volumes and increasing customer expectations, it’s critical to be able to develop and run systems efficiently and reliably at internet scale.

In these kinds of environments, developers often work with multiple languages, frameworks, and technologies, as well as multiple first- and third-party services. This makes it hard to define and enforce service contracts and to have consistency across cross-cutting features such as authentication and authorization, health checking, load balancing, logging, monitoring, and tracing, all the while maintaining efficiency of teams and underlying resources. It becomes especially challenging in today’s cloud-native world, where new services need to be added very quickly and the expectation from each service is to be agile, elastic, resilient, highly available, and composable.

For the past 15 years, Google has solved these problems internally with Stubby, an RPC framework that consists of a core RPC layer that can handle internet-scale of tens of billions of requests per second (yes, billions!). Now, this technology is available for anyone as part of the open-source project called gRPC. It’s intended to provide the same scalability, performance and functionality that we enjoy at Google to the community at large.

gRPC can help make connecting, operating and debugging distributed systems as easy as making local function calls; the framework handles all the complexities normally associated with enforcing strict service contracts, data serialization, efficient network communication, authentications and access control, distributed tracing and so on. gRPC along with protocol buffers enables loose coupling, engineering velocity, higher reliability and ease of operations. Also, gRPC allows developers to write service definitions in a language-agnostic spec and generate clients and servers in multiple languages. Generated code is idiomatic to languages and hence feels native to the language you work on.
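As a small illustration of that workflow (assuming a recent grpcio release and stubs generated from the canonical helloworld.proto example that ships with gRPC, rather than anything specific to this announcement), a Python service and client look like this:

```python
# Sketch using the stubs generated from gRPC's canonical helloworld.proto example,
# e.g. via: python -m grpc_tools.protoc -I. --python_out=. --grpc_python_out=. helloworld.proto
# The Greeter service, module names, and port come from that example, not this post.
from concurrent import futures
import grpc
import helloworld_pb2
import helloworld_pb2_grpc

class Greeter(helloworld_pb2_grpc.GreeterServicer):
    def SayHello(self, request, context):
        # The handler reads like a local function; gRPC handles serialization,
        # HTTP/2 transport, deadlines, and auth plumbing around it.
        return helloworld_pb2.HelloReply(message="Hello, %s!" % request.name)

def serve():
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
    helloworld_pb2_grpc.add_GreeterServicer_to_server(Greeter(), server)
    server.add_insecure_port("[::]:50051")
    server.start()
    server.wait_for_termination()

def call():
    # Client side: the generated stub makes the remote call look local too.
    with grpc.insecure_channel("localhost:50051") as channel:
        stub = helloworld_pb2_grpc.GreeterStub(channel)
        reply = stub.SayHello(helloworld_pb2.HelloRequest(name="world"))
        print(reply.message)
```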

Today, the gRPC project has reached a significant milestone with its 1.0 release and is now ready for production deployments. As a high performance, open-source RPC framework, gRPC features multiple language bindings (C++, Java, Go, Node, Ruby, Python and C# across Linux, Windows and Mac). It supports iOS and Android via Objective-C and Android Java libraries, enabling mobile apps to connect to backend services more efficiently. Today’s release offers ease-of-use with single-line installation in most languages, API stability, improved and transparent performance with open dashboard, backwards compatibility and production readiness. More details on gRPC 1.0 release are available here.

Community interest in gRPC has seen tremendous pick-up from beta to 1.0, and it’s been adopted enthusiastically by companies like Netflix to connect microservices at scale.

With our initial use of gRPC, we’ve been able to extend it easily to live within our opinionated ecosystem. Further, we’ve had great success making improvements directly to gRPC through pull requests and interactions with the Google team that manages the project. We expect to see many improvements to developer productivity, and the ability to allow development in non-JVM languages as a result of adopting gRPC.
– Timothy Bozarth, engineering manager at Netflix

CoreOS, Vendasta and CockroachDB use gRPC to connect internal services and APIs. Cisco, Juniper, Arista and Ciena rely on gRPC to get streaming telemetry from network devices.

At CoreOS, we’re excited by the gRPC v1.0 release and the opportunities it opens up for people consuming and building what we like to call GIFEE — Google’s Infrastructure for Everyone Else. Today, gRPC is in use in a number of our critical open-source projects such as the etcd consensus database and the rkt container engine.
– Brandon Philips, CTO of CoreOS

And Square, which has been working with Google on gRPC since the very early days, is connecting polyglot microservices within its infrastructure.

As a financial service company, Square requires a robust, high-performance RPC framework with end-to-end encryption. It chose gRPC for its open support of multiple platforms, demonstrated performance, the ability to customize and adapt it to its codebase, and most of all, to collaborate with a wider community of engineers working on a generic RPC framework.

You can see more details of the implementation on Square’s blog. You can also watch this video about gRPC at Square, or read more customer testimonials.

With gRPC 1.0, the next generation of Stubby is now available in the open for everyone and ready for production deployments. Get started with gRPC at grpc.io and provide feedback on the gRPC mailing list.
Quelle: Google Cloud Platform