New GCP region in Zurich: expanding our support for Swiss and European businesses

Our Google Cloud Platform region in Zurich is now live. Our sixth region in Europe, and nineteenth worldwide, gives businesses in Switzerland more choice in how they access their data and workloads, with even lower latency.

A cloud for Switzerland

The Zurich GCP region (europe-west6) is ideally placed to support businesses in Switzerland and across Europe. With three availability zones, it enables highly available workloads. Hybrid cloud customers can integrate new and existing deployments seamlessly through our partner ecosystem and two dedicated Interconnect points of presence.

The new Zurich region gives Swiss businesses even faster access to GCP products and services. Hosting applications in the new region can improve latency for end users in Switzerland by up to 10 ms. At gcping.com you can check the latency from your own location to the Zurich region. The Zurich region launches with our comprehensive standard portfolio, including products such as Compute Engine, Google Kubernetes Engine, Cloud Bigtable, Cloud Spanner, and BigQuery.

To take advantage of many GCP services, you can move your data into the cloud with Transfer Appliance, a high-capacity server for transferring large volumes of data quickly and securely, now also available in the Swiss market. We recommend Transfer Appliance for moving large datasets whose upload would otherwise take more than a week. You can request Transfer Appliance here.

The new region for Switzerland also offers Cloud Interconnect, our private, software-defined network that provides fast, reliable connectivity between regions around the world. Over the Google network, you can use services that are not yet available in the Zurich region and combine them with other GCP services deployed worldwide. This lets you quickly deploy and scale products built for businesses with a global presence across multiple regions.

Swiss customers say “Grüezi” to Google Cloud

We launched the new region with a special event in Zurich attended by more than 800 business decision-makers and developers. Urs Hölzle, Senior Vice President, Technical Infrastructure, officially opened the region. Representatives of pharmaceutical, manufacturing, and financial companies from Switzerland and across Europe learned about Google Cloud and what the local region means for their cloud operations.

What our customers say about the new region

“Swiss-AS focuses its business exclusively on supporting AMOS, the leading maintenance software for the aviation industry. Today, we use Google Cloud Platform to deliver our AMOS cloud service in dedicated cloud environments worldwide. With GCP’s local presence in Zurich, our services move even closer to our AMOS customers in the German-speaking region.” – Alexis Rapior, Hosting Team, Swiss AviationSoftware Ltd.

“The new Swiss cloud region opens up exciting possibilities for the healthcare sector: Universitätsklinik Balgrist can now adopt new technologies for real-time processing. Collaboration in medical research and development will also become simpler and more effective.” – Thomas Huggler, Managing Director, Universitätsklinik Balgrist

“We are delighted about the launch of Google Cloud Platform in Switzerland. With Google Cloud, we can focus on developing innovative software features for our customers. It also gives us the ability to set up new environments within seconds.” – Marc Loosli, Head of Innovation LAB & Co-Founder, NeXora AG (part of the Quickline Group)

“Belimo is the world’s leading manufacturer of actuators, valves, and sensors for heating, ventilation, and air-conditioning (HVAC) systems. Recently, IoT technologies have allowed us to offer HVAC systems controlled through cloud-connected devices, providing added comfort, energy efficiency, and security as well as simple installation and maintenance. Belimo chose GCP because our global cloud services depend on high availability, reliable performance, and scalability. Google Cloud’s cutting-edge technology and tools help our teams concentrate on what matters.” – Peter Schmidlin, Chief Innovation Officer, Belimo Automation AG

What our partners say about the new region

“Wabion is more than just excited that Google Cloud is coming to Switzerland. Frankly, this is the best thing that could happen to the Swiss cloud market. We have customers who are very interested in Google’s innovations and had not migrated because, until now, there was no Swiss region. The new Zurich region closes that gap and opens up great opportunities for Wabion to support customers on their journey to Google Cloud.” – Michael Gomez, Managing Director, Wabion Schweiz

What’s next?

You can find more details about the new region here, along with free informational material, whitepapers, the on-demand video series “Cloud On-Air”, and much more. If you are new to GCP, take a look at the regional best practices for Compute Engine and get in touch with us to get started with Google Cloud today. Later this year we will open more GCP regions, starting with Osaka, Japan. Visit our locations page for up-to-date information on the availability of additional services and regions.
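Curious about the up-to-10 ms claim? Besides gcping.com, a rough client-side probe is easy to write yourself. The sketch below is a minimal example, assuming you have deployed some small HTTP endpoint in europe-west6; the URL is a placeholder, and because each request pays DNS and TLS setup costs, this measures full request latency rather than raw network round-trip time.

```python
# Rough client-side latency probe, in the spirit of gcping.com.
# REGION_URL is a placeholder: point it at any small HTTP endpoint
# you have deployed in the europe-west6 region.
import time
import urllib.request

REGION_URL = "https://my-service-in-europe-west6.example.com/"  # placeholder

samples = []
for _ in range(5):
    start = time.perf_counter()
    urllib.request.urlopen(REGION_URL, timeout=10).read()
    samples.append((time.perf_counter() - start) * 1000)

print(f"best: {min(samples):.1f} ms, mean: {sum(samples) / len(samples):.1f} ms")
```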
Source: Google Cloud Platform

Exploring container security: four takeaways from Container Security Summit 2019

Editor’s note: On February 20, we hosted the fourth annual Container Security Summit at Google’s campus in Seattle. This event aims to help security professionals increase the security of their container deployments and apply the latest in container security research. Here’s what we learned.

Container security is a hot topic, but it can be intimidating. Container developers and operators don’t usually spend their days studying security exploits and threat analysis; likewise, container architectures and components can feel foreign to the security team. Dev, ops, and security teams all want their workloads to be more secure (and make those pesky containers actually “contain”!); the challenge is making those teams more connected to bring container security to everyone. The theme of the 2019 Container Security Summit was just that: “More contained. More secure. More connected.” Here are four topics that led the day at the summit.

Rootless builds are here. Why aren’t you using one?

To improve the security of build processes and the isolation of running workloads, container builds should be hermetic and reproducible. A build is hermetic if no data leaks between individual builds (i.e., one build does not impact other builds), and reproducible if it’s repeatable from source to binary (i.e., you get the same output every time).

But even if your builds are hermetic and reproducible, unnecessarily running processes as root remains a potential security risk. In fact, that’s what attackers look for—how to gain privileged access to your infrastructure. “The root of all evil is unnecessarily running processes or containers as root,” said Andrew Martin, co-founder of ControlPlane, during his talk. That includes container runtimes and build tools running as root.

A rootless container build doesn’t require a daemon running on the host—or ideally any root privileges at all—to build the container image. This is particularly useful when building images for untrusted workloads, such as those that come from a third-party or open-source repository that can’t be independently verified.

So, where can you get this magic? Fortunately, there are many options, including img, buildah, umoci, Kaniko, and many more! Note that with some of these, rootless container builds are optional, and some still require a daemon to be run as root, or use root inside the container. It’s still hard to get a completely rootless, unprivileged build today. Kaniko, for example, determines the minimum permissions by what’s needed to unpack your base image and execute the RUN commands. (If you’d rather not build the image yourself and trust Google’s security model, Cloud Build is a simple answer.)

Thanks to the runc work that’s been ported upstream, “everybody can achieve rootlessness today,” Martin added. “No project has realised the fully-untrusted dream yet, but I expect us to reach utopia in 2019.”
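To make this concrete, here is a minimal sketch of launching a Kaniko build as an ordinary Kubernetes pod using the official Python client. It is an illustration, not a recommendation of any one tool: the bucket, project, and image names are placeholders, a real setup also needs registry and storage credentials mounted into the pod, and, as noted above, Kaniko avoids a host daemon but fully unprivileged builds remain hard.

```python
# Sketch: run a Kaniko build as a Kubernetes pod via the official Python
# client. Assumes a reachable cluster (kubeconfig) and a build context
# already staged in a GCS bucket; all names below are placeholders.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() in-cluster

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="kaniko-build"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="kaniko",
                image="gcr.io/kaniko-project/executor:latest",
                args=[
                    "--dockerfile=Dockerfile",
                    "--context=gs://my-build-bucket/context.tar.gz",  # placeholder
                    "--destination=gcr.io/my-project/my-app:v1",      # placeholder
                ],
                # Kaniko executes the build in userspace inside this
                # container: no Docker daemon on the host and no
                # privileged securityContext required.
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
print("Build pod created; follow it with: kubectl logs -f kaniko-build")
```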
The Kubernetes community showed it’s equipped to deal with vulnerabilities over the past year

Just like any other software, Kubernetes isn’t impervious to attacks. In the past year, a handful of severe vulnerabilities surfaced, including CVE-2017-1002101, which allowed containers with subpath volume mounts to access files outside the volume; and CVE-2018-1002105, which allowed a user with relatively low permissions to escalate their privileges.

Luckily, Kubernetes’ Product Security Team deftly addressed these vulnerabilities (and others) and handled the rollout of the patches. “The code is only part of the fix. When we’re talking about incident response, it’s not only the code, it’s the process,” said CJ Cullen, software engineer on the Google Kubernetes Engine (GKE) security team.

If you’re running Kubernetes yourself, join kubernetes-announce to get the latest on releases, including vulnerability patches. If you’re running on GKE, the security bulletins will give you the latest, and let you know if there’s anything you need to do to stay safe. (Pro tip from top users: post the RSS feed in your security team’s Slack channel!)

CIS benchmarks are still the gold standard for locking down your Kubernetes configurations—but only apply what makes sense to you

The Center for Internet Security (CIS) publishes several security guidelines, including guidelines for Kubernetes. Many users refer to these guidelines to show colleagues, regulators, and customers that they’re following Kubernetes security best practices. The CIS recently updated these benchmarks for Kubernetes 1.13 (so they’re current!), and they cover a wide range of recommended configurations for both the control plane and the worker nodes in your cluster.

Still, you should really think about the CIS benchmarks before you apply them. Rory McCune, Principal Consultant at NCC Group, was one of the key contributors to the Kubernetes CIS benchmarks and presented them at the conference. “People think you should go to a benchmark and apply everything in there—but that’s the wrong approach,” he said. It’s important, when applying any standard, to consider the environment it’s being used in and choose which controls apply to your organization’s systems.

He also explained that the CIS benchmarks are more difficult to apply to hosted solutions like GKE, “because there are many things you can’t test directly.” This creates an added step where distro users have to figure out which benchmarks apply to them, and demonstrate that to an auditor. Looking ahead, he hopes that the community will develop benchmarks for specific distributions to ease the burden on the user.

To test your current Kubernetes config against the CIS benchmarks, you can use kube-bench. In GKE, where you can’t access the control plane to test configurations, we’ve documented how we do this on your behalf in our “Control plane security” document. Best practices for GKE are laid out in the GKE hardening guide. Even with these extra steps, however, hosted solutions still offer much simpler security management than running them yourself. As Dino Dai Zovi, Staff Security Engineer at Square, said, “If you want to run your own, you’re playing life on hard mode.”
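If you want to fold those checks into automation, here is a small, hypothetical wrapper around kube-bench’s JSON output. It assumes the kube-bench binary is installed on the node being audited, and the JSON shape has changed across kube-bench releases, so treat the field names as a sketch rather than a stable contract.

```python
# Hypothetical helper: run kube-bench on a node and summarize the results.
# Assumes kube-bench is on PATH; verify the JSON field names against the
# version you run, since the output format varies by release.
import json
import subprocess

raw = subprocess.run(
    ["kube-bench", "--json"], capture_output=True, text=True, check=True
).stdout

report = json.loads(raw)
# Recent releases group results as {"Controls": [...], "Totals": {...}}.
for control in report.get("Controls", []):
    print(
        f'{control.get("id")}: {control.get("text")} '
        f'(fail={control.get("total_fail")}, warn={control.get("total_warn")})'
    )
```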
We need to talk: the best way to improve container security

Container security is an evolving field; users are still finding out what works for their workloads and their priorities, but attackers wait for no one. In the unconference sessions, attendees were eager to discuss some of the issues they’ve hit with running containers securely in production, including image scanning, container isolation tools, and segmentation best practices.

The current container security landscape is still maturing, and two seemingly similar organizations might take very different approaches. The real risk, therefore, is failing to communicate across teams, said DevSecOps expert Ian Coldwater, in the closing keynote.

“Container folks, to security people, can sometimes seem like they’re speaking a different language,” said Coldwater. But while container and security teams historically have failed to communicate, “every one of us has something to teach, and something to learn.” If you’re a developer running containers, be sure to keep the lines of communication to the security team wide open.

Didn’t make it to the Container Security Summit? Check out the speaker slides. You can also learn more about container security in the Exploring Container Security blog series.
Source: Google Cloud Platform

Economist study: OEMs create new revenue streams with next-gen supply chains

Original equipment manufacturers (OEMs) make the wheels go round for the business world. But demand for faster, cheaper, and smarter products and components puts major downward pressure on profit margins. Successful OEMs are always on the lookout for opportunities to drive down costs and differentiate their brands, and the rise of the Internet of Things (IoT) offers a golden opportunity to do both by embracing fundamental supply chain transformation.

To get a better understanding of the benefits, best practices, and current state of play in supply chain transformation, we enlisted The Economist Intelligence Unit to survey 250 senior executives at OEMs in North America, Europe, and Asia-Pacific. The insights from those conversations form the basis of the new study, Putting customers at the center of the supply chain. Here are some of the intriguing highlights.

Creating the intelligent supply chain

According to the study, 99 percent of OEMs believe supply chain transformation is important to meet their organizations’ strategic objectives. The vast majority, 97 percent, consider cloud technology to be an essential component of that transformation, which makes sense given that cloud offers the unprecedented ability to collect and analyze data at scale. To date, just 61 percent have embraced cloud across their organization—meaning that for many, cloud remains an obvious and notable opportunity.

Beyond cloud, IoT presents a significant opportunity for OEMs. IoT is the fundamental technology underpinning smart products and components, like embedded sensors that monitor performance, or telemetry systems on connected vehicles.

IoT-enabled products and components can effectively extend the supply chain to include the customer, enabling the delivery of software updates directly, while providing ongoing access to data about how offerings are being used. This adds supply-chain complexity but also delivers significant new business opportunities.

This extension of the supply chain gives OEMs the ability to get a far deeper understanding of customer behaviors and needs and to better serve customers via add-on services based on that deeper understanding. To optimize the value of the customer data they collect, some are even embracing entirely new business models.

Armed with real, data-based insights into exactly how and when their products are being used, OEMs can become service providers, and shift from selling products to customers to charging them subscription or per-use fees. Rolls-Royce, for example, charges customers of its jet engines a monthly fee based on flying hours. Industrial machinery makers like Sandvik Coromant are also now charging customers based on use.

Other emerging technologies that OEMs are turning to in transforming their supply chains include robotics, which generate valuable data while performing tasks like product assembly and order picking faster and more accurately than humans; artificial intelligence (AI), which is used in smart products for things like predictive maintenance; and blockchain, which enables supply-chain stakeholders to share an immutably accurate record of deliveries. These technologies can supercharge the collection, management, analysis, and security of supply-chain data. And like IoT, they can drive the creation of brand-new ways of doing business.

Best practices in supply-chain transformation

In a world where a growing number of things around us collect data about us, forward-thinking OEMs are increasingly embracing fundamental changes in their supply chains. With the goal of achieving operational excellence informed by a closed feedback loop with the customer, OEMs can deliver better service and products by better understanding and anticipating exactly what customers want and need.

To achieve this vision, they’re turning to technologies like cloud, IoT, AI, robotics, and blockchain. Learn more about the specific steps and approaches being taken in the full Economist report.
Source: Azure

AzCopy support in Azure Storage Explorer now available in public preview

We are excited to share the public preview of AzCopy in Azure Storage Explorer. AzCopy is a popular command-line utility that provides performant data transfer into and out of a storage account. The new version of AzCopy further improves performance and reliability through a scalable design that raises concurrency in line with the number of the machine's logical cores. The tool's resiliency is also improved by automatically retrying failed operations.
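For readers who script their transfers rather than use the UI, the same engine is available from the command line. Here is a minimal sketch of driving it from Python; it assumes the azcopy binary is on your PATH, and the local path, storage account, container, and SAS token are all placeholders.

```python
# Minimal sketch: invoke the AzCopy v10 CLI from Python. Assumes azcopy
# is installed and on PATH; every name below is a placeholder.
import subprocess

source = "./data"  # placeholder: local directory to upload
destination = (
    "https://myaccount.blob.core.windows.net/mycontainer?<SAS-token>"  # placeholder
)

# "azcopy copy <src> <dst> --recursive" uploads a directory tree; AzCopy
# chooses its concurrency from the machine's logical core count and
# retries failed chunks on its own.
subprocess.run(["azcopy", "copy", source, destination, "--recursive"], check=True)
```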

Azure Storage Explorer provides a graphical interface for various storage tasks, and it now supports using AzCopy as a transfer engine to provide the highest throughput for transferring files to and from Azure Storage. This capability is available today as a preview in Azure Storage Explorer.

Enable AzCopy for blob upload and download

We have heard from many of you that the performance of your data transfer matters. Let’s be honest, we all have better things to do than wait around for files to be transferred to Azure. Now with AzCopy in Azure Storage Explorer, we give you all that time back!

With the AzCopy preview enabled, blob operations are faster than before. To enable this option, go to the Preview menu and select Use AzCopy for improved blob Upload and Download.

We are working on support for Azure Files and batch blob deletes. Feel free to let us know what you would like to see supported through our GitHub repository.

Figure 1: Enable AzCopy in Azure Storage Explorer

How fast is it?

In a quick test in our environment, we saw great improvements when uploading files with AzCopy in Azure Storage Explorer. Note that times will vary from machine to machine.

 
Scenario         | Storage Explorer     | Storage Explorer w/ AzCopy v10 | Improvement
10K 100KB files  | 1 hour 36 minutes    | 59 seconds                     | 98.9 percent
100 100MB files  | 5 minutes 12 seconds | 1 minute 35 seconds            | 69.5 percent
1 10GB file      | 3 minutes 41 seconds | 1 minute 40 seconds            | 54.7 percent

Figure 2: Performance improvement from using AzCopy as transfer engine for blob upload and download

Figure 3: AzCopy uploads/downloads blobs efficiently (1 x 10GB file)

Figure 4: AzCopy uploads/downloads blobs efficiently (10,000 x 100KB files)
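The improvement column is simply the relative reduction in elapsed time. A quick check in Python reproduces the reported figures from the table above:

```python
# Reproduce the "Improvement" column from the table above:
# improvement = (before - after) / before, with times in seconds.
runs = {
    "10K 100KB files": (1 * 3600 + 36 * 60, 59),
    "100 100MB files": (5 * 60 + 12, 1 * 60 + 35),
    "1 10GB file": (3 * 60 + 41, 1 * 60 + 40),
}
for name, (before, after) in runs.items():
    print(f"{name}: {100 * (before - after) / before:.2f} percent faster")
# Prints 98.98, 69.55, and 54.75 percent; the table reports these
# truncated to one decimal place.
```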

Next steps

We invite you to try out the AzCopy preview feature in Azure Storage Explorer today, and we look forward to hearing your feedback. If you identify any problems or want to make a feature suggestion, please make sure to report your issue on our GitHub repository.
Source: Azure