Wonder Woman 1984: Superheroine film also coming as a stream at Christmas
Unlike other Hollywood studios, Warner Bros. is not postponing its blockbuster to next year. (Streaming, digital cinema)
Source: Golem
Kubernetes helps developers build modern software that scales, but to do so securely, they also need a software supply chain with strong governance. From managed secure base images and Container Registry vulnerability scanning to Binary Authorization, Google Cloud helps secure that pipeline, giving you the support and flexibility you need to build great software without being locked into a particular provider.

Today, we are excited to announce a great open-source addition to the secure software supply chain toolbox: Voucher. Developed by the Software Supply Chain Security team at Shopify to work with Google Cloud tools, Voucher evaluates container images created by CI/CD pipelines and signs those images if they meet certain predefined security criteria. Binary Authorization then validates these signatures at deploy time, ensuring that only explicitly authorized code that meets your organizational policy and compliance requirements can be deployed to production.

Voucher is open source from the get-go, following the Grafeas specification. The signatures it generates, or ‘attestations,’ can be enforced by either Binary Authorization or the open-source Kritis admission controller. Out of the box, Voucher lets infrastructure engineers use Binary Authorization policies to enforce security requirements, such as provenance (e.g., a signature that is only added when images are built from a secure source branch), and to block vulnerable images (e.g., require a signature that is only applied to images that don’t have any known vulnerabilities above the ‘medium’ level). And because it’s open source, you can also easily extend Voucher to support additional security and compliance checks or integrate it with your CI/CD tool of choice.

“At Shopify, we ship more than 8,000 builds a day and maintain a registry with over 330,000 container images. We designed Voucher in collaboration with the Google Cloud team to give us a comprehensive way to validate the containers we ship to production,” said Cat Jones, Senior Infrastructure Security Engineer at Shopify. “Voucher, along with the vulnerability scanning functionality from Google’s Container Registry and Binary Authorization, provides us a way to secure our production systems using layered security policies, with a minimum impact to our unprecedented development velocity. We are donating Voucher to the Grafeas open-source project so more organizations can better protect their software supply chains. Together, Voucher, Grafeas and Kritis help infrastructure teams achieve better security while letting developers focus on their code.”

How Voucher simplifies a secure supply chain setup

In the past, if you wanted to gate deployments based on build or vulnerability findings, you needed to write, host, and run your own evaluation logic (steps 2a and 3a), as shown in the following process and sketched in code below:

1. Code is pushed to a repository.
2. A continuous integration (CI) pipeline tool, such as Cloud Build, builds and tests the container.
2a. Write custom code to sign images based on their build provenance (e.g., only sign images built from the production branch).
3. The newly built container image is checked into Google Container Registry and undergoes vulnerability scanning.
3a. Write custom code to sign images based on vulnerability findings.
4. BinAuthz verifies the image signatures as part of being deployed to GKE.
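To make steps 2a and 3a concrete, here is a minimal, self-contained sketch of the kind of custom signing logic that Voucher replaces. Every helper below is a stub invented for illustration; none of this is Voucher's or Google Cloud's API.

```python
# Sketch of custom signing logic (steps 2a/3a). All helpers are stubs for
# illustration only -- in practice they would call Container Analysis,
# build-provenance metadata, and a signing key, respectively.
SEVERITY_ORDER = ["MINIMAL", "LOW", "MEDIUM", "HIGH", "CRITICAL"]
MAX_ALLOWED = "MEDIUM"

def fetch_vulnerabilities(image: str) -> list[str]:
    """Stub: in practice, query the registry's vulnerability scan results."""
    return ["LOW", "MEDIUM"]  # example findings

def built_from(image: str) -> str:
    """Stub: in practice, read the image's build provenance metadata."""
    return "refs/heads/production"

def sign(image: str, check_name: str) -> None:
    """Stub: in practice, create a signed attestation with a private key."""
    print(f"attested {image}: {check_name}")

def maybe_sign(image: str) -> None:
    # Step 2a: provenance check -- only sign images from the production branch.
    if built_from(image) == "refs/heads/production":
        sign(image, "provenance")
    # Step 3a: vulnerability check -- only sign if nothing exceeds MEDIUM.
    worst = max((SEVERITY_ORDER.index(s) for s in fetch_vulnerabilities(image)),
                default=0)
    if worst <= SEVERITY_ORDER.index(MAX_ALLOWED):
        sign(image, "vulnerability")

maybe_sign("gcr.io/example/app:1.2.3")
```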
To avoid privilege escalation, the signing steps should be hosted outside of the CI/CD pipeline, so that developers who can execute arbitrary code in a build step cannot gain access to the signing key or alter the signing logic. This puts a significant burden on DevOps teams to create and set up these kinds of signing tools. Voucher, however, automates a large portion of this setup: it comes with a pre-supplied set of security checks, and all you have to do is specify your signing policies in Binary Authorization. Once started, it automates the attestation generation.

Try it out!

We’re honored that Shopify used Google Cloud tools to power Voucher, and we’re excited that they’ve decided to share it with developers at large. If you want to try Voucher, you can find it on GitHub, or a click-to-deploy version on Google Cloud Marketplace. We’ve also created a step-by-step tutorial to help you launch Voucher on Google Cloud with Binary Authorization.
Source: Google Cloud Platform
Businesses of every size and shape need to better understand their customers, their systems, and the impact of external factors on their business. How rapidly businesses mitigate risks and capitalize on opportunities can set successful businesses apart from those that can’t keep up. Anomaly detection, or in broader terms outlier detection, allows businesses to identify and act on changing user needs, detect and mitigate malicious actors and behaviors, and take preventive actions to reduce costly repairs.

The speed at which businesses identify anomalies can have a big impact on response times and, in turn, associated costs. For example, detecting a fraudulent financial transaction hours or days after it happens often results in writing off the financial loss. The ability to find the anomalous transaction in seconds allows for invalidating the transaction and taking corrective actions to prevent future fraud. Similarly, by detecting anomalies in industrial equipment, manufacturers can predict and prevent catastrophic failures that could cause capital and human loss by initiating proactive equipment shutdowns and preventive maintenance. Likewise, detecting anomalous user behavior (for example, signing in to multiple accounts from the same location or device) can prevent malicious abuse, data breaches, and intellectual property theft.

In essence, anomalous events have an immediate value. If you don’t seize that value, it vanishes into irrelevance until there is a large enough collection of events to perform retrospective analysis. To avoid falling off this “value cliff,” many organizations are looking to stream analytics for a real-time anomaly detection advantage.

At Google Cloud, our customer success teams have been working with an increasing number of customers to help them implement streaming anomaly detection. In working with such organizations to help them build anomaly detection systems, we realized that providing reference patterns can significantly reduce the time to solution for those and future customers.

Reference patterns for streaming anomaly detection

Reference patterns are technical reference guides that offer step-by-step implementation and deployment instructions along with sample code. Reference patterns mean you don’t have to reinvent the wheel to create an efficient architecture. While some of the specifics (e.g., what constitutes an anomaly, the desired sensitivity level, whether to alert a human or display in a dashboard) depend on the use case, most anomaly detection systems are architecturally similar and leverage a number of common building blocks. Based on that learning, we have now released a set of repeatable reference patterns for streaming anomaly detection to the reference patterns catalog (see the anomaly detection section).

These patterns implement the following step-by-step process:

1. Stream events in real time.
2. Process the events, extract useful data points, and train the detection algorithm of choice.
3. Apply the detection algorithm in near-real time to the events to detect anomalies.
4. Update dashboards and/or send alerts.

Here’s an overview of the key patterns that let you implement this broader anomaly detection architecture.

Detecting network intrusion using K-means clustering

We recently worked with a telecommunications customer to implement streaming anomaly detection for NetFlow logs. In the past, we’ve seen that customers have typically implemented signature-based intrusion detection systems. Although this technique works well for known threats, it is difficult to detect new attacks because no pattern or signature is available. This is a significant limitation in times like now, when security threats are ever-present and the cost of a security breach is significant. To address that limitation, we built an unsupervised-learning-based anomaly detection system, and we published a detailed writeup: Anomaly detection using streaming analytics and AI. The accompanying video gives a step-by-step overview of implementing the anomaly detection system. Keep in mind that the architecture and steps can be applied to other problem domains as well, not just network logs. (A minimal BigQuery ML sketch of this clustering approach appears at the end of this piece.)

Detecting fraudulent financial transactions using Boosted Trees

While the previous pattern used a clustering algorithm (trained in BigQuery ML), the finding anomalies in financial transactions in real time using Boosted Trees pattern uses a different ML technique called BoostedTrees. BoostedTrees is an ensemble technique that makes predictions by combining output from a series of base models. This pattern follows the same high-level architecture and uses Google Cloud AI Platform to perform predictions. One of the neat things in the reference pattern is the use of micro-batching to group together the API calls to the CAIP Prediction API. This ensures that a high volume of streaming data does not result in API quota issues.

Time series outlier detection using LSTM autoencoder

Many anomaly detection scenarios involve time series data (a series of data points ordered by time, typically evenly spaced in the time domain). One of the key challenges with time series data is that it needs to be preprocessed to fill any gaps (whether due to source or transmission problems). Another common requirement is the need to aggregate metrics (e.g., Last, First, Min, Max, Count values) from the previous processing window when applying transforms to the current time window. We created a GitHub library that solves these problems for streaming data and jump-starts your implementation for working with time series data.

These patterns are driven by needs we’ve seen in partnering with customers to solve problems. The challenge of finding the important insight or deviation in a sea of data is not unique to any one business or industry; it applies to all. Regardless of where you are starting, we look forward to helping you on the journey to streaming anomaly detection. To get started, head to the anomaly detection section in our catalog of reference patterns. If you have implemented a smart analytics reference pattern, we want to hear from you; complete this short survey to let us know about your experience.
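As promised above, here is a hedged BigQuery ML sketch of the K-means pattern: train a clustering model and treat the distance to the nearest centroid as an anomaly score. The dataset, table, feature names, and threshold are placeholders, not values from the reference pattern.

```python
# Hedged sketch: train a BigQuery ML K-means model on placeholder NetFlow
# features and flag events that are far from every centroid.
from google.cloud import bigquery

client = bigquery.Client()

# Train the clustering model (all names are illustrative placeholders).
client.query("""
    CREATE OR REPLACE MODEL demo.netflow_kmeans
    OPTIONS (model_type = 'kmeans', num_clusters = 8,
             standardize_features = true)
    AS SELECT duration, tx_bytes, rx_bytes FROM demo.netflow_features
""").result()

# Score recent events: the distance to the nearest centroid serves as the
# anomaly score; the threshold would be chosen offline from historical data.
rows = client.query("""
    SELECT *,
           (SELECT MIN(distance)
            FROM UNNEST(nearest_centroids_distance)) AS score
    FROM ML.PREDICT(MODEL demo.netflow_kmeans,
                    (SELECT duration, tx_bytes, rx_bytes
                     FROM demo.netflow_recent))
""").result()
for row in rows:
    if row.score > 3.0:  # placeholder threshold
        print("possible anomaly:", row.centroid_id, row.score)
```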
Source: Google Cloud Platform
We’re always looking to make advanced security easier for enterprises so they can stay focused on their core business. Already this year, we’ve worked to strengthen DDoS protection, talked about some of the largest attacks we have stopped, and made firewall defenses more effective. We continue to push our pace of security innovation, and today we’re announcing enhancements to existing protections, as well as new capabilities to help customers protect their users, data, and applications in the cloud.

1. Using machine learning to detect and block DDoS attacks with Adaptive Protection

We recently talked about how our infrastructure absorbed a 2.54 Tbps DDoS attack, the culmination of a six-month campaign that utilized multiple methods of attack. Despite simultaneously targeting thousands of our IPs, presumably in hopes of slipping past automated defenses, the attack had no impact.

We recognize that the scale of potential DDoS attacks can be daunting. By deploying Google Cloud Armor integrated into our Cloud Load Balancing service, which can scale to absorb massive DDoS attacks, you can protect services deployed in Google Cloud, other clouds, or on-premises from attacks. Cloud Armor, our DDoS and WAF-as-a-service, is built using the same technology and infrastructure that powers Google services.

Today, we are excited to announce Cloud Armor Adaptive Protection, a unique technology that leverages years of experience using machine learning to solve security challenges, plus deep experience protecting our own user properties against Layer 7 DDoS attacks. We use multiple machine learning models within Adaptive Protection to analyze security signals for each web service and detect potential attacks against web apps and services. This system can detect high-volume application-layer DDoS attacks against your web apps and services and dramatically accelerate time to mitigation. For example, attackers frequently target dynamic pages like search results or reports with a high volume of requests in order to exhaust the server resources needed to generate the page.

When enabled, Adaptive Protection learns from a large number of factors and attributes about the traffic arriving at your services, so we know what “normal” looks like. We’ll generate an alert if we believe there is a potential attack, taking into account all of the relevant context for your workload. In other words, where traditional threshold-based detection mechanisms could generate a great deal of lower-confidence alerts requiring investigation and triage, and only once an attack has accelerated past the detection threshold, Adaptive Protection produces high-confidence signals about a potential attack much earlier, while the attack is still ramping up.

Adaptive Protection won’t just surface the attack; it will provide context on why the system deemed it malicious, and then provide a rule to mitigate the attack as well. This protection is woven into our cloud fabric and only alerts the operator for more serious issues, with context, an attack signature, and a Cloud Armor rule that they can then deploy in preview or blocking mode. Rather than spending hours analyzing traffic logs to triage an ongoing attack, application owners and incident responders will have all of the context they need to decide whether and how to stop the potentially malicious traffic. Cloud Armor Adaptive Protection is going to simplify protection in a big way, and will be rolling out in public preview soon.

(Image: Adaptive Protection suggested rule)
2. Better firewall rule management with Firewall Insights

We have been making a number of investments in our network firewall to provide insights and simplify control, allowing easier management of more complex environments. Firewall Insights helps you optimize your firewall configurations with a number of detection capabilities, including shadowed-rule detection, which identifies firewall rules that have been accidentally shadowed by conflicting rules with higher priorities. In other words, you can automatically detect rules that can’t be reached during firewall rule evaluation because of overlapping rules with higher priorities. This helps detect redundant firewall rules, open ports, and IP ranges, and helps operators tighten the security boundary. It will also surface to admins sudden increases in hits on firewall rules and let them drill down to the source of the traffic to catch an emerging attack.

Within Firewall Insights you’ll also see metrics reports showing how often your firewall rules are active, including the last time they were hit. This allows security admins to verify that firewall rules are being used in the intended way, ensuring that firewall rules allow or block their intended connections. These insights operate at massive volume and help remove human errors in firewall rule configuration, or simply highlight rules that are no longer needed as an environment changes over time. Firewall Insights will be generally available soon.

(Image: Firewall Insights)
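To make shadowed-rule detection concrete, here is a simplified, standalone model (our own illustration for this digest, not the actual Firewall Insights logic): a rule is flagged when a single higher-priority rule already matches everything it would match, so it can never be reached. Real rules match on more dimensions (IP ranges, protocols, targets), and shadowing can also arise from combinations of rules.

```python
# Simplified illustration of shadowed-rule detection; NOT the Firewall
# Insights implementation. Rules here match only a TCP port range, and a
# rule counts as shadowed when one higher-priority rule covers it fully.
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    priority: int  # lower value = evaluated first, as in VPC firewall rules
    ports: range   # TCP destination ports the rule matches
    action: str    # "allow" or "deny"

def find_shadowed(rules: list[Rule]) -> list[str]:
    findings = []
    for rule in rules:
        for other in rules:
            if (other.priority < rule.priority
                    and other.ports.start <= rule.ports.start
                    and other.ports.stop >= rule.ports.stop):
                findings.append(f"{rule.name} is shadowed by {other.name}")
                break
    return findings

rules = [
    Rule("allow-all-tcp", 100, range(1, 65536), "allow"),
    Rule("deny-ssh", 200, range(22, 23), "deny"),  # can never be reached
]
print(find_shadowed(rules))  # ['deny-ssh is shadowed by allow-all-tcp']
```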
3. Flexible and scalable controls with Hierarchical Firewall Policies

Firewalls are an integral part of almost any IT security plan. With our native, fully distributed firewall technology, Google Cloud aims to provide the highest performance and scalability for all your enterprise workloads. Google Cloud’s hierarchical firewall policies provide new, flexible levels of control, so that you can benefit from centralized control at the organization and folder level while safely delegating more granular control within a project to the project owner.

Hierarchical firewalls provide a means to enforce firewall rules at the organization and folder levels in the GCP Resource Hierarchy. This allows security administrators at different levels in the hierarchy to define and deploy consistent firewall rules across a number of projects, so that they are applied to all VMs in currently existing and yet-to-be-created projects. Hierarchical firewall policies allow configuring rules at the organization and folder levels, in addition to firewall rules at the VPC level. Because leveraging hierarchical firewalls requires fewer firewall rules, managing multiple environments becomes simpler and more effective. Further, being able to manage the most critical firewall rules in one place can free project-level administrators from having to keep up with changing organization-wide policies. Hierarchical firewall policies will be generally available soon.

(Image: Hierarchical firewall policies)

4. New controls for Packet Mirroring

Google Cloud Packet Mirroring allows you to mirror network traffic from your existing Virtual Private Clouds (VPCs) to third-party network inspection services. With this service, you can use those third-party tools to collect and inspect network traffic at scale, providing intrusion detection, application performance monitoring, and better security visibility, helping you with the security and compliance of workloads running in Compute Engine and Google Kubernetes Engine (GKE).

We are adding new packet mirroring filters, which will be generally available soon. With traffic direction control, you can now mirror either the ingress or egress traffic, helping users better manage their traffic volume and reduce costs (a configuration sketch appears below).

(Image: Traffic Direction: New Ingress & Egress controls for Packet Mirroring)

With these enhancements, we are helping Google Cloud customers stay safe when using our network security products. For a hands-on experience with our network security portfolio, you can enroll in our network security labs. You can also learn more about Google Cloud security in the latest installment of Google Cloud Security Talks, live today.
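For traffic direction control, a minimal sketch using the Compute Engine API's packetMirrorings resource follows. This is a hedged example: the field names reflect our reading of the public API, and every project, region, network, and forwarding-rule name is a placeholder.

```python
# Hedged sketch: create a packet mirroring policy that mirrors only EGRESS
# traffic. Field names follow the Compute Engine packetMirrorings API as we
# understand it; all resource names below are placeholders.
from googleapiclient import discovery

compute = discovery.build("compute", "v1")  # uses Application Default Credentials
body = {
    "name": "mirror-egress-only",
    "network": {"url": "projects/my-project/global/networks/my-vpc"},
    "collectorIlb": {
        "url": "projects/my-project/regions/us-central1/forwardingRules/collector"
    },
    "mirroredResources": {
        "subnetworks": [
            {"url": "projects/my-project/regions/us-central1/subnetworks/workloads"}
        ]
    },
    # Traffic direction control: EGRESS here; INGRESS or BOTH are the other options.
    "filter": {"direction": "EGRESS"},
}
request = compute.packetMirrorings().insert(
    project="my-project", region="us-central1", body=body
)
print(request.execute())
```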
Source: Google Cloud Platform
For many of us the holiday season will look different this year, separated from the people we love. If you’re in this boat too, mitigating the spread of the coronavirus, thank you, and we hope the following story might offer an alternative but helpful way to connect with friends and family. While we know virtual get-togethers can never fully match the intimacy of in-person conversations, they can keep us connected and maybe even preserve some special moments for future generations.

In this spirit, we are sharing our collaboration with StoryCorps, a national non-profit organization dedicated to preserving humanity’s stories through 1:1 interviews. Over the past 17 years, StoryCorps has recorded interviews with more than 600,000 people and sent those recordings to the U.S. Library of Congress, where they are preserved for generations to come at the American Folklife Center. This is the largest collection of human voices on the planet, but it has been relatively inaccessible. That’s when StoryCorps approached us to help make its rich archive of first-person history universally accessible and useful.

StoryCorps + Google Cloud AI

In 2019, StoryCorps and Google Cloud partnered to unlock this amazing archive using artificial intelligence (AI) and create an open, searchable, and accessible audio database for everyone to find and listen to first-hand perspectives from humanity’s most important moments. Diving into how this works: for an audio recording to be searchable, the audio file, and the “moments” or keywords within that file, need to be tagged with the terms you would search for:

1. First, we used the Speech-to-Text API to transcribe the audio file.
2. Then the Natural Language API identified keywords and their salience from the transcription.
3. The transcript and keywords were loaded into an Elasticsearch index.
4. The result is a searchable transcript on the StoryCorps Archive.

Here is an example of how these Cloud AI technologies work using an actual StoryCorps interview; a minimal Python sketch of these steps also appears at the end of this piece.

Building empathy and understanding through connection

StoryCorps’ mission is impressive. Not only is it preserving humanity’s stories, its aim is to “build connections between people and create a more just and compassionate world” by sharing those stories as widely as possible. This is where our path with StoryCorps crosses on a deeper level. Our mission for AI technology is one where everyone is accounted for, extending well beyond the training data in computer science departments. This deeper understanding could allow organizations in every sector to unlock new possibilities of what they have to offer while being inclusive, equitable, and socially beneficial. But that’s our story to figure out, and we’re working hard at it.

Whatever you decide to do this holiday season, please stay safe. In the meantime, perhaps your family would like to use the StoryCorps platform or app to connect, preserve, and share a story of your own.
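For readers who want to try the same four steps, here is a hedged Python sketch using the Speech-to-Text and Natural Language client libraries. The bucket and file names are placeholders, StoryCorps' production pipeline is not public beyond the steps above, and long recordings would need the asynchronous long_running_recognize variant.

```python
# Hedged sketch of the transcribe-then-tag pipeline described above.
# Bucket/file names are placeholders, not StoryCorps resources.
from google.cloud import language_v1, speech

# 1. Transcribe the interview audio (short clip; long audio would use
#    long_running_recognize instead of recognize).
speech_client = speech.SpeechClient()
response = speech_client.recognize(
    config=speech.RecognitionConfig(language_code="en-US"),
    audio=speech.RecognitionAudio(uri="gs://example-bucket/interview.flac"),
)
transcript = " ".join(r.alternatives[0].transcript for r in response.results)

# 2. Extract keywords and their salience from the transcript.
lang_client = language_v1.LanguageServiceClient()
entities = lang_client.analyze_entities(
    document=language_v1.Document(
        content=transcript, type_=language_v1.Document.Type.PLAIN_TEXT
    )
).entities
keywords = sorted(((e.name, e.salience) for e in entities),
                  key=lambda kv: -kv[1])

# 3./4. Load into an Elasticsearch index to make the archive searchable
#       (indexing client omitted here for brevity).
doc = {"transcript": transcript, "keywords": [k for k, _ in keywords]}
print(doc)
```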
Source: Google Cloud Platform
Cloud opens many new possibilities for High Performance Computing (HPC). But while the cloud offers the latest technologies and a wide variety of machine types (VMs), not every VM is suited to the demands of HPC workloads. Google Cloud’s Compute-optimized (C2) machines are specifically designed to meet the needs of the most compute-intensive workloads, such as HPC applications in fields like scientific computing, Computer-aided Engineering (CAE), biosciences, and Electronic Design Automation (EDA), among many others.

The C2 is based on the second-generation Intel® Xeon® Scalable Processor and provides up to 60 virtual cores (vCPUs) and 240 GB of system memory. C2s can run at a sustained frequency of 3.8 GHz and offer more than 40% improvement over previous-generation VMs for general applications. Compared to previous-generation VMs, total memory bandwidth improves by 1.21X and memory bandwidth per vCPU improves by 1.94X.[1] Here we take a deeper look at using C2 VMs for your HPC workloads on Google Cloud.

Resource isolation

Tightly coupled HPC workloads rely on resource isolation for predictable performance. C2 is built for isolation and consistent mapping of shared physical resources (e.g., CPU caches and memory bandwidth). The result is reduced variability and more consistent performance. C2 also exposes and enables explicit user control of CPU power states (“C-states”) on larger VM sizes, enabling higher effective frequencies and performance.

NUMA nodes

In addition to hardware improvements, Google Cloud has enabled a number of HPC-specific optimizations on C2 instances. In many cases, tightly coupled HPC applications require careful mapping of processes or threads to physical cores, along with care to ensure processes access memory that is closest to their physical cores. C2s provide explicit visibility and control of NUMA domains to the guest operating system (OS), enabling maximum performance.

AVX-512 support

Second-generation Xeon processors support Intel Advanced Vector Extensions 512 (Intel AVX-512) for data parallelism. AVX-512 instructions are SIMD (Single Instruction, Multiple Data) instructions; with the additional and wider registers, a single 512-bit instruction can operate on 16 single-precision (or 8 double-precision) floating-point values, and with two fused multiply-add units a core can execute up to 64 single-precision floating-point operations per clock cycle. This means that more can be done in every clock cycle, reducing overall execution time. The latest generation of AVX-512 instructions in the second-generation Xeon processor includes DL Boost instructions that significantly improve performance for AI inferencing by combining three INT8 instructions into one, thereby maximizing the use of compute resources, utilizing the cache better, and avoiding potential bandwidth bottlenecks.

Low latency

HPC workloads often scale out to multiple nodes in order to accelerate time to completion. Google Cloud has enabled a “Compact Placement Policy” on the C2, which allocates up to 1,320 vCPUs placed in close physical proximity, minimizing cross-node latencies. Compact placement, in conjunction with the Intel MPI library, optimizes multi-node scalability of HPC applications.
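As an aside, here is a minimal sketch of creating a compact placement policy through the Compute Engine resourcePolicies API. The field names are to the best of our knowledge, the project, region, and policy names are placeholders, and 22 collocated c2-standard-60 VMs correspond to the 1,320-vCPU figure above.

```python
# Hedged sketch: create a compact placement policy and reference it when
# creating C2 instances. Field names per the Compute Engine resourcePolicies
# API as we understand it; all names are placeholders.
from googleapiclient import discovery

compute = discovery.build("compute", "v1")  # uses Application Default Credentials
compute.resourcePolicies().insert(
    project="my-project",
    region="us-central1",
    body={
        "name": "hpc-compact",
        "groupPlacementPolicy": {"collocation": "COLLOCATED", "vmCount": 22},
    },
).execute()
# C2 instances created with
#   "resourcePolicies": ["regions/us-central1/resourcePolicies/hpc-compact"]
# are then placed in close physical proximity (22 x 60 vCPUs = 1,320 vCPUs).
```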
You can learn more about best practices for ensuring low latency on multi-node workloads here.

Development tools

Along with the hardware optimizations, Intel offers a comprehensive suite of development tools (including performance libraries, Intel compilers, and performance monitoring and tuning tools) to make it simpler to build and modernize code with the latest techniques in vectorization, multithreading, multi-node parallelization, and memory optimization. Learn more about Intel’s Parallel Studio XE here.

Bringing it all together

Combining all the improvements in hardware and the optimizations in the Google Cloud stack, C2 VMs perform up to 2.10X better than the previous-generation N1 for HPC workloads, for roughly the same size VM.[2]

In many cases HPC applications can scale up to the full node. A single C2 node (60 vCPUs and 240 GB) offers up to 2.49X better performance/price compared to a single N1 node (96 vCPUs and 360 GB).[3]

C2s are offered in predefined shapes intended to deliver the most appropriate vCPU and memory configurations for typical HPC workloads. In some cases, it is possible to further optimize performance or performance/price via a custom VM shape. For example, if a certain workload is known to require less than the default 240 GB of a C2-standard 60-vCPU VM, a custom N2 machine with less memory can deliver roughly the same performance at a lower cost. We were able to achieve up to 1.09X better performance/price by tuning the VM shape to the needs of several common HPC workloads.[4]

Get started today

As more HPC workloads start to benefit from the agility and flexibility of the cloud, Google Cloud and Intel are joining forces to create solutions optimized for the specific needs of these workloads. With the latest optimizations in Intel second-generation Xeon processors and Google Cloud, C2 VMs deliver the best solution for running HPC applications in Google Cloud, while giving you the freedom to build and evolve around your unique business needs. Many of our customers with a need for high performance have moved their workloads to C2 VMs and confirmed our expectations.

To learn more about C2 and the second generation of the Intel Xeon Scalable Processor, contact your sales representative or reach out to us here. And if you’re participating in SC20 this week, be sure to check out our virtual booth, where you can watch sessions, access resources, and chat with our HPC experts.

Footnotes:
1. Based on internal analysis of our c2-standard-60 and n1-standard-96 machine types, using the STREAM Triad Best Rate benchmark.
2. Based on internal analysis of our c2-standard-60 and n1-standard-96 machine types, using our Weather Research and Forecasting (WRF) benchmark.
3. Based on the High Performance Conjugate Gradients (HPCG) benchmark, analyzing Google Cloud VM instance pricing for c2-standard-60 ($3.1321/hour) and n1-standard-96 ($4.559976/hour) as of 10/15/2020.
4. Based on GROMACS and NAMD benchmarks, analyzing Google Cloud VM instance pricing for n2-custom-80 with 160 GB ($3.36528/hour) and c2-standard-60 ($3.1321/hour) as of 10/15/2020.
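As a back-of-the-envelope aside, the prices in footnote 3 let you decompose the 2.49X performance/price claim into an hourly price ratio and an implied raw performance ratio. The HPCG performance ratio below is inferred from those published numbers, not stated in the post.

```python
# Decomposing the performance/price claim in footnote 3 (an inference from
# published numbers): perf/price ratio = raw perf ratio * hourly price ratio.
c2_price = 3.1321    # $/hour, c2-standard-60, as of 10/15/2020
n1_price = 4.559976  # $/hour, n1-standard-96, as of 10/15/2020

price_ratio = n1_price / c2_price   # ~1.46: the C2 node costs that much less
perf_price_ratio = 2.49             # claimed C2 vs. N1 advantage on HPCG
implied_perf_ratio = perf_price_ratio / price_ratio

print(f"price ratio:        {price_ratio:.2f}x")
print(f"implied HPCG ratio: {implied_perf_ratio:.2f}x")  # ~1.71x raw performance
```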
Source: Google Cloud Platform
The D1 electric car was developed specifically to Didi's requirements. (Electric car, technology)
Source: Golem
What else happened on November 18, 2020 beyond the big stories, in brief. (Short news, Linux kernel)
Source: Golem
Ericsson's Borje Ekholm has rejected his own government's exclusion of Huawei, the company's most important competitor. (Huawei, legal disputes)
Source: Golem
Deutsche Glasfaser and Telekom emphasize that cooperation is needed to reach the fiber-optic rollout targets. (Fiber optics, Long Term Evolution)
Source: Golem