Protect your running VMs with new OS patch management service

Managing patches effectively is a great way to keep your infrastructure up-to-date and reduce the risk of security vulnerabilities. But without the right tools, patching can be daunting and labor intensive.

Today, we are announcing the general availability of Google Cloud's OS patch management service to protect your running VMs against defects and vulnerabilities. The service works on Google Compute Engine and across OS environments (Windows, Linux).

Automate OS security and compliance

With OS patch management, you can apply OS patches across a set of VMs, receive patch compliance data across your environments, and automate installation of OS patches across VMs, all from one centralized location. The OS patch management service has two main components:

Compliance reporting, which provides detailed compliance reports and insights on the patch status of your VM instances across Windows and Linux distributions.
Patch deployment, which automates the installation of OS patches across your VM fleet, with flexible scheduling and advanced patch configuration controls. You can set up flexible schedules and still keep systems up-to-date by running your patch updates within designated maintenance windows.

Managing patches for your applications doesn't have to be a time-consuming exercise. OS patch management's automated compliance reporting helps your systems stay up-to-date against vulnerabilities, reducing the risk of downtime for your business and of lost productivity for your internal users. IT administrators can now focus on other business-critical tasks instead of manual patch update processes.

Get started today

The current release of OS patch management is available at no cost through December 31, 2020. You can start using OS patch management in the Google Cloud Console today. To learn more about how to set up the service, check out the documentation.
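The patch service can also be driven programmatically through the OS Config API. Below is a minimal sketch, assuming the google-api-python-client library and application default credentials; the project ID is a placeholder, and the request fields follow the v1 REST reference but should be verified against the OS Config documentation before use.

```python
# Hedged sketch: start a one-off, dry-run patch job across a project's VMs
# via the OS Config API. "my-project" is a hypothetical project ID.
from googleapiclient import discovery

PROJECT = "my-project"

service = discovery.build("osconfig", "v1")

body = {
    "description": "Monthly security patching (dry run)",
    "instanceFilter": {"all": True},  # target every VM in the project
    "duration": "3600s",              # give the job up to one hour
    "dryRun": True,                   # report what would be patched without changing anything
}

# The API method itself is named "execute", so build the request first and
# then send it with the client library's usual execute() call.
request = service.projects().patchJobs().execute(
    parent=f"projects/{PROJECT}", body=body
)
patch_job = request.execute()
print(patch_job.get("name"), patch_job.get("state"))
```

Removing dryRun (or creating a patch deployment with a recurring schedule instead of a one-off job) turns this into real patching that runs inside your designated maintenance windows.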
Quelle: Google Cloud Platform

Protecting businesses against cyber threats during COVID-19 and beyond

No matter the size of your business, IT teams are facing increased pressure to navigate the challenges of COVID-19. At the same time, some things remain constant: security is at the top of the priority list, and phishing is still one of the most effective methods that attackers use to compromise accounts and gain access to company data and resources. In fact, bad actors are creating new attacks and scams every day that attempt to take advantage of the fear and uncertainty surrounding the pandemic. It's our job to constantly stay ahead of these threats to help you protect your organization.

In February, we talked about a new generation of document malware scanners that rely on deep learning to improve our detection capabilities across the more than 300 billion attachments we scan for malware every week. These capabilities help us maintain a high rate of detection even though 63% of the malicious documents blocked by Gmail differ from day to day. To further help you defend against these attacks, today we're highlighting some examples of COVID-19-related phishing and malware threats we're blocking in Gmail, sharing steps for admins to deal with them effectively, and detailing best practices for users to avoid threats.

The attacks we're seeing (and blocking)

Every day, Gmail blocks more than 100 million phishing emails. During the last week, we saw 18 million daily malware and phishing emails related to COVID-19, in addition to more than 240 million COVID-related daily spam messages. Our ML models have evolved to understand and filter these threats, and we continue to block more than 99.9% of spam, phishing, and malware from reaching our users.

The phishing attacks and scams we're seeing use both fear and financial incentives to create urgency and prompt users to respond. Here are some examples:

Impersonating authoritative government organizations like the World Health Organization (WHO) to solicit fraudulent donations or distribute malware, including downloadable files that can install backdoors. In addition to blocking these emails, we worked with the WHO to clarify the importance of an accelerated implementation of DMARC (Domain-based Message Authentication, Reporting, and Conformance) and highlighted the necessity of email authentication to improve security. DMARC makes it harder for bad actors to impersonate the who.int domain, preventing malicious emails from reaching the recipient's inbox while making sure legitimate communication gets through (see the DNS lookup sketch below).
Phishing attempts aimed at employees operating in a work-from-home setting.
Attempts to capitalize on government stimulus packages, imitating government institutions to phish small businesses.
Attempts targeting organizations impacted by stay-at-home orders.

Improving security with proactive capabilities

We have put proactive monitoring in place for COVID-19-related malware and phishing across our systems and workflows. In many cases, these threats are not new; rather, they're existing malware campaigns that have simply been updated to exploit the heightened attention on COVID-19. As soon as we identify a threat, we add it to the Safe Browsing API, which protects users in Chrome, Gmail, and all other integrated products. Safe Browsing helps protect over four billion devices every day by showing warnings to users when they attempt to navigate to dangerous sites or download dangerous files.
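DMARC, highlighted in the WHO example above, is published as a DNS TXT record on a domain's _dmarc subdomain. The following is a minimal sketch, assuming the third-party dnspython package (version 2.0 or later, which provides dns.resolver.resolve); it simply reports whether a domain publishes a DMARC policy and is not an official Google tool.

```python
# Hedged sketch: look up a domain's DMARC policy record via DNS.
import dns.resolver  # pip install dnspython

def dmarc_record(domain):
    """Return the DMARC TXT record for a domain, or None if none is published."""
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None
    for rdata in answers:
        txt = b"".join(rdata.strings).decode()
        if txt.lower().startswith("v=dmarc1"):
            return txt  # e.g. "v=DMARC1; p=reject; ..."
    return None

print(dmarc_record("who.int"))
```

A policy of p=quarantine or p=reject is what actually instructs receiving mail servers to filter or refuse unauthenticated mail claiming to come from the domain.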
In G Suite, advanced phishing and malware controls are turned on by default, ensuring that all G Suite users automatically have these proactive protections in place. These controls can:

Route emails that match phishing and malware controls to a new or existing quarantine
Identify emails with unusual attachment types and automatically display a warning banner, send them to spam, or quarantine the messages
Identify unauthenticated emails trying to spoof your domain and automatically display a warning banner, send them to spam, or quarantine the messages
Protect against documents that contain malicious scripts that can harm your devices
Protect against attachment file types that are uncommon for your domain
Scan linked images and identify links behind shortened URLs
Protect against messages where the sender's name is a name in your G Suite directory, but the email isn't from your company domain or domain aliases

Best practices for organizations and users

Admins can look at Google-recommended defenses on our advanced phishing and malware protection page, and may choose to enable the security sandbox. Users should:

Complete a Security Checkup to improve their account security
Avoid downloading files they don't recognize; instead, use Gmail's built-in document preview
Check the integrity of URLs before providing login credentials or clicking a link; fake URLs generally imitate real URLs and include additional words or domains (see the sketch below)
Avoid and report phishing emails
Consider enrolling in Google's Advanced Protection Program (APP); we've yet to see anyone who participates in the program be successfully phished, even if they're repeatedly targeted

At Google Cloud, we're committed to protecting our customers from security threats of all types. We'll keep innovating to make our security tools more helpful for users and admins and more difficult for malicious actors to circumvent.
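Returning to the user tip above about checking URLs: the Safe Browsing Lookup API mentioned earlier can also be queried directly. Here is a minimal sketch, assuming the requests package and a Safe Browsing API key (the key and client ID are placeholders); the request shape follows the public threatMatches:find documentation.

```python
# Hedged sketch: ask the Safe Browsing Lookup API (v4) whether a URL is on
# a known malware or social-engineering list. The API key is a placeholder.
import requests

API_KEY = "YOUR_API_KEY"
ENDPOINT = f"https://safebrowsing.googleapis.com/v4/threatMatches:find?key={API_KEY}"

def looks_dangerous(url):
    payload = {
        "client": {"clientId": "example-org", "clientVersion": "1.0"},
        "threatInfo": {
            "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING"],
            "platformTypes": ["ANY_PLATFORM"],
            "threatEntryTypes": ["URL"],
            "threatEntries": [{"url": url}],
        },
    }
    resp = requests.post(ENDPOINT, json=payload, timeout=10)
    resp.raise_for_status()
    # An empty JSON response body means no known threats matched the URL.
    return bool(resp.json().get("matches"))

print(looks_dangerous("http://example.com/"))
```

A lookup like this complements, but does not replace, the built-in Gmail and Safe Browsing protections described above.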
Quelle: Google Cloud Platform

Edge Computing Challenges

There is a lot of talk around edge computing. What is it? What will it mean to the telco industry? Who else will benefit from it? There’s also a large amount of speculation about identifying the killer application that will spark massive scale deployment of edge computing resources. 
In many ways, edge computing is just a logical extension of existing software-defined datacenter models. The primary goal is to provide access to compute, storage and networking resources in a standardised way, whilst abstracting the complexity of managing those resources away from applications. The key factor missing from many of these discussions, however, is a clear view of how we will be expected to deploy, manage, and gain visibility into these edge resources.
The key challenge here is that those resources need to be managed in a consistent and effective way in order to ensure that application developers and owners can rely on the infrastructure, and will be able to react to changes or issues in the infrastructure in a predictable way. 
The value of cloud infrastructure software such as OpenStack is the provision of standardised APIs that developers can utilise to get access to resources, regardless of what they are or how they need to be managed.
With the advent of technologies such as Kubernetes, the challenge of managing the infrastructure in no way lessens; we still need to be able to understand what resources we have available, control access to them, and lifecycle manage them.
In order to enable the future goal of providing distributed ubiquitous compute resources to all who need them, where they need them, and when they need them, we have to look deeper into what is required for an effective edge compute solution.
What is Edge Computing?
Finding a clear definition of edge computing can be challenging; there are many opinions on what constitutes the edge. Some definitions narrow the scope, claiming that edge only includes devices required to support low-latency workloads, or devices that are the last computation point before the consumer, whilst others include the consumer device or an IoT device, even if latency is not an issue.
Since everyone appears to have a slightly different perspective on what edge computing entails, a common understanding helps; the following is the definition of edge computing used for this discussion.
In this discussion we take a broad interpretation of edge computing: it includes all compute devices that are not located in core or regional data centers and that bring computing resources closer to the end user or to data collection devices.
For example, consider this hierarchy:

There are a number of different levels, starting with core data centers, which generally consist of fewer locations, each containing a large number of nodes and workloads. These core data centers feed into (or are fed by, depending on the direction of traffic!) the regional data centers.
Regional data centers tend to be more numerous and more widely distributed than core data centers, but they are also smaller and consist of a smaller — though still significant — number of nodes and workloads.
From there we move down the line to edge compute locations; these locations are still clouds, consisting of a few to a few dozen nodes and hosting a few dozen workloads, and existing in potentially hundreds of thousands of locations, such as cell towers or branch offices. 
These clouds serve the far edge layer, also known as “customer premise equipment”. These are single servers or routers that can exist in hundreds of thousands, or even millions of locations, and serve a relatively small number of workloads. Those workloads are then accessed by individual consumer devices.
Finally, the consumer or deep edge layer is where the services provided by the other layers are consumed, or where data is collected and processed.
Edge Use Cases
There are a large number of potential use cases for edge computing, with more being identified all the time. Broadly, we can divide edge use cases into third-party applications and telco operator applications.
Third-party applications are those that are more likely to be accessed by end users, such as wireless access points in a public stadium or connected cars, or, on the business end, connecting the enterprise to the RAN.
Operator applications, on the other hand, are more of an internal concern. They consist of applications such as geo-fencing of data, data reduction at the edge to enable more efficient analytics, or Mobile Core.
All of these applications, however, fall into the "low latency requirements" category. Other edge use cases that don't involve latency might include a supermarket that hosts an edge cloud that communicates with scanners customers can use to check out their groceries as they shop, or an Industrial IoT scenario in which hundreds or thousands of sensors feed information from different locations in a manufacturing plant to the plant's local edge cloud, which then aggregates the data and sends it to the regional cloud.
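To make the Industrial IoT example above concrete, here is a small sketch of the aggregation pattern an edge node might run; the function names and batching interval are hypothetical stand-ins rather than any specific product's API.

```python
# Hedged sketch: an edge node batches raw sensor readings locally and
# forwards only periodic summaries to the regional cloud.
import statistics
import time
from collections import defaultdict

BATCH_SECONDS = 60  # aggregate locally for a minute before forwarding

def summarise(readings):
    """Reduce raw readings per sensor to a compact summary."""
    return {
        sensor: {"count": len(values), "mean": statistics.fmean(values), "max": max(values)}
        for sensor, values in readings.items()
    }

def run(read_sensor_batch, send_to_regional_cloud):
    """Both callables are stand-ins for plant-specific I/O."""
    buffer = defaultdict(list)
    deadline = time.monotonic() + BATCH_SECONDS
    while True:
        for sensor_id, value in read_sensor_batch():
            buffer[sensor_id].append(value)
        if time.monotonic() >= deadline and buffer:
            # One small payload upstream instead of thousands of raw readings.
            send_to_regional_cloud(summarise(buffer))
            buffer.clear()
            deadline = time.monotonic() + BATCH_SECONDS
```

The point is the shape of the data flow: the raw volume stays at the edge, and only what the regional cloud needs for analytics crosses the wider network.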
Edge Essential Requirements
The delivery of any compute service has a number of requirements that need to be met. With edge computing, delivering a massively distributed compute service takes all those requirements and compounds them, not only because of the scale, but also because access (both physical and over the network) may be restricted by device or cloud location.
So taking this into account, what are the requirements for edge computing?


Security (isolation)

Effective isolation of workloads is critical to ensure not only that workloads will not interfere with each other's resources, but also that they cannot access each other's data in a multi-tenanted environment.
Clear access control and RBAC policies and systems are required to support appropriate separation of duties and to prevent unauthorised access by both good and bad actors.
Cryptographic identification and authentication of edge compute resources are also required.

Resource management

The system must provide the ability to manage the physical and virtual resources needed to serve consumers in a consistent way, with minimal input from administrators.
Operators must be able to manage all resources remotely, with no need for hands-on work at the local site.

Telemetry Data

The system must provide a clear understanding of resource availability and consumption, in a way that gives applications the data necessary to make programmatic decisions about application distribution and scaling. This requires:

Providing applications with data on inbound demand 
Providing applications with geographic data that is relevant to application decisions

Operations

Low-impact infrastructure operations with zero application downtime are critical.
Low-touch or (preferably) zero-touch infrastructure operations tooling must be available.
An efficient edge system requires a very high degree of automation and self-healing capabilities.
The system must consist of self-contained operations with minimal dependencies on remote systems that could be affected by low network bandwidth, high latency, or outages.

Open Standards 

A key feature of edge systems is the ability to rapidly deploy new and diverse workloads and integrate them with a number of different environments. Basing the solution on open standards allows for this flexibility and supports standardisation.
Open standards should be used in all areas that affect the deployment and management of workloads, enabling easy and rapid certification of workloads, such as: 

A common standard for the abstraction of APIs, which simplifies development and deployment
Standardised virtualisation or container engines

Stability and Predictability

Edge compute platforms need to behave predictably in different scenarios to ensure a consistent usage experience.
The stability of edge compute solutions is critical; this encompasses graceful recovery from errors, as well as being able to handle harsh environmental conditions with potentially unpredictable utilities and other external services.

Performance

Predictable and clearly advertised performance of edge compute systems is critical for the effective and appropriate hosting of applications. For example, it should be clear whether the environment provides access to specialised hardware components such as SmartNICs and network accelerators.
The performance requirements for edge compute systems are driven by the application's needs. For example, a gaming application may need lots of CPU and GPU power and very low-latency network connections, while a data logger may be based on a low-power CPU and can trickle-feed the collected data over time.

Abstraction

Edge systems must provide a level of abstraction for infrastructure components in order to support effective application/workload portability over multiple platforms. Common standard APIs typically drive this portability.

Sound familiar?
If you’re thinking that this sounds a lot like the theory behind cloud computing, you’re right. In many ways, “edge” is simply cloud computing taken a bit further out of the datacenter. The distinction certainly imposes new requirements, but the good news is that your cloud skills can be brought to bear to get you started.
If this seems overwhelming, don’t worry, we’re here for you! Please don’t hesitate to contact us and see how Mirantis can help you plan and execute your edge computing architecture.
The post Edge Computing Challenges appeared first on Mirantis | Pure Play Open Cloud.
Quelle: Mirantis

Welcome new members to the OpenStack Technical Committee

Please join the community in congratulating the five newly elected members of the OpenStack Technical Committee (TC): Graham Hayes (mugsie), Kristi Nikolla (knikolla), Mohammed Naser (mnaser), Belmiro Moreira (belmoreira), and Rico Lin (ricolin). These members join Kendall Nelson (diablo_rojo), Jay Bryant (jungleboyj), Jean-Philippe Evrard (evrardjp), Nate Johnston (njohnston), Ghanshyam Mann (gmann), and Kevin Carter (cloudnull).
Quelle: openstack.org

PEPP-PT: Dispute within the corona app project

There is a conflict within the European consortium PEPP-PT, which aims to develop the technology for a corona tracking app: information about a decentralized approach was removed from the website without consultation. By Hanno Böck (Coronavirus, Smartphone)
Quelle: Golem