Xperia 1 review: The smartphone for film fans

With its 21:9 format, Sony's Xperia 1 seems to go on forever; its shape is reminiscent of a remote control. Over the course of our review, however, our initial skepticism about the format faded: the smartphone is perfect for multitasking and watching video. A review by Tobias Költzsch (Sony, Smartphone)
Source: Golem

OpenShift Commons Briefing: OKD4 Release and Road Map Update with Clayton Coleman (Red Hat)

In the briefing, Red Hat’s Clayton Coleman, Lead Architect for Containerized Application Infrastructure (OpenShift, Atomic, and Kubernetes), leads a discussion about the current development efforts for OKD4, Fedora CoreOS, and Kubernetes in general, as well as the philosophy guiding OKD4 development efforts. The briefing includes discussion of shared community goals for OKD4 and beyond […]
The post OpenShift Commons Briefing: OKD4 Release and Road Map Update with Clayton Coleman (Red Hat) appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Event-driven analytics with Azure Data Lake Storage Gen2

Most modern-day businesses employ analytics pipelines for real-time and batch processing. A common characteristic of these pipelines is that data arrives at irregular intervals from diverse sources. This adds complexity: the pipeline must be orchestrated so that data is processed in a timely fashion.

The answer to these challenges is a decoupled, event-driven pipeline built from serverless components that responds to changes in the data as they occur.

An integral part of any analytics pipeline is the data lake. Azure Data Lake Storage Gen2 provides secure, cost-effective, and scalable storage for the structured, semi-structured, and unstructured data arriving from diverse sources. Azure Data Lake Storage Gen2’s performance, global availability, and partner ecosystem make it the platform of choice for analytics customers and partners around the world. Next comes the event-processing aspect. With Azure Event Grid, a fully managed event routing service; Azure Functions, a serverless compute engine; and Azure Logic Apps, a serverless workflow orchestration engine, it is easy to build event-based processing and workflows that respond to events in real time.

Today, we’re very excited to announce that Azure Data Lake Storage Gen2 integration with Azure Event Grid is in preview! This means that Azure Data Lake Storage Gen2 can now generate events that can be consumed by Event Grid and routed to subscribers with webhooks, Azure Event Hubs, Azure Functions, and Logic Apps as endpoints. With this capability, individual changes to files and directories in Azure Data Lake Storage Gen2 can automatically be captured and made available to data engineers for creating rich big data analytics platforms that use event-driven architectures.
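To give a sense of what consuming these events can look like, here is a minimal sketch of an Azure Function written in Python with an Event Grid trigger. The function simply logs the event; in a real pipeline the handler would kick off downstream processing. It assumes a function.json with an eventGridTrigger binding named "event", and the function and variable names are illustrative.

```python
import json
import logging

import azure.functions as func


def main(event: func.EventGridEvent):
    """Handle an Azure Data Lake Storage Gen2 event delivered by Event Grid.

    Assumes a function.json with an "eventGridTrigger" binding named "event".
    """
    payload = event.get_json()  # event-specific data, e.g. the blob URL

    logging.info(
        "Received %s for subject %s: %s",
        event.event_type,  # e.g. Microsoft.Storage.BlobCreated
        event.subject,     # e.g. /blobServices/default/containers/<container>/blobs/<path>
        json.dumps(payload),
    )

    # A real handler would now trigger downstream processing, for example
    # submitting an Azure Databricks job or starting a Logic Apps workflow.
```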

A reference architecture for the modern data warehouse pipeline built on Azure Data Lake Storage Gen2 and Azure serverless components works as follows: data from various sources lands in Azure Data Lake Storage Gen2 via Azure Data Factory and other data movement tools. Azure Data Lake Storage Gen2 generates events for new file creations, updates, renames, or deletes, which are routed via Event Grid and an Azure Function to Azure Databricks. An Azure Databricks job processes the file and writes the output back to Azure Data Lake Storage Gen2. When this happens, Azure Data Lake Storage Gen2 publishes a notification to Event Grid, which invokes an Azure Function to copy the data to Azure SQL Data Warehouse. The data is finally served via Azure Analysis Services and Power BI.

The events that will be made available for Azure Data Lake Storage Gen2 are BlobCreated, BlobDeleted, BlobRenamed, DirectoryCreated, DirectoryDeleted, and DirectoryRenamed. Details on these events can be found in the documentation “Azure Event Grid event schema for Blob storage.”

Some key benefits include:

Seamless integration to automate workflows enables customers to build an event-driven pipeline in minutes.
Enable alerting with rapid reaction to the creation, deletion, and renaming of files and directories. A myriad of scenarios would benefit from this, especially those associated with data governance and auditing. For example, alert and notify on all changes to high-business-impact data, set up email notifications for unexpected file deletions, or detect and act on suspicious activity from an account (see the sketch after this list).
Eliminate the complexity and expense of polling services, and use webhooks to integrate events coming from your data lake with third-party applications such as billing and ticketing systems.
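As a sketch of the auditing scenario above (alerting on unexpected file deletions), the same Event Grid-triggered function can filter on the event type. The notify_admins helper is hypothetical; substitute whatever notification channel you use, such as a Logic App, an email service, or a ticketing system webhook.

```python
import logging

import azure.functions as func


def notify_admins(message: str) -> None:
    # Hypothetical placeholder: wire this up to your notification channel,
    # e.g. a Logic App HTTP endpoint, an email service, or a ticketing system.
    logging.warning("ALERT: %s", message)


def main(event: func.EventGridEvent):
    # Alert only on deletions; ignore the other ADLS Gen2 event types.
    if event.event_type == "Microsoft.Storage.BlobDeleted":
        data = event.get_json()
        notify_admins(f"File deleted in the data lake: {data.get('url', event.subject)}")
```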

Next steps

Azure Data Lake Storage Gen2 Integration with Azure Event Grid is now available in West Central US and West US 2. Subscribing to Azure Data Lake Storage Gen2 events works the same as it does for Azure Storage accounts. To learn more, see the documentation “Reacting to Blob storage events.” We would love to hear more about your experiences with the preview and get your feedback at ADLSGen2QA@microsoft.com.
Source: Azure

See how your code actually executes with Stackdriver Profiler, now GA

We’re happy to announce that Stackdriver Profiler is now generally available. It is an important piece of our Stackdriver monitoring and logging suite for Google Cloud Platform (GCP) services, and it brings continuous CPU and heap profiling so you can improve the performance of your cloud services and cut costs.

Stackdriver Profiler shows you how your code actually executes in production. You can see how functions are called and which functions are consuming the most CPU and memory, with no noticeable performance impact. Profiler is free to use and supports Java, Go, Node.js, and Python applications running on Google Kubernetes Engine (GKE), Google Compute Engine, containers, VMs, or physical machines running anywhere.

Profiler is useful for optimizing the performance of your code, tracking down the sources of memory leaks, and reducing your costs. It provides insight about production performance that isn’t available anywhere else.

Using Profiler in production

Many of our largest customers are having great success with Profiler. We’ll let them describe the impact that it’s had on their businesses.

"Using Stackdriver Profiler, the back-end team at Outfit7 was able to analyze the memory usage pattern in our batch processing Java jobs running in App Engine Standard, identify the bottlenecks and fix them, reducing the number of OOM [out-of-memory] errors from a few per day to almost zero," says Anže Sodja, senior software engineer at Outfit7 Group (Ekipa2 subsidiary). "Stackdriver Profiler helped us to identify issues fast, and significantly reduced debugging time by enabling us to profile our application directly in the cloud without setting up a local testing environment."

In addition, Snap Inc. has found great success using Profiler. "We used Stackdriver Profiler as part of an effort to improve the scalability of our services," says Evan Yin, software engineer at Snap Inc. "It helped us to pinpoint areas we can optimize and reduce CPU time, which means a lot to us at our scale."

Making Profiler continually better

We’re always working to add useful new functionality to Profiler. We recently added weight filtering and a table showing the aggregate cost of each function, and we’ve added even more features in the past few months:

Full support for Python applications running on containers and VMs.
New optional coloring modes for the flame graph, which highlight functions based on their consumption, exposed via the new "color mode" filter in the filter bar.
Tool tips for filters, accessible through the question mark button to the right of the filter bar.
The focus table now works with the comparison feature and adds additional comparison columns when two sets of profiles are being compared.

We’re really excited that Profiler is now generally available, and we hope that you are too. In the coming months and quarters we’ll keep focusing on ways to make this product even better. If you haven’t yet used Stackdriver Profiler, get started here.
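To give a sense of what getting started involves, here is a minimal sketch of enabling the Profiler agent in a Python service. It assumes the google-cloud-profiler package is installed; the service name and version are placeholders you would replace with your own.

```python
import googlecloudprofiler


def start_profiler():
    # Start the Stackdriver Profiler agent once, early in program startup.
    # service/service_version are placeholders; a project_id argument is only
    # needed when running outside of GCP.
    try:
        googlecloudprofiler.start(
            service="my-service",
            service_version="1.0.0",
            verbose=3,  # 0 = error, 1 = warning, 2 = info, 3 = debug
        )
    except (ValueError, NotImplementedError) as exc:
        # Profiling is best-effort; don't take the service down if it fails.
        print(f"Could not start the profiler: {exc}")


if __name__ == "__main__":
    start_profiler()
    # ... run the application as usual; profiles are collected continuously.
```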
Source: Google Cloud Platform

Introducing Deep Learning Containers: Consistent and portable environments

It’s easy to underestimate how much time it takes to get a machine learning project up and running. All too often, these projects require you to manage the compatibility and complexities of an ever-evolving software stack, which can be frustrating and time-consuming, and can keep you from what you really want to do: iterating on and refining your model. To help you bypass this setup and quickly get started with your project, we’re introducing Deep Learning Containers in beta today. Deep Learning Containers are pre-packaged, performance-optimized, and compatibility-tested, so you can get started immediately.

Productionizing your workflow requires not only developing the code or artifacts you want to deploy, but also maintaining a consistent execution environment to guarantee reproducibility and correctness. If your development strategy involves a combination of local prototyping and multiple cloud tools, it can often be frustrating to ensure that all the necessary dependencies are packaged correctly and available to every runtime. Deep Learning Containers address this challenge by providing a consistent environment for testing and deploying your application across GCP products and services, like Cloud AI Platform Notebooks and Google Kubernetes Engine (GKE), making it easy to scale in the cloud or shift to on-prem. In addition, we provide hardware-optimized versions of TensorFlow, whether you’re training on NVIDIA GPUs or deploying on Intel CPUs.

In this blog post, we’ll cover some common scenarios when working with Deep Learning Containers, including how to select a container, develop locally, and create derivative containers for use in Cloud AI Platform Notebooks.

Choose a container and develop locally

All Deep Learning Containers have a preconfigured Jupyter environment, so each can be pulled and used directly as a prototyping space. First, make sure you have the gcloud tool installed and configured. Then, determine the container that you would like to use. All containers are hosted under gcr.io/deeplearning-platform-release and can be listed with gcloud.

Each container provides a Python 3 environment consistent with the corresponding Deep Learning VM, including the selected data science framework, conda, the NVIDIA stack for GPU images (CUDA, cuDNN, NCCL), and a host of other supporting packages and tools. Our initial release consists of containers for TensorFlow 1.13, TensorFlow 2.0, PyTorch, and R, and we are working to reach parity with all Deep Learning VM types.

With the exception of the base containers, the container names follow the format <framework>-<cpu/gpu>.<framework version>. Let’s say you’d like to prototype on CPU-only TensorFlow. Start the TensorFlow Deep Learning Container in detached mode, binding the running Jupyter server to port 8080 on the local machine and mounting /path/to/local/dir to /home in the container; one way to script this from Python is sketched below. The running JupyterLab instance can then be accessed at localhost:8080. Make sure to develop in /home, as any other files will be removed when the container is stopped.

If you would like to use the GPU-enabled containers, you will need a CUDA 10 compatible GPU, the associated driver, and nvidia-docker installed; you can then start the GPU container the same way.
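Here is a minimal local-prototyping sketch using the Docker SDK for Python (pip install docker). The image tag follows the <framework>-<cpu/gpu>.<framework version> convention described above but is an assumption; list the repository to confirm the exact tags, and replace the local path with your own.

```python
import docker

client = docker.from_env()

# Start the CPU-only TensorFlow Deep Learning Container in detached mode,
# exposing the bundled JupyterLab on localhost:8080 and mounting a local
# working directory at /home (anything outside /home is lost on stop).
container = client.containers.run(
    "gcr.io/deeplearning-platform-release/tf-cpu.1-13",  # assumed tag; verify against the repository
    detach=True,
    ports={"8080/tcp": 8080},
    volumes={"/path/to/local/dir": {"bind": "/home", "mode": "rw"}},
)

print(f"Container {container.short_id} started; open http://localhost:8080")
```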
Create derivative containers and deploy to Cloud AI Platform Notebooks and GKE

At some point, you’ll likely need a beefier machine than what your local machine has to offer, but you may have local data and packages that need to be installed in the environment. Deep Learning Containers can be extended to include your local files, and these custom containers can then be deployed in a Cloud AI Platform Notebooks instance or on GKE.

For example, imagine that you have a local Python package called mypackage that you are using as part of your PyTorch workflow. Create a Dockerfile in the directory above mypackage that copies in the package files and installs the package into the default environment. You can add additional RUN pip/conda commands, but you should not modify CMD or ENTRYPOINT, as these are already configured for AI Platform Notebooks. Build and upload this container to Google Container Registry (one way to script this is sketched at the end of this post).

Then, create an AI Platform Notebooks instance using the gcloud CLI (custom container UI support is coming soon). Feel free to modify the instance type and accelerator fields to suit your workload needs.

The image will take a few minutes to set up. If the container was loaded correctly, there will be a link to access JupyterLab written to the proxy-url metadata field, and the instance will appear as ready in the AI Platform > Notebooks UI in the Cloud Console. You can also query the link directly by describing the instance metadata.

Accessing this link will take you to your JupyterLab instance. Please note: only data saved to /home will be persisted across reboots. By default, the container VM mounts /home on the VM to /home in the container, so make sure you create new notebooks in /home, otherwise that work will be lost if the instance shuts down.

Deploying Deep Learning Containers on GKE with NVIDIA GPUs

You can also take advantage of GKE to develop on your Deep Learning Containers. After setting up your GKE cluster with GPUs following the user guide, you just need to specify the container image in your Kubernetes pod spec. A pod spec that requests one GPU for the tf-gpu image and attaches a GCE persistent disk can be deployed with kubectl; after the pod is fully deployed, your running JupyterLab instance can be accessed at localhost:8080.

Getting Started

If you’re not already a Google Cloud customer, you can sign up today for $300 of credit in our free tier. Then, try out our quick start guides and documentation for more details on getting started with your project.
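To make the derivative-container workflow above concrete, here is a small sketch that builds the Dockerfile described earlier and pushes the result to Container Registry, again using the Docker SDK for Python. The project ID and image name are placeholders, and pushing assumes Docker has already been authorized for gcr.io (for example with gcloud auth configure-docker).

```python
import docker

client = docker.from_env()

# Placeholders: substitute your own project and image name.
image_tag = "gcr.io/my-gcp-project/pytorch-with-mypackage:v1"

# Build the derivative image from the Dockerfile in the current directory
# (the directory above mypackage, as described in the post).
image, build_logs = client.images.build(path=".", tag=image_tag)
for entry in build_logs:
    print(entry)

# Push to Container Registry so AI Platform Notebooks or GKE can pull it.
for line in client.images.push(image_tag, stream=True, decode=True):
    print(line)
```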
Source: Google Cloud Platform

How SRE teams are organized, and how to get started

At Google, Site Reliability Engineering (SRE) is our practice of continually defining reliability goals, measuring those goals, and working to improve our services as needed. We recently walked you through a guided tour of the SRE workbook. You can think of that guidance as what SRE teams generally do, paired with when the teams tend to perform these tasks given their maturity level. We believe that many companies can start and grow a new SRE team by following that guidance.

Since then, we have heard that folks understand what SREs generally do at Google and understand which best practices should be implemented at various levels of SRE maturity. We have also heard from many of you how you’re defining your own levels of team maturity. But the next step, how the SRE teams are actually organized, has been largely undocumented, until now!

In this post, we’ll cover how different implementations of SRE teams establish boundaries to achieve their goals. We describe six different implementations that we’ve experienced, and what we have observed to be their most important pros and cons. Keep in mind that your implementations of SRE can be different; this is not an exhaustive list. In recent years, we’ve seen all of these types of teams here in the Google SRE organization (i.e., a set of SRE teams) except for the "kitchen sink." The order of implementations here is a fairly common path of evolution as SRE teams gain experience.

Before you begin implementing SRE

Before choosing any of the implementations discussed here, do a little prep work with your team. We recommend allocating some engineering time from multiple folks and finding at least one part-time advocate for SRE-related practices within your company. This type of initial, less formal setup has some pros and cons:

Pros
Easy to get started on an SRE journey without organizational change.
Lets you test and adapt SRE practices to your environment at low cost.

Cons
Time must be managed between day-to-day job demands and adoption of SRE practices.

Recommended for: Organizations without the scale to justify dedicated SRE team staffing, and/or organizations experimenting with SRE practices before broader adoption.

Types of SRE team implementations

1. Kitchen Sink, a.k.a. "Everything SRE"

This describes an SRE team where the scope of services or workflows covered is usually unbounded. It’s often the first (or only) SRE team in existence, and may grow organically, as it did when Google SRE first got started. We’ve since adopted a hybrid model, including the implementations listed below.

Pros
No coverage gaps between SRE teams, given that only one team is in place.
Easy to spot patterns and draw similarities between services and projects.
SRE tends to act as a glue between disparate dev teams, creating solutions out of distinct pieces of software.

Cons
There is usually a lack of an SRE team charter, or the charter states that everything in the company is possibly in scope, running the risk of overloading the team.
As the company and system complexity grow, such a team tends to move from having deep positive impact on everything to making a lot of shallower contributions. There are ways to mitigate this phenomenon without completely changing the implementation or starting another team (see tiers of service, below).
Issues involving such a team may negatively impact your entire business.

Recommended for: A company with just a couple of applications and user journeys, where adoption of SRE practices and demand for the role has outgrown what can be staffed without a dedicated SRE team, but where the scope remains small enough that multiple SRE teams cannot be justified.

2. Infrastructure

These teams tend to focus on behind-the-scenes efforts that help make other teams’ jobs faster and easier. Common implementations include maintaining shared services (such as Kubernetes clusters) or maintaining common components (like CI/CD, monitoring, IAM, or VPC configurations) built on top of a public cloud provider like Google Cloud Platform (GCP). This is different from SREs working on services related to products, i.e., customer-facing code written in house.

Pros
Allows product developers to use DevOps practices to maintain user-facing products without divergence in practice across the business.
SREs can focus on providing highly reliable infrastructure. They will often define production standards as code and work to smooth out any sharp edges to greatly simplify things for the product developers running their own services.

Cons
Depending on the scope of the infrastructure, issues involving such a team may negatively impact your entire business, similar to a Kitchen Sink implementation.
Lack of direct contact with your company’s customers can lead to a focus on infrastructure improvements that are not necessarily tied to the customer experience.
As the company and system complexity grow, you may be required to split the infrastructure teams, so the cons related to product/application teams apply (see below).

Recommended for: Any company with several development teams, since you are likely to have to staff an infrastructure team (or consider doing so) to define common standards and practices. It is common for large companies to have both an infrastructure DevOps team and an SRE team. The DevOps team will focus on customizing FLOSS and writing their own software (think features) for the application teams, while the SRE team focuses on reliability.

3. Tools

A tools-only SRE team tends to focus on building software to help their developer counterparts measure, maintain, and improve system reliability or any other aspect of SRE work, such as capacity planning. One can argue that tools are part of infrastructure, so the SRE team implementations are the same. It’s true that these two types of teams are fairly similar. In practice, tools teams tend to focus more on support and planning systems that have a reliability-oriented feature set, as opposed to shared back ends on the serving path that are normally associated with infrastructure teams. As a side effect, there’s often more direct feedback to infrastructure SRE teams; a tooling SRE team runs the risk of solving the wrong problems for the business, so it needs to work hard to stay aware of the practical problems of the teams tackling front-line reliability.

The pros and cons of infrastructure and tools teams tend to be similar. Additionally, for tools teams:

Cons
You need to make sure that a tools team doesn’t unintentionally turn into an infrastructure team, and vice versa.
There’s a high risk of an increase in toil and overall workload. This is usually contained by establishing a team charter that’s been approved by your business leaders.

Recommended for: Any company that needs highly specialized reliability-related tooling that’s not currently available as FLOSS or SaaS.

4. Product/application

In this case, the SRE team works to improve the reliability of a critical application or business area, but the reliability of ancillary services such as batch processors is the sole responsibility of a different team, usually developers covering both dev and ops functions.

Pros
Provides a clear focus for the team’s effort and allows a clear link from business priorities to where team effort is spent.

Cons
As the company and system complexity grow, new product/application teams will be required. The product focus of each team can lead to duplication of base infrastructure or divergence of practices between teams, which is inefficient and limits knowledge sharing and mobility.

Recommended for: A second or nth team for companies that started with a Kitchen Sink, infrastructure, or tools team and have a key user-facing application with high reliability needs that justifies the relatively large expense of a dedicated set of SREs.

5. Embedded

These SRE teams have SREs embedded with their developer counterparts, usually one per developer team in scope. Embedded SREs usually share an office with the developers, but the embedded arrangement can be remote. The work relationship between the embedded SRE(s) and developers tends to be project- or time-bounded. During embedded engagements, the SREs are usually very hands-on, performing work like changing code and configuration of the services in scope.

Pros
Enables focused SRE expertise to be directed to specific problems or teams.
Allows side-by-side demonstration of SRE practices, which can be a very effective teaching method.

Cons
It may result in a lack of standardization between teams, and/or divergence in practice.
SREs may not have the chance to spend much time with peers who can mentor them.

Recommended for: This implementation works well either to start an SRE function or to scale another implementation further. When you have a project or team that needs SRE for a period of time, this can be a good model. This type of team can also augment the impact of a tools or infrastructure team by driving adoption.

6. Consulting

This implementation is very similar to the embedded implementation described above. The difference is that consulting SRE teams tend to avoid changing customer code and configuration of the services in scope. Consulting SRE teams may write code and configuration in order to build and maintain tools for themselves or for their developer counterparts. If they do the latter, one could argue that they are acting as a hybrid of consulting and tools implementations.

Pros
It can help further scale an existing SRE organization’s positive impact by being decoupled from directly changing code and configuration (see also influence on reliability standards and practices below).

Cons
Consultants may lack sufficient context to offer useful advice.
A common risk for consulting SRE teams is being perceived as hands-off (i.e., incurring little risk), given that they typically don’t change code and configuration, even though they are capable of having indirect technical impact.

Recommended for: We’d recommend waiting to staff a dedicated SRE team of consultants until your company or its complexity is large and demand has outgrown what can be supported by existing SRE teams of the other implementations. Keep in mind that we recommend staffing one or a couple of part-time consultants before you staff your first SRE team (see above).

Common modifications of SRE team implementations

We’ve seen two common modifiers to most of the implementations described above.

1. Reliability standards and practices

An SRE team may also act as a "reliability standards and practices" group for an entire company. The scope of standards and practices may vary, but usually covers how and when it’s acceptable to change production systems, incident management, error budgets, etc. In other words, while such an SRE team may not interact with every service or developer team directly, it’s often the team that establishes what’s acceptable elsewhere within their area of expertise. We’ve seen adoption of such standards and practices approached in two different ways:

Influence relies mostly on organic adoption and on showing teams how these standards and practices can help them achieve their goals.
Mandates rely on organizational structure, processes, and hierarchy to drive adoption of reliability standards and practices.

The effectiveness of mandates varies based on the organizational culture combined with the SRE team’s experience, seniority, and reputation. A mandated approach may be effective in an organization where strict processes are already expected and common in other areas, but is highly unlikely to succeed in an organization where individuals are given high levels of autonomy. In either case, a brand-new team, even one composed of experienced individuals, is likely to have more difficulty establishing company-wide standards than a team with a history and reputation of achieving high reliability through strong practices.

We’ve also observed software development to be an effective tool for balancing these approaches. In this case, the SRE team develops a zero-configuration approach, where one or more reliability standards and practices can be adopted with zero additional setup cost, if the service or target team happens to be using a predetermined system. Once they see the benefits (typically time savings) that they can achieve by using that system, development teams are influenced to adopt the practices through the provided tooling. As adoption of such a system grows, the approach can then shift to target improvements for SREs and set mandates through reliability-related conformance tests.

2. Tiers of service

Regardless of which SRE team model defines the scope of the team, any SRE team also has a decision to make about the depth of its engagement with the software and services within its area. This is particularly true when there are more development teams, applications, or infrastructure than can be fully supported by the SRE team. A common approach to addressing this challenge is to offer tiers of SRE engagement. Doing so expands the binary choice of "not in scope for us or not yet seen by SRE" versus "fully supported by SRE" by adding at least one more tier in between those two options. A common characteristic of the binary approach is that "fully supported by SRE" generally means that a given service or workflow is jointly owned by SRE and developers, including on-call duties, after some onboarding process. Unfortunately, an SRE team, or any other team, tends to reach a limit in terms of how many services it can fully onboard. As the architectural variety and complexity of services increases, cognitive load and memory recall suffer.

Here’s an example of a tiered approach to SRE:

Tier 0: Sporadic consulting work, no dedicated SRE staffing.
Tier 1: Project work, some dedicated SRE time.
Tier 2: The service is onboarded (or onboarding) for on-call, and receives more dedicated SRE time.

The implementation details of the tiers vary based on the actual SRE implementation itself. For example, consulting and embedded SRE teams aren’t generally expected to onboard services (as in go on call) at Tier 2, but may offer dedicated staffing (as opposed to shared staffing) in Tier 1. We recommend defining tiers of service in a document that’s been approved by SRE and developer leadership. This signoff is related to, but not the same as, documenting your team charter (mentioned above).

There have been instances of a single SRE team adopting characteristics of multiple implementations beyond adopting tiers of service. For instance, a single Kitchen Sink SRE team could also have two SRE consultants playing a dual role.

Common SRE paths

Your SRE organization may follow the implementations in the order above. Another common path is to implement what’s described in "Before you begin," then staff a Kitchen Sink SRE team, but swap the order of infrastructure and product/application when it is time to start a second SRE team. In this scenario, the result is two specialized product/application SRE teams. This makes sense when there is enough product/application breadth but little to no shared infrastructure between the teams, other than hosted solutions such as the ones provided by Google Cloud.

A third common path is to move from "Before you begin" to an infrastructure (or even tools) team, skipping the Kitchen Sink and product/application phases. This approach makes the most sense when the application teams are able and willing to define and maintain SLOs.

We highly recommend evaluating both "Reliability standards and practices" and "Tiers of service" as early in the SRE process as possible, but that may be feasible only after you’ve established your first SRE team.

What should I do next?

If you are just starting your SRE practice, we recommend reading Do you have an SRE team yet? How to start and assess your journey, and then assessing the SRE implementation that best suits your needs based on the information we shared above.

If you have been leading one or more SRE teams, we recommend describing their implementation in generic terms (similar to how we’ve discussed team implementations above), evaluating the pros and cons based on your own experience, and making sure the SRE team’s goals and scope are defined through a team charter document. This exercise may help you avoid overload and premature reorganizations.

If you’re a GCP customer and would like to request CRE involvement, contact your account manager to apply for this program. Of course, SRE is a methodology that will work with a variety of infrastructures, and using Google Cloud is not a prerequisite for pursuing this set of engineering practices. We wish you a happy SRE journey!

Thanks to Adrian Hilton, Betsy Beyer, Christine Cignoli, Jamie Wilkinson, and Shylaja Nukala, among others, for their contributions to this post.
Source: Google Cloud Platform