Mirantis Joins Docker Verified Publisher Program, Makes Available Public and Private Registries

Certified Container Registries ensure secure container images, enhancing developer productivity.

CAMPBELL, Calif., May 27, 2021 — Mirantis, the open cloud company, today announced it has joined the Docker Verified Publisher program, and announced the availability of Docker Official Images for Mirantis Secure Registry. “Secure software supply chains are becoming increasingly important to development teams as … Continued
Source: Mirantis

Mirantis joins the Docker Verified Publisher program: what does that mean for you?

Developers get interrupted more times per day than they can count, and every context switch, whether it’s to a new environment, a new tool, or even a new registry, isn’t just an annoyance; it’s a loss in productivity. At the same time, you need your developers to use the latest, most secure images in your private registry — while still having access to other resources that may be in other registries such as Docker Hub. For this reason, Mirantis is happy to announce that we are joining with Docker to help further the Docker Verified Publisher program.
Source: Mirantis

Never miss a tapeout: Faster chip design with Google Cloud

Cloud offers a proven way to accelerate end-to-end chip design flows. In a previous blog, we demonstrated the inherent elasticity of the cloud, showcasing how front-end simulation workloads can scale with access to more compute resources. Another benefit of the cloud is access to a powerful, modern and global infrastructure. On-prem environments do a fantastic job of meeting sustained demand, but Electronic Design Automation (EDA) tooling upgrades happen much more frequently (every six to nine months) than typical on-prem data center infrastructure upgrades (every three to five years). What this means is that your EDA tool can provide much better performance if given access to the right infrastructure. This is especially useful in certain phases of the design process.

Take, for example, a physical verification workload. Physical verification is typically the last step in the chip design process. In simplified terms, the process consists of verifying design rule checks (or DRCs) against the process design kit (PDK) provided by the foundry. It ensures that the layout produced from the physical synthesis process is ready for handoff to a foundry (in-house or otherwise) for manufacturing. Physical verification workloads tend to require machines with large memories (1TB+) for advanced nodes. Having access to such compute resources enables more physical verification to run in parallel, increasing your confidence in the design that is being taped out (i.e., sent to manufacturing).

At the other end of the spectrum are functional verification workloads. Unlike the physical verification process described above, functional verification is normally performed in the early stages of design and typically requires machines with much less memory. Furthermore, functional verification (dynamic verification in particular) accounts for the most time (translating directly to the availability of compute) in the design cycle. Verifying faster, an ambition for most design teams, is often tied to the availability of right-sized compute resources.

The intermittent and varied infrastructure requirements for verification (both functional and physical) can be a problem for organizations with on-prem data centers. On-prem data centers are optimized for maximizing utilization, which does not directly address access to right-sized compute to deliver the best tool performance. Even if the IT and Computer Aided Design (CAD) departments choose to provision additional suitable hardware, the process of provisioning, acquiring and setting up new hardware on-prem typically takes months for even the most modern organizations. A “hybrid” flow that enables use of on-prem clusters most of the time, but provides seamless access to cloud resources as needed, would be ideal.

Hybrid chip design in action

You can improve a typical verification workflow simply by utilizing a hybrid environment that provides instantaneous access to better compute. To illustrate, we chose a front-end simulation workflow and designed an environment that replicates on-prem and cloud clusters. We also took a few more liberties to simplify the environment (described below). The simplified setup is provided in a GitHub repository for you to try out.

In any hybrid chip design flow, there are a few key considerations:
Connectivity between on-prem infrastructure and the cloud: Establishing connectivity to the cloud is one of the most foundational aspects of the flow. Over the years, this has also become a very well-understood field, and secure, high-availability connectivity is a reality in most setups. In our tutorial, we represent both on-prem and cloud clusters as two different networks in the cloud where all traffic is allowed to pass between these networks. While this is not a real-world network configuration, it is sufficient to demonstrate the basic connectivity model.

Connection to a license server: Most chip design flows utilize tools from EDA vendors. Such tools are typically licensed, and you need a license server with valid licenses to operate the tool. License servers may remain on-prem in the hybrid flow, so long as latency to the license server is acceptable. You can also install license servers in the cloud on a Compute Engine VM (particularly sole-tenant nodes) for lower latency. Check with your EDA vendors to understand whether you can rehost your license services in the cloud. In our tutorial, we use an open source tool (the Icarus Verilog simulator) and therefore do not need a license server.

Identifying data sources and syncing data: There are three important aspects to running EDA jobs: the EDA tools themselves, the infrastructure where the tools run, and the data sources for the tool run. Tools don't change much and can be installed on cloud infrastructure. Data sources, on the other hand, are primarily created on-prem and updated regularly. These could be the SystemVerilog files that describe the design, the testbenches or the layout files. It is important to sync data between on-prem and cloud to maintain parity. Furthermore, in production environments, it's also important to maintain a high-performance syncing mechanism. In our tutorial, we create a file system hierarchy in the cloud that is similar to one you'd find on-prem, and we transfer the latest input files before invoking the tool.

Workload scheduler configuration and job submission transparency: Most environments that leverage batch jobs use job schedulers to access a compute farm. An ideal environment finds the balance between cost and performance, and builds parameters into the system to enable predictive (and prescriptive) wrappers around job schedulers. In our tutorial, we use the open-source SLURM job scheduler and an auto-scaling cluster. For simplicity, the tutorial does not include a job submission agent. Other cloud-native batch processing environments, such as Cloud Run, can also provide further options for workload management.

Our on-prem network is called 'onprem' and the cloud cluster is called 'burst'. Once set up, we ran the OpenPiton regression for single- and two-tile configurations. Regressions run on the 'burst' cluster were on average 30% faster than on 'onprem', delivering faster verification sign-off and physical verification turnaround times. You can find details about the commands we used in the repository.
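To make the job-submission step concrete, here is a minimal sketch of what a SLURM batch script for a single Icarus Verilog simulation job could look like in a setup like this. The partition name, file paths and test names are illustrative assumptions, not the actual scripts from the tutorial repository.

```bash
#!/bin/bash
#SBATCH --job-name=tile1_regression   # illustrative job name
#SBATCH --partition=burst             # hypothetical partition backed by the auto-scaling cloud cluster
#SBATCH --cpus-per-task=4
#SBATCH --mem=8G
#SBATCH --output=logs/%x_%j.log

# Compile a (hypothetical) design and testbench with Icarus Verilog, then run the simulation.
# The RTL and testbench paths stand in for the design data synced from on-prem.
mkdir -p build logs
iverilog -g2012 -o build/tile1_tb.vvp rtl/*.v tb/tile1_tb.sv
vvp build/tile1_tb.vvp
```

Such a script would be submitted with `sbatch run_tile1.sh`; with an auto-scaling cluster behind the 'burst' partition, queued jobs trigger the creation of additional compute nodes as needed.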
Hybrid solutions for faster time to market

Of course, on-prem data centers will continue to play a pivotal role in chip design. However, things have changed. Cloud-based, high-performance compute has proved itself to be a viable and proven technology for extending on-prem data centers during the chip design process. Companies that successfully leverage hybrid chip design flows will be able to better address the fluctuating needs of their engineering teams.

To learn more about silicon design on Google Cloud, read our whitepaper “Using Google Cloud to accelerate your chip design process”.

Related article: “Scale your EDA flows: How Google Cloud enables faster verification” (Google Cloud compute infrastructure can speed up HPC workloads such as EDA).
Source: Google Cloud Platform

Analyze your logs easier with log field analytics

We know that developers and operators troubleshooting applications and systems have a lot of data to sort through while getting to the root cause of issues. Often there are fields, like error response codes, that are critical for finding answers and resolving those issues. Today, we're proud to announce log field analytics in Cloud Logging, a new way to search, filter and understand the structure of your logs so you can find answers faster and easier than ever before.

Log field analytics

Last year we launched Logs Explorer to make it faster to find and analyze your logs, with features like the Log fields pane and the histogram, as well as saved and shared queries. In Logs Explorer, the Log fields pane and histogram both provide useful information to help analyze your logs. With the Log fields pane, each resource type, which maps to a specific Google Cloud service like BigQuery or Google Kubernetes Engine (GKE), includes a set of default fields and values found in the logs loaded in Logs Explorer. The Log fields pane includes the name of the log field, a list of values and an aggregated count of the number of logs that fall in that category. Let's look at these key terms more closely:

A log field: the specific fields in your logs. All logs in Cloud Logging use the LogEntry message format. For example, the logName field is present in all logs in Cloud Logging. When you write logs, you also include textPayload, jsonPayload or protoPayload fields such as jsonPayload.http_req_status.

A log field value: the value of a specific log field. For example, for a log entry with the jsonPayload.http_req_status field, example values could be "200", "404" or "500".

Now, with log field analytics, you can gain insight into the full list of values for selected log fields and a count of how many logs match each value. You can analyze application or system logs using fields in the jsonPayload or protoPayload of your log entries, and then easily refine your query by selecting the field values to drill down into the matching logs.

A view of the Log fields pane in Cloud Logging

Better troubleshooting by analyzing log values

Log field analytics makes it easy to quickly spot unexpected values. By adding a field to the Log fields pane, you can view all values that appear in logs and then select any of the values to filter the logs by those values. In this example ecommerce application, we've added the jsonPayload.http_req_path field, and now it is possible to look at the request paths over time. In the screenshot below, it's easy to see that there are several unexpected values that would indicate a problem, such as "/products/error" and "products/incorrectproduct". Next to those values are the aggregated number of matching log entries. These values can help you narrow your troubleshooting or analysis.

Aggregated Log fields pane showing the number of entries for each http_req_path log value (notice /products/error and /products/incorrectproduct)

Filter using field values

The field value selection in the Log fields pane can be used to refine your query so you can see just the logs that contain the selected value. In our example above using the jsonPayload.http_req_path field, it's possible to select a specific value, "/cart" in this case, and view the logs broken down by severity.

Aggregated number of log entries for a selected http_req_path (notice /cart has been selected)
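Selecting a value in the Log fields pane is the interactive equivalent of adding that field restriction to your query. If you prefer the command line, a roughly similar lookup for the "/cart" example could be sketched with gcloud as follows; the resource type in the filter is an assumption about how this sample ecommerce app might be deployed, not something stated in the post.

```bash
# Read recent log entries whose http_req_path field matches the selected value.
# The resource.type clause is an illustrative assumption (a GKE-hosted sample app).
gcloud logging read \
  'resource.type="k8s_container" AND jsonPayload.http_req_path="/cart"' \
  --limit=20 \
  --format="table(timestamp, severity, jsonPayload.http_req_path)"
```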
Better understand your audit logs

Using log field analytics, you can easily find values in audit logs for Google Cloud services. For example, you may want to identify the accounts that are making data access requests for a particular GKE cluster. If you add the protoPayload.authenticationInfo.principalEmail field as a custom field to the Log fields pane, you get both a list of the accounts making the requests and the number of log entries for each of the account values.

The Log fields pane displaying the number of log entries for each principalEmail value

Get started today

Log field analytics, Log fields, and the Histogram are features that we've recently added to Logs Explorer, and they're ready for you to get started with today. But we're not stopping there! Please join us in our discussion forum for more information about what is coming next and to provide feedback on your experiences using Cloud Logging. If you would like to learn more about Cloud Logging, you can also visit our Qwiklabs quest for a guided walkthrough of the capabilities.

Related article: “Troubleshooting your apps with Cloud Logging just got a lot easier” (learn how to use the Logs Explorer feature in Cloud Logging to troubleshoot your applications).
Source: Google Cloud Platform

The State & Local Government tech tightrope: Balancing COVID-19 impacts and the road ahead

State and local government (SLG) agencies are reeling from a combination of unbudgeted COVID-related expenses and reduced tax revenue caused by unemployment and business closures. Any way you look at it, the situation is challenging. To understand how SLG agencies are coping, Google Cloud collaborated with MeriTalk to survey 200 SLG IT and program managers, uncovering some revealing trends in SLG technology innovation. Unsurprisingly, approximately 84% of SLG organizations report making budgetary tradeoffs to bridge the funding gaps the ongoing pandemic has created. However, researchers discovered a silver lining: the pandemic has also been a catalyst to modernize the legacy infrastructure in states and cities. The majority of survey respondents (88%) reported that their agency made greater modernization progress this past year than in the prior 10 years.

Walking a tightrope between innovation and budget pressure

According to 89% of state and local leaders, now is the time to invest in technology modernization. But 80% are experiencing a funding gap due to unbudgeted expenses related to the pandemic and declining tax revenue, which makes finding that balance between innovation and budget a serious challenge.

Some agencies are achieving the impossible, though. For example, the City of Pittsburgh Department of Innovation and Performance is working with Google Cloud to migrate and modernize its legacy IT infrastructure. By decommissioning their data center and moving to Google Cloud, the city can build new data analytics tools to drive smart city initiatives and create entirely new applications to improve digital service delivery for its residents. As a result, the city will save costs, abandon its brittle legacy IT structure, and create a cloud-based technology platform for the future—becoming the region's leader in cloud-native software development. Google Cloud is enabling the city's IT team by curating and delivering our certification training at no cost. The program includes live training sessions as well as on-demand training.

Bridging funding gaps

In their drive to modernization, many SLG leaders are turning to grants as an important source of funding. Approximately 84% of those surveyed report making tradeoffs to bridge funding gaps, such as moving resources away from operations and maintenance (37%), increasing reliance on pandemic-related funding (31%), and delaying internal modernization efforts to enable remote work for employees (29%). One way that states are dealing with this tension between budget gaps and the need for innovation is to turn to Google Cloud for cost savings and improved capabilities.

For example, Google Cloud is helping the State of West Virginia innovate and enhance IT security despite decreased state funding. The state entered a multi-year agreement to ensure full access to enterprise-level Google Workspace capabilities for 25,000 state employees, keeping the state at the forefront of technology advancements at a projected cost savings of $11.5 million.

Similarly, Google Cloud helped build the Rhode Island Virtual Career Center to help the state's constituents get back to work. Using familiar productivity tools within Google Workspace, employees can access new career opportunities quickly, while employers can reach more candidates.
Skipper, the CareerCompass RI bot, uses data and machine learning to connect Rhode Islanders with potential new career paths and reskilling opportunities.

Enhancing services

Google Cloud is also helping agencies enhance services, including working with the State of Illinois to get unemployment funding to constituents in need. The state is using Contact Center AI to rapidly deploy virtual agents that help more than 1 million out-of-work citizens file unemployment claims faster. Capable of engaging in human-like conversations, these intelligent agents provide constituents with 24/7 access and enable government employees to focus on more complex, mission-critical tasks—such as combating fraud. In summer 2020, the virtual agents handled more than 140,000 phone and web inquiries per day, including 40,000 after-hours calls every night. The state anticipates an estimated annual savings of $100 million from the solution, which was deployed in just two weeks.

Working with Google, Ohio also uncovered $2 billion in fraudulent unemployment claims. We will continue to partner with the state to find fraudulent claims and prioritize the processing of legitimate claims.

Focusing on cybersecurity

Despite expanding security threats topping NASCIO's list of 2021 State CIO priorities, more than one in three IT managers (35%) say their organization reduces security measures to expedite timelines. Partnering with Google Cloud has enabled many agencies to enhance their security measures while modernizing and staying within budget, investing in support for remote work devices, digital services for residents, and cybersecurity. NYC Cyber Command works with city agencies to ensure systems are designed, built, and operated in a highly secure manner. NYC3 followed a cloud-first strategy using Google Cloud Platform. The virtual operations demanded by the pandemic have increased the importance of security and compliance in SLG. Google Cloud is committed to acting as a security transformation partner and being the trusted cloud for public sector agencies.

Finally, to strengthen public and private partnerships, SLG organizations told MeriTalk that they need vendor partners to support modernization efforts for flexibility and collaboration (46%), need innovation-focused leadership groups to help balance technology needs with budget constraints (41%), and expect significant returns on investments in cloud computing (38%) and data management/analytics (33%).

Google Cloud is helping SLG customers across the country invest in innovation to walk the tech tightrope—balancing innovation and budgets—and helping to build a more resilient future. Visit the State and Local Government solutions page to learn more.

Related article: “How Cloud Technology Can Help Support Economic Recovery” (as new COVID-19 relief dollars flow into state and local budgets, agency leaders can embrace cloud technology to help deliver critical se…).
Source: Google Cloud Platform

Integrating Eventarc and Workflows

I previously talked about Eventarc for choreographed (event-driven) Cloud Run services and introduced Workflows for orchestrated services. Eventarc and Workflows are very useful in strictly choreographed or orchestrated architectures. However, you sometimes need a hybrid architecture that combines choreography and orchestration. For example, imagine a use case where a message to a Pub/Sub topic triggers an automated infrastructure workflow, or where a file upload to a Cloud Storage bucket triggers an image processing workflow. In these use cases, the trigger is an event, but the actual work is done as an orchestrated workflow. How do you implement these hybrid architectures in Google Cloud? The answer lies in Eventarc and Workflows integration.

Eventarc triggers

To recap, an Eventarc trigger enables you to read events from Google Cloud sources via Audit Logs and from custom sources via Pub/Sub, and direct them to Cloud Run services. One limitation of Eventarc is that it currently only supports Cloud Run as a target. This will change in the future with more supported event targets. It would be nice to have a future Eventarc trigger that routes events from different sources to Workflows directly. In the absence of such a Workflows-enabled trigger today, you need to do a little bit of work to connect Eventarc to Workflows. Specifically, you need to use a Cloud Run service as a proxy in the middle to execute the workflow. Let's take a look at a couple of concrete examples.

Eventarc Pub/Sub + Workflows integration

In the first example, imagine you want a Pub/Sub message to trigger a workflow.

Define and deploy a workflow

First, define the workflow that you want to execute: a sample workflows.yaml that simply decodes and logs the Pub/Sub message body.

Deploy a Cloud Run service to execute the workflow

Next, you need a Cloud Run service to execute this workflow. Workflows has an execution API and client libraries that you can use in your favorite language. In the example Node app.js, the service simply passes the received HTTP request headers and body to the workflow and executes it. Deploy the Cloud Run service with the Workflows name and region passed as environment variables.

Connect a Pub/Sub topic to the Cloud Run service

With Cloud Run and Workflows connected, the next step is to connect a Pub/Sub topic to the Cloud Run service by creating an Eventarc Pub/Sub trigger. This creates a Pub/Sub topic under the covers that you can look up from the trigger.

Trigger the workflow

Now that all the wiring is done, you can trigger the workflow by simply sending a Pub/Sub message to the topic created by Eventarc:

gcloud pubsub topics publish ${TOPIC_ID} --message="Hello there"

In a few seconds, you should see the message in the Workflows logs, confirming that the Pub/Sub message triggered the execution of the workflow.
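As a rough end-to-end sketch of the steps above, the commands might look like the following. The workflow definition, resource names and region are illustrative assumptions rather than the exact code from the post's repository, and the proxy Cloud Run service is assumed to forward the Pub/Sub push request body to the workflow unchanged.

```bash
# Sketch only: names, region and the workflow body are assumptions for illustration.

# 1. Define and deploy a minimal workflow that decodes and logs the Pub/Sub message body.
cat > workflow.yaml <<'EOF'
main:
  params: [args]
  steps:
    - decode:
        assign:
          # Assumes the Cloud Run proxy forwards the Pub/Sub push body as-is.
          - message: ${text.decode(base64.decode(args.body.message.data))}
    - log:
        call: sys.log
        args:
          text: ${message}
          severity: INFO
    - done:
        return: ${message}
EOF
gcloud workflows deploy pubsub-workflow --source=workflow.yaml --location=us-central1

# 2. Create the Eventarc Pub/Sub trigger pointing at the (already deployed) proxy service.
#    Depending on your project setup, you may also need to pass --service-account.
gcloud eventarc triggers create pubsub-workflow-trigger \
  --location=us-central1 \
  --destination-run-service=workflow-executor \
  --destination-run-region=us-central1 \
  --event-filters="type=google.cloud.pubsub.topic.v1.messagePublished"

# 3. Look up the topic Eventarc created under the covers and publish a test message.
TOPIC_ID=$(gcloud eventarc triggers describe pubsub-workflow-trigger \
  --location=us-central1 --format="value(transport.pubsub.topic)")
gcloud pubsub topics publish ${TOPIC_ID} --message="Hello there"
```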
Eventarc Audit Log-Storage + Workflows integration

In the second example, imagine you want a file creation event in a Cloud Storage bucket to trigger a workflow. The steps are similar to the Pub/Sub example, with a few differences.

Define and deploy a workflow

As an example, you can use a workflow.yaml that logs the bucket and file names.

Deploy a Cloud Run service to execute the workflow

In the Cloud Run service, you read the CloudEvent from Eventarc and extract the bucket and file name in app.js using the CloudEvents SDK and the Google Events library. Executing the workflow is similar to the Pub/Sub example, except you don't pass in the whole HTTP request but rather just the bucket and file name to the workflow.

Connect Cloud Storage events to the Cloud Run service

To connect Cloud Storage events to the Cloud Run service, create an Eventarc Audit Logs trigger with the service and method names for Cloud Storage.

Trigger the workflow

Finally, you can trigger the workflow by creating and uploading a file to the bucket. In a few seconds, you should see the workflow log the bucket and object name.

Conclusion

In this blog post, I showed you how to trigger a workflow with two different event types from Eventarc. It's certainly possible to do the opposite, namely, trigger a Cloud Run service via Eventarc from Workflows with a Pub/Sub message (see connector_publish_pubsub.workflows.yaml) or with a file upload to a bucket. All the code mentioned in this blog post is in eventarc-workflows-integration. Feel free to reach out to me on Twitter @meteatamel with any questions or feedback.

Related article: “Better service orchestration with Workflows” (Workflows is a service to orchestrate not only Google Cloud services such as Cloud Functions and Cloud Run, but also external services).
Source: Google Cloud Platform

Today Is The Day!

It’s here! Ready or not, DockerCon — our free, one-day, all-digital event designed for developers by developers — has arrived. Registration is open until 9 a.m., so if you haven’t already done so, go ahead and sign up!

This is your chance to learn all you can about modern application delivery in a cloud-native world — including the application development technology, skills, tools and people you need to help solve the problems you face day to day.

Final reminders: Don’t forget to catch our line-up of keynote speakers, including Docker CEO Scott Johnston, and to bring your questions to the Live Panels hosted by Docker Captain Bret Fisher, as well as our two developer-focused panels and Hema Ganapathy’s women’s panel. Just put your questions on selected topics in chat, and the team will do their best to answer them.

If you still need guidance on what to focus on, here’s a reminder of what not to miss. And don’t forget to come celebrate our global community in Community Rooms — a first at DockerCon.

That’s it! Now go forth and carpe DockerCon!

DockerCon LIVE 2021

Join us for DockerCon LIVE 2021 on Thursday, May 27. DockerCon LIVE is a free, one-day virtual event that is a unique experience for developers and development teams who are building the next generation of modern applications. If you want to learn how to go from code to cloud fast and how to solve your development challenges, DockerCon LIVE 2021 offers engaging live content to help you build, share and run your applications. Register today at https://dockr.ly/2PSJ7vn
Source: https://blog.docker.com/feed/