Center for Internet Security (CIS) Benchmark now available for Bottlerocket

Bottlerocket, a Linux-based operating system purpose-built for running container workloads, now has a Center for Internet Security (CIS) Benchmark. The CIS Benchmark is a catalog of security-focused configuration settings that helps Bottlerocket customers configure or document any non-compliant configurations simply and effectively. The CIS Benchmark for Bottlerocket includes Level 1 and Level 2 configuration profiles.
Source: aws.amazon.com

How to avoid cloud misconfigurations and move towards continuous compliance

Security is often seen as a zero-sum game between "go fast" and "stay secure." We would like to challenge this school of thought and introduce a framework that changes the paradigm into a "win-win game," so you can do both: go fast and stay secure.

Historically, application security tools have been implemented much like a gate at a parking lot. The parking lot has perimeter-based ingress and egress boom gates. The boom gates let one car through at a time, and vehicles are often backed up at the gates during busy hours. However, there are few controls once you get inside: you can access nearly any space on any level and easily move between levels. When you apply this analogy to application development, AppSec tools are often implemented as "toll gates" within waterfall-native workflows. Developers are required to get in line, submit to a security scan, and wait for the results. When the results are produced, developers spend significant time and energy investigating red flags raised by security. This process is slow and, not surprisingly, unpopular with developers. It is why they often view traditional security programs as inhibitors to innovation.

Guardrails, not gates

We suggest a workflow that is less like a parking lot gate and more like a freeway with common-sense safety measures. Freeways have directive rules for all users: speed limits, a single direction of travel, and mandatory speed reduction zones when exiting all contribute to freeway safety. Some freeways implement preventative measures based on these rules, such as physical walls dividing opposite flows of traffic and protective guardrails to reduce collisions and keep vehicles from veering off the road. While driving on a freeway comes with its own complications, there are no boom-style gates blocking your path. Following the same directive rules, there are detective and responsive controls, such as speed detectors, cameras, and signs reminding drivers which direction they are going and how fast they are traveling. Some freeways have deployed rumble strips to remind a dozing driver to stay in their lane. Applying these lessons from freeways to application development and compliance in the cloud is the perfect opportunity to build software more securely.

Modern application security tools should be fully automated, largely invisible to developers, and minimize friction within the DevOps pipeline. To do this, these security tools should work the way developers want to work. Security controls should integrate into the development lifecycle early and everywhere. These controls should live within the developer's preferred tools and create rapid feedback loops so mistakes can be remediated as soon as possible.

In a typical compliance cycle, a gap opens between the desired state and the actual state, and that gap becomes problematic when audit time comes. It increases the overall cost of the audit and the time spent generating evidence of controls. Instead, we need the actual state to track the desired state continuously. We need continuous preventative controls to stop insecure resources from being introduced, detective controls to find non-compliant resources promptly and constantly, and responsive controls to fix non-compliant resources automatically. In all, we need continuous compliance.

Infrastructure continuous compliance reference architecture

How do we get started with continuous compliance?
Here is the reference architecture that enables you to develop this capability. The architecture is centered on building a closed loop of directive, preventative, detective, and responsive controls. It is also open and extensible: although we reference Google Cloud architectures in this blog, you can apply them to other cloud service platforms or even on-premises environments. The National Institute of Standards and Technology's Open Security Controls Assessment Language (OSCAL) is a helpful resource for expressing your control library in a machine-readable format. OSCAL allows organizations to define a set of security and privacy requirements, represented as controls, which can then be grouped together into a control catalog. Organizations can use these catalogs to establish security and privacy control baselines through a process that may aggregate and tailor controls from multiple source catalogs. Using the OSCAL profile model to express a baseline makes the mappings between the control catalog and the profile explicit and machine-readable.

Directive controls

The starting point of the closed loop is the directive and harmonized controls. Next, you should have control mappings that rationalize the technical controls against your compliance requirements. These requirements can come from various sources, such as the threat landscape of your industry, your internal security policies and standards, your external regulatory compliance obligations, and industry best-practice frameworks. The control mappings form a Technical Control Library: a dataset mapping harmonized controls to requirements written in different compliance frameworks. The control mapping justifies the security controls; it builds the linkage between security and compliance and helps you reduce your compliance audit cost. This dataset should be a living document. An easy first step in building such a library is to begin with the CIS Google Cloud Platform Foundation Benchmark. The benchmark is lightweight, and it constitutes the foundational security that any entity should get right on Google Cloud. In addition, Security Command Center Premium's Security Health Analytics can help you monitor your Google Cloud environment against these benchmarks on a continuous basis across all the projects within your organization.

The Technical Control Library guides the rest of the closed loop. For every directive control, you should have a corresponding preventative control to stop non-compliant resources from being deployed, a detective control that looks across the entire environment for non-compliant resources, and a responsive control that remediates non-compliant resources automatically or kicks off a response workflow with your Security Operations function. Finally, every policy evaluation point should have a feedback loop to the engineers. A prompt and meaningful feedback loop provides a better engineering experience and increases development velocity in the short run; in the long run, these feedback loops breed good habits for writing better, more secure code.

Preventive controls

Almost every action on Google Cloud is an API call, such as creating, configuring, or deleting resources, so preventative controls are all about API call constraints. There are different wrappers for these API calls, including Infrastructure-as-Code (IaC) solutions such as Terraform or Google Cloud Deployment Manager, the Cloud Console interface, the Cloud Shell SDK, and the Python or Go SDKs.
As with any other application code deployment, IaC solutions should run through a Continuous Integration (CI) pipeline. In CI, you can orchestrate IaC constraints, similar to writing unit tests for application code. Since all IaC solutions either come in or can be converted to JSON format, you can use Open Policy Agent (OPA) as the IaC constraint solution. OPA's Rego policy language is declarative and flexible, which allows you to express almost any policy in Rego. For input sources that are not IaC, you can fall back to organization policies and IAM, as these two controls have the closest proximity to Google Cloud. That said, it is considered a best practice to restrict non-IaC inputs for higher environments such as production-like or production, so that you can codify your infrastructure and apply controls and workflows in the source repository.

Detective and responsive controls

Even if you have nailed the preventative controls and the cloud environment is sterile, you still need detective and responsive controls. Here is why. For one, not all controls can be safely implemented as preventative controls in the real world. For instance, we may not want to fail every Google Compute Engine deployment at CI simply because a VM has an external IP address, since external IP addresses may be required for specific software or use cases. Another reason is that we want to produce time-stamped compliance status for audit purposes. Taking CIS compliance as an example, we could enforce all the CIS checks in CI and make IaC the only deployment source for cloud infrastructure, but we would still need to demonstrate the runtime CIS compliance report using Security Command Center. Responsive security controls are not limited to remediation actions; they can also take the form of notifications via email or messaging tools, or integration with ITSM systems. If you use Terraform to deploy the infrastructure and Cloud Functions for auto-remediation, pay attention to the Terraform state: because auto-remediation actions performed by Cloud Functions are not recorded in the Terraform state file, you will need to inform the engineers to update the source Terraform code.

The future

The fact that manual processes around security and compliance don't scale points to automation as the next enabler. The economics of automation require systemic discipline and a holistic, enterprise-wide approach to regulatory compliance and cloud risk management. By defining a data model of the compliance process, the aforementioned OSCAL represents a game-changer for automation in risk management and regulatory compliance. While we realize that adopting "as code" practices is a long-term investment for most of our customers, Risk and Compliance as Code (RCaC) has a number of building blocks to get you started. By adopting the RCaC tenets you shift towards codified policies and infrastructure for a secure cloud transformation. Stay tuned as we introduce exciting new capabilities and features to Google Cloud Risk and Compliance as Code in the months to come.
Source: Google Cloud Platform

Spatial Clustering on BigQuery – Best Practices

Most data analysts are familiar with the concept of organizing data into clusters so that it can be queried faster and at a lower cost. User behavior dictates how the dataset should be clustered: for example, when a user seeks to analyze or visualize geospatial data (a.k.a. location data), it is most efficient to cluster on a geospatial column. This practice is known as spatial clustering, and in this blog we will share best practices for implementing it in BigQuery (hint: let BigQuery do it for you). BigQuery is a petabyte-scale data warehouse with many geospatial capabilities and functions. In the following sections, we describe how BigQuery does spatial clustering out of the box using the S2 indexing system. We also touch on how to use other spatial indexes like H3 and geohash, and compare the cost savings of the different approaches.

How BigQuery does spatial clustering under the hood

Clustering ensures that blocks of data with similar values are colocated in storage, which means that the data is easier to retrieve at query time. It also sorts the blocks of data, so that only the necessary blocks need to be scanned, which reduces cost and processing time. In geospatial terms, this means that when you're querying a particular region, only the rows within or close to that region are scanned, rather than the whole globe.

All of the optimizations described above occur automatically in BigQuery if you cluster your tables on a GEOGRAPHY column. It's as easy as adding CLUSTER BY [GEOGRAPHY column] when creating the table. Only predicate functions (e.g. ST_Intersects, ST_DWithin) leverage clustering, with the exception of ST_DISJOINT. It should also be noted that while BigQuery supports partitioning and clustering on a variety of fields, only clustering is supported on a geospatial field. This is because geometries can be large and could span across partitions, no matter how BigQuery chooses to partition the space. Finally, cluster sizes range from 100MB to 1GB, so clustering a table smaller than 100MB will provide no benefit.

When writing to a table that is clustered by GEOGRAPHY, BigQuery shards the data into spatially compact blocks. For each block, BigQuery computes a bit of metadata called an S2 covering that describes the spatial area of the data contained within. When querying a geography-clustered table using spatial predicates, BigQuery reads the coverings, evaluates whether a particular covering can satisfy the filter, and prunes the blocks that cannot satisfy it. Users are only charged for data from the remaining blocks. Note that S2 coverings can overlap, as it is often impossible to divide data into non-overlapping regions. Fundamentally, BigQuery uses the S2 index to map a geometry into a 64-bit integer, then clusters on that integer using the existing integer-based clustering mechanisms. In the past, customers manually implemented an S2 indexing system in BigQuery, prior to BigQuery's native support of spatial clustering via S2. Switching to BigQuery's native clustering resulted in a large performance increase, not to mention the added simplicity of not having to manage your own S2 indexes.
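To make this concrete, here is a minimal sketch of creating a geography-clustered table and querying it with a spatial predicate. The dataset, table, and column names are hypothetical placeholders, not from the original post:

  -- Cluster the table on its GEOGRAPHY column at creation time.
  create or replace table mydataset.places
  cluster by geom
  as
  select id, st_geogpoint(longitude, latitude) geom
  from mydataset.raw_places;

  -- A predicate function (ST_DWithin) on the clustered column lets BigQuery
  -- prune blocks whose S2 coverings cannot match, so only nearby blocks are billed.
  select id
  from mydataset.places
  where st_dwithin(geom, st_geogpoint(-122.33, 47.61), 5000);

The same pattern applies to the other predicate functions mentioned above, with the exception of ST_DISJOINT, which cannot use clustering.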
Alternative Spatial Indexes

Spatial clustering utilizes a spatial indexing system, or hierarchy, to organize the stored data. The purpose of all spatial indexes is to represent this globe we call Earth in numerical terms, allowing us to define a location as a geometric object like a point, polygon, or line. There are dozens of spatial indexes, and most databases implement them in their own unique way. Although BigQuery natively uses S2 cells for clustering, other indexes can be manually implemented, such as H3, geohash, or quadkeys. The examples below involve the following spatial indexes:

S2: The S2 system represents geospatial data as cells on a three-dimensional sphere. It is used by Google Maps. It uses quadrilaterals, which are more efficient than hexagons, and offers higher precision than H3 or geohashing.

H3: The H3 system represents geospatial data on overlapping hexagonal grids. Hexagons are more visually appealing, and convolutions and smoothing algorithms are more efficient than with S2.

Geohash: Geohash is a public-domain system that represents geospatial data on a curved grid. The length of the geohash id determines the spatial precision. It has fairly poor spatial locality, so clustering does not work as well.

Spatial clustering in BQ — S2 vs. Geohash

In most cases for analysis, BigQuery's built-in spatial clustering will give the best performance with the least effort. But if the data is queried according to other attributes, e.g. by geohash box, custom indexing is necessary. The method of querying the spatial indexes has implications for performance, as illustrated in the example below.

Example

First, create a table with random points in longitude and latitude, and use the BigQuery function st_geohash to generate a geohash id for each point:

  drop table if exists tmp.points;

  create or replace table tmp.tenkrows as
  select x from unnest(generate_array(1, 10000)) x;

  create or replace table tmp.points
  cluster by point
  as
  with pts as (
    select st_geogpoint(rand() * 360 - 180, rand() * 180 - 90) point
    from tmp.tenkrows a, tmp.tenkrows b
  )
  select st_geohash(point) gh, pts.point
  from pts;

The st_geogpoint function transforms the latitude and longitude into a GEOGRAPHY, BigQuery's native geospatial type, which uses S2 cells as the index. Next, select a collection of around 3,000 points. This should cost around 25MB; running the same query on an unclustered table would cost 5.77GB (the full table size):

  select * from tmp.points
  where st_dwithin(st_geogpoint(1, 2), point, 10000);

Now query by geohash id. BigQuery's ability to leverage the spatial clustering depends on whether the BigQuery SAT solver can prove that the cluster of data can be pruned. The two queries below both leverage the geospatial clustering, costing only 340MB. Note that if we had clustered the table by the 'gh' field (i.e. the geohash id), these queries would cost the same as the one above, around 25MB:

  select * from tmp.points
  where starts_with(gh, 'bbb');

  select * from tmp.points
  where gh between 'bbb' and 'bbb~';

The query below is much less efficient, costing 5.77GB, a full scan of the table. BigQuery cannot prove this condition fails based on the min/max values of the cluster, so it must scan the entire table:

  select * from tmp.points
  where left(gh, 3) = 'bbb';

As the examples show, the least costly querying option is to use the indexing consistent with the query method: native S2 indexing when querying by geography, string indexing when querying by geohash. When using geohashing, avoid the left() or right() functions, as they cause BigQuery to scan the entire table.
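As a complement to the example above, here is a minimal sketch of the alternative layout mentioned in passing: clustering on the geohash string itself, so that prefix-range queries prune as aggressively as the geography query. The derived table name is hypothetical:

  -- Hypothetical variant of tmp.points clustered on the geohash string.
  create or replace table tmp.points_by_gh
  cluster by gh
  as
  select gh, point
  from tmp.points;

  -- A range filter on the clustering column can be pruned by BigQuery.
  select * from tmp.points_by_gh
  where gh between 'bbb' and 'bbb~';

As with the geography-clustered table, the benefit only materializes when the filter is something BigQuery can evaluate against cluster boundaries; wrapping gh in functions such as left() defeats the pruning.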
Spatial clustering in BQ with H3

You may also find yourself in a situation where you need to use H3 as a spatial index in BigQuery. It is still possible to leverage the performance benefits of clustering, but as with geohashing, it is important to avoid certain patterns. Suppose you have a huge table of geography points indexed by H3 cell ID at level 15, which you've clustered by h3_index (note: the H3 functions below are supported through the Carto Spatial Extension for BigQuery). You want to find all the points that belong to lower-resolution cells, e.g. at level 7. You might write a query like this:

  select * from points
  where H3_ToParent(h3_index, 7) = @parent_cell_id;

Here H3_ToParent is a custom function that computes the parent cell ID from a higher-resolution index. Since you've clustered by the H3 index, you might expect a lower cost; however, this query will scan the entire table. This happens because H3_ToParent involves bit operations and is too complex for the BigQuery query analyzer to understand how the query's result relates to cluster boundaries. What you should do instead is give BigQuery the range of the H3 cell IDs at the level at which the geographies are indexed, as in the following example:

  select * from points
  where h3_index between H3_CellRangeStart(@parent_cell_id, 15)
                     and H3_CellRangeEnd(@parent_cell_id, 15);

Here H3_CellRangeStart and H3_CellRangeEnd are custom functions that map the lower-resolution parent ID to the appropriate start and end IDs of the higher-resolution cells. Now BigQuery can figure out the relevant clusters, reducing the cost and improving the performance of the query.

What's Next?

Spatial clustering is a complex topic that requires specialized knowledge to implement. Using BigQuery's native spatial clustering takes most of the work out of your hands. With your geospatial data in BigQuery, you can do amazing spatial analyses, like querying the stars, even on large datasets. You can also use BigQuery as a backend for a geospatial application, such as one that allows customers to explore the climate risk of their assets. Using spatial clustering, and querying your clusters correctly, will ensure you get the best performance at the lowest cost.

Acknowledgments: Thanks to Eric Engle and Travis Webb for their help with this post.

Related article: Querying the Stars with BigQuery GIS. Dr. Ross Thomson explains how you can use BigQuery GIS to analyze astronomy datasets, in a similar manner to analyzing ground-based map data.
Source: Google Cloud Platform

Google Kubernetes Engine: 7 years and 7 amazing benefits

Today, as we celebrate seven years of general availability of the most automated and scalable managed Kubernetes, Google Kubernetes Engine (GKE), we present seven of the common ways that GKE helps customers do amazing things.

Accelerates productivity of developers

Developer time is at a premium. GKE provides a rich set of integrated tools to help you ship faster and more often. The practice of continuous integration (CI) allows developers to frequently integrate all their code changes back into a main branch, exposing failures faster by revealing issues as early as possible in the process. A CI pipeline typically produces an artifact that you can deploy in later stages of the deployment process with continuous delivery (CD). CD lets you release code at any time. The ecosystem of developer tools for GKE spans CI and CD:

- Developers write, deploy, and debug code faster with Cloud Code and Cloud Shell.
- Continuously integrate and deliver updates with Cloud Build.
- Continuous delivery to GKE is made easier, faster, and more reliable with Cloud Deploy.
- Debug and troubleshoot with Google Cloud's operations suite.
- You can use your favorite partner solutions out of the box.

Moreover, GKE Autopilot clusters accelerate app deployment by reducing configuration time and simplify the ongoing management of dev/test clusters. You can read more on how to get started with GKE Autopilot.

"Google Kubernetes Engine is easy to configure, and scales really well. That means the developers don't need to think about managing it in production, they can simply set the parameters and be confident it will work." —Vincent Oliveira, CTO, Lucky Cart

Bolsters security in the software supply chain

Security remains top of mind for all organizations. Kubernetes clusters created in Autopilot mode implement many GKE hardening features by default. Furthermore, GKE Autopilot improves cluster security by restricting access to the Kubernetes API, preventing node mutation, and enforcing a robust security posture, and it lets you implement additional guidance to harden the security of your clusters. Binary Authorization is a deploy-time security control that ensures only trusted container images are deployed on GKE. With Binary Authorization, you can gain tighter control over your container environment by ensuring only verified images are integrated into the build-and-release process. You can read more on how to build security into your software supply chain.

"We needed to be HIPAA compliant, which was going to be painful on AWS, and we wanted to get away from managing and operating our own Kubernetes clusters," recalled Astorino. "We had heard good things about GKE (Google Kubernetes Engine). And particularly valuable for us, many technical requirements you need for HIPAA compliance are configured by default on Google Cloud." —Troy Astorino, Co-Founder & CTO of PicnicHealth

Creates new opportunities with a platform approach

Modern application platforms spur creativity and drive quick responses to customer demands. GKE customers use Kubernetes to build a modern, enterprise-grade application platform for their organization. With the ability to achieve improved speed and performance for a variety of workloads through Tau VM, GPU, TPU, and Local SSD support, GKE helps them support a wide variety of containerized applications, including stateful and stateless, AI and ML, Linux and Windows.
Only GKE can run 15,000-node clusters, outscaling other cloud providers by up to 10X and letting you run applications effectively and reliably at scale.

"Google Cloud-managed services are playing a major role in enabling Noon.com customers to get their shopping done whenever they need it, without experiencing any delays or glitches, and without us having to lose sleep at night to ensure our platform is functioning as it should." —Alex Nadalin, SVP of Engineering, Noon.com

Delivers always-on experiences for customers

Consumers today demand 24x7 digital experiences. GKE provides granular controls to deliver always-on, highly available, and reliable apps and services. With node auto-upgrade, we automatically upgrade and patch your cluster nodes, while the control plane is always patched and upgraded by Google. You can also subscribe to a release channel (rapid, regular, or stable) based on your needs and constraints. For enterprises, release channels provide the level of predictability needed for advanced planning, and the flexibility to orchestrate custom workflows automatically when a change is scheduled. You can learn more about release channels and maintenance windows in the documentation.

"To bring E.ON Optimum to market, we needed to transform in-house software into a highly scalable, reliable cloud-based solution. We were specifically looking for a cloud partner capable of running Kubernetes pods at scale and 100% of the time, and that led us to Google Cloud." —Dennis Nobel, Digital Delivery Manager, E.ON

Enables cost optimization and savings for organizations

In the current macroeconomic environment, you often need to do more with fewer resources. GKE Autopilot dynamically adjusts compute resources, so there's no need to figure out what size and shape of nodes you should configure for your workloads. With GKE Autopilot, you pay only for the pods you use, and you're billed per second for vCPU, memory, and disk resource requests. Moreover, GKE cost optimization insights help you discover optimization opportunities at scale, across your GKE clusters and workloads, automatically and with minimal friction.

"Since migrating to GKE, we've halved the costs of running our nodes, reduced our maintenance work, and gained the ability to scale up and down effortlessly and automatically according to demand. All our customer production loads and development environment run on GKE, and we've never faced a critical incident since." —Helge Rennicke, Director of Software Development, Market Logic Software

Fuels growth with a focus on business innovation

IT divisions are moving from cost centers to value centers by using managed cloud services. You can benefit from no-stress management and focus on business innovation using GKE Autopilot, which provides hands-off cluster management backed by an SLA and eliminates most day-2 cluster operations. GKE delivers automation across most dimensions so you can operate your applications efficiently and easily. With fully managed GKE Autopilot, combined with multi-dimensional auto-scaling capabilities, you can get started with a production-ready, secured cluster in minutes and still have complete control over configuration and maintenance.

"The automated features of Google Kubernetes Engine enable us to manage app traffic and develop games at an amazingly high level of efficiency.
Currently, we only need two engineers to monitor traffic volume and all the environments of our three games, which frees up more workforce for development and innovation work." —Aries Wang, Research and Development Deputy Manager, Yile Technology

Gives freedom from proprietary tools for IT

Multi-cloud is a reality. Proprietary tools often require specialized skills and lock you into huge licensing fees. You can minimize vendor lock-in, and be well placed to maximize the benefits of a multi-cloud strategy, with conformant Kubernetes supported across multiple environments, including all major cloud providers. Kubernetes' workload portability gives you the flexibility to move your apps around without constraints.

"MeilleursAgents is a product-oriented company and our goal is to deliver new services as fast as we can, in order to get market feedback, and improve them once they're in production. Google Kubernetes Engine helps us do that by delivering flexibility and easy scaling, which is why we decided to make the switch." —Thibault Lanternier, Head of Web Engineering, MeilleursAgents

Join us at Building for the future with Kubernetes to kickstart or accelerate your Kubernetes journey. You'll get access to technical demos that go deep into our Kubernetes services, developer tools, operations suite, and security solutions. We look forward to partnering with you on your Kubernetes journey!

Related article: Why automation and scalability are the most important traits of your Kubernetes platform. The recipe for long-term success with Kubernetes: automation that matters and scalability that saves money.
Source: Google Cloud Platform

Community All-Hands Q3: What We’ll Cover

Join us for our next Community All-Hands event on September 1, 2022 at 8am PST/5pm CET. We have an exciting program in store this quarter for you, our Docker community. Make sure to grab a seat, settle in, and join us for this event by registering now!

What we’ll cover

Within the first hour, you can look forward to a recap of recent Docker updates (and a sneak peek at what to expect in the coming months). Then, we'll present some demos and updates about you: the Docker community. 

We’ll also give prizes out to some lucky community members. Stay tuned for more!

Here’s our Main Stage line-up:

- A message from our CEO, Scott Johnston
- A recap from our CPO, Jake Levrine, on Docker's efforts to boost developer innovation and productivity in Docker Desktop and Docker Engine
- An update from Jim Clark on viewing images through layered SBOMs
- A word from Djordje Lukic on multi-platform image support in Docker Desktop

Featuring unique community tracks

At this virtual event, we want to show you a world's worth of knowledge that the Docker community has to offer. To do this, we'll be showcasing thought leadership content from community members across the globe with eight different tracks:

Best Practices. If you're looking to optimize your image build time, mitigate runtime errors, and learn how to debug your application, join us in the best practices track. We'll be looking at some real-world example applications in .NET and Golang, and you'll learn how to interact with the community to solve problems.

Demos. If you learn best by example, this is the track for you. Join us in the demos track to learn about building an integration test suite for legacy code, creating a CV in LaTeX, setting up Kubernetes on Docker Desktop, and more.

Security. No matter how great your app is, if it’s not secure, it’s not going to make it far. Learn about pentesting, compliance, and robustness!

Extensions. Discover helpful, community Docker Extensions. By attending this track, you’ll even learn how to create your own extensions and share them with the world!

Cutting Edge. Deploy your next AI application or blockchain extension. You'll also learn about the latest advancements in the tech space.

Open Source. Take your projects to the next level with the Docker-Sponsored Open Source program. We’ll also feature several panels hosted by the open source community.

International Waters. Learn about the work being done in Docker’s international community and how to get involved. We’ll have sessions in French, Spanish, and Portuguese.

Unconference. You’re the most important voice in our Community All-Hands. Join the conversation by engaging in the unconference track!

Reserve your seat now

Our Community All-Hands is specially designed for our Docker community, so it wouldn’t be the same without you! Sign up today for this much-anticipated event, packed with innovation and collaboration. We’ll save you a seat. 
Source: https://blog.docker.com/feed/