Learn what’s new in Azure Firewall

This post was co-authored by Suren Jamiyanaa, Program Manager 2, Azure Networking.

We continue to be amazed by the adoption, interest, positive feedback, and the breadth of use cases customers are finding for our service. Today, we are happy to share several key Azure Firewall capabilities as well as an update on recent important releases into general availability and preview.

Intrusion Detection and Prevention System (IDPS) signatures lookup now generally available.
TLS inspection (TLSi) Certification Auto-Generation now generally available.
Web categories lookup now generally available.
Structured Firewall Logs now in preview.
IDPS Private IP ranges now in preview.

Azure Firewall is a cloud-native firewall-as-a-service offering that enables customers to centrally govern and log all their traffic flows using a DevOps approach. The service supports both application and network-level filtering rules and is integrated with the Microsoft Threat Intelligence feed for filtering known malicious IP addresses and domains. Azure Firewall is highly available with built-in auto-scaling.

IDPS signatures lookup

Azure Firewall Premium IDPS signature lookup is a great way to better understand the IDPS signatures applied on your network and to fine-tune them according to your specific needs. IDPS signatures lookup allows you to:

Customize one or more signatures and change their mode to Disabled, Alert, or Alert and Deny. For example, if you receive a false positive where a legitimate request is blocked by Azure Firewall due to a faulty signature, you can take the signature ID from the network rule logs and set its IDPS mode to Disabled. This causes the "faulty" signature to be ignored and resolves the false positive issue.
You can apply the same fine-tuning procedure for signatures that are creating too many low-priority alerts, and therefore interfering with visibility for high-priority alerts.
Get a holistic view of all 58,000 signatures.
Use smart search to search the entire signature database by any type of attribute. For example, you can discover which signatures cover a specific CVE by typing its CVE-ID in the search bar.

TLSi Certification Auto-Generation

For non-production deployments, you can use the Azure Firewall Premium TLS inspection Certification Auto-Generation mechanism, which automatically creates the following three resources for you:

Managed Identity
Key Vault
Self-signed Root CA certificate

Just choose the new managed identity, and Azure Firewall ties the three resources together in your Premium policy and sets up TLS inspection.

Web categories lookup

Web Categories is a filtering feature that allows administrators to allow or deny web traffic based on categories, such as gambling, social media, and more. We added tools that help manage these web categories: Category Check and Mis-Categorization Request.

Using Category Check, an admin can determine which category a given FQDN or URL falls under. If an FQDN or URL fits better under a different category, an administrator can also report the incorrect classification; the request will then be evaluated and, if approved, the categorization updated.

Structured Firewall Logs

Today, the following diagnostic log categories are available for Azure Firewall:

Application rule log
Network rule log
DNS proxy log

These log categories use Azure diagnostics mode. In this mode, all data from any diagnostic setting is collected in the AzureDiagnostics table.

With this new feature, customers can choose to use resource-specific tables instead of the existing AzureDiagnostics table. If both sets of logs are required, at least two diagnostic settings need to be created per firewall.

In Resource Specific mode, individual tables in the selected workspace are created for each category selected in the diagnostic setting.

This method is recommended because it makes it much easier to work with the data in log queries, provides better discoverability of schemas and their structure, improves performance for both ingestion latency and query times, and allows you to grant Azure role-based access control (RBAC) rights on a specific table.

New resource-specific tables are now available in the diagnostic settings, allowing you to use the following newly added categories:

Network rule log: contains all network rule log data. Each match between the data plane and a network rule creates a log entry with the data plane packet and the matched rule's attributes.
NAT rule log: contains all destination network address translation (DNAT) event log data. Each match between the data plane and a DNAT rule creates a log entry with the data plane packet and the matched rule's attributes.
Application rule log: contains all application rule log data. Each match between the data plane and an application rule creates a log entry with the data plane packet and the matched rule's attributes.
Threat Intelligence log: contains all Threat Intelligence events.
IDPS log: contains all data plane packets that were matched with one or more IDPS signatures.
DNS proxy log: contains all DNS Proxy events log data.
Internal FQDN resolve failure log: contains all internal Firewall FQDN resolution requests that resulted in failure.
Application rule aggregation log: contains aggregated Application rule log data for Policy Analytics.
Network rule aggregation log: contains aggregated Network rule log data for Policy Analytics.
NAT rule aggregation log: contains aggregated NAT rule log data for Policy Analytics.

Additional Kusto Query Language (KQL) log queries were added to query structured firewall logs.

IDPS Private IP ranges

In Azure Firewall Premium IDPS, private IP address ranges are used to identify whether traffic is inbound or outbound. By default, only ranges defined by the Internet Assigned Numbers Authority (IANA) in RFC 1918 are considered private IP addresses. To modify your private IP addresses, you can now easily edit, remove, or add ranges as needed.

Learn more

Azure Firewall Documentation.
Azure Firewall Preview Features.
Azure Firewall Premium.
Azure Firewall Web Categories.

Source: Azure

Achieve seamless observability with Dynatrace for Azure

This blog post has been co-authored by Manju Ramanathpura, Principal Group PM, Azure DevEx & Partner Ecosystem.

As adoption of public cloud grows by leaps and bounds, organizations want to leverage software and services that they love and are familiar with as a part of their overall cloud solution. Microsoft Azure enables customers to host their apps on the globally trusted cloud platform and use the services of their choice by closely partnering with popular SaaS offerings. Dynatrace is one such partner that provides deep cloud observability, advanced AIOps, and continuous runtime application security capabilities on Azure.

“Deep and broad observability, runtime application security, and advanced AI and automation are key for any successful cloud transformation. Through the Dynatrace platform’s integration with Microsoft Azure, customers will now have immediate access to these capabilities. This integration will deliver answers and intelligent automation from the massive amount of data generated by modern hybrid-cloud environments, enabling flawless and secure digital interactions.”—Steve Tack, SVP Product Management, Dynatrace.

Modern cloud-native environments are complex and dynamic. When failures occur, development teams need deep visibility into the systems to get to the root cause of the issues and understand the impact of potential fixes. Good observability solutions such as Dynatrace for Azure not only enable you to understand what is broken, but also provide the ability to proactively identify and resolve issues before they impact your customers. Currently, if you want to leverage Dynatrace for observability, you go through a complex process of setting up credentials, Event Hubs, and writing custom code to send monitoring data from Azure to Dynatrace. This is often time-consuming and hard to troubleshoot when issues occur. To alleviate this customer pain, we worked with Dynatrace to create a seamlessly integrated solution on Azure that’s now available on the Azure Marketplace.

Dynatrace’s integration provides a unified experience with which you can:

Create a new Dynatrace environment in the cloud with just a few clicks. Dynatrace SaaS on Azure is a fully managed offering that takes away the need to set up and operate infrastructure.
Seamlessly ship logs and metrics to Dynatrace. Using just a few clicks, configure auto-discovery of resources to monitor and set up automatic log forwarding. Configuring Event Hubs and writing custom code to get monitoring data is now a thing of the past.
Easily install Dynatrace OneAgent on virtual machines (VMs) and App Services through a single click. OneAgent continuously monitors the health of host and processes and automatically instruments any new processes.
Use single sign-on to access the Dynatrace SaaS portal—no need to remember multiple credentials and log in separately.
Get consolidated billing for the Dynatrace service through Azure Marketplace.

“Microsoft is committed to providing a complete and seamless experience for our customers on Azure. Enabling developers to use their most loved tools and services makes them more productive and efficient. Azure native integration of Dynatrace makes it effortless for developers and IT administrators to monitor their cloud applications with the best of Azure and Dynatrace together.”—Balan Subramanian, Partner Director of Product Management, Azure Developer Experiences.

Get started with Dynatrace for Azure

Let’s now look at how you can easily set up and configure Dynatrace for Azure:

Acquire the Dynatrace for Azure offering: You can find and acquire the solution from the Azure Marketplace.


Create a Dynatrace resource in Azure portal: Once the Dynatrace solution is acquired, you can seamlessly create a Dynatrace resource using the Azure portal. Using the Dynatrace resource, you can configure and manage your Dynatrace environments within the Azure portal.


Configure log forwarding: Configure which Azure resources send logs to Dynatrace, using the familiar concept of resource tags.


Install Dynatrace OneAgent: With a single click, you can install Dynatrace OneAgent on multiple VMs and App Services.


Access Dynatrace native service for Azure with single sign-on: Use the single sign-on experience to effortlessly access dashboards, Smartscape® topology visualization, log content, and more on the Dynatrace portal.

Next steps

Subscribe to the preview of Dynatrace’s integration with Azure available in the Azure Marketplace.
Learn more about the Dynatrace integration.

Source: Azure

Simplify Your Deployments Using the Rust Official Image

We previously tackled how to deploy your web applications quicker with the Caddy 2 Official Image. This time, we’re turning our attention to Rust applications.
Mozilla introduced developers to the Rust programming language in 2010. Since then, developers have relied on it while building CLI programs, networking services, embedded applications, and WebAssembly apps.
Rust is also the most-loved programming language according to Stack Overflow’s 2021 Developer Survey, and Mac developers’ most-sought language per Git Tower’s 2022 survey. It has over 85,000 dedicated libraries, while our Rust Official Image has over 10 million downloads. Rust has a passionate user base. Its popularity has only grown following 2018’s productivity updates and 2021’s language-consistency enhancements.
That said, Rust application deployments aren’t always straightforward. Why’s this the case?
The Deployment Challenge
Developers have numerous avenues for deploying their Rust applications. While flexibility is good, the variety of options can be overwhelming. Accordingly, your deployment strategies will change depending on application types and their users.
Do you need a fully-managed IaaS solution, a PaaS solution, or something simpler? How important is scalability? Is this application a personal project or part of an enterprise deployment? The answers to these questions will impact your deployment approach, especially if you'll be supporting that application for a long time.
Let’s consider something like Heroku. The platform provides official support for major languages like PHP, Python, Go, Node.js, Java, Ruby, and others. However, only these languages receive what Heroku calls “first-class” support.
In Rust’s case, this means Heroku’s team doesn’t actively maintain any Rust frameworks, language features, or updated versioning. You’re responsible for tackling these tasks, and you must comb through a variety of unofficial, community-made Buildpacks to extend Heroku effectively. Interestingly, some packs do include notes on testing with Docker, but why not just cut out the middleman?
There are also options like Render and Vercel, which feature different levels of production readiness.
That’s why the Rust Official Image is so useful. It accelerates deployment by simplifying the process. Are you tackling your next Rust project? We’ll discuss common use cases, streamline deployment via the Rust Official Image, and share some important tips.
Why Rust?
Rust’s maintainers and community have focused on systems programming, networking, command-line applications, and WebAssembly (aka “Wasm”). Many often present Rust as an alternative to C++, since they share multiple use cases. Accordingly, Rust also boasts memory safety, strong type safety, and modularity.
You can also harness Rust’s application binary interface (ABI) compatibility with C, which helps Rust apps access lower-level binary data within C libraries. Additionally, helpers like wasm-pack, wasm-bindgen, Neon, Helix, rust-cpython, and cbindgen let you extend codebases written in other languages with Rust components. This helps all portions of your application work seamlessly together.
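As a minimal sketch of that C-compatible ABI (the `add` function name and signature are illustrative, not from the article), a Rust function can be exported under an unmangled symbol so a C program linked against the compiled library could call it directly:

```rust
// Export a function with a C-compatible ABI and an unmangled symbol name.
// `add` is a hypothetical example, not a function from the article.
#[no_mangle]
pub extern "C" fn add(a: i32, b: i32) -> i32 {
    a + b
}

fn main() {
    // Within Rust we can call it like any other function; a C caller
    // would reach it through the exported `add` symbol.
    println!("{}", add(2, 3)); // prints 5
}
```

Compiled as a `cdylib` or `staticlib`, a header generated by a tool like cbindgen would describe the same signature to C consumers.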
Finally, you can easily cross compile to static x86 binaries (or non-x86 binaries like Arm), in 32-bit or 64-bit. Rust is platform-agnostic. Its built-in mechanisms even support long-running services with greater reliability.
That said, Rust isn’t normally considered an “entry-level” language. Experienced developers (especially those versed in C or C++) tend to pick up Rust a little easier. Luckily, alleviating common build complexities can boost its accessibility. This is where container images shine. We’ll now briefly cover the basics behind leveraging the Rust image.
To learn more about Rust’s advantages, read this informative breakdown.
Prerequisites and Technical Fundamentals
The Rust Official Image helps accelerate your deployment, and groups all dependencies into one package.
 
Here’s what you’ll need to get started:

Your Rust application code
The latest version of Docker Desktop
Your IDE of choice (VSCode is recommended, but not required)

 
In this guide, we’ll assume that you’re bringing your finalized application code along. Ensure that this resides in the proper location, so that it’s discoverable and usable within your upcoming build.
Your Rust build may also leverage pre-existing Rust crates (learn more about packages and crates here). A package contains one or more crates (compiled libraries or executables) that provide core functionality for your application. You can also leverage library crates for applications with shared dependencies.
Some crates contain important executables — typically in the form of standalone tools. Then we have configurations to consider. Like .yaml files, Cargo.toml files — also called package manifests — form an app’s foundation. Each manifest contains sections. For example, here’s how the [package] section looks:

[package]
name = "hello_world" # the name of the package
version = "0.1.0" # the current version, obeying semver
authors = ["Alice <a@example.com>", "Bob <b@example.com>"]

 
You can define many configurations within your manifests. Rust generates these sectioned files upon package creation via the cargo new command:

$ cargo new my-project
Created binary (application) `my-project` package
$ ls my-project
Cargo.toml
src
$ ls my-project/src
main.rs

 
Rust automatically uses src/main.rs as the binary crate root, whereas src/lib.rs is the root of a library crate. The above example from Rust’s official documentation incorporates a simple binary crate within the build.
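As a small illustration of that layout, here is a sketch of what a binary crate's src/main.rs might contain (the `greet` module and its `message` function are illustrative names, not part of what `cargo new` generates):

```rust
// src/main.rs — the binary crate root. Code here (or in modules it
// declares) compiles into the `my-project` executable.
mod greet {
    // A small module standing in for application logic.
    pub fn message(name: &str) -> String {
        format!("Hello, {}!", name)
    }
}

fn main() {
    println!("{}", greet::message("world")); // prints "Hello, world!"
}
```

If the same logic lived in src/lib.rs instead, the binary would call it through the package's library crate rather than a local module.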
Before moving ahead, we recommend installing Docker Desktop, because it makes managing containers and images much easier. You can view, run, stop, and configure your containers via the Dashboard instead of the CLI. However, the CLI remains available within VSCode — and you can `SSH` directly into your containers via Docker Desktop’s Container interface.
Now, let’s inspect our image and discuss some best practices. To make things a little easier, launch Docker Desktop before proceeding.
Using the Rust Official Image
The simplest way to use the Rust image is by running it as a Rust container. First, enter the `docker pull rust` command to automatically grab the `latest` image version. This takes about 20 seconds within VSCode.
You can confirm that Docker Desktop pulled your image successfully by accessing the Images tab in the sidebar, then locating your rust image in the list.
To run this image as a container, hover over it and click the blue “Run” button that appears. Confirm by clicking “Run” again within the popup modal. You can expand the Optional Settings form to customize your container, though that’s not currently necessary.
Confirm that your rust container is running by visiting the Containers tab and finding it within the list. Since we bypassed the Optional Settings, Docker Desktop will give your container a random name. Note the blue labels beside each container name. Docker Desktop displays the base image’s name:tag info for each container.
Note: Alternatively, you can pull a specific version of Rust with the tag :<version>. This may be preferable in production, where predictability and pre-deployment testing is critical. While :latest images can bring new fixes and features, they may also introduce unknown vulnerabilities into your application.
 
You can stop your container by hovering over it and clicking the square “Stop” button. This process takes 10 seconds to complete. Once stopped, Docker Desktop labels your container as exited. This step is important prior to making any configuration changes.
Similarly, you can (and should) remove your container before moving onward.
Customizing Your Dockerfiles
The above example showcased how images and containers live within Desktop. However, you might’ve noticed that we were working with “bare” containers, since we didn’t use any Rust application code.
Your project code brings your application to life, and you’ll need to add it into your image build. The Dockerfile accomplishes this. It helps you build layered images with sequential instructions.
Here’s how your basic Rust Dockerfile might look:

FROM rust:1.61.0

WORKDIR /usr/src/myapp
COPY . .

RUN cargo install --path .

CMD ["myapp"]

 
You’ll see that Docker can access your project code. Additionally, the cargo install RUN command builds and installs your application along with its dependencies.
To build and run your image with a complete set of Rust tooling packaged in, enter the following commands:

$ docker build -t my-rust-app .
$ docker run -it --rm --name my-running-app my-rust-app

 
This image is 1.8GB — which is pretty large. You may instead need the slimmest possible image builds. Let’s cover some tips and best practices.
Image Tips and Best Practices
Save Space by Compiling Without Tooling
While Rust tooling is useful, it’s not always essential for applications. There are scenarios where just the compiled application is needed. Here’s how your augmented Dockerfile could account for this:

FROM rust:1.61.0 as builder
WORKDIR /usr/src/myapp
COPY . .
RUN cargo install --path .

FROM debian:buster-slim
RUN apt-get update && apt-get install -y extra-runtime-dependencies && rm -rf /var/lib/apt/lists/*
COPY --from=builder /usr/local/cargo/bin/myapp /usr/local/bin/myapp
CMD ["myapp"]

 
Per the Rust Project’s developers, this image is merely 200MB. That’s tiny compared to our previous image. This saves disk space, reduces application bloat, and makes it easier to track layer-by-layer changes. That outcome appears paradoxical, since your build is multi-stage (adding layers) yet shrinks significantly.
Additionally, naming your build stages and using those names in each COPY --from instruction ensures that your copies won’t break if you reorder your instructions.
This solution lets you copy key artifacts between stages and abandon unwanted artifacts. You’re not carrying unwanted components forward into your final image. As a bonus, you’re also building your Rust application from a single Dockerfile.
 
Note: See the && operator used above? This compresses multiple RUN commands together, yet we don’t necessarily consider this a best practice. These unified commands can be tricky to maintain over time, and it’s easy to forget to add your line continuation syntax (\) as those strings grow.
 
Finally, Rust binaries can be statically compiled. You can create your Dockerfile with the FROM scratch instruction and append only the binary to the image. Docker treats scratch as a no-op and doesn’t create an extra layer. Consequently, scratch can help you create minuscule builds measuring just a few MB.
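A minimal sketch of that approach might look like the following, assuming your application is built as a fully static binary against the musl target (the myapp name, version tag, and paths are illustrative, not from the article):

```dockerfile
# Build a fully static binary with the musl target, then copy only the
# binary into an empty (scratch) image. Names and paths are illustrative.
FROM rust:1.61.0 AS builder
WORKDIR /usr/src/myapp
COPY . .
RUN rustup target add x86_64-unknown-linux-musl && \
    cargo build --release --target x86_64-unknown-linux-musl

FROM scratch
COPY --from=builder /usr/src/myapp/target/x86_64-unknown-linux-musl/release/myapp /myapp
ENTRYPOINT ["/myapp"]
```

Because scratch contains no shell or libc, this only works when the binary has no runtime dependencies outside itself.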
To better understand each Dockerfile instruction, check out our reference documentation.
Use Tags to Your Advantage
Need to save even more space? Using the Rust alpine image can save another 60MB. You’d instead specify an instruction like FROM rust:1.61.0-alpine as builder. This isn’t caveat-free, however. Alpine images leverage musl libc instead of glibc and friends, so your software may encounter issues if important dependencies are excluded. You can compare each library here to be safe.
 
There are some other ways to build smaller Rust images:

The rust:<version>-slim tag pulls an image that contains just the minimum packages needed to run Rust. This saves plenty of space, but fails in environments that require deployments beyond just your rust image
The rust:<version>-slim-bullseye tag pulls an image built upon the Debian 11 branch, which is the current stable distro
The rust:<version>-slim-buster tag pulls an image built upon the Debian 10 branch, which is even slightly smaller than its bullseye successor

 
Docker Hub lists numerous image tags for the Rust Official Image. Each version’s size is listed according to each OS architecture.
Creating the slimmest possible application is an admirable goal. However, this process must have a goal or benefit in mind. For example, reducing your image size (by stripping dependencies) is okay when your application doesn’t need them. You should never sacrifice core functionality to save a few megabytes.
Lastly, you can lean on the `cargo-chef` subcommand to dramatically speed up your Rust Docker builds. This solution fully leverages Docker’s native caching, and offers promising performance gains. Learn more about it here.
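Based on cargo-chef's documented usage, a sketch of such a Dockerfile might look like this (the version tag and paths are illustrative); the key idea is that dependency compilation lands in its own cached layer, so it only reruns when Cargo.toml or Cargo.lock change:

```dockerfile
# Sketch of a cargo-chef build, per cargo-chef's documented workflow.
FROM rust:1.61.0 AS chef
RUN cargo install cargo-chef
WORKDIR /app

FROM chef AS planner
COPY . .
# Produce a dependency "recipe" from the project's manifests.
RUN cargo chef prepare --recipe-path recipe.json

FROM chef AS builder
COPY --from=planner /app/recipe.json recipe.json
# Build dependencies only; this layer stays cached across source-code edits.
RUN cargo chef cook --release --recipe-path recipe.json
COPY . .
RUN cargo build --release
```

Source changes then invalidate only the final two layers, leaving the expensive dependency build cached.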
Conclusion
Cross-platform Rust development doesn’t have to be complicated. You can follow some simple steps, and make some approachable optimizations, to improve your builds. This reduces complexity, application size, and build times by wide margins. Moreover, embracing best practices can make your life easier.
Want to jumpstart your next Rust project? Our awesome-compose library features a shortcut for getting started with a Rust backend. Follow our example to build a React application that leverages a Rust backend with a Postgres database. You’ll also learn how Docker Compose can help streamline the process.
Source: https://blog.docker.com/feed/

Amazon Neptune simplifies graph analytics and machine learning workflows with Python integration

With an open-source Python integration that simplifies data science and ML workflows, you can now run graph analytics and machine learning tasks on graph data stored in Amazon Neptune. With this integration, you can read and write graph data stored in Neptune using pandas DataFrames in any Python environment, such as a local Jupyter notebook instance, Amazon SageMaker Studio, AWS Lambda, or other compute resources. From there, you can apply graph algorithms such as PageRank and Connected Components using open-source libraries such as iGraph, NetworkX, and cuGraph.
Source: aws.amazon.com

Amazon EMR 6.6 adds support for Apache Spark 3.2, Hudi 0.10.1, Iceberg 0.13, Trino 0.367, PrestoDB 0.267, and more

The Amazon EMR 6.6 release now supports Apache Spark 3.2, Apache Spark RAPIDS 22.02, CUDA 11, Apache Hudi 0.10.1, Apache Iceberg 0.13, Trino 0.367, and PrestoDB 0.267. You can use the performance-optimized version of Apache Spark 3.2 on EMR on EC2, EKS, and the recently released EMR Serverless. In addition, Apache Hudi 0.10.1 and Apache Iceberg 0.13 are available on EC2, EKS, and Serverless. Apache Hive 3.1.2 is available on EMR on EC2 and EMR Serverless. Trino 0.367 and PrestoDB 0.267 are available only on EMR on EC2.
Source: aws.amazon.com

AWS Security Hub can now receive AWS Config managed and custom rule evaluation results

AWS Security Hub now automatically receives AWS Config managed and custom rule evaluation results as security findings. With AWS Config, security and compliance professionals can assess, audit, and evaluate the configurations of their AWS resources using Config rules, which evaluate the compliance of AWS resources against specific policies. Examples of resource misconfigurations detected by Config rules include publicly accessible Amazon S3 buckets, unencrypted EBS volumes, and overly permissive IAM policies. Whether a Config rule evaluation passed or failed, you now see a "passed" or "failed" indication for that evaluation in Security Hub. Any updates to the Config rule evaluation status are automatically reflected in the Security Hub finding. This new integration between Security Hub and AWS Config extends centralization and the single-pane-of-glass view by consolidating your Config evaluation results alongside your other security findings, making it easier for you to search, sort, triage, and act on your security findings.
Source: aws.amazon.com

The Amazon DynamoDB Standard-IA table class is now available in the AWS Asia Pacific (Jakarta) Region

The Amazon DynamoDB Standard-Infrequent Access (DynamoDB Standard-IA) table class is now available in the AWS Asia Pacific (Jakarta) Region. The DynamoDB Standard-IA table class is ideal for use cases that require long-term storage of infrequently accessed data, such as application logs, social media posts, e-commerce order history, and past gaming achievements.
Source: aws.amazon.com

Amazon CloudFront now supports TLS 1.3 session resumption for viewer connections

Amazon CloudFront now supports Transport Layer Security (TLS) 1.3 session resumption to further improve viewer connection performance. CloudFront has supported TLS protocol version 1.3 for encrypting HTTPS communications between viewers and CloudFront since 2020. Customers who adopted the protocol saw connection performance improve by up to 30 percent compared to earlier TLS versions. Starting today, customers using TLS 1.3 will see an additional performance gain of up to 50 percent thanks to TLS 1.3 session resumption. With session resumption, when a client reconnects to a server it previously established a TLS connection with, the server decrypts the session ticket using a key shared with the client and resumes the session. TLS 1.3 session resumption speeds up session establishment because it reduces compute costs for both the server and the client, and fewer packets need to be transmitted than in a full TLS handshake.
Source: aws.amazon.com