Connect, secure, and simplify your network resources with Azure Virtual Network Manager

Enterprise-scale management and configuration of your network resources in Azure are key to keeping costs down, reducing operational overhead, and properly connecting and securing your network presence in the cloud. We are happy to announce that Azure Virtual Network Manager (AVNM), your one-stop shop for managing the connectivity and security of your network resources at scale, is now generally available.

What is Azure Virtual Network Manager?

AVNM works through a main process of group, configure, and deploy. You’ll group your network resources across subscriptions, regions, and even tenants; configure the kind of connectivity and security you want among your grouped network resources; and finally, deploy those configurations onto those network groups in whichever and however many regions you’d like.

Common use cases

Common use cases for AVNM include the following, all of which can be addressed by deploying AVNM's connectivity and security admin configurations onto your defined network groups:

Interconnected virtual networks (VNets) that communicate directly with each other.
Central infrastructure services in a hub VNet that are shared by other VNets.
Direct connectivity between spoke VNets to reduce latency.
Automatic maintenance of connectivity at scale, even as new network resources are added.
Enforcement of standard security rules on all existing and new VNets, without the risk of those rules being changed.
Flexibility for VNet owners to configure network security groups (NSGs) as needed for more fine-grained traffic control.
Application of default security rules across an entire organization to mitigate the risk of misconfiguration and security holes.
Force-allowance of services' traffic, such as monitoring services and program updates, so that security rules cannot accidentally block it.

Connectivity configuration

Hub and spoke topology

When you have shared services in a hub VNet, such as an Azure Firewall or an ExpressRoute gateway, and you need to connect several other VNets to that hub to share those services, you'll have to establish connectivity between each of those spoke VNets and the hub. If you provision new VNets in the future, you'll also need to make sure those new VNets are correctly connected to the hub VNet.

With AVNM, you can create groups of VNets and select those groups to be connected to your desired hub VNet, and AVNM will establish all the necessary connectivity between your hub VNet and each VNet in your selected groups behind the scenes. On top of the simplicity of creating a hub and spoke topology, new VNets that match your desired conditions can be automatically added to this topology, reducing manual intervention on your part.

For the time being, establishing direct connectivity between the VNets within a spoke network group is still in preview and will become generally available (GA) at a later date.

Mesh

If you want all of your VNets to be able to communicate with each other regionally or globally, you can build a mesh topology with AVNM’s connectivity configuration. You’ll select your desired network groups and AVNM will establish connectivity between every VNet that is a part of your selected network groups. The mesh connectivity configuration feature is still in preview and will become generally available at a later date.

How to implement connectivity configurations with existing environments

Let’s say you have a cross-region hub and spoke topology in Azure that you’ve set up through manual peerings. Your hub VNet has an ExpressRoute gateway and your dozens of spoke VNets are owned by various application teams.

Here are the steps you would take to implement and automate this topology using AVNM:

Create your network manager.
Create a network group for each application team’s respective VNets using Azure Policy definitions that can be conditionally based on parameters including (but not limited to) subscription, VNet tag, and VNet name.
Create a connectivity configuration with hub and spoke selected. Select your desired hub VNet and your network groups as the spokes.
By default, all connectivity established with AVNM is additive after the connectivity configuration’s deployment. If you’d like AVNM to clean up existing peerings for you, this is an option you can select; otherwise, existing connectivity can be manually cleaned up later if desired.
Deploy your hub and spoke connectivity configuration to your desired regions.
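The conditional network group membership from step 2 is expressed as an Azure Policy definition. Below is a minimal sketch of such a definition using the addToNetworkGroup effect; the tag name, subscription, resource group, and network group names are placeholders for illustration, not values from this article:

```json
{
  "if": {
    "allOf": [
      { "field": "type", "equals": "Microsoft.Network/virtualNetworks" },
      { "field": "tags['appTeam']", "equals": "team1" }
    ]
  },
  "then": {
    "effect": "addToNetworkGroup",
    "details": {
      "networkGroupId": "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/networkManagers/<network-manager>/networkGroups/team1-vnets"
    }
  }
}
```

Any VNet carrying the matching tag, existing or newly created, is then pulled into the network group automatically, which is what keeps the topology self-maintaining.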

In just a few clicks, you’ve set up a hub and spoke topology among dozens of VNets from all application teams globally through AVNM. By defining the conditions of VNet membership for your network groups representing each application team, you’ve ensured that any newly created VNet matching those conditions will automatically be added to the corresponding network group and receive the same connectivity configuration applied onto it. Whether you choose to have AVNM delete existing peerings or not, there is no downtime to connectivity between your spoke VNets and hub VNet.

Security feature

AVNM currently provides you with the ability to protect your VNets at scale with security admin configurations. This type of configuration consists of security admin rules, which are high-priority security rules defined similarly to, but evaluated with precedence over, NSG rules.

The security admin configuration feature is still in preview and will become generally available at a later date.

Enforcement and flexibility

With NSGs alone, widespread enforcement on VNets across several applications, teams, or even entire organizations can be tricky. Often there's a balancing act between centralized enforcement across an organization and handing granular, flexible control to individual teams. The cost of hard enforcement is higher operational overhead, as admins need to manage an increasing number of NSGs. The cost of letting individual teams tailor their own security rules is the risk of vulnerability, since misconfigurations and unsafe open ports are possible. Security admin rules aim to eliminate this trade-off between enforcement and flexibility altogether, by giving central governance teams the ability to establish guardrails while intentionally allowing traffic through so that individual teams can flexibly pinpoint security as needed with NSG rules.

Difference from NSGs

Security admin rules are similar to NSG rules in structure and input parameters, but they are not the exact same construct. Let's boil down these differences and similarities:

|                      | Target audience                          | Applied on       | Evaluation order                           | Action types             | Parameters                                     |
|----------------------|------------------------------------------|------------------|--------------------------------------------|--------------------------|------------------------------------------------|
| Security admin rules | Network admins, central governance team  | Virtual networks | Higher priority                            | Allow, Deny, Always Allow | Priority, protocol, action, source, destination |
| NSG rules            | Individual teams                         | Subnets, NICs    | Lower priority, after security admin rules | Allow, Deny              | Priority, protocol, action, source, destination |
One key difference is the security admin rule’s Allow type. Unlike its other action types of Deny and Always Allow, if you create a security admin rule to Allow a certain type of traffic, then that traffic will be further evaluated by NSG rules matching that traffic. However, Deny and Always Allow security admin rules will stop the evaluation of traffic, meaning NSGs down the line will not see or handle this traffic. As a result, regardless of NSG presence, administrators can use security admin rules to protect an organization by default.
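The evaluation order described above can be modeled in a short sketch. This is plain Python and illustrative only; the rule and traffic structures are assumptions made for this sketch, not AVNM's actual API:

```python
# Illustrative model of how security admin rules gate NSG evaluation:
# "Allow" admin rules pass traffic down to NSG rules for further evaluation,
# while "Deny" and "AlwaysAllow" terminate evaluation so NSGs never see it.

def evaluate(traffic, admin_rules, nsg_rules):
    """Return the final verdict for a flow, mimicking AVNM precedence."""
    # Security admin rules are evaluated first, in priority order
    # (in Azure, a lower number means higher priority).
    for rule in sorted(admin_rules, key=lambda r: r["priority"]):
        if rule["match"](traffic):
            if rule["action"] == "Deny":
                return "denied by admin rule"    # NSGs never evaluated
            if rule["action"] == "AlwaysAllow":
                return "allowed by admin rule"   # NSGs never evaluated
            if rule["action"] == "Allow":
                break                            # fall through to NSG rules
    # Only admin-allowed (or unmatched) traffic reaches the NSG rules.
    for rule in sorted(nsg_rules, key=lambda r: r["priority"]):
        if rule["match"](traffic):
            verdict = "allowed" if rule["action"] == "Allow" else "denied"
            return verdict + " by NSG"
    return "denied by default"  # simplified default; real NSGs have default rules

# Example: an admin rule denies SSH org-wide; a team's NSG tries to allow it.
admin = [{"priority": 100, "action": "Deny",
          "match": lambda t: t["port"] == 22}]
nsg = [{"priority": 100, "action": "Allow",
        "match": lambda t: t["port"] == 22}]
print(evaluate({"port": 22}, admin, nsg))   # -> denied by admin rule
print(evaluate({"port": 443}, admin, nsg))  # -> denied by default
```

Note how the team's NSG Allow rule on port 22 never gets a chance to run: the Deny admin rule stops evaluation first, which is exactly the guardrail behavior described above.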

Key Scenarios

Providing exceptions

Being able to enforce security rules throughout an organization is useful, to say the least. But one of the benefits of security admin rules we've mentioned is the flexibility for teams within the organization to handle traffic differently as needed. Let's say you're a network administrator and you've enforced security admin rules to block all high-risk ports across your entire organization, but application team 1 needs SSH traffic for a few of its resources and has requested an exception for its VNets. You'd create a network group specifically for application team 1's VNets and create a security admin rule collection targeting only that network group. Inside that rule collection, you'd create a security admin rule of action type Allow for inbound SSH traffic (port 22). The priority of this rule would need to be higher than that of the original rule blocking the port across all of your organization's resources. Effectively, you've now established an exception to the blocking of SSH traffic just for application team 1's VNets, while still protecting the rest of your organization from that traffic by default.

Force-allowing traffic to and from monitoring services or domain controllers

Security admin rules are handy for blocking risky traffic across your organization, but they’re also useful for force-allowing traffic needed for certain services to continue running as expected. If you know that your application teams need software updates for their virtual machines, then you can create a rule collection targeting the appropriate network groups consisting of Always Allow security admin rules for the ports where the updates come through. This way, even if an application team misconfigures an NSG to deny traffic on a port necessary for updates, the security admin rule will ensure the traffic is delivered and doesn’t hit that conflicting NSG.

How to implement security admin configurations with existing environments

Let’s say you have an NSG-based security model consisting of hundreds of NSGs that are modifiable by both the central governance team and individual application teams. Your organization implemented this model originally to allow for flexibility, but there have been security vulnerabilities due to missing security rules and constant NSG modification.

Here are the steps you would take to implement and enforce organization-wide security using AVNM:

Create your network manager.
Create a network group for each application team’s respective VNets using Azure Policy definitions that can be conditionally based on parameters including (but not limited to) subscription, VNet tag, and VNet name.
Create a security admin configuration with a rule collection targeting all network groups. This rule collection represents the standard security rules that you’re enforcing across your entire organization.
Create security admin rules blocking high-risk ports. These security admin rules take precedence over NSG rules, so Deny security admin rules have no possibility of conflict with existing NSGs. Redundant or now-circumvented NSGs can be manually cleaned up if desired.
Deploy your security admin configuration to your desired regions.
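To make step 4 concrete, a Deny rule on high-risk ports looks roughly like the following resource fragment. This is a hedged sketch modeled on the security admin rule's documented parameters (priority, protocol, action, source, destination); the description, port choices, and wildcard prefixes are illustrative assumptions:

```json
{
  "properties": {
    "description": "Block inbound high-risk management ports org-wide",
    "direction": "Inbound",
    "access": "Deny",
    "protocol": "Tcp",
    "priority": 100,
    "sources": [ { "addressPrefixType": "IPPrefix", "addressPrefix": "*" } ],
    "destinations": [ { "addressPrefixType": "IPPrefix", "addressPrefix": "*" } ],
    "sourcePortRanges": [ "0-65535" ],
    "destinationPortRanges": [ "22", "3389" ]
  }
}
```

Because the action is Deny rather than Allow, matching traffic is dropped before any NSG evaluates it, so no team's NSG can reopen these ports without an explicit exception from the governance team.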

You’ve now set up an organization-wide set of security guardrails among all of your application teams’ VNets globally through AVNM. You’ve established enforcement without sacrificing flexibility, as you’re able to create exceptions for any application team’s set of VNets. Your old NSGs still exist, but all traffic will hit your security admin rules first. You can clean up redundant or circumvented NSGs, and your network resources are still protected by your security admin rules, so there is no downtime from a security standpoint.

Learn more about Azure Virtual Network Manager

Check out the AVNM overview, read more about AVNM in our public documentation set, and deep-dive into AVNM’s security offering through our security blog.
Source: Azure

Announcing Docker+Wasm Technical Preview 2

We recently announced the first Technical Preview of Docker+Wasm, a special build that makes it possible to run Wasm containers with Docker using the WasmEdge runtime. Starting with version 4.15, everyone can try it out by enabling the experimental containerd image store feature.

We didn’t want to stop there, however. Since October, we’ve been working with our partners to make running Wasm workloads with Docker easier and to support more runtimes.

Now we are excited to announce a new Technical Preview of Docker+Wasm with the following three new runtimes:

spin from Fermyon

slight from Deislabs

wasmtime from Bytecode Alliance

All of these runtimes, including WasmEdge, use the runwasi library.

What is runwasi?

Runwasi is a multi-company effort to build a Rust library that makes it easier to write containerd shims for Wasm workloads. Last December, the runwasi project was donated and moved to the Cloud Native Computing Foundation’s containerd organization on GitHub.

With a lot of work from people at Microsoft, Second State, Docker, and others, we now have enough features in runwasi to run Wasm containers with Docker or in a Kubernetes cluster. We still have a lot of work to do, but there are enough features for people to start testing.

If you would like to chat with us or other runwasi maintainers, join us on the CNCF’s #runwasi channel.

Get the update

Ready to dive in and try it for yourself? Great! Before you do, understand that this is a technical preview build of Docker Desktop, so things might not work as expected. Be sure to back up your containers and images before proceeding.

Download and install the appropriate version for your system, then activate the containerd image store (Settings > Features in development > Use containerd for pulling and storing images), and you’ll be ready to go.

Figure 1: Docker Desktop beta features in development.

Mac (Intel)

Mac (Arm)

Linux (deb, Intel)

Linux (deb, Arm)

Linux (rpm, Intel)

Linux (Arch)

Windows

Let’s take Wasm for a spin 

The WasmEdge runtime is still present in Docker Desktop, so you can run: 

$ docker run --rm --runtime=io.containerd.wasmedge.v1 \
    --platform=wasi/wasm secondstate/rust-example-hello:latest
Hello WasmEdge!

You can even run the same image with the wasmtime runtime:

$ docker run --rm --runtime=io.containerd.wasmtime.v1 \
    --platform=wasi/wasm secondstate/rust-example-hello:latest
Hello WasmEdge!

In the next example, we will deploy a Wasm workload to Docker Desktop’s Kubernetes cluster using the slight runtime. To begin, make sure to activate Kubernetes in Docker Desktop’s settings, then create an example.yaml file:

cat > example.yaml <<EOT
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wasm-slight
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wasm-slight
  template:
    metadata:
      labels:
        app: wasm-slight
    spec:
      runtimeClassName: wasmtime-slight-v1
      containers:
        - name: hello-slight
          image: dockersamples/slight-rust-hello:latest
          command: ["/"]
          resources:
            requests:
              cpu: 10m
              memory: 10Mi
            limits:
              cpu: 500m
              memory: 128Mi
---
apiVersion: v1
kind: Service
metadata:
  name: wasm-slight
spec:
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  selector:
    app: wasm-slight
EOT

Note the runtimeClassName: Kubernetes will use this to select the right runtime for your application.
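Under the hood, a runtimeClassName refers to a RuntimeClass object that maps workloads to a containerd handler. The preview build preconfigures these for you; conceptually, such an object looks like the following sketch (the handler name here is an assumption about how the shim is registered):

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasmtime-slight-v1
handler: slight   # containerd maps this handler to the slight Wasm shim
```

Pods that omit runtimeClassName keep using the default OCI runtime, so Wasm and regular Linux containers can coexist in the same cluster.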

You can now run:

$ kubectl apply -f example.yaml

Once Kubernetes has downloaded the image and started the container, you should be able to curl it:

$ curl localhost/hello
hello

You now have a Wasm container running locally in Kubernetes. How exciting! 🎉

Note: You can take this same yaml file and run it in AKS.

Now let’s see how we can use this to run Bartholomew. Bartholomew is a micro-CMS made by Fermyon that works with the spin runtime. You’ll need to clone this repository; it’s a slightly modified Bartholomew template. 

The repository already contains a Dockerfile that you can use to build the Wasm container:

FROM scratch
COPY . .
ENTRYPOINT [ "/modules/bartholomew.wasm" ]

The Dockerfile copies all the contents of the repository into the image and defines the built bartholomew Wasm module as the image’s entry point.

$ cd docker-wasm-bartholomew
$ docker build -t my-cms .
[+] Building 0.0s (5/5) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 147B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 2.84kB 0.0s
=> CACHED [1/1] COPY . . 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => exporting manifest sha256:cf85929e5a30bea9d436d447e6f2f2e 0.0s
=> => exporting config sha256:0ce059f2fe907a91a671f37641f4c5d73 0.0s
=> => naming to docker.io/library/my-cms:latest 0.0s
=> => unpacking to docker.io/library/my-cms:latest 0.0s

You are now ready to run your first WebAssembly micro-CMS:

$ docker run --runtime=io.containerd.spin.v1 -p 3000:80 my-cms

If you go to http://localhost:3000, you should be able to see the Bartholomew landing page (Figure 2).

Figure 2: Bartholomew landing page.

We’d love your feedback

All of this work is fresh from the oven and relies on the containerd image store in Docker, which is an experimental feature we’ve been working on for almost a year now. The good news is that we already see how this hard work can benefit everyone by adding more features to Docker. We’re still working on it, so let us know what you need. 

If you want to help us shape the future of WebAssembly with Docker, try it out, let us know what you think, and leave feedback on our public roadmap.

Source: https://blog.docker.com/feed/

Amazon Neptune introduces graph summary API

Amazon Neptune now supports the graph summary API, which helps you understand key metadata about your property graphs (PG) and Resource Description Framework (RDF) graphs. Starting with engine version 1.2.1.0, you can use the graph summary API to quickly access summary data about your graph.
Source: aws.amazon.com

Amazon QuickSight adds a hide collapsed columns control for pivot tables

Amazon QuickSight now offers a hide collapsed columns control to help users better manage collapsed columns in their pivot tables. Authors can use this control to automatically hide collapsed row header fields, which eliminates unnecessary scrolling and makes pivot tables look more compact and tidy. To learn more, click here.
Source: aws.amazon.com