Build a scalable security practice with Azure Lighthouse and Azure Sentinel

The Microsoft Azure Lighthouse product group is excited to launch a blog series covering areas in Azure Lighthouse where we are investing to make our service provider partners and enterprise customers successful with Azure. Our first blog in this series covers a top area of consideration for companies worldwide: security, with a focus on how Azure Lighthouse can be used alongside Microsoft's Azure Sentinel service to build an efficient and scalable security practice.

Today, organizations of all sizes are looking to reduce costs and complexity and to gain efficiencies in their security operations. Because cloud security solutions help meet these requirements by providing flexibility, simplicity, pay-for-use pricing, automatic scalability, and protection across heterogeneous environments, more and more companies are embracing them.

While achieving efficiencies is the need of the hour, organizations also face a shortage of security experts in the market. This is where service providers have tremendous potential to fill the gap by building and offering security services on top of cloud security solutions. Before diving deeper, let me start with a brief introduction to Azure Lighthouse and Azure Sentinel.

Azure Lighthouse helps service providers and large enterprises manage the environments of multiple customers or individual subsidiaries at scale, from a single centralized control plane. Since its launch at Inspire, Azure Lighthouse has seen wide adoption from both service providers and enterprises, with millions of Azure resources being managed at scale across heterogeneous environments.

Azure Sentinel is a cloud-native security information and event management (SIEM) and security orchestration, automation, and response (SOAR) solution from Microsoft. It enables collection of security data at scale across your entire enterprise, including Azure services, Microsoft 365 services, and hybrid environments such as other clouds, firewalls, and partner security tools. Azure Sentinel also uses built-in AI and advanced querying capabilities to detect, investigate, respond to, and mitigate threats efficiently.

We will now look at how you can use both these services together to architect a scalable security practice.

To start building a security practice that scales across multiple customer environments for a service provider, or that helps organizations centrally monitor and manage security operations across their individual subsidiaries, we recommend using a distributed deployment and centralized management model. In this model, you deploy Azure Sentinel workspaces within the tenant that belongs to each customer or subsidiary (so data stays within the customer's or subsidiary's environment) and manage them centrally from the service provider's tenant or from a central security operations center (SOC) tenant within the organization.

You can then leverage Azure Lighthouse’s capabilities to manage and perform security operations from the central managing tenant on the Azure Sentinel workspaces located in the managed tenant. To learn more about this model and its applicability for your scenario, read Extend Azure Sentinel across workspaces and tenants.

To deploy and configure these workspaces at scale, both Azure Sentinel and Azure Lighthouse offer powerful automation capabilities that you can use effectively with CI/CD pipelines across tenants. Here is what ITC Secure, a managed security services provider and Microsoft partner based in London, has to say:

“With Azure Lighthouse’s ability to get delegated access to a customer’s environment and the powerful automation capabilities of both Azure Lighthouse and Azure Sentinel, we are now able to leverage a common set of automations to deploy Azure Sentinel. In real terms, this enables us to configure Azure Sentinel with existing content like queries and analytical rules. This has resulted in significant reductions in customer onboarding times, reducing delivery times from months to a few weeks and even a few hours in certain scenarios. This has enabled us to scale our onboarding processes and practices significantly and delivers faster ROI for our customers. Azure Lighthouse has also provided greater transparency and visibility for our customers, where they can clearly see work delivered. We run queries and apply workbooks across our customer’s subscriptions, deploy playbooks in our customer’s tenants, all from a central pane of glass, further adding to the overall speed of delivery of our service.” —Arno Robbertse, Chief Executive, ITC Secure

Threat hunting and investigation through cross-tenant queries

Running queries to search for threats, and then investigating the results, is an essential part of a SOC analyst's job. With Azure Lighthouse, you can deploy Log Analytics queries or hunting queries in the central managing tenant (preserving IP for a service provider) and run those queries across the managed tenants using the union operator and workspace expression.

Visualizing and monitoring data across customer environments

Another technology that works well across tenants is Azure Monitor Workbooks, Azure Sentinel’s dashboarding technology. You can choose to deploy workbooks in the managing tenant or managed tenant per your requirements. For workbooks deployed in the managing tenant, you can add a multi-workspace selector within a workbook (in case it doesn’t have one already built into it), to visualize and monitor data and essentially get data insights across multiple workspaces and across multiple customers/subsidiaries if needed.

Automated responses through playbooks

Security Playbooks can be used for automatic mitigation when an alert is triggered. The playbooks can be deployed either in the managing tenant or the individual managed tenant, with the response procedures configured based on which tenant's users will need to take action in response to a security threat.

Xcellent, a managed services provider and Microsoft partner based in the Netherlands, has benefited from access to a central security solution powered by Azure Sentinel and Azure Lighthouse to monitor the different Microsoft 365 components across customer tenants. Response management and querying against their customer base has also become more efficient, dropping Xcellent's standard response time to less than 45 minutes and allowing the team to create a more proactive security solution for their customers.

Cross-tenant incident management

The multiple workspace incident view facilitates centralized incident monitoring and management across multiple Azure Sentinel workspaces and across Azure Active Directory (Azure AD) tenants using Azure Lighthouse. This centralized incident view lets you manage incidents directly or drill down transparently to the incident details in the context of the originating workspace.

Resources to get you started

Azure Lighthouse extends Azure Sentinel’s powerful security capabilities to help you centrally monitor and manage security operations from a single interface and efficiently scale your security operations across multiple Azure tenants and customers.

The following resources will help you get started:

Take a look at our detailed documentation and guidance for using Azure Lighthouse with Azure Sentinel.
For the latest resources and updates on Azure Sentinel, join us at the Azure Sentinel Tech Community.
You can provide feedback or request new features for Azure Lighthouse in our feedback forum.
Check out Azure PartnerZone for the latest content, news, and resources for partners.

Source: Azure

Getting Started with Docker Using Node.js (Part I)

A step-by-step guide to help you get started using Docker containers with your Node.js apps.

Prerequisites

To complete this tutorial, you will need the following:

A free Docker account: you can sign up for a free Docker account and receive free unlimited public repositories.
Docker running locally: instructions to download and install Docker.
Node.js version 12.18 or later: download Node.js.
An IDE or text editor to use for editing files: I would recommend VSCode.

Docker Overview

Docker is an open platform for developing, shipping, and running applications. Docker enables you to separate your applications from your infrastructure so you can deliver software quickly. 

With Docker, you can manage your infrastructure in the same ways you manage your applications. By taking advantage of Docker’s methodologies for shipping, testing, and deploying code quickly, you can significantly reduce the delay between writing code and running it in production.

Sample Application

Let’s create a simple Node.js application that we’ll use as our example. Create a directory on your local machine named node-docker and follow the steps below to create a simple REST API.

$ cd [path to your node-docker directory]
$ npm init -y
$ npm install ronin-server ronin-mocks
$ touch server.js

Now let’s add some code to handle our REST requests. We’ll use a mocks server so we can focus on Dockerizing the application and not so much the actual code.

Open this working directory in your favorite IDE and enter the following code into the server.js file.

const ronin = require( 'ronin-server' )
const mocks = require( 'ronin-mocks' )

const server = ronin.server()

server.use( '/', mocks.server( server.Router(), false, true ) )
server.start()

The mocking server is called Ronin.js and will listen on port 8000 by default. You can make POST requests to the root (/) endpoint, and any JSON structure you send to the server will be saved in memory. You can also send GET requests to the same endpoint and receive an array of the JSON objects that you have previously POSTed.

Testing Our Application

Let's start our application and make sure it's running properly. Open your terminal and navigate to the working directory you created.

$ node server.js

To test that the application is working properly, we'll first POST some JSON to the API and then make a GET request to see that the data has been saved. Open a new terminal and run the following curl commands:

$ curl --request POST \
  --url http://localhost:8000/test \
  --header 'content-type: application/json' \
  --data '{
    "msg": "testing"
}'
{"code":"success","payload":[{"msg":"testing","id":"31f23305-f5d0-4b4f-a16f-6f4c8ec93cf1","createDate":"2020-08-28T21:53:07.157Z"}]}

$ curl http://localhost:8000/test
{"code":"success","meta":{"total":1,"count":1},"payload":[{"msg":"testing","id":"31f23305-f5d0-4b4f-a16f-6f4c8ec93cf1","createDate":"2020-08-28T21:53:07.157Z"}]}

Switch back to the terminal where our server is running and you should see the following requests in the server logs.

2020-XX-31T16:35:08:4260 INFO: POST /test
2020-XX-31T16:35:21:3560 INFO: GET /test

Creating Dockerfiles for Node.js

Now that our application is running properly, let’s take a look at creating a Dockerfile. 

A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. When we tell Docker to build our image by executing the docker build command, Docker will read these instructions and execute them one by one and create a Docker image as a result.

Let’s walk through creating a Dockerfile for our application. In the root of your working directory, create a file named Dockerfile and open this file in your text editor.

NOTE: The name of the Dockerfile is not important but the default filename for many commands is simply Dockerfile. So we’ll use that as our filename throughout this series.

The first thing we need to do is add a line in our Dockerfile that tells Docker what base image we would like to use for our application. 

Dockerfile:

FROM node:12.18.1

Docker images can be inherited from other images. So instead of creating our own base image, we'll use the official Node.js image that already has all the tools and packages that we need to run a Node.js application. You can think of this in the same way you would think about class inheritance in object-oriented programming. For example, if we were able to create Docker images in JavaScript, we might write something like the following.

class MyImage extends NodeBaseImage {}

This would create a class called MyImage that inherited functionality from the base class NodeBaseImage.

In the same way, when we use the FROM command, we tell Docker to include in our image all the functionality from the node:12.18.1 image.

NOTE: If you want to learn more about creating your own base images, please check out our documentation on creating base images.

To make things easier when running the rest of our commands, let’s create a working directory. 

This instructs Docker to use this path as the default location for all subsequent commands. This way we do not have to type out full file paths but can use relative paths based on the working directory.

WORKDIR /app

Usually the very first thing you do once you’ve downloaded a project written in Node.js is to install npm packages. This will ensure that your application has all its dependencies installed into the node_modules directory where the node runtime will be able to find them.

Before we can run npm install, we need to get our package.json and package-lock.json files into our image. We'll use the COPY command to do this. The COPY command takes two parameters. The first parameter tells Docker what file(s) you would like to copy into the image. The second parameter tells Docker where you want those files to be copied to. We'll copy the package.json and package-lock.json file into our working directory, /app.

COPY package.json package.json
COPY package-lock.json package-lock.json

Once we have our package.json files inside the image, we can use the RUN command to execute the command npm install. This works exactly the same as if we were running npm install locally on our machine but this time these node modules will be installed into the node_modules directory inside our image.

RUN npm install

At this point we have an image that is based on node version 12.18.1 and we have installed our dependencies. The next thing we need to do is to add our source code into the image. We’ll use the COPY command just like we did with our package.json files above.

COPY . .

This COPY command takes all the files located in the current directory and copies them into the image. Now all we have to do is to tell Docker what command we want to run when our image is run inside of a container. We do this with the CMD command.

CMD [ "node", "server.js" ]

Below is the complete Dockerfile.

FROM node:12.18.1

WORKDIR /app

COPY package.json package.json
COPY package-lock.json package-lock.json

RUN npm install

COPY . .

CMD [ "node", "server.js" ]

Building Images

Now that we’ve created our Dockerfile, let’s build our image. To do this we use the docker build command. The docker build command builds Docker images from a Dockerfile and a “context”. A build’s context is the set of files located in the specified PATH or URL. The Docker build process can access any of the files located in the context. 

The build command optionally takes a --tag flag. The tag is used to set the name of the image and an optional tag in the format 'name:tag'. We'll leave off the optional "tag" for now to help simplify things. If you do not pass a tag, Docker will use "latest" as its default tag. You'll see this in the last line of the build output.
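
Just to illustrate the name:tag format, here is a sketch of what the command might look like with an explicit tag included; the 1.0.0 tag is purely an example and not a step in this tutorial.

# Build the image and give it both a name (node-docker) and a tag (1.0.0)
$ docker build --tag node-docker:1.0.0 .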

Let’s build our first Docker image.

$ docker build --tag node-docker .
Sending build context to Docker daemon 82.94kB
Step 1/7 : FROM node:12.18.1
 ---> f5be1883c8e0
Step 2/7 : WORKDIR /app

Successfully built e03018e56163
Successfully tagged node-docker:latest

Viewing Local Images

To see a list of images we have on our local machine, we have two options. One is to use the CLI and the other is to use Docker Desktop. Since we are currently working in the terminal let’s take a look at listing images with the CLI.

To list images, simply run the docker images command.

$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
node-docker latest 3809733582bc About a minute ago 945MB
node 12.18.1 f5be1883c8e0 2 months ago 918MB

You should see at least two images listed: one for the base image node:12.18.1 and the other for the image we just built, node-docker:latest.

Tagging Images

As mentioned earlier, an image name is made up of slash-separated name components. Name components may contain lowercase letters, digits and separators. A separator is defined as a period, one or two underscores, or one or more dashes. A name component may not start or end with a separator.
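
To make those rules concrete, here is a sketch of a fully qualified image name; the registry host and namespace below are hypothetical examples, not something this tutorial requires.

# [registry host]/[namespace]/[repository]:[tag]
$ docker tag node-docker registry.example.com/my-team/node-docker:v1.0.0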

An image is made up of a manifest and a list of layers. Do not worry too much about manifests and layers at this point, other than that a "tag" points to a combination of these artifacts. You can have multiple tags for an image. Let's create a second tag for the image we built and take a look at its layers.

To create a new tag for the image we built above, run the following command.

$ docker tag node-docker:latest node-docker:v1.0.0

The docker tag command creates a new tag for an image. It does not create a new image. The tag points to the same image and is just another way to reference the image.

Now run the docker images command to see a list of our local images.

$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
node-docker latest 3809733582bc 24 minutes ago 945MB
node-docker v1.0.0 3809733582bc 24 minutes ago 945MB
node 12.18.1 f5be1883c8e0 2 months ago 918MB

You can see that we have two images that start with node-docker. We know they are the same image because if you look at the IMAGE ID column, you can see that the values are the same for the two images.

Let’s remove the tag that we just created. To do this, we’ll use the rmi command. The rmi command stands for “remove image”. 

$ docker rmi node-docker:v1.0.0
Untagged: node-docker:v1.0.0

Notice that the response from Docker tells us that the image has not been removed but only “untagged”. Double check this by running the images command.

$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
node-docker latest 3809733582bc 32 minutes ago 945MB
node 12.18.1 f5be1883c8e0 2 months ago 918MB

Our image that was tagged with :v1.0.0 has been removed but we still have the node-docker:latest tag available on our machine.

Running Containers

A container is a normal operating system process except that this process is isolated in that it has its own file system, its own networking, and its own isolated process tree separate from the host.

To run an image inside of a container, we use the docker run command. The docker run command requires one parameter and that is the image name. Let’s start our image and make sure it is running correctly. Execute the following command in your terminal.

$ docker run node-docker

After running this command you'll notice that you were not returned to the command prompt. This is because our application is a REST server and will run in a loop waiting for incoming requests without returning control back to the OS until we stop the container.

Let's make a POST request to the server using the curl command.

$ curl --request POST \
  --url http://localhost:8000/test \
  --header 'content-type: application/json' \
  --data '{
    "msg": "testing"
}'
curl: (7) Failed to connect to localhost port 8000: Connection refused

As you can see, our curl command failed because the connection to our server was refused, meaning that we were not able to connect to localhost on port 8000. This is expected because our container is run in isolation, which includes networking. Let's stop the container and restart it with port 8000 published on our local network.

To stop the container, press ctrl-c. This will return you to the terminal prompt.

To publish a port for our container, we'll use the --publish flag (-p for short) on the docker run command. The format of the --publish flag is [host port]:[container port]. So if we wanted to expose port 8000 inside the container to port 3000 outside the container, we would pass 3000:8000 to the --publish flag.
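
As a quick sketch of that example mapping (shown only to illustrate the format, not as a step in this tutorial):

# Map port 3000 on the host to port 8000 inside the container
$ docker run --publish 3000:8000 node-docker
# The API would then be reachable from the host at http://localhost:3000/test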

Start the container and expose port 8000 to port 8000 on the host.

$ docker run --publish 8000:8000 node-docker

Now let’s rerun the curl command from above.

$ curl --request POST \
  --url http://localhost:8000/test \
  --header 'content-type: application/json' \
  --data '{
    "msg": "testing"
}'
{"code":"success","payload":[{"msg":"testing","id":"dc0e2c2b-793d-433c-8645-b3a553ea26de","createDate":"2020-09-01T17:36:09.897Z"}]}

Success! We were able to connect to the application running inside of our container on port 8000. Switch back to the terminal where your container is running and you should see the POST request logged to the console.

2020-09-01T17:36:09:8770  INFO: POST /test

Press ctrl-c to stop the container.

Run In Detached Mode

This is great so far, but our sample application is a web server and we should not have to have our terminal connected to the container. Docker can run your container in detached mode, that is, in the background. To do this, we can use the --detach flag, or -d for short. Docker will start your container the same as before, but this time it will "detach" from the container and return you to the terminal prompt.

$ docker run -d -p 8000:8000 node-docker
ce02b3179f0f10085db9edfccd731101868f58631bdf918ca490ff6fd223a93b

Docker started our container in the background and printed the Container ID on the terminal.

Again, let’s make sure that our container is running properly. Run the same curl command from above.

$ curl --request POST \
  --url http://localhost:8000/test \
  --header 'content-type: application/json' \
  --data '{
    "msg": "testing"
}'
{"code":"success","payload":[{"msg":"testing","id":"dc0e2c2b-793d-433c-8645-b3a553ea26de","createDate":"2020-09-01T17:36:09.897Z"}]}

Listing Containers

Since we ran our container in the background, how do we know if our container is running, or what other containers are running on our machine? Well, we can run the docker ps command. Just like on Linux, where we would run the ps command to see a list of processes on our machine, in the same spirit we can run the docker ps command, which will show us a list of containers running on our machine.

$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ce02b3179f0f node-docker “docker-entrypoint.s…” 6 minutes ago Up 6 minutes 0.0.0.0:8000->8000/tcp wonderful_kalam

The ps command tells us a number of things about our running containers. We can see the container ID, the image running inside the container, the command that was used to start the container, when it was created, the status, the ports that are exposed, and the name of the container.

You are probably wondering where the name of our container is coming from. Since we didn't provide a name for the container when we started it, Docker generated a random name. We'll fix this in a minute, but first we need to stop the container. To stop the container, run the docker stop command, which does just that, stops the container. You will need to pass the name of the container, or you can use the container ID.

$ docker stop wonderful_kalam
wonderful_kalam
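
As noted above, the container ID works just as well as the generated name. A quick sketch using the ID from our earlier docker ps output (yours will differ):

# Stop the container by its ID instead of its name
$ docker stop ce02b3179f0f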

Now rerun the docker ps command to see a list of running containers.

$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

Stopping, Starting, and Naming Containers

Docker containers can be started, stopped, and restarted. When we stop a container, it is not removed, but the status is changed to stopped and the process inside of the container is stopped. When we ran the docker ps command, the default output only shows running containers. If we pass the --all flag, or -a for short, we will see all containers on our system, whether they are stopped or started.

$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ce02b3179f0f node-docker “docker-entrypoint.s…” 16 minutes ago Exited (0) 5 minutes ago wonderful_kalam
ec45285c456d node-docker “docker-entrypoint.s…” 28 minutes ago Exited (0) 20 minutes ago agitated_moser
fb7a41809e5d node-docker “docker-entrypoint.s…” 37 minutes ago Exited (0) 36 minutes ago goofy_khayyam

If you’ve been following along, you should see several containers listed. These are containers that we started and stopped but have not been removed.

Let’s restart the container that we just stopped. Locate the name of the container we just stopped and replace the name of the container below in the restart command.

$ docker restart wonderful_kalam

Now list all the containers again using the ps command.

$ docker ps --all
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ce02b3179f0f node-docker “docker-entrypoint.s…” 19 minutes ago Up 8 seconds 0.0.0.0:8000->8000/tcp wonderful_kalam
ec45285c456d node-docker “docker-entrypoint.s…” 31 minutes ago Exited (0) 23 minutes ago agitated_moser
fb7a41809e5d node-docker “docker-entrypoint.s…” 40 minutes ago Exited (0) 39 minutes ago goofy_khayyam

Notice that the container we just restarted has been started in detached mode and has port 8000 exposed. Also observe the status of the container is “Up X seconds”. When you restart a container, it will be started with the same flags or commands that it was originally started with.

Let’s stop and remove all of our containers and take a look at fixing the random naming issue.

Stop the container we just started. Find the name of your running container and replace the name in the command below with the name of the container on your system.

$ docker stop wonderful_kalam
wonderful_kalam

Now that all of our containers are stopped, let's remove them. When a container is removed, it is no longer running, nor is it in the stopped status; the process inside the container has been stopped and the metadata for the container has been removed.

$ docker ps --all
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ce02b3179f0f node-docker “docker-entrypoint.s…” 19 minutes ago Up 8 seconds 0.0.0.0:8000->8000/tcp wonderful_kalam
ec45285c456d node-docker “docker-entrypoint.s…” 31 minutes ago Exited (0) 23 minutes ago agitated_moser
fb7a41809e5d node-docker “docker-entrypoint.s…” 40 minutes ago Exited (0) 39 minutes ago goofy_khayyam

To remove a container, simply run the docker rm command, passing the container name. You can pass multiple container names to the command at once. Again, replace the container names in the below command with the container names from your system.

$ docker rm wonderful_kalam agitated_moser goofy_khayyam
wonderful_kalam
agitated_moser
goofy_khayyam

Run the docker ps --all command again to see that all containers are gone.

Now let's address the pesky random name issue. Standard practice is to name your containers, for the simple reason that it is easier to identify what is running in the container and what application or service it is associated with. Just as good naming conventions for variables in your code make it simpler to read, so goes naming your containers.

To name a container, we just need to pass the --name flag to the run command.

$ docker run -d -p 8000:8000 --name rest-server node-docker
1aa5d46418a68705c81782a58456a4ccdb56a309cb5e6bd399478d01eaa5cdda
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1aa5d46418a6 node-docker “docker-entrypoint.s…” 3 seconds ago Up 3 seconds 0.0.0.0:8000->8000/tcp rest-server

There, that’s better. Now we can easily identify our container based on the name.

Conclusion

In this post, we learned about creating Docker images using a Dockerfile, tagging our images and managing images. Next we took a look at running containers, publishing ports, and running containers in detached mode. We then learned about managing containers by starting, stopping and restarting them. We also looked at naming our containers so they are more easily identifiable.

In part II, we’ll take a look at running a database in a container and connecting it to our application. We’ll also look at setting up your local development environment and sharing your images using Docker.

If you have any questions, please feel free to reach out on Twitter @pmckee and join us in our community Slack.
The post Getting Started with Docker Using Node.js (Part I) appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Getting Started with Docker Using Node – Part II

In part I of this series, we learned about creating Docker images using a Dockerfile, tagging our images and managing images. Next we took a look at running containers, publishing ports, and running containers in detached mode. We then learned about managing containers by starting, stopping and restarting them. We also looked at naming our containers so they are more easily identifiable.

In this post, we'll focus on setting up our local development environment. First, we'll take a look at running a database in a container and how we use volumes and networking to persist our data and allow our application to talk with the database. Then we'll pull everything together into a Compose file, which will allow us to set up and run a local development environment with one command. Finally, we'll take a look at connecting a debugger to our application running inside a container.

Local Database and Containers

Instead of downloading MongoDB, installing, configuring, and then running the Mongo database as a service, we can use the Docker Official Image for MongoDB and run it in a container.

Before we run MongoDB in a container, we want to create a couple of volumes that Docker can manage to store our persistent data and configuration. I like to use the managed volumes feature that Docker provides instead of using bind mounts. You can read all about volumes in our documentation.

Let’s create our volumes now. We’ll create one for the data and one for configuration of MongoDB.

$ docker volume create mongodb

$ docker volume create mongodb_config

Now we’ll create a network that our application and database will use to talk with each other. The network is called a user defined bridge network and gives us a nice DNS lookup service which we can use when creating our connection string.

$ docker network create mongodb

Now we can run MongoDB in a container and attach to the volumes and network we created above. Docker will pull the image from Hub and run it for you locally.

$ docker run -it --rm -d \
  -v mongodb:/data/db \
  -v mongodb_config:/data/configdb \
  -p 27017:27017 \
  --network mongodb \
  --name mongodb \
  mongo

Okay, now that we have a running MongoDB, let's update server.js to use MongoDB and not an in-memory data store.

const ronin     = require( 'ronin-server' )
const mocks     = require( 'ronin-mocks' )
const database  = require( 'ronin-database' )

const server = ronin.server()

database.connect( process.env.CONNECTIONSTRING )
server.use( '/', mocks.server( server.Router(), false, false ) )
server.start()

We've added the ronin-database module and updated the code to connect to the database and set the in-memory flag to false. We now need to rebuild our image so it contains our changes.

First let’s add the ronin-database module to our application using npm.

$ npm install ronin-database

Now we can build our image.

$ docker build --tag node-docker .

Now let’s run our container. But this time we’ll need to set the CONNECTIONSTRING environment variable so our application knows what connection string to use to access the database. We’ll do this right in the docker run command.

$ docker run \
  -it --rm -d \
  --network mongodb \
  --name rest-server \
  -p 8000:8000 \
  -e CONNECTIONSTRING=mongodb://mongodb:27017/yoda_notes \
  node-docker

Let’s test that our application is connected to the database and is able to add a note.

$ curl --request POST \
  --url http://localhost:8000/notes \
  --header 'content-type: application/json' \
  --data '{
    "name": "this is a note",
    "text": "this is a note that I wanted to take while I was working on writing a blog post.",
    "owner": "peter"
}'

You should receive the following JSON back from our service.

{"code":"success","payload":{"_id":"5efd0a1552cd422b59d4f994","name":"this is a note","text":"this is a note that I wanted to take while I was working on writing a blog post.","owner":"peter","createDate":"2020-07-01T22:11:33.256Z"}}

Using Compose to Develop locally

Awesome! We now have our MongoDB running inside a container and persisting its data to a Docker volume. We were also able to pass in the connection string using an environment variable.

But this can be a little time consuming, and it is also difficult to remember all the environment variables, networks, and volumes that need to be created and set up to run our application.

In this section, we’ll use a Compose file to configure everything we just did manually. We’ll also set up the Compose file to start the application in debug mode so that we can connect a debugger to the running node process.

Open your favorite IDE or text editor and create a new file named docker-compose.dev.yml. Copy and paste the below commands into that file.

version: '3.8'

services:
  notes:
    build:
      context: .
    ports:
      - 8000:8000
      - 9229:9229
    environment:
      - CONNECTIONSTRING=mongodb://mongo:27017/notes
    volumes:
      - ./:/code
    command: npm run debug

  mongo:
    image: mongo:4.2.8
    ports:
      - 27017:27017
    volumes:
      - mongodb:/data/db
      - mongodb_config:/data/configdb

volumes:
  mongodb:
  mongodb_config:

This compose file is super convenient because now we do not have to type all the parameters to pass to the docker run command. We can declaratively do that in the compose file.

We are exposing port 9229 so that we can attach a debugger. We are also mapping our local source code into the running container so that we can make changes in our text editor and have those changes picked up in the container.

One other really cool feature of using a Compose file is that we have service name resolution set up for us automatically. We are now able to use "mongo" in our connection string because that is the name we used in the Compose file for the service running our MongoDB container.
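
If you want to see that name resolution for yourself, one way to check it, assuming the getent utility is available in the node base image (which is Debian-based), is to resolve the service name from inside the running notes container:

# Resolve the "mongo" service name from inside the "notes" service container
$ docker-compose -f docker-compose.dev.yml exec notes getent hosts mongo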

To be able to start our application in debug mode, we need to add a line to our package.json file to tell npm how to start our application in debug mode.

Open the package.json file and add the following line to the scripts section.

"debug": "nodemon --inspect=0.0.0.0:9229 server.js"

As you can see we are going to use nodemon. Nodemon will start our server in debug mode and also watch for files that have changed and restart our server. Let’s add nodemon to our package.json file.

$ npm install nodemon

Let’s first stop our running application and the mongodb container. Then we can start our application using compose and confirm that it is running properly.

$ docker stop rest-server mongodb

$ docker-compose -f docker-compose.dev.yml up --build

If you get the following error: 'Error response from daemon: No such container', don't worry. That just means that you have already stopped the container or it wasn't running in the first place.

You'll notice that we pass the --build flag to the docker-compose command. This tells Docker to first build our image and then start it.

If all goes well, you should see both services start up in your terminal.

Now let’s test our API endpoint. Run the following curl command:

$ curl --request GET --url http://localhost:8000/notes

You should receive the following response:

{"code":"success","meta":{"total":0,"count":0},"payload":[]}

Connecting a Debugger

We’ll use the debugger that comes with the Chrome browser. Open Chrome on your machine and then type the following into the address bar.

about:inspect

The Chrome inspect page, listing the available remote targets, will open.

Click the “Open dedicated DevTools for Node” link. This will open the DevTools window that is connected to the running node.js process inside our container.

Let’s change the source code and then set a breakpoint. 

Add the following code to the server.js file on line 9 and save the file. 

server.use( '/foo', (req, res) => {
  return res.json({ "foo": "bar" })
})

If you take a look at the terminal where our compose application is running, you’ll see that nodemon noticed the changes and reloaded our application.

Navigate back to the Chrome DevTools and set a breakpoint on line 10 and then run the following curl command to trigger the breakpoint.

$ curl –request GET –url http://localhost:8000/foo

BOOM! You should see the code break on line 10, and now you are able to use the debugger just like you would normally. You can inspect and watch variables, set conditional breakpoints, view stack traces, and a bunch of other stuff.

Conclusion

In this post, we ran MongoDB in a container, connected it to a couple of volumes and created a network so our application could talk with the database. Then we used Docker Compose to pull all this together into one file. Finally, we took a quick look at configuring our application to start in debug mode and connected to it using the Chrome debugger.

If you have any questions, please feel free to reach out on Twitter @pmckee and join us in our community Slack.
The post Getting Started with Docker Using Node – Part II appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Secure from the Start: Shift Vulnerability Scanning Left in Docker Desktop

Application delivery velocity can be tripped up when security vulnerabilities are discovered after an app is deployed into production. Nothing is more detrimental to shipping new features to customers than having to go back and address vulnerabilities discovered in an app or image you already released. At Docker, we believe the best way to balance the needs for speed and security is to shift security left in the app delivery cycle as an integral part of the development process. 

Integrating security checks into Docker Scan was the driver behind the partnership with Snyk, one of the leading app security scan providers in the industry. This partnership, announced in May of this year, creates a vision for a simple and streamlined approach for developers to build and deploy secure containers. And today, I’m excited to share that the latest Docker Desktop Edge release includes Snyk vulnerability scanning. This allows Docker users to trigger local Docker file and local image scans directly from the Docker Desktop CLI. With the combination of Docker Scan and Snyk, developers gain visibility into open source vulnerabilities that can have a negative impact on the security of container images. Now you can extend your workflow to include vulnerability testing as part of your inner development loop. Triggered from the Docker Desktop CLI, the Snyk vulnerability scans extend the existing, familiar process of vulnerability detection, and allow for remediation of vulnerabilities earlier in the development process. This process of simple and continuous checks leads to fewer vulnerabilities checked into Docker Hub, a shorter CI cycle, and faster and more reliable deployment into production. 

With that, let me show you how it works.

To begin, authenticated Docker users can start running their scans by entering these Docker CLI commands:

To find their local image

$ docker pull username/imageName

And run a scan

$ docker scan username/imageName

The docker scan CLI command supports several flags, providing options for running scans:

The --exclude-base flag excludes base image vulnerabilities from the CLI scan results, allowing users to reduce the volume of reported vulnerabilities and focus vulnerability reporting on their own image updates.
The --json flag displays scan results in JSON format.
The --dependency-tree flag provides the mapping of image dependencies before listing vulnerability data.
The -f, --file flag indicates the location of the Dockerfile associated with the image, extending vulnerability scanning results using the contents of the Dockerfile to further identify potential vulnerabilities across all the image manifests.
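
For example, a sketch of combining a couple of these flags (the image name and Dockerfile path are just illustrations and assume the Dockerfile sits in the current directory):

# Scan an image, include its Dockerfile, and exclude base image vulnerabilities
$ docker scan --file Dockerfile --exclude-base username/imageName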

As shown above, you can combine multiple flags in a single CLI command for additional flexibility in consuming vulnerability data. Scans return scanned image data, including:

Vulnerability descriptions
Vulnerability severities
The image layer associated with the vulnerability, including the Dockerfile command, if you've associated the Dockerfile with the scan
Exploit maturity, so you can easily identify which vulnerabilities have a known functioning exploit
Available suggestions for remediation: rebuilding if the base image is out-of-date, slimmer alternative images that can help reduce vulnerabilities, or package upgrades that resolve a vulnerability

Invoking scanning through Docker Desktop CLI allows you to iteratively test for new vulnerabilities, while working on image updates, by:

Making image updates
Running a scan
Discovering new vulnerabilities introduced with the latest updates
Making more updates to remove these vulnerabilities
Confirming vulnerability removal by running another scan

You can start taking advantage of this today in the latest release of Docker Desktop Edge.

After you download the new bits, you can get more comprehensive details on the scan functionality in the Docker documentation.

Finally, we have an upcoming webinar that takes you through the inner workings of the enhanced security capabilities in this new release. You can get more information and sign up for the webinar at this link. 

And stay tuned for further updates on triggering vulnerability scans from the Docker Hub.  

Next steps:

Download the latest version of the Desktop Edge release
Review the Docker documentation
Attend the webinar on Thursday, September 24 at 10:00am PT, Find and Fix Container Image Vulnerabilities with Docker and Snyk

Sign up for a free Snyk ID and read the Snyk blog to learn more about the integration.
The post Secure from the Start: Shift Vulnerability Scanning Left in Docker Desktop appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

ICYMI: From Docker Straight to AWS Built-in

In July we announced a new strategic partnership with Amazon to integrate the Docker experience you already know and love with Amazon Elastic Container Service (ECS) with AWS Fargate. Over the last couple of months we have worked with the community on the beta experience in Docker Desktop Edge. Today we are excited to bring this experience to our entire community in Docker Desktop stable, version 2.3.0.5.

You can watch Carmen Puccio (Amazon) and me (Docker) and view the original demo in the recording of our latest webinar here.

What started off in the beta as a Docker plugin experience (docker ecs) has been pulled into Docker directly as a familiar docker compose flow. This is just the beginning, and we could use your input, so head over to the Docker Roadmap and let us know what you want to see as part of this integration.
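
At a high level, the flow looks something like the sketch below; the context name is just an example, and the exact prompts depend on how your AWS credentials are set up:

# Create a Docker context backed by Amazon ECS, then deploy your compose file to it
$ docker context create ecs myecscontext
$ docker context use myecscontext
$ docker compose up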

There is no better time to try it. Grab the latest Docker Desktop Stable. Then check out my example application, which will walk you through everything you need to know to deploy a Python application locally in development and then again directly to Amazon ECS in minutes, not hours.

Want more? Join us this Wednesday, September 16th at 10am Pacific, where Jonah Jones (Amazon), Peter McKee (Docker), and I will continue the discussion on Docker Run, our YouTube channel, with a live Q&A from our last webinar. We will be answering the top questions from the webinar and from the live audience.

DockTalk Q&A: From Docker Straight to AWS

The post ICYMI: From Docker Straight to AWS Built-in appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Check out the Azure CLI experience now available in Desktop Stable

Back in May we announced the partnership between Docker and Microsoft to make it easier to deploy containerized applications from the Desktop to the cloud with Azure Container Instances (ACI). Then in June we were able to share the first version of this as part of a Desktop Edge release, which allowed users to run existing Docker CLI commands straight against ACI, making getting started with containers in the cloud simpler than ever.

We are now pleased to announce that the Docker and ACI integration has moved into Docker Desktop stable 2.3.0.5 giving all Desktop users access to the simplest way to get containers running in the cloud. 

Getting started 

To get going as a new starter, all you need to do is upgrade your existing Docker Desktop to the latest stable version (2.3.0.5), store your image on Docker Hub so you can deploy it (you can get started with Hub here), and then create an ACI context to deploy it to. For a simple example of getting started with ACI, you can see our initial blog post on the Edge experience.
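
As a rough sketch of that last step (the myaci context name matches the one used later in this post, and you will be prompted to choose an Azure subscription and resource group):

# Log Docker into Azure, then create a context backed by Azure Container Instances
$ docker login azure
$ docker context create aci myaci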

More CLI commands

We have added some new features since we first released the Edge experience; one of the biggest changes was the addition of the new docker volume command. We added this because we wanted to make sure there was an easy way to create persistent state between runs of your containers. It is always good to use a dedicated service for a database, like Cosmos DB, but while you are getting started, volumes are a great way to initially store state.

This allows you to use existing volumes and create new ones. To get started with a new volume, we can begin by selecting our ACI context:

$ docker context use myaci 

Then we can create a volume in a similar way to the Docker CLI, though in this case we have a couple of cloud-specific variables we need to provide:

$ docker volume create --storage-account mystorageaccount --fileshare test-volume

Now I can use this volume either from my CLI:

$ docker run -v mystorageaccount/test-volume:/target/path myimage

Or from my Compose file:

myservice:
  image: nginx
  volumes:
    - mydata:/mount/testvolumes

volumes:
  mydata:
    driver: azure_file
    driver_opts:
      share_name: test-volume
      storage_account_name: mystorageaccount
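
To tie this together, here is a sketch of deploying a Compose file like this against the ACI context we created earlier (assuming the file is saved as docker-compose.yml in the current directory):

# With the ACI context selected, compose up deploys the services to Azure Container Instances
$ docker context use myaci
$ docker compose up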

Along with this, the CLI now supports some of the most popular commands that were previously missing, including stop, start, and kill.

Try it out

If you are after some ideas of what you can do with the experience, you can check out Guillaume's blog post on running Minecraft in ACI. The whole thing takes about 15 minutes and is a great example of how simple it is to get containers up and running.

Microsoft offers $200 of free credit to use in your first 30 days, which is a great way to try out the experience. Once you have an account, you will just need Docker Desktop and a Hub repo with your images saved in it, and you can start deploying!

To get started today with the new Azure ACI experience, download Docker Desktop 2.3.0.5 and try out the experience yourself. If you enjoy the experience, have feedback on it or other ideas on what Docker should be working on please reach out to us on our Roadmap.
The post Check out the Azure CLI experience now available in Desktop Stable appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

10 Years of OpenStack – Shane Wang at Intel

Storytelling is one of the most powerful means to influence, teach, and inspire the people around us. To celebrate OpenStack’s 10th anniversary, we are spotlighting stories from the individuals in various roles from the community who have helped to make OpenStack and the global Open Infrastructure community successful.  Here, we’re talking to Shane Wang from… Read more »
Source: openstack.org

10 Years of OpenStack – Julia Kreger at Red Hat

Storytelling is one of the most powerful means to influence, teach, and inspire the people around us. To celebrate OpenStack’s 10th anniversary, we are spotlighting stories from the individuals in various roles from the community who have helped to make OpenStack and the global Open Infrastructure community successful.  Here, we’re talking to Julia Kreger from… Read more »
Source: openstack.org

10 Years of OpenStack – Mohammed AbuAisha at Radix Technologies

Happy 10 years of OpenStack! Millions of cores, 100,000 community members, 10 years of you. Storytelling is one of the most powerful means to influence, teach, and inspire the people around us. To celebrate OpenStack’s 10th anniversary, we are spotlighting stories from the individuals in various roles from the community who have helped to make… Read more »
Source: openstack.org