UCLA: Building the future of higher education technology with APIs

Imagine you work at a university, and every time you need to use department funds to pay for something, you're required to go through a protracted, multi-step process. Or suppose you're a student trying to review your academic record, then browse and enroll in the classes you need, but different parts of the process are scattered across different apps and websites, some with conflicting information. Today's consumers expect engaging, low-friction digital experiences from businesses, and today's students expect similarly straightforward, smooth experiences from institutions of higher learning.

Now imagine you're part of the university IT staff that has to replace old processes with digital experiences. You wouldn't want to do a custom integration for each project, for each new use case in which a faculty member needs to buy something, a student needs class information, and so on. Instead, you need data and functionality to be simply, repeatably, and scalably available to developers for new uses. And you wouldn't want these systems to be merely available: you'd need them to be painless for different collaborators to find, resilient to spikes in traffic, simple to govern en masse, and easy to secure.

This is the kind of challenge the IT team at the University of California, Los Angeles (UCLA) faced. The school has a long history of accolades for its great IT, but technology never stands still, and to move as quickly and agilely as students, staff, and partners expect, UCLA needed a modern, API-first approach to building apps and digital services.

Curtis Fornadley, program manager for Enterprise Integration at UCLA, said that like many large organizations, the university has many legacy systems that need to be leveraged for modern applications. “Previously, UCLA's team ran an enterprise service bus, with a homegrown gateway for SOAP services,” he explained. “But SOAP-based services are difficult to scale, and managing them often involves locking them down, which clashes with our need to make data and functionality easier for developers to use.”

Today's API-first architectures are designed to be scaled across decentralized teams, letting administrators apply governance and security while still letting developers work faster by easily harnessing the resources they need. Recognizing the need to adopt such an architecture at UCLA, Fornadley formed a comprehensive API program proposal that culminated in the university adopting Apigee, Google Cloud's API management platform, to bring the vision to life. “We needed API management to help people across the university more efficiently create new solutions,” he said.

Cornerstones of this vision include the development of the Ascend Financial System and the Student Information System. The former extends the university's financial system APIs, making them accessible to developers from various departments via secure and scalable self-service capabilities. The Student Information System encompasses APIs that provide real-time access to students' academic, financial, and personal records for various UCLA applications on campus. With these two projects alone, the number of APIs, and the number of applications using and depending on them, increased considerably, requiring comprehensive management to keep services online, monitor their usage, and authenticate access to them. Without a campus-wide API program, the success of such initiatives would be threatened by fragmented API experiences for both internal developers and partners.

UCLA currently manages over 200 APIs and is building out a hub-and-spoke model in which developers use a self-service portal to find and access the services they need. With federated, campus-wide API governance in place, the program ensures a consistent experience for all parties and establishes authoritative single “sources of truth” for developers leveraging data in applications.

Although UCLA's API program is still growing, many results already speak for themselves. On-campus Apigee usage has grown from 1 million calls in 2020 to over 11 million (so far) in 2021. The transition of all APIs from the homegrown gateway to Apigee is completing this month, and usage by the end of the year is expected to reach at least 49 million calls.

The program also provides a foundation for myriad innovations going forward. By monitoring API usage and generating analytics, for example, the university is learning which services are being leveraged in interesting or popular ways, which will guide future investments, and it is gaining insights to secure APIs against evolving threats. Additionally, APIs are helping UCLA activate its vast data stores, both by breaking down the silos between them and by making them connectable to services in the cloud. Not least of all, APIs are making it easier for partners to work with UCLA, enabling faster, simpler collaboration and sharing of services.

To try Apigee for free and learn how it can help your organization, click here.
Source: Google Cloud Platform

Docker for Node.js Developers: 5 Things You Need to Know Not to Fail Your Security

Guest post by Liran Tal, Snyk Director of Developer Advocacy 

Docker container images have now been downloaded more than 318 billion times in total. With millions of applications available on Docker Hub, container-based applications are popular and offer an easy way to consume and publish applications.

That said, a naive approach to building your own Docker images for Node.js web applications can come with many security risks. So, how do we make security an essential part of Docker for Node.js developers?

Many articles have been written on this topic, yet sadly without thoughtful consideration of security and production best practices for building Node.js Docker images. That is the focus of this article, and of the demos I shared on a recent Docker Build show with Peter McKee.

Before we jump into the gist of Docker for Node.js and building Docker images, let’s have a look at some frequently asked questions on the topic.

How do I dockerize Node.js applications?

Running your Node.js application in a Docker container can be as simple as copying over the project's directory and installing all the required npm packages, but there are many security and production-related concerns that you might miss. These production-grade tips are laid out in the following guide on containerizing Node.js web applications with Docker, which covers everything from choosing the right Docker base image and using multi-stage builds, to managing secrets safely and properly enabling production-related framework configuration.
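To make this concrete, here is a minimal Dockerfile sketch for a Node.js web application. The entry point (server.js), the exposed port, and the base image tag are illustrative assumptions, not prescriptions from the guide above:

# Minimal sketch of a Node.js Dockerfile; server.js and port 3000 are
# hypothetical placeholders for your own application.
FROM node:15-slim

WORKDIR /usr/src/app

# Copy the dependency manifests first so this layer is cached between builds
COPY package.json package-lock.json ./
RUN npm ci --only=production

# Copy the application source
COPY . .

# Run as the unprivileged "node" user built into the official images
USER node

EXPOSE 3000
CMD ["node", "server.js"]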

This article focuses on the information you need to better understand the impact of choosing the right Node.js Docker base image for your web application and will help you find the most secure Docker image available for your application.  

How is Docker helpful for Node.js developers?

Packaging your Node.js application in a container lets you bundle your complete application, including the runtime, configuration, and OS-level dependencies, everything required for your web application to run, across different platforms and CPU architectures. These deployable artifacts are called container images. They enable easily reproducible builds and give Node.js developers a way to run the same project or product in all environments.

Finally, Docker containers allow you to experiment more easily with new platform releases or other changes without requiring special permissions, or setting up a dedicated environment to run a project.
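As a quick sketch, building and running such a container locally takes only two commands; the image name and port mapping here are illustrative:

$ docker build -t my-node-app .
$ docker run --rm -p 3000:3000 my-node-app

The same image can then run unchanged on a teammate's laptop, a CI runner, or a production host.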

1. Choose the right Node.js Docker base image for your application

When creating a Docker image for a Node.js project, we build our own application image based on another Docker image, which we pull from Docker Hub. This is what we refer to as the base image. The base image is the building block of the new Docker image you are about to build for your Node.js application.

The selection of a base image is critical because it significantly impacts everything from Docker image build speed to the security and performance of your web application. This is so critical that Docker and Snyk co-wrote a practical guide focused on container image security for developer teams.

It’s quite possible that you are choosing a full-fledged operating system image based on Debian or Ubuntu, because it enables you to utilize all the tooling and libraries available in these images. However, this comes at a price. When a base image has a security vulnerability, you will inherit it in your newly created image. Why would you want to start off on bad terms by defaulting to a big base image that contains many vulnerabilities?

When we look at base images, many of the security vulnerabilities belong to the Operating System (OS) layer the base image uses. Snyk's 2019 research, Shifting Docker Security Left, showed that the number of vulnerabilities brought in by the OS layer can vary greatly depending on the flavor you choose.
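As a rough illustration, you can compare flavors yourself with the Snyk CLI (covered in detail in the next section); the tags below are just examples:

$ snyk container test node:15
$ snyk container test node:15-slim
$ snyk container test node:15-alpine

Each run reports the vulnerability count contributed by that image's OS layer, so you can see the difference the flavor makes before committing to a base image.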

2. Scan your Node.js Docker image during development

Creating a Docker image based on other images, as well as rebuilding those images, can introduce new vulnerabilities, but there's a way for you to stay on top of them.

Treat the Docker image build process just like any other development-related activity. Just as you test the code you write, you should test the Docker images you build.

These tests include static file checks—also known as linters—to ensure you’re avoiding security pitfalls and other bad patterns in your Dockerfile. We’ve outlined some of these in our Docker image security best practices. If you’re a Node.js application developer you’ll also want to read through this step-by-step 10 best practices to containerize Node.js web applications with Docker.
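The article doesn't prescribe a specific linter, but as one example, hadolint is a popular open source Dockerfile linter that you can run locally or in CI; this invocation is a sketch:

$ docker run --rm -i hadolint/hadolint < Dockerfile

It flags common Dockerfile anti-patterns, such as an unpinned latest base image tag, before they ever reach a build.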

Connecting your git repositories to Snyk is also an excellent choice. Snyk supports native integrations with GitHub, GitLab, Bitbucket, and Azure Repos. Having a git integration means that we can scan your pull requests and annotate them with security information if we find security vulnerabilities. This allows you to put gates in place and deny merging a pull request if it introduces new security vulnerabilities.

If you need more flexibility for your Continuous Integration (CI), or a closely integrated developer experience, meet the Snyk CLI.

The CLI allows you to easily test your Docker container image. Let's say you've built a Docker image locally and tagged it as nodejs:notification-v99.9. We can test it as follows:

Install the Snyk CLI:

$ npm install -g snyk

Then let the Snyk CLI automatically grab an API token for you with:

$ snyk auth

Finally, scan the locally built image:

$ snyk container test nodejs:notification-v99.9

Test results are then printed to the screen, along with information about each CVE and the dependency path that introduces it, so you know which OS dependency is responsible for the vulnerability.

Following is an example output for testing Docker base image node:15:

✗ High severity vulnerability found in binutils
Description: Out-of-Bounds
Info: https://snyk.io/vuln/SNYK-DEBIAN9-BINUTILS-404153
Introduced through: dpkg/dpkg-dev@1.18.25, libtool@2.4.6-2
From: dpkg/dpkg-dev@1.18.25 > binutils@2.28-5
From: libtool@2.4.6-2 > gcc-defaults/gcc@4:6.3.0-4 > gcc-6@6.3.0-18+deb9u1 > binutils@2.28-5
Introduced by your base image (node:15)

✗ High severity vulnerability found in binutils
Description: Integer Overflow or Wraparound
Info: https://snyk.io/vuln/SNYK-DEBIAN9-BINUTILS-404253
Introduced through: dpkg/dpkg-dev@1.18.25, libtool@2.4.6-2
From: dpkg/dpkg-dev@1.18.25 > binutils@2.28-5
From: libtool@2.4.6-2 > gcc-defaults/gcc@4:6.3.0-4 > gcc-6@6.3.0-18+deb9u1 > binutils@2.28-5
Introduced by your base image (node:15)

Organization: snyk-demo-567
Package manager: deb
Target file: Dockerfile
Project name: docker-image|node
Docker image: node:15
Platform: linux/amd64
Base image: node:15
Licenses: enabled

Tested 412 dependencies for known issues, found 554 issues.

Base Image  Vulnerabilities  Severity
node:15     554              56 high, 63 medium, 435 low

Recommendations for base image upgrade:

Alternative image types
Base Image                Vulnerabilities  Severity
node:current-buster-slim  53               10 high, 4 medium, 39 low
node:15.5-slim            72               18 high, 7 medium, 47 low
node:current-buster       304              33 high, 43 medium, 228 low
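In a CI pipeline you can take this a step further and make the scan a merge gate. As a sketch, assuming the same illustrative image tag as above, the following fails the build only for high-severity issues and supplies the Dockerfile so results include base image context:

$ snyk container test nodejs:notification-v99.9 --file=Dockerfile --severity-threshold=high

The CLI exits with a non-zero status when issues at or above the threshold are found, which most CI systems treat as a failed step.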

3. Fix your Node.js runtime vulnerabilities in your Docker images

An often overlooked detail when managing the risk of Docker container images is the application runtime itself. Whether you're practicing Docker for Java or running Docker for Node.js web applications, the Node.js application runtime itself may be vulnerable.

You should be aware of and follow Node.js security releases and the Node.js security policy. Instead of manually keeping up with these, take advantage of Snyk to find Node.js security vulnerabilities as well.
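When a Node.js security release lands, the fix for a containerized application is usually a base image bump followed by a rebuild and rescan. A minimal sketch, with an illustrative version number:

# Bump the base image to a release line that includes the security fix.
# (15.14 is an illustrative patched version; check nodejs.org for the
# actual security release that applies to you.)
FROM node:15.14-slim

Then rebuild the image and run snyk container test again to confirm the runtime vulnerability is gone.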

To give you more context on security vulnerabilities across the different Node.js base image tags, I scanned some of them with the Snyk CLI and plotted the results on a logarithmic-scale chart. You can see that:

- The default node base image tag, also tagged as node:latest, bundles more than 500 security vulnerabilities, and also introduces 2 security vulnerabilities in the Node.js runtime itself. That should worry you if you're currently running a Node.js 15 version in production and haven't patched or fixed it.
- The node:alpine base image tag might not bundle vulnerable OS dependencies in the base image (in the chart, it has no bar for OS vulnerabilities), but it still ships a vulnerable version of the latest Node.js runtime (version 15).
- If you're running an unsupported version of Node.js, for example Node.js 10, it is vulnerable and no longer receiving any security updates.

If you were to choose Node.js version 15, the latest version released at the time of writing, you would actually be exposing yourself not only to 561 security vulnerabilities within this container, but also to two security vulnerabilities in the Node.js runtime itself.

You can see the scan results for this image at this public image-testing URL: https://snyk.io/test/docker/node:15.5.0. You're welcome to test other Node.js base image tags you're using with this public and free Docker scanning service: https://snyk.io/test.

Security is now an integral part of the Docker workflow, with Snyk powering container scanning in Docker Hub and Docker Desktop. In fact, if you’re using Docker as a development platform, you should review our Snyk and Docker Vulnerability Cheatsheet.
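If you're on Docker Desktop, the same Snyk-powered scan is available directly through the Docker CLI; a sketch (the docker scan command shipped with Docker Desktop at the time of writing, and you may be prompted to accept a license and authenticate on first run):

$ docker scan node:15

This runs a Snyk scan of the image without installing the Snyk CLI separately.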

If you already have a Docker user account, you can use it to connect to Snyk and quickly import your Docker Hub repositories with up to 200 free scans per month. 

4. Monitor your deployed Docker images for your Node.js applications

Once you have Docker images built, you're probably pushing them to a Docker registry that keeps track of the images, so that they can be deployed and spun up as functional container applications.

Why should we monitor Docker base images?

If you're practicing all of the security guidelines we've covered so far, scanning and fixing base images, that's great. However, keep in mind that new security vulnerabilities are discovered all the time. If you have 78 security vulnerabilities in your image now, that doesn't mean you won't have 100 tomorrow morning when new CVEs are reported that impact your running containers in production. That's why monitoring your registry of container images, those that you're using to deploy containers, is crucial: it ensures you find out about new security issues quickly and can remediate them.
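Beyond one-off tests, the Snyk CLI can snapshot an image for continuous monitoring, so you're alerted when new CVEs affecting it are published. A sketch, reusing the same illustrative tag:

$ snyk container monitor nodejs:notification-v99.9

Snyk records the image's dependency snapshot and re-evaluates it as new vulnerability data arrives, notifying you instead of requiring a manual rescan.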

If you’re using a paid Docker Hub registry for your images, you might have already seen the integrated Docker security scanning by Snyk in Docker Hub. 

You can also integrate many Docker image registries with the Snyk app directly. For example, you can import images from Docker Hub, ACR, ECR, GCR, or Artifactory, and Snyk will scan these regularly for you and alert you via Slack or email about any security issues found.

5. Follow security guidelines and production-grade recommendations for a secure and optimal Node.js Docker image

Congratulations for keeping up with all the security guidelines so far!

To wrap up, if you want to dive deep into security best practices for building optimal Docker images for Node.js and Java applications, check out these resources:

- 10 Docker Security Best Practices – detailed security practices that you should follow when building Docker base images and when pulling them too, as it also introduces the reader to Docker content trust.
- Are you a Java developer? You'll find this resource valuable: Docker for Java developers: 5 things you need to know not to fail your security.
- 10 best practices to containerize Node.js web applications with Docker – if you're a Node.js developer, you are going to love this step-by-step walkthrough showing you how to build secure and performant Docker base images for your Node.js applications.

Start testing and fixing your container images with Snyk and your Docker ID.
Source: https://blog.docker.com/feed/

Amazon AppStream 2.0 adds support for real-time audio/video in a web browser

Amazon AppStream 2.0 now supports real-time audio-video (AV) by seamlessly redirecting local webcam video input into AppStream 2.0 streaming sessions in a web browser. Previously, this feature was only available in the AppStream 2.0 client for Windows. With real-time AV support available in the web browser, your users can use AV collaboration and media applications within their AppStream 2.0 streaming sessions and connect from a wide range of client devices, including Windows PCs, Macs, Chromebooks, and thin clients. Your users can collaborate without having to leave their AppStream 2.0 sessions and without managing additional client software.
Source: aws.amazon.com

AWS Storage Gateway adds support for AWS PrivateLink for Amazon S3 and Amazon S3 Access Points

AWS Storage Gateway adds support for AWS PrivateLink for Amazon S3 and Amazon S3 Access Points. If you use Amazon S3 File Gateway with your on-premises gateway (VMware, Microsoft Hyper-V, Linux Kernel-based Virtual Machine (KVM), or the AWS Storage Gateway Hardware Appliance), you can now establish a private connection from your gateway directly to Amazon S3 without requiring an HTTP proxy.
Source: aws.amazon.com

AWS Systems Manager Application Manager now supports full lifecycle management of AWS CloudFormation templates and stacks

Today, AWS announces a new capability of Application Manager, a feature of AWS Systems Manager, that lets customers manage and deploy their AWS CloudFormation templates and stacks without leaving the Application Manager console. With Application Manager, customers can discover and manage applications across multiple AWS services such as AWS Launch Wizard, AWS Service Catalog AppRegistry, AWS Resource Groups, Amazon EKS (Amazon Elastic Kubernetes Service), and Amazon ECS (Amazon Elastic Container Service). This new capability gives customers a ready-to-use solution for managing the lifecycle of CloudFormation templates and stacks without having to set up Amazon Simple Storage Service (Amazon S3) or version control systems for template management.
Source: aws.amazon.com

New digital course: Amazon S3 Business Continuity and Disaster Recovery

AWS Training and Certification is pleased to introduce a free digital course: Amazon Simple Storage Service (Amazon S3) Business Continuity and Disaster Recovery. In this 50-minute, intermediate-level course, you will learn how to implement a business continuity and disaster recovery plan for your Amazon S3 implementation. It is aimed at cloud architects, storage architects, developers, and operations engineers, and includes interactive lessons as well as a quiz to check your knowledge.
Source: aws.amazon.com