Image Analysis 4.0 with new API endpoint and OCR model in preview

Enterprises and hobbyists alike have been using Azure Computer Vision’s Image Analysis API to garner various insights from their images. These insights help power scenarios such as digital asset management, search engine optimization (SEO), image content moderation, and alt text for accessibility, among others. 

Newly improved features, including Read (OCR)

We are thrilled to announce the preview release of Computer Vision Image Analysis 4.0 which combines existing and new visual features such as read optical character recognition (OCR), captioning, image classification and tagging, object detection, people detection, and smart cropping into one API. One call is all it takes to run all these features on an image. 

The OCR feature integrates more deeply with the Computer Vision service and includes performance improvements optimized for image scenarios, making OCR easy to use in user interfaces and near real-time experiences. Read now supports 164 languages, including Cyrillic, Arabic, and Hindi.
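For readers who want to try the unified endpoint directly, here is a minimal sketch of a single call requesting several features at once with Python’s requests library. The resource endpoint, key, and image URL are placeholders, and the api-version string, feature names, and response keys are assumptions based on the preview at the time of writing; consult the Image Analysis documentation for the current values.

```python
import requests

# Placeholders: substitute your own Computer Vision resource and key
endpoint = "https://<your-resource>.cognitiveservices.azure.com"
key = "<your-key>"

response = requests.post(
    f"{endpoint}/computervision/imageanalysis:analyze",
    params={
        "api-version": "2022-10-12-preview",  # assumed preview version
        # One call requests OCR, captioning, tagging, and more together
        "features": "read,caption,tags,objects,people,smartCrops",
    },
    headers={"Ocp-Apim-Subscription-Key": key},
    json={"url": "https://example.com/sample.jpg"},  # any reachable image URL
)
response.raise_for_status()

result = response.json()
# Response keys assumed from the preview schema
print(result.get("captionResult"))
print(result.get("readResult"))
```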

Tested at scale and ready for deployment 

Microsoft’s own products, including PowerPoint, Designer, Word, Outlook, Edge, and LinkedIn, use Vision APIs to power design suggestions, alt text for accessibility, SEO, document processing, and content moderation. 

You can get started with the preview by trying out the visual features with your images on Vision Studio. Upgrading from a previous version of the Computer Vision Image Analysis API to V4.0 is simple with these instructions.

We will continue to release breakthrough vision AI through this new API over the coming months, including capabilities powered by the Florence foundation model featured in the keynote at CVPR, this year’s premier computer vision conference. 

Additional Computer Vision services

Spatial Analysis is also in preview. You can use the spatial analysis feature to create apps that can count people in a room, understand dwell times in front of a retail display, and determine wait times in lines. Build solutions that enable occupancy management and social distancing, optimize in-store and office layouts, and accelerate the checkout process. By processing video streams from physical spaces, you're able to learn how people use them and maximize the space's value to your organization.

The Azure Face service provides AI algorithms that detect, recognize, and analyze human faces in images. Facial recognition software is important in many different scenarios, such as identity verification, touchless access control, and face blurring for privacy. Face service access is limited based on eligibility and usage criteria in order to support our Responsible AI principles. Face service is only available to Microsoft-managed customers and partners. Use the Face Recognition intake form to apply for access. For more information, see the Face limited access page.

Computer Vision and Responsible AI

We are excited to see how our customers use Computer Vision’s Image Analysis API with these new and updated features. Our technology advancements are also guided by Microsoft’s Responsible AI process, and our principles of fairness, inclusiveness, reliability and safety, transparency, privacy and security, and accountability. We put these ethical standards into practice through the Office of Responsible AI (ORA)—which sets our rules and governance processes, the AI Ethics and Effects in Engineering and Research (Aether) Committee—which advises our leadership on the challenges and opportunities presented by AI innovations, and Responsible AI Strategy in Engineering (RAISE)—a team that enables the implementation of Microsoft Responsible AI rules across engineering groups.

Get started

Start improving how you analyze images with Image Analysis 4.0, featuring a unified API endpoint and a new OCR model. 

Computer Vision documentation.
Image Analysis documentation. 
Quick Start for Image Analysis. 
Vision Studio for demoing product solutions.

Source: Azure

Resolve Vulnerabilities Sooner With Contextual Data

OpenSSL 3.0.7 and “Text4Shell” might be the most recent critical vulnerabilities to plague your development team, but they won’t be the last. In 2021, critical vulnerabilities reached a record high. Attackers are even reusing their work, with over 50% of zero-day attacks this year being variants of previously patched vulnerabilities. 

With each new security vulnerability, we’re forced to re-examine our current systems and processes. If you’re impacted by OpenSSL or Text4Shell (aka CVE-2022-42889), you’ve probably asked yourself, “Are we using Apache Commons Text (and where)?” or “Is it a vulnerable version?” — and similar questions. And if you’re packaging applications into container images and running those on cloud infrastructure, then a breakdown by image, deployment environment, and impacted Commons-Text version would be extremely useful. 
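One low-tech way to answer the “are we using it, and where” question is to generate a software bill of materials for each image you ship. As a minimal sketch, the following script drives the experimental docker sbom plugin (if you have it installed) from Python; the image names are hypothetical, and the syft-json output format is an assumption you should verify against your plugin version.

```python
import json
import subprocess

IMAGES = ["myorg/web:latest", "myorg/api:latest"]  # hypothetical image names

for image in IMAGES:
    # "docker sbom" (syft-based, experimental) emits a bill of materials per image
    completed = subprocess.run(
        ["docker", "sbom", "--format", "syft-json", image],
        capture_output=True, text=True, check=True,
    )
    for pkg in json.loads(completed.stdout).get("artifacts", []):
        # CVE-2022-42889 affects Apache Commons Text before 1.10.0
        if pkg.get("name") == "commons-text":
            print(f"{image}: commons-text {pkg.get('version')}")
```

This tells you whether and where Commons Text appears, but it says nothing about which deployment environment runs each image, which is exactly the gap contextual data fills.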

Developers need contextual data to help cut through the noise and answer these questions, but gathering information takes time and significantly impacts productivity. An entire day is derailed if developers must context switch and spend countless hours researching, triaging, and fixing these issues. So, how do we stop these disruptions and surface crucial data in a more accessible way for developers?

Start with continuously examining images

Bugs, misconfigurations, and vulnerabilities don’t stop once an image is pushed to production, and neither should development. Improving images is a continuous effort that requires a constant flow of information before, during, and after development.

Before images are used, teams spend a significant amount of time vetting and selecting them. That same amount of effort needs to be put into continuously inspecting those same images. Otherwise, you’ll find yourself in a reactive cycle of unnecessary rework, wasted time, and overall developer frustration.

That’s where contextual data comes in. Contextual data ties directly to the situation around it to give developers a broader understanding. As an example, contextual data for vulnerabilities gives you clear and precise insights to understand what the vulnerability is, how urgent it is, and its specific impact on the developer and the application architecture — whether local, staging, or production.

Contextual data reduces noise and helps the developer know the what and the where so they can prioritize making the correct changes in the most efficient way. What does contextual data look like? It can be…

- A comparison of detected vulnerabilities between images built from a PR branch and the image version currently running in production
- A comparison between images that use the same custom base image
- An alert sent into a Slack channel that’s connected to a GitHub repository when a new critical or high CVE is detected in an image currently running in production
- An alert or pull request to update to a newer version of your base image to remediate a certain CVE

Contextual data makes it faster for developers to locate and remediate the vulnerabilities in their application.

Use Docker to surface contextual data

Contextual data is about providing more information that’s relevant to developers in their daily tasks. How does it work?

Docker can index and analyze public and private images within your registries to provide insights about the quality of your images. For example, you can get open source package updates, receive alerts about new vulnerabilities as security researchers discover them, send updates to refresh outdated base images, and be informed about accidentally embedded secrets like access tokens. 

The screenshot below shows what appears to be a very common list of vulnerabilities for a selected Docker image. But there’s a lot more data on this page that correlates to the image:

- The page breaks the vulnerabilities up by layers and base images, making it easy to assess where to apply a fix for a detected vulnerability.
- Image refs in the right column highlight that this version of the image is currently running in production.
- We also see that this image represents the current head commit in the corresponding Git repository, and we can see which Dockerfile it was built from.
- The current and potential other base images are listed for comparison.

An image report with a list of common CVEs — including Text4Shell

With Slack, notifications are sent to the channels your team already uses. The screenshot below shows an alert sent into a Slack channel that’s configured to show activity for a selected set of Git repositories. Besides activity like commits, CI builds, and deployments, you can see the Text4Shell alert providing concise and actionable information to developers collaborating in this channel:

Slack update on the critical Text4Shell vulnerability

You can also get suggestions to remediate certain categories of vulnerabilities and raise pull requests to update vulnerable packages like those in the following screenshot:

Remediating the Text4Shell CVE via a PR and comparing to main branch

Find out more about this type of information for public images like Docker Official Images or Docker Verified Publisher images using our Image Vulnerability Database.

Vulnerability remediation is just the beginning

Contextual data is essential for faster resolution of vulnerabilities, but it’s more than that. With the right data at the right time, developers are able to work faster and spend their time innovating instead of drowning in security tickets.

Imagine you could assess your production images today to find out where you’re potentially going to be vulnerable. Your teams could have days or weeks to prepare to remediate the next critical vulnerability, such as OpenSSL’s forthcoming notification of a new critical CVE on Tuesday, November 1st, 2022.

Searching for Debian OpenSSL on dso.docker.com

Interested in getting these types of insights and learning more about providing contextual data for happier, more productive devs? Sign up for our Early Access Program to harness these tools and provide invaluable feedback to help us improve our product!
Source: https://blog.docker.com/feed/

Refit transformations to prepare data at scale with Amazon SageMaker Data Wrangler

Today, we are excited to announce support for refitting transformations in Amazon SageMaker Data Wrangler. To make data usable with algorithms such as XGBoost, data scientists must convert non-numeric values into numeric values using transformations such as one-hot encoding. Because transformations like one-hot encoding depend on the data, they are often called fitted transformations. As data changes over time, these transformations must be updated, or refit, to account for those changes. In addition, when you work with a sample dataset, transformations must be updated to account for differences between the sample and the larger dataset. Using transformations like one-hot encoding introduces additional information that must be tracked and captured in the data preparation pipeline; omitting or mistracking this information can lead to errors in the data preparation process. Without support for refitting transformations, many data scientists had no easy way to specify when a fitted version of a transformation should be applied to new data and when the transformation should be refit. Data scientists also had no easy way to generate updated versions of their transformation pipelines when refitting on new datasets. 
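To see why refitting matters, consider this minimal scikit-learn sketch (illustrative only; it is not Data Wrangler’s API). An encoder fitted on a sample dataset cannot represent categories that only appear in the full dataset until it is refit:

```python
import pandas as pd
from sklearn.preprocessing import OneHotEncoder  # requires scikit-learn >= 1.2

# Encoder fitted on a sample dataset
sample = pd.DataFrame({"color": ["red", "blue", "red"]})
encoder = OneHotEncoder(handle_unknown="ignore", sparse_output=False)
encoder.fit(sample[["color"]])

# The full dataset contains a category ("green") the sample never had
full = pd.DataFrame({"color": ["red", "blue", "green"]})
print(encoder.transform(full[["color"]]))  # "green" becomes an all-zero row

# Refitting updates the encoding to cover the new category
encoder.fit(full[["color"]])
print(encoder.transform(full[["color"]]))  # three distinct one-hot columns
```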
Source: aws.amazon.com

Announcing increased default quota values for AWS IAM Identity Center

AWS IAM Identity Center (successor to AWS Single Sign-On) now supports higher default quotas to help you scale your environment. With the increased quota, you can create and assign up to 2,000 permission sets in an Identity Center instance. You can also assign up to 100,000 users and 100,000 groups to up to 3,000 applications and accounts (combined) that are available through the AWS access portal.
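As a quick way to see how close you are to the new defaults, here is a minimal boto3 sketch that counts permission sets in an Identity Center instance; the instance ARN is a placeholder.

```python
import boto3

sso_admin = boto3.client("sso-admin")
instance_arn = "arn:aws:sso:::instance/ssoins-EXAMPLE"  # placeholder ARN

# ListPermissionSets is paginated, so walk all pages and count
paginator = sso_admin.get_paginator("list_permission_sets")
total = sum(
    len(page["PermissionSets"])
    for page in paginator.paginate(InstanceArn=instance_arn)
)
print(f"{total} of 2,000 default-quota permission sets in use")
```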
Source: aws.amazon.com