A New Chapter for Video Uploads on WordPress.com

Today we’re excited to announce that you can now add chapter breaks to the videos you upload to your website with our VideoPress feature. Chapters offer a quick way to navigate longer videos and can be a great addition for your viewers.

Streamlined interface

We’ve built a streamlined and easy-to-use interface for your viewers to interact with video chapters. You can hover over the timeline to preview the next chapter and then simply click to navigate to it. The current chapter name is shown after the video timecode, and clicking it opens a menu that lets you quickly jump to the start of any chapter:

How to add chapters to your videos

To add chapters to your video, all you need to do is edit its description in the block editor and add the timestamp for each chapter, followed by a title you’d like to display:
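For example, a description along these lines defines three chapters (the timestamps and titles below are illustrative; each line pairs a timestamp with the chapter title to display):

```
0:00 Introduction
1:12 Setting up the video block
3:45 Publishing and sharing
```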

After saving, you’ll see the video block update and automatically display your chapters.

In the video below — which is a showcase for WordPress 6.1 — you can see how chapters work and look. Play around with the bottom toolbar to navigate to different chapters and bring up the chapter list.

We hope you enjoy this feature! Please share any feedback you have or an example of where you’ve used chapters for your videos. We love to see our features in action!

VideoPress is available on our WordPress.com Premium, Business and eCommerce plans. If you have a self-hosted site, check out Jetpack VideoPress to get high-quality and ad-free videos for your site.
Source: WordPress.com

New control plane connectivity and isolation options for your GKE clusters

Once upon a time, all Google Kubernetes Engine (GKE) clusters used public IP addressing for communication between nodes and the control plane. In response to your security concerns, we then introduced private clusters enabled by VPC peering. To consolidate these connectivity types, in March 2022 we began using Google Cloud’s Private Service Connect (PSC) for communication between the control plane and nodes of new public clusters, a change with profound implications for how you can configure your GKE environment. Today, we’re presenting a new, consistent PSC-based framework for GKE control plane connectivity from cluster nodes. We’re also excited to announce a new feature set that includes cluster isolation at the control plane and node pool levels, enabling more scalable, more secure, and cheaper GKE clusters.

New architecture

Starting with GKE version 1.23, all new public clusters created on or after March 15, 2022 use Google Cloud’s PSC infrastructure to communicate between the GKE cluster control plane and nodes. PSC provides a consistent framework that connects different networks through a service networking approach and allows service producers and consumers to communicate using private IP addresses internal to a VPC. The biggest benefit of this change is that it sets the stage for using PSC-enabled features with GKE clusters.

Figure 1: Simplified diagram of PSC-based architecture for GKE clusters

The new set of cluster isolation capabilities we’re presenting here is part of the evolution toward a more scalable and secure GKE cluster posture. Previously, private GKE clusters were enabled with VPC peering, which imposed specific network architectures.
With this feature set, you now have the ability to:

Update the GKE cluster control plane to allow access only through a private endpoint.
Create or update a GKE cluster node pool with public or private nodes.
Enable or disable GKE cluster control plane access from Google-owned IPs.

In addition, the new PSC infrastructure can provide cost savings. Traditionally, control plane communication is treated as normal egress and, for public clusters, is charged as regular public IP traffic. This is also true when you run kubectl for provisioning or other operational reasons. With PSC infrastructure, we have eliminated the cost of communication between the control plane and your cluster nodes, so that is one less network egress charge to worry about.

Now, let’s take a look at how this feature set enables these new capabilities.

Allow access to the control plane only via a private endpoint

Private cluster users have long been able to create the control plane with both public and private endpoints. We now extend the same flexibility to public GKE clusters based on PSC. If you want private-only access to your GKE control plane but want all your node pools to be public, you can now do so. This model provides a tighter security posture for the control plane while leaving you free to choose the kind of cluster nodes you need, based on your deployment. To allow access only through a private endpoint on the control plane, use the following gcloud command:

    gcloud container clusters update CLUSTER_NAME \
        --enable-private-endpoint

Allow toggling and mixed-mode clusters with public and private node pools

All cloud providers with managed Kubernetes offerings support both public and private clusters. Traditionally, whether a cluster is public or private is enforced at the cluster level and cannot be changed once the cluster is created.
Now you have the ability to toggle a node pool between private and public IP addressing. You may also want a mix of private and public node pools. For example, you may be running a mix of workloads in your cluster, some of which require internet access and some of which don’t. Instead of setting up NAT rules, you can deploy a workload on a node pool with public IP addressing to ensure that only that node pool’s deployments are publicly accessible. To enable private-only IP addressing on existing node pools, use the following gcloud command:

    gcloud container node-pools update POOL_NAME \
        --cluster CLUSTER_NAME \
        --enable-private-nodes

To enable private-only IP addressing at node pool creation time, use the following gcloud command:

    gcloud container node-pools create POOL_NAME \
        --cluster CLUSTER_NAME \
        --enable-private-nodes

Configure access from Google Cloud

In some scenarios, workloads outside of a GKE cluster (for example, applications running on Cloud Run, or any Google Cloud VMs with Google Cloud public IPs) were allowed to reach the cluster control plane. To mitigate potential security concerns, we have introduced a feature that lets you toggle access to your cluster control plane from such sources.
To remove access from Google Cloud public IPs to the control plane, use the following gcloud command:

    gcloud container clusters update CLUSTER_NAME \
        --no-enable-google-cloud-access

Similarly, you can use this flag at cluster creation time.

Choose your private endpoint address

Many customers like to map IP ranges to parts of their stack for easier troubleshooting and usage tracking: for example, IP block x for infrastructure, IP block y for services, IP block z for the GKE control plane, and so on. By default, the private IP address for the control plane in PSC-based GKE clusters comes from the node subnet. However, some customers treat node subnets as infrastructure and apply security policies against them. To differentiate between infrastructure and the GKE control plane, you can now create a new custom subnet and assign it to your cluster control plane:

    gcloud container clusters create CLUSTER_NAME \
        --private-endpoint-subnetwork=SUBNET_NAME

What can you do with this new GKE architecture?

With this new set of features, you can remove essentially all public IP communication for your GKE clusters, which means you can make your GKE clusters completely private. You currently need to create the cluster as public to ensure that it uses PSC, but you can then update the cluster (using gcloud with the --enable-private-endpoint flag, or the UI) to allow access only through a private endpoint on the control plane, or create new private node pools.
Alternatively, you can control access at cluster creation time with the --master-authorized-networks and --no-enable-google-cloud-access flags to prevent access to the control plane from public addresses. Furthermore, you can use the REST API or the Terraform provider to build a new PSC-based GKE cluster whose default (that is, first) node pool has private nodes, by setting the enablePrivateNodes field to true, instead of starting from the public GKE cluster defaults and updating afterwards, as gcloud and UI operations currently require. Lastly, the aforementioned features extend not only to GKE Standard clusters but also to GKE Autopilot clusters.

When evaluating whether you’re ready to move to these PSC-based GKE cluster types to take advantage of private cluster isolation, keep in mind that the control plane’s private endpoint has the following limitations:

Private addresses in URLs for new or existing webhooks that you configure are not supported. To work around this incompatibility and give a webhook URL an internal IP address, create a headless service without a selector and a corresponding endpoint for the required destination.
The control plane private endpoint is not currently accessible from on-premises systems.
The control plane private endpoint is not currently globally accessible: client VMs in regions other than the cluster region cannot connect to the control plane’s private endpoint.

All public clusters on version 1.25 and later that are not yet PSC-based are currently being migrated to the new PSC infrastructure, so your clusters might already be using PSC to communicate with the control plane.

To learn more about GKE clusters with PSC-based control plane communication, check out these references:

GKE concept page for public clusters with PSC
How-to: Change cluster isolation
How-to: GKE node pool creation with the isolation feature flag
How-to: Schedule Pods on GKE Autopilot private nodes
gcloud reference to create a cluster with a custom private subnet
Terraform Providers Google: release v4.45.0
Google Cloud Private Service Connect

The latest Terraform provider release (v4.45.0) includes these specific fields, handy to integrate into your automation pipeline:

gcp_public_cidrs_access_enabled
enable_private_endpoint
private_endpoint_subnetwork
enable_private_nodes
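As a sketch, a PSC-based cluster with a private-only control plane endpoint, a custom control plane subnet, and a private default node pool might look like the following in Terraform. This uses the field names from the provider release listed above, but the exact block placement can vary by provider version, so treat it as indicative and check the provider documentation:

```
resource "google_container_cluster" "psc_cluster" {
  name     = "psc-cluster"
  location = "us-central1-a"

  private_cluster_config {
    enable_private_endpoint     = true
    private_endpoint_subnetwork = "SUBNET_NAME"
  }

  master_authorized_networks_config {
    # Corresponds to --no-enable-google-cloud-access in gcloud.
    gcp_public_cidrs_access_enabled = false
  }
}

resource "google_container_node_pool" "private_pool" {
  name    = "private-pool"
  cluster = google_container_cluster.psc_cluster.id

  network_config {
    # Corresponds to the enablePrivateNodes API field.
    enable_private_nodes = true
  }
}
```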
Source: Google Cloud Platform

Document AI adds three new capabilities to its OCR engine

Documents are an indispensable part of our professional and personal lives. They give us crucial insights that help us become more efficient, organize and optimize information, and even stay competitive. But as documents become increasingly complex, and as the variety of document types continues to expand, it has become increasingly challenging for people and businesses to sift through the ocean of bits and bytes to extract actionable insights.

This is where Google Cloud’s Document AI comes in: a unified, AI-powered suite for understanding and organizing documents. Document AI consists of Document AI Workbench (a state-of-the-art custom ML platform), Document AI Warehouse (a managed service with document storage and analytics capabilities), and a rich set of pre-trained document processors. Underpinning these services is the ability to extract text accurately from various types of documents with a world-class Document Optical Character Recognition (OCR) engine.

Google Cloud’s Document AI OCR takes an unstructured document as input and extracts text and layout (e.g., paragraphs and lines) from it. Covering over 200 languages, Document AI OCR is powered by state-of-the-art machine learning models developed by the Google Cloud and Google Research teams. Today, we are pleased to announce three new OCR features in Public Preview that can further enhance your document processing workflows.

1. Assess page-level quality of documents with Intelligent Document Quality (IDQ)

With Document AI OCR, Google Cloud customers and partners can programmatically extract key document characteristics (word frequency distributions, relative positioning of line items, the dominant language of the input document, and so on) as critical inputs to their downstream business logic. Today, we are adding another important document assessment signal to this toolbox: Intelligent Document Quality (IDQ) scores.
IDQ provides page-level quality metrics along the following eight dimensions:

Blurriness
Level of optical noise
Darkness
Faintness
Presence of smaller-than-usual fonts
Document getting cut off
Text spans getting cut off
Glare due to lighting conditions

Being able to discern the optical quality of documents helps you decide which documents must be processed differently based on their quality, making the overall document processing pipeline more efficient. For example, Gary Lewis, Managing Director of lending and deposit solutions at Jack Henry, noted, “Google’s Document AI technology, enriched with Intelligent Document Quality (IDQ) signals, will help businesses to automate the data capture of invoices and payments when sending to our factoring customers for purchasing. This creates internal efficiencies, reduces risk for the factor/lender, and gets financing into the hands of cash-constrained businesses quickly.”

Overall, document quality metrics pave the way for more intelligent routing of documents for downstream analytics. The reference workflow below uses document quality scores to split and classify documents before sending them either to the pre-built Form Parser (for high-quality documents) or to a Custom Document Extractor trained specifically on lower-quality datasets.

2. Process digital PDF documents with confidence with built-in digital PDF support

The PDF format is popular in business applications such as procurement (invoices, purchase orders), lending (W-2 forms, paystubs), and contracts (leasing or mortgage agreements). PDF documents can be image-based (e.g., a scanned driver’s license) or digital, where you can hover over, highlight, and copy/paste embedded text the same way you interact with a text file such as a Google Doc or a Microsoft Word document. We are happy to announce digital PDF support in Document AI OCR.
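The quality-based routing step can be sketched as follows. Note that the score structure, field names, and threshold here are hypothetical, for illustration only; they are not the actual IDQ response schema:

```python
# Hypothetical routing on per-page quality scores in [0.0, 1.0].
# The 0.7 threshold and the score format are invented for illustration.

def route_document(page_quality_scores, threshold=0.7):
    """Send clean documents to the pre-built Form Parser and degraded
    ones to a Custom Document Extractor trained on lower-quality data."""
    if not page_quality_scores:
        raise ValueError("document has no pages")
    # Route on the worst page: one unreadable page can break extraction.
    worst = min(page_quality_scores)
    return "form_parser" if worst >= threshold else "custom_extractor"

print(route_document([0.92, 0.88, 0.95]))  # -> form_parser
print(route_document([0.91, 0.35]))        # -> custom_extractor
```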
The digital PDF feature extracts text and symbols exactly as they appear in the source documents, making our OCR engine highly performant in complex visual scenarios such as rotated text, extreme font sizes or styles, and partially hidden text. Discussing the importance and prevalence of PDF documents in banking and finance (e.g., bank statements and mortgage agreements), Ritesh Biswas, Director, Google Cloud Practice at PwC, said, “The Document AI OCR solution from Google Cloud, especially its support for digital PDF input formats, has enabled PwC to bring digital transformation to the global financial services industry.”

3. “Freeze” model characteristics with OCR versioning

As a fully managed cloud-based service, Document AI OCR regularly upgrades its underlying AI/ML models to maintain world-class accuracy across over 200 languages and scripts. These model upgrades, while providing new features and enhancements, may occasionally change OCR behavior compared to an earlier version. Today, we are launching OCR versioning, which enables users to pin to a historical OCR model behavior. These “frozen” model versions, in turn, give our customers and partners peace of mind by ensuring consistent OCR behavior. For industries with rigorous compliance requirements, this update also helps maintain the same model version, minimizing the need and effort to recertify stacks between releases. According to Jaga Kathirvel, Senior Principal Architect at Mr. Cooper, “Having consistent OCR behavior is mission-critical to our business workflows.
We value Google Cloud’s OCR versioning capability that enables our products to pin to a specific OCR version for an extended period of time.” With OCR versioning, you have full flexibility to select the versioning option that best fits your business needs.

Getting Started on Document AI OCR

Learn more about the new OCR features and tutorials in the Document AI documentation, or try it directly in your browser (no coding required). For more details on what’s new with Document AI, don’t forget to check out our breakout session from Google Cloud Next 2022.
Source: Google Cloud Platform

Microsoft Innovation in RAN Analytics and Control

Currently, Microsoft is working on RAN Analytics and Control technologies for virtualized RAN running on Microsoft Edge platforms. Our goal is to empower virtualized RAN solution providers and operators to realize the full potential of disaggregated and programmable networks. We aim to develop platform technologies that virtualized RAN vendors can leverage to gain analytics insights into their RAN software operations and to use those insights for operational automation, machine learning, and AI-driven optimizations.

Microsoft has recently made important progress in RAN analytics and control technology. Microsoft Azure for Operators is introducing flexible, dynamically loaded service models to both the RAN software stack and cloud/edge platforms hosting the RAN, to accelerate the pace of innovation in Open RAN.

The goal of Open RAN is to accelerate innovation in the RAN space through the disaggregation of functions and the exposure of internal interfaces for interoperability, controllability, and programmability. The O-RAN Alliance’s current standardization effort specifies the RAN Intelligent Controller (RIC) architecture, which exposes a set of telemetry and control interfaces with predefined service models (known as the E2 interface). Open RAN vendors are expected to implement all E2 service models specified in the standard. Near-real-time RAN control is made possible by xApp applications accessing these service models.

Microsoft’s innovation extends this standard yet static interface. It introduces the capability to pull detailed internal state and real-time telemetry out of the live RAN software dynamically, for new RAN control applications. With this technology, together with detailed platform telemetry, operators can achieve better network monitoring and performance optimization for their 5G networks, and enable new AI, analytics, and automation capabilities that were not possible before.

This year, Microsoft, with contributions from Intel and Capgemini, developed an analytics and control approach that was recognized with the Light Reading Editor’s Choice award in the category Outstanding Use Case: Service Provider AI. This innovation calls for dynamic service models for Open RAN.

Dynamic service models for real-time RAN control

There are many RAN control use cases that require dynamic service models beyond those specified in O-RAN today, such as access to IQ samples, RLC and MAC queue sizes, and packet retransmission information. This high-volume, real-time data needs to be aggregated and compressed before being delivered to the xApp. Detailed data from different RAN modules across layers such as L1, L2, and L3 may also need to be collected and correlated in real time before any useful insight can be derived and shared with an xApp. Further, a virtualized RAN offers so many more possibilities that any static interface or service model may be ineffective in meeting the more advanced real-time control needs.

One such example is interference detection. Today, operators typically need to run a drive test to detect external interference in a macro cell. Open RAN has the potential to replace that expensive truck roll with a software program that detects interference signals at the RAN’s L1 layer. However, this requires a new data service model with direct access to raw IQ samples at the physical layer. Another example is dynamic power saving. If a RAN power controller can see the number of packets queued at various places in the live RAN system, it can estimate the pending processing load and optimize the CPU frequency at a very high pace to reduce RAN server power consumption. Our study has shown that we can reduce RAN power consumption by 30 percent with this method, even during busy periods. To support this in Open RAN, we need a new service model that exposes packet queuing information.

These new use cases emerged after the current E2 interface had been standardized. To achieve them, we need new RAN platform technologies that can quickly extend this interface to support these and future advanced RAN control applications.

The Microsoft RAN analytics and control framework

The Microsoft RAN analytics and control framework extends the current RIC service models in O-RAN architecture to be both flexible and dynamic. In the process, the framework allows RAN solution providers and operators to define their own service models for dynamic RAN monitoring and control. Here, the underlying technology is a runtime system that can dynamically load and execute third-party code in a trusted and safe manner.

This system enables operators and trusted third-party developers to write their own telemetry, control, and inference pieces of code (called “codelets”) that can be deployed at runtime at various points in the RAN software stack, without disrupting the RAN operations. The codelets are executed inline in the live RAN system and on its critical paths, allowing them to get direct access to all important internal raw RAN data structures, to collect statistics, and to make real-time inference and control decisions.

To ensure security and safety, codelets are statically checked with verification tools before they can be loaded, and they are automatically preempted if they run longer than their predefined execution budgets. The dynamic code extension system works the same way as the Extended Berkeley Packet Filter (eBPF), a proven technology that has been entrusted to run custom code in Linux kernels on millions of mission-critical servers around the globe. Inline execution is also extremely fast, typically incurring less than one percent overhead on existing RAN operations.

The following image illustrates the overall framework and the dynamic service model denoted by the star circle with the letter D.

The benefit of the dynamic extension framework with low-latency control is that it opens the door to third-party real-time control algorithms. Traditionally, due to tight timing constraints, a real-time control algorithm had to be implemented and tightly integrated inside the RAN system. The Microsoft RAN analytics framework allows RAN software to delegate certain real-time controls to the RIC, potentially leading to a future marketplace for real-time control algorithms and for machine learning and AI optimization models.

Microsoft, Intel, and Capgemini have jointly prototyped this technology in Intel’s FlexRAN™ reference software and Capgemini’s 5G RAN. We have also identified standard instrumentation points aligned with the standard 3GPP RAN architecture to achieve higher visibility into the RAN’s internal state. We have further developed 17 dynamic service models, and enabled many new and exciting applications that were previously not thought possible.

Examples of new applications of RAN analytics

With this new analytics and control framework, the dynamic power saving and interference detection applications described earlier can now be realized.

RAN-agnostic dynamic power saving

5G RAN energy consumption is a major OPEX item for any mobile operator. As a result, it is paramount for a RAN platform provider to find any opportunity to save power when running the RAN software. One such opportunity can be found by stepping down the RAN server CPU frequency when the RAN processing load is not at full capacity. This is indeed promising because internet traffic is intrinsically “bursty”; even during peak hours, the network is rarely operated at full capacity.

However, a dynamic RAN power controller must have accurate load prediction and react within milliseconds. Otherwise, if part of the RAN is in hibernation, an instant traffic burst can cause serious performance issues or even crashes. The Microsoft RAN analytics framework, with its dynamic service models and low-latency control loop, makes it possible to write a novel CPU frequency prediction algorithm based on the number of active users and changes in various queue sizes. We implemented this algorithm on top of the Capgemini 5G RAN and Intel FlexRAN™ reference software and achieved up to 30 percent energy savings, even during busy periods.
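In spirit, such a controller maps observed load signals to a frequency level on every tick. The toy sketch below illustrates the idea only; the load model, thresholds, and frequency levels are invented here, and the production algorithm is not published:

```python
# Illustrative only: map RAN load signals to a CPU frequency level.
# The linear load model and all numbers are made up for this sketch.

def predict_cpu_ghz(active_users, queue_growth, levels=(1.0, 1.6, 2.2, 3.0)):
    """Pick a CPU frequency (GHz) from a predicted load in [0, 1].

    active_users: users currently scheduled in the cell
    queue_growth: packets added to L2 queues since the last tick
    """
    load = min(1.0, active_users / 64 + max(queue_growth, 0) / 1000)
    index = min(int(load * len(levels)), len(levels) - 1)
    return levels[index]

print(predict_cpu_ghz(4, 20))    # light load  -> 1.0
print(predict_cpu_ghz(64, 900))  # bursty load -> 3.0
```

A real controller would run this prediction every millisecond-scale tick, which is exactly why it needs the low-latency dynamic service models rather than a slow polling interface.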

Interference detection

External wireless interference has long been a source of performance issues in cellular networks. Detecting it is difficult and often requires a truck roll with specialized equipment and experts. With dynamic service models, we can turn an O-RAN 5G base station into a software-defined radio that detects and characterizes external wireless interference without affecting radio performance. We have developed a dynamic service model that averages the received IQ samples across frequency chunks and time windows inside the L1 of the FlexRAN™ reference software stack. The service model in turn reports the averages to an application that runs an AI and machine learning model for anomaly detection, in order to detect when the noise floor increases.
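The aggregation step of that service model resembles the following pure-Python sketch. The sample data, the fixed 6 dB margin, and the threshold check are invented for illustration; the real codelet runs inline in L1 on live IQ samples and feeds an anomaly-detection model rather than a fixed threshold:

```python
import math

# Sketch: average |IQ|^2 per frequency chunk, then flag chunks whose
# power rises well above an assumed noise floor.

def chunk_power(iq_samples, n_chunks):
    """Average sample power over equal-sized frequency chunks."""
    size = len(iq_samples) // n_chunks
    return [
        sum(abs(s) ** 2 for s in iq_samples[i * size:(i + 1) * size]) / size
        for i in range(n_chunks)
    ]

def flag_interference(avg_powers, noise_floor, margin_db=6.0):
    """True for each chunk exceeding the noise floor by margin_db."""
    return [10 * math.log10(p / noise_floor) > margin_db for p in avg_powers]

samples = [1 + 0j] * 8 + [4 + 0j] * 8   # second chunk: strong interferer
powers = chunk_power(samples, 2)         # [1.0, 16.0]
print(flag_interference(powers, noise_floor=1.0))  # [False, True]
```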

Virtualized, software-based RAN solutions offer the immense potential of programmable networks that leverage AI, machine learning, and analytics to improve network efficiency. Dynamic service models for O-RAN interfaces further accelerate the pace of innovation with added flexibility and security.

Learn more

Learn more about Microsoft Azure for Operators from our website.
Microsoft Research Technical Report.
Microsoft’s Innovation in RAN Analytics is The Editor’s Choice for “the Outstanding Use Case: Service Provider AI” in the 2022 Leading Lights Award. Leading Lights 2022: The Winners | Light Reading.
Finalist in the Fierce Innovation Award–Telecom Edition 2022: Finalists | Fierce Telecom Awards.

Source: Azure