Building business resilience with API management

Over the last several years, as digital services and interfaces have become the primary way businesses interact with their customers, maintaining ‘business as usual’ has demanded digital transformation. In difficult times such as these, it may be tempting to put digital transformation projects on pause until budget surpluses return. This is a mistake. In fact, during disruptive periods it’s more important than ever to make your business more resilient, both against ongoing challenges and against others that will emerge.

One path to greater resilience starts with application programming interfaces, or APIs. APIs are how software talks to other software, and many of today’s most compelling digital experiences use APIs to connect data and functionality in new ways, including combining legacy technologies, such as CRM systems, with new technologies, such as artificial intelligence, or new interfaces, such as voice. Taking an API-first approach to digital transformation can increase operational efficiency, accelerate innovation, and improve data security. An API management platform can help you leverage APIs strategically. Here are the three key ways that implementing an API management platform can help you build a more adaptive and resilient business.

Reusing existing assets

APIs make it easy for companies to take valuable functionality and data that already exist within their four walls and make them accessible and reusable, hiding the underlying technical complexity. So, when the market changes and you need to pivot, you can easily re-task these APIs to build new apps and services. Furthermore, an API management platform acts as the connective tissue that allows you to take those APIs and securely share them with developers, both inside and outside the enterprise, to foster new operational efficiencies, unlock new business models, and enable business transformation.

For example, Pitney Bowes used to connect to backend systems through a lengthy and repetitive process. Using APIs, they integrated applications into their back office, accelerating internal development and lowering costs. The results were dramatic: they reduced the time to build new applications from 18 months to four.

Improving security and scalability

The move to digital interactions as the primary customer touchpoint also introduces new scalability and security challenges that must be addressed to ensure a seamless experience. Companies may face sudden spikes in demand for their digital services. Scalable APIs keep services accessible and extensible, regardless of transaction volume. An API management platform enables companies to rapidly scale their API programs without business interruptions by providing scalable API design, extensibility, and demand balancing.

To address the increased avenues for bad actors to access sensitive data, API management streamlines secure third-party access to existing resources without passwords and reduces the risk of security vulnerabilities and possible data breaches. It lets businesses offer developers self-service access to APIs, which keeps innovation humming, while also monitoring every digital interaction and controlling access with light-touch processes, which helps keep data secure.

Driving customer interactions

By bringing the latest advances in big data, machine learning, APIs, and predictive analytics to bear, you can create a higher level of engagement with your customers across various channels.
API analytics create a strategic view of all your business transactions, helping IT teams identify the APIs that drive the majority of traffic. Armed with this information, companies can optimize their budgets and focus on the API products that customers demand. Similarly, API monitoring can ensure that every interaction meets your customers’ high expectations. Monitoring dashboards provided by an API management platform offer at-a-glance visibility into hotspots, latency, and error rates, while enabling users to drill down to find the policies where faults occur, target problems, and address other specific elements that require remediation.

Looking at business resilience in a wider context, digital transformation is the key driver of innovation and operational efficiency during times of uncertainty. Whether you are facing challenges or opportunities, you must be able to reconfigure the things you already have to meet internal and external demands. More than ever before, developers and partners look to APIs to drive operational efficiency, accelerate innovation, and create rich customer experiences. Our new ebook, “The Path of Most Resilience,” explores these three tangible ways in which APIs help build business resilience. To learn more about these strategies and how Apigee can help, click here.
Source: Google Cloud Platform

Jumpstarting your digital acceleration with ecommerce migration

The COVID-19 pandemic has put a strain on retailers’ digital capabilities as customers shift from in-store to online purchases. We’ve been working with retailers to support their business operations during this time, and have highlighted the importance of investing in digital channels, particularly by modernizing ecommerce platforms in Google Cloud.

Ecommerce modernization is a complex undertaking, part of a long-term goal consisting of various phases. As a retailer, however, you can take some first steps right now to build the foundation for a flexible and agile ecommerce platform that caters to your customers’ expectations.

In this blog post we’ll go over one of these first steps: ecommerce migration. We’ll discuss what ecommerce migration consists of, what problems it addresses, and go over some tactical recommendations on how to get started, including using our ecommerce migration Retail Solution.

What is ecommerce migration?

Ecommerce migration entails taking your current ecommerce platform and moving it to the cloud, a so-called “lift and shift.” In our experience, this is common with retailers who have already virtualized and/or containerized ecommerce workloads and want to focus on getting into the cloud quickly, with an end goal of refactoring in the cloud later. As mentioned previously, this is typically the first step toward a broader ecommerce transformation effort, and the quickest way to get into the cloud.

Even if it’s just a first step, retailers who embark on an ecommerce migration initiative can take advantage of Google Cloud’s elasticity, scalability, security, and best-in-class cloud platform. Here are a few of the benefits:

Migration capabilities: Google Cloud’s Live Migration of compute instances means you no longer need maintenance downtime for provider infrastructure upgrades and maintenance. To aid the migration from your existing host, Migrate for Compute Engine reduces migration complexity and effort for ecommerce workloads. Migration Center can help accelerate the migration process for highly complex workloads with the help of either Google Cloud Professional Services or our specialized partners.

Flexible Compute: Google Cloud offers a wide range of Compute Engine machine types to right-size compute to the retailer’s ecommerce workloads and help reduce cost.

Security: Google Cloud encrypts all data in transit and at rest by default, helping support retailers’ compliance requirements, including Payment Card Industry (PCI) standards, and secure their end consumers’ data.

Network: Google Cloud’s load balancing and private networking span multiple geographic areas. Our load balancer automatically distributes multi-regional traffic to the Google Cloud resources closest to the consumer, which improves the experience, leads to higher conversion, and also provides automatic high availability and disaster recovery for regional failures.

AI & Data Analytics: Migrating to Google Cloud unlocks AI capabilities that can enhance the customer experience. Our Recommendations AI and other AI technologies can improve conversions via AI-driven recommendations and AI-powered search results on the ecommerce front end.

As a result, retailers can increase their organization’s agility and capacity for innovation, and more quickly launch new experiences to keep up with changing consumer expectations.
In addition to these benefits, ecommerce migration addresses many challenges that retailers face and that are top of mind at the executive level, including:

Velocity: Lack of agility to support ongoing business via digital channels and adapt to heightened and more sophisticated customer expectations.

Total cost of ownership: High operational costs due to upfront investment in on-premises infrastructure and capacity to accommodate peak loads, and the need to implement cost-reduction procedures to focus solely on mission-critical workloads.

Business shift: The shift from in-store to online creates strain on omni-channel capabilities, including logistics and supply chain.

Legacy systems: Constraints of existing legacy ecommerce infrastructure (pushing the limits of both the software and hardware) hinder the ability to modernize and adapt to changing customer demands.

A move to Google Cloud via an ecommerce migration addresses these pain points in the following ways:

Help your business accommodate any traffic pattern set by your customers with Google Cloud’s scalability and elasticity capabilities. You can also safeguard your own cloud resources by leveraging Compute Engine Reservations, which come in handy during peak events such as promo days, Black Friday, Cyber Monday, and other times when you need guaranteed cloud resources (see the sketch after this list).

Help your business prevent downtime and the resulting loss of business.

Help your business minimize cost by scaling down unused capacity. You can also bring down costs even more by leveraging Compute Engine Sustained Use Discounts and Committed Use Discounts for your predictable workloads.

Accelerate the speed and performance of your ecommerce channel. With access to Google Cloud’s regions around the globe, you can serve requests closer to customers by leveraging Google Cloud’s networking backbone.
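As an illustration, here is a minimal sketch of reserving zonal capacity ahead of a peak event with the gcloud CLI; the reservation name, zone, machine type, and VM count below are hypothetical placeholders:

    # Reserve 50 n1-standard-8 VMs in us-central1-a ahead of a peak event.
    gcloud compute reservations create peak-sale-reservation \
        --zone=us-central1-a \
        --vm-count=50 \
        --machine-type=n1-standard-8

Note that reserved capacity is billed whether or not matching VMs are running, so reservations are best scoped to known peak windows and deleted afterward.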
How do I get started?

The reality is that any ecommerce migration project can be complex, but you can reduce that complexity by following a tried-and-tested approach. Based on our experience working with various retailers, the Google Cloud Professional Services team has developed a methodology to help with this journey. This is a common migration path based on best practices we’ve seen in the field:

Proof of concept: Get comfortable with Google Cloud by experimenting with our products and services. Test out a subset of future-state ecommerce functionality in a risk-free sandbox environment to gain confidence in the migration.

Cloud foundations: Define and build out the minimal set of Google Cloud foundational components required by the migration, across domains such as Identity and Access Management, resource management, networking, Cloud Monitoring and Logging, and cost control.

Discovery and planning: Perform an ecommerce application inventory to understand the overall complexity of your migration, and plan for the subsequent stages.

Execution: Migrate your ecommerce workload without serving customer traffic. Validate your deployment by performing integration and smoke testing.

Testing: Validate functionality and start serving minimal traffic. A good rule of thumb is to start by splitting traffic between the legacy and new solutions: for example, serve ~1% of traffic from the new solution and increase the volume progressively.

Optimization: Tweak telemetry and instrumentation iteratively in Cloud Monitoring based on SRE best practices, and tune monitoring metrics based on the KPIs used to track your SLIs and SLOs.

Decommissioning: Phase out and decommission your legacy ecommerce solution once you achieve the desired level of comfort.

The approach above might look daunting, but by following it with the right methodology and organizational mindset you can execute a successful migration and lay the groundwork for a flexible and agile ecommerce foundation. And remember, Google Cloud’s Professional Services team and partner ecosystem are here to help.

To learn more about getting started with your ecommerce migration, contact your Google Cloud account team.
Source: Google Cloud Platform

High-resolution user-defined metrics in Cloud Monitoring

Higher-resolution metrics are critical for monitoring dynamically changing environments and rapidly changing application metrics. Examples where high-resolution metrics are critical include high-volume e-commerce, live streaming, autoscaling bursty workloads on Kubernetes clusters, and more. Higher-resolution custom, Prometheus, and agent metrics are now generally available, and can be written at a granularity of 10 seconds. Previously, these metric types could only be written once every 60 seconds.

How to write Monitoring agent metrics at 10-second resolution

The Cloud Monitoring agent is a collectd-based daemon that collects system and application metrics from virtual machine instances and sends them to Cloud Monitoring. The Monitoring agent collects disk, CPU, network, and process metrics. By default, agent metrics are written at 60-second granularity. You can configure the agent to send metrics at 10-second granularity by changing the Interval value to ‘10’ in the Monitoring agent’s collectd.conf file.
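As a sketch, the relevant setting in the agent’s configuration file (typically /etc/stackdriver/collectd.conf on Linux, though the path can vary by operating system and distribution) looks like this:

    # /etc/stackdriver/collectd.conf
    # Collect and write points every 10 seconds instead of the default 60.
    Interval 10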
After making this change, you will need to restart your agent (the command may differ based on your operating system and distribution):

    sudo service stackdriver-agent restart

Higher-resolution agent metrics require Monitoring agent version 6.0.1 or greater. You can find documentation for determining your agent version here.

Now that your Monitoring agent is emitting metrics at 10-second granularity, you can view them in Metrics Explorer by searching for metrics with the prefix “agent.googleapis.com/agent/”.

How to write custom metrics at 10-second resolution

Custom metrics allow you to define and collect metric data that built-in Google Cloud metrics cannot provide. These could be specific to your application, infrastructure, or business, such as “latency of the shopping cart service” or “returning customer rate” in an e-commerce application.

Custom metrics can be written in a variety of ways: via the Monitoring API, the Cloud Monitoring client libraries, the OpenCensus/OpenTelemetry libraries, or the Cloud Monitoring agent. We recommend using the OpenCensus libraries to write custom metrics, for several reasons:

OpenCensus is open source and supports a wide range of languages and frameworks.

OpenCensus provides vendor-agnostic support for the collection of metric and trace data.

OpenCensus provides optimized collection of points and batching of Monitoring API calls. It handles timing API calls for 10-second resolution and other intervals, so that the Monitoring API won’t reject points for being written too frequently, and it handles retries, exponential backoff, and more, helping to ensure that your metric points make it to the monitoring system.

OpenCensus allows you to export the collected data to a variety of backend applications and monitoring services, including Cloud Monitoring.

Instrumenting your code to use OpenCensus for metrics involves three general steps:

Import the OpenCensus stats and OpenCensus Stackdriver exporter packages.
Initialize the Cloud Monitoring exporter.
Use the OpenCensus API to instrument your code.

The following is a minimal Go program that illustrates the instrumentation steps listed above by writing a counter metric to Cloud Monitoring. If you don’t have a working Go development environment, follow these steps in the Google Cloud Console and Cloud Shell to compile and run the demo program:

Go to Cloud Monitoring. If you’re using Cloud Monitoring for the first time, you’ll be prompted to create a workspace (it will default to the same name as the GCP project you are currently in).
Open Cloud Shell in the Cloud Console.
Enable the Monitoring API by running “gcloud services enable monitoring.googleapis.com”.
If you don’t already have a working Go environment, run the following:

    mkdir ~/go
    export GOPATH=~/go
    mkdir -p ~/go/src/testCustomMetrics
    cd ~/go/src/testCustomMetrics
    go mod init testCustomMetrics
    touch testCustomMetrics.go

Open testCustomMetrics.go in your text editor of choice and copy in the code below.
Run “go mod tidy”, which finds all the packages transitively imported by packages in your module.
Run “go build testCustomMetrics.go”.
Run “./testCustomMetrics”.
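The example program is as follows; this is a minimal sketch that assumes the go.opencensus.io/stats and go.opencensus.io/stats/view packages, the contrib.go.opencensus.io/exporter/stackdriver exporter module, and a project ID supplied via the GOOGLE_CLOUD_PROJECT environment variable (which Cloud Shell sets automatically):

    package main

    import (
        "context"
        "log"
        "math/rand"
        "os"
        "time"

        "contrib.go.opencensus.io/exporter/stackdriver"
        "go.opencensus.io/stats"
        "go.opencensus.io/stats/view"
    )

    // starCount is the raw measure; Cloud Monitoring surfaces it as
    // "OpenCensus/star_count" against the "global" monitored resource.
    var starCount = stats.Int64("star_count", "A random star count", stats.UnitDimensionless)

    func main() {
        // Register a view so the measure is aggregated and exported.
        if err := view.Register(&view.View{
            Name:        "star_count",
            Description: "A random star count",
            Measure:     starCount,
            Aggregation: view.LastValue(),
        }); err != nil {
            log.Fatalf("failed to register view: %v", err)
        }

        // The exporter batches raw points and calls the Monitoring API
        // once per 10-second reporting interval.
        exporter, err := stackdriver.NewExporter(stackdriver.Options{
            ProjectID:         os.Getenv("GOOGLE_CLOUD_PROJECT"),
            ReportingInterval: 10 * time.Second,
        })
        if err != nil {
            log.Fatalf("failed to create exporter: %v", err)
        }
        defer exporter.Flush()

        if err := exporter.StartMetricsExporter(); err != nil {
            log.Fatalf("failed to start metrics exporter: %v", err)
        }
        defer exporter.StopMetricsExporter()

        // Record a random star count once per second for three minutes.
        ctx := context.Background()
        for i := 0; i < 180; i++ {
            stats.Record(ctx, starCount.M(rand.Int63n(100)))
            time.Sleep(1 * time.Second)
        }
    }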
This program writes a random star count every second, for three minutes. As noted above, custom metrics can only be written at 10-second granularity. The program records raw metric points more frequently than that, but the OpenCensus exporter’s ReportingInterval is set to 10 seconds, so the exporter calls the Monitoring API’s CreateTimeSeries endpoint at the correct 10-second cadence. When you query your points, select an aligner and an aggregation option in Metrics Explorer; that way, even if you have multiple points in a 10-second span, you’ll get back a single point based on your aligner and aggregation choices.

After running the program, you can go to Metrics Explorer in Cloud Monitoring to see the “OpenCensus/star_count” metric, written against the “global” resource.

How to write Prometheus metrics at 10-second resolution

The Prometheus monitoring tool is often used with Kubernetes. If you configure Cloud Operations for GKE to include Prometheus support, the metrics generated by services using the Prometheus exposition format can be exported from the cluster and made visible as external metrics in Cloud Monitoring.

Installing and configuring Prometheus, including configuring export to Cloud Monitoring, involves a few steps, so we recommend you follow these instructions. OpenCensus also offers a guided codelab for configuring Prometheus instrumentation.

To enable 10-second resolution for Prometheus metrics that are exported to Cloud Monitoring, set the scrape_interval parameter in prometheus.yml:

    scrape_interval: 10s

Once Prometheus is properly configured to export metrics to Cloud Monitoring, you can go to Metrics Explorer in Cloud Monitoring and search for metrics with the prefix external.googleapis.com/prometheus/.

Pricing for Cloud Monitoring metrics

Cloud Monitoring chargeable metrics are billed per megabyte of ingestion, with the first 150MB free and reduced pricing tiers for customers that send larger volumes of metrics. There is no additional cost for sending higher-resolution metrics beyond the cost incurred from sending metric data more frequently. The frequency at which you write custom metrics (with 10 seconds as the lower bound) is up to you. GCP platform (system) metrics remain free, and the granularity at which they are written is determined by each individual GCP service.

Toward better observability

We hope you find the ability to write higher-resolution custom, Prometheus, and agent metrics useful, and that it helps you build more observable applications and services. Higher-resolution logs-based metrics at 10-second granularity are on our roadmap as well, so stay tuned for more information in an upcoming blog post.

Source: Google Cloud Platform

Prepare and certify your devices for IoT Plug and Play

Developing solutions with Azure IoT has never been faster, easier, or more secure. However, the tight coupling between IoT device software and the cloud software that matches it can make it challenging to add different devices without spending hours writing device code.

IoT Plug and Play can solve this by enabling a seamless device-to-cloud integration experience. IoT Plug and Play from Microsoft is an open approach built on the Digital Twin Definition Language (DTDL), which is based on JavaScript Object Notation for Linked Data (JSON-LD), and it allows IoT devices to declare their capabilities to cloud solutions. It enables hardware partners to build devices that can easily integrate with cloud solutions based on Azure IoT Central, as well as third-party solutions built on top of Azure IoT Hub or Azure Digital Twins.
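As an illustrative sketch, here is what a minimal DTDL v2 model might look like for a hypothetical thermostat device; the identifiers and capability names are made up for illustration:

    {
      "@context": "dtmi:dtdl:context;2",
      "@id": "dtmi:com:example:Thermostat;1",
      "@type": "Interface",
      "displayName": "Thermostat",
      "contents": [
        {
          "@type": ["Telemetry", "Temperature"],
          "name": "temperature",
          "schema": "double",
          "unit": "degreeCelsius"
        },
        {
          "@type": "Property",
          "name": "targetTemperature",
          "schema": "double",
          "writable": true
        }
      ]
    }

A solution that understands DTDL can read a model like this and start displaying telemetry and setting properties for the device without any device-specific integration code.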

As such, we are pleased to announce that the IoT Plug and Play device certification program is now available, enabling companies to certify their solution-tailored devices, drive awareness of them, and reduce time to market. In this blog post, we will explore the common ecosystem challenges and business motivations for using IoT Plug and Play, as well as why companies are choosing to pursue IoT Plug and Play certification and the requirements and process involved.

Addressing ecosystem challenges and business needs with IoT Plug and Play

Across our ecosystem of partners and customers, we continue to see opportunities to simplify IoT. Companies are using IoT devices to help them find valuable insights, ranging from how customers are using their products to how they can optimize operations and reduce energy consumption. Yet there are also challenges to enabling these scenarios across energy, agriculture, retail, healthcare, and other industries, as integrating IoT devices into cloud solutions can often be a time-consuming process.

Windows solved a similar industry problem with Plug and Play, which, at its core, was a capability model that devices could declare and present to Windows when they were connected. This capability model made it possible for thousands of different devices to connect to Windows and be used without any software having to be installed manually.

IoT Plug and Play, which was announced at Microsoft Build in May 2019, similarly addresses the ecosystem’s need for an openly declared device model language. IoT Plug and Play is currently available in preview and offers numerous advantages for device builders, solution builders, and customers alike when it comes to reducing solution development time, cost, and complexity. By democratizing device integration, IoT Plug and Play helps remove entry barriers and opens up new IoT device use cases. Since IoT Plug and Play-enabled solutions can read the device model and start using devices without customization, the same interaction model can be used in any industry. For instance, cameras used on the factory floor for inspection can also be used in retail scenarios.

The IoT Plug and Play certification process validates that devices meet core capabilities and are enabled for secure device provisioning. The use of IoT Plug and Play-certified devices is recommended in all IoT solutions, even those that do not currently leverage all the capabilities, as migration of IoT Plug and Play-enabled devices is a simple process.

IoT Plug and Play saves partners time and money

IoT Plug and Play-capable devices can become a major business differentiator for device and solution builders. Microsoft partner, myDevices, is already leveraging IoT Plug and Play in their commercial IoT solutions. According to Adrian Sanchez del Campo, Vice President of Engineering, “The main value in IoT Plug and Play is the ease of developing a device that will be used in a connected fashion. It's the easiest way to connect any hardware to the cloud, and it allows for any company to easily define telemetry and properties of a device without writing any embedded code.”

Sanchez del Campo also says it saves time and money. For devices that monitor or serve as a gateway at the edge, IoT Plug and Play enables myDevices to cut their development cycle by half or more, accelerating proofs of concept while also reducing development costs.

Olivier Pauzet, Vice President of Product, IoT Solutions, at Sierra Wireless, agrees that IoT Plug and Play is a definite time and money saver. “IoT Plug and Play comes on top of the existing partnership and joint value brought by Sierra Wireless’s Octave all-in-one edge-to-cloud solution and Azure IoT services,” says Pauzet. “For customers using Digital Twins or IoT Central, being able to leverage IoT Plug and Play on both Octave and Azure will expand capabilities while making solution development even faster and easier.”

In addition to faster time to market, IoT Plug and Play also provides benefits for simplifying solution development. “As a full edge-to-cloud solution provider, Sierra Wireless sees benefits in making customer devices reported through Octave cloud connectors compatible with IoT Plug and Play applications,” says Pauzet. “Making it even simpler for customers and system integrators to build reliable, secure, and flexible end-to-end solutions is a key benefit for the whole ecosystem.”

Benefits of IoT Plug and Play device certification from Microsoft

Achieving IoT Plug and Play certification offers multiple advantages, but at its core, the benefits revolve around device builders having confidence that their tailored devices will be more discoverable, be more readily promoted to a broader audience, and have a reduced time to market.

Once a device is IoT Plug and Play-certified, it can easily be used in any IoT Plug and Play-enabled solution, which increases the market opportunity for device builders. IoT Plug and Play-certified devices are also surfaced to a worldwide audience, helping solution builders discover devices with the capabilities they need at a previously unreachable scale.

It also provides device builders with the opportunity to easily partner with other providers who have adopted the same open approach to create true end-to-end solutions. Plus, devices can be deployed in various solutions without a direct relationship between the device builder and solution builder, increasing your addressable market.

Device builders gain additional audience exposure and potential co-sell opportunities by getting IoT Plug and Play-certified devices featured and promoted in the Certified for Azure IoT device catalog. The catalog provides expanded opportunities to reach solution developers and device buyers, who can search for compatible devices.

Finally, IoT Plug and Play-certified devices appeal to solution builders because they accelerate time to value by simplifying and shortening the solution development cycle. IoT Plug and Play also makes enabled solutions extensible, allowing the seamless addition of more devices.

Achieving IoT Plug and Play certification

To achieve IoT Plug and Play certification from Microsoft, devices must meet the following requirements:

Device models defined in and compliant with the Digital Twin Definition Language (DTDL) version 2.
Support for the Device Provisioning Service (DPS).
Physical device review.

The certification process comprises three phases: develop, certify, and publish. Develop phase activities include modeling and developing the code, storing the device models, and then iterating on and testing the code. The outcome is finalized device code that is ready to go through the IoT Plug and Play certification phase.

Certify phase activities require Microsoft Partner Network membership and onboarding to the Azure Certified Device submission portal. To kick off the certification process, developers must submit their IoT Plug and Play device model to the portal, along with relevant marketing details. Once complete, developers can connect and test in the certification portal, which takes the device through an automated set of validation tests.

Upon IoT Plug and Play certification, the device becomes eligible for publication to the Certified for Azure IoT device catalog. Publish phase activities include submitting the test results, device metadata, and Get Started Guide, along with the desired publish date, to Microsoft. Microsoft will work with the device builder to coordinate additional physical device review after the device is published.

Get started on IoT Plug and Play certification

Now is the right time to get ahead of the coming groundswell for IoT Plug and Play certification and begin maximizing your business potential. Begin the certification process by watching this video on how to certify IoT Plug and Play devices. For questions, reach out to us via email at IoT Certification.

For those considering device certification beyond IoT Plug and Play, stay tuned for future enhancements that will be announced soon. In the meantime, be sure to explore Azure IoT resources, including technical guidance, how-to guides, Microsoft Tech Community, and more.

Additional resources include:

IoT Plug and Play preview blog.
IoT Plug and Play documentation.
Certification tools:

Command line.
Azure Certified Device submission portal.

Certified for Azure IoT device catalog.
IoT Show for IoT Plug and Play.
IoT Plug and Play certification tutorial.

Source: Azure