Amazon RDS for Oracle now supports the January 2019 Oracle Patch Set Updates (PSU) and Release Updates (RUs)

Amazon RDS for Oracle now supports the January 2019 Oracle Patch Set Updates (PSU) for Oracle Database 11.2 and 12.1. Oracle PSUs contain essential security fixes as well as other important updates. Starting with Oracle Database version 12.2.0.1, Amazon RDS for Oracle supports Release Updates (RUs) instead of Patch Set Updates (PSUs). For more information about the Oracle PSUs supported on Amazon RDS, see the Amazon RDS patch update documentation.
 
Amazon RDS for Oracle makes it easy to set up, operate, and scale an Oracle Database deployment in the cloud. See the Amazon RDS for Oracle Database pricing page for information on regional availability.
Source: aws.amazon.com

Benefits of using Azure API Management with microservices

The IT industry is experiencing a shift from monolithic applications to microservices-based architectures. The benefits of this new approach include:

Independent development and freedom to choose technology – Developers can work on different microservices at the same time and choose the best technologies for the problem they are solving.
Independent deployment and release cycle – Microservices can be updated individually on their own schedule.
Granular scaling – Individual microservices can scale independently, reducing the overall cost and increasing reliability.
Simplicity – Smaller services are easier to understand which expedites development, testing, debugging, and launching a product.
Fault isolation – Failure of a microservice does not have to translate into failure of other services.

In this blog post we will explore:

How to design a simplified online store system to realize the above benefits.
Why and how to manage public facing APIs in microservice-based architectures.
How to get started with Azure API Management and microservices.

Example: Online store implemented with microservices

Let’s consider a simplified online store system. A website visitor needs to be able to view a product’s details, place an order, and review a placed order.

Whenever an order is placed, the system needs to process the order details and issue a shipping request. Based on user scenarios and business requirements, the system must have the following properties:

Granular scaling – Viewing product details happens on average at least 1,000 times more often than placing an order.
Simplicity – Independent user actions are clearly defined, and this separation needs to be reflected in the architecture of the system.
Fault isolation – Failure of the shipping functionality cannot affect viewing products or placing an order.

These properties point toward implementing the system with three microservices:

Order with public GET and POST API – Responsible for viewing and placing an order.
Product with public GET API – Responsible for viewing details of a product.
Shipping triggered internally by an event – Responsible for processing and shipping an order.

For this purpose we will use Azure Functions, which are easy to implement and manage. Their event-driven nature means that they are executed, and billed, per interaction. This becomes useful when store traffic is unpredictable. The underlying infrastructure scales down to zero in times of no traffic. It can also serve bursts of traffic when a marketing campaign goes viral or load increases during shopping holidays like Black Friday in the United States.

To maintain the scaling granularity, ensure simplicity, and keep release cycles independent, every microservice should be implemented in an individual Function App.

The order and product microservices are external facing functions with an HTTP Trigger. The shipping microservice is triggered indirectly by the order microservice, which creates a message in Azure Service Bus. For example, when you order an item, the website issues a POST Order API call which executes the order function. Next, your order is queued as a message in an Azure Service Bus instance which then triggers the shipping function for its processing.
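The flow above can be sketched in a few lines of Python. This is only a conceptual sketch: an in-memory queue stands in for Azure Service Bus, and the function names (`place_order`, `ship_order`) are illustrative rather than part of the Azure Functions SDK.

```python
import json
import queue

# Stand-in for an Azure Service Bus queue; in production this would be
# a Service Bus client sending to a named queue.
service_bus = queue.Queue()

def place_order(http_body: str) -> dict:
    """Order microservice: HTTP-triggered, validates and enqueues the order."""
    order = json.loads(http_body)
    if "item" not in order or "quantity" not in order:
        return {"status": 400, "body": "item and quantity are required"}
    service_bus.put(json.dumps(order))   # queue a message for shipping
    return {"status": 202, "body": f"order for {order['item']} accepted"}

def ship_order() -> dict:
    """Shipping microservice: triggered by a Service Bus message."""
    message = service_bus.get()          # in Azure, the trigger delivers this
    order = json.loads(message)
    return {"shipped": order["item"], "quantity": order["quantity"]}

# A POST Order API call enqueues a message, which in turn triggers shipping.
response = place_order('{"item": "book", "quantity": 2}')
shipment = ship_order()
```

Note that the order function returns immediately after queuing the message; shipping proceeds asynchronously, which is what gives the architecture its fault isolation.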

Top reasons to manage external API communication in microservices-based architectures

The proposed architecture has a fundamental problem: the way communication from the outside is handled.

Client applications are coupled to internal microservices. This becomes especially burdensome when you wish to split, merge, or rewrite microservices.
APIs are not surfaced under the same domain or IP address.
Common API rules cannot be easily applied across microservices.
Managing API changes and introducing new versions is difficult.

Although Azure Functions Proxies offer a unified API plane, they fall short in the other scenarios. These limitations can be addressed by fronting Azure Functions with Azure API Management, now available in a serverless Consumption tier.

API Management abstracts APIs from their implementation and hosts them under the same domain or a static IP address. It allows you to decouple client applications from internal microservices. All your APIs in Azure API Management share a hostname and a static IP address. You may also assign custom domains.

Using API Management secures APIs by aggregating them in Azure API Management, and not exposing your microservices directly. This helps you reduce the surface area for a potential attack. You can authenticate API requests using a subscription key, JWT token, client certificate, or custom headers. Traffic may be filtered down only to trusted IP addresses.
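Conceptually, the gateway performs these checks before a request ever reaches a microservice. The sketch below illustrates the idea with a hypothetical key set and trusted network; in practice API Management does this declaratively through policies, not hand-written code. The real `Ocp-Apim-Subscription-Key` request header is used, but the values are invented.

```python
from ipaddress import ip_address, ip_network

# Hypothetical configuration; API Management manages these for you.
VALID_SUBSCRIPTION_KEYS = {"f3a9c1"}
TRUSTED_NETWORKS = [ip_network("203.0.113.0/24")]

def authorize(headers: dict, client_ip: str) -> bool:
    """Admit a request only with a known key from a trusted address."""
    key_ok = headers.get("Ocp-Apim-Subscription-Key") in VALID_SUBSCRIPTION_KEYS
    ip_ok = any(ip_address(client_ip) in net for net in TRUSTED_NETWORKS)
    return key_ok and ip_ok

# A valid key from a trusted network passes; anything else is rejected
# before the backing microservice is ever invoked.
ok = authorize({"Ocp-Apim-Subscription-Key": "f3a9c1"}, "203.0.113.7")
```

Because rejection happens at the gateway, the microservices themselves never see unauthenticated traffic, which is what shrinks the attack surface.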

With API Management, you can also execute rules on APIs. You can define API policies on incoming requests and outgoing responses globally, per API, or per API operation. There are almost 50 policies, covering authentication methods, throttling, caching, and transformations. Learn more by visiting our documentation, “API Management policies.”
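In API Management these rules are declarative policies (expressed as XML), but the effect of a throttling policy can be sketched imperatively. The window size and call quota below are illustrative values, not API Management defaults.

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Allow at most `calls` requests per `period` seconds per subscription,
    mimicking the effect of an API Management rate-limit policy."""
    def __init__(self, calls: int, period: float):
        self.calls, self.period = calls, period
        self.history = defaultdict(deque)   # subscription key -> call timestamps

    def allow(self, subscription_key: str, now: float = None) -> bool:
        now = time.monotonic() if now is None else now
        window = self.history[subscription_key]
        while window and now - window[0] >= self.period:
            window.popleft()                # drop calls outside the window
        if len(window) < self.calls:
            window.append(now)
            return True
        return False                        # the caller would receive HTTP 429

limiter = RateLimiter(calls=3, period=60.0)
results = [limiter.allow("key-1", now=t) for t in (0, 1, 2, 3)]
# first three calls pass, the fourth is throttled
```

The declarative equivalent is one policy element on the API; the point of the sketch is only to make the sliding-window behavior concrete.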

API Management simplifies changing APIs. You can manage your APIs throughout their full lifecycle, from the design phase to introducing new versions or revisions. Contrary to revisions, versions are expected to contain breaking changes such as removal of API operations or changes to authentication.

You can also monitor APIs when using API Management. You can see usage metrics in your Azure API Management instance. You may log API calls in Azure Application Insights to create charts, monitor live traffic, and simplify debugging.

API Management makes it easy to publish APIs to external developers. Azure API Management comes with a developer portal which is an automatically generated, fully customizable website where visitors can discover APIs, learn how to use them, try them out interactively, download their OpenAPI specification, and finally sign up to acquire API keys.

How to use API Management with microservices

Azure API Management has recently become available in a new pricing tier. With its billing per execution, the Consumption tier is especially suited to microservice-based architectures and event-driven systems. For example, it would be a great choice for our hypothetical online store.

For more advanced systems, other tiers of API Management offer a richer feature set.

Regardless of the selected service tier, you can easily front your Azure Functions with an Azure API Management instance. It takes only a few minutes to get started with Azure API Management.
Source: Azure

How to avoid overstocks and understocks with better demand forecasting

Promotional planning and demand forecasting are incredibly complex processes. Take something seemingly straightforward, like planning the weekly flyer: thousands of questions involving a multitude of teams go into deciding what products to promote and where to position the inventory to maximize sell-through. For example:

What products do I promote?
How do I feature these items in a store? (Planogram: end cap, shelf talkers, signage etc.)
What pricing mechanic do I use? (% off, BOGO, multi-buy, $ off, loyalty offer, basket offer)
How do the products I'm promoting contribute to my overall sales plan?
How do the products I'm promoting interact with each other? (halo and cannibalization)
I have 5,000 stores, how much inventory of each promoted item should I stock at each store?

If the planning is not successful, the repercussions can hurt a business:

Stockouts directly result in lost revenue opportunities, through lost product sales. This could be a result of customers who simply purchase the desired item from another retailer—or a different brand of the item.
Overstock results in costly markdowns and shrinkage (spoilage) that impacts margin. The opportunity cost of holding non-productive inventory in-store also hurts the merchant. And if inventory freshness is a top priority, poor store allocation can impact brand or customer experience.
Since retailers invest margin to promote products, inefficient promotion planning can be a costly exercise. It’s vital to promote items that drive the intended lift.

Solution

Rubikloud’s Price & Promotion Manager allows merchants and supply chain professionals to take a holistic approach to integrated forecasting and replenishment. The product has three core modules:

Learn module: Leverages machine learning to understand how internal and external factors impact demand at a store-sku level, as well as a recommendation framework to improve future planning activities.
Activate module: Allows non-technical users to harness the power of machine learning to better forecast demand and seamlessly integrate forecasts into the supply chain process.
Optimize module: Simulates expected outcomes by changing various demand-driving levers such as promo mechanics, store placement, flyer, halo and cannibalization. The module can quickly reload past campaigns to automate forecast and allocation processes.

In addition, AI automates decision-making across the forecasting lifecycle. The retail-centric approach to forecasting applies novel solutions to more accurately forecast demand. For example, to address new SKUs, the solution uses a new mapping approach to address data scarcity and improve forecast accuracy.
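Rubikloud does not publish the details of its mapping approach, but the general idea of seeding a new SKU's forecast with the demand history of the most similar existing SKU can be sketched as attribute matching. All attributes, SKU IDs, and the overlap score below are invented for illustration.

```python
def attribute_overlap(a: dict, b: dict) -> int:
    """Count how many attributes two SKUs share with equal values."""
    return sum(1 for k in a if k in b and a[k] == b[k])

def map_new_sku(new_sku: dict, catalog: dict) -> str:
    """Map a new SKU to the most similar existing SKU so its demand
    history can seed the forecast (one plausible cold-start tactic)."""
    return max(catalog, key=lambda sku_id: attribute_overlap(new_sku, catalog[sku_id]))

catalog = {
    "SKU-100": {"category": "shampoo", "size_ml": 500, "brand": "A"},
    "SKU-200": {"category": "shampoo", "size_ml": 250, "brand": "B"},
    "SKU-300": {"category": "soap", "size_ml": 100, "brand": "A"},
}
new_sku = {"category": "shampoo", "size_ml": 500, "brand": "A"}
proxy = map_new_sku(new_sku, catalog)   # SKU-100 shares all three attributes
```

Real systems would weight attributes and blend several neighbors rather than pick a single proxy, but the sketch captures why mapping mitigates data scarcity for items with no sales history.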

The Price and Promotion Manager solution is built on a cloud-native, SaaS data platform designed to handle enterprise data workloads, covering all aspects of the data journey from ingestion and validation to transformation into a proprietary data model. Users can seamlessly integrate solution outputs into their supply chain processes. The product design recognizes the challenges faced by category managers and enables a more efficient planning process (for example, a quick view of YoY comp promotions).

Benefits

Addresses data sparsity introduced by new product development and infrequently purchased items to better predict demand through new SKU mapping.
Translates stacked promotions and various promotion mechanics to an effective price, to better model impact on demand.
Uses hierarchical models to improve forecast accuracy.
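The "effective price" translation in the second point can be made concrete with a toy calculation. The promotion encoding and stacking order below are assumptions made for the sketch, not Rubikloud's actual model.

```python
def effective_price(base_price: float, promotions: list) -> float:
    """Reduce stacked promotions to one effective unit price.
    Percent discounts apply multiplicatively, dollar discounts subtract,
    and a two-for-one (BOGO) halves the unit price."""
    price = base_price
    for kind, value in promotions:
        if kind == "percent_off":
            price *= 1 - value / 100
        elif kind == "dollar_off":
            price -= value
        elif kind == "bogo":          # buy one, get one free
            price /= 2
    return round(max(price, 0.0), 2)

# A $10 item with 20% off plus a $1 loyalty coupon stacks to $7.00,
# the single number a demand model can actually condition on.
promo_price = effective_price(10.0, [("percent_off", 20), ("dollar_off", 1.0)])
```

Collapsing every mechanic to one number is what lets a single demand model learn price elasticity across BOGO, multi-buy, and coupon campaigns alike.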

Azure Services

Rubikloud’s solution uses the following Azure services:

HDInsight: allows Rubikloud to work faster and to have full confidence that they are taking advantage of every possible optimization.
Cosmos DB: provides the convenience of an always-on, reliable, and accessible key/value store and database service.
Blob Storage: easy to use and integrates well with HDInsight.
Azure Kubernetes Service (AKS): uses the power of Kubernetes orchestration for all Azure VM customers.

Recommended next steps

Explore how Price & Promotion Manager enables AI-powered price and promotion optimization for enterprise retail.
Source: Azure

From CCIE to Google Cloud Network Engineer: four things to think about

To stay relevant and wanted in the high-tech job market, it’s important to keep abreast of new technologies—and get certified in them! Google Cloud offers a number of professional certifications, including the new Professional Cloud Network Engineer. Currently in beta, certifications such as this can make you a valuable asset in a multi-cloud world. If you’re coming from a traditional on-premises IT environment, there are some things that are helpful to know up front when studying for the Cloud Network Engineer certification. Personally, I spent nearly two decades working in mainstream IT operations settings, and have made the switch to cloud. As a former Cisco Certified Internetwork Expert (CCIE), I’ve had to let go of the past and open up to seeing and learning new things in a slightly different way. Here are some things to understand before you start studying. The sooner you see the difference between networking in the cloud and on-prem, the more successful you’ll be.

1. Focus on workflows, not packets.

Figure 1 is a common network diagram that shows the data flow between two endpoints over a simple network. Data originates in applications on Endpoint 1 and flows up and down the TCP/IP network stack across the devices in the network, until it finally reaches the applications on Endpoint 2. Before a large chunk of data is sent out of Endpoint 1, it is sliced up into smaller pieces. Protocol headers are then prepended to these pieces before they are sent out onto the wire as packets. These packets, and their associated headers, are the atomic unit in the networking world.

Figure 1. Packetized data flow through the network.

As a network engineer though, you typically focus on the network devices in between the endpoints, not the endpoints themselves. As you can see in Router-1, the majority of traffic flows through the router; it comes in one interface (the so-called “goes-inta” interface) and passes out the “goes-outta” interface. Only a relatively small amount of traffic is destined for the router itself. Data destined for the network device, meanwhile, includes control-plane communications, management traffic, or malicious attacks. This “through vs. to” traffic balance is common across all networking devices (switches, routers, firewalls, and load balancers) and results in a “goes-inta/goes-outta” view of the world as you configure, operate, and troubleshoot your network devices.

Once you step into the cloud engineer role, the atomic unit changes. Packets and headers are replaced with workflows and their associated datasets. Figure 2 shows this conceptual change through a typical three-tier web deployment. The idea of the network as you knew it is abstracted and distributed. The traffic pattern now inverts, with the majority of traffic either sourced from or destined for a cloud service or application that resides on a cloud resource, rather than the network devices between them.

Figure 2. Cloud-based three-tier web deployment.

You can see this when you look at how to configure the firewall rule named http-inbound. Even though you configure the rule in relation to the VPC, you now have to identify a target using either the --target-tags or the --target-service-accounts=[IAM Service Account] gcloud arguments. In addition, depending on the ingress or egress direction of the traffic, you only configure either a source or destination filter, not both. This is because half of the information is considered to be the target itself. In other words, the focus is on the data that enters and leaves the specific cloud service.

2. Realize your building blocks have changed.

As you move from on-premises to the cloud, don’t get hung up trying to fit all the networking details you already know into the new solutions you are learning. Remember that your new goal is to enable workflows.

In the old networking world there were tangible building blocks such as switches, routers, firewalls, load balancers, cables, racks, power outlets, and BTU calculations. The intangible building blocks were features and methods defined by IETF RFCs and vendor-proprietary protocols, with their ordered steps, finite-state machines, data structures, and timers. You physically assembled all these things to build interconnectivity between the end users and the applications they used to make your business run. Implementing all this took days and weeks. In addition, as the network grew, the associated management and cost to operate it grew disproportionately larger for your business.

Cloud solutions treat this complexity as a software problem and add a layer of abstraction between end users and workloads, removing or hiding many of the complex details associated with the old building blocks. Your new building blocks are cloud-based services and features like Google Compute Engine, Cloud SQL, Cloud Functions, and Cloud Pub/Sub. You assemble these new resources based on your needs to provide IaaS, PaaS, SaaS, and FaaS solutions. Your deployment schedule shrinks from days and weeks to seconds and minutes as you connect your enterprise network via Cloud VPN or Cloud Interconnect and deploy VPCs, Cloud Load Balancing, and Docker containers with Google Kubernetes Engine. You minimize management complexity through tools like Deployment Manager, Stackdriver, and Google Cloud’s pricing tools. You no longer simply build connectivity between endpoints, but rather enable virtualized environments by treating infrastructure as code.

3. Understand the power of a global fiber network.

Many cloud providers’ infrastructure is made up of large data center complexes in geographical regions across the globe, with each region subdivided into zones for service redundancy. Connectivity between these regions, for the most part, happens over the public internet.

Figure 3. A typical cloud provider’s global infrastructure.

The benefit of this approach is that the internet provides ubiquitous connectivity. Looking at Figure 3, though, you can see that there are several downsides:

Management complexity. As your cloud footprint grows and you need your “island” VPCs to communicate over various peering options across regions, you inherit additional configuration, operational, and troubleshooting complexity.
Unpredictable performance. You have no control over jitter, delay, and packet loss in the public internet.
Suboptimal routes. The number of hops your traffic must traverse across the internet is most likely not optimized for your business—you are at the mercy of network outages and carriers’ BGP policies.
Security risks. The internet is where the good people are (your customers), but it’s also unfortunately where the bad people are. While you can encrypt your traffic in transit, you still run a risk when sending inter-region communications over the public internet.

Figure 4. Google’s Premium Tier cloud infrastructure.

Google Cloud’s Premium Network Service Tier, now generally available, changes the game. As shown in Figure 4, the public internet sits outside of your global VPC. The core of your network is now Google’s own private fiber network. This improves your situation in several ways:

You no longer have a cloud footprint made up of isolated geographic VPC islands—your infrastructure is one large homogeneous cloud network. This network can be regional to start and grow to a global footprint when you are ready, with minimal headache.
The issues of packet loss, delay, and jitter are mitigated significantly compared to the public internet.
The number of hops between endpoints is significantly minimized. Once your traffic enters the Google network, it rides across its optimum path as opposed to through various internet carrier networks.
By utilizing global load balancing and anycast addresses, traffic hops onto and jumps off of Google’s network at the closest point to your end users.
Inter-region and private access traffic is automatically encrypted, transparently to the application, and sent across the private fiber backbone. Because it doesn’t ride over the internet, that traffic is never exposed to the bad guys.

Of course, if these advantages aren’t as compelling as lower bandwidth costs, Google Cloud also offers a Standard Networking Tier that routes traffic over the public internet for a lower price point.

4. Embrace the flexibility of the API, Client Libraries, SDK, and Console.

Sure, some networking devices have GUI-based management programs or web consoles, but if you’re like me, you’ve probably spent most of your career in the CLI of a networking device. This is because GUIs tend to make the basic stuff easy and CLIs make the hard stuff possible—they’re your go-to place for configuration, operation, and troubleshooting.

CLIs do have their limitations though. If you want new features you have to upgrade software, and before you upgrade you have to test. That takes time and it’s expensive. If the CLI’s command structure or output changes, your existing automation and scripting breaks. In addition, in large networks with literally hundreds or thousands of devices, lack of software version consistency can be a management nightmare. Yes, you have SNMP, and where SNMP fails, XML schemas and NETCONF/YANG models step in to evolve things in the right direction. All this said, it’s a far cry from the programmatic access you are given once you step into the cloud.

Figure 5. Cloud API, Client Libraries, SDKs, and Console.

From a configuration, operation, and troubleshooting standpoint, the cloud has a lot of roads to the proverbial top of Mount Fuji. Figure 5 shows the different paths available. You are free to choose the one that best maps to your skill level and is most appropriate to complete the task at hand. While Google Cloud has a CLI-based SDK for shell scripting or interactive terminal sessions, you don’t have to use it. If you are developing an application or prefer a programmatic approach, you can use one of many client libraries that expose a wealth of functionality. If you’re an experienced programmer with specific needs, you can even write directly to the REST API itself. And of course, on the other end of the spectrum, if you are learning or prefer to use a visual approach, there’s always the console.

In addition to the tools above, if you need to create larger topologies on a regular cadence, you may want to look at Google’s Cloud Deployment Manager. If you want a vendor-agnostic tool that works across cloud providers, you can investigate the open-source program Terraform. Both solutions offer a jump from imperative to declarative infrastructure programming. This may be a good fit if you need a more consistent workflow across developers and operators as they provision resources.

Putting it all together

If this sounds like a lot, that’s because it is. Don’t despair though; there’s a readily available resource that will really help you grok these foundational network concepts: the documentation.

You are most likely very familiar with the documentation section of several network vendors’ websites. To get up to speed on networking on Google Cloud, your best bet is to familiarize yourself with Google’s documentation as well. There is documentation for high-level concepts like network tier levels, network peering, and hybrid connectivity. Then, each cloud service also has its own individual set of documentation, subdivided into concepts, how-tos, quotas, pricing, and other areas. Reviewing how it is structured and creating bookmarks will make studying and the certification process much easier. Better yet, it will also make you a better cloud engineer.

Finally, I want to challenge you to stretch beyond your comfort zone. Moving from the network to the cloud is about virtualization, automation, programming, and developing new areas of expertise. Your journey into the cloud should not stop at learning how GCP implements VPCs. Set long-term as well as short-term goals. There are so many new areas where your skill sets are needed and you can provide value. You can do it; don’t doubt that for one minute.

In my next blog post I’ll be discussing an approach to structuring your cloud learning. This will make the learning and certification process easier, as well as prepare you for the Cloud Network Engineer role. Until then, the Google Cloud training team has lots of ways for you to increase your Google Cloud know-how. Join our webinar on preparing for the Professional Cloud Network Engineer certification exam on February 22, 2019 at 9:45am PST. Now go visit the Google certification page and set your first certification goal! Best of luck!
Source: Google Cloud Platform

Announcing Google Cloud Security Talks during RSA Conference 2019

Going to RSA Conference in San Francisco next month? In addition to keynote sessions, we’re hosting the third edition of Google Cloud Security Talks at Bespoke in Westfield San Francisco Centre, a five-minute walk from Moscone Center.

This series of 20 talks over two days will cover Google Cloud’s security products and capabilities, our 2019 vision and roadmap, and insights from our upcoming security report. The majority of the sessions will be led by Googlers, including Panos Mavrommatis, Engineering Director for Safe Browsing, and Eugene Liderman, Director for Android Security Strategy. You’ll also get to hear from security partners running workloads on GCP, including Palo Alto Networks, and from customers about how security is a differentiator for Google Cloud. You can view the full agenda below, and feel free to register for the event on our website.

In addition to presentations and panels, we’ll feature several interactive demos that showcase how Google prevents phishing and ransomware attacks and how partners integrate with our services.

Finally, various Google security experts will be talking at the RSA Conference itself, as well as at additional parallel events throughout the week:

RSA CONFERENCE | Moscone Center

What Should a US Federal Privacy Law Look Like? [PRV-T09]
Tuesday, March 5 | 3:40 PM – 4:30 PM
Keith Enright, Chief Privacy Officer, Google

Attacking Machine Learning: On the Security & Privacy of Neural Networks [MLAI-W03]
Wednesday, March 6 | 9:20 AM – 10:10 AM
Nicholas Carlini, Research Scientist, Google

First Steps in RF: Lessons Learned [SBX3-W2]
Wednesday, March 6 | 1:50 PM – 2:50 PM
Dave Weinstein, Android Security Assurance Engineering Manager, Google

Kubernetes Runtime Security: What Happens If a Container Goes Bad? [CSV-R02]
Thursday, March 7 | 8:00 AM – 8:50 AM
Jen Tong, Security Advocate, Google Cloud

Anatomy of Phishing Campaigns: A Gmail Perspective [HT-R03]
Thursday, March 7 | 9:20 AM – 10:10 AM
Ali Zand, Software Engineer, Google & Nicolas Lidzborski, Senior Software Engineer, Google Cloud

Engineering Trust and Security in the Cloud Era, Based on Early Lessons [KEY-F03S]
Friday, March 8 | 11:10 AM – 12:00 PM
Suzanne Frey, Vice President, Engineering, Google Cloud; Quentin Hardy, Head of Editorial, Google Cloud & Amin Vadhat, Google Fellow and Networking Technical Lead, Google

THE CYBER RISK FORUM | The Fairmont, 950 Mason St, San Francisco

The Human Factor: How CEOs and Boards Can Ensure Your Employees are an Asset not a Liability in the War on Cyber
Monday, March 4 | 11:00 AM – 11:50 AM
Sam Srinivas, Director of Product Management, Google Cloud

BSides SF | City View at Metreon, 135 4th St #4000, San Francisco

You Might Still Need Patches for Your Denim, but You No Longer Need Them for Prod
Monday, March 4 | 3:30 PM – 4:00 PM
Maya Kaczorowski, Product Manager, Google Cloud & Dan Lorenc, Software Engineer, Google Cloud

Do Androids Dream of Electric Fences?: Defending Android in the Enterprise
Monday, March 4 | 4:50 PM – 5:20 PM
Brandon Weeks, Security Engineer, Google Cloud

At Google Cloud, we work hard to protect your underlying infrastructure end to end and give you control over your own data, while complying with industry regulations, standards, and frameworks. We look forward to showing you how during RSA Conference next month!
Source: Google Cloud Platform

Niantic: Pokémon Go is getting better AR photos

Social networks could soon see a conspicuous number of pictures of Pikachu and his friends: developer studio Niantic plans to build a new feature into Pokémon Go for reasonably convincing AR photographs. (Pokémon Go, Augmented Reality)
Source: Golem

New IBM services help companies manage the new multicloud world

Enterprises are going through a huge transformation in how they operate.
Today, according to a recent study, 85 percent of enterprises operate in a multicloud environment. The IBM Institute for Business Value estimates that by 2021, 98 percent of organizations studied plan to adopt multicloud architectures.
Navigating multicloud complexity
There are no two ways about it: it’s a multicloud world. With most businesses already running and trying to manage five or more cloud environments, often from multiple vendors, companies are struggling to keep up. The IBV study also states that just 38 percent of those same enterprises will have the procedures and tools they need to operate this environment.
This is compounded by the fact that managing these multiple clouds is largely customized and can be complex, with potentially major security implications and a lack of consistent management and integration tools. What’s required are services that take an integrated approach, giving companies a single management and operations system that addresses three critical layers:

Business management. Applications that provide digital service ordering, modern service management, and cost governance.
Orchestration. An automation layer that enables services of different types, from different vendors, to be delivered in an optimized manner and made available to consumers.
Operations. A layer that enables infrastructure and operations admins to monitor and maintain systems, including legacy infrastructure, private cloud, public cloud and container environments.

Simplifying multicloud management
To help navigate this complexity, IBM is embracing the multicloud reality and vision for our clients. Our next step is building on a recent partnership expansion with ServiceNow to offer new services designed to help enterprises simplify the management of their IT resources across multiple cloud providers and on-premises environments. Not only can IBM Services for Multicloud Management help you address those three areas; it also includes a unified, self-service experience that enables companies to:

integrate with the ServiceNow Portal to configure and buy cloud services/solutions from multiple cloud providers
offer a global DevOps pipeline and performance management services
offer data center performance cloud health, container management and AI ops management
provide workload planning, cloud sourcing, procurement and cost and asset management

Introducing new IBM Services for Cloud Strategy and Design
To succeed on any cloud platform and deliver real business value, it’s essential for companies to build the right strategy following a broad assessment. At IBM, we’re working with clients across industries to help them determine which processes, methods and applications need to be moved or modernized for cloud.
With our new IBM Services for Cloud Strategy and Design, IBM is providing a comprehensive set of consulting services to advise clients on their journey to hybrid cloud. Services include design, migration, integration, road mapping and architectural services with support for multiple vendor platforms. Our enhanced cloud capabilities combined with our Cloud Innovate tools, IBM Cloud approach and automated decision accelerators, help companies architect the best, holistic approach to cloud.
Dedicated teams of certified IBM consultants work with clients to help design, build and manage their cloud architecture with open, secure multicloud strategies. With the right multicloud support, we’re supporting companies with application development, migration, modernization and management for faster deployment.
There’s no question cloud is here to stay. The question is, is your company ready for a multicloud world?
Learn more about these new IBM Cloud Services and get started today.
Source: Thoughts on Cloud