Money matters: Automating your Cloud Billing budgets

The bigger a cloud environment, the more important it is to have robust cost management tools, including budget automation capabilities. Budgeting tools can help you avoid unnecessary costs via proactive notifications of future (and actual) overages. Billing automation can help you support multiple budgets, each with very granular filters (we have customers with thousands of individual budgets!), and customize your budget notifications. Here on the Cloud Billing team, we’ve been working hard to improve these capabilities based on your feedback, and are excited to announce several enhancements:

- General availability (GA) of the Budgets API (learn more)
- Granular choice of credits – choose specific credits to include in your budget (learn more)
- Customized budget alert email recipients – send email notifications to whoever you want, as well as remove billing admins and users from the default list (learn more)

Let’s take a look at these new features in greater depth.

Cloud Billing’s latest budgeting features

Our goal with Cloud Billing is to create an enterprise-grade cost management suite, complete with all the tools you need to manage the largest and most complex of cloud environments. Read on to learn more about Cloud Billing’s new budgeting and automation capabilities.

Budgets API

Earlier this month we announced the general availability of the Budgets API. The Budgets API allows you to do almost everything you can do from within the Cloud Billing UI: create, edit and delete budgets, as well as use the same scoping and filtering capabilities. You can interact with the API directly through REST calls, or with our Java, Node.js, Python, Go, and .NET client libraries. Where the UI and API capabilities differ, know that those differences will be short-lived—our goal is to ultimately deliver feature parity between the UI and the API.

The Budgets API is especially useful when you want to automate budget creation, e.g., create a budget whenever a team spins up a new project. It’s also great for editing budgets en masse. For example, if you know you’re about to have a spike in sales, you could increase all related budgets by 20%.

Another great use for the Budgets API is to hide and simplify your end users’ permission model for budget creation and editing. If your company already has a self-managed portal for cloud resource management, you can integrate simple budget experiences there, and then use a service account to create or edit the budgets. This way, you don’t need to give your users budget-related IAM permissions, which would allow them to do more than they were originally set up to do. Learn more in our documentation.
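To make this concrete, here is a minimal sketch of creating a budget with the Python client library (google-cloud-billing-budgets); the billing account ID, project number, and amounts below are placeholders:

from google.cloud.billing import budgets_v1
from google.type import money_pb2

client = budgets_v1.BudgetServiceClient()

budget = budgets_v1.Budget(
    display_name="new-project-budget",
    budget_filter=budgets_v1.Filter(
        projects=["projects/123456789"],  # placeholder project number
        # The new granular credits setting; specific credit types can be
        # listed via credit_types when using INCLUDE_SPECIFIED_CREDITS.
        credit_types_treatment=budgets_v1.Filter.CreditTypesTreatment.INCLUDE_ALL_CREDITS,
    ),
    amount=budgets_v1.BudgetAmount(
        specified_amount=money_pb2.Money(currency_code="USD", units=1000)
    ),
    threshold_rules=[
        # Alert at 90% of actual spend and at 100% of forecasted spend.
        budgets_v1.ThresholdRule(threshold_percent=0.9),
        budgets_v1.ThresholdRule(
            threshold_percent=1.0,
            spend_basis=budgets_v1.ThresholdRule.Basis.FORECASTED_SPEND,
        ),
    ],
)

created = client.create_budget(
    parent="billingAccounts/000000-000000-000000",  # placeholder account ID
    budget=budget,
)
print("Created budget:", created.name)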
Granular choice of credits

Previously, the credits setting was a simple checkbox that let you include available credits in your budget. Now, you can choose specific credit families, such as discounts or promotions, or even specific credit types (e.g., free tiers). For example, you can build a durable budget that excludes any one-time promotional credits that you may receive at the free tier level.

Customized budget alert email recipients

Cloud Billing is now integrated with Cloud Monitoring, so you can send notification emails to up to five notification channels; in addition, you can decide whether or not to send budget notifications to billing users and admins. With these two features together, you can ensure that notifications are sent to the appropriate recipients.

Programmatic budget notifications

Just like the ability to automate budget creation, Cloud Billing can also alert Pub/Sub topics about changes to a budget. Unlike with email notifications, Cloud Billing notifies the Pub/Sub topics regardless of whether a budget threshold has been crossed, and that information can be easily incorporated into your business logic. You can see some examples of programmatic budget notifications here, including posting to a Slack channel.

A sample process

How might you use these features together? Here’s a sample process that you could implement in your organization to create a budget for a project, monitor it, and automatically disable the project if it reaches a certain threshold.

1. Initiate infrastructure deployment using your tool of choice, for example Terraform or another Infrastructure as Code tool.
2. That in turn calls into Google Cloud Build to deploy your custom workflows across multiple environments, including VMs, serverless, Kubernetes, or Firebase.
3. Using the Google Cloud Budgets API, create an overall budget for the new project, using actual amounts as well as forecasted thresholds.
4. Send notifications both to billing admins and specific employees via Cloud Monitoring channels, and to a Pub/Sub topic.
5. Create a Cloud Function to monitor the Pub/Sub topic, which automatically disables billing if spend is over 150% of budget. This happens if (and only if) this is a test environment, as determined by the environment label. For all other environments, publish a message to the team Slack channel. A sketch of such a function follows below.

See code examples here.
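Here is a minimal sketch of the Cloud Function in step 5, assuming a Python background function subscribed to the budget’s Pub/Sub topic; the project ID is a placeholder, and the environment-label check and Slack notification are omitted for brevity:

import base64
import json

from googleapiclient import discovery  # google-api-python-client

PROJECT_ID = "my-test-project"  # placeholder; in practice, derived from the environment label


def stop_billing(event, context):
    """Triggered by a Cloud Billing budget notification on Pub/Sub."""
    payload = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
    cost = payload["costAmount"]
    budget = payload["budgetAmount"]

    # Only act once actual spend exceeds 150% of the budgeted amount.
    if cost <= 1.5 * budget:
        return

    billing = discovery.build("cloudbilling", "v1", cache_discovery=False)
    name = f"projects/{PROJECT_ID}"
    info = billing.projects().getBillingInfo(name=name).execute()
    if info.get("billingAccountName"):
        # Detaching the billing account disables billing for the project.
        billing.projects().updateBillingInfo(
            name=name, body={"billingAccountName": ""}
        ).execute()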
Your budget, your way

Every organization is different, and you need the ability to customize the rate at which you consume your cloud resources. We continue to add features to Cloud Billing that allow any organization—from the smallest business to the largest enterprise—to manage their budget how they want, with programmatic methods that enable granular budgets and automated cost controls. To learn more and get started with budgeting on Cloud Billing, check out the documentation.
Quelle: Google Cloud Platform

You do you: How to succeed in a distributed, multi-cloud world

Why do we use more than one thing to solve a particular need? Sometimes we don’t have a choice. All my financial assets aren’t in one place because my employer-provided retirement account is at a different financial institution than my personal account. Other times, we purposely diversify. I could buy all my clothes at one retailer, but whether it’s a question of personal taste, convenience, or just circumstance, I buy shoes at one store (and often different stores for different types of shoes), shirts at another store, and outerwear somewhere else.

Is it the same situation within your IT department? Based on organizational dynamics, past bets on technology, and current customer demands, I’d bet that you have a few solutions to any given problem. And it’s happening again with public clouds, as the statistics show that most of you are using more than one provider.

But all public clouds aren’t the same. To be sure, there’s commonality amongst them: every public cloud provider offers virtual compute, storage, and networking along with middleware services like messaging. But each cloud offers novel services that you won’t find elsewhere. And each operates within different geographic regions. Some clouds offer different security, data sovereignty, and hybrid capabilities than others. And the user experience—developer tools, web portals, automation capabilities—isn’t uniform and may appeal to different teams within your company.

Using multiple clouds may be becoming commonplace, but it’s not simple to do. There are different tools, skills, and paradigms to absorb. But don’t freak out. Don’t send your developers off to learn every nuance of every cloud, or take your attention away from delivering customer value. You do, however, need to prepare your technical teams so that they can make the most of multi-cloud.

So what should you do, as a leader of technical teams? Here is some high-level advice to consider as you think about how to approach multi-cloud. And remember, there’s no universal right solution—only the right solution for your organization, right now.

Keep your primary focus on portable skills

Your software isn’t defined by your choice of cloud. That’s probably blasphemous to say, coming from someone working for a public cloud provider, but it’s the truth. Most of what it takes to build great software transcends any given deployment target.

What software development skills truly matter? Go deep on one or more programming languages. Really understand how to write efficient, changeable, and testable code. Optimize your dev environment, including your IDE, experimental sandboxes, and source control flow. Learn a frontend framework like Angular or Flutter. Grok the use cases for a relational database versus a schema-less database. Figure out the right ways to package up applications, including how to use containers. Invest in modern architectural knowledge around microservices, micro frontends, event stream processing, JAMstack, APIs, and service mesh. Know how to build a complete continuous integration pipeline that gives your team fast feedback. This is valuable, portable knowledge that has little to do with which cloud you eventually use.

Don’t get me wrong, you’ll want to develop skills around novel cloud services. All clouds aren’t the same, and there are legitimate differences in how you authenticate, provision, and consume those powerful services. An app designed to run great on one cloud won’t easily run on another.
Just don’t forget that it’s all about your software and your customers. The public clouds are here to serve you, not the other way around!

Use the “thinnest viable platform” across environments

Too often, organizations put heavyweight, opaque platforms in place and hope developers will come and use them. That’s an anti-pattern, and companies are noticing a better way.

The authors of the book Team Topologies promote the idea of a Thinnest Viable Platform (TVP) to accelerate development. In many organizations, Kubernetes is the start of their TVP. It offers a rich, consistent API for containerized workloads. It could make sense to layer Knative on top of that TVP to give developers an app-centric interface that hides the underlying complexity of Kubernetes. Then, you might introduce an embedded service mesh to the cluster so that developers don’t have to write infrastructure-centric code—client-side load balancing, service discovery, retries, circuit breaking and the like. (Note, if you combine those things, and mix in a few others, you get Anthos. Just sayin’.)

But what’s really powerful here is having a base platform made up of industry-standard open source. Not just open source, but standard open source. You know, the projects that a massive ecosystem supports and integrates with—think Kubernetes, Istio, Envoy, Tekton, and Cloud Native Buildpacks. This allows you to run an identical platform across your deployment targets and integrate with best-of-breed infrastructure and services. Your developers are free to take the foundational plumbing for granted, and steer their attention to all the value-adding capabilities available in each environment.

Pick the right cloud (and services) based on your app’s needs

Let’s recap. You’re focused on portable skills, and have a foundational platform that makes it easier to run software consistently in every environment. Now, you need to choose where the software actually runs.

Your developers may write software that’s completely cloud-agnostic and can run anywhere. That’s hard to do, but assuming you’ve done it, your developers don’t need to make any tough choices up front. When might you need upfront knowledge of the target environment? A few examples:

- Your app depends on unique capabilities for AI, data processing, IoT, or vertical-specific APIs—think media or healthcare.
- You need to host your application in a specific geography, and thus choose a specific cloud, datacenter, or partner facility.
- Your app must sit next to a specific data source—think SaaS systems, partner data centers, mobile users—and use whatever host is closest.

Have a well-tested decision tree in place to help your teams decide when to use novel versus commodity services, and how to select the cloud that makes the most sense for the workload. Choosing the cloud and services to use may require expert help. Reach out to Google’s own experts for help, or work with our vast network of talented partners who offer proven guidance on your journey. The choice is yours.
Quelle: Google Cloud Platform

What’s new in BigQuery ML: non-linear model types and model export

We launched BigQuery ML, an integrated part of Google Cloud’s BigQuery data warehouse, in 2018 as a SQL interface for training and using linear models. Many customers with a large amount of data in BigQuery started using BigQuery ML to remove the need for data ETL, since it brought ML directly to their stored data. Due to their ease of explainability, linear models worked quite well for many of our customers.

However, as many Kaggle machine learning competitions have shown, some non-linear model types like XGBoost and AutoML Tables work really well on structured data. Recent advances in Explainable AI based on SHAP values have also enabled customers to better understand why a prediction was made by these non-linear models. Google Cloud AI Platform already provides the ability to train these non-linear models, and we have integrated with Cloud AI Platform to bring these capabilities to BigQuery. We have added the ability to train and use three new types of regression and classification models: boosted trees using XGBoost, AutoML Tables, and DNNs using TensorFlow. The models trained in BigQuery ML can also be exported for online prediction on Cloud AI Platform or a customer’s own serving stack. Furthermore, we expanded the use cases to include recommendation systems, clustering, and time series forecasting.

We are announcing the general availability of the following: boosted trees using XGBoost, deep neural networks (DNNs) using TensorFlow, and model export for online prediction. Here are more details on each of them:

Boosted trees using XGBoost

You can train and use boosted tree models using the XGBoost library. Tree-based models capture feature non-linearity well, and XGBoost is one of the most popular libraries for building boosted tree models. These models have been shown to work very well on structured data in Kaggle competitions without being as complex and opaque as neural networks, since they let you inspect the set of decision trees to understand the models. This should be one of the first model types you try for any problem. Get started with the documentation to understand how to use this model type.

Deep neural networks using TensorFlow

These are fully connected neural networks, of type DNNClassifier and DNNRegressor in TensorFlow. Using a DNN reduces the need for feature engineering, as the hidden layers capture a lot of feature interactions and transformations. However, the hyperparameters make a significant difference in performance, and understanding them requires more advanced data science skills. We suggest that only experienced data scientists use this model type, and that they leverage a hyperparameter tuning service like Google Vizier to optimize the models. Get started with the documentation to understand how to use this model type.

Model export for online prediction

Once you have built a model in BigQuery ML, you can export it for online prediction, or for further editing and inspection using TensorFlow or XGBoost tools. You can export all models except time series models. All models except boosted trees are exported in the TensorFlow SavedModel format, which can be deployed for online prediction or inspected and edited further using TensorFlow tools. Boosted tree models are exported in the Booster format for online deployment and further editing or inspection. Get started with the documentation to understand how to export models and use them for online prediction.
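As a rough illustration, here is a minimal sketch of training a boosted tree model and then exporting it, run through the BigQuery Python client; the dataset, table, and Cloud Storage bucket names are hypothetical:

from google.cloud import bigquery

client = bigquery.Client()

# Train a boosted tree classifier (XGBoost) on a hypothetical training table.
client.query("""
    CREATE OR REPLACE MODEL `mydataset.sample_xgb_model`
    OPTIONS (
      model_type = 'BOOSTED_TREE_CLASSIFIER',
      input_label_cols = ['label']
    ) AS
    SELECT * FROM `mydataset.training_data`
""").result()

# Export the trained model to Cloud Storage; boosted tree models are written
# in XGBoost's Booster format, other model types as TensorFlow SavedModel.
client.query("""
    EXPORT MODEL `mydataset.sample_xgb_model`
    OPTIONS (URI = 'gs://my-bucket/exports/sample_xgb_model')
""").result()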
We are building a set of notebooks for common patterns (use cases) for these models that we see in different industries. Check out all the tutorials and notebooks.
Quelle: Google Cloud Platform

Docker Captain Take 5 – Ajeet Singh Raina

Docker Captains are select members of the community who are both experts in their field and passionate about sharing their Docker knowledge with others. Today, we’re introducing “Docker Captains Take 5”, a regular blog series where we get a closer look at the Docker experts who share their knowledge online and offline around the world. A different Captain will be featured each time, and we will ask them the same broad set of questions, ranging from what their best Docker tip is to whether they prefer cats or dogs (personally, we like whales and turtles over here). To kick us off, we’re interviewing Ajeet Singh Raina, who has been a Docker Captain since 2016 and is a DevRel Manager at Redis Labs. He is based in Bangalore, India.

How/when did you first discover Docker?

It was 2013 when I first watched Solomon Hykes present “The Future of Linux Containers” at PyCon in Santa Clara. That video inspired me to write my first blog post on Docker, and the rest is history.

What is your favorite Docker command?

The docker buildx CLI is one of my favorite commands. It allows you to build and run multi-architecture Docker images with a single one-line command:

$ docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7,linux/arm/v6 .

I frequently use this tool to build Docker images for my tiny $99 NVIDIA Jetson Nano board as well as for the Raspberry Pi.

What is your top tip you think other people don’t know for working with Docker?

If you’re looking for a way to automate Docker container base image updates, Watchtower is a promising tool. Watchtower monitors running containers and watches for changes to the images those containers were originally started from. Whenever an image changes, this tool automatically restarts the container using the new image. Cool, isn’t it?

What’s the coolest Docker demo you have done/seen ?

Early this year, I ran a Kubernetes 101 workshop for almost 4 hours at one of the Docker Bangalore Community Meetup events at SAP Labs, India, in front of an audience of more than 550 people. It was an amazing experience going live and covering the full set of KubeLabs tutorials running on the Play with Kubernetes playground.

What have you worked on in the past 6 months that you’re particularly proud of?

One of the most exciting projects I have worked on in the last 6 months is titled “Pico”. The Pico project is all about object detection and text analytics using Docker, Apache Kafka, IoT, and Amazon Rekognition. Imagine being able to capture live video streams, identify objects using deep learning, and then trigger actions or notifications based on the identified objects – all using Docker containers. With Pico, you can set up and run a prototype of a live video capture, analysis, and alerting solution. This project has excited dozens of Indian universities and given me opportunities to travel and showcase it to larger communities.

The project is hosted in the ARM Software Developer GitHub repository.

What will be big news for Docker in the next year?

Docker Inc. announcing 10+ million Docker Hub repositories.

What is the biggest challenge that we as a community will need to tackle in 2021?

In 2021, the sustainability of community events despite the pandemic and lockdowns is going to be the biggest challenge for us.

What are your goals for the Docker community in the next year? 

As both a Docker Captain and a Community Leader, I have the following goals for 2021:

- Grow the Docker Bangalore Community from 10k to 12k members
- Target 250+ blog posts around Docker and its ecosystem on Collabnix by 2021
- Conduct joint meetups with all the other leading Docker communities across India
- Take OSCONF (an Open Source Community Conference), a conference dedicated to the Docker & Kubernetes community, to the international level

What talk would you most love to see at the next DockerCon?

Exciting use cases around emerging AI, Docker, and IoT edge platforms.

What is the technology that you’re most excited about and holds a lot of promise?

I’m excited about emerging “no-code” development platforms. A no-code platform uses a visual development environment that lets non-programmers create apps through methods such as dragging and dropping application components to assemble a complete application. With no-code, you don’t need coding knowledge to create apps.

Rapid fire questions…

What new skill have you mastered during the pandemic?

Artificial Intelligence using Docker

Cats or Dogs?

Dogs

Salty, sour or sweet?

Sweet

Beach or mountains?

Beach

Your most often used emoji?

  
Quelle: https://blog.docker.com/feed/

AWS IQ introduces new features to support firms

AWS IQ has introduced two new features to support firms: a project manager role and group chat. With the project manager role, any US-based person at a firm can view requests, talk to customers, submit proposals, and request payment without holding an AWS Certification. Project managers can then bring in an AWS Certified expert to complete the hands-on work in the customer’s account.
Quelle: aws.amazon.com

Amazon Connect introduces APIs for programmatically configuring user hierarchies

Amazon Connect now offers an API for programmatically creating and managing user hierarchies. User hierarchies let you organize users into groups, for example by their location or the department they belong to. With this new capability, you can programmatically mirror your organization’s hierarchy in Amazon Connect as soon as changes are made in your internal data systems, such as HR systems. In addition, you can extract all hierarchy and agent data as a point-in-time snapshot and copy it to another instance. For more information, see the API documentation.
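As a rough sketch of what this could look like with the AWS SDK for Python (boto3), here is a hypothetical example that creates a new hierarchy group under an existing parent group; the instance and group IDs are placeholders:

import boto3

connect = boto3.client("connect")

# Placeholders: your Amazon Connect instance ID and an existing parent group ID.
INSTANCE_ID = "11111111-2222-3333-4444-555555555555"
PARENT_GROUP_ID = "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"

# Create a hierarchy group, e.g., mirroring a department from an HR system.
response = connect.create_user_hierarchy_group(
    Name="Billing Support",
    ParentGroupId=PARENT_GROUP_ID,
    InstanceId=INSTANCE_ID,
)
print("Created hierarchy group:", response["HierarchyGroupId"])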
Quelle: aws.amazon.com

Amazon Chime SDK now supports public switched telephone network (PSTN) audio

The Amazon Chime SDK is a service that makes it easy for developers to add real-time audio, video, and screen-sharing capabilities to their applications. Starting today, the service connects to the global telephone network so that participants can dial into an Amazon Chime SDK meeting from a phone. With this feature, you can also programmatically place an outbound call to a phone number and connect the call to a meeting session. Previously, participants could only join Amazon Chime SDK meeting audio via Voice over IP (VoIP) in mobile and web applications.
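A hypothetical sketch of the outbound-call flow with boto3, assuming the Chime client’s create_meeting_dial_out operation; the phone numbers and IDs are placeholders:

import boto3

chime = boto3.client("chime")

# Create a meeting and an attendee slot for the phone participant (placeholders).
meeting = chime.create_meeting(ClientRequestToken="demo-meeting-001")
attendee = chime.create_attendee(
    MeetingId=meeting["Meeting"]["MeetingId"],
    ExternalUserId="phone-participant-1",
)

# Dial out to a phone number and connect the call to the meeting session.
chime.create_meeting_dial_out(
    MeetingId=meeting["Meeting"]["MeetingId"],
    FromPhoneNumber="+15550100001",  # a number provisioned in your account
    ToPhoneNumber="+15550100002",    # the participant being called
    JoinToken=attendee["Attendee"]["JoinToken"],
)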
Quelle: aws.amazon.com