5 frequently asked questions about Google Cloud Anthos

In April, we introduced Anthos, Google Cloud’s new hybrid and multi-cloud platform that lets you build and manage modern hybrid applications across environments. Powered by Kubernetes and other open-source technologies, Anthos is the only software-based hybrid platform available today that lets you run your applications unmodified on existing on-premises hardware investments or in the public cloud. And with a technology stack that simplifies everything from migration to security to platform operations, Anthos makes it easy to accelerate application development and delivery.

Anthos’ arrival has generated a lot of inquiries from enterprises looking to move closer to the cloud. Here are five common questions about Anthos.

1. How do I get started with Anthos?

One of the biggest decisions companies make is whether to migrate or modernize their existing workloads. Because both processes are complicated, time-consuming, and labor-intensive, it may seem necessary to choose one or the other.

If you are deploying onto Google Cloud Platform (GCP), you can get started with Anthos simply by creating a new GKE cluster with Istio enabled in your project. If deploying on-premises, download and install GKE On-Prem, and register it with your GCP account. Once registered, you can manage your GKE On-Prem clusters just like any existing GKE cluster, and incorporate your services into an Istio service mesh to gain observability and enforce security.

In addition, with the upcoming beta of Migrate for Anthos, you can take VMs from on-prem environments, Compute Engine, or other clouds and automatically convert them into containers running in Google Kubernetes Engine (GKE). It migrates stateful workloads and automatically transforms them to run as containers in GKE pods. Once you’ve migrated your applications into containers, you can modernize them further with added services such as service mesh, Stackdriver Logging and Monitoring, and other solutions for Kubernetes applications in GCP Marketplace.

2. How does Anthos help secure my environment?

Anthos seamlessly integrates security into each stage of the application lifecycle: from development, to build, to run. Security best practices are implemented as default settings and configurations; disabling Kubernetes dashboards is one example [1]. We validate and test conformant Kubernetes versions, and provide patch management and incident response. Your application services are protected in a zero-trust environment with authenticated and encrypted service-to-service communications using mTLS [2].

Anthos also delivers a single, centralized point for enforcing policy across the fleet, whether that’s on-prem or in the cloud. As a security admin, you can let developers develop, knowing your policies are enforced. You control access based on policies and roles, not machines, with centralized configuration management through Anthos Config Management, which continuously checks cluster state for divergence in policies like RBAC and resource quotas. Finally, with a shared responsibility model between you and Google Cloud, Anthos helps reduce the burden of managing patches and performing incident response.

3. How does Anthos work across multiple environments?

Spanning multiple environments can add complexity in terms of resource management and consistency. Anthos provides a unified model for computing, networking, and even service management across clouds and data centers.

Configuration as code is one approach to managing this complexity. Anthos provides configuration as code via Anthos Config Management, which deploys the Anthos Config Management Operator to your GKE or GKE On-Prem clusters, allowing you to monitor and apply any configuration changes detected in a Git repo.
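To illustrate, here is a minimal sketch of the kind of resource Anthos Config Management uses to sync a cluster from Git. The repo URL, branch, and directory are placeholders, and exact field names may vary across ACM versions:

```yaml
# Hypothetical ConfigManagement resource; repo details are illustrative.
apiVersion: configmanagement.gke.io/v1
kind: ConfigManagement
metadata:
  name: config-management
spec:
  # Registered name of this cluster in the fleet
  clusterName: my-gke-cluster
  git:
    # Git repo that holds the declared configuration
    syncRepo: git@github.com:example-org/anthos-config.git
    syncBranch: master
    secretType: ssh
    # Directory within the repo to treat as the config root
    policyDir: "config-root"
```

Applied to each cluster, a resource like this tells the Operator where to pull configuration from, so every cluster converges on the state declared in the repo.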
This real-time configuration management approach also provides central governance, reconciling desired state with the actual resources running across your on-prem and multi-cloud environment. Further, because Anthos is built on a consistent set of APIs based on open-source technologies (like Kubernetes, Istio, and Knative), developers and operators only have to learn one stack that applies across cloud providers. Anthos also increases observability, with more metrics and telemetry across your hybrid environment, and supports zero-downtime upgrades, canary releases for services, and Kubernetes cluster version upgrades.

4. Can Anthos speed up application modernization?

To remain competitive, enterprises need to move faster than ever to build new applications, generate business-differentiating value, and continue to innovate. But developers are busy and in short supply. With Anthos, you can take existing applications and deploy them anywhere—without changing a single line of code. From there, Anthos lets you define custom workflows for building, testing, and deploying across those multiple environments.

Anthos also speeds up application modernization with Cloud Run on GKE (currently in beta), which automatically brings serverless benefits like scale-up, scale-down, and eventing to your applications. Built on Knative, Cloud Run runs stateless HTTP containers in a fully managed environment or in your own GKE cluster.

5. What other vendors support Anthos?

We’ve developed a global partner network for Anthos to help you innovate faster, scale smarter, and stay secure. We’re expanding technology integrations with key partners, increasing the number of service partners, and doubling down on open source to make building on Google Cloud even more flexible and open. We’re working closely with more than 35 hardware, software, and system integration partners to ensure you see value from Anthos from the start. Cisco, VMware, Dell EMC, HPE, Intel, and Lenovo have committed to delivering Anthos on their own hyperconverged infrastructure, and more than 20 enterprise software providers are integrating their offerings with Anthos’ unique capabilities.

Have other questions that aren’t listed here? You can learn more by visiting the Anthos landing page, reading the documentation, or watching this Spotlight session from Google Cloud Next 19.

1. Hardening your cluster’s security
2. Encryption-in-transit
Quelle: Google Cloud Platform

Google Cloud networking in depth: three defense-in-depth principles for securing your environment

If you operate in the cloud, one of the biggest considerations is trust—trust in your provider, of course, but also confidence that you’ve configured your systems correctly. Google Cloud Platform (GCP) offers a robust set of network security controls that allow you to adopt a defense-in-depth security strategy to minimize risk and ensure safe and efficient operations while satisfying compliance requirements. In fact, Google Cloud was recently named a leader in the Forrester Wave™: Data Security Portfolio Vendors, Q2 2019 report.

In addition to tools, there are three principles for defense-in-depth network security that you should follow to reduce risk and protect your resources and environment:

1. Secure your internet-facing services
2. Secure your VPC for private deployments
3. Micro-segment access to your applications and services

Overview of network security controls in GCP

Most GCP deployments consist of a mix of your own managed applications deployed in VMs or in containers, as well as managed Google or third-party services consumed as software-as-a-service—for example, Google Cloud Storage, or BigQuery for data analytics. GCP enables a defense-in-depth security strategy with a comprehensive portfolio of security controls across all of these deployment models.

It’s important to remember that the attack surface of your cloud deployment increases dramatically when it is exposed to and reachable from the internet. Therefore, the most basic network security principle is to close off access to your cloud resources from the internet unless absolutely necessary. On GCP, you can use Cloud IAM and Organization policies to restrict access to GCP resources and services to authorized users and projects. When a workload must be exposed to the internet, however, you should employ a defense-in-depth strategy to protect your environment. Let’s take a closer look at three network security controls to minimize risk and secure your resources.

1. Secure your internet-facing services

If a service must be exposed to the internet, you can still limit access where possible and defend against DDoS and targeted attacks against your applications.

Defend against DDoS attacks

The prevalence, magnitude, and duration of DDoS attacks are increasing as malicious tools and tactics proliferate and get commoditized by a wider range of bad actors. You can mitigate the threat of DDoS by placing your services behind a Google Cloud HTTP(S) Load Balancer and deploying Google Cloud Armor. Together, they protect publicly exposed services against Layer 3 and Layer 4 volumetric DDoS attacks.

[Figure: Google Cloud Armor protects your applications at the edge of Google’s network]

Control access to your applications and VMs

Cloud Identity-Aware Proxy (IAP) is a first step towards implementing BeyondCorp, the security strategy we developed to control access to applications and VMs in a zero-trust environment. With Cloud IAP, you can permit access for authorized users to applications over the internet based on their identity and other context, without requiring them to connect to a VPN.

Enforce web application firewall (WAF) policies at the edge

By deploying Google Cloud Armor security policies, you can block malicious or otherwise unwanted traffic at the edge of Google’s network, far upstream from your infrastructure. Use preconfigured WAF rules to protect against the most common application vulnerabilities like cross-site scripting (XSS) and SQL injection (SQLi). Configure custom rules to filter internet traffic across Layer 3 through Layer 7 attributes like IP (IPv4 and IPv6), geography (alpha), request headers (alpha), and cookies (alpha).
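As a concrete illustration, here is a hedged sketch of what configuring such a policy with gcloud might look like. The policy name, backend service, and rule values are placeholders, and flags may vary by gcloud version:

```bash
# Create a security policy (names and values are illustrative)
gcloud compute security-policies create edge-waf-policy \
    --description "Edge policy for internet-facing services"

# Add a rule that blocks traffic from an unwanted source range
gcloud compute security-policies rules create 1000 \
    --security-policy edge-waf-policy \
    --src-ip-ranges "198.51.100.0/24" \
    --action "deny-403"

# Add a preconfigured WAF rule to filter common XSS attempts
gcloud compute security-policies rules create 2000 \
    --security-policy edge-waf-policy \
    --expression "evaluatePreconfiguredExpr('xss-stable')" \
    --action "deny-403"

# Attach the policy to a backend service behind the HTTP(S) load balancer
gcloud compute backend-services update my-web-backend \
    --security-policy edge-waf-policy \
    --global
```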
Apply granular security policies

Although Google Cloud Armor evaluates and enforces security policies at the edge of the network, you can configure those policies at varying levels of granularity based on the complexity of your deployment. You can configure uniform L7 filtering for all applications in a project, or deploy customized access controls on a per-application (backend service) basis. Policy and rule updates are possible through a REST API or CLI as well as the UI, and are propagated globally in near real time to respond to threats as they happen.

[Figure: Cloud Armor security policies can be applied at various levels of granularity]

Turn on real-time monitoring, logging, and alerting

With Stackdriver Monitoring, you can use preconfigured or custom dashboards to monitor network security policies in real time, including allowed and denied traffic as well as the impact of rules in preview mode (passive logging). Logs of all decisions made, along with relevant data about the requests, can be sent to Stackdriver Logging, stored in Cloud Storage or BigQuery, or forwarded to a downstream SIEM or log management solution to plug into existing security operations processes and help satisfy compliance needs.

2. Secure your VPC for private deployments

As described in the previous section, you should keep your deployments as private as possible to reduce their exposure to internet threat vectors. Google Cloud offers a set of solutions that let you deploy your workloads privately while fulfilling critical user workflows (a configuration sketch follows this list):

Deploy your VMs with only private IPs. You can reduce or eliminate your exposure to the internet by disabling the use of external IPs on VMs in specific projects, or even across your entire organization, with an Org policy.

Deploy GKE private clusters. If you use Google Kubernetes Engine (GKE), consider creating private clusters. Google Cloud still fully manages private clusters through a private connection to our managed cluster master.

Serve your applications privately whenever possible. Use our Internal Load Balancer service to scale and serve your applications privately, for clients accessing applications and services within your Google Cloud VPC or from an on-prem private connection like Cloud VPN or Cloud Interconnect.

Access Google managed services privately. Google Cloud offers a variety of private access options for Google services, so that clients hosted in GCP or in your on-prem data centers can privately consume services like Cloud Storage, BigQuery, or Cloud SQL.

Provide secure outbound internet connections with Cloud NAT. Your private VMs or private GKE clusters may need to initiate egress connections to the internet, for example to contact an external repository for software upgrades. Cloud NAT lets you configure such access in a controlled manner, reducing the access paths and the number of public IPs to only those configured in Cloud NAT.

Mitigate exfiltration risks by preventing your data from moving outside the boundaries of a trusted perimeter. VPC Service Controls lets you build a trusted private perimeter and ensure that data access is not allowed outside its boundaries. Similarly, data can’t move outside the perimeter, mitigating exfiltration risks. For example, a service perimeter applied to a production project prevents the data in Cloud Storage buckets or BigQuery datasets from being accessed or exfiltrated outside of the project’s boundaries.
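As a sketch of the first two items above, assuming an organization ID and cluster parameters that are purely illustrative, locking down external IPs and creating a private cluster might look something like this:

```bash
# Deny external IPs on all VMs in the organization via an Org policy.
# policy.yaml contains (constraint name per the GCP docs of this era):
#   constraint: constraints/compute.vmExternalIpAccess
#   listPolicy:
#     allValues: DENY
gcloud resource-manager org-policies set-policy policy.yaml \
    --organization 123456789012

# Create a GKE private cluster whose nodes have no public IPs
gcloud container clusters create my-private-cluster \
    --enable-private-nodes \
    --enable-ip-alias \
    --master-ipv4-cidr 172.16.0.16/28
```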
3. Micro-segment access to your applications and services

Even within a private boundary, you often need to granularly regulate communication between applications. Google Cloud provides a comprehensive set of tools to micro-segment those applications.

Micro-segmentation for your VM-based applications. Within a given VPC, you can control the communication of your VM-based applications by setting up firewall rules. You can group your applications with tags or service accounts and then construct your firewall rules referencing those tags or service accounts. While tags are metadata and very flexible, service accounts provide additional access control, requiring a user to have permission to apply the service account to a VM.

Micro-segmentation for your GKE-based applications. Within a given GKE cluster, you can control communication between your container-based applications by setting up network policies. You can group your applications based on namespaces or labels. (See the sketch at the end of this post for an example of both approaches.)

Defense-in-depth, the networking way

To summarize, you can reduce your attack surface by making your deployments as private as possible. If you must expose your applications to the internet, enforce strict access controls and traffic filtering at the edge of the network while monitoring for anomalous behavior. Finally, enforce granular micro-segmentation even within the VPC perimeter using GCP’s distributed VPC firewalls and GKE network policies. By following these defense-in-depth strategies, you can reduce risk, meet compliance requirements, and help ensure the availability of your mission-critical applications and services.

To get started, watch this NEXT ‘19 talk on Cloud Armor. You can learn more about GCP’s cloud network security portfolio online. Let us know how you plan to use these network security features, and what capabilities you’d like to have in the future, by reaching out to us at gcp-networking@google.com.
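To make the micro-segmentation approaches above concrete, here is a minimal, hedged sketch; the network name, tags, labels, and port are all placeholder values:

```bash
# VM micro-segmentation: allow only VMs tagged "web" to reach
# VMs tagged "db" on the database port
gcloud compute firewall-rules create allow-web-to-db \
    --network my-vpc \
    --allow tcp:5432 \
    --source-tags web \
    --target-tags db
```

```yaml
# GKE micro-segmentation (assumes the cluster was created with
# network policy enforcement enabled): admit traffic to "db" pods
# only from pods labeled app=web in the same namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-to-db
spec:
  podSelector:
    matchLabels:
      app: db
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: web
    ports:
    - protocol: TCP
      port: 5432
```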
Quelle: Google Cloud Platform

Turn it up to eleven: Java 11 runtime comes to App Engine

Yesterday, we announced new second-generation runtimes for Go 1.12 and PHP 7.3. In addition, App Engine standard instances now run with double the memory. Today, we’re happy to announce the availability of the new Java 11 second-generation runtime for App Engine standard in beta. Now, you can take advantage of the latest long-term-support version of the Java programming language to develop and deploy your applications on our fully managed serverless application platform.

Based on technology from the gVisor container sandbox, second-generation runtimes let you write portable web apps and microservices that take advantage of App Engine’s unique auto-scaling, built-in security, and pay-per-use billing model—without some of App Engine’s earlier runtime restrictions. Second-generation runtimes also let you build applications more idiomatically. You’re free to use whichever framework or library you need for your project—there are no limitations in terms of what classes you can use, for instance. You can even use native dependencies if needed. Beyond Java, you can also use alternative JVM (Java Virtual Machine) languages like Apache Groovy, Kotlin, or Scala if you wish.

In addition to more developer freedom, you also get all the benefits of a serverless approach. App Engine can transparently scale your app up to n instances and back down to zero, so your application can handle the load when it’s featured on primetime TV or goes viral on social networks; likewise, it scales to zero if no traffic comes in. Your bill will also be proportional to your usage, so if nobody uses your app, you won’t pay a dime (there is also a free tier available).

App Engine second-generation runtimes also mean you don’t need to worry about security tasks like applying OS security patches and updates. Your code runs securely in a gVisor-based sandbox, and we update the underlying layers for you. No need to provision or manage servers yourself—just focus on your code and your ideas!

What’s new?

When you migrate to Java 11, you gain access to all the goodies of the most recent Java versions: you can now use advanced type inference with the new var keyword, create lists or maps easily and concisely with the new immutable collections, and simplify calling remote hosts thanks to the graduated HttpClient support. Last but not least, you can also use the JPMS module system introduced in Java 9.

You’ll also find some changes in the Java 11 runtime. For example, the Java 11 runtime no longer provides a Servlet-based runtime. Instead, you need to bundle a server with your application in the form of an executable JAR. This means that you are free to choose whichever library or framework you want, be it based on the Servlet API or other networking stacks like the Netty library. In other words, feel free to use Spring Boot, Vert.x, SparkJava, Ktor, Helidon, or Micronaut if you wish!

Second-generation runtimes also don’t come with the built-in APIs like Datastore or Memcache from the App Engine SDK. Instead, you can use the standalone services with their Google Cloud client libraries, or use other similar services of your choice. Be sure to look into our migration guide for more help on these moves.

Getting started

To deploy to App Engine Java 11, all you need is an app.yaml file where you specify runtime: java11, signifying that your application should use Java 11. That’s enough to tell App Engine to use the Java 11 runtime, regardless of whether you’re using an executable JAR or a WAR file with a provided servlet container. However, the new runtime also gives you more control over how your application starts: by specifying an extra entrypoint parameter in app.yaml, you can customize the java command flags, like the -X memory settings.

With Java 11, the java command can now run single, independent *.java files without first compiling them with javac! For this short getting-started section, we’ll use that feature to run the simplest hello world example with the JDK’s built-in HTTP server.
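The listing below is a minimal sketch of such a single-file server; the class name, port handling, and response text are illustrative choices rather than the only way to write it:

```java
// Main.java - a single-file server using the JDK's built-in HTTP server
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.net.InetSocketAddress;

public class Main {
    public static void main(String[] args) throws IOException {
        // App Engine passes the port to listen on via the PORT env variable
        var port = Integer.parseInt(System.getenv().getOrDefault("PORT", "8080"));
        var server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/", exchange -> {
            var response = "Hello, Java 11 on App Engine!".getBytes();
            exchange.sendResponseHeaders(200, response.length);
            // var works in try-with-resources as well
            try (var body = exchange.getResponseBody()) {
                body.write(response);
            }
        });
        server.start();
    }
}
```

And a matching app.yaml sketch, whose entrypoint runs the source file directly:

```yaml
# app.yaml - the entrypoint points at our single Java source file
runtime: java11
entrypoint: java Main.java
```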
Notice how our Main class uses the var keyword introduced in Java 10, and how we re-used the keyword again in the try-with-resources block, as Java 11 makes possible. The app.yaml file, for its part, specifies the java11 runtime, and its entrypoint defines the actual java command we’ll run to launch the server; the command points at our single Java source file. Finally, don’t forget to deploy your application with the gcloud app deploy app.yaml command. Of course, you can also take advantage of dedicated Maven and Gradle plugins for your deployments.

Try Java 11 on App Engine standard today

You can write your App Engine applications with Java 11 today, thanks to the newly released runtime in beta. Please read the documentation to get started and learn more about it, have a look at the many samples that are available, and check out the migration guide on moving from Java 8 to 11. And don’t forget that you can take advantage of the App Engine free tier while you experiment with our platform.
Quelle: Google Cloud Platform

How Penn State World Campus is leveraging AI to help their advisers provide better student services

Recently we spoke to Dawn Coder, director of academic advising and student disability services at Penn State World Campus, which was established in 1998 to provide accessible, quality education for online learners. It has since grown to have the second-largest enrollment in the university, serving thirty thousand students all over the world. By building a virtual advising assistant to automate routine interactions, Coder and her department aim to serve more students more efficiently. Working with Google and Quantiphi, a Google Cloud Partner of the Year for Machine Learning, they plan to roll out the pilot program, their first using AI, in January 2020.

How does Penn State World Campus support its students?

Our goal is to help students graduate and pursue whatever their goals are. I supervise three key services here: academic advising for undergraduates, disability services and accommodations for undergraduate and graduate students, and military services for our veterans and students on active duty, as well as their spouses. Altogether our team has about sixty employees serving approximately 11,000 undergraduates who take classes online from anywhere in the world.

Why turn to AI?

Our strategic objectives include student retention and organizational optimization, so that’s where AI fits in. We want to make our organization as efficient as possible, make sure employees are not overworked and overwhelmed, and provide the best quality services for our students to set them up for success. Quantiphi is using Google Cloud AI tools like Dialogflow to build us a custom user interface that will take incoming emails from students and recognize keywords to sort those emails into categories, like requests for change of major, change of campus, re-enrollment, and deferment. For example, if a student emails us asking how to re-enroll to finish a degree, the virtual assistant can collect all the relevant information about that student for the adviser in seconds. It can even generate a boilerplate response that the adviser can customize. Our students are physically located all over the world; they can’t just stop by our office. This allows them to get answers quicker in a way that’s convenient for them.

Why choose Google?

Security was an important factor because we’re working with student data. That was the biggest decision-maker. We also wanted to work with a company that believes education is important, especially higher education, because if you aren’t aligned with the goal of who we are, it’s really difficult to build a strong, positive relationship. I felt as though the representatives from Google and Quantiphi were focused on higher ed and really understood it. That was another decision-maker for our team.

What benefits do you hope to see?

Using this new interface will provide advisers with necessary student information in one place. Currently, academic advisers access many different screens in our student information system to gather all the student information needed to provide next steps. The AI-driven tool will centralize the process, and all the data will be displayed in one place. With the time that is saved, an adviser will have more quality time to assist students with special circumstances, career planning, and schedule planning. We want to scale our services to serve more students as World Campus grows. During peak times of the semester, it can take our advisers longer than we would like to respond. If AI can help us reduce the time it takes to a few minutes, that will be a huge success.
What’s next for this project?

If the project is successful, our hope is to expand AI to other World Campus departments, like admissions or the registrar and bursar’s offices. Our biggest goal is always providing quality, accurate services to students in a timely manner—more in real time than having to wait a long time. My hope is that technology can make the process more intuitive so students can make more decisions on their own, knowing that the academic advisers are always there to advocate for them. There’s so much more to academic advising than just scheduling courses!
Quelle: Google Cloud Platform

A moving experience: How Kiwi.com built a travel platform with APIs

Editor’s note: Jurah Hrinik is product manager of Kiwi.com’s Tequila B2B platform. Read on to learn how this Czech Republic-based travel information provider automated onboarding for partners and developers who build on its APIs.

Our vision with Kiwi.com is to offer customers a way to buy travel insurance coverage, book a taxi from home to the train station, take a train to the airport, pick up a rental car, and drive to their destination, all in one seamless customer experience. To do it, we’ve built a B2B platform, Tequila, which aims to be a one-stop travel booking shop for our partners, such as online travel agencies, airlines, brick-and-mortar agencies, and affiliate programs. Tequila enables access, via APIs, to all of our content and services—from schedule information aggregated from hundreds of airlines, to ticketing fulfillment. The Apigee platform sits as a layer between our internal systems and partners to manage the entire relationship, from signing up, to invoicing, to reporting, to accessing our APIs, and everything else our partners need from us.

Using Apigee to power a B2B travel platform

Before we implemented API management, everything from partner onboarding to monitoring and reporting had to be done through manual processes. Whenever a partner had a specific request or change order, they had to contact their account managers, who brought it to our internal technical business development department. This team would contact the developers, who in turn had to add it to their backlog, then execute merge requests. It was complicated and time-consuming to get anything done.

We envisioned Tequila as a platform for distributing solutions we build in house, as well as those built by partners. For example, a taxi company with its own APIs can connect via Tequila and offer its services to a broad ecosystem. Tequila integrates with Apigee, enabling customers or partners to try APIs from the portal without doing the coding. We don’t maintain a database of customers and users; we use Apigee for this. We create the companies, register developers, and use the Apigee platform to build applications on Tequila. Even though we went live only six months ago, we already have a lot of APIs built in Apigee, as well as some back-end services.

We’re currently using seven main APIs, each with four to five endpoints, for our partners. These are exposed on the Apigee platform by implementing API proxies, which decouple the app-facing API from backend services. This allows us to make backend changes to services while enabling apps to make calls to the same API without interruption (see the sketch below for how such a proxy is typically configured).
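As an illustration only (Kiwi.com’s actual proxy definitions aren’t public), an Apigee proxy that decouples a partner-facing path from a backend service boils down to a pair of endpoint configurations along these lines; the base path and backend URL are hypothetical:

```xml
<!-- Proxy endpoint: the stable, partner-facing surface
     (in a real bundle this lives in apiproxy/proxies/default.xml) -->
<ProxyEndpoint name="default">
  <HTTPProxyConnection>
    <!-- Hypothetical base path partners call -->
    <BasePath>/travel/v1/search</BasePath>
  </HTTPProxyConnection>
  <RouteRule name="default">
    <TargetEndpoint>default</TargetEndpoint>
  </RouteRule>
</ProxyEndpoint>

<!-- Target endpoint: the backend URL can change without breaking callers
     (in a real bundle this lives in apiproxy/targets/default.xml) -->
<TargetEndpoint name="default">
  <HTTPTargetConnection>
    <URL>https://search-backend.example.internal/v2</URL>
  </HTTPTargetConnection>
</TargetEndpoint>
```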
We also have 13 management proxies, making a total of 20 proxies for the whole platform. Each proxy has a couple of endpoints, and we’re adding a new one every couple of weeks as we roll out new features. We’ve also been able to streamline a lot of processes for finance and customer support.

Delivering against a tight development deadline

We had a tight deadline for Tequila’s original launch, with just 10 weeks to build it in time for our CEO’s presentation at an important conference. This meant that we couldn’t satisfy every requirement by launch time; we needed to keep rebuilding pieces and improving functionality after that deadline. Regardless of the time pressure, Apigee enabled us to do everything we needed from an API perspective—especially from the security and discoverability standpoints—and without disrupting the user experience. It gave us some breathing room while we focused on building Tequila.

Kiwi.com relies extensively on Google solutions, and we use almost every Google Cloud product. Aside from all of our business users on G Suite and Drive, our development staff also uses GCP for logging, storage, data warehousing (via BigQuery), and reporting. We’re now in the midst of assessing how we can use the GCP machine learning capabilities to further enhance our products. While we evaluated other API management platforms, in the end we were only deciding between two options: Apigee, or building it ourselves. No other solution on the market was robust enough to handle everything we wanted to do.

Monetizing one-stop booking data

The future growth of Kiwi.com is oriented around integrating the full spectrum of travel options into our platform, in addition to the air travel we offer now. This means that customers will be able to book true door-to-door solutions, including public transport, auto rental, taxis, ferries, and insurance. The Apigee platform enables our partners to bring us these services in a more secure environment, with control over what we expose and how they can work with it. We’re also evaluating ways to derive revenue from our APIs with Apigee’s monetization capabilities.

Tequila generates revenue via commissions, using either an affiliate or booking-based model. In the future, we might offer our content to different types of partners or different markets, possibly via subscription—for instance, we get requests from newspapers that want to visualize airport traffic around the world, and from airports that want access to our reporting platform. These kinds of services are candidates for monetization.

We envision more opportunities like this arising as we open up to new markets. Each day we get closer to our vision to connect travelers to all the information they need from the time they leave their home to the time they arrive at their destination. API management with Apigee is helping make that vision a reality.

To learn more about Apigee, visit our website.
Quelle: Google Cloud Platform

Azure HC-series Virtual Machines crosses 20,000 cores for HPC workloads

Azure HC-series Virtual Machines are now generally available in the West US 2 and East US regions. HC-series virtual machines (VMs) are optimized for the most computationally intensive, at-scale HPC applications. For this class of workload, HC-series VMs are the most performant, scalable, and price-performant VMs ever launched on Azure or elsewhere in the public cloud.
Quelle: Azure