Extending per second billing in Google Cloud

By Paul Nash, Group Product Manager, Compute Engine

We are pleased to announce that we’re extending per-second billing, with a one-minute minimum, to Compute Engine, Container Engine, Cloud Dataproc, and App Engine flexible environment VMs. These changes are effective today and apply to all VMs, including Preemptible VMs and VMs running our premium operating system images such as Windows Server, Red Hat Enterprise Linux (RHEL), and SUSE Enterprise Linux Server.

These offerings join Persistent Disk, which has been billed by the second since its launch in 2013, as well as committed use discounts and GPUs, both of which have used per-second billing since their introduction.

In most cases, the difference between per-minute and per-second billing is very small — we estimate it as a fraction of a percent. On the other hand, changing from per-hour billing to per-minute billing makes a big difference for applications (especially websites, mobile apps and data processing jobs) that get traffic spikes. The ability to scale up and down quickly could come at a significant cost if you had to pay for those machines for the full hour when you only needed them for a few minutes.

Let’s take an example. If, on average, your VM lifetime was being rounded up by 30 seconds with per-minute billing, then your savings from running 2,600 vCPUs each day would be enough to pay for your morning coffee (at 99 cents, assuming you can somehow find coffee for 99 cents). By comparison, the waste from per-hour billing would be enough to buy a coffee maker every morning (over $100 in this example).
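As a rough sketch of that arithmetic, assuming an illustrative per-vCPU hourly price (actual rates vary by machine type and region):

```python
# Back-of-the-envelope estimate of billing waste from rounding VM lifetimes up.
# The hourly price below is an illustrative assumption, not an official rate.
VCPU_PRICE_PER_HOUR = 0.0475  # assumed $/vCPU-hour

def daily_waste(vcpus_per_day: int, avg_overbilled_seconds: float) -> float:
    """Dollar cost of time billed but not used, across all VMs in one day."""
    wasted_vcpu_hours = vcpus_per_day * avg_overbilled_seconds / 3600
    return wasted_vcpu_hours * VCPU_PRICE_PER_HOUR

# Per-minute billing rounds each VM lifetime up by ~30 seconds on average:
print(f"per-minute waste: ${daily_waste(2600, 30):.2f}/day")    # about $1/day
# Per-hour billing rounds up by ~30 minutes (1,800 seconds) on average:
print(f"per-hour waste:   ${daily_waste(2600, 1800):.2f}/day")  # tens of $/day
```

At this assumed rate, per-minute rounding costs roughly a dollar a day (the coffee), while per-hour rounding costs tens of dollars; the "over $100" figure in the post presumably assumes a pricier machine type.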

As you can see, most of the value of increased billing precision came with the move to per-minute billing, which is probably why we haven’t heard many customers asking for per-second. But we don’t want to make you choose between your morning coffee and your core hours, so we’re pleased to bring per-second billing to your VMs, with a one-minute minimum.

We’ve spent years focusing on real innovation for your benefit and will continue to do so. We were first with automatic discounts for use (sustained use discounts), predictably priced VMs for non-time-sensitive applications (Preemptible VMs), the ability to pick the amount of RAM and vCPUs that you want (custom machine types), billing by the minute, and commitments that don’t lock you into pre-payments or particular machine types, families, or zones (committed use discounts).

We will continue to build new ways for you to save money and we look forward to seeing how you push the limits of Google Cloud to make the previously impossible, possible.

To get started with Google Cloud Platform, sign up today and get $300 in free credits.

Quelle: Google Cloud Platform

This Troll From Singapore Will Be Released From US Jail After Having His Asylum Upheld

Teen blogger Amos Yee speaks to reporters next to lawyer Nadarajan Kanagavijayan, after leaving a Singapore court in Sept. 2016.

Staff / Reuters

Amos Yee, a controversial blogger from Singapore who has been held in US detention for 10 months, will be freed on Tuesday after a federal appeals court upheld an immigration judge's decision to grant him asylum.

According to Yee's lawyer Sandra Grossman, the court upheld a judge's earlier ruling on the grounds that he would be persecuted if he returned to his native country, whose laws allow the government to restrict freedom of speech and expression. Yee had previously been jailed twice in Singapore on charges that included spreading obscenity and “wounding racial or religious feelings” before he flew to Chicago in December, where he was detained at O'Hare Airport.

He had been in US custody ever since, despite a March ruling from Chicago immigration judge Samuel Cole, which noted that Yee had shown that “he suffered past persecution on account of his political opinion and has a well-founded fear of future persecution in Singapore.” The Department of Homeland Security opposed that ruling, sending the case to the appeals court, which ruled last Thursday that Yee should be freed.

The appeals court upheld the judge's original decision that “found that his prosecution in Singapore was actually pretext to silence his political opinion,” Grossman said in an interview with BuzzFeed News.

BuzzFeed News interviewed Yee for a story last month that chronicled his work as an online agitator who often ran into trouble for expressing his views on religion, Singapore's government, and sex. “I do confess I display similarities to a troll,” he said in a July interview. “And I do want attention. But a troll has sadistic pleasures. I want to help humans.”

Grossman said that the 18-year-old Yee would be released from a Chicago Immigration and Customs Enforcement office on Tuesday, but that she was unaware of his immediate plans. Per US law, he will be able to apply for a green card after being in the country for more than a year. Grossman also said it was unlikely that the government would take the case to a higher court to oppose Yee's asylum, alluding to the amount of money that has already been spent on an individual that the DHS “never argued” was a security threat.

“I can imagine that he looks forward to getting back online and expressing his views on a variety of topics including the government of Singapore, his detention in the United States and possibly any other topic that he wants to discuss,” she said.

Quelle: BuzzFeed

Mirantis Cloud Platform releases new features: No need to rough it when you have the right set of tools

The post Mirantis Cloud Platform releases new features: No need to rough it when you have the right set of tools appeared first on Mirantis | Pure Play Open Cloud.
I’ve got two teenage boys who love reading survival manuals. On weekends they can’t wait to go out on camping excursions with the bare minimum of equipment, relying on ingenuity and skill to overcome all challenges. They’re not doing it because they like to be miserable; they’re doing it because they like the challenge of making sure they have the right tools, knowledge, and level of preparedness, and that they make the right decisions — in other words, all of the factors that make the difference between a great experience and a poor one when it comes to accomplishing your objectives efficiently.

Things don’t change when you grow up and go into the office instead of the woods. As you plan your day in IT operations, keeping the infrastructure and operational environments that support your application deployments running smoothly also requires both skills and the right set of tools, so you don’t spend your weekends and evenings fixing things that didn’t have to break in the first place.

Just as having a Swiss army knife, a flashlight, sunscreen, a hammock, a raincoat, or a fishing rod available can make all the difference in the woods, at work you also need to think about the tools you have at your disposal. You’ve trained hard, you are a hard-core professional, and you deserve the best tools you can get for the job, which brings me to today’s topic: Mirantis Cloud Platform.

Here at Mirantis, we’re pretty excited about what’s emerging for those of us assisting IT professionals in their quest to support all the application needs of their customers, internal and external.

Today I wanted to cover a few new options from Mirantis you may want to consider if you are looking to enhance and expand your breadth of capabilities, including capacity monitoring, increased robustness, orchestration and devops, and overall cloud health.

Mirantis StackLight, our 100% open source Operations Support System (OSS) for continuous monitoring and maximum availability, now includes a new DevOps Portal that provides a holistic view of your Mirantis Cloud Platform (MCP) environment.

New DevOps Portal

This new aggregated toolset significantly reduces the complexity of Day 2 cloud operations through services and dashboards that cover automation, availability statistics, resource and capacity utilization, continuous testing, logs, metrics, and notifications. What’s more, the new DevOps Portal enables cloud operators to manage larger clouds with greater uptime without having to convert their entire staff into open source developers.

The web UI offers services that include cloud intelligence, capacity management, and a subset of the tools made available within Simian Army.

Included services within the DevOps portal

Let’s take a look at each one of the components in detail.
Capacity Monitoring
MCP enables you to ensure available capacity by providing a live look at what’s going on inside your cloud using the Cloud Intelligence Service and Cloud Capacity Management.

Cloud Intelligence Service: This service collects and stores data from MCP services such as OpenStack, Kubernetes, bare metal, and so on. You can then query the data as part of use cases such as cost visibility, business insights, cost comparison, chargeback/showback, cloud efficiency optimization, and IT benchmarking. Operators can interact with the resource data using a wide range of queries, such as searching for the last VM rebooted, total memory consumed by the cloud, number of containers that are operational, and so on.

Cloud Capacity Management: This dashboard provides point-in-time resource consumption data for OpenStack by displaying parameters such as total CPU utilization, memory utilization, disk utilization, and number of hypervisors. It is based on data collected by the Cloud Intelligence Service and can be used for cloud capacity management and other business optimization tasks.

Cloud Assurance
With this module you can evaluate security and improve utilization.

Security Monkey & Janitor Monkey: In this release, MCP includes Security Monkey and Janitor Monkey (and their respective dashboards), two of the many tools that make up the Simian Army. The Simian Army is a growing set of open source tools originally created by Netflix to run continuous tenant-level tests on a production cloud to make it more antifragile. The closest traditional IT analogy to the Simian Army is online diagnostics. Security Monkey runs tests that track and evaluate security-related tenant changes and configurations. Janitor Monkey constantly looks to reclaim unused tenant resources for improved cloud utilization.

Orchestration and DevOps
Here we have a great set of tools to help you automate workflow of jobs in response to specific events among other things.

Runbooks Automation: Clouds are simply too complex to be managed using traditional manual processes. Instead, they require a high degree of automation in which events or time durations trigger the execution of specific jobs. The Runbooks Automation service, based on Rundeck, accomplishes this by enabling operators to create a workflow of jobs that get executed at specific time intervals or in response to specific events (such as policy-driven events). For example, operators can now automate periodic backups, weekly report creation, specific actions in response to a failed Cinder volume, and so on. Note, however, that Runbooks Automation is not a lifecycle management tool; it’s not appropriate for reconfiguring, scaling, or updating MCP itself. (LCM for an MCP cloud is exclusively performed with DriveTrain, see below).
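For a flavor of what such a job can look like, here is a sketch in Rundeck's YAML job format; the job name, schedule, and script path are invented for illustration and are not part of MCP:

```yaml
# Hypothetical Rundeck-style job definition: runs a backup script nightly.
# All names and paths below are placeholders for the example.
- name: nightly-tenant-backup
  description: Periodic backup of tenant volumes (illustrative only)
  loglevel: INFO
  schedule:
    time:
      hour: '02'
      minute: '00'
  sequence:
    keepgoing: false
    strategy: node-first
    commands:
      - exec: /usr/local/bin/backup-tenant-volumes.sh
```

Event-driven jobs (such as reacting to a failed Cinder volume) would be wired up through Rundeck's webhook or trigger mechanisms rather than a time schedule.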

DriveTrain: This toolchain provides access to relevant CI/CD LCM tooling such as Git, Gerrit, Jenkins, Artifactory, etc., to automate the delivery of change controls to the infrastructure and its services. This includes scaling the cloud, patching software packages, and full environment upgrades.

DriveTrain results

Cloud Health
You can find another great set of tools to gain broader monitoring capabilities, additional metrics and a higher level of alerts and notifications in the Cloud Health section.

Cloud Health Service: This service collects availability results for all OpenStack services and failed customer (tenant) interactions (FCI) for a subset of those services. These metrics are displayed so that operators can see both point-in-time health status and trends over time.

Metrics: All metrics collected by Prometheus (see below) are visualized through Grafana dashboards.

Logs: Logs for various MCP services are aggregated in Elasticsearch and visualized through Kibana dashboards.

Additionally, StackLight now expands monitoring coverage to Kubernetes, containers, and Ceph, and adds deeper Kubernetes log processing. The architecture has undergone a major evolution with the inclusion of a monitoring and alerting solution built on the open source Prometheus project. Prometheus is a mature open source monitoring system, now maintained as an initiative of the Cloud Native Computing Foundation (CNCF), that approaches the age-old monitoring and alerting problem with a web-scale architecture: a dimensional data model, a powerful query engine, Grafana visualization integration, efficient storage, and precise alerting. Prometheus is also easy to operate and provides numerous third-party integrations. StackLight has evolved to use Telegraf to collect metrics and Prometheus Alertmanager for notifications and alerts. StackLight also provides InfluxDB, using it for long-term, resilient metrics storage and as a back end for Ceilometer to enable Heat-based auto-scaling.

Notifications Service: A notifications dashboard displays all alerts/notifications generated by Prometheus Alertmanager. This screen replaces the previous Nagios tool in StackLight. Alertmanager enables MCP customers to configure where alerts are going to be sent — support is provided for many kinds of endpoints, including email, SMS, PagerDuty, and others.
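As an illustration (not MCP's shipped configuration), a minimal Alertmanager setup that routes every alert to an email endpoint looks roughly like this; the addresses and SMTP host are placeholders:

```yaml
# Minimal Prometheus Alertmanager routing sketch; all endpoints are placeholders.
route:
  receiver: ops-email              # default receiver for every alert
  group_by: ['alertname', 'severity']
receivers:
  - name: ops-email
    email_configs:
      - to: 'ops@example.com'
        from: 'alertmanager@example.com'
        smarthost: 'smtp.example.com:587'
```

Additional receivers (SMS gateways, PagerDuty, and so on) are added under `receivers` and selected via matching rules in the routing tree.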

Notifications Service

All of these new integrations and capabilities are specifically designed for MCP to provide a view into the open cloud, with optimized collectors, dashboards, alarms, faults, and event correlation.

In other words, even though the open cloud may feel like the Wild, Wild West, there’s no need to rough it when supporting the challenging needs of your business. Once you gain better insights that enable you to minimize unpredictability and better manage your work environment, you will be able to leave unplanned surprises to your weekend outings. Here at Mirantis, we aim to provide peace of mind with the right set of tools in support of your application deployment needs, leaving it up to you to decide how to spend your weekends.
Quelle: Mirantis

Moving from multicloud complexity to agility with process automation

I recently wrote about how tackling digital transformation in the multicloud world is a significant challenge for businesses. Enterprises need to integrate and manage their multicloud environments to deliver an agile, connected and secure IT infrastructure that supports rapid innovation. But the need for flexibility and agility doesn’t stop with IT infrastructure. It extends to the tasks and processes that impact customer satisfaction, service quality and — critically — the bottom line.
In some industries, such as healthcare, these challenges can mean the difference between life and death. Take the UK's National Health Service Blood and Transplant (NHSBT). Its life-saving work facilitates 4,500 organ transplants a year in the United Kingdom. However, with 6,500 people on the waiting list at any given time, an average of three people die every day while waiting for an organ. The stakes for improving and speeding up the allocation process couldn’t be higher.
The allocation decision process for each type of organ, known as an allocation scheme, is very complex. It needs to account for both the donor’s and the potential recipient’s physiology, clinical situation, and geographic location. Additionally, these decision rules are constantly changing as new medical insights are uncovered. NHSBT’s existing IT systems could not keep up with this evolving complexity: deploying a new allocation scheme took more than two years, an unacceptable delay at a time when expedience was essential.
NHSBT worked to transform their allocation process for greater flexibility and business agility. Using the IBM Digital Process Automation platform, it took NHSBT only six months to design and deploy a new heart allocation scheme, all in the cloud. This single, modern user interface sits above and integrates legacy on-premises systems and cloud services to automate more than 40 percent of its rigorous 96-step allocation process.
The process is now digitized all the way from the time a nurse discusses organ donation with family members to the time a donation is offered to a transplant center. By using cloud-based process mapping, business process management (BPM), and operational decision management (ODM) solutions, NHSBT is able to efficiently make future updates to the process, taking the emphasis off managing the IT infrastructure needed to run the automation platform.

IBM is well-positioned to help other organizations across diverse industries automate more of their work. While your own business processes may not be a literal matter of life and death, the task of allocating limited resources to achieve critical results is a universal challenge. It is an IBM goal to help organizations reduce the complexity of their multicloud and hybrid cloud environments by automating business processes and tasks at scale.
As we announced last month, we’re partnering with Automation Anywhere to deliver a robotic process automation (RPA) solution to help our clients automate at scale. The IBM RPA offering bundles Automation Anywhere RPA technology with IBM Business Process Manager (BPM) to deliver the joint, integrated value of both offerings. Specific work tasks are delegated to automated RPA bots, while BPM orchestrates multiple RPA activities. The platform is designed to seamlessly integrate systems, people, and bots across the widest assortment of processes running on premises or in the cloud.
Process automation, especially in the multicloud environment using the latest in RPA technology, presents a tremendous opportunity for the digital transformation of your business.  I invite you to learn more about how IBM can help you with your automation initiatives by scheduling a no-cost consultation with one of our IBM experts.
The post Moving from multicloud complexity to agility with process automation appeared first on Cloud computing news.
Quelle: Thoughts on Cloud

Java 8 on App Engine standard environment is now generally available

By Amir Rouzrokh, Product Manager

Earlier this quarter, we announced the beta availability of Java 8 on App Engine standard environment. Today, we’re excited to announce that this runtime is now generally available and is covered by the App Engine Service Level Agreement (SLA).

App Engine is a fully managed platform that allows you to build scalable web and mobile applications without worrying about the underlying infrastructure. For years, developers have loved the zero-toil, just-focus-on-your-code capabilities of App Engine. Unfortunately, using Java 7 on App Engine standard environment also required compromises, including limited Java classes, unusual thread execution and slower performance because of sandboxing overhead.

With this release, all of the above limitations are removed. Built on an entirely new infrastructure, the runtime lets you take advantage of everything that OpenJDK 8 and Google Cloud Platform (GCP) have to offer, including running your applications on an OpenJDK 8 JVM (Java Virtual Machine) with Jetty 9, along with full outbound gRPC requests and Google Cloud Java Library support. App Engine standard environment also supports off-the-shelf frameworks such as Spring Boot and alternative languages like Kotlin or any other JVM-supported language.

During the beta release, we continued to enhance the performance of the runtime; many of our customers, such as SnapEngage, are already seeing significant performance improvements and reduced costs after migrating their applications from the Java 7 runtime to the Java 8 runtime.

 “The new Java 8 runtime brings performance enhancements to our application, leading to cost savings. Running on Java 8 also means increased developer happiness and efficiency, thanks to the removal of the class white list, and to the new features the language provides. Last but not least, upgrading from the Java 7 to the Java 8 runtime was a breeze.”

— Jerome Mouton, Co-founder and CTO, SnapEngage

The migration process is simple: just set the runtime to java8 in your appengine-web.xml file and redeploy your application (you can read more about the migration process here). Also check out this short video on how to deploy a Java web app to App Engine Standard.
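Concretely, the runtime selection is a single element in appengine-web.xml:

```xml
<appengine-web-app xmlns="http://appengine.google.com/ns/1.0">
  <!-- Selects the Java 8 runtime for App Engine standard environment -->
  <runtime>java8</runtime>
  <threadsafe>true</threadsafe>
</appengine-web-app>
```

Your other settings (scaling configuration, environment variables, and so on) stay in the same file.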

With App Engine, you can scale your applications up or down instantaneously, all the way down to zero instances when no traffic is detected. App Engine also enables global caching of static resources, native microservices and versioning, traffic splitting between any two deployed versions (including Java 7 and Java 8), local development tooling and numerous App Engine APIs that help you leverage other GCP capabilities.

We’d like to thank all of our beta users for their feedback and invite you to continue submitting your feedback on the Maven, Gradle, IntelliJ and Eclipse plugins, as well as the Google Cloud Java Libraries on their respective GitHub repositories. You can also submit feedback for the Java 8 runtime on the issue tracker. As for OpenJDK 9 support, we’re hard at work here to bring you support for the newest Java version as well, so stay tuned!

If you’re an existing App Engine user, migrate today; there’s no reason to delay. If you’re new to App Engine, now is the best time to jump on in. Create an app, and get started now.

Happy coding!

Quelle: Google Cloud Platform

Announcing new Azure VM sizes for more cost-effective database workloads

Our customers have told us that database workloads like SQL Server or Oracle often require high memory, storage, and I/O bandwidth, but not a high core count. Many of the database workloads they run are not CPU-intensive. They want VM sizes that constrain the vCPU count to reduce the cost of software licensing, while maintaining the same memory, storage, and I/O bandwidth.

We are excited to announce the latest versions of our most popular VM sizes (DS, ES, GS, and MS), which constrain the vCPU count to one half or one quarter of the original VM size, while maintaining the same memory, storage and I/O bandwidth. We have marked these new VM sizes with a suffix that specifies the number of active vCPUs to make them easier for you to identify.

For example, the current VM size Standard_GS5 comes with 32 vCPUs, 448 GB of memory, 64 disks (up to 256 TB), and 80,000 IOPS or 2 GB/s of I/O bandwidth. The new VM sizes Standard_GS5-16 and Standard_GS5-8 come with 16 and 8 active vCPUs, respectively, while maintaining the rest of the Standard_GS5 specs for memory, storage, and I/O bandwidth.

Licensing charges for SQL Server or Oracle are constrained to the new vCPU count, and other per-core-licensed products should be charged based on the new vCPU count as well. All of this results in a 50% to 75% reduction in the number of active (billable) vCPUs while the rest of the VM's specs stay the same. These new VM sizes are available only in Azure, allowing workloads to push higher CPU utilization at a fraction of the per-core licensing cost. At this time, the compute cost, which includes OS licensing, remains the same as the original size.
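To make the arithmetic concrete, here is a small sketch with placeholder prices; the 75% licensing figure follows directly from the vCPU ratio, while the total-cost reduction depends on the actual compute and license prices:

```python
# Sketch of the savings arithmetic for constrained-vCPU VM sizes.
# The compute and per-core license prices used below are placeholders only.

def licensing_reduction(original_vcpus: int, constrained_vcpus: int) -> float:
    """Fractional drop in per-core licensing cost when vCPUs are constrained."""
    return 1 - constrained_vcpus / original_vcpus

def total_cost_reduction(compute_per_year: float, license_per_core_year: float,
                         original_vcpus: int, constrained_vcpus: int) -> float:
    """Fractional drop in (compute + licensing); compute cost is unchanged."""
    before = compute_per_year + license_per_core_year * original_vcpus
    after = compute_per_year + license_per_core_year * constrained_vcpus
    return 1 - after / before

# Standard_DS14v2 (16 vCPUs) vs. Standard_DS14-4v2 (4 vCPUs):
print(licensing_reduction(16, 4))   # 0.75, i.e. the "75% lower" licensing figure
# Combined reduction with illustrative prices ($10,000 compute, $3,000/core):
print(round(total_cost_reduction(10000, 3000, 16, 4), 2))  # 0.62 here
```

The combined figure always falls between zero and the licensing reduction, which is why the table below shows smaller total-cost savings than licensing savings.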

Here are a few examples of the potential savings running a VM provisioned from the SQL Server Enterprise image on the new DS14-4v2 and GS5-8 VM sizes as compared to their original versions. For the latest official pricing please refer to our Azure VM pricing page.

| VM size | vCPUs | Memory | Max disks | Max I/O throughput | SQL Server Enterprise licensing cost per year | Total cost per year (compute + licensing) |
|---|---|---|---|---|---|---|
| Standard_DS14v2 | 16 | 112 GB | 32 | 51,200 IOPS or 768 MB/s | (baseline) | (baseline) |
| Standard_DS14-4v2 | 4 | 112 GB | 32 | 51,200 IOPS or 768 MB/s | 75% lower | 57% lower |
| Standard_GS5 | 32 | 448 GB | 64 | 80,000 IOPS or 2 GB/s | (baseline) | (baseline) |
| Standard_GS5-8 | 8 | 448 GB | 64 | 80,000 IOPS or 2 GB/s | 75% lower | 42% lower |

If you bring your own SQL Server licenses to one of these new VM sizes, either by using one of our BYOL images or by manually installing SQL Server, you only need to license the restricted vCPU count. For more details on BYOL and other ideas to further reduce costs, check our SQL Server pricing guidance.

Start using these new VM sizes and save on licensing costs today!
Quelle: Azure

Payment Processing Blueprint for PCI DSS-compliant environments

Today we are pleased to announce the general availability of a new Payment Processing Blueprint for PCI DSS-compliant environments, the only auditor-reviewed, 100% automated solution for the Payment Card Industry Data Security Standard (PCI DSS) 3.2 technical controls. The architectural framework is designed to help companies deploy and operate a payment processing or credit card handling solution in Microsoft Azure. This automation solution helps customers adopt Azure, showcases a simple-to-understand reference architecture, and teaches administrators how to deploy a secure, compliant workload that adheres to the PCI DSS standard.

The solution was jointly developed with our partner Avyan Consulting and subsequently reviewed by Coalfire, Microsoft’s PCI DSS auditor. The PCI Compliance Review provides an independent, third-party assessment of the solution and of the components that still need to be addressed.

For a quick look at how this solution works, watch this five-minute video explaining and demonstrating its deployment.

This automated architecture includes: Azure Application Gateway, Network Security Groups, Azure Active Directory, App Service Environment, OMS Log Analytics, Azure Key Vault, Azure SQL DB, Azure Load Balancer, Application Insights, Azure Web App, Azure Automation, Azure Runbooks, Azure DNS, Azure Virtual Network, Azure Virtual Machine, Azure Resource Group and Policies, Azure Blob Storage, Azure Active Directory access control (RBAC), and Azure Security Center.

The foundational architecture is comprised of the following components:

Architectural diagram. The diagram shows the reference architecture used for the Contoso Webstore solution.
Deployment templates. In this deployment, Azure Resource Manager templates are used to automatically deploy the components of the architecture into Microsoft Azure by specifying configuration parameters during setup.
Automated deployment scripts. These scripts help deploy the end-to-end solution. The scripts consist of:

A module installation and global administrator setup script is used to install the required PowerShell modules and verify that global administrator roles are configured correctly.
An installation PowerShell script is used to deploy the end-to-end solution, including security components built by the Azure SQL Database team.

A sample customer responsibility PCI DSS 3.2 workbook. The workbook explains how the solution can be used to achieve a compliant state for each of the 262 PCI DSS 3.2 controls, and details how a shared responsibility between Azure and the customer can be implemented successfully.

A PCI Compliance Review, which outlines the topics necessary to build on the foundational architecture toward a fully PCI-compliant business. Coalfire’s review explores additional dimensions in advance of further solution design that will yield better results for an eventual PCI assessment.
A customer-ready threat model. This data flow diagram (DFD) and sample threat model for the Contoso Webstore solution provides a detailed explanation of the solution’s boundaries and connections.

The deployed application illustrates the secure management of credit card data, including card numbers, expiration dates, and CVC (card verification code) numbers, in a four-tier architecture with built-in security and compliance considerations that can be deployed as an end-to-end Azure solution.

To stay up to date on all things Blueprint and Government, subscribe to our RSS feed or receive emails by clicking “Subscribe by Email!” on the Azure Government Blog, and check back frequently to learn more about Azure automated foundational architectures.

To experience the power of Azure Government for your organization, sign up for an Azure Government Trial.
Quelle: Azure