High-Throughput with Azure Blob Storage

I am happy to announce that High-Throughput Block Blob (HTBB) is globally enabled in Azure Blob Storage. HTBB provides significantly improved write throughput when ingesting larger block blobs, up to the storage account limits for a single blob, and the improvement takes effect instantly. We have also removed the guesswork from naming your objects, so you can focus on building the most scalable applications rather than worrying about the vagaries of cloud storage.

HTBB demo of 12.5 GB/s single-blob throughput at Microsoft Ignite

I demonstrated the significantly improved write performance at Microsoft Ignite 2018. The demo application orchestrated the upload of 50,000 32 MiB blocks (1,600,000 MiB in total) from RAM to a single blob using Put Block operations. Once all blocks were uploaded, it committed the blob with a single Put Block List operation. The upload was spread across four D64v3 worker virtual machines (VMs), each VM writing 25 percent of the blocks. The total upload took around 120 seconds, which works out to about 12.5 GB/s into a single blob. Check out the demo in the video below to learn more.
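For reference, here is a minimal, single-threaded sketch of the same Put Block / Put Block List flow using the Azure Storage Blob SDK for Python (v12). This is an illustration rather than the demo's multi-VM orchestration code: the connection string, container, and blob names are placeholders, and the block count is scaled down.

    import os
    import uuid

    from azure.storage.blob import BlobBlock, BlobClient

    conn_str = "<storage-account-connection-string>"  # placeholder
    blob = BlobClient.from_connection_string(
        conn_str, container_name="demo", blob_name="large-object"
    )

    block_list = []
    for _ in range(16):  # the Ignite demo staged 50,000 blocks across four VMs
        block_id = str(uuid.uuid4())  # IDs must be unique within the blob
        # Put Block: stage one 32 MiB block of data from memory.
        blob.stage_block(block_id=block_id, data=os.urandom(32 * 1024 * 1024))
        block_list.append(BlobBlock(block_id=block_id))

    # Put Block List: commit the staged blocks as a single blob.
    blob.commit_block_list(block_list)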

GB+ throughput using a single virtual machine

To illustrate the performance possible with just a single VM, I created a D32v3 VM running Linux in the West US 2 region. I stored the files to upload on a local RAM disk so that local storage performance would not affect the results. I created the files with the head command, reading from /dev/urandom to fill them with random data. Finally, I used AzCopy v10 (v10.0.4) to upload the files to a standard storage account in the same region. I ran each test five times and report the average upload time in the table below.

Data set        Time to upload    Throughput
1,000 x 10MB    10 seconds        1.0 GB/s
100 x 100MB     8 seconds         1.2 GB/s
10 x 1GB        8 seconds         1.2 GB/s
1 x 10GB        8 seconds         1.2 GB/s
1 x 100GB       58 seconds        1.7 GB/s
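A test run along these lines can be scripted in a few lines. The sketch below generates one of the data sets (100 x 100MB) on a RAM disk and times an AzCopy upload; the RAM disk mount point and the destination URL (including its SAS token) are assumptions you would replace with your own values.

    import os
    import subprocess
    import time

    RAMDISK = "/mnt/ramdisk/data"  # assumed RAM disk mount point
    DEST = "https://<account>.blob.core.windows.net/<container>?<sas-token>"  # placeholder

    os.makedirs(RAMDISK, exist_ok=True)
    for i in range(100):  # 100 files of 100 MB of random data
        with open(os.path.join(RAMDISK, f"file_{i:03d}.bin"), "wb") as f:
            f.write(os.urandom(100 * 1000 * 1000))

    start = time.time()
    # AzCopy v10 expands the trailing wildcard itself; no shell is involved.
    subprocess.run(["azcopy", "copy", f"{RAMDISK}/*", DEST], check=True)
    print(f"Upload took {time.time() - start:.1f} seconds")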

HTBB everywhere

HTBB is active on all your existing storage accounts and requires no opt-in. It also comes at no extra cost. HTBB doesn't introduce any new APIs; it activates automatically when you use Put Block or Put Blob operations above a certain size. The following table lists the minimum Put Blob or Put Block size required to activate HTBB.

Storage account type              Minimum size for HTBB
StorageV2 (General purpose v2)    >4MB
Storage (General purpose v1)      >4MB
Blob Storage                      >4MB
BlockBlobStorage (Premium)        >256KB
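Because HTBB keys off the size of each individual Put Block or Put Blob operation, you can make sure your uploads cross the threshold by raising the client's block size. Here is a minimal sketch using the Azure Storage Blob SDK for Python (v12); the connection string and names are placeholders, and max_block_size and max_single_put_size are the v12 options that control the sizes of the Put Block and Put Blob calls issued by upload_blob.

    from azure.storage.blob import BlobServiceClient

    conn_str = "<storage-account-connection-string>"  # placeholder
    service = BlobServiceClient.from_connection_string(
        conn_str,
        max_block_size=8 * 1024 * 1024,       # 8 MiB blocks, above the 4MB threshold
        max_single_put_size=8 * 1024 * 1024,  # smaller blobs still go up as one Put Blob
    )

    blob = service.get_blob_client(container="mycontainer", blob="bigfile.bin")
    with open("bigfile.bin", "rb") as data:
        blob.upload_blob(data, overwrite=True)  # large chunks activate HTBB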

Azure Tools and Services supporting HTBB

There is a broad set of tools and services that already support HTBB, including:

AzCopy v10 preview
Azure Data Lake Storage Gen2
Data Box
Azure Data Factory

Conclusion

We’re excited about the throughput improvements and application simplifications High-Throughput Block Blob brings to Azure Blob Storage! It is now available in all Azure regions and automatically active on your existing storage accounts at no extra cost. We look forward to hearing your feedback. To learn more about Blob Storage, please visit our product page.
Source: Azure

Introducing Google Cloud security training and certification: skills that could save your company millions

Information security is a top priority for global businesses, large and small. Security breaches can cost organizations millions of dollars, cause irreparable brand damage, and result in lost customers. As data and apps move to the cloud, cloud security is increasingly crucial for organizational success.

According to the Breach Level Index, a global database of public data breaches, 3.3 billion data records were compromised worldwide in the first half of 2018, an increase of 72% compared to the same period in 2017. Further, the average cost of a data breach globally increased to $3.86 million, according to the Ponemon Institute's 2018 Cost of a Data Breach Study.

While these statistics are eye-popping, concern about security in the cloud is no longer hampering cloud adoption for organizations. In fact, a 2017 global survey of more than 500 IT decision-makers found that three-quarters of respondents have become more confident in cloud security. Businesses are now more concerned about the shortage of talent with the right skills to manage cloud technology, make sure the right security controls are in place, manage cloud-based access, and ensure data protection.

The cost and stakes of a security breach are too high, and organizations are realizing they need a skilled team able to handle the ever-increasing workload. A recent CIO.com article, "The top 4 IT security hiring priorities and in-demand skills for 2019," said that "when it comes to cloud security hiring, the most in-demand role for 2019 is the Cloud Security Engineer."

To address these current and future needs, we recently launched the Security in GCP specialization, our latest on-demand training course. It introduces learners to Google's approach to security in the cloud and how to deploy and manage the components of a secure GCP solution. With focus areas such as Cloud Identity, security keys, Cloud IAM, Google Virtual Private Cloud firewalls, Google Cloud load balancing, and many more, participants will learn about securing Google Cloud deployments as well as mitigating many of the vulnerabilities, attacks, and risks mentioned above in a GCP-based infrastructure. These include distributed denial-of-service (DDoS) attacks, phishing, and data exfiltration threats involving content classification and use.

While this new training is a broad study of security controls and techniques on GCP, it provides a good framework for a job role that is becoming more important to organizations: the Cloud Security Engineer. To offer organizations a way to benchmark and measure the proficiency of their team's Google Cloud security skills, we've also recently completed the beta version of the Professional Cloud Security Engineer certification. This certification, which will be available to the public at Next '19, validates an individual's aptitude for security best practices and industry security requirements while demonstrating an ability to design, develop, and manage a secure infrastructure that uses Google security technologies.

At a time when many businesses are more vulnerable to cyber attacks, developing cloud security skills with this new training and certification can bring you greater confidence that your data is in safe hands.

To learn more about this new training and certification, join our webinar on March 29 at 9:45am PST.
Source: Google Cloud Platform

Future of cloud computing: 5 insights from new global research

Research shows that cloud computing will transform every aspect of business, from logistics to customer relationships to the way teams work together, and today's organizations are preparing for this seismic shift. A new report from Google on the future of cloud computing combines an in-depth look at how the cloud is shaping the enterprise of tomorrow with actionable advice to help today's leaders unlock its benefits. Along with insights from Google luminaries and leading companies, the report includes key findings from a research study that surveyed 1,100 business and IT decision-makers from around the world. Their responses shed light on the rapidly evolving technology landscape at a global level, as well as variations in cloud maturity and adoption trends across individual countries. Here are five themes that stood out to us from this brand-new research.

1. Cloud computing will move to the forefront of enterprise technology over the next decade, backed by strong executive support.

Globally, 47 percent of survey participants said that the majority of their companies' IT infrastructures already use public or private cloud computing. When we asked about predictions for 2029, that number jumped 30 percentage points. C-suite respondents were especially confident that the cloud will reign supreme within a decade: more than half anticipate that it will meet at least three-quarters of their IT needs, while only 40 percent of their non-C-suite peers share that view. What's the takeaway? The cloud already plays a key role in enterprise technology, but the next 10 years will see it move to the forefront, with plenty of executive support. Here's how that data breaks down around the world.

2. The cloud is becoming a significant driver of revenue growth.

Cloud computing helps businesses focus on improving efficiency and fostering innovation, not simply maintaining systems and status quos. So it's not surprising that 79 percent of survey respondents already consider the cloud an important driver of revenue growth, while 87 percent expect it to become one within a decade. C-suite respondents were just as likely as their non-C-suite peers to anticipate that the cloud will play an important role in driving revenue growth in 2029. This tells us that decision-makers across global organizations believe their future success will hinge on their ability to effectively apply cloud technology.

3. Businesses are combining cloud capabilities with edge computing to analyze data at its source.

Over the next decade, the cloud will continue to evolve as part of a technology stack that increasingly includes IoT devices and edge computing, in which processing occurs at or near the data's source. Thirty-three percent of global respondents said they use edge computing for a majority of their cloud operations, while 55 percent expect to do so by 2029. The United States lags behind in this area, with only 18 percent of survey participants currently using edge computing for a majority of their cloud operations, but that figure grew by a factor of 2.5 when respondents looked ahead to 2029. As more and more businesses extend the power and intelligence of the cloud to the edge, we can expect to see better real-time predictions, faster responses, and more seamless customer experiences.

4. Tomorrow's businesses will prioritize openness and interoperability.

In the best cases, cloud adoption is part of a larger transformation in which new tools and systems positively affect company culture. Our research suggests that businesses will continue to place more value on openness over the next decade. By 2029, 41 percent of global respondents expect to use open-source software (OSS) for a majority of their software platform, up 14 percentage points from today. Predicted OSS use was nearly identical between IT decision-makers and their business-oriented peers, implying that technology and business leaders alike recognize the value of interoperability, standardization, freedom from vendor lock-in, and continuous innovation.

5. On their journey to the cloud, companies are using new techniques to balance speed and quality.

To stay competitive in today's streaming world, businesses face growing pressure to innovate faster, and the cloud is helping them keep pace. Sixty percent of respondents said their companies will update code weekly or daily by 2029, while 37 percent said they've already adopted this approach. This tells us that over the next 10 years, we'll see an uptick in the use of continuous integration and delivery techniques, resulting in more frequent releases and higher developer productivity.

As organizations prepare for the future, they will need to balance the need for speed with maintaining high quality. Our research suggests that they'll do so by addressing security early in the development process and assuming constant vulnerability so they're never surprised. More than half of respondents said they already implement security pre-development, and 72 percent plan to do so by 2029. Cloud-based enterprises will also rely on automation to maintain quality and security as their operations become faster and more continuous. Seventy percent of respondents expect a majority of their security operations to be automated by 2029, compared to 33 percent today.

Our Future of Cloud Computing report contains even more insights from our original research, as well as a thorough analysis of the cloud's impact on businesses and recommended steps for unlocking its full potential. You can download it here.
Source: Google Cloud Platform

9 mustn’t-miss machine learning sessions at Next ‘19

From predicting appliance usage from raw power readings to medical imaging, machine learning has made a profound impact on many industries. Our AI and machine learning sessions are among our most popular each year at Next, and this year we're offering more than 30, on topics ranging from building a better customer service chatbot to automated visual inspection for manufacturing. If you're joining us at Next, here are nine AI and machine learning sessions you won't want to miss.

1. Automating Visual Inspections in Energy and Manufacturing with AI

In this session, you can learn from two global companies that are aggressively shaping practical business solutions using machine vision. AES is a global power company that strives to build a future that runs on greener energy. To serve this mission, they are rigorously scaling the use of drones in their wind farm operations with Google's AutoML Vision to automatically identify defects and improve the speed and reliability of inspections. Our second presenter joins us from LG CNS, a global subsidiary of LG Corporation and Korea's largest IT service provider. LG's Smart Factory initiative is building an autonomous factory to maximize productivity, quality, cost, and delivery. By using AutoML Vision on edge devices, they are detecting defects in various products during the manufacturing process with their visual inspection solution.

2. Building Game AI for Better User Experiences

Learn how DeNA, a mobile game studio, is integrating AI into its next-generation mobile games. This session will focus on how DeNA built its popular mobile game Gyakuten Othellonia on Google Cloud Platform (GCP) and how they've integrated AI-based assistance. DeNA will share how they designed, trained, and optimized models, and then explain how they built a scalable and robust backend system with Cloud ML Engine.

3. Cloud AI: Use Case Driven Technology (Spotlight)

More than ever, today's enterprises are relying on AI to reach their customers more effectively, deliver the experiences they expect, and increase efficiency and drive growth across their organizations. Join Andrew Moore and Rajen Sheth in a session with three of Google Cloud's leading AI innovators, Unilever, BlackRock, and FOX Sports Australia, as they discuss how GCP and Cloud AI services like the Vision API, Video Intelligence API, and Cloud Natural Language have made their products more intelligent, and how they can do the same for yours.

4. Fast and Lean Data Science With TPUs

Google's Tensor Processing Units (TPUs) are revolutionizing the way data scientists work. Week-long training times are a thing of the past, and you can now train many models in minutes, right in a notebook. Agility and fast iterations are bringing neural networks into regular software development cycles, and many developers are ramping up on machine learning. Machine learning expert Martin Görner will introduce TPUs, then dive deep into their microarchitecture secrets. He will also show you how to use them in your day-to-day projects to iterate faster. In fact, Martin will not just demo but train most of the models presented in this session on stage, in real time, on TPUs.

5. Serverless and Open-Source Machine Learning at Sling Media

This session covers Sling's incremental adoption strategy for Google Cloud's serverless machine learning platforms, which enable data scientists and engineers to build business-relevant models quickly. Sling will explain how they use deep learning techniques to better predict customer churn, develop a traditional pipeline to serve the model, and enhance the pipeline to be both serverless and scalable. Sling will share best practices and lessons learned deploying Beam, tf.transform, and TensorFlow on Cloud Dataflow and Cloud ML Engine.

6. Understanding the Earth: ML With Kubeflow Pipelines

Petabytes of satellite imagery contain valuable indicators of scientific and economic activity around the globe. In order to turn its geospatial data into conclusions, Descartes Labs has built a data processing and modeling platform whose components all run on Google Cloud. Descartes leverages tools including Kubeflow Pipelines as part of their model-building process to enable efficient experimentation, orchestrate complicated workflows, maximize repeatability and reuse, and deploy at scale. This session will explain how you can implement machine learning workflows in Kubeflow Pipelines, and cover some successes and challenges of using these tools in practice.

7. Virtual Assistants: Demystify and Deploy

In this session, you'll learn how Discover built a customer service solution around Dialogflow. Discover's data science team will explain how to execute on your customer service strategy, and how you can best configure your agent's Dialogflow "model" before you deploy it to production.

8. Reinventing Retail with AI

Today's retailers must have a deep understanding of each of their customers to earn and maintain their loyalty. In this session, Nordstrom and Disney explain how they've used AI to create engaging and highly personalized customer experiences. In addition, Google partner Pitney Bowes will discuss how they're predicting credit card fraud for luxury retail brands. This session will also cover new Google products for the retail industry, as well as how they fit into a broader data-driven strategy for retailers.

9. GPU Infrastructure on GCP for ML and HPC Workloads

ML researchers want a GPU infrastructure they can get started with quickly, run consistently in production, and dynamically scale as needed. Learn about GCP's various GPU offerings and the features often used with ML. From there, we will discuss a real-world customer story of how they manage their GPU compute infrastructure on GCP. We'll cover the new NVIDIA Tesla T4 and V100 GPUs, the Deep Learning VM Image for quickly getting started, preemptible GPUs for low cost, GPU integration with Kubernetes Engine (GKE), and more.

If you're looking for something that's not on our list, check out the full schedule. And don't forget to register for the sessions you plan to attend; seats are limited.
Source: Google Cloud Platform

Migrating your traditional data warehouse platform to BigQuery: announcing the data warehouse migration offer

Today, we’re announcing a data warehouse migration offer for qualifying customers, one that makes it easier for them to move from traditional data warehouses such as Teradata and Netezza to BigQuery, our serverless enterprise data warehouse.

For decades, enterprises have relied on traditional on-premises data warehouses to collect and store their most valuable data. But these traditional data warehouses can be costly, inflexible, and difficult to maintain, and for many, they no longer meet today’s business needs. Enterprises need an easy, scalable way to store all that data, as well as advanced analytic tools that can help them find valuable insights. As a result, many are turning to cloud data warehousing solutions like BigQuery.

BigQuery is Google Cloud’s serverless, highly scalable, low-cost enterprise data warehouse designed to make all data analysts productive. There’s no infrastructure to manage, so you can focus on finding meaningful insights using familiar Standard SQL. Leading global enterprises like 20th Century Fox, Domino’s Pizza, Heathrow Airport, HSBC, Lloyds Bank UK, The New York Times, and many others rely on BigQuery for their data analysis needs, helping them do everything from breaking down data silos to jump-starting their predictive analytics journey, all while greatly reducing costs. Here’s a little more on the benefits of BigQuery in contrast to traditional on-premises data warehouses.

Recently, independent analyst firm Enterprise Strategy Group (ESG) released a report examining the economic advantages of migrating enterprise data warehouse workloads to BigQuery. They developed a three-year total-cost-of-ownership (TCO) model that compared the expected costs and benefits of upgrading an on-premises data warehouse, migrating to a cloud-based solution provided by the same on-premises vendor, or redesigning and migrating data warehouse workloads to BigQuery. ESG found that an organization could potentially reduce its overall three-year costs by 52 percent versus the on-premises equivalent, and by 41 percent when compared to an AWS deployment. You can read more about the TCO analysis in ESG’s blog post.

How to begin your journey to a modern data warehouse

While many businesses understand the value of modernizing, not all know where to start. A typical data warehouse migration involves three distinct steps, the first of which is sketched in code below:

Data migration: the transfer of the actual data contents from the source system to the destination system.
Schema migration: the transfer of metadata definitions and topologies.
Workload migration: the transfer of workloads such as ETL pipelines, processing jobs, stored procedures, reports, and dashboards.
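As an illustration of the data migration step, here is a minimal sketch that loads exported CSV files from Cloud Storage into BigQuery with the google-cloud-bigquery Python client. The bucket, dataset, and table names are hypothetical, and schema autodetection is used only to keep the sketch short.

    from google.cloud import bigquery

    client = bigquery.Client()  # uses application default credentials

    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,  # skip the header row in each export file
        autodetect=True,      # let BigQuery infer the schema for this sketch
    )

    # Hypothetical export bucket and destination table.
    load_job = client.load_table_from_uri(
        "gs://my-warehouse-exports/sales/*.csv",
        "my-project.analytics.fact_sales",
        job_config=job_config,
    )
    load_job.result()  # block until the load job completes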

Today, we’re also pleased to announce the launch of BigQuery’s data warehouse migration utility. Built on our existing migration experience, this service automates the migration of data and schema to BigQuery and significantly reduces migration time.

How to get started with our data warehousing migration offer

Our data warehousing migration offer and tooling equip you with architecture and design guidance from Google Cloud engineers, proof-of-concept funding, free training, and usage credits to help speed up your modernization process. Here’s how it works:

Step 1: Planning consultation. You’ll receive expert advice, examples, and proof-of-concept funding support from Google Cloud, and you’ll work with our professional services team or a specialized data analytics partner on your proof of concept.

Step 2: Complimentary training. You’ll get free training from Qwiklabs, Coursera, or Google Cloud-hosted classroom courses to deepen your understanding of BigQuery and related GCP services.

Step 3: Expert design guidance. Google Cloud engineers will provide architecture design guidance through personalized deep-dive workshops at no additional cost.

Step 4: Migration support. Google Cloud’s professional services organization, along with our partners, has helped enterprises all over the world migrate their traditional data warehouses to BigQuery. As part of this offer, qualified customers may also be eligible for partner funding support to offset migration and BigQuery implementation costs.

Interested in learning more? Contact us.
Source: Google Cloud Platform

What’s new in Azure IoT Central – March 2019

In IoT Central, our aim is to simplify IoT. We want to make sure your IoT data drives meaningful actions and visualizations. In this post, I will share new features now available in Azure IoT Central including embedded Microsoft Flow, updates to the Azure IoT Central connector, Azure Monitor action groups, multiple dashboards, and localization support. We also recently expanded Jobs functionality in IoT Central, so you can check out the announcement blog post to learn more.

Microsoft Flow is now embedded in IoT Central

You can now build workflows using your favorite connectors directly within IoT Central. For example, you can build a temperature alert rule that triggers a workflow to send push notifications and SMS all in one place within IoT Central. You can also test and share the workflow, see the run history, and manage all workflows attached to that rule.

Try it out in your IoT Central app by visiting Rules under Device Templates, adding a new action, and picking the Microsoft Flow tile.

Updated Azure IoT Central connector: Send a command and get device actions

With the updated Azure IoT Central connector, you can now build workflows in Microsoft Flow and Azure Logic Apps that send commands to an IoT device and get device information such as the name, properties, and settings values. For example, you can build a workflow that reboots an IoT device from a mobile app and then displays the device’s temperature setting and location property in that app.

Try it out in Microsoft Flow or Azure Logic Apps by using the “Send a command” and “Get device” actions in your workflow.

Integration with Azure Monitor action groups

Azure Monitor action groups are reusable groups of actions that can be attached to multiple rules at once. Instead of creating separate actions for each rule and entering the recipient’s email address, SMS number, and webhook URL every time, you can choose an action group that contains all three from a drop-down and receive notifications on all three channels. The same action group can be attached to multiple rules and is reusable across Azure Monitor alerts.
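Action groups live in Azure Monitor, so they can also be created outside the IoT Central UI. Below is a hedged sketch of creating a group with email, SMS, and webhook receivers using the azure-mgmt-monitor Python SDK; the subscription ID, resource group, and all receiver details are hypothetical, and the exact model names assume a current SDK version.

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.monitor import MonitorManagementClient
    from azure.mgmt.monitor.models import (
        ActionGroupResource, EmailReceiver, SmsReceiver, WebhookReceiver,
    )

    client = MonitorManagementClient(DefaultAzureCredential(), "<subscription-id>")

    group = ActionGroupResource(
        location="Global",             # action groups are a global resource
        group_short_name="iotalerts",  # short name, 12 characters max
        enabled=True,
        email_receivers=[EmailReceiver(name="ops-email", email_address="ops@example.com")],
        sms_receivers=[SmsReceiver(name="ops-sms", country_code="1", phone_number="5551234567")],
        webhook_receivers=[WebhookReceiver(name="ops-hook", service_uri="https://example.com/alert")],
    )

    # Create (or update) the action group in the given resource group.
    client.action_groups.create_or_update("<resource-group>", "iot-central-alerts", group)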

Try it out in your IoT Central app by visiting Rules under Device Templates, adding a new action, and picking the Azure Monitor action groups tile.

Multiple dashboards

You can now create multiple personal dashboards in your IoT Central app and build customized views to better organize your devices and data. The default application dashboard is still available to all users, but each user of the app can create personalized dashboards and switch between them.

Localization support

As of today, IoT Central supports 17 languages! You can select your preferred language in the settings section of the top navigation, and it will apply whenever you use any app in IoT Central. Each user can have their own preferred language and can change it at any time.

With these new features, you can build workflows as actions more conveniently, reuse groups of actions, organize your visualizations across multiple dashboards, and work with IoT Central in your preferred language. Stay tuned for more developments in IoT Central. Until next time!

Next steps

Have ideas or suggestions for new features? Post them on UserVoice.
To explore the full set of features and capabilities and start your free trial, visit the IoT Central website.
Check out our documentation including tutorials to connect your first device.
To give us feedback about your experience with Azure IoT Central, take this survey.
To learn more about the Azure IoT portfolio including the latest news, visit the Microsoft Azure IoT page.

Source: Azure

Backup OpenShift Resources the Native Way

The age-old question from an operations perspective is: ‘Do we have a backup strategy for the company?’ If the answer is yes, then we would also ask: ‘Does that strategy include accommodations for emerging technologies such as Kubernetes?’ We believe this question comes into play in a couple of different ways when it pertains […]
Source: OpenShift