Making the machine: the machine learning lifecycle

As a Googler, one of my roles is to educate the software development community on machine learning (ML). The first introduction for many individuals is what is referred to as the 'model'. While building models, tuning them, and evaluating their predictive abilities has generated a great deal of interest and excitement, many organizations still find themselves asking more basic questions, like how does machine learning fit into their software development lifecycle?

In this post, I explain how machine learning (ML) maps to and fits in with the traditional software development lifecycle. I refer to this mapping as the machine learning lifecycle. This will help you as you think about how to incorporate machine learning, including models, into your software development processes.

The machine learning lifecycle consists of three major phases: planning, data engineering, and modeling.

Planning

In contrast to a static algorithm coded by a software developer, an ML model is an algorithm that is learned and dynamically updated. You can think of a software application as an amalgamation of algorithms, defined by design patterns and coded by software engineers, that perform planned tasks. Once an application is released "in the wild," it may not perform as planned, prompting developers to rethink, redesign, and rewrite it (continuous integration/continuous delivery).

We are entering an era of replacing these static algorithms with ML models, which are essentially dynamic algorithms. This dynamism presents a host of new challenges for planners, who work in conjunction with product owners and quality assurance (QA) teams.

For example, how should the QA team test and report metrics? ML models are often expressed as confidence scores. Let's suppose that a model shows that it is 97% accurate on an evaluation data set. Does it pass the quality test? If we built a calculator using static algorithms and it got the answer right 97% of the time, we would want to know about the 3% of the time it does not.

Similarly, how does a daily standup work with machine learning models? It's not like the training process is going to give a quick update each morning on what it learned yesterday and what it anticipates learning today. It's more likely your team will be giving updates on data gathering/cleaning and hyperparameter tuning.

When the application is released and supported, one usually develops policies to address user issues. But with continuous learning and reinforcement learning, the model is learning the policy. What policy do we want it to learn? For example, you may want it to observe and detect user friction in navigating the user interface and learn to adapt the interface (auto A/B) to reduce the friction.

Within an effective ML lifecycle, planning needs to be embedded in all stages to start answering these questions specific to your organization.

Data engineering

Data engineering is where the majority of the development budget is spent—as much as 70% to 80% of engineering funds in some organizations. Learning is dependent on data—lots of data, and the right data. It's like the old software engineering adage: garbage in, garbage out. The same is true for modeling: if bad data goes in, what the model learns is noise.

In addition to software engineers and data scientists, you really need a data engineering organization. These skilled engineers will handle data collection (e.g., billions of records), data extraction (e.g., SQL, Hadoop), data transformation, data storage, and data serving.
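To make the extract-transform-serve flow concrete, here is a minimal sketch in Python. It is only an illustration: the warehouse.db file, the raw_events table, and its columns are hypothetical placeholders, not a prescribed schema.

```python
import csv
import sqlite3

# Minimal extract-transform-serve pass: pull raw records with SQL, drop the
# obvious garbage, and write a clean flat file for downstream training jobs.
conn = sqlite3.connect("warehouse.db")  # hypothetical data source
rows = conn.execute(
    "SELECT user_id, age, country, label FROM raw_events WHERE label IS NOT NULL"
)

with open("training_data.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["user_id", "age", "country", "label"])
    for user_id, age, country, label in rows:
        # Records that would only teach the model noise are filtered out here.
        if age is None or not (0 < age < 120):
            continue
        writer.writerow([user_id, age, country.strip().upper(), label])

conn.close()
```

Even a trivial guard like the age check is the kind of "garbage in, garbage out" filtering that keeps noise out of what the model learns; in practice this step runs at far larger scale on managed data services.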
It's the data that consumes the vast majority of your physical resources (persistent storage and compute). Due to the magnitude of this scale, these workloads are now typically handled using cloud services rather than traditional on-premises methods. Effective deployment and management of data cloud operations are handled by those skilled in data operations (DataOps). Data collection and serving are handled by those skilled in data warehousing (DBAs), data extraction and transformation by those skilled in data engineering (data engineers), and data analysis by those skilled in statistical analysis and visualization (data analysts).

Modeling

Modeling is integrated throughout the software development lifecycle. You don't just train a model once and you're done. The concept of one-shot training, while appealing in budget terms and simplicity, is only effective in academic and single-task use cases.

Until fairly recently, modeling was the domain of data scientists. The initial ML frameworks (like Theano and Caffe) were designed for data scientists. ML frameworks are evolving, and today's frameworks (like Keras and PyTorch) are more in the realm of software engineers. Data scientists play an important role in researching the classes of machine learning algorithms and their amalgamation, advising on business policy and direction, and moving into roles of leading data-driven teams.

But as ML frameworks and AI as a Service (AIaaS) evolve, the majority of modeling will be performed by software engineers. The same goes for feature engineering, a task performed by today's data engineers: with its similarities to conventional tasks related to data ontologies, namespaces, self-defining schemas, and contracts between interfaces, it too will move into the realm of software engineering. In addition, many organizations will move model building and training to cloud-based services used by software engineers and managed by data operations. Then, as AIaaS evolves further, modeling will transition to a combination of turnkey solutions accessible via cloud APIs, such as Cloud Vision and Cloud Speech-to-Text, and customization of pre-trained models using transfer learning tools such as AutoML.

Frameworks like Keras and PyTorch have already transitioned away from symbolic programming to imperative programming (the dominant form in software development), and they incorporate object-oriented programming (OOP) principles such as inheritance, encapsulation, and polymorphism. One should anticipate that other ML frameworks will evolve to include object-relational mapping (ORM) concepts, which we already use for databases, applied to data sources and inference (prediction). Common best practices will evolve, and industry-wide design patterns will become defined and published, much like how Design Patterns by the Gang of Four influenced the evolution of OOP.

Like continuous integration and delivery, continuous learning will also move into build processes, managed by build and reliability engineers. Then, once your application is released, its usage and adaptation in the wild will provide new insights in the form of data, which will be fed back to the modeling process so the model can continue learning.

As you can see, adopting machine learning isn't simply a question of learning to train a model, and you're done. You need to think deeply about how those ML models will fit into your existing systems and processes, and grow your staff accordingly.
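Before closing, here is a brief illustration of that imperative, object-oriented framework style: a minimal Keras sketch in which the model is an ordinary class and evaluation produces the kind of single accuracy number (the 97% discussed above) that a QA team would still need to interrogate. The architecture and the synthetic data are placeholders, not a recommended setup.

```python
import numpy as np
import tensorflow as tf

# Imperative, object-oriented model definition: the network is a plain class,
# so familiar OOP practices such as inheritance and encapsulation apply.
class SimpleClassifier(tf.keras.Model):
    def __init__(self, num_classes=10):
        super().__init__()
        self.hidden = tf.keras.layers.Dense(128, activation="relu")
        self.out = tf.keras.layers.Dense(num_classes, activation="softmax")

    def call(self, inputs):
        return self.out(self.hidden(inputs))

model = SimpleClassifier()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Synthetic stand-ins for a real, continuously refreshed data pipeline.
x_train = np.random.rand(1000, 20).astype("float32")
y_train = np.random.randint(0, 10, 1000)
x_eval = np.random.rand(200, 20).astype("float32")
y_eval = np.random.randint(0, 10, 200)

model.fit(x_train, y_train, epochs=3, verbose=0)
loss, accuracy = model.evaluate(x_eval, y_eval, verbose=0)
print(f"eval accuracy: {accuracy:.2%}")  # the single number QA still has to dig into
```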
I, and all the staff here at Google, wish you the best in your machine learning journey as you upgrade your software development lifecycle to accommodate machine learning. To learn more about machine learning on Google Cloud, visit our Cloud AI products page.
Quelle: Google Cloud Platform

Advancing confidential computing with Asylo and the Confidential Computing Challenge

Welcome to Safer Internet Week! Today, Google Cloud VP of Security Royal Hansen, who recently joined Google from the financial services industry, shared why he is excited by the opportunity that cloud computing presents to improve security for organizations around the world.

Putting customers in control

It's no secret that taking advantage of the benefits of cloud computing requires businesses to refine how they think and operate. Trust is a core component of this change, since they no longer have direct control over parts of the infrastructure that they used to manage themselves. We understand that success in the cloud requires earning our customers' trust, and we work hard at Google Cloud to build trust through transparency and putting customers in control of their data.

For example, Google Cloud was the first major public cloud to provide customers with audit logs and justifications of authorized administrative access by Google Support and Engineering. We also give customers the ability to require explicit approval for access to their data or configurations on GCP with Access Approval.

Combined with encryption at rest and in transit, these security capabilities helped establish Google Cloud as a leader in public cloud native security in 2018, according to Forrester Research. To deliver even greater levels of control, we are investing in the area of "confidential computing." Confidential computing aims to create computing environments that can help protect applications and data while they are in use—even from privileged access, including from the cloud provider itself. The most common approach for implementing key parts of confidential computing is using trusted execution environments (TEEs) to build software enclaves.

Advancing our confidential computing strategy

Confidential computing environments can help protect customers' sensitive information from a number of adversaries and attack vectors:

Malicious insiders – Whether inside a customer's organization or a cloud provider's, even insiders with root access can be restricted in their ability to observe or tamper with sensitive code or data inside an enclave.
Network vulnerabilities – Confidential computing mitigates the impact of vulnerabilities in the network or guest OS with regard to confidentiality and integrity.
Compromised host OS – Because a malicious or compromised host OS or VMM/hypervisor exists outside of an enclave, vulnerabilities in these components have less impact on code and data inside an enclave.
BIOS compromise – Malicious firmware inserted into the BIOS, including UEFI drivers, is also less able to impact the confidentiality and integrity of the enclave.

Despite the opportunities offered by confidential computing, the deployment and adoption of this emerging technology has been slow due to dependence on specific hardware, the lack of application development tools for building and running applications in confidential computing environments, and complexity around deployment. To help address these challenges, in May 2018 we introduced Asylo (Greek for "safe space"), an open source framework to make it easier to create and use enclaves, on Google Cloud and beyond.

Asylo is designed to be agnostic to the hardware platform it rests on (and its trusted execution environment). This key design point is meant to make software development easier, reducing the friction developers experience when building software to run in a confidential computing environment.
An application can be built to run in an Asylo enclave on hardware with Intel SGX today and, in the future, is intended to run on chipsets from other hardware vendors as well, without code changes from the developer.

Just as important, Asylo is designed to make it easy to build applications that run in enclaves. Simply start developing your app on top of an Asylo Docker container image, and today you can run it on any Intel SGX-capable machine. Down the road, we expect Asylo will be integrated into popular developer pipelines, and that you'll be able to deploy Asylo applications directly from commercial container registries and marketplaces.

Forging a confidential computing future

While Asylo helps address core technical challenges inherent in developing trusted applications, confidential computing is still very much an emerging technology. Enclaves, for example, are a new software design model, and there aren't established design practices for implementing them. There is also more work to do in developing a robust understanding of the security risk tradeoffs, performance implications, and other consequences that would come from broad use of confidential computing across the industry. The best way to develop these design patterns is for people to begin experimenting with confidential computing.

For example, one model might be to move an entire component to run under an enclave. Porting may be reasonably straightforward, but it might bring code into your trusted computing base (TCB) that adds security risks, undermining the intent of the model. At the other end of the spectrum, some developers might choose to run only the security-sensitive parts of their applications in a confidential computing environment to minimize the attack surface. Asylo supports both of these approaches, and each has advantages and trade-offs.

In addition to the software-design challenges of developing confidential computing applications, there are new processors and memory controllers being developed with support for runtime memory encryption and bus protection. As they come to market, these advanced hardware platforms can underpin robust confidential computing systems. To benefit from these breakthrough technologies, we are working with hardware and software partners who are contributing to the confidential computing space. Together, we hope to define a common platform-abstraction layer to underpin toolchains, compilers, and interpreters, to ensure the forward-portability of confidential computing applications.

Finally, we need to develop a set of industry-wide certification and interoperability programs to assess the security properties of CPUs and other secure hardware as they become available. Together with the industry, we can work toward more transparent and interoperable services to support confidential computing apps, for example by making it easy to understand and verify remote attestation claims, inter-enclave communication protocols, and federated identity systems across enclaves.

Enter the Confidential Computing Challenge

We invite you to join us in exploring the advantages confidential computing can bring, and how to put it into practice. To that end, we are launching the Confidential Computing Challenge (C3), a competition dedicated to accelerating the field of confidential computing. Between now and April 1, 2019, we invite you to write an essay that develops a novel use case for confidential computing, or advances the current state of confidential computing by building upon and improving existing technology.
These essays will be evaluated by a panel of judges, and the winner will receive $15,000 in cash, $5,000 worth of Google Cloud Platform credits, and a special hardware gift. To learn more about the challenge and register, click here. We look forward to your submissions!

We also have three hands-on labs that can help you learn how to build confidential computing apps using the Asylo toolchain, run a gRPC server inside an SGX enclave, or use Asylo to help protect secret data from an attacker with root privileges. As part of our Confidential Computing Challenge, we've arranged for you to access these labs at no cost. Click here and use code 1g-c3-880 to redeem this offer, which ends when our challenge closes on April 1, 2019.
Quelle: Google Cloud Platform

Beyond passwords: a roadmap for enhanced user security

When it comes to user security, a constant battle plays out between strong security controls and end-user convenience. Finding the right balance is well worth the effort; a well-designed and thoughtfully implemented security solution can be a true business enabler, allowing employees to work from anywhere, on any device, without compromising security. During Safer Internet Week, we wanted to share some of our views on the current state of user security, discuss a few approaches that we've taken to strengthen user protection, and offer suggestions on what you can do today as an organization to improve your security posture.

Passwords are ubiquitous, but they're often not enough

Online service providers, including Google, have long realized that a password alone is insufficient to protect user accounts. Users often reuse passwords across multiple services, and if one service is compromised, all of the user's online accounts are now at risk. Employees are also often tricked into revealing their passwords, most commonly through phishing, a technique where attackers dupe users into believing they're interacting with a legitimate service. Phishing attacks are widespread and often effective: 71% of all targeted attacks start with spear phishing, according to the Symantec 2018 Internet Security Threat Report. So how can we address the shortcomings of passwords?

2SV / 2FA as a protection against password reuse

The primary protection against password reuse by an attacker is 2-step verification (2SV), also known as two-factor authentication (2FA) or multi-factor authentication (MFA). With 2SV, a user needs two things to log into an account: 1) something they know (often a password), and 2) something they possess (the second factor), which can include hardware-based one-time password (OTP) tokens, time-based OTP smartphone apps (e.g. Google Authenticator), codes delivered via SMS or phone call, or smartphone push notifications. Even if a user's password is known, the attacker doesn't have access to the second factor, so the account cannot be compromised.

Using FIDO security keys to prevent account takeovers

As is typical in the cat-and-mouse game of security, malicious activity has intensified on remaining points of vulnerability. While 2SV is a strong step beyond a simple username and password, there are still ways that it can potentially be exploited. Many 2SV methods are vulnerable to man-in-the-middle (MITM) attacks; they are no different from a password in that they can be captured and re-used by a malicious actor.

What's missing with most 2SV methods is the ability for the technology to ensure that the user is providing their credentials to their intended destination and not to an attacker. Security keys based on the FIDO Alliance standard, such as Titan Security Keys, help solve this problem by providing cryptographic proof that the user is in possession of the second factor and that they're interacting with a legitimate service. Security keys have been shown to be easier to use and more secure than other methods of 2SV. This level of protection is particularly important for high-value users such as cloud administrators or senior executives.
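To make the origin-binding idea concrete, here is a deliberately simplified Python sketch of a FIDO-style challenge signature. It is a toy model, not the actual FIDO/WebAuthn protocol (which adds attestation, counters, and CBOR-encoded authenticator data); it only shows why a response captured by a look-alike phishing site cannot be replayed against the legitimate service.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# The private key lives on the security key; the service stores the public key at enrollment.
authenticator_key = ec.generate_private_key(ec.SECP256R1())
registered_public_key = authenticator_key.public_key()

def sign_assertion(challenge: bytes, origin: str) -> bytes:
    """What the 'security key' does: sign the challenge bound to the origin the browser sees."""
    return authenticator_key.sign(challenge + origin.encode(), ec.ECDSA(hashes.SHA256()))

def verify_assertion(signature: bytes, challenge: bytes, expected_origin: str) -> bool:
    """What the service does: verify the signature against the origin it expects."""
    try:
        registered_public_key.verify(signature, challenge + expected_origin.encode(),
                                     ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False

challenge = b"random-server-nonce"
good = sign_assertion(challenge, "https://accounts.example.com")
phished = sign_assertion(challenge, "https://accounts.examp1e.com")  # look-alike phishing origin

print(verify_assertion(good, challenge, "https://accounts.example.com"))     # True
print(verify_assertion(phished, challenge, "https://accounts.example.com"))  # False
```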
Last year, Google disclosed that there have been no reported or detected G Suite account hijackings after security key deployments, a major security win for adopters of this technology.

Titan Security Key

Even more phishing and malware protection through machine learning

While FIDO security keys have proven to be a great method to protect users against account takeovers, we also work to automatically detect and prevent attacks that lead to password compromises in the first place. We use constantly refined machine learning models to quickly identify suspicious behavior and help you take action before harm is done to your organization. Examples include:

Automatically flagging emails from untrusted senders that have encrypted attachments or embedded scripts, which often indicate attempts to deploy malicious software
Warning against email that tries to spoof employee names or that comes from a domain name that looks similar to your own, common phishing tactics
Scanning images for phishing indicators and expanding shortened URLs to uncover malicious and deceptive hyperlinks
Flagging abnormal sign-in behavior and presenting these users with additional login challenges

Security Center, included with G Suite Enterprise and Cloud Identity Premium, can also help highlight potential threats, bringing together security analytics, actionable insights, and best practices from Google to empower you to further protect your organization, data, and users.

Take action today to improve user security

Strong user security is a must-have in today's world, but it doesn't need to come at the sacrifice of user experience or productivity. End-user friendly 2SV methods can be enabled via solutions like G Suite and Cloud Identity. For your high-value employees, such as IT admins and executives, we strongly recommend enforcing security keys for the strongest account protection. Start protecting your users today with a free trial of Cloud Identity.
Quelle: Google Cloud Platform

Azure Cost Management now generally available for enterprise agreements and more!

As enterprises accelerate cloud adoption, it is becoming increasingly important to manage cloud costs across the organization. Last September, we announced the public preview of a comprehensive native cost management solution for enterprise customers. We are now excited to announce the general availability (GA) of the Azure Cost Management experience, which helps organizations visualize, manage, and optimize costs across Azure.

In addition, we are excited to announce the public preview for web direct Pay-As-You-Go customers and Azure Government cloud.

With the addition of Azure Cost Management, customers now have an always-on, low-latency solution to understand and visualize costs, with the following features available:

Cost analysis

This feature allows you to track costs over the course of the month and offers you a variety of ways to analyze your data. To learn more about how to use cost analysis, please visit our documentation, “Quickstart: Explore and analyze costs with Cost analysis.”

Budgets

Use budgets to proactively manage costs and drive accountability within your organization. To learn more about using Azure budgets please visit our documentation, “Tutorial: Create and manage Azure budgets.”

Exports

Export all your cost data to an Azure storage account using our new exports feature. You can use this data in external systems and combine it with your own data to maximize your cost management capabilities. To learn more about using Azure exports please visit our documentation, “Tutorial: Create and manage exported data.”

New Azure APIs

As a part of this release, we are also making the APIs listed below available for you to build your own cost management solutions. To learn more about developing on top of our new cost management functionality, please visit the Azure REST API documentation links below; a short usage query sketch follows the list.

Usage Query – Develop advanced API query calls to learn the most about your organization’s usage and cost patterns.
Budgets – Create and view your budgets in an automated fashion.
Exports – Automate data export configuration.
Usage details by Management Group – Use this API to analyze your organization’s usage across multiple subscriptions.
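As an illustration of the Usage Query API above, here is a minimal Python sketch. The subscription ID and bearer token are placeholders, and the api-version and response layout should be confirmed against the Azure REST API documentation linked above.

```python
import requests

# Hypothetical values: supply your own scope and an Azure AD bearer token
# obtained through your usual auth flow (for example, Azure CLI or MSAL).
scope = "subscriptions/00000000-0000-0000-0000-000000000000"
token = "<bearer-token>"
url = (f"https://management.azure.com/{scope}"
       "/providers/Microsoft.CostManagement/query?api-version=2019-01-01")

# Ask for month-to-date cost, aggregated daily and grouped by service name.
body = {
    "type": "Usage",
    "timeframe": "MonthToDate",
    "dataset": {
        "granularity": "Daily",
        "aggregation": {"totalCost": {"name": "PreTaxCost", "function": "Sum"}},
        "grouping": [{"type": "Dimension", "name": "ServiceName"}],
    },
}

resp = requests.post(url, json=body, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
for row in resp.json()["properties"]["rows"]:
    print(row)  # each row follows the column order given in properties["columns"]
```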

Alerts (in preview)

View and manage all your alerts in a single place with the new alerts preview feature. In this release, you can view budget alerts, monetary commitment alerts, and department spending quota alerts. You can also view active and dismissed alerts.

Getting started

Get started now on this end-to-end cost management and optimization solution that enables you to get the most value for every cloud dollar spent. Please visit the Azure Cost Management documentation page for tutorial and details on getting started.

What’s coming next?

We will continue to iterate on additional Cost Management features in the coming months, so you can enjoy a more unified user experience, with features like the ability to save and schedule reports, additional capabilities in cost analysis, budgets, alerts, and exports, as well as showback.

Partners will also soon be able to leverage the benefits of cost management with our support for the Cloud Solution Provider (CSP) program. With Azure Cost Management, Microsoft is committed to continuing its investment in supporting a multi-cloud environment including Azure and AWS. Public preview for AWS is currently targeted for Q2 of the current calendar year. We plan to continue enhancing this with support for other clouds in the near future.

Are you ready for the best part? Azure Cost Management is available for free to all customers and partners to manage Azure costs.

The Cloudyn portal will continue to be available to customers while we integrate all relevant functionality into native Azure Cost Management.

Follow us on Twitter @AzureCostMgmt for exciting cost management updates.
Quelle: Azure

Refrigerdating: Samsung presents a Tinder for refrigerators

The way to the heart is through the stomach, or through the refrigerator: with the Refrigerdating platform, singles can get to know each other based on the contents of their refrigerators. Samsung uses the service, which really does exist, as a marketing tool for its Family Hub refrigerators with built-in cameras. (Samsung, Applications)
Quelle: Golem

How to make the most of multiple clouds

Many customer-facing activities, such as marketing and customer service, are migrating to the cloud along with mission-critical product development activities, manufacturing and operations processes and standard administrative tasks.
According to a study by the IBM Institute for Business Value (IBV), cloud services are proliferating across enterprise environments so quickly that most IT teams already use anywhere from two to 15 different cloud service providers. Yet many of these clouds are still managed individually.
This leads to significant management challenges, such as optimizing costs, meeting performance goals, establishing IT governance and ensuring visibility and automation.
Enterprise leaders must think holistically about managing these key business processes in a multicloud environment or risk falling behind the competition.
The state of multicloud
To better understand how businesses are managing multiple clouds and planning for the future, IBV surveyed 1,106 executives across 19 industries and in 20 countries. While 98 percent of surveyed organizations plan to operate in a multicloud environment within three years, fewer than half have dedicated multicloud processes in place. Only one-third have the right tools to manage this multicloud environment.
Additionally, many businesses use even more cloud services than intended. Almost 60 percent of organizations surveyed say that independent cloud adoption by business units has already created a de facto multicloud environment. Shadow cloud services often make the actual number of clouds used by enterprises even higher.
Instead of ignoring or stifling activity on multiple clouds, IT teams must facilitate, orchestrate and optimize their multicloud footing. Enterprises that assemble harmonized multicloud platforms can increase their business advantage while optimizing costs.
The study's infographic shows that organizations that effectively manage multiple cloud services outperform their peers in key areas. Multicloud management increases revenue growth and profitability in the private sector by more than 20 percent and enhances efficiency and effectiveness in government agencies by more than 40 percent.
While your organization may already use multiple cloud services, does it have the right forward-facing strategy to fully unleash the power of your cloud? Is your strategy accelerating innovation and providing the visibility, governance and automation capabilities fundamental to the success of your IT operations and site reliability engineer (SRE) teams?
Take a stand
Learn to orchestrate your hybrid, multicloud environment at Think 2019 next week in San Francisco. Attend the “Cloud Management in a Multicloud World” session on Wednesday, 13 February from 4:30 PM to 5:10 PM to hear how other enterprises are optimizing their use of multiple cloud providers, including IBM Cloud, and traditional on-premises environments.
Download the Multicloud Field Guide, and watch a video to learn more about the findings from the IBV study.

Quelle: Thoughts on Cloud