Cloud-first, mobile-first: Microsoft moves to a fully wireless network

Supporting a network in transition: Q&A blog series with David Lef

In this post, the third in a series, David Lef, Principal Network Architect at Microsoft IT, talks with us about supporting a network as it transitions from a traditional infrastructure to a fully wireless platform. Microsoft IT is responsible for supporting 900 locations and 220,000 users around the world. David is helping to define the evolution of the network topology to a cloud-based model in Azure that supports changing customer demands and modern application designs.

David Lef explains the planning and processes behind migrating to a wireless networking environment, including primary drivers, planning considerations, and challenges.

Q: Can you explain your role and the environment you support?

A: My role at Microsoft is Principal Network Architect with Microsoft IT. My team supports almost 900 sites around the world and the networking components that connect those sites, which are used by a combination of over 220,000 Microsoft employees and vendors who work on our behalf. Our network supports over 2,500 individual applications and business processes. We are responsible for providing wired, wireless, and remote network access for the organization and for implementing network security across our network, including its edges. We make sure that the nuts and bolts of network functionality work as they should: IP addressing, name resolution, traffic management, switching, routing, and so on.

Q: What is driving the shift toward a wireless environment?

A: For Microsoft, it’s two main things: first, our employees want flexibility in the way they get their work done. Our users don’t simply use a workstation at a desk to do their jobs anymore. They’re using their phone, their tablet, their laptop, and their desktop computer, if they have one. It has evolved into an ecosystem of devices rather than a single productivity device, and most of those devices support wireless. In fact, most of them support only wireless. The second motivator is simple cost effectiveness. It’s cheaper and simpler to set up and install a wireless environment than it is to do the same with wired infrastructure. It also makes upgrades and additions to the networking environment easier and cheaper. With wireless, there are no switch stacks to add and no cables to run.

Q: How did you begin planning for this?

A: When Microsoft started accepting and supporting mobile devices connecting to the corporate network, it was clear that the way our network was accessed was going to change. We initially planned to provide wireless support to the physical locations that needed it the most, as a supplement to our wired infrastructure. However, traffic and usage analysis showed that the wireless network was very quickly becoming our main network infrastructure, from a user’s perspective. We knew that wireless needed to be there to support mobile devices, and we knew we had to plan for the wireless network to eventually support most of our end-user connectivity. We looked at the device profiles across our information worker roles to assess what was necessary, and we built out a network to meet that demand and make sure that it scales well with future growth.

We had, and still have, a lot of wired infrastructure that simply isn’t being used to its potential. At many of our information worker sites, wired port utilization is less than 10 percent. If you average it out across all of our user sites, it’s closer to 30 percent, but when you do the math, it still ends up being a lot of investment in network infrastructure that simply isn’t necessary. Over 75 percent of our sites are targeted for wireless-first, and we’ve been going through the process of removing dependencies on the wired network infrastructure from a user perspective. In some cases, that means putting wireless network adapters into desktop computers that don’t natively support wireless, and simply making sure wireless connectivity is enabled and configured on those devices that do support it. The more complete we can make the transition to wireless in terms of number of devices, the sooner we can retire the existing wired infrastructure and realize the cost savings from it. We estimate that our wireless-first strategy will result in a reduction in network equipment of more than 50 percent.

Q: What are your key considerations in this project?

A: It’s driven primarily by the high-level goal of cloud first, mobile first. Wireless networking simply complements both of these strategies; it’s a logical and necessary part of the larger puzzle. We are a business, of course, so cost and capital savings are important. Migration to wireless as our primary network infrastructure means long-term cost avoidance, less equipment to buy, and decreased maintenance requirements.

We also want the transition to be as non-intrusive as possible to our users. We’re going on-site to make sure they’re ready for the transition to wireless. This might mean helping users install or configure wireless adapters and showing them how to perform tasks, such as installing an operating system, differently. We also want to educate them about using the network and get them comfortable with being their own first level of support and solving basic issues they might encounter.

Q: What have been or will be the biggest challenges in making this work?

A: We’ve run into some challenges in a few different areas. Different devices and their drivers have their peculiarities and issues, whether that’s with a new wireless adapter we’re putting into an existing computer or access and authentication mechanisms for devices that use older wireless network hardware. We also have a lot of wireless access points around the globe, so standardization of those access points has been a challenge. With the advent of bring your own device (BYOD) and the emergence of the “Internet of Things” (IoT), many more wirelessly networked devices are showing up in our environment, and bandwidth is always a concern. A big part of managing this trend is realizing that not all IoT devices need to be included in our corporate network—only those that will benefit from the functionality that the corporate network enables. We’re providing the highest level of wireless bandwidth that we can, as far as supporting devices and meeting transmission standards, but we’re still closely monitoring bandwidth availability to ensure that we’re eliminating any unnecessary bottlenecks.

We’ve also had to address some changes in processes and perceptions. In some cases, older technology that’s in use doesn’t work with wireless, so we have to show users how to do tasks differently or give them an alternative method.

Q: Is the technology available today to make this successful?

A: Yes, and we’re in the process of rolling out 802.11ac, which gives us more capabilities and bandwidth across our wireless infrastructure. We’ve also committed to having 802.11ac fully implemented before we begin any mandated removal of our existing wired infrastructure. We want to ensure that our wireless network can provide our users a satisfactory level of reliability and performance before we start removing the old way of connecting.

We’re continually rolling out upgrades and changes to our infrastructure to implement 802.11ac, but doing so also means making sure that existing equipment that our users employ isn’t inadvertently removed from the network. Whether we provide an 802.11ac-compatible solution or simply replace the device itself, we’re very conscious of reducing the negative impact of the change on our users.

Q: What is the roadmap for pilot and implementation of this project?

A: It’s already in place and underway. The pilot project has been closed, and we have 660 sites targeted for wireless infrastructure updates and conversion in the next 24 months. The other 200 or so will retain wired functionality—these are datacenters, engineering centers, or locations where our customers or users might still require wired connectivity. In the grand scheme of things, we’ll be cutting over 90 percent of our end-user network infrastructure. Wired ports will still be available where they are needed, but our wired footprint and the resources needed to support it will be massively reduced.

Learn more

Other blog posts in this series:

Supporting network architecture that enables modern work styles
Engineering the move to cloud-based services

Learn how Microsoft IT is evolving its network architecture.
Source: Azure

Using an Ansible Job Template in a CloudForms Service Bundle

This is part 5, the last post in our series on Ansible Tower integration in Red Hat CloudForms.
As you saw in previous articles, Job Templates can be launched from CloudForms via Ansible Tower to run playbooks on targeted hosts. In particular, we have looked at launching them from a button on a VM and from the CloudForms Service Catalog. In this last article, we examine how to expose Job Templates as Service Items so that they can be used as part of a Service Bundle.
In this example, we reuse our ‘Deploy PostgreSQL’ Job Template to automate the installation and configuration of a PostgreSQL database on a newly provisioned VM. Our service bundle will deploy a new RHEL7 instance on Amazon EC2 and launch our Ansible Job Template to configure the database on this host.
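Under the hood, CloudForms drives Ansible Tower through Tower’s REST API. Purely as a point of reference, here is a minimal, hypothetical sketch of what launching a ‘Deploy PostgreSQL’ Job Template against a single host looks like when calling that API directly from Python. The Tower URL, credentials, job template ID, host name, and variable names are all placeholders, and the template would need ‘prompt on launch’ enabled for the limit and extra variables to be accepted.

# Minimal sketch only: launch a Tower job template against one host via the REST API.
# Tower URL, credentials, template ID, host name, and variable names are placeholders.
import requests

TOWER = "https://tower.example.com"      # assumed Tower host
AUTH = ("admin", "password")             # assumed basic-auth credentials
TEMPLATE_ID = 42                         # assumed ID of the 'Deploy PostgreSQL' template

payload = {
    "limit": "rhel7-db01",               # restrict the play to the newly provisioned VM
    "extra_vars": {
        "postgresql_databases": [{"name": "appdb"}],
        "postgresql_users": [{"name": "appuser", "pass": "changeme"}],
    },
}

resp = requests.post(
    "{0}/api/v1/job_templates/{1}/launch/".format(TOWER, TEMPLATE_ID),
    json=payload,
    auth=AUTH,
)
resp.raise_for_status()
print("Launched job", resp.json().get("job"))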
 
The Ansible Job Template can only run once the AWS instance has been created. To enforce this ordering, we add an initial ‘pre0’ state with a value of /Service/Provisioning/StateMachines/Methods/GroupSequenceCheck to the state machine schema used to launch our Ansible Type service item, /ManageIQ/ConfigurationManagement/AnsibleTower/Service/Provisioning/StateMachines/Provision.
 

 
Now we create a simple provisioning service item called ‘Deploy RHEL7 instance’ based on an Amazon RHEL7 AMI. The ‘Provisioning Entry Point State Machine’ for this Service Item is set to ‘/Service/Provisioning/StateMachines/ServiceProvision_Template/CatalogItemInitialization’ as we will use it in our Service Bundle.
 

 
Similarly, we create a Service Item of type ‘Ansible Type’ for the launch of our Job Template. The ‘Provisioning Entry Point State Machine’ for this Service Item is set to ‘/ConfigurationManagement/AnsibleTower/Service/Provisioning/StateMachines/Provision/provision_from_bundle’.
 

 
Note that none of these Service Items have a Dialog set or are displayed in a catalog. We will simply use them as part of our Service Bundle.
Next, we create a Service Dialog that will prompt the user for some details. We create a new dialog from ‘Automate > Customization > Service Dialogs’. As you can see, our ‘RHEL7 with PostgreSQL’ dialog contains two tabs, with elements specifying the instance name and the database details.
 

 
Here is a summary of each element as configured in our example:
 


 
By now, you should recognize most of these fields:

‘vm_name’ is used to specify the name of the newly provisioned instance. It is automatically handled by CatalogItemInitialization to propagate the specified name as part of the provisioning process. We set the ‘Auto Refresh other fields when modified’ option to trigger the population of the ‘limit’ element.
‘limit’ is used by Ansible Tower to filter which hosts the Job Template runs on. In our example, we set the ‘auto-refresh’ option and specify the ‘Get_VM_Name’ instance as the ‘Entry Point’. The associated ‘get_vm_name’ method simply replicates the value from the ‘vm_name’ element.

 

 

‘param_postgresql_databases’ and ‘param_postgresql_users’ elements allow the user to override the default extra variables set on the Ansible Job Template; a sketch of how these values reach Tower follows below.
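When the bundle is ordered, the provision_from_bundle state machine passes these dialog values through to Tower: elements whose names carry the ‘param_’ prefix become extra variables on the job, and ‘limit’ scopes the run to the new instance. The snippet below is only an illustration of that naming convention, not CloudForms source code; the element names mirror the dialog above.

# Illustration of the 'param_' naming convention, not actual CloudForms code.
# Dialog values arrive with a 'dialog_' prefix; 'param_*' entries become extra vars.
dialog_values = {
    "dialog_vm_name": "rhel7-db01",
    "dialog_limit": "rhel7-db01",
    "dialog_param_postgresql_databases": '[{"name": "appdb"}]',
    "dialog_param_postgresql_users": '[{"name": "appuser", "pass": "changeme"}]',
}

extra_vars = {
    key[len("dialog_param_"):]: value
    for key, value in dialog_values.items()
    if key.startswith("dialog_param_")
}
limit = dialog_values.get("dialog_limit")

print(limit)       # rhel7-db01
print(extra_vars)  # {'postgresql_databases': '...', 'postgresql_users': '...'}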

Now that the Service Dialog is defined, we can create a new Service Bundle. We specify the ‘RHEL7 with PostgreSQL’ dialog and the ‘CatalogBundleInitialization’ Provisioning Entry Point State Machine.
 

 
In the ‘Resources’ section, we add our two Service Items and specify their order of provisioning.
 

 
That’s it. Our new service is now available in our service catalog. We can go ahead and order it to provision a new instance on Amazon and run an Ansible playbook on it. If everything is configured properly, a fully configured PostgreSQL database is deployed on a new Amazon instance.
The code base for this example can be found on GitHub.
 
This concludes this series of articles on Ansible Tower integration introduced in Red Hat CloudForms 4.1. We have seen how the two management tools integrate and we explored examples on how to leverage Ansible automation within CloudForms. Additional documentation on Red Hat CloudForms and Red Hat Ansible Tower can be found on the Red Hat Customer Portal.
Source: CloudForms

Google Cloud Bigtable is generally available for petabyte-scale NoSQL workloads

Posted by Misha Brukman, Product Manager for Google Cloud Bigtable

In the early 2000s, Google developed Bigtable, a petabyte-scale NoSQL database, to handle use cases ranging from low-latency real-time data serving to high-throughput web indexing and analytics. Since then, Bigtable has had a significant impact on the NoSQL storage ecosystem, inspiring the design and development of Apache HBase, Apache Cassandra, Apache Accumulo, and several other databases.

Google Cloud Bigtable, a fully-managed database service built on Google’s internal Bigtable service, is now generally available. Enterprises of all sizes can build scalable production applications on top of the same managed NoSQL database service that powers Google Search, Google Analytics, Google Maps, Gmail and other Google products, several of which serve over a billion users. Cloud Bigtable is now available in four Google Cloud Platform regions: us-central1, us-east1, europe-west1 and asia-east1, with more to come.

Cloud Bigtable is available via a high-performance gRPC API, supported by native clients in Java, Go and Python. An open-source, HBase-compatible Java client is also available, allowing for easy portability of workloads between HBase and Cloud Bigtable.
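As a quick taste of the Python client, here is a minimal sketch that writes one cell and reads it back. The project, instance, table, and column family names are placeholders, and the exact method names can vary slightly between versions of the google-cloud-bigtable library, so treat this as illustrative rather than canonical.

# Minimal sketch: write one cell and read it back with the Cloud Bigtable Python client.
# Project, instance, table, and column family names below are placeholders.
from google.cloud import bigtable

client = bigtable.Client(project="my-project", admin=True)
instance = client.instance("my-instance")
table = instance.table("user-events")

# Write a single cell into column family 'cf1'.
row = table.row(b"user#1234")
row.set_cell("cf1", b"last_login", b"2016-08-16T12:00:00Z")
row.commit()

# Read the row back and print the stored value.
result = table.read_row(b"user#1234")
print(result.cells["cf1"][b"last_login"][0].value)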

Companies such as Spotify, FIS, Energyworx and others are using Cloud Bigtable to address a wide array of use cases, for example:

Spotify has migrated its production monitoring system, Heroic, from storing time series in Apache Cassandra to Cloud Bigtable and is writing over 360K data points per second.
FIS is working on a bid for the SEC Consolidated Audit Trail (CAT) project, and was able to achieve 34 million reads/sec and 23 million writes/sec on Cloud Bigtable as part of its market data processing pipeline.
Energyworx is building an IoT solution for the energy industry on Google Cloud Platform, using Cloud Bigtable to store smart meter data. This allows it to scale without building a large DevOps team to manage its storage backend.

Cloud Platform partners and customers enjoy the scalability, low latency and high throughput of Cloud Bigtable, without worrying about overhead of server management, upgrades, or manual resharding. Cloud Bigtable is well-integrated with Cloud Platform services such as Google Cloud Dataflow and Google Cloud Dataproc as well as open-source projects such as Apache Hadoop, Apache Spark and OpenTSDB. Cloud Bigtable can also be used together with other services such as Google Cloud Pub/Sub and Google BigQuery as part of a real-time streaming IoT solution.

To get acquainted with Cloud Bigtable, take a look at the documentation and try the quickstart. We look forward to seeing you build what’s next!

Source: Google Cloud Platform

Azure SQL Database Threat Detection, your built-in security expert

Azure SQL Database Threat Detection has been in preview for a few months now. We’ve onboarded many customers and received some great feedback. We would like to share a few customer experiences that demonstrate how Azure SQL Database Threat Detection helped address their concerns about potential threats to their database.

What is Azure SQL Database Threat Detection?

Azure SQL Database Threat Detection is a new security intelligence feature built into the Azure SQL Database service. Working around the clock to learn, profile and detect anomalous database activities, Azure SQL Database Threat Detection identifies potential threats to the database.

Security officers or other designated administrators can get an immediate notification about suspicious database activities as they occur. Each notification provides details of the suspicious activity and recommends how to further investigate and mitigate the threat.

Currently, Azure SQL Database Threat Detection detects potential vulnerabilities and SQL injection attacks, as well as anomalous database access patterns. The following customer feedback attests to how Azure SQL Database Threat Detection warned them about these threats as they occurred and helped them improve their database security.

Case 1: Attempted database access by a former employee

Borja Gómez, architect and development lead at YesEnglish

“Azure SQL Database Threat Detection is a useful feature that allows us to detect and respond to anomalous database activities, which were not visible to us beforehand. As part of my role designing and building Azure-based solutions for global companies in the Information and Communication Technology field, we always turn on Auditing and Threat Detection, which are built-in and operate independently of our code. A few months later, we received an email alert that "Anomalous database activities from unfamiliar IP (location) was detected." The threat came from a former employee trying to access one of our customer’s databases, which contained sensitive data, using old credentials. The alert allowed us to detect this threat as it occurred, we were able to remediate the threat immediately by locking down the firewall rules and changing credentials, thereby preventing any damage. Such is the simplicity and power of Azure.”

Case 2: Preventing SQL injection attacks

Richard Priest, architectural software engineer at Feilden Clegg Bradley Studios and head of the collective at Missing Widget

“Thanks to Azure SQL Database Threat Detection, we were able to detect and fix code vulnerabilities to SQL injection attacks and prevent potential threats to our database. I was extremely impressed by how simple it was to enable the threat detection policy using the Azure portal, which required no modifications to our SQL client applications. A while after enabling Azure SQL Database Threat Detection, we received an email notification about ‘An application error that may indicate a vulnerability to SQL injection attacks.’ The notification provided details of the suspicious activity and recommended concrete actions to further investigate and remediate the threat. The alert helped me to track down the source of my error and pointed me to the Microsoft documentation that thoroughly explained how to fix my code. As the head of IT, I now guide my team to turn on Azure SQL Database Auditing and Threat Detection on all our projects, because it gives us another layer of protection and is like having a free security expert on our team.”

Case 3: Anomalous access from home to a production database

Manrique Logan, architect and technical lead at ASEBA

“Azure SQL Database Threat Detection is an incredible feature, super simple to use, empowering our small engineering team to protect our company data without the need to be security experts. Our non-profit company provides user-friendly tools for mental health professionals, storing health and sales data in the cloud. As such, we need to be HIPAA and PCI compliant, and Azure SQL Database Auditing and Threat Detection help us achieve this. These features are available out of the box, and simple to enable too, taking only a few minutes to configure. We saw the real value from these not long after enabling Azure SQL Database Threat Detection, when we received an email notification that ‘Access from an unfamiliar IP address (location) was detected.’ The alert was triggered as a result of my unusual access to our production database from home. Knowing that Microsoft is using its vast security expertise to protect my data gives me incredible peace of mind and allows us to focus our security budget on other issues. Furthermore, knowing that every database activity is being monitored has increased security awareness among our engineers. Azure SQL Database Threat Detection is now an important part of our incident response plan. I love that Azure SQL Database offers such powerful and easy-to-use security features.”

Turning on Azure SQL Database Threat Detection

Azure SQL Database Threat Detection is incredibly easy to enable. You simply navigate to the Auditing and Threat Detection configuration blade for your database in the Azure management portal. There you switch on Auditing and Threat Detection, and configure at least one email address for receiving alerts.
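If you prefer automation over the portal, the same policy can also be set through the Azure Resource Manager REST API. The sketch below is a rough, unverified illustration: the resource path, api-version, and property names are assumptions about the SQL security alert policy resource, and the subscription, resource group, server, database, and bearer token are placeholders, so check the current Azure documentation before relying on it.

# Rough sketch (unverified): enable the threat detection policy via the ARM REST API.
# The resource path, api-version, and property names are assumptions; all identifiers
# and the bearer token are placeholders.
import requests

SUBSCRIPTION = "00000000-0000-0000-0000-000000000000"
RESOURCE_GROUP = "my-resource-group"
SERVER = "my-sql-server"
DATABASE = "my-database"
TOKEN = "<bearer token obtained from Azure Active Directory>"

url = (
    "https://management.azure.com/subscriptions/{0}/resourceGroups/{1}"
    "/providers/Microsoft.Sql/servers/{2}/databases/{3}"
    "/securityAlertPolicies/Default?api-version=2014-04-01"
).format(SUBSCRIPTION, RESOURCE_GROUP, SERVER, DATABASE)

policy = {
    "properties": {
        "state": "Enabled",                      # turn the policy on
        "emailAddresses": "secops@example.com",  # at least one alert recipient
        "emailAccountAdmins": "Enabled",         # also notify server admins
    }
}

resp = requests.put(url, json=policy, headers={"Authorization": "Bearer " + TOKEN})
resp.raise_for_status()
print(resp.status_code)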

Click the following links to:

Learn more about Azure SQL Database Threat Detection.
Learn more about Azure SQL Database.

We’ll be glad to get feedback on how this feature is serving your security requirements. Please feel free to share your comments below.
Source: Azure

Apply for a Docker Scholarship and learn how to code!

Today, Docker is proud to announce the launch of the Docker Scholarship Program in partnership with Reactor Core to improve opportunities for underrepresented groups in the tech industry! With the help of the community, we surpassed our goal for the DockerCon 2016 Bump Up Challenge, unlocking $50,000 to fund three full-tuition scholarships.
 

 
The Docker Scholarship Program is part of our continued work to improve opportunities for women and underrepresented groups throughout the global Docker ecosystem and encourage inclusivity in the larger tech community.
Docker’s Goal
The goal of the Docker scholarship program is to strengthen the broader tech community by making it more diverse and inclusive to traditionally underrepresented groups. We aim to achieve that goal by providing financial support and mentorship to three students at Reactor Core’s partner schools, Hack Reactor and Telegraph Academy.
Our Partnership with Hack Reactor and Telegraph Academy
Docker believes in the power of innovation and in pushing current technological boundaries. As a driver of innovation, we embrace our role in advancing opportunities for underrepresented groups in the tech industry. Hack Reactor and Telegraph Academy share our vision of empowering people and creating more opportunities for every member of our community. We are inspired by their commitment to improving the status quo for underrepresented groups in the tech industry.
Available Scholarships:
2016 Scholarships:
Telegraph Academy November Cohort
Complete the Docker Scholarship application and apply to Telegraph Academy’s bootcamp. Applications will be reviewed and applicants who are accepted into the Telegraph Academy program and meet Docker’s criteria will be invited to Docker HQ for a panel interview with Docker team members. Scholarships will be awarded based on acceptance to Telegraph Academy program, demonstration of personal financial need and quality of the responses to the Docker Scholarship application.
Apply here
Hack Reactor October Cohort
Complete the Docker Scholarship application and apply to Hack Reactor’s bootcamp. Applications will be reviewed and applicants who are accepted into the Hack Reactor program and meet Docker’s criteria will be invited to Docker HQ for a panel interview with Docker team members. Scholarships will be awarded based on acceptance to Hack Reactor program, demonstration of personal financial need and quality of the responses to the Docker Scholarship application.
As women are traditionally underrepresented in the tech industry, we have a strong preference to award this scholarship to a self-identified woman. However, we encourage all to apply, as there may be additional opportunities available.
Apply here
 
2017 Scholarships:
Telegraph Academy February Cohort
Stay tuned for Telegraph’s February 2017 cohort application.
Visit the Docker Scholarship page to learn more about each scholarship and the partner programs.
 
Want to help Docker with these initiatives?
We’re always happy to connect with other folks or companies who want to improve opportunities for women and underrepresented groups throughout the global Docker ecosystem and promote diversity in the larger tech community.
If you or your organization are interested in getting more involved, please contact us at events@docker.com. With your help, we are excited to take these initiatives to the next level!
Source: https://blog.docker.com/feed/

Portal support for Azure Search blob and table indexers now in preview

When building a search-enabled application, data can come from many places and take many forms, so making it easy to ingest a variety of data sources is extremely important. Bringing amazing search to your data just got a little easier. Today we’re excited to announce preview support for Azure blob and Azure table data sources in the Portal. Make your Microsoft Office, HTML, PDF, and other documents searchable with just a few clicks in the Import Data wizard.

We’ve provided simple user interfaces to pick accounts and containers from within your subscription. Perhaps you want to index blobs containing an Outlook email archive, or create a resume search application to streamline your hiring process.

After selecting your data, we’ll detect your metadata fields and suggest an index. The blob indexer has the ability to crack open your documents and extract all text into the content field as well.
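For those who prefer scripting the same setup, the blob data source and its indexer can also be created through the Azure Search REST API, as described in the articles linked below. The following is a minimal, hypothetical sketch: the service name, admin key, connection string, container, index name, and api-version are placeholders, and it assumes the target index already exists.

# Minimal sketch: register a blob data source and an indexer via the Azure Search REST API.
# Service name, api-key, connection string, container, and index name are placeholders.
import requests

SERVICE = "my-search-service"
API_KEY = "<admin api-key>"
API_VERSION = "2016-09-01"  # assumption: use whichever api-version is current
BASE = "https://{0}.search.windows.net".format(SERVICE)
HEADERS = {"api-key": API_KEY, "Content-Type": "application/json"}

# 1. Point Azure Search at the blob container holding the documents.
datasource = {
    "name": "resume-blobs",
    "type": "azureblob",
    "credentials": {"connectionString": "<storage account connection string>"},
    "container": {"name": "resumes"},
}
requests.post("{0}/datasources?api-version={1}".format(BASE, API_VERSION),
              json=datasource, headers=HEADERS).raise_for_status()

# 2. Create an indexer that cracks the documents into an existing index.
indexer = {
    "name": "resume-indexer",
    "dataSourceName": "resume-blobs",
    "targetIndexName": "resumes-index",
    "schedule": {"interval": "PT1H"},  # re-run every hour
}
requests.post("{0}/indexers?api-version={1}".format(BASE, API_VERSION),
              json=indexer, headers=HEADERS).raise_for_status()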

We hope that these new data sources will enable some truly awesome experiences! For more information about indexers, see our articles on indexing table and blob storage through the API. To get started with Azure Search in the Portal check out this article. If you have questions, please feel free to reach out in the comments below, or leave your feedback on UserVoice.
Source: Azure