Amazon CloudFront now lets you select a security policy with a minimum protocol version of TLS v1.1 or v1.2, along with corresponding security ciphers, for viewer connections!

Starting today, you can further improve security for your web applications on Amazon CloudFront by selecting a pre-defined security policy that enforces TLS version 1.1 or 1.2 as the minimum protocol version. Amazon CloudFront automatically selects the cipher suites for your chosen security policy and uses them to encrypt your content before returning it to viewers over HTTPS. For instance, if you select the security policy that enforces TLS version 1.1 as the minimum, weak ciphers such as RC4 and 3DES are automatically excluded. This feature is available when you use custom SSL certificates to serve HTTPS requests using SNI.
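In an AWS CloudFormation distribution config, for instance, the selection could look like the following sketch (the ACM certificate ARN is a placeholder; TLSv1.1_2016 is the policy identifier that enforces TLS v1.1 as the minimum):

```json
"ViewerCertificate": {
  "AcmCertificateArn": "arn:aws:acm:us-east-1:111122223333:certificate/example-id",
  "SslSupportMethod": "sni-only",
  "MinimumProtocolVersion": "TLSv1.1_2016"
}
```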
Source: aws.amazon.com

Mark Zuckerberg Defends Facebook Against President's "Anti-Trump" Tweet


Donald Trump took to Twitter. Mark Zuckerberg responded on Facebook.

On Wednesday, the Facebook CEO responded to the president's comments that his company “was always anti-Trump” with a bulleted statement that attempted to downplay the notion that the social network influenced the 2016 election for either party.

“The facts suggest the greatest role Facebook played in the 2016 election was different from what most are saying,” Zuckerberg wrote on Facebook.

It's been a contentious month for Facebook after the company acknowledged efforts by foreign entities to manipulate the race on its platform by buying targeted ads. Last week, the company said it would give the House and Senate Intelligence Committees copies of more than 3,000 ads with ties to Russian actors, with Zuckerberg announcing a new policy for advertising so-called dark posts. On Wednesday, the company, along with Google and Twitter, was invited to testify in front of the Senate Intelligence Committee on Nov. 1.

A source close to Facebook confirmed that the company had received the invite, but that it had not decided who to send in front of the committee.

“Trump says Facebook is against him,” wrote Zuckerberg. “Liberals say we helped Trump. Both sides are upset about ideas and content they don't like. That's what running a platform for all ideas looks like.”

Earlier on Wednesday, Trump wrote in the first of a two-part tweet that Facebook had been opposed to his candidacy: “Facebook was always anti-Trump,” he said. “The Networks were always anti-Trump hence,Fake News, @nytimes(apologized) & @WaPo were anti-Trump. Collusion?”


In his post, Zuckerberg attempted to outline the positives that his company brought to the election. He noted that “more people had a voice in this election than ever before” because of Facebook, and that all the candidates had Facebook pages through which they interacted with tens of millions of followers. The post, however, made no mention of the fake news and misinformation that the platform helped to proliferate.

“After the election, I made a comment that I thought the idea that misinformation on Facebook changed the outcome of the election was a crazy idea,” Zuckerberg wrote. “Calling that crazy was dismissive and I regret it. This is too important an issue to be dismissive.”

The Facebook CEO also hinted that he may be in favor of campaign spending reforms for online advertising. “Campaigns spent hundreds of millions advertising online to get their messages out even further. That's 1000x more than any problematic ads we've found,” he said.

Since the election, Zuckerberg has stayed out of Trump's orbit. In December, during a meeting of technology leaders at Manhattan's Trump Tower, Facebook opted to send Chief Operating Officer Sheryl Sandberg to sit down with the then-president-elect. He also did not attend a similar meeting for technology leaders in June at the White House, with the company reportedly citing “scheduling conflicts” at the time.

Zuckerberg has also largely avoided saying Trump's name in public settings. He discussed “fearful voices calling for building walls” at a keynote for the company's F8 conference in April 2016 and made a veiled criticism of the presidential administration's approach to immigration at Harvard University's May commencement, but did not mention the president's name at either event. Similarly, in a Facebook post criticizing the president's decision to end the Deferred Action for Childhood Arrivals (DACA) policy, he did not name Trump.

It remains to be seen if Zuckerberg or another representative for Facebook will testify in front of the Senate Intelligence Committee in November.


Source: BuzzFeed

Facebook, Google And Twitter Have Been Asked To Testify Publicly In The Senate’s Russia Investigation


Facebook, Twitter and Google officials have been called to testify publicly before the Senate Intelligence Committee on November 1 about Russian attempts to use social media to sway last year’s presidential election after Facebook revealed that a Russian troll operation had purchased more than 3,000 political ads on the platform.

The news, first reported by Recode, was confirmed to BuzzFeed News by a source familiar with the matter.

The Senate Intelligence Committee, which is leading congressional investigations into Russian election interference, has increased its scrutiny of Facebook, in particular, following its disclosure earlier this month that fake accounts and pages on the site linked to a Russian troll farm spent approximately $100,000 on political ads during the presidential race.

A person familiar with the situation said that Facebook is considering the invitation, but has not decided which executives to send to the hearing. Representatives for Google and Twitter did not immediately respond to a request for comment.

Sen. Richard Burr, chairman of the Senate Intelligence Committee, declined to confirm the invitations to reporters on Wednesday, but said he had spoken to Facebook CEO Mark Zuckerberg recently. Rep. Adam Schiff, the lead Democrat on the House Intelligence Committee, said he had also spoken with Zuckerberg.

Burr said Tuesday members want to hear from someone at Facebook during the public hearing who can speak about “what they need to do to identify foreign money that might come in and what procedures, if any, should be put in law to make sure that elections are not intruded by foreign entities.”

“Clearly it's the bigger companies that we think might have been used and we're working with them to acquire the type of data that we need to look at a public hearing,” Burr told reporters.

The planned public hearing, during which senators will grill officials from all three companies, comes as Facebook is under fire for allowing advertisers to target anti-Semitic interests and being slow to acknowledge efforts by foreign actors to manipulate the 2016 election using the social media platform. Some Democratic senators are reportedly already working on legislation to require greater ad transparency from Facebook and others.

Facebook announced last week that it would give both the House and Senate Intelligence Committees copies of the more than 3,000 Russia-linked ads. When asked on Tuesday if he had seen the ads, Sen. Mark Warner, the top Democrat on the committee, said: “Soon. Really soon. This week soon.”

Burr declined to say whether he had viewed the ads, but he said the committee has “traded a lot of documents with Facebook” and that the social media giant has “been incredibly helpful to us.”

Burr added that the committee is in conversation “with everybody in the social platform arena that we think can provide us insight into whether there was any foreign manipulation of their sites.”

“I think their actions just last week indicate that they believe that it's important to get out in front of this and share as much of it as possible,” Burr said of Facebook.

Facebook announced last week it would publicly display so-called “dark posts,” which advertisers buy to promote to specific audiences but which remain concealed from the broader public. “We will work with others to create a new standard for transparency in online political ads,” Zuckerberg said in a live video address announcing the move, among other measures the company is taking in an attempt to increase transparency.

Asked if it, too, would reveal dark posts, Twitter told BuzzFeed News it has nothing new to announce.

The plan to hold an open hearing with Facebook, Twitter and Google comes as the panel is expected to begin publicly interviewing select high-profile witnesses in October, including Michael Cohen, President Donald Trump’s longtime personal lawyer.

Source: BuzzFeed

WANdisco enables continuous data replication on Azure HDInsight for Big Data applications

We are pleased to announce the expansion of the HDInsight Application Platform to include WANdisco. You can install the WANdisco Fusion app and take advantage of the free trial, too.

Azure HDInsight is the industry leading fully-managed cloud Apache Hadoop and Spark offering, which gives you optimized open-source analytic clusters for Spark, Hive, MapReduce, HBase, Storm, Kafka, and Microsoft R Server, backed by a 99.9% SLA.

WANdisco Fusion provides continuous replication of selected data at scale between multiple Big Data and cloud environments. With guaranteed data consistency and continuous availability, Microsoft Azure HDInsight customers will now have easy access to the cost-saving benefits of Fusion’s hybrid architecture for on-demand data analytics and offsite disaster recovery.

This combined offering of WANdisco on Azure HDInsight enables customers to connect their on-premises Big Data applications to HDInsight and expand their analytical footprint faster. Customers can easily use more open source workloads and libraries in the cloud, since they can create clusters on demand and run them against the data that was replicated by WANdisco.

To learn more, please come to our presentation Extend on-premises Hadoop and Spark deployments across data centers and the cloud, including Microsoft Azure with Pranav Rastogi, Program Manager, Microsoft and Jagane Sundar, Chief Technology Officer, WANdisco at Strata Data Conference New York on Thursday, September 28, 2017 at 1:15 PM in room 1A03. To find out more, please visit the Strata Data Conference website.

The engineering teams are also hosting a webinar where they will discuss this offering in detail. Please join us by registering today.

Microsoft Azure HDInsight – Reliable Open Source Analytics at Enterprise grade and scale

Azure HDInsight is the only fully-managed cloud Hadoop offering that provides optimized open source analytical clusters for Spark, Hive, Interactive Hive, MapReduce, HBase, Storm, Kafka, and R Server, backed by a 99.9% SLA. Each of these Big Data technologies is easily deployable as a managed cluster, with enterprise-level security and monitoring.

The ecosystem of productivity applications in Big Data has grown immensely to help customers be more productive with their Big Data solutions. Today, customers often find it challenging to discover these productivity applications, and in turn struggle to install and configure them.

To address this gap, the HDInsight Application Platform provides a unique experience on HDInsight where Independent Software Vendors (ISVs) can directly offer their applications to customers. Customers can now easily discover, install, and use these applications built for the Big Data ecosystem with a single click.

Setting up a hybrid environment for Big Data scenarios has always been a huge challenge, since customers had to replicate petabytes of data and keep both environments in sync. To help customers connect their on-premises Big Data environments with HDInsight, WANdisco Fusion can be deployed as an HDInsight application.

WANdisco Fusion on Azure HDInsight – Move petabyte scale data from on-premises Big Data deployments to Azure

The integration of WANdisco Fusion with Azure HDInsight presents an enterprise solution that enables organizations to meet stringent data availability and compliance requirements whilst seamlessly moving production data at petabyte scale from on-premises big data deployments to Microsoft Azure.

As customers start moving parts of their Big Data applications to Azure, they gain the flexibility to experiment with advanced analytical offerings, such as running R Server on HDInsight, and with additional open source machine learning libraries. Traditionally, experimenting on an on-premises Hadoop deployment has been hard due to IT and hardware procurement, but the elasticity of HDInsight, where you can spin up clusters, scale them, and delete them on demand, allows you to experiment easily in the cloud. Once you have done your analysis, you can then determine how much of your Big Data deployment you should migrate to the cloud.

Customers can use Fusion for the following scenarios:

Hybrid cloud setup for Big Data applications: Connect on-premises Big Data deployments to HDInsight. You can set up replication from any Hadoop or Spark distribution running any open source workload (Hive, Spark, HBase, and more)
Multi-cloud: Connect any Big Data deployment running in any cloud to Azure HDInsight
Multi-region replication for back-up and disaster recovery

The following are some of the key benefits of Fusion on HDInsight:

Continuous data replication: Data is replicated as soon as changes occur, regardless of where those changes are initiated, with guaranteed consistency
Opt-in backup: An administrator can select subsets of content for replication, with fine-grained control over where data resides
No administrator overhead: Replication is continuous and automatic, recovering from intermittent network or system failures automatically so that the need for administration oversight is eliminated

Getting started with Fusion on HDInsight

Installing Fusion is a two-step process, which configures the Fusion server and the client libraries required on the cluster.

Install the Fusion server: This installs the Fusion server in the same Azure Virtual Network as the HDInsight cluster, which allows the server to access the cluster in a secure manner.

Install the Fusion app on a new or existing HDInsight cluster. In the License key field, enter the public IP of the Fusion server.


After you have installed Fusion on HDInsight, you can follow the user guide to set up continuous active replication from on-premises Big Data deployments to Azure HDInsight, multi-region replication, backup and restore, and more.


Resources

Install WANdisco Fusion App on Azure HDInsight
Install WANdisco Fusion Server
Try WANdisco for free
Learn more about Azure HDInsight
User Guide for WANdisco

Summary

We are pleased to announce the expansion of the HDInsight Application Platform to include WANdisco. This combined offering of WANdisco on Azure HDInsight enables customers to connect their on-premises Big Data applications to HDInsight in the cloud faster. Please visit us at the Strata session and register for the upcoming webinar to learn more.
Source: Azure

Azure Log Analytics – meet our new query language

Azure Log Analytics has recently been enhanced to work with a new query language. The query language itself actually isn’t new at all, and has been used extensively by Application Insights for some time. Recently, the language and the platform it operates on have been integrated into Log Analytics, which allows us to introduce a wealth of new capabilities, and a new portal designed for advanced analytics.

This post reviews some of the cool new features now supported. It’s just the tip of the iceberg though, and you're invited to also review the tutorials on our language site and our Log Analytics community space. The examples shown throughout the post can also be run in our Log Analytics playground – a free demo environment you can always use, no registration needed.

Pipe-away

Queries retrieve data stored in one or more tables. Check out this basic query:

Event

This is as simple as you can get, but it's still a valid query that simply returns everything in the Event table. Grabbing every record in a table usually means way too many results though. When analyzing data, a common first step is to review just a handful of records from a table, and plan how to zoom in on relevant data. This is easily done with “take”:

Event
| take 10

This is the general structure of queries – multiple elements separated by pipes. The output of the first element (i.e., the entire Event table) is the input of the next one. In this case, the final query output will be 10 records from the Event table. After reviewing them, we can decide how to make our query more specific. Often, we will use where to filter by a specific condition, such as this:

Event
| where EventLevelName == "Error"

This query will return all records in the table where EventLevelName equals “Error” (case sensitive).
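If case-insensitive matching is preferred, the language also offers the =~ operator; a variant of the query above (note the lowercase search value) would be:

```kusto
Event
| where EventLevelName =~ "error"
```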

Looks like our query still returns a lot of records though. To make sense of all that data, we can use summarize. Summarize identifies groups of records by a common value, and can also apply aggregations to each group.

Event
| where EventLevelName == "Error"
| summarize count() by Computer

This example returns the number of Event records marked as Error, grouped by computer.

Try it out on our playground!

Search

Sometimes we need to search across all our data, instead of restricting the query to a specific table. For this type of query, use the “search” keyword:

search "212.92.108.214"
| where TimeGenerated > ago(1h)

The above example searches all records from the last hour that contain a specific IP address.

Scanning all data could take a bit longer to run. To search for a term across a set of tables, scope the search this way:

search in (ConfigurationData, ApplicationInsights) "logon" or "login"

This example searches only the ConfigurationData and ApplicationInsights tables for records that contain the terms “logon” or “login”.

Note that search terms are case insensitive by default. Search queries have many variants; you can read more about them in our tabular operators documentation.

Query-time custom fields

We often find that we want to calculate custom fields on the fly, and use them in our analysis. One way to do it is to assign our own name to automatically-created columns, such as ErrorsCount:

Event
| where EventLevelName == "Error"
| summarize ErrorsCount=count() by Computer
| sort by ErrorsCount

But adding fields does not require using summarize. The easiest way to do it is with extend:

Event
| where TimeGenerated > datetime(2017-09-16)
| where EventLevelName == "Error"
| extend PST_time = TimeGenerated-8h
| where PST_time between (datetime(2017-09-17T04:00:00) .. datetime(2017-09-18T04:00:00))

This example calculates PST_time which is based on TimeGenerated, but adapted from UTC to PST time zone. The query uses the new field to filter only records created between 2017-09-17 at 4 AM and 2017-09-18 at 4 AM, PST time.

A similar operator is project. Instead of adding the calculated field to the results set, project keeps only the projected fields. In this example, the results will have only four columns:

Event
| where EventLevelName == "Error"
| project TimeGenerated, Computer, EventID, RenderedDescription

Try it out on our playground.

A complementary operator is project-away, which specifies columns to remove from the result set.
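For example, to keep every column except a couple of verbose ones (assuming the Event table's ParameterXml and EventData columns), one could write:

```kusto
Event
| where EventLevelName == "Error"
| project-away ParameterXml, EventData
```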

Joins

Join merges the records of two data sets by matching values of the specified columns. This allows richer analysis that relies on the correlation between different data sources.

The following example joins records from two tables – Update and SecurityEvent:

Update
| where TimeGenerated > ago(1d)
| where Classification == "Security Updates" and UpdateState == "Needed"
| summarize missing_updates=makeset(Title) by Computer
| join (
SecurityEvent
| where TimeGenerated > ago(1h)
| summarize count() by Computer
) on Computer

Let’s review the two data sets being matched. The first data set is:

Update
| where TimeGenerated > ago(1d)
| where Classification == "Security Updates" and UpdateState == "Needed"
| summarize missing_updates=makeset(Title) by Computer

This takes Update records from the last day that describe needed security updates. It then summarizes the set of required updates per computer.

The second data set is:

SecurityEvent
| where TimeGenerated > ago(1h)
| summarize count() by Computer

This counts how many SecurityEvent records were created in the last hour, per computer.

The common field we matched on is Computer, so eventually we get a list of computers, each with its list of missing security updates and its total number of security events in the last hour.

The default visualization for most queries is a table. To visualize the data graphically, add “| render barchart” at the end of the query, or select the Chart button shown above the results. The outcome can help us decide how to manage our next updates:
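Concretely, the join query from above with rendering appended would be:

```kusto
Update
| where TimeGenerated > ago(1d)
| where Classification == "Security Updates" and UpdateState == "Needed"
| summarize missing_updates=makeset(Title) by Computer
| join (
    SecurityEvent
    | where TimeGenerated > ago(1h)
    | summarize count() by Computer
) on Computer
| render barchart
```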

We can see that the most required update is 2017-09 Cumulative Update for Windows Server, and that the first computer to handle should probably be ContosoAzADDS1.ContosoRetail.com.

Joins have many flavors – inner, outer, semi, etc. These flavors define how matching should be performed and what the output should be. To learn more on joins, review our joins tutorial.

Next steps

Learn more on how to analyze your data:

Query language doc site
Getting started with queries
Upgrading to the new query language

Source: Azure

Announcing general availability of Azure Managed Applications Service Catalog

Today we are pleased to announce the general availability of Azure Managed Applications Service Catalog.

Service Catalog allows corporate central IT teams to create and manage a catalog of solutions and applications to be used by employees in that organization. It enables organizations to centrally manage approved solutions and ensure compliance. It also enables the end customers (in this case, the employees of an organization) to easily discover the list of approved solutions. They can consume these solutions without having to worry about learning how a solution works in order to service, upgrade, or manage it. All of this is taken care of by the central IT team, which publishes and owns the solution.

In this post, we will walk through the new capabilities that have been added to Managed Applications and how they improve the overall experience.

Improvements

We have made improvements to the overall experience and made authoring much easier and more straightforward. Some of the major improvements are described below.

Package construction simplified

In the preview version, the publisher needed to author three files and package them in a zip. One of them was a template file which contained only the Microsoft.Solutions/appliances resource. In this file, the publisher also had to specify all of the parameters needed for the deployment of the actual resources, even though those parameters were already specified in the other template file. Although this was needed, it caused redundant and often confusing work for publishers. Going forward, this file will be auto-generated by the service.

So, the package (.zip) now requires only two files: i) mainTemplate.json, the template file which contains the resources that need to be provisioned, and ii) createUIDefinition.json.

If your solution uses nested templates, scripts or extensions, those don’t need to change.
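As an illustration, a minimal mainTemplate.json might look like the following sketch (the storage account resource and the parameter name are illustrative placeholders, not part of the Service Catalog requirements):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountNamePrefix": { "type": "string" }
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2016-01-01",
      "name": "[concat(parameters('storageAccountNamePrefix'), uniqueString(resourceGroup().id))]",
      "location": "[resourceGroup().location]",
      "sku": { "name": "Standard_LRS" },
      "kind": "Storage",
      "properties": {}
    }
  ]
}
```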

Portal support enabled

At preview, we only had CLI support for creating a managed application definition for the Service Catalog. Now, we have added portal and PowerShell support. With this, the central IT team of an organization can use the portal to quickly author a managed application definition and share it with people in the organization. They don’t need to use the CLI and learn the different commands offered there.

These can be discovered in the portal by clicking on “More Services” and then searching for “Managed”. Don’t use the entries which say “Preview”.


To create a managed application definition, select “Service Catalog managed application definitions” and click the “Add” button. This opens the creation blade.

Support for providing template files inline instead of packaging as .zip

Creating a .zip file, uploading it to a blob, making it publicly accessible, getting the URL, and then creating the managed application definition still required a lot of steps. So, we have enabled another option where you can specify these files inline, using new parameters that have been added to the CLI and PowerShell. Support for inline template files will be added to the portal shortly.

Service Changes

Please note that the following major changes have been made to the service.

New api-version

The general availability release introduces a new api-version, 2017-09-01, which enables you to leverage all the above-mentioned improvements. The Azure portal uses this new api-version, and the latest versions of Azure CLI and Azure PowerShell leverage it as well. You will need to switch to these latest versions to develop and manage Managed Applications. Note that creating and managing Managed Applications will not be supported using the existing version of the CLI after 9/25/2017. Existing resources which were created using the old api-version (old CLI) will continue to work.

Resource type names have changed

The resource type names have changed in the new api-version: Microsoft.Solutions/appliances is now Microsoft.Solutions/applications, and Microsoft.Solutions/applianceDefinitions is now Microsoft.Solutions/applicationDefinitions.
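For reference, a definition resource using the new type and api-version might look like this sketch (the property values shown, such as the package URL and the authorization IDs, are placeholders):

```json
{
  "type": "Microsoft.Solutions/applicationDefinitions",
  "apiVersion": "2017-09-01",
  "name": "myAppDefinition",
  "location": "westcentralus",
  "properties": {
    "lockLevel": "ReadOnly",
    "displayName": "Sample managed application definition",
    "description": "Illustrative placeholder definition",
    "authorizations": [
      {
        "principalId": "00000000-0000-0000-0000-000000000000",
        "roleDefinitionId": "00000000-0000-0000-0000-000000000000"
      }
    ],
    "packageFileUri": "https://example.blob.core.windows.net/packages/myPackage.zip"
  }
}
```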

Upgrade to the latest CLI and PowerShell

As mentioned above, to continue using and creating Managed Applications, you will have to use the latest versions of the CLI and PowerShell, or you can use the Azure portal. Existing versions of these clients, built on the older api-version, will no longer be supported. Your existing resources will be migrated to use the new resource types and will continue to work with the new versions of the clients.

Supported locations

Currently, the supported locations are West Central US and West US 2.

Please try out the new version of the service and let us know your feedback through our user voice channel or in the comments below.

Additional resources

Publish a Marketplace Managed Application
Publish a Service Catalog Managed Application
How to create UIDefinition for the Managed Application
Managed Applications samples GitHub repository

Source: Azure