Monitoring Azure SQL Data Sync using OMS Log Analytics

Azure SQL Data Sync is a service that enables customers to easily synchronize data, bidirectionally or unidirectionally, between multiple Azure SQL databases and on-premises SQL Server databases.

Previously, users had to check Azure SQL Data Sync manually in the Azure portal, or pull the log through PowerShell or the REST API, to detect errors and warnings. By following the steps in this blog post, Data Sync users can configure a custom solution that greatly improves the Data Sync monitoring experience. Remember, this solution can and should be customized to fit your scenario.

Automated email notifications

Users will no longer need to check the log manually in the Azure portal or through PowerShell or the REST API. By leveraging OMS Log Analytics, we can create alerts that email the people who need to know whenever an error occurs.

Monitoring dashboard for all your Sync Groups 

Users will no longer need to manually look through the logs of each Sync Group individually to look for issues. Users will be able to monitor all their Sync Groups from any of their subscriptions in one place using a custom OMS view. With this view users will be able to surface the information that matters to them.

How do you set this up?

You can implement your custom Data Sync OMS monitoring solution in less than an hour by following the steps below and making minimal changes to the given samples.

You’ll need to configure 3 components:

PowerShell Runbook to feed Data Sync Log Data to OMS
OMS Log Analytics alert for email notifications
OMS view for monitoring

Download the 2 samples:

Data Sync Log PowerShell Runbook
Data Sync Log OMS View

Prerequisites:

Azure Automation Account
Log Analytics linked with OMS Workspace

PowerShell Runbook to get Data Sync Log 

We will use a PowerShell runbook hosted in Azure Automation to pull the Data Sync log data and send it to OMS; a sample script is included. As a prerequisite, you need an Azure Automation account. You will need to create a runbook and a schedule for running it.
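The sample runbook is a PowerShell script, but the mechanism it relies on is the Log Analytics HTTP Data Collector API: each batch of log records is posted as JSON with an HMAC-SHA256 "SharedKey" signature. As a hedged illustration of that signing scheme (the workspace ID, key, and record fields below are placeholders, not values from the sample script), here is the same request built in Python:

```python
import base64
import hashlib
import hmac
import json
from datetime import datetime, timezone

def build_signature(workspace_id, shared_key, content_length, date_rfc1123):
    """Build the SharedKey Authorization header used by the
    Log Analytics HTTP Data Collector API."""
    string_to_sign = (f"POST\n{content_length}\napplication/json\n"
                      f"x-ms-date:{date_rfc1123}\n/api/logs")
    decoded_key = base64.b64decode(shared_key)
    digest = hmac.new(decoded_key, string_to_sign.encode("utf-8"),
                      hashlib.sha256).digest()
    return f"SharedKey {workspace_id}:{base64.b64encode(digest).decode()}"

# Placeholder credentials -- use the workspace ID and primary key from
# OMS Portal -> Settings -> Connected Sources.
workspace_id = "00000000-0000-0000-0000-000000000000"
shared_key = base64.b64encode(b"placeholder-key").decode()

# A hypothetical Data Sync log record; string fields get a type suffix
# (for example SyncGroupName_s) once ingested into Log Analytics.
records = [{"SyncGroupName": "MySyncGroup",
            "LogLevel": "Error",
            "Details": "Sample Data Sync log entry"}]
body = json.dumps(records)
date = datetime.now(timezone.utc).strftime("%a, %d %b %Y %H:%M:%S GMT")

headers = {
    "Content-Type": "application/json",
    "Authorization": build_signature(workspace_id, shared_key, len(body), date),
    "Log-Type": "DataSyncLog",  # surfaces in OMS as the DataSyncLog_CL type
    "x-ms-date": date,
}
# The runbook POSTs `body` with these headers to:
# https://{workspace_id}.ods.opinsights.azure.com/api/logs?api-version=2016-04-01
```

The Log-Type header is what makes the records queryable as Type=DataSyncLog_CL later on.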

To create the runbook:

Under your Azure Automation Account, click the Runbooks tab under Process Automation.
Click Add a runbook at the top left corner of the Runbooks blade.
Click Import an existing Runbook.
Under Runbook file use the given “DataSyncLogPowerShellRunbook” file. Set the Runbook type as “PowerShell”. You can use any name you want.
Click Create. You now have your runbook.
Under your Azure Automation Account, click the Variables tab under Shared Resources.
Click Add a variable at the top left side of the variables blade. We need to create a variable to store the last execution time for the runbook. If you have multiple runbooks you'll need one variable for each.
Set the name as “DataSyncLogLastUpdatedTime” and Type as DateTime.
Select the Runbook and click the edit button at the top of the blade.
Make the required changes (details in the script)

Azure information
Sync Group information
OMS information (find this information at OMS Portal -> Settings -> Connected Sources)

Run the runbook in the test pane and check to make sure it’s successful.

Note: If you have errors make sure you have the newest PowerShell Module installed. You can do this in the Modules Gallery in your Automation Account.

Click Publish
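The watermark variable created above is what keeps each run incremental. A sketch of how it is typically read and updated with the Azure Automation cmdlets (the exact logic in the provided sample script may differ):

```powershell
# Read the timestamp of the last successful pull (the name must match the
# DataSyncLogLastUpdatedTime variable created above).
$lastUpdated = Get-AutomationVariable -Name "DataSyncLogLastUpdatedTime"

# ...pull Data Sync log entries newer than $lastUpdated and post them to OMS...

# Persist the new high-water mark so the next run picks up where this one ended.
Set-AutomationVariable -Name "DataSyncLogLastUpdatedTime" -Value (Get-Date)
```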

To schedule the runbook:

Under your runbook, click the Schedules tab under Resources.
Click Add a Schedule in the Schedules blade.
Click Link a Schedule to your runbook.
Click Create a new schedule.
Set Recurrence to Recurring and set the interval you’d like. You should use the same interval here, in the script, and in OMS.
Click Create

To monitor if your automation is running:

Under Overview for your automation account, find the Job Statistics view under Monitoring. Pin this to your dashboard for easy viewing.
Successful runs of the runbook will show as “Completed” and failed runs will show up as “Failed”.

OMS log reader alert for email notifications

We will use OMS Log Analytics to create an alert. As a prerequisite, you need Log Analytics linked with an OMS workspace.

In the OMS portal click on Log Search towards the top left.
Create a query to select the errors and warnings by sync group within the interval you are using.

Type=DataSyncLog_CL LogLevel_s!=Success | measure count() by SyncGroupName_s interval 60minute

After running the query click the bell that says Alert.
Under Generate alert based on, select Metric Measurement.

Set the Aggregate Value to Greater than.
After greater than, use the threshold you’d like to set before you receive notifications.
Transient errors are expected in Data Sync. We recommend that you set the threshold to 5 to reduce noise.

Under Actions set Email notification to “Yes”. Enter the desired recipients.
Click Save. You will now receive email notifications based on errors.

OMS view for monitoring

We will create an OMS view to visually monitor all the sync groups. The view includes a few main components:

The Overview tile shows how many errors, successes, and warnings all your sync groups have.
A tile for all sync groups, showing the number of errors and warnings per sync group that has them. Groups with no issues will not appear.
A tile for each sync group, showing the number of errors, successes, and warnings, along with the recent error messages.

To configure the view:

On the OMS home page, click the plus on the left to open the view designer.
Click Import on the top bar of the view designer and select the “DataSyncLogOMSView” file.
The given view is a sample for managing 2 sync groups. You can edit this to fit your case. Click edit and make the following changes.

Create new “Donut & List” objects from the Gallery as needed.
In each tile update the queries with your information. 

On all tiles, change the TimeStamp_t interval as desired
On the Sync Group specific tiles, update the Sync Group names.

In each tile update the titles as needed.

Click Save and your view is ready.

Cost

In most cases this solution will be free.

Azure Automation: There may be a cost incurred with the Azure Automation Account depending on your usage. The first 500 minutes of job run time per month is free. In most cases you will use less than 500 minutes for this solution. To avoid charges, schedule the runbook to run at an interval of 2 hours or more.

OMS Log Analytics: There may be a cost associated with OMS depending on your usage. The free tier includes 500 MB of ingested data per day, which in most cases is enough for this solution. To decrease usage, apply the failure-only filtering included in the runbook. If you are ingesting more than 500 MB per day, upgrade to the paid tier so that analytics don’t stop when you hit the cap.

Code samples

Data Sync Log PowerShell Runbook
Data Sync Log OMS View

Source: Azure

Welcome To The Age Of Cheap Overseas Information


As ad dollars that used to fund journalism pour into the coffers of Facebook and Google, the information business is experiencing a trend familiar to other American industries: The product they produce is now competing with cheaper versions coming from overseas.

Content farmers in the Philippines, Pakistan, Macedonia (of course), and beyond are launching websites and Facebook pages aimed at Americans in niches such as politics, mental health, marijuana, American muscle cars, and more.

Based on Facebook engagement and other metrics, some of these overseas publishers are now beating their American counterparts. In the process they’re building an industry centered on producing and exporting cheap (and sometimes false) information targeted at the US.

“This is like all of the basic stuff happening in economics and politics today,” said Tyson Barker, a political economist with the Aspen Institute Germany who specializes in international economic policy. “It's a globalization trend and you've seen it also in manufacturing and other industries.”

Americans and others in the English-language world are used to buying clothing and other products with labels that say “Made in China” or “Made in Bangladesh.” Thanks to the rise of platforms like Facebook and Google, a growing amount of the information being served up in English now comes from overseas, albeit without the same kind of labeling.


One surprising area where the impact of this trend is being felt is with Native American news and content.

A few weeks ago, Indian Country Today Media Network, an online and print publisher for Native Americans, announced that it was suspending operations due to the lack of a sustainable business model.

“ICTMN has faced the same challenges that other media outlets have faced,” said a letter from publisher Ray Halbritter. “It is no secret that with the rise of the Internet, traditional publishing outlets have faced unprecedented adversity.”

But while ICTMN had to stop operations, a raft of overseas-based publishers of content about Native Americans continue to forge ahead and experience growth and revenue primarily thanks to Facebook.

TheNativePeople.net, which has two associated Facebook pages with close to half a million fans between them, is run by a man in Kosovo. The website TheIndigenousAmericans.com also pumps out Native American news for visitors coming from its Indigenous People Of America Facebook page, which is approaching 1 million fans, almost twice the number of ICTMN’s. The page has experienced steady growth: It added roughly 200,000 new fans since BuzzFeed News first wrote about it in a December story that identified a slew of Native American publishers based in Kosovo and Vietnam.

A Vietnamese publisher runs WelcomeNative.com and YesWeNative.com, two sites promoted by the Yes We Native Magazine Facebook page, which has more than 350,000 fans. The page says its owner is based in San Francisco, but domain ownership records list the owner as a person named Minh Nhat Tran of Hanoi. Domain owners can list whatever name and location they want in registration records; however, the email address used for both domains has also been listed as the contact for job postings in Vietnam for graphic designers and Facebook page managers, further showing a link to Vietnam. The same person also runs an American news site called USANewToday.com.

Some of the Native American pages and websites earn money from advertising on articles. Many also operate online stores where they sell T-shirts with Native American designs, as well as clothing, mugs, and other items. As reported by BuzzFeed News, these designs are often stolen from actual Native American artists.

“These pages are taking our work and paying for the sponsored posts on Facebook and making tons of money off of us,” said Aaron Silva, the Native American cofounder of The NTVS, a clothing brand in Minnesota.

BuzzFeed News has identified other online publishers in countries including Macedonia, Pakistan, Georgia, Croatia, India, and the Philippines that produce information aimed primarily at US audiences.

“It's clear that those foreign publishers have developed avenues and methods to get their content into the American traffic flow,” said Sarah Thompson, an Indiana woman who operates the Exploiting the Niche Facebook page.

When not homeschooling her children, she hunts down scammers and clickbait artists who target niche information topics. Many of them turn out to be based overseas, she told BuzzFeed News. When asked to name some of the topics where this is the case, she rattled off a list.

“The US military and veterans are popular themes as well as police and police dogs. Anything with animals, animal abuse, wild animals, beautiful nature, flowers, Native Americans, Christianity,” she said. “Really, it could be anything. Any subject I have looked into I have found the corrupt pockets where that community is being exploited.”

Jason Kint, the CEO of Digital Content Next, an alliance of large digital publishers, told BuzzFeed News the current economics of online content often favor people who excel at gaming platforms, rather than media brands doing reporting and original content creation.

“If proper trust frameworks aren't in place to ensure consumer and advertiser trust, then the automation/farming of the content will move to the lowest cost, ethics, laws available,” he said.

Native American publishers aren’t the only ones competing with — and sometimes losing out to — overseas publishers in a niche aimed at people in the US. As previously reported by BuzzFeed News, the town of Veles, Macedonia, is home to dozens of websites targeting American conservatives which often publish fake news. A recent BuzzFeed News analysis of partisan political news websites and Facebook pages revealed that a page run by a 20-year-old in Macedonia outperforms many of the biggest conservative news Facebook pages run by Americans. BuzzFeed News has also found publishers in Kosovo and Georgia that publish (often fake) news crafted for American conservatives.


Health is another niche attracting overseas publishers. According to domain ownership records from DomainTools, a man in Pakistan named “Kashif Shahzad” owns over 200 domain names, several of which focus on mental health and related topics, including MedicalHealthRecords.us, HealthTimes.info, and GeneralHealthcare.co. Another of his sites, GreatAmericans.world, focuses on fibromyalgia and is heavily promoted from a Facebook page called US Health Care. He also owns DailyMedicalNews.co, which is promoted by a Facebook page called Depression Awareness with close to half a million fans. BuzzFeed News contacted him at the email address listed in his domain registrations but did not receive a reply.

One way the (often plagiarized) content from this network of sites spreads is to have fake Facebook accounts share it in Facebook groups about health topics. Thompson pointed BuzzFeed News to several accounts that were part of a group of interconnected profiles that consistently share articles from the same health sites into Facebook groups. Some of the accounts are also administrators of these groups, which focus on mental health, fibromyalgia, addiction, and medical marijuana, among other topics. Along with the fake accounts, some groups, such as this one about marijuana, have administrators based in Pakistan.

One suspicious account with the name Rabia Anwar is a member of seven Facebook groups about marijuana and five dedicated to fibromyalgia. The account’s profile features a photo of a woman, but earlier photos posted on its timeline clearly show it originally belonged to a man. (The account info is also set to male.) The profile also prominently presents the photo of a Pakistani actress and her family as if it depicts the person behind the account.

Since August, the account’s public posting activity consists entirely of sharing new articles from the network of health sites run from Pakistan into Facebook groups.


Thompson was most alarmed when she identified what she believes are fake Facebook accounts that are active in Facebook groups and present themselves as recovering drug addicts. These accounts repeatedly share content from overseas publishers.

“The thought of these spamming bots infiltrating a support group of recovering addicts made me so mad,” she said. “Some clickbaiter thousands of miles away is violating the trust and privacy these communities afford to each other for mere pennies per click.”

Along with the violation of trust, Thompson is concerned that many overseas publishers in the health vertical simply copy and paste whatever information will grab attention, which can often be false claims about new cures, or misleading health warnings.

“They could be giving them bad information, distracting them from proven treatments with snake oil spam, eroding their trust in their doctor, or even giving them bad information that could harm them,” she said. “It's not a joke, it’s not harmless. The heroin epidemic in the Midwest where I live is really bad. Lots of people are dying.”

Health is also a focus for Macedonian publishers. Wired magazine reported on Aleksandar and Borce Velkovski, two brothers who got rich from HealthyFoodHouse.com, a website filled with health tips and recipes. BuzzFeed News also found dozens of health-focused domain names registered to people in Macedonia.

That country is in fact home to a cottage industry of websites focused on motorcycles, American muscle cars, horses, and other topics.

The glut of English-language publishers in Macedonia is partly thanks to a man named Mirko Ceselkoski. More than a decade ago, he figured out how to make money by running websites about cars and other niche topics aimed at Americans. When he met with BuzzFeed News in July in Skopje, Ceselkoski provided a business card that described him as “The Man Who Helped Donald Trump Win US Elections (me and my students from Veles).”

Ceselkoski claims credit for Trump’s win because many of the young publishers in Veles took a course he offers on how to make money with English-language websites. Ceselkoski charged $425, which is roughly equivalent to the average monthly salary in the country.

“I was instructing my students that they should write news aimed at American people,” Ceselkoski said.

He denies telling students to publish fake news, but does instruct them to copy a few paragraphs from a story that’s performing well on Facebook and create a new story from that. It's the content equivalent of an overseas factory pumping out knockoffs of the latest fashion trend.


Plagiarism is a standard tactic of low-quality overseas publishers. All of the content BuzzFeed News reviewed on the health sites run from Pakistan was stolen from other websites. (There was even one story about antidepressants stolen from BuzzFeed.)

The same is true for players in the Native American niche. TheIndigenousAmericans.com recently featured a Q&A with actor Adam Beach. That interview was stolen word-for-word from Indian Country Today Media Network.

The same plagiarism frequently occurs in the world of fake political news, too. As previously detailed by BuzzFeed News, multiple publishers in Macedonia, Kosovo, Bulgaria, and Georgia plagiarize the fake articles published on a group of websites run by a man in Maine. The man, Christopher Blair, calls himself a liberal troll and claims he publishes the fake stories — such as “BREAKING: Hillary Clinton Personally Funded Antifa Terrorists With $7.1 Million Bankroll” — to expose the ignorance of American conservatives. After months of having his content stolen, he managed to get some of their websites and Facebook pages shut down.

“They will copy, paste, and post as many times in a day as they can. They steal content from pages with a lot of shares,” he said.

Sometimes overseas publishers mix their topics to puzzling effect. A website called USMedicalCouncil.com shows new visitors a pop-up message to like the Fibro & Chronic Pain Center Facebook page. That page constantly posts articles connected to health spammers in Pakistan. However, USMedicalCouncil.com recently switched topics and now posts hyperpartisan political stories. One of its most recent is a completely false story alleging incest in the Trump family.


TheNativePeople.net, which is run from Kosovo, is just as likely to publish a list of “home remedies” to help with clogged arteries, which itself is an article copied from a health site run by a Macedonian, according to domain registration records.

But not all overseas publishers working in English operate at the lowest end of the value chain. Bored Panda publishes viral content about art, design, and other topics. It frequently works with the original artists to create stories. The company was founded in Lithuania, and that’s where the majority of its staff is based. Owner Tomas Banisauskas did not respond to interview requests from BuzzFeed News, but he did publish a post on Medium titled “How we built a global media business with $5/month.” The $5 in question is the cost of his initial web hosting bill.

“I was laser-focused on profits from day one,” wrote Banisauskas, who studied business at Vilnius University. “The idea was to create content that people would share on social networks, which would bring free traffic back to my website. All this traffic then could be monetised with AdSense banners.”

He said Bored Panda succeeded by focusing on publishing a smaller number of quality posts, rather than churning out a large number each day. This, and what he said was a decision to avoid using clickbait headlines, helped his site avoid a crash in traffic that hit viral sites such as Upworthy when Facebook changed its algorithm, according to Banisauskas.


Source: BuzzFeed

What’s brewing in Visual Studio Team Services: September 2017 Digest

Visual Studio Team Services offers the best DevOps tooling to create an efficient continuous integration and release pipeline to Azure. This month we’ll take a look at support for Git forks, multi-phase builds, Work Items hub improvements, new reporting widgets, and an updated NDepend extension. Let’s get started with our first look at forking in VSTS.

Forks preview

Preview feature

This capability is enabled through the Git Forks preview feature on your account.

You can now fork and push changes back within a VSTS account! This is the first step on our journey with forks. The next step will be to enable you to fork a repository into a different VSTS account.

A fork is a complete, server-side copy of a repository, including all files, commits, and (optionally) branches. Forks are a great way to isolate experimental, risky, or confidential changes from the original codebase. Once you’re ready to share those changes, it’s easy to use pull requests to push the changes back to the original repository.

Using forks, you can also allow a broad range of people to contribute to your repository without giving them direct commit access. Instead, they commit their work to their own fork of the repository. This gives you the opportunity to review their changes in a pull request before accepting those changes into the central repository.

A fork starts with all the contents of its upstream (original) repository. When you create a fork, you can choose whether to include all branches or limit to only the default branch. None of the permissions, policies, or build definitions are applied. The new fork acts as if someone cloned the original repository, then pushed to a new, empty repository. After a fork has been created, new files, folders, and branches are not shared between the repositories unless a PR carries them along.

You can create PRs in either direction: from fork to upstream, or upstream to fork. The most common direction will be from fork to upstream. The destination repository’s permissions, policies, builds, and work items will apply to the PR.

See the documentation for forks for more information.

Create a folder in a repository using web

You can now create folders via the web in your Git and TFVC repositories. This replaces the Folder Management extension, which will now be deprecated.

To create a folder, click New > Folder in either the command bar or context menu:

Wiki page deep linking

Wiki now supports deep linking to sections within a page and across pages, which is really useful for creating a table of contents. You can reference a heading in the same page or on another page by using the following syntax:

Same page: [text to display](#section-name)
Another page: [text to display](/page-name#section-name)
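For instance, a simple table of contents built with these links (the page and section names here are hypothetical) could look like:

```markdown
<!-- Same-page links: the anchor is the heading text, lowercased, spaces as dashes -->
- [Getting Started](#getting-started)
- [Build and Test](#build-and-test)

<!-- Cross-page link to a section on another wiki page -->
See [Release Notes](/Release-Notes#known-issues) for details.
```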

See the documentation for Markdown syntax guidance for more information.

Preview content as you edit Wiki pages

Data shows that users almost always Preview a wiki page multiple times while editing content. For each page edit, users click on Preview 1-2 times on average. This can be particularly time-consuming for those new to markdown. Now you can see the preview of your page while editing.

Paste rich content as HTML in Wiki

You can now paste rich text in the markdown editor of Wiki from any browser-based applications such as Confluence, OneNote, SharePoint, and MediaWiki. This is particularly useful for those who have created rich content such as complex tables and want to show it in Wiki. Simply copy content and paste it as HTML.

Multi-phase builds

Modern multi-tier apps often must be built with different sets of tasks, on different sets of agents with varying capabilities, sometimes even on different platforms. Until now, in VSTS you had to create a separate build for each aspect of these kinds of apps. We’re now releasing the first set of features to enable multi-phase builds.

You can configure each phase with the tasks you need, and specify different demands for each phase. Each phase can run multiple jobs in parallel using multipliers. You can publish artifacts in one phase, and then download those artifacts to use them in a subsequent phase.

You’ll also notice that all of your current build definitions have been upgraded to have a single phase. Some of the configuration options such as demands and multi-configuration will be moved to each phase.

We’re still working on a few features, including:

Ability to select a different queue in each phase.
Ability to consume output variables from one phase in a subsequent phase.
Ability to run phases in parallel. (For now, all the phases you define run sequentially).

CI builds for Bitbucket repositories

It's now possible to run CI builds from connected Bitbucket repositories. To get started, set up a service endpoint to connect to Bitbucket. Then in your build definition, on the Tasks tab select the Bitbucket source.

After that, enable CI on the Triggers tab, and you’re good to go.

This feature works only for builds in VSTS accounts and with cloud-hosted Bitbucket repositories.

Release template extensibility

Release templates give you a baseline to get started when defining a release process. Previously, you could upload new ones to your account, but now authors can include release templates in their extensions. You can find an example on the GitHub repo.

Conditional release tasks and phases

Similar to conditional build tasks, you can now run a task or phase only if specific conditions are met. This will help you in modeling rollback scenarios.

If the built-in conditions don’t meet your needs, or if you need more fine-grained control over when the task or phase runs, you can specify custom conditions. Express the condition as a nested set of functions: the agent evaluates the innermost function and works its way outward. The final result is a boolean value that determines whether the task runs.
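As a sketch, a custom condition that runs a cleanup task only when an earlier step has failed could be expressed like this (SkipCleanup is a hypothetical custom variable, not a built-in):

```
and(failed(), ne(variables['SkipCleanup'], 'true'))
```

Reading from the inside out, the agent first evaluates failed() and the ne() comparison, then the enclosing and().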

Personalized notifications for releases

Release notifications are now integrated with the VSTS notification settings experience. Those managing releases are now automatically notified of pending actions (approvals or manual interventions) and important deployment failures. You can turn off these notifications by navigating to the Notification settings under the profile menu and switching off Release Subscriptions. You can also subscribe to additional notifications by creating custom subscriptions. Admins can control subscriptions for teams and groups from the Notification settings under Team and Account settings.

Release definition authors will no longer have to manually send emails for approvals and deployment completions.

This is especially useful for large accounts that have multiple stakeholders for releases, including people other than the approver, release creator, and environment owner who want to be notified.

See the documentation on managing release notifications for more information.

Branch filters in release environment triggers

In the new release definition editor, you can now specify artifact conditions for a particular environment. Using these artifact conditions, you have more granular control over which artifacts are deployed to a specific environment. For example, for a production environment, you may want to ensure that only builds generated from the master branch are deployed. This filter needs to be set on every artifact that should meet this criterion.

You can also add multiple filters for each artifact that is linked to the release definition. Deployment will be triggered to this environment only if all the artifact conditions are successfully met.

Gulp, Yarn, and more authenticated feed support

The npm task today works seamlessly with authenticated npm feeds (in Package Management or external registries like npm Enterprise and Artifactory), but until now it’s been challenging to use a task runner like Gulp or an alternate npm client like Yarn unless that task also supported authenticated feeds. With this update, we’ve added a new npm Authenticate build task that will add credentials to your .npmrc so that subsequent tasks can use authenticated feeds successfully.

Run webtests using the VSTest task

Using the Visual Studio Test task, webtests can now be run in the CI/CD pipeline by specifying the tests to run in the task’s assembly input. Any test case work item that has an “associated automation” linked to a webtest can also be run by selecting the test plan or test suite in the task.

Webtest results will be available as an attachment to the test result. This can be downloaded for offline analysis in Visual Studio.

This capability is dependent on changes in the Visual Studio test platform and requires installing Visual Studio 2017 Update 4 on the build/release agent. Webtests cannot be run using prior versions of Visual Studio.

Similarly, webtests can be run using the Run Functional Tests task. This capability depends on changes in the Test Agent that will ship with Visual Studio 2017 Update 5.

See the Load test your app in the cloud using Visual Studio and VSTS quickstart as an example of how you can use this together with load testing.

Work Items hub

Preview feature

To use this capability you must have the New Work Items Hub preview feature enabled on your profile and/or account.

The Work Items hub allows you to focus on relevant items inside a team project via five pivots:

Assigned to me – All work items assigned to you in the project in the order they’re last updated. To open or update a work item, click its title.
Following – All work items you’re following.
Mentioned – All work items you’ve been mentioned in, for the last 30 days.
My activity – All work items that you have recently viewed or updated.
Recently created – All work items recently created in the project.

Creating a work item from within the hub is just one click away:

While developing the new Work Items hub, we wanted to ensure that you could re-create each one of the pivots via the Query Editor. Previously, we supported querying on items that you’re following and that were assigned to you but this sprint we created two new macros: @RecentMentions and @MyRecentActivity. With these, you can now create a query and obtain the work items where you’ve been mentioned in the last 30 days or create a query that returns your latest activity. Here’s a sneak peek of how these macros can be used:
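As a rough sketch of the idea (hedged: the exact clause the Query Editor generates may differ), the @RecentMentions macro could appear in a WIQL query along these lines:

```
SELECT [System.Id], [System.Title], [System.State]
FROM WorkItems
WHERE [System.Id] IN (@RecentMentions)
ORDER BY [System.ChangedDate] DESC
```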

See the documentation for the Work Items hub for more information.

Customizable work item rules

Whether it be automatically setting the value of certain work item fields or defining the behavior of those fields in specific states, project admins can now use rules to automate the behavior of work item fields and ease the burden on their teams. Here are just a few examples of the key scenarios you will be able to configure using rules.

When a work item state changes to Active, make Remaining Work a required field
When a work item is Proposed and the Original Estimate is changed, copy the value of Original Estimate to the Remaining Work field
When you add a custom state, with its own by/date field types, you can use rules to automatically set those fields’ values on state transitions
When a work item state changes, set the value of custom by/date fields

To get started with rules, simply follow these steps:

Select Customize from the work item’s context menu
Create or select an existing inherited process
Select the work item type you would like to add rules on, click Rules, and click New rule

Check out the documentation for custom rules for more information.

Custom Fields and Tags in Notifications

Notifications can now be defined using conditions on custom fields and tags – not only when they change but when certain values are met. This has been a top customer suggestion in UserVoice (see 6059328 and 2436843) and will allow for a more robust set of notifications that can be set for work items.

Inline add on Delivery Plans

New feature ideas can arrive at any moment, so we’ve made it easier to add new features directly to your Delivery Plans. Simply click the New item button available on hover, enter a name, and hit enter. A new feature will be created with the area path and iteration path you’d expect.


New Queries experience

Preview feature

To use this capability you must have the New Queries Experience preview feature enabled on your profile.

The Queries hub has a new look and feel, changes in navigation, and some exciting new features such as the ability to search for queries.

You’ll notice that the left pane has been removed. To quickly navigate between your favorite queries, use the dropdown in the query title.

We’ve also made the following improvements:

Create and edit followed work item queries with the @Follows macro
Query for work items you were mentioned in with the @Mentions macro
Save as now copies charts to the new query
Simplified command bars for Results and Editor
Expanded filter capabilities in the result grid

Burndown and Burnup widgets

The Burndown and Burnup widgets are now available for those who have installed the Analytics Extension on their accounts.

The Burndown widget lets you display a burndown across multiple teams and multiple sprints. You can use it to create a release burndown, a bug burndown, or a burndown on just about any scope of work over any time period. You can even create a burndown that spans team projects!

The Burndown widget helps you answer the question: Will we complete this project on time?

To help you answer that question, it provides these features:

Displays percentage complete
Computes average burndown
Shows you when you have items not estimated with story points
Tracks your scope increase over the course of the project
Projects your project’s completion date based on historical burndown and scope increase trends

You can burndown on any work item type based on count of work items or by the sum of a specific field (e.g., Story Points). You can burndown using daily, weekly, monthly intervals or based on an iteration schedule. You can even add additional filter criteria to fine tune the exact scope of work you are burning down.

The widget is highly configurable, allowing you to use it for a wide variety of scenarios. We expect our customers will find amazing ways to use these two widgets.

The Burnup widget is just like the Burndown widget, except that it plots the work you have completed, rather than the work you have remaining.

Streamlined user management

Preview feature

This capability is enabled through the Streamlined User Management preview feature on your profile and/or account.

Effective user management helps administrators ensure they are paying for the right resources and enabling the right access in their projects. We’ve repeatedly heard in support calls and from our customers that they want capabilities to simplify this process in VSTS. This sprint, we are releasing an experience to general availability, which begins to address these issues. See the documentation for the User hub for more information. Here are some of the changes that you’ll see light up:

Invite people to the account in one easy step

Administrators can now add users to an account, with the proper extensions, access level, and group memberships at the same time, enabling their users to hit the ground running. You can also invite up to 50 users at once through the new invitation experience.

More information when managing users

The Manage users page now shows you more information to help you understand users in your account at a glance. The table of users includes a new column called Extensions that lists the extensions each user has access to.

Adding users to projects and teams

We want to make sure each of your administrators has the tools they need to easily get your team up and running. As part of this effort, we are releasing an improved project invitation dialog. This new dialog enables your project administrators to easily add users to the teams they should be members of. If you are a project or team administrator, you can access this dialog by clicking the Add button on your project home page or the Team Members widget.

Improved authentication documentation and samples

In the past, our REST documentation has focused solely on using personal access tokens (PATs) for access to our REST APIs. We’ve updated our documentation for extensions and integrations to give guidance on how best to authenticate given your application scenario. Whether you’re developing a native client application, an interactive web app, or simply calling an API via PowerShell, we have a clear sample on how best to authenticate with VSTS. For more information see the documentation.
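For instance, a minimal PAT-based call can be sketched in Python; the account URL and API version below are placeholder assumptions, and only the Basic-auth encoding itself is the point:

```python
import base64

def vsts_auth_header(pat: str) -> dict:
    """VSTS accepts a personal access token over HTTP Basic auth:
    empty username, the PAT as the password, base64-encoded."""
    token = base64.b64encode(f":{pat}".encode("ascii")).decode("ascii")
    return {"Authorization": "Basic " + token}

# Hypothetical usage with any HTTP client, e.g.:
#   requests.get("https://myaccount.visualstudio.com/_apis/projects?api-version=4.1",
#                headers=vsts_auth_header(pat))
```

The same header works from PowerShell or any other client; the encoding is the only VSTS-specific step.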

Extension of the month: Code Quality NDepend for TFS 2017 and VSTS

With over 200 installs and a solid 5-star rating, NDepend is one of the best code quality solutions in our Marketplace, and the extension adds a bunch of new features to your VSTS experience:

NDepend Dashboard Hub shows a recap of the most relevant data, including technical debt estimations, code size, Quality Gates status, and rule and issue counts.
Quality Gates are code quality facts that must be enforced before committing to source control and, eventually, before releasing.
Rules check your code against best practices, with 150 included out of the box and the ability to create custom additions.
Technical Debt and Issues are generated by checking your code against industry best practices, including deep issue drill-down.
Trends are supported and visualized across builds so you can see if you’re improving or adding more technical debt.
Code Metrics are recapped for each assembly, namespace, class, or method.
A recap is shown in each Build Summary.

With the new pricing plans for NDepend, you can enable a code quality engineer on your team to use all of these tools starting at a low monthly cost.

For full pricing info, check out the NDepend extension.

These releases always include more than I can cover here. Please check out the August 28th and the September 15th release notes for more information. Be sure to subscribe to the DevOps blog to keep up with the latest plans and developments for VSTS.

Happy coding!

@tfsbuck
Source: Azure

What Austin Powers taught me about IT Integration

(This post is part of a series on IT integration. Read the first article on the urgent need for hybrid integration)
Remember when Austin Powers was defrosted back in 1997? He was a man caught between eras — living life in the style of the swinging 60’s but caught in a world that had changed.
Vanessa Kensington: Mr. Powers, my job is to acclimatize you to the nineties. You know, a lot’s changed since 1967.
Austin Powers: No doubt, love, but as long as people are still having promiscuous sex with many anonymous partners without protection while at the same time experimenting with mind-expanding drugs in a consequence-free environment, I’ll be sound as a pound!
The underlying theme of the movie was Austin’s struggle between his conditioning to the freedom of the 60’s while being forced to live with the responsibility of the 90’s.
In many ways, IT professionals face the same struggle in reverse. Traditionally, IT has been centrally controlled with a dedicated set of resources ensuring that everything was done properly, securely. But the rise of cloud changed that. Now, people across the organization have the freedom to identify a business need and quickly stand up a solution in the cloud, regardless of the long-term impacts. The results have been a little…evil.
The rise of cloud has been swift, but not complete. With IDC’s worldwide 2017 CloudView Survey finding that “nearly 54% of respondents have adopted SaaS for at least one application, and an additional 17% of respondents are planning to adopt SaaS within 12 months,” most organizations run some percentage of their applications on cloud and some percentage on-premises. They have become companies caught between eras.
Like Austin Powers, you need to adapt. You need to consider how you bridge between the freedom of cloud and the responsibility of on-premises applications.
We recently asked our friends at IDC to lend their brain power to the issue of integrating on-premises and cloud environments, and they found that organizations need to consider changing their approach. This is what they determined:

Using ETL or FTP to synchronize application data between a datacenter and a cloud application is not secure enough; cloud-optimized data integration software is required.
Communication shifts from a high-speed LAN to slower broadband connections, creating higher integration latency. This may require a rework of services interfaces to narrow their scope while making them more lightweight to make them faster.
There is the potential of lower reliability between the datacenter and the SaaS application, which means there may be a need for reliable messaging and improved error handling.
Web services interacting with the legacy application may need to be extended to include REST APIs to support formats required by SaaS applications.
The integration bus will not be capable of mediating cloud-originating web services requests without use of gateway software or some type of trusted agent.
There may be a need to integrate assets in the cloud, which means the integration capabilities must be extended to support every workload in every cloud, impacting the new major application and relevant business processes.
There may be a decision to co-locate supporting applications by hosting them in the same cloud as the SaaS application. This may remove latency and reliability concerns, but there is still a need for integration. This means there is a need to adopt new integration software.
Data associated with the cloud application may need to be replicated to a cloud or on-premises data repository for reporting and analytics, which may require cloud-resident data integration and movement technologies.
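Several of the points above (higher latency, lower reliability, improved error handling) boil down to treating every cross-boundary call as potentially flaky. A minimal sketch in Python, assuming a generic callable rather than any specific integration product:

```python
import time

def with_retries(call, attempts=4, base_delay=0.5,
                 retryable=(ConnectionError, TimeoutError)):
    """Invoke a flaky cross-network call, retrying transient failures
    with exponential backoff. A real hybrid integration would add
    jitter, logging, and a dead-letter path for calls that never succeed."""
    for attempt in range(attempts):
        try:
            return call()
        except retryable:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the error to the caller
            time.sleep(base_delay * (2 ** attempt))
```

The same pattern underlies the "reliable messaging" products IDC refers to; the sketch simply makes the retry discipline explicit.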

Integrating your on-premises applications with your cloud applications allows you to put your enterprise data to work in new ways. It provides you the best of both worlds. As Austin said, “Right now we have freedom and responsibility. It’s a very groovy time.”
If you want to learn more about cloud and on-premises integration, download the IDC Report – The Urgent Need for Hybrid Integration or go to the IBM Integration website to learn more about IBM’s view on hybrid cloud integration.
Yeah, baby!
The post What Austin Powers taught me about IT Integration appeared first on Cloud computing news.
Source: Thoughts on Cloud

Announcing Kentico Cloud Solution in Azure Marketplace

Kentico Cloud is a cloud-first headless CMS for digital agencies and end clients. We’re excited to announce that its sample site is now available in the Azure Marketplace. With this solution, your Azure App Service web application can read its content from Kentico Cloud. Kentico Cloud stores the content, tracks visitors, provides you with statistics, and allows for personalization of the content for various customer segments. It lets you distribute the content through an API to any channel and device, such as websites, mobile devices, mixed reality devices, and presentation kiosks.

Here are a few highlights of using Kentico Cloud:

Content is served via REST, backed by a super-fast CDN. This means that the app or website can be developed using any programming language on any platform.
SDKs for multiple programming languages are provided as open source projects developed by Kentico in collaboration with a developer community.
Built-in visitor-tracking feature tracks individual visitors. It allows you to analyze the data to identify customer segments with similar profiles or behavior.
Based on the gathered data, Kentico Cloud allows you to deliver personalized content and interactions with customers.
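Because the content is served via plain REST, fetching it needs nothing beyond an HTTP client. As a hedged sketch (the host name and query-parameter shape are assumptions based on the public Delivery API), building a request URL in Python:

```python
from urllib.parse import urlencode

DELIVERY_BASE = "https://deliver.kenticocloud.com"  # assumed Delivery API host

def items_url(project_id, filters=None):
    """Build a Delivery API URL for content items, with optional
    filter parameters such as {"system.type": "article"}."""
    qs = "?" + urlencode(sorted(filters.items())) if filters else ""
    return f"{DELIVERY_BASE}/{project_id}/items{qs}"
```

Any HTTP GET against such a URL would return the project’s content items as JSON, which is what makes the platform usable from any language.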

The Kentico Cloud sample application utilizes the Git Deploy feature in Azure App Service. With Git Deploy, you don’t have to package your application for Azure Marketplace. Instead, Azure builds your source code straight from GitHub and deploys your app to an arbitrary Azure App Service Web App instance.

Go to the Azure Portal to create a sample application that uses Kentico Cloud.

References

Kentico Cloud
Case Studies

Source: Azure

Azure Analysis Services adds firewall support

We are pleased to introduce firewall support for Azure Analysis Services. By using the new firewall feature, customers can lock down their Azure Analysis Services (Azure AS) servers to accept network traffic only from desired sources. Firewall settings are available in the Azure Portal in the Azure AS server properties. A preconfigured rule called “Allow access from Power BI” is enabled by default so that customers can continue to use their Power BI dashboards and reports without friction. Permission from an Analysis Services admin is required to enable and configure the firewall feature.

Client computers can be granted access by adding individual IPv4 addresses or IPv4 address ranges to the firewall settings. It is also possible to configure the firewall programmatically by using a Resource Manager template along with Azure PowerShell, the Azure Command Line Interface (CLI), the Azure portal, or the Resource Manager REST API. Forthcoming articles on the Microsoft Azure blog will provide detailed information on how to configure the Azure Analysis Services firewall.
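As a hedged sketch of the Resource Manager approach, the relevant fragment of a server template might look like the following (the property names are assumptions based on the Microsoft.AnalysisServices/servers schema, and the IP range is an example value):

```json
"properties": {
  "ipV4FirewallSettings": {
    "firewallRules": [
      {
        "firewallRuleName": "officeRange",
        "rangeStart": "203.0.113.1",
        "rangeEnd": "203.0.113.254"
      }
    ],
    "enablePowerBIService": true
  }
}
```

Setting enablePowerBIService corresponds to the preconfigured “Allow access from Power BI” rule mentioned above.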

Submit your own ideas for features on our feedback forum and learn more about Azure Analysis Services.
Source: Azure