Twitter Bug Is Inserting Tweets Into People's Timelines From Users They Don't Follow

A Twitter bug is baffling some of the social media platform's users Wednesday by inserting tweets into their timelines from people they don't follow.

The inserted tweets, which are being placed into timelines without any initial explanation from Twitter, set off a chorus from users asking, well, what the hell is happening?

Twitter is based on a follow model, which means tweets should appear in someone's timeline only when posted by someone they've elected to follow, or when retweeted by someone they've followed. The mystery tweets were not warmly received by Twitter's user base. Here's a sample of their reactions:

Reached for comment, a Twitter spokesperson told BuzzFeed News, “This is a bug and we're working on a fix.”

Quelle: BuzzFeed

This Man's Bank Wanted To Read All His Emails To Approve A Credit Card

All your info are belong to us.

Twitter: @coderzombie

It turns out that his bank, HDFC, used a third-party company called Verifi.Me, whose website describes it as a verification service that lets users “prove their identities and fast-track their applications”.

BuzzFeed News screenshot / Via verifi.me

Here’s everything that Verifi.Me collects about a user when they use the service, according to the company’s privacy policy.

That's pretty much everything important. Worse, the policy says that the company may share this information with people “who are required to know such information in order provide [services] to you.”

BuzzFeed News screenshot


Quelle: BuzzFeed

Azure Virtual Machine Internals – Part 1

Introduction

Azure cloud services are composed of elements from Compute, Storage, and Networking. The compute building block is the Virtual Machine (VM), which is the subject of this post. A web search will yield plenty of documentation on the commands, APIs, and UX for creating and managing VMs. This is not a 101 or ‘How to’; the reader is for the most part expected to already be familiar with VM creation and management. The goal of this series is to look at what is happening under the covers as a VM goes through its various states.

Azure provides IaaS and PaaS VMs; in this post, when we refer to a VM we mean the IaaS VM. There are two control plane stacks in Azure, Azure Service Management (ASM) and Azure Resource Manager (ARM). We will be limiting ourselves to ARM since it is the forward-looking control plane.

ARM exposes resources such as VMs and NICs, but in reality ARM is a thin frontend layer; the resources themselves are exposed by lower-level resource providers such as the Compute Resource Provider (CRP), Network Resource Provider (NRP), and Storage Resource Provider (SRP). The Portal calls ARM, which in turn calls the resource providers.

Getting Started

For most customers, their first experience creating a VM is in the Azure Portal. I did the same and created a VM of size ‘Standard DS1 v2’ in the West US region. I mostly stayed with the defaults that the UI presented but chose to add a ‘CustomScript’ extension. When prompted, I provided a local file ‘Sample.ps’ as the PowerShell script for the ‘CustomScript’ extension. The PS script itself is a single line: Get-Process.

The VM provisioned successfully, but the overall ARM template deployment failed (bright red on my Portal dashboard). A couple of clicks showed that the ‘CustomScript’ extension had failed, and the Portal showed this message:

{
  "status": "Failed",
  "error": {
    "code": "ResourceDeploymentFailure",
    "message": "The resource operation completed with terminal provisioning state 'Failed'.",
    "details": [
      {
        "code": "DeploymentFailed",
        "message": "At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/arm-debug for usage details.",
        "details": [
          {
            "code": "Conflict",
            "message": "{\r\n  \"status\": \"Failed\",\r\n  \"error\": {\r\n    \"code\": \"ResourceDeploymentFailure\",\r\n    \"message\": \"The resource operation completed with terminal provisioning state 'Failed'.\",\r\n    \"details\": [\r\n      {\r\n        \"code\": \"VMExtensionProvisioningError\",\r\n        \"message\": \"VM has reported a failure when processing extension 'CustomScriptExtension'. Error message: \\\"Finished executing command\\\".\"\r\n      }\r\n    ]\r\n  }\r\n}"
          }
        ]
      }
    ]
  }
}

It wasn’t immediately clear what had gone wrong. We can dig in from here, and as is often true, failures teach us more than successes.

I RDPed into the just-provisioned VM. The logs for the VM Agent are in C:\WindowsAzure\Logs. The VM Agent is a system agent that runs in all IaaS VMs (customers can opt out if they would like). The VM Agent is necessary to run extensions. Let’s peek into the logs for the CustomScript Extension:

C:\WindowsAzure\Logs\Plugins\Microsoft.Compute.CustomScriptExtension.8\CustomScriptHandler

[1732+00000001] [08/14/2016 06:19:17.77] [INFO] Command execution task started. Awaiting completion…

[1732+00000001] [08/14/2016 06:19:18.80] [ERROR] Command execution finished. Command exited with code: -196608

The fact that the failure logs are cryptic hinted that something catastrophic had happened. So I looked at my input again and realized that I had the file extension for the PS script wrong: I had it as Sample.ps when it should have been Sample.ps1. I updated the VM, this time specifying the script file with the right extension. This succeeded, as shown by more records appended to the log file mentioned above.

[3732+00000001] [08/14/2016 08:42:24.04] [INFO] HandlerSettings = ProtectedSettingsCertThumbprint: , ProtectedSettings: {}, PublicSettings: {FileUris: [https://iaasv2tempstorewestus.blob.core.windows.net/vmextensionstemporary-10033fff801becb5-20160814084130535/simple.ps1?sv=2015-04-05&sr=c&sig=M3qa7lS%2BZwp%2B8Tytqf1VEew4YaAKvvYn1yzGrPfSwyw%3D&se=2016-08-15T08%3A41%3A30Z&sp=rw], CommandToExecute: powershell -ExecutionPolicy Unrestricted -File simple.ps1 }

[3732+00000001] [08/14/2016 08:42:24.04] [INFO] Downloading files specified in configuration…

[3732+00000001] [08/14/2016 08:42:24.05] [INFO] DownloadFiles: fileUri = https://iaasv2tempstorewestus.blob.core.windows.net/vmextensionstemporary-10033fff801becb5-20160814084130535/simple.ps1?sv=2015-04-05&sr=c&sig=M3qa7lS+Zwp+8Tytqf1VEew4YaAKvvYn1yzGrPfSwyw=&se=2016-08-15T08:41:30Z&sp=rw

[3732+00000001] [08/14/2016 08:42:24.05] [INFO] DownloadFiles: Initializing CloudBlobClient with baseUri = https://iaasv2tempstorewestus.blob.core.windows.net/

[3732+00000001] [08/14/2016 08:42:24.22] [INFO] DownloadFiles: fileDownloadPath = Downloads

[3732+00000001] [08/14/2016 08:42:24.22] [INFO] DownloadFiles: asynchronously downloading file to fileDownloadLocation = Downloads\simple.ps1

[3732+00000001] [08/14/2016 08:42:24.24] [INFO] Waiting for all async file download tasks to complete…

[3732+00000001] [08/14/2016 08:42:24.29] [INFO] Files downloaded. Asynchronously executing command: 'powershell -ExecutionPolicy Unrestricted -File simple.ps1 '

[3732+00000001] [08/14/2016 08:42:24.29] [INFO] Command execution task started. Awaiting completion…

[3732+00000001] [08/14/2016 08:42:25.29] [INFO] Command execution finished. Command exited with code: 0

The CustomScript extension takes a script file, which can be provided as a file in a Storage blob. The Portal offers a convenience where it accepts a file from the local machine. I had provided Simple.ps1, which was in my temp folder. Behind the scenes, the Portal uploads the file to a blob, generates a shared access signature (SAS), and passes it on to CRP. From the logs above you can see that URI.

This URI is worth understanding. It is a Storage blob SAS with the following attributes for an account in West US (which is the same region where my VM is deployed):

se=2016-08-15T08:41:30Z means that the SAS is valid until that time (UTC). Comparing it to the timestamp on the corresponding record in the log (08/14/2016 08:42:24.05), it is clear that the SAS is being generated for a period of roughly 24 hours.
sr=c means that this is a container-level SAS.
sp=rw means that the access is for both read and write.
The shared access signature (SAS) documentation has the full descriptions of these fields.
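
For illustration only, here is a minimal sketch (not the Portal's actual code; the connection string and the container and blob names are placeholders) of how a similar container-scoped, read/write SAS with a roughly 24-hour expiry could be generated with the .NET storage client library:

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

// Placeholder connection string for the temporary storage account
CloudStorageAccount account = CloudStorageAccount.Parse("<connection-string>");
CloudBlobClient client = account.CreateCloudBlobClient();
CloudBlobContainer container = client.GetContainerReference("vmextensionstemporary-example");

// sp=rw (read + write) and se = now + 24 hours
SharedAccessBlobPolicy policy = new SharedAccessBlobPolicy
{
    Permissions = SharedAccessBlobPermissions.Read | SharedAccessBlobPermissions.Write,
    SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddHours(24)
};

// Signing at the container level yields an sr=c token
string sasToken = container.GetSharedAccessSignature(policy);

// Appending the token to the blob URI produces a URL like the one in the logs
string blobSasUri = container.GetBlockBlobReference("simple.ps1").Uri + sasToken;
Console.WriteLine(blobSasUri);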

I asserted above that this is a storage account in West US. That may be apparent from the naming of the storage account (iaasv2tempstorewestus) but is not a guarantee. So how can you verify that this storage account (or any other storage account) is in the region it claims to be in?

A simple nslookup on the blob DNS URL reveals this:

C:\Users\yunusm>nslookup iaasv2tempstorewestus.blob.core.windows.net

Server: PK5001Z.PK5001Z

Address: 192.168.0.1

Non-authoritative answer:

Name: blob.by4prdstr03a.store.core.windows.net

Address: 40.78.112.72

Aliases: iaasv2tempstorewestus.blob.core.windows.net

The blob URL is a CNAME to a canonical DNS name, blob.by4prdstr03a.store.core.windows.net. Experimentation will show that more than one storage account maps to a single canonical DNS name. The ‘by4’ in the name gives a hint as to which region it is located in. As per the Azure Regions page, the West US region is in California. Looking up the geo location of the IP address (40.78.112.72) indicates a more specific area within California.

Understanding the VM

Now that we have a healthy VM, let’s understand it more. As per the Azure VM Sizes page, this is the VM that I just created:

Size: Standard_D1_v2
CPU cores: 1
Memory: 3.5 GB
NICs (max): 1
Max. disk size: Temporary (SSD) = 50 GB
Max. data disks (1023 GB each): 2
Max. IOPS (500 per disk): 2×500
Max. network bandwidth: moderate

This information can be fetched programmatically by doing a GET.
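
As a rough sketch, the request goes against the Compute Resource Provider's vmSizes endpoint for the region; the subscription ID below is a placeholder, and the api-version may differ from the one that was current at the time:

GET https://management.azure.com/subscriptions/{subscription-id}/providers/Microsoft.Compute/locations/westus/vmSizes?api-version=2016-03-30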

The response lists every size available in the region; the entry for Standard_DS1_v2 is:

{

"name": "Standard_DS1_v2",

"numberOfCores": 1,

"osDiskSizeInMB": 1047552,

"resourceDiskSizeInMB": 7168,

"memoryInMB": 3584,

"maxDataDiskCount": 2

}

Doing a GET on the VM we created returns the following. Let’s understand this response in some detail; I have annotated it with inline comments.
{
"properties": {
"vmId": "694733ec-46a0-4e0b-a73b-ee0863a0f12c",
"hardwareProfile": {
"vmSize": "Standard_DS1_v2"
},
"storageProfile": {
"imageReference": {
"publisher": "MicrosoftWindowsServer",
"offer": "WindowsServer",
"sku": "2012-R2-Datacenter",
"version": "latest"

The interesting field here is the version. Publishers can have multiple versions of the same image at any point in time. Popular images are typically revved monthly with security patches. Major new versions are released as new SKUs. The Portal defaulted me to the latest version. As a customer, I can choose to pick a specific version as well, whether I deploy through the Portal or through an ARM template using the CLI or REST API; the latter is the preferred method for automated scenarios. The problem with specifying a particular version is that it can render the ARM template fragile. The deployment will break if the publisher unpublishes that specific version in one or more regions, as a publisher can do. So unless there is a good reason not to, the preferred value for the version setting is latest. As an example, the following images of the SKU 2012-R2-Datacenter are currently in the West US region, as returned by the CLI command azure vm image list.

MicrosoftWindowsServer  WindowsServer  2012-R2-Datacenter                      Windows  4.0.20151120     westus    MicrosoftWindowsServer:WindowsServer:2012-R2-Datacenter:4.0.20151120
MicrosoftWindowsServer  WindowsServer  2012-R2-Datacenter                      Windows  4.0.20151214     westus    MicrosoftWindowsServer:WindowsServer:2012-R2-Datacenter:4.0.20151214
MicrosoftWindowsServer  WindowsServer  2012-R2-Datacenter                      Windows  4.0.20160126     westus    MicrosoftWindowsServer:WindowsServer:2012-R2-Datacenter:4.0.20160126
MicrosoftWindowsServer  WindowsServer  2012-R2-Datacenter                      Windows  4.0.20160229     westus    MicrosoftWindowsServer:WindowsServer:2012-R2-Datacenter:4.0.20160229
MicrosoftWindowsServer  WindowsServer  2012-R2-Datacenter                      Windows  4.0.20160430     westus    MicrosoftWindowsServer:WindowsServer:2012-R2-Datacenter:4.0.20160430
MicrosoftWindowsServer  WindowsServer  2012-R2-Datacenter                      Windows  4.0.20160617     westus    MicrosoftWindowsServer:WindowsServer:2012-R2-Datacenter:4.0.20160617
MicrosoftWindowsServer  WindowsServer  2012-R2-Datacenter                      Windows  4.0.20160721     westus    MicrosoftWindowsServer:WindowsServer:2012-R2-Datacenter:4.0.20160721

},
"osDisk": {
"osType": "Windows",
"name": "BlogWindowsVM",
"createOption": "FromImage",
"vhd": {
"uri": https://blogrgdisks562.blob.core.windows.net/vhds/BlogWindowsVM2016713231120.vhd

The OS disk is a page blob and starts out as a copy of the source image that the publisher has published. Looking at the metadata of this blob and correlating it to what the VM itself has is instructive. Using the Cloud Explorer in Microsoft Visual Studio, we can look at the blob’s property window:

This is a regular page blob that is functioning as an OS disk over the network. You will observe that the Last Modified date pretty much stays at the current time; the reason being that as long as the VM is running, flushes to the disk happen regularly. The size of the OS disk is 127 GB; the max allowed OS disk size in Azure is 1 TB.

Azure Storage Explorer shows more properties for the same blob than the VS Cloud Explorer.

 

The interesting properties are the Lease properties. They show the blob as leased with an infinite duration. Internally, during VM creation, when a page blob is configured to be an OS/data disk for a VM, we take a lease on that blob before attaching it to the VM. This is so that the blob backing a running VM is not deleted out of band. If you see that a disk-backing blob has no lease while it shows as attached to a VM, that is an inconsistent state and will need to be repaired.
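
As an illustration only (this is not the platform's internal code; the connection string is a placeholder, and the container and blob names are taken from the URI above), acquiring and later releasing an infinite lease on a page blob with the .NET storage client looks roughly like this:

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

// Placeholder connection string for the storage account holding the VHD
CloudStorageAccount account = CloudStorageAccount.Parse("<connection-string>");
CloudBlobClient client = account.CreateCloudBlobClient();
CloudBlobContainer vhds = client.GetContainerReference("vhds");
CloudPageBlob osDisk = vhds.GetPageBlobReference("BlogWindowsVM2016713231120.vhd");

// Passing null for the lease time requests an infinite lease
string leaseId = osDisk.AcquireLease(null, null);

// While the lease is held, the blob cannot be deleted out of band.
// Releasing the lease requires presenting the lease ID.
osDisk.ReleaseLease(AccessCondition.GenerateLeaseCondition(leaseId));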

RDPing into the VM itself, we can see two drives mounted, and the OS drive is about the same size as the page blob in Storage. The pagefile is on the D drive, so that faulted pages are fetched locally rather than over the network from Blob Storage. The temporary storage can be lost in the case of events that cause a VM to be relocated to a different node.

},
"caching": "ReadWrite"
},
"dataDisks": []

There are no data disks yet, but we will add some soon.

},
"osProfile": {
"computerName": "BlogWindowsVM",

The name we chose for the VM in the Portal is the hostname as well. The VM is DHCP-enabled and gets its DIP address through DHCP. The VM is registered in an internal DNS and has a generated FQDN.

C:\Users\yunusm>ipconfig /all

Windows IP Configuration

Host Name . . . . . . . . . . . . : BlogWindowsVM
Primary Dns Suffix . . . . . . . :
Node Type . . . . . . . . . . . . : Hybrid
IP Routing Enabled. . . . . . . . : No
WINS Proxy Enabled. . . . . . . . : No
DNS Suffix Search List. . . . . . : qkqr4ajgme4etgyuajvm1sfy3h.dx.internal.cloudapp.net

Ethernet adapter:

Connection-specific DNS Suffix . : qkqr4ajgme4etgyuajvm1sfy3h.dx.internal.cloudapp.net
Description . . . . . . . . . . . : Microsoft Hyper-V Network Adapter
Physical Address. . . . . . . . . : 00-0D-3A-33-81-01
DHCP Enabled. . . . . . . . . . . : Yes
Autoconfiguration Enabled . . . . : Yes
Link-local IPv6 Address . . . . . : fe80::980c:bf29:b2de:8a05%12(Preferred)
IPv4 Address. . . . . . . . . . . : 10.1.0.4(Preferred)
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Lease Obtained. . . . . . . . . . : Saturday, August 13, 2016 11:14:58 PM
Lease Expires . . . . . . . . . . : Wednesday, September 20, 2152 6:24:34 PM
Default Gateway . . . . . . . . . : 10.1.0.1
DHCP Server . . . . . . . . . . . : 168.63.129.16
DHCPv6 IAID . . . . . . . . . . . : 301993274
DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-1F-41-C4-70-00-0D-3A-33-81-01

DNS Servers . . . . . . . . . . . : 168.63.129.16
NetBIOS over Tcpip. . . . . . . . : Enabled

"adminUsername": "yunusm",
"windowsConfiguration": {
"provisionVMAgent": true,

This is a hint to install a guest agent that does a bunch of configuration and runs the extensions. The guest agent binaries are here: C:\WindowsAzure\Packages

"enableAutomaticUpdates": true

Windows VMs by default are set to receive auto updates from Windows Update Service. There is a nuance to grasp here regarding availability and auto updates. If you have an Availability Set with multiple VMs with the purpose of getting high SLA against unexpected faults, then you do not want to have correlated actions (like Windows Updates) that can take down VMs across the Availability Set.

 

},

"secrets": []

},

"networkProfile": {

"networkInterfaces": [

{

"id": "/subscriptions/f028f547-f912-42b0-8892-89ea6eda4c5e/resourceGroups/BlogRG/providers/Microsoft.Network/networkInterfaces/blogwindowsvm91"

The NIC is a standalone resource; we are not discussing networking resources yet.

}
]
},
"diagnosticsProfile": {
"bootDiagnostics": {
"enabled": true,
"storageUri": "https://blogrgdiag337.blob.core.windows.net/"
}

Boot diagnostics have been enabled. The Portal has a way of viewing the screenshot. You can also get the URL for the screenshot from the CLI:

C:\Program Files (x86)\Microsoft SDKs\Azure\CLI\bin>node azure vm get-serial-output

info: Executing command vm get-serial-output

Resource group name: blogrg

Virtual machine name: blogwindowsvm

+ Getting instance view of virtual machine "blogwindowsvm"

info: Console Screenshot Blob Uri:

https://blogrgdiag337.blob.core.windows.net/bootdiagnostics-blogwindo-694733ec-46a0-4e0b-a73b-ee0863a0f12c/BlogWindowsVM.694733ec-46a0-4e0b-a73b-ee0863a0f12c.screenshot.bmp

info: vm get-serial-output command OK

The boot screenshot can be viewed in Portal. However, the URL for the screenshot bmp file does not render in a browser.

What gives? It is due to the authentication on the storage account, which blocks anonymous access. For any blob or container in Azure Storage, it is possible to configure anonymous read access. Please do this with caution and only in cases where secrets will not be exposed. It is a useful capability for sharing data that is not confidential without having to generate SAS tokens. Once anonymous access is enabled on the container, the screenshot renders in any browser outside of the Portal.
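
As a sketch (assuming the .NET storage client library; the connection string is a placeholder and the container name is the boot diagnostics container from the URI above), enabling anonymous read access on a container looks like this:

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

CloudStorageAccount account = CloudStorageAccount.Parse("<connection-string>");
CloudBlobClient client = account.CreateCloudBlobClient();
CloudBlobContainer container = client.GetContainerReference(
    "bootdiagnostics-blogwindo-694733ec-46a0-4e0b-a73b-ee0863a0f12c");

// Blob-level public access: anonymous clients can read individual blobs,
// but cannot list the container's contents
container.SetPermissions(new BlobContainerPermissions
{
    PublicAccess = BlobContainerPublicAccessType.Blob
});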

    },
    "provisioningState": "Succeeded"
  },
  "resources": [
    {
      "properties": {
        "publisher": "Microsoft.Compute",
        "type": "CustomScriptExtension",
        "typeHandlerVersion": "1.7",
        "autoUpgradeMinorVersion": true,

It is usually safe for extensions to be auto-updated on the minor version. There have been very few surprises in this regard, though you have the option not to auto-update.

"settings": {
"fileUris": [

https://iaasv2tempstorewestus.blob.core.windows.net/vmextensionstemporary-10033fff801becb5-20160814084130535/simple.ps1?sv=2015-04-05&sr=c&sig=M3qa7lS%2BZwp%2B8Tytqf1VEew4YaAKvvYn1yzGrPfSwyw%3D&se=2016-08-15T08%3A41%3A30Z&sp=rw

As discussed earlier, this is the SAS URI for the PowerShell script. You will see this as a commonly used pattern for sharing files and data: upload to a blob, generate a SAS, and pass it around.

],
"commandToExecute": "powershell -ExecutionPolicy Unrestricted -File simple.ps1 "
},
"provisioningState": "Succeeded"
},
"id": "/subscriptions/f028f547-f912-42b0-8892-89ea6eda4c5e/resourceGroups/BlogRG/providers/Microsoft.Compute/virtualMachines/BlogWindowsVM/extensions/CustomScriptExtension",
"name": "CustomScriptExtension",
"type": "Microsoft.Compute/virtualMachines/extensions",
"location": "westus"
},
{
"properties": {
"publisher": "Microsoft.Azure.Diagnostics",
"type": "IaaSDiagnostics",
"typeHandlerVersion": "1.5",
"autoUpgradeMinorVersion": true,
"settings": {
"xmlCfg": <trimmed>,
"StorageAccount": "blogrgdiag337"
},
"provisioningState": "Succeeded"
},
"id": "/subscriptions/f028f547-f912-42b0-8892¬-89ea6eda4c5e/resourceGroups/BlogRG/providers/Microsoft.Compute/virtualMachines/BlogWindowsVM/extensions/Microsoft.Insights.VMDiagnosticsSettings",
"name": "Microsoft.Insights.VMDiagnosticsSettings",
"type": "Microsoft.Compute/virtualMachines/extensions",
"location": "westus"
}
],
"id": "/subscriptions/f028f547-f912-42b0-8892-89ea6eda4c5e/resourceGroups/BlogRG/providers/Microsoft.Compute/virtualMachines/BlogWindowsVM",
"name": "BlogWindowsVM",
"type": "Microsoft.Compute/virtualMachines",
"location": "westus"
}

To Be Continued

We will carry on with what we can learn from a single VM and then move on to other topics.
Quelle: Azure

WeWork Has Officially Entered The Indian Market

Theo Wargo / Getty Images

Co-working startup WeWork has officially entered the Indian market by leasing nearly 200,000 square feet of workspace in Mumbai.

The company, which has been open about its Indian ambitions since last year, will launch a flagship building in India's startup capital Bengaluru and New Delhi later this year, The Times of India reported.

“If you are a member in New York building a global company and I don’t give you an Indian solution, I took away 10% of the world for you,” CEO Adam Neumann told Forbes in October. “It’s my responsibility.”

WeWork, which launched with 3,000 square feet of space in New York and a single employee seven years ago, is currently valued at nearly $17 billion. It offers co-working spaces in more than 35 cities across the world including 19 in the United States. BuzzFeed News has reached out to WeWork for a comment.

WeWork’s Indian presence is a result of a partnership deal with Embassy, an Indian real estate company that will take care of negotiating leases and construction. WeWork will provide branding, office services, and culture, which, among other things, includes beer-and-wine happy hours, and weekly bagel-and-mimosas networking events.

India's startup boom in the last few years has led to a rise in the popularity of startups offering co-working spaces. Some offer office space to startup workers and freelancers for as little as Rs. 250 — about $4 — a day (and, presumably, no bagels).

Last year, an Indian co-working startup called Innov8 was inducted into Silicon Valley-based startup accelerator Y Combinator’s summer batch. Y Combinator invested $120,000 for a seven percent equity stake in the company, making it the accelerator's first-ever investment in a co-working startup.

LINK: WeWork Used These Documents To Convince Investors It’s Worth Billions

Quelle: BuzzFeed

The Hashtag And Winky Face Emoji Could Be Monopoly’s New Game Tokens

The iconic Monopoly board game is getting a new set of playing pieces, and you can vote on whether you want the hashtag symbol or the kissy face emoji to be one of the game's tokens.

The current set is made up of eight die-cast pieces: a battleship, a shoe, an old-school race car, a cat, a top hat, a Schnauzer named Scottie, a thimble, and a wheelbarrow.

Fans can choose eight from among 64 pieces, and the new set will be shipped with the game in October. Voting closes January 31, and Hasbro will reveal the results on March 19.

The redesign isn't unprecedented. In 2013, fans voted the cat token as the newest game piece when the iron token received the lowest number of fan votes in a Hasbro poll. The company said that it decided to open up a vote on all the tokens in 2017 because it had seen strong engagement from fans in previous polls.

Any of these 64 tokens could become a part of the official Monopoly board game.

Hasbro

A few of the tokens, like the hashtag, originated on the interwebs, while others resemble classic symbols that have found new meaning in digital culture. Hasbro said that it based the choices on pop culture, past editions of Monopoly (that's where the cowboy boot and the penguin came from), and Mr Monopoly's luxe life (the helicopter, the money clip). It also included lasting, recognizable symbols to give fans a range of choices.

The key

Made famous by DJ Khaled's relentlessly positive Snapchat, where he talks about the “major keys” to living a more successful life.

The hashtag

We can thank Twitter for this viral symbol.

The winky face emoji

For when you're planning to Monopoly and chill.

The thumbs up

An iconic symbol of positivity that's made its way into the emoji vocabulary. It can help bridge the generation gap for millennials who still live with their parents.

The computer.

What started it all.

Hasbro

The kissy face emoji.

“Monopoly with bae.” — Future Instagram caption

Hasbro

The classic smiley face.

And an emoji version of Rich Uncle Pennybags, Monopoly's mascot.

A not-so-subtle emoji suggestion for Unicode?

Quelle: BuzzFeed

Announcing: New Auto-Scaling Standard Streaming Endpoint and Enhanced Azure CDN Integration

Since the launch of Azure Media Services, our streaming services have been one of the biggest things that has attracted customers to our platform.  It offers the scale and robustness to handle the largest events on the web, including FIFA World Cup matches, streaming coverage of Olympic events, and Super Bowls.  It also offers features that greatly reduce workflow complexity and cost through dynamic packaging into HLS, MPEG-DASH, and Smooth Streaming, as well as dynamic encryption for Microsoft PlayReady, Google Widevine, Apple FairPlay, and AES128.

However, our origin services (aka Streaming Endpoints) have always been plagued by the usability issue of needing to provision them with Streaming Units (each one provides 200 Mbps of egress capacity) based on scale needs.  We continually receive questions from customers and partners asking “how many Streaming Units do I need?”, “how do I know when I need more?”, “can I get dynamic packaging without Streaming Units?”, etc.

Thus, we’re very excited to announce that we have a new Streaming Endpoint option called a Standard Streaming Endpoint which eliminates this complexity by giving you the scale and robustness you need without needing to worry about Streaming Units.  Behind the scenes we monitor the bandwidth requirements on your Streaming Endpoint and scale out as needed.  This means a Standard Streaming Endpoint can be used to deliver your streams to a large range of audience sizes, from very small audiences to thousands of concurrent viewers using the integration of Azure CDN services (more on that further below).

More good news! We also heard your request for a free trial period to get familiar with Azure Media Services streaming capabilities. When a new Media Services account gets created, a default Standard Streaming Endpoint is automatically provisioned under the account. This endpoint includes a 15-day free trial period, and the trial period starts when the endpoint is started for the first time.

In addition to Standard Streaming Endpoints, we are also pleased to announce enhanced Azure CDN integration. With a single click you can integrate all the available Azure CDN providers (Akamai and Verizon) to your Streaming Endpoint including their Standard and Premium products and you can manage and configure all the related features through the Azure CDN portal. When Azure CDN is enabled for a Streaming Endpoint using Azure Media Services, data transfer charges between the Streaming Endpoint and CDN do not apply. Data transferred is instead charged at the CDN edge using CDN pricing.

Comparing Streaming Endpoint Types

Our previous Streaming Endpoints are not going away, meaning there are now multiple options, so let’s discuss their attributes.  But before I do, let me first jump to the punch line and give you our recommendation for which Streaming Endpoint type you should use.  We have analyzed current customer usage and have determined that the streaming needs of 98% of our customers can be met with a Standard Streaming Endpoint.  The remaining 2% are customers like Xbox Movies and Rakuten Showtime that have extremely large catalogs, massive audiences, and origin load profiles that are very unique.  Thus, unless you feel your service will be in that stratosphere, our recommendation is that you migrate to a Standard Streaming Endpoint.  If you have any concerns that you may fall into that 2%, please contact us and we can provide additional guidance. A good guidepost is to contact us if you expect a concurrent audience size larger than 10,000 viewers.

With that out of the way, here’s some finer grained details on the types and how they can be provisioned.

Our existing Streaming Units have now been renamed to "Premium Streaming Units", and any streaming endpoint that has a Premium Streaming Unit will be named a “Premium Streaming Endpoint”.  These Streaming Endpoints behave exactly as they did before and require you to provision them with Streaming Units based on your anticipated load.  As mentioned above, almost everyone should be using a Standard Streaming Endpoint, and you should contact us if you think you need a Premium Streaming Endpoint.
Any newly created Azure Media Services account will by default have a Standard Streaming Endpoint with Azure CDN (S1 Verizon Standard) integrated created and placed in a stopped state.  It is put into a stopped state so that it doesn’t incur any charges until you are ready to begin streaming.
New Streaming Endpoints can also be created as Standard Streaming Endpoints.
Previously, when a new Azure Media Services account was created, a Streaming Endpoint was created with no Streaming Units (aka a Classic Streaming Endpoint). This was a free service intended to give developers time to develop services before incurring any costs.  However, Streaming Units were needed to turn on many of our critical services, such as dynamic packaging and encryption, so the value was very limited.  Some customers may still have one of these “Classic” Streaming Endpoints in their account.  We recommend customers migrate these to Standard as well; they will not be migrated automatically.  The migration can be done using the Azure management portal or the Azure Media Services APIs.  For more information, please check "Streaming endpoints overview".  As mentioned above, we are offering a 15-day free trial on Standard, which provides developers with the same ability to develop services without incurring streaming costs.

| Feature | Standard | Premium |
| --- | --- | --- |
| Free first 15 days* | Yes | No |
| Streaming scale | Up to 600 Mbps when Azure CDN is not used; with Azure CDN turned on, Standard will scale to thousands of concurrent viewers | 200 Mbps per streaming unit (SU) and scales with CDN |
| SLA | 99.9 | 99.9 (200 Mbps per SU) |
| CDN | Azure CDN, third party CDN, or no CDN | Azure CDN, third party CDN, or no CDN |
| Billing is prorated | Daily | Daily |
| Dynamic encryption | Yes | Yes |
| Dynamic packaging | Yes | Yes |
| IP filtering/G20/Custom host | Yes | Yes |
| Progressive download | Yes | Yes |
| Recommended usage | Recommended for the vast majority of streaming scenarios; contact us if you think you may have needs beyond Standard | Contact us |

*Note: The free trial doesn’t apply to existing accounts, and the end date doesn’t change with state transitions such as stop/start. The free trial starts the first time you start the streaming endpoint and ends after 15 calendar days. The free trial only applies to the default streaming endpoint and doesn't apply to additional streaming endpoints.
 

When to Use Azure CDN?

As mentioned above all new Media Services accounts by default have a Standard Streaming Endpoint with Azure CDN (S1 Verizon Standard) integrated. In most cases you should keep CDN enabled. However, if you are anticipating max concurrency lower than 500 viewers then it is recommended to disable CDN since CDN scales best with concurrency.

To migrate your Classic or Premium endpoint to Standard

Navigate to streaming endpoint settings
Toggle your type from Premium to Classic. (If your endpoint doesn't have any streaming units, the Classic type will be highlighted.)

Click "Classic" and save

 
After saving the changes, the "Opt-in to Standard" button should be visible.

Click "Opt-in to Standard"
Read the details and click YES.  (Note: Migrating from classic to standard endpoints cannot be rolled back and has a pricing impact. Please check the Azure Media Services pricing page. After migration, it can take up to 30 minutes for full propagation, and dynamic packaging and streaming requests might fail during this period.)
When the operation is completed, your classic endpoint will be migrated to "Standard".

To migrate legacy CDN integration to new CDN integration

To migrate to the new CDN integration, you need to stop your streaming endpoint. Navigate to the streaming endpoint details and click Stop.

Note: Stopping the endpoint will delete the existing CDN configuration and stop streaming. Any settings configured manually using the CDN management portal will also be deleted and need to be reconfigured after enabling the new CDN integration. Please also note that legacy CDN-integrated streaming endpoints don't have the "Manage CDN" action button in the menu.
Click "Disable CDN"

Click "Enable CDN" which will trigger new CDN integration workflow

Follow the steps and select your CDN provider and pricing tier based on your streaming endpoint type.

Click "Start"

Note: Starting the streaming endpoint and full CDN provisioning might take up to 2 hours. During this period, you may use your streaming endpoint; however, it will operate in degraded mode.
Manage CDN: after the streaming endpoint is started and the CDN is fully provisioned, you can access CDN management.
Click "Manage CDN"

This will open the CDN management section, where you can manage and configure your streaming-integrated CDN endpoint as a regular CDN endpoint.

Note: Data charges from the streaming endpoint to the CDN only get disabled if the CDN is enabled through the streaming endpoint APIs or the Azure management portal's streaming endpoint section. Manually integrating or directly creating a CDN endpoint using the CDN APIs or portal section will not disable the data charges.

Finally, with the release of standard streaming endpoints you also get access to all CDN providers and can enable your desired CDN provider, such as Verizon Standard, Verizon Premium, or Akamai Standard, with the simple enable-CDN check box on the streaming endpoint.

 

You can get more information on Streaming Endpoints from "Streaming endpoints overview" and "StreamingEndpoint REST".

 

We hope you will enjoy our new standard streaming endpoint and the other features.

Common questions related to streaming

1) How to monitor streaming endpoints?

For the last couple of months, we were running a private preview program for our “telemetry APIs”. I know some of you already used the private APIs, but for general usage there was no public data, and our streaming endpoints were a black box.

The good news is, we just released our “Telemetry APIs”.  With these APIs, you can monitor your streaming endpoints as well as your live channels. For streaming endpoints, you can get throughput, latency, request count, and error count almost in real time and act based on the values. Please check the blog post “Telemetry Platform Features in Azure Media Services” for details. You can also get more information from the API documentation.

2) How to determine the count of streaming units?

Unfortunately, there is no simple answer to this question. The answer depends on various factors such as your catalog size, CDN cache hit ratio, CDN node count, simultaneous connections, aggregated bitrate, protocol counts, DRM count, etc. Based on these values, you need to do the math and calculate the required streaming unit count.  The good news is, the combination of a standard streaming endpoint and Azure CDN integration will be sufficient for most workloads. If you have an advanced workload, are not sure whether a standard endpoint is suitable for you, or want more insight into the throughput, you can use the Telemetry APIs to monitor your streaming endpoints. If your load is more than the standard endpoint's targeted values, or you want to use premium streaming units, you need to do the math based on the telemetry values to define the streaming unit count and scale accordingly. You can start with a high number and then monitor the system and fine-tune it based on the throughput, requests/sec, and latency numbers.
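
As a purely illustrative back-of-the-envelope calculation (the numbers are assumptions, not guidance): suppose you expect 2,000 concurrent viewers at an average bitrate of 3 Mbps, with the CDN serving 90% of the traffic from its caches. The egress your origin then has to carry is roughly 2,000 × 3 Mbps × (1 - 0.9) = 600 Mbps, which is at the upper end of what a standard streaming endpoint targets and would correspond to three premium streaming units at 200 Mbps each.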

3) I don’t see CDN analytics for my existing streaming endpoints in the new portal.

The CDN management portal for existing CDN-integrated streaming endpoints is not available in the new management portal and is deprecated. To access CDN management, you should migrate your streaming endpoint to the new CDN integration.  Please see the migration steps above.

Providing Feedback and Feature Requests

Azure Media Services will continue to grow and evolve, adding more features and enabling more scenarios.  To help serve you better, we are always open to feedback and new ideas, and we appreciate any bug reports so that we can continue to provide an amazing service with the latest technologies. To request new features, provide ideas or feedback, please submit to User Voice for Azure Media Services. If you have any specific issues, questions, or find any bugs, please post your question to our forum.

 
Quelle: Azure

Marissa Mayer To Exit Yahoo Board Along With Co-Founder David Filo

Yahoo President and CEO Marissa Mayer

Stephen Lam / Getty Images

Yahoo CEO Marissa Mayer, once tasked with turning the struggling company around, is set to exit the company's board when its sale to Verizon closes, a company SEC filing said today.

The exit of Mayer, along with Yahoo co-founder David Filo and four other board members, will reduce the size of the company's board to five members. Upon closing the deal, Yahoo will take on a new name too: Altaba Inc.

Hired in July 2012 to help fix the flailing company, Mayer initially appeared to bring new life to Yahoo with shiny acquisitions, like the $1.1 billion purchase of Tumblr, that got the media and tech world buzzing. But ultimately, Mayer didn't steer Yahoo in a new direction. She'll hand over Yahoo to Verizon in essentially the same shape as she found it: a middling content company that tries to do a lot but excels at little.

As former Arizona Cardinals coach Dennis Green would put it:

giphy.com

Yahoo's final days as an independent company are mired in embarrassment, specifically recent revelations of a massive cyber attack that compromised over 1 billion user accounts. Verizon is demanding new terms following the damaging news.

Quelle: BuzzFeed

Twitter Reinstates Woman Who Tweeted Screenshots Of Her Trolls' Abuse

Alexandra Brodsky is a co-founder of Know Your IX, an organization that advocates for students’ rights to an education free from gender-based violence. She now works at the National Women’s Law Center. This weekend Brodsky received a number of harassing tweets from anti-Semitic trolls, replete with Holocaust imagery and phrases like, “Welcome to Trump's America. See you in the camps.” Brodsky promptly reported the tweets to Twitter and screen-shotted the offending tweets. Then, “to highlight the new normal in Trump's America and put pressure on Twitter to suspend the users,” she tweeted those screenshots to her 5,047 followers.

Hours later, according to Brodsky, Twitter locked her account, telling her that she'd need to delete the offending images in order to regain access to it. Brodsky's trolls, meanwhile, had not been suspended. “So let's get this straight: Twitter still hasn't suspended all the bigots I reported, but they have suspended me for calling out bigotry,” Brodsky wrote in a post to her Facebook page Monday morning. “I call bullshit.”

Shortly after BuzzFeed News asked Twitter about its decision to freeze Brodsky's account and not those of her harassers, the company unlocked it and issued the following statement:

Hello,

Twitter takes reports of violations of the Twitter Rules very seriously. After reviewing your account, it looks like we locked it by mistake.

We have unlocked your account, and we apologize for this error.

Thanks,

Twitter

This isn't the first time Twitter has responded to abuse violations only after being called to action by a media request for comment. On November 2nd, Twitter suspended trolls using misinformation to disenfranchise Black and Latino voters only after being contacted by BuzzFeed News (previously the company replied to an individual user that the tweets did not violate company rules). Four days later, when the company responded to user reports of more false voter information, its action again followed an inquiry from BuzzFeed News. Likewise, in separate instances this summer, Twitter reversed decisions to keep up an ISIS beheading photo and a number of threats of rape only after media inquiries into those incidents.

Quelle: BuzzFeed

Azure Storage Queues New Feature: Pop-Receipt on Add Message

As part of the “2016-05-31” REST API version, we have introduced the pop receipt on add message functionality, which has been a commonly requested feature by our users.

Pop receipt functionality for the Queue service is a great tool for developers to easily identify an enqueued message for further processing. Prior to the “2016-05-31” version, the pop receipt value could only be retrieved when a user got a message from the queue. To simplify this, we now make the pop receipt value available in the Put Message (aka Add Message) response, which allows users to update or delete a message without the need to retrieve the message first.

Below is a short code snippet that makes use of this new feature using the Azure Storage Client Library 8.0 for .NET.

// create initial message
CloudQueueMessage message = new CloudQueueMessage("");

// add the message to the queue, but keep it hidden for 3 min
queue.AddMessage(message, null, TimeSpan.FromSeconds(180));
//message.PopReceipt is now populated, and only this client can operate on the message until visibility timeout expires
.
.
.
// update the message (now no need to receive the message first, since we already have a PopReceipt for the message)
message.SetMessageContent("");
queue.UpdateMessage(message, TimeSpan.FromSeconds(180), MessageUpdateFields.Content | MessageUpdateFields.Visibility);

// remove the message using the PopReceipt before any other process sees it
await queue.DeleteMessageAsync(message.Id, message.PopReceipt);

A common problem in cloud applications is coordinating updates across non-transactional resources. As an example, an application that processes images or videos may:

1.    Process an image
2.    Upload it to a blob
3.    Save metadata in a table entity

These steps can be tracked using the Queue service as the processes complete successfully using the following flow:

1.    Add a state as a message to the Queue service
2.    Process an image
3.    Upload it to a blob
4.    Save metadata in a table entity
5.    Delete the message if all were successful

Remaining messages in the queue are simply images that failed to be processed, and they can be consumed by a worker for cleanup. The scenario above is now made simpler with the pop receipt on add message feature, since in the 5th step the message can be deleted with the pop receipt value retrieved in the 1st step.

Quick Sample using the Face API from Azure Cognitive Services

In the following sample, we are going to be uploading photos from a local folder to the Blob service, and we will also make use of the Face API to estimate each person’s age in the photos, storing the result as an entity in a table. This process will be tracked in a queue, and once completed, the message will be deleted with the pop receipt value. The workflow for the sample is:

1.    Find JPG files in ‘testfolder’
2.    For each photo, repeat steps 3-7:
3.    Upload a queue message representing the processing of this photo.  
4.    Call the Face API to estimate the age of each person in the photo.
5.    Store the age information as an entity in the table.
6.    Upload the image to a blob if at least one face is detected.
7.    If both the blob and the table entity operation succeeded, delete the message from queue using the pop receipt.

// Iterate over photos in 'testfolder'
var images = Directory.EnumerateFiles("testfolder", "*.jpg");

foreach (string currentFile in images)
{

string fileName = currentFile.Replace("testfolder\\", "");

Console.WriteLine("Processing image {0}", fileName);

// Add a message to the queue for each photo. Note the visibility timeout
// as blob and table operations in the following process may take up to 180 seconds.
// After the 180 seconds, the message will be visible and a worker role can pick up
// the message from queue for cleanup. Default time to live for the message is 7 days.
CloudQueueMessage message = new CloudQueueMessage(fileName);
queue.AddMessage(message, null, TimeSpan.FromSeconds(180));

// read the file
using (var fileStream = File.OpenRead(currentFile))
{

// detect face and estimate the age
var faces = await faceClient.DetectAsync(fileStream, false, true, new FaceAttributeType[] { FaceAttributeType.Age });
Console.WriteLine(" > " + faces.Length + " face(s) detected.");

CloudBlockBlob blob = container.GetBlockBlobReference(fileName);

var tableEntity = new DynamicTableEntity(DateTime.Now.ToString("yyMMdd"), fileName);

// iterate over detected faces
int i = 1;
foreach (var face in faces)
{

// append the age info as property in the table entity
tableEntity.Properties.Add("person" + i.ToString(), new EntityProperty(face.FaceAttributes.Age.ToString()));
i++;

}

// upload the blob if a face was detected
if (faces.Length > 0)
await blob.UploadFromFileAsync(currentFile);

// store the age info in the table
table.Execute(TableOperation.InsertOrReplace(tableEntity));

// delete the queue message with the pop receipt since previous operations completed successfully
await queue.DeleteMessageAsync(message.Id, message.PopReceipt);

}

}

Check out the full sample in our GitHub sample repository.

As always, if you have any feature requests please let us know by submitting your ideas to Azure Storage Feedback.
Quelle: Azure

Trump Runs Twitter Now, But He's Not Going To "Save" It

Donald Trump’s viral tweets and his centrality to the American conversation have made him vastly the largest force on Twitter — ten times larger in terms of conversations than the entire Kardashian clan, according to new data — giving him unprecedented leverage over a social platform that, as it struggles as a business, remains central to news and politics.

Trump’s emergence hasn’t helped Twitter’s core metrics, but his dominance on the platform does raise an existential question for its leaders: What happens to Twitter if he should suddenly leave?

Between December 5, 2016 and January 5, 2017, Trump’s name was mentioned 42.7 million times on Twitter. That’s more than 10 times the entire Kardashian clan.

Trump's Twitter dominance becomes clear when you look at how often he’s discussed compared to other celebrities and world events. Between December 5, 2016 and January 5, 2017, Trump’s name was mentioned 42.7 million times on Twitter, according to an analysis of Twitter’s “firehose” data by social marketing platform Spredfast. That’s more than 10 times the entire Kardashian clan, whose names were mentioned 3.8 million times during the same period despite a combined 100 million+ followers compared to Trump's 19 million. Trump’s 42.7 million mentions also dwarfed those of Aleppo (7.6 million), and the 2.9 million combined mentions of kittens, puppies, cats, and dogs.

“Trump’s numbers are in another stratosphere when we compare him to anyone or anything that has traditionally been the gold standard for ‘winning the internet’,” Chris Kerns, VP of research and insights at Spredfast, told BuzzFeed News.

Trump vs. the Kardashians

Spredfast

Trump’s Twitter presence extends well beyond the platform, giving the company free marketing on a grand scale. His tweets, often fired off in the early morning, regularly suck the air out of the news cycle, putting the Twitter brand in front of millions of potential new users.

But while those tweets have driven dozens of news cycles, they haven't done much for Twitter. Indeed, according to third-party data reviewed by BuzzFeed News, the Trump Twitter spectacle has not coincided with any material change in the core metrics used to measure Twitter's success.

During the company’s last earnings call, Twitter CFO Anthony Noto said, “There's no noticeable impact that we've seen from the elections.” And since the election, Twitter hasn't experienced a material upward trend in daily active users, downloads, or total time spent in its app, according to App Annie, an app analytics company which reviewed panel data on hundreds of thousands of U.S. iPhones and Android handsets. The data analytics firm 7ParkData also found no clear trend in increased Twitter usage since the election, though it did show an uptick in logins from those who have the app.

So while Trump has commandeered a vast swath of Twitter's attention, he’s unlikely to ‘save’ the platform, as some are suggesting. “The company could benefit from its most talked-about user’s ascent to the White House,” The Guardian argued last week in a story headlined “Can Donald Trump save Twitter?”

While Trump's Twitter presence isn’t helping the company, his sudden departure from the platform could hurt it, creating an unfillable-by-anyone-else void. When he assumes office, Trump will almost certainly be pressured by national security advisors to scale back his personal use of Twitter, as his account will likely be a particularly tempting target for hackers.

Twitter declined comment.

Twitter’s lack of a boom following its biggest gift yet — a President-elect addicted to its service who posts inflammatory tweets with regularity — is the clearest evidence to date that the company's platform may have hit an insurmountable wall. Twitter has a lot going for it. It’s a perfect platform for watching news unfold in real time. But that’s something only certain people find interesting, and if Twitter can’t sell that promise with Trump, it’s hard to imagine it ever will. Because as the metrics above show, even with the president-elect of the United States sparking endless tweets by trolling Arnold Schwarzenegger for his Celebrity Apprentice ratings, disparaging national security efforts, and taunting North Korea over nuclear weapon development, Twitter's appeal remains limited.

That said, Twitter is currently making over $2 billion a year with a user base of about 317 million monthly active users. Yes, the company is struggling with user growth. And slowing revenues. And leadership turmoil. But those were also its struggles long before Trump announced the presidential bid that would land him in the White House. If the data shows that Trump isn't “saving” Twitter, perhaps it's because Twitter didn't need this kind of 'saving' in the first place.

Quelle: BuzzFeed