The Clever Trick An Alt-Right Hot Spot Is Using To Seem Much, Much Bigger

Since the presidential election, the media has paid an enormous amount of attention to the alt-right, a loose confederation of trolls, white nationalists, conservatives, and neo-Nazis. From intense scrutiny of Steve Bannon — the former Breitbart executive appointed to be Donald Trump's chief strategist — to blanket coverage of a white separatist press conference last week in Washington, DC, the spotlight has been commensurate with a major new force in American politics.

But there's a fundamental problem in our understanding of the alt-right: No one knows how many people the controversial movement comprises. Like Anonymous and other leaderless, internet-driven movements before it, the alt-right leverages social media to amplify its message while keeping its membership, if you can call it that, obscure.

Last week, the journalist John Herrman noticed one eye-popping number: 7,528,000, the total number of subscribers to the subreddit r/altright. That would make r/altright one of the 50 most popular pages on Reddit, up there with huge general-interest subs like r/food and r/gadgets.

And this morning, that number had grown by a factor of 1,000, to the humongous figure of 8,190,000,000, roughly three-quarters of a billion subscribers more than the population of planet Earth.

So what the heck is going on?

The moderators of r/altright appear to be playing a simple trick with an input prompt. Reddit allows moderators to customize the name for subscribers to a subreddit in the right-hand column where the subscriber count appears, so the group of subscribers could be called “readers,” or “followers,” or “fanatics” — whatever you like. In the case of r/altright they appear as “Fashy Goys.”

A simple right-click inspection of the page HTML shows that name, but it also shows something else: six zeroes.

That places the true subscriber count directly next to a multiplier of 1,000,000 (or, when Herrman noticed it last week, merely 1,000). A moderator appears to have done the same thing with the counter for the number of subscribers currently online.

On the moderator page you can find the real number of subscribers, 8,191, or approximately 0.000001 of Earth's population.
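Anyone can check a subreddit's real size without digging through page HTML: Reddit exposes subreddit metadata through its public about.json endpoint, which reports the unadorned subscriber count. A minimal Python sketch:

```python
import requests

# Reddit's public metadata endpoint for a subreddit; the "subscribers"
# field reports the true count, untouched by any styling tricks.
resp = requests.get(
    "https://www.reddit.com/r/altright/about.json",
    headers={"User-Agent": "subscriber-count-check/0.1"},  # Reddit asks for a custom UA
)
resp.raise_for_status()
print(resp.json()["data"]["subscribers"])  # e.g., 8191
```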

Eight thousand subscribers isn't nothing: As the Anti-Defamation League found in its examination of anti-Semitic abuse against journalists, a relatively small number of internet users can make a hell of a lot of noise. But it does provide a new data point about an amorphous movement that may have a vested interest in appearing bigger than it really is.

Reddit did not respond to a request for comment.

Source: BuzzFeed

Transitioning your StorSimple Virtual Array to the new Azure portal

Today we are announcing the availability of the StorSimple Virtual Device Series in the new Azure portal. This release features significant improvements in the user experience. Our customers can now use the new Azure portal to manage a StorSimple Virtual Array configured as a NAS (SMB) or a SAN (iSCSI) in a remote office/branch office.

If you are using the StorSimple Virtual Device Series, you will be seamlessly transitioned to the new Azure portal with no downtime. We'll reach out to you via email with the specific transition dates. After the transition is complete, you will no longer be able to manage your transitioned virtual array from the classic Azure portal.

If you are using StorSimple Physical Device Series, you can continue to manage your devices via the classic Azure portal.

Learn how to use the new Azure portal in just a few steps as detailed below.

Navigate the new Azure portal

Everything about the StorSimple Virtual Device Series experience in the new Azure portal is designed to be easy. In the new Azure portal, you will find your service as StorSimple Device Manager.

The Quick start gives a concise summary of how to set up a new virtual array. It is available as an option in the left pane of your StorSimple Device Manager blade.

The StorSimple Device Manager service summary blade has been redesigned for simplicity. Use Overview in the left pane to navigate to your service summary.

Click on Devices in your summary blade to navigate to all the devices registered to your service. For specific monitoring requirements, you can even customize your dashboards.

Click on a device to go to the Device summary blade. Use the commands in the top menu bar to provision a new share, take a backup, fail over, deactivate, or delete a device. You can also right-click and use the context menu to perform the same operations.

The Jobs, Alerts, Backup catalog, and Device configuration blades have all been redesigned for ease of access.

For more information, go to StorSimple product documentation. Visit StorSimple MSDN forum to find answers, ask questions and connect with the StorSimple community. Your feedback is important to us, so send all your feedback or any feature requests using the StorSimple User Voice. And don’t worry – if you need any assistance, Microsoft Support is there to help you along the way!
Source: Azure

From “A PC on every desktop” to “Deep Learning in every software”

Deep learning is behind many recent breakthroughs in Artificial Intelligence, including speech recognition, language understanding and computer vision. At Microsoft, it is changing customer experience in many of our applications and services, including Cortana, Bing, Office 365, SwiftKey, Skype Translate, Dynamics 365, and HoloLens. Deep learning-based language translation in Skype was recently named one of the 7 greatest software innovations of the year by Popular Science, and the technology helped us achieve human-level parity in conversational speech recognition. Deep learning is now a core feature of development platforms such as the Microsoft Cognitive Toolkit, Cortana Intelligence Suite, Microsoft Cognitive Services, Azure Machine Learning, Bot Framework, and the Azure Bot Service. I believe that the applications of this technology are so far-reaching that “Deep Learning in every software” will be a reality within this decade.

We’re working very hard to empower developers with AI and Deep Learning so that they can make smarter products and solve some of the most challenging computing tasks. By vigorously improving our algorithms and infrastructure, collaborating closely with partners like NVIDIA and OpenAI, and harnessing the power of GPU-accelerated systems, we’re making Microsoft Azure the fastest, most versatile AI platform: a truly intelligent cloud.

Production-Ready Deep Learning Toolkit for Anyone

The Microsoft Cognitive Toolkit (formerly CNTK) is our open-source, cross-platform toolkit for learning and evaluating deep neural networks. The Cognitive Toolkit expresses arbitrary neural networks by composing simple building blocks into complex computational networks, supporting all relevant network types and applications. With state-of-the-art accuracy and efficiency, it scales to multi-GPU/multi-server environments. According to both internal and external benchmarks, the Cognitive Toolkit continues to outperform other Deep Learning frameworks in most tests, and unsurprisingly, the latest version is faster than previous releases, especially when working on massive data sets and on Pascal GPUs from NVIDIA. That’s true for single-GPU performance, but what really matters is that the Cognitive Toolkit can already scale up to a massive number of GPUs. In the latest release, we’ve extended the Cognitive Toolkit to natively support Python in addition to C++. Furthermore, the Cognitive Toolkit now also allows developers to use reinforcement learning to train their models. Finally, the Cognitive Toolkit isn’t bound to the cloud in any way. You can train models in the cloud but run them on premises or with other hosting providers. Our goal is to empower anyone to take advantage of this powerful technology.
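To give a flavor of the new Python support, here is a minimal sketch that composes simple building blocks into a small network and runs one training step, written against the CNTK 2.x Python API (the layer sizes and toy data are illustrative):

```python
import numpy as np
import cntk as C

# Compose simple building blocks into a small feed-forward classifier
features = C.input_variable(4)
labels = C.input_variable(2)
model = C.layers.Sequential([
    C.layers.Dense(16, activation=C.relu),
    C.layers.Dense(2)
])(features)

loss = C.cross_entropy_with_softmax(model, labels)
error = C.classification_error(model, labels)
# Recent CNTK releases accept a plain float learning rate here
trainer = C.Trainer(model, (loss, error), [C.sgd(model.parameters, lr=0.1)])

# One training step on a random toy minibatch
X = np.random.randn(32, 4).astype(np.float32)
Y = np.eye(2, dtype=np.float32)[np.random.randint(0, 2, 32)]
trainer.train_minibatch({features: X, labels: Y})
```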

To quickly get up to speed on the Toolkit, we’ve published Azure Notebooks with numerous tutorials, and we’ve also assembled a DNN Model Gallery with dozens of code samples, recipes, and tutorials across scenarios working with a variety of datasets: images, numeric, speech, and text.

What Others Are Saying

In the “Benchmarking State-of-the-Art Deep Learning Software Tools” paper published in September 2016, academic researchers ran a comparative study of state-of-the-art GPU-accelerated deep learning software tools, including Caffe, the Cognitive Toolkit (CNTK), TensorFlow, and Torch. They benchmarked the running performance of these tools with three popular types of neural networks on two CPU platforms and three GPU platforms. Our Cognitive Toolkit outperformed the other deep learning toolkits on nearly every workload.

Furthermore, NVIDIA recently ran a benchmark comparing all the popular Deep Learning toolkits on its latest hardware. The results show that the Cognitive Toolkit trains and evaluates deep learning algorithms faster than other available toolkits, scaling efficiently in a range of environments, from a single CPU to GPUs to multiple machines, while maintaining accuracy. Specifically, it’s 1.7 times faster than our previous release and 3x faster than TensorFlow on Pascal GPUs (as presented at the SuperComputing’16 conference).

End users of deep learning software tools can use these benchmarking results as a guide when selecting appropriate hardware platforms and software tools, while for developers of such tools, the in-depth analysis points out possible directions for further performance optimization.

Real-world Deep Learning Workloads

We at Microsoft use Deep Learning and the Cognitive Toolkit in many of our internal services, ranging from digital agents to core infrastructure in Azure.

1. Agents (Cortana): Cortana is a digital agent that knows who you are and knows your work and life preferences across all your devices. Cortana has more than 133 million users and has intelligently answered more than 12 billion questions. From speech recognition to computer vision, these Cortana capabilities are powered by Deep Learning and the Cognitive Toolkit. We recently made a major breakthrough in speech recognition, creating a technology that recognizes the words in a conversation and makes the same or fewer errors than professional transcriptionists. The researchers reported a word error rate (WER) of 5.9 percent, down from 6.3 percent: the lowest error rate ever recorded against the industry-standard Switchboard speech recognition task. Reaching human parity using Deep Learning is a truly historic achievement.
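For context, word error rate is the word-level edit distance (substitutions, deletions, and insertions) between the recognizer's transcript and a reference transcript, divided by the number of words in the reference. A self-contained sketch of the standard computation:

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + deletions + insertions) / words in reference."""
    r, h = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution or match
    return d[len(r)][len(h)] / len(r)

print(word_error_rate("the cat sat on the mat", "the cat sit on mat"))  # 2/6 ≈ 0.33
```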

Our approach to image recognition also placed first in several major categories of the ImageNet and Microsoft Common Objects in Context challenges. The DNNs built with our tools won first place in all three categories we competed in: classification, localization, and detection. The system won by a strong margin because we were able to accurately train extremely deep neural nets of 152 layers, far deeper than in the past, using a new “residual learning” principle. Residual learning reformulates the learning procedure and redirects the information flow in deep neural networks. That helped solve the accuracy problem that has traditionally dogged attempts to build extremely deep neural networks.
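The core idea fits in a few lines: instead of asking a stack of layers to learn a desired mapping H(x) directly, a residual block learns the residual F(x) = H(x) - x, and a shortcut connection adds x back. A hedged sketch of one such block using the CNTK layers API (the filter counts and layer choices are illustrative, not the competition-winning model):

```python
import cntk as C

def residual_block(x, num_filters):
    """One basic residual block: out = relu(F(x) + x), with F = two conv layers."""
    c1 = C.layers.Convolution2D((3, 3), num_filters, pad=True, activation=None)(x)
    b1 = C.relu(C.layers.BatchNormalization(map_rank=1)(c1))
    c2 = C.layers.Convolution2D((3, 3), num_filters, pad=True, activation=None)(b1)
    b2 = C.layers.BatchNormalization(map_rank=1)(c2)
    return C.relu(b2 + x)  # the shortcut: information flows around F

# Example: apply the block to a 3-channel 32x32 input
# (num_filters must match the input channels so b2 + x is well-shaped)
img = C.input_variable((3, 32, 32))
out = residual_block(img, 3)
```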

2. Applications: Our applications, from Office 365, Outlook, PowerPoint, and Word to Dynamics 365, can use deep learning to provide new customer experiences. One excellent example of a deep learning application is the bot used by Microsoft Customer Support and Services. Using Deep Neural Nets and the Cognitive Toolkit, it can intelligently understand the problems a customer is asking about and recommend the best solution to resolve those problems. The bot provides a quick self-service experience for many common customer problems and helps our technical staff focus on the harder and more challenging customer issues.

Another example of an application using Deep Learning is the Connected Drone application built for powerline inspection by one of our customers, eSmart Systems (to see the Connected Drone in action, please watch this video). eSmart Systems began developing the Connected Drone out of a strong conviction that drones combined with cloud intelligence could bring great efficiencies to the power industry. The objective of the Connected Drone is to support and automate the inspection and monitoring of power grid infrastructure, replacing the expensive, risky, and extremely time-consuming inspections currently performed by ground crews and helicopters. To do this, they use Deep Learning to analyze video feeds streamed from the drones. Their analytics software recognizes individual objects, such as insulators on power poles, and directly links the new information with the component registry, so that inspectors can quickly become aware of potential problems. eSmart applies a range of deep learning technologies to analyze data from the Connected Drone, from the very deep Faster R-CNN to Single Shot Multibox Detectors and more.

3. Cloud Services (Cortana Intelligence Suite): On Azure, we offer a suite for Machine Learning and Advanced Analytics called the Cortana Intelligence Suite, which includes Cognitive Services (Vision, Speech, Language, Knowledge, Search, etc.), Bot Framework, Azure Machine Learning, Azure Data Lake, Azure SQL Data Warehouse, and Power BI. You can use these services along with the Cognitive Toolkit or any other deep learning framework of your choice to deploy intelligent applications. For instance, you can now massively parallelize scoring using a pre-trained DNN machine learning model on an HDInsight Apache Spark cluster in Azure. We are seeing a growing number of scenarios that involve scoring pre-trained DNNs on a large number of images, such as our customer Liebherr, which runs DNNs to visually recognize objects inside a refrigerator. Developers can implement such a processing architecture with just a few steps (see instructions here).
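The shape of that bulk-scoring pattern, sketched with PySpark (here load_model and score_image are hypothetical stand-ins for your deep learning framework's load and predict calls, and all paths are illustrative):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("bulk-dnn-scoring").getOrCreate()
sc = spark.sparkContext

# Ship the pre-trained model to every worker once, not once per image
model_bc = sc.broadcast(load_model("/models/pretrained.model"))  # hypothetical loader

image_paths = ["/data/images/%05d.jpg" % i for i in range(100000)]
rdd = sc.parallelize(image_paths, numSlices=64)

def score_partition(paths):
    model = model_bc.value  # materialized once per partition
    for path in paths:
        yield path, score_image(model, path)  # hypothetical predict call

rdd.mapPartitions(score_partition).saveAsTextFile("/data/scores")
```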

A typical large-scale image scoring scenario may require very high I/O throughput and/or large file storage capacity, for which the Azure Data Lake Store (ADLS) provides high-performance, scalable analytical storage. Furthermore, ADLS applies the data schema on read, which means the user need not worry about the schema until the data is needed. From the user’s perspective, ADLS functions like any other HDFS storage account through the supplied HDFS connector. Training can take place on an Azure N-Series NC24 GPU-enabled virtual machine or using recipes from Azure Batch Shipyard, which allows training of DNNs with bare-metal GPU hardware acceleration in the public cloud using as many as four NVIDIA Tesla K80 GPUs. For scoring, one can use an HDInsight Spark cluster or Azure Data Lake Analytics to massively parallelize the scoring of a large collection of images with the rxExec function in Microsoft R Server (MRS) by distributing the workload across the worker nodes. The scoring workload is orchestrated from a single instance of MRS, and each worker node can read and write data to ADLS independently, in parallel.

SQL Server, our premier database engine, is “becoming deep” as well. This is now possible for the first time with R and ML built into SQL Server. By pushing deep learning models inside SQL Server, our customers now get throughput, parallelism, security, reliability, compliance certifications, and manageability, all in one. It’s a big win for data scientists and developers: you don’t have to separately build the management layer for operational deployment of ML models. Furthermore, just like data in databases can be shared across multiple applications, you can now share deep learning models. Models and intelligence become “yet another type of data,” managed by SQL Server 2016. With these capabilities, developers can now build a new breed of applications that marry the latest transaction processing advancements in databases with deep learning.

4. Infrastructure (Azure): Deep Learning requires a new breed of high-performance infrastructure that can support the computationally intensive nature of deep learning training. Azure now enables these scenarios with its N-Series virtual machines, powered by NVIDIA's Tesla K80 GPUs, which are best in class for single- and double-precision workloads in the public cloud today. These GPUs are exposed via a hardware pass-through mechanism called Discrete Device Assignment that allows us to provide near bare-metal performance. Additionally, as data grows for these workloads, data scientists need to distribute training not just across multiple GPUs in a single server, but across GPUs on many nodes. To enable distributed learning across tens or hundreds of GPUs, Azure has invested in high-end networking infrastructure for the N-Series using a Mellanox InfiniBand fabric, which allows for high-bandwidth communication between VMs with less than 2 microseconds of latency. This networking capability allows libraries such as Microsoft's own Cognitive Toolkit (CNTK) to use MPI for communication between nodes and efficiently train networks with a large number of layers at great performance.
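Concretely, distributed training with the Cognitive Toolkit means wrapping a local learner in a data-parallel distributed learner and launching the script under MPI (for example, mpiexec -n 4 python train.py). A minimal sketch with toy data:

```python
import numpy as np
import cntk as C

# A tiny model, standing in for a real network
features = C.input_variable(4)
labels = C.input_variable(2)
model = C.layers.Dense(2)(features)
loss = C.cross_entropy_with_softmax(model, labels)
error = C.classification_error(model, labels)

# Wrap a local learner so gradients are aggregated across MPI ranks
local = C.sgd(model.parameters, lr=0.01)
learner = C.train.distributed.data_parallel_distributed_learner(local)
trainer = C.Trainer(model, (loss, error), [learner])

X = np.random.randn(32, 4).astype(np.float32)
Y = np.eye(2, dtype=np.float32)[np.random.randint(0, 2, 32)]
trainer.train_minibatch({features: X, labels: Y})

C.train.distributed.Communicator.finalize()  # required at the end of MPI runs
```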

We are also working with NVIDIA on a best-in-class roadmap for Azure, with the current N-Series as the first iteration of that roadmap. These virtual machines are currently in preview, and we recently announced general availability of the offering starting December 1.

It is easy to get started with deep learning on Azure. The Data Science Virtual Machine (DSVM) is available in the Azure Marketplace, and comes pre-loaded with a range of deep learning frameworks and tools for Linux and Windows. To easily run many training jobs in parallel or launch a distributed job across more than one server, Azure Batch “Shipyard” templates are available for the top frameworks. Shipyard takes care of configuring the GPU and InfiniBand drivers, and uses Docker containers to set up your software environment.

Lastly, our team of engineers and researchers has created a system that uses a reprogrammable computer chip called a field programmable gate array, or FPGA, to accelerate Bing and Azure. Utilizing the FPGA chips, we can now write Deep Learning algorithms directly onto the hardware, instead of using potentially less efficient software as the middleman. What’s more, an FPGA can be reprogrammed at a moment’s notice to respond to new advances in AI/Deep Learning or meet another type of unexpected need in a datacenter. Traditionally, engineers might wait two years or longer for hardware with different specifications to be designed and deployed. This is a moonshot project that has succeeded, and we are now bringing it to our customers.

Join Us in Shaping the Future of AI

Our focus on innovation in Deep Learning is across the entire stack of infrastructure, development tools, PaaS services and end user applications. Here are a few of the benefits our products bring:

Greater versatility: The Cognitive Toolkit lets customers use one framework to train models on premises with the NVIDIA DGX-1 or with NVIDIA GPU-based systems, and then run those models in the cloud on Azure. This scalable, hybrid approach lets enterprises rapidly prototype and deploy intelligent features.
Faster performance: When compared to running on CPUs, the GPU-accelerated Cognitive Toolkit performs deep learning training and inference much faster on NVIDIA GPUs available in Azure N-Series servers and on premises. For example, NVIDIA DGX-1 with Pascal and NVLink interconnect technology is 170x faster than CPU servers with the Cognitive Toolkit.
Wider availability: Azure N-Series virtual machines powered by NVIDIA GPUs are currently in preview to Azure customers, and will be generally available in December. Azure GPUs can be used to accelerate both training and model evaluation. With thousands of customers already part of the preview, businesses of all sizes are already running workloads on Tesla GPUs in Azure N-Series VMs.
Native integration with the entire data stack: We strongly believe in pushing intelligence close to where the data lives. While a few years ago running Deep Learning inside a database engine or a Big Data engine might have seemed like science fiction, it has now become real. You can run deep learning models on massive amounts of data, e.g., images, videos, speech, and text, and you can do it in bulk. This is the sort of capability brought to you by Azure Data Lake, HDInsight, and SQL Server. You can also now join the results of deep learning with any other type of data you have and run incredibly powerful analytics and intelligence over it (which we now call “Big Cognition”). It’s not just extracting one piece of cognitive information at a time, but rather joining and integrating all the extracted cognitive data with other types of data, so you can create seemingly magical “know-it-all” cognitive applications.

Let me invite all developers to come and join us in this exciting journey into AI applications.

@josephsirosh
Source: Azure

Facebook Wants You To Keep In Touch By Playing Pac-Man

Facebook’s newest Messenger upgrade will let you challenge friends, family, and that crush from summer camp to matches of Pac-Man, Galaga, Words With Friends, and a bunch of other games.

When you download the latest update for Messenger, you'll see a game controller icon in the same area as the GIF and sticker selector.


Tap that icon, and you're presented with a list of games that includes classic 8-bit titles, recognizable mobile games, and unique exclusives for Messenger.

You play the games together but not at the same time. They aren't turn-based, though. The competition is more about comparing scores than simultaneous play.

The message thread shows which game you've played, who else in the thread has played, and your and your opponents' scores. You can play in a message thread with more than one person.

After each round, you can also swipe right for an automatically captured screenshot of your score, mark it up with a sliding color finger brush, and send it to the people in the message thread.


You don't have to open a separate app to play. The games are webpages that open within Messenger.

Facebook first got the idea for the feature when it debuted a single game called “Basketball” during March Madness 2016.

You can still play it by sending a basketball emoji and double tapping it.


Andrea Vaccari, a Messenger product manager, said that at the time Basketball launched, his team wasn't considering the idea of building more games in the app. They were more interested in engaging basketball fans.

Then, he said, people played the game a billion times.

“The success caught us by surprise,” he told BuzzFeed News. “Then we started to think about games as a way to keep in touch. With games, you may not have something to say, as with a photo or a message, but you can play together.”

Vaccari said he hopes the feature will make gaming less isolating: “Most consoles and other mobile games build games and sprinkle social on top. We built a game on top of social.”

The games are built with the mobile experience in mind, but they're also available on the desktop version, messenger.com. They'll open in a vertical window like they would on a phone.

Sound too familiar? Don't worry. Your mom won't be able to start a FarmVille game with you on Messenger.

The classic Facebook desktop game, which accrued millions of users and sent hundreds of millions of notifications to people who may not have wanted them, won't be part of the Messenger gaming platform.

“We learned our lesson,” Vaccari said.

You can only challenge people within message threads, and Messenger only displays messages from friends — unless you accept strangers' requests. You can also abandon or mute message threads you want to avoid. That means no mass requests like when FarmVille was at the height of its popularity.

Source: BuzzFeed

Snapchat's Spectacles Are Overhyped – But Amazing

I waited for five hours to buy Snapchat's $129 camera glasses. I don’t regret it.

If you’ve ever shared a self-destructing photo or video, you probably did so on Snapchat. Two months ago, the company re-branded itself as Snap Inc., “a camera company” (though the app is still called Snapchat).

But what’s a camera company without a camera? Enter Spectacles.

Snap's new camera/sunglasses hybrid is like a GoPro for hipsters, or maybe like a cuter and less conspicuous Google Glass. While wearing them, you can take photos that automatically upload to your phone, ready for you to add to your Snapchat story. They cost $129 and come in three colors (black, teal and coral), all in a rounded, slightly cat-eye shape.

And their hype is real, thanks in no small part to a genius rollout that's led to artificially scarce supply, super-long lines, and a media story in and of itself. Unless you live in New York City or LA, Spectacles are only available via so-called Snapbots — a cyclops/vending machine hybrid that's trackable on this map and that has been popping up in places like Big Sur, the Grand Canyon, and Tulsa, Oklahoma (but curiously enough, not bigger cities like Chicago or Philadelphia). Some pairs are already going for two or three times retail price on eBay, and Lumoid is charging $20 to rent a pair for a day.


Source: BuzzFeed

Facebook's Plan B To Bring Millions Of Indians Online Just Went Live


The last time Facebook tried to bring free internet access to millions of unconnected people in India, it didn’t go so well. Now, the world’s largest social network is back for round two. Its brand-new program to bring internet access to rural India, called Express WiFi, is now live after months of testing in remote parts of the country, according to the program’s website.

“We are working with carriers, internet service providers, and local entrepreneurs to help expand connectivity to underserved locations around the world,” says the website.

Unlike its controversial Free Basics program where Facebook tied up with cellphone carriers and allowed people to access a limited selection of websites and services for free, Express WiFi lets users access the entire internet for a small fee — alleviating any potential net neutrality concerns. Users can buy affordable data packs in the form of digital vouchers to access internet on the Express WiFi network.

Facebook declined to say how much internet access under the program would cost, but notes on its website that it’s “working with local internet providers or mobile operators” who are able to use “software provided by Facebook to connect their communities.”

“We are currently working with ISP and operator partners to test Express WiFi with public deployments in multiple pilot sites,” said a Facebook spokesperson. “This solution empowers ISPs, operators, and local entrepreneur retailers to offer quality internet access to their village, town or region.”

According to previous reports, at least one of the ISPs that Facebook may have partnered with for the Express WiFi program is India’s state-owned RailTel, which provides internet access through fiber that runs alongside the country’s dense network of railway tracks. Earlier this year, Google partnered with RailTel to bring free WiFi to over 50 railway stations in India.

Facebook did not respond to BuzzFeed News' questions about the program, including exactly which parts of the country it is available in and what the timeframe is for a nationwide rollout. However, Facebook does note on the program's website that Express WiFi will be available in more countries soon.

With millions of people still unconnected to the internet and smartphone penetration in the low hundreds of millions, internet companies like Facebook and Google see India as their next battleground for growth.

Google’s Project Loon, which uses giant solar-powered balloons floating in the stratosphere to beam down internet to places where getting online is nearly impossible, has been stuck with Indian regulators for months. The company recently announced Google Station, a new initiative to let cafes, malls, and small businesses set up public WiFi easily. Facebook’s Free Basics program was shut down by Indian regulators in February on the grounds that it violated net neutrality.

Source: BuzzFeed

Zenefits Agrees To $7 Million Settlement With California Regulators

Zenefits CEO David Sacks. (Steve Jennings / Getty Images)

Zenefits has agreed to pay a $7 million fine in a settlement with California regulators, a major milestone that will let the human resources startup continue operating in its home state, according to a person briefed on the deal.

The $7 million penalty, relating to the insurance licensing scandal that rocked the company earlier this year, is among the largest such penalties ever assessed against a company by the California insurance department. It also dwarfs the size of the fines levied against Zenefits by other states — including Washington, Arizona, Minnesota, New Jersey, and Tennessee.

More importantly for the San Francisco-based Zenefits, which ousted its founding CEO in February and has shed hundreds of staff, the deal will give the startup a second chance to play by the rules as an insurance broker in its biggest market.

Capping a fall that stunned Silicon Valley, Zenefits acknowledged this year that its founding CEO, Parker Conrad, created and shared with his employees a piece of software to cheat on California insurance broker licensing requirements. Any employee who used this program to bypass the legally required 52 hours of online training would then be directed to certify under penalty of perjury that they had actually completed the work.

In addition, Zenefits apparently flouted insurance laws by allowing unlicensed brokers to sell health insurance in multiple states. That revelation, first reported by BuzzFeed News a year ago, touched off an internal inquiry at Zenefits that uncovered the cheating program created by Conrad.

The California settlement caps a months-long effort by the new CEO, David Sacks, to atone for past missteps. Of the $7 million penalty, $4 million is for subverting licensing education and study hour requirements, while $3 million is for transacting insurance without licenses, the person briefed on the deal said. Half of the total amount will be waived after two years if Zenefits passes a market conduct examination, this person added.

Zenefits will also pay a $160,000 fee to reimburse the California insurance department for the cost of the investigation, as well as the market conduct examination, the person said.

The company's violations in California — where many sales reps got their initial insurance broker licenses — had a ripple effect throughout other states where it did business.

While insurance brokers have to get licensed in each state where they sell insurance, they typically take a broker test only in their home state; with that credential in hand, getting additional licenses in other states is just a matter of filling out forms online. Cheating on the California test, then, means that any other state licenses acquired afterward are based on a rotten foundation.

Regulators in other states have been keeping a close eye on the inquiry in California. The settlement will likely be seen as a vote of confidence from California regulators, which could help Zenefits resolve other inquiries around the country.

Zenefits did not immediately provide a comment.

Source: BuzzFeed

Stanford, The White House, And Tech Bigwigs Will Host A Summit On Poverty

President Obama and Mark Zuckerberg in 2011. (Justin Sullivan / Getty Images)

Tomorrow, the White House will partner with Stanford University and the Chan Zuckerberg Initiative — a limited liability company launched by the CEO of Facebook and his wife — to co-host the Summit on Poverty and Opportunity, a two-day, invite-only event held on the school's campus. It will focus on using technology and innovation to address issues like poverty, inequality, and economic immobility. The event will include an interactive demo by Palantir, the secretive Peter Thiel-backed analytics company, on how a real-time data platform can reduce incarceration, hospital use, and homelessness, as well as a lunchtime conversation on universal basic income with Facebook cofounder Chris Hughes and Y Combinator's Sam Altman, who first got involved with basic income earlier this year.

The event was organized by representatives from each of the hosts, including Jim Shelton, president of education at the Chan Zuckerberg Initiative (and former deputy secretary of education under President Obama), as well as Elizabeth Mason, founding director of the new Stanford Poverty & Technology Lab, part of Stanford's Center for Poverty & Inequality.

Mason told BuzzFeed News that the summit was “sort of a coming-out party for the Lab.” The goal of the event was to “bring together 275 high-level players in technology, philanthropy, community service, government, and academia to discuss how we can use technology and Big Data” to address these issues, she said by email. The Lab will develop “a new field” of study “that applies the premises and tools of technology to the policies and processes of fighting poverty.” The Lab will also “incubate ventures with practical solutions on high-tech poverty fixes.”

Silicon Valley's role in any potential fixes is nascent. In May, Altman announced plans for a pilot study on basic income in Oakland; however, in earlier interviews with BuzzFeed News, Altman stressed that it was just a “research project” and meant in that spirit. The summit will also host a session on using technology to facilitate financial access featuring the CEO of Kiva and the director of public policy for Lending Club, the troubled peer-to-peer financing company.

The list of attendees and speakers also includes ex-Microsoft CEO Steve Ballmer; White House CTO Megan Smith (formerly a top executive at Google); Martin Ford, author of two books on automation, including Rise of the Robots; Marian Wright Edelman, founder of the Children's Defense Fund; Nobel Prize–winning economist Ken Arrow; and Bryan Desloge, president of the National Association of Counties, who backed Donald Trump in the presidential election and has participated in a previous White House summit on poverty.

The summit will also feature a roundtable discussion with Stanford professor Raj Chetty, a popular economist and MacArthur fellow, who researches economic immobility and will discuss plans to build a new database infrastructure that could steer and organize national research on poverty. An additional workshop will be held by Alexandra Bernadotte, founder of Beyond 12, a nonprofit dedicated to increasing the number of first-generation, low-income, and other underrepresented students who graduate from college.

This event comes at a time when tech moguls like Mark Zuckerberg and Sean Parker have been subject to increased scrutiny for their free-market approach to doing good, which eschews nonprofit foundations for traditional investment vehicles labeled as philanthropy. This structure gives wealthy donors control over the causes and initiatives that get funding, but without the oversight or accountability required of a nonprofit. Taken in that context, this summit is one example of Silicon Valley's growing influence on philanthropy and its ability to shape which ideas get heard.

Source: BuzzFeed

Microsoft Azure Storage Explorer: November update and summer recap

One year ago we released the very first version of Microsoft Azure Storage Explorer. At the beginning we only supported blobs on Mac OS and Windows. Since then, we've added the ability to interact with queues, tables, and file shares. We started shipping for Linux and we've kept adding features to support the capabilities of Storage Accounts.

In this post, we first want to thank our users for your amazing support! We appreciate all the feedback we get: your praise encourages us, your frustrations give us problems to solve, and your suggestions help steer us in the right direction. The developers behind Storage Explorer and I have been using this feedback to implement features based on what you liked, what needed improvement, and what you felt was missing in the product.

Today, we'll elaborate on these features, including what's new in the November update (0.8.6) and what we've shipped since our last post.

November release downloads: [Windows] [Mac OS] [Linux]

New in November:

Quick access to resources
Tabs
Improved upload/download speeds and performance
High contrast theme support
Return of scoped search
"Create new folder" for blobs
Edit blob and file properties
Fix for screen freeze bug

Major features from July-October:

Grouping by subscriptions and local resources
Ability to sign-off from accounts
Rename for blobs and files
Rename for blob containers, queues, tables, and file shares
Deep search
Improved table query experience
Ability to save table queries
CORS Settings
Managing blob leases
Direct links for sharing
Configuring of proxy settings
UX improvements

November features

For this release we focused on the features that most help with productivity when working across multiple Storage Accounts and services. With this in mind, we implemented quick access to resources and the ability to open multiple services in tabs, and we vastly improved the upload and download speeds of blobs.

Quick Access

The top of the tree view now contains a "Quick Access" section, which displays resources you want to access frequently. You can add any Storage Accounts, blob containers, queues, tables, or file shares to the Quick Access list. To add resources to this list, right-click on the resource you want to access and select "Add to Quick Access".

Tabs

This has long been requested in feedback, so we're pleased to share that you can now have multiple tabs! You can open any blob container, queue, table, or file share in a tab by double-clicking it. Single-clicking a resource opens it in a temporary tab, whose contents change depending on which service you have single-clicked in the left-hand tree view. You can make the temporary tab permanent by clicking on the tab name. This emulates patterns set by Visual Studio Code.

Upload/download performance improvements

On the performance front, we've made major improvements to the upload and download speeds of blobs. The new speeds are approximately 10x faster than our previous releases. This improvement primarily impacts large files such as VHDs, but also benefits the upload and download of multiple files.

Folders and property editing

Before this release, you could only see the properties of a specific file or blob. With this release, you can modify the value of editable properties, such as cache control or content type. Right-click on a blob or file to see and edit its properties.

We've also added support for creating empty "virtual" folders in blob containers. Now you can create folders before uploading any blobs to them, rather than only being able to create them in the "Upload blob" dialog.

Usability and reliability

Last but not least, we worked on features and bug fixes to improve overall usability and reliability. First, we've brought back the ability to search within a Storage Account or service. We know a lot of you missed this feature, so now you have two ways of searching your resources:

Global search: Use the search box to search for any Storage Accounts or services
Scoped search: Use the magnifying glass to search within that node of the tree view

We also improved usability by adding support for themes in Storage Explorer. There are four themes available: light (default), dark, and two high-contrast themes. You can change the theme by going to the Edit menu and selecting "Themes."

Lastly, we fixed a screen freeze issue that had been impacting Storage Explorer when starting the app or using the Windows + arrow keys to move it around the screen. Based on our testing we believe this issue is fully fixed, but if you run into it please do let us know.

Summer features

After completing support for the full set of Storage services, we pivoted to improving the experience of connecting to your Storage Accounts and managing their content. This allowed us to open up our backlog and work on the major features we shipped in November.

Account management

One of the main areas we wanted to improve was the display of Storage Accounts in the left-hand tree view. The tree now shows Storage Accounts grouped by subscription, as well as a separate section for non-subscription resources. This "(Local and Attached)" section lists the local development storage (on Windows) and any Storage Accounts you've attached via either account name and key or SAS URI. It also contains a "SAS-Attached Services" node, which displays all services (such as blob containers) that you've added with SAS.

If you're behind a firewall, you've likely had issues with signing into Storage Explorer. To help mitigate this, we've added the ability to specify proxy settings. To modify Storage Explorer proxy settings, you can select the "Configure proxy settings…" icon in the left-side toolbar.

Lastly, we've also modified the experience when you first sign in so that all the subscriptions you have under the Azure account are displayed. You can modify this behavior in the account settings pane, either by filtering subscriptions under an account or by selecting the "Remove" button to completely sign off from an account.

Copying and renaming

In the summer months we also added the ability to copy and rename blob containers, queues, tables, and file shares. You can also copy and rename blobs, blob folders, files, and file directories.

To copy and rename, we first create a copy of all the selected resources; in the case of a rename, we then delete the originals once the copy operation completes successfully.

It's possible to copy within an account as well as from one storage account to another, regardless of how you're connected to it. The copy is done on the server side, so it's a fast operation that does not require disk space on your machine.
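The same server-side copy is available programmatically; a hedged sketch with the azure-storage Python SDK (account, container, and blob names are illustrative), where a rename is simply a copy followed by a delete:

```python
from azure.storage.blob import BlockBlobService

svc = BlockBlobService(account_name="myaccount", account_key="<key>")

# Server-side copy: the data never touches the client machine
src_url = svc.make_blob_url("source-container", "old-name.vhd")
svc.copy_blob("dest-container", "new-name.vhd", src_url)

# For a rename, delete the original once the (asynchronous) copy has
# completed; production code should poll the copy status first.
svc.delete_blob("source-container", "old-name.vhd")
```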

CORS, leases, and sharing

We've also improved the way to manage the access and rules of your storage resources. At the storage account level, you can now add, edit, and delete CORS rules for each of the services. You can do this by right-clicking on the node for either blob containers, queues, tables, or file shares, and selecting the "Configure CORS Settings…" option.
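For scripted setups, the same rules can be applied through the azure-storage Python SDK; a hedged sketch (module paths vary slightly across SDK versions, and the origins shown are illustrative):

```python
from azure.storage.blob import BlockBlobService
from azure.storage.models import CorsRule

svc = BlockBlobService(account_name="myaccount", account_key="<key>")

# Allow a single origin to GET and PUT blobs, cached for an hour per preflight
rule = CorsRule(
    allowed_origins=["https://contoso.com"],
    allowed_methods=["GET", "PUT"],
    allowed_headers=["*"],
    exposed_headers=["*"],
    max_age_in_seconds=3600,
)
svc.set_blob_service_properties(cors=[rule])
```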

You can also control the actions you can take on blobs by creating and breaking leases for blobs and blob containers. Blobs with leases will be marked by a "lock" icon beside the blob, while blob containers with leases will have the word "(Locked)" displayed next to the blob container name. To manage leases, you can right-click on the resource for which you want to break or acquire a lease.

We also added the ability to share direct links to the resources in your subscription. This allows another person (who also has access to your subscription) to click on a link that will open up Storage Explorer and navigate to the specific resource you shared. To share a direct link, right-click on the Storage Account or blob container, queue, table, or file share you want the other person to access and select "Get Direct Link…."

Writing and saving queries

Lastly, we made significant improvements to the table querying functionality. The new query builder interface allows you to easily query your tables without having to know ODATA. With this query builder you can create AND/OR statements and group them together to search for any field in your table. You can still switch to the ODATA mode by selecting the "Text Editor" button at the top of the query toolbar.
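Under the hood, the builder produces an ODATA filter string of the same kind you can pass directly through the storage SDKs; a hedged sketch with the azure-storage Python table client (the table and property names are illustrative):

```python
from azure.storage.table import TableService

svc = TableService(account_name="myaccount", account_key="<key>")

# Equivalent of an AND query built in the UI: an ODATA filter string
entities = svc.query_entities(
    "inventory",
    filter="PartitionKey eq 'widgets' and Quantity gt 100",
)
for entity in entities:
    print(entity.RowKey)
```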

Additionally, you have the ability to save and load any queries you have created, regardless of whether you use the builder or the editor to construct your queries.

Summary

Although we've delivered a lot of big features, we know there are still gaps. Blob snapshots, stats and counts about the contents of your services, and support for Azure Stack are among the features for which we've heard a lot of requests. If you notice anything missing from that list or have any other comments, issues, or suggestions, you can send us feedback directly from Storage Explorer.

Thanks for making our first year a fantastic one!

– The Storage Explorer Team
Source: Azure