Apple Retail Chief: Fastest Way To Pre-Order iPhone X Is Through The App

At the debut of Apple's massive new Chicago store, CEO Tim Cook and head of retail Angela Ahrendts sat down with BuzzFeed News to talk Trump, the future of Apple retail, and the upcoming launch of the iPhone X, or what the company describes as “the future of the smartphone.”

Reports of iPhone X supply shortages, however, have customers wondering just how long wait times for the much-anticipated, $1,000 phone will be. According to Ahrendts, the fastest way to pre-order the device on Friday, October 27, is via the Apple Store app at 12:01 a.m. She also noted that when the iPhone X officially goes on sale on November 3 “there [will be] some in stores.” In a press release, Apple said, “walk-in customers are encouraged to arrive early.”

“The iPhone X really sets the tone for the next decade in technology. It has a lot of new technology in it,” Cook told BuzzFeed News. On the matter of supply constraints, the CEO said, “We'll be working as hard as possible to make as many as possible.”

Watch our full interview with Tim Cook and Angela Ahrendts on YouTube.

Source: BuzzFeed

Robert Scoble Resigns From His Consulting Company In The Wake Of Sexual Harassment Allegations

US blogger Robert Scoble presents the Google Glass on April 24, 2013, at the NEXT Berlin conference in Berlin.

Ole Spata / AFP / Getty Images

On Sunday afternoon, Shel Israel — Robert Scoble's partner in the consulting company Transformation Group — announced in a Facebook post that Scoble had resigned, effective immediately, and would be canceling his public activities for the rest of the year. The resignation comes in the wake of multiple sexual harassment allegations against the tech evangelist.

Transformation Group was founded in March by Scoble and Israel to help brands develop “mixed reality” (that is, AR and VR) strategies. The pair had previously collaborated on three books, most recently 2016's The Fourth Transformation: How Augmented Reality and Artificial Intelligence Change Everything.

In his post, Israel wrote that Scoble is taking the rest of the year off to focus “on dealing with his deep and troubling personal issues. He is now going to meetings and he will start seeing a psychiatrist as well.” Scoble had previously been a fixture on the tech conference and speaking circuit, appearing in the last year at Collision, SXSW, and Evolve, among others.

Israel noted that “the revelations about Robert came to me as a surprise.” Though he conceded that he had seen Scoble “drunk and stoned,” Israel said he had “never personally witnessed” his partner behaving inappropriately toward women. “If I did, I would have called him on it,” Israel wrote. He noted that he plans to keep running the Transformation Group alone.

Scoble did not respond to a request for comment on Israel's post.

On Friday, TechCrunch reported that Scoble had allegedly harassed women after getting sober over the summer. The same day, Scoble posted an apology to his Facebook page, writing, “I'm deeply sorry to the people I've caused pain to. … The only thing I can do to really make a difference now is to prove, through my future behavior, and my willingness to listen, learn and change, that I want to become part of the solution going forward.”

Source: BuzzFeed

Steve Bannon Dropped Milo After White Nationalism Revelations. Will The Mercers Stand By Him?

Josh Edelson / AFP / Getty Images

Breitbart News executive chairman Steve Bannon has told multiple people that he will never work with Milo Yiannopoulos again in the aftermath of a BuzzFeed News exposé linking Breitbart's former tech editor to white nationalists, BuzzFeed News has learned.

Yiannopoulos, Bannon told at least one acquaintance, is “dead to me.”

But members of the Mercer family, Bannon's and Yiannopoulos's key shared patrons and partners on the new right, have not signaled whether they will continue to bankroll the controversial culture warrior. Their decision may shed light on the extent to which the hedge fund billionaires are motivated by the raw ethnonationalist politics that a cache of leaked documents related to Yiannopoulos and Breitbart revealed.

The Mercers did not respond to multiple emails asking them if they intended to continue funding Yiannopoulos, nor did they respond to emails informing them that Bannon had excommunicated him.

BuzzFeed News's story demonstrated that Breitbart, which the Mercers partly own, ran numerous stories that were conceived and co-edited by white nationalists. The central figure in this effort was Yiannopoulos, who, the story revealed, once sang “America the Beautiful” in a karaoke bar as a crowd, including the white nationalist Richard Spencer, gave Nazi salutes.

According to half a dozen people in Bannon's orbit, the story's revelations were enough to push the brawling former White House chief strategist to disavow Yiannopoulos, telling those close to him that there will never be a place for him at Breitbart again. (Yiannopoulos resigned from the site in February 2017, after a video surfaced in which he appeared to condone pedophilia. After Bannon left the government in August, Yiannopoulos had told friends that he expected to be rehired.)

In the two weeks since the story ran, however, neither Robert Mercer, the co-CEO of the $65 billion hedge fund Renaissance Technologies, nor Rebekah, his powerful daughter, has indicated whether they would continue to fund Yiannopoulos. As BuzzFeed News reported, the Mercers paid Yiannopoulos for more than a year in a variety of ways: through Breitbart, through their production company, Glittering Steel, and even directly to Yiannopoulos' bank account through Robert Mercer's personal accountant. Rebekah Mercer was friendly enough with Yiannopoulos — who visited the family at their house in Florida earlier this year — to offer periodontist recommendations to Yiannopoulos via text.

It's unclear whether either Mercer knew about Yiannopoulos' connections to white nationalists prior to the BuzzFeed News story.

For the press-shy billionaires — who have funded an insurgency in the Republican party through tens of millions of dollars in political donations, a conservative research shop, and a controversial data analytics firm — the lack of a public break with Yiannopoulos raises questions about what exactly they want the future of conservative politics to look like. The anti-globalization, anti-immigration Breitbart, which once featured a “black crime” vertical, has long been accused by critics of stoking white racial resentment.

But Yiannopoulos' actions as tech editor are clear evidence that the site cultivated actual white nationalists and neo-Nazis — actions that the people who long paid his salary have yet to denounce.

Source: BuzzFeed

Twitter Doesn’t Seem To Be Very Good At Enforcing Doxxing Bans

Rose McGowan on June 24, 2015 in New York City

Michael Loccisano / Getty Images

When actor Rose McGowan doxxed someone by tweeting a private phone number last week, Twitter acted quickly to restrict her account until she deleted the tweet, which was in violation of the platform’s terms of service. But a BuzzFeed News analysis of thousands of tweets in the same timeframe, as well as thousands more a week later, shows that Twitter’s enforcement of doxxing bans is inconsistent at best.

Although the company was swift to crack down on McGowan’s account as she was discussing film producer Harvey Weinstein’s alleged sexual misconduct, it’s slower to act on less prominent users who break the same rule, which prohibits publicly tweeting someone’s private phone number.

Using Twitter’s Search API, we collected 10,000 tweets between October 9 and October 13 and found five that included people’s private phone numbers. All are still up. We also used Twitter’s Streaming API to collect tweets for eight hours on October 14, and for nine hours on October 16. Over that time period, we found 32 tweets containing phone numbers belonging to people other than the tweeter. On average, that’s roughly two private phone numbers per hour, and about 45 per day. Of these tweets, only five had been deleted by October 17, and only six had been deleted by October 19.
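
BuzzFeed News did not publish the code behind this analysis, but the core of it amounts to a pattern filter over tweet text. The following Python sketch is a hypothetical, simplified illustration of that step only: it flags tweets whose text contains something shaped like a US phone number, uses hard-coded sample data instead of Twitter's APIs, and will inevitably miss some formats and produce false positives.

import re

# Rough pattern for US-style phone numbers, e.g. 415-555-0123 or (415) 555-0123.
# Illustrative only: it misses many formats and will match some false positives.
PHONE_RE = re.compile(r"(?:\+1[\s.-]?)?(?:\(\d{3}\)|\d{3})[\s.-]?\d{3}[\s.-]?\d{4}")

def flag_possible_doxxing(tweets):
    """Return tweets whose text contains a phone-number-like string.

    `tweets` is any iterable of dicts with a 'text' key, such as statuses
    returned by a search or streaming client.
    """
    return [tweet for tweet in tweets if PHONE_RE.search(tweet["text"])]

if __name__ == "__main__":
    sample = [
        {"id": 1, "text": "call me maybe"},
        {"id": 2, "text": "here's his number: (415) 555-0123, go off"},
    ]
    for tweet in flag_possible_doxxing(sample):
        print(tweet["id"], tweet["text"])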

The examples we collected are just a small sample of tweets that violate Twitter’s rules, but still slip past the company’s enforcement tools. While BuzzFeed News focused its search on tweets containing private phone numbers, quick searches turned up tweets containing additional personally identifiable information for people other than the tweeter, including addresses and email addresses. Often, this information is posted on Twitter along with explicit calls to doxx the targets.

When BuzzFeed News reached out to three Twitter users whose numbers had been made public, two said they hadn’t reported the tweets because they wanted to give the doxxer time to take it down themselves, and the third said they hadn’t seen the tweets yet. We reported three doxxing tweets containing private numbers to Twitter on Monday, October 16. As of this article’s publication, two remain online.

We asked Twitter how it responds when users post others’ private information on its platform. A Twitter spokesperson stated, “We are constantly looking for opportunities to use Machine Learning to help make Twitter safer and will continue to leverage more ML/AI to improve the detection of content that violates our terms of service. As we announced last week, we are taking a more aggressive stance with our rules and how we enforce them. We're moving quickly to make these updates and we will share more soon.”

Twitter has struggled with harassment for a decade, and it has long relied on algorithms and automated systems to enforce its rules. But the company and the algorithms it relies on have a tendency to overlook abuse on its platform, and Twitter often takes action only when the media publicly calls out an issue, or when prominent figures like Leslie Jones or McGowan are involved.

Over the past year, as Twitter has faced increasing pressure from the public to quash abuse on its platform, it has rolled out a series of harassment-combating tools and efforts, including more ways to report misconduct, keyword filters, muting abilities, and timeline tweaks that bury abusive tweets. Still, detecting abuse is not the same as effectively stopping it. Reporting by BuzzFeed News in the past year has uncovered hundreds of examples of harassment on Twitter; in many cases, when victims report the abusive tweets, Twitter dismisses the reports because it doesn’t consider them to be in violation of Twitter’s rules.

But the company seems to be doubling down on these tools, recently telling BuzzFeed News that it’s “focusing more on improving its abuse-filtering algorithms rather than hiring more humans.” And this week, Twitter CEO Jack Dorsey publicly shared the company’s internal safety work streams and shipping calendar in an attempt to be more transparent with users about Twitter’s push to root out bad behavior.

In a blog post on Thursday, the company wrote, “We’re updating our approach to make Twitter a safer place. This won’t be a quick or easy fix, but we’re committed to getting it right. Far too often in the past we’ve said we’d do better and promised transparency but have fallen short in our efforts. Starting today, you can expect regular, real-time updates about our progress.” Upcoming efforts include plans to immediately suspend accounts that post non-consensual nude images and videos, an updated account suspension process, and bans on accounts that promote violence.

Source: BuzzFeed

How People Inside Facebook Are Reacting To The Company’s Election Crisis

Rob Dobi for BuzzFeed News

In the summer of 2015, a Facebook engineer was combing through the company's internal data when he noticed something unusual. He was searching to determine which websites received the most referral traffic from its billion-plus users. The top 25 included the usual suspects — YouTube and the Huffington Post, along with a few obscure hyperpartisan sites he didn’t recognize. With names like Conservative Tribune and Western Journalism, these publications seemed to be little more than aggregation content mills blaring divisive political headlines, yet they consistently ranked among the most widely read websites on Facebook.

“Conservative Tribune, Western Journalism, and Breitbart were regularly in the top 10 of news and media websites,” the engineer told BuzzFeed News. “They often ranked higher than established brands like the New York Times and got far more traffic from Facebook than CNN. It was wild.”

Troubled by the trend, the engineer posted a list of these sites and associated URLs to one of Facebook's internal employee forums. The discussion was brief — and uneventful. “There was this general sense of, 'Yeah, this is pretty crazy, but what do you want us to do about it?'” the engineer explained.
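
The query the engineer describes is, at bottom, a count of outbound clicks grouped by destination domain. As a toy illustration only (Facebook's internal tooling is not described in the story), the Python snippet below ranks hypothetical referral click URLs by domain:

from collections import Counter
from urllib.parse import urlparse

def top_referral_destinations(click_urls, n=25):
    """Rank destination domains by the number of outbound clicks pointing at them."""
    domains = (urlparse(url).netloc.lower() for url in click_urls)
    return Counter(domains).most_common(n)

# Hypothetical sample of outbound click URLs logged by the platform.
clicks = [
    "https://www.youtube.com/watch?v=abc123",
    "https://www.huffpost.com/entry/example-story",
    "https://www.youtube.com/watch?v=def456",
]
print(top_referral_destinations(clicks))
# [('www.youtube.com', 2), ('www.huffpost.com', 1)]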

To truly understand how Facebook is responding to its role in the election and the ensuing morass, numerous sources inside and close to the company pointed to its unemotional engineering-driven culture, which they argue is largely guided by a quantitative approach to problems. It’s one that views nearly all content as agnostic, and everything else as a math problem. As that viewpoint has run headfirst into the wall of political reality, complete with congressional inquiries and multiple public mea culpas from its boy king CEO, a crisis of perception now brews.

Inside Facebook, many in the company’s rank and file are frustrated. They view the events of the last month and those that preceded it as part of an unjust narrative that’s spiraled out of control, unchecked. Five sources familiar with the thinking inside the company told BuzzFeed News that many employees feel Facebook is being used as a scapegoat for the myriad complex factors that led to 2016's unexpected election result. What the public sees as Facebook’s failure to recognize the extent to which it could be manipulated for untoward ends, employees view as a flawed hindsight justification for circumstances that mostly fell well beyond their control. And as the drumbeat of damning reports continues, the frustration and fundamental disconnect between Facebook's stewards and those wary of its growing influence grow larger still.

Today, the engineer’s anecdote reads as a missed opportunity — a warning of an impending storm of misinformation blithely dismissed. But inside Facebook in July 2015, it seemed a rational response. At the time, the platform was facing criticism for what many believed to be overly censorious content policies, most notably a decision to ban breastfeeding photos which had only recently been reversed. A move to reduce the reach of nontraditional publications seemed certain to trigger a PR disaster at a time when Facebook was consumed by a troubling downturn in its core business metric — person-to-person sharing — and battling Snapchat for new users.

“Things are organized quantitatively at Facebook,” the engineer said, noting that the company was far more concerned with how many links were shared than what was being shared. “There wasn't a team dedicated to what news outlets [were using the platform] and what news was propagating (though there was a sales-oriented media partnerships team). And why would they have had one, it simply wasn’t one of their business objectives.”

Yet that failure to fully recognize a looming problem has engulfed the company in the aftermath of the 2016 US presidential election. In the past month alone, Facebook has disclosed to Congress 3,000 ads linked to Kremlin election manipulation, its CEO has publicly apologized for dismissing the fake news epidemic as “a crazy idea,” and it has been attacked by President Trump on Twitter. It’s also been criticized for surfacing fake news to its Las Vegas massacre “safety check” page, published full-page apology ads in major newspapers, and been forced to update lengthy blog posts about its handling of the Russian ads when its explanations proved too murky. And then there are the congressional probes — two of them — and a pending bipartisan bill meant to force it to disclose political ads. With the specter of government regulation hanging above it, Facebook seems to have few, if any, friends right now in the public sphere.

The public-facing crisis is playing out internally as well, as employees wrestle with the election meddling that occurred on its platform. Sources familiar with recent internal discussions at the company told BuzzFeed News that plenty of employees are conflicted over the issue and are demanding more clarity about the platform’s exact role in the election. “Internally, there’s a great deal of confusion about what's been done and people are trying to come to terms with what exactly happened,” one of these people told BuzzFeed News.

Facebook

Three sources close to the company described similar conversations, noting that Facebook staffers feel some sense of responsibility for the platform’s misuse in the election. “One of the things people inside are bemoaning is the fact that the response internally was very, very slow,” one former employee told BuzzFeed News. “That’s because Facebook didn't have the expertise needed to spot it until it happened.”

The employee, who left the company recently, said that Facebook was so focused on US-centric policies and engaging with 2016 election campaigns that it didn’t bother to fully consider foreign interference. “There’s a feeling that this kind of social engineering was happening all over the world before our election — in places like Estonia, Poland, and Ukraine. If there was a less US-focused approach it may have been spotted and acted on in real time,” this person said.

According to a Facebook spokesperson, responding on behalf of the company, “we take these issues very seriously. Facebook is an important part of many people’s lives and we recognize the responsibility that comes with that. It’s also our responsibility to do all we can to prevent foreign interference on our platform when it comes to elections. We are taking strong action to continue bolstering security on Facebook – investing heavily in new technology and hiring thousands more people to remove fake accounts, better enforcing our standards on hate and violence, and increasing oversight of our ad system to set a new transparency standard for the internet. This is a new kind of threat, even though not a new challenge. Because there will always be bad actors trying to undermine our society and our values. But we will continue to work to make it a lot harder to harm us, and ensure people can express themselves freely and openly online.”

But the prevailing viewpoint within Facebook, according to numerous sources, is that the company has been wrongly excoriated for the misinformation and election meddling enabled by its platform. “There are lots inside thinking, 'We're the victims,'” a source familiar with the current climate at the company told BuzzFeed News. “[They feel] that this Russia stuff is bigger than just Facebook’s responsibility — that Facebook is just a battlefield in a greater misinformation campaign and that it’s up to the governments involved to resolve these issues.”

More broadly, multiple sources told BuzzFeed News that some inside Facebook think the blame cast on the company by the media and public feels reactionary and somewhat hypocritical. “Before the election the digital community was complaining that Facebook was this monopolistic power that was overly censorious and buttoned-up. And now the same group is saying, ‘how'd you let Breitbart and fake news get out there?’” a second former employee who recently left the company said. “And they have a point — ultimately it's because the election didn't go the way they wanted. It's worth pointing out that 12 months ago people said, 'I hate Facebook because they don't let all voices on the platform,' and they're upset and asking for Facebook to restrict what’s shown.”

“The view at Facebook is that ‘we show people what they want to see and we do that based on what they tell us they want to see, and we judge that with data like time on the platform, how they click on links, what they like,’” a former senior employee told BuzzFeed News. “And they believe that to the extent that something flourishes or goes viral on Facebook — it’s not a reflection of the company’s role, but a reflection of what people want. And that deeply rational engineer’s view tends to absolve them of some of the responsibility, probably.”

For Facebook’s critics, this view is tantamount to the company’s original sin — one that’s exacerbated by its leakproof culture and what some employees describe as a hive mind mentality.

Moreover, it is largely driven from the top down. CEO Mark Zuckerberg seems to project two perhaps antithetical views: that Facebook has great power to connect the world for the better, but only limited influence when it comes to efforts to destabilize democracy. A source who has worked closely with Zuckerberg said he sees the founder and CEO as approaching Facebook’s role in the election with none of the hysteria that’s reflected in the press.

“He’s treating it with a level of urgency,” this former senior employee told BuzzFeed News. “We’re not going to see a knee-jerk reaction to this from him — he’ll be very restrained with any potential tweaks to the platform because he's more interested in substance than optics.”

“Zuck tends to have a pretty unemotional and macro level view of what's going on,” another former Facebook employee explained. “He’ll look at data from a macro level and see the significance, but also see that the data shows that nobody wanted to read the liberal media stuff — that [the mainstream media] didn't target half the country with their content.”

For many outside observers, the idea that the social network potentially played an outsize role in election interference by a foreign government is confirmation of their worst dystopian fears. The fact that the Russian ads were likely targeted using personal information provided by users themselves tugs at long-held suspicions that Facebook knows too much about its users and profits wildly from it.

Yet those with knowledge of Facebook’s ad system say that there’s a solid case to be made that the disclosed Russian ad spend — and even the reported millions of impressions those ads received — pales in comparison to the billions spent by political groups in the run-up to 2016 on Facebook’s ad platform and the hundreds of millions of impressions that the platform delivers daily on all types of paid and unpaid content. Basically: Facebook’s unprecedented scale, when applied to the Russian ads, renders the scandal’s impact far less consequential than news reports would suggest.

The greater, perhaps more existential issues, former employees argue, are Facebook’s filter bubbles, the increasing misinformation and hyperpartisan news that flourishes there as a result, and the platform’s role as arguably the single largest destination for news consumption.

Sources familiar with recent discussions inside Facebook told BuzzFeed News there’s some concern that the strong reaction to 2016 election meddling and the desire for fast reform could push the company to assume a greater role in determining what is or isn’t legitimate news. “That Facebook played a significant part as perhaps the most important online venue in this election is not up for debate,” one of these people said. “But what we need to be debating is: What is Facebook’s role in controlling the outcomes of elections? I’m not sure anyone outside Facebook has a good proposal for that.”

Facebook, too, has long been concerned about assuming any sort of media watchdog role and the company’s objection usually takes the form — as it did last week in an interview with Facebook COO Sheryl Sandberg — of its well-worn argument that Facebook is a technology company, not a media company. “We hire engineers. We don’t hire reporters. No one is a journalist. We don’t cover the news,” Sandberg told Axios’s Mike Allen.

Antonio Garcia Martinez, a former Facebook employee who helped lead the company’s early ad platform, worries that the momentum to correct for what happened during the 2016 election will push Facebook a step too far. “Everyone fears Facebook’s power and as a result, they're asking them to assume more power in the form of human curation and editorial decision-making,” he said. “I worry that two or three years from now we're all going to deeply regret we asked for this.”

This gulf between the way the company sees itself and the way it is increasingly being viewed by outside observers threatens to undermine Facebook’s awareness of crucial issues that need to be addressed, he says.

Via Twitter: @NellieBowles

To illustrate this, Martinez points to Facebook’s “filter bubble” problem — that the platform’s design pushes its users into echo chambers filled with only the news and information they already want, rather than the potentially unpopular information they might need. “What worries me is that we’ve talked about the filter bubble problem for years now. And the company — and all the other platforms — have largely batted the concerns aside. But finally we’re seeing the filter bubble at work now in a very real way,” he said. Facebook, Martinez suggests, will weather its PR struggles. What remains to be seen is whether the company can learn from the chaos with a better ability to see outside itself.

“I think there's a real question if democracy can survive Facebook and all the other Facebook-like platforms,” he said. “Before platforms like Facebook, the argument used to be that you had a right to your own opinion. Now, it's more like the right to your own reality.”

Meanwhile, those inside the company continue to struggle with what, exactly, the company is, and what it is responsible for.

“There are times when people at Facebook would gloat about the power and reach of the network,” a senior former employee said. “Somebody said with a straight face to me not terribly long ago that 'running Facebook is like running a government for the world.’ I remember thinking, ‘God, it’s really not like that at all.’”

Source: BuzzFeed

Another Woman Has Accused Robert Scoble of Sexual Harassment

Robert Scoble at the 'NEXT Berlin' conference in Berlin on April 24, 2013

Dpa / AFP / Getty Images

Robert Scoble, the high-profile tech evangelist and blogger, and longtime fixture at industry events, has been accused by a second woman of groping. Michelle Greer, who worked with Scoble at Rackspace in 2009 and 2010, says Scoble groped her at a tech conference in 2010. The accusation comes on the same day that the technology journalist Quinn Norton accused Scoble of assaulting her in the early 2010s at Foo Camp, a hacker conference. Norton wrote, “And then, without any more warning, Scoble was on me. I felt one hand on my breast and his arm reaching around and grabbing my butt.”

Scoble is an influential figure in tech. He started his Scobleizer blog in 2000, and parlayed the blog into lucrative roles including technology evangelist at Microsoft, futurist at Rackspace, and most recently, entrepreneur in residence at Upload, a virtual reality startup that recently settled a sexual harassment lawsuit with a former employee. He's a fixture at tech conferences, where he gives talks with titles like “Beyond Mobile.” He’s also written several books with a co-author, Shel Israel, including 2016's The Fourth Transformation, about how in the future, we will replace everything we do on our phones with moving our eyes or even brainwaves.

Greer was a senior manager in corporate communications at Rackspace, where one of her responsibilities was to produce content for Scoble's now-defunct “Building 43” project, a social networking and content community that largely focused on startups. One night in February 2010, when Greer was at the Startup Riot tech conference with Scoble in Atlanta, a few people from Rackspace ended up in the hotel bar.

“I remember seeing him with two drinks in his hand,” she told BuzzFeed News. “My boss sat next to me, and Scoble sits across from me and starts touching my leg.” She said that she told the group she was tired and had to go up to her room. Once she got there, she called her boyfriend and told him what had happened.

A couple days later, when the team was back in San Francisco, Scoble's producer (who was present during the incident) apologized to her. As she recalled, he told her, “I'm so sorry, my employees will never touch you again.” But Scoble himself never apologized, and Greer decided not to go to HR with what had happened.

When contacted by BuzzFeed News via Facebook Messenger on Thursday, Scoble said, “What happened with Michelle happened with my boss,” as well as “other women in the room.” He declined to comment further, saying only that he would be doing a live video at midnight Pacific.

After the incident, Greer still had to work with Scoble. Their face-to-face interaction was limited, since she was based in Austin and he was in San Francisco, but still, Greer said, “I was afraid of him,” and dreaded their occasional interactions at conferences and events. She decided to try to transfer to a different team at Rackspace, but within the next few months, two different teams told her they didn't have the head count to put her on their teams. In the meantime, her job performance was suffering. “I had basically shut down,” she said. “I was miserable.”

One day, she got called into an office by her boss, who told her he didn't think she could be happy at the company, and let her go. (Greer's former boss did not return a phone call from BuzzFeed News.) At her exit interview, she said she told HR everything that had happened. “You could tell they felt bad,” she said. “They were like, why didn't you come to us before? I was like, it's Robert Scoble. If this gets out, he has a bigger megaphone than I do. I could be totally hosed.”

The timing, she said, was terrible: “I had just gotten a condo. I had a mortgage.”

For Greer, one of the most galling things about what happened was watching Scoble continue to act like an advocate for women in tech on social media and elsewhere. “He'll share a lot of stuff about women in tech. He tries to act like an ally.” In July, she posted a comment on Facebook that said “I have always worked with mostly men. They know things should change. If you don't get rid of the bad actors though, nothing changes.” Scoble liked her comment. “I responded, 'You're a bad actor. I can't tell you how awful I felt after working with you. Watching you like this post angered me.' He said, 'Saying I am sorry isn't enough to undo the harm I have done.'”

Years after the incident and her subsequent firing, Greer is still angry about how she was treated. “I have to explain for the rest of my life why I only worked at Rackspace for 10 months. I wish I had gone to HR when it happened so I could have nipped it in the bud, and it wouldn't be this cancer that just spread.”

To Greer, it seems as though she's been re-victimized all over again. “He apologizes and then he keeps doing this crap,” she said. “I lost my job. It traumatized me for life.”

Source: BuzzFeed

Basic Income Isn't Just About Robots, Says Mayor Who Just Launched Pilot Program

Phonlamaiphoto / Getty Images / Caroline O’Donovan

The idea of basic income — in which the government gives all citizens a small monthly stipend — has grown popular in tech circles, not least because it's seen as a possible solution to the looming problem of robots, artificial intelligence, and automation taking jobs away from human workers.

But when Stockton, California Mayor Michael Tubbs spoke at Cash Conference, a pro-basic income event held in San Francisco on Thursday, about plans to test a basic income pilot for his city, he said the program isn't a response to encroaching technology.

“Basic income isn't about a scary future where robots run everything,” Tubbs told reporters at a press conference held Thursday. “It’s about today, when working people can't afford rent.”

Tubbs, along with the Economic Security Project — the group that hosted the Cash Conference, which is partially backed by Facebook co-founder Chris Hughes — announced Stockton’s pilot on Wednesday. The Economic Security Project pledged $1 million to the initiative, which will dole out $500 a month per Stockton household. For comparison, other basic income test programs range from one that pays $1,500 a month to 100 families in Oakland, to others in places like Kenya that offer people much lower monthly stipends.

“Basic income isn't about a scary future where robots run everything. It’s about today, when working people can't afford rent.”

Many of the pilot program’s details, including how many people will be selected, how long it will run, and how participants will be chosen, are still undecided. Tubbs said the city is hiring a researcher who will design the pilot over the course of six to nine months.
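
While those details are worked out, a rough back-of-envelope calculation gives a sense of scale. Assuming, purely for illustration, that the entire $1 million pledge went to stipends at $500 a month, it would cover 2,000 household-months, for example about 100 households for 20 months or roughly 166 households for a year:

# Back-of-envelope scale check for the Stockton pledge. Illustrative assumption:
# the entire $1 million goes to stipends, ignoring research and administration costs.
pledge_dollars = 1_000_000       # Economic Security Project pledge
stipend_per_month = 500          # dollars per household per month

household_months = pledge_dollars // stipend_per_month
print(household_months)          # 2000 household-months in total
print(household_months // 12)    # about 166 households for one year
print(household_months // 100)   # or 20 months for 100 households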

Basic income pilots are in vogue around the world right now: Hawaii’s doing one; Ontario, Canada, is doing one; and startup accelerator Y Combinator is doing one in Oakland. The idea is to find out what randomly selected people will spend the money on when it comes with no strings attached.

“I think the vast majority of people will make rational economic decisions,” Tubbs said of the not-yet-selected Stockton families who will participate in the basic income pilot.

Not all policymakers think giving Americans a universal basic income is a good idea. Former vice president Joe Biden, who recently launched an institute to study jobs and work at the University of Delaware, has opposed the idea because he believes people derive dignity from doing work.

“While I appreciate concerns from Silicon Valley executives about what their innovations may do to American incomes, I believe they're selling American workers short,” Biden wrote in a blog post.

But Bloomberg venture fund manager Roy Bahat, who moderated a panel at Cash Conference, said it’s wealthy people, not average Americans, who derive self-worth from their jobs. “Work is about meaning…if you make $150k a year,” Bahat said, citing a survey conducted by Bloomberg and the New America foundation. “For everyone else, it's about security.”

Tubbs said receiving basic income wouldn’t necessarily preclude someone from having a job — to be sure, it would be nearly impossible to live on $500 a month alone in California. Tubbs says many of his constituents in Stockton have jobs and work long days, but still can’t afford to pay their bills.

“I would say there is something inherently good about work, but I don’t think the inherent goodness of work is in working and not making money,” Tubbs said Thursday. “I think it’s a false dichotomy to say, we can do this but we can’t do that. I think people have dignity when they can pay their bills, pay their healthcare bills, take their kids shopping.”

Stockton, while just a few hours away from San Francisco, hasn’t really benefited from the tech boom in the Bay Area. Tubbs said he first learned of basic income from Martin Luther King’s book Where Do We Go From Here?, not from Silicon Valley’s roboticists and futurists. In the years since he first read King’s book, Tubbs said he’s watched as basic income — or “guaranteed income” in King’s words — has become “not some crazy idea” but a policy proposal a lot of people were thinking about.

For him, the basic income pilot isn’t about a future where workers are replaced by algorithms and robots, but about a present where the hardest-working people — he cited migrant workers, service workers, and Uber and Lyft drivers — can’t make a living even if they’re working full-time.

Source: BuzzFeed

People Are Hella Mad About Twitter's Old-Tweet-Loving Algorithm

Twitter is constantly testing and tweaking the algorithm that shows its users “the best tweets first” in their timelines. But within the past month or so, the algorithm has started filling some people's timelines with such old tweets they've started protesting loudly.

The complaints suggest that some of users' worst fears about the algorithmic timeline are coming true: Namely, that messing with Twitter's reverse chronological order would harm its live, vibrant feel.

Some Twitter users say they've spotted tweets in their timelines from as long as three days earlier:

Others have said it shows them the same tweets over and over:

Many more are reacting, uhh, calmly to what they're seeing:

As Twitter CEO Jack Dorsey is fond of saying, Twitter is all about a live experience, meaning it shows users what's happening in real-time.

But filling timelines with such old tweets doesn't deliver on that promise. A tweet about a baseball game the day after it finished is the opposite of Dorsey's in-the-moment Twitter value proposition.

And sometimes, those day-old sports tweets can cause Twitter users to relive the horror of devastating losses, such as in the case of this Cleveland Indians fan:

Asked if the algorithm has been tweaked to show a higher ratio of old tweets, a Twitter spokesperson told BuzzFeed News that the company had nothing new to share, “but we are always testing tweaks to make the timeline more relevant.”

Twitter users can, of course, opt out of an algorithmically sorted timeline. But only 2% of Twitter users have toggled that option, according to the company's latest numbers. And one user who had opted out reported seeing old tweets in her feed:

Introducing the algorithm has been good for Twitter. The company's user numbers have increased since the move, along with the amount of time people spend on the site. But there's no guarantee that if Twitter continues to ramp up the algorithm it will experience a proportional benefit. And judging from the feedback from those currently living with the version of the algorithm that surfaces old and repetitive tweets, the company may be finding the edge of its usefulness.

Source: BuzzFeed

Ad Industry Insiders Are Connected To A Fraud Scheme That Researchers Say Stole Millions Of Dollars

Some of the world’s biggest brands were ripped off by a digital fraud scheme that used a network of websites connected to US advertising industry insiders to steal what experts say could be millions of dollars, a BuzzFeed News investigation has found.

Approximately 40 websites used special code that triggered an avalanche of fraudulent views of video ads from companies such as P&G, Unilever, Hershey’s, Johnson & Johnson, Ford, and MGM, according to data gathered by ad fraud investigation firm Social Puncher in collaboration with BuzzFeed News. Over 100 brands saw their ads fraudulently displayed on the sites, and roughly 50 brands appeared multiple times.

Documents obtained by BuzzFeed News reveal that the CEO of an ad platform and digital marketing agency is an owner of 12 websites that earned revenue from the fraudulent views, and his company provided the ad platform used by sites in the scheme. Another key player is a former employee of a large ad network who runs a group of eight sites that were part of the fraud, and who consults for a company with another eight sites in it. That company is owned by a model and online entrepreneur who played Bob Saget’s girlfriend on the HBO show Entourage. A final site researchers identified in the scheme is owned by the cofounder of one of the 20 largest ad networks in the United States.

In statements provided to BuzzFeed News by email, all parties deny any knowledge of fraudulent ad activity taking place on their websites. 301network, the ad platform used in the scheme, is now in the process of being shut down and many of the websites that participated have also been deactivated.

This scheme illustrates that while governments and platforms such as Facebook are grappling with online misinformation, the advertising world is in the midst of its own crisis brought on by a multibillion-dollar form of digital deception: ad fraud. This investigation also reveals how seemingly credible players in the ad supply chain can play an active role in — and profit from — fraud.

It's yet another example of how the digital ad industry is being rocked by concerns about quality, fraud, and brand safety. YouTube lost millions of dollars in advertising after it was revealed that ads from major brands were showing up next to extremist videos. P&G, one of the world's biggest advertisers, recently withheld more than $100 million of digital ad spend and found it had little impact on its business. “What that tells me is that the spending we cut was largely ineffective,” said CEO David Taylor.

Social Puncher, which publishes ad fraud investigations at SadBotTrue.com, estimates this scheme could have stolen as much as $20 million this year. Pixalate, a fraud prevention and detection company, recently exposed a group of seven sites involved in the scheme as a result of its own independent investigation. It estimated that “a sustained attack [from just one website] could net the fraudsters over $2 million per year.”

Another fraud detection company, Integral Ad Science, reviewed sites that participated in the scheme and found they engaged in fraudulent tactics to generate ad impressions. “Those sites present various degrees of fraud, and they have been flagged accordingly to our customers,” Maria Pousa, the chief marketing officer of IAS, told BuzzFeed News.

“We have stopped the dumb criminals. Now we need to be able to stop the smart criminals.”

Kristin Lemkau, the chief marketing officer of JPMorgan Chase, recently said advertisers are expected to lose $16.4 billion this year to ad fraud, more than double what was stolen in 2016.

Mike Zaneis, CEO of Trustworthy Accountability Group, an anti-fraud initiative set up by the ad industry's key trade groups, told BuzzFeed News some in the industry enable fraud because they look the other way and let it happen, while others actively participate. “There are errors of omission and then there are errors of commission,” he said.

In spite of rising losses and brand concerns, Zaneis believes recent initiatives in the industry have made it more difficult for criminals to make money from ad fraud, which has in turn required them to develop more-sophisticated attacks.

“It was so easy to just turn on the nonhuman traffic and there was no accountability, and that’s no longer the case,” he said. “We have stopped the dumb criminals. Now we need to be able to stop the smart criminals.”

Ads for major brands were fraudulently displayed on approximately 40 websites.

Social Puncher

What caught the attention of researchers at Pixalate and Social Puncher, two companies that identified the fraud independently of each other, was that sites in the scheme deployed a sophisticated method to automatically redirect traffic between websites in order to rack up ad impressions and avoid detection. Once caught in this web of redirects, the sites show a constant stream of video ads that are often barely interrupted by actual editorial content. In some cases, the sites showed more than one video ad at the same time in order to increase revenue.

Jalal Nasir, the CEO of Pixalate, referred to the sites in the scheme as “self-driven” because once the redirect code is initiated it can bounce between websites without any action required on the part of a human user or bot. (This kind of attack is known as “session hijacking.”)

“The people profiting from this scheme could have initiated the first visit to the URL, simply to open as many windows or tabs as possible on browsers,” he told BuzzFeed News. “Once that first step had been taken, however, the browsers could have been left open to ‘browse’ all day, ‘mimicking a human.’”
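
Neither firm has published its detection tooling, but a crude first pass at spotting this kind of "self-driven" page is to fetch a URL and look for automatic forwarding mechanisms in the returned HTML. The Python sketch below is a simplified, hypothetical heuristic, not Pixalate's or Social Puncher's method: it flags meta-refresh tags and obvious JavaScript location rewrites, and it cannot see redirects assembled by obfuscated scripts.

import re
import requests

# Simple signatures of pages that forward visitors without any user action.
META_REFRESH = re.compile(r'<meta[^>]+http-equiv=["\']refresh["\']', re.IGNORECASE)
JS_REDIRECT = re.compile(r'(?:window\.location|location\.href|location\.replace)\s*[=(]')

def looks_self_driving(url):
    """Fetch a page and report whether its HTML contains automatic redirect code.

    Heuristic only: it misses redirects built by obfuscated scripts and will
    also flag legitimate sites that happen to use these patterns.
    """
    html = requests.get(url, timeout=10).text
    return bool(META_REFRESH.search(html) or JS_REDIRECT.search(html))

if __name__ == "__main__":
    for site in ["https://example.com"]:   # hypothetical target list
        print(site, looks_self_driving(site))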

The websites in the scheme focus on niche topics, such as beauty, celebrity news, food, and parenting, that are popular with major advertisers and that can attract high ad rates. They had names such as BeautyTips.online, RightParent.com, HealthyBackyard.com, MomTaxi.com, and GossipFamily.com. In many cases the sites are filled with images and content that has been plagiarized or loosely rewritten from other websites. Others are filled with posts that read like poor translations of actual English.

“Don’t assume rumored baby bump of Kylie Jenner anytime soon,” begins a recent article on StyleFashionista.com. The headline is similarly nonsensical: “Kylie Jenner’s Post Instagram Posts A Fascinating Selection Of Shirts.”

A screenshot of StyleFashionista.com

Many of the sites appear to have been hastily thrown together: Some, such as UpcomingBeauty.com, still contain the default settings of the design template, and have newsletter signup boxes that are not configured. Others, such as StyleFashionista.com, have been online for a year and a half and yet do not appear to have a single user comment. (The “Recent Comments” section on its homepage is empty.)

Pixalate referred to the group of properties it investigated as “zombie sites” because of how they generate ad views without human action, and because it’s unlikely they could attract interest from a real audience.

If any real visitors did happen upon these sites, the scheme was designed to avoid detection by ensuring that a normal user visiting the homepage or regular URL would not be exposed to the malicious behavior. The sites were configured with a “friend or foe” system that only triggered the redirects when a specific URL was accessed. Once triggered, the secret URL would engage what Social Puncher came to refer to as “ad hell” due to the constant display of video ads and very little actual editorial content.

Social Puncher identified the secret URLs and then accessed them in order to verify the fraudulent ad display. One video documenting this behavior shows ads from top brands being displayed as a small group of sites redirect between each other, with no action taken on the part of the user.

Pixalate’s researchers documented the same behavior, as did Protected Media, another fraud detection company that examined the sites at BuzzFeed News’ request.

Along with the secret URLs, the scheme attempted to avoid detection by using a network of different sites to ensure no single property generated enough revenue to risk catching the attention of fraud detection companies, or of the brands being defrauded. Many sites in the scheme would launch, instantly gain traffic and ads, and then see their audience disappear months later. It was the digital equivalent of skimming from a casino.

Using the list of sites that Social Puncher and Pixalate identified, BuzzFeed News began to investigate the companies and people behind them. That trail led to two major owners/operators of sites who turned out to be Americans with ties to the US digital ad industry.

A screenshot of 301network.com

301network.com

All sites involved in the scheme used ad technology provided by 301network, which is a company connected to 301 Digital Media, a marketing agency based in Nashville. (Pixalate also saw 301’s ad code in the sites it examined.) A cached version of its company page on LinkedIn cited Scripps and Pfizer as clients, and the company is a gold-level sponsor of a digital marketing conference taking place in New York next month.

When first contacted by BuzzFeed News about the presence of its ad platform code across the sites identified in the scheme, 301 CEO Matt Arceneaux said the company was in the process of shutting down its ad platform, and that he was unaware of any fraud.

“We had a few publishers still using our [supply-side platform] and ad server products, but in light of recent clawbacks from advertisers and other SSPs related to Monkey Frog Media and a few other publishers in the network, we decided to accelerate the wind down process,” he said in an email.

“Clawbacks” are demands for refunds, which in this case were made after Pixalate publicly exposed the ad fraud executed by seven sites owned by a shell company called Monkey Frog Media LLC. (The sites were all taken offline shortly after Pixalate’s blog post was published.)

Arceneaux portrayed it as a case of his small ad platform being exploited by unscrupulous players like Monkey Frog. However, documents obtained by BuzzFeed News combine with corporate records and other information to show that Arceneaux is actually an owner of Monkey Frog Media. Tennessee corporate records show that Monkey Frog Media goes by another name, Happy Planet Media. That company had another five sites involved in the scheme, all of which have public domain-registration records that list 301 Digital Media, Arceneaux’s company, as their owner.

Tennessee corporate records also show that two other LLCs with websites in the scheme have connections to 301 and/or Arceneaux. Market 57 LLC, which had five sites, lists its corporate address as the headquarters of 301. One of Market 57’s properties, ViralNewsJunkie.com, also contains the same unique Amazon affiliate code in its source code as two websites owned by 301. (The use of the code means that any products sold via the website earn 301 a commission.)
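
A shared affiliate ID is one of the simpler signals tying nominally separate sites to a common operator, because the tag sits in plain text in each page's source. As a minimal, hypothetical sketch (not BuzzFeed News' actual process), one could extract the Amazon "tag=" parameter from each homepage and group sites by the tags they share:

import re
from collections import defaultdict

import requests

# Amazon affiliate IDs appear as a query parameter in product links,
# e.g. tag=somesite-20 (US) or tag=somesite-21 (UK).
AFFILIATE_TAG = re.compile(r'[?&]tag=([a-z0-9-]+-2[01])', re.IGNORECASE)

def group_sites_by_affiliate_tag(urls):
    """Fetch each page and group the URLs by the Amazon affiliate tags in their HTML."""
    groups = defaultdict(set)
    for url in urls:
        html = requests.get(url, timeout=10).text
        for tag in set(AFFILIATE_TAG.findall(html)):
            groups[tag.lower()].add(url)
    return groups

if __name__ == "__main__":
    # Hypothetical domain list; a tag shared across domains suggests common ownership.
    sites = ["https://example.com", "https://example.org"]
    for tag, members in group_sites_by_affiliate_tag(sites).items():
        if len(members) > 1:
            print(tag, sorted(members))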

Orange Box Media LLC, which owns another five sites, is also registered in Tennessee and lists Arceneaux’s home address in its corporate records. (That same address appeared in an early version of the registration for Monkey Frog Media/Happy Planet Media.)

A final sign that these companies share an owner is that on September 8 at roughly noon Eastern the websites belonging to all three companies were taken offline at the same time, according to data gathered by Social Puncher.

Arceneaux initially denied any connection between the shell companies and 301. “Neither 301 Digital Media, 301 Ads nor 301 Network have any ownership in any of the businesses you mention.” He did not reply when asked to clarify if he or his partner, COO Andrew Becks, have personal stakes in the companies. (Becks did not respond to the question.)

BuzzFeed News

Documents show that Arceneaux has been operating the Monkey Frog Media sites since at least 2015. On December 11 of that year, an employee of an ad tech company sent an email to get Monkey Frog’s websites set up as a customer. Arceneaux was identified as the “manager” of Monkey Frog when he signed the contract, and was also listed as the company contact. BuzzFeed News obtained a copy of the email and contract with Arceneaux’s signature.

After being informed of the existence of a Monkey Frog contract with his signature, Arceneaux issued a statement to deny that any fraud took place.

“No one profited from an ad fraud scheme as there was no ad fraud scheme evidence shown in any data that we have collected or seen,” Arceneaux said. “We ran all publishers through 3rd party ad fraud detection companies and were not notified of any issues until shortly before they were removed from our platform. We always make sure any person or publisher running on our network is fully compliant with our 3rd party fraud detection partner.”

He argued that the behavior of the Monkey Frog sites was not suspicious:

“We take real ad fraud very seriously. After reviewing the behavior on the sites in question, we did not observe anything other than auto-advancing pages similar to what YouTube and Pandora do. We did not observe any attempts to mimic human behaviors or automatically click on ads. The 301network SSP has since notified all publishers that it is ceasing operations and can proudly say that none of the sites that showed this behavior are operational anymore.”

This is not the first time Arceneaux has run into problems with fraud on his sites. Shailin Dhar is today the founder of ad fraud research firm Method Media Intelligence, but back in 2015 he was working at an ad network when Arceneaux approached the company to get his Monkey Frog sites signed up. Dhar told BuzzFeed News that that same year he also looked into two 301 Digital Media properties, BridalTune.com and MensTrait.com. Dhar was told by AppNexus, a major ad platform, that its quality team removed the sites from the ad exchange “after they detected and confirmed significant amounts of nonhuman traffic coming through the sites.”

Dhar provided a copy of the email to BuzzFeed News and it can be viewed here.

“While managing supply quality for an ad network client, I regularly saw traffic quality flags with sites from 301 Digital and Monkey Frog,” Dhar told BuzzFeed News. “While these sites fit the mold for getting accepted into ad exchanges, they were repeatedly flagged by AppNexus and other SSPs we used. When I tried to get more insight as to why they were flagged, the platforms simply told us it was an open-and-shut case with nothing to debate over.”

“The content they put out was so light and it was such fluff and had so little value it was hard to believe they were getting so much traffic and revenue on it.”

Source: BuzzFeed

This Is What A 21st-Century Police State Really Looks Like

Photographed for BuzzFeed News

KASHGAR, China — This is a city where growing a beard can get you reported to the police. So can inviting too many people to your wedding, or naming your child Muhammad or Medina.

Driving or taking a bus to a neighboring town, you’d hit checkpoints where armed police officers might search your phone for banned apps like Facebook or Twitter, and scroll through your text messages to see if you had used any religious language.

You would be particularly worried about making phone calls to friends and family abroad. Hours later, you might find police officers knocking at your door and asking questions that make you suspect they were listening in the whole time.

For millions of people in China’s remote far west, this dystopian future is already here. China, which has already deployed the world’s most sophisticated internet censorship system, is building a surveillance state in Xinjiang, a four-hour flight from Beijing, that uses both the newest technology and human policing to keep tabs on every aspect of citizens’ daily lives. The region is home to a Muslim ethnic minority called the Uighurs, whom China has blamed for forming separatist groups and fueling terrorism. Since this spring, thousands of Uighurs and other ethnic minorities have disappeared into so-called political education centers, apparently for offenses ranging from using Western social media apps to studying abroad in Muslim countries, according to relatives of those detained.

Over the past two months, I interviewed more than two dozen Uighurs, including recent exiles and those who are still in Xinjiang, about what it’s like to live there. The majority declined to be named because they were afraid that police would detain or arrest their families if their names appeared in the press.

Taken along with government and corporate records, their accounts paint a picture of a regime that at once recalls the paranoia of the Mao era and is also thoroughly modern, marrying heavy-handed human policing of any behavior outside the norm with high-tech tools like iris recognition and apps that eavesdrop on cell phones.

China’s government says the security measures are necessary in Xinjiang because of the threat of extremist violence by Uighur militants — the region has seen periodic bouts of unrest, from riots in 2009 that left almost 200 dead to a series of deadly knife and bomb attacks in 2013 and 2014. The government also says it’s made life for Uighurs better, pointing to the money it’s poured into economic development in the region, as well as programs making it easier for Uighurs to attend university and obtain government jobs. Public security and propaganda authorities in Xinjiang did not respond to requests for comment. China’s foreign ministry said it had no knowledge of surveillance measures put in place by the local government.

“I want to stress that people in Xinjiang enjoy a happy and peaceful working and living situation,” said Lu Kang, a spokesperson for China’s foreign ministry, when asked why the surveillance measures are needed. “We have never heard about these measures taken by local authorities.”

But analysts and rights groups say the heavy-handed restrictions punish all of the region’s 9 million Uighurs — who make up a bit under half of the region’s total population — for the actions of a handful of people. The curbs themselves fuel resentment and breed extremism, they say.

The ubiquity of government surveillance in Xinjiang affects the most prosaic aspects of daily life, those interviewed for this story said. D., a stylish young Uighur woman in Turkey, said that even keeping in touch with her grandmother, who lives in a small Xinjiang village, had become impossible.

Whenever D. called her grandmother, police would barge in hours later, demanding the elderly woman phone D. back while they were in the room.

“For god’s sake I’m not going to talk to my 85-year-old grandmother about how to destroy China!” D. said, exasperated, sitting across the table from me in a café around the corner from her office.

After she got engaged, D. invited her extended family, who live in Xinjiang, to her wedding. Because it is now nearly impossible for Uighurs to obtain passports, D. ended up postponing the ceremony for months in hopes the situation would improve.

Finally, in May, she and her mother had a video call with her family on WeChat, the popular Chinese messaging platform. When D. asked how they were, they said everything was fine. Then one of her relatives, afraid of police eavesdropping, held up a handwritten sign that said, “We could not get the passports.”

D. felt her heart sink, but she just nodded and kept talking. As soon as the call ended, she said, she burst into tears.

“Don’t misunderstand me, I don’t support suicide bombers or anyone who attacks innocent people,” she said. “But in that moment, I told my mother I could understand them. I was so pissed off that I could understand how those people could feel that way.”

China’s government has invested billions of renminbi into top-of-the-line surveillance technology for Xinjiang, from facial recognition cameras at petrol stations to surveillance drones that patrol the border.

A surveillance video from SenseTime Group, which develops artificial intelligence and deep learning for face, body, and behavioral recognition. The video, which circulated widely on social media, could not be independently verified by BuzzFeed News.


China is not alone in this — governments from the United States to Britain have poured funds into security technology and know-how to combat threats from terrorists. But in China, where Communist Party–controlled courts convict 99.9% of the accused and arbitrary detention is a common practice, digital and physical spying on Xinjiang’s populace has resulted in disastrous consequences for Uighurs and other ethnic minorities. Many have been jailed after they advocated for more rights or extolled Uighur culture and history, including the prominent scholar Ilham Tohti.

China has gradually increased restrictions in Xinjiang for the past decade in response to unrest and violent attacks, but the surveillance has been drastically stepped up since the appointment of a new party boss to the region in August 2016. Chen Quanguo, the party secretary, brought “grid-style social management” to Xinjiang, placing police and paramilitary troops every few hundred feet and establishing thousands of “convenience police stations.” The use of political education centers — where thousands have been detained this year without charge — also radically increased after his tenure began. Spending on domestic security in Xinjiang rose 45% in the first half of this year, compared to the same period a year earlier, according to an analysis of Chinese budget figures by researcher Adrian Zenz of the European School of Culture and Theology in Germany. A portion of that money has been poured into dispatching tens of thousands of police officers to patrol the streets.

In an August speech, Meng Jianzhu, China’s top domestic security official, called for the use of a DNA database and “big data” in keeping Xinjiang secure.

It’s a corner of the country that has become a window into the possible dystopian future of surveillance technology, wielded by states like China that have both the capital and the political will to monitor — and repress — minority groups. The situation in Xinjiang could be a harbinger for draconian surveillance measures rolled out in the rest of the country, analysts say.

“It’s an open prison,” said Omer Kanat, director of the Washington-based Uyghur Human Rights Project, an advocacy group that conducts research on life for Uighurs in Xinjiang. “The Cultural Revolution has returned [to the region], and the government doesn’t try to hide anything. It’s all in the open.”

A blacksmith works under watch of a police guard in Kashgar.

Photographed for BuzzFeed News

Once an oasis town on the ancient Silk Road, Kashgar is the cultural heart of the Uighur community. On a sleepy tree-lined street in the northern part of the city, among noodle shops and bakeries, stands an imposing compound surrounded by high concrete walls topped with loops of barbed wire. The walls are papered with colorful posters bearing slogans like “cherish ethnic unity as you cherish your own eyes” and “love the party, love the country.”

The compound is called the Kashgar Professional Skills Education and Training Center, according to a sign posted outside its gates. When I took a cell phone photo of the sign in September, a police officer ran out of the small station by the gate and demanded I delete it.

“What kind of things do they teach in there?” I asked.

“I’m not clear on that. Just delete your photo,” he replied.


Before this year, the compound was a school. But according to three people with friends and relatives held there, it is now a political education center — one of hundreds of new facilities where Uighurs are held, frequently for months at a time, to study the Chinese language, Chinese laws on Islam and political activity, and all the ways the Chinese government is good to its people.

“People disappear inside that place,” said the owner of a business in the area. “So many people — many of my friends.”

He hadn’t heard from them since, he said, and even their families cannot reach them. Since this spring, thousands of Uighurs and other minorities have been detained in compounds like this one. Though the centers aren’t new, their use has expanded significantly in Xinjiang over the last few months.

Through the gaps in the gates, I could see a yard decorated with a white statue in the Soviet-era socialist realist style, a red banner bearing a slogan, and another small police station. The beige building inside had shades over each of its windows.

A propaganda poster on the walls of a political education compound in Kashgar reads, “Cherish ethnic unity as you cherish your own eyes.”

Photographed for BuzzFeed News

Chinese state media has acknowledged the existence of the centers, and often boasts of the benefits they confer on the Uighur populace. In an interview with the state-owned Xinjiang Daily, a 34-year-old Uighur farmer, described as an “impressive student,” says he never realized until receiving political education that his behavior and style of dress could be manifestations of “religious extremism.”

Detention for political education of this kind is not considered a form of criminal punishment in China, so people sent to the centers face no formal charges or sentences, and their families are given no explanation. That makes it hard to say exactly what transgressions prompt authorities to send people to the centers. Anecdotal reports suggest that having a relative who has been convicted of a crime, having the wrong content on your cell phone, and appearing too religious could all be causes.

It’s clear, though, that having traveled abroad to a Muslim country, or having a relative who has traveled abroad, puts people at risk of detention. And the ubiquity of digital surveillance makes it nearly impossible to contact relatives abroad, according to the Uighurs I interviewed.

One recent exile reported that his wife, who remained in Xinjiang with their young daughter, asked for a divorce so that police would stop questioning her about his activities.

“It’s too dangerous to call home,” said another Uighur exile in the Turkish capital, Ankara. “I used to call my classmates and relatives. But then the police visited them, and the next time, they said, ‘Please don’t call anymore.’”

R., a Uighur student just out of undergrad, discovered in college that he had a knack for Russian. He was dying to study abroad. Because new rules imposed last year made it nearly impossible for Uighurs to obtain passports, the family scraped together about 10,000 RMB ($1,500) to bribe an official and get one, R. said.

R. made it to a city in Turkey, where he started learning Turkish and immersed himself in the culture, which has many similarities to Uighur customs and traditions. But he missed his family and the cotton farm they run in southern Xinjiang. Still, he tried to avoid calling home too much so he wouldn’t cause them trouble.


“In the countryside, if you get even one call from abroad, they will know. It’s obvious,” said R., who agreed to meet me in the back of a trusted restaurant only after all the other patrons had gone home for the night. He was so nervous as he spoke that he couldn’t touch the lamb-stuffed pastries on his plate.

In March, R. told me, he found out that his mother had disappeared into a political education center. His father was running the farm alone, and no one in the family could reach her. R. felt desperate.

Two months later, he finally heard from his mother. In a clipped phone call, she told him how grateful she was to the Chinese Communist Party, and how good she felt about the government.

“I know she didn’t want to say it. She would never talk like that,” R. said. “It felt like a police officer was standing next to her.”

Since that call, his parents’ phones have been turned off. He hasn’t heard from them since May.

A Uighur man stares at a police station from a balcony in Kashgar.

Photographed for BuzzFeed News

Security has become a big business opportunity for hundreds of companies, mostly Chinese, seeking to profit from the demand for surveillance equipment in Xinjiang.

Researchers have found that China is pouring money into surveillance in the region. Zenz, who has closely watched Xinjiang’s government spending on security personnel and systems, said this year’s investment in information technology transfer, computer services, and software will be five times the 2013 level. The growth in the security industry there reflects the state-backed surveillance boom, he said.

He noted that a budget line item for creating a “shared information platform” appeared for the first time this year. The government has also hired tens of thousands more security personnel.

Armed police, paramilitary forces, and volunteer brigades stand on every street in Kashgar, stopping pedestrians at random to check their identifications, and sometimes their cell phones, for banned apps like WhatsApp as well as VPNs and messages with religious or political content.

Other equipment, like high-resolution cameras and facial recognition technology, is ubiquitous. In some parts of the region, Uighurs have been made to download an app to their phones that monitors their messages. Called Jingwang, or “web cleansing,” the app scans for “illegal religious” content and “harmful information,” according to news reports.

Quelle: <a href="This Is What A 21st-Century Police State Really Looks Like“>BuzzFeed