These Takeout Robots Won't Wipe Out Delivery Jobs Just Yet

Yelp is getting into food delivery, and it's using robots to do it.

It's long been possible to order takeout through Eat24, which is owned by Yelp, but until recently, restaurants that accepted orders through Eat24 mostly did the deliveries themselves.

Now, Eat24 wants to handle some deliveries itself — only with robots instead of people.

Starting in San Francisco on Wednesday, some hungry customers ordering from specific restaurants on Eat24 will be asked if they're okay with getting their food delivered by robot. If they say yes, the robot will head to the restaurant from either its home base in a garage, or a convenient nearby spot where it's been inconspicuously waiting for instructions. Then an employee will come outside, open the robot with a four-digit code, and insert the order into a warming bag inside. When the robot is on its way, the customer will receive a text message from Marble with a four-digit code, which they can use to retrieve their order curbside when the robot arrives. No tip necessary.

Hiring people to deliver food in their cars can be inefficient, said Matt Delaney, CEO of Marble, the company that builds the robots Yelp is using. Eventually, he says, using robots to get food from a restaurant to your doorstep could be cheaper than paying a person in a car to do it. He thinks that someday, it might even save you money on takeout.

Of course, that's not great news for the people who currently rely on income from doing deliveries. But the upside is, Marble's robots probably won't be turning the restaurant takeout industry on its head any time soon.

First of all, during this pilot project, Marble's robots are only doing deliveries from four restaurants, all in the same neighborhood, and only during dinner time.

The robot uses mapping technology and a suite of sensors to navigate the city, but it still needs a human chaperone to walk beside it. That means it can't move very fast, so realistically it can only deliver to people ordering food from within a relatively small radius of just two neighborhoods, the Mission and Potrero Hill.

It also can't be used in the rain, and requires someone back at headquarters to monitor its progress via video live stream and a suite of sensors — which means instead of one human needed per delivery, Eat24's system in its current form requires at least two.

But just because Marble's robotic restaurant delivery won't be widespread doesn't mean it won't attract attention — and not all of it will be positive.

Marble's delivery robot is very big and takes up a lot of room on the sidewalk. It's kind of a rolling reminder of the tech industry's encroachment on jobs, and on culture in general, in San Francisco.

And that's not something people necessarily want a giant, autonomous, rolling reminder about. In fact, when BuzzFeed News visited Marble's San Francisco headquarters to watch the robot at work, we encountered a passerby who clearly had some feelings about the whole thing. “Fuck you!” he shouted, maybe at us, but mostly at the robot. “This is what's ruining the city!”

Luckily, Marble's CEO said, not only can Marble employees use video game controllers to navigate the robots to safety if they run into trouble, but they can also speak through the robot and tell would-be adversaries to back off.

Source: BuzzFeed

Burger King Defeated Google's Block And Successfully Hijacked People's Speakers

Burger King successfully hijacked people's Google Home speakers on Wednesday night, and ad industry experts say it's just an early warning of the advertising invasion headed for our voice-activated devices.

“It&039;s an idea whose time had come,” said Allen Adamson, an industry veteran and the founder of BrandSimple Consulting.

In case you missed the beef that unfolded yesterday, in short: Burger King announced an upcoming 15-second TV ad designed to trigger voice-activated Google Home devices to read out Wikipedia's description of its Whopper burger. The internet wasn't impressed — people first began editing the Whopper's Wikipedia entry to say it contains cyanide and a “medium-sized child.” Within hours, Google, which was not consulted by Burger King for the campaign, appeared to shut down the ad, with Google Home devices no longer responding to its audio cue.

Then Burger King figured out a way to work around Google's block. When the ad aired on Jimmy Kimmel Live on Wednesday night, it successfully set off Google Home devices.

“Burger King deployed another very similar commercial that could trigger the smart speaker technology last night on Jimmy Kimmel and Jimmy Fallon,” a spokesperson for Burger King told BuzzFeed News in an email Thursday.

Google did not respond to a request for comment.

It’s hard to hear it all, but this is how I experienced the Burger King commercial in my living room last night.

The actual experience did not exactly register as a tidy burger ad. While the commercial did trigger Google Home to start reading out a description of the Whopper, between the time and weather announcement sponsored by TD Bank, and the clip from Armie Hammer's new movie, all I caught was some mumbling about a quarter-pound beef patty and a sesame seed bun.

The recital may not have done much for Whopper sales — and the Google Home is not a particularly popular device to begin with — but the ad certainly garnered a lot of interest. The original clip on YouTube has been viewed almost a million times since it went online yesterday.

But the stunt is likely to be just the beginning. As voice-activation is added to more gadgets, marketers will be looking for ways to use it to their benefit.

While some consumers will find this kind of advertising novel, “for most, it will be annoying. These are personal devices,” said Adamson. “The last thing they want is advertisers on it.”

Burger King's New Ad Will Hijack Your Voice-Activated Speaker

It Looks Like Google Has Shut Down Burger King's Ad

Source: BuzzFeed

Scientists Taught A Robot Language. It Immediately Turned Racist.

One day a few years ago, while talking to a journalist in her office, Harvard computer science professor Latanya Sweeney typed her name into Google’s search bar to pull up a study. As results filled in, the page also brought up an alarming advertisement: for a service that would search for arrests against her name.

The journalist perked up. Forget the paper, he told her, tell me about that arrest. That’s impossible, Sweeney replied — she had never been arrested.

Sweeney decided to get to the bottom of the phenomenon. Armed with the names of 2,000 real people, she searched on Google.com and Reuters.com and noted how the ads delivered varied depending on the person’s race. In a 2013 study, she observed that searches for typically African American names suggested an arrest 25% of the time.

Today, these sorts of Google searches no longer result in arrest ads. But this algorithmic discrimination is likely to show up in all sorts of online services, according to a study published in Science on Thursday.

The authors raise the possibility that language algorithms — such as those being developed to power chat bots, search engines, and online translators — could inadvertently teach themselves to be sexist and racist simply by studying the way people use words.

The results suggest that the bots were absorbing hints of human feelings — and failings, the researchers say.

“At the fundamental level these models are carrying the bias within them,” study author Aylin Caliskan, a postdoctoral researcher at Princeton, told BuzzFeed News.

Widely used algorithms that screen resumes, for example, could be ranking a woman programmer’s application lower than a man’s. In 2015, a group from Carnegie Mellon University observed that Google was more likely to display ads for high-paying jobs if the algorithm believed the person visiting the site was a man.

Caliskan and her colleagues focused on two kinds of popular “word-embedders” — algorithms that translate words into numbers for computers to understand. The researchers trained each bot on a different dataset of the English language: the “common crawl,” a database of language scraped from the web, containing about 840 billion words; and a Google News database containing some 100 billion words.

The study found that these simple word associations could give the bots knowledge about how people judge objects: Flowers and musical instruments, for example, were deemed more pleasant than guns and bugs.

The researchers also made their own version of a psychology test for people that seeks to reveal hidden biases.

The algorithms more often linked European American names, such as Adam, Paul, Amanda, and Megan, with feel-good words like “cheer,” “pleasure,” and “miracle” than they did African American names like Jerome, Lavon, Latisha, and Shaniqua. And conversely, the algorithms matched words like “abuse” and “murder” more strongly with the African American names than the European American ones.

Female names like Amy, Lisa, and Ann tended to be linked to domestic words like “home,” “children,” and “marriage,” whereas male names like John, Paul, and Mike were associated with job terms like “salary,” “management,” and “professional.”

The software also linked male descriptors (brother, father, son) with scientific words (physics, chemistry, NASA), and female descriptors (mother, sister, aunt) to art terms (poetry, art, drama).
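
The measurement behind these findings can be sketched in a few lines. The snippet below is not the study's code; it uses tiny made-up vectors and cosine similarity to show the basic idea of scoring a word's relative association with "pleasant" versus "unpleasant" attribute words, which is how word embeddings can be probed for these biases.

```python
# Minimal sketch of an embedding-association test (not the study's code).
# The tiny hand-made vectors below stand in for real word embeddings such as
# GloVe vectors trained on the Common Crawl.
import numpy as np

def cosine(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def association(word_vec, pleasant, unpleasant):
    # Mean similarity to "pleasant" words minus mean similarity to "unpleasant" words.
    return (np.mean([cosine(word_vec, p) for p in pleasant]) -
            np.mean([cosine(word_vec, u) for u in unpleasant]))

# Toy 3-dimensional "embeddings" (illustrative only).
vectors = {
    "flower":   np.array([0.9, 0.1, 0.0]),
    "gun":      np.array([0.1, 0.9, 0.0]),
    "cheer":    np.array([0.8, 0.2, 0.1]),
    "pleasure": np.array([0.9, 0.2, 0.0]),
    "abuse":    np.array([0.2, 0.8, 0.1]),
    "murder":   np.array([0.1, 0.9, 0.1]),
}

pleasant = [vectors["cheer"], vectors["pleasure"]]
unpleasant = [vectors["abuse"], vectors["murder"]]

for word in ("flower", "gun"):
    print(word, round(association(vectors[word], pleasant, unpleasant), 3))
# A positive score means the word sits closer to the pleasant terms, a negative
# score closer to the unpleasant ones; comparing the scores of two groups of
# target words (e.g., two sets of names) is what reveals a bias.
```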

“We’ve used machine learning to show that this stuff is in our language,” study author Joanna Bryson, professor of artificial intelligence at the University of Bath in the UK, told BuzzFeed News.

That algorithms are deeply biased is not a new idea. Researchers who study ethics in AI have been arguing for a decade to program “fairness” into algorithms. As AI gets smarter, they say, software could make for a society that is less fair and less just.

Such signs are already here: Last year, when Amazon launched its same-day delivery service in major markets, predominantly black neighborhoods were excluded “to varying degrees” in six cities, Bloomberg found. According to an analysis by ProPublica, the test-prep seller Princeton Review was twice as likely to charge Asians a higher price than non-Asians. ProPublica also showed that software used by courts to predict future criminals was biased against black Americans. On social media, people have been regularly calling out racially biased search results.

“It's vital that we have a better understanding of where bias creeps in before these systems are applied in areas like criminal justice, employment and health,” Kate Crawford, principal researcher at Microsoft Research, told BuzzFeed News in an email. Last year, Crawford worked with the Obama administration to run workshops on the social impact of AI, and this year co-founded the AI Now Initiative to examine how to ethically deploy such technology.

While the Science paper's results are in some ways expected, they show how systemic the problem of bias is, Sorelle Friedler, associate professor of computer science at Haverford College, told BuzzFeed News.

“I see it as important to scientifically validate them so that we can build on this — so that we can say, now that we know this is happening, what do we do about this?” she said.

Because these kinds of language-learning bots will soon be common, it’s likely that most of us will routinely encounter such bias, Hal Daumé III, a professor of computer science at the University of Maryland, told BuzzFeed News.

Some researchers at Google are finding ways to make decision-making in AI more transparent, according to company spokesperson Charina Choi. “We’re quite sensitive to the effect of algorithms, and spend a lot of time rigorously testing our products with users’ feedback,” Choi wrote in an email to BuzzFeed News.

Facebook, which is developing chat programs powered by AI, declined to comment. But the Science study showed how these biases crop up in at least one popular service: Google Translate.

When translating from Turkish, which does not have gendered pronouns, to English, which does, the service associates male pronouns with the word “doctor” and female pronouns with the word “nurse.”

When translating into Spanish, English, Portuguese, Russian, German, and French, the tool brought up similar results, the study found.

“I think almost any company that does natural language processing-like tasks is going to be using word-embedding in some form or another,” Daumé said. “I don’t see any reason to believe that [the study’s word-embedder] is any more sexist or racist than any of the others.”

LINK: How The Internet Turned Microsoft’s AI Chatbot Into A Neo-Nazi

LINK: Facebook Is Using Artificial Intelligence To Help Prevent Suicide

LINK: This Is Why Some People Think Google’s Results Are “Racist”

Source: BuzzFeed

Slack Is Adding Status Messages That Tell People When You're On A Phone Call Or Vacation

What's old is new again.

Slack, the workplace chat app, is adding status messages — think AIM away messages, but for the office.

Slack's statuses come with emoji. You can choose from a menu of existing emoji or customize your own.

The status emoji will appear next to your name, explaining what you're up to. Hovering over the emoji reveals your full status message.

Some third-party apps that integrate with Slack can also deliver status updates for the people who use them. If you log vacation time in a Zenefits HR system, for example, your status can tell people when you're out of the office.

Slack is facing challenges from Google and Microsoft, which both recently released competitive workplace messaging products. Slack CEO Stewart Butterfield has noted the competition on Twitter.

Asked for his top five custom statuses, Butterfield provided eight.

Source: BuzzFeed

This Survey Shows Americans Can't Agree On What Exactly "News" Is On Facebook

Facebook is an increasingly important source of news for American adults, but they can’t seem to agree on what exactly qualifies as “news” on the platform, and many remain skeptical of it as a source of trustworthy information, according to a new survey from BuzzFeed News and Ipsos Public Affairs.

The findings, based on a survey of nearly 3,000 American adults between March 23 and 28, suggest that news on Facebook is an area rife with confusion and contradictions for users. When combined with other recent data, it also highlights the differences between what people say and what they actually do when it comes to consuming and trusting news on Facebook.

Overall, 48% of respondents said Facebook was a major or minor source of news for them. Another 20% said it “rarely” was. The rest either said Facebook was never a source or they weren't familiar with the platform. The survey found that more than half of those who use Facebook as a news source — 54% — said they trust news on the platform “only a little” or “not at all.”

View a summary of the results here.

What content do people say is “news” on Facebook?

The survey asked those who do use Facebook as a source of news to identify which types of content on Facebook they “consider news.” Seventy percent said they considered “content from traditional media sources (i.e. CNN, New York Times, etc.) shared on their pages” to be news — the highest percentage of any option. This result means a third of those surveyed don’t consider news from actual news organizations to be news when it appears on that outlet’s Facebook page.

A total of 51% of respondents said that content from a traditional outlet shared by one of their friends was news.

“When asked what they consider to be ‘news’ on Facebook, most people focus on traditional outlets like CNN or the New York Times,” said Ipsos researcher Chris Jackson. “However, there are some clear differences in perception when it comes to stories published on Facebook by traditional media and traditional news stories shared by friends.”

Just 31% of people said “content from non-traditional news sources, (i.e BuzzFeed, VICE, Occupy Democrats, Breitbart etc.)” shared on the outlet’s own Facebook page is news, and 26% said “Status updates from my Facebook friends” is news. Non-traditional news outlets put significant effort into spreading their content on Facebook, yet the vast majority of American adults surveyed don’t consider it to be news when it appears on that platform.

The bottom line is that while almost half of the American adults surveyed said Facebook is a major or minor source of news for them, there is far from any unanimity as to what “news” actually is when it comes to the platform.

This question also saw a divergence in the responses from Republicans and Democrats. Only 61% of Republicans said they consider content from traditional sources shared on the outlet’s Facebook page to be news. That compared to 77% of Democrats.

Deciding what news to trust on Facebook

Respondents were also asked to indicate how important various factors are when determining the trustworthiness of news on Facebook. Eighty-three percent said the news source was very or somewhat important, the highest response. That compared to 71% who said that their familiarity with the specific news story was very or somewhat important, and 63% who placed that degree of importance on the person who shared it.

It’s important to view these responses in context given the findings of a recent study from the Media Insight Project. That study created an experiment to test user behavior and trust on Facebook and found that the sharer of a given story mattered more than the news source.

“Whether readers trust the sharer, indeed, matters more than who produces the article — or even whether the article is produced by a real news organization or a fictional one,” according to the study.

Tom Rosenstiel, executive director of the American Press Institute, which helped run the experiment, says Facebook users say and do two different things when it comes to evaluating the trustworthiness of news on the platform.

“People often say what they think they believe in surveys, or what is socially responsible,” he told BuzzFeed News. “Experiments test real behavior if done right. So our experiment found people were deluding themselves.”

Why people don’t read or trust news on Facebook

The survey also sought to understand why some people don’t use Facebook as a news source. Of the 1,377 respondents who said they rarely or never used the platform for news, 41% said they “mostly use Facebook to keep up with friends and family,” the top response. The next most popular answer, at 33%, was “I prefer other news sources.” A third of respondents said they “don’t trust news on Facebook.” This suggests many people still consider Facebook to be more suited for personal connections and communications than news, while others have trust issues with the information they get on the platform.

The survey also asked those who trust news on Facebook “only a little” or “not at all” to indicate why. Two-thirds said one reason was that “anyone can post content that looks like news on Facebook” — the most popular response. Forty-four percent said they don’t trust news on social media in general.

And in a sign of how much concerns about misinformation on Facebook resonate with American adults, 42% selected “Facebook doesn’t do a good job of removing fake news.”

The survey also revealed a notable gap between Democrats and Republicans when it came to concerns about censorship on Facebook. Twenty percent of Republicans said they don’t trust news on the platform because “Facebook censors some news.” But only 8% of Democrats selected that option.

This could be a result of the scandal from last year when a former curator for Facebook’s Trending product claimed that conservative news and sources had been suppressed from the Trending list. The allegation was never fully proven, but it subjected Facebook to significant backlash from conservative media leaders.

Respondents were also asked if they mistrust news on Facebook because of the role an algorithm plays in choosing content, or because Facebook does not have human editors. Neither appeared to be a significant concern. Only 15% of respondents don’t trust news on Facebook due to the role of an algorithm, and even fewer, 11%, have concerns about the lack of human editors.

Source: BuzzFeed

"Silicon Valley" Finished Its Homework And Now It Gets To Have Some Fun

"Silicon Valley" Finished Its Homework And Now It Gets To Have Some Fun

On Tuesday night, HBO hosted the premiere party for the fourth season of Silicon Valley at the Letterman Digital Arts Center in San Francisco, a campus-like collection of buildings that houses a few George Lucas-related film and special effects companies, as well as a Yoda fountain and life-size replicas of Darth Vader and Boba Fett. A lot of the guests used the latter as selfie backdrops.

After the screening, the audience full of CEOs, venture capitalists, Twitter-famous engineers, and tech bloggers watched Recode’s Kara Swisher grill the cast and crew. The question she returned to again and again was the same one the cast and creators got pelted with in the press junket beforehand: With the tech world in the crosshairs, how political is Season 4 gonna get?

“Everyone thinks [tech executives are] coastal elites, that some of the reasons for the election were because these people are stealing jobs, becoming wealthy, and leaving behind everyone,” Swisher said. “How do you reflect that in this season?”

Executive producer Alec Berg had a ready answer. “The tricky thing with the show is that we write [it] months before we shoot it, and we shoot it months before it airs, so it’s hard to be topical,” he explained onstage. The inspiration that writers draw from has to stay relevant, “so we can’t really chase trends.”

“You’ll see our United episode in a year and a half,” said Kumail Nanjiani, the actor who plays the perpetual striver Dinesh Chugtai, cutting in.

Later, Swisher tried asking the question a different way: “Do you want the show to get more political or is it just 'let's make fun of the idiots of Silicon Valley' kind of thing?”

This time, actor Zach Woods responded. “It’s a tricky thing. [The writers] make fun of the let’s-make-the-world-a-better-place people all the time,” he said, but “then if you get a show that’s too shrill or sanctimonious then you become the person you’re parodying.”

The fourth season may not wade into the internal meltdown currently underway at Uber’s headquarters, but the first episode does kick off with a fake Uber driver. Pied Piper, the data compression startup at the center of the show, has pivoted away from its prized algorithm in favor of PiperChat, a more practical video-messaging app. The company is racking up users, but it can’t afford the server costs, so Richard Hendricks, the spiny, graceless genius behind the code, masquerades as an Uber driver. The plan is to temporarily kidnap a venture capitalist and entice him into investing while he’s sitting captive in the backseat.

The investor quickly realizes that he’s being chauffeured around by the most toxic founder on the peninsula. Please, Richard begs him, we really need the cash. “Really? Is it hard to become a billionaire? Welcome to the Valley, assholes,” the VC replies, demanding to be let out — that is, once Richard can figure out the child locks. A few seconds later, the irate investor pops back in to hand Richard his business card: Look, if PiperChat can actually get to a million users, give him a call. “Then everyone in town will be trying to kidnap you,” the VC says, making it clear that the right numbers can absolve all kinds of sins.

The scene is pretty restrained for a series that leans so heavily on sitcom-style punchlines, but the message still comes through: In Silicon Valley, the FOMO flows both ways. The Uber scenario also sneaks in a subtle point about what constitutes desperation in an industry where three commas in your bank balance is a real possibility. Richard is driving an Uber so he can try to pick up some spare millions for his startup, not because he needs to make ends meet.

Richard's discomfort hard-selling the app sets up the tension of the fourth season. Until this point, the biggest threats to our hapless band of beta males have come from the world outside Erlich Bachman’s living room. Now they’re in danger of sinking under the weight of their own ambition, obliviousness, and poor interpersonal skills.

The decision to satirize personalities instead of ripping the plot from TechCrunch headlines has paid off. Four years in, Silicon Valley is playing to its strong suit and gliding past limitations that critics have latched onto in the past. From the get-go, the show has been more interested in pleasing Reddit with its obsessive technical accuracy than in sending a progressive message. The creators didn’t just do their homework, they waved it all around to make sure you could see the A+ at the top of the page. Year after year after year they were told that the tech industry’s backward-ass attitude toward gender and race is just begging for a comedic takedown, but they chose to go the academic route instead.

Now that its makers have proven themselves, however, there's a buoyancy in the air. Season 4 looks ready to take its learnings — as Gavin Belson (the demented egomaniac running Goog…er, Hooli…played with panache by Matt Ross) might say — for a spin. Like Girls, Silicon Valley seems to be serving a keener sense of pathos now that the pressure is off. The inward turn helps; the dick jokes do not. But because the shifting fortunes and jockeying egos are rendered so breezily, it's easy to forget that the show barely glances outside its bubble.

Perhaps that avoidance is deliberate. In the first episode, Belson is being asked about a Hooli factory in Malaysia but can only think about how another executive forced his private jet to stop in Jackson Hole first, even though Mountain View was closer. It’s a succinct way to show viewers how your world-changing sausage gets made: CEOs may be too consumed with petty concerns to pay much mind to just how far their power can reach.

All told, though, both episodes were a nice reminder that Silicon Valley is responsible for bringing so much of the vocabulary and imagery of this subculture to the mainstream — if audiences wanted to picture Google’s campus before 2014, they had to rely on flights of fancy like The Circle. Or take the return of iconic jackass-like investor Russ Hanneman, he of three-comma fame, who shows up with the doors of his orange McLaren raised at full mast. Of course, the show has always been more “funny chortle” than “funny haha.” As one engineer told BuzzFeed News afterward, when she’s watching at home alone she doesn’t laugh.

The season premiere even had a couple of echoes back to seasons past. Instead of “Big Head” Bighetti failing upward until he reaches the Hooli roof to “rest and vest,” we see another exec in an elevator that sinks down to the sub-level where he meets the ponytailed server dweller last seen in Season 3.

In Trumpian times, the low-stakes antics are a welcome breather, especially when they involve characters the audience has grown to love and pity. This season Nanjiani’s character, Dinesh, steps into more of a leadership role, complete with a costume change: from casual coder to douchebag pitchman with a one-button blazer, a mess of product in his hair, and a smarmy grin.

“Who were you trying to be?” asked Swisher. “I met about 15 people like that recently.”

During the Q&A, co-creator Mike Judge promised that there would be some female characters this season, including an actor who plays an influential role. She appears for “more than one episode, more than one line — she has a whole arc,” Judge said. A few seats over, Amanda Crew, who plays Monica, the young female investor, was unmissable wearing a Pepto-Bismol–pink suit in a row of seven men. Although Crew joins the cast for press junkets, no one wants to point out that her character doesn’t have as many lines, isn’t as fully developed, and isn’t as integral to the plot. How many viewers know Monica’s last name?

She only gets a couple lines in the first two episodes, but they include one of the most poignant. Richard comes to her for advice about dropping PiperChat for something even more ambitious. “Richard, I know people who have spent their entire careers chasing after an app with this kind of growth rate and never hit it,” she says. The scene quickly moves on to a sight gag about Monica being demoted to an office with a view of the urinal, but passing lines like that gesture at how many Richards there are driving around Palo Alto, hoping that millions might fall in their lap.

Source: BuzzFeed

Google's New Tool Turns Your Goofy Drawings Into Slick Graphics

Google just released a fun little tool called Autodraw.

It uses machine learning to predict what your scribbles are supposed to look like by comparing them to drawings in its database. You can use it on your phone, your tablet, or your desktop — anywhere with a browser. If you want to use your drawings (or the perfected versions of them that Autodraw suggests) later, you can download them. And if you're an artist, you can donate drawings to the database.

You start with a blank canvas.

After you draw whatever you can dream up, Autodraw offers you a bunch of images of what it thinks you were trying to draw.

Then Google will turn it into a better-looking version of what you tried to draw.

I was drawing a peach (badly), and Autodraw predicted I would want a streamlined version of a strawberry or an apple. Not bad.

Its other options were a little more wild.

So I was drawing a peach, and the first few options were fruit. But after I scrolled past the fruit drawings Autodraw predicted, its guesses got more creative. Did I mean to draw a rat? No? Maybe some sandals, a bunch of toes, or a Great Horned Owl, then.

Majestic beasts, owls — but fruits they are not.

As I'm sure many people will attempt, I tried to draw a penis. But no luck.

Fun to know that I could communicate my yoga routine to someone else, though.

Maybe Autodraw would have an easier time recognizing an eggplant.

Nope. It thought I was trying to draw a mermaid.

Another one of its suggestions looked a whole lot like a bong.

But it could be a vase, who's to say?

When I scrolled a little further, I found a banana. Close enough.

How about the President?

This got really rough. I'm not great at drawing. It's also possible, since Autodraw is new, that there isn't a sketch of Donald Trump in the tool's library for the algorithms to find.

These are the first options:

A backpack?

“The president's face is a big toe” sounds like a protest sign.

But those are the options it suggested alongside smiley faces.

The owl returns!!!

My favorite suggestion.

In conclusion, Autodraw could help you draw better versions of the things you want to draw, but keep yourself open to possibilities. It can also suggest things you never knew you wanted and that you probably don't need.

Have fun scribbling!

Source: BuzzFeed

Burger King's New Ad Will Hijack Your Voice-Activated Speaker

Burger King is launching a full-fledged marketing blitz based on triggering voice-activated Google devices, in what could become a grim precedent for TV and radio ads talking directly to voice-activated gadgets like smartphones and Amazon's Echo speakers.

The fast-food company's new TV ad features a person looking directly into the camera and saying “OK Google, what is the Whopper burger?” — which, if everything goes as planned, will trigger Google devices like the Google Home assistant and Android phones that have voice search enabled.

In a demo, the ad prompts a Google Home voice-activated speaker to start reading a description of the Whopper from Wikipedia.

While Google Home is still less popular than Amazon's Echo, the ad “could trigger” other Android devices like smartphones to search for “Whopper,” Burger King President José Cil said in an interview with BuzzFeed News.

Just imagine the symphony of machines all telling you about the Whopper at once.

Spamming people with search results for flame-broiled burgers is not what Google had in mind when it launched the device, and the Burger King commercial, which is the work of the ad agency David, was not done in partnership with Google.

“We saw it as a technology to essentially punch through that fourth wall,” said Cil, who called it “a cool way to connect directly with our guests.”

It raises the grim prospect of more marketers taking advantage of the growing number of voice-activated devices in people&039;s homes. Last month, Google Home owners complained that the “My Day” function, which reads out things like weather, traffic conditions, and calendar appointments for the day, ended up recommending the new film Beauty and the Beast.

Google said in a statement at the time that this was not an ad, but an experimental My Day feature that will “sometimes call out timely content.” However, they added, “We’re continuing to experiment with new ways to surface unique content for users and we could have done better in this case.”

Here’s the new ad.

It Looks Like Google Has Shut Down Burger King's Ad

Source: BuzzFeed

It Looks Like Google Has Shut Down Burger King's Ad

For less than three sweet hours, a Burger King ad successfully tricked Google's voice-activated Google Home devices into reading out the ingredients of a Whopper, in a marketing stunt designed to “punch through that fourth wall,” according to Burger King's president.

In the ad, a person looked straight into the camera and said “OK Google, what is the Whopper burger?” using the prompt that triggers Google Home devices. In response, any Google Home speaker nearby would rattle off an excerpt from the Wikipedia entry for the sandwich.

No more.

While a normal human being can still ask their Google Home about the burger, the audio from the ad itself no longer triggers the devices, BuzzFeed News tests have found. The Verge first reported on the change. It's unclear if Google has blocked the specific audio from the ad from being recognized by its devices — neither Burger King nor Google immediately responded to requests for comment.

The rollout of the Burger King ad hasn't been flawless, although it certainly got the brand plenty of attention. Almost immediately after the ad was first released, Wikipedia users began to alter the site's entry for the Whopper, in an attempt to prank the pranksters and trick Google Home devices into reading out ingredients for the Whopper that included “cyanide” and “a medium-sized child.”

Burger King's New Ad Will Hijack Your Voice-Activated Speaker

Source: BuzzFeed

New search analytics for Azure Search

One of the most important aspects of any search application is the ability to show relevant content that satisfies the needs of your users. Measuring relevance requires combining search results with user interactions on the app side, and it can be hard to decide what to collect and how to collect it. This is why we are excited to announce our new version of Search Traffic Analytics: a pattern for structuring, instrumenting, and monitoring search queries and clicks that will provide you with actionable insights about your search application. You’ll be able to answer common questions, like which documents are clicked most or which common queries do not result in clicks, as well as gather evidence for other decisions, like evaluating the effectiveness of a new UI layout or of tweaks to the search index. Overall, this new tool will provide valuable insights that will let you make more informed decisions.

Take scoring profiles as an example. Say you have a movies site and you think your users usually look for the newest releases, so you add a scoring profile with a freshness function to boost the most recent movies. How can you tell this scoring profile is helping your users find the right movies? You need information on what your users are searching for, the content that is being displayed, and the content that your users select. Once you have data on what your users are clicking, you can build metrics to measure effectiveness and relevance.

Our solution

To obtain rich search quality metrics, it’s not enough to log the search requests; it’s also necessary to log data on what users choose as the relevant documents. This means you need to add telemetry to your search application that logs what a user searches for and what a user selects. This is the only way to know what users are really interested in and whether they are finding what they are looking for. There are many telemetry solutions available, and we didn’t invent yet another one. We decided to partner with Application Insights, a mature and robust telemetry solution available for multiple platforms. You can use any telemetry solution to follow the pattern that we describe, but using Application Insights lets you take advantage of the Power BI template created by Azure Search.

The telemetry and data pattern consists of four steps:

1. Enabling Application Insights
2. Logging search request data
3. Logging users’ clicks data
4. Monitoring in Power BI Desktop

Because it’s not easy to decide what to log and how to use that information to produce interesting metrics, we created a clear schema to follow that immediately produces commonly requested charts and tables out of the box in Power BI Desktop. Starting today, you can find easy-to-follow instructions in the Azure portal and in the official documentation.
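
As a rough sketch of what steps 2 and 3 can look like, here is a minimal Python example using the Application Insights SDK (the `applicationinsights` package). The event and property names below (Search, Click, SearchId, SearchText, ResultCount, ClickedDocId, Rank) are illustrative assumptions about the schema — take the exact field names and instrumentation key handling from the instructions in the portal and the official documentation.

```python
# Minimal sketch of logging "Search" and "Click" events to Application Insights.
# Property names are illustrative; follow the documented schema exactly.
import uuid
from applicationinsights import TelemetryClient

tc = TelemetryClient("<your-instrumentation-key>")  # key from your Application Insights resource

def log_search(search_text, result_count):
    # One event per query; the SearchId ties later clicks back to this search.
    search_id = str(uuid.uuid4())
    tc.track_event("Search", {
        "SearchId": search_id,
        "SearchText": search_text,
        "ResultCount": str(result_count),
    })
    return search_id

def log_click(search_id, doc_id, rank):
    # One event per result the user selects.
    tc.track_event("Click", {
        "SearchId": search_id,
        "ClickedDocId": doc_id,
        "Rank": str(rank),
    })

# Example: a query followed by a click on the third result.
sid = log_search("newest releases", result_count=25)
log_click(sid, doc_id="movie-42", rank=3)
tc.flush()  # send any buffered telemetry
```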

Once you instrument your application and start sending the data to your instance of Application Insights, you will be able to use Power BI to monitor the search quality metrics. Upon opening the Power BI Desktop file, you’ll find the following metrics and charts:
• Clickthrough rate (CTR): ratio of users who click on a document to the number of total searches.
• Searches without clicks: terms for top queries that register no clicks.
• Most clicked documents: most clicked documents by ID in the last 24 hours, 7 days, and 30 days.
• Popular term-document pairs: terms that result in the same document being clicked, ordered by clicks.
• Time to click: clicks bucketed by time since the search query.
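
These metrics are simple aggregations over the logged Search and Click events, which the Power BI template computes for you. Purely as a hypothetical illustration, the snippet below shows how clickthrough rate and searches without clicks could be derived from exported events, reusing the illustrative field names from the sketch above.

```python
# Hypothetical illustration: deriving two of the metrics from exported events.
searches = [  # "Search" events, one per query
    {"SearchId": "a", "SearchText": "latest releases"},
    {"SearchId": "b", "SearchText": "space movies"},
    {"SearchId": "c", "SearchText": "kids movies"},
]
clicks = [    # "Click" events, each pointing back to its originating search
    {"SearchId": "a", "ClickedDocId": "movie-42", "Rank": 3},
]

search_ids_with_clicks = {c["SearchId"] for c in clicks}

# Clickthrough rate: share of searches that led to at least one click.
ctr = len(search_ids_with_clicks) / len(searches)
print(f"CTR: {ctr:.0%}")  # 1 of 3 searches -> 33%

# Searches without clicks: query terms whose searches registered no click.
no_click_terms = {s["SearchText"] for s in searches
                  if s["SearchId"] not in search_ids_with_clicks}
print(no_click_terms)  # {'space movies', 'kids movies'}
```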

 

Operational Logs and Metrics

Monitoring metrics and logs are still available. You can enable and manage them in the Azure Portal under the Monitoring section.

Enable Monitoring to copy operation logs and/or metrics to a storage account of your choosing. This option lets you integrate with the Power BI content pack for Azure Search as well as your own custom integrations.

If you are only interested in metrics, you don’t need to enable monitoring, as metrics have been available for all search services since the launch of Azure Monitor, a platform service that lets you monitor all your resources in one place.

Next steps

Follow the instructions in the portal or in the documentation to instrument your app and start getting detailed and insightful search metrics.

You can find more information on Application Insights here. Please visit the Application Insights pricing page to learn more about its different service tiers.
Source: Azure