Facebook Eliminates Human Trending Topic Editors And Replaces Them With An Algorithm

A Facebook employee walks past a sign at Facebook headquarters in Menlo Park, California, on March 15, 2013.

Jeff Chiu / AP

Facebook on Friday changed its controversial trending news section to be based more heavily on algorithms, eliminating the editors who had been curating the stories in the process.

The social network explained the changes in a statement, saying a more algorithmically driven process “allows us to scale Trending to cover more topics and make it available to more people globally over time.”

“In this new version of Trending we no longer need to draft topic descriptions or summaries, and as a result we are shifting to a team with an emphasis on operations and technical skill sets, which helps us better support the new direction of the product,” a Facebook spokesperson said.

BuzzFeed News confirmed that, as a result of the change, Facebook also eliminated the positions for people who had previously run the trending news section.

Quartz reported that the team — which included between 15 and 18 people contracted through an outside company — were laid off Friday and given severance equal to what they would have earned through September, plus two weeks.

Facebook has generated significant controversy in recent months over allegations that its trending section had a liberal bias. In May, Gizmodo published a report citing former “news curators” who said they were instructed to inject stories into the trending section, even if those stories weren’t actually trending, while also suppressing other more conservative content.

The report prompted the US Senate to demand answers from Facebook over the alleged bias, after which the company published internal “trending guidelines” and promised to improve training, terminology, and practices for news curation.

In Friday’s announcement, Facebook explained that users visiting the new trending section will now see a “simplified topic,” along with information about who is discussing that topic. Hovering over or clicking on the link will bring up more information.

Facebook said Friday that articles in the trending section surface “based on a high volume of mentions and a sharp increase in mentions over a short period of time.” The company added that while it did not find evidence of “systematic bias” earlier this year, the new changes to the product “allows our team to make fewer individual decisions about topics.”

“Facebook is a platform for all ideas, and we’re committed to maintaining Trending as a way for people to access a breadth of ideas and commentary about a variety of topics,” the company added.

LINK: Facebook VP Says “No Evidence” Of Political Bias Against Conservatives

LINK: Facebook Publishes Internal “Trending Topics” Guidelines After Bias Claims

Source: BuzzFeed

Weekly Roundup | Docker

Here’s the buzz from this week we think you should know about! We shared a preview of Microsoft’s container monitoring, reviewed the Docker Engine security feature set, and delivered a quick tutorial for getting 1.12.1 running on Raspberry Pi 3. As we begin a new week, let’s recap our top five most-read stories for the week of August 21, 2016:
 
 

Docker security: the Docker Engine has strong security defaults for all containerized applications.

1.12.1 on Raspberry Pi 3: a five-minute guide to getting Docker 1.12.1 running on Raspberry Pi 3, by Docker Captain Ajeet Singh Raina (a quick verification sketch follows this list).

Securing the Enterprise: how Docker’s security features can be used to provide active and continuous security for a software supply chain.

Docker + NATS for Microservices: building a microservices control plane using NATS and Docker v1.12 by Wally Quevedo.

Container Monitoring: Microsoft previews open Docker container monitoring, aimed at users who want a simplified view of container usage and a way to diagnose issues whether containers are running in the cloud or on-premises, by Sam Dean.
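If you follow the security or Raspberry Pi stories above, here is a minimal sketch for checking what your engine actually reports, assuming the Docker SDK for Python (the docker package) is installed and the daemon is reachable through the usual environment variables; it is an illustration to accompany the roundup, not part of the tutorials themselves.

```python
import docker

# Connect to the local daemon using DOCKER_HOST / DOCKER_CERT_PATH from the environment.
client = docker.from_env()

version = client.version()
print("Engine version:", version.get("Version"))   # e.g. "1.12.1" after following the Raspberry Pi guide
print("API version:", version.get("ApiVersion"))

# Docker 1.12+ daemons report their enabled security options (seccomp, apparmor, ...).
info = client.info()
print("Security options:", ", ".join(info.get("SecurityOptions", [])) or "none reported")
```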


The post Weekly Roundup | Docker appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Is This An Ad? Jonathan Cheban And The Whopperito

Welcome to our weekly column, “Is This an Ad?,” in which we strap on our reportin’ hat (it is NOT a fedora, please stop imagining that) and aim to figure out what the heck is going on in the confusing world of celebrity social media endorsements. Because even though the FTC recently came out with rules on this, sometimes when celebrities post about a product or brand on social media, it’s not immediately clear if they are being paid to post about it, got a freebie, just love it, or what.


Instagram: @jonathancheban

THE CASE:

Jonathan Cheban is a former publicist, current entrepreneur, bon vivant, internet troll, and, perhaps most famously, Kim Kardashian’s best friend. Martha Stewart does not know who he is.

In his current career incarnation (Cheban is quick to point out that he hasn’t done PR in years, and now owns between five and 10 companies, depending on what month you ask him), he has some sort of relationship (perhaps as a partial owner?) with a burger joint on Long Island and a lifestyle website called TheDishh.com.

Perhaps because of these new business developments, he’s taken to positioning himself as some sort of culinary expert, referring to himself as the “Foodgōd.” The bar over the “o” is called a macron, and it means the word should be pronounced “foodgoad.”

(I have a theory of how this came about: starting maybe two years ago, Cheban began experimenting with a fairly typical Instagram ploy to gain followers: reposting like-bait photos of decadent desserts or other foods. These were photos he found elsewhere and would caption things like “mmm yum” or about how much he wanted to eat it. He still does some of this sort of stuff, like a recent post where he posted a photo of an ice cream cotton-candy hybrid with the caption, “I need to try this cotton candy ice cream cone immediately …xx Foodgōd.”)

Instagram: @jonathancheban

But we’re not here to talk about ice-cream cotton candy. We’re here to talk about Cheban’s recent post about eating Burger King’s new menu item, the Whopperito.

The Whopperito is fairly straightforward: it’s Whopper filling (with spicier meat) in a burrito tortilla instead of a bun. Nick Gazin, a Vice reporter who recently ate three of these for a review, wrote: “It is my belief that this Whopperito was made to cater to the Jackass generation who want to do gross things on Instagram to show off. I don’t think this was an earnest food invention. I think this is stunt-burgerism created to get press and hashtags.”

THE EVIDENCE:

So, the obvious thing here is that Mr. Cheban used the hashtag #thekingpaidmetodoit. That seems like, obvs, it’s an #ad, right? I mean, he’s saying it right there. OR IS HE?

Here’s the weird part: if you search that hashtag, two posts show up. The other is from 3 weeks before Jonathan’s, from a young fashion and lifestyle blogger named Ria Michelle (I reached out to her to ask if she could confirm she was paid; I did not hear back). The best theory here is that a digital marketing agency convinced Burger King to pay social influencers to post about the Whopperito using the cheeky and winking tag #thekingpaidmetodoit (so transgressive and ironic, right?). And yet… they only found two people to actually use the tag? Sounds like some ad buyer somewhere has some explaining to do.

There’s something more mysterious about the fact that only two people used the tag – it confuses the obvious narrative that this is clearly a paid ad. Was this just a huge failure, or is there something else going on?

Here’s how celebrity endorsements work: companies want someone who will ~align with their brand’s message~. Even if consumers know it’s an ad, that’s OK; it still has to be someone who makes sense. When we see Matthew McConaughey monologuing to a cow in a TV ad for Lincoln cars, we know he’s getting paid, but isn’t there something about it where you’re like “yeah, I could totally imagine he’d drive a Lincoln”? There’s a good brand alignment there.

Cheban’s recent personal branding as “foodgoad” is relevant here: He’s worked to establish himself as an influencer in the world of viral, unhealthy food. Remember what Vice said about the Whopperito, how it was just a social media stunt food? Well, what better way to align a product that is purely a vapid, frivolous trend food designed only to appeal to society’s lowest denominator than with Jonathan Cheban? It’s simply good brand alignment.

THE VERDICT:

UNDETERMINED.

Believe it or not, we couldn’t verify this. BuzzFeed News reached out to Burger King to confirm whether this was a paid endorsement, and they refused to comment. Which… is not a good look for them, since from the FTC’s point of view, it’s the responsibility of the brand to be crystal-clear about paid social media endorsements.

So then we tried to ask Cheban. I’m already blocked by him for posting about how he is rude to fans on social media, so fellow BuzzFeed reporter Jess Misener asked:

Cheban didn’t reply, and promptly blocked Jess on Twitter.

WHAT ARE YOU HIDING, JONATHAN?

Since both Cheban and Burger King were stonewalling me, I went to some experts in the field of celebrity endorsements to find out their opinions on this.

According to Stefania Pomponi, founder and president of the Clever Girls influencer marketing agency:

I am 99.9% positive Jonathan Cheban’s Whopperito post is a paid sponsorship. He is being coy about disclosing his paid endorsement, which is in direct violation of FTC guidelines, which state that standardized hashtags like #ad or #sponsored be used. The guidelines further explain that disclosure hashtags must have a clear meaning to the audience (meaning the audience shouldn’t have to guess if a post is sponsored) and hashtags can’t be abbreviated (e.g., #sp instead of #sponsored). If Cheban wants to be in compliance, he needs to make sure his disclosures … are clearly and easily understood by his audience.

Lucas Brockner, associate director of partnerships and business development at the social media agency Attention:

While nobody loves seeing the #ad, #sponsored, or the somewhat sneaky #sp, it’s part of the FTC guidelines and something we ask all influencers to include in posts. To no surprise, influencers don’t like putting this in their posts, as it can result in negative backlash from their audiences. As a result, and as seen in this example, you’re starting to see more clever ways that influencers are disclosing that they were paid for these types of social promotions. Of course, the more authentic the partnership, the more creative you can be. For example, the idea of using the language “in partnership with” has become a favored term amongst influencers/celebrities and brands when it’s an ongoing content series versus a one-off endorsement.

Dear readers, I have failed you here. Some secrets are too deep, too dangerous, too guarded by the forces of power and money to ever be revealed. Whether or not Jonathan Cheban ate that god-awful meat tube for fun or profit is one of those secrets.

Source: BuzzFeed

How To Save Mankind From The New Breed Of Killer Robots

A very, very small quadcopter, one inch in diameter, can carry a one- or two-gram shaped charge. You can order them from a drone manufacturer in China. You can program the code to say: “Here are thousands of photographs of the kinds of things I want to target.” A one-gram shaped charge can punch a hole in nine millimeters of steel, so presumably you can also punch a hole in someone’s head. You can fit about three million of those in a semi-tractor-trailer. You can drive up I-95 with three trucks and have 10 million weapons attacking New York City. They don’t have to be very effective; only 5 or 10% of them have to find the target.

There will be manufacturers producing millions of these weapons that people will be able to buy just like you can buy guns now, except millions of guns don’t matter unless you have a million soldiers. You need only three guys to write the program and launch them. So you can just imagine that in many parts of the world humans will be hunted. They will be cowering underground in shelters and devising techniques so that they don’t get detected. This is the ever-present cloud of lethal autonomous weapons.

They could be here in two to three years.

— Stuart Russell, professor of computer science and engineering at the University of California, Berkeley

Mary Wareham laughs a lot. It usually sounds the same regardless of the circumstance — like a mirthful giggle the blonde New Zealander can’t suppress — but it bubbles up at the most varied moments. Wareham laughs when things are funny, she laughs when things are awkward, she laughs when she disagrees with you. And she laughs when things are truly unpleasant, like when you’re talking to her about how humanity might soon be annihilated by killer robots and the world is doing nothing to stop it.

One afternoon this spring at the United Nations in Geneva, I sat behind Wareham in a large wood-paneled, beige-carpeted assembly room that hosted the Convention on Certain Conventional Weapons (CCW), a group of 121 countries that have signed the agreement to restrict weapons that “are considered to cause unnecessary or unjustifiable suffering to combatants or to affect civilians indiscriminately” — in other words, weapons humanity deems too cruel to use in war.

The UN moves at a glacial pace, but the CCW is even worse. There’s no vote at the end of meetings; instead, every contracting party needs to agree in order to get anything done. (Its last and only successful prohibitive weapons ban was in 1995.) It was the start of five days of meetings to discuss lethal autonomous weapons systems (LAWS): weapons that have the ability to independently select and engage targets, i.e., machines that can make the decision to kill humans, i.e., killer robots. The world slept through the advent of drone attacks. When it came to LAWS, would we do the same?

Yet it’s important to get one thing clear: This isn’t a conversation about drones. By now, drone warfare has been normalized — at least 10 countries have them. Self-driving cars are tested in fleets. Twenty years ago, a computer beat Garry Kasparov at chess and, more recently, another taught itself how to beat humans at Go, a Chinese game of strategy that doesn’t rely as much on patterns and probability. In July, the Dallas police department sent a robot strapped with explosives to kill an active shooter following an attack on police officers during a protest.

But with LAWS, unlike the Dallas robot, the human sets the parameters of the attack without actually knowing the specific target. The weapon goes out, looks for anything within those parameters, homes in, and detonates. Examples that don’t sound entirely shit-your-pants-terrifying are things like all enemy ships in the South China Sea, all military radars in X country, all enemy tanks on the plains of Europe. But scale it up, add non-state actors, and you can envision strange permutations: all power stations, all schools, all hospitals, all fighting-age males carrying weapons, all fighting-age males wearing baseball caps, those with brown hair. Use your imagination.

While this sounds like the kind of horror you pay to see in theaters, killer robots will shortly be arriving at your front door for free courtesy of Russia, China, or the US, all of which are racing to develop them. “There are really no technological breakthroughs that are required,” Russell, the computer science professor, told me. “Every one of the component technologies is available in some form commercially … It’s really a matter of just how much resources are invested in it.”

LAWS are generally broken down into three categories. Most simply, there’s humans in the loop — where the machine performs the task under human supervision, arriving at the target and waiting for permission to fire. Humans on the loop — where the machine gets to the place and takes out the target, but the human can override the system. And then, humans out of the loop — where the human releases the machine to perform a task and that’s it — no supervision, no recall, no stop function. The debate happening at the UN is which of these to preemptively ban, if any at all.

“I know that this is a finite campaign — the world’s going to change, very quickly, very soon, and we need to be ready for that.”

Wareham, the advocacy director of the Human Rights Watch arms division, is the coordinator of the Campaign to Stop Killer Robots, a coalition of 61 international NGOs, 12 of which had sent delegations to the CCW. Unlike drones, which entered the battlefield as surveillance technology and were weaponized later, the campaign is trying to ban LAWS before they happen. Wareham is the group’s cruise director — moderating morning strategy meetings, writing memos, getting everyone to the right room at the right time, handling the press, and sending tweets from the @BanKillerRobots account.

This year was the big one. The CCW was going to decide whether to go to the next level, to establish a Group of Governmental Experts (GGE), which would then decide whether or not to draft a treaty. If they didn’t move forward, the campaign was threatening to take the process “outside”— to another forum, like the UN Human Rights Council or an opt-in treaty written elsewhere. “Who gets an opportunity to work to try and prevent a disaster from happening before it happens? Because we can all see where this is going,” Wareham told me. “I know that this is a finite campaign — the world’s going to change, very quickly, very soon, and we need to be ready for that.”

That morning, countries delivered statements on their positions. Algeria and Costa Rica announced their support for a ban. Wareham excitedly added them to what she and other campaigners refer to as “The List,” which includes Pakistan, Egypt, Cuba, Ecuador, Bolivia, Ghana, Palestine, Zimbabwe, and the Holy See — countries that probably don’t have the technology to develop LAWS to begin with. All eyes were on Russia, which had given a vague statement suggesting they weren’t interested. “They always leave us guessing,” Wareham told me when we broke for lunch, reminding me only one country needs to disagree to stall consensus. The cafe outside the assembly room looked out on the UN’s verdant grounds. You could see placid Lake Geneva and the Alps in the distance.

In the afternoon, country delegates settled into their seats to take notes or doze with their eyes open as experts flashed presentation slides. The two back rows were filled with civil society, many of whom were part of the campaign. During the Q&A, the representative from China, who is known for being somewhat of an oratorical wildcard, went on a lengthy ramble about artificial intelligence. Midway through, the room erupted in nervous laughter and Erin Hunt, program coordinator from Mines Action Canada, fired off a tweet: “And now the panel was asked if they are smarter than Stephen Hawking. Quite the afternoon at .” (Over the next five days, Hunt would begin illustrating her tweets with GIFs of eye rolls, prancing puppies, and facepalms.)

A few seats away, Noel Sharkey, emeritus professor of robotics and artificial intelligence at Sheffield University in the UK, fidgeted waiting for his turn at the microphone. The founder of ICRAC, the International Committee for Robot Arms Control (pronounced eye-crack), plays the part of the campaign’s brilliant, absent-minded professor. With a bushy long white ponytail, he dresses in all black and is perpetually late or misplacing a crucial item — his cell phone or his jacket.

In the row over, Jody Williams, who won the Nobel Peace Prize in 1997 for her work banning landmines, barely suppressed her irritation. Williams is the campaign’s straight shooter — her favorite story is one in which she grabbed an American colonel around the throat for talking bullshit during a landmine cocktail reception. “If everyone spoke like I do, it would end up having a fist fight,” she said. Even the usually tactful Wareham stopped tweeting. “I didn’t want to get too rude or angry. I don’t think that helps especially when half the diplomats in that room are following the Twitter account,” she explained later and laughed.

But passionate as they all were, could this group of devotees change the course of humanity? Or was this like the campaign against climate change — just sit back and watch the water levels rise while shaking your head in dismay? How do you take on a revolution in warfare? Why would any country actually ban a weapon they are convinced can win them a war?

And maybe most urgently: With so many things plainly in front of us to be fearful of, how do you convince the world — quickly, because these things are already here — to be extra afraid of something we can’t see for ourselves, all the while knowing that if you fail, machines could kill us all?

Jody Williams (left), a Nobel Peace Laureate, and Professor Noel Sharkey, chair of the International Committee for Robot Arms Control, pose with a robot as they call for a ban on fully autonomous weapons, in Parliament Square on April 23, 2013, in London, England.

Oli Scarff / Getty Images

One of the very real problems with attempting to preemptively ban LAWS is that they kind of already exist. Many countries have defensive systems with autonomous modes that can select and attack targets without human intervention — they recognize incoming fire and act to neutralize it. In most cases, humans can override the system, but they are designed for situations where things are happening too quickly for a human to actually veto the machine. The US has the Patriot air defense system to shoot down incoming missiles, aircraft, or drones, as well as the Aegis, the Navy’s own anti-missile system on the high seas.

Members of the campaign told me they do not have a problem with defensive weapons. The issue is offensive systems in part because they may target people — but the distinction is murky. For example, there’s South Korea’s SGR-A1, an autonomous stationary robot set up along the border of the demilitarized zone between North and South Korea that can kill those attempting to flee. The black swiveling box is armed with a 5.56-millimeter machine gun and 40-millimeter grenade launcher. South Korea says the robot sends the signal back to the operator to fire, so there is a person behind every decision to use force, but there are many reports the robot has an automatic mode. Which mode is on at any given time? Who knows.

Meanwhile, offensive systems already exist, too: Take Israel’s Harpy and second-generation Harop, which enter an area, hunt for enemy radar, and kamikaze into it, regardless of where they are set up. The Harpy is fully autonomous; the Harop has a human on the loop mode. The campaign refers to these as “precursor weapons,” but that distinction is hazy on purpose — countries like the US didn’t want to risk even mentioning existing technology (drones), so in order to have a conversation at the UN, everything that is already on the ground doesn’t count.

Militaries want LAWS for a variety of reasons — they’re cheaper than training personnel. There’s the added benefit of force multiplication and projection. Without humans, weapons can be sent to more dangerous areas without considering human-operator casualties. Autonomous target selection allows for faster engagement, and the weapon can go where the enemy can jam communications systems.

Israel openly intends to move toward full autonomy as quickly as possible. Russia and China have also expressed little interest in a ban. The US is only a little less blunt. In 2012, the Department of Defense issued Directive 3000.09, which says that LAWS will be designed to allow commanders and operators to exercise “appropriate levels of human judgment over the use of force.” What “appropriate” really means, how much judgment, and in which part of the operation, the US has not defined.

In January 2015, the DoD announced the Third Offset strategy. Since everyone has nuclear weapons and long-range precision weapons, Deputy Secretary of Defense Robert Work suggested that emphasizing technology was the only way to keep America safe. With the DoD’s blessing, the US military is racing ahead. Defense contractor Northrop Grumman’s X-47B is the first autonomous carrier-based, fighter-sized aircraft. Currently in demos, it looks like something from Independence Day: The curved, grey winged pod takes off from a carrier ship, flies a preprogrammed mission, and returns. Last year, the X-47B autonomously refueled in the air. In theory, that means except for maintenance, an X-47B executing missions would never have to land.

At an event at the Atlantic Council in May, Work said the US wasn’t developing the Terminator. “I think more in terms of Iron Man — the ability of a machine to assist a human, where the human is still in control in all matters, but the machine makes the human much more powerful and much more capable,” he said. This is called centaur fighting or human–machine teaming.

Among the lauded new technologies are swarms — weapons moving in large formations with one controller somewhere far away on the ground clicking computer keys. Think hundreds of small drones moving as one, like a lethal flock of birds that would put Hitchcock’s The Birds to shame, or an armada of ships. The weapons communicate with each other to accomplish the mission, in what is called collaborative autonomy. This is already happening — two years ago, a small fleet of ships sailed down the James River. In July, the Office of Naval Research tested 30 drones flying together off a small ship at sea that were able to break out of formation, perform a mission, and then regroup.

Source: BuzzFeed

Grab, Uber’s Rival In Southeast Asia, To Join The Self-Driving Car Battle

Nguyen Huy Kham / Reuters

Grab, Uber’s major rival in Southeast Asia, is partnering with nuTonomy, the self-driving taxi company in Singapore that began offering rides to passengers earlier this week, according to a source familiar with the matter.

The relationship, which was previously undisclosed, gives the Southeast Asian ridehail company a partner in the race toward autonomous vehicles – a partner that on Thursday became the first to put customers in its fleet of self-driving vehicles. Uber announced last week that it will dispatch self-driving Volvos in Pittsburgh later this month for passengers to hail.

Recode reported in May that Grab CEO Anthony Tan said he would be open to partnering with a self-driving car company at some point in the future, when the technology matured beyond nascent stages. On Thursday, nuTonomy began offering free rides to a select group of riders in self-driving versions of Renault Zoe and Mitsubishi i-MiEV electric vehicles. The handful of test cars have backup human drivers in the front, and an engineer in the back as a precaution.

Uber’s pilot program in Pittsburgh will have greater scale, with 100 vehicles and a goal of 1,000 customers, who can opt into a self-driving vehicle ride. A backup driver will be behind the wheel of those vehicles as well. Meanwhile, Ford said it will mass-produce self-driving cars for ridehail fleets by 2021, and Lyft and General Motors are working on their own autonomous electric vehicles.

Grab is in 30 cities across Singapore, Indonesia, the Philippines, Malaysia, Thailand, and Vietnam. Its existing mapping data gives it an advantage in a region where some roads are less developed and drivers and riders often rely on landmarks to navigate. Grab and Google announced a partnership earlier this month to integrate the ridehail app into Google Maps.

But Uber is gearing up to fight for greater market share in Southeast Asia. The ridehail giant merged its China business with Didi Chuxing earlier this month in a truce, a move meant to free up cash to focus on other markets, including India and Southeast Asia.

Source: BuzzFeed

Windows in a Google Cloud Platform world: this week on Google Cloud Platform

Posted by Alex Barrett, Editor, Google Cloud Platform Blog

Google has a long and storied history running Linux, but Google Cloud Platform’s goal is to support a broad range of languages and tools. This week saw us significantly expand our support for the Microsoft ecosystem, with new support for ASP.NET, SQL Server, PowerShell and the like.

If you have apps developed in .NET, Microsoft’s application development framework, you’ll be happy to learn that you can run them efficiently on GCP, with support for several flavors of Windows Server, an ASP.NET image in Cloud Launcher, pre-loaded SQL Server images on Google Compute Engine, and a variety of Google APIs available for the .NET platform. And thanks to a new integration with Microsoft Visual Studio, the popular integrated development environment, developers in the Microsoft ecosystem can easily access that functionality from the comfort of their IDE.
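If you want to poke at those pre-loaded images yourself, here is a minimal sketch using the google-api-python-client library; it assumes Application Default Credentials are already configured and that the public SQL Server images are published under the windows-sql-cloud image project (both are assumptions for the example, not details from this announcement).

```python
from googleapiclient import discovery

# Assumes Application Default Credentials, e.g. via `gcloud auth application-default login`.
compute = discovery.build("compute", "v1")

# Assumption: Google publishes its public SQL Server images in the "windows-sql-cloud" project.
request = compute.images().list(project="windows-sql-cloud")
while request is not None:
    response = request.execute()
    for image in response.get("items", []):
        print(image["name"], image.get("family", ""))
    # Page through the results until the API reports no further pages.
    request = compute.images().list_next(previous_request=request, previous_response=response)
```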

But it’s not just about Google broadening its horizons. Microsoft, too, is taking its offerings outside of its traditional confines. This week, Microsoft open-sourced PowerShell, the command-line shell and scripting language for .NET, so that developers can use it to automate and administer Linux apps and environments, not just Windows ones.

And Kubernetes, Google’s open-source container management system, is also finding its way over to Microsoft’s Azure public cloud, thanks to its ability to provide a lingua franca for hosting and managing container-based environments. Check out this blog post about provisioning Azure Kubernetes infrastructure to see just how far things have come.
Source: Google Cloud Platform

Docker Labs Repo Continues to Grow

Back in May, we launched the Docker Labs repo in an effort to provide the community with a central place to both learn from and contribute to Docker tutorials. We now have 16 separate labs and tutorials, with 16 different contributors, both from Docker and from the community. And it all started with a birthday party.
Back in March, Docker celebrated its third birthday with more than 125 events around the world to teach new users how to use Docker. The tutorial was very popular, and we realized people would like this kind of content. So we migrated it to the labs repository as a beginner tutorial. Since then, we’ve added tutorials on using .NET and Windows containers, Docker for Java developers, our DockerCon labs and much more.
 
 
Today we wanted to call out a new series of tutorials on developer tools. We’re starting with three tutorials for Java Developers on in-container debugging strategies. Docker for Mac and Docker for Windows introduced improved volume management, which allows you to debug live in a container while using your favorite IDE.
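As a rough illustration of that volume-mounting idea, here is a minimal sketch using the Docker SDK for Python that starts a Java container with a local build directory mounted and the standard JDWP debug port published, so a remote debugger in your IDE can attach. The image tag, directory layout, and jar name are assumptions made for the example, not details taken from the labs themselves.

```python
import os
import docker

client = docker.from_env()

# Assumption: ./target holds a built app.jar and openjdk:8-jdk is an acceptable base image.
host_dir = os.path.abspath("target")

container = client.containers.run(
    "openjdk:8-jdk",
    command=[
        "java",
        # Open the JDWP port so an IDE can attach a remote debugger to the running app.
        "-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005",
        "-jar", "/app/app.jar",
    ],
    volumes={host_dir: {"bind": "/app", "mode": "rw"}},  # live-mounted build output
    ports={"5005/tcp": 5005},                            # publish the debug port on the host
    detach=True,
)
print("Debug container started:", container.short_id)
```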
We try our best to continuously update these tutorials and add new ones, but we definitely welcome external contributions. What’s your favorite language, IDE, text editor, or CI/CD platform? Any specific steps or configuration needed? Don’t hesitate to submit a pull request and share your knowledge with the community.
The post Docker Labs Repo Continues to Grow appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/