Ready To Die On Mars? Elon Musk Wants To Send You There

This is what SpaceX's Interplanetary Transport System, which Elon Musk hopes will take people to Mars one day, would look like.

SpaceX / Via Flickr

Elon Musk has said he wants to die on Mars — “just not on impact.” In a speech on Tuesday, Musk outlined how his company Space Exploration Technologies (SpaceX), which has yet to even send a human into orbit, hopes to shuttle people to Mars to forge a self-sustaining civilization within 40 to 100 years.

What the billionaire did not explain, however, is how the people he plans to shuttle there would survive on a planet no human has ever set foot on. In 2002, Musk founded SpaceX with the goal of “making life multi-planetary.” When Musk teased his intent to discuss a plan to colonize Mars in April, he warned, “it’s going to sound pretty crazy.” It does.

“Are you prepared to die? If that’s ok, then you’re a candidate for going.”

“Are you prepared to die? If that’s ok, then you’re a candidate for going,” Musk said. Would he become the first man on Mars himself? Probably not. “I’d definitely need to have a good succession plan because the probability of death is really high on the first mission. And I’d like to see my kids grow up.”

But for those who are willing to risk death – Musk would not advise sending your children – he pulled up a presentation slide that showed SpaceX’s timeline to begin flights to Mars in 2023. The cost of bringing a person to Mars right now is about $10 billion, he said. And his goal is to bring that figure down to $200,000, the median price of a home in the US, and hopefully even lower, to $140,000. Who’s going to pay for it? “Ultimately, this is going to be a huge public-private partnership,” Musk said. He also said he will fund the project with his own money. (Forbes estimates his net worth at $11.7 billion.)

The speech marks a big moment for Musk, and casts aside his troubles on Earth: Tesla, his electric car company, is under federal investigation after a driver's fatal crash while operating one of its cars with its Autopilot system engaged. Several shareholders are suing Tesla as well, after the company made an offer to purchase SolarCity, the solar energy company Musk is chairman of. Not to mention the fact that a SpaceX rocket carrying a satellite for Facebook’s Internet.org initiative exploded at its launch site, Cape Canaveral Air Force Station, earlier this month.

“There’s a tremendous opportunity for anyone who wants to go to Mars to create something new…Everything from iron refineries to the first pizza joint.”

“If you’re an explorer and you want to be on the frontier and push the envelope and be where things are super exciting, even if it’s dangerous, that’s really who we’re appealing to here,” Musk said. He compared SpaceX’s plans to shuttle people to Mars in spaceships that could fit 100 (and eventually 200) people to the construction of the Union Pacific Railroad, which was built in the late 1800s to connect about two dozen western states. “There’s a tremendous opportunity for anyone who wants to go to Mars to create something new and bold, the foundations of a new planet. Everything from iron refineries to the first pizza joint, things on Mars that people can’t even imagine today that might be unique to Mars,” he said.

People might not be able to imagine them because humans have yet to set foot on Mars. For 40 years, NASA has been sending rovers, orbiters, and landers to learn more about the planet. Scientists and researchers have spent lengthy periods of time in cold, dangerous environments like Antarctica, and on barren volcano slopes in Hawaii, to simulate life on the Red Planet. But dreaming big is perfectly in character for Musk, who started SpaceX in 2002. In 2012, the company’s Dragon capsule became the first commercial spacecraft to deliver cargo safely to the International Space Station for NASA and return to Earth. Since then, the company has been landing (and failing to land) reusable rockets on barges in the middle of the ocean.

The company released a video of its new rocket, which would be the biggest rocket ever, as part of the presentation. It’s called BFR – short for “big fucking rocket.” For scale, Musk pulled up an image of it on the screen behind him. This is what it looked like:

That small man to the right, just a blip, is Elon Musk. He projected the BFR on the screen behind him.

Scott Hubbard, formerly the director of NASA Ames Research Center and its “Mars czar,” told BuzzFeed News that building such a rocket would be an engineering feat. “That's way beyond anything anyone's ever built before,” he said. The individual components of Musk's engineering goals are very optimistic, but not technologically impossible, Hubbard said – it's not like Musk said he's trying to build a transporter beam.

“The scale of it, though, is so much larger than anything NASA's ever done, and I am skeptical about the timeline. The specifics require engineering development that has yet to be done,” Hubbard said. “The history of launch vehicles is littered with failures…rocket science is called rocket science for a reason.”

In a statement after Musk’s presentation finished, NASA said it “applauds all those who want to take the next giant leap – and advance the journey to Mars. We are very pleased that the global community is working to meet the challenges of a sustainable human presence on Mars.”

“Rocket science is called rocket science for a reason.”

Still, NASA’s timeline for putting humans on Mars is several years out from Musk’s, and its plans are much less grandiose.

Ellen Stofan, chief scientist at NASA, told BuzzFeed News prior to Musk’s announcement that the agency sees value in its partnership with SpaceX and that the company can help accelerate the dream of getting humans to Mars. But the biggest hindrance is figuring out how to keep humans healthy and sustain life there. Humans lose bone density in space, and radiation levels on Mars are so high that “for humans to stay on Mars for any duration, you’d have to be living underground.”

“When you think about large-scale movement of humans to Mars, it’s just not practical or desirable,” Stofan said. “I think our timeline of aiming to get humans to Mars in the early 2030s, say 2032, is the one that gets people there on a path where we can feel comfortable that we can get them there safely, and get them home safely.”

Then there are other human issues, like one that an audience member raised in the Q&A after the presentation. The man said he came up with the question while at Burning Man, with no plumbing, in a hot, dusty Nevada desert that got chilly in the evenings. Will Mars have toilets?

“Is this what Mars is going to be like? Just a dusty, waterless shit storm?”

“There was a lot of shit, and there was no water to take it into the rivers,” he told Musk. “Is this what Mars is going to be like? Just a dusty, waterless shit storm?”

Musk clearly wasn’t prepared for the question.

After all, his presentation touched only lightly on how people would live upon getting to Mars. He presented a simple solution as to how people would be fed: “We can grow plants on Mars just by compressing the atmosphere.”

John Logsdon, the founder and former director of the Space Policy Institute at the George Washington University in DC, said Musk’s presentation lacked details on how any of his goals would be funded, and that it left “lots of open technical issues.”

“This is very much a vision rather than a detailed plan,” Logsdon said. “We need bold visions for anything to happen.”

Quelle: BuzzFeed

Those Online Polls Showing Trump Winning The Debate Were Probably Not Rigged By Russia

Timothy A. Clary / AFP / Getty Images

SAN FRANCISCO — Many of the online polls following the first presidential debate were manipulated to make it appear as though Trump had won, but those trying to skew the polls appeared to be pranksters and Trump supporters rather than organized Russian hackers.

As it emerged Tuesday that dozens of online polls showed landslide victories for Trump following the first debate, rumors swirled that Russian hackers had been behind a campaign to hack the polls and hand Trump a victory.

A tweet by Twitter user @DustinGiebel purported to show a map of Twitter activity in the Russian city of St. Petersburg as evidence that the hashtag had originated there. The tweet has since been deleted, but a screengrab is below.

The tweet has since been deleted, and the account has not answered numerous requests for comment on which program was used to reach the conclusion that Russians were behind the #TrumpWon hashtag. A blog post from the Washington Post also pointed out that the map used in the tweet is not typical of mapping programs used to plot Twitter trends, including TrendsMap, which was originally identified as the source of the graphic.

“This is certainly not from any of our tools, and we do not know of any tools that look this way,” TrendsMap spokesperson Kathy Mellett said in an email to the Post. “Based upon our analysis, TrumpWon primarily came from the US. There was an initial spike just after the debate followed by a much larger one a few hours later. In particular, around 97% of the initial spike of approximately 6,000 tweets came from the US.”

An analysis by BuzzFeed News showed that while there was one St. Petersburg-based Twitter account repeatedly using that hashtag, it was also widely used across the United States at the same time.

Meanwhile, a look at the poll results showed that the polls themselves were so poorly designed and secured that simple tricks suggested by Trump supporters on the 4chan message board were enough to manipulate the results at dozens of media outlets, including Time, CNBC, and BuzzFeed News.

The idea to manipulate the polls was suggested on 4chan in the week leading up to the debate. In one dedicated chat room Monday night, Trump supporters shared tips on how to manipulate the polls using easy methods such as opening an incognito browser window, or turning a phone's airplane mode off and on.

4chan / Via boards.4chan.org

A developer at BuzzFeed confirmed that this in-house poll, run in the wake of the debate, had indeed been manipulated to give Trump a hearty victory over Clinton. The developer said that someone had executed a JavaScript program that registered votes repeatedly in the same poll.

BuzzFeed News reached out to Time and CNBC for comment, both of whom ran polls created on the same Playbuzz platform, which gave Trump a huge victory in the debate. A spokeswoman from Time said the company had seen more unique viewers than votes on the page where they ran the poll. CNBC did not respond to a request for comment, though it appears it was heavily targeted by 4chan users. A spokesperson from Playbuzz did not answer a request for comment on how it ensures that its polls are conducted securely, but a Playbuzz employee, who answered a call from BuzzFeed News and agreed to speak on condition of anonymity, said the company's polls are “not scientific, they aren't meant to be taken as scientific evidence.”

That media companies continue to run online polls, especially around important news events such as the first debate in a tight race for the presidency, is a “failing of journalism,” said one editor at a news organization that ran one of the polls conducted Monday night.

“I spent all morning asking and no one knows if our poll was secure, how it was conducted, or if someone scammed it. Now people are pointing to our poll saying that it shows Trump won,” said the editor, who asked to remain anonymous as he was not authorized to speak about his company's polling system. “That's not good journalism.”

CNN is currently the only news outlet to have run a poll of verified respondents asking how Hillary Clinton and Donald Trump fared in the debate Monday night. The CNN/ORC poll conducted immediately following the debate found that Clinton topped Trump, 62% to 27%.

Quelle: BuzzFeed

Azure Media OCR Simplified Output

Not sure what Azure Media OCR is? Check out the introductory blog post on Azure Media OCR.

Thanks to all of the customers and partners who have been part of the Azure Media OCR public and private previews. We have continued to iterate on valuable first-hand feedback over the past 5 months, and today we are tackling one of the most common pain points that we have heard about: the complexity of the output format.

It turned out that, for most customers, we were simply providing too much detail in our default output format, which led to a lot of frustration.

Most customers utilize Azure Media OCR to index videos by the text displayed within them at various times.  In conjunction with Azure Media Indexer, this creates an excellent alternative to manual video tagging for augmenting the discoverability of your video content in a search engine. 

By providing positional data for every single word (in addition to the phrases and regions), we were needlessly inflating the output with little to no additional value. 

Today we are releasing our new output format, a simpler schema that will cover most end-user scenarios with less headache.  In case you were one of the advanced users who found value in the additional data from our previous output format, you can simply use the “AdvancedOutput” flag.

Advanced Output

The advanced output format is a JSON object made up of time fragments, each of which contains separate events composed of regions, lines, and words, all tagged with positional and language data.

The new “simple” output format simply contains time fragments which contain text.

Here’s an example of one fragment in the simple output format:

New output

"fragments": [
  {
    "start": 0,
    "duration": 1435434,
    "interval": 1435434,
    "events": [
      [
        {
          "language": "English",
          "text": "Notes File WOF MYM Edit Format View oo Window Help index-html – MvWebSite Q Search Visual Studio Code for Mac Developers Today June 1, 2016, 1:07 PM Visual Studio Code for Mac Developers 211. Google Chrome extensions Sergii Baidachnyi Principal Technical Evangelist Microsoft Canada sbaydach@microsoft.com @sbaidachni"
        }
      ]
    ]
  },

 

The following is a heavily-truncated equivalent “advanced output” from the same fragment.

Note: the actual advanced output that corresponds to the above fragment is over 600 lines of JSON!

Old output

"fragments": [
  {
    "start": 0,
    "duration": 120000,
    "interval": 120000,
    "events": [
      [
        {
          "region": {
            "language": "English",
            "orientation": "Up",
            "lines": [
              {
                "text": "Notes File",
                "left": 74,
                "top": 7,
                "width": 109,
                "height": 15,
                "word": [
                  {
                    "text": "Notes",
                    "left": 74,
                    "top": 8,
                    "width": 54,
                    "height": 14,
                    "confidence": 974
                  },
                  {
                    "text": "File",
                    "left": 154,
                    "top": 7,
                    "width": 29,
                    "height": 15,
                    "confidence": 848
                  }
                ]
              },
              {
                "text": "WOF",
                "left": 155,
                "top": 117,
                "width": 33,
                "height": 12,
                "word": [
                  {
                    "text": "WOF",
                    "left": 155,
                    "top": 117,
                    "width": 33,
                    "height": 12,
                    "confidence": 397
                  }
                ]
              },
              {
                "text": "MYM",
                "left": 156,
                "top": 206,
                "width": 32,
                "height": 12,
                "word": [
                  {
                    "text": "MYM",
                    "left": 156,
                    "top": 206,
                    "width": 32,
                    "height": 12,
                    "confidence": 309
                  }
                ]
              }
            ]
          }
        },
        {
          "region": {
As you can see, unless you need all of the detail, a lot of the advanced output features may be redundant for your scenario.
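Because the simple format is just timed fragments of text, turning it into a searchable index takes only a few lines. Below is a minimal sketch; the function and sample are illustrative, not part of any Azure SDK, and the `start` values are ticks in the timescale declared at the top of the full output file, which the excerpts above omit:

```python
# Sketch: index a video by on-screen text using the simple output format.
# Illustrative code, not part of any Azure SDK. Assumes the OCR result
# JSON has already been loaded into a dict.

def text_by_fragment(result):
    """Map each fragment's start tick to the text detected in it."""
    index = {}
    for fragment in result.get("fragments", []):
        texts = [event.get("text", "")
                 for events in fragment.get("events", [])
                 for event in events]
        if any(texts):
            index[fragment["start"]] = " ".join(t for t in texts if t)
    return index

# Hypothetical loaded result, shaped like the simple-output example above:
sample = {
    "fragments": [
        {"start": 0, "duration": 1435434, "interval": 1435434,
         "events": [[{"language": "English", "text": "Visual Studio Code"}]]}
    ]
}
print(text_by_fragment(sample))  # {0: 'Visual Studio Code'}
```

Feeding such an index into a search engine alongside Azure Media Indexer transcripts is then straightforward.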

How do I use this?

Minimal preset for Old Output

{
  'Version':'1.0',
  'Options': {
    'AdvancedOutput':'true'
  }
}

Minimal preset for New Output

{
  'Version':'1.0'
}

Want to learn more about the input configuration? Check out our previous blog post introducing the configuration.

Love the new output? Hate it? Share your feedback with us!

If you want to learn more about this product, and the scenarios that it enables, read the introductory blog post on Azure Media OCR.

To learn more about Azure Media Analytics, check out the introductory blog post.

If you have any questions about any of the Media Analytics products, send an email to amsanalytics@microsoft.com.
Quelle: Azure

How Qbox Saved 50% per Month on AWS Bills Using Kubernetes and Supergiant

Editor’s Note: Today’s post is by the team at Qbox, a hosted Elasticsearch provider, sharing their experience with Kubernetes and how it helped save them fifty percent off their cloud bill.

A little over a year ago, we at Qbox faced an existential problem. Just about all of the major IaaS providers had either launched or acquired services that competed directly with our Hosted Elasticsearch service, and many of them started offering it for free. The race to zero was afoot unless we could re-engineer our infrastructure to be more performant, more stable, and less expensive than the VM approach we had before, the one still in use by our IaaS brethren. With the help of Kubernetes, Docker, and Supergiant (our own hand-rolled layer for managing distributed and stateful data), we were able to deliver 50% savings, a mid-five-figure sum. At the same time, support tickets plummeted. We were so pleased with the results that we decided to open source Supergiant as its own standalone product. This post will demonstrate how we accomplished it.

Back in 2013, when not many were even familiar with Elasticsearch, we launched our as-a-service offering with a dedicated, direct VM model. We hand-selected certain instance types optimized for Elasticsearch, and users configured single-tenant, multi-node clusters running on isolated virtual machines in any region. We added a markup on the per-compute-hour price for the DevOps support and monitoring, and all was right with the world for a while as Elasticsearch became the global phenomenon that it is today.

Background

As we grew to thousands of clusters, and many more thousands of nodes, it wasn’t just our AWS bill getting out of hand. We had four engineers replacing dead nodes and answering support tickets all hours of the day, every day. What made matters worse was the volume of resources allocated compared to the usage: we had thousands of servers with a collective CPU utilization under 5%.
We were spending too much on processors that were doing absolutely nothing. How we got there was no great mystery. VMs are a finite resource, and with a very compute-intensive, burstable application like Elasticsearch, we were juggling users who would either undersize their clusters to save money or over-provision and overspend. When the aforementioned competitive pressures forced our hand, we had to re-evaluate everything.

Adopting Docker and Kubernetes

Our team avoided Docker for a while, probably on the vague assumption that the network and disk performance we had with VMs wouldn’t be possible with containers. That assumption turned out to be entirely wrong.

To run performance tests, we had to find a system that could manage networked containers and volumes. That’s when we discovered Kubernetes. It was alien to us at first, but by the time we had familiarized ourselves and built a performance-testing tool, we were sold. It was not just as good as before; it was better.

The performance improvement we observed was due to the number of containers we could “pack” on a single machine. Ironically, we began the Docker experiment wanting to avoid “noisy neighbors,” which we assumed were inevitable when several containers shared the same VM. However, that isolation also acted as a bottleneck, both in performance and cost. To use a real-world example: if a machine has 2 cores and you need 3 cores, you have a problem. It’s rare to come across a public-cloud VM with 3 cores, so the typical solution is to buy 4 cores and not utilize them fully.

This is where Kubernetes really starts to shine. It has the concepts of requests and limits, which provide granular control over resource sharing. Multiple containers can share an underlying host VM without the fear of “noisy neighbors”. They can request exclusive control over an amount of RAM, for example, and they can define a limit in anticipation of overflow.
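The gain from requests-based packing can be illustrated with a toy first-fit calculation. The helper below is not part of Kubernetes, and the numbers are illustrative rather than our production figures; it just mimics the effect of scheduling by declared requests instead of dedicating a VM per workload:

```python
# Toy first-fit packing by CPU request (illustrative only; the real
# Kubernetes scheduler is far more sophisticated).

def hosts_needed(requests, host_cores):
    """Return how many hosts are used when packing by declared requests."""
    hosts = []  # remaining capacity of each host
    for req in sorted(requests, reverse=True):
        for i, free in enumerate(hosts):
            if free >= req:
                hosts[i] -= req  # container fits on an existing host
                break
        else:
            hosts.append(host_cores - req)  # open a new host
    return len(hosts)

# 12 bursty workloads that each request only half a core on average:
reqs = [0.5] * 12
print(len(reqs), "dedicated VMs vs",
      hosts_needed(reqs, host_cores=4), "shared 4-core hosts")
```

The same dozen workloads that once needed a dozen dedicated VMs fit comfortably on a couple of shared hosts, which is the packing effect described above.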
It’s practical, performant, and cost-effective multi-tenancy. We were able to deliver the best of both the single-tenant and multi-tenant worlds.

Kubernetes + Supergiant

We built Supergiant originally for our own Elasticsearch customers. Supergiant smooths over Kubernetes complications by allowing pre-packaged and re-deployable application topologies. In more specific terms, Supergiant lets you use Components, which are somewhat similar to a microservice. Components represent an almost-uniform set of Instances of software (e.g., Elasticsearch, MongoDB, your web application, etc.). They roll up all the various Kubernetes and cloud operations needed to deploy a complex topology into a compact entity that is easy to manage.

For Qbox, we went from needing one host per customer node to roughly eleven customer nodes per host. Sure, the hosts were larger, but the utilization made a substantial difference. We could cram a whole bunch of little instances onto one big instance and not lose any performance. Smaller users would get the added benefit of higher network throughput by virtue of being on bigger resources, and they would also get greater CPU and RAM bursting.

Adding Up the Cost Savings

The packing algorithm in Supergiant, with its increased utilization, resulted in an immediate 25% drop in our infrastructure footprint. Remember, this came with better performance and fewer support tickets. We could dial up the packing algorithm and probably save even more money. Meanwhile, because our nodes were larger and far more predictable, we could much more fully leverage the economic goodness that is AWS Reserved Instances. We went with one-year partial RIs, which cut the remaining costs by 40%, give or take. Our customers still had the flexibility to spin their Elasticsearch nodes up, down, and out without forcing us to constantly juggle, combine, split, and recombine our reservations. At the end of the day, we saved 50%.
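The headline figure is easy to sanity-check with back-of-the-envelope arithmetic (a sketch; the exact number depends on the partial-RI terms and the “give or take” above):

```python
# Back-of-the-envelope check of the combined savings (a sketch; the
# exact figure depends on the partial-RI terms).
baseline = 1.00                        # original spend
after_packing = baseline * (1 - 0.25)  # 25% smaller footprint from packing
after_ri = after_packing * (1 - 0.40)  # ~40% off what remains, via RIs
savings = 1 - after_ri
print(f"{savings:.0%}")                # compounded best case
```

The compounded best case works out to about 55%, which lands at roughly the 50% actually realized once partial-RI overhead is accounted for.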
That is $600k per year that can go towards engineering salaries instead of enriching our IaaS provider.

Download Kubernetes
Get involved with the Kubernetes project on GitHub
Post questions (or answer questions) on Stack Overflow
Connect with the community on Slack
Follow us on Twitter @Kubernetesio for the latest updates
Quelle: kubernetes

Introducing new M4 instance size, m4.16xlarge, and new region availability of M4 instances

We are excited to announce the availability of the m4.16xlarge, the largest instance size in the latest generation of EC2 General Purpose instances, featuring 64 vCPUs and 256 GiB of memory. The m4.16xlarge offers a balance of compute, memory, and network resources, and is a good choice for many applications including databases, data processing tasks, cluster computing, and web servers that require high computational horsepower or memory size.
Quelle: aws.amazon.com

Resource health exposes historical health

Today we are pleased to announce the public preview of the Azure Resource health history blade, a new feature that exposes up to 14 days of historical health information.

Until now, Resource health has helped customers reduce the time spent troubleshooting ongoing problems, in particular the time spent determining whether a problem is caused by an event inside the Azure platform or by a problem in the application. This new feature makes it easier to investigate problems that occurred during the last 14 days.

Getting the current and historical health of a resource

The easiest way to open the Resource health blade is to navigate to the resource blade and click on Resource Health. This blade will show the current health of the resource, as well as recommended troubleshooting steps customized to the current health status. It is important to highlight that since entering public preview, we have made a number of improvements to help with troubleshooting, including tighter integration with the troubleshooting experience in the portal.

To access the historical health data, click on the View History link located under the current health state.

The history blade shows any changes in the health of the resource during the last 14 days, including the starting time, the end time, and a summary of the text customers would have seen had they visited the Resource health blade at that time. In the screenshot above, you can see that the virtual machine was available until September 19th at 4:19 PM, when it became unavailable due to a disk failure, and that it was recovered at 4:39 PM.
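For programmatic access, the same current and historical availability data is exposed through Azure Resource Manager under the Microsoft.ResourceHealth provider. The sketch below only composes the request URL; the api-version shown is an assumption from the preview era and may change, so check the current documentation before depending on it:

```python
# Sketch: compose the ARM request that lists availabilityStatuses
# (current health plus history) for a resource. The api-version is an
# assumption from the preview era; verify it against current docs.
ARM_ENDPOINT = "https://management.azure.com"

def availability_history_url(resource_id, api_version="2015-01-01"):
    """Return the URL listing availabilityStatuses for a resource."""
    return (
        f"{ARM_ENDPOINT}{resource_id}"
        f"/providers/Microsoft.ResourceHealth/availabilityStatuses"
        f"?api-version={api_version}"
    )

# Hypothetical resource ID:
vm_id = ("/subscriptions/00000000-0000-0000-0000-000000000000"
         "/resourceGroups/demo-rg/providers/Microsoft.Compute"
         "/virtualMachines/demo-vm")
print(availability_history_url(vm_id))
```

A GET against that URL with a bearer token returns the same availability transitions the history blade displays.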

New ways to access resource health

As mentioned above, the easiest way is to click on Resource health in the resource blade. Keep in mind that Resource health will only be displayed for resource types available in Resource health. 

Another way is by browsing to the Resource Health List Blade which displays the health of all resources in all your subscriptions. Open this blade by clicking on the Resource health tile located in the Help + Support blade. Once in the Resource Health List blade you can filter by subscription or by resource type.

Resource health is a key data point when troubleshooting problems, so during the past few weeks we have incorporated the Resource health signal into the Troubleshooting and Case submission blades.

You can access the Troubleshooting blade by clicking on Diagnose and solve problems in the resource blade. Once the troubleshooting blade opens, Resource health will be displayed at the top. Clicking on More Details will take you to the Resource health blade.

In the case submission workflow, you will see the health once you have selected a resource.

Moving forward

Exposing the historical health of a resource is a big step forward in our journey to provide you with the data and tools you need to troubleshoot problems. During the upcoming months, stay tuned for additional improvements to Resource health and for more services to become available through it.
Quelle: Azure

Facebook's Suspensions Of Political Speech Are Now A Pattern

Facebook, a vital forum for online speech, can’t seem to stop removing significant political content from its platform.

Last week, the company disabled several prominent Palestinian journalists’ accounts, following user reports that they were violating Facebook standards. These weren't small-time reporters — they're people who manage pages followed by millions. Facebook later reinstated their accounts, blaming their removal on an error: “The pages were removed in error and restored as soon as we were able to investigate,” a Facebook spokesperson said, using an excuse that didn’t need dusting off, since Facebook has offered variations of it at least four times in the past six months.

In April, Facebook removed six pro-Bernie Sanders groups before reinstating them and blaming a technical error. In July, Facebook pulled a video showing Philando Castile dying after being shot by police at a traffic stop, only to subsequently reinstate it and again blame its original removal on a glitch. In August, Facebook suspended two big libertarian Facebook pages for days before reinstating them, saying: “The pages were taken down in error.” Last week, it was an “error” again.

“We sometimes get things wrong”

After four such errors in six months, Facebook's takedowns seem less like occasional missteps and more like symptoms of a flawed policy that needs to be addressed. Asked if there are fundamental issues within Facebook’s systems that need to change, a Facebook spokesperson pointed BuzzFeed News to a public statement: “Our team processes millions of reports each week, and we sometimes get things wrong.”

The company did not respond to a follow-up question about whether Facebook plans to review its tendency to erroneously silence politically significant speech.

Facebook depends on a system of user reports to police content on its platform. When someone sees content they think violates Facebook’s community standards, they can flag it and send it into review. While this system might work well for content that's broadly recognized as objectionable and in clear violation of Facebook policies, it doesn't work quite as well in situations with more nuance. In some of those situations, it seems people with one political perspective are gaming Facebook's system to silence people with other perspectives.

User reports are used as weapons in other scenarios on Facebook, such as the company's “real names” policy, which has been exploited to suspend transgender Facebook users.

This probably won't be the last time a Facebook review team member makes a curation decision that the company will reverse after complaints and further consideration. If the company doesn't change its review system, there’s little preventing errors like this from occurring again, and again.

Quelle: BuzzFeed

Migrating On-premise VMs to Azure

In 2008, the company I worked for at the time finally felt that virtualization was ready to host production workloads. We stood up a two-node VMware ESX 3.5 cluster and started to migrate a handful of Linux, Windows, and Novell Netware (!) servers from bare metal to virtual. Even with VMware’s migration tooling, it was still a very manual process. I scripted as much as I could, but my higher-ups never felt comfortable farming the process out to lower-level resources. It was always me who was on the hook for physical-to-virtual migrations in after-hours maintenance windows.
But that was a lifetime ago in terms of technology, and long before today’s DevOps mentality and tooling existed.  I don’t hear as many customers planning P2V (Physical-to-Virtual) migrations these days.  Instead, they’re asking about V2V (Virtual-to-Virtual), or to be more specific, how can they move on-prem workloads to the cloud: V2C (Virtual-to-Cloud).  Quite a few times, I’ve been asked “Can CloudForms help me migrate VMs from my internal virtual infrastructure to the cloud?”
The answer I usually give is, “Not out of the box, but with CloudForms Automation and Red Hat Consulting services, it’s definitely possible.” No customer ever really pursued this beyond the initial inquiry, however. My own curiosity and interest in Microsoft Azure led me to try to actually prove this concept out. I submitted a proposal for this year's Red Hat Summit on automating on-prem-to-Azure migrations using Red Hat CloudForms, which was accepted. I wanted to demonstrate that CloudForms can do just about anything you can think of, with your imagination and knowledge of Ruby being the only real limiting factors.
All of the CloudForms automation methods and Ansible playbooks required to enable the migration of on-premise VMs running on Hyper-V to Azure are available on GitHub. There is also a video available of the process on YouTube.
There are two main challenges when it comes to performing any V2V migration: dealing with the differences in virtual hardware, and converting between different virtual disk formats. For the first proof of concept, I decided to take a bit of a shortcut by using Microsoft Hyper-V as my on-prem infrastructure source and Microsoft Azure as my cloud destination. We are seeing a lot of interest in Azure as a cloud provider since we added support in CloudForms 4.0. Since there is a lot of common ground between Azure and Hyper-V, it was logical to start with these two platforms. They both use a similar virtual disk format: only a bit of metadata needs to be removed from a Hyper-V disk before it can be uploaded and used as an image in Azure. They also use the same virtual hardware, so there is no need to worry about driver and kernel module changes.
Here is a workflow of the migration process:

Selecting a VM to migrate
Retrieving Azure information
Preparing the VM
Converting the virtual disk to VHD
Uploading the converted disk
Provisioning the VM in Azure

Selecting a VM to Migrate
The process is initiated by selecting a VM from your on-prem infrastructure. I used SCVMM/Hyper-V for this test, and I hope to use the same process for VMware and Red Hat Virtualization in the near future. I also tested with Red Hat Enterprise Linux 7 as the guest operating system, but hope to support other guest operating systems in the future.
Select ‘Migrate to Azure’ from the ‘Migration’ button, a custom button on VMs that leads to the Azure migration dialog.

Retrieving Azure Information
The Azure migration dialog uses several automation methods to retrieve information from the Azure provider in CloudForms.  The basic workflow is:

The Azure credentials and region are derived from the provider information in CloudForms
The resource group list is pulled via the Azure resource manager API, using CloudForms native azure-armrest Ruby gem
Once one of the resource groups is selected, the list of storage accounts, networks, and subnets are refreshed
The user selects the OS type (Windows or Linux), the instance size, enters a password for the “clouduser” account, and a name for the network interface and public IP address resources

Preparing the VM
When the submit button is clicked, CloudForms leverages its Ansible Tower integration to launch a job template that removes VM specific information (e.g. udev rules, SSH host keys, ethernet configuration) and installs the Windows Azure Linux Agent, as required to run on Azure. Similarly, Windows VMs have sysprep run against them to remove machine specific information.

Converting the Virtual Disk to VHD
The VM is shut down as the last task of the Ansible playbook. At this point, the virtual disk can be converted to VHD format to run on Azure. In the case of Hyper-V, this means CloudForms starts a WinRM remote session against the Hyper-V host the VM was running on. Using PowerShell, the path to the virtual hard disk is derived, and the disk is converted from VHDX to VHD format, stripping the extra metadata that VHDX carries. Upon completion, the file is ready for upload to the selected storage account.
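The conversion itself reduces to the stock Hyper-V `Convert-VHD` cmdlet executed in that WinRM session. A minimal sketch of composing the command (the paths are hypothetical; Azure expects a fixed-size VHD, hence `-VHDType Fixed`; the WinRM plumbing is omitted):

```python
# Sketch: build the PowerShell Convert-VHD invocation that would run
# over WinRM on the Hyper-V host. Convert-VHD is the stock Hyper-V
# cmdlet; the example paths are hypothetical.

def convert_vhd_command(src_vhdx, dest_vhd):
    """Compose the VHDX -> VHD conversion command for remote PowerShell."""
    return (f'Convert-VHD -Path "{src_vhdx}" '
            f'-DestinationPath "{dest_vhd}" -VHDType Fixed')

cmd = convert_vhd_command(r"D:\VMs\rhel7.vhdx", r"D:\Export\rhel7.vhd")
print(cmd)
```

In the real automation method, a string like this is simply passed into the existing WinRM session for execution on the host.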
Uploading the Converted Disk
The upload to Azure uses the same WinRM session on the Hyper-V host, which requires a one-time installation of the Azure Resource Manager PowerShell cmdlets. The migration method requires the Azure session credentials to be saved in the file:
C:\creds\azure.txt

Provisioning the VM in Azure
The time required to upload the image depends on the available Internet bandwidth. Once finished, a new public IP resource is created, along with a new network interface, and the two resources are associated with one another. With a functioning network interface, a new instance is created by cloning the uploaded virtual disk.
The process takes approximately 2-3 minutes and results in a new instance ready to SSH or RDP into. This instance is now listed in the Azure inventory in CloudForms.

Conclusion
In this article, we looked at how CloudForms can reduce a multi-step V2C process to a couple of clicks from a dialog.  This allows IT teams to take complex processes that were previously entrusted to the highest level engineers and put them in the hands of lower level administrators. The Ansible Tower integration added since CloudForms 4.1 extends this even further.
A video of the Azure migration process is available, and you can keep up with the development of this automation method on GitHub.
Quelle: CloudForms