The New MacBook Pro: A Perfectly Fine Laptop For No One In Particular

Apple's new top-of-the-line laptop is impressively lightweight, but it may not be the home run longtime MacBook Pro users were hoping for.


The all-new MacBook Pro is the laptop that loyal MacBook Pro users have been waiting for since 2012. But it might not be the one they were expecting.

Apple’s new laptop, which starts shipping in mid-December, is lighter and thinner than its predecessor. There’s a model with a tiny touchscreen called the Touch Bar, and a 13-inch model without, aimed at replacing the MacBook Air.

When the fourth-generation Pro offering was announced in October, the first major redesign for the premium laptop line in four years, the Maclash was very strong.

Gone is the strip of physical function keys, MagSafe charger, SD card reader, HDMI, mini DisplayPort, and USB ports. It's all been replaced with multiple USB-C ports and a headphone jack (OMG!), which is the only legacy input that remains.

Apple has removed the ports that some thought made the MacBook deserving of its Pro moniker.

“I’m out of apologia juice for defending Apple,” tweeted David Heinemeier Hansson, creator of the Ruby on Rails web development framework. “Those complaining about Apple’s current Mac lineup are not haters, they’re lovers. They’ve spent 10+ years and 5+ figures on Macs,” tweeted @lapcatsoftware, a self-described longtime Mac developer.

Meanwhile, some Mac users complained that the new MacBook Pro appears to be underpowered for its price. The machine runs on last year's Intel Skylake chip, and not the more recent, slightly more powerful Kaby Lake (which the chipmaker claims is about 12% faster in raw performance).

So, were the complaints warranted?

In my week and a half-ish with the new MacBook Pros, I found the laptops to be impressively fast and lightweight, but perhaps not quite the home run for which diehard MacBook Pro users had hoped. I tried both Touch Bar and non-Touch Bar models. The 13-inch non-Touch Bar laptop is clearly a win for those looking to upgrade aging Airs, as it’s lighter, thinner, and more powerful than the Air line.
But it’s not clear who exactly the MacBook Pro with Touch Bar is for — other than early adopters who won’t mind toting around a handful of dongles in order to push USB-C, the port of the future, forward.

The MacBook Pro’s marquee feature is the Touch Bar, a new Retina, multi-touch screen that displays a set of additional controls that change according to what apps you have open.


The Touch Bar is so slick and smooth, it feels frictionless. It’s a virtualization of the keys you’d typically find at the top of the keyboard, with some more bells and whistles.

The whole gang’s still there: the ESC key, music controls, volume control, the Launchpad shortcut that I’ve literally NEVER seen anyone use, a dedicated Siri button, etc. Touch Bar can be customized in a number of ways with actions like Screenshot and Show Desktop (my favorite *hide everything* trick for when people creep up from behind).

As one might expect at this early stage, the only apps with Touch Bar support right now are Apple-designed ones like Photos and Mail, and some applications make better use of Touch Bar than others.

My favorite is Preview, where you can highlight a PDF with a single tap. The bar also lets you stay in full screen longer in the Photos app by placing a menu of touch-based editing tools right at your fingertips. In Final Cut Pro, you can precisely trim clips with your finger, which feels more ergonomic than using the trackpad. In QuickTime, being able to scrub videos backward and forward with precision is pretty sweet, too.

Finger input feels easier, faster, and more precise than clicking and dragging on a trackpad. Another neat feature is that adjusting volume and brightness requires only a single swipe: Instead of multiple key taps, you can press and hold the volume icon and then move your finger back and forth to adjust.


Other Touch Bar functions, like tab preview in Safari, seem more forced.


As you can see here, Touch Bar's Safari tab previews are insanely small and difficult to read; it's hard to imagine anyone would select a tab using the Touch Bar instead of the control + tab shortcut. That said, it is fun to swipe through all 123,801,293 of your open tabs.

Another is the emoji bar in Messages, which, at first, seemed great for quickly selecting frequently used emoji. To find something specific, however, you have to scroll and scroll and scroll, which seems silly when there's already a great macOS keyboard shortcut for it (control + command + spacebar = emoji heaven).




Source: BuzzFeed

Docker Online Meetup #46: Introduction to InfraKit

In case you missed it, Solomon Hykes (Docker founder and CTO) open sourced InfraKit during his keynote address at LinuxCon Europe in Berlin last month. InfraKit is a declarative management toolkit for orchestrating infrastructure, built by two Docker core team engineers, David Chung and Bill Farner. Read this blog post to learn more about InfraKit's origins, internals, and plugins, including groups, instances, and flavors.
During this online meetup, David and Bill explained what InfraKit is, what problems it solves, some use cases, how you can contribute, and what's coming next.
InfraKit is being developed at github.com/docker/infrakit.

There are many ways you can participate in the development of InfraKit and influence the roadmap:

Star the project on GitHub to follow issues and development
Help define and implement new and interesting plugins
Instance plugins to support different infrastructure providers
Flavor plugins to support a variety of systems like etcd or MySQL clusters
Group controller plugins like metrics-driven auto scaling and more
Help define interfaces and implement new infrastructure resource types for things like load balancers, networks and storage volume provisioners

Check out the InfraKit repository README for more info and a quick tutorial, and start experimenting — from plain files to Terraform integration to building a ZooKeeper ensemble. Have a look, explore, and send us a PR or open an issue with your ideas!
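To make "declarative" a bit more concrete, here is a minimal sketch of the kind of group specification the early tutorial walks through, written as a shell session. The plugin names (instance-file, flavor-vanilla), the JSON fields, and the CLI verb are assumptions drawn from InfraKit's first releases, so check the README for the current schema and commands.

# Write a declarative spec for a group of 5 instances.
# Plugin names and property fields are illustrative, not the definitive schema.
cat > cattle.json <<'EOF'
{
  "ID": "cattle",
  "Properties": {
    "Allocation": { "Size": 5 },
    "Instance": { "Plugin": "instance-file", "Properties": {} },
    "Flavor": { "Plugin": "flavor-vanilla", "Properties": { "Init": ["docker pull nginx:alpine"] } }
  }
}
EOF

# Hand the spec to the group plugin; InfraKit then continuously reconciles
# actual state against it. (The verb varied across early releases, e.g. watch/commit.)
infrakit group watch cattle.json

The point of the design is that you describe the desired state once, and the group controller keeps converging the infrastructure toward it, replacing failed instances automatically.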

Check out the video and slides from Docker Online Meetup #46: Introduction to InfraKit by @wfarner.

The post Docker Online Meetup 46: Introduction to InfraKit appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Creating and accessing a Kubernetes cluster on OpenStack, part 2: Access the cluster

To access the Kubernetes cluster we created in part 1, we're going to create an Ubuntu VM (if you have an Ubuntu machine handy, you can skip this step), then configure it to access the Kubernetes API we just deployed.
Create the client VM

Create a new VM by choosing Project->Compute->Instances->Launch Instance:

Fortunately you don't have to worry about obtaining an image, because you'll have the Ubuntu Kubernetes image that was downloaded as part of the Murano app. Click the plus sign (+) to choose it. (You can choose another distro if you like, but these instructions assume you're using Ubuntu.)

You don't need a big server for this, but it needs to be big enough for the Ubuntu image we selected, so choose the m1.small flavor:

Chances are it's already on the network with the cluster, but that doesn't matter; we'll be using floating IPs anyway. Just make sure it's on a network, period.

Next make sure you have a key pair, because we need to log into this machine:

After it launches…

Add a floating IP if necessary to access it by clicking the down arrow on the button at the end of the line and choosing Associate Floating IP. If you don't have any floating IP addresses allocated, click the plus sign (+) to allocate a new one:

Choose the appropriate network and click Allocate IP:

Now add it to your VM:

You'll see the new Floating IP listed with the Instance:
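If you'd rather do this from the command line, something like the following should achieve the same result; the openstack client's subcommands changed around this era, so treat the exact forms (and the placeholder names) as assumptions:

# Allocate a floating IP from the external network, then attach it to the VM.
openstack floating ip create <external-network>
openstack server add floating ip <vm-name> <floating-ip>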

Before you can log in, however, you'll need to make sure that the security group allows for SSH access. Choose Project->Compute->Access & Security and click Manage Rules for the default security group:

Click +Add Rule:

Under Rule, choose SSH at the bottom and click Add.

You'll see the new rule on the Manage Rules page:
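The CLI equivalent is roughly the following; again, the flag names are an assumption for the client versions of the time, so check openstack help security group rule create:

# Allow inbound SSH (TCP port 22) in the default security group.
openstack security group rule create --protocol tcp --dst-port 22 default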

Now use your SSH client to go ahead and log in using the username ubuntu and the private key you specified when you created the VM.
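For example, substituting your own key file and the floating IP you just associated (both placeholders here):

ssh -i ~/.ssh/mykey.pem ubuntu@<floating-ip>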

Now you're ready to actually deploy containers to the cluster.

The post Creating and accessing a Kubernetes cluster on OpenStack, part 2: Access the cluster appeared first on Mirantis | The Pure Play OpenStack Company.
Source: Mirantis

Dynamic advertising gets the cognitive treatment

Brands are spending more on native advertising than ever before — a lot more — to create targeted, minimally invasive online advertising experiences for consumers.
Business Insider Intelligence reports that native advertising, which assumes the look and feel of content that surrounds it, is the fastest-growing digital advertising category. The same report also projects that spending on native advertising will grow to $21 billion in 2018, up from $4.7 billion in 2013.
The real game-changer for brands that want to make a meaningful connection with audiences in digital channels will be the future marriage of artificial intelligence and native advertising in video content. In this future — likely only two to three years away — dynamic and highly personalized advertising takes on an entirely new meaning.
Advertising's giant leap toward science
IBM Watson's cognitive capabilities, once incorporated into advertisers' video platforms, will enable advertisers to personalize marketing messages across channels, even within the video stream. The key is the ability to accumulate data about a specific viewer's preferences and integrate it with data from external sources, such as social media and advertisers' other marketing tools.
If Watson knows a consumer recently bought a refrigerator, for instance, then it wouldn't show that consumer advertising for refrigerators. Instead, Watson might serve up an ad for a product to put in the new fridge, such as soda. And because Watson could determine, based on purchase history, loyalty to a certain soda brand, the consumer won't see any rivals' ads. Watson will be able to dynamically swap a product they love, such as Coke for Pepsi, into the video the consumer is watching to create a powerful, personalized brand experience.
For brands, the value of such a scenario is clear: they can be seamlessly front-and-center in a consumer's entertainment experience, facilitating a positive and lasting association between brand and content. Media and entertainment companies will benefit, too, because consumers will feel more personally connected to the video content they create.
360-degree user profiles
The ability to deliver highly targeted online video advertising is here. Brands can already use Watson analytics tools and intelligence to enable this for any business and campaign, creating direct advertising that will resonate with customers. Watson intelligence can also be integrated with other digital marketing tools, such as email or text, to deliver personalized advertising and marketing messages.
Many brands are already experimenting with Watson's cognitive capabilities — facial recognition, audio recognition, tone analytics, personality insights and more — to better understand the needs and perceptions of consumers. Chevrolet recently tapped Watson for a "global positivity system" campaign to analyze people's social media feeds, for example. The North Face is among a growing list of retailers using Watson AI capabilities to make product recommendations. Video providers are now exploring ways to use Watson's intelligence to deliver more relevant content to viewers.
Through these efforts, brands are starting to develop 360-degree profiles of users that will help them better understand what their customers say, how they feel and how they interact with the company and its products. These comprehensive profiles are essential to making the dynamic and highly personalized advertising of the future a reality in all digital channels, including video.
Learn more about IBM Cloud Video.
The post Dynamic advertising gets the cognitive treatment appeared first on news.
Source: Thoughts on Cloud

New Dockercast episode and interview with Docker Captain Laura Frank

We recently had the opportunity to catch up with the amazing Laura Frank. Laura is a developer focused on making tools for other developers. As an engineer at Codeship, she works on improving the Docker infrastructure and overall experience for users on Codeship. Previously, she worked on several open source projects to support Docker in the early stages of the project, including Panamax and ImageLayers. She currently lives in Berlin.
Laura is also a Docker Captain, a distinction that Docker awards select members of the community that are experts in their field and passionate about sharing their Docker knowledge with others.
As we do with all of these podcasts, we begin with a little bit of history: “How did you get here?” Then we dive into the Codeship offering and how it optimizes its delivery flow by using Docker containers for everything. We then end up with “What's the coolest Docker story you have?” I hope you enjoy it; please feel free to comment and leave suggestions.

In addition to the questions covered in the podcast, we had the chance to ask Laura a couple of additional questions, below.
How has Docker impacted what you do on a daily basis?
I’m lucky to work with Docker every day in my role as an engineer at Codeship. In addition to appreciating the technical aspects of Docker, I really enjoy seeing the different ways the Docker ecosystem as a whole empowers engineering teams to move faster. Docker is really impactful at two levels: we can use Docker to simplify the way we build and distribute software, but we can also solve problems in more unique ways because containerization is more accessible. It’s not just about running a production application in containers; you can use Docker to provide a distributed system of containers in order to scale up and down and handle task processing in interesting ways. To me, Docker is really about reducing friction in the development process and allowing engineers to focus on the stuff we’re best at: solving complex problems in interesting ways.
As a Docker Captain, how do you share that learning with the community?
I’m usually in front of a crowd, talking through a set of problems that can be solved with Docker. There are lots of great ways to share information with others, from writing a blog post or presenting a webinar, to answering questions at a meetup. I’m very hands on when it comes to helping people wrap their heads around the questions they have when using Docker. I think the best way to help is to open my laptop and work through the issues together.
Since Docker is such a complex and vast ecosystem, it’s important that Captains, and all of us who lead different areas of the Docker community, understand that each person has different levels of expertise with different components. The goal isn’t to impress people with how smart you are or what cool things you’ve built; the goal is to help your peers become better at what they do. But the most important point is that everyone has something to contribute to the community.
Who are you when you’re not online?
I really love to get far away from computers when I’m not at work. I think there are so many other interesting parts of me that aren’t related to the work I do in the Docker community, and are separate from me as a technologist. You have to strike the right balance to stay focused and healthy. I love to adventure outdoors: canoeing and kayaking in the summer, in addition to running around the city, hiking, and camping. Eliminating distractions and giving my brain some time to recover helps me think more clearly and strategically during the week.
How did you first get involved with Docker?
In 2013, I worked at HP Cloud on an infrastructure engineering team, and someone shared Solomon’s lightning talk from PyCon in an IRC or HipChat channel. I remember being really intrigued by the technical complexity and greater vision that he expressed. Later, my boss from HP left to join CenturyLink Labs, where he was building out a team to work on Docker-related developer tools, and a handful of us went with him. It was a huge gamble. There wasn’t much in the way of dev tools built around Docker, and those projects were really fun and exciting to work on, because we were just figuring out everything as we went along. My team was behind Panamax, ImageLayers, Lorry, and Dray, to name a few. If someone were to take me back to 2013 and tell me that this weirdly obscure new project would be the thing I spend 100% of my time working with, I wouldn’t have believed them, but I’m really glad it’s true.
If you could switch your job with anyone else, whose job would you want?
I’d be a pilot. I think it also shares common qualities with my role as an engineer: I love the high-level view and seeing lots of complex systems working together. Plus, I think I’d look pretty cool in a tactical jumpsuit. Maybe I’ll float that idea by the rest of the engineers on my team as a possible dress code update.
Do you have a favorite quote?
“Don’t half-ass two things. Whole-ass one thing.” – Ron Swanson. It’s really tempting to try to learn everything about everything, especially related to technology that is constantly changing. The Docker world can be pretty chaotic. Sometimes it’s better to slow down, focus on one component of the ecosystem, and rely on the expertise of your peers for guidance in other areas. The Docker community is a great place to see this in action, because you simply can’t do it all yourself. You have to rely on the contributions of others. And you know, finish unloading the dishwasher before starting to clean the bathroom. Ron Swanson is a wise man in all areas of life.
 
The post New Dockercast episode and interview with Docker Captain Laura Frank appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Conquering impossible goals with real-time analytics

“Past data only said ‘go faster’ or ‘ride better,’” Kelly Catlin, Olympic cyclist and silver medalist, shared with the audience at the IBM World of Watson event on 24 October. In other words, the feedback generated from all her analytics data sources — the speed, cadence, and power meters on her bicycle — was generally useless to this former mountain bike racer, who wanted to improve her track cycling performance by 4.5 percent to capture a medal at the 2016 Rio Olympic Games.

USA Cycling Women's Team Pursuit

While I am by no means an Olympic-level athlete, I knew exactly what Kelly meant. I’ve logged over 300 miles in running races over 8 years, and just in this past year started to see some small improvements in my 5Ks and half-marathons. Suddenly, I started asking, “How much faster could I run a half marathon? Could I translate these improvements to longer distances?” I downloaded all my historical race information into an Excel chart. I looked at my Runkeeper and Strava training runs. Despite all this data, I was stuck. “What should I do to improve?” I asked a coach. He said, “Run more during the week.”
But I wanted to know more. How much capacity do I really have? How much does my asthma limit me? Should I only run in certain climates? During which segments of a race should I speed up or slow down? Just like Kelly, who spent four hours per session reviewing data, I understood how historical data had limited impact on improving current performance.
According to Derek Bouchard-Hall, CEO of USA Cycling, “At the elite level, a 3 percent performance improvement in 12 months is attainable but very difficult. The USA Women’s Team Pursuit team had only 11 months and needed a 4.5 percent improvement, which would require them to perform at a new world-record time (4:12 / 15.4 lap average). The coach could account for the 3 percent in physiological improvement but needed technology to bring the other 1.5 percent. He focused on two areas: equipment (bike/tire, wind tunnel training) and real-time analytic training insights.”

How exactly could real-time analytics insight change performance?
According to Kelly, “Now, we can make executable changes.” She and her teammates now know when to transition the lead rider, how best to make that transition, and at which points in the race to pick up cadence.
The result: USA Women’s Team Pursuit finished the race in 4:12.454 to secure the silver medal behind Great Britain, which finished in 4:10.236.
The introduction of data sets and technology did not alone lead to Team USA’s incredible improvement. Instead, it was the combination of well-defined goals, strategic implementation of technology, and actionable, timely recommendations that led to their strong performance and results.
As you consider how to improve an area of your business, keep in mind these three things from the USA Cycling project with IBM Bluemix:

Set well-defined goals. Or, as business expert Stephen Covey would say, “always begin with the end in mind.” USA Cycling clearly articulated they needed to increase performance by 4.5 percent, and that would take more than a coach.
Choice and implementation of technology matters. Choose the tools that will not only deliver analytics data and insights, but do so in a timely and relevant manner for your business. Explore how to get started with IBM Bluemix.
Data alone doesn’t equal guidance. You must review the data, and with your colleagues, your coach, your running buddy, set clear, executable actions.

The IBM Bluemix Garage Method can help you define your ideas and bring a culture of innovation agility to your cloud development.
A version of this post originally appeared on the IBM Bluemix blog.
The post Conquering impossible goals with real-time analytics appeared first on news.
Source: Thoughts on Cloud

Creating and accessing a Kubernetes cluster on OpenStack, part 1: Create the cluster

In honor of this week's KubeCon, we're bringing you information on how to get started with the Kubernetes container orchestration engine. On Monday we explained what Kubernetes is. Now let's show you how to actually use it.
In this three-part series, we'll take you through the steps to run an Nginx container on Kubernetes over OpenStack (a quick preview of the end goal follows the list), including:

Deploying a Kubernetes cluster with Murano
Configuring OpenStack security to make a Kubernetes cluster usable from within OpenStack
Downloading and configuring the Kubernetes client
Creating a Kubernetes application
Running an application on Kubernetes
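As that preview, here is roughly what the last step boils down to once the cluster is up and the client is configured (part 3 covers this properly); the image and flags shown are typical for Kubernetes clients of this era, so treat them as an illustrative sketch rather than the exact commands from part 3:

# Launch an Nginx container on the cluster and confirm its pod is running.
kubectl run nginx --image=nginx --port=80
kubectl get pods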

Let&8217;s get started.
Create the Kubernetes cluster with Murano
The first step is to get a cluster created. There are several ways to do that, but the easiest is to use a Murano package. If you don't have Murano handy, you can get access to it in several ways; the simplest is to deploy Mirantis OpenStack with Murano.
Import the Kubernetes cluster app
The Kubernetes Cluster app itself is available in the OpenStack Foundation's Community App Catalog. Follow these steps:

Log into Horizon and go to Applications->Manage->Packages.
Go to the Community App Catalog and choose Murano Apps -> Kubernetes Cluster to get the Kubernetes Cluster app. You're looking for the URL for the package itself; in this case, that's http://storage.apps.openstack.org/apps/com.mirantis.docker.kubernetes.KubernetesCluster.zip. (A CLI alternative follows these steps.)
Back in Horizon, click Import Package.
For the Package Source, choose URL, and add the URL from step 2, then click Next:
Murano will automatically start downloading the images it needs, then mark them for use with Murano; you won't have to do anything there but click Import and wait. To see the images downloading, choose Project->Images. If the images didn't already exist, you'll see them Saving:
Once they're finished saving, you'll see that their status has changed to Active:
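If you'd rather script the import than click through Horizon, the python-muranoclient of this era could pull a package straight from a URL; the exact invocation below is our assumption, so check murano help package-import:

# Import the Kubernetes Cluster package directly from the app catalog URL.
murano package-import http://storage.apps.openstack.org/apps/com.mirantis.docker.kubernetes.KubernetesCluster.zip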

Next, we'll deploy an environment that includes the Kubernetes master and minions.
Create the Kubernetes Murano environment

In Horizon, choose Applications->Browse. You should see the new app under Recent Activity.
To make things simple, click the Quick Deploy button.
Keep all the defaults, then scroll down and click Next.
Choose the Debian image and click Create.
Horizon will automatically take you to the new environment. At this point, it's been created, but not deployed:
You can add other things if you want, but for now, click Deploy This Environment. Deployment goes through a number of steps, creating VMs, networks, security groups, and so on. You can follow along on the main environment page, or by checking the logs:
When deployment is complete, you'll see the status change to Ready:
All that's great, but where do you access the cluster? Click Latest Deployment Log to see the IP address assigned to the cluster:

Now, you'll notice that there are references to (in this case) 4 different nodes: gateway-1, kube-1, kube-2, and kube-3. You can see these instances if you go to Project->Compute->Instances. Notice that the Kubernetes API is running on kube-1.
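Once you can reach kube-1 (part 2 covers access in detail), a quick sanity check is to hit the API's version endpoint; the insecure port 8080 shown here is an assumption about how this Murano app exposes the API, and the IP is a placeholder:

# Should return a JSON blob with the Kubernetes version if the API is up.
curl http://<kube-1-ip>:8080/version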
In part 2, you'll actually access the cluster.
The post Creating and accessing a Kubernetes cluster on OpenStack, part 1: Create the cluster appeared first on Mirantis | The Pure Play OpenStack Company.
Source: Mirantis

Notes from the Barcelona Summit: OpenStack in the Service of Science

Every summit, we see new use cases showing how OpenStack-based clouds help scientists move their research forward by providing big data processing and analysis, and the summit in Barcelona was no exception. The attendees were undoubtedly amazed by the scale and value of the research presented in areas like nuclear physics, astronomy, and medicine. We've gotten strangely familiar with data levels presented in petabytes (1,000,000 GB), but zettabytes (1,000,000,000,000 GB)? That's new. Add to that the use of hundreds of thousands of CPU cores and you have some seriously Big Data.
First, Tim Bell, who seems to be a permanent OpenStack speaker, updated the audience on the state of cloud infrastructure at CERN. He noted that the scientists there receive 0.5 PB of data daily as a result of monitoring a billion collisions of particles in the Large Hadron Collider. This huge amount of data is processed using more than 190,000 OpenStack cores.

Next, Dr. Rosie Bolton, from Cambridge University, explained how researchers explore our Universe to find the origins of galaxy formation and dark matter using a giant software-defined radio telescope called the Square Kilometre Array, split geographically between South Africa and Australia. The telescope produces 1.3 ZB of data and stores 1 PB every day, to be processed and stored in the OpenStack cloud.

Dr. Paul Calleja, also from Cambridge University, then explained how researchers use a purpose-built biomedical cloud to collect patient data from hospitals, and to store and analyse that data to come up with new medical treatments. For example, the project developed a statistical model that processes patients' medical records in real time during surgical procedures and helps to cut surgical site infection rates by 58%.
He also presented another use case, in which a cloud platform called OpenCB uses Hadoop infrastructure for next-generation big data analytics that will be used by Genomics England to study the genomes of 100,000 people in the UK. OpenCB is already being used to analyse the genomes of 10,000 rare disease patients.
In a summit keynote, he also talked about using the biomedical cloud for computing and storing the data obtained from brain scanning facilities.

Research isn't all that OpenStack is used for in academia. Students study networking and security, for example, by doing labs in OpenStack clouds. Universities build their supercomputers on the OpenStack platform to help both students and teachers do their doctoral and master's research projects. Add to that the fact that it's open source, and it's a no-lock-in choice for institutions that often have limited budgets.
There's a lot of focus these days on OpenStack as an enterprise tool, but remember, it was originally designed, in part, by NASA, so it's no wonder that OpenStack has so many followers in academic and scientific circles, and that continues today.
The post Notes from the Barcelona Summit: OpenStack in the Service of Science appeared first on Mirantis | The Pure Play OpenStack Company.
Source: Mirantis