A Site That Facebook Made A Top Trending Topic Is A Sketchy Reprint Factory

EndingTheFed.com is filled with dubious articles taken from other right-wing websites.

For much of Sunday and into Monday, Fox News host Megyn Kelly was one of the top Trending Topics on Facebook. Her name appeared in the sidebar seen by Facebook users in the United States:

Facebook

If you hovered your mouse over her name, up popped a story claiming that Kelly had been “kicked out” of Fox News “for backing Hillary.” The story was from a site called EndingTheFed.com — and it’s false.

EndingTheFed.com was anonymously registered by its current owner in March of this year. The site has grown quickly thanks to a strategy of publishing aggressively pro-Trump, right-wing stories. Even more notable is that the majority of its recent stories are simply taken word-for-word from other right-wing sites.

That means Facebook, the largest social network on the planet, actively promoted a fake story from a website that basically exists to republish other, often dubious, posts from fringe sites on the conservative web.

It’s unclear whether Ending The Fed has permission to republish content from other sites, or if it’s committing mass plagiarism. BuzzFeed News contacted the site but has not heard back. Facebook declined to comment on the record about how the story made it to the Trending Topics list.

Facebook

Even before Facebook gave it a boost, Ending The Fed was getting big hits on the social network. The site’s top five stories have together racked up over 1.2 million likes, shares, and comments since May:

The top story is a word-for-word reprint of this story, while the (partially false, partially true) claim about Obama cutting military pay is taken from here.

It’s unclear whether Ending The Fed’s success on Facebook in recent months caused it to be selected as the top story for the Megyn Kelly Trending Topic. On Friday, Facebook announced it was no longer using humans to write the summaries that accompany Trending Topics, though human engineers would be reviewing the topics selected by the algorithm.

Facebook previously announced measures to try and reduce the spread of fake news on its platform, but a BuzzFeed News report found that false stories continue to receive strong engagement.

BuzzSumo

Ending The Fed often republishes false stories. The same day it ran the Kelly story it also incorrectly reported that NFL quarterback Colin Kaepernick had converted to Islam:

Ending The Fed’s story was a word-for-word repost of this from Clash Daily, which claimed Kaepernick had converted. That post was based on a claim from a sports gossip website, which cited anonymous “people close to the player” who said Kaepernick is going to become Muslim.

Kaepernick recently attracted criticism after he refused to stand for the national anthem before a football game; he has said nothing about converting to Islam.

Ending The Fed / Via endingthefed.com



Quelle: BuzzFeed

Recent RDO blogs, August 29, 2016

It’s been a few weeks since I posted a blog update, and we’ve had some great posts in the meantime. Here’s what RDO enthusiasts have been blogging about for the last few weeks.

Native DHCP support in OVN by Numan Siddique

Recently, native DHCP support has been added to OVN. In this post we will see how native DHCP is supported in OVN and how it is used by the OpenStack Neutron OVN ML2 driver. The code which supports native DHCP can be found here.

… read more at http://tm3.org/8d
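To make the idea concrete: native DHCP in OVN hangs off the northbound database’s DHCP_Options table, which logical switch ports are then pointed at. A minimal sketch using generic ovn-nbctl commands (the subnet, option values, and port name sw0-port1 are illustrative assumptions, not taken from the post):

  # Create a DHCP_Options row for the subnet; ovn-nbctl prints the new row's UUID
  $ uuid=$(ovn-nbctl create DHCP_Options cidr=10.0.0.0/24 \
      options='"server_id"="10.0.0.1" "server_mac"="00:00:00:00:00:01" "lease_time"="3600" "router"="10.0.0.1"')

  # Point a logical switch port at those options, so ovn-controller answers
  # that port's DHCP requests natively, with no dnsmasq process involved
  $ ovn-nbctl set Logical_Switch_Port sw0-port1 dhcpv4_options=$uuid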

Manual validation of Cinder A/A patches by Gorka Eguileor

In the Cinder Midcycle I agreed to create some sort of document explaining the manual tests I’ve been doing to validate the work on Cinder’s Active-Active High Availability, as a starting point for other testers and for the automation of the tests, and writing a blog post was the most convenient way for me to do so. So here it is.

… read more at http://tm3.org/8e

Exploring YAQL Expressions by Lars Kellogg-Stedman

The Newton release of Heat adds support for a yaql intrinsic function, which allows you to evaluate yaql expressions in your Heat templates. Unfortunately, the existing yaql documentation is somewhat limited, and does not offer examples of many of yaql’s more advanced features.

… read more at http://tm3.org/8f
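For a taste of the feature, a Heat template can evaluate a yaql expression against inline data. Here is a minimal sketch with illustrative parameter names and values (not drawn from the post itself):

  heat_template_version: 2016-10-14

  parameters:
    values:
      type: json
      default: [1, 2, 4, 8]

  outputs:
    max_value:
      description: Largest element of the input list, computed by yaql
      value:
        yaql:
          expression: $.data.values.max()
          data:
            values: {get_param: values}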

Tripleo HA Federation Proof-of-Concept by Adam Young

Keystone has supported identity federation for several releases. I have been working on a proof-of-concept integration of identity federation in a TripleO deployment. I was able to successfully log in to Horizon via WebSSO, and want to share my notes.

… read more at http://tm3.org/8g

TripleO Deploy Artifacts (and puppet development workflow) by Steve Hardy

For a while now, TripleO has supported a “DeployArtifacts” interface, aimed at making it easier to deploy modified/additional files on your overcloud, without the overhead of frequently rebuilding images.

… read more at http://tm3.org/8h

TripleO deep dive session (Overcloud – Physical network) by Carlos Camacho

This is the sixth video from a series of “Deep Dive” sessions related to TripleO deployments.

… read more at http://tm3.org/8i

Improving QEMU security part 7: TLS support for migration by Daniel Berrange

This blog is part 7 of a series I am writing about work I’ve completed over the past few releases to improve QEMU security related features.

… read more at http://tm3.org/8j
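The rough shape of the feature, as a hedged sketch (the credentials directory, port, and hostname below are illustrative): both QEMU processes are started with a tls-creds-x509 object, and the migration is told to use it via the tls-creds migration parameter before the transfer starts.

  # Destination QEMU is started with:
  #   -object tls-creds-x509,id=tls0,dir=/etc/pki/qemu,endpoint=server -incoming defer
  (qemu) migrate_set_parameter tls-creds tls0
  (qemu) migrate_incoming tcp:0.0.0.0:9000

  # Source QEMU is started with:
  #   -object tls-creds-x509,id=tls0,dir=/etc/pki/qemu,endpoint=client
  (qemu) migrate_set_parameter tls-creds tls0
  (qemu) migrate -d tcp:dest.example.com:9000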

Running Unit Tests on Old Versions of Keystone by Adam Young

Just because Icehouse is EOL does not mean no one is running it. One part of my job is back-porting patches to older versions of Keystone that my company supports.

… read more at http://tm3.org/8k
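The mechanics are roughly: check out the old code and run its unit tests in a virtualenv pinned to that era’s dependencies. A hypothetical session (the tag name assumes OpenStack’s convention of tagging EOL branches; it is not taken from the post):

  $ git clone https://git.openstack.org/openstack/keystone
  $ cd keystone
  $ git checkout icehouse-eol   # EOL branches live on as tags
  $ tox -e py27                 # run that branch's own py27 unit tests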

BAND-AID for OOM issues with TripleO manual deployments by Carlos Camacho

First, in the Undercloud: when deploying stacks you might find that heat-engine (4 workers) takes a lot of RAM, so for specific usage peaks it can be useful to have a swap file. In order to have this swap file enabled and used by the OS, execute the following instructions in the Undercloud:

… read more at http://tm3.org/8l
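The exact commands are elided above; the general shape of adding a swap file on a Linux host looks like the following (the size and path are illustrative, not necessarily the post’s exact steps):

  # Create and activate a 4 GB swap file
  $ sudo dd if=/dev/zero of=/swapfile bs=1M count=4096
  $ sudo chmod 600 /swapfile
  $ sudo mkswap /swapfile
  $ sudo swapon /swapfile
  $ swapon -s   # confirm the swap space is active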

Debugging submissions errors in TripleO CI by Carlos Camacho

Landing upstream submissions might be hard if you are not passing all the CI jobs that check that your code actually works. Let’s assume that CI is working properly, without any kind of infra issue and without any error introduced by mistake from other submissions. In that case, we might end up having something like:

… read more at http://tm3.org/8m

Ceph, TripleO and the Newton release by Giulio Fidente

Time to roll up some notes on the status of Ceph in TripleO. The majority of these functionalities were available in the Mitaka release too, but the examples work with code from the Newton release, so they might not apply identically to Mitaka.

… read more at http://tm3.org/8n
Quelle: RDO

Test and deploy to Google App Engine with the new Maven and Gradle plugins

Posted by Amir Rouzrokh, Product Manager, Google Cloud Platform


Here at Google, we strive to make it easy for developers to use Google Cloud Platform (GCP). Today, we’re excited to announce the beta release of two new build tool plugins for Java developers: one for Apache Maven, and another for Gradle. Together, these plugins allow developers to test applications locally and then deploy them to the cloud from the command line, or through integration with an Integrated Development Environment (IDE) such as Eclipse or IntelliJ (check out our new native plugin for IntelliJ as well).

Developed in the open, the plugins are available for both the standard and flexible Google App Engine environments and are based on the Google Cloud SDK. The new Maven plugin for App Engine standard is offered as an alternative to the existing plugin: users can choose the existing plugin if they wish to use tooling based on the App Engine Java SDK, or the new plugin if they wish to use tooling based on the Google Cloud SDK (all the other plugins are fully based on the Google Cloud SDK).

After installing the Google Cloud SDK, you can install the plugins using the pom.xml or build.gradle file:

pom.xml

<plugins>
  <plugin>
    <groupId>com.google.cloud.tools</groupId>
    <artifactId>appengine-maven-plugin</artifactId>
    <version>0.1.1-beta</version>
  </plugin>
</plugins>

build.gradle

buildscript {
  dependencies {
    classpath "com.google.cloud.tools:appengine-gradle-plugin:+"  // latest version
  }
}

apply plugin: "com.google.cloud.tools.appengine"
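The announcement also mentions testing applications locally. Assuming the plugins follow their documented goal and task names for the standard environment, the local App Engine development server is started with:

$ mvn appengine:run

$ gradle appengineRun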

And then, to deploy an application:

$ mvn appengine:deploy

$ gradle appengineDeploy

Once the application is deployed, you’ll see its URL in the output of the shell.

For enterprise users who wish to take their compiled artifacts such as JARs and WARs through a separate release process, both plugins provide a staging command that copies the final compiled artifacts to a target directory without deploying them to the cloud. Those artifacts can then be passed to a Continuous Integration/Continuous Delivery (CI/CD) pipeline (see here for some of the CI/CD offerings for GCP).

$ mvn appengine:stage

$ gradle appengineStage

You can check the status of your deployed applications in the Google Cloud Platform Console. Head to the Google App Engine tab and click on Instances to see your application’s underlying infrastructure in action.

For additional information on the new plugins, please see the documentation for App Engine Standard (Maven, Gradle) and App Engine Flexible (Maven, Gradle). If you have specific feature requests, please submit them at GitHub, for Maven and Gradle.

You can learn more about using Java on GCP at the Java developer portal, where you’ll find all the information you need to get up and running. And be on the lookout for additional plugins for Google Cloud Platform services in the coming months!

Happy Coding!

Quelle: Google Cloud Platform

Scaling Stateful Applications using Kubernetes Pet Sets and FlexVolumes with Datera Elastic Data Fabric

Editor’s note: today’s guest post is by Shailesh Mittal, Software Architect, and Ashok Rajagopalan, Sr Director Product at Datera Inc, talking about stateful application provisioning with Kubernetes on Datera Elastic Data Fabric.

Introduction

Persistent volumes in Kubernetes are foundational as customers move beyond stateless workloads to run stateful applications. While Kubernetes has supported stateful applications such as MySQL, Kafka, Cassandra, and Couchbase for a while, the introduction of Pet Sets has significantly improved this support. In particular, the procedure to sequence the provisioning and startup, and the ability to scale and associate durably by Pet Sets, has made it possible to automate the scaling of “Pets” (applications that require consistent handling and durable placement).

Datera, elastic block storage for cloud deployments, has seamlessly integrated with Kubernetes through the FlexVolume framework. Based on the first principles of containers, Datera allows application resource provisioning to be decoupled from the underlying physical infrastructure. This brings clean contracts (aka, no dependency or direct knowledge of the underlying physical infrastructure), declarative formats, and eventually portability to stateful applications.

While Kubernetes allows for great flexibility to define the underlying application infrastructure through yaml configurations, Datera allows for that configuration to be passed to the storage infrastructure to provide persistence. Through the notion of Datera AppTemplates, in a Kubernetes environment, stateful applications can be automated to scale.

Deploying Persistent Storage

Persistent storage is defined using the Kubernetes PersistentVolume subsystem. PersistentVolumes are volume plugins and define volumes that live independently of the lifecycle of the pod that is using them. They are implemented as NFS, iSCSI, or by cloud provider specific storage systems. Datera has developed a volume plugin for PersistentVolumes that can provision iSCSI block storage on the Datera Data Fabric for Kubernetes pods.

The Datera volume plugin gets invoked by kubelets on minion nodes and relays the calls to the Datera Data Fabric over its REST API. Below is a sample deployment of a PersistentVolume with the Datera plugin:

  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: pv-datera-0
  spec:
    capacity:
      storage: 100Gi
    accessModes:
      - ReadWriteOnce
    persistentVolumeReclaimPolicy: Retain
    flexVolume:
      driver: "datera/iscsi"
      fsType: "xfs"
      options:
        volumeID: "kube-pv-datera-0"
        size: "100"
        replica: "3"
        backstoreServer: "tlx170.tlx.daterainc.com:7717"

This manifest defines a PersistentVolume of 100 GB to be provisioned in the Datera Data Fabric, should a pod request the persistent storage.

  [root@tlx241 /]# kubectl get pv
  NAME          CAPACITY   ACCESSMODES   STATUS      CLAIM     REASON    AGE
  pv-datera-0   100Gi      RWO           Available                       8s
  pv-datera-1   100Gi      RWO           Available                       2s
  pv-datera-2   100Gi      RWO           Available                       7s
  pv-datera-3   100Gi      RWO           Available                       4s

Configuration

The Datera PersistenceVolume plugin is installed on all minion nodes. When a pod lands on a minion node with a valid claim bound to the persistent storage provisioned earlier, the Datera plugin forwards the request to create the volume on the Datera Data Fabric. All the options that are specified in the PersistentVolume manifest are sent to the plugin upon the provisioning request.

Once a volume is provisioned in the Datera Data Fabric, it is presented as an iSCSI block device to the minion node, and kubelet mounts this device for the containers (in the pod) to access it.

Using Persistent Storage

Kubernetes PersistentVolumes are used along with a pod using PersistentVolume Claims. Once a claim is defined, it is bound to a PersistentVolume matching the claim’s specification. A typical claim for the PersistentVolume defined above would look like below:

  kind: PersistentVolumeClaim
  apiVersion: v1
  metadata:
    name: pv-claim-test-petset-0
  spec:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 100Gi

When this claim is defined and it is bound to a PersistentVolume, resources can be used with the pod specification:

  [root@tlx241 /]# kubectl get pv
  NAME          CAPACITY   ACCESSMODES   STATUS      CLAIM                            REASON    AGE
  pv-datera-0   100Gi      RWO           Bound       default/pv-claim-test-petset-0             6m
  pv-datera-1   100Gi      RWO           Bound       default/pv-claim-test-petset-1             6m
  pv-datera-2   100Gi      RWO           Available                                              7s
  pv-datera-3   100Gi      RWO           Available                                              4s

  [root@tlx241 /]# kubectl get pvc
  NAME                     STATUS    VOLUME        CAPACITY   ACCESSMODES   AGE
  pv-claim-test-petset-0   Bound     pv-datera-0   0                        3m
  pv-claim-test-petset-1   Bound     pv-datera-1   0                        3m

A pod can use a PersistentVolume Claim like below:

  apiVersion: v1
  kind: Pod
  metadata:
    name: kube-pv-demo
  spec:
    containers:
    - name: data-pv-demo
      image: nginx
      volumeMounts:
      - name: test-kube-pv1
        mountPath: /data
      ports:
      - containerPort: 80
    volumes:
    - name: test-kube-pv1
      persistentVolumeClaim:
        claimName: pv-claim-test-petset-0

The result is a pod using a PersistentVolume Claim as a volume. It in turn sends the request to the Datera volume plugin to provision storage in the Datera Data Fabric.

  [root@tlx241 /]# kubectl describe pods kube-pv-demo
  Name:       kube-pv-demo
  Namespace:  default
  Node:       tlx243/172.19.1.243
  Start Time: Sun, 14 Aug 2016 19:17:31 -0700
  Labels:     <none>
  Status:     Running
  IP:         10.40.0.3
  Controllers: <none>
  Containers:
    data-pv-demo:
      Container ID: docker://ae2a50c25e03143d0dd721cafdcc6543fac85a301531110e938a8e0433f74447
      Image:    nginx
      Image ID: docker://sha256:0d409d33b27e47423b049f7f863faa08655a8c901749c2b25b93ca67d01a470d
      Port:     80/TCP
      State:    Running
        Started: Sun, 14 Aug 2016 19:17:34 -0700
      Ready:    True
      Restart Count: 0
      Environment Variables: <none>
  Conditions:
    Type           Status
    Initialized    True
    Ready          True
    PodScheduled   True
  Volumes:
    test-kube-pv1:
      Type:      PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
      ClaimName: pv-claim-test-petset-0
      ReadOnly:  false
    default-token-q3eva:
      Type:       Secret (a volume populated by a Secret)
      SecretName: default-token-q3eva
  QoS Tier: BestEffort
  Events:
    FirstSeen LastSeen Count From                 SubobjectPath                 Type   Reason    Message
    --------- -------- ----- ----                 -------------                 ------ ------    -------
    43s       43s      1     {default-scheduler }                               Normal Scheduled Successfully assigned kube-pv-demo to tlx243
    42s       42s      1     {kubelet tlx243}     spec.containers{data-pv-demo} Normal Pulling   pulling image "nginx"
    40s       40s      1     {kubelet tlx243}     spec.containers{data-pv-demo} Normal Pulled    Successfully pulled image "nginx"
    40s       40s      1     {kubelet tlx243}     spec.containers{data-pv-demo} Normal Created   Created container with docker id ae2a50c25e03
    40s       40s      1     {kubelet tlx243}     spec.containers{data-pv-demo} Normal Started   Started container with docker id ae2a50c25e03

The persistent volume is presented as an iSCSI device at the minion node (tlx243 in this case):

  [root@tlx243 ~]# lsscsi
  [0:2:0:0]    disk    SMC      SMC2208          3.24  /dev/sda
  [11:0:0:0]   disk    DATERA   IBLOCK           4.0   /dev/sdb

  [root@tlx243 datera~iscsi]# mount | grep sdb
  /dev/sdb on /var/lib/kubelet/pods/6b99bd2a-628e-11e6-8463-0cc47ab41442/volumes/datera~iscsi/pv-datera-0 type xfs (rw,relatime,attr2,inode64,noquota)

Containers running in the pod see this device mounted at /data as specified in the manifest:

  [root@tlx241 /]# kubectl exec kube-pv-demo -c data-pv-demo -it bash
  root@kube-pv-demo:/# mount | grep data
  /dev/sdb on /data type xfs (rw,relatime,attr2,inode64,noquota)

Using Pet Sets

Typically, pods are treated as stateless units, so if one of them is unhealthy or gets superseded, Kubernetes just disposes of it. In contrast, a PetSet is a group of stateful pods that has a stronger notion of identity. The goal of a PetSet is to decouple this dependency by assigning identities to individual instances of an application that are not anchored to the underlying physical infrastructure.

A PetSet requires {0..n-1} Pets. Each Pet has a deterministic name, PetSetName-Ordinal, and a unique identity. Each Pet has at most one pod, and each PetSet has at most one Pet with a given identity. A PetSet ensures that a specified number of “pets” with unique identities are running at any given time. The identity of a Pet is comprised of:

  - a stable hostname, available in DNS
  - an ordinal index
  - stable storage: linked to the ordinal & hostname

A typical PetSet definition using a PersistentVolume Claim looks like below:

  # A headless service to create DNS records
  apiVersion: v1
  kind: Service
  metadata:
    name: test-service
    labels:
      app: nginx
  spec:
    ports:
    - port: 80
      name: web
    clusterIP: None
    selector:
      app: nginx
  ---
  apiVersion: apps/v1alpha1
  kind: PetSet
  metadata:
    name: test-petset
  spec:
    serviceName: "test-service"
    replicas: 2
    template:
      metadata:
        labels:
          app: nginx
        annotations:
          pod.alpha.kubernetes.io/initialized: "true"
      spec:
        terminationGracePeriodSeconds: 0
        containers:
        - name: nginx
          image: gcr.io/google_containers/nginx-slim:0.8
          ports:
          - containerPort: 80
            name: web
          volumeMounts:
          - name: pv-claim
            mountPath: /data
    volumeClaimTemplates:
    - metadata:
        name: pv-claim
        annotations:
          volume.alpha.kubernetes.io/storage-class: anything
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 100Gi

We have the following PersistentVolume Claims available:

  [root@tlx241 /]# kubectl get pvc
  NAME                     STATUS    VOLUME        CAPACITY   ACCESSMODES   AGE
  pv-claim-test-petset-0   Bound     pv-datera-0   0                        41m
  pv-claim-test-petset-1   Bound     pv-datera-1   0                        41m
  pv-claim-test-petset-2   Bound     pv-datera-2   0                        5s
  pv-claim-test-petset-3   Bound     pv-datera-3   0                        2s

When this PetSet is provisioned, two pods get instantiated:

  [root@tlx241 /]# kubectl get pods
  NAMESPACE     NAME                        READY     STATUS    RESTARTS   AGE
  default       test-petset-0               1/1       Running   0          7s
  default       test-petset-1               1/1       Running   0          3s

Here is how the PetSet test-petset instantiated earlier looks:

  [root@tlx241 /]# kubectl describe petset test-petset
  Name: test-petset
  Namespace: default
  Image(s): gcr.io/google_containers/nginx-slim:0.8
  Selector: app=nginx
  Labels: app=nginx
  Replicas: 2 current / 2 desired
  Annotations: <none>
  CreationTimestamp: Sun, 14 Aug 2016 19:46:30 -0700
  Pods Status: 2 Running / 0 Waiting / 0 Succeeded / 0 Failed
  No volumes.
  No events.

Once a PetSet is instantiated, such as test-petset below, increasing the number of replicas (i.e. the number of pods started with that PetSet) instantiates more pods, and more PersistentVolume Claims get bound to the new pods:

  [root@tlx241 /]# kubectl patch petset test-petset -p '{"spec":{"replicas":"3"}}'
  "test-petset" patched

  [root@tlx241 /]# kubectl describe petset test-petset
  Name: test-petset
  Namespace: default
  Image(s): gcr.io/google_containers/nginx-slim:0.8
  Selector: app=nginx
  Labels: app=nginx
  Replicas: 3 current / 3 desired
  Annotations: <none>
  CreationTimestamp: Sun, 14 Aug 2016 19:46:30 -0700
  Pods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed
  No volumes.
  No events.

  [root@tlx241 /]# kubectl get pods
  NAME                        READY     STATUS    RESTARTS   AGE
  test-petset-0               1/1       Running   0          29m
  test-petset-1               1/1       Running   0          28m
  test-petset-2               1/1       Running   0          9s

Now the PetSet is running 3 pods after the patch is applied. When the above PetSet definition is patched to have one more replica, it introduces one more pod in the system. This in turn results in one more volume getting provisioned on the Datera Data Fabric. So volumes get dynamically provisioned and attached to a pod when the PetSet scales up.

To support the notion of durability and consistency, if a pod moves from one minion to another, volumes do get attached (mounted) to the new minion node and detached (unmounted) from the old minion to maintain persistent access to the data.

Conclusion

This demonstrates Kubernetes with Pet Sets orchestrating stateful and stateless workloads. While the Kubernetes community is working on expanding the FlexVolume framework’s capabilities, we are excited that this solution makes it possible for Kubernetes to be run more widely in the datacenters. Join and contribute: Kubernetes Storage SIG.

  - Download Kubernetes
  - Get involved with the Kubernetes project on GitHub
  - Post questions (or answer questions) on Stack Overflow
  - Connect with the community on the k8s Slack
  - Follow us on Twitter @Kubernetesio for latest updates
Quelle: kubernetes

Apple To Unveil New iPhone On September 7

This afternoon, Apple sent out an invitation for its fall event, which will be held Wednesday, Sept. 7 at 10 a.m. PST at the Bill Graham Civic Auditorium in San Francisco. In keeping with its yearly product cycle, the company is expected to unveil its newest iPhone models.

Reports suggest that the new models will be similar in design to their predecessors, but with better cameras and a new pressure-sensitive home button with haptic feedback. Multiple reports also suggest that Apple will eliminate the headphone jack on the new phones — a controversial decision.

Also rumored (but unconfirmed): an upgraded Apple Watch with built-in GPS and improved battery life.

Quelle: BuzzFeed

IBM expands partner ecosystem for VMware users moving to the cloud

More and more organizations are moving their enterprise workloads to the cloud, but they don’t just get there through magic. Often, there’s a lot of expense and risk involved. Occasionally, entire IT operations have to be overhauled. It’s a big challenge. To face down that challenge, IBM and VMware joined forces earlier this year to […]
The post IBM expands partner ecosystem for VMware users moving to the cloud appeared first on Thoughts On Cloud.
Quelle: Thoughts on Cloud

Build your data science skills–and your network–in Atlanta

Over the last decade, data science has gone from a murmur to a deafening roar.

The demand for skilled data scientists is showing up everywhere—in every industry. And whatever their titles, the people who can do analysis, machine learning, big data, or visualization have never been more universally valued.

It’s a good time to be a data scientist.

As a senior recruiter in AI and machine learning at Microsoft, I’ve experienced this rapid evolution from the front row. We now have approximately 5,000 members in the company’s internal Machine Learning and Data Science community, sharing insights and technical expertise across nearly every team and discipline. They’re woven into our very DNA.

A lot of these folks will be in Atlanta this September 26–27 at the Microsoft Data Science Summit—together with leading thinkers, researchers, and experts from across data science and machine learning. They’ll be sharing strategies, tools, tips, and breakthroughs with the data science community.

Build your skills, see what your peers are up to, and get hands on with the latest tech.

Data scientists of all stripes are integral to driving business forward, understanding customers, and helping organizations innovate. And right now, new tools are emerging quickly to help you move even faster. In Atlanta, you’ll get an insider’s look at the new ways that Microsoft and other businesses are applying these technologies, and get hands-on training in using them yourself. Questions about Cortana Intelligence Suite, SQL Server, or Microsoft R? This is the place to deepen your expertise. Set up a 1:1 meeting with the people who build and run these technologies to get answers that fit your particular scenario.

Data science is exploding in so many directions that there’s a breathtaking array of demos and real-world examples awaiting you as well. Come see what others are doing. Share ideas, puzzles, and successes with your peers and Microsoft presenters. And discover new ways to help your organization and expand your career.

But register soon! The Microsoft Data Science Summit is coming up fast.

Register now
Quelle: Azure

Amazon CloudWatch improves the usability of CloudWatch Logs Console

Amazon CloudWatch announces usability improvements to the CloudWatch Logs Console. The Logs Console now gives customers the ability to share the state of their log sessions with their teams to facilitate collaboration when troubleshooting issues. Sharing is done via human readable URLs that include timestamp and search parameters. In addition, customers can narrow down their log search to specific time frames with just one click. The Logs Console now supports infinite scrolling, allowing customers to navigate large volumes of log data without pagination. It also improves readability of log data through additional display formatting options.
To learn more about these usability improvements, click here. To learn more about CloudWatch Logs, visit the CloudWatch Logs product page.
Quelle: aws.amazon.com

Fitbit: Exhale with the Charge 2

Fitbit has unveiled the successor to its best-selling wearable, the Charge HR: the Charge 2 fitness tracker has a larger display and is designed to calm the wearer’s pulse with personalized breathing exercises. The company also announced the Flex 2, a tracker that is also suitable for swimming. (Fitbit, Mobil)
Quelle: Golem