Centralization instead of democratization: new hierarchies through the blockchain
Blockchain's reputation as a technological grassroots movement is crumbling. In fact, it may chiefly benefit large companies and organizations.
Quelle: Heise Tech News
Google is buying a smartphone manufacturer once again. This time it is HTC's turn: the smartphone division, together with all of the company's patents, goes to Google for 1.1 billion US dollars. (HTC, Google)
Quelle: Golem
Initial experiments for the planned successor to H.265 are delivering the hoped-for results, albeit at significantly increased complexity. The free codec AV1 is also positioning itself as a qualitatively equal competitor to H.265, as seen at the IBC broadcasting trade fair. (Audio/Video, Film)
Quelle: Golem
The Telekom Austria Group will introduce the A1 brand, used so far in Austria and Slovenia, in its other markets as well. 350 million euros in brand value will have to be written off.
Quelle: Heise Tech News
In terms of gameplay, the football simulation FIFA 18 treads water. That is easy to forgive, because the latest installment in the series has other things to offer: top-notch graphics, TV-style atmosphere, and the second part of the interactive story campaign The Journey.
Quelle: Heise Tech News
Advertising pops up all over the internet. How does online advertising work these days, what technology do advertisers use, and how are potential customers profiled? We discuss this in a new episode of the heiseshow.
Quelle: Heise Tech News
As part of Microsoft’s mission to enable more customers and organizations worldwide to achieve more, Azure IoT Hub is expanding to four countries across three continents, with availability now in Azure UK South, UK West, Canada Central, Canada East, India Central, India South, East US 2, and Central US. These new regions give you more options for implementing IoT solutions in geographic locations that work best for your mission, passions, creative aspirations, and business!
Azure IoT Hub is a fully-managed service that enables reliable and secure bidirectional communications between millions of IoT devices and a solution back end.
Azure IoT Hub provides you with:
Secure communications by using per-device security credentials and access control
Multiple device-to-cloud and cloud-to-device hyper-scale communication options
Queryable storage of per-device state information and metadata
Easy device connectivity with device libraries for the most popular languages and platforms
IoT Hub is the bridge between your devices and your solution in the cloud, allowing you to store, analyze, and act on device data in real time.
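Those per-device security credentials are commonly presented as shared access signature (SAS) tokens signed with a device key. The following is a minimal sketch of how such a token can be constructed; the hub name `my-hub.azure-devices.net` and device `my-device` are hypothetical, the key below is a made-up value, and real applications would normally let the Azure IoT device SDKs handle this.

```python
import base64
import hashlib
import hmac
import time
import urllib.parse

def generate_sas_token(resource_uri, device_key, ttl_seconds=3600):
    """Build a shared access signature of the general form
    'SharedAccessSignature sr=...&sig=...&se=...' used for
    per-device authentication."""
    expiry = int(time.time()) + ttl_seconds
    encoded_uri = urllib.parse.quote(resource_uri, safe="")
    # Sign the URL-encoded resource URI plus the expiry timestamp
    # with the base64-decoded device key (HMAC-SHA256).
    to_sign = f"{encoded_uri}\n{expiry}".encode("utf-8")
    key = base64.b64decode(device_key)
    signature = base64.b64encode(
        hmac.new(key, to_sign, hashlib.sha256).digest()
    ).decode("utf-8")
    return (f"SharedAccessSignature sr={encoded_uri}"
            f"&sig={urllib.parse.quote(signature, safe='')}&se={expiry}")

# Hypothetical names; the key is an arbitrary base64 string, not a credential.
token = generate_sas_token(
    "my-hub.azure-devices.net/devices/my-device",
    base64.b64encode(b"not-a-real-device-key").decode("utf-8"),
)
print(token)
```

A device presents such a token in place of its raw key, so the key itself never crosses the wire and the token expires on its own after `ttl_seconds`.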
To learn more, visit IoT Hub documentation.
Quelle: Azure
We’re excited and proud to announce that Microsoft Azure is the first hyper-scale cloud computing platform able to serve UK law enforcement IT customers. This announcement comes in the wake of the United Kingdom’s National Police Information Risk Management Team (NPIRMT) completing a comprehensive physical security review of a Microsoft UK Data Centre. This review is a necessary step in assuring UK law enforcement agencies that their information management systems will be hosted in Police Approved Secure Facilities (PASF).
As stated by the College of Policing’s Authorized Professional Practice (APP), “Policing is an information-led activity, and information assurance is fundamental to how the police service manages many of the challenges faced in policing today.” Azure is proud to be recognized in this way as we contribute to the information assurance tapestry needed to enable the UK law enforcement community.
The actual NPIRMT PASF assessment is available to policing customers from the Home Office for individual Police Services to review as part of their own approach to risk assessment in utilizing cloud services.
* Note that the NPIRMT does not offer any warranty of the physical security of the Microsoft data center.
Quelle: Azure
When it comes to the ongoing Trump/Russia investigations, the media — and readers — have made their interests clear: former Trump campaign manager Paul Manafort may be a crucial figure, but he's nowhere near as interesting as Don Jr. or Michael Flynn.
That is according to Facebook data collected by the social media data and optimization firm Social Flow, which tracked how many stories were written in 2017 about Trump's family members and associates in connection with Russia, across its network of over 300 major media companies. The company also monitored engagement across its network on Facebook, tracking average clicks per story for articles about each associate, as well as the aggregate Facebook reach of articles they're mentioned in. Taken together, the chart attempts to gauge reader interest in each player. (According to Social Flow, President Trump was left off the chart because his results “dwarf everything” and skew the comparison.)
Here's what we learned from the data:
(Chart: Social Flow)
With regard to Russia, Flynn is by far the most covered of anyone in Trumpland (besides the President). According to Social Flow, Flynn has been mentioned in at least one Russia-related article on 157 days in 2017 (out of the 260 counted in the data), meaning his name has rarely been out of the news.
According to the data, Russia stories involving former national security advisor Michael Flynn and Donald Trump Jr. are far and away more interesting to readers than other associates like Manafort or even Jared Kushner. Individually, Russia stories about Flynn and Trump Jr. have an aggregate reach on Facebook of nearly 300 million users since January 1, 2017. Meanwhile, Social Flow data on Russia-related stories about Kushner shows an aggregate Facebook reach of just over 125 million users — Manafort's trails far below that with an aggregate reach on Facebook of just 50 million.
The data also shows that many prominent Trump associates whose names have come up in media coverage of the campaign's potential involvement with Russia are not necessarily household names. According to Social Flow, Trump advisors Roger Stone, Carter Page, and Michael Cohen have appeared frequently in media coverage (Stone, for example, has been mentioned in at least one article about Russia on 98 of the 260 days in 2017), but none has attracted the kind of substantial reach across Facebook that Flynn, Don Jr., and Kushner have.
While stories about Don Jr. have vast reach across Facebook and a high number of average clicks across the social network, Russia-related stories mentioning Eric Trump have not captivated audiences. This can probably be chalked up to the coverage disparity between the two brothers (in 2017, Don Jr. has had at least one Russia-related article published about him on 76 days, compared to just 23 for Eric Trump) as well as the midsummer revelations of Don Jr.'s 2016 contacts with Russia.
(Chart: Social Flow)
Social Flow tracked the combined stories published each day across its network, which includes major publications like the New York Times, Washington Post, Wall Street Journal, BBC, Politico, and more. The data illustrates that, while coverage ebbs and flows, it never stops. Across Social Flow's network, coverage of Trump associates and Russia rarely dips below a combined 100 articles a day.
And since most of the coverage is driven by breaking news, Social Flow's chart also acts as a helpful map of the year according to Russia. The chart's spikes, for example, show some of the biggest stories of the year, including the Steele Dossier published by BuzzFeed News on January 10th, revelations on May 15th that President Trump had shared highly classified intelligence with Russian officials, and the July 10th and 11th stories about Don Jr.'s contacts with Russia.
All told, however, the data seems to confirm what many already take to be true: the Trump/Russia reporting is a massive, complex story that has captured the interest of hundreds of millions of readers. And it's one that doesn't appear to be going away.
Quelle: BuzzFeed
Editor’s note: today’s post is by Jeremy Eder, Senior Principal Software Engineer at Red Hat, on the formation of the Resource Management Working Group.

Why are we here?

Kubernetes has evolved to support diverse and increasingly complex classes of applications. We can onboard and scale out modern, cloud-native web applications based on microservices, batch jobs, and stateful applications with persistent storage requirements. However, there are still opportunities to improve Kubernetes; for example, the ability to run workloads that require specialized hardware or those that perform measurably better when hardware topology is taken into account. These gaps can make it difficult for application classes (particularly in established verticals) to adopt Kubernetes.

We see an unprecedented opportunity here, with a high cost if it’s missed. The Kubernetes ecosystem must create a consumable path forward to the next generation of system architectures by catering to the needs of as-yet unserviced workloads in meaningful ways. The Resource Management Working Group, along with other SIGs, must demonstrate the vision customers want to see, while enabling solutions to run well in a fully integrated, thoughtfully planned end-to-end stack.

Kubernetes Working Groups are created when a particular challenge requires cross-SIG collaboration. The Resource Management Working Group, for example, works primarily with sig-node and sig-scheduling to drive support for additional resource management capabilities in Kubernetes. We make sure that key contributors from across SIGs are frequently consulted, because working groups are not meant to make system-level decisions on behalf of any SIG. An example and key benefit of this is the working group’s relationship with sig-node: we were able to ensure completion of several releases of node reliability work (complete in 1.6) before contemplating feature design on top.
Those designs are use-case driven: research into technical requirements for a variety of workloads, then sorting based on measurable impact to the largest cross-section.

Target Workloads and Use-cases

One of the working group’s key design tenets is that user experience must remain clean and portable, while still surfacing the infrastructure capabilities that are required by businesses and applications. While not representing any commitment, we hope in the fullness of time that Kubernetes can optimally run financial services workloads, machine learning/training, grid schedulers, map-reduce, animation workloads, and more. As a use-case driven group, we account for potential application integration that can also facilitate an ecosystem of complementary independent software vendors to flourish on top of Kubernetes.

Why do this?

Kubernetes covers generic web hosting capabilities very well, so why go through the effort of expanding workload coverage for Kubernetes at all? The fact is that the workloads elegantly covered by Kubernetes today represent only a fraction of the world’s compute usage. We have a tremendous opportunity to safely and methodically expand the set of workloads that can run optimally on Kubernetes. To date, there’s demonstrable progress in the areas of expanded workload coverage:

Stateful applications such as ZooKeeper, etcd, MySQL, Cassandra, ElasticSearch
Jobs, such as timed events to process the day’s logs or any other batch processing
Machine learning and compute-bound workload acceleration through alpha GPU support

Collectively, the folks working on Kubernetes are hearing from their customers that we need to go further. Following the tremendous popularity of containers in 2014, industry rhetoric circled around a more modern, container-based, datacenter-level workload orchestrator as folks looked to plan their next architectures.
As a consequence, we began advocating for increasing the scope of workloads covered by Kubernetes, from overall concepts to specific features. Our aim is to put control and choice in users’ hands, helping them move with confidence towards whatever infrastructure strategy they choose. In this advocacy, we quickly found a large group of like-minded companies interested in broadening the types of workloads that Kubernetes can orchestrate. And thus the working group was born.

Genesis of the Resource Management Working Group

After extensive development/feature discussions during the Kubernetes Developer Summit 2016, following CloudNativeCon | KubeCon Seattle, we decided to formalize our loosely organized group. In January 2017, the Kubernetes Resource Management Working Group was formed. This group (led by Derek Carr from Red Hat and Vishnu Kannan from Google) was originally cast as a temporary initiative to provide guidance back to sig-node and sig-scheduling (primarily). However, due to the cross-cutting nature of the goals within the working group, and the depth of the roadmap quickly uncovered, the Resource Management Working Group became its own entity within the first few months.

Recently, Brian Grant from Google (@bgrant0607) posted the following image on his Twitter feed. The image helps to explain the role of each SIG, and shows where the Resource Management Working Group fits into the overall project organization.

To help bootstrap this effort, the Resource Management Working Group had its first face-to-face kickoff meeting in May 2017. Thanks to Google for hosting! Folks from Intel, NVIDIA, Google, IBM, Red Hat, and Microsoft (among others) participated. You can read the outcomes of that 3-day meeting here.
The group’s prioritized list of features for increasing workload coverage on Kubernetes, enumerated in the charter of the Resource Management Working Group, includes:

Support for performance-sensitive workloads (exclusive cores, CPU pinning strategies, NUMA)
Integrating new hardware devices (GPUs, FPGAs, Infiniband, etc.)
Improving resource isolation (local storage, hugepages, caches, etc.)
Improving Quality of Service (performance SLOs)
Performance benchmarking
APIs and extensions related to the features mentioned above

The discussions made it clear that there was tremendous overlap between the needs of various workloads, and that we ought to de-duplicate requirements and plumb generically.

Workload Characteristics

The set of initially targeted use-cases share one or more of the following characteristics:

Deterministic performance (addressing long tail latencies)
Isolation within a single node, as well as within groups of nodes sharing a control plane
Requirements on advanced hardware and/or software capabilities
Predictable, reproducible placement: applications need granular guarantees around placement

The Resource Management Working Group is spearheading the feature design and development in support of these workload requirements. Our goal is to provide best practices and patterns for these scenarios.

Initial Scope

In the months leading up to our recent face-to-face, we had discussed how to safely abstract resources in a way that retains portability and a clean user experience, while still meeting application requirements. The working group came away with a multi-release roadmap that included four short- to mid-term targets with great overlap between target workloads:

Device Manager (Plugin) proposal: Kubernetes should provide access to hardware devices such as NICs, GPUs, FPGAs, Infiniband, and so on.
CPU Manager: Kubernetes should provide a way for users to request static CPU assignment via the Guaranteed QoS tier. No support for NUMA in this phase.
HugePages support in Kubernetes: Kubernetes should provide a way for users to consume huge pages of any size.
Resource Class proposal: Kubernetes should implement an abstraction layer (analogous to StorageClasses) for devices other than CPU and memory that allows a user to consume a resource in a portable way. For example, how can a pod request a GPU that has a minimum amount of memory?

Getting Involved & Summary

Our charter document includes a Contact Us section with links to our mailing list, Slack channel, and Zoom meetings. Recordings of previous meetings are uploaded to YouTube. We plan to discuss these topics and more at the 2017 Kubernetes Developer Summit at CloudNativeCon | KubeCon in Austin. Please come and join one of our meetings (users, customers, software and hardware vendors are all welcome) and contribute to the working group!
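Several of these targets hinge on what a pod's resource spec declares: static CPU assignment, for instance, applies only to containers in the Guaranteed QoS tier, where every resource request exactly equals its limit. A minimal sketch of that invariant, using a plain Python dict in place of a real pod manifest; the `hugepages-2Mi` and `example.com/fpga` resource names here are illustrative stand-ins for the proposals above, not a committed API:

```python
def is_guaranteed_qos(container):
    """A container is in the Guaranteed tier only when every resource
    it requests has an identical limit, and cpu/memory are both set."""
    requests = container["resources"].get("requests", {})
    limits = container["resources"].get("limits", {})
    return bool(requests) and requests == limits and \
        {"cpu", "memory"} <= requests.keys()

# Sketch of a pod spec. Resource names beyond cpu/memory follow the
# extended-resource naming convention and are illustrative only.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pinned-workload"},
    "spec": {
        "containers": [{
            "name": "app",
            "image": "example/app:latest",
            "resources": {
                # An integer CPU count plus Guaranteed QoS is what would
                # make a container eligible for exclusive cores under a
                # static CPU assignment policy.
                "requests": {"cpu": "4", "memory": "8Gi",
                             "hugepages-2Mi": "1Gi",
                             "example.com/fpga": "1"},
                "limits": {"cpu": "4", "memory": "8Gi",
                           "hugepages-2Mi": "1Gi",
                           "example.com/fpga": "1"},
            },
        }],
    },
}

print(is_guaranteed_qos(pod["spec"]["containers"][0]))
```

Shrinking any request below its limit would drop the container out of the Guaranteed tier, which is why these performance features are gated on the requests-equal-limits shape rather than on a new API field.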
Quelle: kubernetes