Univision To Buy Gawker Media For $135 Million

Pool / Getty Images

Univision Communications will buy the bankrupt Gawker Media network for $135 million.

As first reported by Recode, Univision, known for its Spanish-language television network of the same name, won the auction to buy the distressed property, beating out a $90 million bid from Ziff Davis.

Gawker Media declared bankruptcy in June following a $115 million judgment against it in an invasion-of-privacy lawsuit brought by former professional wrestler Hulk Hogan and funded by billionaire tech investor Peter Thiel.

The banker who represented Gawker in its sale confirmed the price to Recode. Gawker Media owner and founder Nick Denton had previously estimated his empire to be valued at around $200 million.

The sale will include all seven of the sites under the Gawker Media banner, and will be subject to the approval of a US bankruptcy judge. If the sale goes through, the Gawker Media sites will join the news and entertainment site Fusion as Univision properties.

Quelle: BuzzFeed

Cloud SQL Second Generation performance and feature deep dive

Posted by Brett Hesterberg, Product Manager, Google Cloud Platform

Five years ago, we launched the First Generation of Google Cloud SQL and have helped thousands of companies build applications on top of it.

In that time, Google Cloud Platform’s innovations on Persistent Disk dramatically increased IOPS for Google Compute Engine, so we built Second Generation on Persistent Disk, allowing us to offer a far more performant MySQL solution at a fraction of the cost. Cloud SQL Second Generation now runs 7X faster and has 20X more storage capacity than its predecessor — with lower costs, higher scalability, automated backups that can restore your database from any point in time and 99.95% availability, anywhere in the world. This way you can focus on your application, not your IT solution.

Cloud SQL Second Generation performance gains are dramatic: up to 10TB of data, 20,000 IOPS, and 104GB of RAM per instance.

Cloud SQL Second Generation vs. the competition
So we know Cloud SQL Second Generation is a major advance from First Generation. But how does it compare with database services from Amazon Web Services?
Test: We used sysbench to simulate the same workload on three different services: Cloud SQL Second Generation, Amazon RDS for MySQL and Amazon Aurora.
Result: Cloud SQL Second Generation outperformed RDS for MySQL and performed better than Aurora when active thread count is low, as is typical for many web applications.

Cloud SQL sustains higher TPS (transactions per second) per thread than RDS for MySQL. It outperforms Aurora in configurations of up to 16 threads.
Details
The workload compares multi-zone (highly available) instances of Cloud SQL Second Generation, Amazon RDS for MySQL and Amazon Aurora running the latest offered MySQL version. The replication technology used by these three services differs significantly, and has a big impact on performance and latency. Cloud SQL Second Generation uses MySQL’s semi-synchronous replication, RDS for MySQL uses block-level synchronous replication and Aurora uses a proprietary replication technology.

To determine throughput, a Sysbench OLTP workload was generated from a MySQL client in the same zone as the primary database instance. The workload is a set of step load tests that double the number of threads (connections) with each run. The dataset used is five times larger than total memory of the database instance to ensure that reads go to disk.

Transaction per second (TPS) results show that Cloud SQL and Aurora are faster than RDS for MySQL. Cloud SQL’s TPS is higher than Aurora at up to 16 threads. At 32 threads, variance and the potential for replication lag increase, causing Aurora’s peak TPS to exceed Cloud SQL’s at higher thread counts. The workload illustrates the differences in replication technology between the three services. Aurora exhibits minimal performance variance and consistent replication lag. Cloud SQL emphasizes performance, allowing for replication lag, which can increase failover times, but without putting data at risk.
Latency
We measured average end-to-end latency with a single client thread (i.e., “pure” latency measurement).
The latency comparison changes as additional threads are added. Cloud SQL exhibits lower latency than RDS for MySQL across all tests. Compared to Aurora, Cloud SQL’s latency is lower until 32 or more threads are used to generate load.
Running the benchmark


We used the following environment configuration and sysbench parameters for our testing.

Test instances:

Google Cloud SQL v2, db-n1-highmem-16 (16 CPU, 104 GB RAM), MySQL 5.7.11, 1000 GB PD SSD + Failover Replica
Amazon RDS Multi-AZ, db.r3.4xlarge (16 CPU, 122 GB RAM), MySQL 5.7.11, 1000 GB SSD, 10k Provisioned IOPS + Multi-AZ Replica
Amazon RDS Aurora, db.r3.4xlarge (16 CPU, 122 GB RAM), MySQL 5.6 (newest) + Replica

Test overview:
Sysbench runs were 100 tables of 20M rows each, for a total of 2B rows. In order to ensure that the data set doesn’t fit in memory, it was set to a multiple of the ~100 GB memory per instance, allowing sufficient space for binary logs used for replication. With 100x20M rows, the data set size as loaded is ~500 GB. Each step run was 30 minutes with a one minute “cool down” period in between, producing one report line per second of the runtime.

Load the data:

Ubuntu setup
sudo apt-get update
sudo apt-get install git automake autoconf libtool make gcc libmysqlclient-dev mysql-client-5.6
git clone https://github.com/akopytov/sysbench.git
(apply patch)
./autogen.sh
./configure
make -j8

Test variables
export test_system=<test name>
export mysql_host=<mysql host>
export mysql_user=<mysql user>
export mysql_password=<mysql password>
export test_path=~/oltp_${test_system}_1
export test_name=01_baseline

Prepare test data
sysbench/sysbench \
  --mysql-host=${mysql_host} \
  --mysql-user=${mysql_user} \
  --mysql-password=${mysql_password} \
  --mysql-db="sbtest" \
  --test=sysbench/tests/db/parallel_prepare.lua \
  --oltp_tables_count=100 \
  --oltp-table-size=20000000 \
  --rand-init=on \
  --num-threads=16 \
  run

Run the benchmark:
mkdir -p ${test_path}
for threads in 1 2 4 8 16 32 64 128 256 512 1024
do
  sysbench/sysbench \
    --mysql-host=${mysql_host} \
    --mysql-user=${mysql_user} \
    --mysql-password=${mysql_password} \
    --mysql-db="sbtest" \
    --db-ps-mode=disable \
    --rand-init=on \
    --test=sysbench/tests/db/oltp.lua \
    --oltp-read-only=off \
    --oltp_tables_count=100 \
    --oltp-table-size=20000000 \
    --oltp-dist-type=uniform \
    --percentile=99 \
    --report-interval=1 \
    --max-requests=0 \
    --max-time=1800 \
    --num-threads=${threads} \
    run > ${test_path}/${test_name}_${threads}.out
done

Format the results:
Capture results in CSV format
grep "^\[" ${test_path}/${test_name}_*.out \
  | cut -d] -f2 \
  | sed -e 's/[a-z ]*://g' -e 's/ms//' -e 's/(99%)//' -e 's/[ ]//g' \
  > ${test_path}/${test_name}_all.csv

Plot the results in R
status <- NULL # or e.g. "[DRAFT]"
config <- "Amazon RDS (MySQL Multi-AZ, Aurora) vs. Google Cloud SQL Second Generation\nsysbench 0.5, 100 x 20M rows (2B rows total), 30 minutes per step"
steps <- c(1, 2, 4, 8, 16, 32, 64, 128, 256, 512)
time_per_step <- 1800
output_path <- "~/oltp_results/"
test_name <- "01_baseline"
results <- data.frame(
  stringsAsFactors = FALSE,
  row.names = c(
    "amazon_rds_multi_az",
    "amazon_rds_aurora",
    "google_cloud_sql"
  ),
  file = c(
    "~/amazon_rds_multi_az_1/01_baseline_all.csv",
    "~/amazon_rds_aurora_1/01_baseline_all.csv",
    "~/google_cloud_sql_1/01_baseline_all.csv"
  ),
  name = c(
    "Amazon RDS MySQL Multi-AZ",
    "Amazon RDS Aurora",
    "Google Cloud SQL 2nd Gen."
  ),
  color = c(
    "darkgreen",
    "red",
    "blue"
  )
)
results$data <- lapply(results$file, read.csv, header=FALSE, sep=",",
  col.names=c("threads", "tps", "reads", "writes", "latency", "errors", "reconnects"))

# TPS
pdf(paste(output_path, test_name, "_tps.pdf", sep=""), width=12, height=8)
plot(0, 0,
  pch=".", col="white", xaxt="n", ylim=c(0,2000), xlim=c(0,length(steps)),
  main=paste(status, "Transaction Rate by Concurrent Sysbench Threads", status, "\n\n"),
  xlab="Concurrent Sysbench Threads",
  ylab="Transaction Rate (tps)")
for(result in rownames(results)) {
  tps <- as.data.frame(results[result,]$data)$tps
  points(1:length(tps) / time_per_step, tps, pch=".", col=results[result,]$color, xaxt="n", new=FALSE)
}
title(main=paste("\n\n", config, sep=""), font.main=3, cex.main=0.7)
axis(1, 0:(length(steps)-1), steps)
legend("topleft", results$name, bg="white", col=results$color, pch=15, horiz=FALSE)
dev.off()

# Latency
pdf(paste(output_path, test_name, "_latency.pdf", sep=""), width=12, height=8)
plot(0, 0,
  pch=".", col="white", xaxt="n", ylim=c(0,2000), xlim=c(0,length(steps)),
  main=paste(status, "Latency by Concurrent Sysbench Threads", status, "\n\n"),
  xlab="Concurrent Sysbench Threads",
  ylab="Latency (ms)")
for(result in rownames(results)) {
  latency <- as.data.frame(results[result,]$data)$latency
  points(1:length(latency) / time_per_step, latency, pch=".", col=results[result,]$color, xaxt="n", new=FALSE)
}
title(main=paste("\n\n", config, sep=""), font.main=3, cex.main=0.7)
axis(1, 0:(length(steps)-1), steps)
legend("topleft", results$name, bg="white", col=results$color, pch=15, horiz=FALSE)
dev.off()

# TPS per Thread
pdf(paste(output_path, test_name, "_tps_per_thread.pdf", sep=""), width=12, height=8)
plot(0, 0,
  pch=".", col="white", xaxt="n", ylim=c(0,60), xlim=c(0,length(steps)),
  main=paste(status, "Transaction Rate per Thread by Concurrent Sysbench Threads", status, "\n\n"),
  xlab="Concurrent Sysbench Threads",
  ylab="Transactions per thread (tps/thread)")
for(result in rownames(results)) {
  tps <- as.data.frame(results[result,]$data)$tps
  threads <- as.data.frame(results[result,]$data)$threads
  points(1:length(tps) / time_per_step, tps / threads, pch=".", col=results[result,]$color, xaxt="n", new=FALSE)
}
title(main=paste("\n\n", config, sep=""), font.main=3, cex.main=0.7)
axis(1, 0:(length(steps)-1), steps)
legend("topleft", results$name, bg="white", col=results$color, pch=15, horiz=FALSE)
dev.off()

Cloud SQL Second Generation features
But performance is only half the story. We believe a fully managed service should be as convenient as it is powerful. So we added new features to help you easily store, protect and manage your data.

Store and protect data
Flexible backups: Schedule automatic daily backups or run them on demand. Backups are designed not to affect performance.
Precise recovery: Recover your instance to a specific point in time using point-in-time recovery.
Easy clones: Clone your instance so you can test changes on a copy before introducing them to your production environment. Clones are exact copies of your databases, but they’re completely independent of the source. Cloud SQL offers a streamlined cloning workflow.
Automatic storage increase: Enable automatic storage increase and Cloud SQL will add storage capacity whenever you approach your limit. (A brief command-line sketch of these features follows this list.)
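These operations are available from the Cloud Console and from the gcloud command line. The following is a minimal sketch only: the instance names are placeholders, and flag names may differ across gcloud releases, so check the current gcloud reference before running it.

# Take an on-demand backup of an instance (instance name is a placeholder).
gcloud sql backups create --instance=my-instance

# Clone the instance so changes can be tested against an independent copy.
gcloud sql instances clone my-instance my-instance-clone

# Opt the instance in to automatic storage increases.
gcloud sql instances patch my-instance --storage-auto-increase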

Connect and Manage
Open standards: We embrace the MySQL wire protocol, the standard connection protocol for MySQL databases, so you can access your database from nearly any application, running anywhere.
Secure connections: Our new Cloud SQL Proxy creates a local socket and uses OAuth to help establish a secure connection with your application or MySQL tool. This makes secure connections easier for both dynamic and static IP addresses. For dynamic IP addresses, such as a developer’s laptop, you can help secure connectivity using service accounts, rather than modifying your firewall settings. For static IP addresses, you no longer have to set up SSL.
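As an illustration of how the proxy fits together, here is a minimal sketch for a Linux client. The project, region and instance names are placeholders, and the download location and flags should be confirmed against the Cloud SQL Proxy documentation.

# Download and start the Cloud SQL Proxy, exposing local sockets under /cloudsql.
wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
chmod +x cloud_sql_proxy
sudo mkdir -p /cloudsql && sudo chmod 777 /cloudsql
./cloud_sql_proxy -dir=/cloudsql -instances=my-project:us-central1:my-instance &

# Connect with the standard mysql client over the local socket; no client-side
# SSL setup or firewall changes are required.
mysql -u root -p -S /cloudsql/my-project:us-central1:my-instance

The proxy authenticates to the instance with a service account’s credentials over OAuth, so the database endpoint is never exposed directly to the client network.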

We’re obviously very proud of Cloud SQL, but don’t just take our word for it. Here’s what a couple of customers have had to say about Cloud SQL Second Generation:

As a SaaS Company, we manage hundreds of instances for our customers. Cloud SQL is a major component of our stack and when we beta tested Cloud SQL, we were able to see fantastic performance over our large volume customers. We immediately migrated a few of our major customers as we saw 7x performance improvements of their queries.
– Rajesh Manickadas, Director of Engineering, Orangescape

As a mobile application company, data management is essential to delivering the best product for our clients. Google Cloud SQL enables us to manage databases that grow at rates such as 120-150 million data points every month. In fact, for one of our clients, a $6B Telecommunications Provider, their database adds ~15 GB of data every month. At peak time, we hit around 400 write operations/second and yet our API calls’ average return time is still under 73 ms.
– Andrea Michaud, Head of Client Services, www.TeamTracking.us
Next steps
What’s next for Cloud SQL? You can look forward to continued Persistent Disk performance improvements, added virtual networking enhancements and streamlined migration tools to help First Generation users upgrade to Second Generation.

Until then, we urge you to sign up for a $300 credit to try Cloud SQL and the rest of GCP. Start with inexpensive micro instances for testing and development. When you’re ready, you can easily scale them up to serve performance-intensive applications.

You can also take advantage of our partner ecosystem to help you get started. To streamline data transfer, reach out to Talend, Attunity, Dbvisit and xPlenty. For help with visualizing analytics data, try Tableau, Looker, YellowFin and Bime by Zendesk. If you need to manage and monitor databases, ScaleArc and WebYog are good bets, while Pythian and Percona are at the ready if you simply need extra support.

Tableau customers continue to adopt Cloud SQL at a growing rate as they experience the benefits of rapid-fire analytics in the cloud. With the significant performance improvements in Cloud SQL Second Generation, it’s likely that adoption will grow even faster.
– Dan Kogan, Director of Product Marketing & Technology Partners, Tableau

Looker is excited to support a Tier 1 integration for Google’s Cloud SQL Second Generation as it goes into General Availability. When you combine the Looker Data Platform’s in-database analytics approach with Cloud SQL’s fully managed database offering, customers get a real-time analytics and visualization environment in the cloud, enabling anyone in the organization to make data-driven decisions.
– Keenan Rice, VP Strategic Alliances, Looker

Migrating database applications to the cloud is a priority for many customers, and we facilitate that process with Attunity Replicate by simplifying migrations to Google Cloud SQL while enabling zero downtime. Cloud SQL Second Generation delivers even better performance, reliability and security, which are key for expanding deployments for enterprise customers. Customers can benefit from these enhanced abilities, and we look forward to working with them to help remove any data transfer hurdles.
– Itamar Ankorion, Chief Marketing Officer, Attunity
Things are really heating up for Cloud SQL, and we hope you’ll come along for the ride.

Quelle: Google Cloud Platform

Accelerate your insights with Application Insights Performance Counters

You can now monitor performance counters for Azure Web Apps using Visual Studio Application Insights. Until recently, Performance Counters such as CPU and network usage weren’t available on the Azure portal when monitoring your app. This is because Azure Web Apps don’t run on their own machines. Our new Aggregate Metrics package adds this telemetry, so you can now monitor your app’s use of resources as workload varies.

To get started, simply install the Aggregate Metrics prerelease NuGet package in your app; the package is available in the SDK Labs feed. You should then be able to view the performance charts below in Application Insights Metrics Explorer in the Azure portal.

Azure Web Apps let you host any web app, website or API in the Azure cloud while retaining capabilities such as Visual Studio integration, ease of deployment and agility. In a nutshell, Azure Web Apps are simple to deploy and run.

However, there are limitations when it comes to monitoring your Azure Web App’s performance while it runs in the cloud. Web pages and apps run in a sandbox environment that separates them from other apps running on the same machine. For developers, this sandbox is ultimately a tradeoff for the low monetary cost of using Azure Web Apps. The sandbox also prevents your app from accessing Performance Counters and using Performance Monitor. On your desktop, Performance Monitor provides a comprehensive selection of metrics with easy-to-interpret visualizations.

Until now, there were few metrics available for gaining insight into the performance of your Azure Web App, and those that were available offered nowhere near the comprehensiveness of Performance Monitor.

The Application Insights team saw the need for more complete feedback on web app performance, and we’re proud to announce that we’ve added a solution to Application Insights’ SDK Labs that collects Performance Counters and visualizes them in the Azure portal through Application Insights. The Aggregate Metrics solution contains several Performance Counters, such as memory in use and the percentage of CPU processor time being used. Currently, Performance Counters provide historical data, with Live Metrics Stream support planned for future development.

Custom Performance Counters

We have also implemented custom performance counters, such as thread and handle count, to provide even more detailed insights into app performance. These counters are specific and can be added independently to a project based on your app’s needs. At the moment there are few custom counters, as their availability is limited by what the Azure Web Apps team exposes. To get started with custom counters, you only need to adjust the ApplicationInsights.config file, just as you would for other performance counters.

Microsoft Intern Program

Performance Counters have been an ongoing project developed by two of Application Insights’ summer interns: myself, Mackenzie Frackleton, and Mateo Torres Ruiz. Click through to our GitHub profiles to see our latest developments.

Summary

Performance Counters are now available to provide metrics for Azure Web App telemetry. We always want to hear your feedback, so please visit the Application Insights SDK Labs repository to file issues or feature requests. The Application Insights team as a whole is committed to providing quality tools for developers, and any additional feedback or feature recommendations are welcome.
Quelle: Azure

Microsoft expands and renews international certifications in seven countries

Microsoft invests heavily not only in creating the most advanced functionality and highest-quality services possible, but also in ensuring security, compliance, privacy and transparency for our cloud services customers. Products like Azure Security Center and the Microsoft Transparency Hub, and activities such as our ongoing legal effort to protect privacy rights across the globe, show a holistic approach to trust and security that no other cloud service provider can match.

We continue to maintain the largest portfolio of cloud certifications. In the first half of 2016, we achieved four new international certifications as well as renewed and expanded other certifications in seven countries. Here is a quick recap of our international compliance activities:

New certifications

Japan: We achieved Cloud Security Mark Gold Level accreditation and announced our alignment to the My Number Act on protecting personal data in Japan. The Cloud Security Mark, administered by the Japan Information Security Audit Association (JASA), is the standard required by the government for cloud procurement.
Spain: Microsoft is the first global cloud service provider to achieve Spain’s Esquema Nacional de Seguridad certification, which attests to the effectiveness of the security controls we have implemented to protect customer data.
United Kingdom: We are also the first public cloud provider to gain the Federation Against Copyright Theft (FACT) certification. This accreditation demonstrates our compliance with established media-industry security best practices, including the Content Delivery and Security Association’s (CDSA) Content Protection and Security (CPS) Standard and the Motion Picture Association of America’s application and cloud security guidelines.

Expanded certifications

China: Microsoft Azure operated by 21Vianet upgraded our Multi-Level Protection Scheme (MLPS) classification from level 2 to level 3 and also added a new service to our Trusted Cloud Services certification.
Canada: We announced the alignment of our approach to protect customer data with recommendations from the Canadian privacy commission on related privacy laws. Based on the shared responsibility principle, customers that want to use cloud services should also go through self-assessment to ensure proper planning and adherence to the laws.
New Zealand: Our responses to New Zealand’s Cloud Computing Information Security and Privacy Considerations have been updated for new services in scope for the new question set.
Singapore: Microsoft’s Multi-Tier Cloud Security Singapore Standard:584 (MTCS SS:584-2015) certification has been upgraded to the 2015 version at Tier 3 with expanded scope of services. We also published a whitepaper in the context of Singapore compliance for Azure to help our customers address questions from MTCS and PDPA.
United Kingdom: Our UK G-Cloud has been expanded to cover all services that are in-scope for ISO 27001:2013 and updated to address the latest version of cloud security principles at OFFICIAL level.

As a potential or continuing customer, you can rest assured of our commitment to compliance and security, grounded in our dedication to meeting customer regulatory requirements. That dedication is evident in an industry-leading certification count and its international breadth. We are achieving compliance for our customers so they can use our cloud services to grow their missions and businesses while knowing their regulatory needs are being met.

For access to any of the certifications mentioned above or any other compliance certifications achieved by Microsoft Azure, visit our Service Trust Portal or Microsoft Trust Center.
Quelle: Azure

Kubernetes Namespaces: use cases and insights

“Who’s on first, What’s on second, I Don’t Know’s on third” – Who’s on First? by Abbott and Costello

Introduction
Kubernetes is a system with several concepts. Many of these concepts get manifested as “objects” in the RESTful API (often called “resources” or “kinds”). One of these concepts is Namespaces. In Kubernetes, Namespaces are the way to partition a single Kubernetes cluster into multiple virtual clusters. In this post we’ll highlight examples of how our customers are using Namespaces.

But first, a metaphor: Namespaces are like human family names. A family name, e.g. Wong, identifies a family unit. Within the Wong family, one of its members, e.g. Sam Wong, is readily identified as just “Sam” by the family. Outside of the family, and to avoid “Which Sam?” problems, Sam would usually be referred to as “Sam Wong”, perhaps even “Sam Wong from San Francisco”.

Namespaces are a logical partitioning capability that enables one Kubernetes cluster to be used by multiple users, teams of users, or a single user with multiple applications without concern for undesired interaction. Each user, team of users, or application may exist within its Namespace, isolated from every other user of the cluster and operating as if it were the sole user of the cluster. (Furthermore, Resource Quotas provide the ability to allocate a subset of a Kubernetes cluster’s resources to a Namespace.)

For all but the most trivial uses of Kubernetes, you will benefit by using Namespaces. In this post, we’ll cover the most common ways that we’ve seen Kubernetes users on Google Cloud Platform use Namespaces, but our list is not exhaustive and we’d be interested to learn other examples from you.

Use-cases covered
Roles and Responsibilities in an enterprise for namespaces
Partitioning landscapes: dev vs. test vs. prod
Customer partitioning for non-multi-tenant scenarios
When not to use namespaces

Use-case: Roles and Responsibilities in an Enterprise
A typical enterprise contains multiple business/technology entities that operate independently of each other with some form of overarching layer of controls managed by the enterprise itself. Operating Kubernetes clusters in such an environment can be done effectively when roles and responsibilities pertaining to Kubernetes are defined. Below are a few recommended roles and their responsibilities that can make managing Kubernetes clusters in a large-scale organization easier.

Designer/Architect role: This role will define the overall namespace strategy, taking into account product/location/team/cost-center and determining how best to map these to Kubernetes Namespaces. Investing in such a role prevents namespace proliferation and “snowflake” Namespaces.

Admin role: This role has admin access to all Kubernetes clusters. Admins can create/delete clusters and add/remove nodes to scale the clusters. This role will be responsible for patching, securing and maintaining the clusters, as well as implementing Quotas between the different entities in the organization. The Kubernetes Admin is responsible for implementing the namespace strategy defined by the Designer/Architect.

These two roles and the actual developers using the clusters will also receive support and feedback from the enterprise security and network teams on issues such as security isolation requirements and how namespaces fit this model, or assistance with networking subnets and load-balancer setup.

Anti-patterns
Isolated Kubernetes usage “islands” without centralized control: Without the initial investment in establishing a centralized control structure around Kubernetes management, there is a risk of ending up with a “mushroom farm” topology, i.e. no defined size/shape/structure of clusters within the org. The result is harder management, higher risk and elevated cost due to underutilization of resources.
Old-world IT controls choking usage and innovation: A common tendency is to try to transpose existing on-premises controls/procedures onto new dynamic frameworks. This weighs down the agile nature of these frameworks and nullifies the benefits of rapid dynamic deployments.
Omni-cluster: Delaying the effort of creating the structure/mechanism for namespace management can result in one large omni-cluster that is hard to peel back into smaller usage groups.

Use-case: Using Namespaces to partition development landscapes
Software development teams customarily partition their development pipelines into discrete units. These units take various forms and use various labels but will tend to result in a discrete dev environment, a testing|QA environment, possibly a staging environment and finally a production environment. The resulting layouts are ideally suited to Kubernetes Namespaces: each environment or stage in the pipeline becomes a unique namespace.

The above works well as each namespace can be templated and mirrored to the next subsequent environment in the dev cycle, e.g. dev->qa->prod. The fact that each namespace is logically discrete allows the development teams to work within an isolated “development” namespace. DevOps (the closest role at Google is called Site Reliability Engineering, “SRE”) will be responsible for migrating code through the pipelines and ensuring that appropriate teams are assigned to each environment. Ultimately, DevOps is solely responsible for the final, production environment where the solution is delivered to the end-users.

A major benefit of applying namespaces to the development cycle is that the naming of software components (e.g. micro-services/endpoints) can be maintained without collision across the different environments. This is due to the isolation of the Kubernetes namespaces: e.g. serviceX in dev would be referred to as such across all the other namespaces, but, if necessary, could be uniquely referenced using its fully qualified name serviceX.development.mycluster.com in the development namespace of mycluster.com.

Anti-patterns
Abusing the namespace benefit, resulting in unnecessary environments in the development pipeline. So, if you don’t do staging deployments, don’t create a “staging” namespace.
Overcrowding namespaces, e.g. having all your development projects in one huge “development” namespace. Since namespaces attempt to partition, use these to partition by your projects as well. Since Namespaces are flat, you may wish for something similar to projectA-dev and projectA-prod as projectA’s namespaces.

Use-case: Partitioning of your Customers
If you are, for example, a consulting company that wishes to manage separate applications for each of your customers, the partitioning provided by Namespaces aligns well. You could create a separate Namespace for each customer, customer project or customer business unit to keep these distinct while not needing to worry about reusing the same names for resources across projects.

An important consideration here is that Kubernetes does not currently provide a mechanism to enforce access controls across namespaces, so we recommend that you do not expose applications developed using this approach externally.

Anti-patterns
Multi-tenant applications don’t need the additional complexity of Kubernetes namespaces, since the application is already enforcing this partitioning.
Inconsistent mapping of customers to namespaces. For example, if you win business at a global corporation, you may initially consider one namespace for the enterprise, not taking into account that this customer may prefer further partitioning, e.g. BigCorp Accounting and BigCorp Engineering. In this case, the customer’s departments may each warrant a namespace.

When Not to use Namespaces
In some circumstances Kubernetes Namespaces will not provide the isolation that you need. This may be due to geographical, billing or security factors. For all the benefits of the logical partitioning of namespaces, there is currently no ability to enforce the partitioning. Any user or resource in a Kubernetes cluster may access any other resource in the cluster regardless of namespace. So, if you need to protect or isolate resources, the ultimate namespace is a separate Kubernetes cluster against which you may apply your regular security|ACL controls.

Another time when you may consider not using namespaces is when you wish to reflect a geographically distributed deployment. If you wish to deploy close to US, EU and Asia customers, a Kubernetes cluster deployed locally in each region is recommended.

When fine-grained billing is required, perhaps to charge back by cost-center or by customer, the recommendation is to leave the billing to your infrastructure provider. For example, in Google Cloud Platform (GCP), you could use a separate GCP Project or Billing Account and deploy a Kubernetes cluster to a specific customer’s project(s).

In situations where confidentiality or compliance require complete opaqueness between customers, a Kubernetes cluster per customer/workload will provide the desired level of isolation. Once again, you should delegate the partitioning of resources to your provider.

Work is underway to provide (a) ACLs on Kubernetes Namespaces to be able to enforce security, and (b) Kubernetes Cluster Federation. Both mechanisms will address the reasons for the separate Kubernetes clusters in these anti-patterns.

An easy-to-grasp anti-pattern for Kubernetes namespaces is versioning. You should not use Namespaces as a way to disambiguate versions of your Kubernetes resources. Support for versioning is present in containers and container registries as well as in the Kubernetes Deployment resource. Multiple versions should coexist by utilizing the Kubernetes container model, which also provides for automatic migration between versions with Deployments. Furthermore, version-scoped namespaces would cause massive proliferation of namespaces within a cluster, making them hard to manage.

Caveat Gubernator
You may wish to, but you cannot, create a hierarchy of namespaces. Namespaces cannot be nested within one another. You can’t, for example, create my-team.my-org as a namespace, but could perhaps have team-org.

Namespaces are easy to create and use, but it’s also easy to deploy code inadvertently into the wrong namespace. Good DevOps hygiene suggests documenting and automating processes where possible, and this will help. The other way to avoid using the wrong namespace is to set a kubectl context (a short sketch appears at the end of this post).

As mentioned previously, Kubernetes does not (currently) provide a mechanism to enforce security across Namespaces. You should only use Namespaces within trusted domains (e.g. internal use) and not use Namespaces when you need to be able to provide guarantees that a user of the Kubernetes cluster or one of its resources be unable to access any of the other Namespaces’ resources. This enhanced security functionality is being discussed in the Kubernetes Special Interest Group for Authentication and Authorization; get involved at SIG-Auth.

– Mike Altarace & Daz Wilkin, Strategic Customer Engineers, Google Cloud Platform

Download Kubernetes
Get involved with the Kubernetes project on GitHub
Post questions (or answer questions) on Stack Overflow
Connect with the community on Slack
Follow us on Twitter @Kubernetesio for latest updates
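For reference, here is a minimal sketch of the namespace and kubectl-context mechanics described above; the namespace, cluster, user and manifest names are illustrative only.

# Create one namespace per pipeline stage.
kubectl create namespace development
kubectl create namespace production

# Deploy the same manifest into a specific namespace.
kubectl --namespace=development apply -f serviceX.yaml

# Pin a kubectl context to a namespace, then switch to it, so you don't
# accidentally deploy into the wrong namespace.
kubectl config set-context dev --namespace=development --cluster=my-cluster --user=my-user
kubectl config use-context dev
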
Quelle: kubernetes

How This Decades-Old Technology Ushered In Predictive Text

Caroline O’Donovan / BuzzFeed News

What do you call a typewriter with no keys? Though it sounds like a riddle, Stanford historian Tom Mullaney told BuzzFeed News, it’s not. A Chinese typewriter has 2,500 characters but zero keys, and Mullaney owns more of them than anyone else in the world.

The Chinese language has over 75,000 characters — way too many to fit on a keyboard with a single key per character. The inventor of the original Chinese typewriter slimmed that number down to just the 2,500 most used characters, but even still, that’s too many to type out as you would on an English-alphabet typewriter.

On a Chinese typewriter, those 2,500 characters correspond to 2,500 tiny metal squares, which lie side by side in a tray bed. Instead of pressing individual keys with individual fingers, the typist uses a knob in each hand to move a scroll of paper up and down, left and right over the characters. When the machine is on top of the character you want, you press down on a lever. Then, Mullaney explained, the metal piece gets pushed (or sucked up) into the type chamber, inked and struck against the surface of the paper, before being ejected and spit back out into its spot in the grid.

Here’s what that process looks like:

[Embedded video: youtube.com]

That’s a lot of steps for just one character. Each stroke of the lever takes just a second, but the thing that slows Chinese typists down, and for years made them much slower than Western typists, is the distance between characters — especially because the typewriter’s characters were organized according to what Mullaney called “dictionary order.”

“The problem is, just like a dictionary, words don’t necessarily go together,” he said. “Aardvark and apple don’t show up in sentences together, despite being in the same part of the dictionary.”

So in the 50s, typists started dumping out all of the characters, and building their own custom tray beds from scratch, using tweezers to array characters based on how commonly they were used together.

A tray of metal characters that came with a Chinese typewriter in so-called “dictionary order.”

Caroline O’Donovan / BuzzFeed News

The thinking was, “If I have to write the name Mao Zedong, which is three characters, over and over, why would I move across the tray bed?” Mullaney said. “Let’s put them right next to each other.” The result was “completely personalized, completely individualized, completely idiosyncratic” tray beds.

This innovation made typing much faster, according to Mullaney. “If the top speeds in the 1930s and 40s was 20 characters per minute, and that was for a very fast typist, after the 1950s …. you had top speeds of 55, 60 characters per minute,” he said. “That’s a three time increase in the speed of the machine just by rearranging the characters in the tray bed.”

But he also sees it as an early, hand-crafted, analog version of the kind of predictive text we now find in Google autocompletes and advanced smartphone keyboards. “Predictive text of that sort was already baked into Chinese typewriters in the mechanical realm, and then gets brought into the realm of Chinese computing in the 60s and 70s.” Nowadays, he said, in China, pretty much every interaction a person has with a computer interface — from word processors to search bars — has predictive text built in.

Caroline O’Donovan / BuzzFeed news

These days, the QWERTY keyboard is standard in China — but instead of each key corresponding to a single letter in the alphabet, each key corresponds to a sound, and predictive text suggests a character that goes with those sounds. As a result, he said, after decades of relying on slower technologies, “the fastest Chinese computer inputter using a QWERTY keyboard input … is faster than the fastest alphabetic typist.”

Stanford historian Tom Mullaney explains how *not* to use a Chinese typewriter.

Caroline O’Donovan / BuzzFeed News

Both alphabetic and Chinese typewriters had an indelible impact on the way we communicate today, but only the Chinese typewriter has been almost entirely forgotten. Only a few institutions in the United States, including the Huntington Library and the Museum of Chinese in America, have Chinese typewriters, and three Chinese speakers I spoke with about this article weren’t aware they had ever existed.

So Mullaney, who amassed one of the largest collections in the world more or less by accident, is trying to “Save The Chinese Typewriter” starting with a (successful) Kickstarter campaign. His first Chinese typewriter, which is seafoam green and was the leading model in China throughout the 1970s, was given to him by a man who worked at a church in San Francisco. The typewriter had been used to print Chinese-language bulletins, but its services were no longer needed, and the man didn’t know what to do with it.

“If these were Western typewriters, you’d have the pick of the litter in terms of where to donate a machine like this or sell it on the antique market,” Mullaney told BuzzFeed News on a hot afternoon outside on Stanford’s campus, where he is a history professor. “There’s no such thing for East Asian information technology and Chinese typewriters.”

In the end, Mullaney’s Kickstarter campaign raised more than $13,491, and his traveling exhibition will — “outside of maybe the National Library of China and the National Diet Library in Japan” — feature more Chinese typewriters than any other collection in the world. The tour kicks off with an exhibition at the San Francisco Airport in 2017.


Quelle: BuzzFeed

Ford Plans To Put Self-Driving Cars On The Road By 2021

Beawiharta Beawiharta / Reuters

Ford announced a series of big investments in autonomous car technology on Tuesday, signaling that the old-school automaker is betting very seriously on self-driving vehicles as the future of personal transportation.

The company said it plans to put autonomous vehicles on the road in 2021 through ride-hailing fleets, and it will start selling them to individual drivers in the second half of that decade. To get there, Ford said it will double the Silicon Valley team it set up in Palo Alto a year and a half ago. Ford also announced Tuesday that it has invested $75 million into Velodyne, a company that makes light detection and ranging sensors, and it has acquired an Israeli machine-learning company called SAIPS that will help its autonomous vehicles learn about their environment.

The announcements come as many automakers and technology companies, from Tesla to Google, vie to be leaders in the self-driving vehicle race. Many companies have partnered in their efforts, for example, General Motors and Lyft. Ford would not say whether it will build its own ride-hailing platform or partner with established companies like Uber and Lyft. “There will be some things that we do on our own. There will be some things where we partner with others,” Ford CEO Mark Fields told BuzzFeed News. “We have a lot of options and business models that we’re working through.”

Ford opened its Palo Alto office in early 2015 and has a staff of more than 130 people there. The company says it’s working with more than 40 startups on new car technology. This year, Ford has also invested in Civil Maps, a mapping startup in Berkeley, California, as well as Pivotal, a cloud-based software development company in San Francisco, to boost its connected-car efforts. (Pivotal is working on Ford’s Dynamic Shuttle pilot program, a ride-hailing experiment it’s testing on its campus in Dearborn, Michigan. Employees use it to travel between buildings during the day.) And in March, Ford created a subsidiary called Ford Smart Mobility to “design, build, grow and invest in new mobility services.”

These investments and efforts show how the legacy car company is trying to match pace with newer automakers like Tesla, the many other upstarts that have cropped up, and ride-hailing companies such as Uber and Lyft — who are all also working on driverless cars. Tesla said last month that once it unveils autonomous vehicles, owners will be able to add them to a ride-hailing fleet and send them off to pick up fares, or request self-driving Tesla rides if they don’t want to drive.

Raj Nair, Ford’s chief technical officer, told BuzzFeed ahead of Tuesday’s announcement that personal ownership of Ford’s self-driving cars will become an option in the second half of the decade after the vehicles are introduced in 2021. By the time they’re available for sale, people will be used to them and they’ll be cheaper to make, he said.

“Certainly there will be some kind of market for personal ownership…as you significantly reduce the cost,” Nair said.

The old-school automaker regularly notes its history in manufacturing innovation when it talks about self-driving cars. “From our standpoint, our view is autonomous vehicles could have the same societal impact…as Ford’s moving assembly line did,” Fields said.

And one thing Ford does have compared to Tesla and other upstarts in the self-driving race is scale. When the company says it will put autonomous vehicles on the road, it means it will do it in droves. “We’re not going to be talking about a couple hundred units like some science project,” Fields said.

Quelle: BuzzFeed