Disaster management: Cell Broadcast expected to be operational by summer 2022
Of the 80,000 sirens that were still functional in the early 1990s, only around 15,000 remained available recently. (Flooding, Mobile Networks)
Source: Golem
Intel's NUCs are compact thanks to their Compute Modules. Beast Canyon will come with Intel Core i7 and i9 processors and swappable components. (Mini PC, Intel)
Source: Golem
Quelle: <a href="A Clearview AI Patent Application Describes Facial Recognition For Dating And Identifying People Who Are Unhoused Or Use Drugs“>BuzzFeed
Quelle: <a href="The NYPD Has Misled The Public About Its Use Of Facial Recognition Tool Clearview AI“>BuzzFeed
At MWC in Barcelona this year, we saw that the industry is ready to meet the needs of its customers, with outstanding achievements in radio access network (RAN) technology, artificial intelligence and machine learning (AI/ML), edge computing, and more. As we stand at this industry inflection point, Red Hat has prioritized a few things to help CSPs innovate, scale, optimize, and meet the challenges of today.
Source: CloudForms
Performance metric tracking with Performance Co-Pilot (PCP) and Grafana can be useful in almost any RHEL environment. However, the process to get it set up across a large number of hosts might seem daunting at first. This is why Red Hat introduced a Metrics System Role, which automates the configuration of performance metrics. I’ll show you how in this post.
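As a hedged preview of what that automation looks like, here is a minimal playbook sketch based on the metrics role's documented variables (metrics_graph_service, metrics_query_service, metrics_retention_days); the inventory group and values shown are illustrative assumptions, not a definitive setup:

```yaml
# Minimal sketch: configure PCP metric collection plus Grafana graphing on
# the targeted hosts. The host group and values are illustrative assumptions.
- name: Configure performance metrics with the metrics System Role
  hosts: metrics_hosts              # hypothetical inventory group
  roles:
    - role: rhel-system-roles.metrics
      vars:
        metrics_graph_service: yes     # install and wire up Grafana
        metrics_query_service: yes     # enable querying of archived metrics
        metrics_retention_days: 14     # keep two weeks of PCP archives
```

Assuming the rhel-system-roles package is installed on the control node, running this play with ansible-playbook should leave PCP collecting metrics and Grafana serving dashboards on the managed hosts.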
Source: CloudForms
Photos on your website can represent who you are, what you do, or what you love. We want your photos to look their best for your visitors, no matter what kind of device they are using.
Beautiful Photo Carousels, Now Better on Mobile
We’ve recently launched an upgraded photo carousel experience that takes photo viewing to the next level on mobile devices. Now your visitors can swipe, zoom, and double-tap with ease and get the best look at those beautiful snaps!
Video Overview
Here’s a short video overview of using the photo carousel on a mobile device and the upgrades we’ve introduced:
What’s New
Smooth, Hands-on, High Resolution Photo Viewing
Full-width photos expand to the display of your device
Swift, smooth pinch-to-zoom and double-tap-to-zoom through your photos
New, higher-resolution images that stay crisp and clear however far you zoom
A simpler, faster tap-to-close button to move back into the thumbnail gallery view
Faster, Clearer Navigation Between Photos
Responsive, fast swiping between your photos
Clear photo numbering to show where you are in your gallery
With 5 or fewer photos, a dot navigation system you can tap to jump to any photo in the sequence
Easy Access Metadata and Comments for Every Photo
One-tap metadata for every photo
Per-photo comment notifications, with a single tap to view comments
Getting Started
If you’ve previously used our image carousel feature, your site has been upgraded automatically so your visitors will get the new experience starting right now.
If you haven’t added a carousel to your site so far, now’s a great time to give it a try. Simply insert a gallery block using our editor and your photos will show using the carousel whenever your visitors click or tap an image.
Source: RedHat Stack
With security on everyone’s mind, we wanted to bring you this webinar, “Demystifying Cloud Security Compliance”, which Bryan Langston, Director of Architecture, and Jason James, Director of Security, presented earlier this year. Please enjoy the video or read the transcript below, and let us know how we can help! Cloud Security Compliance Presenters Bryan Langston … Continued
Source: Mirantis
Editor’s note: Today we are hearing from Jono MacDougall, Principal Software Engineer at Ravelin. Ravelin delivers market-leading online fraud detection and payment acceptance solutions for online retailers. To help us meet the scaling, throughput, and latency demands of our growing roster of large-scale clients, we migrated to Google Cloud and its suite of managed services, including Cloud Bigtable, the scalable NoSQL database for large workloads.

For a fraud detection company serving online retailers, each new client brings new data that must be kept secure and new financial transactions to analyze. This means our data infrastructure must be highly scalable and constantly maintain low latency. Our goal is to bring these new organizations on quickly without interrupting their business. We help our clients with checkout flows, so we need latencies that won’t interrupt that process, a critical concern in the booming online retail sector.

We like Cloud Bigtable because it can quickly and securely ingest and process a high volume of data. Our software accesses data in Bigtable every time it makes a fraud decision. When a client’s customer places an order, we need to process their full history and as much data as possible about that customer in order to detect fraud, all while keeping their data secure. Bigtable excels at accessing and processing that data in a short time window. With a customer key, we can quickly access data, bring it into our feature extraction process, and generate features for our models and rules. The data stays encrypted at rest in Bigtable, which keeps us and our customers safe.

Bigtable also lets us present customer profiles in our dashboard to our clients, so that if we make a fraud decision, our clients can confirm the fraud using the same data source we use.

Retailers can use Ravelin’s dashboard to understand fraud decisions

We have configured our Bigtable clusters to be accessible only within our private network, and we have restricted our pods’ access to them using targeted service accounts. This way, the majority of our code does not have access to Bigtable; only the parts that do the reading and writing have those privileges.

We also use Bigtable for debugging, logging, and tracing, because we have spare capacity and it’s a fast, convenient location. We also ran load testing against Bigtable: we started at a low rate of about 10 requests per second and peaked at about 167,000 mixed read and write requests per second. The only intervention needed to achieve this was pressing a single button to increase the number of nodes in the database; no other changes were made. In terms of real traffic to our production system, we have seen a peak of about 22,000 combined read/write requests per second on Bigtable in our live environment within the last six weeks.

Migrating seamlessly to Google Cloud

Like many startups, we started with Postgres, since it was easy and it was what we knew, but we quickly realized that scaling would be a challenge, and we didn’t want to manage enormous Postgres instances. We looked for a key-value store, because we weren’t doing crazy JOINs or complex WHERE clauses. We wanted to provide a customer ID and get everything we knew about it, and that’s where key-value really shines. I used Cassandra at a previous company, but we had to hire several people just for that chore. At Ravelin we wanted to move to managed services and save ourselves that headache.
We were already heavy users and fans of BigQuery, Google Cloud’s serverless, scalable data warehouse, and we also wanted to start using Kubernetes. This was five years ago, and though quite a few providers offer Kubernetes services now, we still see Google Cloud at the top of that stack with Google Kubernetes Engine (GKE). We also like Bigtable’s versioning capability, which helped with a use case involving upserts. All of these features helped us choose Bigtable.

Migrations can be intimidating, especially in retail, where downtime isn’t an option. We were migrating not just from Postgres to Bigtable, but also from AWS to Google Cloud. To prepare, we ran in AWS as always, but at the same time we set up a queue at our API level to mirror every request over to Google Cloud. We looked at those requests to see if any were failing, and confirmed that the results and response times were the same as in AWS. We did that for a month, fine-tuning along the way. Then we took the big step: we flipped a config flag and were 100% over to Google Cloud. At the same time, we flipped the queue over to AWS so that we could still send traffic into our legacy environment. That way, if anything went wrong, we could fail back without missing data. We ran like that for about a month, and we never had to fail back. In the end, we pulled off a seamless, issue-free online migration to Google Cloud.

Flexing Bigtable’s features

For our database structure, we originally had everything spread across rows, and we’d use a hash of a customer ID as a prefix. Then we could scan each record of history, such as orders or transactions. But eventually we got customers that were too big, where the scanning wasn’t fast enough. So we switched and put all of the customer data into one row and the history into columns. Then each cell was a different record, order, payment method, or transaction. Now we can quickly look up the one row and get all the necessary details of that customer. Some of our clients send us test customers who place an order, say, every minute, and that quickly becomes problematic if you pull out enormous amounts of data without any limits on your row size. The garbage collection feature makes it easy to clean up big customers.

We also use Bigtable replication to increase reliability, atomicity, and consistency. We need strong consistency guarantees within the context of a single request to our API, since we make multiple Bigtable requests within that scope. So within a request we always hit the same replica of Bigtable, and if we have a failure, we retry the whole request. That allows us to make use of the replicas and some of the consistency guarantees, a nice little trade-off where we can choose where we want our consistency to live.

We also use BigQuery with Bigtable for training on customer records or for queries with complicated WHERE clauses. We put the data in Bigtable, and also asynchronously in BigQuery using streaming inserts, which allows our data scientists to query it in every way you can imagine, build models, and investigate patterns without worrying about query-engine limitations. Since our Bigtable production cluster is completely separate, running a query on BigQuery has no impact on our response times. When we were on Postgres many years ago, it was used for both analysis and real-time traffic, and that was not optimal for us. We also use Elasticsearch to power text searches for our dashboard.

If you’re using Bigtable, we recommend three features:
Key visualizer. If we get latency or errors coming back from Bigtable, we look at the key visualizer first. We may have a hot key or a wide row, and the visualizer will alert us and provide the exact key range where the key lives, or the row in question. Then we can go in and fix it at that level. It’s useful to know how your data is hitting Bigtable, whether you’re using any anti-patterns, and whether your clients have changed their traffic pattern in a way that exacerbated some issue.

Garbage collection. We can prevent big-row issues by putting size limits in place with the garbage collection policies.

Cell versioning. Bigtable has a 3D array, with rows, columns, and cells, which hold all the different versions. You can make use of the versioning to get the history of a particular value or to build a time series within one row. Getting a single row is very fast in Bigtable, so as long as you can keep the data volume in check for that row, making use of cell versions is a very powerful and fast option. There are patterns in the docs that are quite useful and not immediately obvious. For example, one trick is to reverse your timestamps (MAXINT64 – now) so that instead of the latest version you get the oldest one, effectively reversing the cell-version sorting when you need it. (A short sketch of these features follows below.)

Google Cloud and Bigtable help us meet the low-latency demands of the growing online retail sector, with speed and easy integration with other Google Cloud services like BigQuery. With their managed services, we freed up time to focus on innovations and meet the needs of bigger and bigger customers. Learn more about Ravelin and Bigtable, and check out our recent blog, How BIG is Cloud Bigtable?

Related article: Cloud Bigtable brings database stability and performance to Precognitive. Using Google’s Cloud Bigtable database improved performance and cut latency and maintenance time for software developer Precognitive.
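To make those three recommendations concrete, here is a minimal, hedged Python sketch using the google-cloud-bigtable client library. This is not Ravelin’s actual code: the project, instance, table, and column-family names are illustrative assumptions, and embedding the reversed timestamp in the column qualifier is just one common way to apply the (MAXINT64 – now) trick.

```python
# Minimal sketch of the Bigtable features recommended above, using the
# google-cloud-bigtable client. All names and values are illustrative.
import datetime

from google.cloud import bigtable
from google.cloud.bigtable import column_family

MAX_INT64 = 2**63 - 1

client = bigtable.Client(project="my-project", admin=True)
instance = client.instance("my-instance")
table = instance.table("customers")

# Garbage collection: cap both the number of cell versions and their age,
# so oversized "test customer" rows cannot grow without bound.
gc_rule = column_family.GCRuleUnion(rules=[
    column_family.MaxVersionsGCRule(1000),
    column_family.MaxAgeGCRule(datetime.timedelta(days=90)),
])
table.create(column_families={"history": gc_rule})

# Single-row customer layout: each write to the same column qualifier adds
# a new cell version, so one fast row lookup returns the whole history.
row = table.direct_row(b"customer#1234")
row.set_cell("history", b"order", b'{"id": "o-1", "total": 42}')

# Reversed-timestamp trick (MAXINT64 - now), here embedded in the column
# qualifier so the oldest event sorts first when qualifiers are scanned.
now_micros = int(datetime.datetime.utcnow().timestamp() * 1_000_000)
reversed_qualifier = f"event#{MAX_INT64 - now_micros:020d}".encode()
row.set_cell("history", reversed_qualifier, b'{"type": "signup"}')
row.commit()

# One row read retrieves the full customer profile, cell versions included.
customer = table.read_row(b"customer#1234")
for cell in customer.cells["history"][b"order"]:
    print(cell.timestamp, cell.value)
```

The GC rule union here expresses "keep at most 1,000 versions, none older than 90 days", which is the kind of size limit the article credits with keeping big test customers in check.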
Source: Google Cloud Platform
Today, many applications in organizations’ data centers run on Windows Server. Modernizing these traditional Windows apps onto Kubernetes promises a host of benefits: a consistent platform across environments, better portability, scalability, availability, simplified management, and speed of deployment, to name a few. But how? Rewriting traditional .NET applications to run on Linux with .NET Core can be challenging and time-consuming. There is, however, a lower-toil, more developer-friendly option.

Last year, we announced support for Windows Server containers running on Google Kubernetes Engine (GKE), our cloud-based managed Kubernetes service, which lets you take advantage of containers without porting your apps to .NET Core or rewriting them for Linux. Today, we’re going a step further with support for Windows Server containers on Anthos clusters on VMware in your on-premises environment. Now available in preview, this support lets you consolidate all your Windows operations across on-prem and Google Cloud.

Bringing Windows Server support to our family of Kubernetes-based services (GKE running on Google Cloud, and Anthos everywhere) with the same experience lets you modernize apps faster and achieve a consistent development and deployment experience across hybrid and cloud environments. Further, by running Windows and Linux workloads side by side, you get operational consistency and efficiency; there is no need for multiple teams specializing in different tooling or platforms to manage different workloads. The single-pane-of-glass view and the ability to manage policies from a central control plane simplify the management experience, while bin-packing multiple Windows applications drives better resource utilization, leading to infrastructure and license savings.

Google Cloud Console provides a single-pane-of-glass view for managing your clusters in different environments

With all these benefits, it’s no surprise that customers such as Thales, a French multinational firm specializing in aerospace and security services, have been able to reap significant benefits by moving Windows applications to GKE. “We moved our Windows applications from VMs to Windows containers on GKE and now have a unified mechanism for Linux and Windows-based application management, scaling, logging, and monitoring. Earlier, setting up these applications in VMs and configuring them for high availability used to take up to a week, and the applications were not easily scalable,” said Najam Siddiqui, Solutions Architect at Thales. “Now with GKE, the setup takes only a few minutes. GKE’s automatic scaling and built-in resiliency features make scaling and high-availability setup seamless. Also, manually maintaining the VMs and applying security patches used to be tedious, which is now handled by GKE.”

Let’s take a deeper look at the architecture that lets you run your Windows container-based workloads on-prem.

Windows Server running on-prem with Anthos

The diagram below illustrates the high-level architecture of running Windows container-based workloads in an on-prem GKE cluster with Anthos. Windows Server node pools can be added to an existing or new Anthos cluster. Kubelet and kube-proxy run natively on Windows nodes, allowing you to run mixed Windows and Linux containers in the same cluster (a scheduling sketch follows at the end of this post).
The admin cluster and the user-cluster control plane continue to be Linux-based, providing a consistent orchestration experience and ease of management across Windows and Linux workloads.

Windows Server and Linux containers running side by side in the same Anthos on-prem cluster

Get started today

When considering modernizing your on-prem Windows estate, we recommend running Windows Server containers on Anthos in your own data center. If you are new to Anthos, the Anthos getting started page and the Coursera course on Architecting Hybrid Cloud with Anthos are good places to start. You can also find detailed documentation on our website, and our partners are eager to help you with any questions related to the published solutions, as is the GCP sales team. And as always, please don’t hesitate to reach out to us at anthos-onprem-windows@google.com if you have any feedback or need help unblocking your use case.

Related article: Windows Server containers on GKE now GA, with ecosystem support. Windows Server containers are now GA on Google Cloud.
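As a small illustration of that mixed-OS scheduling, here is a hedged sketch of a Deployment pinned to Windows Server nodes using the standard kubernetes.io/os node label; the workload name and container image are illustrative examples, not taken from the article:

```yaml
# Minimal sketch: run a Windows container only on the Windows Server nodes
# of a mixed Windows/Linux cluster. Name and image are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: iis-sample
spec:
  replicas: 2
  selector:
    matchLabels:
      app: iis-sample
  template:
    metadata:
      labels:
        app: iis-sample
    spec:
      nodeSelector:
        kubernetes.io/os: windows    # steer pods to Windows Server node pools
      containers:
        - name: iis
          image: mcr.microsoft.com/windows/servercore/iis   # sample Windows image
```

Linux workloads in the same cluster simply omit the selector (or use kubernetes.io/os: linux), which is what lets both kinds of containers coexist under one Linux-based control plane.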
Source: Google Cloud Platform