OldOS: 18-year-old rebuilds iOS 4
A developer has recreated Apple's old operating system iOS 4. The rebuild was written with SwiftUI, Apple's declarative UI framework. (iOS, iPhone)
Source: Golem
To rapidly expand its own manufacturing capacity, Intel is reportedly planning to acquire Globalfoundries, the world's third-largest contract chipmaker. (Intel, TSMC)
Source: Golem
The Elgato Facecam is a 1080p webcam that delivers up to 60 frames per second. Its Sony Starvis CMOS sensor is said to produce low-noise images. (Elgato, Webcam)
Source: Golem
WordPress.com experts invite you to join our community-driven courses to get your blog or podcast started, launched, or taken to the next level. Join today for on-demand content, access to a course community of peers, weekly office hours with our dedicated experts, and virtual meet-ups to connect with other learners like you. Start learning here.
Overheard on our Blogging for Beginners Course Community: “Taking this course has pushed me to think beyond my comfort zone and take a step back in my journey to become a better blogger and focus on what I truly want to write about.”
Join the WordPress.com Course Communities here.
Source: RedHat Stack
From retail companies to auto manufacturers and financial services institutions, organizations across Europe rely on our cloud services to run their businesses. We are committed to helping our customers meet stringent data protection requirements by offering industry-leading technical controls, contractual commitments, and continued transparency to support their risk assessments and compliance needs.

On June 21, 2021, the European Data Protection Board (EDPB) published its final Recommendations on supplementary measures following the Court of Justice of the European Union's ruling, which invalidated the EU-US Privacy Shield Framework and upheld the validity of the EU Standard Contractual Clauses (SCCs). The EDPB's guidance is important to help organizations address international data transfers, and many of the Board's recommendations align with our long-standing practices.

In light of the above, we want to reaffirm our commitment to GDPR compliance and to helping Google Cloud customers meet their compliance objectives when using our services. In particular:

A customer-controlled cloud

Our customers own their data, and we believe they should have the strongest levels of control over data stored in the cloud. Our public cloud provides customers with world-class levels of visibility and control over their data through our services. With the capabilities we offer, Google Cloud Platform customers can store data in the European region, ensure customer data is not moved outside of Europe, and prevent users and administrators outside of Europe from accessing their data. They can exercise control over who accesses their data by managing their own encryption keys, ensuring the keys are stored in a European region, and storing them outside Google Cloud's infrastructure.
Customers can also require detailed justification and approval each time a key is requested to decrypt data using External Key Manager, and deny Google the ability to decrypt their data for any reason using Key Access Justifications, which is now in General Availability. You can learn more by reading our blog on advancing control and visibility in the cloud. For insight into what this commitment means to customers from a technical perspective, please see our post on options for data residency, operational transparency and control. Google Cloud was the first and is currently the only cloud provider to offer customers the ability to store and manage encryption keys for cloud-resident data outside the provider's infrastructure, with programmatic control over decryption based on specific justifications, including government access requests.

Our Google Workspace (formerly G Suite) customers can opt to store their covered data in Europe. Additionally, we're taking encryption a step further in Workspace by giving customers direct control of encryption keys and the identity service they choose to access those keys. With Client-side encryption, customer data is indecipherable to Google, while users can continue to take advantage of Google's native web-based collaboration, access content on mobile devices, and share encrypted files externally. This capability is currently available in Public Beta for Google Drive, Docs, Sheets, and Slides, with plans to extend it to other Workspace services. Customers can also benefit from third-party solutions that offer end-to-end encryption for Gmail. With these solutions, customers can keep keys in their preferred geo-location and manage access to covered content.

Google Cloud will continue to invest in capabilities that ensure our customers control the location of and access to their data.

New Standard Contractual Clauses

The European Commission has published new Standard Contractual Clauses to help safeguard European personal data.
Google Cloud plans to implement the new SCCs to help protect our customers' data and meet the requirements of European privacy legislation. Like the previous SCCs, these clauses can be used to facilitate lawful transfers of data.

Transparency to help your risk-based assessment

The EDPB's recommendations introduce a risk-based approach under which data exporters should assess the level of risk to fundamental rights that a certain transfer would entail in practice. Our Transparency Report discloses the number of requests made by law enforcement agencies and government bodies for Enterprise Cloud customer information. The historical numbers show that the number of Enterprise Cloud-related requests is extremely low compared to our Enterprise Cloud customer base. For example, our report shows that we didn't produce any Google Cloud Platform Enterprise customer data in response to government requests during the last reporting period. The likelihood of Enterprise Cloud customer information being affected by these types of requests is therefore low.

We also work hard to help our customers conduct a meaningful assessment by giving a clear and detailed understanding of our process for responding to government requests for Cloud customer data in the rare cases where they do happen.

Accountability

We are always looking at ways to increase our accountability and compliance support for our customers. Recently we announced our adherence to the EU GDPR Code of Conduct. Codes of conduct are effective collaboration instruments among industry players and data protection authorities, through which state-of-the-art industry practices can be tailored to meet stringent data protection requirements. We believe that this Code provides a robust basis on which to build an international data transfer tool for cloud services, and we will continue to support industry efforts in this regard.
We also continue to follow and be certified against internationally recognized privacy and security standards such as ISO/IEC 27001, ISO/IEC 27017, ISO/IEC 27018, and ISO/IEC 27701. Certifications provide independent validation of our ongoing dedication to world-class security and privacy.

Strong policy advocacy

We will continue to advocate for the principles we believe should guide access requests by government authorities for enterprise data anywhere in the world. Government engagement on a bilateral and multilateral level is critical for modernizing laws and establishing rules for the production of electronic evidence across borders in a manner that respects international norms and resolves any potential conflicts of law. Google has long supported these efforts, including work to find a successor to the EU-US Privacy Shield to restore legal certainty around trans-Atlantic personal data flows, and to develop common global principles on government access to data at the Organisation for Economic Co-operation and Development (OECD) level. We will continue to support these efforts while protecting the privacy and security of our customers.

Millions of organisations with users in Europe rely on our cloud services to run their businesses every day, and we remain steadfastly committed to helping them meet their regulatory requirements by maintaining a diverse set of compliance tools in light of the EDPB's recommendations.
Source: Google Cloud Platform
Traditionally, dedicated game servers for real-time multiplayer games have used bespoke UDP protocols for communication and synchronization of gameplay among the players within a game. This communication is most often bundled into monolithic game servers and clients, pairing the technical functionality of communication protocols, such as custom network physics synchronisation, security, access control, telemetry and metrics, with the extremely high computational requirements of physics simulations, AI computation and more.

Developed in collaboration with Embark Studios, Quilkin is a UDP proxy tailor-made for high-performance real-time multiplayer games. Its aim is twofold:

- Pull common functionality, such as security, access control, telemetry and metrics, out of monolithic dedicated game servers and clients.
- Provide this common functionality in a composable and configurable way, so that it can be reused across a wide set of multiplayer games.

This reusable foundation allows game developers to spend more of their time building the game-specific aspects of their multiplayer communication protocols, rather than these common ones.

Challenges with multiplayer game server communication

In fast-paced multiplayer games, the full simulation of a session of gameplay generally occurs within the memory of a monolithic dedicated game server, whose responsibility covers everything from network physics and AI simulation to communications from client back to server and more. Since the entire state of the game is memory-resident, each client connects directly to the dedicated game server the player is playing on, which presents several challenges:

- Each dedicated game server is a single point of failure. If it goes down, the whole game session (or sometimes multiple sessions) fails. This makes it a target for malicious actors.
- The IP and port of the game server are public and exposed to the game client, making it easy to discover and target.
- Multiple aspects of game server simulation and network communication are tightly coupled in the same process, making reuse and modularity more difficult and expanding the risk of performance issues.

If we look at both web and mobile technologies over the past several years, some of these challenges start to look very familiar. Thankfully, one of the solutions that helped drive server workloads toward more redundant and distributed orchestration is the traffic proxy!

By placing a proxy for multiplayer UDP traffic in front of our dedicated game servers, within a low-latency network such as the one available on Google Cloud, we can address these key challenges as follows:

- Greater reliability. Proxies provide redundant points of communication entry. UDP packets can be sent to any number of proxies and routed to the dedicated game server. While a dedicated game server will still generally be a single point of failure, proxies improve redundancy and potential failover at the communication layer.
- Greater security. The IP and port of the dedicated game server are no longer public. Game clients may only have visibility into a subset of the proxy pool, limiting the potential attack surface.
- Greater scalability. We start to break apart the single process, as we can move aspects of the communication protocol, metrics, communication security and access control into the proxy. This takes the non-game-specific computation out of your game server's processing loop.

As a result, the entire system is more resilient, as proxies can be scaled independently, not only for performance reasons but also to distribute load in case of malicious actors.

Introducing Quilkin: the UDP proxy for game servers

Embark Studios and Google Cloud came together and built Quilkin to provide a standard, open source solution.
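To build intuition for the proxy pattern described above, here is a toy, blocking UDP forwarder in Python. This is a minimal sketch for illustration only, not how Quilkin is implemented (Quilkin is a production-grade Rust proxy with filters, metrics, and access control):

```python
import socket
import threading

def run_udp_proxy(listen_addr, server_addr):
    """Listen for game clients on listen_addr and relay their UDP traffic
    to the game server at server_addr, routing replies back per client."""
    front = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    front.bind(listen_addr)
    sessions = {}  # client address -> dedicated upstream socket

    def pump_replies(client, upstream):
        # Copy replies from the game server back to the originating client.
        while True:
            data, _ = upstream.recvfrom(4096)
            front.sendto(data, client)

    def pump_requests():
        while True:
            data, client = front.recvfrom(4096)
            if client not in sessions:
                # One upstream socket per client, so the server's replies
                # can be matched back to the right client by source port.
                upstream = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
                upstream.bind(("0.0.0.0", 0))
                sessions[client] = upstream
                threading.Thread(
                    target=pump_replies, args=(client, upstream), daemon=True
                ).start()
            sessions[client].sendto(data, server_addr)

    threading.Thread(target=pump_requests, daemon=True).start()
    return front  # the socket game clients should talk to
```

The game server's real address never reaches the client; only the proxy's listen address is exposed, which is the security property discussed above.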
Based out of Stockholm, Embark Studios is a (relatively) new studio made up of seasoned industry veterans. Given their team's experience with large-scale real-time multiplayer games, they were the perfect partner to create Quilkin with.

Quilkin is an open source, non-transparent UDP proxy specifically designed for use with large-scale multiplayer dedicated game server deployments, providing security, access control, telemetry data, metrics and more. It is designed to be used behind game clients as well as in front of dedicated game servers, and offers the following major benefits:

- Obfuscation. Non-transparent proxying of UDP data, making the internal state of your game architecture less visible to bad actors.
- Out-of-the-box metrics. For UDP packet traffic and communication.
- Visibility. A composable set of processing filters that can be applied for routing, access control, rate limiting, and more.
- Flexibility. The ability to be used as a standalone binary, with no client/server changes required, or as a Rust library, depending on how deep an integration you want and/or which custom processing Filters you wish to build.
- Compatibility. Can be integrated with existing C/C++ code bases via Rust FFI, if required.
- Onboarding. Multiple integration patterns, allowing you to choose the level of integration that makes sense for your architecture and existing platform.

"Until now, these sorts of capabilities were only available to large game studios with the resources to build their own proprietary technology. We think leveling the playing field for everyone in the games industry is an important and worthy endeavor. That's why we collaborated with Google Cloud and initiated this project together. At Embark, we believe open source is the future of the games industry and that open, cross-company collaboration is the way forward, so that all studios, regardless of size, are able to achieve the same level of technical capabilities." —Luna Duclos, Tech Lead, Embark Studios

"Google Cloud is excited to announce Quilkin as the latest entry in our portfolio of open-source solutions for gaming. Quilkin complements our existing OSS solutions, including Agones for game servers, Open Match for matchmaking, and Open Saves for persistence. These are designed to work together as an open and integrated ecosystem for gaming. We're proud to include Embark Studios as our latest open source collaborator for gaming, along with Ubisoft, Unity, and 2K Games. Google Cloud will continue to work closely with our partners in industry and the community to offer planet-scale solutions to power the world's largest games." —Rob Martin, Chief Architect, Google Cloud for Games

Getting started with Quilkin

While Quilkin can support more advanced deployment scenarios like the ones above, the easiest way to get started is to deploy it as a sidecar to your existing dedicated game server. This may initially limit some of the benefits, but it's an easy path to getting metrics and telemetry data about your UDP communication, with a very low barrier to entry and the ability to expand over time. While Quilkin is released as both binaries and container images, and is not tied to any specific hosting platform, we'll use Agones and Google Cloud Game Servers as our game server hosting platform for this example.

First, we create a ConfigMap to store a static YAML configuration for Quilkin that accepts connections on port 26001 and routes them to the Xonotic (an open source multiplayer FPS) dedicated game server on port 26000. Second, we take the example container that Agones provides for the Xonotic dedicated game server and run Quilkin alongside each dedicated game server as a sidecar, in an Agones Fleet of game servers. Once applied, when we query the cluster for the running GameServers, everything looks the same as it would without Quilkin!
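For reference, the two manifests described in the steps above might look roughly like the following. This is a sketch only: the exact Quilkin configuration schema and the container image paths shown here are assumptions and may differ from the 0.1.0 release, so consult the Quilkin and Agones documentation for the authoritative versions:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: quilkin-config
data:
  quilkin.yaml: |
    version: v1alpha1
    proxy:
      port: 26001                     # port Quilkin listens on for clients
    static:
      endpoints:
        - name: xonotic
          address: 127.0.0.1:26000    # the co-located game server container
---
apiVersion: agones.dev/v1
kind: Fleet
metadata:
  name: xonotic
spec:
  replicas: 2
  template:
    spec:
      ports:
        - name: default
          containerPort: 26001        # expose the proxy, not the game server
      template:
        spec:
          containers:
            - name: xonotic
              image: gcr.io/agones-images/xonotic-example:0.8   # assumed tag
            - name: quilkin
              image: quilkin:0.1.0                              # assumed image
              volumeMounts:
                - name: config
                  mountPath: /etc/quilkin
          volumes:
            - name: config
              configMap:
                name: quilkin-config
```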
Nothing else in our system needs to be aware that the traffic is being intercepted, and we can freely take advantage of Quilkin's functionality without adjusting either client or server code. If this has piqued your interest, have a look at the walkthrough, where we step through this same scenario and then extend it to compress UDP packets between game client and server, without having to change either program. This just scratches the surface, however: there's even more to Quilkin, including an xDS-compliant admin API, a variety of existing Filters to manipulate and route UDP packets, and more.

What's next for Quilkin

Quilkin is still in its early stages with this 0.1.0 alpha release, but we're very happy with the foundation that has been laid. There are a variety of features on the roadmap, from enhanced metrics and telemetry to new filters and filter types, and more. If you would like to try out this release, you can grab the binaries or container images from our releases page, step through our quickstarts, and review the different integration options with your dedicated game servers.

To get involved with the project:

- Check out our Github repository
- Join our Discord community
- Join the quilkin-discuss mailing list
- Follow us on Twitter

Embark Studios has also released their own announcement blog post, going deeper into the plans they have for their own production game backend infrastructure and where Quilkin fits in. Thanks to everyone who has been involved with this project across Google Cloud and Embark Studios, and we look forward to the future of Quilkin!
Source: Google Cloud Platform
Being alerted to an issue with your application before your customers experience undue interruption is a goal of every development and operations team. While methods for identifying problems exist in many forms, including uptime checks and application tracing, alerting on logs is a prominent method of issue detection. Previously, Cloud Logging only supported alerts on error logs and log-based metrics, but that was not robust enough for most application teams.

Today, we're happy to announce the preview of log-based alerts, a new feature that opens alerts to all log types, adds new notification channels, and helps you make alerts more actionable within minutes. The updates include:

- the ability to set alerts on any log type and content,
- additional notification channels such as SMS, email groups, webhooks (and more!), and
- a metadata field for alerts, so playbooks and documentation can be included.

Alert on any logs data

While error logs and log-based metrics are sufficient for many indicators of application and system health, there are some events, such as suspicious IP-address activity in security or host errors in runtime systems, where you want to be alerted immediately. We're happy to announce that you can now set alerts on single log entries via the UI or API.

Creating an alert in the UI is easy:

1. Go to Logs Explorer and run your query. Under Actions, select Create Log Alert.
2. Enter the following information: a) the alert name and documentation, b) any edits to your log query, if necessary (preview the results to confirm the query is correct), c) the minimum interval between alerts for this policy, and d) the notification channel(s).
3. Click "Save" and you're done!

For more information on configuring a log-based alert, visit the documentation page.

Creating a log-based alert in the Google Cloud Console

New notification channels

Cloud Logging is pre-integrated with Google Cloud services and can be configured to send alerts when something goes wrong.
While email notifications from Cloud Logging were effective during business hours, operations teams and their development cohorts expressed a need for more communication channels for their global extended-workforce partners and after-hours triage units. That's why we're excited to announce, as part of this preview, that logging alerts of any kind can be sent to an email group, SMS, mobile push notifications, webhooks, Pub/Sub, and Slack.

Enhanced metadata for alerts

Alerts are just the first step toward actually solving an issue within your service or application. Development and operations teams usually have a playbook or documentation for the incidents or occurrences they want to create an alert for. Including links to these materials can save valuable time, especially as workforces involve more geographic distribution and collaboration between a greater number of teams. With this preview announcement, you can now include documentation or links to playbooks that allow your team to investigate and resolve alerts.

Overview of the fields that are configured as part of log-based alerts

Configure your logging alerts today

If you have a critical log field that your team is watching, consider setting up an alert on it today. See the documentation that walks you through each step of configuring an alert. If you'd like to be alerted after a certain count of log entries, consider a log-based metric, which allows you to set a threshold for the number of log events that occur within a specific time period before you are notified. If you have suggestions or feedback, please join our Cloud Operations group on the Google Cloud Community site.
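As a concrete illustration of the pieces described above (query, documentation, interval, and channels), a log-based alert corresponds to a Monitoring alert policy with a log-match condition. A sketch of such a policy follows; the filter, documentation text, and channel name are placeholder values, not output from a real project:

```json
{
  "displayName": "Host error on GCE instance",
  "documentation": {
    "content": "Runbook: check the affected VM, then migrate workloads if needed.",
    "mimeType": "text/markdown"
  },
  "combiner": "OR",
  "conditions": [
    {
      "displayName": "Host error log entry matched",
      "conditionMatchedLog": {
        "filter": "resource.type=\"gce_instance\" AND severity>=ERROR AND textPayload:\"host error\""
      }
    }
  ],
  "alertStrategy": {
    "notificationRateLimit": { "period": "300s" }
  },
  "notificationChannels": [
    "projects/PROJECT_ID/notificationChannels/CHANNEL_ID"
  ]
}
```

The `notificationRateLimit` field plays the role of the "minimum interval between alerts" chosen in the UI, and `documentation.content` is where the playbook link goes.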
Source: Google Cloud Platform
TL;DR: You can visualize your billing data with Looker to gain insights into your spending over time! Use the Google Cloud Cost Management block to start analyzing quickly.

Along with the growth in the power and flexibility of the cloud, there's an increasing need for better visibility into your cloud spend. If you're just getting started or aren't running much in the cloud, the billing reports are a great place to start and can help you see what you're paying for. However, as your usage of the cloud grows, you may need even more detail on where that spend came from. As a best practice, we always recommend enabling the export of your billing data to BigQuery. This should be your first step when creating a new billing account, because the export cannot backfill data from before it was enabled, and you'll probably want all the data you can get!

That's a lot of data!

The standard billing export includes both a cost table, where you can see cost and usage across services, and a pricing table that can be used to analyze prices, discounts and services. Besides the standard billing export, you can also use the preview feature to export insights and recommendations data into BigQuery. Recommender is a service that provides per-product or per-service recommendations, generated based on heuristic methods, machine learning, and current resource usage. With these exports enabled, you can use BigQuery to analyze spend details down to the hour, but maybe you're not the kind of person who enjoys writing queries or puzzling through hundreds of thousands of rows of data (or maybe you are, no judgement). So where do you go from here? Enter visualization!

Look, it's Looker!

Looker is a business intelligence/big data analytics platform that helps you explore and analyze your data, and we've just launched the Google Cloud Billing block to help you get started on your visualization journey.
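Even before bringing in visualization, the BigQuery export described above can answer spend questions directly. For example, a query along these lines (the project, dataset, and billing-account suffix in the table name are placeholders for your own export table) breaks down net monthly cost by service, with credits such as discounts applied:

```sql
SELECT
  service.description AS service,
  FORMAT_DATE('%Y-%m', DATE(usage_start_time)) AS month,
  -- net cost = list cost plus (negative) credit amounts
  SUM(cost)
    + SUM(IFNULL((SELECT SUM(c.amount) FROM UNNEST(credits) c), 0)) AS net_cost
FROM `my-project.my_dataset.gcp_billing_export_v1_XXXXXX_XXXXXX_XXXXXX`
GROUP BY service, month
ORDER BY month, net_cost DESC;
```

The `credits` column is a repeated record in the export schema, which is why it is unnested inside the aggregate rather than joined.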
Using the Looker Marketplace, you can easily install any number of Looker Blocks and other content to get a running start. Installing the Google Cloud Billing block takes only a few steps, and you'll end up with a dashboard that helps you see what your spend looks like.

That's also a lot of data, but much easier to understand!

To get started using the GCP Billing block, you'll first need to create a connection to your BigQuery project using the steps outlined here. Next, you can search the marketplace to find and install the block on your instance. The pre-built Looker block is a great starting place for analyzing your billing data. However, you can continue to develop new fields in LookML or even customize existing definitions. We've included some examples of how to customize this block in our Github ReadMe.

Using the dashboards

Now that you have your shiny new dashboards up and running, there are a few things you may want to look for to optimize your spend. First off, you may want to use the Cost Summary dashboard to pinpoint areas of high spend to drill into. Using the Project Deep Dive dashboard, we can focus our attention on a single project that has unusual spend behavior. With the information in this report, we can pinpoint a specific service that is ripe for optimization, for example Compute Engine. Finally, we can head over to the Recommendations Insights dashboard to understand where there may be tactical things we can do to immediately reduce costs. You can even create a custom field action to mark a recommendation as accepted straight from the Looker dashboard.

Further customizations

One of the key advantages of centralizing your billing, pricing and recommendations data in BigQuery and analyzing it in Looker is the ability to include custom business logic and other data sources. For example, you may use labels to represent the designated cost center for each project.
In this case, you can customize the LookML to create a new dimension that leverages the value of the cost_center label, as shown here. With each cost center represented in your model, you could even refine the aggregation metrics to divide the support costs across the teams. Finally, each of these cost centers may have its own budget information, managed in a Google Spreadsheet. By creating an external table in BigQuery, Looker users can then see how each team's spend is tracking against its budget.

With so many organizations taking a multi-cloud approach to their technology stack, you may be wondering how you can see your Google Cloud billing data alongside other platforms. The good news is that the Google Cloud Cost Management block is just one element of the Looker Multi-Cloud Cost Management Solution. Stay tuned for more on that soon!

See what I mean?

While you may want to make some additional customizations, the block is a great starting point to quickly see where your spend is coming from and whether there are any surprises. Another option for getting started is the Data Studio template, which gives you a few starting views into your billing data.

The starter Data Studio template with some sample billing data

If you're not using Looker yet, check out these getting-started guides to learn more and request a demo.
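A cost_center dimension like the one described above could be sketched in LookML roughly as follows. This is a hypothetical example: the view it belongs to and your label key may differ, and in the billing export schema `labels` is a repeated key/value record, which is why the subquery unnests it:

```lookml
dimension: cost_center {
  description: "Value of the cost_center label attached to the project"
  type: string
  # labels is an ARRAY<STRUCT<key, value>> in the billing export table
  sql: (SELECT l.value FROM UNNEST(${TABLE}.labels) AS l
        WHERE l.key = 'cost_center') ;;
}
```

With this dimension in place, any explore built on the cost table can group and filter spend by cost center, which is what makes the per-team budget comparison possible.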
Source: Google Cloud Platform
Syntacore is working on a RISC-V-based CPU intended for use in Russian educational and government institutions. (RISCV, Processor)
Source: Golem
By 2035, China aims to have a fleet of aircraft in service that can carry passengers to Europe or the US within an hour. (Aircraft, Technology)
Source: Golem