Mattel and SpaceX: SpaceX spacecraft to become Matchbox toys
Next year, Mattel will release the Falcon 9, Starship and co. as miniature versions under the Matchbox brand. SpaceX is on board. (SpaceX, spaceflight)
Source: Golem
The Casio F-91W digital watch is a classic that a developer has upgraded into a smartwatch with modern internals, Bluetooth and an OLED display. (Casio, OLED)
Source: Golem
Electric As You Go from Stellantis is a long-term rental for electric cars at low prices. The catch: customers have to hand over their combustion-engine cars. (Cars, technology)
Source: Golem
The GPS trackers, which are also used in Germany, make it possible to stop a vehicle's engine. Because of security vulnerabilities, third parties can do this as well. (GPS, Android)
Source: Golem
For three minutes, fans can get a glimpse of the series The Lord of the Rings: The Rings of Power. Many are making fun of it. (The Lord of the Rings, Amazon)
Source: Golem
An SAP architect designing a backup solution faces questions from the business (what needs to be done) and from delivery (how it can be accomplished). This blog gives an overview of the challenges, the available options, and their advantages and disadvantages. As a conclusion, a blended backup concept is proposed that uses cloud technology to combine best-of-market approaches to satisfy business requirements without inflating complexity or costs. Finally, we introduce Actifio GO, the future Google Cloud Backup and DR. It is Google's enterprise-scale backup solution that provides centralized, policy-based protection of multiple workloads. We describe its features and how it can help determine exactly when a logical error occurred, and even how to repair databases.

Terms used in this document

It is helpful to clarify a few central terms using a diagram of a restore process: a corruption or logical error occurs during normal operation, and the database then needs to be restored. After the restore, its logs need to be replayed; this step is sometimes called roll-forward. Then the database is started, which again takes time to complete. Downtime arguably also includes the corruption itself and the time to detect it. The maximum allowable downtime is the RTO (recovery time objective), and the maximum data loss that can be tolerated is the RPO (recovery point objective). For file systems, the diagram looks the same, except that the log replay and database start steps fall away.

Many customers rely on HA nodes to reduce downtime, and they can help, but not in the case of logical errors such as deleted tables. To recover from those, full backups or snapshots are the solution. In this article, "backup" means either a full backup or a snapshot.

Snapshots are fast and cost-efficient. They are a mechanism to store only changes while keeping the original disk state. Taking a snapshot and reverting to it can both happen in a very short, size-independent time frame, while the snapshot's size corresponds to the amount of changed data. A snapshot represents the disk content at the time it was taken. If you take a snapshot during normal operation, it will be crash-consistent, just as if the power had been switched off: databases and file systems will have to recover when you revert to it. If you want to avoid this time-consuming recovery for the database, the database needs to place itself into a state that is ready for an application-consistent snapshot. All databases supported by SAP NetWeaver have mechanisms for this; for HANA, it is the prepare step of a HANA snapshot. Conceptually, it forces all required DB storage writes to disk and then quiesces disk activity at the OS level so the snapshot can be created. On Google Cloud, snapshots can build on each other; for example, you could keep 24 snapshots, each one hour apart. Snapshots reside by default on multi-regional storage, which guarantees that they can tolerate a regional outage.

A full database backup of HANA will typically be around 0.6x RAM in size. The size of a snapshot, on the other hand, starts at zero and grows with incoming data changes.

Ransomware attacks typically encrypt a company's data with a key that is known only to the attacker. To recover from such an attack, customers need to restore a backup without this infection, which is hard if the attacker has had the opportunity to infect the backups as well. With Google Cloud's Bucket Lock feature, however, backups can be made immutable for a retention period of up to 100 years.
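As a rough illustration (not from the original article) of how Bucket Lock can be applied to a backup bucket with gsutil, consider the following sketch; the bucket name, location, and the 30-day retention period are made-up example values:

```bash
# Create a multi-regional bucket for backups (example name and location).
gsutil mb -l EU gs://my-sap-backup-bucket

# Set a retention policy: objects cannot be deleted or overwritten for 30 days.
gsutil retention set 30d gs://my-sap-backup-bucket

# Optionally lock the policy. Caution: locking is irreversible; the retention
# period can then only be increased, never removed or shortened.
gsutil retention lock gs://my-sap-backup-bucket
```

Whether to lock the policy permanently is a deliberate trade-off: an unlocked policy can still be removed by an attacker with sufficient permissions, while a locked one cannot.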
Backup Solutions by SAP Components

A typical discussion is whether productive systems should get more backup resources than, for example, DEV and QAS. With snapshots, this regulates itself, because resource consumption is determined by the amount of changes in the respective system. In the past, the idea was to run daily full backups in production and weekly full backups in non-production, which implicitly assumes that production is seven times as important as non-production. By taking snapshots instead of full backups, a lower data change rate automatically saves storage costs, and a distinction between the backup SLA for production and non-production is no longer needed.

Blended backup approach

As discussed, snapshots provide a low RTO, and because they can be taken frequently, they also provide a low RPO. Full database backups, on the other hand, provide an integrity check by SAP and allow the backup to be separated from the location and storage infrastructure it was created on. To achieve low costs through low storage consumption and a low RTO/RPO at the same time, we propose:

- Make sure you can quickly create application servers and database servers with their root file systems, for example using Terraform scripts. Being that agile will not only speed up recovery, but also allow you to scale faster on the application layer and envision leaner DR concepts.
- Take a Persistent Disk snapshot of the application servers' and database servers' root file systems every day and delete (merge) the old one. It is stored in a multi-regional bucket by default, and storage consumption will only be the data changes since the previous day.
- Before an operating system or software update, take a Persistent Disk snapshot so you can revert within seconds.
- Shared file systems: take daily snapshots using the shared storage's own mechanism. In case of high interface usage, this can be done more frequently. Overwrite the existing snapshot, so storage consumption will only be the data changes since the last snapshot.
- Databases: all SAP databases have similar support for storage snapshots. For productive and non-productive databases (using HANA as an example), we recommend the following approach as a starting point:
  - As the primary mechanism, use DB-consistent snapshots orchestrated from HANA Studio, as frequently as every 10 minutes, and retain a series of snapshots. This gives you a fully DB-consistent backup with very quick restore times that is also very efficient on storage, adds very little operational load, and is by default replicated to other regions.
  - As the secondary mechanism, take a weekly full database backup at the time of lowest activity, e.g. midnight, overwriting the previous one, to multi-regional Cloud Storage. This gives you a DB-checked consistent backup, also replicated to other regions, and provides additional protection against DB-level block errors.

This approach achieves an RPO of under 10 minutes while retaining only one full backup and a series of snapshots. Restore speed will be very high, since a restore just means reverting to a snapshot. Storage consumption will be small: one full backup, at most one week of changes, and the log backups from one week.
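To make the database snapshot cycle more concrete, here is a minimal bash sketch of the prepare/snapshot/confirm steps, orchestrated from the command line with hdbsql rather than from HANA Studio as described above. The disk name, zone, instance number, SQL user, and the HANA_PASSWORD variable are illustrative assumptions, not values from the original article:

```bash
#!/usr/bin/env bash
# Sketch: application-consistent HANA snapshot on Google Cloud Persistent Disk.
set -euo pipefail

# Helper around hdbsql; instance number, database, user and password are placeholders.
hdb() { hdbsql -i 00 -d SYSTEMDB -u SYSTEM -p "${HANA_PASSWORD}" "$1"; }

SNAP="hana-data-$(date +%Y%m%d-%H%M)"

# 1. Prepare step: HANA flushes all required data to disk and keeps the snapshot open.
hdb "BACKUP DATA FOR FULL SYSTEM CREATE SNAPSHOT COMMENT '${SNAP}';"

# 2. Take the Persistent Disk snapshot of the HANA data volume while the DB snapshot is open.
gcloud compute disks snapshot hana-data-disk \
  --zone=europe-west4-a \
  --snapshot-names="${SNAP}"

# 3. Confirm the HANA snapshot; the BACKUP_ID of the prepared snapshot can be read from
#    M_BACKUP_CATALOG (ENTRY_TYPE_NAME = 'data snapshot', STATE_NAME = 'prepared').
hdb "BACKUP DATA FOR FULL SYSTEM CLOSE SNAPSHOT BACKUP_ID <backup_id> SUCCESSFUL '${SNAP}';"
```

In a real setup, the confirm step would look up the BACKUP_ID automatically and mark the snapshot as unsuccessful if the disk snapshot fails, so that the HANA backup catalog stays consistent.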
The design can be adapted to the customer's preferences. The frequency of full backups can be changed from weekly to daily without increasing storage consumption, since previous backups are overwritten. To save costs, single-regional backups can be chosen instead, with the strong recommendation to keep them outside the region where the system is running. Log backups can be added to reduce the RPO further.

So how many snapshots of the database should you retain? If you snapshot every 15 minutes, chances are high that the latest snapshot already contains the error you want to recover from. In that case you must be able to go further back, so you need to manage several snapshots. This is where Actifio proves helpful.

IMPORTANT: A number of SAP systems have cross-system data synchronicity requirements (e.g. SAP ECC and SAP CRM) and can be considered so closely coupled that data consistency across all of them needs to be ensured. Recovery activities for any single system would then trigger similar recovery activities in the other systems. Depending on the customer-specific environment, additional backup mechanisms may be required to meet these cross-system data consistency requirements.

The Actifio backup software

Actifio (soon to be Google Cloud Backup and DR) is Google's software for managing backups. It supports GCP-native Persistent Disk snapshots and the SAP-supported databases DB2, Oracle, SAP ASE, SAP HANA, SAP IQ, SAP MaxDB and SQL Server. For SAP customers, the following benefits are of special interest:

- a single management interface for database and file system backups, not limited to SAP data;
- backups on VM level instead of disk level;
- direct backup to the Sky server ("backup appliance") with no need for intermediate storage.

With Actifio it is also possible to determine the point in time at which a logical error occurred. You can spin up, say, 10 virtual machines, each holding a mount of a different snapshot. Administrators can then check when the error occurred, for example between snapshot 7 and 8, which reduces the data loss to a minimum. The options do not stop there: it is also possible to "repair" a database. Take the example above and mount snapshot 7 to a virtual machine; it contains a table that was dropped in snapshot 8 and newer. Now you can export that single table and import it into the production database. Note that this may lead to inconsistencies, but the option is there.

See also:
- How to do HANA snapshots
- How HANA savepoints relate to snapshots
- FAQ about HANA snapshots
Source: Google Cloud Platform
Docker Captains are select members of the community that are both experts in their field and are passionate about sharing their Docker knowledge with others. “Docker Captains Take 5” is a regular blog series where we get a closer look at our Captains and ask them the same broad set of questions ranging from what their best Docker tip is to whether they prefer cats or dogs (personally, we like whales and turtles over here). Today, we’re interviewing Thorsten, who recently joined as a Docker Captain. He’s a Cloud-Native Consultant at Thinktecture and is based in Saarbrücken, Germany.
How/when did you first discover Docker?
I started using Docker when I got a shiny new MacBook Pro back in 2015. Before unboxing the new device, I was committed to keeping my new rig as clean and efficient as possible. I didn’t want to mess up another device with numerous databases, SDKs, or other tools for every project. Docker sounded like the perfect match for my requirements. (Spoiler: It was!)
When using macOS as an operating system, Docker Toolbox was the way to go back in those days.
Although quite some time has passed since 2015, I still remember how amazed I was by Docker’s clean CLI design and how Docker made underlying (read: way more complicated) concepts easy to understand and adopt.
What’s your favorite Docker command?
To be honest, I think “favorite” is a bit too complicated to answer! Based on hard facts, it’s docker run.
According to my ZSH history, it's the command with the most invocations. By the way, if you want to find yours, use this command:
```bash
history | awk 'BEGIN {FS="[ \t]+|\\|"} {print $3,$4}' | sort | uniq -c | sort -nr | grep docker | head -n 10
```
Besides docker run, I would go with docker sbom and docker scan. Those help me to address common requirements when it comes to shift-left security.
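In case you haven't tried them yet, here is what a minimal invocation of each looks like; nginx:latest is just an arbitrary public image chosen for illustration:

```bash
# Generate a software bill of materials (SBOM) for an image.
docker sbom nginx:latest

# Scan an image for known vulnerabilities (powered by Snyk).
docker scan nginx:latest
```

Running both against the images you ship is a quick way to move security checks earlier in the development cycle.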
What’s your top tip for working with Docker that others may not know?
From a developer’s perspective, it’s definitely docker context in combination with Azure and AWS.
Adding Azure Container Instances (ACI) or Amazon Elastic Container Service (ECS) as a Docker context and running your apps straight in the public cloud within seconds is priceless.
Perhaps you want to quickly try out your application, or you have to verify that your containerized application works as expected in the desired cloud infrastructure. Serverless contexts from Azure and AWS with native integration in Docker CLI provide an incredible inner-loop experience for both scenarios.
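As a rough sketch of that workflow (the context names myaci and myecs are arbitrary examples, and both commands will prompt for the cloud-specific details such as resource group or AWS profile):

```bash
# Create a context backed by Azure Container Instances.
docker login azure
docker context create aci myaci

# Or create a context backed by Amazon ECS.
docker context create ecs myecs

# Run a Compose application straight in the cloud by switching contexts.
docker --context myaci compose up
```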
What’s the coolest Docker demo you’ve done/seen?
It might sound a bit boring these days. However, I still remember how cool the first demo from people at Microsoft on debugging applications running in Docker containers was.
Back in those days, they demonstrated how to debug applications running in Docker containers on the local machine and attach the local debugger to Docker containers running in the cloud. Seeing the debugger stopping at the desired breakpoint, showing all necessary contextual information, and knowing about all the nitty-gritty infrastructure in-between was just mind blowing.
That was the “now we’re talking” moment for many developers in the audience.
What have you worked on in the past six months that you’re particularly proud of?
As part of my daily job, I help developers understand and master technologies. The most significant achievement is when you recognize that they don’t need your help anymore. It’s that moment when you realize they’ve grasped the technologies — which ultimately permits them to master their technology challenges without further assistance.
What do you anticipate will be Docker’s biggest announcement this year?
Wait. There is more to come? Really!? TBH, I have no clue. We’ve had so many significant announcements already in 2022. Just take a look at the summary of DockerCon 2022 and you’ll see what I mean.
Personally, I hope to see handy extensions appearing in Docker Desktop, and I would love to see new features in Docker Hub when it comes to automations.
What are some personal goals for the next year with respect to the Docker community?
I want to help more developers adopt Docker and its products to improve their day-to-day workflow. As we start to see more in-person conferences here in Europe, I can’t wait to visit new communities, meetups, and conferences to demonstrate how Docker can help them take their productivity to a whole new level.
Speaking to all the event organizers: If you want me to address inner-loop performance and shift-left security at your event, ping me on Twitter and we’ll figure out how I can contribute.
What was your favorite thing about DockerCon 2022?
I won’t pick a particular announcement. It’s more the fact that Docker as a company continually sharpens its communication, marketing, and products to address the specific needs of developers. Those actions help us as an industry build faster inner-loop workflows and address shift-left security’s everyday needs.
Looking to the distant future, what’s the technology that you’re most excited about and that you think holds a lot of promise?
Definitely cloud-native. Although the term cloud-native has been around for quite some time now, I think we haven’t nailed it yet. Vendors will abstract complex technologies to simplify the orchestration, administration, and maintenance of cloud-native applications.
Instead of thinking about technical terms, we must ensure everyone thinks about this behavior when the term cloud-native is referenced.
Additionally, the number of tools, CLIs, and technologies developers must know and master to take an idea into an actual product is too high. So I bet we’ll see many abstractions and simplifications in the cloud-native space.
Rapid fire questions…
What new skill have you mastered during the pandemic?
Although I haven’t mastered it (yet), I would answer this question with Rust. During the pandemic, I looked into some different programming languages. Rust is the language that stands out here. It has an impressive language design and helps me write secure, correct, and safe code. The compiler, the package manager, and the entire ecosystem are just excellent.
IMO, every developer should dive into new programming languages from time to time to get inspired and see how other languages address common requirements.
Cats or Dogs?
Dogs. We thought about and discussed having a dog for more than five years. Finally, in December 2022, we found Marley, the perfect dog to complete our family.
Salty, sour, or sweet?
Although I would pick salty, I love sweet Alabama sauce for BBQ.
Beach or mountains?
Beach, every time.
Your most often used emoji?
Phew, there are tons of emojis I use quite frequently. Let’s go with 🚀.
Source: https://blog.docker.com/feed/
Particularly fast delivery riders are set to get better shifts at Gorillas in the future; the delivery service says this is only a proposal. (Gorillas, startup)
Source: Golem
A brief roundup of what else happened on July 22, 2022 besides the big stories. (Short news, Canon)
Source: Golem
During checks, the Berlin police found that rules on data queries had repeatedly been disregarded. (Police, database)
Source: Golem