Network outage: residents cool Telekom cabinet with a parasol
Vodafone customers on the Telekom network lost their internet connection during a heat wave because the distribution cabinet overheated. Local residents took matters into their own hands. (Fans, DSL)
Source: Golem
Volkswagen plans to add staff at its software subsidiary Cariad because of development problems there. (Cariad, Electric Car)
Source: Golem
The smart system for pedelecs scores points with its good smartphone integration, but it does not work with older e-bikes and has another major weakness. By Martin Wolf (E-Bike, Telecommunications)
Source: Golem
The hack of the crypto game Axie Infinity was made possible by a fake job offer, one that even included several rounds of interviews. (Cryptocurrency, Internet)
Source: Golem
Japanese courts can now hand down one-year prison sentences for hate comments and insults posted online. (Politics/Law, Social Networks)
Source: Golem
Android in Windows is now set to use the same IP address as the host system and to support VPNs. AV1 is also coming to the system. (Windows, Microsoft)
Source: Golem
Who's up for a challenge? It's time to show off your #GoogleClout! Starting today, check in every Wednesday to unlock a new cloud puzzle that will test your cloud skills against participants worldwide. Stephanie Wong's previous record is 5 minutes; can you complete the new challenge in 4?

#GoogleClout Challenge

The #GoogleClout challenge is a no-cost weekly 20-minute hands-on challenge. Every Wednesday for the next 10 weeks, a new challenge will be posted on our website. Participants will race against the clock to see how quickly they can complete the challenge. Attempt the 20-minute challenge as many times as you want. The faster you go, the higher your score!

How it works

To participate, follow these four simple steps:

Enroll – Go to our website, click the link to the weekly challenge, and enroll in the quest using your Google Cloud Skills Boost account.
Play – Attempt the challenge as many times as you want. Remember, the faster you are, the higher your score!
Share – Share your score card on Twitter/LinkedIn using #GoogleClout.
Win – Complete all 10 weekly challenges to earn exclusive #GoogleClout badges.

Ready to get started? Take the #GoogleClout challenge today!
Source: Google Cloud Platform
The name Kitabisa means "we can" in Bahasa Indonesia, the official language of Indonesia, and captures our aspirational ethos as Indonesia's most popular fundraising platform. Since 2013, Kitabisa has been collecting donations in times of crisis and natural disasters to help millions in need. Pursuing our mission of "channeling kindness at scale," we deploy AI algorithms to foster Southeast Asia's philanthropic spirit with simplicity and transparency.

Unlike e-commerce platforms that can predict spikes in demand, such as during Black Friday, Kitabisa's mission of raising funds when disasters like earthquakes strike is by definition unpredictable. This is why the ability to scale up and down seamlessly is critical to our social enterprise.

In 2020, Indonesia's COVID-19 outbreak coincided with Ramadan. Even in normal times, this is a peak period, as the holy month inspires charitable activity. But during the pandemic, the crush of donations pushed our system beyond the breaking point. Our platform went down for a few minutes just as Indonesia's giving spirit was at its height, creating frustrations for users.

A new cloud beginning

That's when we realized we needed to embark on a new cloud journey, moving from our monolithic system to one based on microservices. This would enable us to scale up for surges in demand, but also scale down when a wave of giving subsides. We also needed a more flexible database that would allow us to ingest and process the vast amounts of data that flood into our system in times of crisis.

These requirements led us to re-architect our entire platform on Google Cloud. Guided by a proactive Google Cloud team, we migrated to Google Kubernetes Engine (GKE) for our overall containerized computing infrastructure, and from Amazon RDS to Cloud SQL for MySQL and PostgreSQL for our managed database services.

The result surpassed our expectations. During the following year's Ramadan season, we gained a 50% boost in computing resources to easily handle escalating crowdfunding demands on our system. This was thanks to both the seamless scaling of GKE and recommendations from the Google Cloud Partnership team on deploying ProxySQL in front of Cloud SQL to optimize our managed database instances.

A progressive journey to kindness at scale

While Kitabisa's mission has never wavered, our journey to optimized performance took us through several stages before we ultimately landed on our current architecture on Google Cloud.

Origins on a monolithic provider

Kitabisa was initially hosted on DigitalOcean, which only allowed us to run monolithic applications based on virtual machines (VMs) and a stateful managed database. This meant manually adding one VM at a time, which led to challenges in scaling up VMs and core memory when a disaster triggered a spike in donations. Conversely, when a fundraising cycle was complete, we could not scale down automatically from the high specs of manually provisioned VMs, which was a strain on manpower and budgetary resources.

Transition to containers

To improve scalability, Kitabisa migrated from DigitalOcean to Amazon Web Services (AWS), where we hoped deploying load balancers would provide sufficient automated scaling to meet our network needs. However, we still found manual configurations to be too costly and labor-intensive. We then attempted to improve automation by switching to a microservices-based architecture.
But on Amazon Elastic Container Service (Amazon ECS) we hit a new pain point: when launching applications, we needed to ensure they were compatible with CloudFormation in deployment, which reduced the flexibility of our solution building due to vendor lock-in. We decided it was "never too late" to migrate to Kubernetes, a more agile container orchestration solution. Given that we were already using AWS, it seemed natural to move our microservices to Amazon Elastic Kubernetes Service (Amazon EKS). But we soon found that provisioning Kubernetes clusters with EKS was still a manual process that required a lot of configuration work for every deployment.

Unlocking automated scalability

At the height of the COVID-19 crisis, faced with mounting demands on our system, we decided it was time to give Google Kubernetes Engine (GKE) a try. Since Kubernetes is a Google-designed solution, it seemed likely that GKE would provide the most flexible microservices deployment, alongside better access to new features. Through a direct comparison with AWS, we discovered that everything from provisioning Kubernetes clusters to deploying new applications became fully automated, with the latest upgrades and minimal manual setup. By switching to GKE, we can now absorb any unexpected surge in donations and add new services without expanding the size of our engineering team. The transformative value of GKE became apparent when severe flooding hit Sumatra in November 2021, affecting 25,000 people. Our system easily handled the 30% spike in donations.

Moving to Cloud SQL and ProxySQL

Kitabisa was also held back by its monolithic database system, which was prone to crashing under heavy demand. We started to solve the problem by moving from a stateful DigitalOcean database to a stateless Redis one, which freed us from relying on a single server and gave us better agility and scale. But the strategy left a major pain point because it still required us to self-manage databases. In addition, we were experiencing high database egress costs due to the need to transfer data from a non-Google Cloud database into BigQuery. In December 2021, we migrated from Amazon RDS to Cloud SQL for MySQL and immediately saved 10% in egress costs per month. But one of the greatest benefits came when the Google Cloud team recommended using ProxySQL, the open source proxy for MySQL, to improve the scalability and stability of our data pipelines.

Cloud SQL's compatibility allowed us to use connection pooling tools such as ProxySQL to better load balance our application. Historically, a direct connection to a monolithic database was a single point of failure that could end in a crash. With Cloud SQL plus ProxySQL, we create a layer in front of our database instances. ProxySQL serves as a load balancer that lets us connect to multiple database instances simultaneously, using a primary and a read replica instance. Now, whenever we have a read query, we redirect it to the read replica instance instead of the primary instance. This configuration has transformed the stability of our database environment because we can have multiple database instances running at the same time, with the load distributed across all of them. Since switching to Cloud SQL as our managed database and using ProxySQL, we have experienced zero downtime on our fundraising platform, even when a major crisis hits.
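To make the read/write split concrete, here is a minimal Python sketch of the routing decision that ProxySQL makes in front of Cloud SQL. The DSNs and the keyword check are placeholder assumptions for illustration only; in a setup like Kitabisa's, this logic lives in ProxySQL query rules at the proxy layer, so application code never has to pick an endpoint itself.

```python
# Hypothetical endpoints standing in for the Cloud SQL primary and its
# read replica; a real deployment points the application at ProxySQL instead.
PRIMARY_DSN = "mysql://app@primary.internal/kitabisa"
READ_REPLICA_DSN = "mysql://app@replica.internal/kitabisa"

def route(query: str) -> str:
    """Send read-only queries to the read replica, everything else to the primary."""
    is_read = query.lstrip().lower().startswith(("select", "show"))
    return READ_REPLICA_DSN if is_read else PRIMARY_DSN

# Read traffic is offloaded to the replica...
assert route("SELECT SUM(amount) FROM donations") == READ_REPLICA_DSN
# ...while writes keep going to the primary instance.
assert route("INSERT INTO donations (amount) VALUES (50000)") == PRIMARY_DSN
```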
We are also saving costs. Rather than having a separate database for each Kubernetes cluster, we've merged multiple database instances into one instance. We now group databases by business unit instead of per service, yielding database cost reductions of 30%.

Streamlining with Terraform deployment

There's another key way in which Google Cloud managed services have allowed us to optimize our environment: using Terraform as an infrastructure-as-code tool to create new applications and upgrades to our platform. We have also automated the deployment of Terraform code into Google Cloud with the help of Cloud Build, with no human intervention. That means our development team can focus on creative tasks, while Cloud Build deploys a continuous stream of new features to Kitabisa.

The combination of seamless scalability, resilient data pipelines, and creative freedom is enabling us to drive the future of our platform, expanding our mission to inspire people to create a kinder world in other Asian regions. We believe that having Google Cloud as our infrastructure backbone will be a critical part of our future development, which will include adding exciting new insurtech features. Now firmly established on Google Cloud, we can go further in shaping the future of fundraising to overcome turbulent times.
Source: Google Cloud Platform
The National Institute of Standards and Technology (NIST) on Tuesday announced the completion of the third round of the Post-Quantum Cryptography (PQC) standardization process, and we are pleased to share that a submission (SPHINCS+) with Google's involvement was selected for standardization. Two submissions (Classic McEliece, BIKE) are being considered for the next round. We want to congratulate the Googlers involved in the submissions (Stefan Kölbl, Rafael Misoczki, and Christiane Peters) and thank Sophie Schmieg for moving PQC efforts forward at Google. We would also like to congratulate all the participants and thank NIST for their dedication to advancing these important issues for the entire ecosystem.

This work is incredibly important as we continue to advance quantum computing. Large-scale quantum computers will be powerful enough to break most public-key cryptosystems currently in use and compromise digital communications on the Internet and elsewhere. The goal of PQC is to develop cryptographic systems that safeguard against these potential threats, and NIST's announcement is a critical step toward that goal. Governments in particular are in a race to secure information because foreign adversaries can harvest sensitive information now and decrypt it later.

At Google, our work on PQC is focused on four areas: 1) driving industry contributions to standards bodies; 2) moving the ecosystem beyond theory and into practice (primarily through testing PQC algorithms); 3) taking action to ensure that Google is PQC ready; and 4) helping customers manage the transition to PQC.

Driving industry contributions to a range of standards bodies

In addition to our work with NIST, we continue to drive industry contributions to international standards bodies to help advance PQC standards. This includes ISO 14888-4, where Googlers are the editors for a standard on stateful hash-based signatures. More recently, we also contributed to the IETF proposal on data formats, which will define JSON and CBOR serialization formats for PQC digital signature schemes. These standards, collectively, will enable large organizations to build PQC solutions that are compatible and ease the transition globally.

Moving the ecosystem beyond theory and into practice: Testing PQC algorithms

We've been working with the security community for over a decade to explore options for PQC algorithms beyond theoretical implementations. We announced in 2016 an experiment in Chrome where a small fraction of connections between desktop Chrome and Google's servers used a post-quantum key-exchange algorithm, in addition to the elliptic-curve key-exchange algorithm that would typically be used. By adding a post-quantum algorithm in a hybrid mode with the existing key exchange, we were able to test its implementation without affecting user security.
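As a rough illustration of what "hybrid mode" means here, the following Python sketch derives a session key from both a classical X25519 exchange and a post-quantum KEM secret. The KEM is simulated with a placeholder function (no real PQC library is called), and the key-combination step is a simplified stand-in rather than the actual TLS construction, so treat this as a conceptual sketch under those assumptions, not the deployed design.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def pqc_kem_encapsulate(peer_pqc_public_key: bytes) -> tuple[bytes, bytes]:
    # Placeholder for a real post-quantum KEM library call; we only simulate
    # the (ciphertext, shared_secret) pair so the sketch runs end to end.
    return b"<kem-ciphertext>", os.urandom(32)

# Classical part: the elliptic-curve key exchange TLS already uses.
client_key = X25519PrivateKey.generate()
server_key = X25519PrivateKey.generate()
classical_secret = client_key.exchange(server_key.public_key())

# Post-quantum part: shared secret from the (simulated) KEM.
_ciphertext, pq_secret = pqc_kem_encapsulate(b"<peer-pqc-public-key>")

# Hybrid derivation: the session key depends on both secrets, so the
# connection stays protected as long as at least one algorithm holds up.
session_key = HKDF(
    algorithm=hashes.SHA256(),
    length=32,
    salt=None,
    info=b"hybrid-key-exchange-sketch",
).derive(classical_secret + pq_secret)
print(len(session_key), "byte hybrid session key derived")
```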
We took this work further in 2019 and announced a wide-scale post-quantum experiment with Cloudflare. We worked together to implement two post-quantum key exchanges, integrated them into Cloudflare's TLS stack, and deployed the implementation on edge servers and in Chrome Canary clients. Through this work, we learned more about the performance and feasibility of deploying two post-quantum key agreements in TLS, and have continued to integrate these learnings into our technology roadmap. In 2021, we tested broader deployment of post-quantum confidentiality in TLS and discovered a range of network products that were incompatible with post-quantum TLS. We were able to work with the vendor so that the issue was fixed in future firmware updates. By experimenting early, we resolved this issue for future deployments.

Taking action to ensure that Google is PQC ready

At Google, we're well into a multi-year effort to migrate to post-quantum cryptography, designed to address both immediate and long-term risks in order to protect sensitive information. We have one goal: ensure that Google is PQC ready. Internally, this effort has several key priorities, including securing asymmetric encryption, in particular encryption in transit. This means using ALTS, for which we are using a hybrid key exchange, to secure internal traffic, and using TLS (consistent with NIST standards) for external traffic. A second priority is securing signatures in the case of hard-to-change public keys or keys with a long lifetime, in particular focusing on hardware, especially hardware deployed outside of Google's control.

We're also focused on sharing the information we learn to help others address PQC challenges. For example, we recently published a paper that includes PQC transition timelines, leading strategies to protect systems against quantum attacks, and approaches for combining pre-quantum cryptography with PQC to minimize transition risks. The paper also suggests standards to start experimenting with now and provides a series of other recommendations to allow organizations to achieve a smooth and timely PQC transition.

Helping customers manage the transition to PQC

At Google Cloud, we are working with many large enterprises to ensure they are crypto-agile and to help them prepare for the PQC transition. We fully expect customers to turn to us for post-quantum cloud capabilities, and we will be ready. We are committed to supporting their PQC transition with a range of Google products, services, and infrastructure. As we make progress, we will continue to provide more PQC updates on Google core, cloud, and other services, and updates will also come from Android, Chrome, and other teams. We will further support our customers with Google Cloud transformation partners like the Google Cybersecurity Action Team to help provide deep technical expertise on PQC topics.

Additional references:
Google Cloud Security Foundations Guide
Google Cloud Architecture Framework
Google infrastructure security design overview
Source: Google Cloud Platform
Amazon Connect now lets you further personalize the self-service customer experience by using Amazon Lex intent confidence scores as a branch in your flows. Amazon Lex lets customers build intelligent chatbots that turn their Amazon Connect flows into natural conversations. By branching flows based on Lex confidence scores, you can get customers to the right resolution for their issues faster. For example, when the confidence score is high, you can immediately present customers with a self-service option instead of requesting additional information or routing them to an agent. This new feature can be configured via the "Check contact attributes" flow block.
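To illustrate the branching idea outside of the flow editor, here is a small Python sketch that queries a Lex V2 bot through boto3 and branches on the returned confidence score. The bot ID, alias, session ID, sample utterance, and the 0.8 threshold are placeholder assumptions; in Amazon Connect itself, the equivalent check is configured in the "Check contact attributes" flow block rather than in code.

```python
import boto3

# Placeholder identifiers; substitute the values of your own Lex V2 bot.
lex = boto3.client("lexv2-runtime")
response = lex.recognize_text(
    botId="BOT_ID",
    botAliasId="BOT_ALIAS_ID",
    localeId="en_US",
    sessionId="customer-1234",
    text="I want to check my bill",
)

# Take the top interpretation and its NLU confidence score (0.0 to 1.0).
top = response["interpretations"][0]
confidence = top.get("nluConfidence", {}).get("score", 0.0)

if confidence >= 0.8:  # assumed threshold for this sketch
    # High confidence: branch straight into the matching self-service flow.
    print("Self-service:", top["intent"]["name"])
else:
    # Low confidence: ask for more information or hand off to an agent.
    print("Escalate to an agent")
```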
Source: aws.amazon.com