Mimimi Games: Shadow Tactics returns to old Japan
The award-winning strategy game Shadow Tactics is getting a standalone expansion with Aiko as the main character. (Strategy game, Daedalic)
Source: Golem
The Problem
Recently, I worked with a customer in the courier industry whose core logistics scheduling application suffered from reliability, performance, and scalability issues. While I addressed the immediate problems, it was clear that 20 years of accumulated patchwork fixes, enhancements, and technical debt were the root cause.
Source: CloudForms
Telecommunications companies sit on a veritable goldmine of data they can use to drive new business opportunities, improve customer experiences, and increase efficiencies. There’s so much data, in fact, that a significant challenge lies in ingesting, processing, refining, and using that data efficiently enough to inform decision-making as quickly as possible, often in near real time.

According to a new study by Analysys Mason, telecommunications data volumes are growing worldwide at 20% CAGR, and network data traffic is expected to reach 13 zettabytes by 2025. To stay relevant as the industry evolves, communications service providers (CSPs) need to manage and monetize their data more effectively to:

- Deliver new user experiences and B2B2X services, with the “X” being customers and entities in previously untapped industries, and unlock new revenue streams.
- Transform operations by harnessing data, automation, and artificial intelligence (AI)/machine learning (ML) to drive new efficiencies, improved network performance, and decreased CAPEX/OPEX across the organization.

Here are four key data management and analytics challenges CSPs face, and how cloud solutions can help.

1. Reimagining the user experience means CSPs need to solve near-real-time data analytics challenges.

Consider being able to suggest offers to customers at the right place and time, based on their interactions. Or imagine being able to maximize revenue generation by dynamically adjusting offers to macro and micro groups based on trends you discover during a campaign. These types of programs, which reduce churn and increase up-sell/cross-sell, are made possible when you can correlate your data across systems and get actionable insights in near real time.

When it comes to effective decision-making in near real time, speed is critical. Low latency is required for use cases like delivering location-based offers while customers are still on-site, or detecting fraud fast enough during a transaction to minimize losses. Cloud vendors can offer the speed and scale to handle the streaming data that near-real-time processing requires. At Google, we understand these requirements because they are core to our business, and we’ve developed the technologies to meet them at scale. Google Cloud’s BigQuery, for example, is a serverless, highly scalable cloud data warehouse that supports streaming ingestion and super-fast queries at petabyte scale. The Google infrastructure technologies that underpin BigQuery, such as Dremel, Colossus, Jupiter, and Borg, were developed to address Google’s global data scalability challenges. Google Cloud’s full stream analytics solution is built on Pub/Sub and Dataflow, and supports the ingestion, processing, and analysis of fluctuating volumes of data for near-real-time business insights. Furthermore, CSPs can take advantage of Google Cloud Anthos, which can place workloads closer to the customer, whether within an operator’s own data center, across clouds, or at the edge, enabling the speed required for latency-sensitive use cases.

What’s more, according to Justin van der Lande, principal analyst at Analysys Mason, “real-time use cases require an action to take place based on changes in streaming data, which predicts or signifies a fresh action.” They also require constant model validation and optimization. Therefore, using ML tools like TensorFlow in the cloud can help improve models and prevent them from degrading.
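To make the streaming pattern described above concrete, here is a minimal, hedged sketch of one common ingestion path: events arriving on a Pub/Sub subscription are streamed into a BigQuery table. A production pipeline would typically run in Dataflow instead; the project, subscription, table, and event schema below are hypothetical.

```python
# Sketch: stream Pub/Sub events into BigQuery for near-real-time analytics.
# Project, subscription, table name, and message schema are placeholders.
import json

from google.cloud import bigquery, pubsub_v1

bq = bigquery.Client()
table_id = "my-project.telco_analytics.network_events"  # hypothetical table

def handle_message(message: pubsub_v1.subscriber.message.Message) -> None:
    """Parse one network event and stream it into BigQuery."""
    row = json.loads(message.data)
    errors = bq.insert_rows_json(table_id, [row])  # streaming insert
    if not errors:
        message.ack()
    else:
        message.nack()  # let Pub/Sub redeliver on failure

subscriber = pubsub_v1.SubscriberClient()
subscription = subscriber.subscription_path("my-project", "network-events-sub")
future = subscriber.subscribe(subscription, callback=handle_message)
future.result()  # block and process messages until interrupted
```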
Cloud-based services also let CSP developers build, deploy, and train ML models through APIs or a management platform, so models can be deployed quickly with the appropriate validation, testing, and governance. Google Cloud AutoML enables users with limited ML expertise to train high-quality models specific to their business needs.

2. Driving CSP operational efficiencies requires streamlining fragmented and complex sets of tools.

Over time, many CSPs have built up highly fragmented and complex sets of software tools, platforms, and integrations for data management and analysis. A legacy of M&A activity over the years means different departments or operating companies may have their own tools, which adds to the complexity of procuring and maintaining them, and can also impact an operator’s ability to make changes and roll out new functionality quickly.

Cloud providers offer CSPs access to advanced data and analytics tools with rich capabilities that are continuously updated. Google Cloud, for instance, offers Looker, which enables organizations to connect, analyze, and visualize data across Google Cloud, Azure, AWS, or on-premises databases, and is ideal for streaming applications. In addition, hyperscale cloud vendors work with a wide ecosystem of technology partners, enabling operators to adopt more standardized data tools that support a wider variety of use cases and are more open to new requirements. For example, Google Cloud partnered with Amdocs to help CSPs consolidate, organize, and manage data more effectively in the cloud to lower costs, improve customer experiences, and drive new business opportunities. Amdocs DataONE extracts, transforms, and organizes data using a telco-specific and TM Forum-compliant Amdocs Logical Data Model. The solution runs on Google Cloud SQL, a fully managed and scalable relational database solution that allows you to more efficiently organize and improve the accessibility, availability, and visibility of your operational and analytical data. The Amdocs data solution can also integrate with BigQuery to take advantage of built-in ML. Finally, Amdocs Cloud Services offers a practice to help CSPs migrate, manage, and organize their data so they can extract the strategic insights needed to maximize business value.

3. Leveraging cloud and automation can help CSPs reduce cost and overhead as data volumes continue to rise.

One of the most powerful motivations for CSPs to adopt a cloud-based data infrastructure may be the prospect of lowering operational and capital costs. Analysys Mason predicts that IT and software capital spending for CSPs will approach $45 billion by 2025, and IT operational expenses will be more than double that amount. These costs are set to rise as operators support new digital services and growing data volumes. With cloud services, you pay for the capacity you use, not the servers you own. This not only saves on infrastructure-related capital costs, but also takes advantage of the efficiencies cloud computing achieves through scale, and means that all maintenance and updates are built into a predictable monthly bill.

Additionally, CSPs experience demand peaks and valleys daily and annually to accommodate busy internet traffic hours and high-audience events, like the Super Bowl. Building infrastructure to accommodate these peaks wastes resources and reduces your return on capital.
Customer demand may also fluctuate beyond these expected cycles, and large workloads like big data queries or ad hoc analytics and reports also make it difficult to predict your capacity needs. Cloud computing offers fast scaling up and down, even autoscaling, that isn’t always easy to achieve with on-premises systems.

4. Increasing customer lifetime value requires high-quality, complete data for timely decision-making.

Finally, CSPs need to use data and analytics to better understand how to engage with customers and deliver greater, more personalized services in order to increase overall customer lifetime value. This requires the ability to analyze and act on a complete set of quality data quickly enough to inform sound decision-making. For example, without high-quality and timely data on your most valued customers, you may not be able to spot customers who are about to churn, or conversely, you may offer discounts to customers who were not about to churn in the first place.

According to van der Lande, there are five main attributes required of a good data set: data quality, governance, speed, completeness, and shareability (see Chart 1). Put another way, your data is only as good as how fast you can capture, transform, and load it from a myriad of back-end systems, front-end systems, and networks; how complete it is; and how easily you can share a 360° view with the right decision-makers. It is also important to consider how well that data is governed. Considerations such as data lineage, data source, categorization of PII data, and regulatory requirements are very important as you look to build trust in the data quality and, ultimately, the insights. What’s more, the more data volumes grow, the more difficult it is to ensure their quality, governance, and completeness.

[Chart 1: The main CSP challenges related to data (Source: Analysys Mason)]

Operators can create a single operational data store in the cloud and use ML-driven preparation tools to improve data quality and completeness. Cloud vendors can also provide enterprise-grade security tools with the ability to manage access rights, as well as automated administration to ensure proper governance. The cloud supports near-real-time, end-to-end streaming pipelines for big data analytics that would otherwise quickly strain in-house systems. In addition, solutions like Google Cloud’s BigQuery Omni, powered by Anthos, give CSPs a consistent data analysis and infrastructure management experience, regardless of their deployment environment.

The telecommunications industry has a unique opportunity to mine the massive amount of data its systems generate to improve customer experiences, operate more efficiently, create innovative new products, and uncover use cases that generate new revenue opportunities faster. But as long as CSPs rely on rigid on-premises infrastructure, they’re unlikely to capitalize on this valuable resource. In a world where near-real-time decision-making is more critical than ever, the cloud can provide the agility, scale, and flexibility necessary to process and analyze this growing volume of data, so CSPs remain not just relevant, but competitive.

Download the complete Analysys Mason whitepaper, co-sponsored with Amdocs and Intel, to learn more.
Source: Google Cloud Platform
It’s been a long, cold winter, after a long, strange year. The global pandemic impacted the last baseball season, and it may do so again. But Spring Training is finally here, which is all about optimism and new beginnings. We’re excited for this season, and we’re really excited about our new architecture powered by Anthos on bare metal.

We already use an array of services from Google Cloud: we broadcast mlb.com out of Google Cloud, relying on Compute Engine, Google Kubernetes Engine (GKE), Cloud SQL, Load Balancing, and Cloud Storage, to name a few. We also use GKE for test and development, and BigQuery for analytics. And for the second season now, we run Anthos in our ballparks to host applications that need to run in the park for performance reasons. Take our Statcast baseball metric platform: cameras collect data on everything from pitch speed to ball trajectories to player poses, which gets fed into the Statcast pipeline in real time. Statcast transforms that data into on-screen analytics that announcers use as part of their game-time color commentary. Obviously, minimizing the time between when the bat hits the ball and when the result is displayed on screen is hugely important to the fan’s viewing experience.

Last year, our Anthos servers ran on top of VMware, but the plan had always been to run Anthos on bare metal because it would help us simplify the stack we have to maintain in our 31 parks. So when Anthos on bare metal became generally available in November, we pushed forward with our partner Arctiq to deploy it for the upcoming season. By eliminating the virtualization layer, Anthos on bare metal makes it easier to swap out a server in the event of a hardware failure (our ballparks weren’t designed as climate-controlled data centers, so hardware failures happen more often than you would think). When a failure happens, we simply drop in a new server, image it with Ubuntu and Anthos, and the cluster heals itself and automatically redeploys our apps. This kind of remote operation is particularly valuable in a pandemic. During the height of the 2020 season, local ordinances precluded most vendors and technicians from being on-site, making ease of replacement all the more important. 2021 will present many of the same restrictions.

Anthos on bare metal also makes us more agile. For example, the Toronto Blue Jays are starting their season in Dunedin, FL this year. If the team is able to go back to Toronto, Anthos on bare metal makes it easy for us to follow them there.

Going forward, we have big plans for Anthos. Over time, Anthos clusters will become a multi-tenant resource that we can offer to anyone who needs access to low-latency compute in the ballpark, like a food vendor or entertainment provider. Rather than having every vendor build their own silo, we provide Anthos as a service.

The 2020 MLB season was a season like no other, and we’re expecting the 2021 season to throw us its fair share of curveballs too. But when it comes to our in-park server infrastructure, Google Cloud and Anthos on bare metal have us feeling pretty good about the future. It’s time to play ball!

Major League Baseball trademarks and copyrights are used with permission of Major League Baseball. Visit MLB.com.
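As a rough illustration of the remote server-swap workflow described above (not MLB’s actual tooling), here is a minimal sketch that uses the standard Kubernetes Python client to check that a freshly imaged node has rejoined an Anthos cluster and gone Ready. Kubeconfig access to the ballpark cluster is an assumption.

```python
# Sketch: verify node health after a hardware swap, via the Kubernetes API.
from kubernetes import client, config

config.load_kube_config()  # assumes a kubeconfig for the ballpark cluster
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    # Each node reports a "Ready" condition once it has rejoined the cluster.
    ready = next(
        (c.status for c in (node.status.conditions or []) if c.type == "Ready"),
        "Unknown",
    )
    print(f"{node.metadata.name}: Ready={ready}")
```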
Source: Google Cloud Platform
As more services and applications go online, ensuring a frictionless customer experience is vital to building brand loyalty, capturing more sales, and optimizing profits. But if your underlying technology isn’t reliable, it’s easy to lose customers to the competition. For TELUS International, a leading digital customer experience innovator, ensuring the reliability of its online tools and services is crucial to its team’s mission to design, build, and deliver high-tech, high-touch customer experiences for some of the world’s most respected brands.

TELUS International bundles Verint, a workforce management application from a Google Cloud Partner, with its Cloud Contact Center platform to help North American call centers optimize customer service activities on the phone and online. TELUS International also uses Verint’s solution internally for its business process outsourcing. So, as part of its own digital transformation journey, TELUS International migrated Verint from its legacy on-premises data center to Google Cloud. This move will help its global service centers optimize customer service activities for improved performance, leveraging automation and AI-based analytics and insights to achieve better business outcomes. Currently, TELUS International has approximately 30,000 users on the Verint platform, so ensuring that it’s running on a reliable cloud platform like Google Cloud is vital.

Fast, painless migration

Martin Viljoen, VP of Information Technology at TELUS International, says the Verint migration from on-premises to Google Cloud was fast and seamless. “It took us about a week to stand up the infrastructure, which could have taken up to a year or more on-premises,” he says, given the traditional back and forth with hardware vendors to beef up the data center and solve problems along the way. “We didn’t have to worry about hardware availability on the fly. If you miss something, you just click and add it. You don’t have to write up a purchase order and wait for six weeks for delivery,” with much more time needed to get the new hardware operational. In all, Viljoen says it took only 4–6 weeks to go from system design to production. “Everything we needed was readily available,” he says. “And at the end of the day, it was very successful.”

Simple, quick provisioning

Every migration has its challenges, but TELUS International found this project’s bumps in the road much easier to navigate in Google Cloud. For example, Viljoen says the company started with a load balancer that was inexpensive but didn’t provide all the functionality needed. “We just went down the menu and selected the F5 load balancer,” which the company is currently using on-premises. “It was a very simple, very quick provisioning process and it proved why we are in the cloud. You can just pick any service if or when needed.” He says doing the same thing with an off-the-shelf load balancer and running into the same issue would have delayed the project for months. Getting F5 configured for the cloud was also easy: the company simply replicated its on-premises configuration in Google Cloud.

One-click backup

Backing up into Google Cloud is also simple for TELUS International. “All you do is right-click,” says Viljoen. “On-premises, I would have to order oodles of bandwidth or buy a massive storage array. We’re also backing up our on-premises data center into Google Cloud because it’s so easy to do.
It’s a no-brainer.”

Exceptional performance

With Verint Workforce Engagement and Google Cloud, TELUS has a world-class customer engagement platform that empowers its remote, globally distributed workforce to deliver exceptional customer experiences, while gaining real-time insight into business operations so it can adjust as needed to meet today’s ever-changing demands, both within contact centers and throughout the enterprise.

TELUS International’s clients are benefiting from improved performance on Google Cloud. “On a server, antivirus software is running and it eats up half of your resources,” Viljoen says. He says customers with resource-intensive jobs have reported dramatic improvements in speed, getting reports in minutes instead of hours.

As a result of all these gains, TELUS International plans to migrate more of Verint to Google Cloud, including key components of the application’s workforce management solution as well as its call and screen recording feature. Having this data in Google Cloud will make it more accessible and open up new possibilities for data analytics and integration with other services, such as the company’s telephony platform, which is also on Google Cloud.

Viljoen says, “We’re still at the low-hanging-fruit stage with Google Cloud, and we’re going to get deeper into the platform. The next step is to integrate other services that either our company or our clients are mandating. We’re a growing and evolving global organization. Having incremental tools and services in the cloud has made all aspects of our business a lot easier, including our integrations. Once things are in the cloud, it’s just a lot simpler to enable our business.”

At Google Cloud, we’re here to help you craft the right migration for you and your business, just like we did with TELUS International. Get started by signing up for a free migration cost assessment, or visit our data center migration solutions page to learn more. Let’s get migrating!
Source: Google Cloud Platform
When doing analytics at scale with BigQuery, understanding what is happening and being able to take action in real time is critical. To that end, we are happy to announce Resource Charts for BigQuery Administrator. Resource Charts provide a native, out-of-the-box experience for real-time monitoring and troubleshooting of your BigQuery environments. Resource Charts make it easy to understand your historical patterns across slot consumption, job concurrency, and job performance, allowing you to take action to ensure your BigQuery environment continues to run smoothly. Specifically, they can help you:

- Determine how your resources are being consumed across several dimensions, like projects, reservations, and users, so you can take remediating actions like pausing a troublesome query.
- Manage capacity by helping you understand how your resources are being consumed over time, so you can optimize your BigQuery environment’s slot capacity.

Taking Resource Charts for a spin

Let’s say you start the morning with a hot coffee in hand and suddenly several colleagues complain their queries are running slower than expected. You open up Resource Charts and immediately see there was a spike in slot usage. But what caused the spike? You zoom into the time range when the spike happened and group by different dimensions. When looking at the job dimension, you see that a new scheduled query job has been eating up a significant portion of your slot resources for the past 10 minutes. You find the query in Job history, click Cancel Job, and your BigQuery environment returns to normal. You just diagnosed your BigQuery environment, identified the outlier, and remediated the situation… all before you had a chance to put your coffee cup down.

Resource Charts leverage BigQuery’s INFORMATION_SCHEMA tables to render these visuals. This means all the data is also available for you to query directly, allowing you to create your own dashboards and monitoring processes. To help you get started, you can find example INFORMATION_SCHEMA queries on GitHub that show an organization’s slot and reservation utilization, job execution, and job errors. You can also view Google Data Studio dashboard templates built from these queries.

Resource Charts for BigQuery Administrator is available today in Public Preview for customers using Reservations, and we hope it makes it easier for you to manage your BigQuery environments. You can learn more about how to use Resource Charts here.
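Because Resource Charts are built on INFORMATION_SCHEMA, you can reproduce a basic slot-usage view yourself. Here is a minimal sketch using the google-cloud-bigquery Python client; the project ID is a placeholder, and the query assumes the US region’s JOBS_BY_PROJECT view.

```python
# Sketch: approximate average slot usage per query job over the last hour,
# from the same INFORMATION_SCHEMA data Resource Charts visualizes.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # placeholder project

query = """
SELECT
  job_id,
  user_email,
  SAFE_DIVIDE(total_slot_ms,
              TIMESTAMP_DIFF(end_time, start_time, MILLISECOND)) AS avg_slots
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE creation_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 HOUR)
  AND job_type = 'QUERY'
  AND state = 'DONE'
ORDER BY avg_slots DESC
LIMIT 10
"""

# Print the heaviest recent jobs, heaviest first.
for row in client.query(query):
    print(row.job_id, row.user_email, row.avg_slots)
```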
Source: Google Cloud Platform
At the start of 2020, who knew that our supply chains would be so disrupted that we’d have to worry about having enough toilet paper or paper towels? Yet the early days of the COVID-19 pandemic resulted in many disruptions and unanticipated events. On a practical level, the sudden changes in consumer behavior placed supply chains in the spotlight, and revealed to many, including consumers everywhere, the fragility of our logistics networks.

Of course, this disruption wasn’t just isolated to household items. Entire modes of purchasing shifted dramatically (and perhaps permanently). At the end of 2019, 16% of global sales were e-commerce. Within four months, that number grew to 33%. Supply chain companies were forced to adapt almost overnight to massive shifts to e-commerce and rapidly changing delivery models. The follow-on effects of this shift have been equally dramatic, including a shortage of shipping containers: COVID-19 lockdowns resulted in fewer people in the ports, which caused shipping traffic jams, which in turn led to a sharp rise in container shipping costs. And let’s not forget perhaps the most visible manifestation of why supply chains are the backbone of the global economy: the massive worldwide effort to deliver and distribute COVID-19 vaccines.

The limitations of today’s supply chains

It is not that supply chain professionals haven’t made investments to better predict demand, deliver or fulfill orders, and manage inventory. In fact, according to IDC Research, investments in supply chain management and service delivery are projected to grow by 34% in just the next three years, from $48 billion today to $64 billion by the end of 2023. However, there are still significant limitations to overcome, particularly in three key areas:

- Visibility: Companies don’t have enough information about their inventories to react to the uncertainties of profound change.
- Flexibility: Companies running standard processes are slow to adapt to the changes.
- Intelligence: Without streamlined, cleaned, and actionable data, companies can’t accurately predict and meet demand.

Supply chains, then, are due for innovation. Unlike the manufacturing sector overall, which has adopted everything from AI and robotics to smart factories, supply chains have made only relatively small adjustments to their standard processes.

Join the supply chain transformation at our summit

To help companies discuss and address these pressing issues, we’re hosting a Digital Supply Chain Summit on March 30, 2021, bringing together more than 300 senior supply chain and logistics leaders from across the world. At this event, attendees will learn how they can create a digital supply chain platform that enables them to deliver an exceptional customer experience; how to build resilient and sustainable supply chains; and how to run supply chains autonomously through the use of AI, ML, and other advanced technologies. Among the featured industry speakers are Kuehne+Nagel and J.B. Hunt, both among the world’s leading transportation and logistics companies, who are digitizing their supply chains to enable every process, person, and team. They will discuss their digital transformation journeys, particularly how they’re leveraging the cloud, artificial intelligence, and data analytics to unlock new levels of efficiency and business performance.
Also at the event, you’ll hear from the leaders of Google’s own supply chain and data center operations, who will discuss how the cloud-based solutions they’ve deployed have driven real impact and business results. Finally, industry practice leaders from Accenture and Deloitte will share how you can architect a customer-centric digital supply chain and a connected digital thread across your extended value chain.

Why be average when you can be unique?

As a company running a supply chain, you don’t need to stick with the status quo. Your transformation can be achieved by leveraging data to power individualized processes, which will set your company apart from the pack. We’re helping companies build a cloud-based digital supply chain platform based on four capabilities: first, a digital supply chain twin, a digital representation of the physical supply chain, at its core; second, intelligence to anticipate and predict potential outcomes; third, letting end users access information from whatever device they are using, anywhere; and fourth, embracing partnering when it comes to applications. Supply chain companies succeed when they complement their existing systems with new technologies, making it easier to innovate, adapt, and overcome limitations.

Take the first step to transformation

We welcome you to register for the Google Cloud Digital Supply Chain Summit to get a comprehensive look at how companies are digitally transforming their supply chain and logistics operations. This online event is taking place on March 30. We hope it will help you identify steps you can take today to advance your own digital strategies with cloud-based solutions, data analytics, and AI.
Source: Google Cloud Platform
Starting today, you can use inherited and default values with AWS Cost Categories. AWS Cost Categories lets you define rules to categorize your costs along dimensions such as accounts, tags, services, charge types, and even other cost categories. With the new capabilities, you can categorize your cost and usage information more efficiently and holistically.
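For illustration, here is a hedged sketch of what the new capabilities might look like through the Cost Explorer API via boto3: a cost category that inherits its values from a hypothetical “team” tag and falls back to a default value for unmatched costs. The category name and tag key are examples, not from the announcement.

```python
# Sketch: cost category using an inherited value and a default value.
import boto3

ce = boto3.client("ce")  # Cost Explorer API

ce.create_cost_category_definition(
    Name="Team",  # example category name
    RuleVersion="CostCategoryExpression.v1",
    DefaultValue="unallocated",  # default value for costs no rule matches
    Rules=[
        {
            # Inherit the category value from each resource's "team" tag,
            # instead of writing one regular rule per team.
            "Type": "INHERITED_VALUE",
            "InheritedValue": {
                "DimensionName": "TAG",
                "DimensionKey": "team",  # hypothetical tag key
            },
        },
    ],
)
```

Inherited values avoid the rule sprawl of enumerating one rule per tag value, and the default value catches anything the rules miss.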
Source: aws.amazon.com
AWS Glue Studio now offers the ability to define transformations using SQL queries. You can perform aggregations, easily apply filter logic to your data, add computed fields, and much more. With this feature, you can seamlessly combine SQL queries with AWS Glue Studio’s visual transformations when building ETL jobs.
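Glue Studio’s SQL transform runs Spark SQL against the node’s input. As a rough, standalone illustration of the kind of query you might define in such a node, here is a minimal PySpark sketch; the S3 path, table, and column names are hypothetical.

```python
# Sketch: the kind of Spark SQL a Glue Studio SQL transform might apply:
# filtering rows, aggregating, and adding a computed field.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-transform-sketch").getOrCreate()

orders = spark.read.parquet("s3://my-bucket/orders/")  # hypothetical input
orders.createOrReplaceTempView("orders")  # in Glue Studio, the input is named for you

result = spark.sql("""
    SELECT
      customer_id,
      COUNT(*)           AS order_count,     -- aggregation
      SUM(amount)        AS total_amount,
      SUM(amount) * 0.1  AS estimated_fee    -- computed field
    FROM orders
    WHERE status = 'COMPLETED'               -- filter logic
    GROUP BY customer_id
""")
result.show()
```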
Source: aws.amazon.com
AWS has expanded the availability of Amazon EC2 Inf1 instances to the EU (Milan), EU (Stockholm), and AWS GovCloud (US) Regions. Inf1 instances are powered by AWS Inferentia chips, which AWS custom-built to deliver high performance at the lowest cost for machine learning inference in the cloud.
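Launching an Inf1 instance in one of the newly added Regions works like any other EC2 launch. Here is a minimal boto3 sketch; the AMI ID is a placeholder (you would typically pick a Deep Learning AMI that includes the AWS Neuron SDK).

```python
# Sketch: launch an Inf1 instance in EU (Milan).
import boto3

ec2 = boto3.client("ec2", region_name="eu-south-1")  # EU (Milan)

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="inf1.xlarge",       # smallest Inf1 size
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```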
Source: aws.amazon.com