E-Tron: Production problems due to missing batteries
Audi is said to have a problem with the production of its electric cars: according to a media report, the automaker is not being supplied with enough batteries. (Audi, Technology)
Source: Golem
After a delay of several months, Amazon is launching Skype calling for Echo devices. In addition to regular calls, video calls are possible, provided compatible devices are used. (Amazon Alexa, Skype)
Source: Golem
Many content changes and additions, as well as graphics improvements and bug fixes: developer studio 4A Games has released an update of around 6 GB for Metro Exodus. (Metro, Nvidia)
Source: Golem
After two and a half years of debate, the European Parliament has approved the EU Copyright Directive. Here is what the MEPs actually decided. An analysis by Friedhelm Greis (ancillary copyright, copyright)
Source: Golem
Data is the backbone of many an enterprise, and when cloud is in the picture it becomes especially important to store, manage and use all that data effectively. At Next ‘19, you’ll find plenty of sessions that can help you understand ways to manage your Google Cloud data, and tips to store and manage it efficiently. For an excellent primer on Google Cloud Platform (GCP) data storage, sign yourself up for this spotlight session for the basics and demos. Here are some other sessions to check out:

1. Tools for Migrating Your Databases to Google Cloud
You can choose different ways to migrate your database to the cloud, whether lift-and-shift to use fully managed GCP or a total rebuild to move onto cloud-native databases. This session will explain best practices for database migration and tools to make it easier.

2. Migrate Enterprise Workloads to Google Cloud Platform
There is a whole range of essential enterprise workloads you can move to the cloud, and in this session you’ll learn specifically about Accenture Managed Services for GCP, which makes it easy for you to run Oracle databases and software on GCP.

3. Migrating Oracle Databases to Cloud SQL PostgreSQL
Get the details in this session on migrating on-prem Oracle databases to Cloud SQL PostgreSQL. You’ll get a look at all the basics, from assessing your source database and doing schema conversion to data replication and performance tuning.

4. Moving from Cassandra to Auto-Scaling Bigtable at Spotify
This migration story illustrates the real-world considerations that Spotify used to decide between Cassandra and Cloud Bigtable, and how they migrated workloads and built an auto-scaler for Cloud Bigtable.

5. Optimizing Performance on Cloud SQL for PostgreSQL
In this session, you’ll hear about the database performance tuning we’ve done recently to considerably improve Cloud SQL for PostgreSQL. We’ll also highlight Cloud SQL’s use of Google’s Regional Persistent Disk storage layer. You’ll learn about PostgreSQL performance tuning and how to let Cloud SQL handle mundane, yet necessary, tasks.

6. Spanner Internals Part 1: What Makes Spanner Tick?
Dive into Cloud Spanner with Google Engineering Fellow Andrew Fikes. You’ll learn about the evolution of Cloud Spanner and what that means for the next generation of databases, and get technical details about how Cloud Spanner ensures strong consistency.

7. Thinking Through Your Move to Cloud Spanner
Find out how to use Cloud Spanner to its full potential in this session, which will include best practices, optimization strategies and ways to improve performance and scalability. You’ll see live demos of how Cloud Spanner can speed up transactions and queries, and ways to monitor its performance.

8. Technical Deep Dive Into Storage for High-Performance Computing
High-performance computing (HPC) storage in the cloud is still an emerging area, particularly because complexity, price and performance have caused concern. This session will look at companies that are using HPC storage in the cloud across multiple industries. You’ll also see how HPC storage uses GCP tools like Compute Engine VMs and Persistent Disk.

9. Driving a Real-Time Personalization Engine With Cloud Bigtable
See how one company, Segment, built its own Lambda architecture for customer data using Cloud Bigtable to handle fast random reads and BigQuery to process large analytics datasets. Segment’s CTO will also describe the decision-making process around choosing these GCP products vs. competing options, and their current setup, with tens of terabytes stored in multiple systems and super-fast latency.

10. Building a Global Data Presence
Come take a look at how Cloud Bigtable’s new multi-regional replication works using Google’s SD-WAN. This new feature makes it possible for a single instance of data, up to petabyte size, to be accessed within or between five different continents in up to four regions. Your users can access data globally with low latency, and get a fast disaster recovery option for essential data.

11. Worried About Application Performance? Cache It!
In-memory caching can help speed up application performance, but it brings challenges too. Take a closer look in this session to learn about cache sizing, API considerations and latency troubleshooting.

12. How Twitter Is Migrating 300 PB of Hadoop Data to GCP
This detailed look at Twitter’s complex Hadoop migration will cover their use of the Cloud Storage Connector and open-source tools. You’ll hear from Twitter engineers on how they planned and managed the migration to GCP and how they solved some of their unique data management challenges.

For more on what to expect at Google Cloud Next ‘19, take a look at the session list here, and register here if you haven’t already. We’ll see you there.
Source: Google Cloud Platform
Uber is paying 3.1 billion US dollars for its competitor Careem. The ride-hailing broker aims to strengthen its position ahead of its IPO. (Uber, Startup)
Source: Golem
It has been inspiring to watch how customers use Azure Stack to innovate and drive digital transformation across cloud boundaries. In her blog post today, Julia White shares examples of how customers are using Azure Stack to innovate on-premises using Azure services. Azure Stack shipped in 2017, and it is the only solution in the market today for customers to run cloud applications using consistent IaaS and PaaS services across public cloud, on-premises, and in disconnected environments. While customers love the fact that they can run cloud applications on-premises with Azure Stack, we understand that most customers also run important parts of their organization on traditional virtualized applications. Now we have a new option to deliver cloud efficiency and innovation for these workloads as well.
Today, I am pleased to announce Azure Stack HCI solutions are available for customers who want to run virtualized applications on modern hyperconverged infrastructure (HCI) to lower costs and improve performance. Azure Stack HCI solutions feature the same software-defined compute, storage, and networking software as Azure Stack, and can integrate with Azure for hybrid capabilities such as cloud-based backup, site recovery, monitoring, and more.
Adopting hybrid cloud is a journey and it is important to have a strategy that takes into account different workloads, skillsets, and tools. Microsoft is the only leading cloud vendor that delivers a comprehensive set of hybrid cloud solutions, so customers can use the right tool for the job without compromise.
Choose the right option for each workload
Azure Stack HCI: Use existing skills, gain hyperconverged efficiency, and connect to Azure
Azure Stack HCI solutions are designed to run virtualized applications on-premises in a familiar way, with simplified access to Azure for hybrid cloud scenarios. This is a perfect solution for IT to leverage existing skills to run virtualized applications on new hyperconverged infrastructure while taking advantage of cloud services and building cloud skills.
Customers that deploy Azure Stack HCI solutions get amazing price/performance with Hyper-V and Storage Spaces Direct running on the most current industry-standard x86 hardware. Azure Stack HCI solutions include support for the latest hardware technologies like NVMe drives, persistent memory, and remote-direct memory access (RDMA) networking.
IT admins can also use Windows Admin Center for simplified integration with Azure hybrid services to seamlessly connect to Azure for:
Azure Site Recovery for high availability and disaster recovery as a service (DRaaS).
Azure Monitor, a centralized hub to track what’s happening across your applications, network, and infrastructure – with advanced analytics powered by AI.
Cloud Witness, to use Azure as the lightweight tie breaker for cluster quorum.
Azure Backup for offsite data protection and to protect against ransomware.
Azure Update Management for update assessment and update deployments for Windows VMs running in Azure and on-premises.
Azure Network Adapter to connect resources on-premises with your VMs in Azure via a point-to-site VPN.
Azure Security Center for threat detection and monitoring for VMs running in Azure and on-premises (coming soon).
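The Cloud Witness item above is easy to see with a little arithmetic: a cluster keeps quorum only while a majority of all votes is reachable, and the witness supplies the single extra vote that breaks a tie. Here is a minimal illustrative sketch of that voting rule (not Microsoft's actual quorum implementation):

```python
# Illustrative sketch of cluster quorum with a lightweight witness vote.
def has_quorum(nodes_up: int, total_nodes: int, witness_vote: bool) -> bool:
    """A cluster keeps quorum when more than half of all votes are present.
    Every node contributes one vote, and the witness contributes one more."""
    total_votes = total_nodes + 1          # each node votes, plus the witness
    votes_present = nodes_up + (1 if witness_vote else 0)
    return votes_present * 2 > total_votes

# Two-node cluster with one node down: the survivor alone has only 1 of 3
# votes and must stop, but the witness vote gives it 2 of 3 and keeps it up.
print(has_quorum(1, 2, witness_vote=True))
print(has_quorum(1, 2, witness_vote=False))
```

The witness matters most when exactly half of the nodes are reachable (one of two, or two of four); in every other case the node votes alone already decide the outcome.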
Buy from your choice of hardware partners
Azure Stack HCI solutions are available today from 15 partners offering Microsoft-validated hardware systems to ensure optimal performance and reliability. Your preferred Microsoft partner gets you up and running without lengthy design and build time and offers a single point of contact for implementation and support services.
Visit our website to find more than 70 Azure Stack HCI solutions currently available from these Microsoft partners: ASUS, Axellio, bluechip, DataON, Dell EMC, Fujitsu, HPE, Hitachi, Huawei, Lenovo, NEC, primeLine Solutions, QCT, SecureGUARD, and Supermicro.
Learn more
We know that a great hybrid cloud strategy is one that meets you where you are, delivering cloud benefits to all workloads wherever they reside. Check out these resources to learn more about Azure Stack HCI and our other Microsoft hybrid offerings:
Register for our Hybrid Cloud Virtual Event on March 28, 2019.
Learn more at our Azure Stack HCI solutions website.
Listen to Microsoft experts Jeff Woolsey and Vijay Tewari discuss the new Azure Stack HCI solutions.
FAQ
What do Azure Stack and Azure Stack HCI solutions have in common?
Azure Stack HCI solutions feature the same Hyper-V based software-defined compute, storage, and networking technologies as Azure Stack. Both offerings meet rigorous testing and validation criteria to ensure reliability and compatibility with the underlying hardware platform.
How are they different?
With Azure Stack, you can run Azure IaaS and PaaS services on-premises to consistently build and run cloud applications anywhere.
Azure Stack HCI is a better solution to run virtualized workloads in a familiar way – but with hyperconverged efficiency – and connect to Azure for hybrid scenarios such as cloud backup, cloud-based monitoring, etc.
Why is Microsoft bringing its HCI offering to the Azure Stack family?
Microsoft’s hyperconverged technology is already the foundation of Azure Stack.
Many Microsoft customers have complex IT environments and our goal is to provide solutions that meet them where they are with the right technology for the right business need. Azure Stack HCI is an evolution of Windows Server Software-Defined (WSSD) solutions previously available from our hardware partners. We brought it into the Azure Stack family because we have started to offer new options to connect seamlessly with Azure for infrastructure management services.
Will I be able to upgrade from Azure Stack HCI to Azure Stack?
No, but customers can migrate their workloads from Azure Stack HCI to Azure Stack or Azure.
How do I buy Azure Stack HCI solutions?
Follow these steps:
Buy a Microsoft-validated hardware system from your preferred hardware partner.
Install Windows Server 2019 Datacenter edition and Windows Admin Center for management and the ability to connect to Azure for cloud services.
Optionally, use your Azure account to attach management and security services to your workloads.
How does the cost of Azure Stack HCI compare to Azure Stack?
This depends on many factors.
Azure Stack is sold as a fully integrated system including services and support. It can be purchased as a system you manage, or as a fully managed service from our partners. In addition to the base system, the Azure services that run on Azure Stack or Azure are sold on a pay-as-you-use basis.
Azure Stack HCI solutions follow the traditional model. Validated hardware can be purchased from Azure Stack HCI partners and software (Windows Server 2019 Datacenter edition with software-defined datacenter capabilities and Windows Admin Center) can be purchased from various existing channels. For Azure services that you can use with Windows Admin Center, you pay with an Azure subscription.
We recommend working with your Microsoft partner or account team for pricing details.
What is the future roadmap for Azure Stack HCI solutions?
We’re excited to hear customer feedback and will take that into account as we prioritize future investments.
Source: Azure
The blob storage interface on the Data Box has been in preview since September 2018 and we are happy to announce that it's now generally available. This is in addition to the server message block (SMB) and network file system (NFS) interface already generally available on the Data Box.
The blob storage interface allows you to copy data into the Data Box via REST. In essence, this interface makes the Data Box appear like an Azure storage account. Applications that write to Azure blob storage can be configured to work with the Azure Data Box in exactly the same way.
This enables very interesting scenarios, especially for big data workloads. Migrating large HDFS stores to Azure as part of an Apache Hadoop® migration is a popular ask. Using the blob storage interface of the Data Box, you can now easily point common copy tools like DistCp directly at the Data Box and access it as though it were another HDFS file system. Since most Hadoop installations come pre-loaded with the Azure Storage driver, you will most likely not have to make changes to your existing infrastructure to use this capability. Another key benefit of migrating via the blob storage interface is that you can choose to preserve metadata. For more details on migrating HDFS workloads, please review the Using Azure Data Box to migrate from an on-premises HDFS store documentation.
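To sketch why unmodified tools keep working, here is the shape of the standard Azure Blob Put Blob REST request that such tools issue, built with Python's standard library but never sent. The endpoint host, container, blob name, and SAS query string below are hypothetical placeholders, not values from this article:

```python
# Sketch only: the shape of an Azure Blob "Put Blob" request aimed at a
# Data Box blob endpoint instead of the public Azure endpoint. All names
# below (account, device host, container, token) are made-up placeholders.
import urllib.request

endpoint = "https://mystorageacct.blob.mydatabox.example.com"  # hypothetical
container, blob = "backups", "vm-image.vhd"
sas_token = "sv=...&sig=..."                                   # placeholder
data = b"...blob contents..."

req = urllib.request.Request(
    url=f"{endpoint}/{container}/{blob}?{sas_token}",
    data=data,
    method="PUT",
    headers={
        "x-ms-blob-type": "BlockBlob",   # standard Azure Blob REST header
        "x-ms-version": "2017-11-09",
        "Content-Length": str(len(data)),
    },
)
# urllib.request.urlopen(req) would send it. Tools built on the Azure
# Storage SDKs emit this same request shape, which is why pointing them
# at the Data Box endpoint works without code changes.
print(req.get_method(), req.full_url)
```

The only thing that changes relative to a real Azure storage account is the host name the request is sent to; the verbs, headers, and URL layout stay the same.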
Blob storage on the Data Box enables partner solutions using native Azure blob storage to write directly to the Data Box. With this capability, partners like Veeam, Rubrik, and DefendX were able to utilize the Data Box to assist customers moving data to Azure.
For a full list of supported partners please visit the Data Box partner page.
For more details on using blob storage with Data Box, please see our official documentation for Azure Data Box Blob Storage requirements and a tutorial on copying data via Azure Data Box Blob Storage REST APIs.
Source: Azure
Along with the general availability of Azure Data Box Edge that was announced today, we are announcing the preview of Azure Machine Learning hardware accelerated models on Data Box Edge. The majority of the world’s data in real-world applications is used at the edge. For example, images and videos collected from factories, retail stores, or hospitals are used for manufacturing defect analysis, inventory out-of-stock detection, and diagnostics. Applying machine learning models to the data on Data Box Edge provides lower latency and savings on bandwidth costs, while enabling real-time insights and speed to action for critical business decisions.
Azure Machine Learning service is already a generally available, end-to-end, enterprise-grade, and compliant data science platform. Azure Machine Learning service enables data scientists to simplify and accelerate the building, training, and deployment of machine learning models. All these capabilities are accessed from your favorite Python environment using the latest open-source frameworks, such as PyTorch, TensorFlow, and scikit-learn. These models can run today on CPUs and GPUs, but this preview expands that to field programmable gate arrays (FPGA) on Data Box Edge.
What is in this preview?
This preview enhances Azure Machine Learning service by enabling you to train a TensorFlow model for image classification scenarios, containerize the model in a Docker container, and then deploy the container to a Data Box Edge device with Azure IoT Hub. Today we support ResNet 50, ResNet 152, DenseNet-121, and VGG-16. The model is accelerated by the ONNX runtime on an Intel Arria 10 FPGA that is included with every Data Box Edge.
Why does this matter?
Over the years, AI has been infused in our everyday lives and in industry. Smart home assistants understand what we say, and social media services can tag who’s in the picture we uploaded. Most, if not all, of this is powered by deep neural networks (DNNs), which are sophisticated algorithms that process unstructured data such as images, speech, and text. DNNs are also computationally expensive. For example, it takes almost 8 billion calculations to analyze one image using ResNet 50, a popular DNN.
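To put that 8-billion-operation figure in perspective, a quick back-of-envelope calculation converts it into latency and throughput. The sustained-throughput numbers below are illustrative assumptions for the comparison, not benchmarks of any particular device:

```python
# Back-of-envelope: what "almost 8 billion calculations per image" means
# for inference latency and throughput. The sustained-ops figures are
# illustrative assumptions, not measured numbers for specific hardware.
OPS_PER_IMAGE = 8e9   # ~8 billion operations per ResNet 50 inference

def latency_ms(ops_per_sec: float) -> float:
    return OPS_PER_IMAGE / ops_per_sec * 1000

def images_per_sec(ops_per_sec: float) -> float:
    return ops_per_sec / OPS_PER_IMAGE

for name, ops in [
    ("modest CPU, ~50 GFLOPS sustained", 50e9),
    ("accelerator, ~1 TFLOPS sustained", 1e12),
]:
    print(f"{name}: {latency_ms(ops):.0f} ms/image, "
          f"{images_per_sec(ops):.1f} images/s")
```

Under these assumptions, a device sustaining 1 TFLOPS still needs about 8 ms per image, which is why dedicated accelerators such as FPGAs are attractive for low-latency DNN inference.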
There are many hardware options to run DNNs today, most commonly on CPUs and GPUs. Azure Machine Learning service brings customers the cutting-edge innovation that originated in Microsoft Research (featured in this recent Fast Company article), to run DNNs on reconfigurable hardware called FPGAs. By integrating this capability and the ONNX runtime in Azure Machine Learning service, we see vast improvements in the latencies of models.
Bringing it together
Azure Machine Learning service now brings the power of accelerated AI models directly to Data Box Edge. Let’s take the example of a manufacturing assembly line scenario, where cameras are photographing products at various stages of development.
The pictures are sent from the manufacturing line to Data Box Edge inside your factory, where AI models that were trained, containerized, and deployed to the FPGA using Azure Machine Learning service are available. Data Box Edge is registered with Azure IoT Hub, so you can control which models you want deployed. Now you have everything you need to process incoming pictures in near real-time to detect manufacturing defects. This enables the machines and assembly line managers to make time-sensitive decisions about the products, improving product quality and decreasing downstream production costs.
Join the preview
Azure Machine Learning service is already generally available today. To join the preview for containerization of hardware accelerated AI models, fill out the request form and get support on our forum.
Source: Azure
Today I am pleased to announce the general availability of Azure Data Box Edge and the Azure Data Box Gateway. You can get these products today in the Azure Portal.
Compute at the edge
We’ve heard your need to bring Azure compute power closer to you – a trend increasingly referred to as edge computing. Data Box Edge answers that call and is an on-premises anchor point for Azure. Data Box Edge can be racked alongside your existing enterprise hardware or live in non-traditional environments from factory floors to retail aisles. With Data Box Edge, there's no hardware to buy; you sign up and pay as you go just like any other Azure service, and the hardware is included.
This 1U rack-mountable appliance from Microsoft brings you the following:
Local Compute – Run containerized applications at your location. Use these to interact with your local systems or to pre-process your data before it transfers to Azure.
Network Storage Gateway – Automatically transfer data between the local appliance and your Azure Storage account. Data Box Edge caches the hottest data locally and speaks file and object protocols to your on-premises applications.
Azure Machine Learning utilizing an Intel Arria 10 FPGA – Use the on-board Field Programmable Gate Array (FPGA) to accelerate inferencing of your data, then transfer it to the cloud to re-train and improve your models. Learn more about the Azure Machine Learning announcement.
Cloud managed – Easily order your device and manage these capabilities for your fleet from the cloud using the Azure Portal.
Since announcing Preview at Ignite 2018 just a few months ago, it has been amazing to see how our customers across different industries are using Data Box Edge to unlock some innovative scenarios:
Sunrise Technology, a wholly owned division of The Kroger Co., plans to use Data Box Edge to enhance the Retail as a Service (RaaS) platform for Kroger and the retail industry to enable the features announced at NRF 2019: Retail's Big Show, including personalized, never-before-seen shopping experiences like at-shelf product recommendations, guided shopping and more. The live video analytics on Data Box Edge can help store employees identify and address out-of-stocks quickly and enhance their productivity. Such smart experiences will help retailers provide their customers with more personalized, rewarding experiences.
Esri, a leader in location intelligence, is exploring how Data Box Edge can help those responding to disasters in disconnected environments. Data Box Edge will allow teams in the field to collect imagery captured from the air or ground and turn it into actionable information that provides updated maps. The teams in the field can use updated maps to coordinate response efforts even when completely disconnected from the command center. This is critical in improving the response effectiveness in situations like wildfires and hurricanes.
Data Box Gateway – Hardware not required
Data Box Edge comes with a built-in storage gateway. If you don’t need the Data Box Edge hardware or edge compute, then the Data Box Gateway is also available as a standalone virtual appliance that can be deployed anywhere within your infrastructure.
You can provision it in your hypervisor, using either Hyper-V or VMware, and manage it through the Azure Portal. Server message block (SMB) or network file system (NFS) shares will be set up on your local network. Data landing on these shares will automatically upload to your Azure Storage account, supporting Block Blob, Page Blob, or Azure Files. We’ll handle the network retries and optimize network bandwidth for you. Multiple network interfaces mean the appliance can either sit on your local network or in a DMZ, giving your systems access to Azure Storage without having to open network connections to Azure.
Whether you use the storage gateway inside of Data Box Edge or deploy the Data Box Gateway virtual appliance, the storage gateway capabilities are the same.
More solutions from the Data Box family
In addition to Data Box Edge and Data Box Gateway, we also offer three sizes of Data Box for offline data transfer:
Data Box – a ruggedized 100 TB transport appliance
Data Box Disk – a smaller, more nimble transport option with individual 8 TB disks and up to 40 TB per order
Data Box Heavy Preview – a bigger version of Data Box that can scale to 1 PB.
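A rough calculation shows why offline transport is attractive at these capacities. Assuming a dedicated 1 Gbps link running at full line rate (an optimistic assumption in practice, since real throughput is usually lower):

```python
# Rough comparison: how long uploading each Data Box capacity would take
# over the network. The 1 Gbps line-rate figure is an illustrative
# assumption; sustained real-world throughput is typically lower.
def days_to_upload(terabytes: float, gbps: float) -> float:
    bits = terabytes * 1e12 * 8            # decimal TB -> bits
    seconds = bits / (gbps * 1e9)
    return seconds / 86400                 # seconds per day

for tb in (8, 100, 1000):                  # Disk, Data Box, Heavy capacities
    print(f"{tb:>5} TB over 1 Gbps: {days_to_upload(tb, 1.0):6.1f} days")
```

Even at full line rate, 100 TB takes more than nine days to upload and a petabyte takes roughly three months, which is typically far longer than shipping an appliance to a data center.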
All Data Box offline transport products are available to order through the Azure Portal. We ship them to you and then you fill them up and ship them back to our data center for upload and processing. To make Data Box useful for even more customers, we’re enabling partners to write directly to Data Box with little required change to their software via our new REST API feature which has just reached General Availability – Blob Storage on Data Box!
Get started
Thank you for partnering with us on our journey to bring Azure to the edge. We are excited to see how you use these new products to harness the power of edge computing for your business. Here’s how you can get started:
Order Data Box Edge or the Data Box Gateway today via the Azure Portal.
Review server hardware specs on the Data Box Edge datasheet.
Learn more about our family of Azure Data Box products.
Source: Azure