Skip: Arcteryx to build exoskeletons into hiking pants
Outdoor clothing specialist Arcteryx is working with a start-up to develop a pair of pants with a soft exoskeleton. (Exoskeleton, Techcrunch)
Source: Golem
The German Federal Ministry of Transport is apparently considering introducing weight limits for bicycle trailers, angering associations and manufacturers alike. (BMVI, E-Bike)
Source: Golem
Intel considers the problems with 13th and 14th generation Core processors solved; a recall or sales stop is not necessary, the company says. (Intel Raptor Lake, Processor)
Source: Golem
Amazon is offering the Wera Kraftform Micro Big Pack at its best price yet. The screwdriver set is aimed at hobbyists and electronics tinkerers. (Technology/Hardware)
Source: Golem
Neural networks are the foundation of modern image recognition and classification technologies. A comprehensive intensive workshop from the Golem Karrierewelt covers training these models with Python. (Golem Karrierewelt, Python)
Source: Golem
Wobbly sets, incompetent actors, bad dialogue: Ed Wood's 1959 sci-fi film Plan 9 from Outer Space is great fun. By Peter Osteried (Science fiction, Aliens)
Source: Golem
The Crowdstrike update failure also created a great deal of work for Microsoft. Now the company wants to make the Windows ecosystem more robust. (Windows, Microsoft)
Source: Golem
A small company wants to bring Cuda code to AMD GPUs, and do it better than AMD itself. We tried out how well that works. A hands-on by Johannes Hiltscher (Cuda, AMD)
Source: Golem
At San Diego Comic-Con, The Boys showrunner Eric Kripke dropped a bombshell: a new series was announced. (Movies & series, Amazon)
Source: Golem
We are also announcing safety features enabled by default for GPT-4o mini, expanded data residency and service availability, and performance upgrades to Microsoft Azure OpenAI Service.
GPT-4o mini allows customers to deliver stunning applications at a lower cost with blazing speed. GPT-4o mini is significantly smarter than GPT-3.5 Turbo—scoring 82% on Measuring Massive Multitask Language Understanding (MMLU) compared to 70%—and is more than 60% cheaper.[1] The model delivers an expanded 128K context window and integrates the improved multilingual capabilities of GPT-4o, bringing greater quality to languages from around the world.
GPT-4o mini, announced by OpenAI today, is simultaneously available on Azure AI, supporting text processing at excellent speed, with image, audio, and video support coming later. Try it at no cost in the Azure OpenAI Studio Playground.
We're most excited about the new customer experiences that can be enhanced with GPT-4o mini, particularly streaming scenarios such as assistants, code interpreter, and retrieval, which will benefit from this model's capabilities. For instance, we observed the incredible speed while testing GPT-4o mini on GitHub Copilot, an AI pair programmer that assists you by delivering code completion suggestions in the tiny pauses between keystrokes, rapidly updating recommendations with each new character typed.
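The streaming scenario described above can be sketched with the Azure client in the `openai` Python SDK. This is a minimal sketch, not an official sample: the deployment name `gpt-4o-mini`, the environment variable names, and the `2024-06-01` API version are assumptions you would replace with your own Azure configuration.

```python
import os

# Hypothetical deployment name; use the name of your own Azure deployment.
DEPLOYMENT = "gpt-4o-mini"

def build_messages(user_prompt: str) -> list:
    """Assemble a minimal chat-completions message list."""
    return [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": user_prompt},
    ]

# Only attempt the call when credentials are configured.
if os.environ.get("AZURE_OPENAI_API_KEY") and os.environ.get("AZURE_OPENAI_ENDPOINT"):
    from openai import AzureOpenAI  # pip install openai

    client = AzureOpenAI(
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
        api_version="2024-06-01",  # assumed version; check your service docs
    )
    # Stream tokens as they arrive, as in the Copilot-style scenarios above.
    stream = client.chat.completions.create(
        model=DEPLOYMENT,
        messages=build_messages("Summarize the benefits of streaming in one line."),
        stream=True,
    )
    for chunk in stream:
        if chunk.choices and chunk.choices[0].delta.content:
            print(chunk.choices[0].delta.content, end="")
```

With `stream=True`, the response arrives as incremental deltas rather than one final message, which is what makes the keystroke-pause latency described above usable.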
We are also announcing updates to Azure OpenAI Service, including extending safety by default for GPT-4o mini, expanded data residency, and worldwide pay-as-you-go availability, plus performance upgrades.
Azure AI brings safety by default to GPT-4o mini
Safety continues to be paramount to the productive use and trust that we and our customers expect.
We're pleased to confirm that our Azure AI Content Safety features—including prompt shields and protected material detection—are now 'on by default' for you to use with GPT-4o mini on Azure OpenAI Service.
We have invested in improving the throughput and speed of the Azure AI Content Safety capabilities—including the introduction of an asynchronous filter—so you can maximize the advancements in model speed while not compromising safety. Azure AI Content Safety is already supporting developers across industries to safeguard their generative AI applications, including game development (Unity), tax filing (H&R Block), and education (South Australia Department for Education).
In addition, our Customer Copyright Commitment will apply to GPT-4o mini, giving peace of mind that Microsoft will defend customers against third-party intellectual property claims for output content.
Azure AI now offers data residency for all 27 regions
From day one, Azure OpenAI Service has been covered by Azure’s data residency commitments.
Azure AI gives customers both flexibility and control over where their data is stored and where their data is processed, offering a complete data residency solution that helps customers meet their unique compliance requirements. We also provide choice over the hosting structure that meets business, application, and compliance requirements. Regional pay-as-you-go and Provisioned Throughput Units (PTUs) offer control over both data processing and data storage.
We’re excited to share that Azure OpenAI Service is now available in 27 regions including Spain, which launched earlier this month as our ninth region in Europe.
Azure AI announces global pay-as-you-go with the highest throughput limits for GPT-4o mini
GPT-4o mini is now available using our global pay-as-you-go deployment at 15 cents per million input tokens and 60 cents per million output tokens, which is significantly cheaper than previous frontier models.
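The pricing above makes workload costs easy to estimate. The helper below is an illustrative sketch using only the numbers stated in this post: $0.15 per million input tokens, $0.60 per million output tokens, and the 50% Batch discount mentioned later; the function name and the `batch` flag are hypothetical.

```python
def gpt4o_mini_cost_usd(input_tokens: int, output_tokens: int,
                        batch: bool = False) -> float:
    """Estimate global pay-as-you-go cost for GPT-4o mini.

    Prices from the post: $0.15 per 1M input tokens, $0.60 per 1M
    output tokens; the Batch service applies a 50% discount.
    """
    cost = (input_tokens / 1_000_000) * 0.15 + (output_tokens / 1_000_000) * 0.60
    return cost * 0.5 if batch else cost

# Example: a workload with 10M input and 2M output tokens.
print(round(gpt4o_mini_cost_usd(10_000_000, 2_000_000), 2))              # 2.7
print(round(gpt4o_mini_cost_usd(10_000_000, 2_000_000, batch=True), 2))  # 1.35
```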
We are pleased to announce that the global pay-as-you-go deployment option is generally available this month. Customers pay only for the resources they consume, which makes it flexible for variable workloads, while traffic is routed globally for higher throughput and control over where data resides at rest is preserved.
Additionally, we recognize that one of the challenges customers face with new models is not being able to upgrade between model versions in the same region as their existing deployments. Now, with global pay-as-you-go deployments, customers will be able to upgrade from existing models to the latest models.
Global pay-as-you-go offers customers the highest possible scale, offering 15M tokens per minute (TPM) throughput for GPT-4o mini and 30M TPM throughput for GPT-4o. Azure OpenAI Service offers GPT-4o mini with 99.99% availability and the same industry-leading speed as our partner OpenAI.
Azure AI offers leading performance and flexibility for GPT-4o mini
Azure AI is continuing to invest in driving efficiencies for AI workloads across Azure OpenAI Service.
GPT-4o mini comes to Azure AI with availability on our Batch service this month. Batch delivers high throughput jobs with a 24-hour turnaround at a 50% discount rate by using off-peak capacity. This is only possible because Microsoft runs on Azure AI, which allows us to make off-peak capacity available to customers.
We are also releasing fine-tuning for GPT-4o mini this month, which allows customers to further customize the model for their specific use cases and scenarios to deliver exceptional value and quality at unprecedented speeds. Following our update last month to switch to token-based billing for training, we've reduced the hosting charges by up to 43%. Paired with our low price for inferencing, this makes Azure OpenAI Service fine-tuned deployments the most cost-effective offering for customers with production workloads.
With more than 53,000 customers turning to Azure AI to deliver breakthrough experiences at impressive scale, we’re excited to see the innovation from companies like Vodafone (customer agent solution), the University of Sydney (AI assistants), and GigXR (AI virtual patients). More than 50% of the Fortune 500 are building their applications with Azure OpenAI Service.
We can’t wait to see what our customers do with GPT-4o mini on Azure AI!
[1] GPT-4o mini: advancing cost-efficient intelligence | OpenAI
Source: Azure