Hierarchical Firewall Policy Automation with Terraform

Firewall rules are an essential component of network security in Google Cloud. Firewalls in Google Cloud broadly fall into two types: Network Firewall Policies and Hierarchical Firewall Policies. While network firewall rules are directly associated with a VPC to allow or deny traffic, hierarchical firewall policies can be thought of as a policy engine that uses the Resource Hierarchy to create and enforce policies across the organization. Hierarchical policies can be enforced at the organization level or at the folder level. Like network firewall rules, hierarchical firewall policy rules can allow or deny traffic, and they can also delegate the evaluation to lower-level policies or to the network firewall rules themselves (with a goto_next action). Lower-level rules cannot override a rule from a higher place in the resource hierarchy. This lets organization-wide admins manage critical firewall rules in one place.

So, let's think through a few scenarios where hierarchical firewall policies are useful.

1. Reduce the number of network firewall rules

Example: say xyz.com has 6 Shared VPCs based on its business segments. It is a security policy to refuse SSH access to any VMs in the company, i.e. to deny TCP port 22 traffic. With network firewalls, this rule needs to be enforced in 6 places (once per Shared VPC). A growing number of granular network firewall rules for each network segment means more touch points, which means more chances of drift and accidents. Security admins get busy with hand-holding and almost always become a bottleneck for even simple firewall changes. With hierarchical firewall policies, security admins can create a single, common policy to deny TCP port 22 traffic and enforce it on the xyz.com organization, or explicitly target one or many Shared VPCs from the policy. This way a single policy can define the broader traffic control posture.

2. Manage critical firewall rules using centralized policies AND safely delegate non-critical controls to the VPC level

Example: at xyz.com, SSH to GCE instances is strictly prohibited and non-negotiable; auditors need this. Meanwhile, whether TCP traffic to port 443 is allowed or denied depends on which Shared VPC the traffic is going to. In this case, security admins can create a policy to deny TCP port 22 traffic and enforce it on the xyz.com organization. Another policy is created for TCP port 443 traffic that says "goto_next", deferring the decision to the next lower level. Then, a network firewall rule at the Shared VPC level allows or denies the 443 traffic. This way the security admin has broad control at a higher level to enforce traffic control policies and delegates where possible. The ability to manage the most critical firewall rules in one place also frees project-level administrators (e.g., project owners, editors, or security admins) from having to keep up with changing organization-wide policies. With hierarchical firewall policies, security admins can centrally enforce, manage, and observe traffic control patterns.

Create, Configure and Enforce Hierarchical Firewall Policies

There are 3 major components of hierarchical firewall policies: rules, policy, and association. Broadly speaking, a "rule" is a decision-making construct that declares whether traffic should be allowed, denied, or delegated to the next level for a decision. A "policy" is a collection of rules, i.e. one or more rules can be associated with a policy. An "association" identifies the enforcement point of the policy in the Google Cloud resource hierarchy.
These concepts are extensively explained on the product page.

[Figure: a simple visualization of Rules, Policy, and Association]

Infrastructure as Code (Terraform) for Hierarchical Firewall Policies

There are 3 Terraform resources that need to be stitched together to build and enforce hierarchical firewall policies.

#1 Policy Terraform Resource – google_compute_firewall_policy

In this resource the most important parameter is "parent". Hierarchical firewall policies, like projects, are parented by a folder or organization resource. Remember, this is NOT the folder where the policy is enforced or associated; it is just the folder that owns the policy(s) you are creating. Using a folder to own the hierarchical firewall policies also simplifies the IAM needed to manage who can create and modify these policies, i.e. just assign the IAM roles on this folder. For a scaled environment it is recommended to create a separate "firewall-policy" folder to host all of your hierarchical firewall policies.

Sample

```
/*
  Create a Policy
*/
resource "google_compute_firewall_policy" "base-fw-policy" {
  parent      = "folders/<folder-id>"
  short_name  = "base-fw-policy"
  description = "A Firewall Policy Example"
}
```

You can get the folder ID of the "firewall-policy" folder using the command below:

```
gcloud resource-manager folders list --organization=<your organization ID> --filter='<name of the folder>'
```

For example, if your firewall policy folder is called 'firewall-policy', then use:

```
gcloud resource-manager folders list --organization=<your organization ID> --filter='firewall-policy'
```

#2 Rules Terraform Resource – google_compute_firewall_policy_rule

Most of the parameters in this resource definition are self-explanatory, but a couple of them need special consideration.

disabled – Denotes whether the firewall policy rule is disabled. When set to true, the rule is not enforced and traffic behaves as if it did not exist. If unspecified, the rule is enabled.

enable_logging – Enabling firewall logging is highly recommended for the many operational advantages it provides later. To enable it, pass true to this parameter.

target_resources – This parameter comes in handy when you want to target certain Shared VPC(s) with the rule. You need to pass the URI path of the Shared VPC.

To get the URI for the VPC, use these commands:

```
gcloud config set project <Host Project ID>
gcloud compute networks list --uri
```

Sample

Here is some sample Terraform code to create a firewall policy rule with priority 9000 that denies TCP port 22 traffic from the 35.235.240.0/20 CIDR block (used for Identity-Aware Proxy):

```
/*
  Create a Firewall rule #1
*/
resource "google_compute_firewall_policy_rule" "base-fw-rule-1" {
  firewall_policy = google_compute_firewall_policy.base-fw-policy.id
  description     = "Firewall Rule #1 in base firewall policy"
  priority        = 9000
  enable_logging  = true
  action          = "deny"
  direction       = "INGRESS"
  disabled        = false
  match {
    layer4_configs {
      ip_protocol = "tcp"
      ports       = [22]
    }
    src_ip_ranges = ["35.235.240.0/20"]
  }
  target_resources = ["https://www.googleapis.com/compute/v1/projects/<PROJECT-ID>/global/networks/<VPC-NAME>"]
}
```
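Scenario 2 above relies on delegation: instead of allowing or denying traffic outright, a rule can hand the decision down to lower-level policies or to the VPC-level network firewall rules. Here is a minimal sketch of such a delegation rule; the rule name, priority, port, and source range are illustrative assumptions, not values from this article:

```
/*
  Hypothetical delegation rule: defer the TCP 443 decision
  to lower-level policies or network firewall rules
*/
resource "google_compute_firewall_policy_rule" "delegate-443" {
  firewall_policy = google_compute_firewall_policy.base-fw-policy.id
  description     = "Delegate TCP 443 decisions to the next level"
  priority        = 9100
  action          = "goto_next"   # delegate instead of allow/deny
  direction       = "INGRESS"
  match {
    layer4_configs {
      ip_protocol = "tcp"
      ports       = [443]
    }
    src_ip_ranges = ["0.0.0.0/0"]  # illustrative: match all ingress sources
  }
}
```

With a rule like this in place, a network firewall rule in each Shared VPC makes the final allow or deny decision for port 443 traffic.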
#3 Association Terraform Resource – google_compute_firewall_policy_association

In the attachment_target parameter, pass the ID of the folder where you want to enforce this policy; everything under this folder (all projects) will get this policy. In the case of Shared VPCs, the target folder should be the parent of your host project.

Sample

```
/*
  Associate the policy
*/
resource "google_compute_firewall_policy_association" "associate-base-fw-policy" {
  firewall_policy   = google_compute_firewall_policy.base-fw-policy.id
  attachment_target = "folders/<Folder ID>"
  name              = "Associate Base Firewall Policy with dummy-folder"
}
```

Once these policies are enforced, you can see them in the console under "VPC Network -> Firewall". In the firewall policy folder, the created hierarchical firewall policy will show up. Remember, there are 4 default firewall rules that come with each policy, so even when you create a single rule in your policy, the rule count will be 5. Go into the policy to see the rules you created and the association of the policy.

Summary

Hierarchical firewall policies simplify the complex process of enforcing consistent traffic control policies across your Google Cloud environment. With the Terraform resources and automation shown in this article, they give security admins the ability to build guardrails using a policy engine and a familiar Infrastructure as Code platform. Check out the Hierarchical Firewall Policy documentation to learn more about how to use them.
Source: Google Cloud Platform

Accelerate integrated Salesforce insights with Google Cloud Cortex Framework

Enterprises across the globe rely on a number of strategic independent software vendors like Salesforce, SAP and others to help them run their operations and business processes. Now more than ever, the need to sense and respond to new and changing business demands has increased, and the availability of data from these platforms is integral to business decision making. Many companies today are looking for accelerated ways to link their enterprise data with surrounding data sets and sources to gain more meaningful insights and business outcomes. But given the complexity and scale of managing and tying this data together, getting there faster can be an expensive and challenging proposition.

To embark on this journey, many companies choose Google's Data Cloud to integrate, accelerate and augment business insights through a cloud-first data platform approach, with BigQuery powering data-driven innovation at scale. Next, they take advantage of best practices and accelerator content delivered with Google Cloud Cortex Framework to establish an open, scalable data foundation that can enable connected insights across a variety of use cases. Today, we are excited to announce the next offering of accelerators, which expands Cortex Data Foundation to include new packaged analytics solution templates and content for Salesforce.

New analytics content for Salesforce

Salesforce provides a powerful Customer Relationship Management (CRM) solution that is widely recognized and adopted across many industries and enterprises. With increased focus on engaging customers better and improving insights on relationships, this data is highly valuable and relevant as it spans many business activities and processes, including sales, marketing, and customer service. With Cortex Framework, Salesforce data can now be more easily integrated into a single, scalable data foundation in BigQuery to unlock new insights and value.

With this release, we take the guesswork out of the time, effort, and cost to establish a Salesforce data foundation in BigQuery. You can deploy Cortex Framework for Salesforce content to kickstart customer-centric data analytics and gain broader insights across key areas including accounts, contacts, leads, opportunities and cases. Take advantage of the predefined data models for Salesforce along with analytics examples in Looker for immediate customer-relationship-focused insights, or easily join Salesforce data with other delivered data sets, such as Google Trends, Weather, or SAP, to enable richer, connected insights. The choice is yours, and the sky's the limit with the flexibility of Cortex to enable your specific use cases.

By bringing Salesforce data together with other public, community, and private data sources, Google Cloud Cortex Framework helps accelerate the ability to optimize and innovate your business with connected insights.

What's next

This release extends upon prior content releases for SAP and other data sources to further enhance the value of Cortex Data Foundation across private, public and community data sources. Google Cloud Cortex Framework continues to expand content to help better meet the needs of customers on data analytics transformation journeys.
Stay tuned for more announcements coming soon. To learn more about Google Cloud Cortex Framework, visit our solution page, and try out Cortex Data Foundation today to discover what's possible.
Source: Google Cloud Platform

CISO Survival Guide: Vital questions to help guide transformation success

Part of being a security leader whose organization is taking on a digital transformation is preparing for hard questions – and complex answers – on how to implement a transformation strategy. In our previous CISO Survival Guide blog, we discussed how financial services organizations can more securely move to the cloud. We examined how to organize and think about the digital transformation challenges facing the highly-regulated financial services industry, including the benefits of the Organization, Operation, and Technology (OOT) approach, as well as embracing new processes like continuous delivery and required cultural shifts.

As part of Google Cloud's commitment to shared fate, today we offer tips on how to ask the right questions that can help create the conversations that lead to better transformation outcomes for your organization. While there is often more than one right answer, a thoughtful, methodical approach to asking targeted questions and maintaining an open mind about the answers you hear back can help achieve your desired result. These questions are designed to help you figure out where to start and where to end your organization's security transformation. By asking the following questions, CISOs and business leaders can develop a constructive, focused dialogue which can help determine the proper balance between implementing security controls and fine-tuning the risk tolerance set by executive management and the board of directors.

To start the conversation, begin by asking:

What defines our organization's culture?
How can we best integrate the culture with our security goals?

CISOs should ask business leaders:

What makes a successful transformation?
What are the key goals of the transformation?
What data is (most) valuable?
What data can be retired, reclassified, or migrated?
What losses can we afford to take and still function?
What is the real risk that the organization is willing to accept?

Business leaders should ask CISOs and the security team:

What are the best practices for protecting our valuable data?
What is the business impact of implementing those controls?
What are the top threats that we need to address?

CISOs and business leaders should ask:

Which threats are no longer as important?
Where could we potentially use spending for more cost-effective controls such as firewalls and antivirus software?
What benefits do we get from refactoring our applications?
Are we really transforming, or lifting and shifting?
How should we perform identity and access management to meet our business objectives?
What are the core controls needed to ensure enterprise-level performance for the first workloads?

CISOs and risk teams should ask:

How can we use the restructuring of an existing body of code to streamline security functions?
How should we monitor our security posture to ensure we are aligned with our risk appetite?

Business and technical teams should ask:

What's our backup plan?
What do we do if that fails?

Practical advice and the realities of operational transformation

Some organizations have been working in the cloud for more than a decade and have already addressed many operational procedures, sometimes with painful lessons learned along the way. If you've been operating in the cloud securely for that long, we recognize that there's a lot to be gained from understanding your approaches to culture, operational expertise, and technology. However, there are still many organizations that have not thought through how they will operate in a cloud environment until it's almost ready – and at that point, it might be too late. If you can't detail how a cloud environment will operate before its launch, how will you know who should be responsible for maintaining it?

Critical stakeholders, along with those responsible for engineering and maintaining specific systems, should be identified at the start of the transformation. There are likely several groups of stakeholders, such as those aligned with operations for transformation, and those focused on control design for cloud aligned with operations. If you don't have the operators involved in the design phase, you're destined to create clever security controls with very little practical value, because those tasked with day-to-day maintenance most likely won't have the expertise or training to operate these controls effectively. This is complicated by the fact that many organizations are struggling to recruit and retain people with the right skills to operate in the cloud. We believe that training current employees to learn new cloud skills, and giving them the time away from other responsibilities to do so, can help build skilled, diverse cloud security teams.

If your organization continually experiences high turnover in security leadership and skilled staff, it's up to you to navigate your culture to ensure greater consistency. You can, of course, choose to supplement internal knowledge with trusted partners – however, that's an expensive strategy for covering ongoing operational costs.

We met recently with a security organization that turns over skilled staff and leadership every two to three years. This rate of churn results in a continual resetting of security goals. This particular team joked that it's like "Groundhog Day" as they constantly re-evaluate their best security approaches yet make no meaningful progress. This is not a model to emulate.

Many security controls fail not because they are improperly engineered, but because the people who use them – your security team – are improperly trained and insufficiently motivated. This is especially true for teams with high turnover rates and other organizational misalignments. A security control that blocks 100% of attacks might be engineered correctly, but if you can't efficiently operate it, the effectiveness of the control will plummet to zero over time. Worse, it then becomes a liability because you incorrectly assume you have a functioning control.

In our next blog, we will highlight several proven approaches that we believe can help guide your security team through your organization's digital transformation.
To learn more now, check out:

Previous blog: CISO Survival Guide: How financial services organizations can more securely move to the cloud
Podcast: CISO walks into the cloud: Frustrations, successes, lessons… and does the risk change?
Report: CISO's Guide to Cloud Security Transformation
Source: Google Cloud Platform

Announcing the GA of BigQuery multi-statement transactions

Transactions are mission critical for modern enterprises supporting payments, logistics, and a multitude of business operations. And in today's analytics-first and data-driven era, the need for reliable processing of complex transactions extends beyond just the traditional OLTP database; today, businesses also have to trust that their analytics environments are processing transactional data in an atomic, consistent, isolated, and durable (ACID) manner. So BigQuery set out to support DML statements spanning large numbers of tables in a single transaction, committing the associated changes atomically (all at once) on success or rolling them back atomically upon failure. Today, we'd like to highlight the recent general availability launch of multi-statement transactions within BigQuery and the new business capabilities it unlocks.

While in preview, BigQuery multi-statement transactions were tremendously effective for customer use cases such as keeping BigQuery synchronized with data stored in OLTP environments, complex post-processing of events pre-ingested into BigQuery, and complying with GDPR's right to be forgotten. One of our customers, PLAID, leverages these multi-statement transactions within their customer experience platform KARTE to analyze the behavior and emotions of website visitors and application users, enabling businesses to deliver relevant communications in real time and furthering PLAID's mission to Maximize the Value of People with the Power of Data.

"We see multi-statement transactions as a valuable feature for achieving expressive and fast analytics capabilities. For developers, it keeps queries simple and less hassle in error handling, and for users, it always gives reliable results."
—Takuya Ogawa, Lead Product Engineer

The general availability of multi-statement transactions not only provides customers with a production-ready means of handling their business-critical transactions comprehensively within a single transaction, but also provides far greater scalability than was offered during the preview. At GA, multi-statement transactions support mutating up to 100,000 table partitions and modifying up to 100 tables per transaction. This 10x scale in the number of table partitions and 2x scale in the number of tables was made possible by a careful re-design of our transaction commit protocol, which optimizes the size of the transactionally committed metadata.

The GA of multi-statement transactions also introduces full compatibility with BigQuery sessions and procedural language scripting. Sessions are useful because they store state and enable the use of temporary tables and variables, which can then be used across multiple queries when combined with multi-statement transactions. Procedural language scripting gives users the ability to run multiple statements in a sequence with shared state and with complex logic, using programming constructs such as IF … THEN and WHILE loops.

For instance, let's say we wanted to enhance the current multi-statement transaction example, which uses transactions to atomically manage the existing inventory and supply of new arrivals of a retail company. Since we're a retailer monitoring our current inventory on hand, we would now also like to add functionality to automatically suggest to our Sales team which items we should promote with sales offers when our inventory becomes too large.
To do this, it would be useful to include a simple procedural IF statement, which monitors the current inventory and supply of new arrivals and modifies a new PromotionalSales table based on total inventory levels. And let's validate the results ourselves before committing them as one single transaction to our sales team by using sessions. Let's see how we'd do this via SQL.

First, we'll create our tables using DDL statements:

```
CREATE OR REPLACE TABLE my_dataset.Inventory
(product string,
 quantity int64,
 supply_constrained bool);

CREATE OR REPLACE TABLE my_dataset.NewArrivals
(product string,
 quantity int64,
 warehouse string);

CREATE OR REPLACE TABLE my_dataset.PromotionalSales
(product string,
 inventory_on_hand int64,
 excess_inventory int64);
```

Then, we'll insert some values into our Inventory and NewArrivals tables:

```
INSERT my_dataset.Inventory (product, quantity)
VALUES('top load washer', 10),
      ('front load washer', 20),
      ('dryer', 30),
      ('refrigerator', 10),
      ('microwave', 20),
      ('dishwasher', 30);

INSERT my_dataset.NewArrivals (product, quantity, warehouse)
VALUES('top load washer', 100, 'warehouse #1'),
      ('dryer', 200, 'warehouse #2'),
      ('oven', 300, 'warehouse #1');
```

Now, we'll use a multi-statement transaction and procedural language scripting to atomically merge our NewArrivals table with the Inventory table while taking excess inventory into account to build out our PromotionalSales table. We'll also run this within a session, which will allow us to validate the tables ourselves before committing the statement for everyone else.

```
DECLARE average_product_quantity FLOAT64;

BEGIN TRANSACTION;

CREATE TEMP TABLE tmp AS SELECT * FROM my_dataset.NewArrivals WHERE warehouse = 'warehouse #1';
DELETE my_dataset.NewArrivals WHERE warehouse = 'warehouse #1';

#Calculates the average of all product inventories.
SET average_product_quantity = (SELECT AVG(quantity) FROM my_dataset.Inventory);

MERGE my_dataset.Inventory I
USING tmp T
ON I.product = T.product
WHEN NOT MATCHED THEN
  INSERT(product, quantity, supply_constrained)
  VALUES(product, quantity, false)
WHEN MATCHED THEN
  UPDATE SET quantity = I.quantity + T.quantity;

#The procedural script below uses a very simple approach to determine excess_inventory,
#flagging products whose current inventory exceeds 120% of the average inventory across all products.
IF EXISTS(SELECT * FROM my_dataset.Inventory
          WHERE quantity > (1.2 * average_product_quantity)) THEN
  INSERT my_dataset.PromotionalSales (product, inventory_on_hand, excess_inventory)
  SELECT
    product,
    quantity AS inventory_on_hand,
    quantity - CAST(ROUND((1.2 * average_product_quantity), 0) AS INT64) AS excess_inventory
  FROM my_dataset.Inventory
  WHERE quantity > (1.2 * average_product_quantity);
END IF;

SELECT * FROM my_dataset.NewArrivals;
SELECT * FROM my_dataset.Inventory ORDER BY product;
SELECT * FROM my_dataset.PromotionalSales ORDER BY excess_inventory DESC;
#Note the multi-statement transaction temporarily stops here within the session.
#This runs successfully if you've set your SQL to run within a session.
```

From the results of the SELECT statements, we can see the warehouse #1 arrivals were successfully added to our inventory and the PromotionalSales table correctly reflects what excess inventory we have. It looks like these transactions are ready to be committed.

However, just in case there were some issues with our expected results: if others were to query the tables outside the session we created, the changes wouldn't have taken effect. Thus, we have the ability to validate our results and could roll them back if needed without impacting others.

```
#Run in a different tab outside the current session. Results displayed will be
#consistent with the tables before running the multi-statement transaction.
SELECT * FROM my_dataset.NewArrivals;
SELECT * FROM my_dataset.Inventory ORDER BY product;
SELECT * FROM my_dataset.PromotionalSales ORDER BY excess_inventory DESC;
```
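If validation had surfaced a problem, we could abandon the uncommitted changes instead of committing them. Here is a minimal sketch of that recovery path (run inside the same session; this statement is not part of the original walkthrough):

```
#Hypothetical recovery path: undo every DML statement since BEGIN TRANSACTION.
ROLLBACK TRANSACTION;
```

Because nothing was committed, users outside the session would never observe the abandoned changes.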
Going back to our configured session, since we've validated that our Inventory, NewArrivals, and PromotionalSales tables are correct, we can go ahead and commit the multi-statement transaction within the session, which will propagate the changes outside the session too.

```
#Now commit the transaction within the same session configured earlier.
#Be sure to delete or comment out the rest of the SQL text run earlier.
COMMIT TRANSACTION;
```

And now that the PromotionalSales table has been updated for all users, our sales team has some ideas of what products they should promote due to our excess inventory.

```
#Results now propagated for all users.
SELECT * FROM my_dataset.PromotionalSales ORDER BY excess_inventory DESC;
```

As you can tell, using multi-statement transactions is simple, scalable, and quite powerful, especially when combined with other BigQuery features. Give them a try yourself and see what's possible.
Source: Google Cloud Platform

Microsoft named a Leader in 2022 Gartner® Magic Quadrant™ for Insight Engines

How your organization can benefit, no matter the industry.

As the amount of data being generated continues to grow at an exponential rate, it's becoming increasingly important for organizations to have a rich set of tools that can help them make sense of it all. That's where insight engines come in. These powerful solutions apply relevancy methods to data of all types, from structured to highly unstructured, allowing users to describe, discover, organize, and analyze it to deliver information proactively or interactively at the right time, in the right context.

Microsoft has recently been named a Leader in the 2022 Gartner Magic Quadrant for Insight Engines, a report that evaluates the capabilities of various vendors in the market.

Microsoft offers two integrated solutions in this space: Microsoft Search, which is available with Microsoft 365, and Azure Cognitive Search, which is available as a platform-as-a-service (PaaS) offering with Microsoft Azure. These solutions are designed to help professionals and developers build impactful AI-powered search solutions that can solve complex problems and enhance the customer experience by enabling information discovery across the spectrum from unstructured to structured data. Whether you need a turnkey solution to reason over enterprise data or the flexibility to tailor search to specific scenarios, Microsoft has you covered.

Azure Cognitive Search can be used in a variety of industries to improve efficiency and decision-making. Some specific examples of how it can be used include:

Manufacturing: Cognitive Search can be used to help manufacturers quickly find information about production processes, equipment, and materials. It can be applied to structured data scenarios such as part catalogs as well as unstructured content such as equipment manuals, safety procedures, and imagery. 
Energy: Cognitive Search can be used to quickly find information related to exploration, drilling, and production. Geo-location search combined with traditional search input enables discovery experiences that get the most out of past and present geological site studies, and extensibility allows incorporating energy-industry-specific information.
Retail: Cognitive Search can be used to develop a powerful product catalog search experience for retail websites and apps. Customizable ranking options, the capability to scale to handle peak traffic with low latency, and support for near-real-time updates of critical data such as inventory make it a great fit for this scenario.
Financial services: Cognitive Search can be used by financial institutions to quickly find data related to investments, market trends, and regulatory compliance. Its sophisticated semantic ranking and question-answering capabilities can enable users to answer business questions faster and more confidently.
Healthcare: Cognitive Search can be used by healthcare organizations to improve patient care, streamline operations, and make better-informed decisions by quickly finding and accessing relevant information within electronic medical record systems, providing real-time access to clinical guidelines and evidence-based best practices.

Nearly every user knows what to do when they see a search box. All SaaS applications targeting audiences from consumer to enterprise can benefit greatly from a strong search experience over their own data. Azure Cognitive Search can deliver an out-of-the-box solution, inclusive of various multi-tenancy strategies, support for over 50 languages, and a global presence to ensure your solution is delivered in the right location for your customers.

If you're a technical decision maker in one of these industries, or any other industry, and you're interested in learning more about how Microsoft's cognitive search solutions can help you unlock the full potential of your data, you can visit the Azure Cognitive Search website and the Microsoft Search website.

You can also download a complimentary copy of the Gartner Magic Quadrant for Insight Engines to see how Microsoft is recognized in the space.

Gartner, Magic Quadrant for Insight Engines, Stephen Emmott, Anthony Mullen, David Pidsley, Tim Nelms, 12 December 2022

Gartner is a registered trademark and service mark, and Magic Quadrant is a registered trademark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and are used herein with permission. All rights reserved.

This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from Gartner Reprint.

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.
Source: Azure

Microsoft named a Leader in The Forrester Wave™: Public Cloud Development and Infrastructure Platforms, 2022

Forrester recently published its report, The Forrester Wave™: Public Cloud Development and Infrastructure Platforms, Global, Q4 2022, placing Microsoft in the “Leaders” category. It’s an honor to be named as one of only two leaders in Forrester’s definitive report on the public cloud development and infrastructure platform market.

The Forrester report recognized Microsoft for its long-term focus on Kubernetes, hybrid, and multicloud capabilities and noted that it is seeking to lead in hybrid and multicloud environments with platform management tools and capabilities. Reference customers praised Microsoft’s service improvements and partnerships. With Microsoft Azure, customers have a trusted cloud partner and the most advanced, highly integrated enterprise IT infrastructure to help them navigate ever-changing environments and achieve business success today, while they build for the future.

Helping developers build any app for any platform

We recognize that developers are the driving engine of innovation. When they are empowered to set up a complete engineering system in seconds, contribute and collaborate with anyone on any device, use the right tool for the job, and integrate with the rest of the organization's digital estate, organizations can bring innovation to market faster with greater confidence. Azure makes all that possible. For example, with Microsoft Visual Studio, developers can deploy iOS, Android, Windows, web, or embedded apps to wherever they'd like: Azure, hybrid, on-premises, and multicloud environments. Further, Azure fully supports some of the most popular open-source technologies, from Linux to open-source databases to Grafana, allowing organizations to leverage existing investments when running on Azure.

In August, we introduced Microsoft Dev Box, a managed service for developers to create on-demand, high-performance, secure, ready-to-code, project-specific workstations in the cloud, so they can work and innovate anywhere. And we’ve continued to bring new Kubernetes capabilities across Azure, which I’ll cover a little later in this article.

As the range of application development tools continues to grow, we’re seeing a surge in low-code technologies to spur innovation and lower the barrier to entry. Microsoft makes it easy with PowerApps, which provides prebuilt templates, drag-and-drop simplicity, quick deployment and AI-powered assistance, helping anyone create apps using natural language, while enabling the same DevOps practices for low-code tools that customers expect when building trusted enterprise solutions.

In this new world, organizations are harnessing the cloud to create a culture where everyone feels empowered to innovate, while lowering the barrier to creating new types of apps that can take businesses to new heights.

The Microsoft Intelligent Data Platform

We're entering the age of the "intelligent app," where every app is AI-enabled and adapts to each organization's modern data capabilities. However, fragmented digital estates make it difficult for organizations to harness their data to add layers of intelligence to their apps.

At our Build event in May, we announced the Microsoft Intelligent Data Platform that fully integrates databases, analytics, and governance for a unified data estate. With this integration, organizations can power applications at any scale, get actionable insights from all their data, and properly govern data where it resides. To accelerate time to value, customers can use pre-built, customizable, and production-ready AI models as the building blocks for intelligent solutions with Azure Cognitive Services and Azure Applied AI Services.

As AI becomes more mainstream across organizations, it’s essential that employees have the tools to leverage this technology responsibly. We apply Microsoft's Responsible AI Standards to our product development, and have made it a priority to help customers understand, protect, and control their AI solutions with tools and resources like the Responsible AI Dashboard, bot development guidelines, and built-in tools to help explain model behavior, test for fairness and more.

By unifying and integrating data to create more intelligent apps, customers are opening the door to new innovations never thought possible.

From cloud to edge: Innovate securely, anywhere

More and more organizations are embracing hybrid and multicloud as part of their migration and modernization journeys, and they want to continue this flexible approach in a secure, compliant, reliable, and integrated way. Forrester credits Microsoft with "seeking to lead in hybrid and multicloud environments with platform management tools and capabilities, including the Azure Arc management platform."

Azure Arc operates as a bridge extending across the Azure platform by allowing applications and services the flexibility to run across on-premises, edge, and multicloud environments. One of the key challenges organizations face is securing and managing their distributed environments consistently while building innovative applications using cloud-native technologies.

Recently, we announced new deployment options for Azure Kubernetes Services enabled by Azure Arc so customers can run containerized apps, in addition to many first-party Azure application, data, and machine learning services, anywhere regardless of their location.

They can also take advantage of Azure's comprehensive security, governance, and management capabilities for their Windows, Linux, SQL Server, and Kubernetes deployments in their datacenters, at the edge, or in multicloud environments.

Azure is the only cloud platform built by a security vendor, and ensuring that our customers' data is safe and secure is at the forefront of everything we do. For example, our Defender for Cloud security service spans all clouds, even AWS and Google Cloud, for a seamless, consistent, and secure cloud journey.

Our deep commitment to our customers is baked into every aspect of our vision and roadmap—to be the trusted partner with the most advanced, yet flexible cloud technologies that enable anyone in any organization to innovate anywhere. It’s an honor to be recognized for that commitment and a great way to usher in the New Year.

Learn more

Read The Forrester Wave™: Public Cloud Development And Infrastructure Platforms, Global, Q4 2022.
Learn how Toyota employees used low-code to create more than 400 apps to meet business needs.
Learn how Sutherland, a professional services company, built a data-driven culture with Microsoft Azure.
Learn how the National Basketball Association delivers compelling experiences for its fans through intelligent applications. 
Learn how Royal Bank of Canada, the largest bank in Canada, is using Azure Arc–enabled data services to take advantage of always up-to-date cloud-native data services to modernize its large data estate.
Read how a study found a 228 percent ROI when modernizing apps on Azure's platform as a service.

Source: Azure