OpenShift Commons Briefing: Delivering Transformative Reactive Systems on OpenShift – Karl Wehden (Lightbend)

In this OpenShift Commons Briefing, our guest speaker Karl Wehden, VP of Product Strategy at Lightbend, talked about delivering business transformation at scale and how Lightbend and OpenShift can transform organizations by integrating business rules management into stream-based systems. By combining Red Hat Decision Manager and stream processing using Lightbend Pipelines, a new abstraction to simplify […]
Source: OpenShift

Azure Blob Storage lifecycle management generally available

Data sets have unique lifecycles. Some data is accessed often early in the lifecycle, but the need for access drops drastically as the data ages. Some data remains idle in the cloud and is rarely accessed once stored. Some data expires days or months after creation while other data sets are actively read and modified throughout their lifetimes.

Today we are excited to share the general availability of Blob Storage lifecycle management so that you can automate blob tiering and retention with custom defined rules. This feature is available in all Azure public regions.

Lifecycle management

Azure Blob Storage lifecycle management offers a rich, rule-based policy which you can use to transition your data to the best access tier and to expire data at the end of its lifecycle.

Lifecycle management policy helps you:

Transition blobs to a cooler storage tier (hot to cool, hot to archive, or cool to archive) to optimize for performance and cost
Delete blobs at the end of their lifecycles
Define up to 100 rules
Run rules automatically once a day
Apply rules to containers or a specific subset of blobs, with up to 10 prefixes per rule

To learn more, visit our documentation, “Managing the Azure Blob storage Lifecycle.”

Example

Consider a data set that is accessed frequently during the first month, is needed only occasionally for the next two months, is rarely accessed afterwards, and must expire after seven years. In this scenario, hot storage is the best tier to use initially, cool storage is appropriate for occasional access, and archive storage is the best tier after several months and before the data is deleted seven years later.

The following sample policy manages the lifecycle for such data. It applies to block blobs in container “foo”:

Tier blobs to cool storage 30 days after last modification
Tier blobs to archive storage 90 days after last modification
Delete blobs 2,555 days (seven years) after last modification
Delete blob snapshots 90 days after snapshot creation

{
  "rules": [
    {
      "name": "ruleFoo",
      "enabled": true,
      "type": "Lifecycle",
      "definition": {
        "filters": {
          "blobTypes": [ "blockBlob" ],
          "prefixMatch": [ "foo" ]
        },
        "actions": {
          "baseBlob": {
            "tierToCool": { "daysAfterModificationGreaterThan": 30 },
            "tierToArchive": { "daysAfterModificationGreaterThan": 90 },
            "delete": { "daysAfterModificationGreaterThan": 2555 }
          },
          "snapshot": {
            "delete": { "daysAfterCreationGreaterThan": 90 }
          }
        }
      }
    }
  ]
}
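
To make the rule above concrete, here is a minimal, illustrative Python sketch of the age-based decision that “ruleFoo” encodes. The thresholds mirror the JSON policy; the actual evaluation happens inside the Blob Storage service once a day, so this code is only a reading aid, not something you deploy.

from datetime import datetime, timezone

# Thresholds copied from the "ruleFoo" policy above (days since last modification).
TIER_TO_COOL_AFTER = 30
TIER_TO_ARCHIVE_AFTER = 90
DELETE_AFTER = 2555  # roughly seven years

def action_for_block_blob(last_modified, now=None):
    """Return the action the example rule would take for a block blob under prefix "foo"."""
    now = now or datetime.now(timezone.utc)
    age_days = (now - last_modified).days
    if age_days > DELETE_AFTER:
        return "delete"
    if age_days > TIER_TO_ARCHIVE_AFTER:
        return "tierToArchive"
    if age_days > TIER_TO_COOL_AFTER:
        return "tierToCool"
    return "stay in hot tier"

# Example: a blob last modified 120 days ago would be tiered to archive.
print(action_for_block_blob(datetime(2019, 1, 1, tzinfo=timezone.utc),
                            now=datetime(2019, 5, 1, tzinfo=timezone.utc)))

Snapshot deletion works the same way, except the rule keys off daysAfterCreationGreaterThan instead of the last-modified time.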

Pricing

Lifecycle management is free of charge. Customers are charged the regular operation cost for the “List Blobs” and “Set Blob Tier” API calls initiated by this feature. To learn more about pricing visit the Block Blob pricing page.

Next steps

We are confident that Azure Blob Storage lifecycle management policy will simplify your cloud storage management and cost optimization strategy. We look forward to hearing your feedback on this feature and your suggestions for future improvements via email at DLMFeedback@microsoft.com. As a reminder, we love hearing all of your ideas and suggestions about Azure Storage, which you can post at the Azure Storage feedback forum.
Source: Azure

Happy birthday to managed Open Source RDBMS services in Azure!

March 20, 2019 marked the first anniversary of general availability for our managed Open Source relational database management system (RDBMS) services, including Azure Database for PostgreSQL and Azure Database for MySQL. A great year of learning and improvements lies behind us, and we are looking forward to an exciting future!

Thank you to all our customers, who have trusted Azure to host their Open Source Software (OSS) applications with MySQL and PostgreSQL databases. We are very grateful for your support and for pushing us to build the best managed services in the cloud!

It’s amazing to see the variety of mission-critical applications that customers run on top of our services. From line-of-business applications to real-time event processing and Internet of Things applications, we see all kinds of patterns running across our different OSS RDBMS offerings. Check out some great success stories by reading our case studies! It’s humbling to see the trust our customers put in the platform! We love the challenges posed by this variety of use cases, and we are always hungry to learn and provide even better support.

We wouldn’t have reached this point without ongoing feedback and feature requests from our customers. There have been asks for functionality such as read replicas, greater performance, extended regional coverage, additional RDBMS engines like MariaDB, and more. In response, over the year since our services became generally available, we have delivered features and functionality to address these asks. Just check out some of the announcements we have made over the past year:

Latest updates to Azure Database for MySQL – July 2018
Latest updates to Azure Database for PostgreSQL – July 2018
Latest updates to Open Source Database Services for Azure – September 2018 (Ignite)
Announcing the general availability of Azure Database for MariaDB – December 2018
Read Replicas for Azure Database for PostgreSQL in public preview – January 2019
Scaling out read workloads in Azure Database for MySQL – March 2019
Service update announcements for:

Azure Database for MySQL
Azure Database for PostgreSQL
Azure Database for MariaDB

We also want to enable customers to focus on using these features when developing their applications. To that end, we are constantly enhancing our compliance certification portfolio to address a broader set of standards. This gives customers peace of mind, knowing that our services are safe and secure. We have also introduced features such as Threat Protection (MySQL, PostgreSQL) and Intelligent Performance (PostgreSQL) to the OSS RDBMS services, so there are two fewer things to worry about!

Open Source is all about the community and the ecosystem built around the Open Source products delivered by the community. We want to bring this goodness to our platform and support it so that customers can leverage the benefits when using our managed services. For example, we have recently announced support for GraphQL with Hasura and TimescaleDB! However, we want to be more than a consumer and make significant contributions to the community. Our first major contribution was the release of the Open Source Azure Data Studio with support for PostgreSQL.

While we are proud to highlight these developments, we also understand that we are still at the outset of the journey. We have a lot of work to do and many challenges to overcome, but we are continuing to move ahead at full steam. We are thrilled to have Citus Data join the team, and you can expect to see a lot of focus on enabling improved performance, greater scale, and more built-in intelligence. Find more information about this acquisition by visiting the blog post, “Microsoft and Citus Data: Providing the best PostgreSQL service in the cloud.”

Next steps

In the meantime, be sure to take advantage of the following helpful resources.

Azure Database for PostgreSQL

Performance best practices for using Azure Database for PostgreSQL – Connection Pooling
Performance troubleshooting using new Azure Database for PostgreSQL features
Performance updates and tuning best practices for using Azure Database for PostgreSQL
Best practices for alerting on metrics with Azure Database for PostgreSQL monitoring
Securely monitoring your Azure Database for PostgreSQL Query Store

Azure Database for MySQL

Best practices for alerting on metrics with Azure Database for MySQL monitoring

Azure Database for MariaDB

Best practices for alerting on metrics with Azure Database for MariaDB monitoring

We look forward to continued feedback and feature requests from our customers. More than ever, we are committed to ensuring that our OSS RDBMS services are top-notch leaders in the cloud! Stay tuned, as we have a lot more in the pipeline!
Source: Azure

Analysis of network connection data with Azure Monitor for virtual machines

Azure Monitor for virtual machines (VMs) collects network connection data that you can use to analyze the dependencies and network traffic of your VMs. You can analyze the number of live and failed connections, bytes sent and received, and the connection dependencies of your VMs down to the process level. If malicious connections are detected, the data includes information about those IP addresses and their threat level. The newly released VMBoundPort data set enables analysis of open ports and their connections for security analysis.

To begin analyzing this data, you will need to be onboarded to Azure Monitor for VMs.

Workbooks

If you would like to start your analysis with a prebuilt, editable report, you can try out some of the Workbooks we ship with Azure Monitor for VMs. Once onboarded, navigate to Azure Monitor and select Virtual Machines (preview) from the Insights menu section. From there, go to the Performance or Map tab and select View Workbook to open the Workbook gallery, which includes the following Workbooks that analyze network data:

Connections overview
Failed connections
TCP traffic
Traffic comparison
Active ports
Open ports

These editable reports let you analyze your connection data for a single VM, groups of VMs, and virtual machine scale sets.

Log Analytics

If you want to use Log Analytics to analyze the data, navigate to Azure Monitor and select Logs to begin querying. The logs view shows the name of the selected workspace and the schema within that workspace. Under the ServiceMap data type you will find two tables:

VMBoundPort
VMConnection

You can copy and paste the queries below into the Log Analytics query box to run them. Please note that you will need to edit a few of the examples below to provide the name of a computer you want to query.

Common queries

Review the count of ports open on your VMs, which is useful when assessing which VMs may have configuration and security vulnerabilities.

VMBoundPort
| where Ip != "127.0.0.1"
| summarize by Computer, Machine, Port, Protocol
| summarize OpenPorts=count() by Computer, Machine
| order by OpenPorts desc

List the bound ports on your VMs, which is useful when assessing which VMs may have configuration and security vulnerabilities.

VMBoundPort
| distinct Computer, Port, ProcessName

Analyze network activity by port to determine how your application or service is configured.

VMBoundPort
| where Ip != "127.0.0.1"
| summarize BytesSent=sum(BytesSent), BytesReceived=sum(BytesReceived), LinksEstablished=sum(LinksEstablished), LinksTerminated=sum(LinksTerminated), arg_max(TimeGenerated, LinksLive) by Machine, Computer, ProcessName, Ip, Port, IsWildcardBind
| project-away TimeGenerated
| order by Machine, Computer, Port, Ip, ProcessName

Bytes sent and received trends for your VMs.

VMConnection
| summarize sum(BytesSent), sum(BytesReceived) by bin(TimeGenerated,1hr), Computer
| order by Computer desc
//| limit 5000
| render timechart

If you have a lot of computers in your workspace, you may want to uncomment the limit statement in the example above. You can use the chart tools to view either bytes sent or received, and to filter down to specific computers.

Connection failures over time, to determine if the failure rate is stable or changing.

VMConnection
| where Computer == <replace this with a computer name, e.g. "acme-demo">
| extend bythehour = datetime_part("hour", TimeGenerated)
| project bythehour, LinksFailed
| summarize failCount = count() by bythehour
| sort by bythehour asc
| render timechart

Link status trends, to analyze the behavior and connection status of a machine.

VMConnection
| where Computer == <replace this with a computer name, e.g. "acme-demo">
| summarize dcount(LinksEstablished), dcount(LinksLive), dcount(LinksFailed), dcount(LinksTerminated) by bin(TimeGenerated, 1h)
| render timechart
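
If you prefer to run these queries outside the portal, the following sketch shows one way to submit them programmatically. It assumes the azure-monitor-query and azure-identity Python packages and a Log Analytics workspace ID that you supply; treat it as an illustrative starting point rather than part of Azure Monitor for VMs itself.

# Illustrative sketch: run one of the VMConnection queries against a Log Analytics workspace.
# Assumes: pip install azure-monitor-query azure-identity, plus a workspace ID you provide.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<your Log Analytics workspace ID>"

QUERY = """
VMConnection
| summarize sum(BytesSent), sum(BytesReceived) by bin(TimeGenerated, 1h), Computer
| order by Computer desc
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(WORKSPACE_ID, QUERY, timespan=timedelta(days=1))

for table in response.tables:
    for row in table.rows:
        print(row)

Any of the queries above can be pasted into QUERY; the timespan argument limits how far back the query looks.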

Getting started with log queries in Azure Monitor for VMs

To learn more about Azure Monitor for VMs, please read our overview, “What is Azure Monitor for VMs (preview).” If you are already using Azure Monitor for VMs, you can find additional example queries in our documentation for querying data with Log Analytics.
Source: Azure

Azure Stack IaaS – part six

Pay for what you use

In the virtualization days I used to pad all my requests for virtual machines (VMs) to get the largest size possible. Since decisions and requests took time, I would ask for more than I required just so I wouldn’t have delays if I needed more capacity. This resulted in a lot of waste, and a term I heard often: VM sprawl.

The behavior is different with Infrastructure-as-a-Service (IaaS) VMs in the cloud. A fundamental quality of the cloud is that it provides an elastic pool of resources to use when needed. Since you only pay for what you use, you don’t need to over-provision. Instead, you can optimize capacity based on demand. Let me show you some of the ways you can do this for your IaaS VMs running in Azure and Azure Stack.

Resize

It’s hard to know exactly how big your VM should be. There are so many dimensions to consider, such as CPU, memory, disks, and network. Instead of trying to predict what your VM needs for the next year or even month, why not take a guess, let it run, and then adjust the size once you have some historical data?

Azure and Azure Stack make it easy for you to resize your VM from the portal. Pick the new size and you’re done. No need to call the infrastructure team and beg for more capacity. No need to overspend on a huge VM that isn’t even used.
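
If you manage your VMs with scripts rather than the portal, here is a minimal sketch of the same resize flow using the azure-mgmt-compute Python SDK against public Azure. Azure Stack uses its own endpoints and API profiles, and the resource names and VM size below are placeholders, so adapt it to your environment.

# Illustrative sketch only: resize an existing VM with the Azure Python SDK.
# Assumes: pip install azure-identity azure-mgmt-compute; the names below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

client = ComputeManagementClient(DefaultAzureCredential(), "<subscription id>")

vm = client.virtual_machines.get("<resource group>", "<vm name>")
vm.hardware_profile.vm_size = "Standard_DS2_v2"  # pick a size available in your region or stamp

# Applying the new size may restart the VM if it has to move to different hardware.
client.virtual_machines.begin_create_or_update("<resource group>", "<vm name>", vm).result()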

Learn more:

Resize an Azure Virtual Machine
Azure Virtual Machine sizes
Azure Stack Virtual Machine sizes

Scale out

Another dimension of scale is to make multiple copies of identical VMs that work together as a unit. When you need more, create additional VMs. When you need fewer, remove some of the VMs. Azure has a feature for this called Virtual Machine Scale Sets (VMSS), which is also available in Azure Stack. You can create a VMSS with a wizard. Fill out the details of how the VM should be configured, including which extensions to use and which software to load onto your VM. Azure takes care of wiring the network, placing the VMs behind a load balancer, creating the VMs, and running the in-guest configuration.

Once you have created the VMSS, you can scale it out or in. Azure automates everything for you. You control it like IaaS, but scale it like PaaS. It was never this easy in the virtualization days.
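
As with resizing, scaling a scale set can also be scripted. The sketch below changes the instance count using the same azure-mgmt-compute setup as the resize example above; the names and target capacity are placeholders.

# Illustrative sketch only: change the instance count of a scale set with the Azure Python SDK.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

client = ComputeManagementClient(DefaultAzureCredential(), "<subscription id>")

vmss = client.virtual_machine_scale_sets.get("<resource group>", "<scale set name>")
vmss.sku.capacity = 5  # target number of identical VM instances

client.virtual_machine_scale_sets.begin_create_or_update(
    "<resource group>", "<scale set name>", vmss
).result()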

Learn more:

Azure Virtual Machine Scale Sets
Virtual Machine Scale Sets in Azure Stack

Add, remove, and resize disks

Just like virtual machines in the cloud, storage is pay per use. Both Azure and Azure Stack make it easy for you to manage the disks running on that storage so you only need to use what your application requires. Adding, removing, and resizing data disks are self-service actions, so you can right-size your VM’s storage based on your current needs.
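
For completeness, here is a similarly hedged sketch of growing a managed data disk with the same SDK. Depending on the disk type and size, the VM may need to be deallocated before the change is accepted, and the names below are placeholders.

# Illustrative sketch only: grow a managed data disk with the Azure Python SDK.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

client = ComputeManagementClient(DefaultAzureCredential(), "<subscription id>")

# Passing a dict for the update body; only the size is changed here.
client.disks.begin_update(
    "<resource group>", "<data disk name>", {"disk_size_gb": 256}
).result()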

Learn more:

Add a disk to an Azure Virtual Machine
Remove a disk on an Azure Virtual Machine
Resize a disk of an Azure Virtual Machine

Usage based pricing

Just like Azure, Azure Stack prices are based on how much you use. Since you take on the hardware and operating costs, Azure Stack service fees are typically lower than Azure prices. Your Azure Stack usage will show up as line items in your Azure bill. If you run your Azure Stack in a network that is disconnected from the Internet, Azure Stack offers a yearly capacity model.

Pay-per-use really benefits Azure Stack customers. For example, one organization runs a machine learning model once a month. The computation takes about one week. During this time, they use all the capacity of their Azure Stack, but for the other three weeks of the month, they run light, temporary workloads on the system. A later blog will cover how automation and infrastructure-as-code allow you to quickly set this up and tear it down, so you use only what the app needs in the time window it’s needed. Right-sizing and pay-per-use save you a lot of money.

Learn more:

Microsoft Azure Stack packaging and pricing

In this blog series

We hope you come back to read future posts in this blog series. Here are some of our past and upcoming topics:

Azure Stack at its core is an Infrastructure-as-a-Service (IaaS) platform
Start with what you already have
Protect your stuff
Fundamentals of IaaS
Do it yourself
It takes a team
If you do it often, automate it
Build on the success of others
Journey to PaaS

Source: Azure

Azure Sphere ecosystem accelerates innovation

The Internet of Things (IoT) promises to help businesses cut costs and create new revenue streams, but it also brings an unsettling amount of risk. No one wants a fridge that gets shut down by ransomware, a toy that spies on children, or a production line that’s brought to a halt through an entry point in a single hacked sensor.

So how can device builders bring a high level of security to the billions of network-connected devices expected to be deployed in the next decade?

It starts with building security into your IoT solution from the silicon up. In this piece, I will discuss the holistic device security of Azure Sphere, as well as how the expansion of the Azure Sphere ecosystem is helping to accelerate the process of taking secure solutions to market. For additional partner-delivered insights around Azure Sphere, view the Azure Sphere Ecosystem Expansion Webinar.

A new standard for security

Small, lightweight microcontrollers (or MCUs) are the most common class of computer, powering everything from appliances to industrial equipment. Organizations have learned that security for their MCU-powered devices is critical to their near-term sales and to the long-term success of their brands (one successful attack can drive customers away from the affected brand for years). Yet predicting which devices can endure attacks is difficult.

Through years of experience, Microsoft has learned that to be highly secured, a connected device must possess seven specific properties:

Hardware-based root of trust: The device must have a unique, unforgeable identity that is inseparable from the hardware.
Small trusted computing base: Most of the device's software should be outside a small trusted computing base, reducing the attack surface for security resources such as private keys.
Defense in depth: Multiple layers of defense mean that even if one layer of security is breached, the device is still protected.
Compartmentalization: Hardware-enforced barriers between software components prevent a breach in one from propagating to others.
Certificate-based authentication: The device uses signed certificates to prove device identity and authenticity.
Renewable security: Updated software is installed automatically and devices that enter risky states are always brought into a secure state.
Failure reporting: All device failures, which could be evidence of attacks, are reported to the manufacturer.

These properties work together to keep devices protected and secured in today's dynamic threat landscape. Omitting even one of these seven properties can leave devices open to attack, creating situations where responding to security events is difficult and costly. The seven properties also act as a practical framework for evaluating IoT device security.

How Azure Sphere helps you build secure devices

Azure Sphere, Microsoft’s end-to-end solution for creating highly secure connected devices, delivers these seven properties, making it easy and affordable for device manufacturers to create devices that are innately secure and prepared to meet evolving security threats. Azure Sphere introduces a new class of MCU that includes built-in Microsoft security technology, connectivity, and the headroom to support dynamic experiences at the intelligent edge.

Multiple levels of security are baked into the chip itself. The secured Azure Sphere OS runs on top of the hardware layer, allowing only authorized software to run. The Azure Sphere Security Service continually verifies the device’s identity and authenticity and keeps its software up to date. Azure Sphere has been designed for security and affordability at scale, even for low-cost devices.

Opportunities for ecosystem expansion

In today’s world, device manufacturing partners view security as a necessity for creating connected experiences. The end-to-end security of Azure Sphere creates a potential for significant innovation in IoT. With a turnkey solution that helps prevent, detect, and respond to threats, device manufacturers don’t need to invest in additional infrastructure or staff to secure these devices. Instead, they can focus their efforts on rethinking business models, product experiences, how they serve customers, and how they predict customer needs.

To accelerate innovation, we’re working to expand our partner ecosystem. Ecosystem expansion offers many advantages. It reduces the overall complexity of the final product and speeds time to market. It frees up device builders to expand technical capabilities to meet the needs of customers. Plus, it enables more responsive innovation of feature sets for module partners and customization of modules for a diverse ecosystem. Below we’ve highlighted some partners who are a key part of the Azure Sphere ecosystem.

Seeed Studio, a Microsoft partner that specializes in hardware prototyping, design, and manufacturing for IoT solutions, has been selling their MT3620 Development Board since April 2018. They also sell complementary hardware that enables rapid, solder-free prototyping using their Grove system of modular sensors, actuators, and displays. In September 2018, they released the Seeed Grove starter kit, which contains an expansion shield and a selection of sensors. Beyond prototyping hardware, they also plan to launch more vertical solutions based on Azure Sphere for the IoT market. In March, Seeed also introduced the MT3620 Mini Dev Board, a lite version of Seeed’s previous Azure Sphere MT3620 Development Kit, developed to meet the needs of developers who want smaller sizes, greater scalability, and lower costs.

AI-Link has released the first Azure Sphere module that is ready for mass production. AI-Link is the top IoT module developer and manufacturer in the market today and shipped more than 90 million units in 2018.

Avnet, an IoT solution aggregator and Azure Sphere chip distributor, unveiled their Azure Sphere module and starter kit in January 2019. Avnet will also build a library of general and application-specific Azure Sphere reference designs to accelerate customer adoption and time to market for Azure Sphere devices and solutions.

Universal Scientific Industrial (Shanghai) Co., Ltd. (USI) recently unveiled their Azure Sphere combo module, designed for IoT applications with multi-function design-in support through a standard SDK. Customers can easily migrate from a discrete MCU solution and build their devices on this module with secured connectivity to the cloud, shortening the design cycle.

Learn more about the Azure Sphere ecosystem

To learn more, view the on-demand Azure Sphere Ecosystem Expansion webinar. You’ll hear from each of our partners as they discuss the Azure Sphere opportunity from their own perspective, as well as how you can take full advantage of Azure Sphere ecosystem expansion efforts.

For in-person opportunities to gain actionable insights, deepen partnerships, and unlock the transformative potential of intelligent edge and intelligent cloud IoT solutions, sign up for an in-person IoT in Action event coming to a city near you.
Source: Azure