
Deliver the right events to the right places with Event Domains


As we continue to see our community grow around Event Grid, many of you have started to explore the boundaries of complexity and scale that can be achieved. We’ve been blown away with some of the system architectures we have seen built on top of the platform.

In order to make your life easier with some of these scenarios, we decided to dedicate much of the last few months to building two features we are very excited to announce today: advanced filters, and Event Domains – a managed platform for publishing events to all of your customers. In addition, we’ve been working to improve the developer experience and make Event Grid available in Azure Government regions.

Event Domains

Become your own event source for Event Grid with Event Domains, managing the flow of custom events to your different business organizations, customers, or applications. An Event Domain is essentially a management tool for large numbers of Event Grid Topics related to the same application: a top-level artifact that can contain thousands of topics. With a Domain, you get fine-grained authorization and authentication control over each topic via Azure Active Directory, which lets you easily decide which of your tenants or customers has access to subscribe to which topics. The Event Domain also handles partitioning for you: instead of publishing your events to each topic individually, you can publish all of your events to the Domain's endpoint, and Event Grid takes care of ensuring each event is sent to the correct topic.
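
To make the publishing flow concrete, here is a minimal sketch of posting a batch to a Domain endpoint over plain HTTP. The endpoint, key, topic name, and event fields below are hypothetical stand-ins for your own Domain:

// Minimal sketch: publish to the Domain endpoint; each event's "topic"
// field names the Domain topic it should be routed to. Endpoint, key,
// and payload values are hypothetical.
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class DomainPublisher
{
    static async Task Main()
    {
        var endpoint = "https://contoso-machinery.westus2-1.eventgrid.azure.net/api/events";
        var key = Environment.GetEnvironmentVariable("EVENTGRID_DOMAIN_KEY");

        var payload = @"[{
            ""id"": ""9a8f6f8e-0001"",
            ""topic"": ""customer-fabrikam"",
            ""eventType"": ""Contoso.Machinery.MaintenanceDue"",
            ""subject"": ""tractors/T-1000"",
            ""eventTime"": ""2018-11-01T12:00:00Z"",
            ""data"": { ""hoursSinceService"": 512 },
            ""dataVersion"": ""1.0""
        }]";

        using (var client = new HttpClient())
        {
            var request = new HttpRequestMessage(HttpMethod.Post, endpoint)
            {
                Content = new StringContent(payload, Encoding.UTF8, "application/json")
            };
            request.Headers.Add("aeg-sas-key", key); // SAS key auth; AAD also works

            var response = await client.SendAsync(request);
            response.EnsureSuccessStatusCode();
        }
    }
}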

This is a unique feature to Event Grid that enables new scenarios for you and your partners to offer your own events to your end customers.

With Event Domains, we are taking the underlying architecture that Azure services such as Blob Storage or IoT Hub use to publish events, and making it available for you to use. This means you can now use a single Event Domain to handle publishing events to all your end customers, or across a complex organization with independent services.

Event Domains are perhaps most easily explained using an example. Let's say you run Contoso Construction Machinery, where you manufacture tractors, digging equipment, and other heavy machinery. As a part of running the business, you push real-time information to customers regarding equipment maintenance, systems health, and contract updates. All of this goes to various endpoints, including your app, customer endpoints, and other infrastructure your customers have set up.

Event Domains allow you to model Contoso Construction Machinery as a single eventing entity. Each of your customers is represented as a Topic within the Domain, and the Domain handles authentication and authorization using Azure Active Directory, so you don't have to worry about it. Each of your customers can subscribe to their Topic and get the events delivered to them, while AAD and the Event Domain block them from accessing another tenant's Topic.

It also gives you a single endpoint which you can publish all of your customer events to, and Event Grid will take care of making sure each Topic is only aware of events scoped to its tenant.

Contoso Construction Machinery Event Domain grid

In a world where application architectures are increasingly shifting towards event-based programming models, support for push to HTTP endpoints, applications, and cloud services is becoming more and more important. Event Domains handle the eventing for you, so you only have to worry about what events you are making available to your customers.

Use Event Domains to:

  • Manage multitenant eventing architectures at scale.
  • Manage your authorization and authentication.
  • Partition your topics without managing each individually.
  • Avoid individually publishing to each of your topic endpoints.

Learn more about Event Domains in our docs, and get started with this sample.

Advanced filters on Event Grid

To complement the added capabilities of Event Domains, we are also excited to announce that Event Grid supports several new types of advanced filters starting today! These new filters allow a host of new kinds of filtering to occur on the wire, including numerical, string, and Boolean filters.

With this update, we are providing a number of operators on each data type and greatly increasing the number of fields you'll be able to run them on, increasing control over what events are routed where and making sure only the required ones reach the compute services handling them. Advanced filtering supports envelope properties (such as DataVersion, Id, and Topic) as well as the data payload up to two layers deep (e.g. data.key1 or data.key1.key2). The following operators are being added per data type; a conceptual sketch of the matching semantics follows the lists below:

Numbers

  • NumberLessThan
  • NumberLessThanOrEquals
  • NumberGreaterThan
  • NumberGreaterThanOrEquals
  • NumberIn – the value for data.key equals a value in the set [0, 2.08, 3.14]
  • NumberNotIn – the value for data.key is not in the set [1, 11, 112, 1124]

Strings

  • StringContains – the value for data.key contains “the”
  • StringIn – the value for data.key equals a value in the set [“small”, “brown”, “fox”]
  • StringNotIn – the value for data.key equals a value in the set [“jumped”, “over”, “the”]
  • StringBeginsWith – the value for data.key begins with “lazy”
  • StringEndsWith – the value for data.key ends with “dog”

Boolean

  • BoolEquals
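
As promised above, here is a conceptual sketch of the matching semantics only; this is not Event Grid's implementation or its management API. The idea is to resolve a dotted key up to two levels into the event JSON, then apply the operator:

// Conceptual sketch of the matching semantics; not the Event Grid API.
using System.Linq;
using Newtonsoft.Json.Linq;

static class AdvancedFilterSemantics
{
    // Resolve "id", "data.key1", or "data.key1.key2" against the event JSON.
    static JToken Resolve(JObject evt, string key) =>
        key.Split('.').Aggregate((JToken)evt, (node, part) => node?[part]);

    public static bool NumberIn(JObject evt, string key, params double[] set) =>
        Resolve(evt, key) is JToken t && set.Contains((double)t);

    public static bool StringBeginsWith(JObject evt, string key, string prefix) =>
        ((string)Resolve(evt, key))?.StartsWith(prefix) == true;

    public static bool BoolEquals(JObject evt, string key, bool value) =>
        Resolve(evt, key) is JToken t && (bool)t == value;
}

For example, NumberIn(evt, "data.key1.key2", 0, 2.08, 3.14) mirrors a NumberIn filter on the nested key data.key1.key2.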

No more adding clunky scripts to further filter your events before running your business logic. By combining the various new filters, powerful combinations of on-the-wire filters become possible, allowing your event handlers to deal only with the data they truly care about.

Learn more about the advanced filtering capabilities of Event Grid in our docs.

Azure Government regions availability

As of November 1, 2018, Event Grid is available in the US Gov Arizona, Texas, and Virginia regions. All Event Grid features (including Custom Topics, Domains, filtering, and Dead Lettering) will be available from the beginning. However, the built-in events coming from first-party Azure services, such as Azure Storage, will not immediately be available. These built-in event sources are already in the works for these regions and will be added on a rolling basis over the course of the next few months, so stay tuned. This means that Event Grid will initially be in public preview in the US Gov regions until we finish adding the rest of the capabilities to the service. If you have questions or requirements regarding specific event publishers, don't hesitate to contact the team.

Development experience improvements

Finally, we are constantly working to make little tweaks and updates that make testing and development easier for you. In that spirit, we have updated some of our SDKs, added a time to live feature for Event Subscriptions, and added Portal UI support for configuring dead-lettering and retry policies.

Dead lettering and retry policies in the Azure portal

You can now click on the “Additional Features” tab at the top of the page anytime you are creating an Event Subscription. From this page you’ll be able to access retry policy & dead-letter configuration, filters, and any new functionality we add. There is also now an Advanced Editor mode where you can see the ARM representation of your subscription for use in an ARM Template, as well as configure any new features not yet available in the portal.

Event Subscription Time to Live (TTL)

Ephemeral resources can be useful for a number of reasons. They enforce hygiene when testing by cleaning up old resources, they drive security by not allowing a resource to publish ad infinitum, and they keep configurations fresh by ensuring only resources proactively subscribing to events are still receiving them. With Event Subscription TTL, you can now configure a time to live for an Event Subscription at creation, after which it automatically gets cleaned up.
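
As a sketch of how this might look through the management SDK (assuming the EventSubscription model exposes the expiration as ExpirationTimeUtc; the scope, endpoint URL, and client variable below are placeholders):

// Hedged sketch: verify the exact class and property names against the
// Microsoft.Azure.Management.EventGrid SDK you are using.
var subscription = new EventSubscription
{
    Destination = new WebHookEventSubscriptionDestination
    {
        EndpointUrl = "https://contoso.example/api/handler" // hypothetical handler
    },
    // Once this time passes, the subscription is cleaned up automatically.
    ExpirationTimeUtc = DateTime.UtcNow.AddDays(1)
};

await eventGridManagementClient.EventSubscriptions.CreateOrUpdateAsync(
    scope, "temp-subscription", subscription);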

SDKs updates

We have updated the .NET, Java, and Go SDKs to make consuming Azure native events even easier. Using the latest release of each, the SDK is now aware of all available event types being published by Azure. This means you can use Event Grid's EventGridSubscriber to deserialize events directly for all known event types and start writing code against the content of the event, rather than first having to manually deserialize JSON into various event type classes. Check out the different SDKs for an easier experience managing your Event Grid resources and events.
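
In the .NET SDK, for example, deserialization of known event types looks roughly like this (requestContent stands in for the raw JSON body your endpoint received):

// Sketch: deserialize known Azure event types with the .NET SDK.
using System;
using Microsoft.Azure.EventGrid;
using Microsoft.Azure.EventGrid.Models;

string requestContent = "[]"; // the raw JSON your webhook received

var subscriber = new EventGridSubscriber();
EventGridEvent[] events = subscriber.DeserializeEventGridEvents(requestContent);

foreach (var eventGridEvent in events)
{
    // Known Azure event types arrive strongly typed; no manual JSON handling.
    if (eventGridEvent.Data is StorageBlobCreatedEventData blobCreated)
    {
        Console.WriteLine($"New blob created: {blobCreated.Url}");
    }
}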


Spooky! Gravedigger in R


Need something to distract the kids while they're waiting to head out Trick-or-Treating? You could have them try out a Creepy Computer Game in R! Engineer and social scientist Dr Peter Provost translated one of the old BASIC games from the classic book Creepy Computer Games into R: just have them type in this file (and yes, they do need to type it character-by-character to get the full experience) and then source it into R to play the game. The goal is to move your character * to the exit at the SE corner of the graveyard populated by gravestones + while avoiding the skeletons X.

Scary computer games

For extra credit, see if they can figure out the bug that causes the move counter to go down instead of up. It might help to refer to the original BASIC sources -- the book is online at the publisher's website as a PDF here.

Gravedigger

The Lucid Manager: Celebrate Halloween with Creepy Computer Games in R

MVP Spotlight: Bing Maps goes Digital Hollywood University with Daisuke Yamazaki


Microsoft MVP Daisuke Yamazaki specializes in Bing Maps and evangelizes mapping inside and outside of the classroom. As an Associate Professor teaching web technologies at Digital Hollywood Graduate School, Yamazaki has built a website for students that publicly offers anyone the resources and tools to easily get started with Bing Maps – BingMaps Go!

Digital Hollywood University is a private university in Tokyo, Japan with a focus on digital communications and degree programs in Anime, CG, Film, Web, Graphic Design and IT Programming. Yamazaki teaches in the web program and, as author of several books, he is known for popularizing Bing and several other web technologies. His latest book, Bing Maps API Nyumon (available in Japanese only), the companion book to Bing Maps GO!, further underscores his deep knowledge of mapping.

When asked “Why Bing Maps?”, Yamazaki’s answer is forthright, “Bing Maps is excellent in terms of cost, mapping accuracy and ease of development.” What more can be said?

Below is a Q&A with Yamazaki about maps, BingMaps Go! and being an MVP:

Why are you passionate about maps?

I am an engineer and I teach programming as an associate professor at a university. In recent years, the number of applications using maps and location information has increased, and I believe that knowledge of maps APIs is essential in programming classes. Therefore, my programming classes always make use of the Bing Maps APIs. Maps are one important solution that you need to build research and practical applications.

Why did you decide to become a Microsoft MVP specializing in Bing Maps?

I use the Bing Maps API at work, and it's very simple and versatile. With a little bit of code I was able to operate and display the map. In business applications, location and maps are used in many situations. For Bing Maps in Japan, there is still not enough recognition among developers. I wanted to become an MVP so that I could spread the usefulness of the Bing Maps APIs to developers all over the world through Bing Maps output.

What are you using Bing Maps for Enterprise for?

The newly released BingMaps Go! website is an implementation-support web service that introduces web designers and programmers to Bing Maps at an early stage. Currently, we are preparing basic, simple templates that make Bing Maps easy for beginners to use.

Bing Maps Go

We will add additional templates as needed from now on. Engineers around the world will be able to use the map API immediately.

Why did you choose Bing Maps for Enterprise in your solution?

Bing Maps offers APIs, such as route search and traffic, that are top-level compared to other companies'. Also, the APIs are designed to be easy for developers to use, with features such as pushpins and infoboxes, and have a low learning cost.

In university and graduate studies, it is easy to get started with Bing Maps as a place to learn, even if you do not have a credit card. Then the familiar Bing Maps API can continue to be a part of a company's development.

What benefits are you and your students seeing?

People across society are now studying programming. Especially in JavaScript classes, we learn the process from the basics to building small applications.

We are also doing classes using the map API at G's ACADEMY, an entrepreneurial development and programming school. In the map API class, it is important to show a certain number of samples, because students cannot imagine what a maps API can do; they have to see it.

BingMaps GO! can also display basic samples of the map API and allow users to edit and execute JavaScript, so they can envision their application while checking its behavior in the browser. The samples cover frequently used functions so that students can study easily. Therefore, I think it is a concise, good site for learning the basic behavior of the Bing Maps API.

What are your goals as a Microsoft MVP?

I hope that the Bing Maps GO! service I made, and my students, will grow and help solve as many of the challenges in the world as they can.

For more information about BingMaps GO, go to https://mapapi.org/. For more about the Bing Maps for Enterprise solutions, go to https://www.microsoft.com/maps.

New to Microsoft 365 in October—tools to create impactful content and help transition to the cloud


This month, we released new features in Microsoft 365 that help teams enhance the look and feel of their content with ease, plus new tools and resources to help you transition to the cloud.

Here’s a look at what’s new in October.

Create content that stands out

We’re excited to introduce new capabilities that help enhance the visual look and feel and overall impact of your documents and presentations.

Bring your content to life with Embedded 3D Animations—Earlier this month, we announced that you will be able to insert Embedded 3D Animations in Word and PowerPoint to help easily illustrate more complex ideas in your presentations using 3D. Simply insert and play embedded 3D animated models, so you can improve the comprehension and retention of your content. This feature will be available beginning November 2018.

An animated screenshot showing an embedded animation of a hummingbird in PowerPoint.

Convert ink to text and shapes at the same time—We’re enabling more natural ways to work with Ink in PowerPoint. You’ll be able to easily convert your digital pen ink into text on a slide. And when you draw a more complex diagram with words and shapes, you can select everything in the slide and convert it to text or shapes all at once. The feature requires a Microsoft 365 or Office 365 subscription and a touch/ink-compatible device. This feature will be available beginning November 2018.

An animated screenshot shows ink being converted to text in PowerPoint.

Build impactful content with curated slide recommendations in PowerPoint—Now, regardless of whether your words are handwritten or typed, Designer in PowerPoint uses artificial intelligence (AI) and natural language processing to convert text into curated slide recommendations. These recommendations include multiple design themes and intelligent SmartArt and icons—based on words in your slide—to bring your ideas to life. You’ll save time, stay in the design flow, communicate more clearly with imagery, and create better slides. This feature is now available to Microsoft 365 or Office 365 subscribers.

An animated screenshot shows Design Ideas suggested in a PowerPoint slide.

Easily edit documents with a digital pen—New gestures in Ink Editor enable you to edit documents using familiar shorthand with a digital pen while in tablet mode. Now, you can insert new lines, add missing words, and delete and highlight content quickly and naturally while on the go. These capabilities are available with a Microsoft 365 or Office 365 subscription when using a touch-enabled device and a digital pen.

An animated screenshot shows text in a Word document edited with a digital pen. The Ink Editor deletes a paragraph, inserts an omitted word, and bolds a numerical figure.

Convert Word documents into beautiful webpages using Transform to Web—With Transform to Web, you can easily transform your Word documents into interactive, easy-to-share webpages with just three clicks. Transform to Web offers a variety of styles that look great on any device, helping you easily publish a wide variety of polished content, including newsletters and training manuals. You can also view analytics on the page, including who has viewed or read your content. To transform a document, go to File > Transform and select a style. This feature will be available for Word with an Office 365 subscription and Word Online beginning November 2018.

Easily transition to and work in the cloud

We're introducing capabilities across Microsoft 365 to help you move your files and applications to the cloud and help your users easily get help and support when they need it.

Migrate your files to the cloud with the SharePoint Migration Tool—The new SharePoint Migration Tool is a simple and intuitive way for commercial customers to migrate their existing SharePoint, OneDrive, and File Share content to SharePoint Online, OneDrive for Business, and Microsoft Teams. To take advantage of the latest collaboration, intelligence, and security solutions in Microsoft 365, download the SharePoint Migration Tool at no added cost to bring your on-premises content to the cloud.

Ensure app compatibility with Desktop App Assure—Desktop App Assure is now available through FastTrack to assist customers who encounter app compatibility issues when deploying Windows 10, Office 365 ProPlus, and feature updates. Desktop App Assure for FastTrack customers is currently available in North America and will be available worldwide by February 1, 2019.

Access help and support when using Office.com—The new help and support pane for Office.com will provide you access to the latest support information and support articles, and give you help with the common issues for the app you’re using without leaving the app. For commercial customers, you’ll be able to search your organization’s help desk information, if enabled by your admin. The new experience will launch next month on www.office.com and will roll out across Office 365 web apps over the next six months.

A screenshot displays an Excel tab open in Office Online.

Other updates


Announcing Windows Community Toolkit v5.0


I'm excited to announce the next major update of the Windows Community Toolkit, version 5.0. This update introduces the new WindowsXamlHost control, built on top of the new XAML Islands APIs, to simplify adding built-in or custom UWP controls to a WPF or Windows Forms desktop application. Alongside it, this version introduces new WinForms and WPF controls that leverage the WindowsXamlHost interfaces to wrap UWP platform controls such as the InkCanvas and the MapControl.

In addition, version 5.0 introduces the TabView control for UWP, a new .NET Standard Weibo service, new .NET Framework support for the Twitter and LinkedIn services, and many new bug fixes and feature updates to existing controls and helpers.

Let's take a look at the highlights in more detail.

WindowsXamlHost

The latest version of Windows 10 (1809) introduces new pre-release APIs to enable UWP controls in a non-UWP Win32 desktop application to enhance the look, feel, and functionality of the experience with the latest UI features that are only available via UWP controls.

To make it easier for WPF and Windows Forms developers to use any UWP control that derives from the UWP UIElement, the toolkit introduces the WindowsXamlHost control to host built-in or custom UWP controls. The control is currently available as a developer preview, and we encourage developers to try it out in their own prototype code.

Make sure to visit the documentation to learn more about how the new control works and how to add UWP UI to your desktop apps.
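
For a taste of how hosting works, here is a rough Windows Forms sketch; member names follow the preview packages, so verify them against the docs before relying on them:

// Rough sketch: host a UWP Button inside a Windows Forms form via the
// preview WindowsXamlHost control.
using System.Windows.Forms;
using Microsoft.Toolkit.Forms.UI.XamlHost;

public class HostForm : Form
{
    public HostForm()
    {
        var host = new WindowsXamlHost
        {
            InitialTypeName = "Windows.UI.Xaml.Controls.Button",
            Dock = DockStyle.Fill
        };
        host.ChildChanged += (sender, args) =>
        {
            // The UWP control is created asynchronously; configure it here.
            if (host.Child is Windows.UI.Xaml.Controls.Button button)
            {
                button.Content = "UWP button inside WinForms";
            }
        };
        Controls.Add(host);
    }
}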

Wrapped Controls

Built on top of the WindowsXamlHost interfaces, the toolkit also introduces a selection of wrapped UWP controls. These controls wrap the interface and functionality of a specific UWP platform control. These controls can be added directly to the design surface of a WPF or Windows Forms project and can be used like any other control.


With this release, the following controls are available:

  • WebView – a control that uses the Microsoft Edge rendering engine – supported on Windows 10 April 2018 update and above
  • WebViewCompatible – a control that provides a version of WebView that is compatible with more OS versions. The control uses the WebView control on OS versions that support it and the Internet Explorer rendering engine otherwise, even on Windows 8 and Windows 7.
  • InkCanvas and InkToolbar – wrappers around the UWP InkCanvas and InkToolbar controls to enable Ink-based user interaction – supported on Windows 10 October 2018 update and above
  • MediaPlayerElement – wrapper around the UWP MediaPlayerElement to stream and render media content such as video – supported on Windows 10 October 2018 update and above
  • MapControl – wrapper around the UWP MapControl control to display and interact with rich Map content – supported on Windows 10 October 2018 update and above

As with the WindowsXamlHost control, these controls are currently available as a developer preview and we encourage developers to try them out in their own prototype code.

TabView control

After hundreds of comments on GitHub and two years of discussions, the UWP TabView control is now available in the Windows Community Toolkit. The TabView control allows you to provide a rich Tab experience, with support for fully customizing the behavior, built-in support for closing tabs, drag and drop, and more.


Make sure to visit the documentation and check out the sample in the sample app.

Weibo service

With only a few lines of code, developers can now easily retrieve or publish data to the very popular Weibo social platform. The service is built on .NET Standard and can be used on any platform including UWP, .NET Framework, Xamarin, and more.


// Initialize service
WeiboService.Instance.Initialize(AppKey, AppSecret, RedirectUri);

// Login to Weibo
if (await WeiboService.Instance.LoginAsync())
{
    // Post a status with a picture
    await WeiboService.Instance.PostStatusAsync(StatusText.Text, stream);
}

Visit the documentation for more details on how to create your Weibo application.

Twitter and LinkedIn .NET Framework support

In version 4.0 of the toolkit, the Twitter and LinkedIn services moved from UWP to .NET Standard. In this new release, the community continued to improve the cross-platform experience by building the .NET Framework platform-specific implementation required for authentication, enabling OAuth on WPF and Windows Forms.

Get started today

There are many more updates than we can cover in this blog post, so make sure to read the release notes.

As a reminder, you can get started by following this tutorial, or preview the latest features by installing the Windows Community Toolkit Sample App from the Microsoft Store. If you would like to contribute, please join us on GitHub! To join the conversation on Twitter, use the #WindowsToolkit hashtag.

Happy coding!


bingbot Series: JavaScript, Dynamic Rendering, and Cloaking. Oh My!


Last week, we posted the second blog of our bingbot Series: Optimizing Crawl Frequency.

Today is Halloween and like every day, our crawler (also known as a "spider") is wandering outside, browsing the world wide web, following links, seeking to efficiently discover, index and refresh the best web content for our Bing users.
bingbot, Bing's crawler

Occasionally, bingbot encounters websites relying on JavaScript to render their content. Some of these sites link to many JavaScript files that need to be downloaded from the web server. In this setup, instead of making only one HTTP request per page, bingbot has to make several requests. Some sites are spider traps, with dozens of HTTP calls required to render each page! Yikes. That's not optimal, now is it?

As we shared last week at SMX East, bingbot is generally able to render JavaScript. However, bingbot does not necessarily support all the same JavaScript frameworks that are supported in the latest version of your favorite modern browser. Like other search engine crawlers, bingbot finds it difficult to process JavaScript at scale on every page of every website while minimizing the number of HTTP requests at the same time.

Therefore, in order to increase the predictability of crawling and indexing by Bing, we recommend dynamic rendering as a great alternative for websites relying heavily on JavaScript. Dynamic rendering is about detecting the user agent and rendering content differently for humans and search engine crawlers. We encourage detecting our bingbot user agent, prerendering the content on the server side, and outputting static HTML for such sites, helping us minimize the number of HTTP requests and ensuring we get the best and most complete version of your web pages every time bingbot visits your site.
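
A simple sketch of the idea in ASP.NET Core middleware style follows; the RenderStaticHtmlAsync helper is hypothetical, and real deployments often hand off to a prerendering service instead:

// Sketch of dynamic rendering, placed in Startup.Configure. The
// RenderStaticHtmlAsync helper is hypothetical.
app.Use(async (context, next) =>
{
    var userAgent = context.Request.Headers["User-Agent"].ToString();

    if (userAgent.IndexOf("bingbot", StringComparison.OrdinalIgnoreCase) >= 0)
    {
        // Crawlers get prerendered static HTML: one request, full content.
        var html = await RenderStaticHtmlAsync(context.Request.Path); // hypothetical
        context.Response.ContentType = "text/html";
        await context.Response.WriteAsync(html);
        return;
    }

    // Humans get the normal client-rendered app.
    await next();
});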

Is using JavaScript for Dynamic Rendering considered Cloaking?

When it comes to rendering content specifically for search engine crawlers, we inevitably get asked whether this is considered cloaking... and there is nothing scarier for the SEO community than getting penalized for cloaking, even during Halloween! The good news is that as long as you make a good faith effort to return the same content to all visitors, with the only difference being the content is rendered on the server for bots and on the client for real users, this is acceptable and not considered cloaking.

So if your site relies a lot on JavaScript and you want to improve your crawling and indexing on Bing, look into dynamic rendering: you will certainly benefit immensely, receiving only treats and no tricks!

Happy Halloween!

Fabrice Canel and Frédéric Dubut
Program Managers

Microsoft - Bing

Azure Cosmos DB – A polymorphic database for an expanding data universe


There is a discussion that occurs frequently in the data world today, which centers around comparisons between traditional databases based on relational theory (e.g. Oracle or SQL Server) and a more modern wave of platforms commonly referred to as "NoSQL" databases. Proponents of both types of databases tend to get into disputes over which database is best. However, this can be a misguided point of contention. To understand why, it helps to trace back through history and reflect on how NoSQL databases first rose to prominence.

In the past 15 years, database technology has radically expanded beyond what could be described, to use a physics analogy, as a singularity in the initial conditions of our data universe: transactional processing using relational databases. This expansion has grown with improved technology and adoption, fueled by demand for the capability to process more data, as well as different kinds of data. There has been a revolution in the exchange of data, precipitated by the social media and mobile age. This has given rise to the increased popularity of different transient, flexible storage mediums, and protocols such as XML and JSON. While these became de facto standards in various forms of web publishing and messaging, methods of building applications have also evolved and matured. Object-oriented design in applications has increased in popularity, which has given rise to object-relational impedance mismatch. This further throttled the way in which we can build and maintain applications using relational databases. In addition, we have started to store various kinds of unstructured data, log files, binary images, text, sensory data, and more. This has given rise to distributed computing architectures like Hadoop and Spark that have allowed us to perform big queries on large data sets, without the need to apply structure such as schema to it at design time. In short, the variety of data structures we now need to manage has changed dramatically.

These changing approaches to modelling data in response to demands for greater flexibility can be thought of as emerging trade-offs between structural integrity vs agility/productivity when it comes to building applications that store any kind of data. This need to be more flexible has been characterized as a paradigm known as schema on read. The idea that data structures can be self-describing or semi-structured, and therefore the schema for applying meaning to data can in some sense live within the consumer/client code rather than being tight-coupled to databases at design time. This leaves databases free to be more flexible in how they ingest data.

In parallel to this shift in consumer demand for more flexible data structures, there has been a phenomenal increase in the volume and velocity of data we are handling in databases. This has given rise to the need to balance transactional integrity with physical availability, latency, and concurrency. Volumes of data being processed have become so large that traditional relational databases can sometimes struggle to offer the levels of overall performance that end users demand. In the case of ACID transactions, this has led to a common practice of relaxing the isolation element of ACID semantics in order to provide greater concurrency. In the case of availability and latency, this has given rise to the emergence of distributed database architectures, which in turn require us to balance trade-offs between consistency (of replicas) and availability. Learn more by reading about the CAP theorem and PACELC theorem.

In the context of deciding which type of database engine to use, we now have an emerging set of spectrums for data storage and persistence in an “expanding data universe”. The below diagram illustrates this, and suggests where we might place some of the emergent paradigms that have been solidifying within them.

Data Universe

Ultimately these spectrums have emerged and solidified through the increasing variety, velocity, and volume of data that modern day applications need to handle, and the advances in technology to support them.

The isolation trade-offs in ACID databases have been known for some time, and CAP Theorem/PACELC theorem have received a lot of recent press in exposing some of the trade-offs that relate to replication consistency spectrum in distributed databases. The emergence of a data structure spectrum is perhaps less discussed, but just as important in understanding the shifting paradigms in the database world. This proliferation of data paradigms means that the questions we ask about database technologies should really be centered around the business use case, and where along these spectrums that use case would be optimally served by applying the appropriate paradigm, rather than asking which database is the best. We now live in an “expanding data universe”, in which there is no single paradigm that fits best for all data structures or data persistence scenarios.

Of course, there is one database that covers more ground than other databases across these expanding and maturing paradigms, Microsoft’s polymorphic Azure Cosmos DB!

Azure Cosmos DB

The data structure spectrum

On the data structure spectrum, Azure Cosmos DB provides a revolutionary common type system referred to as atom-record-sequence (ARS). This facilitates multiple data models at an API and wire protocol level, each representing the different data models shown in the earlier diagram. Although these models may seem unrelated, they conceptually occupy points along a spectrum, as each represents a different level of trade-off between applying structure/meaning to data at design-time vs query time, or schema-on-write vs schema-on-read. From left to right, in the case of column-family, this is provided in the form of the open source Cassandra API. For the document data model, users have a choice between the native SQL API, or the open source MongoDB API. For graph data, users can adopt the Gremlin API. Finally for a key-value store, users can opt for the Table API.

The data persistence spectrums

Similarly on the data persistence spectrum, Azure Cosmos DB is one of the only databases in the world to offer multiple consistency abstractions with turnkey enablement for replication, which can be overridden per request. Azure Cosmos DB also offers the ability to do ACID transactions with snapshot isolation.

As shown in the earlier diagram, all points along the data persistence spectrum for replication consistency are supported, and were in fact uniquely pioneered as well-defined abstractions in Cosmos DB. Read my blog post "Azure Cosmos DB – Tunable Consistency!" for a discussion of the benefits and use cases for each consistency setting, along with descriptions using real-world examples. For a more in-depth exploration of the data consistency models we created for Cosmos DB, please take a look at our e-book.
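
For instance, with the .NET SDK a single read can relax the account's default consistency on a per-request basis (a minimal sketch; the account URI, key, and database/collection/document ids are placeholders):

// Sketch: per-request consistency override with the .NET SDK. Overrides
// can weaken, but not strengthen, the account default.
using System;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;

var client = new DocumentClient(
    new Uri("https://myaccount.documents.azure.com:443/"), "<account-key>");

var response = await client.ReadDocumentAsync(
    UriFactory.CreateDocumentUri("mydb", "mycoll", "mydoc"),
    new RequestOptions { ConsistencyLevel = ConsistencyLevel.Eventual });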

Global distribution and SLAs

In addition to this wide coverage across all these burgeoning spectrums, Azure Cosmos DB is also one of the only turnkey globally distributed databases in the world, enabling ultra-low latencies for both reads and writes in geo-distributed applications through seamless replication and multi-master write region capability, with automatic conflict resolution. Through a tightly controlled resource governance model made possible through a cloud-native software architecture, it also offers financially backed SLAs across consistency, availability, latency and throughput.

Flexible options

Azure Cosmos DB does not circumvent the need to make informed decisions between different points along each spectrum, nor does it obviate the need, in some cases, to choose a different database platform entirely, such as a relational database. However, as illustrated above, it does provide very wide support across a growing number of spectrum points in an "expanding data universe", with turnkey convenience and efficiency. It is thus very strongly placed to give excellent coverage across a high number of real-world business use cases.

Avere vFXT for Azure for HPC workloads now generally available


We are very excited to share the general availability (GA) of Avere vFXT for Azure. This culminates months of effort beginning when Microsoft welcomed Avere to the Azure family earlier this year. Customers can now leverage the Avere vFXT to run their high-performance applications in Azure.

The scope of Microsoft Azure's solutions for high-performance computing (HPC) continues to broaden, with Avere vFXT being the latest product to transition from testing to general availability. Avere joins a stellar portfolio of products like Azure Virtual Machines, Azure Batch, Azure CycleCloud, and networking technologies such as Azure ExpressRoute that help bring these demanding projects into the cloud without sacrifices.

Since public preview began in late August 2018, customers across the globe have moved new workloads to Azure using our high-performance file caching technology. The Avere vFXT has been deployed at scale, providing data access at very low latency, no matter where the file data originated. The Avere vFXT is deployed as a set of Azure Virtual Machines, adjacent to your cloud-based HPC cluster. The software runs as a cluster of VMs, enabling very high scale and throughput capacity for compute clusters of any size. Sources of storage can also connect into the Avere vFXT cluster — whether located in a datacenter, an Azure blob container, or both. This flexibility offers customers the option of running compute pipelines “as-is” in the cloud while planning longer-term data migration into the cloud.

The Avere vFXT for Azure provides a blob-backed cache in Azure as well, facilitating migration of file-based pipelines without having to re-write applications. For example, a genetic research team can load reference genome data into the blob environment, further optimizing the performance of their secondary analysis workflows.

Avere vFXT

Hybrid Cloud with Avere vFXT for Azure.

The Avere vFXT for Azure:

Addresses bottlenecks in data access

HPC environments frequently require thousands of machines running concurrent operations against the same set of data, which can induce bottlenecks at the storage level. The Avere vFXT for Azure overcomes this by caching data closest to the compute, allowing for scale-out performance by running workloads on Azure Virtual Machines. Avere vFXT creates a performance tier away from the NAS, shielding it from having to field the concurrent requests.

Reduces latency

The Avere vFXT for Azure can greatly lower latency by eliminating the need for frequent roundtrips between compute and on-premises or blob storage.

Supports high-throughput scalability

The Avere vFXT runs as a cluster of VMs, enabling very high scale and throughput. The software enables workloads to run across hundreds or thousands of virtual machines.

Provides flexibility

Run HPC jobs in Azure without having to worry about where the data ultimately resides. Avere vFXT provides low-latency caching access to your on-premises NFS network-attached storage (NAS) environment while you determine your longer-term data management strategy.

Lowers storage costs

Avere vFXT caching lowers your overall storage costs by presenting only the data specifically requested by your workload. Using this approach means you don't have to copy large amounts of data along with what is needed for the job. In addition, the Avere vFXT for Azure has no charge associated with licensing. You pay only for the underlying Azure assets used to run the Avere vFXT software.

How to get started

The Avere vFXT for Azure is available today in the Azure Marketplace. For best results, contact your Microsoft team or partners involved to help you build a comprehensive architecture that meets your business objectives and delivers results. Also, if you are heading to Dallas, Texas for SC18, you’ll find Avere experts at the Microsoft booth ready to help you accelerate file-based workloads in Azure!


Four models businesses like yours are using to monetize IoT


This post was co-authored by Peter Cooper, Senior Product Marketing Manager, Azure IoT and Mark Pendergrast, Director of Product Marketing, Microsoft Azure.

It’s easy to talk about all the cool things your company might do to leverage the Internet of Things (IoT). Figuring out how you’re actually going to make them work for your business is a bit more challenging — particularly the part about how to monetize them.

IoT technologies have major potential to open new revenue streams. Capitalizing on them often requires out-of-the-box thinking and a willingness to take smart risks.

Prepare your Organization

We’ve helped thousands of customers around the world profit from IoT. Over the course of these engagements, we’ve found that their monetization models tend to fall into four categories. Here are some options to consider as you build your approach — take a look:

1. The one-and-done: one-time purchasing

A one-time purchase is a common model. IoT connectivity is added as a feature, allowing products to be sold at a premium or to stand out from the competition. This approach works great for scenarios where repeating, revenue-generating services are not required, and your products don’t need ongoing support. For example, many wearable devices and connected home products are sold this way. Once the transaction is done, the customer owns the device outright.

That’s not the end of the story, however. The data you collect from these products can be extremely valuable. You can use it to improve products over time, get more bang for your marketing buck, and spot growth opportunities before your competitors do.

This is the easiest pricing model for manufacturers to adopt because it’s how things are generally sold today. The trick to making this model work is ensuring that continuing costs associated with connected products don’t erode your profits over time. For example, if you’re collecting and storing data or providing a dashboard to customers, these functions must be factored in.

2. Good, better, best: value-added product services

Keeping the revenue flowing after the product has been sold is a major growth opportunity for manufacturers. With connected products, you can offer subscriptions for value-added services. Food processing equipment leader Buhler, for example, is using a subscription-based model for its grain-sorting machines, charging based on which features customers enable.

In addition to boosting profits, subscriptions help you stay close to customers throughout the product lifecycle, not just when things break. Services can include anything from rich data and operational reporting to predictive maintenance alerts. Many companies take a tiered approach, offering a freemium option and charging for more advanced solutions. It’s a great way to crack open a new market — give customers a taste of the value of connected products and they’ll soon want more.

The subscription service model does depart significantly from the single-transaction approach, though. Persuading buyers to pay an ongoing premium requires you to present a clear value proposition and provide strong customer support. Customers will expect regular improvements and a robust, reliable experience.

3. Pay-as-you-go: turning products into services

IoT is overturning the traditional ownership experience, allowing customers to pay only for what they use. Product-as-a-service models are familiar to anyone who has used a bike- or car-sharing service, where vehicles are tracked by GPS and rented by the hour or minute. The buyer avoids capital and maintenance expenses and always has the latest technology, while the vendor gets an ongoing revenue stream that can provide more profit and stability over the long term.
 
This model can work even for very complex and expensive products. For example, Rolls-Royce now sells flight hours instead of airplane engines. Data collected from IoT sensors lets the company know exactly how long the engines have been used — and gives it the data it needs to keep them in top condition. Using advanced analytics, the company can predict and resolve mechanical issues before something breaks.

4. We’re in this together: revenue-sharing

Confident your IoT solution will deliver improved results to customers? Consider a revenue-sharing model. Instead of selling products or services, you’ll be selling outcomes. In fact, another name for this approach is “outcomes as a service.” One example is Itron, whose Total Outcomes service provides IoT services to cities, charging customers based on cost savings and performance targets. Because it requires your company to take on some of the business risk, this model can be very attractive to customers. It’s also the furthest from traditional manufacturing revenue models, and likely to require the most innovative thinking and pricing strategies.

Which model is right for your business?

Plan to invest some time and resources to figure out how you’ll monetize IoT. Successful initiatives often start with a pilot project involving a few good customers. Beyond the IoT solution itself, systems, people, and processes will need to adapt to new ways of doing business. Get a roadmap for how you can start profiting from connected products in our white paper, Navigating the path to new IoT business opportunities through connected products. It walks you through building a unique value proposition, monetization strategies, ways to address common roadblocks, a new technology approach, and how to manage organizational change.

Azure ML Studio now supports R 3.4


Azure ML Studio, the collaborative drag-and-drop data science workbench, now supports R 3.4 in the Execute R Script module. Now you can combine the built-in data manipulation and analysis modules of ML Studio with R scripts to accomplish other data tasks, as for example in this workflow for oil and gas tank forecasting.

ML Studio

With the Execute R Script module you can immediately use more than 650 R packages which come preinstalled in the Azure ML Studio environment. You can also use other R packages (including packages not on CRAN) and source in R scripts you develop elsewhere (as shown above), although this does require the time to install them in the Studio environment. You can even create custom ML Studio models encapsulating R code for others to use in the drag-and-drop environment.

If you're new to Azure ML Studio, check out the Quickstart Tutorial for R to learn how to use the Execute R Script module, and to check out what's new in the latest update follow the link below.

Microsoft Docs: What's New in Azure Machine Learning Studio

The AI Journey


Over the last year I have spent a significant amount of time with customers and partners discussing Artificial Intelligence (AI), some of the core patterns emerging in terms of initial implementations, and in many cases where and how to get started on the journey. While there has been, and continues to be, a lot of excitement around the potential for AI, people are starting to move from their initial, bespoke, research-oriented AI implementations, to wanting a more considered and pragmatic approach to their use of AI. In this area we’ve done a lot of thinking about not only the patterns for AI, but the journey itself.

This notion of real-world or pragmatic AI stems from conversations with many customers and partners and their efforts over the last year. These projects typically align well with the patterns we've discussed and have been undertaken both to understand what is possible with AI and to learn more about the tools and platforms that are available. In many instances teams would take a set of data offline, build a model, a tool, or a service/solution, test it out, even put it in production as a proof of concept (PoC), and drive some learning. Unfortunately, many of these projects don't feed into the core data estate or pipeline; they didn't quite generate enough ROI to justify scaling the work further, nor did they create a sustainable differentiation for the company. So, while they were great for learning, which is important in itself, they were not great for creating a sustainable AI asset. With good learning done, the question now becomes how best to think about the where and how of creating a sustainable AI asset/capability that supports the differentiation and growth of an organization.

In this area we've created a framework to think about the AI Journey that I recently tested with customers in Europe and here in Redmond. This framework seems to resonate very well with the people we've spoken with, so I wanted to share it more broadly and hear what others think. Note that the journey is not necessarily linear; it's more about picking the right tools and entry point(s) for any given organization and scenario, and thinking about what makes the most sense from an investment and opportunity cost perspective. Like any new technology, the question is not how to use it in every scenario all at the same time, but rather where it can be used to help a company grow and differentiate, and where you want to build core capability versus leveraging the services or tools of others. With that in mind, here are the four areas we have been discussing with customers/partners.

BI before AI

Data is the foundation for AI; without data there is no AI, so the first opportunity we have is to leverage the data we have within an organization. I like to think that it's good to look for "Insight" before we try to drive "Intelligence", and thus the notion of "BI before AI". Where does a company have unique data that can be leveraged? Where are there data sources that can augment that data? Is the company data driven; does it make decisions based on data? If not, that is a great place to start, as data will be the cornerstone for much of a company's AI work going forward.

With the hype for AI being so strong, many people want to rush into AI before making sure their data estate is well structured. This step is critical in both obvious and subtle ways. Data is used to train models, so obviously you need to have enough data to train accurate models without overfitting. More subtle is that it is important to have a diversified dataset to minimize the risk of bias. One of my colleagues, Judson Althoff, often says, "…without good data, all you will do with that fancy AI technology is make mistakes with greater confidence than ever before." For larger organizations with a lot of data, or a long history of data collection, there will be work to do to determine which data to try to clean/normalize and use, and where to start over collecting data from scratch, in a way that best serves the insights and ultimately the intelligence they want to drive. So having a plan for the data estate, and a path that balances driving insights while building out a sustainable data asset, is a great entry point on the AI journey.

Along with cleaning and normalizing the data, getting the data into a standard format (e.g. the Common Data Model [CDM]) enables you to get insights into where best to prioritize your efforts. Normalizing your data with CDM and adopting tools like Power BI makes it possible not only to get insights but also to easily create PowerApps and line-of-business tools.

SaaS AI Offerings

I have personally observed that when AI is used to help support some of the core business processes, it is easier to get organizational alignment and support for continued investment, which is always critical when seeking to invest in new areas like AI. In this area we are starting to see teams use AI to augment standard business processes, and within Microsoft our team incubated three AI solutions (customer service, sales, marketing) in the last year that are now starting to become available to others through the Dynamics 365 AI offerings. In many cases, we start by applying AI on top of our BI, that is, automating the understanding of what is inside the data we are collecting. A simple example of this is the use of AI to look for patterns and signals in the new customer service insights. On a daily basis the AI does clustering on the last time window of topics and compares that with other BI data to identify emerging trends.

By using AI to assist professionals in common business processes and scenarios, the organization sees real impact from AI and gains proof points to consider for different use cases in the company. The previous examples were role-specific, but you can use AI to get a broader understanding of the organization with Workplace Analytics. In addition, there are many third-party SaaS services being created to support different horizontal and industry-specific processes. These will become more important over time, as purchasing SaaS services with AI built in, or available as an add-on, will likely become more economical than trying to build your own AI capability for every business area/process within your organization.

AI Accelerators

Up until this point we have normalized our data to get insights on where to invest our efforts and leveraged AI in products that make core business processes more effective or efficient. The next opportunity is to leverage the new AI layer in the development stack to develop more custom solutions or services. In this area, the ability to leverage the tools within Azure AI to augment existing processes or solutions can be very powerful, and these tools are being tested within many organizations today. There are many examples within the Azure AI Gallery to explore what is possible. The capabilities offered through the cognitive services for vision, text, speech, or knowledge, the Bot Service, Azure ML, and more provide a great starting point for infusing AI into your core offerings. In the same way Microsoft is using the Azure AI platform to infuse AI into our own products, other companies are able to do the same for their products/services. In most cases, the core set of cognitive services will get you going, and there is even the option to customize them when you need something unique to your use case. The advantage of this approach is that you don't need to be a deep learning expert or have a lot of training data to get started. Simply call the API that you need, or plug into the Azure Bot Service to get up and running quickly; a quick sketch follows. The most common AI apps and services developed with these tools follow the 5 patterns for AI solutions.
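
As a quick sketch, calling the Computer Vision analyze endpoint needs little more than an HTTP request; the region, API version, and key below are placeholders for your own resource:

// Sketch: call the Computer Vision analyze endpoint directly over HTTP.
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class VisionSketch
{
    static async Task Main()
    {
        var key = Environment.GetEnvironmentVariable("COGNITIVE_SERVICES_KEY");
        var endpoint =
            "https://westus.api.cognitive.microsoft.com/vision/v2.0/analyze?visualFeatures=Description";

        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", key);

            var body = new StringContent(
                @"{ ""url"": ""https://example.com/photo.jpg"" }",
                Encoding.UTF8, "application/json");

            var response = await client.PostAsync(endpoint, body);
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}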

Custom AI

Even with all the AI building blocks that make it easy to get started using AI in your apps and solutions, there will be times when you may want to do something completely custom and build your own models and tools from scratch. In those cases, we offer the data infrastructure, management tools, computational power, and frameworks to accomplish what you need. Starting with the data, we support a variety of data solutions from cloud-hosted databases to data lakes. Furthermore, if you started with your data in a CDM schema you can use it in Azure Data Lake Storage generation 2, providing a clean way to scale up your data capabilities as needed. When dealing with big data the ability to process it quickly is important, and Azure has support for CPUs, GPUs, and FPGAs. For those of you who are not familiar with FPGAs, they are customized to execute AI tasks very quickly so that you can lower your model training time, but compared to CPUs they are less flexible. Today several frameworks are used for deep learning, and Azure provides support for many of them. PyTorch, Keras, and TensorFlow are popular ones, but we also support the ONNX standard, which will give your models portability if you need to move between ONNX-supported frameworks. So, the tools for creating your own custom AI solutions are woven through the Azure platform and available for developers and data scientists to take advantage of today.

Summary

With the continuing evolution of AI, the opportunity to begin applying it to real-world problems is here. Since there are multiple entry points for applying AI, the key is to determine an approach that both creates short-term results and builds a long-term asset. Part of achieving that goal is determining where and how to leverage AI and, like all other investment decisions, where to apply your scarce resources and where to leverage other tools or services to drive business outcomes. It's also critical to think about where you have unique data assets and how to bring those into play as part of your overall AI journey and strategy. As you consider your next AI project, start with the business outcome you are trying to drive; the depth of AI experience and the level of AI customization needed will help you determine where best to start. As I often say, "it is too early to do everything with AI but too late to do nothing", so get started and we look forward to seeing what you develop.

Cheers,

Guggs (@StevenGuggs)

What’s new in Azure DevOps Sprint 142 Update

Sprint 142 Update of Azure DevOps is rolling out to all organizations. This update includes several improvements to YAML in Azure Pipelines, the new navigation switched on for everyone, improved experiences for Azure Boards, and the introduction of the dark theme. Watch the following video to learn more about these features.... Read More

This month on Bing: election info, image UI refresh, NFL answers, and word of the day


Bing election coverage


With the midterm election just over a week away, we want to make it easy to find the information you are looking for about state-by-state voting details, election news, candidates, and initiatives on the ballot.

We know how difficult it can be to make well-educated decisions in a world that’s full of, at times, conflicting information. With Bing’s multi-perspective answer, we hope to help voters understand the full range of arguments for or against a key ballot initiative by clearly presenting and summarizing the main arguments on each side. 


In addition, Bing provides information on US Senate and House races, gubernatorial contests, and state House and Senate races in every state, along with detailed profiles of US Senate candidates and their positions on key issues. Our goal is to help you become as informed as possible as you fill out your ballot, while providing practical information such as how to apply for an absentee ballot, where to find your polling place, and what ID requirements apply. On Nov. 6, Bing will feature live results from the Associated Press to keep you up to date with all of the latest election developments.


As always, any news about the election can be found in the Trending section of your Bing homepage, as well as on the ‘2018 Elections’ tab of the Bing News vertical. You can also check out our constantly updating news Spotlight, capturing the biggest news stories of the day.


A new way to view and browse images


We’ve updated the UX of the image detail page on Bing desktop to more easily highlight new and relevant information about the images you click on. For example, if you’re searching for images of food, Bing will automatically find recipes associated with that image, presenting the information in the upper right-hand corner of the screen. Looking for outfits or costumes? Bing will help you find related product pages where you can purchase the item you are looking for. The intention behind these changes is to make it easier than ever to find relevant information and inspiration for your image-based searches.


We’ve also extended these UX changes to our new visual search results page, where you can search using an image as an input. Whether you’ve uploaded an image yourself or are browsing the web, Bing’s object detection features can help you more easily zoom in on particular parts of the image which you might want to search, learn more about, or buy. 


NFL expanded 


With the NFL season in full swing, we’ve come up with even more ways for football fans to stay on top of their game. No matter what information you’re looking for, Bing presents a comprehensive picture of each game with stats, schedules, viewing information, injury reports, news, and even predictions.


Additionally, Bing’s pre-game insights provide specialized notes that guide both new and experienced football fans on the history of the teams and the matchup, as well as what to watch for in the current showdown. 


Word of the day


Finally, as we approach the new year, we want to give everyone a fun and easy opportunity to expand their vocabulary. Bing’s new word of the day is a simple way to expose yourself to new words and concepts. Learn the featured word’s meaning and pronunciation, and see the word in context. You can also navigate through the past week’s words to catch up on any you may have missed.


Tomorrow’s word? That’s a surprise! 


We value your feedback

As always, we value your perspective, and one of the best ways for you to provide it is via the Bing Insider Program, which gives our users the opportunity to provide feedback to Bing engineers and partner teams via monthly calls, Insider events, interactive feedback sessions, and more.

Please register here to become a Bing Insider. Even if you don’t join today, the website is full of great content featuring our Insiders as well as new Bing features. We hope you enjoy the new site and come back often.


XAML Islands – A deep dive – Part 1


XAML Islands is a technology that enables Windows developers to use new pieces of UI from the Universal Windows Platform (UWP) on their existing Win32 Applications, including Windows Forms and WPF technologies. This allows them to gradually modernize their apps at their own pace, making use of their current code as much as they want.

Background: How did we get here?

In 2012, with Windows 8, we introduced the Windows Runtime, a new framework to modernize the Win32 APIs, along with many new UI controls. These UI controls were part of the visual framework called XAML, which is part of the Windows Runtime. Back then, if you wanted to use any of these new XAML controls, you needed to create a new app.

In the middle of 2015, with the introduction of Windows 10, UWP was born. The Universal Windows Platform (UWP) allows you to create apps that work across Windows devices (Xbox, Mobile, HoloLens, Desktop, etc).

In 2015 we announced Project Centennial, later renamed Desktop Bridge: a set of tools that allowed developers to bring their existing Win32 apps to the Microsoft Store (i.e., the new packaging system), so they could convert, for example, their MSI to an APPX. That was the first step in allowing even more apps to be delivered to customers in a safe and reliable way. Later on, we added even more capabilities to this bridge, allowing developers to enhance their existing apps with some of the new Windows 10 APIs, like live tiles and notifications in the new Action Center. But still, no new UI controls.

And now, at Build 2018, Kevin Gallo announced that Microsoft would be introducing a way for developers to use the new Windows 10 controls in their current WPF, Windows Forms, and native Win32 apps, without fully migrating them to the Universal Windows Platform. That was branded as UWP XAML Islands, and it is huge! Now you can have your "Islands" of UWP controls wherever you want inside your WPF apps without rewriting thousands of lines of code.

Who is it for?

XAML Islands are intended for existing Win32 apps that want to improve their user experience by leveraging new UWP controls and behaviors but, due to effort or cost being prohibitive, are unable to do a full rewrite of the app. You could already leverage Windows 10 APIs, but up until XAML Islands, only non-UI APIs.

If you are developing a new Windows App, a UWP App is probably the right approach.

How does it work?

Starting with the Windows 10 October 2018 Update (SDK 17763), we added new APIs that enable the XAML Islands scenario.

NOTE: This feature is in a Preview state for the October 2018 Update, so we can get your feedback on our directions – there are important limitations and the feature is not yet ready for production code, but we would value your feedback highly to help inform our plans.

That means that Windows 10 now supports hosting UWP Controls inside the context of a Win32 Process. There are two new system APIs called WindowsXamlManager and DesktopWindowXamlSource.

  • The WindowsXamlManager handles the UWP XAML framework itself. Its only method, aside from Dispose, is InitializeForCurrentThread, which initializes the UWP XAML framework on the current thread of this non-UWP Win32 desktop app, allowing it to create UWP UI there.
  • The DesktopWindowXamlSource is the actual instance of your Island content. It has a Content property which you (the developer) are responsible for instantiating and setting. This class also lets you get and set the focus of that element.

The DesktopWindowXamlSource renders to, and gets its input from, an HWND. It needs to know which parent HWND the Island’s HWND will be attached to, and you are responsible for sizing and positioning that parent HWND.
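
To make this concrete, here is a minimal sketch of standing up the XAML framework and creating an Island whose content is a UWP Button. It is illustrative only (remember the preview note above), and the button text is a placeholder:


// Initialize the UWP XAML framework on the current (non-UWP) thread.
var xamlManager = Windows.UI.Xaml.Hosting.WindowsXamlManager.InitializeForCurrentThread();

// Create the Island and give it some UWP content to render.
var xamlSource = new Windows.UI.Xaml.Hosting.DesktopWindowXamlSource();
xamlSource.Content = new Windows.UI.Xaml.Controls.Button
{
    Content = "Hello from a XAML Island"
};

// Nothing is on screen yet: the Island's HWND still has to be attached
// to a parent HWND, which is covered next.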

Simplifying: with an instance of the DesktopWindowXamlSource in place, you can attach its HWND to any parent HWND you want from your native Win32 app. You don’t have to do that manually if you use the Windows Community Toolkit, because it already wraps these classes into easy-to-use implementations for WPF and WinForms.

In case you are not using the toolkit, or want to interop directly from raw Win32 code, you need to cast the DesktopWindowXamlSource object (it’s an IInspectable; after all, UWP is an extension of COM – take a look at how to do that here) to an IDesktopWindowXamlSourceNative instance (which is a known COM type, just not exposed inside the SDK yet). This interface exposes the AttachToWindow method, which takes an IntPtr pointing to the parent HWND. By attaching it to a parent HWND, it will draw whichever UWP control you instantiated.
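
Here is a hedged sketch of that raw interop path. The interface definition below is an assumption based on publicly available samples (the GUID in particular), so verify it against the Windows SDK headers before relying on it:


using System;
using System.Runtime.InteropServices;

// Hand-written COM interop definition; not (yet) part of the SDK projections.
// The GUID below is assumed from published samples - verify it in the SDK.
[ComImport]
[Guid("3cbcf1bf-2f76-4e9c-96ab-e84b37972554")]
[InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
interface IDesktopWindowXamlSourceNative
{
    // Attaches the Island to the given parent HWND.
    void AttachToWindow(IntPtr parentWnd);

    // The HWND of the Island itself, available after AttachToWindow.
    IntPtr WindowHandle { get; }
}

// Usage (sketch): cast the DesktopWindowXamlSource and attach it to an
// HWND that you own and are responsible for sizing and positioning.
// var native = (IDesktopWindowXamlSourceNative)(object)xamlSource;
// native.AttachToWindow(parentHwnd);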

The process is very similar to how you create a Win32 element inside WPF. Any framework that exposes HWND can host a XAML Island. So, in theory, you could have a Java or Delphi application hosting a Windows 10 UWP Control. This control can be anything from a simple Button to a fully featured custom control. All you need is a wrapper for that HWND object.

NuGets and dependencies

We don’t want you to have to worry about all these HWND details, especially if you are a WPF or WinForms developer who doesn’t normally need to speak in terms of HWNDs. Doing that by hand is a huge effort, and we wanted to simplify it by exposing wrapper controls for the most common usages out there. Thousands of apps use WPF and Windows Forms, which already handle HWND instances, so the current iteration of XAML Islands already exposes these wrapper classes for you. You can find their implementations inside the Windows Community Toolkit NuGet packages for Win32.

The easiest way to use an Island inside an app is to use the NuGet packages that we provide. The one you use depends on which framework you are using. The Windows Community Toolkit contains two control implementations that wrap the WindowsXamlManager and the DesktopWindowXamlSource for your convenient use, one for WPF and one for Windows Forms. These are the available packages:

  • Microsoft.Toolkit.Wpf.UI.XamlHost – This package provides the WindowsXamlHost class for WPF.
  • Microsoft.Toolkit.Wpf.UI.Controls – As of right now, this package provides wrapper classes for first-party controls, such as the InkCanvas, InkToolbar, MapControl, and MediaPlayerElement, all for WPF.
  • Microsoft.Toolkit.Forms.UI.XamlHost – This package provides the WindowsXamlHost class for Windows Forms.
  • Microsoft.Toolkit.Forms.UI.Controls – As of right now, this package provides wrapper classes for first-party controls, such as the InkCanvas, InkToolbar, MapControl, and MediaPlayerElement, all for Windows Forms.

Both Microsoft.Toolkit.Wpf.UI.Controls and Microsoft.Toolkit.Forms.UI.Controls have dependencies on their respective .XamlHost packages. They use the WindowsXamlHost control to wrap the first-party UWP controls, which we’ll get into in more detail in the next blog post.

Integrating with XAML Islands using the Windows Community Toolkit

Now with the NuGet package in place, integrating with XAML Islands should be simple.

The implementation creates and manages the WindowsXamlManager and the DesktopWindowXamlSource instances for you inside a wrapper control called WindowsXamlHost (take a look at the implementation for WPF right here, and some of the calls inside the base class). It also handles the loading of UWP types for you (inside a special class called UWPTypeFactory), and the focus management, back and forth from the Island to the host control. All of that, for free!

For instance, if you are a WPF developer, all you need to do is create an instance of the WindowsXamlHost class and specify which UWP control you want to instantiate inside it. There are a few ways of doing that. The docs cover this extensively, so I definitely recommend reading them.

NOTE: Make sure you follow the Enhance your desktop application for Windows 10 guide, which describes how to use Windows 10 APIs in your Desktop Bridge Win32 app. Without this, you won’t be able to reference any Windows 10 API.

Now let’s reference the control in our XAML (WPF). To create a reference to our WindowsXamlHost, we use the namespace Microsoft.Toolkit.Wpf.UI.XamlHost, so we need to add it to the namespaces inside the WPF XAML. On the first element of your XAML component, add the following namespace:

xmlns:xamlhost="clr-namespace:Microsoft.Toolkit.Wpf.UI.XamlHost;assembly=Microsoft.Toolkit.Wpf.UI.XamlHost"

Now you can create objects from that namespace, and more specifically, the WindowsXamlHost. This is how the markup looks:

Just as a very, very simple – and not practical – example, you can create a UWP button, like this:


<Window
    ...
    xmlns:xamlhost="clr-namespace:Microsoft.Toolkit.Wpf.UI.XamlHost;assembly=Microsoft.Toolkit.Wpf.UI.XamlHost">

    <!-- A XAML Island hosting a UWP Button -->
    <xamlhost:WindowsXamlHost x:Name="myUwpButton" InitialTypeName="Windows.UI.Xaml.Controls.Button" />

</Window>

This is the simplest way of specifying the control we want inside our Island: set the InitialTypeName property to the fully qualified name of the UWP control you want to instantiate and render. Then you can access it from your code-behind.

Unfortunately, the UWP control will not be available right after the page’s InitializeComponent(). To access its instance, you need to wait until it has fully loaded so you can attach to its properties. There is an event on the control specifically for that, called ChildChanged:


// Subscribe before the Island's content is created.
myUwpButton.ChildChanged += MyUwpButton_ChildChanged;

...

private void MyUwpButton_ChildChanged(object sender, System.EventArgs e)
{
    // Child is only populated once the UWP control has been instantiated.
    if (myUwpButton.Child is Windows.UI.Xaml.Controls.Button button)
    {
        button.Content = "Click me!";
        button.Click += (s, args) =>
        {
            MessageBox.Show("Hi from UWP Button!");
        };
    }
}

Example of a UWP button.

NOTE: Be careful of the namespaces! You’ll probably have two Button classes, System.Windows.Controls.Button (the WPF one) and Windows.UI.Xaml.Controls.Button (the UWP one).

Again, this is just the simplest thing you can do with XAML Islands, but it is not the right way of doing it! A few bigger Islands are a better approach than many smaller ones. What you probably want is custom controls, which will be covered in our next blog post!

With this process, any Win32 App can use the newest UWP controls and adopt the Fluent Design System, regardless of your app model.

Binding

One of the most useful features of XAML is Binding. The Child property of the WindowsXamlHost instance will reference the UWP object. This object and your WPF or WinForms objects are running on the same process and on the exact same thread. This means that you could have a WPF TextBox and a UWP TextBox and bind their Text properties together.


<StackPanel>
    <!-- A UWP TextBox hosted in an Island, next to a native WPF TextBox -->
    <xamlhost:WindowsXamlHost InitialTypeName="Windows.UI.Xaml.Controls.TextBox" ChildChanged="MyUwpTextBox_ChildChanged" x:Name="myUwpTextBox"/>
    <TextBox x:Name="myWpfTextBox"/>
</StackPanel>

And on your code behind:


private void MyUwpTextBox_ChildChanged(object sender, System.EventArgs e)
{
    if (myUwpTextBox.Child is Windows.UI.Xaml.Controls.TextBox textBox)
    {
        // Use the UWP binding system (in C#) to bind the hosted UWP
        // TextBox.Text to the WPF TextBox.Text.
        textBox.SetBinding(Windows.UI.Xaml.Controls.TextBox.TextProperty, new Windows.UI.Xaml.Data.Binding
        {
            Source = myWpfTextBox,
            Path = new Windows.UI.Xaml.PropertyPath("Text"),
            Mode = Windows.UI.Xaml.Data.BindingMode.TwoWay,
            UpdateSourceTrigger = Windows.UI.Xaml.Data.UpdateSourceTrigger.PropertyChanged
        });
    }
}

As you can see, I used the C# binding syntax instead of the XAML binding syntax – Text="{Binding …}" – because we are not exposing a typed object; after all, the WindowsXamlHost is a control that just instantiates whatever fully qualified class you specify, so there are no bindable properties on it. The Child property of the WindowsXamlHost is a Windows.UI.Xaml.UIElement, so we could only bind to the dependency properties exposed by that type. To solve this, we could create our own custom WPF control that inherits from WindowsXamlHost, sets the Child to the desired UWP TextBox, and exposes the properties we want to bind to as WPF dependency properties. Here is an example for the Button class.
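
As a hedged sketch of that pattern (the UwpTextBoxHost class below is hypothetical, not the toolkit’s actual implementation), a wrapper for a UWP TextBox could look like this:


using System.Windows;
using Microsoft.Toolkit.Wpf.UI.XamlHost;

// Sketch: a WPF control that hosts a UWP TextBox and re-exposes its Text
// as a bindable WPF dependency property.
public class UwpTextBoxHost : WindowsXamlHost
{
    public static readonly DependencyProperty TextProperty =
        DependencyProperty.Register(nameof(Text), typeof(string), typeof(UwpTextBoxHost),
            new FrameworkPropertyMetadata(string.Empty,
                FrameworkPropertyMetadataOptions.BindsTwoWayByDefault, OnTextChanged));

    public string Text
    {
        get => (string)GetValue(TextProperty);
        set => SetValue(TextProperty, value);
    }

    public UwpTextBoxHost()
    {
        InitialTypeName = "Windows.UI.Xaml.Controls.TextBox";
        ChildChanged += (s, e) =>
        {
            if (Child is Windows.UI.Xaml.Controls.TextBox tb)
            {
                tb.Text = Text;                              // push the WPF value in
                tb.TextChanged += (o, a) => Text = tb.Text;  // pull UWP edits out
            }
        };
    }

    private static void OnTextChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
    {
        // Forward WPF-side changes into the hosted UWP TextBox.
        if (d is UwpTextBoxHost host && host.Child is Windows.UI.Xaml.Controls.TextBox tb
            && tb.Text != (string)e.NewValue)
        {
            tb.Text = (string)e.NewValue;
        }
    }
}


With a wrapper like this, a plain WPF binding such as Text="{Binding SomeProperty}" works against the host control directly.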

NOTE: You most likely don’t want or need to do that for every control you have. This process is for allowing a developer to expose their UWP control for use in another context. Again, try to minimize the number of Islands you create. For use within your own app, try to create fewer, bigger Islands instead of several small ones.

It’s worth noting that even though we are using a TwoWay binding, it acts as OneWayToSource: changes to the UWP TextBox are reflected in the WPF TextBox, but not the other way around. That happens because the WPF TextBox defines its TextProperty dependency property as a System.Windows.DependencyProperty, while the UWP TextBox defines its TextProperty as a Windows.UI.Xaml.DependencyProperty. They are not the same type; in fact, the two binding systems are entirely different, so the UWP binding system is not expecting that type and does not know what to do with it. An approach that does work is binding both TextBoxes to a class that implements INotifyPropertyChanged (e.g., a view model). Yes, the good ol’ INotifyPropertyChanged! If you want to check out a full-featured sample, take a look at this repository on GitHub.
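
Here is a minimal sketch of that shared view model, assuming standard INotifyPropertyChanged plumbing (the class and property names are placeholders):


using System.ComponentModel;
using System.Runtime.CompilerServices;

// Both the WPF TextBox and the UWP TextBox bind to Text below, each
// through its own binding system, so edits flow both ways.
public class SharedTextViewModel : INotifyPropertyChanged
{
    private string _text = string.Empty;

    public string Text
    {
        get => _text;
        set
        {
            if (_text == value) return;
            _text = value;
            OnPropertyChanged();
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    private void OnPropertyChanged([CallerMemberName] string name = null)
        => PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(name));
}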

Where are we?

This was the first blog post showing how we came to this point in time, and I tell you, it’s a great time to be a Windows developer! Not only do we welcome all developers, we also understand that your development investment is important and needs to be a long-term investment.

On the next blog post, we’ll clear things up, showing how to leverage the wrapped controls that we provide inside the Microsoft.Toolkit.Wpf.UI.Controls and Microsoft.Toolkit.Forms.UI.Controls NuGet packages, as well as the most compelling feature of Xaml Islands, hosting your custom controls inside your Win32 Apps.

Stay tuned for more and see you next time!

The post XAML Islands – A deep dive – Part 1 appeared first on Windows Developer Blog.

Because it’s Friday: What does the cat say?


In Greek, it says "tsiou tsiou". In Danish, it says "pip-pip". In English, it says "cheep". At Derek Abbott's Animal Noise page, you can learn what a small bird sounds like according to 20 different languages, and for dozens of other animals, too. (The fox, however, is sadly absent.) It also lists common pet names by animal and country, and the different commands used to direct animals in different places.

Animals

That's all from the blog for this week. We'll be back next week and in the meantime, have a great weekend!


Top Stories from the Microsoft DevOps Community – 2018.11.02

I’m always fascinated to see how people take the flexibility of the Azure DevOps platform and stretch it to the limits. This week, people are creating pipelines for containers, NuGet packages and even retrocomputers! DevOps for a Commodore 64? Sure, you’re doing DevOps for your production web application. And yes, you’ve got a pipeline that... Read More

.NET Core and .NET Standard for IoT – The potential of the Meadow Kickstarter


I saw this Kickstarter today - Meadow: Full-stack .NET Standard IoT platform. It says that "It combines the best of all worlds; it has the power of RaspberryPi, the computing factor of an Arduino, and the manageability of a mobile app. And best part? It runs full .NET Standard on real IoT hardware."

NOTE: I don't have any relationship with the company/people behind this Kickstarter, but it seems interesting so I'm sharing it with you. As with all Kickstarters, it's not a pre-order, it's an investment that may not pan out, so always be prepared to lose your investment. I lost mine with the .NET "Agent" SmartWatch even though all signs pointed to success.

Meadow IoT Kickstarter

I've done IoT work on Raspberry Pis, which has gotten much easier lately with the emerging community support for ARM32, Raspberry Pis, and cool stuff happening on Windows 10 IoT. I've written about how easy it is to get running on a Raspberry Pi. I was even able to get my own podcast website running on a Raspberry Pi and in Docker.

This Meadow Kickstarter says it's running on the Mono Runtime and will support the .NET Standard 2.0 API. That means that you likely already know how to program to it. Most libraries on NuGet are .NET Standard compliant so a ton of open source software should "Just Work" on any solution that supports .NET Standard.

One thing that seems interesting about Meadow is this sentence: "The power of Raspberry Pi in the computing factor of an Arduino, and the manageability of a mobile app."

Raspberry Pis are full-on computers. Arduinos are small, (mostly) single-tasked devices. Microcomputer vs. microcontroller. It's overkill to have Ubuntu on a computer just to turn on a device; you usually want IoT devices to have as small a surface area as possible.

Meadow says "Meadow has been designed to run on a variety of microcontrollers, and our first board is based on STMicroelectronics' flagship STM32F7 MCU. The Meadow F7 Micro board is an embeddable module that's based on Adafruit Feather form factor." Remember, we are talking megs not gigs here. "We've paired the STM32F7 with 32MB of flash storage and 16MB of RAM, so you can run pretty much anything you can think of building." This is just a 216MHz ARM board.

Be sure to scroll all the way down to the bottom of the page as they outline risks as well as what's left to be done.

What do you think? While you're at it, check out our sponsor this week (total coincidence): Intel IoT! They have some great developer kits.


Sponsor: Reduce time to market and simplify IOT development using developer kits built on Intel Atom®, Intel® Core™ and Intel® Xeon® processors and tools such as Intel® System Studio and Arduino Create*


© 2018 Scott Hanselman. All rights reserved.

Azure.Source – Volume 56


Now in preview

Local testing with live data means faster development with Azure Stream Analytics

Live data local testing is now available for public preview in the Azure Stream Analytics tools for Visual Studio, enabling you to test jobs locally from the IDE using live event streams from Azure Event Hubs, IoT Hub, and Blob Storage. The new local testing runtime can read live streaming data from the cloud or from a local static file. It works the same as the Azure Stream Analytics cloud runtime, and therefore supports the same time policies needed for many testing scenarios. The query runs in a simulated environment suitable for a single-server development environment and should only be used for query logic testing purposes; it is not suitable for performance, scalability, or availability testing.

Also in preview

Now generally available

Avere vFXT for Azure for HPC workloads now generally available

Avere vFXT for Azure is a filesystem caching solution for data-intensive high-performance computing (HPC) tasks, which is deployed as a set of Azure Virtual Machines adjacent to your cloud-based HPC cluster. The software runs as a cluster of VMs, enabling very high scale and throughput capacity for compute clusters of any size. By caching files close to your compute nodes, Avere vFXT speeds read times and lets you work more smoothly even at peak load. Avere vFXT works best with systems that have between 1,000 and 40,000 client cores. Avere vFXT can work with your existing on-premises data storage to provide Azure-based computing resources local access to active files that are stored long-term in your datacenter.

Solution diagram showing Hybrid Cloud with Avere vFXT for Azure

Also generally available

News and updates

Deliver the right events to the right places with Event Domains

An event domain is a management tool for large numbers of Event Grid topics related to the same application. You can think of it as a meta-topic that can have thousands of individual topics. Event domains make available to you the same architecture used by Azure services (like Storage and IoT Hub) to publish their events, which enables you to publish events to thousands of topics. Domains also give you authorization and authentication control over each topic so you can partition your tenants. In addition, we're making Event Grid available in Azure Government regions (public preview until all functionality is available). Lastly, we have updated some of our SDKs, added a time to live feature for Event Subscriptions, and added Portal UI support for configuring dead-lettering and retry policies.

Solution diagram showing the use of Event Domains to manage Event Grid topics

Cross-channel emotion analysis in Microsoft Video Indexer

Azure Video Indexer is a cloud service built on Azure Media Analytics, Azure Search, Cognitive Services (such as the Face API, Microsoft Translator, the Computer Vision API, and Custom Speech Service). Video Indexer’s (VI) new machine learning model mimics human behavior to detect four cross-cultural emotional states in videos: anger, fear, joy, and sadness. With the new emotion detection capability in VI that relies on speech content and voice tonality, you are able to become more insightful about the content of your videos by leveraging them for marketing, customer care, and sales purposes. You don’t need to have any background in machine learning or computer vision to use VI. You can even get started without writing a single line of code. VI offers simple but powerful APIs and puts the power of AI technologies for video within the reach of every developer.

Simplified restore experience for Azure Virtual Machines

Azure Backup now offers an improved restore experience for Azure Virtual Machines by leveraging the power of ARM templates and Azure Managed Disks. The new restore experience directly creates managed disk(s) and virtual machine (VM) templates. This eliminates the manual process of executing scripts or PowerShell commands to convert and configure the .VHD file, and complete the restore operation. There is zero manual intervention after the restore is triggered making it truly a single-click operation for restoring IaaS VMs.

Additional news and updates

Events

Microsoft Connect(); 2018

Save the date to tune in online on December 4, 2018 for Microsoft Connect – a full day of dev-focused delight—including updates on Azure and Visual Studio, keynotes, demos, and real-time coding with experts. Whether you’re just getting started or you’ve been around the blockchain, you’ll find your people here. And it all happens online. Get comfortable, and get inspired.

Illustration promoting Microsoft Connect(); 2018 on December 4

Microsoft @ DevCon4

DevCon4 is the Ethereum conference for designers, developers, researchers, and artists that took place in Prague last week, which is where Microsoft released an enclave-ready Ethereum Virtual Machine (eEVM), which is an open-source, standalone, embeddable, C++ implementation of the Ethereum Virtual Machine. We expect that this codebase will be used as a starting point in projects across the ecosystem to have Trusted Execution Environment (TEE)-enabled EVM contract logic work on any blockchain or in off-chain compute scenarios.

Your guide to Azure Stack, Azure Data Box, and Avere Ignite sessions

Hybrid cloud is evolving from being the integration of a datacenter with the public cloud, to becoming units of computing available at even the world’s most remote destinations working in connection with public cloud. Azure Stack is a hybrid cloud platform that lets you provide Azure services from your datacenter, Azure Data Box helps you move large amounts of data to Azure in a cost-effective way, and Avere vFXT for Azure provides high-performance file access for high-performance computing (HPC) applications. This post provides curated learning paths consisting of on-demand content from Ignite 2018 to help you learn more about them.

Azure shows

Episode 253 - Azure Data Lake Service - Gen 2 | The Azure Podcast

James Baker, a Principal PM in the Azure team, talks to us about the latest offering in the Big Data space - Azure Data Lake Service - Gen 2. He gives us the low-down on what's new and why this is such a big deal for existing and new customers.

RI instance size flexibility & reservations for Azure Cosmos DB, SQL DB, and SUSE | Azure Friday

Yashesvi Sharma joins Scott Hanselman to discuss reserved virtual machine instances and how the reservation you buy can apply to other virtual machines (VMs) sizes in the same size series group. This ensures that you maximize your discounts and make reservation management easier. Also, you can now save even more by purchasing reservations for SQL Database, Azure Cosmos DB, and SUSE Linux usage on Azure.

Hybrid data movement across multiple Azure Data Factories | Azure Friday

Gaurav Malhotra and Scott Hanselman discuss how you can now share a self-hosted Integration Runtime (IR) across multiple data factories and consolidate on a single, highly available, multi-node (up to 4 nodes) self-hosted IR infrastructure. Doing so removes the need for separate, self-hosted IRs per project/data factory making it more manageable for the IT/DevOps team.

What’s new with Speech Services: This Week in Cognitive | AI Show

This week we are taking a look at the newly Generally Available service called Unified Speech as well as some preview services in speech that are fun to play around with. We will walk through the resources you need to get started and show you a demo that will get you thinking about how you can use Cognitive Search, Bots and Custom Speech to deliver delightful experiences to customers.

Learn How to Deploy Machine Learning Models! | AI Show

In this episode, we will provide step by step guidance on how to deploy machine learning models using the Visual Studio Code Tools for AI extension and Azure Machine Learning service.

Microsoft’s Smart Campus IoT and AI project “Garcon” | Internet of Things Show

When IoT and AI come together to make buildings smarter, anything can happen. Microsoft's campus is a great playground for developing and testing solutions such as project "Garcon". Ganesh and Olivier give us a quick walkthrough and demo of the project, including the infamous "tell me a joke" one.

Electric Imp seamlessly integrates with Azure IoT | Internet of Things Show

Join Electric Imp CEO Hugo Fiennes on the IoT Show as he describes their secure IoT connectivity platform and how it seamlessly integrates with Azure IoT Hub. He'll also be demo'ing the Electric Imp cellular breakout board connected to Azure IoT Central.

What’s New in Azure Managed Disks | Tuesdays with Corey

Corey Sanders, Corporate VP - Microsoft Azure Compute team sat down with Kay Singh, Senior PM on the Azure compute team to talk about Managed Disks in Azure! We're talkin' swapping the OS image, enhanced disk metrics and moving disks between Azure subscriptions.

Technical content

Design patterns – IoT and aggregation

In this article, you will learn how to insert IoT data with high throughput and then use aggregations in different fields for reporting. Many databases achieve extremely high throughput and low latency because they partition the data, such as MongoDB, HBase, Cassandra, or Azure Cosmos DB. But what if you would like to partition the data for the insert scenario and also group the data on a different partition key for the reporting scenario? Unfortunately, these are mismatched requirements. You have two options to solve this problem, which are outlined in this post.

Solution diagram showing use Azure Cosmos DB’s change feed and Azure Function to aggregate the data per hours and then store the aggregated data in another collection

Automate your Azure Database for MySQL deployments using ARM templates

The Azure Database for MySQL REST API enables DevOps engineers to automate and integrate provisioning while configuring and operating managed MySQL servers and databases in Azure. The API allows the creation, enumeration, management, and deletion of MySQL servers and databases on the Azure Database for MySQL service. Learn how Azure Resource Manager (ARM) templates leverage the underlying REST API to declare and program the Azure resources required for deployments at scale, in line with the infrastructure-as-code concept.

Azure Cosmos DB – A polymorphic database for an expanding data universe

We now live in an “expanding data universe”, in which there is no single paradigm that fits best for all data structures or data persistence scenarios. This post traces back through history to reflect on how NoSQL databases first rose to prominence and how one database covers more ground than other databases across these expanding and maturing paradigms, the polymorphic Azure Cosmos DB.

Making Your Node.js Work Everywhere with Environment Variables

In this post, you'll learn how environment variables allow your apps to run anywhere they need to run - from your computer to your colleagues' machines, internal company servers, cloud servers, or inside containers. John Papa shares the what and when, then dives into the tools and ways he uses environment variables across scenarios - complete with pro tips and recommendations.

Photo by John Papa of his open sketch journal on a wood tabletop showing an illustration of various environment variables with pens used to draw it
Photo: @john_papa

Additional technical content

Azure Tips & Tricks

How to renew or revoke Azure Functions keys | Azure Tips & Tricks

Thumbnail from How to renew or revoke Azure Functions keys by Azure Tips & Tricks from YouTube

Learn how to quickly renew or revoke Azure Functions keys using the Azure portal. When working with HTTP triggers with Azure Functions, you are provided with a set of keys that you could use to authorize who can and can't access your functions.

How to work with the Azure Functions File System | Azure Tips & Tricks

Thumbnail from How to work with the Azure Functions File System by Azure Tips & Tricks from YouTube

Learn how to quickly rename Azure functions using the Azure Portal Console. Working with the Azure Functions File System, you can easily rename your functions using the command line.

Azure DevOps

Damian Brady on DevOps for Data Science and Machine Learning - Episode 008 | The Azure DevOps Podcast

Damian and Jeffrey talk all things data science and machine learning. Damian answers key questions such as: what has been the biggest change in the area of data science since the Azure DevOps release? What does source control look like for data science projects in DevOps? And more. He also explains some of the interesting architectures he has put together for machine learning and gives his recommendations for those who want to go even further with data science after listening to this week’s episode.

Customers and partners

Bringing digital ledger interoperability to Nasdaq Financial Framework through Microsoft Azure Blockchain

To accelerate Nasdaq’s blockchain capabilities aligned with the industry’s rising demand, the company is integrating the Nasdaq Financial Framework with Microsoft Azure Blockchain to build a ledger agnostic blockchain capability that supports a multi-ledger strategy.

Photograph in Times Square showing the Microsoft and NASDAQ logos on a digital billboard

Four models businesses like yours are using to monetize IoT

We’ve helped thousands of customers around the world profit from IoT. Over the course of these engagements, we’ve found that their monetization models tend to fall into four categories: one-time purchasing, value-added product services, turning products into services, and revenue-sharing.

Building an ecosystem for responsible drone use and development on Microsoft Azure

Drones, or unmanned aircraft systems (UAS), are great examples of intelligent edge devices, used today for everything from search and rescue missions and natural disaster recovery to increasing the world’s food supply with precision agriculture. Two important announcements further our commitment to the responsible use of drones as commercial IoT edge devices running on Microsoft Azure: AirMap has selected Microsoft Azure as the company's exclusive cloud-computing platform for its drone traffic management platform and developer ecosystem, and the DJI Windows SDK for app development enters public preview.

Azure Marketplace new offers – Volume 23

The Azure Marketplace is the premier destination for all your software needs – certified and optimized to run on Azure. Find, try, purchase, and provision applications & services from hundreds of leading software providers. You can also connect with Gold and Silver Microsoft Cloud Competency partners to help your adoption of Azure. In the second half of September we published 33 new offers.


Azure This Week - 2 November 2018 | A Cloud Guru

Thumbnail from Azure This Week - 2 November 2018 by A Cloud Guru on YouTube

In this episode of Azure This Week, Dean takes a look at some new features in Event Grid, the expansion of Azure Availability Zones, and a new free O’Reilly e-book on creating your first AI-powered bot on Azure.

Best practices for alerting on metrics with Azure Database for MySQL monitoring


Whether you are a developer, database administrator, site reliability engineer, or a DevOps professional, monitoring databases is an important part of maintaining the reliability, availability, and performance of your MySQL server. There are various metrics available for you in Microsoft Azure Database for MySQL to get insights on the behavior of the server. You can also set alerts on these metrics using the Azure portal or Azure CLI.

MyServer - Alert rules

With modern applications evolving from a traditional on-premises approach to becoming more hybrid or cloud-native, there is also a need to adopt some best practices for a successful monitoring strategy on a hybrid and public cloud. Here are some example best practices for using monitoring data on your MySQL server, and areas you can consider improving based on these various metrics.

Active connections

Sample threshold (percentage or value): 80 percent of total connection limit for greater than or equal to 30 minutes, checked every five minutes.

Things to check:

  • If you notice that active connections are at 80 percent of the total limit for the past half hour, verify if this is expected based on the workload.
  • If you think the load is expected, the active connections limit can be increased by upgrading the pricing tier or vCores. You can check the active connection limits for each SKU.

Active Connections

Failed connections

Sample threshold (percentage or value): 10 failed connections in the last 30 minutes, checked every five minutes.

Things to check:

  • If you see connection request failures over the last half hour, verify if this is expected by checking the logs for failure reasons.

Failed Connections

  • If this is a user error, take the appropriate action. For example, if there is an authentication failed error, check your username/password.
  • If the error is SSL related, check that the SSL settings and input parameters are properly configured.
    • Example (the server and login names here are placeholders): mysql -h mydemoserver.mysql.database.azure.com -u mylogin@mydemoserver -p --ssl-mode=REQUIRED

CPU percent or memory percent

Sample threshold (percentage or value): 100 percent for five minutes or 95 percent for more than two hours.

Things to check:

  • If you have hit 100 percent CPU or memory usage, check your application telemetry or logs to understand the impact of the errors.
  • Review the number of active connections. Check for connection limits. If your application has exceeded the maximum connections or is reaching the limits, then consider scaling up compute resources.

IO percent

Sample threshold (percentage or value): 90 percent usage for greater than or equal to 60 minutes.

Things to check:

  • If you see that IOPS is at 90 percent for one hour or more, verify if this is expected based on the application workload.
  • If you expect a high load, then increase the IOPS limit by increasing storage. Storage to IOPS mapping is below for reference.

Storage

The storage you provision is the amount of storage capacity available to your Azure Database for MySQL server. The storage is used for the database files, temporary files, transaction logs, and the MySQL server logs. The total amount of storage you provision also defines the I/O capacity available to your server.

Attribute              | Basic                  | General purpose                | Memory optimized
Storage type           | Azure Standard Storage | Azure Premium Storage          | Azure Premium Storage
Storage size           | 5 GB to 1 TB           | 5 GB to 4 TB                   | 5 GB to 4 TB
Storage increment size | 1 GB                   | 1 GB                           | 1 GB
IOPS                   | Variable               | 3 IOPS/GB (min 100, max 6,000) | 3 IOPS/GB (min 100, max 6,000)

You can add additional storage capacity during and after the creation of the server. The Basic tier does not provide an IOPS guarantee. In the General purpose and Memory optimized pricing tiers, the IOPS scale with the provisioned storage size in a three-to-one ratio; for example, 200 GB of provisioned storage yields 600 IOPS, up to the 6,000 IOPS cap.

Storage percent

Sample threshold (percentage or value): 80 percent

Things to check:

  • If your server is reaching the provisioned storage limit, it will soon be out of space and set to read-only.
  • Monitor your usage, and provision more storage as needed so you can continue using the server without deleting files, logs, and more.

If you have tried everything and none of the monitoring tips mentioned above lead you to a resolution, please don't hesitate to contact Microsoft Azure Support.

Acknowledgments

Special thanks to Anandsagar Kothapalli, Bassu Hiremath, Kalyan Sayyaparaju, Parikshit Savjani, and Praveen Barli for their contributions to this posting.

Best practices for alerting on metrics with Azure Database for PostgreSQL monitoring


Whether you are a developer, database administrator, site reliability engineer, or a DevOps professional, monitoring databases is an important part of maintaining the reliability, availability, and performance of your PostgreSQL server. There are various metrics available for you in Microsoft Azure Database for PostgreSQL to get insights on the behavior of the server. You can also set alerts on these metrics using the Azure portal or Azure CLI.

MyServer - Alert rules

With modern applications evolving from a traditional on-premises approach to becoming more hybrid or cloud-native, there is also a need to adopt some best practices for a successful monitoring strategy on a hybrid and public cloud. Here are some example best practices for using monitoring data on your PostgreSQL server, and areas you can consider improving based on these various metrics.

Active connections

Sample threshold (percentage or value): 80 percent of total connection limit for greater than or equal to 30 minutes, checked every five minutes.

Things to check:

  • If you notice that active connections are at 80 percent of the total limit for the past half hour, verify if this is expected based on the workload.
  • If you think the load is expected, the active connections limit can be increased by upgrading the pricing tier or vCores. You can check the active connection limits for each SKU.

Active Connections

Failed connections

Sample threshold (percentage or value): 10 failed connections in the last 30 minutes, checked every 5 minutes.

Things to check:

  • If you see connection request failures over the last half hour, verify if this is expected by checking the logs for failure reasons.

Failed Connections

  • If this is a user error, take the appropriate action. For example, if there is an authentication failed error, check your username/password.
  • If the error is SSL related, check that the SSL settings and input parameters are properly configured.
    • For example: psql "sslmode=verify-ca sslrootcert=root.crt host=mydemoserver.postgres.database.azure.com dbname=postgres user=mylogin@mydemoserver"

CPU percent or memory percent

Sample threshold (percentage or value): 100 percent for 5 minutes or 95 percent for more than two hours.

Things to check:

  • If you have hit 100 percent CPU or memory usage, check your application telemetry or logs to understand the impact of the errors.
  • Review the number of active connections. Check for connection limits. If your application has exceeded the maximum connections or is reaching the limits, then consider scaling up compute resources.
  • Another tool to help manage your application and optimize your workload is Query Performance Insight. Refer to the Query Store and its usage scenarios.

Query Performance Insight

IO percent

Sample threshold (percentage or value): 90 percent usage for greater than or equal to 60 minutes.

Things to check:

  • If you see that IOPS is at 90 percent for one hour or more, verify if this is expected based on the application workload.
  • If you expect a high load, then increase the IOPS limit by increasing storage. Storage to IOPS mapping is below for reference.

Storage

The storage you provision is the amount of storage capacity available to your Azure Database for PostgreSQL server. The storage is used for the database files, temporary files, transaction logs, and the PostgreSQL server logs. The total amount of storage you provision also defines the I/O capacity available to your server.

Attribute              | Basic                  | General purpose                | Memory optimized
Storage type           | Azure Standard Storage | Azure Premium Storage          | Azure Premium Storage
Storage size           | 5 GB to 1 TB           | 5 GB to 4 TB                   | 5 GB to 4 TB
Storage increment size | 1 GB                   | 1 GB                           | 1 GB
IOPS                   | Variable               | 3 IOPS/GB (min 100, max 6,000) | 3 IOPS/GB (min 100, max 6,000)

You can add additional storage capacity during and after the creation of the server. The Basic tier does not provide an IOPS guarantee. In the General purpose and Memory optimized pricing tiers, the IOPS scale with the provisioned storage size in a three-to-one ratio; for example, 200 GB of provisioned storage yields 600 IOPS, up to the 6,000 IOPS cap.

Storage percent

Sample threshold (percentage or value):

  • Less than or equal to 10GB, 80 percent threshold.
  • Less than or equal to 100GB, 90 percent threshold.
  • Everything else, 95 percent threshold.

Things to check:

  • If your server is reaching the provisioned storage limit, it will soon be out of space and set to read-only.
  • Monitor your usage, and provision more storage as needed so you can continue using the server without deleting files, logs, and more.

If you have tried everything and none of the monitoring tips mentioned above lead you to a resolution, please don't hesitate to contact Microsoft Azure Support for assistance.

Acknowledgments

Special thanks to Anandsagar Kothapalli, Bassu Hiremath, Kalyan Sayyaparaju, Parikshit Savjani, and Praveen Barli for their contributions to this posting.
