
Azure IoT Reference Architecture 2.1 release

A few months ago, we released a significant update to the Azure IoT Reference Architecture, a downloadable resource that aims to accelerate customers building IoT solutions on Azure by providing a proven, production-ready architecture and technology implementation choices.

Today, we are happy to release updated version 2.1 of the Azure IoT Reference Architecture. The document offers an overview of the IoT space, recommended subsystem factoring for scalable IoT solutions, prescriptive technology recommendations per subsystem, and detailed sections that explore use cases and technology alternatives.

This latest version of the guide includes four essential updates:

  1. Guidance to build IoT solutions by leveraging SaaS (Azure IoT Central), PaaS (Azure IoT solution accelerators), or IaaS (using an OSS stack). Azure IoT Central is a fully managed global IoT SaaS (software-as-a-service) solution that makes it easy to connect, monitor, and manage your IoT assets at scale. Azure IoT solution accelerators are open source offerings that provide end-to-end examples showcasing the use of Azure technologies to achieve faster time to market and time to value.
  2. Guidance on incorporating Azure IoT Edge as the intelligent edge for expanding the set of connected devices that gather telemetry, generate insights, and take action based on information close to the physical world. Azure IoT Edge helps deliver cloud intelligence locally by deploying and running Azure services securely on connected or disconnected IoT devices.
  3. Guidance to build appropriate logging and monitoring into IoT solutions to determine overall solution health. Logging and monitoring systems help determine whether the solution, its related devices, and its systems are functioning to meet business and customer expectations. Azure Operations Management Suite (OMS) helps administrators manage their intelligent edge and cloud environments more efficiently by giving them greater visibility into their operational infrastructure.
  4. Inclusion of Azure Time Series Insights (TSI) as a recommended storage subsystem. TSI capabilities include SQL-like filtering, aggregation, REST query APIs, and a data explorer to visualize information.

Customers can use the Azure IoT Reference Architecture as well as reference architecture implementations, such as Remote Monitoring and Connected Factory solution accelerators to guide their implementation choices.

We’d love to hear your thoughts, ideas, and suggestions based on your experience with building production IoT solutions with Azure. We are planning consistent updates to the reference architecture over the coming months so please email us at AzureIoTRefArcVoice@microsoft.com with your feedback.


ONNX Runtime for inferencing machine learning models now in preview

We are excited to release the preview of ONNX Runtime, a high-performance inference engine for machine learning models in the Open Neural Network Exchange (ONNX) format. ONNX Runtime is compatible with ONNX version 1.2 and comes in Python packages that support both CPU and GPU, enabling inferencing with the Azure Machine Learning service or on any Linux machine running Ubuntu 16.

ONNX is an open source model format for deep learning and traditional machine learning. Since we launched ONNX in December 2017, it has gained support from more than 20 leading companies in the industry. ONNX gives data scientists and developers the freedom to choose the right framework for their task, as well as the confidence to run their models efficiently on a variety of platforms with the hardware of their choice.


The ONNX Runtime inference engine provides comprehensive coverage and support of all operators defined in ONNX. Developed with extensibility and performance in mind, it leverages a variety of custom accelerators based on platform and hardware selection to provide minimal compute latency and resource usage. Given the platform, hardware configuration, and operators defined within a model, ONNX Runtime can utilize the most efficient execution provider to deliver the best overall performance for inferencing.

The pluggable model for execution providers allows ONNX Runtime to rapidly adapt to new software and hardware advancements. The execution provider interface is a standard way for hardware accelerators to expose their capabilities to the ONNX Runtime. We have active collaborations with companies including Intel and NVIDIA to ensure that ONNX Runtime is optimized for compute acceleration on their specialized hardware. Examples of these execution providers include Intel's MKL-DNN and nGraph, as well as NVIDIA's optimized TensorRT.


The release of ONNX Runtime expands upon Microsoft's existing support of ONNX, allowing you to run inferencing of ONNX models across a variety of platforms and devices.

Azure: Using the ONNX Runtime Python package, you can deploy an ONNX model to the cloud with Azure Machine Learning as an Azure Container Instance or production-scale Azure Kubernetes Service. Here are some examples to get started.

.NET:  You can integrate ONNX models into your .NET apps with ML.NET.

Windows Devices: You can run ONNX models on a wide variety of Windows devices using the built-in Windows Machine Learning APIs available in the latest Windows 10 October 2018 update.


Using ONNX

Get an ONNX model

Getting an ONNX model is simple: choose from a selection of popular pre-trained ONNX models in the ONNX Model Zoo, build your own image classification model using Azure Custom Vision service, convert existing models from other frameworks to ONNX, or train a custom model in AzureML and save it in the ONNX format.


Inference with ONNX Runtime

Once you have a trained model in ONNX format, you're ready to feed it through ONNX Runtime for inferencing. The pre-built Python packages include integration with various execution providers, offering low compute latency and resource usage. The GPU build requires CUDA 9.1.

To start, install the desired package from PyPi in your Python environment:

pip install onnxruntime        # CPU package
pip install onnxruntime-gpu    # GPU package (requires CUDA 9.1)

Then, create an inference session to begin working with your model.

import onnxruntime
session = onnxruntime.InferenceSession("your_model.onnx")  

Finally, run the inference session with your selected outputs and inputs to get the predicted value(s).

# Passing None for the outputs returns all model outputs; the input name ("input1" here) must match the model's declared input name.
prediction = session.run(None, {"input1": value})

For more details, refer to the full API documentation.  

Now you are ready to deploy your ONNX model for your application or service to use.

Get started today

As champions of open and interoperable AI, we are actively invested in building products and tooling to help you efficiently deliver new and exciting AI innovation. We are excited for the community to participate and try out ONNX Runtime! Get started today by installing ONNX Runtime and let us know your feedback on the Azure Machine Learning Service Forum.

Customer Notes: Diagnosing issues under load of Web API app migrated to ASP.NET Core on Linux

When the engineers on the ASP.NET/.NET Core team talk to real customers about actual production problems they have, interesting stuff comes up. I've tried to capture a real customer interaction here without giving away their name or details.

The team recently had the opportunity to help a large customer of .NET investigate performance issues they’ve been having with a newly-ported ASP.NET Core 2.1 app when under load. The customer's developers are experienced with ASP.NET on Windows but in this case they needed help getting started with performance investigations with ASP.NET Core in Linux containers.

As with many performance investigations, there were a variety of issues contributing to the slowdowns, but the largest contributors were time spent garbage collecting (due to unnecessary large object allocations) and blocking calls that could be made asynchronous.

After resolving the technical and architectural issues detailed below, the customer's Web API went from only being able to handle several hundred concurrent users during load testing to being able to easily handle 3,000 and they are now running the new ASP.NET Core version of their backend web API in production.

Problem Statement

The customer recently migrated their .NET Framework 4.x ASP.NET-based backend Web API to ASP.NET Core 2.1. The migration was broad in scope and included a variety of tech changes.

Their previous version Web API (We'll call it version 1) ran as an ASP.NET application (targeting .NET Framework 4.7.1) under IIS on Windows Server and used SQL Server databases (via Entity Framework) to persist data. The new (2.0) version of the application runs as an ASP.NET Core 2.1 app in Linux Docker containers with PostgreSQL backend databases (via Entity Framework Core). They used Nginx to load balance between multiple containers on a server and HAProxy load balancers between their two main servers. The Docker containers are managed manually or via Ansible integration for CI/CD (using Bamboo).

Although the new Web API worked well functionally, load tests began failing with only a few hundred concurrent users. Based on current user load and projected growth, they wanted the web API to support at least 2,000 concurrent users. Load testing was done using Visual Studio Team Services load tests running a combination of web tests mimicking users logging in, doing the stuff of their business, activating tasks in their application, as well as pings that the Mobile App's client makes regularly to check for backend connectivity. This customer also uses New Relic for application telemetry and, until recently, New Relic agents did not work with .NET Core 2.1. Because of this, there was unfortunately no app diagnostic information to help pinpoint sources of slowdowns.

Lessons Learned

Cross-Platform Investigations

One of the most interesting takeaways for me was not the specific performance issues encountered but, instead, the challenges this customer had working in a Linux environment. The team's developers are experienced with ASP.NET on Windows and comfortable debugging in Visual Studio. Despite this, the move to Linux containers has been challenging for them.

Because the engineers were unfamiliar with Linux, they hired a consultant to help deploy their Docker containers on Linux servers. This model worked to get the site deployed and running, but became a problem when the main backend began exhibiting performance issues. The performance problems only manifested themselves under a fairly heavy load, such that they could not be reproduced on a dev machine. Up until this investigation, the developers had never debugged on Linux or inside of a Docker container except when launching in a local container from Visual Studio with F5. They had no idea how to even begin diagnosing issues that only reproduced in their staging or production environments. Similarly, their dev-ops consultant was knowledgeable about Linux infrastructure but not familiar with application debugging or profiling tools like Visual Studio.

The ASP.NET team has some documentation on using PerfCollect and PerfView to gather cross-platform diagnostics, but the customer's devs did not manage to find these docs until they were pointed out. Once an ASP.NET Core team engineer spent a morning showing them how to use PerfCollect, LLDB, and other cross-platform debugging and performance profiling tools, they were able to make some serious headway debugging on their own. We want to make sure everyone can debug .NET Core on Linux with LLDB/SOS or remotely with Visual Studio as easily as possible.

The ASP.NET Core team now believes they need more documentation on how to diagnose issues in non-Windows environments (including Docker) and the documentation that already exists needs to be more discoverable. Important topics to make discoverable include PerfCollect, PerfView, debugging on Linux using LLDB and SOS, and possibly remote debugging with Visual Studio over SSH.

Issues in Web API Code

Once we gathered diagnostics, most of the perf issues ended up being common problems in the customer’s code. 

  1. The largest contributor to the app’s slowdown was frequent Generation 2 (Gen 2) GCs (Garbage Collections), which were happening because a commonly-used code path was downloading a lot of images (product images), converting those bytes into base64 strings, responding to the client with those strings, and then discarding the byte[] and string. The images were fairly large (>100 KB), so every time one was downloaded, a large byte[] and string had to be allocated. Because many of the images were shared between multiple clients, we solved the issue by caching the base64 strings for a short period of time (using IMemoryCache; see the caching sketch after this list).
  2. HttpClient Pooling with HttpClientFactory
    1. When calling out to Web APIs there was a pattern of creating new HttpClient instances rather than using IHttpClientFactory to pool the clients (see the registration sketch after this list).
    2. Despite implementing IDisposable, it is not a best practice to dispose HttpClient instances as soon as they’re out of scope as they will leave their socket connection in a TIME_WAIT state for some time after being disposed. Instead, HttpClient instances should be re-used.
  3. Additional investigation showed that much of the application’s time was spent querying PostgreSQL for data (as is common). There were several underlying issues here.
    1. Database queries were being made in a blocking way instead of being asynchronous. We helped address the most common call-sites and pointed the customer at the AsyncUsageAnalyzer to identify other async cleanup that could help.
    2. Database connection pooling was not enabled. It is enabled by default for SQL Server, but not for PostgreSQL.
      1. We re-enabled database connection pooling. It was necessary to have different pooling settings for the common database (used by all requests) and the individual shard databases which are used less frequently. While the common database needs a large pool, the shard connection pools need to be small to avoid having too many open, idle connections.
    3. The Web API had a fairly ‘chatty’ interface with the database and made a lot of small queries. We re-worked this interface to make fewer calls (by querying more data at once or by caching for short periods of time).
  4. There was also some impact from having other background worker containers on the web API’s servers consuming large amounts of CPU. This led to a ‘noisy neighbor’ problem where the web API containers didn’t have enough CPU time for their work. We showed the customer how to address this with Docker resource constraints.
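
As a rough sketch of the caching approach from item 1 (the service, names, and cache duration here are illustrative, not the customer's actual code), IMemoryCache can be used like this:

using System;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Memory;

public class ProductImageService
{
    private readonly IMemoryCache _cache;
    private readonly HttpClient _httpClient;

    public ProductImageService(IMemoryCache cache, HttpClient httpClient)
    {
        _cache = cache;
        _httpClient = httpClient;
    }

    public Task<string> GetImageBase64Async(string imageUrl)
    {
        // GetOrCreateAsync returns the cached string when present; otherwise it
        // downloads the image once, converts it, and caches the result briefly.
        return _cache.GetOrCreateAsync(imageUrl, async entry =>
        {
            entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5);
            byte[] bytes = await _httpClient.GetByteArrayAsync(imageUrl);
            return Convert.ToBase64String(bytes);
        });
    }
}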
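
And for item 2, a minimal registration sketch using IHttpClientFactory (the typed client and endpoint are hypothetical): registering a typed client in Startup.ConfigureServices lets the factory pool and recycle the underlying handlers instead of leaving sockets in TIME_WAIT:

// In Startup.ConfigureServices
services.AddHttpClient<ProductImageService>(client =>
{
    // Hypothetical base address for the downstream image service
    client.BaseAddress = new Uri("https://images.example.com/");
});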

Wrap Up

As shown in the graph below, at the end of our performance tuning, their backend was easily able to handle 3,000 concurrent users and they are now using their ASP.NET Core solution in production. The performance issues they saw overlapped a lot with those we’ve seen from other customers (especially the need for caching and for async calls), but proved to be extra challenging for the developers to diagnose due to the lack of familiarity with Linux and Docker environments.

Performance and Errors Charts look good, up and to the right
Throughput and Tests Charts look good, up and to the right

Some key areas of focus uncovered by this investigation were:

  • Being mindful of memory allocations to minimize GC pause times

  • Keeping long-running calls non-blocking/asynchronous

  • Minimizing calls to external resources (such as other web services or the database) with caching and grouping of requests

Hope you find this useful! Big thanks to Mike Rousos from the ASP.NET Core team for his work and analysis!


Sponsor: Check out the latest JetBrains Rider with built-in spell checking, enhanced debugger, Docker support, full C# 7.3 support, publishing to IIS and more advanced Unity support.



© 2018 Scott Hanselman. All rights reserved.
     

What’s new in Azure Media Services video processing

Developers and media companies trust and rely on Azure Media Services for the ability to encode, protect, index, and deliver videos at scale. This week we are proud to announce several enhancements to Media Services including the general availability of the new Azure Media Services v3 API, as well as updates to Azure Media Player.

Low-latency live streaming, 24-hour transcoding, CMAF, and a shiny new API (v3) ready for production

The Azure Media Services v3 API, announced at the Build conference in May 2018, provides a simplified development model, better integration with key Azure services like Event Grid and Functions, and much more. The API is now generally available and comes with many exciting new features. You can begin migrating workloads built on the preview API over to production use today.

What’s new?

The new Media Services v3 API is a major milestone in the enhancement of the developer experience for Media Services customers. The new API provides a set of SDKs for .NET, .NET Core, Java, Go, Python, Ruby, and Node.js! In addition, the API includes support for the following key scenarios.

Low-latency live streaming with 24-hour transcoding

LiveEvent, the replacement for the Channel entity in the v2 API, now has several major service enhancements.

We often receive the request to lower the latency when streaming live events. Our new low-latency live streaming mode is now available exclusively on the LiveEvent entity in our v3 API. It supports 8 seconds end-to-end latency when used in combination with Azure Media Player’s new low-latency heuristic profile, or ~10 seconds with native HLS playback on an Apple iOS device. Simply configure your live encoder to use smaller 1-second GOP sizes, and you can quickly reduce your overall latency when delivering content to small or medium sized audiences. Of course, it should be noted that the end-to-end latency can vary depending on local network conditions or by introducing a CDN caching layer. Test your exact configuration as your latency could vary.

Looking forward, we will continue to make improvements to our low-latency solution. Last month we announced that we are joining the open source SRT Alliance to help improve low-latency live streaming to Azure with secure and reliable ingest to the cloud. As part of this announcement we have already begun work to add SRT ingest protocol support to our LiveEvent.

To use the new LowLatency feature, you set the StreamOptionsFlag to LowLatency on the LiveEvent when you create it.

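A minimal sketch with the v3 .NET management SDK (Microsoft.Azure.Management.Media) might look like the following; the client object, resource names, and location are placeholders, and other LiveEvent properties are omitted for brevity:

LiveEvent liveEvent = new LiveEvent(
    location: "West US 2",
    input: new LiveEventInput(LiveEventInputProtocol.RTMP),
    streamOptions: new List<StreamOptionsFlag?>
    {
        // Enables the new low-latency streaming mode
        StreamOptionsFlag.LowLatency
    });

liveEvent = await client.LiveEvents.CreateAsync(
    resourceGroupName, accountName, liveEventName, liveEvent, autoStart: false);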

Once the stream is up and running, use the Azure Media Player Demo page, and set the playback options to use the “Low Latency Heuristics Profile”.

Low Latency Heuristics Profile selection

Next, when streaming live video, you have two options for long-duration streaming events. If you need to provide linear (24x7x365) live streams, you should use an on-premises encoder with our “pass-through”, non-transcoding LiveEvent. If you require live encoding in the cloud, in the v2 API you were limited to 8 hours of running time. We are very pleased to announce that we have increased support for live transcoding durations up to a full 24 hours when using the new LiveEvent.

Lastly, we have verified several updated RTMP(s)-based encoders including the latest releases from MediaExcel, Telestream Wirecast, Haivision KB, and Switcher Studio.

Easier development with Event Grid and Azure Resource Manager

To make your development experience easier across Azure solutions for media, we are offering more notifications for common operations through Azure Event Grid. You can now subscribe to state change events from Job and JobOutput operations in order to better integrate your custom media applications. If you are creating custom workflows in a Transform, you can specify your own correlation data in the Job object. This correlation data can be extracted from the notifications received through Event Grid to help create workflows that solve common problems with multi-tenant applications, or integration with 3rd-party media asset management systems. When monitoring a live stream, you can use new events such as live ingest heartbeat, connected and disconnected events from the upstream live encoder.

Subscribe to any Media Services event through code, Logic Apps, Functions, or via the Azure portal.

Create event subscription

With the transition over to Azure Resource Manager (ARM) for our v3 API, you get the following benefits when managing transforms, live events, DRM keys, streaming endpoints, and assets:

  1. Easier deployment using ARM templates.
  2. Ability to apply role-based access control (RBAC).

Simplified ingest and asset creation

Ingesting content into Media Services used to involve multiple steps such as copying files to Azure Storage, and creating Assets and AssetFiles. In the new API, you can simply point to an existing file in Azure Storage using a SAS URL, or you can ingest from any HTTP(s) accessible URL.

var input = new JobInputHttp(
                     baseUri: "https://nimbuscdn-nimbuspm.streaming.mediaservices.windows.net/2b533311-b215-4409-80af-529c3e853622/",
                     files: new List<String> {"Ignite-short.mp4"}
                     );

We have also simplified the creation of assets in Azure Blob Storage by allowing you to set the container name directly. You can then use the storage APIs to add files into the container. Existing v2 assets will continue to work in the new API, but v3 assets are not backwards compatible.
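
For example, a sketch of creating an asset with an explicit container name via the v3 .NET SDK (the asset and container names here are illustrative, and the client object is assumed to be an authenticated Media Services client):

// Create an asset backed by a specific storage container, then upload
// files into that container with the regular Azure Storage APIs.
Asset asset = await client.Assets.CreateOrUpdateAsync(
    resourceGroupName, accountName, "myInputAsset",
    new Asset(container: "my-input-container"));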

Streaming and Dynamic Packaging with MPEG CMAF

In the service, we have now released official support for the latest MPEG Common Media Application Format (CMAF) with ‘cbcs’ encryption. CMAF, officially known as MPEG-A Part 19 or ISO/IEC 23000-19, is a new multimedia file format that provides for the storage and delivery of streaming media using a single encrypted, adaptive-bitrate format to a wide range of devices including Apple iPhone, Android, and Windows. Streaming service providers will benefit from this common format through improved interoperability, low-latency streaming, and increased CDN cache efficiency.

To use the new CMAF format, simply add the following new “format=” tag to your streaming URLs and choose the appropriate manifest type: HLS (for iOS devices) or DASH (for Windows or Android devices).

For MPEG DASH manifest with CMAF format content, use “format=mpd-time-cmaf” as shown below:

https://<<your-account-name>>.streaming.media.azure.net/<<locator-ID>>/<<manifest-name>>.ism/manifest(format=mpd-time-cmaf)

For HLS manifest with CMAF format content use “format=m3u8-cmaf” as shown below:

https://<<your-account-name>>.streaming.media.azure.net/<<locator-ID>>/<<manifest-name>>.ism/manifest(format=m3u8-cmaf)

Manage Media Services through the Command Line

Finally, we have updated the Azure CLI 2.0 module for Media Services to include all features of the v3 API. We will be releasing the final Media Services CLI module on October 23, 2018 for download, or for use directly within the Cloud Shell. The CLI is designed to make scripting Media Services easy. Use it to query for running Jobs, create Live Events, create custom Transforms, manage content keys, and more. The CLI module also includes support for Streaming Endpoints, content key policies, and dynamic manifest filters.


Try out the new API following these quickstart tutorials:

Stay in touch!

Be sure to also check out Video Indexer which also moved to general availability last month. We’re eager to hear from you about these updates! You can ask questions in our MSDN Forum, submit a question on Stack Overflow, or add new ideas to the Azure Media Services and Video Indexer user voice sites. You can also reach us on twitter @MSFTAzureMedia and @Video_Indexer.

Visual Studio Roadmap Updates and Visual Studio 2019 Information

Yesterday, we covered What’s next for Visual Studio for Mac, and today we’ve updated our Visual Studio Roadmap so you can see the latest news about what we’re working on. We’re particularly excited to share this update since it includes information about the first preview of Visual Studio 2019, which we will make available by the end of this calendar year. We plan to have a generally available (GA) version of Visual Studio 2019 in the first half of 2019.

Be sure to check out the full roadmap for all the updates, but some notable improvements are:  

  • A better performing and more reliable debugger, which moves out of the main Visual Studio process into a 64-bit external process.
  • Improved search accuracy for menus, commands, options, and installable components.
  • Visual Studio tooling for Windows Forms and WPF development on .NET Core 3.

As always, we are committed to making Visual Studio a great development environment: faster, more reliable, more productive for individuals and teams, and easier to get started with developing your code. Please continue to share your feedback with us so we can make the best tools for your development. You can send us your suggestions, ideas, and concerns through our Developer Community portal.

We will keep updating the roadmap as we deliver Visual Studio 2019 features iteratively. To try out the newest features and fixes, keep an eye on the Visual Studio Preview page and on this blog for announcements about the latest previews.

John Montgomery, Director of Program Management for Visual Studio
@JohnMont

John is responsible for product design and customer success for all of Visual Studio, C++, C#, VB, JavaScript, and .NET. John has been at Microsoft for 17 years, working in developer technologies the whole time.

ASP.NET Core 2.2.0-preview3 now available

Today we’re very happy to announce that the third preview of the next minor release of ASP.NET Core and .NET Core is now available for you to try out. We’ve been working hard on this release, along with many folks from the community, and it’s now ready for a wider audience to try it out and provide the feedback that will continue to shape the release.

How do I get it?

You can download the new .NET Core SDK for 2.2.0-preview3 (which includes ASP.NET 2.2.0-preview3) from https://www.microsoft.com/net/download/dotnet-core/2.2

Visual Studio requirements

Customers using Visual Studio should also install and use the Preview channel of Visual Studio 2017 (15.9 Preview 3 or later) in addition to the SDK when working with .NET Core 2.2 and ASP.NET Core 2.2 projects. Please note that the Visual Studio preview channel can be installed side-by-side with an existing Visual Studio installation without disrupting your current development environment.

Azure App Service Requirements

If you are hosting your application on Azure App Service, you can follow these instructions to install the required site extension for hosting your 2.2.0-preview3 applications.

Impact to machines

Please note that this is a preview release and there are likely to be known issues and as-yet-undiscovered bugs. While the .NET Core SDK and runtime installs are side-by-side, your default SDK will become the latest one. If you run into issues working on existing projects using earlier versions of .NET Core after installing the preview SDK, you can force specific projects to use an earlier installed version of the SDK using a global.json file as documented here. Please log an issue if you run into such cases, as SDK releases are intended to be backwards compatible.

What’s new in Preview 3

For a full list of changes, bug fixes, and known issues you can read the release announcement.

Routing

We’ve introduced the concept of parameter transformers to routing in ASP.NET Core 2.2. A parameter transformer customizes the generated route by transforming the parameter’s route values, and gives developers new options when generating routes. For example, a custom slugify parameter transformer in the route pattern blog/{article:slugify} with Url.Action(new { article = "MyTestArticle" }) generates blog/my-test-article. Parameter transformers implement Microsoft.AspNetCore.Routing.IOutboundParameterTransformer and are configured using ConstraintMap.
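
As an illustration (a minimal sketch; the regex and class name below are ours, not from the announcement), a slugify transformer and its registration could look like this:

using Microsoft.AspNetCore.Routing;
using Microsoft.Extensions.DependencyInjection;
using System.Text.RegularExpressions;

// Turns "MyTestArticle" into "my-test-article" when routes are generated.
public class SlugifyParameterTransformer : IOutboundParameterTransformer
{
    public string TransformOutbound(object value)
    {
        if (value == null) { return null; }

        // Insert a hyphen at each lower-to-upper case boundary, then lower-case the result.
        return Regex.Replace(value.ToString(), "([a-z])([A-Z])", "$1-$2").ToLowerInvariant();
    }
}

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();

        // "slugify" is the key referenced by route templates such as blog/{article:slugify}
        services.AddRouting(options =>
            options.ConstraintMap["slugify"] = typeof(SlugifyParameterTransformer));
    }
}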

These features are specific to the new endpoint routing system used in MVC by default in 2.2.

Parameter transformers are also used by frameworks to transform the URI to which an endpoint resolves. For example, ASP.NET Core MVC uses parameter transformers to transform the route value used to match an area, controller, action, and page.

routes.MapRoute(
    name: "default",
    template: "{controller=Home:slugify}/{action=Index:slugify}/{id?}");

With the preceding route, the action SubscriptionManagementController.GetAll() is matched with the URI /subscription-management/get-all. A parameter transformer doesn’t change the route values used to generate a link. Url.Action("GetAll", "SubscriptionManagement") outputs /subscription-management/get-all.

ASP.NET Core provides API conventions for using parameter transformers with generated routes:

  • MVC has the Microsoft.AspNetCore.Mvc.ApplicationModels.RouteTokenTransformerConvention API convention. This convention applies a specified parameter transformer to all attribute routes in the app. The parameter transformer will transform attribute route tokens as they are replaced. For more information, see Use a parameter transformer to customize token replacement.
  • Razor pages has the Microsoft.AspNetCore.Mvc.ApplicationModels.PageRouteTransformerConvention API convention. This convention applies a specified parameter transformer to all automatically discovered Razor pages. The parameter transformer will transform the folder and file name segments of Razor page routes. For more information, see Use a parameter transformer to customize page routes.

Link Generation

We’ve added a new service called LinkGenerator. It is a singleton service that supports generating paths and absolute URIs, both with and without an HttpContext. If you need to generate links in middleware or somewhere else outside of Razor, then this new service will be useful to you. You can use it in Razor, but the existing APIs like Url.Action are already backed by the new service, so you can continue to use those.

return _linkGenerator.GetPathByAction(
     httpContext,
     controller: "Home",
     action: "Index",
     values: new { id=42 });

For now this is useful to link to MVC actions and pages from outside of MVC. We will add additional features in the next release targeting non-MVC scenarios.

Health Checks

DbContextHealthCheck

We added a new DbContext based check for when you are using Entity Framework Core:

// Registers required services for health checks
services.AddHealthChecks()
        // Registers a health check for the MyContext type. By default the name of the health check will be the
        // name of the DbContext type. There are other options available through AddDbContextCheck to configure
        // failure status, tags, and custom test query.
        .AddDbContextCheck<MyContext>();

This check will make sure that the application can communicate with the database you configured for MyContext. By default the DbContextHealthCheck will call the CanConnectAsync method that is being added to Entity Framework Core 2.2. You can customize what operation is run when checking health using overloads of the AddDbContextCheck method.

Health Check Publisher

We added the IHealthCheckPublisher interface that has a single method you can implement:

Task PublishAsync(HealthReport report, CancellationToken cancellationToken);

If you add an IHealthCheckPublisher to DI, the health checks system will periodically execute your health checks and call PublishAsync with the result. We expect this to be useful when you are interacting with a push-based health system that expects each process to call it periodically in order to determine health.
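
As a rough sketch (the logging publisher below is our own illustration; a real implementation might push the result to an external health system instead):

using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Diagnostics.HealthChecks;
using Microsoft.Extensions.Logging;

public class LoggingHealthCheckPublisher : IHealthCheckPublisher
{
    private readonly ILogger<LoggingHealthCheckPublisher> _logger;

    public LoggingHealthCheckPublisher(ILogger<LoggingHealthCheckPublisher> logger)
    {
        _logger = logger;
    }

    public Task PublishAsync(HealthReport report, CancellationToken cancellationToken)
    {
        // report.Status is the aggregate status of all registered checks
        _logger.LogInformation("Health status: {Status}", report.Status);
        return Task.CompletedTask;
    }
}

// Registered in ConfigureServices:
// services.AddSingleton<IHealthCheckPublisher, LoggingHealthCheckPublisher>();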

Tags

In preview3 we added the ability to tag health checks with a list of strings when you register them:

services.AddHealthChecks()
        .AddDbContextCheck<MyContext>(tags: new[] { "db" });

Once you’ve done this then you can filter execution of your checks via tag:

app.UseHealthChecks("/liveness", new HealthCheckOptions
{
    Predicate = (_) => false
});

app.UseHealthChecks("/readiness", new HealthCheckOptions
{
    Predicate = (check) => check.Tags.Contains("db")
});

We see tags as a convenient grouping and filtering mechanism for the consumers of health checks (application authors) to apply to their health checks, not something that health check authors will pre-populate.

You can also customize what status a failure of a check means for your application. For example, if your application is written such that it can handle the database not being available, then a database being down might mean Degraded rather than Unhealthy.
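
For example, assuming the failure-status option exposed by the AddDbContextCheck overloads mentioned above, the database check could be registered so that a failure only degrades the reported status:

services.AddHealthChecks()
        .AddDbContextCheck<MyContext>(
            failureStatus: HealthStatus.Degraded,
            tags: new[] { "db" });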

Validation Performance Improvements

MVC’s validation system is designed to be extensible and flexible, allowing developers to determine on a per-request basis which validators apply to a given model. This is great for authoring complex validation providers. However, in the most common case your application only uses the built-in validation pieces such as DataAnnotations ([Required], [StringLength], etc., or IValidatableObject) and doesn’t require this extra flexibility.

In 2.2.0-preview3, we’re adding a feature that allows MVC to short-circuit validation if it can determine that a given model graph would not require any validation. This results in significant improvements when validating models that cannot or do not have any associated validators. This includes objects such as collections of primitives (byte[], string[], Dictionary<string, string> etc), or complex object graphs without many validators.

For this model – https://github.com/aspnet/Mvc/blob/release/2.2/benchmarkapps/BasicApi/Models/Pet.cs – the table below compares the difference in Requests Per Second (RPS) with and without the enhancement:

Description        | RPS    | Memory (MB) | Avg. Latency (ms) | Startup (ms) | First Request (ms) | Ratio
Baseline           | 78,738 | 398         | 3.5               | 547          | 111.3              | 1.00
Validation changes | 90,167 | 401         | 2.9               | 541          | 115.9              | 1.15

HTTP Client Performance Improvements

Some significant performance improvements have been made to SocketsHttpHandler by reducing connection pool lock contention. For applications making many outgoing HTTP requests, such as some microservices architectures, throughput should be significantly improved. Our internal benchmarks show that under load HttpClient throughput has improved by 60% on Linux and 20% on Windows. At the same time, the 90th percentile latency was cut in half on Linux. See GitHub #32568 for the actual code change that made this improvement.

Requests Per Second, Linux (higher is better) [chart]

Requests Per Second, Windows (higher is better) [chart]

Request Latency, Linux (lower is better) [chart]

Request Latency, Windows (lower is better) [chart]

ASP.NET Core Module

We added the ability to detect client disconnects when you’re using the new IIS in-process hosting model. The HttpContext.RequestAborted cancellation token now gets tripped when your client disconnects.

The ASP.NET Core Module also features enhanced diagnostics logs that are configurable via the new handler settings or environment variables, exposing higher-fidelity diagnostic information.

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <location path="." inheritInChildApplications="false">
    <system.webServer>
      <handlers>
        <add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModuleV2" resourceType="Unspecified" />
      </handlers>
      <aspNetCore processPath="dotnet" arguments=".\clientdisconnect.dll" stdoutLogEnabled="false" stdoutLogFile=".\logs\stdout" hostingModel="inprocess">
        <handlerSettings>
          <handlerSetting name="debugFile" value="debug.txt" />
          <handlerSetting name="debugLevel" value="TRACE" />
        </handlerSettings> 
      </aspNetCore>
    </system.webServer>
  </location>
</configuration>

SignalR Java Client

Preview 3 includes a few notable changes to the SignalR Java Client as we progress towards a 1.0 release:

The “groupId” for the Maven package has changed to com.microsoft.signalr. To reference the new package from a Maven POM file, add the following dependency:

<dependency>
    <groupId>com.microsoft.signalr</groupId>
    <artifactId>signalr</artifactId>
    <version>VERSION TBD</version>
</dependency>

Or in Gradle:

implementation 'com.microsoft.signalr:signalr:VERSION TBD'

In Preview 3 we’ve changed all the APIs to be asynchronous, using RxJava. Our Java Client documentation will be updated to show the new usage patterns. We also have support for the invoke method, allowing the client code to wait for the server method to complete. This version also includes support for serializing custom types in method arguments and return values.

The Java Client currently requires Android API level 26 or higher. We are investigating moving down to a lower API level before RTM. If you are planning to use SignalR in a Java-based Android application, please comment on the GitHub issue tracking our Android API level support so we know what API level would work well for our users.

Migrating an ASP.NET Core 2.1 project to 2.2

To migrate an ASP.NET Core project from 2.1.x to 2.2.0-preview3, open the project’s .csproj file and change the value of the <TargetFramework> element to netcoreapp2.2. You do not need to do this if you’re targeting .NET Framework 4.x.
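
For example, the project file's property group would end up looking something like this (a minimal sketch):

<Project Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <TargetFramework>netcoreapp2.2</TargetFramework>
  </PropertyGroup>
</Project>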

Giving Feedback

The main purpose of providing previews is to solicit feedback so we can refine and improve the product in time for the final release. Please help provide us feedback by logging issues in the appropriate repository at https://github.com/aspnet or https://github.com/dotnet. We look forward to receiving your feedback!

Announcing Entity Framework Core 2.2 Preview 3

Today we are making EF Core 2.2 Preview 3 available, together with a new preview of our data provider for Cosmos DB and updated spatial extensions for various providers.

Preview 3 is going to be the last milestone before EF Core 2.2 RTM, so now is your last chance to try the bits and give us feedback if you want to have an impact on the quality and the shape of the APIs in this release.

Besides the new features, you can help by trying EF Core 2.2 Preview 3 on applications that are using third-party providers. Although we now have our own testing for this, there might be unforeseen compatibility problems, and the earlier we can detect them, the better the chance we have of addressing them before RTM.

We thank you in advance for reporting any issues you find on our issue tracker on GitHub.

EF Core 2.2 roadmap update

EF Core 2.2 RTM is still planned for the end of the 2018 calendar year, alongside ASP.NET Core 2.2 and .NET Core 2.2.

However, based on a reassessment of the progress we have made so far, and on new information about the work we need to complete 2.2, we are no longer trying to include the following features in the EF Core 2.2 RTM:

  • Reverse engineering database views into query types: This feature is postponed to EF Core 3.0.
  • Cosmos DB Provider: Although we have made a lot of progress setting up the required infrastructure for document-oriented database support in EF Core, and have been steadily adding functionality to the provider, realistically we cannot arrive at a state in which we can release the provider with adequate functionality and quality in the current time frame for 2.2. Overall, we have found the work necessary to complete the provider to be more than we initially estimated. Also, ongoing evolution in Cosmos DB is leading us to frequently revisit decisions about such things as how we use the Cosmos DB SDK and whether we map all entities to a single collection by default. We plan to maintain our focus on the provider, to continue working with the Cosmos DB team, and to keep releasing previews of the provider regularly. You can expect at least one more preview by the end of this year, and RTM sometime in 2019. We haven’t decided yet if the Cosmos DB provider will release as part of EF Core 3.0 or earlier. A good way to keep track of our progress is this checklist in our issue tracker.

Obtaining the preview

The preview bits are available on NuGet, and also as part of ASP.NET Core 2.2 Preview 3 and the .NET Core SDK 2.2 Preview 3, also releasing today. If you want to try the preview in an application based on ASP.NET Core, we recommend you follow the instructions to upgrade to ASP.NET Core 2.2 Preview 3.

The SQL Server and in-memory providers are also included in ASP.NET Core, but for other providers and any other type of application, you will need to install the corresponding NuGet package.

For example, to add the 2.2 Preview 3 version of the SQL Server provider in a .NET Core library or application from the command line, use:

$ dotnet add package Microsoft.EntityFrameworkCore.SqlServer -v 2.2.0-preview3-35497

Or from the Package Manager Console in Visual Studio:

PM> Install-Package Microsoft.EntityFrameworkCore.SqlServer -Version 2.2.0-preview3-35497

For more details on how to add EF Core to your projects see our documentation on Installing Entity Framework Core.

The Cosmos DB provider and the spatial extensions ship as new separate NuGet packages. We’ll explain how to get started with them in the corresponding feature descriptions.

What is new in this preview?

Around 69 issues have been fixed since we finished Preview 2 last month. This includes product bug fixes and improvements to the new features. Specifically about the new features, the most significant changes are:

Spatial extensions

  • We have enabled spatial extensions to work with the SQLite provider using the popular SpatiaLite library.
  • We switched the default mapping of spatial properties on SQL Server from geometry to geography columns.
  • In order to use spatial extensions correctly with preview 3, it is recommended that you use the GeometryFactory provided by NetTopologySuite instead of creating new instances directly.
  • We collaborated with the NetTopologySuite team to create NetTopologySuite.IO.SqlServerBytes — a new IO module that targets .NET Standard and works directly with the SQL Server serialization format.
  • We enabled reverse engineering for databases containing spatial columns. Just make sure you add the spatial extension package for your database provider before you run Scaffold-DbContext or dotnet ef dbcontext scaffold.

Here is an updated usage example:


// Model class
public class Friend
{
  [Key]
  public string Name { get; set; }

  [Required]
  public IPoint Location { get; set; }
}

// Program
private static void Main(string[] args)
{
     // Create spatial factory
     var geometryFactory = NtsGeometryServices.Instance.CreateGeometryFactory(srid: 4326);

     // Set up data in the database
     using (var context = new MyDbContext())
     {
         context.Database.EnsureDeleted();
         context.Database.EnsureCreated();

         context.Add(
             new Friend
             {
                 Name = "Bill",
                 Location = geometryFactory.CreatePoint(new Coordinate(-122.34877, 47.6233355))
             });
         context.Add(
             new Friend
             {
                 Name = "Paul",
                 Location = geometryFactory.CreatePoint(new Coordinate(-122.3308366, 47.5978429))
             });
         context.SaveChanges();
     }

     // find nearest friends
     using (var context = new MyDbContext())
     {
         var myLocation = geometryFactory.CreatePoint(new Coordinate(-122.13345, 47.6418066));

         var nearestFriends =
             (from f in context.Friends
              orderby f.Location.Distance(myLocation) descending
              select f).Take(5);

         Console.WriteLine("Your nearest friends are:");
         foreach (var friend in nearestFriends)
         {
             Console.WriteLine($"Name: {friend.Name}.");
         }
     }
}

In order to use this code with SQL Server, simply install the 2.2 preview 3 version of the Microsoft.EntityFrameworkCore.SqlServer.NetTopologySuite NuGet package, and configure your DbContext as follows:


public class MyDbContext : DbContext
{
    public DbSet<Friend> Friends { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder options)
    {
        options.UseSqlServer(
            "Server=(localdb)\mssqllocaldb;Database=SpatialFriends;ConnectRetryCount=0",
            b => b.UseNetTopologySuite());

    }
}

In order to use this code with SQLite, you can install the 2.2 preview 3 version of the Microsoft.EntityFrameworkCore.Sqlite.NetTopologySuite and Microsoft.EntityFrameworkCore.Sqlite packages. Then you can configure the DbContext like this:


public class MyDbContext : DbContext
{
    public DbSet<Friend> Friends { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder options)
    {
        options.UseSqlite(
            "Filename=SpatialFriends.db",
            x => x.UseNetTopologySuite());
    }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // For SQLite, you need to configure reference system on column
        modelBuilder
            .Entity<Friend>()
            .Property(f => f.Location)
            .ForSqliteHasSrid(4326);
    }
}

Note that the spatial extension for SQLite requires the SpatiaLite library. This will be added as a dependency by the NuGet packages previously mentioned if you are on Windows, but on other systems you will need extra steps. For example:

  • On MacOS:
    $ brew install libspatialite
  • On Ubuntu or Debian Linux:
    $ apt-get install libsqlite3-mod-spatialite

In order to use this code with the in-memory provider, simply install the NetTopologySuite package, and the 2.2 preview 3 version of the Microsoft.EntityFrameworkCore.InMemory package. Then you can simply configure the DbContext like this:


public class MyDbContext : DbContext
{
    public DbSet<Friend> Friends { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder options)
    {
        options.UseInMemoryDatabase("SpatialFriends");
    }
}

Cosmos DB provider

We have made several changes and improvements since preview 2:

  • The package name has been renamed to Microsoft.EntityFrameworkCore.Cosmos
  • The UseCosmosSql() method has been renamed to UseCosmos()
  • We now store owned entity references and collections in the same document as the owner
  • Queries can now be executed asynchronously
  • SaveChanges(), EnsureCreated(), and EnsureDeleted() can now be executed synchronously
  • You no longer need to manually generate unique key values for entities
  • We preserve values in non-mapped properties when we update documents
  • We added a ToContainer() API to map entity types to a Cosmos DB container (or collection) explicitly
  • We now use the name of the derived DbContext type, rather than ‘Unicorn’, for the container or collection name we use by convention
  • We enabled various existing features to work with Cosmos DB, including retrying execution strategies, and data seeding

We still have some pending work and several limitations to remove in the provider. Most of them are tracked as uncompleted tasks on our task list. In addition to those:

  • Currently, synchronous methods are much slower than the corresponding asynchronous methods
  • The value of the ‘id’ property has to be specified for seeding
  • There is currently no enforcement of uniqueness of primary key values on entities saved by multiple instances of the DbContext

In order to use the provider, install the 2.2 preview 3 version of the Microsoft.EntityFrameworkCore.Cosmos package.

The following example configures the DbContext to connect to the Cosmos DB local emulator to store a simple blogging model:


public class BloggingContext : DbContext
{
  public DbSet<Blog> Blogs { get; set; }
  public DbSet<Post> Posts { get; set; }

  protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
  {
    optionsBuilder.UseCosmos(
      "https://localhost:8081",
      "C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==",
      "MyDocuments");
  }
}

public class Blog
{
  public int BlogId { get; set; }
  public string Name { get; set; }
  public string Url { get; set; }
  public List<Post> Posts { get; set; }
}

public class Post
{
  public int PostId { get; set; }
  public string Title { get; set; }
  public string Content { get; set; }
  public List<Tag> Tags { get; set; }
}

[Owned]
public class Tag
{
    [Key]
    public string Name { get; set; }
}

If you want, you can create the database programmatically, using EF Core APIs:


using (var context = new BloggingContext())
{
  context.Database.EnsureCreated();
}

Once you have connected to an existing database and you have defined your entities, you can start storing data in the database, for example:


using (var context = new BloggingContext())
{
  context.Blogs.Add(
    new Blog
    {
        BlogId = 1,
        Name = ".NET Blog",
        Url = "https://blogs.msdn.microsoft.com/dotnet/",
        Posts = new List<Post>
        {
            new Post
            {
                PostId = 2,
                Title = "Welcome to this blog!",
                Tags = new List<Tag>
                {
                    new Tag
                    {
                        Name = "Entity Framework Core"
                    },
                    new Tag
                    {
                        Name = ".NET Core"
                    }
                }
            }
        }
    });
  context.SaveChanges();
}

And you can write queries using LINQ:


var dotNetBlog = context.Blogs.Single(b => b.Name == ".NET Blog");

Query tags

  • We fixed several issues with multiple calls of the API and with usage of multi-line strings
  • The API was renamed to TagWith()

This is an updated usage example:


  var nearestFriends =
      (from f in context.Friends.TagWith(@"This is my spatial query!")
      orderby f.Location.Distance(myLocation) descending
      select f).Take(5).ToList();

This will generate the following SQL output:


-- This is my spatial query!

SELECT TOP(@__p_1) [f].[Name], [f].[Location]
FROM [Friends] AS [f]
ORDER BY [f].[Location].STDistance(@__myLocation_0) DESC

Collections of owned entities

  • The main update since preview 2 is that the Cosmos DB provider now stores owned collections as part of the same document as the owner.

Here is a simple usage scenario:


modelBuilder.Entity<Customer>().OwnsMany(c => c.Addresses);
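
For context, a minimal owner and owned type that the mapping above could apply to might look like the following sketch; the Customer and Address shapes here are illustrative assumptions rather than types from the original post:


using System.Collections.Generic;

public class Customer
{
    public int CustomerId { get; set; }
    public string Name { get; set; }

    // Owned collection: with the Cosmos DB provider, these are stored
    // in the same document as the owning Customer.
    public IList<Address> Addresses { get; set; }
}

public class Address
{
    public string Street { get; set; }
    public string City { get; set; }
}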

Thank you

The EF team would like to thank everyone for all the feedback and contributions. Once more, please try this preview and report any feedback on our issue tracker.


Introducing Component Firmware Update


The Microsoft Devices Team is excited to announce the release of an open-source model for component firmware updates for Windows system developers – Component Firmware Update (CFU).  With CFU, you can easily deliver firmware updates through Windows Update by using CFU drivers.

Background

Computers and peripherals have components running their own software preprogrammed in the factory.  However, over time, the factory software (“firmware”) must be updated to support new features or fix issues.

Firmware updates for embedded components have three primary delivery mechanisms:

  • Stand-alone tools
  • UEFI UpdateCapsule drivers
  • Component-specific firmware update drivers

Each of those mechanisms has its own advantages.  Stand-alone tools can support component-specific protocols but require the user to find and download the tools and find out if an update is available and applicable.  UEFI UpdateCapsule drivers can be pushed through Windows Update but can only update components during boot-time when components may not be available or may not be attached.  The most flexible mechanism is the component-specific firmware update driver, which can support component-specific protocols and can run whenever the device is enumerated on the system.

Writing a firmware update driver for each component-specific protocol can become a burden, so we defined the Component Firmware Update (CFU) standard protocol for use in our firmware update drivers and components.  The protocol permits us to use a standardized driver and protocol to deliver firmware to any component that supports CFU.

Due to architectural differences, third-party firmware, or other issues, many of our components cannot support CFU.  We designed CFU to allow a CFU-compatible component to receive firmware by using the CFU protocol and forward it to other components using their specific protocols.  Thus, only one component in a collection of components needs to be CFU-compatible. The CFU driver delivers sub-component firmware to the primary component for forwarding to non-CFU components.

For components with very limited battery power, such as small wireless peripherals, firmware downloads are expensive operations and waste significant battery life if the firmware is ultimately rejected by the peripheral.  To avoid this, CFU “offers” a firmware image before it is downloaded, providing specific properties such as version, hardware platform, and so on.   If the primary component accepts an offer, it may still reject the firmware after download due to integrity issues that may arise during the transport of the image, or if the received image properties do not match the offered properties.

As part of our open-source effort, we are sharing the CFU protocol, a driver sample, firmware sample code, and a tool sample. The goal is to enable system and peripheral developers to leverage this protocol, support their development, and easily and automatically push firmware updates to Windows Update for many of their firmware components.

Goals and Non-Goals

CFU was developed with the following tenets in mind:

  • Update must occur with little or no user disruption – no “update mode” that requires the user to wait or even be aware that an update is taking place.
  • Update must be delivered through Windows Update drivers.
  • Update must be able to wait to update a device until it becomes available.
  • Drivers must not have to “know” specifics of any update package other than which component device to send it to.
  • Evaluation of the appropriateness of the update lies with the component receiving it, not in the driver.
  • Target must be able to reject firmware before it is downloaded if it is inappropriate.
  • Update must permit third-party versioning schemes to be mapped to a standardized versioning scheme.

CFU permits but does not specify:

  • Authentication policies or methods
  • Encryption policies or methods
  • Rollback policies or methods
  • Recovery of bricked firmware

System Overview

In CFU, a primary component is a device that understands the CFU protocol. This component can receive firmware from a CFU driver for itself or for the sub-components to which the component is connected. The CFU driver (host) is created by the component or device manufacturer and delivered through Windows Update. The driver is installed and loaded when the device is detected by Windows.

Primary Components and Sub-Components

A CFU-compatible system uses a hierarchical concept of a primary component and sub-components.  A primary component is a device that implements the device side of the CFU protocol and can receive updates for itself or its sub-components directly from the CFU driver. A primary component and sub-components can be internal or detachable.  A device may have multiple primary components, with or without sub-components, each with its own CFU driver.

Flow chart describing CFU Driver.

Sub-components are updated by the component after receiving a CFU firmware image that is targeted for the sub-component.  The mechanism that the component uses to update its sub-components is implementation specific between the sub-component and the primary component and is beyond the scope of the CFU specification.

Offers and Payloads

A CFU driver (host) may contain multiple firmware images for a primary component and sub-components associated with the component.

Chart showing firmware images.

A package within the host comprises an offer, a payload (or image), and other information necessary for the driver to load.  The offer contains enough information about the payload to allow the primary component to decide if it is an acceptable payload.  Offer information includes a CFU protocol version, component ID (and sub-component ID if applicable), firmware version, release vs. debug status, and other information.  For some devices, downloading and flashing new firmware is expensive for battery life and other reasons.  By issuing an offer, the CFU protocol avoids downloading or flashing firmware that would be rejected based on versioning and other platform policies.

The payload of a package is a range of addresses and bytes to be programmed. The bytes are opaque to the host.

Offer Sequence

The general firmware update sequence by using CFU is for the host to issue the offer of each package to the primary component.  In general, the primary component can accept, reject, or skip the offer.

  • Accept offer—The primary component is ready to accept the firmware that was offered. If an offer is accepted, the payload is immediately delivered to the primary component.
  • Reject offer—The primary component is not interested in the firmware, possibly because it already has a better firmware, or the firmware violates some other internal policy.
  • Skip offer—The primary component may be interested in the firmware, but it is choosing to skip it for now.

If the offer is rejected or skipped, the host continues to cycle through its list of offers.  The driver repeats this cycle until all offers are rejected.
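
The offer cycle can be sketched in host-side pseudo-code. The following C# sketch is purely illustrative: the real host is a driver, and the FirmwarePackage, SendOffer, and SendPayload names are hypothetical stand-ins, not part of the CFU specification:


using System.Collections.Generic;

// Illustrative host-side sketch of the CFU offer cycle.
enum OfferResponse { Accept, Reject, Skip }

class FirmwarePackage
{
    public byte[] Offer { get; set; }    // offer metadata (version, component ID, ...)
    public byte[] Payload { get; set; }  // firmware image (addresses + bytes)
}

class OfferCycleSketch
{
    // Stand-ins for the real transport to the primary component.
    OfferResponse SendOffer(byte[] offer) => OfferResponse.Reject;
    void SendPayload(byte[] payload) { }

    public void RunOfferCycles(IList<FirmwarePackage> packages)
    {
        bool anyAcceptedOrSkipped;
        do
        {
            anyAcceptedOrSkipped = false;
            foreach (var package in packages)
            {
                var response = SendOffer(package.Offer);
                if (response == OfferResponse.Accept)
                {
                    // The payload is delivered immediately after acceptance.
                    SendPayload(package.Payload);
                    anyAcceptedOrSkipped = true;
                }
                else if (response == OfferResponse.Skip)
                {
                    // The component may accept this offer in a later pass.
                    anyAcceptedOrSkipped = true;
                }
                // Reject: nothing more to do for this package in this pass.
            }
        } while (anyAcceptedOrSkipped); // cycle ends when every offer is rejected
    }
}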

The optional skip response permits the primary component to examine the entire offer list so it can arrange the offers for ordering dependencies according to internal policies. After it has prioritized the offers in the list, it can continue to skip lower-priority offers and accept the highest-priority offer when the host replays the sequence. After an offer has been accepted and installed, it is subsequently rejected if offered in a later cycle because the entity is up to date.  The cycle ends when all offers have been rejected.  Because updates can change the policies themselves, such as “jail-breaking” during development, all offers are issued every cycle, even those that were previously rejected.

An offer can also be rejected if the primary component has accepted a download but must be restarted.  In this case the component can reboot itself, if the user disruption is minimal, or the update can remain pending until the next system reboot.  The host restarts the offer cycle after the reboot or component reset.

Consider an example of a device that has four components: one primary component and three sub-components. Offers are made in no specific order within a cycle.  Here is a representation of a possible host offer cycle:

Flow chart showing sub components.

In this example, all offers are skipped in the first round so that the primary component can see all the offers.

Second flow chart showing sub-components.

After seeing all the offers, the primary component determines that sub-component 1 must be updated before sub-component 3, and that the order of the primary component and sub-component 2 does not matter. The component sets sub-component 3 as lower priority than sub-component 1.

In the next offer cycle, the sub-component 3 offer is skipped again because sub-component 1 has not yet been updated and is higher priority.  Each of the other offers is accepted and updated.

Third flow chart showing sub-components.

In the next round, the sub-component 3 offer is accepted because the requirement to first update sub-component 1 has been met. All other offers are rejected because they are up to date.

Fourth flow chart showing sub-components.

Finally, in the last round, all offers are Rejected because the primary component and all sub-components are up to date.

Final flow chart showing sub-components.

At this time, the host has done all it can do. It ends the update process and updates its status in Device Manager according to the update results.

So, this mechanism permits ordering of updates, even to the same entity.  For example, if a component cannot receive version Y until it has version X due to some breaking change, both versions could be included, and version Y could be skipped until version X has been applied.

CFU Driver (host) Independence

It is important to note that the host does not have to make any decisions based on content of the offers or payloads. It simply sends the offers down and sends down the payloads that are accepted. It does not have to have any logic about what it is offering.  This permits it to be reused for diverse components and sub-components by changing only the offers and payloads it contains, and the component that the driver loads on.

The host does know the standard format of the offers so that it can send the offer command, and it needs to understand the standard format of the payloads so that it can break them into addresses and bytes to deliver to the primary component. Beyond that, the host does not need to know what the data in those fields contains.

Payload Delivery

After an offer has been accepted, the CFU driver (the host) proceeds to download the firmware image, or payload.  The primary component may prepare itself to receive it upon accepting the offer, or it may wait for the download to commence before making any changes.  The primary component may optionally cache the offer to check it against the payload after the payload is delivered, but, if possible, it must evaluate the payload on its own merit, regardless of the offer.

Payload delivery is accomplished in three phases: essentially a beginning, a middle, and an end.

The Payload, in simplest terms, is a set of addresses and fixed-size arrays of bytes, for example Address 0x0000 0000 and 16 associated bytes, then Address 0x0000 0010 and 16 more bytes.  These are turned into write requests, one per address in the set, with its associated bytes.

The first write request is flagged so that the Component can do any preparations that it did not do when the Offer was first accepted, such as erase memory.  After the first write request, the Driver sends more Address + Data write commands until the final write.  The final write is flagged such that the Component knows that the download is complete and that it should validate the download and invoke or forward the new firmware.
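
As a rough illustration of how a payload can be broken into flagged write requests, here is a C# sketch; the WriteRequest shape, the 16-byte block size, and the flag names are hypothetical, not taken from the CFU specification:


using System;
using System.Collections.Generic;

// Hypothetical write-request shape for illustration only.
class WriteRequest
{
    public uint Address;
    public byte[] Data;
    public bool IsFirstBlock;  // component may erase/prepare on the first write
    public bool IsLastBlock;   // component validates and invokes after the last write
}

static class PayloadSketch
{
    // Split an image into 16-byte write requests starting at baseAddress.
    public static List<WriteRequest> ToWriteRequests(byte[] image, uint baseAddress)
    {
        const int blockSize = 16;
        var requests = new List<WriteRequest>();
        for (int offset = 0; offset < image.Length; offset += blockSize)
        {
            int length = Math.Min(blockSize, image.Length - offset);
            var data = new byte[length];
            Array.Copy(image, offset, data, 0, length);
            requests.Add(new WriteRequest
            {
                Address = baseAddress + (uint)offset,
                Data = data,
                IsFirstBlock = offset == 0,
                IsLastBlock = offset + length >= image.Length
            });
        }
        return requests;
    }
}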

Chart showing download.

The CFU Protocol specification defines several other result codes to assist in troubleshooting failures.  See the complete specification for details.  There is also room for implementers to add other codes for their own specific purposes, such as requesting immediate resets.

Payload Validation and Authentication

One of the most important aspects of firmware update is the validation of incoming firmware.  The first line of defense is to use a reliable transport mechanism with built-in robustness, such as USB or Bluetooth. These transports have built-in CRCs and retry mechanisms so that data is delivered reliably and in order.  Interfaces such as I2C™, SPI and UART do not have those mechanisms built-in and such robustness must be provided by higher layers.  At Microsoft, we prefer to use USB or Bluetooth Human Interface Device Class (HID) protocols for CFU, with a Vendor-Specific report structure, but any bidirectional command-response based mechanism can be used.

At a minimum, the primary component should verify bytes after each write to ensure that the data is properly stored before accepting the next set of bytes.  Also, a CRC or hash should be calculated on the download in its entirety to be verified after the download is complete, to ensure that the data was not modified in transit. The delivery of a reference CRC or hash to be validated is beyond the scope of the protocol but is typically contained within the download image itself and verified by the primary component or sub-component that receives it before issuing a Result Code.
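
For example, an end-to-end integrity check over the complete image might look like the following sketch (whether a CRC or a hash is used, and where the reference value comes from, is left to the implementer; the helper below is purely illustrative):


using System.Linq;
using System.Security.Cryptography;

static class ImageIntegritySketch
{
    // Compare a SHA-256 digest of the received image against a reference digest
    // carried inside (or alongside) the downloaded image.
    public static bool Matches(byte[] receivedImage, byte[] referenceDigest)
    {
        using (var sha = SHA256.Create())
        {
            byte[] digest = sha.ComputeHash(receivedImage);
            return digest.SequenceEqual(referenceDigest);
        }
    }
}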

For enhanced protection, a cryptographic signature mechanism is recommended to provide end-to-end protection against accidental modification or intentional attack at any stage in the update delivery, from creation at the manufacturer to invocation by the component.  If the download is required to be confidential, an encryption mechanism can also be employed. Decryption and key management are also beyond the scope of the CFU protocol specification.

After the image has been authenticated, its properties should be validated against the offer and any other internal rules that the manufacturer requires. CFU does not specify the rules to be applied — these are up to the implementer.  It is important to do this check after the update has been authenticated so that any self-declared characteristics can be considered trustworthy.

While it is possible (and recommended) for each sub-component to validate its own images, one advantage of CFU is that the primary component can accept offers and validate the sub-component image on behalf of the sub-component by using a standardized validation algorithm devised by the manufacturer.  The manufacturer can then design the primary component to apply the firmware by using less-secure means such as ARM-SWD, JTAG, or other hardware-based methods.

Payload Invocation

One of the advantages of the CFU Protocol is that it is run at the application level in the primary component.  It is not necessary to place the component in any special mode that disrupts its normal operation.  As long as the component can receive and store the incoming payload without significant disruption, it can continue to do other tasks.  The only potential disruption comes when the new firmware must be invoked.

There are two recommended means to avoid that disruption, although others are possible.  Both involve having enough storage to maintain the current running application while receiving at least one additional image.  For the primary component, this means that it requires at least twice the normal application space, one space for the running primary component application, and one space for the incoming firmware package. For sub-components whose images are smaller than the primary component image, the primary component can use the extra space to store the sub-component image in its entirety.  If the sub-component image is larger than the primary component firmware, then separate packages are necessary, and must all be downloaded successfully for the sub-component update to complete.

The first invocation method uses a small bootloader image to select one of multiple images to run when the device is reset, typically at boot time, connection or power-up. The image selection algorithm is implementation specific, but typically is based on an algorithm involving the version of code, and an indication of successful validation of that image either at boot or when it was received.  This is the most generic approach.
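
The image-selection policy in such a bootloader can be sketched as choosing the highest valid version. The following C# sketch is purely illustrative (real bootloaders are firmware, and the ImageSlot shape here is a hypothetical stand-in):


using System.Collections.Generic;
using System.Linq;

// Hypothetical image descriptor for illustration only.
class ImageSlot
{
    public uint Version;
    public bool PassedValidation; // set when the image was received or at boot
}

static class BootSelectSketch
{
    // Pick the valid image with the highest version, or null if none is valid.
    public static ImageSlot Select(IEnumerable<ImageSlot> slots) =>
        slots.Where(s => s.PassedValidation)
             .OrderByDescending(s => s.Version)
             .FirstOrDefault();
}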

A second invocation method is to physically swap the memory of the desired image into the active address space upon reset.  This capability is available in some microcontrollers and can also be accomplished with logic controls on external memory address bits.  This method has a disadvantage in that it requires specialized hardware but has the advantage that all images are statically linked to the same address space, and the mechanism does not require any bootloader.

CFU Protocol Limitations

There are a few caveats around CFU.

  • CFU cannot update a “bricked” component that can no longer run the protocol, yet new firmware has the potential to brick the component if not thoroughly validated and tested.

Care must be taken when adopting any update mechanism to always test the update mechanism prior to every release.

At Microsoft, we always build a “v.next” version so that we can validate that CFU has not been broken and can validate and invoke any subsequent update properly. Unbricking the component is beyond the scope of the CFU protocol because the device cannot run the CFU protocol.

Implementers can use other methods to prevent bricking a device, such as having a third “fallback” fail-safe firmware image that is capable of CFU but that may not provide some features, or by implementing CFU as a function of the bootloader that is called by the application. If the application fails, the bootloader can be forced to take over and either fall back or provide a ‘bare-bones’ CFU interface until it is successfully updated.

  • CFU does not provide security. Security features can easily be overlaid on top of CFU by adding features to the validation algorithms used by the component and adding necessary data structures to the downloaded images such as Public Key Digital Signatures and appropriate key management.
  • CFU requires extra memory to store the incoming images because the protocol runs as part of the current firmware on the primary component. This adds cost to a system for the benefit of non-disruptive updates to the system.

Updating sub-component images that are larger than the component’s available storage requires dividing the sub-component image into a set of smaller update packages called segments and applying each segment separately.

The CFU protocol does not prohibit pausing the download while portions of the image are forwarded. Thus, it may be possible to stream a large image through the primary component without segmentation.  Such “streamed” segmentation is beyond the scope of the CFU specification.  Care must be taken that the image can be properly validated after such a download is complete, for example by maintaining a running CRC or hash, because the image is not fully resident in the primary component at the end of the download.

CFU presumes that the primary component has a set of validation rules to use.  If those rules are to be changed, the component must first be successfully updated by using the old rules before new rules can be applied.

Example source code for the host CFU drivers and firmware, along with documentation, is available on GitHub: Component Firmware Update.

 

The post Introducing Component Firmware Update appeared first on Windows Developer Blog.

Announcing .NET Core 2.2 Preview 3


Today, we are announcing .NET Core 2.2 Preview 3. We have made more improvements to the overall release that we would love to get your feedback on, either in the comments or at dotnet/core #2004.

ASP.NET Core 2.2 Preview 3 and Entity Framework 2.2 Preview 3 were also released today.

You can see more details of the release in the .NET Core 2.2 Preview 3 release notes. Related instructions, known issues, and workarounds are included in the release notes. Please report any issues you find in the comments or at dotnet/core #2004.

Please see the .NET Core 2.2 Preview 2 post to learn more about the new features coming with .NET Core 2.2.

Thanks to everyone who contributed to .NET Core 2.2. You’ve helped make .NET Core a better product!

Download .NET Core 2.2

You can download and get started with .NET Core 2.2 on Windows, macOS, and Linux:

Docker images are available at microsoft/dotnet for .NET Core and ASP.NET Core.

.NET Core 2.2 Preview 3 can be used with Visual Studio 15.8, Visual Studio for Mac and Visual Studio Code.

Platform Support

.NET Core 2.2 is supported on the following operating systems:

  • Windows Client: 7, 8.1, 10 (1607+)
  • Windows Server: 2008 R2 SP1+
  • macOS: 10.12+
  • RHEL: 6+
  • Fedora: 27+
  • Ubuntu: 14.04+
  • Debian: 8+
  • SLES: 12+
  • openSUSE: 42.3+
  • Alpine: 3.7+

Chip support follows:

  • x64 on Windows, macOS, and Linux
  • x86 on Windows
  • ARM32 on Linux (Ubuntu 18.04+, Debian 9+)

Closing

Please download and test .NET Core 2.2 Preview 3. We’re looking for feedback on the release with the intent of shipping the final version later this year.

Azure Update Management: A year of great updates


Azure Update Management is a service included as part of your Azure Subscription that enables you to assess your update status across your environment and manage your Windows and Linux server patching from a single pane of glass, both for on-premises and Azure.

Update Management is available at no additional cost (you only pay for log data stored in the Azure Log Analytics service) and can easily be enabled on Azure and on-premises VMs. To try it, simply navigate to your VM tab in Azure and enable Update management on one or more of your machines.

Over the past year we’ve been listening to your feedback and bringing powerful new capabilities to Azure Update Management. Here’s a look at some of the new features we have developed with your help.

Groups

One of the biggest asks from the community this year is for more flexibility in targeting update deployments, specifically support for groups with dynamic membership. Instead of specifying a static set of machines when you create an update deployment, groups allow you to specify a query that will be evaluated each time an update deployment occurs.

We have released a preview feature that enables you to create an Azure-native query that targets onboarded Azure VMs using flexible Azure-native concepts. For instance, in the following screenshot a query is configured to automatically pick up onboarded VMs that have the tag PatchWindow=SundayNight. As you onboard new VMs, you can simply add this tag and the VMs will automatically start participating in the next patch cycle for this update deployment.


We’ve also added the ability to immediately preview your query while authoring the update deployment. This shows the VMs that would be patched if this update deployment were to run right now.


Onboarding new machines

From the virtual machines view you can easily onboard multiple machines into Update Management, even across subscriptions. More details can be found in the documentation.


Pre/post scripts

One of the big asks from the community is for the ability to run custom tasks before and after an update deployment. Some of the more common scenarios include starting VMs before deploying updates (another top UserVoice request), starting and stopping services on machines, and starting a backup job before deploying updates.

To address this, we added “pre/post scripts,” a way to invoke Azure Automation runbooks as part of an update deployment. We also published samples to help you get started with your specific needs.


See the documentation for more information on using pre/post scripts.

Update inclusion

Azure Update Management provides the ability to deploy patches based on classifications. However, there are scenarios where you may want to explicitly list the exact set of patches. Common scenarios include whitelisting patches after canary environment testing and zero-day patch rollouts.

With update inclusion lists you can choose exactly which patches you want to deploy instead of relying on patch classifications.


More information on how patch inclusion works can be found in the documentation.

Reboot control

Patches often require reboots. Unfortunately reboots affect application availability. We’ve gotten feedback that you would like the ability to control when those reboots happen; reboot control is the result.

With the new reboot control feature, we provide you with flexible options for controlling your reboots. You can suppress reboots during an update run, always reboot, or even create a separate update deployment that only reboots your servers and doesn’t install any patches. With this functionality, the downtime caused by server reboots can be decoupled from your patching cycle.

Some of you have also given the feedback that you want to control reboots yourself, ensuring services are taken down and brought up in a manner consistent with your internal controls. Using the pre/post scripts feature in conjunction with reboot control is a great way to suppress reboots during the patch cycle and then do an orchestrated reboot of your servers as a post script.

Try it out!

Update Management is continually improving. If you haven’t started using it, now is a great time to get started. We love hearing feedback from our users on UserVoice and your feedback directly drives features. Give it a try and let us know how to make Azure Update Management even better!

Microsoft’s Developer Blogs are Getting an Update

In the next few days, the Microsoft DevOps blog will move to a new platform with a modern, clean design and powerful features that will make it easy for you to discover and share great content. The DevOps blog, along with a select few others, will move to a new URL tomorrow, followed by additional... Read More

ASP.NET SignalR 2.4.0 Preview 2


We’ve just released the second preview of the upcoming 2.4.0 release of ASP.NET SignalR. As we mentioned in our previous blog post on the future of ASP.NET SignalR, we are releasing a minor update to ASP.NET SignalR (the version of SignalR for System.Web and/or OWIN-based applications) that includes support for the Azure SignalR Service, as well as some bug fixes and minor features.

We recommend you try upgrading to the preview even if you’re not interested in adopting the Azure SignalR Service at this time. Your feedback is critical to making sure we produce a stable and compatible update! You can find details about the release on the releases page of the ASP.NET SignalR GitHub repository.

Highlights

This preview mostly contains further bug fixes and changes to support the Azure SignalR service. Some of the major fixes include:

Other minor fixes are included; for a full list, see our GitHub issue tracker.

Azure Availability Zones expand with new services and to new regions


Azure Availability Zones are physically separate locations within an Azure region protecting customers’ applications and data from datacenter-level failures. Earlier this year, we announced the general availability of Availability Zones. We are now excited to reveal the continued expansion of Availability Zones into additional regions, North Europe and West US 2. This expanded coverage enables customers operating in Europe and the Western United States to build and run applications that require low-latency synchronous replication with protection from datacenter-level failures. With the combination of Availability Zones and region pairs, customers can create a comprehensive business continuity strategy with data residency in their geography of choice.

Azure’s global footprint consists of 54 regions with more than 100 datacenters serving customers in over 140 countries. Microsoft’s overall strategy is to ensure that customers have broad options for ensuring business continuity. Availability Zones offer additional resiliency capabilities for customers to build and run highly available applications. Azure, with more global regions than any other cloud provider, has been designed to provide first-class resiliency.

In addition to the continued expansion of Availability Zones across Azure regions, we're also excited to announce an expanded list of zone-redundant services including Azure SQL Database, Service Bus, Event Hubs, Application Gateway, VPN Gateway, and ExpressRoute. Learn more about zonal and zone-redundant services.

Azure offers an industry-leading 99.99 percent uptime SLA when virtual machines are running in two or more Availability Zones in the same region. Learn more, and start using Azure Availability Zones today. To learn how to build applications with high availability and disaster recovery, visit the Azure resiliency page.

Azure Marketplace new offers – Volume 22


We continue to expand the Azure Marketplace ecosystem. In September, 64 offers from Cognosys Inc. successfully met the onboarding criteria and went live. Cognosys Inc. continues to be a leading Marketplace publisher, with more than 315 solutions available. See details of the new offers below:

Virtual machines

1 Click secured Joomla on Ubuntu 16.04 LTS

1 Click secured Joomla on Ubuntu 16.04 LTS: Joomla is an open-source content management system for publishing web content. This image is made for enterprise customers looking to deploy a secured Joomla installation.

Ubuntu 18.04 LTS Hardened

Ubuntu 18.04 LTS Hardened: Ubuntu cloud images are pre-installed disk images that have been customized by Ubuntu engineering to run on cloud platforms. This image is made for enterprise customers who are looking to deploy a secure, hardened Ubuntu 18.04 installation.

Web applications

Acquia Drupal 7 on Windows 2012 R2

Acquia Drupal 7 on Windows 2012 R2: Acquia Drupal 7 provides an on-ramp to building websites featuring editorial and user-generated content. Whether you’re building a public-facing site or a private intranet, Acquia Drupal 7 enables you to turn site visitors into participants.

Acquia Drupal on CentOS 7.3

Acquia Drupal on CentOS 7.3: Acquia offers a secure Platform-as-a-Service cloud environment for the Drupal web content management system, with advanced multi-site management, powerful developer tools, and Software-as-a-Service capabilities.

Acquia Drupal on Ubuntu 14.04 LTS

Acquia Drupal on Ubuntu 14.04 LTS: Acquia Drupal is a distribution of Drupal from one of its major contributors, with enhancements in the areas of mobile-friendly authoring experience, performance, multilingual support, and release management.

Acquia Drupal with MSSQL on Win 2012 R2

Acquia Drupal with MSSQL on Win 2012 R2: Acquia Drupal with MSSQL enables you to build websites featuring editorial and user-generated content. Whether you’re building a public-facing site or a private intranet, Acquia Drupal can turn site visitors into active participants.

Apache Solr on Centos 7.3

Apache Solr on Centos 7.3: Solr is the popular, fast, open-source NoSQL search platform from the Apache License project. Solr is highly scalable, providing fault tolerant distributed search and indexing, and it powers search and navigation for many of the largest internet sites.

Apache Solr on Ubuntu 14.04 LTS

Apache Solr on Ubuntu 14.04 LTS: Solr is the popular, fast, open-source NoSQL search platform from the Apache License project.

CentOS 6.9 Hardened

CentOS 6.9 Hardened: CentOS 6.9 (Community Enterprise Operating System) is a Linux distribution that aims to provide a free, enterprise-class, community-supported computing platform functionally compatible with its upstream source, Red Hat Enterprise Linux (RHEL).

CentOS 7.3 Hardened

CentOS 7.3 Hardened: This image of CentOS 7.3 (Community Enterprise Operating System) is made for customers who are looking to deploy a secured, hardened CentOS 7.3 installation.

Hardened IIS On Windows Server 2012 R2

Hardened IIS On Windows Server 2012 R2: Internet Information Services (IIS) 8.5 has several improvements related to performance in large-scale scenarios, such as those used by commercial hosting providers and Microsoft’s own cloud offerings.

Hardened Ubuntu 14.04 LTS

Hardened Ubuntu 14.04 LTS: Ubuntu cloud images are pre-installed disk images that have been customized by Ubuntu engineering to run on cloud platforms. This image is made for enterprise customers who are looking to deploy a secure, hardened Ubuntu 14.04 installation.

Jboss AS on centos 7.4

JBoss AS on centos 7.4: The JBoss Enterprise Application Platform is a subscription-based, open-source, Java EE-based application server runtime platform used for building, deploying, and hosting highly transactional Java applications and services.

Jenkins on CentOS 7.3

Jenkins on CentOS 7.3: Jenkins is an open-source continuous integration tool written in Java. Jenkins provides continuous integration services for software development. It is a server-based system running in a servlet container such as Apache Tomcat.

Joomla on Ubuntu 14.04 LTS

Joomla on Ubuntu 14.04 LTS: Joomla is an open-source content management system for publishing web content. This image is made for enterprise customers who are looking to deploy a secured Joomla installation.

Kentico on Windows 2012 R2

Kentico on Windows 2012 R2: Kentico is an all-in-one platform built entirely in-house, meaning you avoid the frustrations of dealing with disparate systems and are up and running quicker. Providing the right customer experience has never been so simple.

MySQL 5.7 on Ubuntu 14.04 LTS

MySQL 5.7 on Ubuntu 14.04 LTS: MySQL is a popular open-source relational SQL database management system used to develop web-based software applications. This image is for enterprise customers looking to deploy a secured MYSQL 5.7 installation.

PHP MYSQL and IIS on Windows Server 2016

PHP MYSQL and IIS on Windows Server 2016: PHP is a server-side scripting language designed for web development but also used as a general-purpose programming language.

Ruby on centos 7.3

Ruby on centos 7.3: Ruby is a general-purpose programming language with an elegant syntax that is natural to read and easy to write. It supports multiple programming paradigms, including functional, object-oriented, and imperative.

SchlixCMS on Windows 2012 R2

SchlixCMS on Windows 2012 R2: Schlix (formerly known as Baby Gekko) is a lightweight, extensible PHP/MySQL-based content management system platform for publishing websites, intranets, or blogs.

Secured Lamp on CentOS 7.3

Secured Lamp on CentOS 7.3: LAMP is an archetypal model of web service solution stacks. Its components are largely interchangeable and not limited to the original selection.

Secured LAMP on Ubuntu 16.04 LTS

Secured LAMP on Ubuntu 16.04 LTS: LAMP is an acronym of its original four open-source components: the Linux operating system, the Apache HTTP Server, the MySQL relational database management system, and the PHP programming language.

Secured Magento on Windows 2012 R2

Secured Magento on Windows 2012 R2: Magento is an open-source e-commerce platform written in PHP. Magento employs the MySQL/MariaDB relational database management system, the PHP programming language, and elements of the Zend Framework.

Secured MediaWiki on Windows 2012 R2

Secured MediaWiki on Windows 2012 R2: MediaWiki is a free and open-source wiki package written in PHP, originally for use on Wikipedia. It is now used by several other projects of the nonprofit Wikimedia Foundation.

Secured Moodle on Windows 2012 R2

Secured Moodle on Windows 2012 R2: Moodle is a learning management system that provides educators with forums, quizzes, assignments, learning objects, surveys, polls, data collections, and more so they can construct web-based courses and invite students into those courses.

Secured WordPress on Windows Server 2012 R2

Secured WordPress on Windows Server 2012 R2: WordPress is a free and open-source content management system based on PHP and MySQL. Features include a plugin architecture and a template system.

SEOPanel on Ubuntu 14.04 LTS

SEOPanel on Ubuntu 14.04 LTS: This is a complete open-source control panel for managing the search engine optimization of your websites. SEO Panel includes the latest tools to track and increase the performance of your websites.

sepPortal on Windows 2012 R2

sepPortal on Windows 2012 R2: This integrated portal software provides a full spectrum of services for you to offer on your website or company intranet. SepPortal can develop business, education, and entertainment portals; pay-for-play websites; and social networking.

ShoppingCart on Windows 2012 R2

ShoppingCart on Windows 2012 R2: Shopping Cart allows customers to pay you online with debit and credit cards.

Silver Stripe CMS on Windows 2012 R2

Silver Stripe CMS on Windows 2012 R2: SilverStripe is an open-source programming framework and content management system for websites. It provides an intuitive, web-based administration panel, so users don’t need to know programming languages to use it.

Silverstripe on centos 7.3

Silverstripe on centos 7.3: SilverStripe provides an out-of-the-box web-based administration panel that enables users to modify their websites. The core of the software is the SilverStripe Framework, a PHP web application framework.

SilverStripe on Ubuntu 14.04 LTS

SilverStripe on Ubuntu 14.04 LTS: SilverStripe is an open-source programming framework and content management system for websites. It provides an intuitive, web-based administration panel, so users don’t need to know programming languages to use it.

Simple Invoices on centos 7.3

Simple Invoices on centos 7.3: Simple Invoices is an intuitive application to generate, manage, and track invoices. The application also provides PayPal payment gateway support and eWAY Merchant Hosted payment gateway support.

Simpleinvoice on Ubuntu 14.04 LTS

Simpleinvoice on Ubuntu 14.04 LTS: Simple Invoices is an intuitive application to generate, manage, and track invoices. It's ideal for making professional-looking invoices that can be sent to your customers in PDF format.

Simple Machines on CentOS 7.3

Simple Machines on CentOS 7.3: Simple Machines is a PHP/MySQL-based solution with a rich set of features. Its SSI (Server Side Includes) function lets your forum and your website interact with each other, and it's free to download.

Simple Machines on Ubuntu 14.04 LTS

Simple Machines on Ubuntu 14.04 LTS: Simple Machines is a PHP/MySQL-based solution with a rich set of features. Its SSI (Server Side Includes) function lets your forum and your website interact with each other, and it's free to download.

Subversion on CentOS 7.3

Subversion on CentOS 7.3: Subversion is a full-featured version control system originally designed to be a better concurrent versions system (CVS). Subversion has since expanded, but its model, design, and interface remain influenced by that goal.

Subversion on Ubuntu 14.04 LTS

Subversion on Ubuntu 14.04 LTS: Subversion is a full-featured version control system originally designed to be a better concurrent versions system (CVS). Subversion has since expanded, but its model, design, and interface remain influenced by that goal.


Suitecrm on centos 7.3

Suitecrm on centos 7.3: The Suite CRM Open Source Project is a forked version of Sugar CRM. Suite CRM easily adapts to any business environment, and the open-source design enables companies to easily customize and integrate their business processes.

SuiteCRM on Ubuntu 14.04 LTS

SuiteCRM on Ubuntu 14.04 LTS: The Suite CRM Open Source Project is a forked version of Sugar CRM. Suite CRM easily adapts to any business environment, and the open-source design enables companies to easily customize and integrate their business processes.

Survey Project on Windows 2012 R2

Survey Project on Windows 2012 R2: Survey Project is a free, web-based survey and data entry toolkit for processing and gathering data online. It enables users without coding knowledge to develop and publish surveys through the internet.

Test Link on centos

Test Link on centos: Test Link is a web-based test management system that facilitates software quality assurance. It synchronizes requirement and test specifications. Users can create test projects and document test cases, and it supports automated and manual execution.

Testlink on Ubuntu 14.04 LTS

Testlink on Ubuntu 14.04 LTS: Test Link is a test management system that facilitates software quality assurance. It synchronizes requirement and test specifications. Users can create test projects and document test cases, and it supports automated and manual execution.

Thinkup on centOS 7.4

Thinkup on centOS 7.4: ThinkUp is a free, open-source web application that captures your content on social networks like Twitter and Facebook. With ThinkUp, you can store your social activity in a database that you control, making it easy to search, sort, analyze, and publish.

Thinkup on Ubuntu 14.04 LTS

Thinkup on Ubuntu 14.04 LTS: ThinkUp is a free, open-source web application that captures your content on social networks. With ThinkUp, you can store your social activity in a database that you control, making it easy to search, sort, analyze, display, and publish.

Tiki Wiki CMS on Windows 2012 R2

Tiki Wiki CMS on Windows 2012 R2: Tiki Wiki CMS Groupware, or simply Tiki, is a free and open-source Wiki-based content management system and online office suite written primarily in PHP.

TikiWikiCMS on centOS 7.4

TikiWikiCMS on centOS 7.4: Tiki Wiki CMS Groupware, or simply Tiki, is a free and open-source Wiki-based content management system and online office suite written primarily in PHP.

TikiWikiCMS on Ubuntu 14.04 LTS

TikiWikiCMS on Ubuntu 14.04 LTS: Tiki Wiki CMS Groupware, or simply Tiki, is a free and open-source Wiki-based content management system and online office suite written primarily in PHP.

TinyTinyRSS on centos 7.4

TinyTinyRSS on centos 7.4: Tiny Tiny RSS is a web-based open-source news feed (RSS/Atom) reader and aggregator, designed to allow you to read news from any location while feeling as close to a real desktop application as possible.

TinyTinyRSS on Ubuntu 14.04 LTS

TinyTinyRSS on Ubuntu 14.04 LTS: Tiny Tiny RSS is a web-based open-source news feed (RSS/Atom) reader and aggregator, designed to allow you to read news from any location while feeling as close to a real desktop application as possible.

Tomcat on CentOS 7.3

Tomcat on CentOS 7.3: Tomcat is an open-source web server developed by Apache. Tomcat implements several Java EE specifications, including Java Servlet, JavaServer Pages (JSP), Java EL, and WebSocket, and it provides a "pure Java" HTTP web server environment.

Tomcat on Ubuntu 14.04 LTS

Tomcat on Ubuntu 14.04 LTS: Tomcat is an open-source web server developed by Apache. Tomcat implements several Java EE specifications, including Java Servlet, JavaServer Pages (JSP), Java EL, and WebSocket, and it provides a "pure Java" HTTP web server environment.

Trac on centos 7.3

Trac on centos 7.3: Trac is an open-source web-based project management and bug tracking system written in the Python programming language. With Trac, you can easily structure and track your project using team members, tickets, timelines, and useful overviews.

Trac on Ubuntu 14.04 LTS

Trac on Ubuntu 14.04 LTS: Trac is an open-source web-based project management and bug tracking system written in the Python programming language. With Trac, you can easily structure and track your project using team members, tickets, timelines, and useful overviews.

Typo3 on Centos

Typo3 on Centos: TYPO3 is an open-source web content management system written in PHP. TYPO3 is highly flexible and can be extended by new functions without writing any code. The software is available in more than 50 languages and has a built-in localization system.

Typo3 on Ubuntu 14.04 LTS

Typo3 on Ubuntu 14.04 LTS: TYPO3 is an open-source web content management system written in PHP. TYPO3 can be extended by new functions without writing any code. The software is available in more than 50 languages and has a built-in localization system.

Ubuntu 16.04 LTS Hardened

Ubuntu 16.04 LTS Hardened: Ubuntu is a popular operating system running in hosted environments. This image is made for enterprise customers who are looking to deploy a secure, hardened Ubuntu 16.04 installation.

Umbraco CMS on Windows 2012 R2

Umbraco CMS on Windows 2012 R2: Umbraco is an open-source content management system for publishing content on the web and on intranets. It's written in C# and deployed on Microsoft-based infrastructure.

Varnish on CentOs 7.3 Community

Varnish on CentOs 7.3 Community: Varnish Cache is an open-source HTTP engine/reverse HTTP proxy that can speed up a website by up to 1,000 percent by doing exactly what its name implies: caching (or storing) a copy of a webpage the first time a user visits.

WebServer On Windows Server 2016

WebServer On Windows Server 2016: Internet Information Services (IIS) is an extensible web server created by Microsoft for use with the Windows NT family.

WordPress on CentOS 7.3

WordPress on CentOS 7.3: WordPress is a free and open-source content management system based on PHP and MySQL.


X-Cart on Ubuntu 14.04 LTS

X-Cart on Ubuntu 14.04 LTS: X-Cart is e-commerce software that will allow you to design an online store and sell products worldwide without any extensive development knowledge. The application is fully responsive so your site will be displayed properly on multiple devices.


XOOPS on Ubuntu 14.04 LTS

XOOPS on Ubuntu 14.04 LTS: XOOPS is a free, open-source content management system written in PHP. It uses a modular architecture and allows users to customize and update their websites.


Zurmo on Ubuntu 14.04 LTS

Zurmo on Ubuntu 14.04 LTS: Zurmo CRM comes with a fully functional event/time-triggering system that lets you automate and control every operation.


What they know now: Insights from top IoT leaders


This post was co-authored by Peter Cooper, Senior Product Marketing Manager, Azure IoT and Mark Pendergrast, Director of Product Marketing, Azure.

The Internet of Things (IoT) market is red hot. Industrial spending will surge to $123 billion in 2021, with the manufacturing, transportation and logistics, and utility sectors each expected to spend $40 billion on the technology within the next three years.

Nobody wants to be left behind. In the following video, you’ll hear from industry leaders Henrik Fløe of Grundfos, Doug Weber from Rockwell Automation, Michael MacKenzie from Schneider Electric, and Alasdair Monk of The Weir Group on why they’re bullish on all things IoT, and how they’re leveraging it to innovate and grow.

Here’s a sampling of their insights:

IoT “is a huge disruptor to our industry, to be able to connect more directly with our end-user customers, to be able to track our devices, to be able to track how the devices and the gear is performing, but then also to derive new business models, new value streams that help our customers do more with what they have.”

“It's really important I think that we build our capabilities in a way that makes it flexible, and so that we can swap things out as new technology becomes available.”

“The ability to do things like subscriptions, the ability to take on more of the management of the kind of IT systems that are in the OT environment. Those are the things that are changing our company.”


Watch the video to get insights from a pump manufacturer, two automation solutions firms, and an engineering firm on key questions of the day, such as:

  • What specific capabilities and business advantages are they realizing through IoT?
  • How have their organizations had to change to enable success in IoT?
  • What have they learned during their IoT journey that they wish they knew at the beginning?
  • What advice would they give to a company before they start their IoT journey?

Then imagine the possibilities of IoT for your company.

Exploring Clang Tooling Part 2: Examining the Clang AST with clang-query


This post is part of a regular series of posts where the C++ product team and other guests answer questions we have received from customers. The questions can be about anything C++ related: MSVC toolset, the standard language and library, the C++ standards committee, isocpp.org, CppCon, etc.

Today’s post is by guest author Stephen Kelly, who is a developer at Havok, a contributor to Qt and CMake and a blogger. This post is part of a series where he is sharing his experience using Clang tooling in his current team.

In the last post, we created a new clang-tidy check following documented steps and encountered the first limitation in our own knowledge – how can we change both declarations and expressions such as function calls?

In order to create an effective refactoring tool, we need to understand the code generated by the create_new_check.py script and learn how to extend it.

Exploring C++ Code as C++ Code

When Clang processes C++, it creates an Abstract Syntax Tree representing the code. The AST needs to be able to represent all of the possible complexity that can appear in C++ code – variadic templates, lambdas, operator overloading, declarations of various kinds etc. If we can use the AST representation of the code in our tooling, we won’t be discarding any of the meaning of the code in the process, as we would if we limit ourselves to processing only text.

Our goal is to harness the complexity of the AST so that we can describe patterns in it, and then replace those patterns with new text. The Clang AST Matcher API and FixIt API satisfy those requirements respectively.

The level of complexity in the AST means that detailed knowledge is required in order to comprehend it. Even for an experienced C++ developer, the number of classes and how they relate to each other can be daunting. Luckily, there is a rhythm to it all. We can identify patterns, use tools to discover what makes up the Clang model of the C++ code, and get to the point of having an instinct about how to create a clang-tidy check quickly.

Exploring a Clang AST

Let’s dive in and create a simple piece of test code so we can examine the Clang AST for it:

 
int addTwo(int num) 
{ 
    return num + 2; 
} 

int main(int, char**) 
{ 
    return addTwo(3); 
} 

There are multiple ways to examine the Clang AST, but the most useful when creating AST Matcher based refactoring tools is clang-query. We need to build up our knowledge of AST matchers and the AST itself at the same time via clang-query.

So, let’s return to MyFirstCheck.cpp which we created in the last post. The MyFirstCheckCheck::registerMatchers method contains the following line:

Finder->addMatcher(functionDecl().bind("x"), this); 

The first argument to addMatcher is an AST matcher, an Embedded Domain Specific Language of sorts. This is a predicate language which clang-tidy uses to traverse the AST and create a set of resulting ‘bound nodes’. In the above case, a bound node with the name x is created for each function declaration in the AST. clang-tidy later calls MyFirstCheckCheck::check for each set of bound nodes in the result.

Let’s start clang-query, passing our test file as a parameter and following it with two dashes. Similar to the use of clang-tidy in Part 1, this allows us to specify compile options and avoid warnings about a missing compilation database.

This command drops us into an interactive interpreter which we can use to query the AST:

$ clang-query.exe testfile.cpp -- 

clang-query>

Type help for a full set of commands available in the interpreter. The first command we can examine is match, which we can abbreviate to m. Let’s paste in the matcher from MyFirstCheck.cpp:

clang-query> match functionDecl().bind("x") 

Match #1: 
 
testfile.cpp:1:1: note: "root" binds here 
int addTwo(int num) 
^~~~~~~~~~~~~~~~~~~ 
testfile.cpp:1:1: note: "x" binds here 
int addTwo(int num) 
^~~~~~~~~~~~~~~~~~~ 
 
Match #2: 
 
testfile.cpp:6:1: note: "root" binds here 
int main(int, char**) 
^~~~~~~~~~~~~~~~~~~~~ 
testfile.cpp:6:1: note: "x" binds here 
int main(int, char**) 
^~~~~~~~~~~~~~~~~~~~~ 
2 matches. 

clang-query automatically creates a binding for the root element in a matcher. This gets noisy when trying to match something specific, so it makes sense to turn that off if defining custom binding names:

clang-query> set bind-root false 
clang-query> m functionDecl().bind("x") 

Match #1: 

testfile.cpp:1:1: note: "x" binds here 
int addTwo(int num) 
^~~~~~~~~~~~~~~~~~~ 

Match #2: 

testfile.cpp:6:1: note: "x" binds here 
int main(int, char**) 
^~~~~~~~~~~~~~~~~~~~~ 
2 matches. 

So, we can see that for each function declaration that appeared in the translation unit, we get a resulting match. clang-tidy will later use these matches one at a time in the check method in MyFirstCheck.cpp to complete the refactoring.

Use quit to exit the clang-query interpreter. The interpreter must be restarted each time C++ code is changed in order for the new content to be matched.

Nesting matchers

The AST Matchers form a ‘predicate language’ where each matcher in the vocabulary is itself a predicate, and those predicates can be nested. The matchers fit into three broad categories as documented in the AST Matchers Reference.

functionDecl() is an AST Matcher which is invoked for each function declaration in the source code. In normal source code, there will be hundreds or thousands of results coming from external headers for such a simple matcher.

Let’s match only functions with a particular name:

clang-query> m functionDecl(hasName("addTwo")) 

Match #1: 

testfile.cpp:1:1: note: "root" binds here 
int addTwo(int num) 
^~~~~~~~~~~~~~~~~~~ 
1 match. 

This matcher will only trigger on function declarations which have the name "addTwo". The middle column of the documentation indicates the name of each matcher, and the first column indicates the kind of matcher it can be nested inside. hasName is not documented as being usable with Matcher<FunctionDecl>, but instead with Matcher<NamedDecl>.

Here, a developer without prior experience with the Clang AST needs to learn that the FunctionDecl AST class inherits from the NamedDecl AST class (as well as DeclaratorDecl, ValueDecl and Decl). Matchers documented as usable with each of those classes can also work with a functionDecl() matcher. That familiarity with the inheritance structure of Clang AST classes is essential to proficiency with AST Matchers. The names of classes in the Clang AST correspond to “node matcher” names by making the first letter lower-case. In the case of class names with an abbreviation prefix CXX such as CXXMemberCallExpr, the entire prefix is lowercased to produce the matcher name cxxMemberCallExpr.

So, instead of matching function declarations, we can match on all named declarations in our source code. Ignoring some noise in the output, we get results for each function declaration and each parameter variable declaration:

clang-query> m namedDecl() 
... 
Match #8: 

testfile.cpp:1:1: note: "root" binds here 
int addTwo(int num) 
^~~~~~~~~~~~~~~~~~~ 

Match #9: 

testfile.cpp:1:12: note: "root" binds here 
int addTwo(int num) 
           ^~~~~~~ 

Match #10: 

testfile.cpp:6:1: note: "root" binds here 
int main(int, char**) 
^~~~~~~~~~~~~~~~~~~~~ 

Match #11: 

testfile.cpp:6:10: note: "root" binds here 
int main(int, char**) 
         ^~~ 

Match #12: 

testfile.cpp:6:15: note: "root" binds here 
int main(int, char**) 
              ^~~~~~

Parameter declarations are in the match results because they are represented by the ParmVarDecl class, which also inherits NamedDecl. We can match only parameter variable declarations by using the corresponding AST node matcher:

clang-query> m parmVarDecl() 

Match #1: 

testfile.cpp:1:12: note: "root" binds here 
int addTwo(int num) 
           ^~~~~~~ 

Match #2: 

testfile.cpp:6:10: note: "root" binds here 
int main(int, char**) 
         ^~~ 

Match #3: 

testfile.cpp:6:15: note: "root" binds here 
int main(int, char**) 
              ^~~~~~

clang-query has a code-completion feature, triggered by pressing TAB, which shows the matchers which can be used at any particular context. This feature is not enabled on Windows however.

Discovery Through Clang AST Dumps

clang-query is at its most useful as a discovery tool when exploring deeper into the AST and dumping intermediate nodes.

Let’s query our testfile.cpp again, this time with the output set to dump:

clang-query> set output dump 
clang-query> m functionDecl(hasName("addTwo")) 

Match #1: 

Binding for "root": 
FunctionDecl 0x17a193726b8 <testfile.cpp:1:1, line:4:1> line:1:5 used addTwo 'int (int)' 
|-ParmVarDecl 0x17a193725f0 <col:12, col:16> col:16 used num 'int' 
`-CompoundStmt 0x17a19372840 <line:2:1, line:4:1> 
  `-ReturnStmt 0x17a19372828 <line:3:5, col:18>
    `-BinaryOperator 0x17a19372800 <col:12, col:18> 'int' '+' 
      |-ImplicitCastExpr 0x17a193727e8 <col:12> 'int' <LValueToRValue>
      | `-DeclRefExpr 0x17a19372798 <col:12> 'int' lvalue ParmVar 0x17a193725f0 'num' 'int' 
      `-IntegerLiteral 0x17a193727c0 <col:18> 'int' 2

There is a lot here to take in, and a lot of noise which is not relevant to creating a matcher: pointer addresses, the word used appearing inexplicably, and other content whose structure is not obvious. For the sake of brevity in this blog post, I will elide such content in further listings of AST content.

The reported match has a FunctionDecl at the top level of a tree. Below that, we can see the ParmVarDecl nodes which we matched previously, and other nodes such as ReturnStmt. Each of these corresponds to a class name in the Clang AST, so it is useful to look them up to see what they inherit and know which matchers are relevant to their use.
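
For example, knowing that the tree contains a ReturnStmt and a BinaryOperator, the reference documentation points to matchers such as hasReturnValue() and hasOperatorName(), which can be combined into a query like the following (shown purely as an illustration of turning a dump into a matcher; it matches the return statement of addTwo):

clang-query> m returnStmt(hasReturnValue(binaryOperator(hasOperatorName("+"))))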

The AST also contains source location and source range information, the latter denoted by angle brackets. While this detailed output is useful for exploring the AST, it is not as useful for exploring the source code. Diagnostic mode can be re-entered with set output diag for source code exploration. Unfortunately, both outputs (dump and diag) can not currently be enabled at once, so it is necessary to switch between them.

Tree Traversal

We can traverse this tree using the has() matcher:

clang-query> m functionDecl(has(compoundStmt(has(returnStmt(has(callExpr())))))) 

Match #1: 

Binding for "root": 
FunctionDecl <testfile.cpp:6:1, line:9:1> line:6:5 main 'int (int, char **)' 
|-ParmVarDecl <col:10> col:13 'int' 
|-ParmVarDecl <col:15, col:20> col:21 'char **' 
`-CompoundStmt <line:7:1, line:9:1> 
  `-ReturnStmt <line:8:5, col:20> 
    `-CallExpr <col:12, col:20> 'int' 
      |-ImplicitCastExpr <col:12> 'int (*)(int)'
      | `-DeclRefExpr <col:12> 'int (int)' 'addTwo'
      `-IntegerLiteral <col:19> 'int' 3      

With some distracting content removed, we can see that the AST dump contains source ranges and source locations. The ranges are denoted by angle brackets, which contain a beginning and possibly an end position. To avoid repeating the filename and the keywords line and col, only differences from the previously printed source location are printed. For example, <testfile.cpp:6:1, line:9:1> describes a span from line 6 column 1 to line 9 column 1 in testfile.cpp. The range <col:15, col:20> describes the span from column 15 to column 20 on line 6 of testfile.cpp, as those are the last line and filename printed.

Because each of the nested predicates matches, the top-level functionDecl() matches and we get a binding for the result. We can additionally use a nested bind() call to add nodes to the result set:

clang-query> m functionDecl(has(compoundStmt(has(returnStmt(has(callExpr().bind("functionCall"))))))) 

Match #1: 

Binding for "functionCall": 
CallExpr <testfile.cpp:8:12, col:20> 'int' 
|-ImplicitCastExpr <col:12> 'int (*)(int)'
| `-DeclRefExpr <col:12> 'int (int)' 'addTwo'
`-IntegerLiteral <col:19> 'int' 3 

Binding for "root": 
FunctionDecl <testfile.cpp:6:1, line:9:1> line:6:5 main 'int (int, char **)' 
|-ParmVarDecl <col:10> col:13 'int' 
|-ParmVarDecl <col:15, col:20> col:21 'char **' 
`-CompoundStmt <line:7:1, line:9:1> 
  `-ReturnStmt <line:8:5, col:20> 
    `-CallExpr <col:12, col:20> 'int' 
      |-ImplicitCastExpr <col:12> 'int (*)(int)'
      | `-DeclRefExpr <col:12> 'int (int)' 'addTwo'
      `-IntegerLiteral <col:19> 'int' 3 

The hasDescendant() matcher can be used to match the same node as above in this case:

clang-query> m functionDecl(hasDescendant(callExpr().bind("functionCall")))

Note that over-use of the has() and hasDescendant() matchers (and their complements hasParent() and hasAncestor()) is usually an anti-pattern and can lead to unintended results, particularly while matching nested Expr subclasses in source code. Usually, higher-level matchers should be used instead. For example, while has() may be used to match a desired IntegerLiteral argument in the case above, it would not be possible to specify which argument we wish to match in a call which has multiple arguments. The hasArgument() matcher should be used with callExpr() to resolve this issue, as it can specify which argument should be matched if there are multiple:

clang-query> m callExpr(hasArgument(0, integerLiteral()))

The above matcher will match on every function call whose zeroth argument is an integer literal.

Usually we want to use more narrowing criteria to only match on a particular category of matches. Most matchers accept multiple arguments and behave as though they have an implicit allOf() within them. So, we can write:

clang-query> m callExpr(hasArgument(0, integerLiteral()), callee(functionDecl(hasName("addTwo"))))

to match calls whose zeroth argument is an integer literal only if the function being called has the name “addTwo“.
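
Because of that implicit allOf(), the same matcher could equivalently be written in a more verbose form (shown here only to make the equivalence explicit):

clang-query> m callExpr(allOf(hasArgument(0, integerLiteral()), callee(functionDecl(hasName("addTwo")))))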

A matcher expression can sometimes be obvious to read and understand, but harder to write or discover. The particular node types which may be matched can be discovered by examining the output of clang-query. However, the callee() matcher here may be difficult to discover independently, because it does not appear in the AST dumps from clang-query and it is only one matcher in the long list in the reference documentation. The code of the existing clang-tidy checks is educational, both for discovering matchers which are commonly used together and for finding the contexts where particular matchers should be used.

A nested matcher creating a binding in clang-query is another important discovery technique. If we have source code such as:

int add(int num1, int num2) 
{
  return num1 + num2; 
} 

int add(int num1, int num2, int num3) 
{
  return num1 + num2 + num3; 
} 

int main(int argc, char**) 
{ 
  int i = 42; 

  return add(argc, add(42, i), 4 * 7); 
}

and we intend to introduce a safe_int type to use instead of int in the signature of add. All existing uses of add must be ported to some new pattern of code.

The basic workflow with clang-query is that we must first identify source code which is exemplary of what we want to port and then determine how it is represented in the Clang AST. We will need to identify the locations of arguments to the add function and their AST types as a first step.

Let’s start with callExpr() again:

clang-query> m callExpr() 

Match #1: 

testfile.cpp:15:10: note: "root" binds here 
    return add(argc, add(42, i), 4 * 7); 
           ^~~~~~~~~~~~~~~~~~~~~~~~~~~~ 

Match #2: 

testfile.cpp:15:20: note: "root" binds here 
    return add(argc, add(42, i), 4 * 7); 
                     ^~~~~~~~~~ 

This example uses several different kinds of argument to the add function: the first argument is a parameter of the enclosing function, the second is the return value of another call, and the third is an inline multiplication. clang-query can help us discover how to match these constructs. Using the hasArgument() matcher we can bind to each of the three arguments, again setting bind-root to false for brevity:

clang-query> set bind-root false 
clang-query> m callExpr(hasArgument(0, expr().bind("a1")), hasArgument(1, expr().bind("a2")), hasArgument(2, expr().bind("a3"))) 

Match #1: 

testfile.cpp:15:14: note: "a1" binds here 
return add(argc, add(42, i), 4 * 7); 
           ^~~~ 

testfile.cpp:15:20: note: "a2" binds here 
return add(argc, add(42, i), 4 * 7); 
                 ^~~~~~~~~~ 

testfile.cpp:15:32: note: "a3" binds here 
return add(argc, add(42, i), 4 * 7); 
                             ^~~~~

Changing the output to dump and re-running the same matcher:

clang-query> set output dump 
clang-query> m callExpr(hasArgument(0, expr().bind("a1")), hasArgument(1, expr().bind("a2")), hasArgument(2, expr().bind("a3"))) 

Match #1: 

Binding for "a1": 
DeclRefExpr <testfile.cpp:15:14> 'int' 'argc'

Binding for "a2": 
CallExpr <testfile.cpp:15:20, col:29> 'int' 
|-ImplicitCastExpr <col:20> 'int (*)(int, int)' 
| `-DeclRefExpr <col:20> 'int (int, int)' 'add' 
|-IntegerLiteral <col:24> 'int' 42 
`-ImplicitCastExpr <col:28> 'int' 
  `-DeclRefExpr <col:28> 'int' 'i' 

Binding for "a3": 
BinaryOperator <testfile.cpp:15:32, col:36> 'int' '*' 
|-IntegerLiteral <col:32> 'int' 4 
`-IntegerLiteral <col:36> 'int' 7 

We can see that the top-level AST nodes of the arguments are DeclRefExpr, CallExpr and BinaryOperator respectively. When implementing our refactoring tool, we might want to wrap argc as safe_int(argc), ignore the nested add() call because its return type will be changed to safe_int, and change the BinaryOperator to some safe operation.
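
To make that target concrete, the ported call in main might end up looking roughly like this (a sketch only; safe_int is the hypothetical type discussed above and safe_mul is a hypothetical helper for the multiplication):

return add(safe_int(argc), add(42, i), safe_mul(4, 7));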

As we learn about the AST we are examining, we can also replace the expr() with something more specific to explore further. Because we now know the second argument is a CallExpr, we can use a callExpr() matcher to check the callee. The callee() matcher only works if we specify callExpr() instead of expr():

clang-query> m callExpr(hasArgument(1, callExpr(callee(functionDecl().bind("func"))).bind("a2"))) 

Match #1: 

Binding for "a2": 
CallExpr <testfile.cpp:15:20, col:29> 'int' 
|-ImplicitCastExpr <col:20> 'int (*)(int, int)'
| `-DeclRefExpr <col:20> 'int (int, int)' 'add'
|-IntegerLiteral <col:24> 'int' 42 
`-ImplicitCastExpr <col:28> 'int' 
  `-DeclRefExpr <col:28> 'int' 'i' 

Binding for "func": 
FunctionDecl <testfile.cpp:1:1, line:4:1> line:1:5 add 'int (int, int)' 
... etc 

1 match. 
clang-query> set output diag 
clang-query> m callExpr(hasArgument(1, callExpr(callee(functionDecl().bind("func"))).bind("a2"))) 

Match #1: 

testfile.cpp:15:20: note: "a2" binds here 
return add(argc, add(42, i), 4 * 7); 
                 ^~~~~~~~~~ 

testfile.cpp:1:1: note: "func" binds here 
int add(int num1, int num2) 
^~~~~~~~~~~~~~~~~~~~~~~~~~~ 

Avoiding the Firehose

Usually when you need to examine the AST it will make sense to run clang-query on your real source code instead of a single-file demo. Starting off with a callExpr() matcher will result in a firehose problem – there will be tens of thousands of results and you will not be able to determine how to make your matcher more specific for the lines of source code you are interested in. Several tricks can come to your aid in this case.

First, you can use isExpansionInMainFile() to limit the matches to only the main file, excluding all results from headers. That matcher can be used with Exprs, Stmts and Decls, so it is useful for everything you might want to start matching.

Second, if you still get too many results from your matcher, the hasAncestor() matcher can be used to limit the results further.

Third, particular names of variables or functions can often anchor your match to the piece of code you are interested in.

Exploring the AST of code such as

 
void myFuncName() 
{ 
  int i = someFunc() + Point(4, 5).translateX(9);   
} 

might start with a matcher which anchors to the name of the variable, the function it is in and the location in the main file:

varDecl(isExpansionInMainFile(), hasAncestor(functionDecl(hasName("myFuncName"))), hasName("i"))

This starting point will make it possible to explore how the rest of the line is represented in the AST without being drowned in noise.
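
From that anchor, nested matchers with bindings can then be added piece by piece to reveal the rest of the expression, for example (the binding name is arbitrary):

clang-query> m varDecl(isExpansionInMainFile(), hasAncestor(functionDecl(hasName("myFuncName"))), hasName("i"), hasInitializer(expr().bind("init")))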

Conclusion

clang-query is an essential asset while developing a refactoring tool with AST Matchers. It is a prototyping and discovery tool, whose input can be pasted into the implementation of a new clang-tidy check.

In this blog post, we explored the basic use of the clang-query tool – nesting matchers and binding their results – and how the output corresponds to the AST Matcher Reference. We also saw how to limit the scope of matches to enable easy creation of matchers in real code.

In the next blog post, we will explore the corresponding consumer of AST matcher results. This will be the actual re-writing of the source code corresponding to the patterns we have identified as refactoring targets.

Which AST Matchers do you think will be most useful in your code? Let us know in the comments below or contact the author directly via e-mail at stkelly@microsoft.com, or on Twitter @steveire.

I will be showing even more new and future developments in clang-query at code::dive in November. Make sure to put it in your calendar if you are attending!

Public preview: Named Entity Recognition in the Cognitive Services Text Analytics API

$
0
0

Today, we are happy to announce the public preview of Named Entity Recognition as part of the Text Analytics Cognitive Service. Named Entity Recognition (NER) is the ability to take free-form text and identify the occurrences of entities such as people, locations, organizations, and more. With just a simple API call, NER in Text Analytics uses robust machine learning models to find and categorize more than twenty types of named entities in any text documents.

Many organizations have messy piles of unstructured text in the form of customer feedback, enterprise documents, social media feeds, and more. However, it is challenging to understand what information these ever-growing stacks of documents contain. Text Analytics has long been helping customers make sense of these troves of text with capabilities such as Key Phrase Extraction, Sentiment Analysis, and Language Detection. Today's announcement adds to this suite of powerful and easy-to-use natural language processing solutions that make it easy to tackle many problems.

Named Entity Recognition and Entity Linking

Building upon the Entity Linking feature that was announced at Build earlier this year, the new Entities API processes the text using both NER and Entity Linking capabilities. This makes it an extremely powerful solution for squeezing the most structured information out of the unstructured text.

Entity Linking is the ability to identify and disambiguate the well-known identity of an entity found in the text, for example, determining whether the word "Mars" is being used as the planet or as the Roman god of war. This process requires the presence of a knowledge base to which recognized entities are linked. Knowledge bases from Bing and Wikipedia are used for Text Analytics. When the Text Analytics Entities API recognizes an entity using entity linking, it will provide links to more information about the entity on the web.

Named Entity Recognition, in contrast, can identify entities in unstructured text regardless of whether they are well-known or exist in a knowledge base. When Text Analytics identifies an entity using NER, it will provide the type of entity (person, location, organization, and so on) in the API response. In some cases, it will also provide a subtype.

In cases where an entity is recognized using both Entity Linking and Named Entity Recognition, the API will return the entity's type as well as web links to more information about the entity.

Supported entity types

Using the Text Analytics Cognitive Service, it's currently possible to recognize more than twenty types of entities in both Spanish and English. The supported entity types are listed below; see the documentation for the most current list of supported entities and languages:

Type SubType Example
Person N/A* "Jeff", "Ashish Makadia"
Location N/A* "Redmond, Washington", "Paris"
Organization N/A* "Microsoft"
Quantity Number "6", "six"
Quantity Percentage "50%", "fifty percent"
Quantity Ordinal "2nd", "second"
Quantity NumberRange "4 to 8"
Quantity Age "90 days old", "30 years old"
Quantity Currency "$10.99"
Quantity Dimension "10 miles", "40 cm"
Quantity Temperature "32 degrees"
DateTime N/A* "6:30PM February 4, 2012"
DateTime Date "May 2nd, 2017", "05/02/2017"
DateTime Time "8am", "8:00"
DateTime DateRange "May 2nd to May 5th"
DateTime TimeRange "6pm to 7pm"
DateTime Duration "1 minute and 45 seconds"
DateTime Set "every Tuesday"
DateTime TimeZone "UTC-7", "CST"
URL N/A* "http://www.bing.com"
Email N/A* "support@microsoft.com"

* Depending on the input and extracted entities, certain entities may omit the SubType.

Next steps

Read more about Text Analytics and its capabilities, then visit our documentation. Please visit our pricing page to learn about the various tiers of service to fit your needs.

How one artist brought her vision to life using Excel spreadsheets

$
0
0

Landscape architect by day, Microsoft Excel artist by night. Australia native Emma Stevens isn’t your average artist. She uses spreadsheets to create skyline imagery out of text.

Inspired by the Melbourne skyline, Stevens began creating her artwork by building an image of the city out of words in a spreadsheet app. She added an image of the Melbourne skyline to the background of a spreadsheet, typed the word “Melbourne” in the cell rows, and divided the columns to define each building. Using column breaks and letter shading, Stevens gave each building depth. From a distance, the image looks like the Melbourne skyline, but up close it becomes clear that the name of the city makes up the structure of the entire image.

Image of Emma Stevens creating a Spreadsheet Skyline in Excel.

The boundless potential of a completely blank canvas can sometimes overwhelm an artist. Stevens says the structure of the spreadsheet software helps her stay focused and dive into her work for hours each day. She appreciates the combination of creativity and repetition. “The Excel spreadsheet became a labor of love. I could redo elements that didn’t quite work, or change the tone or color of certain buildings to improve the composition and depth.”

Image of Emma Stevens holding up a Spreadsheet Skyline, made using Excel.

Technology has opened up new possibilities for artists like Stevens. “Using technology was another way to create precision in art. It also gave me the flexibility to ‘undo,’ which is a freedom I do not get with pen on paper.” Stevens doesn’t consider herself an Excel power user, either. With just the basics, she brought her vision to life. “All I needed to look into was how to create macro buttons. Now I can overlay an image of the skyline to work from that I turn off and on so it’s easier to use as a reference,” explains Stevens.

It doesn’t end with the skyline of Melbourne. As for future artwork, Stevens says, “I’ve thought of doing a few other works such as images of musician faces made with song lyrics, or images of people’s kids using their favorite storybook.” She also has plans for a few more Spreadsheet Skylines of other cities around the world such as New York, London, and Tokyo.

As always, we’d love to hear from you, so please send us your thoughts through UserVoice—and keep the conversation going by following Excel on Facebook and Twitter.

The post How one artist brought her vision to life using Excel spreadsheets appeared first on Microsoft 365 Blog.

Computer Vision for Model Assessment

$
0
0

One of the differences between statistical data scientists and machine learning engineers is that while the latter group are concerned primarily with the predictive performance of a model, the former group are also concerned with the fit of the model. A model that misses important structures in the data — for example, seasonal trends, or a poor fit to specific subgroups — is likely to be lacking important variables or features in the source data. You can try different machine learning techniques or adjust hyperparameters to your heart's content, but you're unlikely to discover problems like this without evaluating the model fit.

One of the most powerful tools for assessing model fit is the residual plot: a scatterplot of the predicted values of the model versus the residuals (the difference between the predictions and the original data). If the model fits well, there should be no obvious relation between the two. For example, visual inspection shows us that the model below fits fairly well for 19 of the 20 subgroups in the data, but there may be a missing variable that would explain the apparent residual trend for group 19.

Residuals

Assessing residual plots like these is something of an art, which is probably why it isn't routine in machine learning circles. But what if we could use deep learning and computer vision to assess residual plots like these automatically? That's what Professor Di Cook from Monash University proposes, in the 2018 Belz Lecture for the Statistical Society of Australia, Human vs computer: when visualising data, who wins?  

(The slides for the presentation, created using R, are also available online.) The first part of the talk includes a statistical interpretation of neural networks and an intuitive explanation of deep learning for computer vision. The talk also references a related paper, Visualizing Statistical Models: Removing the Blindfold (by Hadley Wickham, Dianne Cook, and Heike Hofmann, published in the ASA Data Science Journal in 2015), which is well worth a read as well.
