
.NET Framework October 2018 Security and Quality Rollup


Today, we released the October 2018 Security and Quality Rollup.

Security

No new security fixes.  See .NET Framework September 2018 Security and Quality Rollup for the latest security update.

Quality and Reliability

This release contains the following quality and reliability improvements.

CLR

  • Updated code to prevent errors regarding an invalid date format when Japanese Era 4 is used with a future date [568291]
  • Parsing Japanese dates with a year number that exceeds the number of years in that era now succeeds instead of throwing an error [603100]
  • Fixed an IndexOutOfRangeException that was thrown when asynchronously reading process output and fewer than a character’s worth of bytes was read at the beginning of a line [621951]
  • Fix in the JIT compiler for a rare case of struct field assignments, described here: https://github.com/Microsoft/dotnet/issues/779 [641182]
  • DateTime.Now and DateTime.UtcNow are now always synchronized with the system time; DateTime and DateTimeOffset operations continue to work as they did before [645660]
  • Spin-waits in several synchronization primitives were conditionally improved to perform better on Intel Skylake and more recent microarchitectures. To enable these improvements, set the new configuration variable COMPlus_Thread_NormalizeSpinWait to 1. [647729]
  • Corrected a JIT optimization that resulted in the removal of an interlocked CompareExchange operation [653568]

WPF

  • Under certain circumstances, WPF applications that use the spell checker with custom dictionaries can throw unexpected exceptions and crash [622262]

Note: Additional information on these improvements is not available. The VSTS bug number provided with each improvement is a unique ID that you can give Microsoft Customer Support, include in Stack Overflow comments, or use in web searches.

Getting the Update

The Security and Quality Rollup is available via Windows Update, Windows Server Update Services, Microsoft Update Catalog, and Docker.

Microsoft Update Catalog

You can get the update via the Microsoft Update Catalog. For Windows 10, .NET Framework updates are part of the Windows 10 Monthly Rollup.

The following table is for Windows 10 and Windows Server 2016+ versions.

Product Version                                             Preview of Quality Rollup KB
Windows 10 1803 (April 2018 Update)                         Catalog: 4458469
  .NET Framework 3.5                                        4458469
  .NET Framework 4.7.2                                      4458469
Windows 10 1709 (Fall Creators Update)                      Catalog: 4457136
  .NET Framework 3.5                                        4457136
  .NET Framework 4.7.1, 4.7.2                               4457136
Windows 10 1703 (Creators Update)                           Catalog: 4457141
  .NET Framework 3.5                                        4457141
  .NET Framework 4.7, 4.7.1, 4.7.2                          4457141
Windows 10 1607 (Anniversary Update), Windows Server 2016   Catalog: 4457127
  .NET Framework 3.5                                        4457127
  .NET Framework 4.6.2, 4.7, 4.7.1, 4.7.2                   4457127

The following table is for earlier Windows and Windows Server versions.

Product Version                                             Preview of Quality Rollup KB
Windows 8.1, Windows RT 8.1, Windows Server 2012 R2         Catalog: 4458613
  .NET Framework 3.5                                        4457009
  .NET Framework 4.5.2                                      4457017
  .NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2       4457015
Windows Server 2012                                         Catalog: 4458612
  .NET Framework 3.5                                        4457008
  .NET Framework 4.5.2                                      4457018
  .NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2       4457014
Windows 7, Windows Server 2008 R2                           Catalog: 4458611
  .NET Framework 3.5.1                                      4457008
  .NET Framework 4.5.2                                      4457019
  .NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2       4457016
Windows Server 2008                                         Catalog: 4458614
  .NET Framework 2.0, 3.0                                   4457007
  .NET Framework 4.5.2                                      4457019
  .NET Framework 4.6                                        4457016

Previous Monthly Rollups

The last few .NET Framework Monthly updates are listed below for your convenience:


.NET Core October 2018 Update – .NET Core 1.0 and 1.1


Today, we are releasing the .NET Core October 2018 Update for 1.0 and 1.1. This update includes .NET Core 1.0.13, 1.1.10 and .NET Core SDK 1.1.11.

Security

CVE-2018-8292: .NET Core Information Disclosure Vulnerability

Microsoft is aware of a security feature bypass vulnerability that exists in .NET Core when HTTP authentication information is inadvertently exposed in an outbound request that encounters an HTTP redirect. An attacker who successfully exploited this vulnerability could use the information to further compromise the web application.

The update addresses the vulnerability by correcting how .NET Core applications handle HTTP redirects.
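
As a rough illustration of the class of issue (a sketch, not the patched runtime code; the host name and credentials below are made up), an application can reduce its exposure by scoping credentials to a single origin with CredentialCache and by turning off automatic redirects so the redirect target can be inspected before it is followed:

    using System;
    using System.Net;
    using System.Net.Http;
    using System.Threading.Tasks;

    class RedirectSafetyExample
    {
        static async Task Main()
        {
            // Attach the credentials to one specific origin rather than to every request,
            // so they are not replayed to a different host after a redirect.
            var credentials = new CredentialCache();
            credentials.Add(new Uri("https://api.contoso.example/"), "Basic",
                            new NetworkCredential("user", "secret"));

            var handler = new HttpClientHandler
            {
                Credentials = credentials,
                AllowAutoRedirect = false   // follow redirects manually so the target can be checked
            };

            using (var client = new HttpClient(handler))
            {
                HttpResponseMessage response = await client.GetAsync("https://api.contoso.example/data");

                if ((int)response.StatusCode >= 300 && (int)response.StatusCode < 400)
                {
                    Uri target = response.Headers.Location;
                    Console.WriteLine($"Redirected to {target}; decide explicitly whether to follow it.");
                }
            }
        }
    }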

Getting the Update

The latest .NET Core updates are available on the .NET Core download page. This update is included in Visual Studio 15.8.7 and 15.0.19, which are also releasing today.

Today’s releases are listed as follows:

Docker Images

.NET Docker images have been updated for today’s release. The following repos have been updated.

Note: Look at the “Tags” view in each repository to see the updated Docker image tags.

Note: You must re-pull base images in order to get updates. The Docker client does not pull updates automatically.

Azure App Services deployment

Deployment to Azure App Services has begun and the West Central US region will be live this morning. Remaining regions will be updated over the next few days and deployment is expected to be complete by end of week.

Previous .NET Core Updates

The last few .NET Core updates follow:

Microsoft joins Open Invention Network to help protect Linux and open source

$
0
0

I’m pleased to announce that Microsoft is joining the Open Invention Network (“OIN”), a community dedicated to protecting Linux and other open source software programs from patent risk.

Since its founding in 2005, OIN has been at the forefront of helping companies manage patent risks.  In the years before the founding of OIN, open source licenses typically covered only copyright interests and were silent about patents. OIN was designed to address this gap by creating a voluntary system of patent cross-licenses between member companies covering Linux System technologies. OIN has also been active in acquiring patents at times to help defend the community and to provide education and advice about the intersection of open source and intellectual property. Today, through the stewardship of its CEO Keith Bergelt and its Board of Directors, the organization provides a license platform for roughly 2,400 companies globally. The licensees range from individual developers and startups to some of the biggest technology companies and patent holders on the planet.

We know Microsoft’s decision to join OIN may be viewed as surprising to some, as it is no secret that there has been friction in the past between Microsoft and the open source community over the issue of patents. For others who have followed our evolution as a company, we hope this will be viewed as the next logical step for a company that is listening to its customers and is firmly committed to Linux and other open source programs.   

At Microsoft, we take it as a given that developers do not want a binary choice between Windows vs Linux, or .NET vs Java – they want cloud platforms to support all technologies. They want to deploy technologies at the edge that fit within the constraints of the device and meet customer needs. We also learned that collaborative development through the open source process can accelerate innovation. Following over a decade of work to make the company more open (did you know we open sourced parts of ASP.NET back in 2008?), Microsoft has become one of the largest contributors to open source in the world. Our employees contribute to over 2,000 projects, we provide first-class support for all major Linux distributions on Azure, and we have open sourced major projects such as .NET Core, TypeScript, VS Code, and PowerShell.

Joining OIN reflects Microsoft’s patent practice evolving in lock-step with the company’s views on Linux and open source more generally. We began this journey over two years ago through programs like Azure IP Advantage, which extended Microsoft’s indemnification pledge to open source software powering Azure services. We doubled down on this new approach when we stood with Red Hat and others to apply GPL v. 3 “cure” principles to GPL v. 2 code, and when we recently joined the LOT Network, an organization dedicated to addressing patent abuse by companies in the business of assertion.

Now, as we join OIN, we believe Microsoft will be able to do more than ever to help protect Linux and other important open source workloads from patent assertions. We bring a valuable and deep portfolio of over 60,000 issued patents to OIN for the benefit of Linux and other open source technologies. We also hope that our decision to join will attract many other companies to OIN, making the license network even stronger for the benefit of the open source community. 

We look forward to making our contributions to OIN and its members, and to working with the community to help open source developers and users protect the Linux ecosystem and encourage innovation with open source software.

Identify your move-groups and target sizes for migration with Azure Migrate

$
0
0

Planning is crucial in any migration effort and Azure Migrate helps you plan your datacenter migration to Azure. It helps you discover your on-premises environment, create move-groups and assess the move groups for migration to Azure.

How does assessing the move-group help in migration?

Assessing your move-groups helps you identify:

  1. Azure readiness: Are the on-premises VMs ready to run in Azure?
  2. Azure sizing: What is the right VM size or disk type for each VM in Azure?
  3. Azure cost estimation: What would it cost to run the VMs in Azure?

How does Azure Migrate assess if a VM is ready for Azure?

Azure Migrate analyses the configuration details of each VM to identify if the configuration is supported in Azure. For example, it analyses the operating system of the VM to identify if the OS is supported by Azure. Similarly, it analyses properties like the number of cores, the memory size, and the size of each disk attached to the VM to identify if the required configuration is supported in Azure. Based on the analysis, it categorizes the VMs into Ready, Conditionally Ready, and Not Ready categories. For the VMs that are in the Conditionally Ready or Not Ready categories, Azure Migrate provides remediation guidance on how the readiness issues can be fixed.

How does Azure Migrate perform sizing?

Azure Migrate helps you do intelligent rightsizing for Azure by looking at the performance data of the on-premises VMs and disks. It looks at the utilization data for CPU and memory for each VM and recommends a VM size in Azure accordingly. Similarly, it looks at the disk utilization data for each disk and recommends a disk type in Azure. Performance-based sizing helps ensure that you are not over-allocating your VMs in Azure, which saves cost.

Do I need to change my vCenter statistics level for performance data collection?

You don’t need to do that anymore! With the continuous discovery option in Azure Migrate, you can now profile your on-premises environment to collect real-time utilization data for VMs and disks. Earlier, Azure Migrate required you to set the statistics settings in vCenter Server to level three and wait for at least a day before you could start the discovery. The continuous discovery model does not depend on the vCenter Server statistics settings, so you do not have to change the statistics settings in vCenter Server. Additionally, since it collects real-time utilization data, there is no longer a risk of under-sizing the VMs for Azure due to the collection of averaged counters from vCenter Server. Learn more about continuous discovery in Azure Migrate.

How do I create move-groups for assessments?

Azure Migrate helps you visualize dependencies of on-premises VMs and create move groups based on the dependencies. If you have legacy applications in your on-premises environment and are struggling to identify the set of VMs that should be moved together to Azure, the dependency visualization functionality is ideal for you. To visualize dependencies, Azure Migrate leverages the Service Map solution in Azure and displays the network connections going in and out of the on-premises VMs. The dependency map helps you identify VMs that are related to each other. Once you identify the related VMs you can create groups of VMs that are self-sufficient and ensure that you are not leaving anything behind when you are migrating to Azure.

I am already using Service Map and have my on-premises agents configured to a Log Analytics workspace, can I attach my existing workspace with Azure Migrate?

Absolutely, in addition to allowing the creation of new Log Analytics workspaces, Azure Migrate now allows you to attach an existing workspace to the migration project. So, if you have an existing Log Analytics workspace, you can attach it to Azure Migrate to leverage dependency visualization. If you have already installed the MMA and dependency agents on your on-premises VMs and configured them to talk to the workspace, you do not have to do anything else; once you have attached the workspace to the Azure Migrate project, the dependencies will start showing up in Azure Migrate.

Do I have to pay anything for dependency visualization, if I use an existing workspace?

Even if you use an existing workspace, the dependency visualization functionality is free in Azure Migrate for the first 180 days from the day you associate the workspace with a project. However, if you are using any solution in the workspace other than Service Map, standard Log Analytics charges apply. Learn more about Azure Migrate pricing.

This is great! But, where is the data discovered from the on-premises environment stored?

Azure Migrate only collects metadata about your on-premises VMs and allows you to select an Azure geography where the discovered metadata will be stored. Based on the geography selected, Azure Migrate stores the discovered metadata in one of the Azure regions in that geography. Azure Migrate currently supports only the United States as the Azure geography for metadata storage. Support for Azure Government is planned for October 2018, followed by Europe and Asia in December 2018; support for other Azure geographies will be enabled in the future. Note that even though the metadata is currently stored in the United States, you can still plan your migration to any Azure region by specifying the appropriate Azure region in the assessment properties.

With the new enhancements, we believe that Azure Migrate will make it even easier for you to bring your on-premises VMs to Azure. So, if you haven’t already, get started now with Azure Migrate!

Visit the Azure migration center to learn how to apply Azure Migrate in your cloud migration journey.

Using .NET Hardware Intrinsics API to accelerate machine learning scenarios

$
0
0

This week’s blog post is by Brian Lui, one of our summer interns on the .NET team, who’s been hard at work. Over to Brian:

Hello everyone! This summer I interned in the .NET team, working on ML.NET, an open-source machine learning platform which enables .NET developers to build and use machine learning models in their .NET applications. The ML.NET 0.6 release just shipped and you can try it out today.

At the start of my internship, ML.NET code was already relying on vectorization for performance, using a native code library. This was an opportunity to reimplement an existing codebase in managed code, using .NET Hardware Intrinsics for vectorization, and compare results.

What is vectorization, and what are SIMD, SSE, and AVX?

Vectorization is a name used for applying the same operation to multiple elements of an array simultaneously. On the x86/x64 platform, vectorization can be achieved by using Single Instruction Multiple Data (SIMD) CPU instructions to operate on array-like objects.

SSE (Streaming SIMD Extensions) and AVX (Advanced Vector Extensions) are the names for SIMD instruction set extensions to the x86 architecture. SSE has been available for a long time: the CoreCLR underlying .NET Core requires that x86 platforms support at least the SSE2 instruction set. AVX is an extension to SSE that is now broadly available. Its key advantage is that it can handle 8 consecutive 32-bit elements in memory in one instruction, twice as many as SSE can.

.NET Core 3.0 will expose SIMD instructions as APIs that are available to managed code directly, making it unnecessary to use native code to access them.

ARM-based CPUs offer a similar range of intrinsics, but they are not yet supported on .NET Core (although work is in progress). Therefore, it is necessary to use software fallback code paths for the case when neither AVX nor SSE is available. The JIT makes it possible to do this fallback in a very efficient way. When .NET Core does expose ARM intrinsics, the code could exploit them, at which point the software fallback would rarely if ever be needed.

Project goals

  1. Increase ML.NET platform reach (x86, x64, ARM32, ARM64, etc.) by creating a single managed assembly with software fallbacks
  2. Increase ML.NET performance by using AVX instructions where available
  3. Validate .NET Hardware Intrinsics API and demonstrate performance is comparable to native code

I could have achieved the second goal by simply updating the native code to use AVX instructions, but by moving to managed code at the same time I could eliminate the need to build and ship a separate binary for each target architecture – it’s also usually easier to maintain managed code.

I was able to achieve all these goals.

Challenges

It was necessary to first familiarize myself with C# and .NET, and then my work included:

  • use Span<T> in the base-layer implementation of CPU math operations in C#. If you’re unfamiliar with Span<T>, see this great MSDN magazine article C# – All About Span: Exploring a New .NET Mainstay and also the documentation (a short illustrative sketch also follows this list).
  • enable switching between AVX, SSE, and software implementations depending on availability.
  • correctly handle pointers in the managed code, and remove alignment assumptions made by some of the existing code.
  • use multitargeting to allow ML.NET to continue to function on platforms that don’t have the .NET Hardware Intrinsics APIs.
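
For readers who haven’t used Span<T> yet, here is a tiny illustration (not ML.NET code) of why it fits this work: a span can wrap a slice of a managed array without copying, and since C# 7.3 it can be pinned with fixed to obtain a raw pointer for intrinsic load/store calls:

    using System;

    public static class SpanSketch
    {
        public static unsafe void Demo()
        {
            float[] values = new float[256];

            // A view over elements 16..143 of the array: no allocation, no copy.
            Span<float> slice = values.AsSpan(16, 128);

            // Pin the span to get a raw float*, which is what intrinsic Load/Store calls want.
            fixed (float* p = slice)
            {
                for (int i = 0; i < slice.Length; i++)
                {
                    p[i] = i + 1;   // writes are visible through both the span and the array
                }
            }

            Console.WriteLine(slice[0]);   // 1
            Console.WriteLine(values[16]); // 1, same underlying storage
        }
    }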

Multi-targeting

.NET Hardware Intrinsics will ship in .NET Core 3.0, which is currently in development. ML.NET also needs to run on .NET Standard 2.0 compliant platforms – such as .NET Framework 4.7.2 and .NET Core 2.1. In order to support both I chose to use multitargeting to create a single .csproj file that targets both .NET Standard 2.0 and .NET Core 3.0.

  1. On .NET Standard 2.0, the system will use the original native implementation with SSE hardware intrinsics
  2. On .NET Core 3.0, the system will use the new managed implementation with AVX hardware intrinsics.

As the code was originally

In the original code, every trainer, learner, and transform used in machine learning ultimately called a SseUtils wrapper method that performs a CPU math operation on input arrays, such as

  • MatMulDense, which takes the matrix multiplication of two dense arrays interpreted as matrices, and
  • SdcaL1UpdateSparse, which performs the update step of the stochastic dual coordinate ascent for sparse arrays.

These wrapper methods assumed a preference for SSE instructions, and called a corresponding method in another class Thunk, which serves as the interface between managed and native code and contains methods that directly invoke their native equivalents. These native methods in .cpp files in turn implemented the CPU math operations with loops containing SSE hardware intrinsics.

Breaking out a managed code-path

To this code I added a new independent code path for CPU math operations that becomes active on .NET Core 3.0, while keeping the original code path running on .NET Standard 2.0. All previous call sites of SseUtils methods now call CpuMathUtils methods of the same name instead, keeping the API signatures of the CPU math operations the same.

CpuMathUtils is a new partial class that contains two definitions for each public API representing a CPU math operation, one of which is compiled only on .NET Standard 2.0 while the other is compiled only on .NET Core 3.0. This conditional compilation feature creates two independent code paths for CpuMathUtils methods. Those function definitions compiled on .NET Standard 2.0 call their SseUtils counterparts directly, which essentially follow the original native code path.
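
A rough sketch of that split (the operation and its signature are illustrative placeholders rather than the actual ML.NET members; NETSTANDARD2_0 and NETCOREAPP3_0 are the compilation symbols defined by the respective target frameworks):

    using System;

    public static partial class CpuMathUtils
    {
    #if NETSTANDARD2_0
        // Compiled only for the .NET Standard 2.0 target: forward to the existing
        // SseUtils wrapper, which calls into the original native SSE implementation.
        public static void AddScalar(float scalar, Span<float> dst)
            => SseUtils.AddScalar(scalar, dst);
    #endif

        // The .NET Core 3.0 definition (compiled under NETCOREAPP3_0) instead dispatches
        // between AVX, SSE, and a software fallback (see the next section).
    }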

Writing code with software fallback

On the other hand, the other function definitions compiled on .NET Core 3.0 switch to one of three implementations of the same CPU math operation, based on availability at runtime:

  1. an AvxIntrinsics method which implements the operation with loops containing AVX hardware intrinsics,
  2. a SseIntrinsics method which implements the operation with loops containing SSE hardware intrinsics, and
  3. a software fallback in case neither AVX nor SSE is supported.

You will commonly see this pattern whenever code uses .NET Hardware Intrinsics – for example, this is what the code looks like for adding a scalar to a vector:
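
The sketch below captures the shape of that pattern using the .NET Core 3.0 System.Runtime.Intrinsics.X86 APIs; the names are illustrative rather than the exact ML.NET source, and SseIntrinsics.AddScalarU is the 128-bit sibling (not shown) of the AVX version in the next section:

    using System;
    using System.Runtime.Intrinsics.X86;

    public static partial class CpuMathUtils
    {
    #if NETCOREAPP3_0
        public static void AddScalar(float scalar, Span<float> dst)
        {
            if (Avx.IsSupported)
            {
                AvxIntrinsics.AddScalarU(scalar, dst);   // 256-bit path: 8 floats per instruction
            }
            else if (Sse.IsSupported)
            {
                SseIntrinsics.AddScalarU(scalar, dst);   // 128-bit path: 4 floats per instruction
            }
            else
            {
                // Software fallback for platforms with neither AVX nor SSE (e.g. ARM today).
                for (int i = 0; i < dst.Length; i++)
                {
                    dst[i] += scalar;
                }
            }
        }
    #endif
    }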

If AVX is supported, it is preferred, otherwise SSE is used if available, otherwise the software fallback path. At runtime, the JIT will actually generate code for only one of these three blocks, as appropriate for the platform it finds itself on.

To give you an idea, here is what the AVX implementation that’s called by the method above looks like:
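
Again as a sketch rather than the exact ML.NET source, assuming the Vector256/Vector128 types and the Avx/Sse intrinsics from .NET Core 3.0:

    using System;
    using System.Runtime.Intrinsics;
    using System.Runtime.Intrinsics.X86;

    internal static class AvxIntrinsics
    {
        // Adds a scalar to every element of dst: 8 floats at a time with AVX,
        // then one group of 4 with SSE if enough elements remain, then a scalar loop.
        public static unsafe void AddScalarU(float scalar, Span<float> dst)
        {
            fixed (float* pdst = dst)
            {
                float* pCurrent = pdst;
                float* pEnd = pdst + dst.Length;

                Vector256<float> scalarVector256 = Vector256.Create(scalar);
                while (pCurrent + 8 <= pEnd)
                {
                    Avx.Store(pCurrent, Avx.Add(Avx.LoadVector256(pCurrent), scalarVector256));
                    pCurrent += 8;
                }

                if (pCurrent + 4 <= pEnd)
                {
                    Vector128<float> scalarVector128 = Vector128.Create(scalar);
                    Sse.Store(pCurrent, Sse.Add(Sse.LoadVector128(pCurrent), scalarVector128));
                    pCurrent += 4;
                }

                while (pCurrent < pEnd)
                {
                    *pCurrent += scalar;
                    pCurrent++;
                }
            }
        }
    }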

You will notice that it operates on floats in groups of 8 using AVX, then any group of 4 using SSE, and finally a software loop for any that remain. (There are potentially more efficient ways to do this, which I won’t discuss here – there will be future blog posts dedicated to .NET Hardware Intrinsics.)

You can see all my code on the dotnet/machinelearning repository.

Since the AvxIntrinsics and SseIntrinsics methods in managed code directly implement the CPU math operations analogous to the native methods originally in .cpp files, the code change not only removes native dependencies but also simplifies the levels of abstraction between public APIs and base-layer hardware intrinsics.

After making this replacement I was able to use ML.NET to perform tasks such as train models with stochastic dual coordinate ascent, conduct hyperparameter tuning, and perform cross validation, on a Raspberry Pi, when previously ML.NET required an x86 CPU.

Here’s what the architecture looks like now (Figure 1):

Performance improvements

So what difference did this make to performance?

I wrote tests using BenchmarkDotNet to gather measurements.

First, I disabled the AVX code paths in order to fairly compare the native and managed implementations while both were using the same SSE instructions. As Figure 2 shows, the performance is closely comparable: on the large vectors the tests operate on, the overhead added by managed code is not significant.

Figure 2

Second, I enabled AVX support. Figure 3 shows that the average performance gain in microbenchmarks was about 20% over SSE alone.

Figure 3

Taking both together — the upgrade from the SSE implementation in native code to the AVX implementation in managed code — I measured an 18% improvement in the microbenchmarks. Some operations were up to 42% faster, while some others involving sparse inputs have potential for further optimization.

What ultimately matters of course is the performance for real scenarios. On .NET Core 3.0, training models of K-means clustering and logistic regression got faster by about 14% (Figure 4).

Figure 4

In closing

My summer internship experience with the .NET team has been rewarding and inspiring for me. My manager Dan and my mentors Santi and Eric gave me an opportunity to go hands-on with a real shipping project. I was able to work with other teams and external industry partners to optimize my code, and most importantly, as a software engineering intern with the .NET team, I was exposed to almost every step of the entire working cycle of a product enhancement, from idea generation to code review to product release with documentation.

I hope this has demonstrated how powerful .NET Hardware Intrinsics can be and I encourage you to consider opportunities to use them in your own projects when previews of .NET Core 3.0 become available.

Simplify extension development with PackageReference and the VSSDK meta package

$
0
0

Visual Studio 2017 version 15.8 made it possible to use the PackageReference syntax to reference NuGet packages in Visual Studio Extensibility (VSIX) projects. This makes it much simpler to reason about NuGet packages and opens the door for having a complete meta package containing the entire VSSDK.

Before using PackageReference, here’s what the References node looked like in a typical VSIX project:

It contained a lot of references to Microsoft.VisualStudio.* packages. Those are the ones we call VSSDK packages because they each make up a piece of the entire public API of Visual Studio.

Migrate to PackageReference

First, we must migrate our VSIX project to use PackageReference. That is described in the Migrate from packages.config to PackageReference documentation. It’s quick and easy.

Once that is done, it is time to get rid of all the VSSDK packages and install the meta package.

Installing the VSSDK meta package

The meta package is a single NuGet package that does nothing but reference all the NuGet packages that make up the VSSDK. So, it references all relevant Microsoft.VisualStudio.* packages and is versioned to match the major and minor versions of Visual Studio.

For instance, if your extension targets Visual Studio 2015, then you need version 14.0 of the VSSDK meta package. If your extension targets Visual Studio 2017 version 15.6, then install the 15.6 version of the VSSDK meta package.

Before installing the meta package, make sure to uninstall all the Microsoft.VisualStudio.*, VsLangProj* and EnvDTE* packages, as well as stdole, Newtonsoft.Json, from your project. After that is done, install the Madskristensen.VisualStudio.SDK package matching the minimum version of Visual Studio your extension supports. It supports all the way back to Visual Studio 2015 version 14.0.

After the meta package is installed, the References node looks a lot simpler:

You can read more about the VSSDK meta package on GitHub.

Known limitations

To use PackageReference and the VSSDK meta package, make sure that:

  1. The VSIX project targets .NET Framework 4.6 or higher
  2. You are using Visual Studio 2017 version 15.8 or higher
  3. You include pre-release packages in your search, since the meta package is still in beta

Try it today

The VSSDK meta package is right now a prototype that we hope to make default in Visual Studio 2019. I’m personally dogfooding it in about 10 production extensions, but we need more extensions to use it to ensure it contains the right dependencies for the various versions of Visual Studio. When it has been properly tested and we’re confident that it will work, it will be renamed to Microsoft.VisualStudio.SDK or similar.

So please try it out and let us know how it works for you.

Mads Kristensen, Senior Program Manager
@mkristensen

Mads Kristensen is a senior program manager on the Visual Studio Extensibility team. He is passionate about extension authoring, and over the years, he’s written some of the most popular ones with millions of downloads.

Fluent XAML Theme Editor Preview released!

$
0
0

Preview of the Fluent Xaml Theme Editor.

We’re excited to announce the preview release of the Fluent XAML Theme Editor application!

As some of you may remember from our Build 2018 session this year, we previewed a tool using our new ColorPaletteResources API that allows you to set the color theme of your app through some simple color selections.

This tool is now publicly available to insiders and those running on pre-release builds (Windows 10 SDK Preview Build 17723 or newer) to test out on our GitHub repository!

How to use the Fluent XAML Theme Editor

With the preview build, you can select three major colors for both the Light and Dark themes in the right-hand properties view labeled “Color Dictionary”:

  • Region – The background that all the controls sit on, which is a separate resource that does not exist in our framework.
  • Base – Represents all our controls’ backplates and their temporary state visuals like hover or press. In general, Base should be in contrast with the background (or Region color) of your theme and with black text (if in Light theme) and white text (if in Dark theme).
  • Primary – This is essentially the Accent color and should contrast with mainly white text. It is used in more choice locations to show alternate rest states for toggled controls like list selection, checkbox or radiobutton checked states, slider fill values, and other control parts that need to be shown as different from their default rest state once interacted with.

Refining the colors

In addition to the three major colors for each theme, you can also expand any one of the major colors to see a list of minor colors that change the color on certain controls on a more refined level.

List of minor colors.

In the above example, you’ll notice that although my major Base color is purple in Dark Theme, I can override that second gradient value to be yellow to change the borders and disabled color of controls.

To access the detailed view of colors, simply click the chevron next to the major color buttons:

Creating, saving and loading presets

The editor will ship with some presets for you to look at to get an idea of what a theme looks like in the app.

The preset dropdown is located at the top of the Color Dictionary properties panel.

When you first boot up it will always be set to Default – which is the Light and Dark theme styling default for all our controls. You can select different themes like Lavender and Nighttime to get an idea of how the tool will theme our controls.

Once you’re ready to start making your own theme, just start editing the colors! Once you’ve started tweaking them, you’ll notice that the Presets ComboBox goes from the name of the preset to “Custom”:

This means that you’ve started a new temporary theme that’s “Custom.” Any changes you make will not affect any of the other Presets in that box.

  • Once you’re satisfied with the changes you’ve made, simply click the “Save” button and browse to your desired save point.
  • Similarly, you can open your saved JSON theme by clicking the “Load” button and browsing to your saved theme’s file location.

Contrast Ratio Checking

Lastly, one of the most important parts of creating your theme is making sure that in either respective theme you are contrast compliant. The tool provides you with a small list of contrast information on the left-hand side of the color selection window when choosing your color.

In this window you can see your contrast with the most prevalent text color in the theme that you’re choosing to edit, in the above case black text because you are editing a Light theme color value.

When you pick a color that falls below the standard contrast ratio of 4.5:1, you’ll be alerted with red text next to your contrast value.

You can learn more about contrast ratios and their importance here.

Exporting and using your theme in an app

Of course, once you’ve themed everything, you’ll want to use it in your app! To do that, you’ll need to click the “Export” button at the bottom of the Color Dictionary properties panel.

Example of Exporting and using your theme in an app .

That button will open a popup window with a generic, unnamed ResourceDictionary stub (seen below).

This window doesn’t make anything final; if you make further changes to the theme and re-export them, the Export window will refresh with your changed colors.

Once you’re ready to use it in your app, click the “Copy to Clipboard” button in the lower right corner and go to your UWP Visual Studio solution.

Once in Visual Studio, right-click on the project solution, located in the Solution Explorer.

And go to Add > New Item and then choose Resource Dictionary.

Resource Dictionary in Visual Studio

Name that dictionary whatever makes sense to you and click Add when you’re done.

Example of naming your dictionary in Visual Studio.

That should generate a blank ResourceDictionary like this:

Example of a blank ResourceDictionary.

Now you can paste the exported theme code from the editor into that ResourceDictionary.


Now you have a fully customized color theme waiting to be used, so let’s apply it!

To do that, you’ll want to go into your page or app.xaml (depending on how much of your app you want the theme to apply to) and merge your theme dictionary into the resources of that page or app.

Fully customized color theme that can be merged into our theme dictionary.

Lastly, don’t forget to set the background color of your page to the RegionColor that you picked for your theme. It’s the only brush that won’t get set automatically.

Code used for setting the background color of your page to the RegionColor that you picked for your theme.

Once that’s in, you’re done! Your theme colors will now be pervasive across your app or page, depending on where you merged the dictionary.

Scoping your theme

If instead you want to scope your theme colors to a smaller area, you can also put all that power into just a container (like a Grid or StackPanel) and the theme will scope to just the controls that live within that container:

Code used to ensure that the theme will scope to just the controls that live within that container.

Using your theme down-level

When you export your theme, you’ll see a ResourceDictionary markup with a ColorPaletteResources definition like this:

ResourceDictionary markup with a ColorPaletteResources definition.

ColorPaletteResources is a friendly API for our SystemColors that sit within generic.xaml and allows for those SystemColors to be scoped at any level.

If you wanted to enable this same theme to work downlevel, you would have to define each SystemColor individually with each color from your theme:

using the Lavender theme to show this down-level transition.

In the above case we’re using the Lavender theme to show this down-level transition.

Warning: Although this markup format change will enable your theme to be applied across controls in earlier SDK versions, it will not work on a page, container or control scoped level. ColorPaletteResources is the API that allows scoping behavior. This markup format will only work at the app.xaml level for earlier SDKs.

Get started today

To use the Fluent XAML Theme Editor preview now, head on over to our GitHub repo for the app. There you can download the source to the app and get started with what it has to offer!


ERM contributes to a more sustainable future with Microsoft 365

$
0
0

Today’s post was written by Richard Zotov, Group CIO at ERM.

The sustainability industry addresses the complex balancing act between supporting socioeconomic development and ensuring the healthy future of our environment and our communities. ERM employees are passionate about helping to shape a sustainable future with the world’s leading organizations. We work with the majority of Fortune 500 companies, whose activities—from drilling oil to discovering the next miracle drug—have an enormous impact on us all. Because sustainability means something different for every customer, like exploring clean energy sources or guaranteeing an ethical supply chain, ERM employees must be skilled, flexible, and agile in how they approach each engagement.

It is my goal as CIO to make sure this amazing group of more than 5,000 individuals have the best tools at their disposal as they collaborate in creative teams across 40 countries to help our customers achieve their unique business and sustainability goals. So we made the strategic decision to harness technology and data to digitalize and transform how we work. That’s when we deployed Microsoft 365 across our entire organization, from administrators in our 160 offices to mobile consultants gathering data in the field. Today, ERM employees have new tools to work faster, better, and safer, accelerating the positive, global impact of the work we do.

Now that everyone uses the same Microsoft 365 toolkit to collaborate, harness data, and streamline operations, we have a unified foundational layer for our new workplace culture. We’re connecting our entire business to be more efficient. And when it comes to security, we can meet our customers’ high standards and enhance the credibility of our security position with the Microsoft cloud platform.

This workplace transformation is really a two-pronged approach to remaining at the forefront of the sustainability industry. First, we harness technology and data to accelerate our sustainability and environmental health and safety services. The second involves looking at how we develop new revenue streams as an offshoot of our newly digitalized way of doing business.

With Microsoft 365, we are making headway with the first goal. It’s part of an enterprise-wide push to focus on exceptional customer value. For example, we have technologically transformed how we collect, store, and manage data during site investigation, one of our biggest service lines. Onsite data collection used to be a laborious manual process. Today, we use Microsoft 365 to help digitalize data collection—consultants take tablets into the field for data input and upload it for storage in the Microsoft cloud where it’s available in real-time for colleagues to analyze back at the office. Now our customers receive our reports in easily consumable Power BI dashboards, as opposed to lengthy write-ups, and we’re delivering insightful data into the hands of our customers faster.

As we gain experience in transforming our service lines using Microsoft cloud services, among others, we’ll be in a better position to explore new digital opportunities that help add value to the work we do for customers. This strategy will keep us at the leading edge of technology innovation and help maintain our competitive advantage.

As we work with the intelligent tools within the Microsoft 365 cloud platform, we empower our employees to deliver value to our customers—helping them achieve that balance between doing business and being a conscious steward of the environment. It’s great to know that ERM is adding to the global dialogue on sustainability, contributing to a healthier future for the planet.

—Richard Zotov

Read the ERM case study for more on their move to a modern workplace with Microsoft 365.



Protect data in use with the public preview of Azure confidential computing

$
0
0

It has been an incredible year for Azure confidential computing, working with partners and customers, that has culminated in our confidential computing offerings becoming publicly available. At Ignite, we announced our intent, and I am excited to say that just two weeks later we are delivering on our promise of releasing the DC-series of virtual machines and open sourcing the Open Enclave SDK.

As a quick recap, Azure confidential computing protects your data while it’s in use. It is the final piece to enable data protection through its lifecycle whether at rest, in transit, or in use. It is the cornerstone of our ‘Confidential Cloud’ vision, which aims to make data and code opaque to the cloud provider.

Today, we are excited to announce a public preview of the DC-series of virtual machines in US East and Europe West. Years of work with our silicon vendors have allowed us to bring application isolation technology to hardware in our datacenters to support this new VM family. While these virtual machines may ‘look and feel’ like standard VM sizes from the control plane, they are backed by hardware-based Trusted Execution Environments (TEEs), specifically the latest generation of Intel Xeon Processors with Intel SGX technology. You can now build, deploy, and run applications that protect data confidentiality and integrity in the cloud. To get started, deploy a DC-series VM through the custom deployment flow in Azure Marketplace.

Customers like Christopher Spanton, Senior Architect for Blockchain at T-Mobile, have already started making use of the infrastructural building blocks.

“Leveraging the latest generation of trusted execution environments through Azure confidential computing has been an exciting opportunity for us to increase both the security and efficiency of our solutions. Specifically, we are working to deliver the next-generation of our internal Role-Based Access Control platform (NEXT directory) in the cloud and the Azure confidential computing platform provides a uniquely powerful platform for running blockchain protocols, such as Hyperledger Sawtooth, on which our solution is based. Our three organizations, T-Mobile, Intel, and Microsoft together have the technology, expertise, and commitment to deliver this kind of complex hybrid-architecture blockchain solution.”

Infrastructure is an important building block, but as you may be aware, enclave-based application development is a new programming paradigm. We are therefore excited to announce that we have open sourced the Open Enclave SDK project that provides a consistent API surface and enclaving abstraction for your confidential computing application development.

At its core, we wanted to ensure the Open Enclave SDK was portable across enclave technologies, cross platform – cloud, hybrid, edge, or on-premises, and designed with architectural flexibility in mind. The current version of Open Enclave SDK (v0.4), supports Intel SGX technology for C/C++ enclave applications, using mBedTLS. Subsequent versions will bring support for Arm TrustZone, additional runtimes, and Windows support. To learn more about the SDK, visit the Open Enclave project webpage and the API documentation.

We are committed to creating a collaborative community to help standardize secure enclave-based application development. Customers and partners in preview have already tested the Open Enclave SDK out and provided initial feedback.

One of those customers was Matthew Gregory, CEO and Founder of Ockam. Matthew shares how the Azure confidential computing platform, combined with the Open Enclave SDK, helped improve his organization’s development experience.

“Azure confidential compute uniquely enables Ockam Blockchain Network, a public blockchain, to reside in a public cloud infrastructure and to reap the broad benefits of Azure. The Azure confidential compute platform creates a simple 'as-a-service' developer experience that abstracts away complexity, which accelerates go-to-market time, simplifies ongoing operations, and increases availability. By running Ockam Validator Nodes on the Azure confidential compute platform we can better manage validator keys and verify the chain of trust in a decentralized network."

Whether you are interested in viewing the source code, contributing to the project, or providing feedback on new features and functionality, visit the project’s GitHub repository.

Infrastructure and development environments provide you the building blocks to build enclave-based applications that protect data and code confidentiality and integrity. Based on the feedback from our private preview customers, we have started to invest in higher level scenarios of confidential computing such as confidential querying in databases, creating confidential consortium networks that scale, and secure multiparty machine learning.

Eddy Ortiz, Vice President of Solution Acceleration and Innovation at Royal Bank of Canada is using confidential computing for a few of these scenarios.

“We are always looking to harness the potential of emerging technologies. When we were first introduced to the Azure confidential compute platform, we were intrigued by the possibility of adding a new layer of security and confidentiality to our solutions. We’re currently exploring ways to share and analyze data across different institutions, while maintaining security and confidentiality. We are currently piloting a confidential multiparty data analytics and machine learning pipeline on top of the Azure confidential compute platform, which ensures that participating institutions can be confident that their confidential customer and proprietary data is not visible to other participating institutions, including RBC. So far, the results have been promising.”

We will continue to work on these scenarios across our Azure service offerings and will provide you with more updates over the coming months.

We are excited to be providing you with the building blocks of the next wave of cloud computing. If you have any questions or comments, please reach out to us by posting on our Azure Virtual Machine MSDN forum for the DC-series and filing an issue on GitHub for the Open Enclave SDK.

Getting started

We recommend getting started by deploying through Azure Marketplace. The custom deployment flow deploys and configures the virtual machine and installs the Open Enclave SDK for Linux VMs if selected. Many of the basic VM deployment configurations are supported through the Confidential Computing VM Deployment workflow, including: (1) Windows/Linux VM; (2) New or existing resource group; (3) New or existing VNet; (4) Storage/disk type; (5) Enabled diagnostics, and other properties.

There are a few areas we will continue to improve during public preview, including regions, operating system images, and queryability.

Regions

Regions support has expanded from US East in private preview to also include Europe West in public preview. We are working on expanding our investments into other regions.

Operating system images

The DC-series of VMs are the first set of Generation 2 virtual machines. As such, they require specially configured operating system images. We have worked with our operating system partner teams to enable Generation 2 support for Ubuntu Server 16.04 and Windows Server 2016 Datacenter. These images are automatically used when deploying through the portal. Custom images are not yet supported. DC-series VMs will not show up in the size selector for arbitrary marketplace images, as not all images have been updated yet.

Queryability

It is also possible to programmatically deploy DC-series VMs. If going the programmatic route, it is important to note these restrictions and not try to deploy the VMs with arbitrary or custom images. The Generation 2 enabled Ubuntu Server 16.04 and Windows Server 2016 images will not be listed through programmatic API calls, as they are meant to be used only with DC-series VMs. If you want to reference the images outside of the template programmatically, the image identifiers are:

Publisher: Canonical; Offer: confidential-compute-preview; SKU: 16.04-LTS

Publisher: MicrosoftWindowsServer; Offer: confidential-compute-preview; SKU: acc-windows-server-2016-datacenter

If not using the marketplace offering, you will then need to follow the steps to install the Open Enclave SDK or Intel SGX SDK. For instructions on how to install on a machine with the latest generation of Intel Xeon processors with SGX technology, whether in Azure or on-premises, follow the instructions on GitHub.

Data models within Azure Analysis Services and Power BI

$
0
0

In a world where self-service and speed of delivery are key priorities in any reporting BI solution, the importance of creating a comprehensive and performant data semantic model layer is sometimes overlooked.

I have seen quite a few occurrences where the relational data stores, such as Azure SQL Database and Azure SQL Data Warehouse, are well structured and the reporting tier is well presented, whether that is SQL Server Reporting Services or Power BI, but still the performance is not as expected.

Before we drill down to the data semantic model, I always advise that understanding your data and how you want to present and report on it is key. Creating a report that takes the end consumer through a data journey is the difference between a good and a bad report. Report design should take into account who is consuming the report and what they want to achieve from it. For example, if you have a small number of consumers who need to view a lower level of hierarchical data with additional measures or KPIs, then it may not be suitable to visualize this on the first page, as the majority of consumers may want a more aggregated view of the data. This could lead to the first page of the report taking longer to return the data, thus giving the majority of consumers the perception of a slow-running report. To achieve a better experience, we could take the consumers through a data journey to ensure that the detail level does not impact the higher-level data points.

Setting the correct service level agreements and performance expectation is key. Setting an SLA for less than two seconds may be achievable if the data is two million rows. But would this still be met if it was two billion rows? There are many factors in understanding what is achievable, and the data set size is one of them. Network, compute, architecture patterns, data hierarchies, measures, KPIs, consumer device, consumer location, and real-time vs batch reporting are other impacts that can affect the perception of performance to the end consumer.

However, creating and/or optimizing your data semantic model layer will have a drastic impact on the overall performance.

The basics

The question I sometimes get is, with more computing power and the use of Azure, why can I not just report directly from my data lake or operational SQL Server? The answer is that reporting from data is very different from writing and reading data in an online transaction processing (OLTP) approach.

Dimensional modeling, developed by Kimball, has been a proven data warehouse methodology that has been widely used for the last 20-plus years. The ideology behind dimensional modeling is to be able to generate interactive reporting where consumers can retrieve calculations, aggregate data, and show business KPIs.

By creating dimensional models within a star or snowflake schema, you have the ability to retrieve the data in a more performant way, because the schema is designed for retrieval and reads rather than the reads and writes commonly associated with an OLTP database design.

In a star or snowflake schema, you have a fact table and many dimension tables. The fact table contains the foreign keys relating them to the dimension tables along with metrics and KPIs. The dimension tables contain the attributes associated with that particular dimension table. An example of this could be a date dimension table that contains month, day, and year as the attributes.

Creating dimensional models for a data warehouse such as SQL Server or Azure SQL Data Warehouse will assist in what you are trying to achieve from a reporting solution. Again, traditional warehouses have been deployed widely over the last 20 years. Even with this approach, creating a data semantic model on top of the warehouse can improve performance, improve concurrency, and even add an additional layer of security between the end consumers and the source warehouse data. SQL Server Analysis Services and now Azure Analysis Services have been designed for this purpose. We can essentially serve the required data from the warehouse into a model that can then be consumed as below.


Traditionally, the architecture was exactly that. Ingest data from the warehouse into the cube sitting on an instance of Analysis Services, process the cube, and then serve to SQL Server Reporting Services, Excel, and more. The landscape of data ingestion has changed over the last five to ten years with the adoption of big data and the end consumers wanting to consume a wider range of data sources, whether that is SQL, Spark, CSV, JSON, or others.

Modern BI reporting now needs to ingest, mash, clean and then present this data. Therefore, the data semantic layer needs to be agile but also delivering on performance. However, the principles of designing a model that aligns to dimensional modeling are still key.

In a world of data services in Azure, Analysis Services and Power BI are good candidates for building data semantic models on top of a data warehousing dimensional modeling. The fundamental principles of these services have formed the foundations for Microsoft BI solutions historically, even though they have evolved and now use modern in-memory architecture and allow agility for self-service.

Power BI and Analysis Services Tabular Model

SQL Server Analysis Services Tabular model, Azure Analysis Services, and Power BI share the same underlining fundamentals and principles. They are all built using the tabular model which was first released on SQL Server in 2012.

SQL Server Analysis Services Multidimensional is a different architecture and is selected in the server configuration section at install time.

The Analysis Services Tabular model (also used in Power BI) is built on a columnar in-memory architecture, which forms the VertiPaq engine.


At processing time, the rows of data are converted into columns, encoded, and compressed, allowing more data to be stored. Because the data is stored in memory, analytical reporting delivers high performance compared with retrieving data from disk-based systems. The purpose of in-memory tabular models is to minimize read times for reporting purposes. Again, understanding consumer reporting behavior is key: tabular models are designed to retrieve a small number of columns. The balance here is that when you retrieve a high number of columns, the engine needs to sort the data back into rows and decompress it, which impacts compute.

Best practices for data modeling

The best practices below are some of the key observations I have seen over the last several years, particularly when creating data semantic models in SQL Server Analysis Services, Azure Analysis Services, or Power BI.

  • Create a dimensional model (star and/or snowflake), even if you are ingesting data from different sources.
  • Ensure that you create integer surrogate keys on dimension tables. Natural keys are not best practice and can cause issues if you need to change them at a later date. Natural keys are generally strings, so they are larger in size and can perform poorly when joining to other tables. The key point in regard to performance with tabular models is that natural keys are not optimal for compression. The process with natural keys is that they are:
      • Encoded, using hash/dictionary encoding.
      • Foreign keys on the fact table relating to the dimension table are encoded, again with hash/dictionary encoding.
      • Used to build the relationships.
  • This has an impact on performance and reduces the memory available for data, since a proportion of it is needed for the dictionary encoding.
  • Only bring into the model the integer surrogate keys, which use value encoding, and exclude any natural keys from the dimension tables.
  • Only bring into the model the foreign keys on the fact table, that is, the integer surrogate keys from the dimension tables.
  • Only bring columns into your model that are required for analysis; this may mean excluding columns that are not needed or filtering the data to bring in only what is being analyzed.
  • Reduce cardinality so that value uniqueness is reduced, allowing for much greater compression.
  • Add a date dimension to your model.
  • Ideally, run calculations at the compute layer if possible.

The best practices noted above have all been used, in part or collectively, to improve the performance of the consumer experience. Once the data semantic models have been created to align with best practices, performance expectations can be gauged and aligned with SLAs. The key focus of the best practices above is to ensure that we utilize the VertiPaq in-memory architecture. A large part of this is ensuring that data can be compressed as much as possible, so that we can store more data within the model and also report on the data in an efficient way.

In my next blog, I will demonstrate how Microsoft Power BI is exposing more Analysis Services native functionality which makes it possible to consolidate the data semantic model.

    Supercharge your Azure Stream Analytics queries with C# code

    $
    0
    0

    Azure Stream Analytics (ASA) is Microsoft’s fully managed real-time analytics offering for complex event processing. It enables customers to unlock valuable insights and gain a competitive advantage by harnessing the power of big data.

    Our customers love the simple SQL based query language that has been augmented with powerful temporal functions to analyze fast-moving event streams. The ASA query language natively supports complex geo-spatial functions, aggregation functions, and math functions. However, in many advanced scenarios, developers may want to reuse C# code and existing libraries instead of writing long queries for simple operations.

    At Microsoft Ignite 2018, we announced a new feature that allows developers to extend the ASA query language with C# code. Currently, this capability is available for Stream Analytics jobs running on Azure IoT Edge (public preview). In many scenarios, it is more efficient to write C# code to perform some operations. In such cases, instead of being constrained by the SQL-like language, you can author a C# function and invoke it directly from the ASA query! Even better, you can use the ASA tools for Visual Studio to get native C# authoring and debugging experience.

    Writing your own functions is most useful for scenarios like:

    • Building your own machine learning models in ML.NET, and using it for inference at the Edge.
    • Reusing custom libraries or NuGet packages to integrate existing code.
    • Converting data from one format to another. For example, binary-to-hex.
    • Parsing complex nested JSON structures.
    • Performing complex mathematical computations.
    • Using regular expressions and manipulating strings.
    • Converting timestamp between time zones.

    Our vision with Azure Stream Analytics is to simplify real-time analytics on big data. And, this feature is one huge step towards improving developer productivity. Instead of stretching the query language to its limits and in a way that is unnatural given what SQL is meant to do, you can now focus on what is most important to your business such as getting your analytics pipeline running at scale with the power of Azure.

    Writing your own C# function is easy

    Within five minutes you can create your first C# function and use it in a Stream Analytics query by following this tutorial. Also, note that you can author JavaScript functions and use them for your ASA jobs running on the cloud.
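
    As a hedged illustration of how small such a function can be, here is a sketch of a .NET Standard class with a single static method for a hex-conversion scenario like the one listed above. The namespace, class name, and the query invocation shown in the comment are assumptions for illustration, not excerpts from the tutorial; follow the tutorial for the exact project setup.

    // Illustrative .NET Standard class library for a Stream Analytics Edge job.
    // Names are hypothetical; see the linked tutorial for the real project setup.
    namespace SampleUdfs
    {
        public class Converters
        {
            // Formats an integer sensor value as a hexadecimal string.
            public static string ToHex(long value)
            {
                return value.ToString("X");
            }
        }
    }

    -- Hypothetical query usage (the exact syntax is described in the tutorial):
    -- SELECT udf.SampleUdfs.Converters.ToHex(CAST(statusCode AS bigint)) AS statusHex
    -- INTO output
    -- FROM input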

    Did you know we have more than ten new features in preview ready for you to try out? Sign up for our preview programs.

    There is a strong, growing developer community that supports Stream Analytics. We, at Microsoft, are committed to improving Stream Analytics and are always listening for your feedback to improve our product! Follow us @AzureStreaming to stay updated on the latest features.

    Azure Portal October update

    $
    0
    0

    This post was co-authored by Leon Welicki, Principal Group PM Manager.

    We heard your feedback loud and clear: it is hard to keep up with Azure’s pace of innovation. How do you learn about everything that is new in the Azure portal?

    We’re starting a monthly series to bring you everything that is new and updated in the Azure Portal and the Azure mobile app. We will specifically cover the areas that affect the user experience and how they affect your daily work.

    This blog will not announce new services on Azure or bring you what’s new on specific Azure services. We recommend that you follow service announcements to keep up to date with the news for all Azure services.

    Sign in to the Azure Portal now and see everything that’s new. Download the Azure mobile app.

    With that, here’s a list of October updates:

    Portal shell and UI

    Azure mobile app

    Quickstart Center

    Compute, networking and storage

    Management tools

    Databases

    Analytics

    Feature drilldown

    Let’s take a closer look at each of the enhancements listed above. To try out some of the preview features, log in to the Azure Portal preview page.

    Portal shell and UI

    User experience refresh

    We've introduced modern design updates to refresh the look and feel of the portal to increase productivity, improve accessibility, and make better use of your screen real estate. Some key improvements include:

    • Improved information density and better use of screen real estate;
    • Simplified visuals that offer a better flow between different areas in the UI;
    • Better highlighting of key navigational elements like global search and breadcrumbs;
    • Improved accessibility and updated colors that improve contrast ratios, reducing eye strain during long periods of use.

    We've made these design updates without changing our existing interaction model, so you don’t need to re-learn how to use the Azure Portal. For more details about these changes, refer to this blog: “Designing for Scale and Complexity.”


    Refreshed UI

    Account, subscription, and directory management improvements

    The subscription filter is now available in the top navigation bar. It allows you to configure the default directory, search, sort, and set your favorite directories.


    Directory settings

    In preview, you can now switch between multiple accounts in the same browser instance.


    Switch between multiple accounts (preview)

    All Resources view (preview) improvements

    Azure Resource Graph (ARG) provides a fast and rich way to query through large sets of Azure Resources. This new experience allows for faster browsing with very large sets of resources. We have also integrated ARG in the global search so you can enjoy a faster experience when using the search bar.


    New “All Resources” view (preview)

    Azure mobile app

    Access control management using RBAC

    You can now grant or revoke your co-workers’ access to Azure resources by using the Azure mobile app.


    RBAC now available on Azure mobile app

    Quickstart Center

    Microsoft recommended best practices to set up your Azure environment

    Quickstart Center now includes guides, called “Playbooks,” that help you configure your Azure environment following Microsoft-recommended guidelines for governance. It walks you through RBAC, organizing and tagging your resources, securing your resources, enforcing compliance, and monitoring the health of your Azure environment.


    Configure your environment following Microsoft-recommended guidelines

    Compute, networking, and storage

    Virtual Machine and Storage Account create blades now in full screen

    Back in May 2018, we started to simplify resource creation in Azure. Starting with AKS and IoT Hub, we revised our designs to make creating resources in Azure faster, more streamlined, and make better use of screen real estate. In this release, we have applied these principles to Virtual Machine and Storage account creation experiences.


    New full-screen Virtual Machine create page

    Virtual Machine Scale Sets (VMSS) UI updates

    We added the following new capabilities in the portal for VMSS:

    • Select new or existing Virtual Networks when creating new Virtual Machine Scale Sets;
    • Add data disks to scale set model;
    • Updated instance details view with instance host metrics;
    • Add extensions to the scale set model;
    • Added metrics to the overview page.


    Metrics now available in the instance overview page for Virtual Machine Scale Sets

    Storage Explorer (preview) in browse

    Storage Explorer is now easier to find since it is available as a resource in browse. You can now easily manage multiple storage accounts under different subscriptions in one view.


    Storage Explorer (preview)

    Azure Advisor notifications and other integrated experiences in Virtual Machines

    You can now see Azure Advisor notifications directly in the VM overview page. The notification sends you to Azure Advisor where you can learn more and take action.  Additionally, we have updated the Virtual Machine blade to integrate the following pages: MSI, VM insights, logs, state configuration, serial console, and performance diagnostics.


    Azure Advisor recommendations at the VM blade

    Service Fabric Mesh new portal experience

    We’ve added a new portal experience for Service Fabric Mesh that displays the status of your services, replicas, and code packages along with the details of each.


    New Service Fabric Mesh experience

    Desired State Configuration on the Virtual Machine blade

    You can now view the current desired state configuration status for your virtual machines and create ad-hoc configurations to be applied to them.


    Desired State Configuration

    Virtual WAN (GA) portal experience

    You’ll find a more streamlined experience on Azure Portal for creating and associating sites and hubs on Virtual WAN. In addition, we’ve added the ability to configure branch-to-branch traffic, as well as preview experiences of ExpressRoute, VPN Point-to-Site, and Office 365 integration in a Virtual WAN.


    Updates to Virtual WAN

    App Gateway Web Application Firewall UI updates

    You can now use the Azure Portal for more extensive configuration of your Web Application Firewalls (WAF) in Application Gateway, including the ability to specify a WAF exclusion list and configure the maximum request body size and file upload limit.


    Web application firewall settings in the Application Gateway blade

    Management tools

    Activity Log UI improvements

    You now have a simplified view on Activity Log, making it easier to find and filter logs when you need them. We've also improved the performance to make it more responsive. In addition to these changes we started on a journey to connect Notifications and Activity Logs since many entries in both experiences come from the same data source. The first step is showing a link to the activity logs at the top of the notifications list or when the notifications list is empty.


    Simplified Activity Log

    Managed Applications UI updates for Service Catalog

    We've heard your feedback on Azure Managed Application Service Catalog, and our first round of improvements feature a better UI and a better UX for both you and your customers. Sorting and searching are now available so that it’s easier for you to find what you’re looking for in the catalog.


    Managed Applications in service catalog

    Managed Applications new default overview and monitoring support

    New views have been added for you to view the health and alert state of your managed application. You’ll also see a new Overview page with a message from your application developer introducing you to the app and telling you where you can find documentation and help for it.

    We’ve also added entry points for our brand-new Resource Health blade for surfacing an entire resource group’s health, so you know your application is up and running. Additionally, Managed Applications now has a view for Azure Alerts to surface any custom alert your application’s developer packages into the application for notifying you of potential problems with it.


    Resource Health blade for Managed Applications

    Azure Automation with support for Python 2 runbooks

    You can now import, author, and run Python 2 runbooks in Azure or on a Hybrid Runbook Worker. From your Python 2 runbooks, you can use Automation resources such as schedules, variables, connections, and credentials. In the authoring experience, syntax highlighting helps you to easily read your Python 2 runbook. You can also upload Python 2 packages to be imported by Python 2 runbooks, allowing you to use custom and third-party packages when running serverless in Azure. Supported package types include Python wheel packages and source distribution packages compressed in the *.tar.gz format.

    Additional resources:

    Databases

    SQL Database sign-in experience with improved visibility for the query editor

    We have updated the sign in experience when you first enter the query editor to improve the visibility of sign in options. Also, we’ve highlighted the Azure Active Directory auto-login experience to make it clearer.


    Sign in experience for the query editor

    SQL Data Warehouse maintenance window configuration

    Azure SQL Data Warehouse Maintenance Schedules (preview) allows you to seamlessly configure when service maintenance events are rolled out. To increase the visibility of Maintenance Scheduling, the new experience is available in the Essentials page of your data warehouse.


    Maintenance schedules

    Analytics

    Event Hub create blade now in full screen

    We have updated the way you create new event hubs based on your feedback. Instead of creating a namespace and its event hub(s) in two distinct blades, Event Hubs is now offering an integrated full-screen blade that allows you to create a namespace and event hub, as well as configure the event hub with Capture features in a single blade.

    Did you know?

    Tags are name and value pairs that help you identify your resources for management and billing purposes. By tagging your Azure resources, you can retrieve all resources with the same tags across different resource groups. For instance: by tagging resources in your development environment with the name “Environment” and value “Development,” you can easily retrieve all resources in that environment. Learn more about using tags by reading this documentation.
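
    If you manage resources from code, a rough C# sketch of applying such a tag with the Azure Management Libraries for .NET (Fluent) could look like the following. The fluent method names and the auth file path are assumptions based on that SDK rather than anything stated above, so verify them against the SDK documentation before relying on them.

    // Rough sketch using the Azure Management Libraries for .NET (Fluent).
    // Method names and the credentials file are assumptions; verify against the SDK docs.
    using Microsoft.Azure.Management.Fluent;
    using Microsoft.Azure.Management.ResourceManager.Fluent;
    using Microsoft.Azure.Management.ResourceManager.Fluent.Core;

    class TagExample
    {
        static void Main()
        {
            var credentials = SdkContext.AzureCredentialsFactory
                .FromFile("my.azureauth"); // hypothetical auth file

            var azure = Azure.Configure()
                .Authenticate(credentials)
                .WithDefaultSubscription();

            // Create a resource group carrying an "Environment" tag so it can later be
            // retrieved together with other "Development" resources.
            azure.ResourceGroups.Define("rg-dev-sample")
                .WithRegion(Region.USEast)
                .WithTag("Environment", "Development")
                .Create();
        }
    }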

    Let us know what you think

    Thank you for your great feedback. We’re always eager to hear what you think about the Azure Portal and how you think we can improve the experience. If you’d like to learn more about Azure and the Azure portal, we suggest watching the sessions delivered at Microsoft Ignite. You can watch them on demand.

    Sign in to the Azure Portal and download the Azure mobile app today to see everything that’s new, and let us know your feedback in the comments section, or on Twitter. See you next month!

    How R gets built on Windows

    $
    0
    0

    I wasn't at the Use of R in Official Statistics (uRos2018) conference in the Netherlands last month, but I'm thankful to Jeroen Ooms for sharing the slides from his keynote presentation. In addition to being a postdoc staffer at ROpenSci, Jeroen maintains the official repository for the daily R builds on Windows (you might recognize his name from the verification certificate that pops up when installing R on Windows). His uRos2018 talk provides a fascinating glimpse into the complex systems, dependencies, and processes that come together to make installing R as easy as a double-click.

    One thing I found of interest is that while R is a complex program in its own right, made up of a mix of C (32%), Fortran (24%), and R (37%) code, it also relies on a number of external libraries to perform some of its underlying tasks. For example:

    • When you multiply two matrices or perform other linear algebra operations, R uses the BLAS and LAPACK libraries for the calculation. (Microsoft R Open swaps in the Intel Math Kernel Library here, which provides the same calculations using different algorithms.)
    • When you download a file with download.file or install.packages, R uses the libcurl library to manage the network transfer.
    • When you use character data with foreign encodings (for example text with accents, symbols, and emojis), R uses the libicu library so those strings can be handled and compared in a uniform way.
    • When you use regular expressions with grep and regexpr, R uses the PCRE library to define the syntax and perform the search operation.
    • For graphics, R uses the cairo library to convert points, lines, and text into a bitmap image for display and/or export.

    Using these libraries means that the R maintainers don't need to reinvent the wheel, but can instead use open-source libraries that are used in a plethora of systems to accomplish these universal tasks. On Linux-based systems, these libraries are generally pre-installed for any application to use. But on Windows (and Mac), R needs to build these libraries for its own use, which makes building R a complex process. Not only do you need the right set of compilers and scripts to build R itself, you also need to build all of its library dependencies. R packages may, in turn, depend on other third-party libraries, and in that case the same issues apply to building them, too.

    R dependencies

    You can find all of the scripts used to create the official releases and daily builds (r-patched and r-devel) for R for Windows on Github. Brian Ripley and Duncan Murdoch (and now, Jeroen Ooms) have long provided RTools: a collection of Windows programs (shell commands, other programs) which, given the right compiler, will allow R's build scripts to run. One other interesting bit of news from Jeroen's keynote is that an experimental version of RTools is on the way, which promises to make it easier to build, distribute and install external libraries on Windows.

    If you're at all interested in the inner workings of what it takes to make R run on Windows, I recommend checking out Jeroen's talk at the link below.

    Jeroen Ooms: The R infrastructure

    How to Use Class Template Argument Deduction

    $
    0
    0

    Class Template Argument Deduction (CTAD) is a C++17 Core Language feature that reduces code verbosity. C++17’s Standard Library also supports CTAD, so after upgrading your toolset, you can take advantage of this new feature when using STL types like std::pair and std::vector. Class templates in other libraries and your own code will partially benefit from CTAD automatically, but sometimes they’ll need a bit of new code (deduction guides) to fully benefit. Fortunately, both using CTAD and providing deduction guides is pretty easy, despite template metaprogramming’s fearsome reputation!

     

    CTAD support is available in VS 2017 15.7 and later with the /std:c++17 and /std:c++latest compiler options.

     

    Template Argument Deduction

    C++98 through C++14 performed template argument deduction for function templates. Given a function template like “template <typename RanIt> void sort(RanIt first, RanIt last);”, you can and should sort a std::vector<int> without explicitly specifying that RanIt is std::vector<int>::iterator. When the compiler sees “sort(v.begin(), v.end());”, it knows what the types of “v.begin()” and “v.end()” are, so it can determine what RanIt should be. The process of determining template arguments for template parameters (by comparing the types of function arguments to function parameters, according to rules in the Standard) is known as template argument deduction, which makes function templates far more usable than they would otherwise be.

     

    However, class templates didn’t benefit from these rules. If you wanted to construct a std::pair from two ints, you had to say “std::pair<int, int> p(11, 22);”, despite the fact that the compiler already knows that the types of 11 and 22 are int. The workaround for this limitation was to use function template argument deduction: std::make_pair(11, 22) returns std::pair<int, int>. Like most workarounds, this is problematic for a few reasons: defining such helper functions often involves template metaprogramming (std::make_pair() needs to perform perfect forwarding and decay, among other things), compiler throughput is reduced (as the front-end has to instantiate the helper, and the back-end has to optimize it away), debugging is more annoying (as you have to step through helper functions), and there’s still a verbosity cost (the extra “make_” prefix, and if you want a local variable instead of a temporary, you need to say “auto”).

     

    Hello, CTAD World

    C++17 extends template argument deduction to the construction of an object given only the name of a class template. Now, you can say “std::pair(11, 22)” and this is equivalent to “std::pair<int, int>(11, 22)”. Here’s a full example, with a C++17 terse static_assert verifying that the declared type of p is the same as std::pair<int, const char *>:

     

    C:\Temp>type meow.cpp
    #include <type_traits>
    #include <utility>

    int main() {
        std::pair p(1729, "taxicab");
        static_assert(std::is_same_v<decltype(p), std::pair<int, const char *>>);
    }

    C:\Temp>cl /EHsc /nologo /W4 /std:c++17 meow.cpp
    meow.cpp

    C:\Temp>

     

    CTAD works with parentheses and braces, and named variables and nameless temporaries.

     

    Another Example: array and greater

    C:\Temp>type arr.cpp
    #include <algorithm>
    #include <array>
    #include <functional>
    #include <iostream>
    #include <string_view>
    #include <type_traits>
    using namespace std;

    int main() {
        array arr = { "lion"sv, "direwolf"sv, "stag"sv, "dragon"sv };

        static_assert(is_same_v<decltype(arr), array<string_view, 4>>);

        sort(arr.begin(), arr.end(), greater{});

        cout << arr.size() << ": ";

        for (const auto& e : arr) {
            cout << e << " ";
        }

        cout << "\n";
    }

    C:\Temp>cl /EHsc /nologo /W4 /std:c++17 arr.cpp && arr
    arr.cpp
    4: stag lion dragon direwolf

     

    This demonstrates a couple of neat things. First, CTAD for std::array deduces both its element type and its size. Second, CTAD works with default template arguments; greater{} constructs an object of type greater<void> because it’s declared as “template <typename T = void> struct greater;”.

     

    CTAD for Your Own Types

    C:\Temp>type mypair.cpp
    #include <type_traits>

    template <typename A, typename B> struct MyPair {
        MyPair() { }
        MyPair(const A&, const B&) { }
    };

    int main() {
        MyPair mp{11, 22};

        static_assert(std::is_same_v<decltype(mp), MyPair<int, int>>);
    }

    C:\Temp>cl /EHsc /nologo /W4 /std:c++17 mypair.cpp
    mypair.cpp

    C:\Temp>

     

    In this case, CTAD automatically works for MyPair. What happens is that the compiler sees that a MyPair is being constructed, so it runs template argument deduction for MyPair’s constructors. Given the signature (const A&, const B&) and the arguments of type int, A and B are deduced to be int, and those template arguments are used for the class and the constructor.

     

    However, “MyPair{}” would emit a compiler error. That’s because the compiler would attempt to deduce A and B, but there are no constructor arguments and no default template arguments, so it can’t guess whether you want MyPair<int, int> or MyPair<Starship, Captain>.

     

    Deduction Guides

    In general, CTAD automatically works when class templates have constructors whose signatures mention all of the class template parameters (like MyPair above). However, sometimes constructors themselves are templated, which breaks the connection that CTAD relies on. In those cases, the author of the class template can provide “deduction guides” that tell the compiler how to deduce class template arguments from constructor arguments.

     

    C:\Temp>type guides.cpp
    #include <iterator>
    #include <type_traits>

    template <typename T> struct MyVec {
        template <typename Iter> MyVec(Iter, Iter) { }
    };

    template <typename Iter> MyVec(Iter, Iter) -> MyVec<typename std::iterator_traits<Iter>::value_type>;

    template <typename A, typename B> struct MyAdvancedPair {
        template <typename T, typename U> MyAdvancedPair(T&&, U&&) { }
    };

    template <typename X, typename Y> MyAdvancedPair(X, Y) -> MyAdvancedPair<X, Y>;

    int main() {
        int * ptr = nullptr;
        MyVec v(ptr, ptr);

        static_assert(std::is_same_v<decltype(v), MyVec<int>>);

        MyAdvancedPair adv(1729, "taxicab");

        static_assert(std::is_same_v<decltype(adv), MyAdvancedPair<int, const char *>>);
    }

    C:\Temp>cl /EHsc /nologo /W4 /std:c++17 guides.cpp
    guides.cpp

    C:\Temp>

     

    Here are two of the most common cases for deduction guides in the STL: iterators and perfect forwarding. MyVec resembles a std::vector in that it’s templated on an element type T, but it’s constructible from an iterator type Iter. Calling the range constructor provides the type information we want, but the compiler can’t possibly realize the relationship between Iter and T. That’s where the deduction guide helps. After the class template definition, the syntax “template <typename Iter> MyVec(Iter, Iter) -> MyVec<typename std::iterator_traits<Iter>::value_type>;” tells the compiler “when you’re running CTAD for MyVec, attempt to perform template argument deduction for the signature MyVec(Iter, Iter). If that succeeds, the type you want to construct is MyVec<typename std::iterator_traits<Iter>::value_type>”. That essentially dereferences the iterator type to get the element type we want.

     

    The other case is perfect forwarding, where MyAdvancedPair has a perfect forwarding constructor like std::pair does. Again, the compiler sees that A and B versus T and U are different types, and it doesn’t know the relationship between them. In this case, the transformation we need to apply is different: we want decay (if you’re unfamiliar with decay, you can skip this). Interestingly, we don’t need decay_t, although we could use that type trait if we wanted extra verbosity. Instead, the deduction guide “template <typename X, typename Y> MyAdvancedPair(X, Y) -> MyAdvancedPair<X, Y>;” is sufficient. This tells the compiler “when you’re running CTAD for MyAdvancedPair, attempt to perform template argument deduction for the signature MyAdvancedPair(X, Y), as if it were taking arguments by value. Such deduction performs decay. If it succeeds, the type you want to construct is MyAdvancedPair<X, Y>.”

     

    This demonstrates a critical fact about CTAD and deduction guides. CTAD looks at a class template’s constructors, plus its deduction guides, in order to determine the type to construct. That deduction either succeeds (determining a unique type) or fails. Once the type to construct has been chosen, overload resolution to determine which constructor to call happens normally. CTAD doesn’t affect how the constructor is called. For MyAdvancedPair (and std::pair), the deduction guide’s signature (taking arguments by value, notionally) affects the type chosen by CTAD. Afterwards, overload resolution chooses the perfect forwarding constructor, which takes its arguments by perfect forwarding, exactly as if the class type had been written with explicit template arguments.

     

    CTAD and deduction guides are also non-intrusive. Adding deduction guides for a class template doesn’t affect existing code, which previously was required to provide explicit template arguments. That’s why we were able to add deduction guides for many STL types without breaking a single line of user code.

     

    Enforcement

    In rare cases, you might want deduction guides to reject certain code. Here’s how std::array does it:

     

    C:\Temp>type enforce.cpp
    #include <stddef.h>
    #include <type_traits>

    template <typename T, size_t N> struct MyArray {
        T m_array[N];
    };

    template <typename First, typename... Rest> struct EnforceSame {
        static_assert(std::conjunction_v<std::is_same<First, Rest>...>);
        using type = First;
    };

    template <typename First, typename... Rest> MyArray(First, Rest...)
        -> MyArray<typename EnforceSame<First, Rest...>::type, 1 + sizeof...(Rest)>;

    int main() {
        MyArray a = { 11, 22, 33 };
        static_assert(std::is_same_v<decltype(a), MyArray<int, 3>>);
    }

    C:\Temp>cl /EHsc /nologo /W4 /std:c++17 enforce.cpp
    enforce.cpp

    C:\Temp>

     

    Like std::array, MyArray is an aggregate with no actual constructors, but CTAD still works for these class templates via deduction guides. MyArray’s guide performs template argument deduction for MyArray(First, Rest…), enforcing all of the types to be the same, and determining the array’s size from how many arguments there are.

     

    Similar techniques could be used to make CTAD entirely ill-formed for certain constructors, or all constructors. The STL itself hasn’t needed to do that explicitly, though. (There are only two classes where CTAD would be undesirable: unique_ptr and shared_ptr. C++17 supports both unique_ptrs and shared_ptrs to arrays, but both “new T” and “new T[N]” return T *. Therefore, there’s insufficient information to safely deduce the type of a unique_ptr or shared_ptr being constructed from a raw pointer. As it happens, this is automatically blocked in the STL due to unique_ptr’s support for fancy pointers and shared_ptr’s support for type erasure, both of which change the constructor signatures in ways that prevent CTAD from working.)

     

    Corner Cases for Experts: Non-Deduced Contexts

    Here are some advanced examples that aren’t meant to be imitated; instead, they’re meant to illustrate how CTAD works in complicated scenarios.

     

    Programmers who write function templates eventually learn about “non-deduced contexts”. For example, a function template taking “typename Identity<T>::type” can’t deduce T from that function argument. Now that CTAD exists, non-deduced contexts affect the constructors of class templates too.

     

    C:\Temp>type corner1.cpp
    template <typename X> struct Identity {
        using type = X;
    };

    template <typename T> struct Corner1 {
        Corner1(typename Identity<T>::type, int) { }
    };

    int main() {
        Corner1 corner1(3.14, 1729);
    }

    C:\Temp>cl /EHsc /nologo /W4 /std:c++17 corner1.cpp
    corner1.cpp
    corner1.cpp(10): error C2672: 'Corner1': no matching overloaded function found
    corner1.cpp(10): error C2783: 'Corner1<T> Corner1(Identity<X>::type,int)': could not deduce template argument for 'T'
    corner1.cpp(6): note: see declaration of 'Corner1'
    corner1.cpp(10): error C2641: cannot deduce template argument for 'Corner1'
    corner1.cpp(10): error C2514: 'Corner1': class has no constructors
    corner1.cpp(5): note: see declaration of 'Corner1'

     

    In corner1.cpp, “typename Identity<T>::type” prevents the compiler from deducing that T should be double.

     

    Here’s a case where some but not all constructors mention T in a non-deduced context:

     

    C:\Temp>type corner2.cpp
    template <typename X> struct Identity {
        using type = X;
    };

    template <typename T> struct Corner2 {
        Corner2(T, long) { }
        Corner2(typename Identity<T>::type, unsigned long) { }
    };

    int main() {
        Corner2 corner2(3.14, 1729);
    }

    C:\Temp>cl /EHsc /nologo /W4 /std:c++17 corner2.cpp
    corner2.cpp
    corner2.cpp(11): error C2668: 'Corner2<double>::Corner2': ambiguous call to overloaded function
    corner2.cpp(7): note: could be 'Corner2<double>::Corner2(double,unsigned long)'
    corner2.cpp(6): note: or       'Corner2<double>::Corner2(T,long)'
            with
            [
                T=double
            ]
    corner2.cpp(11): note: while trying to match the argument list '(double, int)'

     

    In corner2.cpp, CTAD succeeds but constructor overload resolution fails. CTAD ignores the constructor taking “(typename Identity<T>::type, unsigned long)” due to the non-deduced context, so CTAD uses only “(T, long)” for deduction. Like any function template, comparing the parameters “(T, long)” to the argument types “double, int” deduces T to be double. (int is convertible to long, which is sufficient for template argument deduction; it doesn’t demand an exact match there.) After CTAD has determined that Corner2<double> should be constructed, constructor overload resolution considers both signatures “(double, long)” and “(double, unsigned long)” after substitution, and those are ambiguous for the argument types “double, int” (because int is convertible to both long and unsigned long, and the Standard doesn’t prefer either conversion).

     

    Corner Cases for Experts: Deduction Guides Are Preferred

    C:\Temp>type corner3.cpp
    #include <type_traits>

    template <typename T> struct Corner3 {
        Corner3(T) { }
        template <typename U> Corner3(U) { }
    };

    #ifdef WITH_GUIDE
        template <typename X> Corner3(X) -> Corner3<X *>;
    #endif

    int main() {
        Corner3 corner3(1729);

    #ifdef WITH_GUIDE
        static_assert(std::is_same_v<decltype(corner3), Corner3<int *>>);
    #else
        static_assert(std::is_same_v<decltype(corner3), Corner3<int>>);
    #endif
    }

    C:\Temp>cl /EHsc /nologo /W4 /std:c++17 corner3.cpp
    corner3.cpp

    C:\Temp>cl /EHsc /nologo /W4 /std:c++17 /DWITH_GUIDE corner3.cpp
    corner3.cpp

    C:\Temp>

     

    CTAD works by performing template argument deduction and overload resolution for a set of deduction candidates (hypothetical function templates) that are generated from the class template’s constructors and deduction guides. In particular, this follows the usual rules for overload resolution with only a couple of additions. Overload resolution still prefers things that are more specialized (N4713 16.3.3 [over.match.best]/1.7). When things are equally specialized, there’s a new tiebreaker: deduction guides are preferred (/1.12).

     

    In corner3.cpp, without a deduction guide, the Corner3(T) constructor is used for CTAD (whereas Corner3(U) isn’t used for CTAD because it doesn’t mention T), and Corner3<int> is constructed. When the deduction guide is added, the signatures Corner3(T) and Corner3(X) are equally specialized, so paragraph /1.12 steps in and prefers the deduction guide. This says to construct Corner3<int *> (which then calls Corner3(U) with U = int).

     

    Reporting Bugs

    Please let us know what you think about VS. You can report bugs via the IDE’s Report A Problem and also via the web: go to the VS Developer Community and click on the C++ tab.

    Top Stories from the Microsoft DevOps Community – 2018.10.12

    $
    0
    0
    I’m back! One of the great privileges of my job is that I get to spend a lot of time talking to customers about DevOps. But that often means a lot of time on the road, and stepping away from drinking at the firehose of great content coming from the Azure DevOps community. But now... Read More

    The Economist’s Big Mac Index is calculated with R

    $
    0
    0

    The Economist's Big Mac Index (also described on Wikipedia if you're not a subscriber) was created (somewhat tongue-in-cheek) as a measure to compare the purchasing power of money in different countries. Since Big Macs are available just about everywhere in the world, the price of a Big Mac in Sweden, expressed in US dollars, gives an American traveler a sense of how much more expensive things will be in Stockholm. And comparing the price of a Big Mac in several countries converted to a single baseline currency is a measure of how over-valued (or undervalued) those other currencies are compared to that baseline.
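
    As a tiny worked example of the raw index (the prices and exchange rate below are made up, and C# is used purely for illustration): the implied exchange rate is the ratio of the local Big Mac price to the US price, and comparing it with the actual market rate gives the over- or under-valuation.

    // Made-up numbers for illustration; see The Economist's repository for the real data and method.
    using System;

    class BigMacExample
    {
        static void Main()
        {
            double priceUsUsd = 5.51;      // hypothetical US Big Mac price, in USD
            double priceSwedenSek = 51.0;  // hypothetical Swedish price, in SEK
            double actualSekPerUsd = 8.8;  // hypothetical market exchange rate

            // Implied rate: how many SEK a dollar "should" buy if a Big Mac cost the same everywhere.
            double impliedSekPerUsd = priceSwedenSek / priceUsUsd;

            // Positive means the krona looks overvalued against the dollar on this measure.
            double valuation = impliedSekPerUsd / actualSekPerUsd - 1;

            Console.WriteLine($"Implied rate: {impliedSekPerUsd:F2} SEK/USD");
            Console.WriteLine($"Valuation vs USD: {valuation:P1}");
        }
    }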

    [Chart: The Big Mac index, July 2018]

    Since its inception in 1986, the Big Mac Index has been compiled and calculated manually, twice a year. But starting with the most recent published index (July 2018, shown above), the index is now calculated with R. This is the first example of a new program at The Economist to publish the data and methods behind its journalism, and here the data and code behind the Big Mac Index have been published as a Github repository. The method behind the index is provided as a richly-commented Jupyter Notebook, where you can also find some additional charts and within-currency analyses not published in the main index.

    The repository is published under an open MIT license, meaning you can remix the code to create a new index on prices of another international commodity, provided you can find the data. Find everything you need at the Github repository linked below.

    Github (TheEconomist): Data and methodology for the Big Mac index

    Because it’s Friday: Hey, it’s Enrico Pallazzo!

    $
    0
    0

    It seemed like such a simple movie. The Naked Gun (1988) is slapstick comedy through-and-through, but I never would have guessed (h/t Steven O'Grady) how much detail and planning went into the jokes, especially the baseball scene at the end. There's lots of interesting behind-the-scenes info in Sporting News's breakdown of the movie. Even Drebin's bungled National Anthem performance was composed in advance for the scene:

    That's all from us for this week. Next week, I'm in Australia for the Melbourne R User Group and the eResearch Australiasia conference, so I expect blogging will be a little lighter than usual. Have a great weekend!

    C# and .NET Core scripting with the “dotnet-script” global tool

    $
    0
    0

    You likely know that open source .NET Core is cross platform and it's super easy to do "Hello World" and start writing some code.

    You just install .NET Core, then run "dotnet new console", which generates a project file and a basic app, and then "dotnet run" compiles and runs your app. The 'new' command will create all the supporting code, obj, and bin folders, etc. When you do "dotnet run" it actually is a combination of "dotnet build" and "dotnet exec whatever.dll."

    What could be easier?

    What about .NET Core as scripting?

    Check out dotnet script:

    C:\Users\scott\Desktop\scriptie> dotnet tool install -g dotnet-script
    
    You can invoke the tool using the following command: dotnet-script
    C:\Users\scott\Desktop\scriptie>copy con helloworld.csx
    Console.WriteLine("Hello world!");
    ^Z
    1 file(s) copied.
    C:\Users\scott\Desktop\scriptie>dotnet script helloworld.csx
    Hello world!

    NOTE: I was a little tricky there in step two. I did a "copy con filename" to copy from the console to the destination file, then used Ctrl-Z to finish the copy. Feel free to just use notepad or vim. That's not dotnet-script-specific, that's Hanselman-specific.

    Pretty cool eh? If you were doing this in Linux or OSX you'll need to include a "shebang" as the first line of the script. This is a standard thing for scripting files like bash, python, etc.

    #!/usr/bin/env dotnet-script
    
    Console.WriteLine("Hello world");

    This lets the operating system know what scripting engine handles this file.

    If you want to refer to a NuGet package within a script (*.csx) file, you'll use the Roslyn #r syntax:

    #r "nuget: AutoMapper, 6.1.0"
    
    Console.WriteLine("whatever);

    Even better! Once you have "dotnet-script" installed as a global tool as above:

    dotnet tool install -g dotnet-script

    You can use it as a REPL! Finally, the C# REPL (Read Evaluate Print Loop) I've been asking for for only a decade! ;)

    C:\Users\scott\Desktop\scriptie>dotnet script
    
    > 2+2
    4
    > var x = "scott hanselman";
    > x.ToUpper()
    "SCOTT HANSELMAN"

    This is super useful for a learning tool if you're teaching C# in a lab/workshop situation. Of course you could also learn using http://try.dot.net in the browser as well.

    In the past you may have used ScriptCS for C# scripting. There's a number of cool C#/F# scripting options. This is certainly not a new thing:

    In this case, I was very impressed with the ease of dotnet-script as a global tool and its simplicity. Go check out https://github.com/filipw/dotnet-script and try it out today!






    Azure.Source – Volume 53

    $
    0
    0

    Now in preview

    Protect data in use with the public preview of Azure confidential computing - Years of work with our silicon vendors have allowed us to bring application isolation technology to hardware in our datacenters to support this new VM family. While these virtual machines may ‘look and feel’ like standard VM sizes from the control plane, they are backed by hardware-based Trusted Execution Environments (TEEs), specifically the latest generation of Intel Xeon Processors with Intel SGX technology. You can now build, deploy, and run applications that protect data confidentiality and integrity in the cloud.

    Azure Friday

    Azure Friday | Introducing new Azure API Management capabilities - Miao Jiang joins Scott Hanselman to discuss the API economy and how companies must master the challenges inherent in building, maintaining, managing, and exposing APIs to participate. That's where Azure API Management can help. Azure API Management is a solution for publishing APIs to external and internal consumers. With Azure API Management, you can take any backend system, hosted anywhere, and expose it through a modern API gateway.

    The Azure Podcast

    The Azure Podcast | Episode 250 - SignalR Service - Microsoft Developer Advocate Anthony Chu gives us the details on the newly released Azure SignalR Service that allows web and mobile applications to display real-time data with minimal effort.

    Now generally available

    Re-define analytics with Azure SQL Data Warehouse Gen2 tier now generally available - Azure SQL Data Warehouse Compute Optimized Gen2 tier for government customers is now available in the US Gov Arizona and US Gov Texas regions. With Azure Government, only US federal, state, local, and tribal governments and their partners have access to this dedicated instance that only screened US citizens operate. Customers can choose from six government-only datacenter regions, including two regions granted an Impact Level 5 Provisional Authorization. Moreover, Azure Government offers the most compliance certifications of any cloud provider.

    Also generally available

    The IoT Show

    The IoT Show | Azure IoT Edge CLI tooling - If you are a developer who cannot live without command line tools and wants to work with Azure IoT Edge, we've got you covered with a new release of the iotedgedev tool. And who better than Jon Gallant to demo some of the many new features?

    The IoT Show | Azure Digital Twins Introduction - Here is an introduction to Azure Digital Twins, a new Azure service announced recently. What are Digital Twins? What is this new PaaS service about? Daniel Escapa tells us all on the IoT Show.

    News and updates

    Microsoft joins Open Invention Network to help protect Linux and open source - Microsoft is joining the Open Invention Network (“OIN”), a community dedicated to protecting Linux and other open source software programs from patent risk. We bring a valuable and deep portfolio of over 60,000 issued patents to OIN. We also hope that our decision to join will attract many other companies to OIN, making the license network even stronger for the benefit of the open source community.

    Azure Portal October update - This is the first post in a new monthly series that will cover what is new and updated in the Azure Portal and the Azure mobile app. The series will specifically cover the areas that affect the user experience and how it affects your daily work, but will not announce new services on Azure or bring you what’s new on specific Azure services.

    Animation showing the refreshed user interface in the Azure portal

    Improved governance experience with Ethereum Proof-of-Authority 1.2 - This update includes a number of features that improve user experience, configuration, and deployment reliability: Governance DApp for admin management and validator delegation; WebSocket support to make it easy to subscribe to events directly or connect to external tools and applications; BlockScout block explorer; JIT VM access and Azure Backup support; VM SKU selection; more configuration options (starting block gas limit and block reseal time); and improved reliability.

    Supercharge your Azure Stream Analytics queries with C# code - Azure Stream Analytics (ASA) is Microsoft’s fully managed real-time analytics offering for complex event processing. It enables you to unlock valuable insights and gain a competitive advantage by harnessing the power of big data. With .NET Standard user-defined functions (UDF), you can invoke your own functions written in C# to extend the Stream Analytics query language. Use the Azure Stream Analytics tools for Visual Studio to get native C# authoring and debugging experience.

    AI Show

    AI Show | Spektacom “Power Bat" - Introducing ‘power bats’, an innovative sensor-powered sticker that measures the quality of a player’s shot by capturing data and analyzing impact characteristics through wireless sensor technology and cloud analytics.

    Technical content and training

    Creating a data stream from NIST manufacturing lab data – Part 1 - Learn how to extract actionable insights from your IoT data using this solution guide. In this first post of a multi-part series, the solution uses the data published by US National Institute of Standards and Technology (NIST) Smart Manufacturing Systems test bed that exposes the manufacturing lab’s data.

    A fast, serverless, big data pipeline powered by a single Azure Function - A single Azure function is all it took to fully implement an end-to-end, real-time, mission critical data pipeline. And it was done with a serverless architecture. Serverless architectures simplify the building, deployment, and management of cloud scale applications. This post describes an Azure function and how it efficiently coordinated a data ingestion pipeline that processed over eight million transactions per day.

    Snip Insights – Cross-platform open source AI tool for intelligent screen capture - Snip Insights is an open source cross-platform AI tool for intelligent screen capture that enables cross-platform users to retrieve intelligent insights over a snip or screenshot. Snip Insights uses Microsoft Azure's Cognitive Services APIs to increase users' productivity by reducing the number of steps needed to gain intelligent insights. You can find the code, solution development process, and all other details on GitHub.

    Snip Insights technical architecture

    Data models within Azure Analysis Services and Power BI - Dan Stillwell, Global Cloud Solution Architect, CSU, provides best practices based on key observations he has seen over the last several years, particularly when creating data semantic models in SQL Server Analysis Services, Azure Analysis Services, or Power BI.

    Identify your move-groups and target sizes for migration with Azure Migrate - Planning is crucial in any migration effort and Azure Migrate helps you plan your datacenter migration to Azure. It helps you discover your on-premises environment, create move-groups and assess the move groups for migration to Azure.

    Additional tech content

    Azure tips & tricks


    How to configure a backup for your Azure App Service - This Azure Tips & Tricks installment shows how to configure a backup for your Azure App Service.


    How to clone web apps using Azure App Services - This Azure Tips & Tricks installment shows how to clone web apps using Azure App Services.

    Industries

    Enabling intelligent cloud and intelligent edge solutions for government - Julia White, Corporate Vice President, Microsoft Azure, shares how we’re building Azure cloud/edge capabilities aligned with a set of core principles for robust technology solutions, while uniquely delivering consistency across the cloud and the edge. The intelligent cloud and intelligent edge make it possible to provide consistent power to critical institutions like hospitals and schools, manage precious resources like energy, food and water, as well as helping government improve citizen services.

    Making HIPAA and HITRUST compliance easier - The Azure Security and Compliance Blueprint - HIPAA/HITRUST Health Data and AI offers a turn-key deployment of an Azure PaaS and IaaS solution that demonstrates how to ingest, store, analyze, interact with, and identify health data, and securely deploy solutions, while meeting industry compliance requirements. David Starr, Principal Systems Architect, Microsoft Azure, covers three core areas where the blueprint can help with compliance: cloud provider and client responsibilities, security threats, and regulatory compliance for healthcare organizations.

    Driving identity security in banking using biometric identification - Howard Bush, Principal Program Manager, Banking and Capital Markets, outlines how combining biometric identification with artificial intelligence (AI) enables banks to take a new approach to verifying the digital identity of their prospects and customers. The post highlights Onfido, whose multi-factor identity verification service helps accurately verify online users, using a cloud-based risk assessment platform that leverages artificial intelligence to automate and scale traditionally human-based fraud expertise and derive identity assurance.

    Mobile device screenshots showing the Onfido Identity Verification Solution

    The Azure DevOps Podcast

    The Azure DevOps Podcast | Dave McKinstry on Integrating Azure DevOps and the Culture of DevOps - Episode 005 - In this episode, Jeffrey and Dave talk about changes for Dave since the launch of Azure DevOps, what his journey has been like in the DevOps industry, his thoughts on companies looking to integrate Azure DevOps and move forward with automated deployment and reaching the continuous integration mark, how he thinks developers can move forward in terms of quality and Agile 101, and the modern skillset of what a developer and/or system engineer should look like in today’s DevOps environment.

    A Cloud Guru's Azure This Week


    Azure This Week | 12 October 2018 - This time on Azure This Week, Lars talks about HDInsight Enterprise Security Package which is now generally available, Microsoft Graph data which can be analyzed using Azure Data Factory and also the public preview of Azure Database for MariaDB.

    Ingesting a data stream from NIST manufacturing lab data – Part 2

    $
    0
    0

    The Industry Experiences team has recently published a solution guide for extracting insights from existing IoT data. The solution consists of the following components.

    • Ingesting data
    • Hot path processing
    • Cold path processing
    • Analytics clients

    This is the second post in a series of blogs that go through those components in detail. Ingestion of data is divided into two parts; this is part 2, where we cover the component that transforms the raw data and then posts data records to Azure Event Hubs. For more information, see Creating a data stream from NIST manufacturing lab data – Part 1.

    Communication between two microservices

    The question is how to make the communication work between the Logic App component and the custom code that transforms the raw data and posts the resulting data records to Event Hubs. Each data record type, such as events and samples, is received by a different event hub.

    Let's start with the communication mechanism. There are two general ways for microservices to communicate: direct messaging (network communication) and message passing.



    The best practice is to decouple microservices using message passing. The microservice on the receiving end waits for messages to arrive and processes them. This method also allows multiple servers to process one queue, enabling easy scalability.


    For this project we use the Azure Queue storage service (Storage queues). Since we are using Storage queues, the component transforming the raw messages must poll the queueing service for new messages and process them.
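
    As a minimal sketch of that pattern (not part of the solution guide's code; the blob name in the message is a placeholder), one side enqueues a message naming the blob to process, and the transform component dequeues, processes, and deletes it using the WindowsAzure.Storage client library:

    // Minimal sketch with the WindowsAzure.Storage SDK; the message contents are a placeholder.
    using System;
    using System.Threading.Tasks;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Queue;

    class QueueSketch
    {
        static async Task Main()
        {
            var account = CloudStorageAccount.Parse(
                Environment.GetEnvironmentVariable("queueConnectionString"));
            var queue = account.CreateCloudQueueClient().GetQueueReference("smssamples");
            await queue.CreateIfNotExistsAsync();

            // Producer side: the sender drops the name of the blob to process onto the queue.
            await queue.AddMessageAsync(new CloudQueueMessage("2018-10-12-sample.xml"));

            // Consumer side: the transform component polls, processes, then deletes the message.
            var message = await queue.GetMessageAsync();
            if (message != null)
            {
                Console.WriteLine($"Process blob: {message.AsString}");
                await queue.DeleteMessageAsync(message);
            }
        }
    }

    In the actual solution, the polling loop is handled for you by the WebJobs SDK queue trigger described below, so only the processing code needs to be written.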

    The transform and post component

    Custom code transforms the incoming raw result in XML into a flat structure for the data records. The record is a set of name-value pairs, with timestamps.

    To extract the timestamps, use this MTConnect client found on GitHub. You can also install the library as a NuGet package. Note that the library targets .NET Standard 2.0.

    The component polls the queue service to retrieve messages. Azure WebJobs SDK simplifies this task by providing a declarative binding and trigger system that works with storage blobs, queues and tables. The SDK also controls queue polling.

    Azure Functions Host allows hosting custom code targeting the Azure WebJobs SDK. The code can run in different environments. In this case, we use a Docker container to host the function code, and the Docker container is hosted by Azure Container Instances.

    Azure Functions Core Tools is used to continue with the rest of the implementation. Note that we use Version 2.x. Once the tools are installed, run the func new command with the desired options for language, name and function to generate a stub method. Run the func templates list command to see supported templates, then select Queue trigger.

    WebJob example code

    The following is the main entry method. In the attributes, FunctionName tells the host that this is a WebJobs function. Next come the triggers and the bindings. The code is triggered when there is a message on the smssamples queue. That is followed by the bindings to read and write data on various targets. There are three bindings for Event Hubs as destinations, an input binding for an Azure blob, and output bindings for three Azure Storage tables. The name of the blob is retrieved from the contents of the message. The C# attribute argument streams/{queueTrigger} specifies that the name of the blob is in the message contents. For details, see Azure Blob storage bindings for Azure Functions.

    [FunctionName("FlattenAndPost")]
    public static async Task Run(
         [QueueTrigger("smssamples", Connection = "queueConnectionString")]
         string myQueueItem,
         TraceWriter log,
         ExecutionContext context,
         [EventHub("samplesEventHub", Connection = "smssamplesEventHub")]
         IAsyncCollector<string> asyncSampleCollector,
         [EventHub("eventsEventhub", Connection = "smsEventsEventHub")]
         IAsyncCollector<string> asyncEventCollector,
         [EventHub("conditionsEventhub", Connection = "smsConditionsEventHub")]
         IAsyncCollector<string> asyncConditionCollector,
         [Table("eventsfromfunction", Connection = "queueConnectionString")]
         IAsyncCollector<EventTableRecord> asyncEventTableCollector,
         [Table("samplesfromfunction", Connection = "queueConnectionString")]
         IAsyncCollector<SampleTableRecord> asyncSampleTableCollector,
         [Table("conditionsfromfunction", Connection = "queueConnectionString")]
         IAsyncCollector<ConditionsTableRecord> asyncConditionTableCollector,
         [Blob("streams/{queueTrigger}", FileAccess.Read)]
         Stream blobStream)

Once the blob is read, we can use the MTConnect Client library’s object model. The DeserializeResults method uses XmlSerializer to deserialize the XML into an object. See Using the XmlSerializer for more details.

    var sampleResult = DeserializeResults<MTConnectStreamsType>(blobContents);
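The article does not show DeserializeResults itself; a minimal sketch of such a helper, assuming the blob stream has first been read into a string (for example with a StreamReader), could look like this:

using System.IO;
using System.Xml.Serialization;

// Minimal generic helper: deserialize the MTConnect XML into the client library's object model.
// blobContents can be produced from the Blob input binding with, for example:
//   var blobContents = await new StreamReader(blobStream).ReadToEndAsync();
private static T DeserializeResults<T>(string xml)
{
    using (var reader = new StringReader(xml))
    {
        var serializer = new XmlSerializer(typeof(T));
        return (T)serializer.Deserialize(reader);
    }
}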

Following that, the code processes events, samples, and conditions through separate functions that perform essentially the same operation: a LINQ query projects the hierarchy into a flat structure, and the results are written to the bindings. The events case looks like this:

    var nonEmptyEvents = sampleResult.Streams.Where(
             s => s.ComponentStream != null
                  && s.ComponentStream.All(cs => cs.Events != null && cs.Events.Any()))
         .ToList();
    
    events = nonEmptyEvents.SelectMany(
         s => s.ComponentStream.SelectMany(
             cs => cs.Events.Select(
                 e => new EventRecord()
                 {
                     HourWindow =
                         new DateTime(
                             e.timestamp.Year,
                             e.timestamp.Month,
                             e.timestamp.Day,
                             e.timestamp.Hour,
                             0,
                             0),
                     Id = Guid.NewGuid().ToString(),
                     DeviceName = s?.name,
                     DeviceId = s?.uuid,
                     Component = cs?.component,
                     ComponentName = cs?.name,
                     ComponentId = cs?.componentId,
                     EventDataItemId = e?.dataItemId,
                     EventTimestamp = e?.timestamp,
                     EventName = e?.name,
                     EventType = e.GetType().Name,
                     EventSequence = e?.sequence,
                     EventSubtype = e?.subType,
                     EventValue = e?.Value
                 }))).OrderBy(r => r.EventSequence).ToList();
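The write side is not shown at this point in the article. As a hedged sketch, assuming Json.NET for serialization and a hypothetical ToTableRecord mapper from EventRecord to EventTableRecord, posting the flattened events to the bindings could look like this:

using Newtonsoft.Json;

// Send each flattened event to the Event Hub binding as JSON,
// and to the Table binding as an EventTableRecord.
foreach (var eventRecord in events)
{
    await asyncEventCollector.AddAsync(JsonConvert.SerializeObject(eventRecord));

    // ToTableRecord is a hypothetical mapper; the table entity also needs a
    // PartitionKey and RowKey in addition to the flattened properties.
    await asyncEventTableCollector.AddAsync(ToTableRecord(eventRecord));
}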

Also note that the Event Hubs bindings are not installed by default; you need to reference the required library (Microsoft.Azure.WebJobs.Extensions.EventHubs) from NuGet.

With this setup, you should be able to run the function locally as described in the Azure Functions Core Tools documentation.

    Running in a Docker container

Let’s see how to bring this code into a Docker container. The Dockerfile is straightforward; just a few things to note:

• Take a dependency on the microsoft/azure-functions-dotnet-core2.0 container image.
• Make sure the workdir is /app.
• Setting ASPNETCORE_ENVIRONMENT to Development lets you test and debug while you are building the solution. Remove it, or flip it to Production, once the component is deployed.
• Set the AzureWebJobsScriptRoot environment variable to /app/bin/Debug/netstandard2.0/.
• Copy the values from the local.settings.json file into environment variables.
• Make sure there are no spaces around “=” when setting the environment variables.

The resulting Dockerfile looks like this:
    FROM microsoft/azure-functions-dotnet-core2.0
    WORKDIR /app
    ENV ASPNETCORE_ENVIRONMENT="Development"
    ENV AzureWebJobsScriptRoot=/app/bin/Debug/netstandard2.0/
    ENV AzureWebJobsStorage="…"
    ENV AzureWebJobsDashboard="…"
    ENV queueConnectionString="…"
    ENV smssamplesEventHub="…"
    ENV smsEventsEventHub="…"
    ENV smsConditionsEventHub="…"
    ENV consoleLoggingMode=always
    COPY . .

Build the new Docker image as usual, push it to the Azure Container Registry, and then run it as described in the tutorial.

    Next steps
