
Eyes wide open – Correct Caching is always hard


Image from Pixabay used under Creative Commons

In my last post I talked about Caching and some of the stuff I've been doing to cache the results of a VERY expensive call to the backend that hosts my podcast.

As always, the comments are better than the post! Thanks to you, Dear Reader.

The code is below. Note that the MemoryCache is a singleton, but within the process. It is not (yet) a DistributedCache. Also note that Caching is Complex(tm) and that thousands of pages have been written about caching by smart people. This is a blog post as part of a series, so use your head and do your research. Don't take anyone's word for it.

Bill Kempf had an excellent comment on that post. Thanks Bill! He said:

The SemaphoreSlim is a bad idea. This "mutex" has visibility different from the state it's trying to protect. You may get away with it here if this is the only code that accesses that particular key in the cache, but work or not, it's a bad practice.
As suggested, GetOrCreate (or more appropriate for this use case, GetOrCreateAsync) should handle the synchronization for you.

My first reaction was, "bad idea?! Nonsense!" It took me a minute to parse his words and absorb. Ok, it took a few hours of background processing plus I had lunch.

Again, here's the code in question. I've removed logging for brevity. I'm also deeply not interested in your emotional investment in my brackets/braces style. It changes with my mood. ;)

public class ShowDatabase : IShowDatabase
{
    private readonly IMemoryCache _cache;
    private readonly ILogger _logger;
    private SimpleCastClient _client;

    public ShowDatabase(IMemoryCache memoryCache,
        ILogger<ShowDatabase> logger,
        SimpleCastClient client)
    {
        _client = client;
        _logger = logger;
        _cache = memoryCache;
    }

    static SemaphoreSlim semaphoreSlim = new SemaphoreSlim(1);

    public async Task<List<Show>> GetShows()
    {
        Func<Show, bool> whereClause = c => c.PublishedAt < DateTime.UtcNow;

        var cacheKey = "showsList";
        List<Show> shows = null;

        //CHECK and BAIL - optimistic
        if (_cache.TryGetValue(cacheKey, out shows))
        {
            return shows.Where(whereClause).ToList();
        }

        await semaphoreSlim.WaitAsync();
        try
        {
            //RARE BUT NEEDED DOUBLE PARANOID CHECK - pessimistic
            if (_cache.TryGetValue(cacheKey, out shows))
            {
                return shows.Where(whereClause).ToList();
            }

            shows = await _client.GetShows();

            var cacheExpirationOptions = new MemoryCacheEntryOptions();
            cacheExpirationOptions.AbsoluteExpiration = DateTime.Now.AddHours(4);
            cacheExpirationOptions.Priority = CacheItemPriority.Normal;

            _cache.Set(cacheKey, shows, cacheExpirationOptions);
            return shows.Where(whereClause).ToList();
        }
        catch (Exception e)
        {
            //logging removed for brevity
            throw;
        }
        finally
        {
            semaphoreSlim.Release();
        }
    }
}

public interface IShowDatabase
{
    Task<List<Show>> GetShows();
}

SemaphoreSlim IS very useful. From the docs:

The System.Threading.Semaphore class represents a named (systemwide) or local semaphore. It is a thin wrapper around the Win32 semaphore object. Win32 semaphores are counting semaphores, which can be used to control access to a pool of resources.

The SemaphoreSlim class represents a lightweight, fast semaphore that can be used for waiting within a single process when wait times are expected to be very short. SemaphoreSlim relies as much as possible on synchronization primitives provided by the common language runtime (CLR). However, it also provides lazily initialized, kernel-based wait handles as necessary to support waiting on multiple semaphores. SemaphoreSlim also supports the use of cancellation tokens, but it does not support named semaphores or the use of a wait handle for synchronization.

And my use of a Semaphore here is correct...for some definitions of the word "correct." ;) Back to Bill's wise words:

You may get away with it here if this is the only code that accesses that particular key in the cache, but work or not, it's a bad practice.

Ah! In this case, my cacheKey is "showsList" and I'm "protecting" it with a lock and double-check. That lock/check is fine and appropriate HOWEVER I have no guarantee (other than I wrote the whole app) that some other thread is also accessing the same IMemoryCache (remember, process-scoped singleton) at the same time. It's protected only within this function!

Here's where it gets even more interesting.

  • I could make my own IMemoryCache, wrap things up, and then protect inside with my own TryGetValue...but then I'm back to checking/doublechecking etc.
  • However, while I could lock/protect on a key...what about the semantics of other cached values that may depend on my key? There are none here, but you could see a world where there are.

Yes, we are getting close to making our own implementation of Redis here, but bear with me. You have to know when to stop and say it's correct enough for this site or project BUT as Bill and the commenters point out, you also have to be Eyes Wide Open about the limitations and gotchas so they don't bite you as your app expands!

The suggestion was made to use the GetOrCreateAsync() extension method for MemoryCache. Bill and other commenters said:

As suggested, GetOrCreate (or more appropriate for this use case, GetOrCreateAsync) should handle the synchronization for you.

Sadly, it doesn't work that way. There's no guarantee (via locking like I was doing) that the factory method (the thing that populates the cache) won't get called twice. That is, someone could TryGetValue, get nothing, and continue on, while another thread is already in line to call the factory again.

public static async Task<TItem> GetOrCreateAsync<TItem>(this IMemoryCache cache, object key, Func<ICacheEntry, Task<TItem>> factory)
{
    if (!cache.TryGetValue(key, out object result))
    {
        var entry = cache.CreateEntry(key);
        result = await factory(entry);
        entry.SetValue(result);
        // need to manually call dispose instead of having a using
        // in case the factory passed in throws, in which case we
        // do not want to add the entry to the cache
        entry.Dispose();
    }

    return (TItem)result;
}

Is this the end of the world? Not at all. Again, what is your project's definition of correct? Computer science correct? Guaranteed to always work correct? Spec correct? Mostly works and doesn't crash all the time correct?

Do I want to:

  • Actively and aggressively avoid making my expensive backend call at the risk of in fact having another part of the app make that call anyway?
    • What I am doing with my cacheKey is clearly not a "best practice" although it works today.
  • Accept that my backend call could happen twice in short succession and the last caller's thread would ultimately populate the cache.
    • My code would become a dozen lines simpler, have no process-wide locking, but also work adequately. However, it would be naïve caching at best. Even ConcurrentDictionary has no guarantees - "it is always possible for one thread to retrieve a value, and another thread to immediately update the collection by giving the same key a new value."
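Bill's point about the mutex having different visibility than the state it protects suggests a well-known middle ground that is not in the post above: cache a lazily-started task per key, so the factory is guaranteed to run at most once per key no matter how many callers race. Below is a hedged sketch of that pattern, not my production code; `SingleFlightCache` and all its names are hypothetical, and it trades `IMemoryCache`'s expiration features for the single-execution guarantee (you'd still need to layer expiry on top).

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

// Sketch: a per-key cache where the factory runs at most once per key, even
// under concurrent callers. ConcurrentDictionary.GetOrAdd may build several
// Lazy wrappers during a race, but only the one that wins is stored -- and
// Lazy defers the factory call until .Value is read, so only the stored
// wrapper ever executes it.
public class SingleFlightCache<TKey, TValue>
{
    private readonly ConcurrentDictionary<TKey, Lazy<Task<TValue>>> _entries =
        new ConcurrentDictionary<TKey, Lazy<Task<TValue>>>();

    public Task<TValue> GetOrCreateAsync(TKey key, Func<Task<TValue>> factory) =>
        _entries.GetOrAdd(key, _ => new Lazy<Task<TValue>>(factory)).Value;

    // No expiration here -- a real version would evict entries, e.g. on a timer.
    public void Invalidate(TKey key) => _entries.TryRemove(key, out _);
}

public static class Demo
{
    public static async Task Main()
    {
        var cache = new SingleFlightCache<string, int>();
        int factoryCalls = 0;

        // Fire many concurrent requests for the same key.
        var tasks = new Task<int>[20];
        for (int i = 0; i < tasks.Length; i++)
        {
            tasks[i] = cache.GetOrCreateAsync("showsList", async () =>
            {
                Interlocked.Increment(ref factoryCalls);
                await Task.Delay(50); // stand-in for the expensive backend call
                return 42;
            });
        }

        await Task.WhenAll(tasks);
        Console.WriteLine($"factory ran {factoryCalls} time(s)"); // prints: factory ran 1 time(s)
    }
}
```

Note the design trade: unlike the SemaphoreSlim version, the lock-free dictionary protects exactly the state it guards, which was Bill's objection in the first place.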

What a fun discussion. What are your thoughts?


Sponsor: SparkPost’s cloud email APIs and C# library make it easy for you to add email messaging to your .NET applications and help ensure your messages reach your user’s inbox on time. Get a free developer account and a head start on your integration today!



© 2018 Scott Hanselman. All rights reserved.
     

Three Twitter Threads


I've been heads-down this week preparing for some upcoming talks, so not as much blogging as usual this week. But there have been some interesting conversations on Twitter this week that you may be interested to check out if you're not on the platform.

Steph Locke shares her go-to R packages for every stage of the data science process:

My #rstats #datascience goto 📦
IO: odbc readxl httr
EDA: DataExplorer
Prep: tidyverse
Sampling: rsample modelr
Feature Engineering: recipes
Modelling: glmnet h2o FFTrees
Evaluation: broom yardstick
Deployment: sqlrutils AzureML opencpu
Monitoring: flexdashboard
Docs: rmarkdown

— Steph Locke (@SteffLocke) April 28, 2018

Rachel Thomas and Jeremy Howard advocate thinking differently about AI development, and not falling into the trap of thinking "bigger is always better" when it comes to data (a sentiment I wholeheartedly agree with):

Innovation comes from doing things differently, not doing things bigger. @jeremyphoward https://t.co/3TJYs8OCbr pic.twitter.com/I55a6gT1OF

— Rachel Thomas (@math_rachel) May 2, 2018

I wondered what was so different about Python compared to R when it comes to package management, and got some really thoughtful responses:

Serious question: I use R, not Python, and while there's the occasional version/package issue in #rstats it's rarely a big deal. But I hear about this from Python devs all the time. What's so different about Python that this is such a thing? https://t.co/g8ddQu2gpt

— David Smith (@revodavid) April 30, 2018

Twitter definitely has its bad side, but there's a lot of really interesting conversation on the platform as well. Click on each tweet to see the conversations these generated.

Exploring Azure App Service – Introduction


Have you ever needed to quickly stand up a web site, or web API app that was publicly available? Is your team or organization thinking about moving to the cloud but aren’t sure the best place to start? One of the first places you should look is Azure App Service Web Apps. In this post we’ll look at how easy it is to get started, and a quick overview of key concepts.

App Service offers the following benefits:

  • A fully managed platform, meaning Azure automatically updates the operating system and runtime as security and stability fixes are released.
  • 10 free plans to every subscriber, so it won’t cost you money or credits to try your app in Azure.
  • First class support in Visual Studio, meaning that you can go from your app running on your local machine to running in App Service in less than 2 minutes.
  • It offers deployment slots, which enable you to stage multiple versions of your app and route varying amounts of traffic to the various versions (i.e. do A/B testing, or a ringed release model).
  • Scale up and down quickly and automatically based on load
  • For more, see Why use Web Apps?

In this blog post, I’ll provide an overview of App Service’s key features and concepts by walking through using Visual Studio to publish an ASP.NET application to Azure App Service.

Let’s get going

To get started, you’ll first need:

  • Visual Studio 2017 with the ASP.NET and web development workload installed (download now)
  • An Azure account
  • Any ASP.NET or ASP.NET Core app; for the purposes of this post, I’ll use a basic ASP.NET Core app

To start I’ll right click my project in Solution Explorer and choose “Publish”


This brings up the Visual Studio publish target dialog, which will default to the Azure App Service pane. The “Create new” radio button is already selected, so I’ll click the “Publish” button on the bottom right.

This will open the Create App Service dialog in Visual Studio.

Key App Service Concepts

The dialog has four fields that represent key concepts of creating an App Service:

  1. App Name: Will be the default public facing URL in Azure (it will be of the form https://<App_Name>.azurewebsites.net –you can configure domain names later if needed).
  2. Subscription: The Azure Subscription to create the resources in if you have more than one
  3. Resource Group: Groups your application and any dependent resources (SQL Databases, Storage Accounts, etc., see resource group design to learn more). To edit the name, click “New…”.
  4. Hosting Plan: The hosting plan is a set of reserved resources for your app. You can choose to host multiple apps in a single hosting plan (we’ll explore this further in a minute).


One concept that can be confusing is the relationship between the “Hosting Plan” (or “App Service plan”) and the “App Service”:

  • The Hosting/App Service plan is the virtual machine resources you are reserving in Azure to host your application. This is what you are paying or using credits for.
  • The App Service is your app and associated settings that run inside of the plan. You can run multiple apps (App Services) in the same plan (virtual machine) with the same implications as sharing any other server or VM between apps.

To explore the App Service plan further, click the “New…” button next to the Hosting Plan dropdown to open the “Configure Hosting Plan” dialog that has three fields:

  1. App Service Plan: A non-public facing name for the plan.
  2. Location: The region your app will run in. You generally want to pick a region that is close to the customers that will be accessing the site.
  3. Size: The size of the virtual machine you want to reserve for your application and the capabilities you want (e.g. deployment slots require a Standard or Premium plan).
    Note: Free and Shared plans run in the same VM as other App Service apps and are intended for development and testing, see App Service plan overview for more details

Publishing the app

At this point I’m ready to publish my app to App Service. The bottom right panel of the Create App Service dialog will show me all the resources I’m going to create in Azure (in this case a Hosting Plan and App Service). Everything looks good, so I just need to click “Create”:


Visual Studio will create all the resources on my behalf, publish my application, and open my default browser to the URL of the published application.

Conclusion

Hopefully, this overview of App Service concepts has been helpful and inspired you to give App Service a try. We believe that for many people, App Service is the easiest place to get started with cloud development, even if you need to move to other services in the future for further capabilities (compare hosting options to see additional choices). As always, let us know if you run into any issues, or have any questions below or via Twitter.

General availability: Azure Storage metrics in Azure Monitor


Azure Storage metrics in Azure Monitor, which was previously in public preview, is now generally available.

Azure Monitor is the platform service that provides a single source of monitoring data for Azure resources. With Azure Monitor, you can visualize, query, route, archive, and take action on the metrics and logs coming from resources in Azure. You can work with the data using the Monitor portal blade, the Azure Monitor Software Development Kits (SDKs), and through several other methods. Azure Storage is one of the fundamental services in Azure, and now you can chart and query storage metrics alongside other metrics in one consolidated view. For more information on how Azure Storage metrics are defined, you can see the documentation.

The features built on top of metrics are available differently per cloud:

  • Azure Monitor SDK (REST, .NET, Java & CLI): Available in all clouds
  • Metric chart: Available in Public Cloud, and coming soon in Sovereign Clouds
  • Alert: Available in Public Cloud, and coming soon in Sovereign Clouds

Meanwhile, the previous metrics become classic metrics and are still supported. The following screenshot shows the transition experience: Alerts and Metrics work on the new metrics, while Alerts (classic), Metrics (classic), Diagnostic settings (classic), and Usage (classic) work on classic metrics.


Support for classic metrics will end in the future, with early notice. We highly recommend migrating your workloads and monitoring to the new metrics, based on the migration guideline.

Azure expands certification scope of Health Information Trust Alliance Common Security Framework


I’m proud to announce that our Azure Health Information Trust Alliance (HITRUST) Common Security Framework (CSF) Certification was not only renewed by HITRUST, but our certification scope has expanded from last year by more than 250 percent! The HITRUST CSF Certification is the most widely recognized security accreditation in the healthcare industry. The HITRUST CSF builds on the Health Insurance Portability and Accountability Act (HIPAA) and the Health Information Technology for Economic and Clinical Health (HITECH) Act by providing a framework for complex compliance requirements, drawing on technical and process elements from HIPAA, the National Institute of Standards and Technology (NIST), the International Organization for Standardization (ISO), and Control Objectives for Information and Related Technologies (COBIT), to ensure controls are in place to safeguard Protected Health Information (PHI).
   
Health customers can further leverage our HITRUST CSF Certification as part of their own certification process when they build on Azure. To accelerate adoption and utilization for customers managing health data, we also recently released the Azure Security and Compliance Blueprint - HIPAA/HITRUST Health Data and AI, which provides tools and guidance for building HIPAA/HITRUST solutions.

Our greatly expanded HITRUST CSF assessment is another indication of our commitment to safeguarding information and maintaining the trust of our customers and the members they serve.
  
“HITRUST has been working with the industry to ensure the appropriate information protection requirements are met when sensitive information is accessed or stored in a cloud environment. By taking the steps necessary to obtain HITRUST CSF Certified status, Microsoft Azure is distinguished as an organization that people can count on to keep their information safe,” said Ken Vander Wal, Chief Compliance Officer, HITRUST.

Learn more by taking a closer look at the Official Letter of Certification.

You can also learn more about the Trust Center and all Microsoft products that comply with HIPAA and HITRUST.

The following is a complete list of Azure services included in this HITRUST certification spanning Azure, Azure Government, and Azure Germany clouds:

Announcing low-priority VMs on scale sets now in public preview


We are thrilled to announce the public preview of low-priority virtual machines (VMs) on VM scale sets. Low-priority VMs allow users to run their workloads at a fraction of the price, enabling significant cost savings. This offering has been available through our Azure Batch service since May 2017, and because we have seen great customer success we are expanding it to VM scale sets. This is a great option for resilient, fault-tolerant applications as these VMs are allocated using our unutilized capacity and can, therefore, be evicted. Low-priority VMs are available through VM scale sets with up to an 80 percent discount.

What are low-priority VMs?

Low-priority VMs enable you to take advantage of our unutilized capacity. The amount of available unutilized capacity can vary based on size, region, time of day, and more. When deploying Low-priority VMs in VM scale sets, Azure will allocate the VMs if there is capacity available, but there are no SLA guarantees. At any point in time when Azure needs the capacity back, we will evict low-priority VMs. Therefore, the low-priority offering is great for flexible workloads, like large processing jobs, dev/test environments, demos, and proofs of concept.

Provisioning low-priority VMs

Low-priority VMs can easily be deployed through our VM scale set offering. There is a new property that lets you set the priority to low at VM scale set creation time. If set, the VMs in the scale set will be low-priority. You can create low-priority VMs on scale sets using the Portal, Azure CLI, PowerShell, and Resource Manager templates.
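As a sketch of what that property looks like in a Resource Manager template: the `priority` and `evictionPolicy` fields under `virtualMachineProfile` are the documented scale set settings, while the resource name, location, SKU, and API version below are purely illustrative placeholders (a real template would also include the full OS, network, and storage profiles).

```json
{
  "type": "Microsoft.Compute/virtualMachineScaleSets",
  "apiVersion": "2017-12-01",
  "name": "examplescaleset",
  "location": "westus2",
  "sku": { "name": "Standard_D2_v3", "capacity": 4 },
  "properties": {
    "virtualMachineProfile": {
      "priority": "Low",
      "evictionPolicy": "Deallocate"
    }
  }
}
```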


Features of low-priority VMs

Eviction policy: When provisioning low-priority VMs, you can set the eviction policy. The two eviction policies supported are stop-deallocate on eviction and delete on eviction. Stop-deallocate on eviction allows users to keep the disks associated with these VMs. Users can try to restart low-priority VMs in the scale set, but remember, there are no allocation guarantees. The delete on eviction policy deletes the VM and all disks associated with the VM. This allows you to save on costs, as you will not continue to pay for the disks.

Notifications (coming soon): Users can opt-in to receive in-VM notifications through Azure Scheduled Events. This will notify you if your VMs are being evicted and you will have 30 seconds to finish any jobs and perform shutdown tasks prior to the eviction.

Low-priority VMs are available in all Azure regions. All size families are supported except for B-series and Dv2 Promo Series.

More information

Learn more about our low-priority offering on VM Scale Sets.

Learn more about our low-priority offering on Batch.

Check out the low-priority pricing.

Announcing first-class support for CloudEvents on Azure


As more and more serverless applications are developed, events are now the glue connecting all aspects of modern applications. These events can originate from microservices, from VMs, from the edge, or even from IoT devices. They can be fired for infrastructure automation, for application communication, as triggers from data platforms, or to connect complex analytics and AI services. The value of events is growing exponentially, and there is a plethora of diverse sources and consumers of events across public clouds, private clouds, and even at the edge.

One of our major Azure innovations last year was the creation of an event-centric serverless platform, Azure Event Grid. To support the growing diversity of serverless applications, Event Grid was launched with support to use Azure’s native serverless platforms, like Azure Functions, and with support to use custom events, enabling applications to send and receive events whether on Azure or another platform.

We are now taking this open and diverse event approach further by being the first major public cloud to offer first-class support for CloudEvents as part of Event Grid. CloudEvents is a new open specification and standard for describing event data in a common and consistent way. Building on this standard will enable interoperability between different cloud providers, SaaS companies, IoT manufacturers, and many others creating a much richer and more inclusive serverless experience. Additionally, this will enable event-based IoT solutions on the edge to take an event-model dependency without being locked to a single cloud provider. You can find a similar approach to this open flexibility with Azure Functions, where we offer a fully open-source service that can run serverless functions on any platform and on any cloud in a Docker container.

CloudEvents was created in the Serverless Working Group of the Cloud Native Compute Foundation (CNCF), partnering with numerous cloud providers and service providers. The exciting outcome of having broad support for this open, standard and consistent glue is the opportunity for uniform tooling, standard ways to de-serialize events and universal methods for routing and handling events. In the fast-changing serverless world, this interoperability is incredibly important for agile portability of your applications.

Clemens Vasters just showed a demo of CloudEvents on Azure at Kubecon and I am excited to announce that you can now publish and consume events using CloudEvents directly on Event Grid. This allows incoming or outgoing events to use the CloudEvent open standard while still making it incredibly easy to use the rest of the Azure serverless platform, including Azure Functions and Logic Apps.
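To make the interoperability concrete, here is what a CloudEvents envelope looks like. The attribute names (cloudEventsVersion, eventType, source, eventID, eventTime, contentType, data) come from the CloudEvents 0.1 specification; every value below is a hypothetical example, not output from a real Azure subscription.

```json
{
  "cloudEventsVersion": "0.1",
  "eventType": "Microsoft.Storage.BlobCreated",
  "source": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/example-rg/providers/Microsoft.Storage/storageAccounts/examplestorage",
  "eventID": "b68529f3-68cd-4744-baec-4a5e8beb0000",
  "eventTime": "2018-05-04T17:31:00Z",
  "contentType": "application/json",
  "data": {
    "api": "PutBlob",
    "url": "https://examplestorage.blob.core.windows.net/uploads/photo.jpg"
  }
}
```

Because the envelope is the same regardless of which cloud or device produced it, a single handler can route events from Azure, another provider, or the edge without provider-specific parsing.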

Go ahead and try CloudEvents on Event Grid. It is already available in US West Central, US Central, and Europe North. CloudEvents is being actively incubated and is at version 0.1, so feel free to give feedback, make PRs, and even join the effort in the Serverless Working Group of the CNCF. As with everyone else involved, I am incredibly excited to see this project develop!

Azure M-series VMs are now SAP HANA certified


In December 2017, we announced the general availability of the Azure M-series virtual machines (VMs). These VMs run on the most powerful cloud hardware available across all public cloud providers, delivering configurations of up to 128 vCPUs and 4TB of RAM in a single VM! Over the past few months, we have seen customers adopt and utilize M-series VMs for high-end database workloads based on SQL Server, Oracle, and other DBMS systems, and even move entire SAP landscapes into Azure.

Microsoft, as a customer of SAP, led the early adoption path by completing our own migration of Microsoft’s SAP landscape into Azure, including our 14TB SAP ERP system, which runs Microsoft’s most critical business processes on the M-series M128s VM for this application's SQL Server DB.

To accommodate even more demanding workloads Azure has invested into accelerating database system performance with optimizations for critical write I/O, exclusively on Azure M-series VMs. Azure Write Accelerator is functionality we recently released for M-series VMs. This has been proven to accelerate performance for critical, transactional log writes that require sub-millisecond latency.

We have been working with SAP over the last few months to leverage and certify Azure M-series VMs for their SAP HANA database, utilizing these capabilities. Today, we are excited to announce that M-Series VMs are certified by SAP for SAP HANA production workloads in the following categories:

VM type | vCPUs | Memory in GiB | SAP HANA workload
------- | ----- | ------------- | -----------------
M64s    | 64    | 1024          | OLTP and OLAP
M64ms   | 64    | 1792          | OLTP
M128s   | 128   | 2048          | OLTP and OLAP
M128ms  | 128   | 3892          | OLTP

For the exact scenario descriptions, please refer to SAP’s Certified IaaS Platforms webpage. The entries for M-series are expected to be published by SAP in the next few days.

Currently, M-series SAP HANA certification is limited to scale-up scenario, with scale-out certification actively underway. For SAP sizing information using these different M-series VM types, check SAP Note #1928533 - SAP Applications on Azure: Supported Products and Azure VM types (SAP login required). In addition to certification, we have conducted SAP benchmarks that reinforce and document the impressive performance of our M-series VMs for SAP workloads that rely upon SAP HANA.

With M-series VMs, you not only have SAP HANA certified high-performance virtual machines, but also the cloud scale on-demand and agile deployment capabilities of Azure. We recommend you watch the Microsoft Mechanics video below to learn how you can automate and deploy SAP HANA based landscapes in minutes. Learn and experience how Write Accelerator improves database write performance and leverage the power of M-series in Azure today.

For more information on utilizing M-series VMs for SAP HANA, read the article, SAP HANA on Azure operations guide. Even during our coordination and pre-certification phases with SAP, Microsoft was working with customers to diligently test and validate our M-series VMs under SAP HANA workloads.

These efforts went even broader, with customers fully validating and moving their production S/4HANA systems to M-series. Based on these experiences, Microsoft assessed the quality, reliability, and performance of the different M-series virtual machines and feels confident in our delivery to meet your expectations.

Hosting SAP HANA on Azure is not limited to the M-series VMs. Beyond M-Series virtual machines, Azure also offers SAP HANA on Azure Large Instances which expand scale-up capacities up to 20TB and scale-out volumes for OLAP up to 30TB with our market leading, BareMetal offerings.

Azure M-series VMs are available in the following Azure regions: East US 2, West US 2, West Europe, UK South, Southeast Asia, US Gov Virginia, and with more regions launching globally. To learn more about migrating SAP applications to Azure, download our SAP Migration whitepaper. For more information about running SAP workloads on Azure we recommend you start here.


Top stories from the VSTS community – 2018.05.04

Here are the top stories we found in our streams this week related to DevOps, VSTS, TFS, and other interesting topics, listed in no specific order: TOP STORIES: Recent work on the Countdown Widget for VSTS – Henry Been shares the latest updates and future work he’s planning for the VSTS Countdown widget. Take Two: Another Approach for... Read More

Because it’s Friday: The eyes don’t work


Spring has finally arrived around here, and a recent tweet reminded me of the depths of fear that Spring brought to 7-year-old me back in Australia: swooping magpies. These native birds, related to but quite different from the magpies of North America and Europe, get very aggressive in the Spring, and will attack anyone who walks into their territory, swooping in from behind and attacking your head with their long sharp beak. (They can easily draw blood.) They'll repeat their attacks over and over again until you leave. People try many things to prevent the attacks, like wearing sunglasses backwards or putting fake eyes on the back of your head, but as you can see from the video below they don't really work:

True story: my mum used to make me wear a plastic ice-cream tub on my head as I walked to primary school as a magpie defense. (This was before bike helmets were a thing.) That didn't work either.

That's all from the blog for this week. Have a great weekend, and we'll be back next week. 

 

Azure.Source – Volume 30


Microsoft Build 2018 is just a few days away on May 7-9. Whether you can’t make it to Seattle or you just want to enhance your on-the-ground experience at the event, Microsoft Build Live brings you live as well as on-demand access to three days of inspiring speakers, spirited discussions, and virtual networking. The livestream gives you another way to connect, spark ideas, and deepen your engagement with the latest ideas in the cloud, AI, mixed reality, and more.

Now in preview

Python, Node.js, Go client libraries for Azure Event Hubs in public preview - Azure Event Hubs is expanding its ecosystem to support more languages. Azure Event Hubs is a highly scalable data-streaming platform processing millions of events per second. Event Hubs uses Advanced Message Queuing Protocol (AMQP 1.0) to enable interoperability and compatibility across platforms. Now, you can easily get started with Event Hubs with the addition of new client libraries for Go, Python, and Node.js in public preview.

Monitor Microsoft peering in ExpressRoute with Network Performance Monitor - public preview - Connectivity to Microsoft online services (Office 365, Dynamics 365, and Azure PaaS services) is through Microsoft peering. We enable bi-directional connectivity between your WAN and Microsoft cloud services through the Microsoft peering routing domain. You must connect to Microsoft cloud services only over public IP addresses that are owned by you or your connectivity provider, and you must adhere to all the defined rules. You can now monitor the end-to-end connectivity between your on-premises resources (branch offices, datacenters, and office sites) and Microsoft online services (Office 365, Dynamics 365, and Azure PaaS services) connected through ExpressRoute. NPM proactively sends you alert notifications whenever the loss or latency of the connection exceeds the set threshold.

Announcing low-priority VMs on scale sets now in public preview - Low-priority VMs allow users to run their workloads at a fraction of the price, enabling significant cost savings. Azure virtual machine scale sets let you create and manage a group of identical, load balanced, and autoscaling VMs. VM scale sets is a great option for resilient, fault-tolerant applications as these VMs are allocated using our unutilized capacity. At any point in time when Azure needs the capacity back, the Azure infrastructure will evict low-priority VMs; therefore, low-priority VMs are great for workloads that can handle interruptions like batch processing jobs, dev/test environments, large compute workloads, and more. Low-priority VMs are available through VM scale sets with up to an 80 percent discount.

Azure Friday | Episode 408 - Low-Priority VM Scale Set (VMSS) - Ziv Rafalovich joins Scott Hanselman to show how to use low-priority VM scale set for a significant cost saving with Azure.

Now generally available

Turbocharge cloud analytics with Azure SQL Data Warehouse - Take advantage of Azure SQL Data Warehouse Gen2, which is now generally available. Gen2, formerly known as Optimized for Compute, comes with five times the compute capacity and four times the concurrent queries of the Gen1 offering. The enhanced storage architecture on Gen2 introduces unlimited columnar storage capacity, while maintaining the ability to independently scale compute and storage. The capacity increase means that users can run their most demanding workloads on Gen2. The increase in concurrent queries provides an opportunity to run more in parallel, to help ensure full utilization of system resources.

  • Adaptive caching powers Azure SQL Data Warehouse performance gains - Azure SQL DW Compute Optimized Gen2 tier takes full advantage of NVM Express (NVMe) solid-state drive (SSD) devices through adaptive caching of recently used data on NVMe. With this breakthrough, we have observed up to five times the improvement in query performance on customer workloads compared with the first generation of Azure SQL DW, and some workloads improved even more.

Azure SQL DW Compute Optimized Gen2 tier diagram

  • Blazing fast data warehousing with Azure SQL Data Warehouse - Azure SQL DW Compute Optimized Gen2 tier delivers fast query performance through adaptive caching; increased concurrent queries that can be executed to power enterprise-wide dashboards with high concurrency; and predictable performance through scaling with the ability to store unlimited data in SQL’s columnar format, and the availability of new SLOs with an additional five times the compute capacity.
  • Region expansion for the next generation of SQL Data Warehouse - The release of Azure SQL DW Compute Optimized Gen2 tier comes with an expansion of 14 additional regions, which brings the global region footprint of SQL DW Gen2 to 20 and surpasses all other major cloud providers. With more global regions than any other cloud provider, Azure SQL Data Warehouse gives you the flexibility to deploy applications where you need them.

Global VNet Peering now generally available - Global VNet Peering is now generally available in all Azure public regions, excluding the China, Germany, and Azure Government regions. Global VNet Peering enables resources in your virtual network to communicate directly, without gateways, extra hops, or transit over the public internet. This allows a high-bandwidth, low-latency connection across peered virtual networks in different regions. With just a couple of clicks, you can use Global VNet Peering to share resources within a global, private network. You can then easily replicate data across regions for redundancy and disaster recovery.

Write Accelerator for M-Series virtual machines now generally available - Write Accelerator is a new disk capability that offers customers sub-millisecond writes for their disks. Initially supported on M-Series VMs with Azure Managed Disks and Premium Storage, Write Accelerator is recommended for workloads that require highly performant updates, such as database transaction log writes. Write Accelerator is exclusive to Azure M-series virtual machines in recognition of the performance-sensitive workloads that are run on these types of high-end VMs.


AzCopy on Linux now generally available - AzCopy is a command line data transfer utility designed to move large amounts of data to and from Azure Storage with optimal performance. It is designed to handle transient failures with automatic retries, as well as to provide a resume option for failed transfers. This general availability release includes: throughput improvements up to 3x, easy installation, pipe from stdin, and single file transfer support.

General availability: Azure Storage metrics in Azure Monitor - Azure Monitor is the platform service that provides unified user interfaces for monitoring across different Azure services. With Azure Monitor, you can visualize, query, route, archive, and take action on the metrics and logs coming from resources in Azure. With metrics on Azure Storage, you can analyze usage trends, trace requests, and diagnose issues with your storage account.

News and updates

The Azure Cloud Collaboration Center: A First-of-Its Kind Facility - On Wednesday, we invited the world to see how our teams are innovating in the Azure Cloud Collaboration Center, a first-of-its-kind facility that combines innovation and scale to address operational issues and unexpected events to drive new levels of customer responsiveness, security and efficiency. The Cloud Collaboration Center space gives customers a snapshot of what is happening with their data 24/7 and enables real-time troubleshooting of any issue by multiple teams simultaneously from across the organization.

Azure Cloud Collaboration Center photo

Announcing first-class support for CloudEvents on Azure - Azure is the first major public cloud to offer first-class support for CloudEvents as part of Azure Event Grid, which is an event-centric serverless platform. CloudEvents is a new open specification and standard for describing event data in a common and consistent way. Building on this standard will enable interoperability between different cloud providers, SaaS companies, IoT manufacturers, and many others creating a much richer and more inclusive serverless experience. Additionally, this will enable event-based IoT solutions on the edge to take an event-model dependency without being locked to a single cloud provider.

Secure credential management for ETL workloads using Azure Key Vault and Data Factory - Azure Data Factory is now integrated with Azure Key Vault. You can store credentials for your data stores and computes referred in Azure Data Factory ETL (extract, transform, load) workloads in a key vault. Simply create an Azure Key Vault linked service and refer to the secret stored in the key vault in your Data Factory pipelines.
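As a rough sketch of what this integration looks like in practice (the names `AzureKeyVaultLinkedService`, `AzureSqlLinkedService`, and `MySecretName` below are illustrative placeholders, not values from this post), you first define a Key Vault linked service:

```json
{
  "name": "AzureKeyVaultLinkedService",
  "properties": {
    "type": "AzureKeyVault",
    "typeProperties": {
      "baseUrl": "https://<your-vault-name>.vault.azure.net"
    }
  }
}
```

A data store linked service can then reference a secret from that vault instead of embedding the credential inline:

```json
{
  "name": "AzureSqlLinkedService",
  "properties": {
    "type": "AzureSqlDatabase",
    "typeProperties": {
      "connectionString": {
        "type": "AzureKeyVaultSecret",
        "store": {
          "referenceName": "AzureKeyVaultLinkedService",
          "type": "LinkedServiceReference"
        },
        "secretName": "MySecretName"
      }
    }
  }
}
```

With this shape, Data Factory resolves the secret at runtime, so the connection string never appears in the pipeline definition itself.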

Azure Marketplace new offers: April 1–15 - The Azure Marketplace is the premier destination for all your software needs – certified and optimized to run on Azure. Find, try, purchase, and provision applications & services from hundreds of leading software providers. In the first half of April we published 20 new offers, including: Ethereum developer kit (techlatest.net), TensorFlow Serving Certified by Bitnami, and Machine Learning Server Operationalization.

Microsoft extends AI support to PyTorch 1.0 deep learning framework - PyTorch 1.0 takes the modular, production-oriented capabilities from Caffe2 and ONNX and combines them with PyTorch's existing flexible, research-focused design to provide a fast, seamless path from research prototyping to production deployment for a broad range of AI projects. Azure Machine Learning Services provides support for a variety of frameworks, including TensorFlow and Microsoft Cognitive Toolkit, and will soon support PyTorch 1.0 as well.

Azure expands certification scope of Health Information Trust Alliance Common Security Framework - Health Information Trust Alliance (HITRUST) renewed our Azure HITRUST Common Security Framework (CSF) Certification, which is the most widely recognized security accreditation in the healthcare industry. In addition, certification scope expanded from last year by more than 250%. The HITRUST CSF builds on Health Insurance Portability and Accountability Act (HIPAA) and the Health Information Technology for Economic and Clinical Health (HITECH) Act, by providing a framework for complex compliance requirements that include technical and process elements such as HIPAA, NIST, ISO and COBIT to ensure controls are in place to safeguard Protected Health Information (PHI).

Azure M-series VMs are now SAP HANA certified - Azure M-series virtual machines (VMs) run on the most powerful cloud hardware available across all public cloud providers. They deliver configurations of up to 128 vCPUs and 4 TB of RAM in a single VM! Over the past few months, we have seen customers adopt and utilize M-series VMs for high-end database workloads based on SQL Server, Oracle, and other DBMS systems, and even move entire SAP landscapes into Azure. We have been working with SAP over the last few months to certify Azure M-series VMs for the SAP HANA database, utilizing these capabilities. Today, we are excited to announce that M-series VMs are certified by SAP for SAP HANA. See this post for information about which VM types received certification for specific production workloads.

Additional news and updates

Azure Friday

Azure Friday | Episode 407 - VS Code for Java Microservices in Kubernetes - Rome Li joins Donovan Brown to discuss how to run your Java microservices in Kubernetes with the help of Visual Studio Code. Java Extension Pack lets you work with Java code and projects. Spring Boot Extension Pack makes it very efficient to work with Spring Boot applications. And Kubernetes Extension visualizes Kubernetes resources and makes it easier to work with kubectl and manifest files.

Developer spotlight

Bring the power of serverless to your IoT application and compete for cash prizes - The Azure IoT on Serverless hackathon is an online, judged competition with cash prizes hosted on Devpost. All ideas are welcome, whether you want to work on that sensors-driven smart-home project you have been putting off, build a remote monitoring solution for a healthcare facility, create an intelligent system to streamline the manufacturing process of your production plant, or even create a self-healing robot wearing cool sunglasses.

The Azure Podcast

The Azure Podcast | Episode 227 - Azure SRE - Get the latest Azure updates, and hear Principal Software Engineer Richard Clawson from the Azure SRE team give us the inside scoop on how his team keeps Azure running reliably.

Technical content and training

Explore SaaS analytics with Azure SQL Database, SQL Data Warehouse, Data Factory, and Power BI - In this tutorial, you walk through an end-to-end analytics scenario. The scenario demonstrates how analytics over tenant data can empower software vendors to make smart decisions. Using data extracted from each tenant database, you use analytics to gain insights into tenant behavior, including their use of the sample Wingtip Tickets SaaS application. This scenario involves extracting data, optimizing it, and using BI tools to draw out useful insights.

Events

Microsoft Build: Come for the tech, stay for the party - Looking forward to Microsoft Build? Now you’ve got one more reason. After three days of can’t-miss tech sessions and skill-sharpening workshops, we’re throwing an awesome party for attendees at Seattle Center.

Microsoft Build Celebration graphic

Azure tips and tricks

Quickly Connect to a Linux VM with SSH

Easily Start, Restart, Stop or Delete Multiple VMs

Azure DevOps Project: New feature additions

Since we announced Azure DevOps Projects at the Connect conference late last year, we’ve been hard at work to make it as easy as possible to get set up with a fully functioning DevOps pipeline for your team in a few short steps – regardless of what platform you build your applications in and which...

Release Gates – Enable Progressive Exposure and Phased Deployments

We are excited to announce that release gates are now generally available to all VSTS users and accounts, so everyone can now add progressive exposure to their continuous delivery pipelines. What are release gates? If you haven’t tried them yet, release gates enable data-driven approvals for phased deployments with VSTS based on monitoring of deployment...

Microsoft 365 empowers developers to build intelligent apps for where and how the world works


Today, at our annual Build conference, Satya Nadella and Scott Guthrie talked about the vision and strategy of rationalizing Microsoft’s platform into an intelligent cloud and intelligent edge, enlightened by AI and mixed reality and architected for the modern computing landscape. Tomorrow, we will share with you, our developer community, the unique opportunities with Microsoft 365 in today’s multi-sense, multi-device world.

For years, we have been at Build talking about the huge opportunity with Windows and Office as developer platforms. In fact, today we have 135 million commercial monthly active users of Office 365 and nearly 700 million Windows 10 connected devices.

But Microsoft’s mission is fundamentally dependent on how well we TOGETHER can harness the power of both Windows and Office in the Microsoft 365 platform.

Image showing how Microsoft 365 brings together Office 365, Windows 10, and Enterprise Mobility + Security (EMS), a complete, intelligent, and secure solution to empower employees.

Microsoft 365 brings together Office 365, Windows 10, and Enterprise Mobility + Security (EMS) as a complete, intelligent, and secure solution to empower employees.

In case you’re not already familiar, Microsoft 365 brings together Office 365, Windows 10, and Enterprise Mobility + Security (EMS) as a complete, intelligent, and secure solution to empower employees. As the largest productivity platform in the world, it’s a vital part of the intelligent edge—and it enables developers to create beneficial experiences that work elegantly across many different device types and many different computing “senses”—including vision and voice.

Today, many of you would consider yourselves Windows or Office developers. Or web developers who target Windows and Office users. Or even mobile developers asking how you might align a mobile experience with other devices. When you leave Build 2018 this week, we hope you consider yourselves Microsoft 365 developers.

New Microsoft 365 experiences empower customers to achieve more

This week, we’re introducing a set of features and updates across a variety of devices and platforms and a better blending between web and application environments for users and developers. Last year at Build, you heard us talk about our commitment to meeting our customers where they are, across platforms. We’re expanding this work to not only bring more Microsoft 365 services across platforms and into applications, but to better connect customers’ existing PC experiences with their phones, helping to increase engagement for developers. These announcements include:

  • A new way to connect your phone to your PC with Windows 10 that enables instant access to text messages, photos, and notifications. Imagine being able to quickly drag and drop your phone’s photos into a document on your PC in one swift movement—without having to take your phone out of your pocket. This new experience will begin to roll out in the Windows Insider Program soon.

Image showing a laptop and a mobile device, connected via Windows 10.

A new way to connect your phone to your PC with Windows 10 that enables instant access to text messages, photos, and notifications.

  • The updated Microsoft Launcher application on Android that will support Enterprise customers with easy access to line of business applications via Microsoft Intune. Microsoft Launcher on Android will also support Timeline for cross-device application launching. Today, your Microsoft Edge browsing sessions on your iPhone or iPad are included in the Timeline experience on your Windows 10 PC. Tomorrow, we’ll show how later this year you’ll be able to access that same timeline on your iPhone with Microsoft Edge.

Image showing the Microsoft Launcher application on Android.

The updated Microsoft Launcher application on Android will support Enterprise customers with easy access to line of business applications via Microsoft Intune.

Image showing a laptop and two mobile devices showcasing Microsoft Launcher on Android and to Microsoft Edge on iPhone and iPad

Timeline is coming to Microsoft Launcher on Android and to Microsoft Edge on iPhone and iPad.

  • Updates to Sets, an easier way to organize your stuff and get back to what you were doing. With Sets, what belongs together stays together, making it easier and faster to create and be productive. As developers, your Universal Windows Platform (UWP) application will work with Sets from the start, helping to keep your customers engaged. And with a few simple changes, your Win32 or web applications are supported within Sets as well.

Screenshot showcasing Sets, an easier way to organize your stuff.

Updates to Sets, an easier way to organize your stuff and get back to what you were doing.

  • Microsoft 365 support of Adaptive Cards, helping developers create rich interactive content within conversations. As a result, end users can approve expense reports or comment on an issue in GitHub directly within an Outlook email or Teams chat. Building on Adaptive Cards, we’re also bringing payments to Outlook. With Microsoft Pay, you’ll be able to quickly and securely pay bills and invoices right from your inbox. Several Microsoft partners will announce support for Microsoft Pay at Build.

Screenshot showcasing Adaptive Cards, helping developers create rich interactive content within conversations.

Microsoft 365 support of Adaptive Cards helps developers create rich interactive content within conversations.

New opportunities for developers with Microsoft 365

Core to the Microsoft 365 platform is the Microsoft Graph. It helps developers connect the dots between people, conversations, schedules, and content within the Microsoft Cloud. We encourage you to tap into the power of the Microsoft Graph to gain unprecedented context and insights to build smarter apps. Tomorrow, we will talk about new opportunities with the Microsoft Graph and new tools with Microsoft 365 that give you the flexibility to design and create in the languages and frameworks of your choice, empowering you to create smarter ways for people to work. These announcements include:

Image showing how the Microsoft Graph helps developers connect the dots between people, conversations, schedules, and content within the Microsoft Cloud.

  • New and updated Microsoft Teams APIs in the Microsoft Graph and support for organization-specific applications in Teams, allowing developers to create tailored, intelligent experiences based on the unique needs of a business or industry. Companies can also publish custom apps to the Teams app store.
  • Deeper SharePoint integration into Microsoft Teams, enabling people to pin a SharePoint page directly into a Teams channel for deeper collaboration. Developers can use modern script-based frameworks like React within their projects to build components that can be added and organized within SharePoint pages.
  • Updates helping you support the Fluent Design System, so you can create immersive, deeply engaging experiences with Microsoft’s updated design language. Now every organization can make beautiful solutions that empower your customers to do more. With UWP XAML Islands, you can access the more capable, flexible, powerful XAML controls regardless of which UI stack you use—whether it’s Windows Forms, WPF, or native Win32.

Screenshot of the Fluent Design System, helping you create immersive, deeply engaging experiences with Microsoft’s updated design language.

  • .NET Core 3.0, which allows developers to use the latest version of .NET and have your application run in a standalone .NET environment, so you can build amazing app experiences that don’t impact your broader organizational infrastructure. This allows desktop developers to take advantage of side-by-side install of their applications. That means that system-wide updates of .NET will not impact running applications.
  • MSIX, a complete containerization solution providing a simple way to convert large catalogs of applications. It inherits all the great features from UWP, including reliable, robust installation and updating, as well as a managed security model and support for both enterprise management and the Microsoft Store.
  • New Azure Machine Learning and JavaScript custom functions that let developers and organizations create your own powerful additions to Excel’s catalog of formulas.
  • Windows Machine Learning, a new platform that enables developers to easily develop machine learning models in the intelligent cloud and then deploy them to run offline, with high performance, on the PC platform.

If you are a developer maintaining Windows desktop applications, you can now use all of these modern tools with your existing investment across Win32, WPF, and Windows Forms applications. I’ll also share tomorrow our commitment to maximize your opportunity with Microsoft Store by providing up to 95 percent share of the revenue for your consumer apps, excluding games. For more details on the updates to Microsoft Store, check out this blog post. For more detail on the developer opportunities I’ve mentioned here, check out Kevin Gallo’s blog post.

Some of the things we’re talking about at Build this week are available for developers to use and try out now, while other experiences will come during the next year.

All of the things we’re talking about give you the power to build applications the way you want, with the most flexibility to make the right choices for your end users. This is an exciting time: Microsoft 365 enables you to achieve more with your current skillset and your current tools. And that in turn empowers you to help your users achieve more.

Thank you for building with us. I can’t wait to see what you’ll build in 2019!

The post Microsoft 365 empowers developers to build intelligent apps for where and how the world works appeared first on Microsoft 365 Blog.

Modernizing applications for our multi-sense, multi device world


Tomorrow at Build 2018, Joe Belfiore and I will have the privilege of sharing with you some of the advancements in Microsoft 365 that are focused on multi-sense and multi-device experiences. Microsoft 365 allows developers to drive more productivity and engagement holistically – in one ecosystem.

We know that building for the future comes with many complex challenges, so we have taken a practical approach to helping you be more productive when updating your existing applications. We focused on four key areas:

  • Provide great user productivity in our multi-sense, multi-device world
  • Engage your employees where they work
  • Deliver pragmatic deployment solutions
  • Make Windows your primary dev box for all your workload needs across the Intelligent Cloud and Intelligent Edge

Great user productivity in our multi-sense, multi-device world

To support the multi-sense, multi-device world in which we live and work – the foundation of our user experiences needs to grow and adapt. With the Fluent Design System, you can use a cohesive system that spans across a variety of inputs and outputs, while embracing the uniqueness of both.

With the Fluent Design System, you can use a cohesive system that spans across a variety of inputs and outputs.

Figure 1: Fluent Design system is natural on each device

Just like Microsoft rolls out Windows 10 incrementally, most of you do the same inside of your company. If you are deploying to devices running Windows 10 Anniversary Update and later, your applications can start using modern controls right away. You’ll be able to do this through Windows UI Library and it’ll be available via NuGet. Controls in this library are the same as Windows uses in its apps and experiences, and the same that ship in the Windows 10 SDK.

Artificial intelligence is a key part of the modernization journey, and tomorrow I will show a proof of concept of how the Windows AI platform enables Microsoft Word to evaluate machine learning models using the hardware resources available on the Intelligent Edge. Developers can solve problems that are impractical to solve using traditional algorithms, as well as train models for line of business applications.

And, for those of you who are updating your existing WPF, Windows Forms, or native Win32 applications incrementally, you can use UWP XAML Islands to incorporate the Fluent Design System in your application, regardless of the app model. Now, all Windows applications can adopt Fluent regardless of the UI stack. This includes popular controls like WebView (EdgeHTML), MediaPlayerElement, SwapChainPanel, modern InkCanvas, etc.

Additionally, you will be able to use a new project from Cognitive Service Labs called Project Ink Analysis. This Artificial Intelligence system is what we use to make sense of messy handwriting and shape recognition. It will allow you to build inking applications on both Windows as well as other platforms, leveraging the incredible AI ink services from the cloud.

Microsoft 365: Engage your employees where they work

With the power of the Microsoft Graph, you can extend your app’s reach beyond the “four corners” of a single device, enhancing users’ experiences across mobile and desktop. It connects app and cloud experiences and provides an opportunity to enrich every application with data, tools, and insights through a single consistent REST API, along with SDKs across several platforms.

This year new API sets, webhooks, and capabilities are expanding across Microsoft Graph. Applications can add Activities to the Windows Timeline (now generally available) and gain cross-device consistency and immediate user context.  Applications can also harness the Microsoft Graph in their own applications, including new open source Microsoft Graph UWP controls and SDKs for Java.  New Open API 3.0 endpoints for Microsoft Graph boost interoperability with different systems.

You can also deliver your app’s content in front of your customers who use Office daily and provide a way for them to interact directly with your solution, with new support for Adaptive Cards.

Adaptive Cards in Outlook let you address issues directly within your inbox.

Figure 2: Adaptive Cards in Outlook let you address issues directly within your inbox.

Adaptive Cards, including new payment cards, support a rich and visual language for embeddable experiences. We’re bringing this format to Microsoft Teams and Outlook, letting you convert complex workflow updates into a two-click streamlined experience right within your inbox, and using the same consistent JSON markup across apps.
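For illustration, a minimal Adaptive Card payload in that JSON markup might look like the following sketch; the expense-approval content here is a made-up example, not taken from the post:

```json
{
  "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
  "type": "AdaptiveCard",
  "version": "1.0",
  "body": [
    { "type": "TextBlock", "text": "Expense report #1234", "weight": "bolder" },
    { "type": "TextBlock", "text": "Submitted by: Jane Doe", "isSubtle": true }
  ],
  "actions": [
    { "type": "Action.Submit", "title": "Approve", "data": { "action": "approve" } },
    { "type": "Action.Submit", "title": "Reject", "data": { "action": "reject" } }
  ]
}
```

A host application such as Outlook or Teams renders the same payload natively, which is what enables the two-click approval flow described above without leaving your inbox.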

We’re also announcing that Excel has new support and tools for developers, including new JavaScript APIs. You can extend Excel calculation with custom function support, as well as integrated support for calling Machine Learning models. New support for Power BI Custom Visuals in Excel lets you add engaging visualizations to your app.

Pragmatic deployment solutions

I am excited to announce that .NET Core 3 will help you update your .NET version independently of the system – simplifying enterprise catalog management. You will be able to run multiple instances of .NET Core 3 side-by-side on the same computer, which means you can update Windows Forms, WPF, and UWP applications to a new version of .NET without updating the entire system. This will be released in 2019.

Also, our MSIX application container is a complete containerization solution that inherits all the great features from UWP. And, MSIX now supports Windows 7.  Once you update from Windows 7 to Windows 10, your application gets all of the rich containerization features for free.

Make Windows your primary dev box for all your workload needs across the Intelligent Cloud and Intelligent Edge

We’re committed to making Windows the best dev box for projects spanning the Intelligent Cloud and Intelligent Edge. Your feedback has guided us in this mission and we’re excited to announce the following improvements:

  • The latest update to Notepad includes support for Linux line endings, so it now responds to Linux files and line breaks appropriately.
  • Hyper-V w/XRDP for Linux now has enhanced session support for Linux VMs through a collaboration with the XRDP open source project. It’s faster, with no more mouse delays, and offers tighter integration for easy sharing of drives and the clipboard.
  • To enable you to use the latest Android emulator side-by-side with Hyper-V VMs, Docker tooling, the HoloLens emulator and more, the Android emulator is now compatible with Hyper-V. A preview for you to explore will be available tomorrow.
  • Boxstarter and Chocolatey together provide an effective solution to dev machine setup that is repeatable, reliable, and fast. Microsoft will be contributing to the open source projects alongside the rest of the community, and we’ve started a sample script project on GitHub where we can all collaborate on setup scripts for various dev scenarios.

The post Modernizing applications for our multi-sense, multi device world appeared first on Windows Developer Blog.


A new Microsoft Store revenue share is coming


Microsoft Store continues to evolve to be the best destination for Windows 10 users to discover and download Microsoft-verified applications that deliver predictable performance. Microsoft Store is also the best destination on Windows 10 for developers to reach new audiences and gain new customers.  We’ll focus on the infrastructure, so you can focus on building the best app and growing your business as a developer. To that end, we are excited about the announcement Joe Belfiore will be making at Build 2018 regarding a new Microsoft Store fee structure coming later this year.

A better revenue share for developers

Starting later this year, consumer applications (not including games) sold in Microsoft Store will deliver to developers 95% of the revenue earned from the purchase of your application or any in-app products in your application, when a customer uses a deep link to get to and purchase your application. When Microsoft delivers you a customer through any other method, such as a collection on Microsoft Store or any other Microsoft-owned property, and that customer purchases your application, you will receive 85% of the revenue earned from the purchase of your application or any in-app products in your application.

The new fee structure is applicable to purchases made on Windows 10 PCs, Windows Mixed Reality, Windows Phone and Surface Hub devices and excludes purchases on Xbox consoles.

A new way for developers to monetize

These changes to our current Microsoft Store fee represent a new way for you to monetize on the Windows platform. With the new fee structure, Microsoft is only assessing an additional fee when we contribute to you acquiring a new user. These changes enable us to create a world where developers are rewarded for connecting customers with experiences they love in a secure, reliable way.

The fee structure will be defined in detail in an upcoming revision to the App Developer Agreement later this year. Visit this page for current details and to sign up for a notification when the new fee structure goes into effect. Also, please refer to the FAQ below.

What applications will the new fee structure apply to?

Any consumer non-gaming app published to the Microsoft Store for PC, Windows Mixed Reality, Windows Phone or Surface Hub.

When does the new fee structure go into effect?

Later this year (2018). We’ll prompt you to accept a new version of the App Developer Agreement that outlines the Microsoft Store fee structure in detail. The new fee structure will apply to purchases made after the date listed in the App Developer Agreement.

Will the new fee structure apply to games or game subscriptions?

No. The new fee structure only applies to consumer apps on PC, Windows Mixed Reality, Windows Phone or Surface Hub. Apps categorized as Games in the Store will not be eligible for the new fee structure, even if they are available on those device types.

How does the Microsoft Store fee apply to subscriptions and other add-ons (in-app purchases)?

The new fee structure will apply to non-game, consumer app subscriptions and add-ons (in-app purchases). The fee applied to these purchases will be determined by how the user originally acquired the application. The new default 5% Store fee will apply for all transactions using Microsoft’s commerce platform and, if your customer uses a deep link to acquire your application, that’s all you’ll owe. The extra 10% customer acquisition cost will apply when Microsoft delivers you the customer through any other method, such as via a Store collection or a Microsoft Store spotlight.

All future subscription purchases and add-on (in-app) purchases for a user will be assessed the same fee percentage that was assessed when the user first acquired the application.

Will the new fee structure apply to purchases made via Microsoft Store for Business? Microsoft Store for Education?

No. The new fee only applies to individual purchases of consumer apps on PC, Windows Mixed Reality, Windows Phone or Surface Hub. If you allow your app to be offered via organizational licensing in Microsoft Store for Business and/or Microsoft Store for Education, the current Store fee will continue to apply to those purchases.

What about applications that are not games, but are available to customers on Xbox?

Any purchases made by customers on Xbox consoles, whether the product is an app or a game, will use the current fee structure.

What about applications that are available on both Microsoft Store for Windows 10 PC and Microsoft Store for Xbox One?

The new fee structure will apply to non-game consumer app acquisitions by individuals on Microsoft Store for Windows 10 PC (and the other device families mentioned above). The current fee structure will apply to acquisitions on Microsoft Store for Xbox One devices.

What will the fee structure be for applications that are available to earlier OS versions (Windows 8.x and/or Windows Phone 8.x)?

The new fee structure will apply to apps available on Microsoft Store on earlier OS versions (Windows 8.x and/or Windows Phone 8.x).

The post A new Microsoft Store revenue share is coming appeared first on Windows Developer Blog.

Team Foundation Server 2018 Update 2 is now available

Today we announce the release of Team Foundation Server 2018 Update 2. There are a lot of new features in this release, which you can see in our release notes. One big change in Update 2 is that we have re-enabled legacy XAML builds to unblock those customers that still require them in their environment.

Introducing Visual Studio IntelliCode

Visual Studio IntelliCode brings you the next generation of developer productivity by providing AI-assisted development. Every keystroke and every review is informed by best practices and tailored to your code context. You can try it out today by downloading the experimental extension for Visual Studio 2017 that provides AI-powered IntelliSense.

What is IntelliCode?

Visual Studio IntelliCode

IntelliCode is a set of AI-assisted capabilities that improve developer productivity with features like contextual IntelliSense, inference and enforcement for code styles, and focused reviews for your pull requests (PRs). Check out the video to see what we demoed at BUILD 2018 – it shows you just some of the capabilities that IntelliCode will offer.

AI-assisted IntelliSense, and the other features shown at BUILD 2018, are just the start. Over time you’ll see more ways that we’ll assist your end-to-end developer workflow.

What can IntelliCode do now?

As you type, AI-assisted IntelliSense recommends the most likely API. This makes it easier to learn a new API and dramatically reduces the number of keystrokes required to complete a line. With more context from the code you write, IntelliSense becomes more accurate.

IntelliCode’s improvements are not just about statement completion. IntelliCode also provides guidance as to the most appropriate overload for that API given the current code context. No more extraneous scrolling!

AI-assisted IntelliSense: better recommendations with every keystroke

How does it work?

IntelliCode generates recommendations by using a machine-learning model that is trained on thousands of public codebases – today it uses over 2000 GitHub repos that each have more than 100 stars to ensure that you’re benefiting from best practices. The model is used in your IDE along with your local code context to provide .NET related APIs that are likely to be the most relevant for you given the line of code you’re writing. We’ll be growing and improving the model over time so the recommendations will get better as we progress.

While it’s still very early, you can download and experiment with this capability in the IntelliCode extension right away. We welcome your feedback.

What’s next?

Beyond what is currently in our experimental extension, here are a few of the things IntelliCode is experimenting with. Right now, the extension supports only C#, but we want to expand to other languages later.

Automatic definition of styles and formatting: no more style inconsistencies

Consistency is important for maintainability; in fact, recent research shows that 18% of PR comments are related to coding conventions, styles, and naming issues.

IntelliCode can automatically generate an .editorconfig file that best matches your current styles and formatting. Once generated, this file will help you maintain consistency in your code. Fixing up formatting issues is a snap with existing lightbulbs or with a new, code-cleanup command.

Assisting with every review

As developers, you know that code reviews can be time consuming. It’s challenging to focus on the right things when other issues get in the way. IntelliCode makes reviews less painful for everyone by providing focus for the reviewer, and an automated, first-level review.

Find misused variables

With automatic generation of comments in files for potential issues, you’ll be able to identify and fix issues faster. For example, IntelliCode can detect variable misuse, often introduced through copy/paste where a variable is of the correct type but used in the wrong context. These analyses go beyond style concerns or what a conventional static analysis tool can find – it can find actual bugs in your code. It’s discovered bugs in our code too!

Get recommendations for files to review

IntelliCode focuses your reviews by indicating which files may need extra attention. These recommendations are based on machine-learning heuristics for the history of the files, their dependencies, the code complexity and history. These capabilities can be applied alongside CI analysis services and other code review processes. The results can be surfaced in the IDE and in web-based tools. For example, review comments generated by the IntelliCode analyzers can appear in your online Visual Studio Team Services pull requests (PRs).

Some of IntelliCode’s analyzers use machine-learning on public codebases, and are then specialized to your own repository. When these analyses become available, they will require a sign-up and registration process.

Why IntelliCode?

Millions of repos of code are now available in the public domain. This code represents a tremendous amount of knowledge that can be accessed at your fingertips, tailored to your context.

Microsoft is investing extensively in machine-learning and AI technologies. We’re working with Microsoft Research to leverage the latest techniques to learn from source code and deliver new, innovative ways to enhance the coding life of developers, so that you can deliver your software with greater confidence and velocity.

Get Involved

We are excited to give you an early glimpse into IntelliCode, which is currently optimized for C#. Although we are sharing some of it today in an IDE extension (download) that you can try right away, today’s demo is just a hint of what’s coming soon.

As we expand the capabilities to more scenarios and other languages, we’ll announce a limited preview of IntelliCode. If you want to learn more, keep up with the project, and be invited to the private preview please sign up!

Happy Coding!

Amanda

Amanda Silver, Director of Program Management, Visual Studio and Visual Studio Code
@amandaksilver, #VSIntelliCode

Amanda Silver is a Director of PM for Microsoft’s Developer Division. She was one of the primary language designers on the LINQ project (Language INtegrated Query), which incorporates query expressions and XML as native types in .NET. She has been involved with Chakra, the JavaScript engine that powers Edge, since 2009; it was open sourced in 2015. In 2012, her team launched TypeScript – a cross-platform, typed superset of JavaScript that compiles to plain JavaScript. Her team delivers the Visual Studio platform and Visual Studio Code. They recently released Visual Studio Live Share and Visual Studio IntelliCode. Unleashing the creativity of developers is her unrelenting passion.

Visual Studio for Mac version 7.5 and beyond

Last year at Build, we launched Visual Studio for Mac, our native macOS IDE for developers building cloud, web, and mobile applications using .NET. Updates have been rolling out at a steady pace ever since, and we’re excited to announce the release of Visual Studio for Mac version 7.5. We have also continued to bring more Visual Studio 2017 code to the Mac.

Our mission has always been to delight developers, and we have something for everyone in this release. You can get started by downloading the new release or updating your existing install to the latest build in the Stable channel.

Here are some of the features we’re most excited to share with you:

  • ASP.NET Core developers now have full Razor editor support. We’ve also introduced JavaScript and TypeScript support.
  • For iOS developers, we added WiFi debugging support for iOS and tvOS applications. We also improved the iOS provisioning system.
  • Android developers will enjoy the new Android SDK manager built right into the IDE, as well as a device manager to keep track of all your devices and emulators.
  • Xamarin.Forms developers will enjoy an improved XAML editing experience.
  • Cloud developers have support for Azure Functions development using .NET Core.
  • We support .NET Core 2.1 RC and C# 7.2.
  • Code-styling rules can be configured per-project using .editorconfig files.
  • A preview of Team Foundation Version Control support for Team Foundation Server and Visual Studio Team Services is now available.

We’re also shipping improvements to performance and stability, accessibility, and multi-language support, along with fixes for a number of bugs reported by our vibrant developer community. You can find the full list of changes in our release notes.

ASP.NET Core development with Razor, JavaScript, and TypeScript Editor Support

We partnered with the Roslyn and Visual Studio JavaScript tooling teams to reuse Razor, JavaScript, and TypeScript editor source code, bringing the editing experiences you know and love from Visual Studio 2017 to the Mac.

Official Razor support includes IntelliSense and syntax highlighting in .cshtml files.

Our JavaScript editor has been rewritten to provide the core editor experience you expect, including IntelliSense, enhanced colorization, and brace completion. We’ve also added TypeScript editing support, which shares the same IntelliSense and colorization as our JavaScript experience.

TypeScript Editor in action

Use .editorconfig files to Set Code Style Rules in Projects

One of my favorite features is finally here: .editorconfig

Visual Studio for Mac will now format your code following the conventions specified in the .editorconfig file. This will allow you to set your coding style, preferences, and warnings for your project; making it simpler for code that you contribute to other projects to follow the practices of those projects.
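For illustration, a small hand-written .editorconfig might look like the following. The property names are standard EditorConfig and Roslyn-supported options; the specific values here are just an example, not a recommendation from the release:

```ini
# Top-most EditorConfig file for this repository
root = true

# All C# files
[*.cs]
indent_style = space
indent_size = 4
charset = utf-8
trim_trailing_whitespace = true
insert_final_newline = true

# Roslyn-specific C# style preferences
csharp_new_line_before_open_brace = all
csharp_style_var_when_type_is_apparent = true:suggestion
```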

Xamarin.Forms Development

We now ship Xamarin.Forms templates that take advantage of .NET Standard Libraries.

Working with XAML just got better, too, with IntelliSense improvements providing better support for self-closing elements and more completions.

Android Development with Xamarin

On the Android side of the house, we added an integrated Android Device Manager dialog, eliminating the need to rely upon 3rd-party tools for device and emulator management. You can find this under Tools > Device Manager.

Integrated Android Device Manager dialog

iOS Development with Xamarin

iOS fans will enjoy a streamlined Entitlements editor experience, making it a breeze to add capabilities and services to your iOS apps.

Simply open the Entitlements.plist file and jump right in! Not only that, our new Automatic Signing experience makes deploying your application to devices very simple. In the Signing section of the Info.plist editor, you’ll find using Automatic Signing makes the burdens of manually tracking your entitlements and provisioning devices things of the past.

Building Serverless solutions with Azure Functions

Our new Azure Functions templates now support the Azure Functions .NET Core SDK, empowering you to build, debug, and test Azure Functions locally. In addition, item templates provide guidance for building functions using the most common triggers, enabling you to get up and running with new functions in minutes.

After creating a new Azure Functions project, right-click and select Add > Add Function, then choose your favorite function from the template dialog. Check out our documentation for a walkthrough to create your first Function in Azure.

New Azure Function dialog

.NET Core 2.1 RC and C# 7.2

Visual Studio for Mac version 7.5 now supports .NET Core 2.1 RC. Major improvements include faster build performance, better compatibility with .NET Framework, and closing gaps in both ASP.NET Core and EF Core. You can read more about the .NET Core 2.1 RC release in the announcement blog post. Support for the newest C# release, version 7.2, is also available today.

Working with your source with Team Foundation Version Control

One of our most popular feature requests has been to add support for Team Foundation Version Control (TFVC) to access source saved in Team Foundation Server or Visual Studio Team Services. We heard you loud and clear! Today, we’re previewing a new extension to do just that.

To install the extension, navigate to Visual Studio > Extensions… in the Visual Studio for Mac menu and search the gallery for “TFVC”. We support Get, Commit (with associated work items), version history, and more.

Feedback

We hope you’ll find Visual Studio for Mac version 7.5 as delightful as we do. Let us know what you think! Your feedback helps us improve our products and better understand your needs as a developer.

Please let us know about issues via Help > Report a Problem. You’ll be able to track your issues and receive updates in the Visual Studio Developer Community.

You can also provide product suggestions via the Help > Provide a Suggestion menu and vote on suggestions at the Visual Studio for Mac UserVoice site.

Miguel de Icaza

Miguel de Icaza, Distinguished Engineer, Mobile Developer Tools
@migueldeicaza

Miguel is a Distinguished Engineer at Microsoft, focused on the mobile platform and creating delightful developer tools. With Nat Friedman, he co-founded both Xamarin in 2011 and Ximian in 1999. Before that, Miguel co-founded the GNOME project in 1997 and has directed the Mono project since its creation in 2001, including multiple Mono releases at Novell. Miguel has received the Free Software Foundation 1999 Free Software Award, the MIT Technology Review Innovator of the Year Award in 1999, and was named one of Time Magazine’s 100 innovators for the new century in September 2000.

Announcing .NET Core 2.1 RC 1

Today, we’re announcing .NET Core 2.1 Release Candidate 1 (RC 1). .NET Core 2.1 RC 1 is now ready for broad testing and for production use. Our quality, reliability, and performance testing give us confidence that the release is ready for the first set of production users. On the metrics that we can measure, .NET Core 2.1 is a large step forward from .NET Core 2.0.

ASP.NET Core 2.1 RC 1 and Entity Framework Core 2.1 RC 1 are also releasing later today. Links will be added later in the day.

You can download and get started with .NET Core 2.1 RC 1, on Windows, macOS, and Linux:

You can see complete details of the release in the .NET Core 2.1 RC 1 release notes. Related instructions, known issues, and workarounds are included in the release notes. Please report any issues you find in the comments or at dotnet/core #1506.

You can develop .NET Core 2.1 apps with Visual Studio 2017 15.7, Visual Studio for Mac 7.5, or Visual Studio Code.

At this point in the release, the feature set and performance characteristics are not changing much. Look at Announcing .NET Core 2.1 Preview 2 and Performance Improvements in .NET Core 2.1 to learn about improvements in the release that are not covered in this post.

“Go Live” Support

.NET Core 2.1 RC is supported by Microsoft and can be used in production. As always, we recommend that you test your app before deploying to production. If anything seems strange, don’t deploy! If all of your tests pass with .NET Core 2.1 RC and you are enthusiastic to start using the 2.1 benefits, then go ahead and deploy. Tell us about your experiences.

Another option is to adopt the .NET Core 2.1 RC SDK while continuing to target earlier .NET Core releases, like 2.0. We believe that many people will do this.

Alpine Support

Alpine Linux is a very small and security-focused Linux distro. It’s also quickly becoming the favored base image for Docker. We added support for it as a preview in 2.0 after many requests for it. Since then, the requests for it have only increased.

We are adding official support for Alpine starting with this RC release. We intend to switch to Alpine 3.8 as the version we test on and support as soon as Alpine 3.8 releases (guess: June 2018).

If you want to use .NET Core and Alpine with Docker, use the following tags:

  • 2.1-sdk-alpine
  • 2.1-runtime-alpine

If you are using the 2.0 Alpine images at microsoft/dotnet, switch to 2.1. We will not update the 2.0 Alpine images much longer, given that they remain in preview.

The runtime ID for Alpine was previously alpine-3.6. There is now a more generic runtime ID for Alpine and similar distros, called linux-musl, to support any Linux distro that uses musl libc. All of the other runtime IDs assume glibc.

ARM Support

We’ve had many requests for ARM, particularly for running on the Raspberry Pi. .NET Core is now supported on Linux ARM32 distros, like Raspbian and Ubuntu. The same download links at the start of the post provide ARM downloads, too.

Note: .NET Core 2.1 is supported on Raspberry Pi 2+. It isn’t supported on the Pi Zero or other devices that use an ARMv6 chip. .NET Core requires ARMv7 or ARMv8 chips, like the ARM Cortex-A53.

If you want to use .NET Core on ARM32 with Docker, you can use any of the following tags:

  • 2.1-sdk
  • 2.1-runtime
  • 2.1-aspnetcore-runtime
  • 2.1-sdk-stretch-slim-arm32v7
  • 2.1-runtime-stretch-slim-arm32v7
  • 2.1-aspnetcore-runtime-stretch-slim-arm32v7
  • 2.1-sdk-bionic-arm32v7
  • 2.1-runtime-bionic-arm32v7
  • 2.1-aspnetcore-runtime-bionic-arm32v7

Note: The first three tags are multi-arch.

If you are using the 2.0 ARM32 images at microsoft/dotnet, switch to 2.1. We will not update the 2.0 ARM32 images much longer, given that they remain in preview.

Our friends on the Azure IoT Edge team use the .NET Core Bionic ARM32 Docker images to support developers writing C# with Edge devices.

If you are new to Raspberry Pi, I suggest the awesome Pi resources at Adafruit. You can buy a Pi there, too.

We are hearing requests for ARM64. We are working on it. For now, our best recommendation is to use our ARM32 build on ARM64. Many ARM64 chips (AKA “ARMv8”) support ARM32 instructions (AKA “ARMv7”). Raspberry Pi 3+ devices have such a chip. I installed an experimental version of ARM64 Debian on a Pi 3 for this purpose. It works. We’ll update you when our ARM64 support improves.

Major thanks to Samsung and Qualcomm for investing heavily in .NET Core ARM32 and ARM64 implementations. Please thank them, too! These contributions speak to the value of open source.

Docker Images

.NET Core and ASP.NET Core images have been updated for .NET Core 2.1 RC at microsoft/dotnet. The samples repo has also been updated for 2.1 RC.

You can quickly try .NET Core 2.1 RC with one of our pre-built sample images:

Console app:

docker pull microsoft/dotnet-samples:dotnetapp
docker run --rm microsoft/dotnet-samples:dotnetapp

ASP.NET Core app

docker pull microsoft/dotnet-samples:aspnetapp
docker run --rm -it -p 8000:80 --name aspnetcore_sample microsoft/dotnet-samples:aspnetapp

You can view the site at http://localhost:8000 on most machines. If that doesn’t work check out View ASP.NET Core app in a running container on Windows.

Brotli Compression

Brotli is a general-purpose lossless compression algorithm that compresses data comparably to the best currently available general-purpose compression methods. It is similar in speed to deflate but offers denser compression. The specification of the Brotli Compressed Data Format is defined in RFC 7932. The Brotli encoding is supported by most web browsers, major web servers, and some CDNs (Content Delivery Networks). The .NET Core Brotli implementation is based on the C code provided by Google at google/brotli. Thanks, Google!

Brotli support has been added to .NET Core 2.1. Operations may be completed using either the stream-based BrotliStream class or the high-performance span-based BrotliEncoder/BrotliDecoder classes.

The BrotliStream behavior is the same as that of DeflateStream or GZipStream to allow easily converting DeflateStream/GZipStream code to use BrotliStream.
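As a minimal sketch (not from the original post), a BrotliStream round trip looks much like the equivalent GZipStream code:

```csharp
using System;
using System.IO;
using System.IO.Compression;
using System.Text;

public static class BrotliExample
{
    public static byte[] Compress(byte[] data)
    {
        using (var output = new MemoryStream())
        {
            // BrotliStream writes compressed bytes to the underlying stream;
            // disposing it flushes the final compressed block.
            using (var brotli = new BrotliStream(output, CompressionMode.Compress))
            {
                brotli.Write(data, 0, data.Length);
            }
            return output.ToArray();
        }
    }

    public static byte[] Decompress(byte[] compressed)
    {
        using (var input = new MemoryStream(compressed))
        using (var brotli = new BrotliStream(input, CompressionMode.Decompress))
        using (var output = new MemoryStream())
        {
            brotli.CopyTo(output);
            return output.ToArray();
        }
    }

    public static void Main()
    {
        var original = Encoding.UTF8.GetBytes("Hello, Brotli! Hello, Brotli! Hello, Brotli!");
        var roundTripped = Decompress(Compress(original));
        Console.WriteLine(Encoding.UTF8.GetString(roundTripped)); // prints the original string
    }
}
```

Because the stream shape matches DeflateStream/GZipStream, swapping compression formats is usually just a matter of changing the stream type.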

New Cryptography APIs

The following enhancements have been made to .NET Core cryptography APIs:

  • New SignedCms APIs — System.Security.Cryptography.Pkcs.SignedCms is now available in the System.Security.Cryptography.Pkcs package. The .NET Core implementation is available to all .NET Core platforms and has parity with the class from .NET Framework. See: dotnet/corefx #14197.
  • New X509Certificate.GetCertHash overload for SHA-2 — New overloads for X509Certificate.GetCertHash and X509Certificate.GetCertHashString accept a hash algorithm identifier to enable callers to get certificate thumbprint values using algorithms other than SHA-1. dotnet/corefx #16493.
  • New Span<T>-based cryptography APIs — Span-based APIs are available for hashing, HMAC, (cryptographic) random number generation, asymmetric signature generation, asymmetric signature processing, and RSA encryption.
  • Rfc2898DeriveBytes performance improvements — The implementation of Rfc2898DeriveBytes (PBKDF2) is about 15% faster, based on using Span<T>-based APIs. Users who benchmarked an iteration count for an amount of server time may want to update their iteration count accordingly.
  • Added CryptographicOperations class — CryptographicOperations.FixedTimeEquals takes a fixed amount of time to return for any two inputs of the same length, making it suitable for use in cryptographic verification to avoid contributing to timing side-channel information. CryptographicOperations.ZeroMemory is a memory clearing routine that cannot be optimized away via a write-without-subsequent-read optimization.
  • Added static RandomNumberGenerator.Fill — The static RandomNumberGenerator.Fill will fill a Span with random values using the system-preferred CSPRNG, and does not require the caller to manage the lifetime of an IDisposable resource.
  • Added support for RFC 3161 cryptographic timestamps — New API to request, read, validate, and create TimestampToken values as defined by RFC 3161.
  • Add Unix EnvelopedCms — The EnvelopedCms class has been added for Linux and macOS.
  • Added ECDiffieHellman — Elliptic-Curve Diffie-Hellman (ECDH) is now available on .NET Core via the ECDiffieHellman class family with the same surface area as .NET Framework 4.7.
  • Added RSA-OAEP-SHA2 and RSA-PSS to Unix platforms — Starting with .NET Core 2.1, the instance provided by RSA.Create() can always encrypt or decrypt with OAEP using a SHA-2 digest, as well as generate or validate signatures using RSA-PSS.
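To illustrate two of these additions, here is a small sketch using RandomNumberGenerator.Fill and CryptographicOperations.FixedTimeEquals. The scenario (comparing secret tokens) is our own example, not from the post:

```csharp
using System;
using System.Security.Cryptography;

public static class CryptoExample
{
    public static byte[] RandomToken(int length)
    {
        // The static Fill method uses the system-preferred CSPRNG;
        // there is no IDisposable instance to manage.
        var token = new byte[length];
        RandomNumberGenerator.Fill(token);
        return token;
    }

    public static bool TokensMatch(byte[] expected, byte[] actual)
    {
        // Takes the same time regardless of where the inputs first differ,
        // avoiding a timing side channel during verification.
        return CryptographicOperations.FixedTimeEquals(expected, actual);
    }

    public static void Main()
    {
        var token = RandomToken(32);
        Console.WriteLine(TokensMatch(token, token)); // True
    }
}
```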

.NET Core Global Tools

The .NET Core global tools feature has stabilized. The syntax that was updated in Preview 2 hasn’t changed. You can try a pre-built global tool with the following example:

dotnet tool install -g dotnetsay
dotnetsay

For global tool users, we recommend that you uninstall global tools that you installed with previous releases or just delete the .dotnet/tools and .dotnet/toolspkgs directories in your user profile. The layout of the directory has changed, requiring tools to be reinstalled.

For folks that built and published tools for .NET Core Preview 1 or Preview 2, you need to rebuild and republish them with .NET Core 2.1 RC. You will need to do the same thing when we publish RTM. .NET Core applications, including tools, do not roll-forward across preview versions.

Note that .NET Core Global tools must target netcoreapp2.1. See the .NET Core 2.1 RC 1 release notes for more information on Global Tools updates.
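For tool authors, packaging a console app as a global tool comes down to a couple of project properties. A minimal project file might look roughly like this (PackAsTool and ToolCommandName are the real MSBuild properties; the tool name and version are illustrative):

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <!-- Global tools must target netcoreapp2.1 -->
    <TargetFramework>netcoreapp2.1</TargetFramework>
    <!-- Mark the NuGet package as a .NET Core global tool -->
    <PackAsTool>true</PackAsTool>
    <!-- Command users will type after `dotnet tool install` -->
    <ToolCommandName>mytool</ToolCommandName>
    <Version>1.0.0</Version>
  </PropertyGroup>
</Project>
```

Running `dotnet pack` on such a project produces a package that can be installed with `dotnet tool install -g`.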

SourceLink

SourceLink is a system that enables a source debugging experience for binaries that you either distribute or consume. It requires producers of SourceLink information and debuggers that support it. The Visual Studio debugger already supports SourceLink, starting with Visual Studio 2017 15.3. We have added support for generating SourceLink information in symbols, binaries, and NuGet packages in the .NET Core 2.1 RC SDK.

You can start producing SourceLink information by following the example at dotnet/sourcelink.

Our goal for the project is to enable anyone building NuGet libraries to provide source debugging for their users with almost no effort. There are a few steps left to enable the full experience, but you can get started now.

The following screenshot demonstrates debugging a NuGet package referenced by an application, with source automatically downloaded from GitHub and used by Visual Studio 2017.

sourcelink-example

Tiered Compilation

We’ve added a preview of a new and exciting capability to the runtime called tiered compilation. It’s a way for the runtime to more adaptively use the Just-In-Time (JIT) compiler to get better performance.

The basic challenge for JIT compilers is that compilation time is part of the application’s execution time. Producing better code usually means spending more time optimizing it. But if a given piece of code only executes once or just a few times, the compiler might spend more time optimizing it than the application would spend just running an unoptimized version.

With tiered compilation, the compiler first generates code as quickly as possible, with only minimal optimizations (first tier). Then, when it detects that certain methods are executed a lot, it produces a more optimized version of those methods (second tier) that are then used instead. The second tier compilation is performed in parallel, which removes the tension between fast compile speeds and producing optimal code. This model can be more generically called Adaptive optimization.

Tiered compilation is also beneficial for long-running applications, such as web servers. We’ll go into more detail in follow-on posts, but the short version is that the JIT can produce much better code than is in the pre-compiled assemblies we ship for .NET Core itself. This is mostly due to the fragile binary interface problem. With tiered compilation, the JIT can use the pre-compiled code it finds for .NET Core and then JIT-compile better code for methods that get called a lot. We’ve seen this scenario having a large impact for the tests in our performance lab.

You can test tiered compilation with your .NET Core 2.1 RC application by setting an environment variable:

COMPlus_TieredCompilation="1"

In the final version of .NET Core 2.1, you will be able to opt in with the System.Runtime.TieredCompilation app-context switch. We aim to make tiered compilation the default in the future after we’ve had a chance to gather feedback and make any necessary improvements. We will publish more details about this new feature shortly.

Closing

.NET Core 2.1 RC 1 is our third release on the way to the final version of 2.1. The quality is high enough now that we’re happy for you to start using .NET Core 2.1 in production. We’ve heard a lot of positive feedback so far from folks who have tried out the RC. We hope you have the same experience. Please share your experience with us.

We’re now very close to being done with the 2.1 release. We expect to ship the final version of 2.1 in the first half of 2018. Thanks to everyone that has helped along the way.
