
Announcing the public preview of Azure Media Clipper


We are excited to announce the public preview of new capabilities in Azure Media Services (AMS) for composing media clips and stitching together rendered videos. Azure Media Clipper is a free JavaScript library that enables web developers to provide their users with an interface for creating clips.

For example, James is a web developer running a web-based video content management service. Users of his service are content owners that live encode sports broadcasts. By integrating Azure Media Clipper into his web service, James enables his content owners to create sports highlights from their media catalog. Content owners can select one or more live or VOD sports broadcasts, specify which segments they want to clip, and submit the clipping job. After the job is processed, the content owners get back a clip for publishing or distribution through social media or other platforms.

Azure Media Clipper enables you to:

  • Trim the pre-slate and post-slate from live archives
  • Compose video highlights from AMS live events, live archives, or fMP4 VOD files
  • Concatenate videos from multiple sources
  • Produce summary clips from your AMS media assets
  • Clip videos with frame accuracy
  • Generate dynamic manifest filters over existing live and VOD assets with group-of-pictures (GOP) accuracy
  • Produce encoding jobs against the assets in your media services account

Demo site

You can try Azure Media Clipper at our demo deployment. Refer to our public documentation for details on the widget’s API surface and code samples.

Widget user interface

The widget is composed of three components:


  • Azure Media Player (AMP) for media playback and preview
  • Clip interface for setting mark times, moving playhead position, and navigating through the timeline with frame or GOP precision
  • Asset selection panel for selecting and searching through assets, and viewing asset metadata

This tool interface is available in light and dark themes:

Light and Dark Theme

Loading assets

Azure Media Clipper takes as input a JSON representation of the assets hosted in your media services account. This JSON contract, which you must generate, contains all of the metadata needed to play back the asset and parse its manifest. No rendered media is passed directly to the Clipper. Supported assets include AMS single-bitrate MP4s, multi-bitrate MP4s, and dynamic manifest filters. At this time, the Clipper does not support media hosted outside of Azure Media Services.

Assets can be loaded into the Azure Media Clipper by two methods:

  • Statically passing in a library of assets
  • Dynamically generating a list of assets via API

In the first case, you pass in a static set of assets. Each asset includes its AMS asset ID, name, published streaming URL, AES or DRM authentication token (if applicable), and, optionally, an array of thumbnail URLs. If thumbnails are passed in, they are displayed in the interface. In scenarios where the asset library is static and small, you can simply pass in the asset contract for each asset in the library.

Alternatively, you can load assets dynamically via a JavaScript callback. In scenarios where assets are dynamically generated or the library is large, you should load via the callback.
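As an illustrative sketch only, the two loading methods might look like the following. The field names in the asset contract and the `/api/assets` endpoint are hypothetical, not the official Clipper API; consult the Clipper documentation for the exact contract.

```javascript
// Hypothetical asset contract shape -- field names are illustrative.
function buildAssetContract(asset) {
  return {
    id: asset.id,                       // AMS asset ID
    name: asset.name,                   // display name
    url: asset.streamingUrl,            // published streaming URL
    authToken: asset.token || null,     // AES/DRM token, if the asset is protected
    thumbnails: asset.thumbnails || []  // optional thumbnail URLs
  };
}

// Static loading: pass the whole library up front.
// Suits small, fixed asset libraries.
const staticLibrary = [
  buildAssetContract({
    id: 'nb:cid:UUID:1',
    name: 'Match highlights',
    streamingUrl: 'https://example.streaming.mediaservices.windows.net/manifest'
  })
];

// Dynamic loading: return a promise of assets from a callback,
// e.g. backed by a paged search API, for large or changing libraries.
function getAssets(searchTerm) {
  return fetch('/api/assets?q=' + encodeURIComponent(searchTerm))
    .then(res => res.json())
    .then(items => items.map(buildAssetContract));
}
```

The static form trades simplicity for scale; the callback form lets your backend page and filter the library on demand.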

Producing clips

To create a clip, drag and drop an asset onto the clip interface. To clip across multiple assets, drag and drop multiple assets into the clip interface from the asset selection panel, then select and reorder them as desired. The asset selection panel provides duration, type, and resolution metadata for each asset. Use the buttons on the clip interface to set mark times. To preview the output of the clipping job, select the preview button and the clip plays back from the selected mark times.

Producing dynamic manifest filters

Dynamic manifest filters describe a set of rules based on manifest attributes and the asset timeline. These rules determine how your streaming endpoint manipulates the output playlist (manifest) and can be used to change which segments are streamed for playback. The filters produced by the Clipper are local filters and are specific to the source asset. Unlike rendered clips, filters are not new assets and do not require an encoding job to produce. They can be created very quickly via our .NET SDK or REST API; however, they are only GOP-accurate. Typically, assets encoded for streaming have a GOP size of 2 seconds.

To create a dynamic manifest filter, select dynamic manifest filter as the clipping mode from the advanced settings menu, then follow the same process used to produce a clip. Filters can only be produced against a single asset.

Submitting clipping jobs and dynamic manifest filter API calls

Azure Media Clipper produces two types of output: clipping jobs that render new assets, and dynamic manifest filters.

This output is returned as JSON. To create the clip or filter, you HTTP POST the composer’s output to a service that manages your media services account. To post the data, you must implement a JavaScript callback on the submit job button. From there, your service can submit and track the status of the clipping job or dynamic manifest filter API call. Clipping jobs require encoding and take time to process; dynamic manifest filters are produced by a single API call and are available almost immediately.
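The submit flow can be sketched as a callback that relays the composer's JSON to your own backend. This is a sketch only: the `/api/clips` endpoint and the `{ jobId }` response shape are assumptions about your service, not part of the Clipper API.

```javascript
// Hypothetical submit callback: POST the composer's JSON output to
// a backend service that manages your media services account.
function onSubmitJob(composerOutput) {
  return fetch('/api/clips', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(composerOutput)
  }).then(res => {
    if (!res.ok) {
      throw new Error('Clip submission failed: HTTP ' + res.status);
    }
    // Assumed response shape: an identifier your service returns
    // so the UI can poll for job or filter status.
    return res.json(); // e.g. { jobId: '...' }
  });
}
```

Your backend then forwards the request to AMS (an encoding job for clips, a single API call for filters) and exposes a status endpoint the UI can poll.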

Clipping in Azure Portal

In addition to the standalone widget, we’re releasing the Clipper in the Azure Portal so you can quickly try out the new features. Navigate to the new Subclip blade to get started.


Browser support

Azure Media Clipper is built using modern HTML5 technologies and supports the following browsers:

  • Microsoft Edge 13+
  • Internet Explorer 11+
  • Chrome 54+
  • Safari 10+
  • Firefox 50+

Note: Only HTML5 playback of streams from Azure Media Services is currently supported.

Language support

The Clipper widget is available in the following 18 languages:

  • Chinese (Simplified)
  • Chinese (Traditional)
  • Czech
  • Dutch, Flemish
  • English
  • French
  • German
  • Hungarian
  • Italian
  • Japanese
  • Korean
  • Polish
  • Portuguese (Brazil)
  • Portuguese (Portugal)
  • Russian
  • Spanish
  • Swedish
  • Turkish

You can extend this set with additional languages beyond those supported by default.

Providing feedback and feature requests

Azure Media Services will continue to grow and evolve, adding more features and enabling more scenarios. To serve you better, we are always open to feedback and new ideas, and we appreciate bug reports that help us continue to provide a great service built on the latest technologies. To request new features, provide ideas, or give feedback, please submit them to User Voice for Azure Media Services. If you have specific issues or questions, or find any bugs, please post to our forum or reach out to us directly at amcinfo@microsoft.com.


Time to migrate off Access Control Service


Access Control Service, otherwise known as ACS, is officially being retired. ACS will remain available for existing customers until November 7, 2018. After this date, ACS will be shut down, causing all requests to the service to fail.

Who is affected by this change?

This announcement affects any customer who has created one or more ACS namespaces in their Azure subscriptions. It may also affect customers of any service that uses ACS as a means of authentication. Examples of such services include Azure Service Bus, Azure Relay, Azure Media Services, and Azure Backup. In accordance with this announcement, Azure Service Bus and Azure Relay will be officially deprecating support for ACS on November 7, 2018.

If your apps and services do not use Access Control Service, then you have no action to take.

To determine whether you have any ACS namespaces, sign in to the Azure classic portal (read on for instructions). Your ACS namespaces are listed under the Active Directory service in the Access Control Namespaces tab. Be sure to check all of your Azure subscriptions for existing ACS namespaces.

To determine if your apps and services use ACS, you should monitor for any traffic to https://{your-namespace}.accesscontrol.windows.net. All traffic to ACS is sent to the accesscontrol.windows.net domain. 
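As an illustrative helper, a simple check like the following could be run over configuration values, connection strings, or log lines to flag ACS dependencies. It deliberately excludes the accounts.accesscontrol.windows.net domain, which is served by a different service and is not affected by the retirement.

```javascript
// Returns true if a string references an ACS namespace endpoint,
// i.e. https://{namespace}.accesscontrol.windows.net, excluding
// the unaffected "accounts" domain.
function usesAcs(value) {
  const acsPattern = /https:\/\/([a-z0-9-]+)\.accesscontrol\.windows\.net/i;
  const match = acsPattern.exec(value);
  return match !== null && match[1].toLowerCase() !== 'accounts';
}
```

Running it across stored configuration is a cheap first pass; monitoring live traffic to the domain, as described above, remains the authoritative check.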

Note: Any traffic to the https://accounts.accesscontrol.windows.net domain is already handled by a different service. Traffic to this specific domain will not be affected by ACS retirement.

What is the retirement schedule?

Earlier this year, we restricted creation of new ACS namespaces. As of today, all other ACS functionality is fully operational.

Through November 2017, management of ACS namespaces will continue to be available in the Azure classic portal. After November 30, 2017, the classic portal URL will begin redirecting to the new Azure Portal, where ACS namespace management will not be available. Instead, you will be able to access ACS namespace management via a dedicated URL: https://manage.windowsazure.com/?restoreClassic=true. Please use this URL to manage ACS namespaces after November 30.

In April 2018, the Azure classic portal will be retired completely, and the dedicated URL will be deactivated. At this point in time, you will no longer be able to list, create, delete, disable, or enable ACS namespaces via any portal. However, you will still be able to manage your namespace configuration via the ACS management portal, located at https://{your-namespace}.accesscontrol.windows.net. This includes managing service identities, relying parties, identity providers, claims rules, and more. We strongly recommend taking inventory of all ACS namespaces prior to April 2018, making note of the URL for each instance of the ACS management portal.

Until November 7, 2018, all other ACS functionality will continue to work. This includes the ACS secure token service, the ACS management service, and the ACS token transformation engine. After this date, all ACS services and functionality will be shut down. By this time, you should ensure that all traffic has been migrated off Access Control Service to other technology.

What action is required?

If you determine that you use ACS in some capacity, you should begin planning and executing on a migration strategy immediately. In the vast majority of cases, migration will require significant code changes on your part.

What is the migration path?

The correct migration path for you depends heavily on how your existing apps and services use ACS. To assist in determining the right technology to use, we have published this ACS migration guidance. Generally speaking, there are three options for you to consider:

  • If you use ACS as a means of authenticating to another Microsoft service, you’ll need to follow the migration instructions provided by that service. For instance, Azure Service Bus and Azure Relay customers can find guidance on migrating to Shared Access Signature (SAS) in the Service Bus SAS and Relay SAS migration articles, respectively. Our migration guidance contains instructions for other services.
  • If your apps and services only need support for Microsoft work and school accounts, you can migrate to Azure Active Directory. Azure AD supports many of the features of ACS, but not all. If you determine you require an Azure Active Directory premium license to perform your migration, reach out to us to receive a free developer license.
  • If your apps and services need to support Google, Facebook, Yahoo, Microsoft personal accounts, or other IDPs, consider migrating to Azure AD B2C. Azure AD B2C supports a wide range of identity providers and customizations, but does not support all of the ACS authentication protocols.

Contact Us

For more information about the retirement of ACS, please check our ACS migration guidance page first. If none of the migration options work for you, or if you still have questions or feedback about ACS retirement, please contact us at acsfeedback@microsoft.com.

Azure SQL Databases Disaster Recovery 101


Why should I care?

As a PaaS service, Azure SQL Database provides automated backup for all databases. It allows customers to recover their data from system or human errors and restore a database to any point in time within the retention period. But in some extreme cases, it cannot guarantee that your data will always be available. Imagine aliens invading one of our data centers and destroying everything, or, more realistically, extreme weather knocking out power across an entire region. Look at what hurricanes Irma and Maria did. It happens.

Fortunately, Azure DB is prepared for this. We provide not just one but two different solutions for recovering your data in such disasters: Geo Restore and Geo Replication.

In the next five minutes, you will learn the following:

  • What do these features do?
  • What is the difference between these two?
  • How do I choose between these two?

If you already have answers to these questions, or want to learn more details about Azure DB business continuity and disaster recovery, please refer to our online documentation.

What do these features do?

Geo Restore allows customers to recover a database from backup to a different region. The automated backups of all Azure databases are replicated to a secondary region in the background, and Geo Restore always restores the database from the copy of the backup files stored in that secondary region.

Geo Replication creates a continuous copy of your database in one or more secondary regions (up to four secondary replicas). In the event of a disaster, you can simply fail over to one of the secondary regions and bring your database back online. You can also configure a failover group to recover the databases automatically.

What is the difference between these two?

There are three major differences: Data Loss, Recovery Time, and Cost.

  • Data Loss – The backup files are replicated to a secondary region in an asynchronous process, which means we may not get a chance to copy the latest backup before the disaster happens. In Azure DB, the RPO (Recovery Point Objective; note this is not an SLA) of geo restore is 1 hour. By comparison, geo replication provides an RPO of 5 seconds.
  • Recovery Time – Geo Restore essentially restores your database from backup files. The recovery time can be affected by multiple factors: how large your database is, which service tier the database is restoring to, where the backup files are, how many databases you are trying to recover, how many other people are trying to recover their databases… The ERT (estimated recovery time) for geo restore is 12 hours, but in some cases, especially for very large databases, it can run longer than that. If you have geo replication configured, failover usually takes less than 30 seconds.
  • Cost – If geo replication is so good, why not always use it? $$$$! When you configure geo replication, you create one or more full copies of your database. You are paying not for one, but for two or more databases, depending on how many replicas you configure. Geo Restore, by contrast, comes at no extra cost.

How do I choose between these two?

It’s like choosing a car insurance plan. There are no golden rules, but I’ll give you some silver bullets:

  • Think about how much it would cost you to lose the last 60 minutes of data, or to have your database offline for 24 hours. Compare that with the extra cost of configuring geo replication. Don’t know the cost? You could find out by deleting your database, waiting 24 hours, and restoring it to one hour before the deletion time. (Just kidding! But if you would actually do that, the database may not be important enough to need geo replication.)
  • You can apply different DR solutions for different databases. Databases for an online payment system? Yes, please configure geo replication and failover groups! Databases where you store recipes you found from internet? Nah.
  • You can change your mind anytime.

Anything else I should know?

There’s something else you may want to know before you close this page:

  • Do DR drills and document all the steps. Always prepare and plan for the worst.
  • Create a failover server in a secondary region and pre-configure all security objects, including logins, users, and certificates. It will save you time during recovery.
  • If you are using encryption keys in Azure Key Vault to protect your data, back up your keys!
  • If you want to learn the whole story of Azure DB business continuity and disaster recovery, read the online documentation.

Public preview of new Azure Policy features


During the week of the Microsoft Ignite conference, we announced a private preview of a set of new features for Azure Policy in the Azure Compute, Azure Governance, and Azure Resource Manager sessions. All of them are now in public preview.

Brand new Azure Policy UI with Continuous Monitoring

We have built a brand new user interface for Azure Policy that enables you to manage policies easily across all your subscriptions in a single place. In addition, you are able to continuously monitor the compliance status of all your resources. This is very useful when you have a lot of resources that existed before you applied the policy. You can easily group your policies and look for non-compliant resources. The policy engine constantly evaluates your resources and updates the compliance status. It also provides historical data in the dashboard; API support for historical data will be added in the future. In addition, the new UI supports a much richer set of policy management features, such as the creation of custom policies. You can refer to this guide for more information.


Policy initiative and exclusion scope

A policy initiative can group a number of policy definitions. For example, the demo in Azure Resource Manager session groups the policies by resource types. Using initiatives greatly reduces the number of policy assignments you need to manage. These examples show you how to create and assign a policy initiative using PowerShell.

A policy exclusion allows you to assign a policy at a high scope and then exclude scopes within it. For example, in an environment with application resource groups and a central network resource group, you may want a policy to apply to all application resource groups but not to the network resource group. Previously, you had to assign separate policies to each application resource group. With an exclusion scope, you can assign the policy at the subscription scope, so that even new application resource groups are automatically governed when they are created.

With the new policy management UI, you can directly create policy initiatives and apply exclusion scopes from the portal.

Policy language enhancement

Additional resource type support

Many users may not know that policy was previously evaluated only for resource types that support tags. It didn’t work for many nested resource types, such as subnets, diagnostic settings, SQL audit settings, and so on. However, these resource types represent configurations that users need to enforce. With this enhancement, policy can be evaluated on all resource types when it is set to “all” mode. For any new policy you write, we recommend using “all” mode to leverage the new support. For example, you can now enforce tags on resource groups, or audit the usage of a specific network security group across all your subnets.
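As an illustrative sketch of an “all”-mode definition, the policy rule below denies resource groups that lack a tag. The tag name costCenter is a placeholder for this example; see the Azure Policy samples repository for authoritative templates.

```json
{
  "mode": "all",
  "policyRule": {
    "if": {
      "allOf": [
        {
          "field": "type",
          "equals": "Microsoft.Resources/subscriptions/resourceGroups"
        },
        {
          "field": "tags.costCenter",
          "exists": "false"
        }
      ]
    },
    "then": {
      "effect": "deny"
    }
  }
}
```

Because the mode is “all”, the rule is evaluated against resource groups themselves, which was not possible when policy only covered taggable resource types.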

Since these new resource types are typically nested, we also added two new policy effects so that policies can govern nested resources, and even related resources.

AuditIfNotExist

With AuditIfNotExist, a resource creation can trigger a deferred evaluation of other resources, including child resources. A typical use case is to audit all virtual machines that do not have an anti-malware extension. An example is available on our GitHub.
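A hedged sketch of that use case might look like the rule below: when a virtual machine is created, the policy audits whether a matching extension exists. The publisher value shown is illustrative; refer to the GitHub sample for the exact definition.

```json
{
  "if": {
    "field": "type",
    "equals": "Microsoft.Compute/virtualMachines"
  },
  "then": {
    "effect": "auditIfNotExists",
    "details": {
      "type": "Microsoft.Compute/virtualMachines/extensions",
      "existenceCondition": {
        "field": "Microsoft.Compute/virtualMachines/extensions/publisher",
        "equals": "Microsoft.Azure.Security"
      }
    }
  }
}
```

If no extension satisfying the existenceCondition is found under the VM, the VM is flagged as non-compliant in the compliance dashboard.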

DeployIfNotExist

With DeployIfNotExist, a policy provides a mechanism to automatically deploy a template if a specific configuration is not present. For example, in the scenario above, a policy that leverages the DeployIfNotExist effect can deploy the anti-malware extension when a VM is created without it. Currently, DeployIfNotExist is only available through built-in policies, and the deployment job runs on behalf of the user who created the resource. In the future, custom policies will be supported, and the deployment job will be able to run using its own identity. The currently available policies are:

  • Deploy network watcher when virtual networks are created
  • Deploy default Microsoft IaaSAntimalware extension for Windows Server
  • Apply Diagnostic Settings for Network Security Groups

Azure Sample Policies GitHub repo

One challenge to adopting policies is constructing the JSON template. We created an Azure Policy Repository which contains quick-start samples from the community. Each sample policy contains instructions on how to use the policy. The policy also contains a “Deploy to Azure” button. Please join our effort to enrich the samples. Also, this is a great place to file issues and request additional features.

Try it out

Please try out the new features and let us know your thoughts and feedback!

Large C# and VB solutions load significantly faster in 15.5 update


On average, 50 percent of all solutions opened by Visual Studio users open in ten seconds or less. However, large solutions can take longer to load because there are a lot of projects that Visual Studio needs to process.

Over the last six months, we looked at ways to make solution load much faster, even for large solutions. We are happy to share that with update 15.5, Visual Studio 2017 loads C# and Visual Basic projects twice as fast as before. (C++ solution load was optimized earlier in Visual Studio 2017, as described here.)

This video compares loading the Orchard Content Management System solution before and after optimization. Check out the video, try Visual Studio 2017 15.5 for your large solution, and tell us how much faster it loads for you! You can reach us at vssolutionload at Microsoft.

Solution Load Video Comparing 15.4 and 15.5

Design Time Build

Loading a solution in Visual Studio is much more involved than just parsing XML project files and rendering the Solution Explorer tree. To let you be productive right away, Visual Studio enables various IDE features at solution load. These features require a deep understanding of projects, project files, and dependencies. To calculate this information, Visual Studio runs a design-time build during solution load. This can be an expensive operation, and inefficiencies in customized project files and build targets can make it worse. In some cases, developers install certain NuGet packages that add even more operations to the design-time build.

What did we do?

Previously, Visual Studio loaded and built one project at a time on solution load. This design didn’t leverage the additional power of fast, multi-core machines. To reduce solution load time, Visual Studio now starts C# and Visual Basic design-time builds as early as possible. It also batches design-time build operations for all the projects in the solution and executes these build operations in parallel with other solution load operations.

Illustration showing Design Time Build operations running in parallel with Solution Load Operations

We also improved references analysis. Most C# and Visual Basic projects have references to assemblies. To enable you to work with project references, Visual Studio needs to read information about these assemblies, such as the assembly’s version and description. These simple reads can add many seconds during a large solution load, because solutions often have thousands of assembly references across all their projects. Since many projects often have the same set of references, we further reduced solution load time by adding an in-memory “references” cache that is shared across all projects in a solution.

What can you do?

While much of solution load is automatic, parts of it are in your control. The Project System Tools extension can help you identify projects and targets that are slowing down design-time build during solution load.

To do this, install the extension and then delete the hidden .vs subfolder in your solution folder to clear the design-time build cache. Then open the Build Logging window, start recording, and open a solution. The window will show a list of targets and design time build time for each project. Identify slow projects and inspect their targets, then edit your project files to remove unnecessary targets from the design-time build. You can find more design time build optimization techniques here.

Build Logging

Hard drive type matters!

And here is another trick to make solution load even faster. Visual Studio telemetry shows that machines with SSD storage load solutions 2-3 times faster than machines with a regular hard drive. As such, we strongly recommend considering an upgrade to SSD if you are using a regular hard drive. While ideally Windows, Visual Studio, and your solution would all reside on an SSD for maximum impact, having Windows installed on an SSD alone will have a huge impact on your solution load.

Share your feedback

We hope that you will enjoy faster solution load time in Visual Studio 15.5 update. Please try it on your solutions and let us know what you think.

Viktor Veis, Principal Software Engineering Manager, Visual Studio
@ViktorVeis

Viktor runs the Project and Telemetry team in Visual Studio. He is driving engineering effort to optimize solution load performance. He is passionate about building easy-to-use and fast software development tools and data-driven engineering.

Overview of Visual Studio 2017 and Updates for .NET Developers


Visual Studio 2017 was first released in March of this year. Since then, there have been five updates, each bringing more improvements and capabilities. Every improvement is geared toward making you more productive, and this post gives you an overview of the features delivered to date. Read on to see how you can get started working on your projects quickly and write better code faster.

Download Visual Studio 2017 version 15.5 today.

New Install Experience, Performance and Reliability

The first thing you’ll notice with Visual Studio 2017 is the new install experience, which lets you pick and choose which development tools you want installed. To help you get started working on your projects quickly, Visual Studio 2017 v. 15.5 has parallelized initialization to make your solutions load fast and get you to writing code as soon as possible. Other performance improvements include moving computation, such as code analysis, out of the main Visual Studio process to keep your typing speed unimpeded.

Smart Code Editor

Visual Studio has a deep understanding of your code via the Roslyn compiler to provide you with smart editing features like syntax colorization, code completion, completion list filtering, spell-checking mistyped variables, unimported type resolution, outlining, structure visualizers, CodeLens, call hierarchy, hover-able quick info, parameter help, as well as tools for refactoring, applying quick actions, and generating code. The latest update to Visual Studio 2017 includes smart variable naming suggestions and expand/contract selection (Ctrl+W/Ctrl+Shift+W in Default Profile, Shift+Alt+=/Shift+Alt+- in C# Profile).

Smart Code Editor

Pro Tips:

  • Use Quick Launch (Ctrl+Q) to search all Visual Studio settings.

Navigate Your Codebase

Quickly navigate your .NET code by jumping to any file, type, member, or symbol declaration with the redesigned Go To All shortcut (Ctrl+T or Ctrl+,). Find all the references of a symbol or literal in your code, including references across .NET languages, and use the redesigned results window to organize your references by definition, project, and/or path (Shift+F12). And don’t forget to try targeted navigation commands to help you jump directly to symbol definitions (F12, or now also Ctrl+click) or implementations (Ctrl+F12).

Navigate Your Code

Pro tips:

  • Use “f”, “t”, “m”, and “#” as prefixes in your Go To All (Ctrl+T) search to filter results down to files, types, members, or symbols respectively.
  • Use the gear icon in the Go To All dialog to move its position from the right-hand corner of the code editor to the middle.
  • Use the “lock” icon in the Find All References (Shift+F12) window to save your search. Subsequent Find All Reference calls will open a new results tab.

Live Code Analysis

Visual Studio has live code analyzers to help you improve your code quality by detecting errors and potentially problematic code. We provide quick-actions (Ctrl+.) to resolve detected problems across your document, project, or solution. Also invoke the Ctrl+. shortcut to access code suggestions (marked by faded gray dots under the first characters of an expression), learn best practices, stub or generate code, refactor code, and adopt new language features.

Learn more about available quick actions and refactorings in our documentation. Some of the ones added in the latest Visual Studio 2017 update are: sort modifiers, move declaration near reference, convert lambda to C# 7.0 local function, fade and remove unreachable code, add missing file banner, use C# 7.0 pattern matching, simplify with C# 7.1 inferred tuple name, and more.

Live Code Analysis

Pro tips:

  • Enable full-solution analysis to find issues across your entire solution even if you don’t have those files open in the editor: Tools > Options > Text Editor > [C# / Basic] > Advanced > Enable full solution analysis.
  • Errors, warnings, and suggestions appear in the editor scroll bar to give you visuals into where errors are in your open file.
  • Code issues can be suppressed individually using the Ctrl+. shortcut or in-bulk by selecting the issues in the Error list and right-click > Suppress.
  • Have Visual Studio offer to install NuGet packages and add references to unimported types by going to Tools > Options > Text Editor > [C# / Basic] > Advanced > Add using for types in NuGet/reference assemblies.
  • Some code refactorings, like Extract Method and Introduce Local Variable, require a code snippet selection.
  • For more code diagnostics and fixes related to best practices, API design, and performance improvements, install our Microsoft Code Analysis 2017 extension.

Code Consistency and Maintenance

Visual Studio 2017 enables coding convention configuration, detects coding style violations, and provides quick-fixes to remedy style issues with the Ctrl+. shortcut. Configure and enforce your team’s formatting, naming, and code style conventions across a repository—allowing overriding values at the project and file level—using EditorConfig. For any given file, the EditorConfig file in the closest containing folder will be enforced.

Code Consistency and Maintenance

Pro Tips:

  • Configure settings for your machine in Tools > Options > Text Editor > [C# / Basic] > Code Style. Override these settings in your repository with an EditorConfig file.
  • Grab an example .editorconfig file from the corefx repository, https://github.com/dotnet/corefx.
  • Use Format Document (Ctrl+K,D or Ctrl+E,D) to clean up formatting violations based on the configuration in your .editorconfig file or in the absence of that, Tools>Options settings.
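The conventions described above can be sketched as an .editorconfig fragment like the one below. The specific settings are illustrative examples, not a recommended baseline; the corefx file linked above is a fuller reference.

```ini
# Example .editorconfig fragment for a C#/VB repository.
# Settings and severities are illustrative.
root = true

[*.{cs,vb}]
indent_style = space
indent_size = 4

# Prefer 'var' when the type is apparent (surfaced as a suggestion)
csharp_style_var_when_type_is_apparent = true:suggestion

# Place System.* usings first
dotnet_sort_system_directives_first = true

# Avoid 'this.' qualification for fields
dotnet_style_qualification_for_field = false:suggestion
```

Because the closest containing .editorconfig wins, a file like this at the repository root can be overridden per project or per folder by adding another .editorconfig deeper in the tree.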

Unit Testing

Run and debug your unit tests based on the MSTest, NUnit, or XUnit testing frameworks for any application targeting .NET Framework, .NET Standard, or .NET Core. Explore and review your tests in the Test Explorer or immediately see how code changes impact your unit tests inside the editor with Live Unit Testing (Enterprise SKU only).

Unit Testing

Pro Tips:

  • Use Live Unit Testing with MSTestv1 in the latest Visual Studio 2017 update.
  • Include or exclude test projects, or even specific tests, from the set of unit tests run “live” by right-clicking on the test project in Solution Explorer (or on the test itself) and selecting Live Unit Testing > [Include / Exclude].
  • Enable fast test discovery in the Test Explorer with the experimental feature, Real-Time Test Discovery.

Debugging

Visual Studio 2017 improves upon its top-notch debugger to allow you to debug your .NET applications targeting the .NET Framework, .NET Standard, and .NET Core. New features in 2017 include the ability to reattach to processes in one click, visibility into which expression returns null with the new Exception Helper, Run to Click, and the ability to Step Back with IntelliTrace (Enterprise SKU only).

If your service runs in Azure, use Snapshot debugging to diagnose issues on your live, deployed cloud applications in Visual Studio 2017 Enterprise.

Pro Tips:

  • Hold the CTRL button to transform Run to Click into Set Next Statement.
  • Take a look at the checkbox options in the new Exception Helper; you will find that you can ignore breaking on exceptions thrown from specific libraries.
  • Open the Diagnostic Tools window while debugging and notice a new summary page that allows you to take snapshots, enable CPU profiling, and view Exception events.
  • Right-click and use Step Into Specific to choose which nested function you want to “step into” in the given line of code.

.NET Core-Specific Features

Projects targeting .NET Core and .NET Standard involve the new, lightweight project features in Visual Studio 2017. These include file globbing support, a smaller .csproj file—meaning fewer merge conflicts—and the ability to directly edit the new .csproj file without having to unload and reload a project.

Pro Tip:

  • Add NuGet packages directly to your project by adding the tag: <PackageReference Include="PACKAGE_NAME" Version="PACKAGE_VERSION" />

Modern C# and Visual Basic

Visual Studio 2017 ships with C# 7.0 and Visual Basic 15, version 15.3 ships with C# 7.1, and the latest release ships with C# 7.2. By default, Visual Studio projects support the latest major version of the language (in this case, C# 7.0 and VB 15), with options to allow only older language features or to embrace new language features at the minor version cadence. Notable new language features include:

  • C# 7.0 : tuples, pattern-matching, local functions, and out var
  • C# 7.1 : async Main, inferred tuple element names, and default literals
  • C# 7.2 : Span<T>, non-trailing named arguments, private protected, readonly structs, and in parameters
  • Visual Basic 15 brings tuples and digit separators

Follow the C# and Visual Basic language design discussion on the csharplang and vblang GitHub repositories.

Pro Tips:

  • Change your language adoption cadence by right-clicking your project in Solution Explorer and selecting Properties > Build > Advanced > Language version. If you use a language feature that is not supported by your project’s language version, you can use the shortcut Ctrl+. to upgrade your project language from within the editor.

Upgrade to Visual Studio 2017 v. 15.5 today to enhance your workflow with productivity features and enhancements. Share the Pro-tips outlined in this article with your teammates and let us know if something is blocking your productivity by reporting a problem in Visual Studio. Happy coding!

Over ‘n’ out,

Kasey Uhlenhuth, Program Manager, .NET and Visual Studio
@kuhlenhuth

Kasey Uhlenhuth is a program manager working on improving the Node.js experience within Visual Studio as well as providing Interactive experiences on the Managed Languages team.

New DirectX 12 features in Windows 10 Fall Creators Update

We’ve come a long way since we launched DirectX 12 with Windows 10 on July 29, 2015. Since then, we’ve heard every bit of feedback and improved the API to enhance stability and offer more versatility. Today, developers using DirectX 12 can build games that have better graphics, run faster and that are more stable than ever before. Many games now run on the latest version of our groundbreaking API and we’re confident that even more anticipated, high-end AAA titles will take advantage of DirectX 12.

DirectX 12 is ideal for powering the games that run on PC and Xbox, which is the most powerful console on the market. Simply put, our consoles work best with our software: DirectX 12 is perfectly suited for native 4K games on the Xbox One X.

In the Windows 10 Fall Creators Update, we’ve added features that make it easier for developers to debug their code. In this article, we’ll explore how these features work and offer a recap of what we added in Windows 10 Creators Update.

But first, let’s cover how debugging a game or a program utilizing the GPU is different from debugging other programs.

As covered previously, DirectX 12 offers developers unprecedented low-level access to the GPU (check out Matt Sandy’s detailed post for more info). But even though this enables developers to write code that’s substantially faster and more efficient, this comes at a cost: the API is more complicated, which means that there are more opportunities for mistakes.

Many of these mistakes happen GPU-side, which means they are a lot more difficult to fix. When the GPU crashes, it can be difficult to determine exactly what went wrong. After a crash, we’re often left with little information besides a cryptic error message. The reason why these error messages can be vague is because of the inherent differences between CPUs and GPUs. Readers familiar with how GPUs work should feel free to skip the next section.

The CPU-GPU Divide

Most of the processing that happens in your machine happens in the CPU, as it’s a component designed to resolve almost any computation it’s given. It does many things, and for some operations, forgoes efficiency for versatility. This is the entire reason that GPUs exist: to perform better than the CPU at the kinds of calculations that power the graphically intensive applications of today. Basically, rendering calculations (i.e. the math behind generating images from 2D or 3D objects) are small and many: performing them in parallel makes a lot more sense than doing them consecutively. The GPU excels at these kinds of calculations. This is why game logic, which often involves long, varied and complicated computations, happens on the CPU, while the rendering happens GPU-side.

Even though applications run on the CPU, many modern-day applications require a lot of GPU support. These applications send instructions to the GPU, and then receive processed work back. For example, an application that uses 3D graphics will tell the GPU the positions of every object that needs to be drawn. The GPU will then move each object to its correct position in the 3D world, taking into account things like lighting conditions and the position of the camera, and then do the math to work out what all of this should look like from the perspective of the user. The GPU then sends back the image that should be displayed on the system’s monitor.

To the left, we see a camera, three objects and a light source in Unity, a game development engine. To the right, we see how the GPU renders these 3-dimensional objects onto a 2-dimensional screen, given the camera position and light source. 

For high-end games with thousands of objects in every scene, this process of turning complicated 3-dimensional scenes into 2-dimensional images happens at least 60 times a second and would be impossible to do using the CPU alone!

Because of hardware differences, the CPU can’t talk to the GPU directly: when GPU work needs to be done, CPU-side orders need to be translated into native machine instructions that our system’s GPU can understand. This work is done by hardware drivers, but because each GPU model is different, the instructions delivered by each driver are different! Don’t worry though; here at Microsoft, we devote a substantial amount of time to making sure that GPU manufacturers (AMD, Nvidia and Intel) provide drivers that DirectX can communicate with across devices. This is one of the things that our API does; we can see DirectX as the software layer between the CPU and GPU hardware drivers.

Device Removed Errors

When games run error-free, DirectX simply sends orders (commands) from the CPU via hardware drivers to the GPU. The GPU then sends processed images back. After commands are translated and sent to the GPU, the CPU cannot track them anymore, which means that when the GPU crashes, it’s really difficult to find out what happened. Finding out which command caused it to crash used to be almost impossible, but we’re in the process of changing this, with two awesome new features that will help developers figure out what exactly happened when things go wrong in their programs.

One kind of error happens when the GPU becomes temporarily unavailable to the application, known as device removed or device lost errors. Most of these errors happen when a driver update occurs in the middle of a game. But sometimes, these errors happen because of mistakes in the programming of the game itself. Once the device has been logically removed, communication between the GPU and the application is terminated and access to GPU data is lost.

Improved Debugging: Data

During the rendering process, the GPU writes to and reads from data structures called resources. Because it takes time to do translation work between the CPU and GPU, if we already know that the GPU is going to use the same data repeatedly, we might as well just put that data straight into the GPU. In a racing game, a developer will likely want to do this for all the cars, and the track that they’re going to be racing on. All this data will then be put into resources. To draw just a single frame, the GPU will write to and read from many thousands of resources.

Before the Fall Creators Update, applications had no direct control over the underlying resource memory. However, there are rare but important cases where applications may need to access resource memory contents, such as right after device removed errors.

We’ve implemented a tool that does exactly this. Developers with access to the contents of resource memory now have substantially more useful information to help them determine exactly where an error occurred. Developers can now spend less time determining the causes of errors, leaving more time to fix them across systems.

For technical details, see the OpenExistingHeapFromAddress documentation.

Improved Debugging: Commands

We’ve implemented another tool to be used alongside the previous one. Essentially, it can be used to create markers that record which commands sent from the CPU have already been executed and which ones are in the process of executing. Right after a crash, even a device removed crash, this information remains behind, which means we can quickly figure out which commands might have caused it—information that can significantly reduce the time needed for game development and bug fixing.

For technical details, see the WriteBufferImmediate documentation.

What does this mean for gamers? Having these tools offers direct ways to detect and surface the root causes of what’s going on inside your machine. It’s like the difference between trying to figure out what’s wrong with your pickup truck based on hot smoke coming from the front versus having your Tesla’s internal computer system tell you exactly which part failed and needs to be replaced.

Developers using these tools will have more time to build high-performance, reliable games instead of continuously searching for the root causes of a particular bug.

Recap of Creators Update

In the Creators Update, we introduced two new features: Depth Bounds Testing and Programmable MSAA. Where the features we rolled out for the Windows 10 Fall Creators Update were mainly for making it easier for developers to fix crashes, Depth Bounds Testing and Programmable MSAA are focused on making it easier to program games that run faster with better visuals. These features can be seen as additional tools that have been added to a DirectX developer’s already extensive tool belt.

Depth Bounds Testing

Assigning depth values to pixels is a technique with a variety of applications: once we know how far away pixels are from a camera, we can throw away the ones too close or too far away. The same can be done to figure out which pixels fall inside and outside a light’s influence (in a 3D environment), which means that we can darken and lighten parts of the scene accordingly. We can also assign depth values to pixels to help us figure out where shadows are. These are only some of the applications of assigning depth values to pixels; it’s a versatile technique!

We now enable developers to specify a pixel’s minimum and maximum depth value; pixels outside of this range get discarded. Because doing this is now an integral part of the API and because the API is closer to the hardware than any software written on top of it, discarding pixels that don’t meet depth requirements is now something that can happen faster and more efficiently than before.

Simply put, developers will now be able to make better use of depth values in their code and can free GPU resources to perform other tasks on pixels or parts of the image that aren’t going to be thrown away.

Now that developers have another tool at their disposal, for gamers, this means that games will be able to do more for every scene.

For technical details, see the OMSetDepthBounds documentation.

Programmable MSAA

Before we explore this feature, let’s first discuss anti-aliasing.

Aliasing refers to the unwanted distortions that happen during the rendering of a scene in a game. There are two kinds of aliasing that happen in games: spatial and temporal.

Spatial aliasing refers to the visual distortions that happen when an image is represented digitally. Because pixels in a monitor/television screen are not infinitely small, there isn’t a way of representing lines that aren’t perfectly vertical or horizontal on a monitor. This means that most lines, instead of being straight lines on our screen, are not straight but rather approximations of straight lines. Sometimes the illusion of straight lines is broken: this may appear as stair-like rough edges, or ‘jaggies’, and spatial anti-aliasing refers to the techniques that programmers use to make these kinds of edges smoother and less noticeable. The solution to these distortions is baked into the API, with hardware-accelerated MSAA (Multi-Sample Anti-Aliasing), an efficient anti-aliasing technique that combines quality with speed. Before the Creators Update, developers already had the tools to enable MSAA and specify its granularity (the amount of anti-aliasing done per scene) with DirectX.

Side-by-side comparison of the same scene with spatial aliasing (left) and without (right). Notice in particular the jagged outlines of the building and sides of the road in the aliased image. This still was taken from Forza Motorsport 6: Apex.

But what about temporal aliasing? Temporal aliasing refers to the aliasing that happens over time and is caused by the sampling rate (or number of frames drawn a second) being slower than the movement that happens in the scene. To the user, things in the scene jump around instead of moving smoothly. This YouTube video does an excellent job showing what temporal aliasing looks like in a game.

In the Creators Update, we offer developers more control of MSAA, by making it a lot more programmable. At each frame, developers can specify how MSAA works on a sub-pixel level. By alternating MSAA on each frame, the effects of temporal aliasing become significantly less noticeable.

Programmable MSAA means that developers have a useful tool in their belt. Our API not only has native spatial anti-aliasing but now also has a feature that makes temporal anti-aliasing a lot easier. With DirectX 12 on Windows 10, PC gamers can expect upcoming games to look better than before.

For technical details, see the SetSamplePositions documentation.

Other Changes

Besides several bugfixes, we’ve also updated our graphics debugging software, PIX, every month to help developers optimize their games. Check out the PIX blog for more details.

Once again, we appreciate the feedback shared on DirectX 12 to date, and look forward to delivering even more tools, enhancements and support in the future.

Happy developing and gaming!

The post New DirectX 12 features in Windows 10 Fall Creators Update appeared first on Building Apps for Windows.

It has never been a better time to migrate from TFS to VSTS!

From the day Visual Studio Team Services (VSTS) first went live, customers wanted a path to migrate their existing on-premises Team Foundation Server (TFS) data. For a long time, only low-fidelity paths existed – migrating a subset of resources at their “tip” values, using tools to preserve a bit more history, and so forth. Just... Read More

Jenkins on Azure update – ACI experiment and AKS support

I attended a Jenkins Meetup a while back and saw how the engineering team of a local company leveraged Jenkins pipelines and microservices architecture to simplify their build pipelines. It’s obvious to me that they have everything figured out, the Jenkins infrastructure is all set up and things are running well. I asked myself, what could Azure bring to the table?

When scaling out to Azure with the Azure Container Agent, Azure Container Instances (ACI) is ideal for transient workloads like spinning up a container agent to run your build and tests and push the build artifacts to, say, Azure Storage. You get an economical way to scale out without the burden of provisioning and managing the infrastructure. ACI also provides per-second billing based on the capacity you need. Since spinning up an ACI agent is easy and fast, when your build is complete, just tear the agent down. You do not need to pay for any idle time.

And with Docker, you can create the build environment you need on demand. There is no need to update or patch servers, freeing up resources otherwise spent maintaining and upgrading your build agents.

If by now you are curious and wonder if you can move part of your build workload to the cloud, check out the step-by-step tutorial:

Jenkins ACI experiment

Leave us feedback on the pages if you have questions or any requests. We are listening.

Also, if you are wondering about AKS (Managed Kubernetes), we just added AKS support to our Azure Container Agent as well as to the plugin for deploying to Kubernetes on Azure.

Last week in Azure: News from Connect(); 2017, Azure Virtual Data Center, and more

As covered in Scott Guthrie's post, the annual Microsoft Connect(); event last week focused on developing for the intelligent cloud and intelligent edge to put "AI into the hands of every developer so they unleash the power of data and reimagine possibilities that will improve our world." If you missed any parts of Connect();, you can find all of the Azure-related content on Channel 9 for on-demand viewing.

AI

Expanding AI tools and resources for developers and data scientists on Azure - Announcements at Connect(); include updates to Azure Machine Learning (AML) including Azure IoT Edge integration, a new Azure Databricks service that combines the best of Databricks and Azure for Spark-based analytics, and a new Visual Studio Tools for AI development environment with Azure Machine Learning integration.

New NVIDIA GPUs coming to Azure accelerate HPC and AI workloads - We are launching a new VM size on Azure, the NCv3, which will offer the new NVIDIA Tesla V100 GPU. Our NCv2, offering NVIDIA P100s, and our ND-series, offering NVIDIA P40s, are exiting preview and will be GA for your production workloads starting on December 1st. Azure Batch AI and our Data Science Virtual machine images can also take advantage of these GPU VMs.

Microsoft Cognitive Services updates for Microsoft Connect(); - Learn about recent updates to Text Analytics API, Translator API, and Custom Vision Service. Augment the next generation of applications with the ability to see, hear, speak, understand, and interpret needs using natural methods of communication. Add vision and speech recognition, emotion and sentiment detection, language understanding, and search, to applications without having any data science expertise.

Azure IoT Edge open for developers to build for the intelligent edge - Azure IoT Edge enables you to run cloud intelligence directly on IoT devices even smaller than a Raspberry Pi or as powerful as they need. At Connect();, we announced support for AI Toolkit for Azure IoT Edge, Azure Machine Learning, Azure Stream Analytics, Azure Functions, and more.

Data

Dear Cassandra Developers, welcome to Azure #CosmosDB! - Announced native support for Apache Cassandra API in Azure Cosmos DB, which offers you Cassandra as-a-service powered by Azure Cosmos DB. Azure Cosmos DB provides wire protocol-level compatibility with existing SDKs and tools. This compatibility ensures you can use your existing codebase with the Cassandra API of Azure Cosmos DB with trivial changes.

A technical overview of Azure Databricks - A new service in preview that brings together the best of the Apache Spark analytics platform and Azure. Azure Databricks features optimized connectors to Azure storage platforms (e.g. Data Lake and Blob Storage) for the fastest possible data access, and one-click management directly from the Azure console.

Azure #CosmosDB @ Microsoft Connect(); 2017 - A roll-up blog post that covers all of the Azure Cosmos DB highlights from the Microsoft Connect(); 2017 event.

Microsoft announces the general availability of Azure Time Series Insights - Azure Time Series Insights is a cost-effective and performant service for the analytics, visualization, and storage of time series data. Use Time Series Insights to store and manage terabytes of time-series data, explore and visualize billions of events simultaneously, conduct root-cause analysis, and to compare multiple sites and assets.

MariaDB, PostgreSQL, and MySQL: more choices on Microsoft Azure - Microsoft joined the MariaDB foundation as a Platinum sponsor to work closely together with Monty (Michael Widenius) and the MariaDB community on making MariaDB even better. Azure Database for MariaDB joins Azure database services for PostgreSQL and MySQL on Azure to provide more choices to developers.

Azure #CosmosDB extends support for MongoDB aggregation pipeline, unique indexes, and more - The latest service deployment includes improvements for MongoDB aggregation pipeline support (preview), unique index support, and MongoDB wire protocol version 5 support, which is used in MongoDB 3.4 (preview).

Announcing the general availability of Azure #CosmosDB Table API - Using the Azure Cosmos DB Table API, applications running on Azure Table storage can take advantage of secondary indexes, turnkey global distribution, dedicated throughput, and single-digit millisecond latency with 99.99% comprehensive SLAs.

Pre-announcing the general availability of Azure #CosmosDB Gremlin (Graph) API - Azure Cosmos DB is the first managed PaaS service that brings cloud-native and enterprise-ready features to graph databases. This includes turnkey global distribution, independent scaling of storage and throughput, predictable single-digit millisecond latencies, automatic indexing, and the industry’s most comprehensive SLAs. You can store massive graphs with billions of vertices and edges, query the graphs with millisecond latency, and easily evolve the graph structure and schema.

Apache Spark to Azure #CosmosDB Connector is now generally available - The Apache Spark to Azure Cosmos DB Connector enables both ad-hoc and interactive queries on real-time big data, as well as advanced analytics, data science, machine learning and artificial intelligence. Azure Cosmos DB can be used for capturing data that is collected incrementally from various sources across the globe. This includes social analytics, time series, game or application telemetry, retail catalogs, up-to-date trends and counters, and audit log systems.

#AzureSQLDW: Hub and Spoke series Integration with Azure Analysis Services - Azure SQL Data Warehouse is Microsoft’s SQL analytics platform, the backbone of your Enterprise Data Warehouse (EDW). The service enables you to scale compute and storage elastically (and independently) with massively parallel processing. SQL DW integrates seamlessly with big data stores and acts as a hub to your data marts and cubes for an optimized and tailored performance of your EDW.

Azure Stream Analytics now available on IoT Edge - Azure Stream Analytics on IoT Edge (now in preview) empowers developers to deploy near-real-time analytical intelligence closer to IoT devices so that they can unlock the full value of device-generated data. Designed for customers requiring low latency, resiliency, efficient use of bandwidth and compliance, enterprises can now deploy control logic close to the industrial operations and complement Big Data analytics done in the cloud.

Compute

Announcing General Availability of Azure Reserved VM Instances (RIs) - Azure RIs enable you to reserve Virtual Machines on a one- or three-year term, and provide up to 72% cost savings versus pay-as-you-go prices. Azure RIs give you price predictability and help improve your budgeting and forecasting.

HPC containers with Azure Batch - We added support for Singularity containers in the latest Batch Shipyard release. Singularity is a container solution amenable to both administrators and users of shared HPC and cluster computing environments, while still providing access to accelerators such as GPUs and specialized interconnects in container contexts. Batch Shipyard is an open system for enabling simple, configuration-based container execution on Azure Batch, and aims to allow users of these shared computing environments to easily execute their existing Singularity workloads on Azure.

Additional news

Announcing the general availability of Azure App Service diagnostics - Azure App Service diagnostics provides an intelligent and interactive experience, analyzes what’s wrong with your web apps and quickly guides you to the right information to help troubleshoot and resolve issues faster. App Service diagnostics reduces trial & error troubleshooting by expediting problem resolution with recommended potential solutions.

Azure DevOps Project – public preview - Creating a DevOps Project provisions Azure resources and comes with a Git code repository, Application Insights integration, and a continuous delivery pipeline set up to deploy to Azure. The DevOps Project dashboard helps you monitor code commits, builds, and deployments, from a single view in the Azure portal.

Announcing general availability of Bash in Cloud Shell - Bash in Cloud Shell provides an interactive web-based, Linux command-line experience from virtually anywhere. With a single click through the Azure portal, Azure documentation, or the Azure mobile app, you get access to a secure and authenticated Azure workstation to manage and deploy resources from a native Linux environment held in Azure.

Azure Networking updates for Fall 2017 - Updates to simplify the overall network management, security, scalability, availability and performance of your mission-critical applications.

Content, SDKs, and Samples

Free ebook: Azure Virtual Datacenter - Azure Virtual Datacenter (VDC) is an approach to making the most of the Azure cloud platform's capabilities while respecting your existing security and networking policies. When deploying enterprise workloads to the cloud, IT organizations and business units must balance governance with developer agility. In this free ebook from the Azure Customer Advisory Team (AzureCAT), Azure Virtual Datacenter provides models to achieve this balance with an emphasis on governance.

New multi-tenant patterns for building SaaS applications on SQL Database - An expanded set of sample SaaS applications, The Wingtip Tickets SaaS application, each using a different database tenancy model on SQL Database. Each sample includes a series of management scripts and tutorials to help you jump start your own SaaS app project. These samples demonstrate a range of SaaS-focused designs and management patterns that can accelerate SaaS application development on SQL Database.

Preview the new Azure Storage SDK for Go & Storage SDKs roadmap - A new and redesigned Azure Storage SDK for Go with documentation and examples. This new SDK was redesigned to follow the next generation design philosophy for Azure Storage SDKs and is built on top of code generated by AutoRest, an open source code generator for the OpenAPI specification.

Azure Shows

Azure CDN: Dynamic Site Acceleration - Richard Li joins Scott Hanselman to discuss the new Dynamic Site Acceleration (DSA) optimization for Azure CDN, and how it can be used in combination with standard CDN caching features to measurably improve the performance of web pages with dynamic content.

App Service Diagnostic and Troubleshooting Experience - Steve Ernst joins Scott Hanselman to introduce App Service diagnostics which is the home for diagnosing and troubleshooting problems with your app. This intelligent service was built to diagnose many different problems and suggest targeted solutions for the customer.

Azure Analysis Services Visual Model Editor - Azure Analysis Services makes it easier to build semantic models with the introduction of its new web modeling experience. In this episode, Josh Caplan will show you just how easy it is to use this experience to create a rich semantic model on top of data stored in Azure SQL Data Warehouse. Learn how to empower your business users to build their own reports with data at cloud scale.

Episode 204 - Back to Cloud Services - A great throwback discussion to one of the earliest (and most robust) of Azure Services, the Classic Cloud Service, with Adam Modlin, a Senior App Dev Consultant at Microsoft.

Cloud Tech 10 - 20th November 2017 - CosmosDB, Cloud Shell, DevOps and more! - Each week, Mark Whitby, a Cloud Solution Architect at Microsoft UK, covers what's happening with Microsoft Azure in just 10 minutes, or less. In this episode:

  • Get a new .NET, Node.js, Java, PHP or Python application complete with CI/CD pipeline up and running in minutes with the Azure DevOps Project
  • The Azure Storage SDK adds support for Go
  • The Azure Cloud Shell's Bash shell is now generally available
  • Azure Database now supports MariaDB along with MySQL and PostgreSQL
  • Find and fix problems with your web applications with Azure's new App Service Diagnostics
  • Azure Databricks is now available in preview
  • Lots of CosmosDB announcements including support for Cassandra, new features for MongoDB and news on general availability of Graph API, Table API and Apache Spark Connector for CosmosDB
  • New NCv3 series Virtual Machines with Nvidia Tesla V100 GPUs
  • Save up to 82% on virtual machine pricing with Azure Reserved Instances
  • Azure Virtual Datacenter eBook release

Azure Advisor – your personalized best practices service got better

This blog post was co-authored by Matt Wagner, Program Manager, Microsoft Azure.

Azure Advisor is your personalized cloud service for Azure best practices that help you to improve availability, enhance protection, optimize performance of your Azure resources, and maximize the return on your IT budget. Azure Advisor was made generally available earlier this year. Since then, tens of thousands of recommendations have been implemented by Azure customers like you. Thank you.

Today we are highlighting a number of capabilities that were recently rolled out based on your feedback. These capabilities help you attain a more comprehensive view of your recommendations across all your subscriptions, and the ability to better customize Azure Advisor to the needs of your specific organization.

What’s new

Advisor brings an all-new dashboard to help you more easily review the overall status of your recommendations across multiple subscriptions, in four categories – high availability, security, performance, and cost. You can see the overall status of active recommendations, their impact, and the number of resources that can be optimized. The cost recommendation category also highlights the total possible savings you can achieve if you implement all recommendations in this category. This is a great way to find the extra IT funds you might need to run your next IT project, like making your business-critical resources more resilient and more secure, or building a smart chat bot.

In addition, the new downloadable reports in PDF and CSV formats allow you to share Advisor recommendations within your organization to prioritize and collaborate on implementation.

 

Microsoft Azure Advisor - dashboard

 

Configuration options allow you to tune Advisor to your needs. In the “Resources” section you can, for example, exclude your non-production workloads from displaying any recommendations by simply unchecking non-production resource groups. Many of you would like Advisor to identify your low-use virtual machines, but have different criteria on CPU utilization thresholds for low usage. In the “Rules” section you can customize your desired CPU threshold for each of your subscriptions.

 

Microsoft Azure Advisor - configuration

Next steps

Review your Azure Advisor recommendations at no cost in your Azure portal.

We won’t stop here; we are working hard on additional Advisor features you will love, and we will keep you updated as new capabilities become broadly available in the coming months.

Please continue to share your comments and ideas about Azure Advisor as it will help us continue to evolve this service. Submit your feedback via the Azure Advisor feedback functionality or post your suggestions below.

R charts in a Tweet


Twitter recently doubled the maximum length of a tweet to 280 characters, and while all users now have access to longer tweets, few have taken advantage of the opportunity. Bob Rudis used the rtweet package to analyze tweets sent with the #rstats hashtag since 280-character tweets were introduced, and most still kept below the old 140-character limit. The actual percentage differed by the Twitter client being used; I've reproduced the charts for the top 10 clients below. (Click the chart for even more clients, and details of the analysis including the R code that generated it.)

Twitter 280
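As a rough sketch of the kind of per-client analysis Rudis performed, here is the same idea in Python rather than R. The client names and tweet lengths below are made-up sample data; the real analysis pulls #rstats tweets with rtweet and groups them by the client ("source") field:

```python
from collections import defaultdict

# Hypothetical sample of (client, tweet_length) pairs standing in for
# real #rstats tweets fetched with rtweet.
tweets = [
    ("Twitter Web Client", 120), ("Twitter Web Client", 250),
    ("Tweetbot", 90), ("Tweetbot", 135),
    ("Buffer", 200), ("Buffer", 279),
]

# client -> [tweets over the old 140-char limit, total tweets]
counts = defaultdict(lambda: [0, 0])
for client, length in tweets:
    counts[client][1] += 1
    if length > 140:
        counts[client][0] += 1

for client, (over, total) in sorted(counts.items()):
    print(f"{client}: {over / total:.0%} of tweets exceed 140 characters")
```

The real chart plots these percentages per client; with rtweet the equivalent grouping key is the tweet's `source` column.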

That being said, some have embraced the new freedom with gusto, not least my Microsoft colleague Paige Bailey who demonstrated you can fit some pretty nice R charts — and even animations! — into just 280 characters:

df <- expand.grid(x = 1:10, y=1:10)
df$angle <- runif(100, 0, 2*pi)
df$speed <- runif(100, 0, sqrt(0.1 * df$x))

ggplot(df, aes(x, y)) +
geom_point() +
geom_spoke(aes(angle = angle, radius = speed))

y'all twitterpeople give me 280 characters?
yr just gonna get code samples pic.twitter.com/hyGEE2DxGy

— Paige Bailey (@DynamicWebPaige) November 8, 2017

x <- getURL("https://t.co/ivZZvodbNK")
b <- read.csv(text=x)
c <- get_map(location=c(-122.080954,36.971709), maptype="terrain", source="google", zoom=14)
ggmap(c) +
geom_path(data=b, aes(color=elevation), size=3)+
scale_color_gradientn(colours=rainbow(7), breaks=seq(25, 200, 25)) pic.twitter.com/7WdQLR56uZ

— Paige Bailey (@DynamicWebPaige) November 11, 2017

library(dygraphs)
lungDeaths <- cbind(mdeaths, fdeaths)
dygraph(lungDeaths) %>%
dySeries("mdeaths", label = "Male") %>%
dySeries("fdeaths", label = "Female") %>%
dyOptions(stackedGraph = TRUE) %>%
dyRangeSelector(height = 20)

I ❤️ R's interactive visualizations SO MUCH pic.twitter.com/LevjElly3L

— Paige Bailey (@DynamicWebPaige) November 16, 2017

library(plot3D)

par(mar = c(2, 2, 2, 2))
par(mfrow = c(1, 1))
x <- seq(0, 2*pi,length.out=50)
y <- seq(0, pi,length.out=50)
M <- mesh(x, y)

surf3D(x = (3+2*cos(M$x)) * cos(M$y),
y = (3+2*cos(M$x)) * sin(M$y),
z = 2 * sin(M$x),
colkey=FALSE,
bty="b2") pic.twitter.com/a6GwQTaGYC

— Paige Bailey (@DynamicWebPaige) November 17, 2017

For more 280-character examples in R and other languages follow this Twitter thread, and for more on analyzing tweets with rtweet follow the link below.

rud.is: Twitter Outer Limits : Seeing How Far Have Folks Fallen Down The Slippery Slope to “280” with rtweet

Transforming your VMware environment with Microsoft Azure


Just as each organization is unique, each organization will take a unique path to the cloud. Whether you are transferring data, migrating infrastructure, modernizing applications, or building a new app, Azure allows you to move to the cloud in a way that makes the most sense for your needs.

As part of this journey, one request I hear frequently is the desire to move existing on-premises VMware workloads to Azure. This includes migrating VMware-based applications to Azure, integrating with Azure, and deploying VMware virtualization on Azure hardware. 

A frictionless path to Azure for your VMware environment

Today we are announcing new services to help you at every step of your VMware migration to Azure.

  • Migrate applications with Azure Migrate. On November 27th, Azure Migrate, a free service, will be broadly available to all Azure customers. While most cloud vendors offer single server migration capabilities, Azure Migrate helps you through the journey of migrating an entire multi-server application across the following phases: 

Discovery and assessment. Azure Migrate can discover your on-premises VMware-based applications without requiring any changes to your VMware environment. Azure Migrate offers the unique capability to visualize group level dependencies in multi-VM applications, allowing you to logically group and prioritize the entire application for migration. Through utilization discovery of the CPU, memory, disks, and network, Azure Migrate also has built-in rightsizing to offer size and cost guidance so when you migrate, you can save money.

VMWare

Uniquely visualize entire application dependencies with Azure Migrate

Migration. Once discovery has completed, with just a few easy clicks, you can migrate your on-premises applications to Azure. Azure Site Recovery (ASR) enables customers to migrate VMware-virtualized Windows Server and Linux workloads with minimal downtime. ASR offers application-centric migration, allowing you to sequence your application servers as they migrate. No other cloud provider offers this built-in multi-tier sequencing. Additionally, Azure Database Migration Service enables customers to migrate their SQL Server and Oracle databases directly into the fully managed Azure SQL Database. For customers who need large volume storage migration, we recently announced Azure Data Box, an appliance designed to simplify data movement to Azure.

Resource & Cost Optimization. Once deployed in Azure, with the free Azure Cost Management service (formerly called Cloudyn), you can easily forecast, track, and optimize your spending. Our calculations show up to 84% TCO savings for certain on-premises VMware to Azure migration scenarios. You can reference this VMware to Azure TCO guide to learn more and even run TCO calculations yourself. As an example, Capstone Mining has gone through this journey and already saved $6M in capital and operating costs.

  • Integrate VMware workloads with Azure services. There are many Azure services that you can use together with VMware workloads without any migration or deployment, enabling you to keep your entire environment secure and well-managed across cloud and on-premises. This includes Azure Backup, Azure Site Recovery (for Disaster Recovery), update/configuration management, Azure Security Center and operational intelligence using Azure Log Analytics. You can even manage your Azure resources in the public cloud using the VMware vRealize Automation console. Somerset County Council and Russell Reynolds Associates are example customers who have integrated Azure services with their VMware VMs. 
  • Host VMware infrastructure with VMware virtualization on Azure. Most workloads can be migrated to Azure easily using the above services; however, there may be specific VMware workloads that are initially more challenging to migrate to the cloud. For these workloads, you may need the option to run the VMware stack on Azure as an intermediate step. Today, we’re excited to announce the preview of VMware virtualization on Azure, a bare-metal solution that runs the full VMware stack on Azure hardware, co-located with other Azure services. We are delivering this offering in partnership with premier VMware-certified partners. General availability is expected in the coming year. Please contact your Microsoft sales representative if you’d like to participate in this preview.  Hosting the VMware stack in public cloud doesn’t offer the same cost savings and agility of using cloud-native services, but this option provides you additional flexibility on your path to Azure.

Here are some resources to help with migration to Azure:

Beyond Migration

Many of you are looking to move to the cloud to help your business move faster. Azure provides security, reliability, and global scale to help you deliver and scale your applications. At the same time, we understand that it may not be possible to run your entire business in the cloud. You may have low-latency, regulatory, or compliance requirements that require you to run some of your applications on-premises, in a hybrid way. The reality is, running your VMware virtualization stack in the cloud does not address your hybrid requirements.  For this, you need a broad set of hybrid services and solutions that provide not just connectivity and virtualization, but true consistency across your cloud and on-premises environments.  

Azure is the only true hybrid cloud that enables consistency across application development, management, security, data, and identity. This is made possible with a rich set of offerings like Azure Stack, Azure Backup, Azure Site Recovery, Azure Security Center, SQL Server Stretch DB, Azure Active Directory, and hybrid management with patching, configuration, and monitoring of both cloud and on-premises servers. No other cloud offers this level of comprehensive hybrid capabilities.

We are committed to investing in tools, solutions and services that make Azure simple for you no matter what your needs and where you are in your cloud journey. I can’t wait to hear more about your cloud story.

Thanks,

Corey Sanders

Scale up your parallel R workloads with containers and doAzureParallel


by JS Tan (Program Manager, Microsoft)

The R language is by far the most popular statistical language, and has seen massive adoption in both academia and industry. In our new data-centric economy, the models and algorithms that data scientists build in R are not just being used for research and experimentation. They are now also being deployed into production environments, and directly into products themselves.

However, taking your workload in R and deploying it at production capacity, and at scale, is no trivial matter. Because of R's rich and robust package ecosystem, and the many versions of R, reproducing the environment of your local machine in a production setting can be challenging, let alone ensuring your model's reproducibility!

This is why using containers is extremely important when it comes to operationalizing your R workloads. I'm happy to announce that the doAzureParallel package, powered by Azure Batch, now supports fully containerized deployments. With this migration, doAzureParallel will not only help you scale out your workloads, but will also do it in a completely containerized fashion, letting you bypass the complexities of dealing with inconsistent environments. Now that doAzureParallel runs on containers, we can ensure a consistent immutable runtime while handling custom R versions, environments, and packages.

Doazureparallel-containers

By default, the container used in doAzureParallel is the 'rocker/tidyverse:latest' container that is developed and maintained as part of the rocker project. For most cases, and especially for beginners, this image will contain most of what is needed. However, as users become more experienced or have more complex deployment requirements, they may want to change the Docker image that is used, or even build their own. doAzureParallel supports both those options, giving you flexibility (without any compromise on reliability). Configuring the Docker image is easy. Once you know which Docker image you want to use, you can simply specify its location in the cluster configuration, and doAzureParallel will use it when provisioning subsequent clusters. More details on configuring your Docker container settings with doAzureParallel are included in the documentation.
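For illustration, a cluster configuration pointing doAzureParallel at a custom image might look like the sketch below. This assumes the cluster JSON schema documented at the time; the name, VM size, and pool values are placeholders, and `containerImage` is the setting that selects the Docker image:

```json
{
  "name": "my-r-cluster",
  "vmSize": "Standard_D2_v2",
  "maxTasksPerNode": 1,
  "poolSize": {
    "dedicatedNodes": { "min": 2, "max": 2 },
    "lowPriorityNodes": { "min": 0, "max": 0 }
  },
  "containerImage": "rocker/tidyverse:latest"
}
```

Swapping `containerImage` for your own image (for example, one published to Docker Hub with extra packages baked in) is all that is needed; subsequent clusters provisioned from this config will run on it.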

With this release, we hope to unblock many users who are looking to take their R models and scale them up in the cloud. To get started with doAzureParallel, visit our GitHub page. Please give it a try and let us know if you have questions, feedback, or suggestions there, or via email at razurebatch@microsoft.com.

GitHub (Azure): doAzureParallel

 

Announcing Language Server Protocol Preview Release


Visual Studio is joining Visual Studio Code in offering support for the Language Server Protocol. As an extension author, you can now write Visual Studio extensions that leverage existing language servers to provide a rich editing experience for languages that initially had no native language support in Visual Studio. With these extensions, you can use Visual Studio to code in your favorite language!

Use Visual Studio to code in your favorite language

This is a preview release that is available as an extension on Visual Studio Marketplace and can only be installed on Visual Studio 2017 Preview. Read our documentation to find out more.

What is an LSP?

The Language Server Protocol (LSP) is a common protocol that provides language service features to developer tools. A language server contains the deep understanding of a specific language while the Language Server Protocol provides the communication between the language server and the developer tool. The complex logic of a language server needs to be implemented only once and then from there, smaller pieces of code can be written to allow the language server to talk to the specific developer tool. This allows for a consistent editing experience for any tool that supports the protocol.
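Under the hood, the developer tool and the language server exchange JSON-RPC messages framed with a Content-Length header, as specified by the protocol. A minimal sketch of that wire format in Python (the `initialize` payload shown is abbreviated for illustration):

```python
import json

def frame(payload: dict) -> bytes:
    """Serialize a JSON-RPC message with LSP Content-Length framing."""
    body = json.dumps(payload).encode("utf-8")
    return b"Content-Length: %d\r\n\r\n" % len(body) + body

def unframe(data: bytes) -> dict:
    """Parse a single framed message back into a dict."""
    header, _, body = data.partition(b"\r\n\r\n")
    length = int(header.split(b":")[1])
    return json.loads(body[:length])

# A (truncated) initialize request, the first message a client sends.
request = {"jsonrpc": "2.0", "id": 1, "method": "initialize",
           "params": {"processId": None, "rootUri": None, "capabilities": {}}}
message = frame(request)
assert unframe(message) == request
```

Because every feature (completion, hover, diagnostics, and so on) is just another method name in this envelope, a tool that can speak the envelope can talk to any conforming language server.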

How do I Start?

You will need to have the Visual Studio extension development workload installed on Visual Studio 2017 Preview. For the preview, you will have to declare a dependency on the Language Server Protocol Client Preview extension. This means that any customers who wish to install your language server extension must also have the Language Server Protocol Client preview extension installed first. For more details, check out our sample on GitHub, the client and protocol API documentation, and the walkthrough on how to add a language server protocol extension.

Let Us Know What You Think

Releasing as a preview first gives us a great opportunity to gather and incorporate feedback to make the final product even better. Please leave us comments and questions on the LSP extension page or email us at vslsp@microsoft.com with your feedback.

We can’t wait to see what extensions will be created by the community!

Stephanie Su, Program Manager, Visual Studio

Stephanie Su is a Program Manager on the Visual Studio Extensibility team focused on the authoring and acquisition scenarios of extensions.


Over 450 Areas of Bird’s Eye Imagery Now Live on Bing Maps


If you’re headed out on an end-of-the-year vacation this holiday season, you can preview that special place on Bing Maps using the new Bird’s Eye experience. We have been busy processing the oblique imagery to bring you major metropolitan areas to view before you travel. We recently surpassed the 450-area mark for our Bird’s Eye imagery, and there are a host of vacation favorites among them.

New Bird’s Eye areas

Oblique imagery is a great complement to Aerial 2D imagery because it has much more depth and provides a view of your destination that is more familiar and in line with what people expect. You can see Bird’s Eye imagery in Bing Maps, and this view can offer better context for navigation because building facades can be used as landmarks. With this release, the new Bird’s Eye experience surpasses 450 areas, and the count of oblique aerial views of your favorite places keeps growing. Response to the previous version of Bird’s Eye was very positive, and it remains a unique asset that Bing Maps offers its users. The new experience carries it forward with better-quality imagery and a better overall user experience.

Ferry Building, San Francisco CA

Ferry Building, San Francisco, CA, on Bing Maps - https://binged.it/2zU4S3h

Denver Center for Performing Arts - Denver, CO

Sculpture Park, Denver, CO on Bing Maps - https://binged.it/2zUUsk7

Wrigley Field - Chicago, IL

Wrigley Field, Chicago, IL on Bing Maps - https://binged.it/2zUy0aS

With this release, the Bing Maps Bird’s Eye product offers much higher resolution aerial oblique views and is based on Microsoft-owned data. Our count today is over 450 areas viewable through the Bing Maps website. Have you been to any of these vacation spots? This product is also available to developers in our Bing Maps SDK.

“The Strip” - Las Vegas, NV

“The Strip” in Las Vegas, NV, on Bing Maps - https://binged.it/2hErXfZ

The Gateway Arch - St. Louis, MO

The Gateway Arch in St. Louis, MO on Bing Maps - https://binged.it/2hFNHrO

The Magic Kingdom, Disney World - Orlando, FL

The Magic Kingdom, Disney World, in Orlando, FL on Bing Maps - https://binged.it/2zT8D9j

The world is a big place, and you can now get a fresh look at over two hundred thousand square kilometers of it. New high-resolution imagery is available on Bing Maps in 15 different countries across the world. Using aircraft-mounted UltraCam Osprey cameras, Bing Maps brings you high-resolution views of your favorite vacation spots.

Bourbon Street and Jackson Square - New Orleans, LA

Chartres Street, New Orleans, LA on Bing Maps - https://binged.it/2hFOE3n

The Pantheon - Italy

The Pantheon in Italy on Bing Maps - https://binged.it/2zWeElB

Graceland - Memphis, TN

Graceland in Memphis, TN on Bing Maps - https://binged.it/2zUz8uP

New oblique imagery cities

Alameda County, CA; Albuquerque, NM; Anaheim, CA; Ann Arbor, MI; Atlanta, GA; Augusta, GA; Austin, TX; Bakersfield, CA; Berkeley, CA; Birmingham, AL; Boston, MA; Boulder, CO; Buffalo, NY; Chapel Hill, NC; Charleston, SC; Chicago, IL; Cleveland, OH; Colorado Springs, CO; Columbia, SC; Contra Costa County, CA; Corpus Christi, TX; Denver, CO; Dover, DE; Fort Myers, FL; Fort Worth, TX; Fremont, CA; Fresno, CA; Greensboro, NC; Greenville, SC; Houston, TX; Jackson, MS; Jacksonville, FL; Las Vegas, NV; Lincoln, NE; Louisville, KY; Lubbock, TX; Madison, WI; Marin County, CA; Modesto, CA; Montgomery, AL; Napa County, CA; Niagara Falls, NY; Norfolk, VA; Norristown, PA; Oklahoma City, OK; Omaha, NE; Orlando, FL; Palm Bay, FL; Philadelphia, PA; Philadelphia County, PA; Providence, RI; Raleigh, NC; Richmond, VA; Sacramento, CA; Saint Louis, MO; Saint Petersburg, FL; San Bernardino, CA; San Francisco County, CA; San Jose, CA; Santa Clara County, CA; Santa Cruz County, CA; Sarasota, FL; Shreveport, LA; Solano County, CA; Tallahassee, FL; Tampa, FL; Tulsa, OK; Wichita, KS; Winston Salem, NC; Winter Park, FL

Stay tuned for even more high-resolution imagery from around the world on Bing Maps.

- Bing Maps Team

Supporting Inner Source with Forks

We’re very excited to announce that we’ve added the ability to fork Git repositories hosted in Visual Studio Team Services. If you work on open source projects, then you’re probably already familiar with repository forks. A fork takes a Git repository and creates a duplicate copy of it on Visual Studio Team Services.

Windows Community Standup on November 29th, 2017


Kevin Gallo is hosting the next Windows Community Standup on November 29th, 2017 at 10:00am PST on Channel 9 with Seth Juarez!

Kevin will be answering questions we didn’t get to from the Windows Developer Day on October 10th. While we’re going through the list, we will be taking live questions too.

Windows community standup is a monthly online broadcast where we provide additional transparency on what we are building out, and why we are excited. As always, we welcome your feedback on what could be done better.

Once again, we can’t wait to see you at 10:00am PST on November 29th, 2017 over at https://channel9.msdn.com.

The post Windows Community Standup on November 29th, 2017 appeared first on Building Apps for Windows.

Orchard Core Beta 1 released


This is a guest post by Sebastien Ros on behalf of the Orchard community

Two years ago, the Orchard community started developing Orchard on .NET Core. After 1,500 commits, 297,000 lines of code, and 127 projects, we think it’s time to release a public version, namely Orchard Core Beta 1.

What is Orchard Core?

If you know what Orchard and .NET Core are, then it might seem obvious: Orchard Core is a redevelopment of Orchard on ASP.NET Core.

Orchard Core consists of two different targets:

  • Orchard Core Framework: An application framework for building modular, multi-tenant applications on ASP.NET Core.
  • Orchard Core CMS: A Web Content Management System (CMS) built on top of the Orchard Core Framework.

It’s important to note the differences between the framework and the CMS. Some developers who want to develop SaaS applications will only be interested in the modular framework. Others who want to build administrable websites will focus on the CMS and build modules to enhance their sites or the whole ecosystem.

Beta

Quoting Jeff Atwood on https://blog.codinghorror.com/alpha-beta-and-sometimes-gamma/:

“The software is complete enough for external testing — that is, by groups outside the organization or community that developed the software. Beta software is usually feature complete, but may have known limitations or bugs. Betas are either closed (private) and limited to a specific set of users, or they can be open to the general public.”

It means we feel confident that developers can start building applications and websites using the current state of development. There are bugs, limitations and there will be breaking changes, but the feedback has been strong enough that we think it’s time to show you what we have accomplished so far.

Building Software as a Service (SaaS) solutions with the Orchard Core Framework

It’s very important to understand the Orchard Core Framework is distributed independently from the CMS on nuget.org. We’ve made some sample applications on https://github.com/OrchardCMS/OrchardCore.Samples that will guide you on how to build modular and multi-tenant applications using just Orchard Core Framework without any of the CMS specific features.

One of our goals is to enable community-based ecosystems of hosted applications which can be extended with modules, like e-commerce systems, blog engines and more. The Orchard Core Framework enables a modular environment that allows different teams to work on separate parts of an application and make components reusable across projects.

What’s new in Orchard Core CMS

Orchard Core CMS is a complete rewrite of Orchard CMS on ASP.NET Core. It’s not just a port: we wanted to improve performance drastically and align as closely as possible with the development models of ASP.NET Core.

  • Performance. This might be the most obvious change when you start using Orchard Core CMS. It’s extremely fast for a CMS, so fast that we haven’t even needed to work on an output cache module yet. To give you an idea, without caching, Orchard Core CMS is around 20 times faster than the previous version.
  • Portable. You can now develop and deploy Orchard Core CMS on Windows, Linux and macOS. We also have Docker images ready for use.
  • Document database abstraction. Orchard Core CMS still requires a relational database, and is compatible with SQL Server, MySQL, PostgreSQL and SQLite, but it’s now using a document abstraction (YesSql) that provides a document database API to store and query documents. This is a much better approach for CMS systems and helps performance significantly.
  • NuGet Packages. Modules and themes are now shared as NuGet packages. Creating a new website with Orchard Core CMS is actually as simple as referencing a single meta package from the NuGet gallery. It also means that updating to a newer version only involves updating the version number of this package.
  • Live preview. When editing a content item, you can now see live how it will look on your site, even before saving your content. And it also works for templates, where you can browse any page to inspect the impact of a change on templates as you type it.
  • Liquid templates support. Editors can safely change the HTML templates with the Liquid template language. It was chosen as it’s both very well documented (Jekyll, Shopify, …) and secure.
  • Custom queries. We wanted to provide a way for developers to access all their data as simply as possible. We created a module that lets you create custom ad-hoc SQL and Lucene queries that can be reused to display custom content, or exposed as API endpoints. You can use it to create efficient queries, or expose your data to SPA applications.
  • Recipes. Recipes are scripts that can contain content and metadata to build a website. You can now include binary files, and even use them to deploy your sites remotely from a staging to a production environment for instance. They can also be part of NuGet Packages, allowing you to ship predefined websites.
  • Scalability. Because Orchard Core is a multi-tenant system, you can host as many websites as you want with a single deployment. A typical cloud machine can then host thousands of sites in parallel, with database, content, theme and user isolation.
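As a sketch of the single-meta-package setup described above, a minimal project file for a new CMS site might look like the following. The package name and version shown are illustrative of the Beta 1 time frame; check nuget.org for the current values:

```xml
<Project Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <TargetFramework>netcoreapp2.0</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <!-- A single meta package pulls in the CMS and its default modules. -->
    <PackageReference Include="OrchardCore.Application.Cms.Targets"
                      Version="1.0.0-beta1-*" />
  </ItemGroup>
</Project>
```

Upgrading the site then amounts to bumping that one `Version` attribute, which is the point of distributing modules and themes as NuGet packages.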

Resources

Development plan

The Orchard Core source code is available on GitHub.

There are still many important pieces to add and you might want to check our roadmap, but it’s also the best time to jump into the project and start contributing new modules, themes, improvements, or just ideas.

Feel free to drop on our dedicated Gitter chat and ask questions.

Windows 10 SDK Preview Build 17040 now available


Today, we released a new Windows 10 Preview Build of the SDK to be used in conjunction with Windows 10 Insider Preview (Build 17040 or greater). The Preview SDK Build 17040 contains bug fixes and in-development changes to the API surface area.

The Preview SDK can be downloaded from the developer section on Windows Insider.

For feedback and updates to the known issues, please see the developer forum. For new developer feature requests, head over to our Windows Platform UserVoice.

Things to note:

  • This build works in conjunction with previously released SDKs and Visual Studio 2017. You can install this SDK and still also continue to submit your apps that target Windows 10 Creators build or earlier to the store.
  • The Windows SDK will now formally only be supported by Visual Studio 2017 and greater. You can download Visual Studio 2017 here.

Known Issues

  • “All tests run with Windows App Certification Kit will fail.  During installation, please uncheck Windows App Certification Kit”

What’s New:

  • C++/WinRT Now Available:
    The C++/WinRT headers and cppwinrt compiler (cppwinrt.exe) are now included in the Windows SDK. The compiler comes in handy if you need to consume a third-party WinRT component or if you need to author your own WinRT components with C++/WinRT. The easiest way to get working with it after installing the Windows Insider Preview SDK is to start the Visual Studio Developer Command Prompt and run the compiler in that environment. Authoring support is currently experimental and subject to change. Stay tuned as we will publish more detailed instructions on how to use the compiler in the coming week. The ModernCPP blog has a deeper dive into the CppWinRT compiler. Please give us feedback by creating an issue at: https://github.com/microsoft/cppwinrt.

Breaking Changes

New MIDL keywords.

As a part of the “modernizing IDL” effort, several new keywords are added to the midlrt tool. These new keywords will cause build breaks if they are encountered in IDL files.

The new keywords are:

  • event
  • set
  • get
  • partial
  • unsealed
  • overridable
  • protected
  • importwinmd

If any of these keywords is used as an identifier, it will generate a build failure indicating a syntax error.

The error will be similar to:

1 >d:ossrconecorecomcombaseunittestastatestserverstestserver6idlremreleasetest.idl(12) : error MIDL2025 : [msg]syntax error [context]: expecting a declarator or * near “)”

To fix this, add an “@” prefix in front of the offending identifier. That causes MIDL to treat the element as an identifier instead of a keyword.
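For example, assuming a hypothetical interface that happens to use `get` as a method name, the fix looks like this:

```
// Before: 'get' is now a MIDL keyword, so this declaration breaks the build.
HRESULT get([out] INT32* value);

// After: the '@' prefix makes MIDL treat 'get' as a plain identifier.
HRESULT @get([out] INT32* value);
```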

API Updates and Additions

When targeting new APIs, consider writing your app to be adaptive in order to run correctly on the widest number of Windows 10 devices. Please see Dynamically detecting features with API contracts (10 by 10) for more information.

The following APIs have been added to the platform since the release of 16299.


namespace Windows.ApplicationModel {
  public enum StartupTaskState {
    EnabledByPolicy = 4,
  }
}
namespace Windows.ApplicationModel.Background {
  public sealed class MobileBroadbandPcoDataChangeTrigger : IBackgroundTrigger
  public sealed class TetheringEntitlementCheckTrigger : IBackgroundTrigger
}
namespace Windows.ApplicationModel.Calls {
  public enum PhoneCallMedia {
    AudioAndRealTimeText = 2,
  }
  public sealed class VoipCallCoordinator {
    VoipPhoneCall RequestNewAppInitiatedCall(string context, string contactName, string contactNumber, string serviceName, VoipPhoneCallMedia media);
    VoipPhoneCall RequestNewIncomingCall(string context, string contactName, string contactNumber, Uri contactImage, string serviceName, Uri brandingImage, string callDetails, Uri ringtone, VoipPhoneCallMedia media, TimeSpan ringTimeout, string contactRemoteId);
  }
  public sealed class VoipPhoneCall {
    void NotifyCallAccepted(VoipPhoneCallMedia media);
  }
}
namespace Windows.ApplicationModel.Chat {
  public sealed class RcsManagerChangedEventArgs
  public enum RcsManagerChangeType
  public sealed class RcsNotificationManager
}
namespace Windows.ApplicationModel.UserActivities {
  public sealed class UserActivity {
    public UserActivity(string activityId);
  }
  public sealed class UserActivityChannel {
    public static void DisableAutoSessionCreation();
  }
  public sealed class UserActivityVisualElements {
    string AttributionDisplayText { get; set; }
  }
}
namespace Windows.Devices.PointOfService {
  public sealed class BarcodeScannerReport {
    public BarcodeScannerReport(uint scanDataType, IBuffer scanData, IBuffer scanDataLabel);
  }
  public sealed class ClaimedBarcodeScanner : IClosable {
    bool IsVideoPreviewShownOnEnable { get; set; }
    void HideVideoPreview();
    IAsyncOperation<bool> ShowVideoPreviewAsync();
  }
  public sealed class UnifiedPosErrorData {
    public UnifiedPosErrorData(string message, UnifiedPosErrorSeverity severity, UnifiedPosErrorReason reason, uint extendedReason);
  }
}
namespace Windows.Globalization {
  public static class ApplicationLanguages {
    public static IVectorView<string> GetLanguagesForUser(User user);
  }
  public sealed class Language {
    LanguageLayoutDirection LayoutDirection { get; }
  }
  public enum LanguageLayoutDirection
}
namespace Windows.Graphics.Imaging {
  public enum BitmapPixelFormat {
    P010 = 104,
  }
}
namespace Windows.Management.Deployment {
  public sealed class PackageManager {
    IAsyncOperationWithProgress<DeploymentResult, DeploymentProgress> RequestAddPackageAsync(Uri packageUri, IIterable<Uri> dependencyPackageUris, DeploymentOptions deploymentOptions, PackageVolume targetVolume, IIterable<string> optionalPackageFamilyNames, IIterable<Uri> relatedPackageUris, IIterable<Uri> packageUrisToInstall);
  }
}
namespace Windows.Media.Audio {
  public sealed class AudioGraph : IClosable {
    IAsyncOperation<CreateMediaSourceAudioInputNodeResult> CreateMediaSourceAudioInputNodeAsync(MediaSource mediaSource);
    IAsyncOperation<CreateMediaSourceAudioInputNodeResult> CreateMediaSourceAudioInputNodeAsync(MediaSource mediaSource, AudioNodeEmitter emitter);
  }
  public sealed class AudioGraphSettings {
    double MaxPlaybackSpeedFactor { get; set; }
  }
  public sealed class AudioStateMonitor
  public sealed class CreateMediaSourceAudioInputNodeResult
  public sealed class MediaSourceAudioInputNode : IAudioInputNode, IAudioInputNode2, IAudioNode, IClosable
  public enum MediaSourceAudioInputNodeCreationStatus
}
namespace Windows.Media.Capture {
  public sealed class CapturedFrame : IClosable, IContentTypeProvider, IInputStream, IOutputStream, IRandomAccessStream, IRandomAccessStreamWithContentType {
    BitmapPropertySet BitmapProperties { get; }
    CapturedFrameControlValues ControlValues { get; }
  }
  public enum KnownVideoProfile {
    HdrWithWcgPhoto = 8,
    HdrWithWcgVideo = 7,
    HighFrameRate = 5,
    VariablePhotoSequence = 6,
    VideoHdr8 = 9,
  }
  public sealed class MediaCaptureSettings {
    IDirect3DDevice Direct3D11Device { get; }
  }
  public sealed class MediaCaptureVideoProfile {
    IVectorView<MediaFrameSourceInfo> FrameSourceInfos { get; }
    IMapView<Guid, object> Properties { get; }
  }
  public sealed class MediaCaptureVideoProfileMediaDescription {
    IMapView<Guid, object> Properties { get; }
    string Subtype { get; }
  }
}
namespace Windows.Media.Capture.Frames {
  public sealed class AudioMediaFrame
  public sealed class MediaFrameFormat {
    AudioEncodingProperties AudioEncodingProperties { get; }
  }
  public sealed class MediaFrameReference : IClosable {
    AudioMediaFrame AudioMediaFrame { get; }
  }
  public sealed class MediaFrameSourceController {
    AudioDeviceController AudioDeviceController { get; }
  }
  public sealed class MediaFrameSourceInfo {
    string ProfileId { get; }
    IVectorView<MediaCaptureVideoProfileMediaDescription> VideoProfileMediaDescription { get; }
  }
  public enum MediaFrameSourceKind {
    Audio = 4,
    Image = 5,
  }
}
namespace Windows.Media.Core {
  public sealed class MediaBindingEventArgs {
    void SetDownloadOperation(DownloadOperation downloadOperation);
  }
  public sealed class MediaSource : IClosable, IMediaPlaybackSource {
    DownloadOperation DownloadOperation { get; }
    public static MediaSource CreateFromDownloadOperation(DownloadOperation downloadOperation);
  }
}
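The new background-transfer integration in `Windows.Media.Core` above allows a `MediaSource` to be bound directly to an in-progress `DownloadOperation`. A hypothetical usage sketch (assuming a UWP app targeting this preview build; `PlayWhileDownloadingAsync`, `contentUri`, and `destination` are illustrative names, not from the post):

```csharp
using System;
using System.Threading.Tasks;
using Windows.Media.Core;
using Windows.Media.Playback;
using Windows.Networking.BackgroundTransfer;
using Windows.Storage;

// Sketch: start playback of a file while it is still downloading by
// creating a MediaSource over the DownloadOperation (new in this preview).
async Task PlayWhileDownloadingAsync(Uri contentUri, IStorageFile destination)
{
    var downloader = new BackgroundDownloader();
    DownloadOperation download = downloader.CreateDownload(contentUri, destination);

    // New API surface: MediaSource.CreateFromDownloadOperation.
    MediaSource source = MediaSource.CreateFromDownloadOperation(download);

    var player = new MediaPlayer();
    player.Source = source;
    player.Play();

    // Kick off (and await) the actual transfer.
    await download.StartAsync();
}
```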
namespace Windows.Media.Devices {
  public sealed class VideoDeviceController : IMediaDeviceController {
    VideoTemporalDenoisingControl VideoTemporalDenoisingControl { get; }
  }
  public sealed class VideoTemporalDenoisingControl
  public enum VideoTemporalDenoisingMode
}
namespace Windows.Media.DialProtocol {
  public sealed class DialReceiverApp {
    IAsyncOperation<string> GetUniqueDeviceNameAsync();
  }
}
namespace Windows.Media.MediaProperties {
  public static class MediaEncodingSubtypes {
    public static string P010 { get; }
  }
  public enum MediaPixelFormat {
    P010 = 2,
  }
}
namespace Windows.Media.Playback {
  public sealed class MediaPlaybackSession {
    MediaRotation PlaybackRotation { get; set; }
    MediaPlaybackSessionOutputDegradationPolicyState GetOutputDegradationPolicyState();
  }
  public sealed class MediaPlaybackSessionOutputDegradationPolicyState
  public enum MediaPlaybackSessionVideoConstrictionReason
}
namespace Windows.Media.Streaming.Adaptive {
  public sealed class AdaptiveMediaSourceDiagnosticAvailableEventArgs {
    string ResourceContentType { get; }
    IReference<TimeSpan> ResourceDuration { get; }
  }
  public sealed class AdaptiveMediaSourceDownloadCompletedEventArgs {
    string ResourceContentType { get; }
    IReference<TimeSpan> ResourceDuration { get; }
  }
  public sealed class AdaptiveMediaSourceDownloadFailedEventArgs {
    string ResourceContentType { get; }
    IReference<TimeSpan> ResourceDuration { get; }
  }
  public sealed class AdaptiveMediaSourceDownloadRequestedEventArgs {
    string ResourceContentType { get; }
    IReference<TimeSpan> ResourceDuration { get; }
  }
}
namespace Windows.Networking.BackgroundTransfer {
  public sealed class DownloadOperation : IBackgroundTransferOperation, IBackgroundTransferOperationPriority {
    void MakeCurrentInTransferGroup();
  }
  public sealed class UploadOperation : IBackgroundTransferOperation, IBackgroundTransferOperationPriority {
    void MakeCurrentInTransferGroup();
  }
}
namespace Windows.Networking.Connectivity {
  public sealed class CellularApnContext {
    string ProfileName { get; set; }
  }
  public sealed class ConnectionProfileFilter {
    IReference<Guid> PurposeGuid { get; set; }
  }
  public sealed class WwanConnectionProfileDetails {
    WwanNetworkIPKind IPKind { get; }
    IVectorView<Guid> PurposeGuids { get; }
  }
  public enum WwanNetworkIPKind
}
namespace Windows.Networking.NetworkOperators {
  public sealed class MobileBroadbandAntennaSar {
    public MobileBroadbandAntennaSar(int antennaIndex, int sarBackoffIndex);
  }
  public sealed class MobileBroadbandModem {
    IAsyncOperation<MobileBroadbandPco> TryGetPcoAsync();
  }
  public sealed class MobileBroadbandModemIsolation
  public sealed class MobileBroadbandPco
  public sealed class MobileBroadbandPcoDataChangeTriggerDetails
  public sealed class TetheringEntitlementCheckTriggerDetails
}
namespace Windows.Networking.Sockets {
  public sealed class ServerMessageWebSocket : IClosable
  public sealed class ServerMessageWebSocketControl
  public sealed class ServerMessageWebSocketInformation
  public sealed class ServerStreamWebSocket : IClosable
  public sealed class ServerStreamWebSocketInformation
}
namespace Windows.Networking.Vpn {
  public sealed class VpnNativeProfile : IVpnProfile {
    string IDi { get; set; }
    VpnPayloadIdType IdiType { get; set; }
    string IDr { get; set; }
    VpnPayloadIdType IdrType { get; set; }
    bool IsImsConfig { get; set; }
    string PCscf { get; }
  }
  public enum VpnPayloadIdType
}
namespace Windows.Security.Authentication.Identity.Provider {
  public enum SecondaryAuthenticationFactorAuthenticationMessage {
    CanceledByUser = 22,
    CenterHand = 23,
    ConnectionRequired = 20,
    DeviceUnavaliable = 28,
    MoveHandCloser = 24,
    MoveHandFarther = 25,
    PlaceHandAbove = 26,
    RecognitionFailed = 27,
    TimeLimitExceeded = 21,
  }
}
namespace Windows.Services.Maps {
  public sealed class MapRouteDrivingOptions {
    IReference<DateTime> DepartureTime { get; set; }
  }
}
namespace Windows.System {
  public sealed class AppActivationResult
  public sealed class AppDiagnosticInfo {
    IAsyncOperation<AppActivationResult> ActivateAsync();
  }
  public sealed class AppResourceGroupInfo {
    IAsyncOperation<bool> TryResumeAsync();
    IAsyncOperation<bool> TrySuspendAsync();
    IAsyncOperation<bool> TryTerminateAsync();
  }
  public sealed class User {
    public static User GetDefault();
  }
  public enum UserType {
    SystemManaged = 4,
  }
}
namespace Windows.System.Diagnostics {
  public sealed class DiagnosticInvoker {
    IAsyncOperationWithProgress<DiagnosticActionResult, DiagnosticActionState> RunDiagnosticActionFromStringAsync(string context);
  }
}
namespace Windows.System.Diagnostics.DevicePortal {
  public sealed class DevicePortalConnection {
    ServerMessageWebSocket GetServerMessageWebSocketForRequest(HttpRequestMessage request);
    ServerMessageWebSocket GetServerMessageWebSocketForRequest(HttpRequestMessage request, SocketMessageType messageType, string protocol);
    ServerMessageWebSocket GetServerMessageWebSocketForRequest(HttpRequestMessage request, SocketMessageType messageType, string protocol, uint outboundBufferSizeInBytes, uint maxMessageSize, MessageWebSocketReceiveMode receiveMode);
    ServerStreamWebSocket GetServerStreamWebSocketForRequest(HttpRequestMessage request);
    ServerStreamWebSocket GetServerStreamWebSocketForRequest(HttpRequestMessage request, string protocol, uint outboundBufferSizeInBytes, bool noDelay);
  }
  public sealed class DevicePortalConnectionRequestReceivedEventArgs {
    bool IsWebSocketUpgradeRequest { get; }
    IVectorView<string> WebSocketProtocolsRequested { get; }
    Deferral GetDeferral();
  }
}
namespace Windows.System.RemoteSystems {
  public static class KnownRemoteSystemCapabilities {
    public static string NearShare { get; }
  }
}
namespace Windows.System.UserProfile {
  public static class GlobalizationPreferences {
    public static GlobalizationPreferencesForUser GetForUser(User user);
  }
  public sealed class GlobalizationPreferencesForUser
}
namespace Windows.UI.ApplicationSettings {
  public sealed class AccountsSettingsPane {
    public static IAsyncAction ShowAddAccountForUserAsync(User user);
    public static IAsyncAction ShowManageAccountsForUserAsync(User user);
  }
  public sealed class AccountsSettingsPaneCommandsRequestedEventArgs {
    User User { get; }
  }
}
namespace Windows.UI.Composition {
  public sealed class BounceScalarNaturalMotionAnimation : ScalarNaturalMotionAnimation
  public sealed class BounceVector2NaturalMotionAnimation : Vector2NaturalMotionAnimation
  public sealed class BounceVector3NaturalMotionAnimation : Vector3NaturalMotionAnimation
  public class CompositionLight : CompositionObject {
    bool IsEnabled { get; set; }
  }
  public sealed class Compositor : IClosable {
    string Comment { get; set; }
    BounceScalarNaturalMotionAnimation CreateBounceScalarAnimation();
    BounceVector2NaturalMotionAnimation CreateBounceVector2Animation();
    BounceVector3NaturalMotionAnimation CreateBounceVector3Animation();
  }
  public sealed class PointLight : CompositionLight {
    Vector2 AttenuationCutoff { get; set; }
  }
  public sealed class SpotLight : CompositionLight {
    Vector2 AttenuationCutoff { get; set; }
  }
}
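The new bounce natural-motion animations in `Windows.UI.Composition` above can be driven from a XAML element's backing visual. A minimal sketch, assuming a UWP XAML app; `BounceTo` and `targetElement` are illustrative names, and `FinalValue` is assumed from the existing `Vector3NaturalMotionAnimation` base type:

```csharp
using System.Numerics;
using Windows.UI.Composition;
using Windows.UI.Xaml.Hosting;

// Sketch: animate an element's offset with the new bounce motion.
void BounceTo(Windows.UI.Xaml.UIElement targetElement, Vector3 finalOffset)
{
    Visual visual = ElementCompositionPreview.GetElementVisual(targetElement);
    Compositor compositor = visual.Compositor;

    // New API surface: Compositor.CreateBounceVector3Animation.
    BounceVector3NaturalMotionAnimation bounce =
        compositor.CreateBounceVector3Animation();
    bounce.FinalValue = finalOffset;

    visual.StartAnimation("Offset", bounce);
}
```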
namespace Windows.UI.Composition.Core {
  public sealed class CompositorController : IClosable
}
namespace Windows.UI.Composition.Desktop {
  public sealed class HwndTarget : CompositionTarget
}
namespace Windows.UI.Xaml {
  public sealed class BringIntoViewOptions {
    double HorizontalAlignmentRatio { get; set; }
    double HorizontalOffset { get; set; }
    double VerticalAlignmentRatio { get; set; }
    double VerticalOffset { get; set; }
  }
  public sealed class BringIntoViewRequestedEventArgs : RoutedEventArgs
  public sealed class EffectiveViewportChangedEventArgs
  public enum FocusVisualKind {
    Reveal = 2,
  }
  public class FrameworkElement : UIElement {
    event TypedEventHandler<FrameworkElement, EffectiveViewportChangedEventArgs> EffectiveViewportChanged;
    void InvalidateViewport();
    virtual bool IsViewport();
  }
  public class UIElement : DependencyObject {
    public static RoutedEvent BringIntoViewRequestedEvent { get; }
    public static RoutedEvent ContextRequestedEvent { get; }
    KeyboardAcceleratorPlacementMode KeyboardAcceleratorPlacementMode { get; set; }
    public static DependencyProperty KeyboardAcceleratorPlacementModeProperty { get; }
    DependencyObject KeyboardAcceleratorToolTipTarget { get; set; }
    public static DependencyProperty KeyboardAcceleratorToolTipTargetProperty { get; }
    DependencyObject KeyTipTarget { get; set; }
    public static DependencyProperty KeyTipTargetProperty { get; }
    event TypedEventHandler<UIElement, BringIntoViewRequestedEventArgs> BringIntoViewRequested;
    virtual void OnBringIntoViewRequested(BringIntoViewRequestedEventArgs e);
    virtual void OnKeyboardAcceleratorInvoked(KeyboardAcceleratorInvokedEventArgs args);
  }
}
namespace Windows.UI.Xaml.Automation.Peers {
  public sealed class AutoSuggestBoxAutomationPeer : FrameworkElementAutomationPeer, IInvokeProvider {
    void Invoke();
  }
  public class CalendarDatePickerAutomationPeer : FrameworkElementAutomationPeer, IInvokeProvider, IValueProvider
}
namespace Windows.UI.Xaml.Controls {
  public class AppBarButton : Button, ICommandBarElement, ICommandBarElement2 {
    string KeyboardAcceleratorText { get; set; }
    public static DependencyProperty KeyboardAcceleratorTextProperty { get; }
    AppBarButtonTemplateSettings TemplateSettings { get; }
  }
  public class AppBarToggleButton : ToggleButton, ICommandBarElement, ICommandBarElement2 {
    string KeyboardAcceleratorText { get; set; }
    public static DependencyProperty KeyboardAcceleratorTextProperty { get; }
    AppBarToggleButtonTemplateSettings TemplateSettings { get; }
  }
  public class MenuFlyoutItem : MenuFlyoutItemBase {
    string KeyboardAcceleratorText { get; set; }
    public static DependencyProperty KeyboardAcceleratorTextProperty { get; }
    MenuFlyoutItemTemplateSettings TemplateSettings { get; }
  }
  public class NavigationView : ContentControl {
    string PaneTitle { get; set; }
    public static DependencyProperty PaneTitleProperty { get; }
    event TypedEventHandler<NavigationView, object> PaneClosed;
    event TypedEventHandler<NavigationView, NavigationViewPaneClosingEventArgs> PaneClosing;
    event TypedEventHandler<NavigationView, object> PaneOpened;
    event TypedEventHandler<NavigationView, object> PaneOpening;
  }
  public sealed class NavigationViewPaneClosingEventArgs
  public sealed class ScrollViewer : ContentControl {
    bool IsResponsiveToOcclusions { get; set; }
    public static DependencyProperty IsResponsiveToOcclusionsProperty { get; }
  }
  public enum WebViewPermissionType {
    Screen = 5,
  }
}
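The new `NavigationView` pane members above (`PaneTitle` plus the `PaneOpened`/`PaneOpening`/`PaneClosed`/`PaneClosing` events) can be wired up as in the following sketch. `HookPaneEvents` and `IsOperationInProgress` are hypothetical names, and the `Cancel` property on `NavigationViewPaneClosingEventArgs` is assumed from the usual `*ClosingEventArgs` pattern:

```csharp
using Windows.UI.Xaml.Controls;

// Sketch: react to the new pane lifecycle events and veto a close
// while some app-defined operation is still running.
void HookPaneEvents(NavigationView navView, Func<bool> isOperationInProgress)
{
    navView.PaneTitle = "Contoso";  // new PaneTitle property

    navView.PaneOpened += (sender, e) => { /* pane finished opening */ };
    navView.PaneClosed += (sender, e) => { /* pane finished closing */ };

    navView.PaneClosing += (sender, args) =>
    {
        // Assumed: NavigationViewPaneClosingEventArgs exposes Cancel.
        if (isOperationInProgress())
            args.Cancel = true;
    };
}
```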
namespace Windows.UI.Xaml.Controls.Maps {
  public sealed class MapControl : Control {
    string Region { get; set; }
    public static DependencyProperty RegionProperty { get; }
  }
  public class MapElement : DependencyObject {
    bool IsEnabled { get; set; }
    public static DependencyProperty IsEnabledProperty { get; }
  }
}
namespace Windows.UI.Xaml.Controls.Primitives {
  public sealed class AppBarButtonTemplateSettings : DependencyObject
  public sealed class AppBarToggleButtonTemplateSettings : DependencyObject
  public class ListViewItemPresenter : ContentPresenter {
    bool DisableTilt { get; set; }
    public static DependencyProperty DisableTiltProperty { get; }
  }
  public sealed class MenuFlyoutItemTemplateSettings : DependencyObject
}
namespace Windows.UI.Xaml.Input {
  public sealed class GettingFocusEventArgs : RoutedEventArgs {
    bool TryCancel();
    bool TrySetNewFocusedElement(DependencyObject element);
  }
  public sealed class KeyboardAcceleratorInvokedEventArgs {
    KeyboardAccelerator KeyboardAccelerator { get; }
  }
  public enum KeyboardAcceleratorPlacementMode
  public sealed class LosingFocusEventArgs : RoutedEventArgs {
    bool TryCancel();
    bool TrySetNewFocusedElement(DependencyObject element);
  }
}

The post Windows 10 SDK Preview Build 17040 now available appeared first on Building Apps for Windows.
