Microsoft R Open 3.3.3 now available

Microsoft R Open (MRO), Microsoft's enhanced distribution of open source R, has been upgraded to version 3.3.3, and is now available for download for Windows, Mac, and Linux. This update upgrades the R language engine to R 3.3.3, upgrades the installer, and updates the bundled packages.

R 3.3.3 makes just a few minor fixes compared to R 3.3.2 (see the full list of changes here), so you shouldn't encounter any compatibility issues when upgrading from MRO 3.3.2. For CRAN packages, MRO 3.3.3 points to a CRAN snapshot taken on March 15, 2017, but as always, you can use the built-in checkpoint package to access packages from an earlier date (for compatibility) or a later date (to access new and updated packages).
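
If you haven't used checkpoint before, here's a minimal sketch (the date shown is MRO 3.3.3's default snapshot; any valid MRAN snapshot date works):

library(checkpoint)

# Resolve package installs and library() calls against the CRAN snapshot
# of a given date; 2017-03-15 is the snapshot MRO 3.3.3 points to by default.
checkpoint("2017-03-15")

library(ggplot2)  # now loads the version from that snapshot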

MRO is supported on Windows, Mac and Linux. MRO 3.3.3 is 100% compatible with R 3.3.3, and you can use it with any of the 10,000+ packages available on CRAN. Here are some highlights of new packages released since the last MRO update.

We hope you find Microsoft R Open useful, and if you have any comments or questions please visit the Microsoft R Open forum. To download Microsoft R Open, simply follow the link below.

MRAN: Download Microsoft R Open

 


Azure Container Registry now generally available

Companies of all sizes are embracing containers as a fast and portable way to lift, shift and modernize into cloud-native apps. As part of this process, customers need a way to store and manage images for all types of container deployments. In November, we announced the preview of Azure Container Registry, which enables developers to create and maintain Azure container registries to store and manage private Docker container images.

Today, we're announcing the general availability of Azure Container Registry supporting a network-close, private registry for Linux and Windows container images. Azure Container Registry integrates well with orchestrators hosted in Azure Container Service, including Docker Swarm, Kubernetes and DC/OS as well as other Azure Services including Service Fabric and Azure App Services. Customers can benefit from using familiar tooling capable of working with the open source Docker Registry v2. Learn more by watching this Azure Container Registry GA video.
Building on the November Preview, we’ve added the following features and capabilities:

  • Availability in 23 regions, with a global footprint (with more coming)
  • Repository, tag, and manifest listing in the Azure Portal
  • Dual admin passwords, enabling key rotation
  • Nested repositories
  • Azure CLI 2.0 support

Global Availability

Azure Container Registry is now available globally. As part of our general availability release, all features are now available in all regions.

The full list of supported regions is:

  • Australia East
  • Australia Southeast
  • Brazil South
  • Canada Central
  • Canada East
  • Central India
  • Central US
  • East US 2
  • East US
  • Japan East
  • Japan West
  • North Central US
  • North Europe
  • South Central US
  • South India
  • Southeast Asia
  • UK South
  • UK West
  • West Central US
  • West Europe
  • West US 2
  • West US

Multi-Arch Support

With the release of Windows Containers, we’re increasingly seeing customers who want both Windows and Linux images. While the Azure Container Registry supports both Windows and Linux images, Docker has added the ability to pull a single named image and have it resolve to the right OS version based on the host pulling it. Using multi-arch support, a customer can push both Windows- and Linux-based tags, and their development teams can write their dockerfiles using FROM contoso.com/aspnetcore:corpstandard. The Azure Container Registry multi-arch features will pull the appropriate image based on the host it’s called from.
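
As a quick illustration (using the image name from the example above), the same command then works from either OS:

# Resolves to the Windows image on a Windows host, the Linux image on a Linux host
docker pull contoso.com/aspnetcore:corpstandard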

Nested Repositories

Development teams often work in hierarchies and deploy solutions based on collections. The bikesharing team may have a collection of images they wish to group together (bikesharing/web, bikesharing/api), while the headtrax team has their collection (headtrax/web, headtrax/api, headtrax/admin), with a set of corporate images available to all members (aspnet:corpstandard).
The Azure Container Registry supports nested repos to enable teams to group their repos and images to match their development.
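
For example, a team might tag and push into slash-separated repositories like this (the registry name here is hypothetical):

# Tag local images into nested repositories, then push them
docker tag bikesharing-web contosoregistry.azurecr.io/bikesharing/web:v1
docker push contosoregistry.azurecr.io/bikesharing/web:v1
docker tag headtrax-api contosoregistry.azurecr.io/headtrax/api:v1
docker push contosoregistry.azurecr.io/headtrax/api:v1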

Repositories, tags, manifests

Customers have requested visibility into the contents of their registry. With the GA release, you will now have an integrated experience in the Azure portal to view the repositories, images, tags and the contents of manifests associated with an image.

To view repositories and tags you’ve already created in your registry:

  1. Log in to the Azure Portal.
  2. Select "More Services" on the left-side panel.
  3. Search for "Container registries".
  4. Select the registry you want to inspect.
  5. On the left-hand side panel, select "Repositories".

The repositories blade will display a list of all the repositories (including nested repositories) that you have created, as well as the images that are stored in these repositories.

If you select a specific image, it will open up a "Tags" blade containing the tags associated with that image. Additionally, if you select a tag, you will have the ability to see the manifest for that image tag.


Improved passwords

We have also made improvements to registry admin accounts. While we recommend using a service principal as a best practice, we wanted to improve the safety of this alternative by providing key rotation. New container registries therefore have two admin passwords, both of which can be regenerated. Having two passwords lets you maintain connections: you can swap to the second password while you regenerate the first.

To regenerate passwords, go to the "Access Keys" section of a registry on which you have enabled an Admin user.
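
For example, a rough sketch of a zero-downtime rotation with the standard Docker CLI (the registry name is hypothetical):

# Move clients over to the second password...
docker login contosoregistry.azurecr.io -u contosoregistry -p <password2>

# ...then regenerate the first password from the Access Keys blade.
# Clients authenticated with password2 are unaffected.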


Summary

We hope you enjoy the new features and capabilities of container registries. If you have any comments, requests, or issues, you can reach out to us on StackOverflow or log issues at https://github.com/azure/acr/issues.

Spring Into DevOps on Radio TFS with Gopinath Chigakkagari

As part of the #SpringIntoDevOps series, Gopinath Chigakkagari, GPM of the Release Management team at Microsoft, joined the most recent episode of Radio TFS with MVPs Greg Duncan and Josh Garverick to talk about the latest news around Visual Studio Team Services and Team Foundation Server, as well as to dive into release management and DevOps in general. It's worth listening to for Gopi's explanation of pipelines in VSTS alone.

Gopinath Chigakkagari on RadioTFS

If podcasts are your thing, then don’t forget that Carl Franklin and Richard Campbell regularly talk about DevOps over on .NET Rocks including some great interviews as part of #SpringIntoDevOps. Richard also frequently covers DevOps topics over on his other podcast, RunAs Radio!

Microsoft + Docker – Investing in the future of your applications

This post was authored by the Microsoft and Docker teams.

Did you know when you combine Docker’s cross platform support of Linux and Windows containers and Microsoft cloud technologies, you get a comprehensive offering that can support virtually every enterprise workload?


One platform, one journey for all applications

Microsoft and Docker aim to provide a modern platform for developers and IT pros to build, ship, and run any application on-premises, in the cloud, or through service providers across both Windows and Linux operating systems. Together we are bringing container applications across platforms, integrating across our developer tools, the operating system, and cloud infrastructure to provide a seamless experience across the application environment from development to test and production.


Whether you host your workloads in private datacenters, the public cloud, or a hybrid of the two, Microsoft and Docker offer great end-to-end solutions or individual components from the developer’s keyboard to the cloud. Azure Container Service provides the simplest way to deploy your container orchestration environment, such as Docker Swarm, so your app teams can deploy their apps more quickly. Windows Server Containers are powered by the same Docker toolchain, so you use the same Docker tooling to build and run those containers as you do your Linux containers, with the tools you choose, including Eclipse, Visual Studio, Jenkins, and Visual Studio Team Services. Windows Server Containers help secure and modernize existing enterprise .NET and line-of-business server applications with little or no code changes. Package existing apps in containers to realize the benefit of a more agile DevOps model, then deploy on-premises, to any cloud, or in a hybrid model, reducing infrastructure and management costs for those applications as well.

See it in action @ DockerCon 2017

Come visit the Docker + Microsoft sessions at DockerCon, taking place in Austin, TX from April 17th–20th. Learn how to modernize traditional applications, as well as new technologies to help you build your next great application. You’ll hear customer success stories about achieving ROI targets and up to 80% cost savings through infrastructure consolidation and operational efficiencies with Docker Enterprise Edition (EE) and Azure.

Check out our sessions

  • Docker + Microsoft – Investing in the future of your applications on Tuesday, April 18th from 11:45am-12:25pm
  • Beyond – the path to Windows and Linux parity in Docker on Tuesday, April 18th from 2:00pm-2:40pm

There will also be hands-on labs for you to experience Docker on Windows. We’ll provision a Docker environment for you in Azure, and provide self-paced learning guides. You can learn more by reading Elton Stoneman’s blog on Docker + Microsoft sessions at DockerCon, from modernizing traditional apps like .NET to building new Windows Server Container apps.

Legacy web apps in the enterprise

Migrating legacy web apps to modern standards can be both costly and time consuming. IT departments are generally cost centers, and it makes sense for enterprises to want to maximize the ROI on their existing LOB apps. Many of these sites may continue to exist without being upgraded for a while yet, and it’s important to us that these apps do not block Windows customers as they adopt newer versions of Windows. This is why Windows 10 includes Internet Explorer 11 alongside Microsoft Edge, to provide a consistent and predictable level of compatibility with existing legacy applications.

In this blog post, we will discuss how Internet Explorer and Microsoft Edge can work together to support your legacy web apps, while still defaulting to the higher bar for security and modern experiences enabled by Microsoft Edge. Working with multiple browsers can be difficult, particularly if you have a substantial number of internal sites. To help manage this dual-browser experience, we are introducing a new web tool specifically targeted towards larger organizations: the Enterprise Mode Site List Portal.

The future of Internet Explorer

Naturally, this is a question we get quite frequently. With Microsoft Edge and the modern web representing the future, what will happen to Internet Explorer?

While we encourage everyone on Windows 10 to use Microsoft Edge—our modern web browser designed for faster, safer browsing—we are cognizant of the sizable investment that many of you have in legacy web apps. Our guidance to developers and IT administrators is simple. Upgrading web apps to modern standards is the best long-term solution. With that said, you can still use Internet Explorer 11 for backward compatibility and upgrade web apps on your own schedule.

Internet Explorer 11 supports Document modes and Enterprise Mode, which are essential tools for maintaining this backward compatibility. Internet Explorer, and the aforementioned tools, are considered components of the Windows operating system. They follow the Lifecycle Policy for the product on which they are installed. For Internet Explorer 11, this includes the lifespan of Windows 7, Windows 8.1, and Windows 10.

Cataloging your internal sites

Show of hands: who knows the exact number of internal sites and web apps your company has today? The answer to this is, of course, dependent on the size of your organization and many other factors. However, if we were in a large room of IT professionals, chances are there wouldn’t be many hands up.

As your organization grows, it’s only natural that the number of web apps should grow proportionally. It’s tough to have a firm grasp of what constitutes your “intranet”, in the non-networking sense of the word. This is an inherent problem that most will face when modernizing their web apps. In order to determine your dependency on legacy technologies, you first need to identify all the sites that must be tested, then learn their optimal configuration. There are a few ways you can go about this. If you attended our session at Microsoft Ignite in Atlanta last September, you should be familiar with these approaches.

Let’s go through them one-by-one:

Screen capture showing the F12 Developer Tools open to the "emulation" tab and configured to emulate "Internet Explorer 11"

F12 developer tools. The first method is by far the most manual approach. With the F12 developer tools in Internet Explorer 11, you can emulate any site with different Document modes and Enterprise Modes. Cycling through these different options will help you determine the appropriate compatibility setting. There’s some user training required to understand the technology behind the process, but fortunately little configuration is needed. One-by-one, you can build a list of sites and the legacy technologies they require. You can learn more about this approach here.

Screen capture showing an Enterprise Site Discovery report with an inventory of visited URLs.

Enterprise Site Discovery. The next approach is much more automated. Enterprise Site Discovery automatically collects inventory data on any set of computers you designate, effectively crowdsourcing the information you would learn from the F12 developer tools. Any time a user browses the web, data—such as the URL, domain, document mode, browser state reason, and number of visits—is captured. This information can be scoped to particular domains and zones for privacy. The more data you collect, the clearer a picture you will have. Over enough time and with enough devices, the list will begin to build itself with increasing accuracy. You can learn more about this approach here.

Screen capture showing the Windows Upgrade Analytics dashboard

Windows Upgrade Analytics. The final method is based on Enterprise Site Discovery, and is the most scalable solution. Windows Upgrade Analytics is a free service that helps IT departments easily analyze their environment and upgrade to Windows 10 through the Operations Management Suite. As a part of this solution, the same site discovery data is collected, which can be similarly scoped for privacy. Going one step further, the raw inventory data is automatically analyzed and snapshot reports, like the one pictured below, are generated. You can learn more about this approach here.

Now that we have all this site information, what do we do with it?

Dual-browser experience

Microsoft Edge and Internet Explorer 11 work better together on Windows 10. Based on the size of your legacy web app dependency, determined by the data collected above, there are several options from which you can choose to configure your enterprise browsing environment:

  • Use Microsoft Edge as your primary browser
  • Use Microsoft Edge as your primary browser and use Enterprise Mode to open sites in IE11 that use IE proprietary technologies
  • Use Microsoft Edge as your primary browser and open all intranet sites in IE11
  • Use IE11 as your primary browser and use Enterprise Mode to open sites in Microsoft Edge that use modern web technologies
  • Use IE11 as your primary browser

This blog post goes into more detail on when to use which option, and which option is best for you.

Now that we have a catalog of legacy web apps, let’s define an experience where you can use a modern browser but still maintain compatibility with your older apps.

Managing your Enterprise Mode Site List

The Enterprise Mode Site List is an XML document where you can specify a list of sites, their compat mode, and their intended browser. With this schema, you can automatically launch a page in a particular browser. In the case of IE11, that page can be launched in a particular compat mode to always render correctly. You can also restrict IE11 to only the legacy web apps that need it, automatically sending sites not included in the Enterprise Mode Site List to Microsoft Edge, as of the Anniversary Update last year. Once implemented, users can easily view this site list by visiting “about:compat” in either browser.
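
As a rough sketch of what a site list in the v.2 schema looks like (the URLs and the version attribute here are illustrative):

<site-list version="1">
  <!-- Open a legacy app in IE11 with an IE8 Enterprise Mode document mode -->
  <site url="legacyapp.contoso.com">
    <compat-mode>IE8Enterprise</compat-mode>
    <open-in>IE11</open-in>
  </site>
  <!-- Send a modern app to Microsoft Edge -->
  <site url="modernapp.contoso.com">
    <open-in>MSEdge</open-in>
  </site>
</site-list>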

There are equivalent Enterprise Mode Site List policies for both Microsoft Edge and Internet Explorer 11. The former list is used to determine which sites should open in IE11; the latter list is used to both (1) determine with which compat mode to load a site, and (2) determine which sites should open in Microsoft Edge. We recommend using one list for both browsers, where each policy points to the same XML file location.

The most straightforward way to build and manage your Enterprise Mode Site List is with any generic text editor. However, we’ve provided a couple of tools to make that process even easier.

Enterprise Mode Site List Manager

The first tool is called the Enterprise Mode Site List Manager. There are two versions: one for the old, v.1 XML schema, and one for the new, v.2 XML schema. This tool helps you create error-free XML documents, with simple n+1 versioning and URL verification. If your site list is of a relatively small size, this is the easiest way to manage your Enterprise Mode Site List.

On the other hand, if your site list is relatively large, you may encounter some difficulties with the client tool. It is not very scalable; it is designed for a single user. If you have more than one user managing your site list, there is the potential for overlap, among other complications.

Enterprise Mode Site List Portal

Today we are proud to announce a new tool specifically targeted for larger organizations: The Enterprise Mode Site List Portal.

Screen capture showing the Enterprise Mode Site List Portal dashboard

The Enterprise Mode Site List Portal is a web tool originally built by our own IT department, now made open-source on GitHub. The web app is designed for IIS with a SQL Server backend, leveraging Active Directory for employee management. In addition to all the functionality of the client tool, the Enterprise Mode Site List Portal helps enterprises:

  1. Manage site lists from any device supporting Windows 7 or greater;
  2. Submit change requests;
  3. Operate offline via an on-premises solution;
  4. Provide role-based governance;
  5. Test configuration settings before releasing to a live environment.

This new tool allows you to manage your Enterprise Mode Site List, hosted by the app, with multiple users. Updates are made by submitting new change requests, which are then approved by a designated group of people. Those updates are first made to a pre-production environment for testing, which can be rolled back if necessary. The final production changes can be deployed immediately, or scheduled for a later date. Users are notified of any updates in the request process via e-mail.

Already being used internally here at Microsoft, the Enterprise Mode Site List Portal has reduced site list management time by 65%. For some enterprises, processing a single change to their site list can take an entire week. What’s more, some enterprises have upwards of tens of thousands of entries in their site list. Using this new web tool can save you valuable time and expedite your modernization process.

As the tool is open-source, the source code is readily available for examination and experimentation. We encourage you to fork the code, submit pull requests, and send us your feedback!

Hopefully this helps illustrate the array of options to help manage legacy web apps in the enterprise. If you have any questions or concerns, please do not hesitate to reach out and ask. We are always looking for ways to improve your enterprise browsing experience!

– Josh Rennert, Program Manager, Microsoft Edge

The post Legacy web apps in the enterprise appeared first on Microsoft Edge Dev Blog.

The faces of R, analyzed with R

Maëlle Salmon recently created a collage of profile pictures of people who use the #rstats hashtag in their Twitter bio to indicate their use of R. (I've included a detail below; click to see the complete version at Maëlle's blog.)

(Image: a detail of the Faces of R collage)

Naturally, Maëlle created the collage using R itself. Matching Twitter bios were found using the search_users function in the rtweet package, which also provides the URL of the profile image to be downloaded using the httr package. From there, Maëlle used the magick package to resize the pictures and assemble the collage.
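
A rough sketch of that pipeline looks like this (assuming rtweet is already authorized against the Twitter API; the column and function names come from the packages mentioned above):

library(rtweet)
library(magick)

# Find accounts mentioning #rstats and grab their avatar URLs
users <- search_users("#rstats", n = 500)
urls  <- unique(users$profile_image_url)

# magick can read images from URLs directly; resize to uniform tiles
imgs <- lapply(urls[1:24], image_read)
imgs <- lapply(imgs, image_scale, "100x100")

# Assemble 4 rows of 6 tiles, then stack the rows into a collage
rows    <- lapply(split(imgs, rep(1:4, each = 6)),
                  function(r) image_append(do.call(c, r)))
collage <- image_append(do.call(c, rows), stack = TRUE)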

Now, you'll notice that while many people use their face as their Twitter profile picture, others use a logo or some other kind of design. Colin Fay used the Microsoft Computer Vision API to analyze the profile pictures and generate a descriptive caption for each. (Once again, the process was automated using R; you can find the R code at Colin's blog post.) Some of the generated captions are straightforward: "a woman posing for a picture". Some of the captions are, well, a bit off the mark: "a person on a surf board in a skate park". (The API apparently thinks the R logo looks like a surfboard; captions like this at least had a lower confidence score.) Nonetheless, the captions provide a tool for collecting together similar images; here, for example, are those given the caption "a person on a surf board in a skate park":

(Image: a collage of the profile pictures captioned as surfboards)

If you'd like to play around with the computer vision captions yourself, you'll just need a free API key and the code from Colin's blog post, linked below.

Colin FAY: Playing with #RStats and Microsoft Computer Vision API

Managing Windows IoT Core devices with Azure IoT Hub

Device management in Windows IoT Core

In Fall 2016, Microsoft announced Azure IoT Hub device management, providing the features and extensibility model, including an SDK for a wide range of platforms, to build robust device management solutions. With the recent release of the Windows 10 Creators Update, we are excited to announce the availability of the Windows IoT Azure DM Client Library. The open source library allows developers to easily add device management capabilities to their Azure connected Windows IoT Core device. Enterprise device management for Windows has been available for many years. The Windows IoT Azure DM Client Library makes these capabilities, such as device restart, certificate and application management, as well as many others, available via Azure IoT Hub device management.

A quick introduction

IoT devices, in comparison to desktops, laptops and phones, often have much more restricted connectivity, fewer local resources, and in many cases no UI. Remote device management also requires devices to be provisioned for a DM service, adding another challenge to device setup.

Azure IoT DM is designed for devices with resource and connectivity restrictions. Those devices will also use Azure IoT for their operation, so they need to be provisioned for Azure IoT. This makes Azure IoT DM a very attractive choice for remote device management for IoT devices.

Device management in Windows 10 is based on the Configuration Service Provider (CSP) model. A CSP is an interface in Windows that allows reading and modification of settings of a specific feature of the device. For example, a Wi-Fi profile can be configured with the Wi-Fi CSP, the Reboot CSP is used to configure reboot settings, and so on.

All the CSPs ultimately map into API calls, registry keys and changes in the file system. The CSPs raise the level of abstraction and offer a consistent interface that works on all editions of Windows – desktop, mobile and IoT. The Windows IoT Azure DM Client Library will use the same, proven infrastructure.

Windows IoT Core + Azure IoT Hub: Better together

Azure IoT Hub provides the features and an extensibility model that enable device and back-end developers to build robust device management solutions. Devices can report their state to the Azure IoT Hub and can receive desired state updates and management commands from the Azure IoT Hub.

Device management in Azure IoT is based on the concepts of the device twin and the direct methods. The device twins are JSON documents that store device state information (metadata, configurations and conditions). IoT Hub persists a device twin for each device that you connect to IoT Hub. The device twin contains the reported properties that reflect the current state of the device, and the desired properties that represent the expected configuration of the device. Direct methods allow the back-end to send a message to a connected device and receive a response.
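
For illustration, a device twin is a JSON document shaped roughly like this (the device ID and property names below are hypothetical):

{
  "deviceId": "myIoTCoreDevice",
  "properties": {
    "desired": {
      "telemetryInterval": 30
    },
    "reported": {
      "telemetryInterval": 30,
      "batteryLevel": 87
    }
  }
}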

The device twin and the direct methods can be used to support the business logic of your IoT solution as well as to implement the device management operations.

The Windows IoT Azure DM Client Library connects the CSP-based device management stack in Windows IoT Core with the cloud back-end based on Azure IoT Hub. The client runs on the device and translates the direct method calls and desired properties updates to the CSP calls. The client also queries the device state using the CSP calls and translates that into reported properties for the device twin in the Azure IoT Hub.

Before an IoT device can be managed through the Azure IoT Hub, it must be registered with a unique device identity and an authentication key. The authentication key needs to be securely stored on the device to prevent accidental or malicious duplication of the device identity. In Windows 10 IoT Core the key can be stored in the TPM. How this is done is described in the previous post Building Secure Apps for Windows IoT Core.
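
As a sketch, the registration step can be done from the back end with the Azure IoT service SDK (Microsoft.Azure.Devices); the connection string variable and device ID below are illustrative:

using Microsoft.Azure.Devices;

// Create a registry manager from the IoT Hub connection string.
RegistryManager registry =
    RegistryManager.CreateFromConnectionString(iotHubConnectionString);

// Register a new device identity; IoT Hub generates the authentication key.
Device device = await registry.AddDeviceAsync(new Device("myIoTCoreDevice"));
Console.WriteLine($"Primary key: {device.Authentication.SymmetricKey.PrimaryKey}");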

With the device provisioned with Azure IoT Hub credentials (connection information and authentication key), managing Windows 10 Core devices through Azure IoT Hub requires no additional enrollment or configuration.

In this post, we will focus mostly on the client aspects of the device management. Please refer to the general Azure IoT Hub device management documentation for a broader look at what the service provides. Below we explore how the Azure IoT Hub device twin and direct methods can be used to manage Windows IoT Core devices.

How to use the Windows IoT Azure DM Client Library

Devices connecting to Azure IoT Hub can have only one connection to the service. This means that all applications, including the DM library, must share an Azure IoT Hub connection. We provide two sample implementations that you can use, depending on whether your device has other applications that connect to the same IoT Hub as the same device.

Standalone device management client

If your device only needs Azure IoT Hub for device management and no other application will connect to the same IoT Hub using the same Azure device ID, you can use the IoTDMBackground sample to add DM capabilities to your device.

The IoTDMBackground is a background app that can be deployed on your device. The IoTDMBackground app requires the device to be securely connected to Azure IoT. Once started, the IoTDMBackground will receive direct method calls and device twin updates from the Azure IoT Hub, and perform the device management operations.

Integrated device management client

There are scenarios where the capabilities of the standalone device management client are insufficient:

  1. Some device management operations, e.g. a device reboot or an application restart, might interrupt the normal operation of the device. In cases where this is not acceptable, the device should be able to declare itself busy and decline or postpone the operation.
  2. If your app is already connected to the Azure IoT Hub (for example, sending telemetry messages, receiving direct method calls and device twin updates), it cannot share its Azure identity with another app on the system, such as the IoTDMBackground.
  3. Some IoT devices expose basic device management capabilities to the user – such as the “check for updates” button or various configuration settings. Implementing this in your app is not an easy task even if you know which API or CSP you need to invoke.

The purpose of the integrated device management client is to address these scenarios. The integrated device management client is a .NET library that links into your IoT app. The library is called IoTDMClientLib and is part of the IoTDM.sln solution. It allows your app to declare its busy state, share the device identity between the device management client and your app, and invoke some common device management operations.

To integrate the device management to your app, build the IoTDMClientLib project, which will produce the IoTDMClientLib.dll. You will reference it in your app.

The ToasterApp project in the IoTDM.sln solution is a sample application that uses the integrated client. You can study it and use it as an example, or if you prefer step-by-step instructions, follow the guidance below.

1. If your app is already connected to the Azure IoT Hub, you already have an instance of DeviceClient instantiated somewhere in your app. Normally it would look like this:


DeviceClient deviceClient =
   DeviceClient.CreateFromConnectionString(connectionString, TransportType.Mqtt);

2. Now use the DeviceClient object to instantiate the AzureIoTHubDeviceTwinProxy object for connecting your device management client to Azure IoT Hub:


IDeviceTwin deviceTwinProxy = new AzureIoTHubDeviceTwinProxy(deviceClient);

3. Your app needs to implement the IDeviceManagementRequestHandler interface which allows the device management client to query your app for busy state, app details and so on:


IDeviceManagementRequestHandler appRequestHandler = new MyAppRequestHandler(this);

You can look at ToasterDeviceManagementRequestHandler implementation for an example of how to implement the request handler interface.

Next, add the using Microsoft.Devices.Management statement at the top of your file, and add the systemManagement capability to your application’s manifest (see the ToasterApp Package.appxmanifest file).

You are now ready to create the DeviceManagementClient object:


this.deviceManagementClient = await
    DeviceManagementClient.CreateAsync(deviceTwinProxy, appRequestHandler);

You can use this object to perform some common device management operations.

Finally, we will set up the callback that handles the desired properties updates (if your application already uses the device twin, it will already have this call):


await deviceClient.SetDesiredPropertyUpdateCallback(OnDesiredPropertyUpdate, null);

The callback will be invoked for all the desired properties – those specific to device management and those that are not. This is why we need to let the device management client filter out and handle properties that it is responsible for:


public Task OnDesiredPropertyUpdate(TwinCollection desiredProperties,
        object userContext)
{
    // Let the device management client process properties
    // specific to device management
    this.deviceManagementClient.ProcessDeviceManagementProperties(desiredProperties);

    // App developer can process all the top-level nodes here
    return Task.CompletedTask;
}

As an app developer, you’re still in control. You can see all the property updates received by the callback but delegate the handling of the device management-specific properties to the device management client, letting your app focus on its business logic.

To deploy and run your app, follow the instructions here.

The end-to-end solution

Obviously, the entire device management solution requires two parts – the client running on the device and the back-end component running in the cloud. Typically, your back-end component will consist of the Azure IoT Hub, which is the entry point into the cloud for your devices, coupled with other Azure services that support the logic of your application – data storage, data analytics, web services, etc.

Fortunately, you don’t need to build a full solution to try out your client. You can use the existing tools such as the DeviceExplorer to trigger direct method calls and device twin changes for your devices.

For example, to send the immediate reboot command to your IoT device, call microsoft.management.immediateReboot direct method on your device:
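
If you'd rather script this than click through a tool, a hedged sketch with the Azure IoT service SDK (Microsoft.Azure.Devices) looks like the following; the connection string variable and device ID are illustrative:

using Microsoft.Azure.Devices;

ServiceClient serviceClient =
    ServiceClient.CreateFromConnectionString(iotHubConnectionString);

// Invoke the DM client's reboot method on the target device.
var method = new CloudToDeviceMethod("microsoft.management.immediateReboot");
CloudToDeviceMethodResult result =
    await serviceClient.InvokeDeviceMethodAsync("myIoTCoreDevice", method);
Console.WriteLine($"Reboot method returned status {result.Status}");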

The device management client running on the IoT device will respond to the direct method and (unless it is in busy state) proceed with rebooting the device.

The Windows IoT Azure DM Client Library supports a variety of device management operations listed in the documentation on the GitHub site. In addition to reboot management, application management, update, factory reset, and more are supported. The list of capabilities will grow as the project evolves.

The Windows IoT Azure DM Client Library includes a sample called the DM Dashboard, which hides the implementation detail of the device management operations. Unlike the Device Explorer, you don’t need to consult the documentation and manually craft JSON to use it.

Here is how you can invoke the reboot operation using the DM Dashboard tool.

The DM Dashboard is a convenient tool for testing the client side of your device management solution, but since it operates on one device at a time, it is not suitable for managing multiple devices in a production environment.

Next steps

The Windows IoT Azure DM Client Library is still in beta phase and will continue to evolve. We’re very interested in your feedback, and we want to learn about your IoT needs. So, head over to our GitHub page, clone the repo and tell us what you think.

The post Managing Windows IoT Core devices with Azure IoT Hub appeared first on Building Apps for Windows.

Because it’s Friday: Art Collective

Reddit conducted an interesting social experiment last weekend. It provided all of its users with a blank canvas, and the ability to color its pixels according to just three simple rules:

  • You can place a pixel anywhere on the canvas
  • The pixel color must be chosen from a limited palette (recalling the era of 16-bit gaming)
  • After placing a pixel, you must wait 5 minutes to place another

The time restriction, and the need to defend your patch on the canvas from others who may want to use (or disrupt) it, meant that Redditors had to band together to collaboratively decorate the canvas. You can see the results of this collaborative process in the time-lapse video below. The first part shows the overall canvas, and then details of various sections.

Interestingly (as Ars Technica points out, and as anyone who's ever managed a comments section knows), while many were there to contribute, others came to cause grief. But unlike on a comments board, the power balance no longer favored the trolls and spammers: when you have to actively defend your "message", and more people oppose your message than support it, trolling fails to dominate in this collaborative setting.

That's all from us for this week. Have a good weekend, and we'll be back on Monday.


ICYMI – Your weekly TL;DR

Big news this week with the release of the Windows 10 Creators Update!  Before you go heads-down, check out our round-up below!

Windows 10 Creators Update and Creators Update SDK are Released

The Windows 10 Creators Update and the Creators Update SDK became available for download this week. Take advantage of the new platform capabilities.

What will you make with the Creators Update SDK? Read more: https://t.co/ervljxUPQF

— Windows Developer (@windowsdev) April 5, 2017

Updating your tooling for Windows 10 Creators Update

Read about getting your system updated and configured so you can submit your apps to the Windows Store in the wake of the Windows 10 Creators Update, build 15063.

Are you as excited as we are for the new Creators Update SDK? Dive in here to get started. https://t.co/RPqkrTiRjO

— Windows Developer (@windowsdev) April 5, 2017

New Share Experience in Windows 10 Creators Update

We’ve made sharing in Windows 10 Creators Update better than ever before. In addition to refreshing the entire Share experience, we’ve added new features for developers. Check it out here.

Take a look at our shiny new sharing icon (and experience). Read about it here! https://t.co/ZWOjFo3qz5

— Windows Developer (@windowsdev) April 6, 2017

Announcing UWP Community Toolkit 1.4

The fourth release of the UWP Community Toolkit came out this week and focuses on stabilizations and improvements on existing controls and services.

Hey devs – the newest version of the UWP Community Toolkit has arrived! https://t.co/RhUSZLGasu

— Windows Developer (@windowsdev) April 3, 2017

Monetizing your app: Use Interactive Advertising Bureau ad sizes

Looking for ways to better monetize your app’s ads? You can boost revenue using simple optimizations while building and managing your app. In the first of this blog series, we show you how to use IAB sizes to capitalize on monetization.

Looking for ways to boost ad revenue? Here is the first tip in our new monetization series: https://t.co/NaJRi8PsBB

— Windows Developer (@windowsdev) April 3, 2017

High-DPI Scaling Improvements for Desktop Applications in the Windows 10 Creators Update

As you may know, desktop applications can be blurry or incorrectly sized when running on high-DPI displays, especially when docking, unlocking or using remote technologies. Check out some of the improvements coming in the Windows 10 Creators Update to combat these problems.

Do you ever have apps render blurry on high-DPI displays? We've fixed that. https://t.co/acSHUYBEjS

— Windows Developer (@windowsdev) April 4, 2017

Download Visual Studio to get started.

The Windows team would love to hear your feedback. Please keep the feedback coming using our Windows Developer UserVoice site. If you have a direct bug, please use the Windows Feedback tool built directly into Windows 10.

The post ICYMI – Your weekly TL;DR appeared first on Building Apps for Windows.

Global customer access and additional services now available for Azure in India

Since the launch of the Microsoft Cloud in India, we have seen tremendous growth of our customers’ cloud usage. For example, we’re collaborating with Tata Motors, India’s leading auto manufacturer, to provide connected driving experiences powered by Azure. Flipkart has adopted Azure as its exclusive public cloud platform to enable its continued growth and expansion, and to scale quickly and stay resilient. Kotak Mahindra Bank partnered with Zing HR and turned to Azure for an integrated and flexible mobile HR solution.

Today, I’m excited to share that global companies can now benefit from access to the three Azure regions in India: West India (Mumbai), Central India (Pune) and South India (Chennai). These regions provide world-class reliability and performance combined with data residency in India to support the digital transformation of organizations. Customers gain the benefit of data replication across these locations, ensuring business continuity in both pure cloud and hybrid scenarios.

Selected noteworthy services deployed in India regions since my last update include Power BI and HDInsight, with full features across Windows and Linux. Power BI is a suite of business analytics tools to analyze data and share insights to monitor a business and get answers quickly with rich dashboards available on every device. With HDInsight, you can easily spin up enterprise-grade, open source cluster types, guaranteed with the industry’s best 99.9% SLA and 24/7 support.

I’m proud about these expansions, and to see Azure in India enabling our users to deploy dev, test and production workloads closest to their employees, partners and customers anywhere in the world.

Get started tile now live on Azure Stack

We recently announced the release of Azure Stack Technical Preview 3 refresh with Azure PaaS Services. Coming as a part of this release is the Get Started tile that you have grown to love in public Azure.

As of this release the tile will only contain content related to Azure Stack administration. We are currently working on bringing you separate experiences for the tenant and admin portals. You will be able to see those changes in future releases.


On the admin portal, the Get Started tile will give you an insight into experiences that are "Azure Stack" specific and do not necessarily apply to public Azure. These tutorial videos, created by Program Managers working hard on delivering these experiences to you, will introduce you to new concepts and ideas in Azure Stack and will quickly bring you up to speed on various components of Azure Stack administration.

In these tutorials, you will learn how to make VM images available to your tenants, offer tenant services, add content to your Marketplace, monitor your infrastructure, and work with the Azure Stack portal.

The content is hosted online, with the videos hosted on Channel 9, so you will need an active internet connection to access it.

We are really excited to bring these experiences to you and are looking forward to getting your feedback. Please let us know if you have suggestions on improving the content or if you'd like to see tutorials on new topics!

Availability of migration of ExpressRoute for Classic to Resource Manager IaaS Migration

Azure Resource Manager provides a lot of benefits with support for tagging, RBAC, and infrastructure orchestration using templates. As part of Azure Resource Manager, Virtual Machines gain these new capabilities plus additional compute specific capabilities such as:

  • Ability to resize to any VM size without having to delete and recreate the VM
  • Ability to migrate to managed disks which provide a simplified management experience while also providing higher availability by leveraging multiple storage controllers for availability sets with multiple instances
  • Support for 3 fault domains and 20 update domains

Learn more about Resource Manager and understand the differences between Classic and Resource Manager deployment models.

To allow our customers to benefit from these additional capabilities, we introduced a service that enables customers to bring their classic VMs over into the Resource Manager world without downtime!

Today, we are pleased to announce that customers using ExpressRoute can also migrate their Virtual Networks, including all the VMs in the VNET, to Azure Resource Manager without downtime. Learn more by reading the instructions on how to Migrate ExpressRoute circuits and associated virtual networks from the classic to the Resource Manager deployment model.

There are some edge cases for ExpressRoute migration you should be aware of. Please review the unsupported features and configurations to make sure your environment is supported.

As part of this release, we’re also announcing a revamped documentation set for migration. Based on customer recommendations and migrations, we’ve added additional planning docs and answers for the most frequently asked questions. Existing documents have also been restructured to be easier to understand as part of this effort.

For a great video overview of the migration process, please check out Corey Sanders in the Microsoft Mechanics episode below, Azure Classic to Azure Resource Manager Migration.

Infuse some AI into your Azure apps at hands-on Seattle workshop

If you’re an Azure developer interested in incorporating the very latest AI and machine learning techniques into your apps and enterprise solutions, here’s a free in-person workshop you’ll want to register for.

Microsoft is running an all-day AI Immersion Workshop on Tuesday, May 9th, at the W Hotel in Seattle. We’ll provide an overview of Microsoft’s extensive AI investments and offerings at this event, followed by deep technical tutorials, specifically designed for hands-on developers such as yourself. The tutorials being featured at this event include:

  • Applied Machine Learning for Developers.
  • Big AI – Applying Artificial Intelligence at Scale.
  • Weaving Cognitive and Azure Services to Provide Next-Generation Intelligence.
  • Deep Learning and the Microsoft Cognitive Toolkit.
  • Building Intelligent SaaS Applications.

Seasoned Microsoft engineers and data scientists are running the tutorials, and they will be there to guide and help you along the way as you build your apps through the day.

Spots are filling up rapidly, so be sure to register now and reserve your spot:

Register

For more details, including session abstracts and instructor names, be sure to check out our event agenda page here. See you in Seattle next month!


Use Azure Media Services with PowerApps

You can now build PowerApps with media hosted on Azure Media Services. In this walkthrough, Contoso Corp. wants to build an online learning app for its employees with videos of their products and services.

  • Create a new Azure Media Services account, if you don’t have one already


  • From your Azure Media Services account, locate and publish your video assets from Settings > Assets.


  • Encode your videos. After the videos are published, copy the manifest URLs. Start the streaming endpoint of your service, if it is not already running.


  • We want to build a gallery of all the available AMS videos and have the user pick a video to play. An Excel spreadsheet is a quick way to load the data to the app. Here is the Excel Table we will use with the links to the AMS video URLs:

(Image: the Excel table with the video titles and AMS video URLs)

  • From PowerApps, choose Content > Data sources. From the right panel, choose Add data source and Add static data to your app. Browse and load the Excel file.
  • From PowerApps, add a Horizontal Gallery control from Insert > Gallery > Custom gallery. Choose Add an item from Insert tab and add the Video control from Media.
  • Bind the Gallery to the Excel table by setting the Items property of the gallery to the name of the table.


  • Set the Media property of the first video control in the gallery to ThisItem.VideoURL. You should see the list of the AMS videos load in the gallery. Set the Disabled property for the video control to true.


  • Add a Video control from Insert > Media for the main video. Bind its Media property to Gallery1.Selected.VideoURL.


  • You can also add text fields for the Title and description from the Excel file and show them in the app. The complete app is shown in the picture below:

(Image: the completed learning app)

You can learn more about building apps from the PowerApps documentation. For feedback and questions on AMS videos in PowerApps, please post them on our forums.

 

Happy app building!

Considerations on using Deployment Slots in your DevOps Pipeline

The goal of DevOps is to continuously deliver value. Using deployment slots allows you to do this with zero downtime. In the Azure Portal, in the Azure App Service resource blade for your Web App, you can add a deployment slot by navigating to “Deployment slots,” adding a slot, and giving the slot a name. The deployment slot has its own hostname and is a live app.
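
The same steps can be scripted. For instance, a sketch with the Azure CLI 2.0 (resource and app names are illustrative):

# Create a slot named "staging" on an existing Web App
az webapp deployment slot create --resource-group MyGroup --name MyWebApp --slot staging

# Later, swap staging into production
az webapp deployment slot swap --resource-group MyGroup --name MyWebApp --slot staging --target-slot production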


Deployment slots are extremely powerful, but care must be taken when you start to integrate them into your DevOps Pipeline.  The goal of this post is to focus on best practices and anti-patterns.

Often when I see people using deployment slots in their pipelines, they attempt to swap across environments. This can lead to undesirable results.  One example I witnessed had two deployment slots: Dev and QA.


The thinking was they would copy the files to Dev, then swap to QA, and finally swap into Production.  On paper, this seems logical. However, rarely are you only dealing with the web application.  You also must deploy the database and other dependencies. In the Dev and QA environments, you will also want to run tests such as load and performance tests.

First, let’s address testing. Each slot of the web application shares the same resources. Therefore, if you were to run load tests against the QA slot, it would impact the performance of the Production slot as well. If you intend to run load tests, you need two separate web applications with matching App Service Plans. Matching the level of the App Service Plan is important so that you are testing against comparably sized resources.

Second, I want to address restarting an environment deployment. Like many deployment products, Visual Studio Team Services allows you to restart a failed deployment. If you swap deployment slots and deploy a database in your production environment, you may have to restart the deployment if the database deployment fails. When you restart the deployment, the slots would be swapped again, swapping the desired code back out of Production.

Rolling back with slots

Many users of slots get excited when they realize they can swap in both directions to “roll back” a change.  Although this is true, you need to consider that rarely are you only dealing with the web application. In the cases where you also deployed a database, simply swapping the slots back might leave you in a worse place. You must remember that you are only swapping the web application and not all its dependencies.  You must only make changes to your database that do not break the current version of the application. So, to be able to roll back the web application, you must engineer your database deployments to always be at least one version backwards compatible. This will allow you to swap your slots and allow your previous version to function as expected. You may also have to support multiple API versions for web services as well.

Never do anything for the first time in Production

When I use deployment slots, I do so in every environment. If we return to the Dev, QA, and Production example, I would create three different web applications, each with a Stage and Production slot.  Notice in the images below that everything I intend to do in production is also done in Dev and QA.

(Images: Stage and Production slots configured identically in the Dev, QA, and Production web apps)

The reason I do this is so each environment is the same. This allows me to verify all my deployment tasks in Dev and QA before I attempt to deploy to Production. If anything is going to fail, it should fail in Dev and/or QA where I can resolve without impacting Production. If you do something for the first time in your Production deployment, the only time you will know if it will work or not is in Production.

For more information on deployment slots see the Azure Documentation.


Announcing TypeScript 2.3 RC

The TypeScript 2.3 Release Candidate is here today! This release brings more ECMAScript features, new settings to make starting projects easier, and more.

To get started with the release candidate, you can grab it through NuGet or over npm through

npm install -g typescript@rc

You can also get TypeScript for Visual Studio 2015 (if you have Update 3). Our team is working on supporting Visual Studio 2017 in the near future, with details available on our previous blog post.

Other editor support will be coming with the proper release, but you can follow instructions to enable newer versions of TypeScript in Visual Studio Code and Sublime Text 3.

In this post we’ll take a closer look at the new --strict option along with async generator and iterator support, but to see a more detailed list of our release, check out the TypeScript Roadmap.

The --strict option

By default, TypeScript’s type system is as lenient as possible to allow users to add types gradually. But have you ever started a TypeScript project with all the strictest settings you could think of?

While TypeScript has options for enabling different levels of strictness, it’s very common to start at the strictest settings so that TypeScript can provide the best experience.

The problem with this is that the compiler has grown to have a lot of different options. --noImplicitAny, --strictNullChecks, --noImplicitThis, and --alwaysStrict are just a few of the more common strictness options that you need to remember when starting a new project. Unfortunately if you can’t remember these, it just makes TypeScript harder to use.

That’s why in TypeScript 2.3, we’re introducing the --strict flag. The --strict flag enables these common strictness options implicitly. If you ever need to opt out, you can explicitly turn these options off yourself. For example, a tsconfig.json with all --strict options enabled except for --noImplicitThis would look like the following:

{
    "compilerOptions": {
        "strict": true,
        "noImplicitThis": false
    }
}
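
To see the kind of issue this surfaces, here's a small sketch of code that compiles without --strict but errors with it (via --strictNullChecks):

function shout(message: string | null) {
    // Error under --strict: 'message' is possibly 'null'.
    return message.toUpperCase();
}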

In the future, --strict may include other strict checks that we believe will benefit all users, but which can be manually toggled off by disabling them explicitly (as mentioned above).

Downlevel generator & iterator support

Prior to TypeScript 2.3, generators were not supported when targeting ES3 & ES5. This stemmed from the fact that support for generators implied that other parts of the language, like for...of loops, could play well with iterators, which wasn’t the case. TypeScript assumed these constructs could only work on arrays when targeting ES3/ES5, because generalizing the emit would have led to drastic changes in output code. Something as conceptually simple as a for...of loop would have to handle cases that might never come up in practice and could add slight overhead.

In TypeScript 2.3, we’ve put the work in for users to start working with generators. The new --downlevelIteration flag gives users a model where emit can stay simple for most users, and those in need of general iterator & generator support can opt in. As a result, TypeScript 2.3 makes it significantly easier to use libraries like redux-saga, where support for generators is expected.
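
For example, here's a sketch of a generator consumed with a for...of loop, which now compiles correctly for ES5 targets when --downlevelIteration is enabled:

function* evens(limit: number) {
    for (let i = 0; i < limit; i += 2) {
        yield i;
    }
}

// With --target ES5, this loop requires --downlevelIteration.
for (const n of evens(10)) {
    console.log(n);
}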

Async generators & iterators

With support for regular generators & iterators in place, TypeScript 2.3 also brings support for async generators and async iterators. You can read more about these features on the TC39 proposal, but we’ll try to give a brief explanation and example.

Async iterators are an upcoming ECMAScript feature that allows iterators to produce results asynchronously. They can be cleanly consumed from asynchronous functions with a new construct: for await...of loops. These have the syntax

for await (let item of items) {
    /*...*/
}

Async generators are generators which can await at any point. They’re declared using a syntax like

async function* asyncGenName() {
    /*...*/
}

Let’s take a quick look at an example that use both of these constructs together.

// Returns a Promise that resolves after a certain amount of time.
function sleep(milliseconds: number) {
    return new Promise<void>(resolve => {
        setTimeout(resolve, milliseconds);
    });
}

// This converts the iterable into an async iterable.
// Each element is yielded back with a delay.
async function* getItemsReallySlowly<T>(items: Iterable<T>) {
    for (const item of items) {
        await sleep(500);
        yield item;
    }
}

async function speakLikeSloth(items: string[]) {
    // Awaits before each iteration until a result is ready.
    for await (const item of getItemsReallySlowly(items)) {
        console.log(item);
    }
}

speakLikeSloth("never gonna give you up never gonna let you down".split(" "))

Keep in mind that our support for async iterators relies on support for Symbol.asyncIterator to exist at runtime. You may need to polyfill Symbol.asyncIterator, which for simple purposes can be as straightforward as

(Symbol as any).asyncIterator = Symbol.asyncIterator || Symbol.for("Symbol.asyncIterator");

or even

(Symbol as any).asyncIterator = Symbol.asyncIterator || "__@@asyncIterator__";

If you’re targeting ES5 and earlier, you’ll also need to use the --downlevelIteration flag. Finally, your TypeScript lib option will need to include "esnext".

Enjoy!

Keep an eye out for the full release of TypeScript 2.3 later this month which will have many more features coming.

For our Visual Studio 2017 users: as we mentioned above, we’re working hard to ensure future TypeScript releases will be available for you soon. We apologize for this inconvenience, but can assure you that a solution will be made available.

We appreciate any and all constructive feedback, and welcome you to leave comments below and file issues on GitHub if needed.

Monetizing your app: Advertisement placement


App developers are free to place their ads in any part of their app; this gives developers the flexibility to blend the ad experience into their app for best results. We have seen that developers who take the time to do this get the best performance from their ads and are therefore able to earn more advertising revenue.

There are essentially two major factors to consider when placing an ad:

1. Optimize for Viewability – Over the last few years, advertisers have been moving toward tracking the viewability of advertisements and, for that reason, paying only for ads that are viewable. Advertisers are also willing to pay more for viewable impressions.

The Microsoft Ads SDK sends information back to advertisers about whether an ad was viewable. It is recommended that you place advertisements in areas of your app where they have a greater chance of being viewed – for example, near the scoreboard of a game or in the viewable area of a scrolling text app – and ensure that the ad is not hidden by other UX elements such as a button.

Note: Hiding an advertisement behind another UX element is considered ‘fraud’, and it’s likely that the application will be removed from the Windows Store if detected.

2. Optimize for Clicks – Many different types of ads are served in your app. Ads can be classified by the way the advertiser pays – per impression, per click or per conversion. While Microsoft Advertising pays based on an impression measure of revenue (eCPM – effective cost per thousand impressions served in your app), a number of advertisers only pay for clicks (CPC – cost per click). The effective clicks help calculate the eCPM that is finally paid out to the developer.

Ad networks also track apps that have a higher click-through rate (CTR), and these apps are generally targeted more heavily by ad campaigns overall. We see that, in general, apps that get higher revenue are apps that have a better click-through rate.
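
To make the arithmetic concrete, here’s a small illustrative sketch (not part of any advertising SDK) of how eCPM and CTR are computed from the numbers you see in your dashboard:

// Effective cost per thousand impressions: revenue earned per 1,000 ads served.
function eCPM(revenue: number, impressions: number): number {
    return (revenue / impressions) * 1000;
}

// Click-through rate: the fraction of served ads that were clicked.
function clickThroughRate(clicks: number, impressions: number): number {
    return clicks / impressions;
}

// Example: $12.50 earned over 10,000 impressions with 150 clicks.
console.log(eCPM(12.50, 10000));           // 1.25 dollars per thousand impressions
console.log(clickThroughRate(150, 10000)); // 0.015, i.e. a 1.5% CTR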

Note: You can track the impressions, clicks, CTR and revenue in the Advertising Performance section under “Analytics” in the Dev Center Dashboard.

Stay tuned for additional tips to increase ad monetization over the next few weeks.

The post Monetizing your app: Advertisement placement appeared first on Building Apps for Windows.

Deploying to On-Premises Environments with Visual Studio Team Services or Team Foundation Server


I hear this particular question frequently as a reason teams are concerned about adopting Visual Studio Team Services when their applications still run on-premises.  The good news is that it typically takes just a quick walkthrough of how build & deployment pipelines work.  I want to give a big thanks to Sachi Williamson from Northwest Cadence for the guest blog post today! – Ed Blankenship

Your company’s apps may not be hosted in the cloud yet for various reasons, such as their configuration, dependencies, or network requirements. That’s okay!   What many people don’t know is that you can still take advantage of great tools like Visual Studio Team Services or Team Foundation Server to manage your deployments.  You’re probably asking yourself, “how can a cloud SaaS service like Team Services deploy to our on-premises environments?”  That’s what we will explore today.

For the core service, your team has a choice to either use Visual Studio Team Services as a completely hosted SaaS service by Microsoft or to run it on-premises by setting up Team Foundation Server (TFS).  When you build and deploy your apps through Team Services or TFS, you use agents to run the build and deployment tasks.  Team Services allows you to take advantage of hosted agents for running your build and deployment pipelines.  The hosted agents are perfect for many scenarios, including your automated build process.  However, when you want to deploy on-premises, you will want to run the deployment steps from agents that have access to your on-premises environment.  This alternative scenario is enabled by private agents.

How does an agent communicate with Team Services or TFS?

The agent communicates with Team Services or TFS to determine which pipeline tasks it needs to run in addition to reporting log entries and job status. This communication is always initiated by the agent. All the messages from the agent to Team Services or TFS happen over HTTP or HTTPS, depending on how you configure the agent. This polling model allows the agent to be configured in different topologies as shown below.  In the Team Services example, you’ll notice we included an additional scenario where you are running a “private agent” in a cloud-hosted virtual machine as well.

[Diagram: agent communication topologies for Team Services and TFS, including a private agent running in a cloud-hosted virtual machine]
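
Conceptually, the agent’s side of this conversation is just a loop that asks for work and reports back. The sketch below is purely illustrative; the real message format and endpoints are internal to Team Services/TFS, and the URL shown is hypothetical:

// Illustrative only: a simplified model of an agent polling for jobs.
// The agent always initiates the request, which is why it works
// behind firewalls and NAT with no inbound connectivity required.
async function pollForJobs(serverUrl: string, pollIntervalMs: number) {
    while (true) {
        const response = await fetch(`${serverUrl}/jobs/next`); // hypothetical endpoint
        if (response.ok) {
            const job = await response.json();
            // ...run the build/deployment tasks, streaming logs and
            // status back to the server over HTTP(S)...
        }
        await new Promise(resolve => setTimeout(resolve, pollIntervalMs));
    }
}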

How does an agent communicate with target servers for deployment?

When you use the agent to deploy artifacts to a target set of servers, the agent must have “line of sight” connectivity to those servers. The hosted agent pool, by default, has connectivity from the Azure cloud to anything else running in Azure or exposed to the public Internet.  For example, you may have an Azure Website that a hosted agent is able to deploy to through endpoints exposed through the Azure App Service platform.

If your on-premises environments do not have connectivity to the hosted pool (which is typically the case because of firewalls), you will want to set up and configure a private agent on servers hosted in your on-premises network. The private agents need to have connectivity to the target on-premises environments where you want to deploy, and also access to the Internet to connect to Team Services, as shown in the following diagram.  If you are using Team Foundation Server, you will connect your private agent to your Team Foundation Server.

[Diagram: private agents with line-of-sight connectivity to on-premises target servers and outbound access to Team Services]

To read more on communication and deployment to target servers, check out this documentation.

How do I deploy from Team Services or TFS to on-premises environments?

Build and deployment agents can run on many platforms.  There are walkthroughs available in the documentation for setting up your agent on Windows, macOS, and Linux.

One step you will need to take is to set up the ability for the agent to authenticate with your Team Services account or your Team Foundation Server.  One approach for authenticating is creating a Personal Access Token (PAT).

[Screenshot: creating a Personal Access Token in Team Services]
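
As an aside, a PAT is presented as a standard Basic authentication credential: the token goes in the password slot and the username can be blank. A hedged sketch of calling the Team Services REST API with one (account name is a placeholder) might look like this:

// Never hard-code tokens; read the PAT from the environment instead.
const pat = process.env.VSTS_PAT!;
const auth = Buffer.from(`:${pat}`).toString("base64");

// Example: list agent pools via the REST API (account name is a placeholder).
fetch("https://myaccount.visualstudio.com/_apis/distributedtask/pools?api-version=3.0", {
    headers: { "Authorization": `Basic ${auth}` }
})
    .then(response => response.json())
    .then(pools => console.log(pools));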

The next step for setting up your private agent is to download and install the agent software on the server you want to run the deployment tasks.  Team Services and TFS allow you to group many agents into “pools” which you will use later when configuring the pipeline to decide which pool of agents to use.  For this example, you can add your private agent to the “Default” pool or you can create a new “On-premises” pool.

[Screenshots: downloading the agent software and adding the private agent to an agent pool]

Now you can start editing your release pipeline.  In default pipelines, you will notice the “Run on agent” scope for each of your environments.  If you select it, you will see a “deployment queue” option to choose which pool of agents you want to run these deployment tasks on.  Since we want to run these tasks against on-premises environments, we should select the “On-premises” agent pool that we created in the previous step.

[Screenshot: selecting the on-premises agent pool as the deployment queue]

You can now queue a new release.  Once the release starts you’ll notice that it will choose a private agent from the on-premises pool and run any deployment steps on your on-premises network.

[Screenshot: a release running its deployment steps on a private agent from the on-premises pool]

That’s all there is to it!  Whether your environments are on-premises or hosted in any cloud, Visual Studio Team Services and Team Foundation Server make it simple to deploy to any of your environments using private agents.

Visual Studio for Teams of C++ Developers


In this blog post we will dive into how Visual Studio supports teams of C and C++ developers. We’ll begin by creating a small C++ program and placing it in a Git repository in Visual Studio Team Services. Next we’ll see how to commit and push updates and get updates from others. Finally, we will work with GitHub repos using the GitHub extension for Visual Studio.

Adding an Existing C++ Project to Git in Visual Studio Team Services

In this example, you will create a small sample application and use Visual Studio to create a Git repository in Visual Studio Team Services. If you have an existing project, you can use it instead.

To get started you’ll need an account on Visual Studio Team Services. Sign up for a free Visual Studio Team Services account. You can use a personal, work or school account. During the process, a new default project may be created but will not be used in this example.

  1. Download the sample project and unzip it into a suitable working directory. You can also use one of your own C++ projects; the steps will be the same.
  2. Start Visual Studio 2017 and load the CalculatingWithUnknowns solution. Expand the Source Files node in Solution Explorer to see the solution files:
    Visual Studio Solution Explorer showing project C++ source files
  3. The blue status bar at the bottom of the Visual Studio window is where you perform Git-related tasks. Create a new local Git repo for your project by selecting Add to Source Control in the status bar and then selecting Git from the options. This will create a new repo in the folder the solution is in and commit your code into that repo.
  4. You can select items in the status bar to quickly navigate between Git tasks in Team Explorer.
    Status bar showing four different Git tasks
    1. Up-arrow with two shows the number of unpublished commits in your local branch. Selecting this will open the Sync view in Team Explorer.
    2. Pencil with 0 shows the number of uncommitted file changes. Selecting this will open the Changes view in Team Explorer.
    3. Current repo is CalculatingWithUnknowns  shows the current Git repo. Selecting this will open the Connect view in Team Explorer.
    4. Current Git branch is master shows your current Git branch. Selecting this displays a branch picker to quickly switch between Git branches or create new branches.
  5. In the Sync view in Team Explorer, select the Publish Git Repo button under Publish to Visual Studio Team Services.
    Sync view in Team Explorer with Publish Git Repo button highlighted in red
  6. Verify your email and select your account in the Account Url drop down. Enter your repository name (or accept the default, in this case CalculatingWithUnknowns) and select Publish Repository. Your code is now in a Team Services repo. You can view your code on the web by selecting See it on the web.

As you write your code, your changes are automatically tracked by Visual Studio. Continue to the next section if you want to learn how to commit and track changes to code, push your changes and sync and get changes from other team members. You can also configure your C++ project for continuous integration (CI) with Visual Studio Team Services.

Team Explorer Home dialog showing that the example CalculatingWithUnknowns C++ project was pushed and can be viewed on the web.

Commit and Push Updates and Get Updates from Others

Code change is inevitable. Fortunately, Visual Studio 2017 makes it easy to connect to repositories like Git hosted in Visual Studio Team Services or elsewhere and make changes and get updates from other developers on your team.

These examples use the same project you configured in the previous section. To commit and push updates:

  1. Make changes to your project. You can modify code, change settings, edit text files or change other files associated with the project and stored in the repository – Visual Studio will automatically track changes. You can view changes by right-clicking on a file in Solution Explorer then clicking View History, Compare with Unmodified, and/or Blame (Annotate).

C++ source file differences in CalculatingWithUnknowns.cpp.

  2. Commit changes to your local Git repository by selecting the pending changes icon from the status bar.

Status bar showing one pending change in the C++ project

  3. On the Changes view in Team Explorer, add a message describing your update and commit your changes.

Team Explore Changes dialog with a branch comment and the Commit All button highlighted

  4. Select the unpublished changes status bar icon or the Sync view in Team Explorer. Select Push to update your code in Team Services/TFS.

To sync your local repo with changes from your team as they make updates:

  1. From the Sync view in Team Explorer, fetch the commits that your team has made. Double-click a commit to view its file changes.
  2. Select Sync to merge the fetched commits into your local repo and then push any unpublished changes to Team Services.
  3. The changes from your team are now in your local repo and visible in Visual Studio.

Work with GitHub repos using the GitHub Extension for Visual Studio

The GitHub Extension for Visual Studio is the easiest way to connect to your GitHub repositories in Visual Studio. With the GitHub Extension, you can clone repos in one click, create repositories and clone them in Visual Studio in one step, publish local work to GitHub, create and view pull requests in Visual Studio, create gists and more.

In this section, we walk through installation, connecting to GitHub and cloning a repo.

  1. Install the GitHub Extension for Visual Studio. If you already have Visual Studio installed without the extension, you can install it from the Visual Studio GitHub site. You can also select it as part of the Visual Studio installation process. To install (or modify) with Visual Studio 2017, run the installer and click Individual components, then click GitHub extension for Visual Studio under Code tools, then proceed with other selections and installation (or modification):

Individual components in the installer with GitHub extension for Visual Studio selected

  2. On the Connect view of Team Explorer, expand the GitHub connection and select Sign In. Provide your GitHub credentials to complete sign in.

Team Explorer Connect dialog with GitHub options including sign-in

  3. Click Clone to bring up a dialog that shows all the repositories you can access. If you want to clone one, select it and then click Clone.
  4. To create a new repo, click Create and provide information about the repository. You can choose among several Git ignore preferences and licenses and choose whether your repo is public or private. If you have a free account, you will be restricted to public repositories.

Create a GitHub Repository dialog

  5. To publish an existing project on your machine, click on the Sync tab in the Team Explorer window to get to the Publish to GitHub section.

To learn more about the extension, visit the GitHub Extension for Visual Studio page.

 

How to control PowerPoint on Windows with a Bluetooth Nintendo Switch JoyCon controller! (or a Surface Pen)


I usually use a Logitech Presentation Clicker to control PowerPoint presentations, but I'm always looking for new ways. Michael Samarin has a great app called KeyPenX that lets you use a Surface pen to control PowerPoint!

However, I've also got this wonderful Nintendo Switch and two JoyCon controllers. Rachel White reminded me that they are Bluetooth! So why not pair them to your machine and map some of their buttons to keystrokes?

Let's do it!

First, hold the round button on the black side of the controller between the SL and SR buttons, then go into Windows Settings and Add Bluetooth Device.

Add a Bluetooth Device

You can add them both if you like! They show up as Game Controllers to Windows:

Hey a JoyCon is a JoyStick to Windows!

Ah, but these are joysticks, so we need to map joystick actions to key presses. Enter JoyToKey. It's shareware: you can use it free, but if you keep using it, you can buy JoyToKey for just $7.

Hold down a button on your Joystick/Joycon to see what it maps to. For example, here I'm clicking in on the stick and I can see that's Button 12.

Using JoyToKey to map JoyCons to PowerPoint

Map them any way you like. I mapped left and right to PageUp and PageDown so now I can control PowerPoint!

Using JoyToKey to map JoyCons to PowerPoint

And here it is in action:

ZOMG YOU CAN CONTROL POWERPOINT WITH THE #NintendoSwitch JoyCon! /ht @ohhoe

A post shared by Scott Hanselman (@shanselman) on Apr 10, 2017 at 12:38pm PDT


So fun! Enjoy!


Sponsor: Did you know VSTS can integrate closely with Octopus Deploy? Watch Damian Brady and Brian A. Randell as they show you how to automate deployments from VSTS to Octopus Deploy, and demo the new VSTS Octopus Deploy dashboard widget. Watch now



© 2017 Scott Hanselman. All rights reserved.
     