
Networking to and within the Azure Cloud, part 2


This is the second blog post of a three-part series. Before you begin reading, I would suggest reading the first post Networking to and within the Azure Cloud, part 1.

Hybrid networking is a nice concept, but how do we define it? For me, in the context of connectivity to virtual networks over ExpressRoute private peering or VPN, it is the ability to connect cross-premises resources to one or more Virtual Networks (VNets). While this all works nicely, and we know how to connect to the cloud, how do we network within the cloud? There are at least three Azure built-in ways of doing this. In this series of three blog posts, my intent is to briefly explain:

  1. Hybrid networking connectivity options
  2. Intra-cloud connectivity options
  3. Putting all these concepts together

Intra-Cloud Connectivity Options

Now that your workload is connected to the cloud, what are the native options to communicate within the Azure cloud?

There are 3 native options:

  1. VNet to VNet via VPN (VNet-to-VNet connection)
  2. VNet to VNet via ExpressRoute
  3. VNet to VNet via Virtual Network Peering (VNet peering) and VNet transit

My intent here is to compare these methods, what they allow, and the kinds of topologies you can achieve with them.

VNet-to-VNet via VPN

As shown in the picture below, when two VNets are connected using a VNet-to-VNet VPN, this is what the routing tables look like for both virtual networks:

[Figure 1: Route tables for two VNets connected with a VNet-to-VNet VPN]

This is interesting with two VNets, but it can grow well beyond that:

[Figure 2: VNet-to-VNet VPN with multiple VNets]

Notice that the route tables for VNet4 and VNet5 indicate how to reach VNet3; however, VNet4 is not able to reach VNet5. If you need that connectivity, there are two methods to achieve it:

  1. Full Mesh VNet-to-VNet
  2. Using BGP enabled VPNs

With either of these methods, all three VNets know how to reach one another. Obviously this could scale to many more VNets, assuming the limits of the VPN gateways are respected (maximum number of tunnels, and so on).
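To make this more concrete, here is a rough Azure PowerShell sketch of the BGP-enabled VNet-to-VNet approach. It assumes two VPN gateways already exist and that their BGP settings (ASN and peering address) are configured; the resource group, gateway names, location, and shared key are placeholders:

# Minimal sketch (hypothetical names): connect two existing VPN gateways, with BGP enabled
$gw1 = Get-AzureRmVirtualNetworkGateway -Name "VNet1-GW" -ResourceGroupName "NetworkingRG"
$gw2 = Get-AzureRmVirtualNetworkGateway -Name "VNet2-GW" -ResourceGroupName "NetworkingRG"

# A VNet-to-VNet VPN needs one connection in each direction
New-AzureRmVirtualNetworkGatewayConnection -Name "VNet1-to-VNet2" -ResourceGroupName "NetworkingRG" `
    -Location "West Europe" -VirtualNetworkGateway1 $gw1 -VirtualNetworkGateway2 $gw2 `
    -ConnectionType Vnet2Vnet -SharedKey "<shared key>" -EnableBgp $true

New-AzureRmVirtualNetworkGatewayConnection -Name "VNet2-to-VNet1" -ResourceGroupName "NetworkingRG" `
    -Location "West Europe" -VirtualNetworkGateway1 $gw2 -VirtualNetworkGateway2 $gw1 `
    -ConnectionType Vnet2Vnet -SharedKey "<shared key>" -EnableBgp $true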

VNet-to-VNet via ExpressRoute

While not everyone may realize it, linking a VNet to an ExpressRoute circuit has an interesting side effect when you link more than one VNet to the same circuit. For example, when you have two VNets linked to the same ExpressRoute circuit, this is what the route table looks like:

[Figure 3: Route tables for two VNets linked to the same ExpressRoute circuit]

Interestingly, both VNets are able to communicate with each other without going beyond the Microsoft Enterprise Edge (MSEE) routers. This makes communication possible between VNets within the same geopolitical region, or globally (except on national clouds) if the circuit is ExpressRoute Premium. In other words, you can use the worldwide Microsoft backbone to connect multiple VNets together. And by the way, that VNet-to-VNet traffic is free, as long as you can link these VNets to the same ExpressRoute circuit. In the example below, you would have three VNets connected to the same ExpressRoute circuit:

[Figure 4: Multiple VNets linked to the same ExpressRoute circuit]

The picture above also shows the routes that appear in each VNet subnet's Effective Route Table (worth reading up on: it is very useful for understanding why routing doesn't behave the way you expect, if that ever happens).
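For reference, each VNet is linked to the circuit by creating an ExpressRoute connection between that VNet's gateway and the circuit. A minimal Azure PowerShell sketch, with hypothetical names, could look like this (repeated once per VNet):

# Minimal sketch (hypothetical names): link a VNet's ExpressRoute gateway to an existing circuit
$circuit = Get-AzureRmExpressRouteCircuit -Name "ContosoCircuit" -ResourceGroupName "ER-RG"
$gw = Get-AzureRmVirtualNetworkGateway -Name "VNet1-ER-GW" -ResourceGroupName "NetworkingRG"

New-AzureRmVirtualNetworkGatewayConnection -Name "VNet1-to-Circuit" -ResourceGroupName "NetworkingRG" `
    -Location "West Europe" -VirtualNetworkGateway1 $gw -PeerId $circuit.Id -ConnectionType ExpressRoute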

VNet-to-VNet with Virtual Network Peering

The final option for connecting multiple VNets together is Virtual Network Peering, which is constrained to a single Azure region. Peering two VNets makes them behave essentially as if they were one big virtual network, although you can still govern and control these communications with NSGs and route tables. For an illustration of what this means, see below:

[Figure 5: Two VNets connected with VNet peering]

Taking that to the next level, you could imagine a hub-and-spoke topology like this:

[Figure 6: Hub-and-spoke topology with VNet peering]

Peering is non-transitive, so in that case the HR VNet cannot talk directly to the Marketing VNet. However, all three spoke VNets (HR, Marketing, and Engineering) can talk to the hub VNet, which would contain shared resources such as domain controllers, monitoring systems, firewalls, or other Network Virtual Appliances (NVAs). Spoke-to-spoke traffic can still be routed through the hub by combining User-Defined Routes (UDRs) applied on the spoke VNets with an NVA in the centralized hub VNet. However, as with VPN, if for some reason you need each VNet to be able to talk directly to the others, you could create a topology similar to this as well:

[Figure 7: Full-mesh VNet peering topology]

When using VNet peering, one of the great resources that can be shared is the gateway, whether a VPN or an ExpressRoute gateway. That would look something like this:

[Figure 8: VNet peering with gateway transit]

This way, you do not have to deploy an ExpressRoute or VPN gateway in every spoke VNet; instead, you can centralize the security stamp and gateway access in the hub VNet.
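As a rough sketch of how this hub-and-spoke pattern with gateway transit could be configured with Azure PowerShell (all names are hypothetical, and the hub VNet is assumed to already contain the VPN or ExpressRoute gateway):

# Minimal sketch (hypothetical names): peer a spoke VNet with the hub and share the hub's gateway
$hub   = Get-AzureRmVirtualNetwork -Name "Hub-VNet" -ResourceGroupName "NetworkingRG"
$spoke = Get-AzureRmVirtualNetwork -Name "HR-VNet" -ResourceGroupName "NetworkingRG"

# Hub side: allow the spoke to use the hub's gateway
Add-AzureRmVirtualNetworkPeering -Name "Hub-to-HR" -VirtualNetwork $hub `
    -RemoteVirtualNetworkId $spoke.Id -AllowGatewayTransit

# Spoke side: use the remote (hub) gateway instead of deploying one locally
Add-AzureRmVirtualNetworkPeering -Name "HR-to-Hub" -VirtualNetwork $spoke `
    -RemoteVirtualNetworkId $hub.Id -UseRemoteGateways -AllowForwardedTraffic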

Please make sure to check out the next post when it comes out to put all these concepts together!


Cloud migration and disaster recovery of load balanced multi-tier applications


Support for Microsoft Azure virtual machine availability sets has been a highly anticipated capability for many Azure Site Recovery customers who use the product for either cloud migration or disaster recovery of applications. Today, I am excited to announce that Azure Site Recovery now supports creating failed-over virtual machines in an availability set. This in turn means that you can configure an internal or external load balancer to distribute traffic between multiple virtual machines of the same tier of an application. With the Azure Site Recovery promise of cloud migration and disaster recovery of applications, this first-class integration with availability sets and load balancers makes it simpler for you to run your failed-over applications on Microsoft Azure with the same guarantees that you had while running them on the primary site.

In an earlier blog of this series, you learned about the importance and complexity involved in recovering applications – Cloud migration and disaster recovery for applications, not just virtual machines. The next blog was a deep dive on recovery plans, describing how you can do a one-click cloud migration and disaster recovery of applications. In this blog, we look at how to fail over or migrate a load-balanced multi-tier application using Azure Site Recovery.

To demonstrate real-world usage of availability sets and load balancers in a recovery plan, a three-tier SharePoint farm with a SQL Always On backend is being used.  A single recovery plan is used to orchestrate failover of this entire SharePoint farm.

 

Disaster Recovery of three tier SharePoint Farm

 

Here are the steps to set up availability sets and load balancers for this SharePoint farm when it needs to run on Microsoft Azure:

  1. Under the Recovery Services vault, go to Compute and Network settings of each of the application tier virtual machines, and configure an availability set for them.
  2. Configure another availability set for the web tier virtual machines.
  3. Add the two application tier virtual machines and the two web tier virtual machines in Group 1 and Group 2 of a recovery plan respectively.
  4. If you have not already done so, click the following button to import the most popular Azure Site Recovery automation runbooks into your Azure Automation account.

     [Deploy to Azure button]

  5. Add script ASR-SQL-FailoverAG as a pre-step to Group 1.  
  6. Add script ASR-AddMultipleLoadBalancers as a post-step to both Group 1 and Group 2.
  7. Create an Azure Automation variable using the instructions outlined in the scripts. For this example, these are the exact commands used.
 $InputObject = @{"TestSQLVMRG" = "SQLRG" ;
                 "TestSQLVMName" = "SharePointSQLServer-test" ;
                 "ProdSQLVMRG" = "SQLRG" ;
                 "ProdSQLVMName" = "SharePointSQLServer";
                 "Paths" = @{
                     "1"="SQLSERVER:SQLSharePointSQLDEFAULTAvailabilityGroupsConfig_AG";
                     "2"="SQLSERVER:SQLSharePointSQLDEFAULTAvailabilityGroupsContent_AG"};
                 "406d039a-eeae-11e6-b0b8-0050568f7993"=@{
                     "LBName"="ApptierInternalLB";
                     "ResourceGroupName"="ContosoRG"};
                 "c21c5050-fcd5-11e6-a53d-0050568f7993"=@{
                     "LBName"="ApptierInternalLB";
                     "ResourceGroupName"="ContosoRG"};
                 "45a4c1fb-fcd3-11e6-a53d-0050568f7993"=@{
                     "LBName"="WebTierExternalLB";
                     "ResourceGroupName"="ContosoRG"};
                 "7cfa6ff6-eeab-11e6-b0b8-0050568f7993"=@{
                     "LBName"="WebTierExternalLB";
                     "ResourceGroupName"="ContosoRG"}}

$RPDetails = New-Object -TypeName PSObject -Property $InputObject  | ConvertTo-Json

New-AzureRmAutomationVariable -Name "SharePointRecoveryPlan" -ResourceGroupName "AutomationRG" -AutomationAccountName "ASRAutomation" -Value $RPDetails -Encrypted $false
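For context, inside the runbooks this variable would typically be read back and converted from JSON along these lines. This is a simplified sketch for illustration, not the exact code used by the ASR scripts:

# Simplified sketch: read the recovery plan details inside an Azure Automation runbook
$RPDetails = (Get-AutomationVariable -Name "SharePointRecoveryPlan") | ConvertFrom-Json

# Look up the load balancer mapped to a protected VM by its GUID (property name is illustrative)
$vmConfig = $RPDetails."406d039a-eeae-11e6-b0b8-0050568f7993"
Write-Output ("Attach " + $vmConfig.LBName + " in resource group " + $vmConfig.ResourceGroupName)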

You have now completed customizing your recovery plan and it is ready to be failed over.

Azure Site Recovery SharePoint Recovery Plan

 

Once the failover (or test failover) is complete and the SharePoint farm runs in Microsoft Azure, it looks like this:

SharePoint Farm on Azure failed over using Azure Site Recovery

 

Watch this demo video to see all of this in action: how, using the built-in constructs that Azure Site Recovery provides, we can fail over a three-tier application using a single-click recovery plan. The recovery plan automates the following tasks:

  1. Failing over the SQL Always On Availability Group to the virtual machine running in Microsoft Azure
  2. Failing over the web and app tier virtual machines that were part of the SharePoint farm
  3. Attaching an internal load balancer on the application tier virtual machines of the SharePoint farm that are in an availability set
  4. Attaching an external load balancer on the web tier virtual machines of the SharePoint farm that are in an availability set

 

With a relentless focus on ensuring that you succeed with full application recovery, Azure Site Recovery is the one-stop shop for all your disaster recovery and migration needs. Our mission is to democratize disaster recovery with the power of Microsoft Azure: to enable not just the elite tier-1 applications to have a business continuity plan, but to offer a compelling solution that empowers you to set up a working end-to-end disaster recovery plan for 100% of your organization's IT applications.

You can check out additional product information and start protecting and migrating your workloads to Microsoft Azure using Azure Site Recovery today. You can use the powerful replication capabilities of Azure Site Recovery for 31 days at no charge for every new physical server or virtual machine that you replicate, whether it is running on VMware or Hyper-V. To learn more about Azure Site Recovery, check out our How-To Videos. Visit the Azure Site Recovery forum on MSDN for additional information and to engage with other customers, or use the Azure Site Recovery User Voice to let us know what features you want us to enable next.

Azure File Storage on-premises access for Ubuntu 17.04


Azure File Storage is a service that offers shared file storage for any OS that implements the supported SMB protocol. Since general availability we have supported both Windows and Linux; however, on-premises access was only available to Windows. While Windows customers widely use this capability, we have received feedback that Linux customers want to do the same, and with this capability Linux access extends beyond the storage account region to cross-region as well as on-premises scenarios. Today we are happy to announce Azure File Storage on-premises access from all regions for our first Linux distribution, Ubuntu 17.04. This support works right out of the box and no extra setup is needed.

How to Access Azure File Share from On-Prem Ubuntu 17.04

Steps to access Azure File Share from an on-premises Ubuntu 17.04 or Azure Linux VM are the same.

Step 1: Check whether TCP port 445 is reachable through your firewall. You can test whether the port is open using the following command:

nmap <azure storage account>.file.core.windows.net


Step 2: Copy the command from the Azure portal, or replace <storage account name>, <file share name>, <mountpoint>, and <storage account key> in the mount command below. Learn more about mounting in How to use Azure Files on Linux.

sudo mount -t cifs //<storage account name>.file.core.windows.net/<file share name> <mountpoint> -o vers=3.0,username=<storage account name>,password=<storage account key>,dir_mode=0777,file_mode=0777,sec=ntlmssp
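If you still need to look up the storage account key referenced in the command above, one way to retrieve it is with Azure PowerShell. This is a minimal sketch with hypothetical names; depending on your AzureRM module version the output shape of the cmdlet may differ slightly:

# Minimal sketch (hypothetical names): retrieve the storage account key used in the mount command
$keys = Get-AzureRmStorageAccountKey -ResourceGroupName "StorageRG" -Name "mystorageaccount"
$keys[0].Value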

Step 3: Once mounted, you can perform file operations on the share.


Other Linux Distributions

Backporting of this enhancement to Ubuntu 16.04 and 16.10 is in progress and can be tracked here: CIFS: Enable encryption for SMB3. RHEL support is also in progress, and full support will be released with the next RHEL release.

Summary and Next Steps

We are excited to see tremendous adoption of Azure File Storage. You can try Azure File storage by getting started in under 5 minutes. Further information and detailed documentation links are provided below.

We will continue to enhance the Azure File Storage based on your feedback. If you have any comments, requests, or issues, you can use the following channels to reach out to us:

Discover and act on insights with new Azure monitoring and diagnostics capabilities


Imagine if you could get a dynamic, end-to-end view of your IT environment, and respond to issues as they arise. You could see how your applications, services, and workloads are connected, and how your servers interact with the networks. You could see connections turn red if they fail, view cascading alerts, and see rogue clients that could be causing problems. Not only that, but with just a tap of a button you could fix the issue, and see the red alerts go away. Bringing together your data from workloads, applications, and networks helps you start to see the big picture. However, when you apply machine learning and crowd-sourced knowledge from around the globe, suddenly you can visualize and act on your data in a way that you never have before.

To help you get this visibility into your cloud and on-premises environments and make it actionable, today we are introducing new monitoring and diagnostics capabilities in Azure:

Visibility

  • Map out process and server dependencies with Service Map, a new technology in Azure Insight & Analytics, to make it easier to troubleshoot and plan ahead for future changes or migrations.
  • Use DNS Analytics, a new solution in Azure Insight & Analytics, to help you visualize real-time security, performance, and operations-related data for your DNS servers.

Action

  • Remediate issues right away with a new option in Azure Insight & Analytics to Take Action and resolve an issue directly from a log search result.
  • Use the Smart Diagnostics functionality in Azure Application Insights to diagnose sudden changes in the performance or usage of your web application.

In addition, expanded support for Linux has been added to monitoring tools and capabilities in Azure Automation & Control, including Linux patching and Linux file change tracking. You can also now ingest custom logs into Azure Application Insights for more powerful data correlations and analytics.

Visualize dependencies in your environment

A big part of what makes you successful is how quickly you can resolve issues. We take the hard part out of data collection and insights, so you can analyze the issues and resolve them more quickly. The Service Map technology, now generally available, allows you to automatically discover and build a common reference map of dependencies across servers, processes, and 3rd party services, in real-time. This helps you to isolate problems and accelerate root-cause analysis, by visualizing process and server dependencies. In the same dashboard, you can see prioritized alerts, recent changes, notable security issues, and system updates that are due.

In addition, you can use this new capability to make sure nothing is left behind during migrations with the help of the detailed process and server inventory, or to identify servers that need to be decommissioned. InSpark uses Service Map to help customers plan and execute migrations to Azure. “Before Service Map, we had to rely on customers to provide information about their servers’ dependencies, and that information was error prone and incomplete,” says Maarten Goet, Managing Partner. “Now, with Service Map, we immediately see all of their dependencies and we can build an accurate plan for moving business services into Azure.”

See across network devices

Modern businesses are powered by apps that rely on fast, reliable, and secure network connections. The Domain Name System (DNS) is a core component of an organization’s IT infrastructure that enables such network connections, so visibility into the operations, performance, and auditing of DNS servers is critical for businesses.

You can use DNS Analytics, a solution now in public preview in Azure Insight & Analytics, to get visibility into your entire DNS infrastructure. This solution helps you visualize real-time security, performance, and operations-related information of your DNS servers through real-time dashboard views. You can drill-down into the dashboard to gain granular details on the state of your DNS infrastructure, create alerts, and remediate DNS server issues. “DNS Analytics provided us with the in-depth information I have been missing,” says Marius Sandbu, cloud architect for Evry, “both to be able to troubleshoot DNS registrations from clients and servers, but also to detect traffic to malware domains.”

Take action to remediate

Turning insights into action is made easier with the ability to create alerts when something is out of the norm and to connect alerts with workflows that remediate automatically. Now it’s even easier in Azure with the capability to perform in-line remediation using the Take Action button in Azure Insight & Analytics. In the Log Search view, you can now choose to take action on a search result to immediately address whatever the log detected. This functionality remediates the issue by selecting and deploying a runbook, either one you scripted previously or one taken from the runbook gallery in Azure Automation. You can solve your problem right away, eliminating extra work and time during an already pressing situation. A minimal example of what such a remediation runbook might look like is sketched below.
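To give a feel for what such a remediation runbook might contain, here is a deliberately simple, hypothetical PowerShell runbook that restarts a Windows service on an affected machine. It assumes a Hybrid Runbook Worker with access to the target computer; real runbooks would take their parameters from the alert or search context:

# Hypothetical remediation runbook: restart a service on the machine that raised the alert
param (
    [Parameter(Mandatory = $true)] [string] $ComputerName,
    [string] $ServiceName = "Spooler"
)

Invoke-Command -ComputerName $ComputerName -ScriptBlock {
    param ($svc)
    Restart-Service -Name $svc
    Get-Service -Name $svc | Select-Object Name, Status
} -ArgumentList $ServiceName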

Diagnose application issues on the spot

You can now diagnose sudden changes in your web app’s performance or usage with a single click, powered by Machine Learning algorithms in Azure Application Insights Analytics. The Smart Diagnostics feature is available whenever you create or render a time chart. Anywhere it finds an unusual change from the trend of your results, such as a spike or a dip, it identifies a pattern of dimensions that might explain the change. This helps you diagnose the problem quickly. Smart Diagnostics can successfully identify a pattern of property values associated with a change, and highlight the difference between results with and without that pattern, essentially suggesting the most probable root cause leading to an anomaly.

Get started today

Azure management and security services help you to gain greater visibility into your environment with advanced data analysis and visualization, and make it easy to turn insights into action. Learn more about the capabilities of Azure Insight & Analytics, Azure Automation, and Azure Application Insights.

Deploy PHP application to Azure App Service using VSTS


This blog post shows how you can deploy a new PHP application from Visual Studio Team Services or Microsoft Team Foundation Server to Azure App Service.

Download the sample

  • Fork the Hello World sample app repository to your GitHub account
    https://github.com/RoopeshNair/php-docs-hello-world

Create a web app

  • From Azure portal > App Services > + Add
    [Screenshot: adding an App Service in the Azure portal]
  • Select Web App, click “Create”, and provide the App Name, Subscription, and Resource Group details. Once the deployment is successful, configure the PHP version in “Application settings” to use “7.0”, as shown below (a scripted alternative is sketched after the screenshot).

[Screenshot: creating the new web app and setting the PHP version]
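If you prefer scripting the web app creation rather than clicking through the portal, a rough Azure PowerShell equivalent could look like the sketch below. All resource names are placeholders, and it assumes the AzureRM module of that era, where Set-AzureRmWebApp exposes the PHP version setting:

# Minimal sketch (hypothetical names): create the web app and set the PHP version to 7.0
New-AzureRmResourceGroup -Name "PhpDemoRG" -Location "West US"
New-AzureRmAppServicePlan -Name "PhpDemoPlan" -ResourceGroupName "PhpDemoRG" -Location "West US" -Tier Free
New-AzureRmWebApp -Name "php-hello-world-demo" -ResourceGroupName "PhpDemoRG" -Location "West US" -AppServicePlan "PhpDemoPlan"

# Equivalent of setting PHP "7.0" under Application settings in the portal
Set-AzureRmWebApp -Name "php-hello-world-demo" -ResourceGroupName "PhpDemoRG" -PhpVersion "7.0"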

Setup Release

  1. Open the Releases tab of the Build & Release hub, open the + drop-down in the list of release definitions, and choose Create release definition
  2. In the DEPLOYMENT TEMPLATES dialog, select the “Deploy PHP App to Azure App Service” template and choose OK.

[Screenshot: the “Deploy PHP App to Azure App Service” deployment template]

  3. Click “Choose Later” for the artifact to be deployed.

[Screenshot: the “Choose Later” artifact option]

  4. Configure the Azure App Service Deployment task:
      • Azure Subscription: Select a connection from the list under Available Azure Service Connections. If no connections appear, choose Manage, select New Service Endpoint | Azure Resource Manager, and follow the prompts. Then return to your release definition, refresh the Azure Subscription list, and select the connection you just created.
        Note: If your Azure subscription is defined in an Azure Government Cloud, ensure your deployment process meets the relevant compliance requirements. For more details, see Azure Government Cloud deployments.
      • App Service Name: the name of the App Service (the part of the URL without .azurewebsites.net)
      • Deploy to Slot: make sure this is cleared (the default)
      • Virtual Application : leave blank
      • Package or Folder: Click on “…” button
        [Screenshot: the Package or Folder picker]
      • Click on “Link to an artifact source”

[Screenshot: linking an artifact source]

      • Select GitHub as the artifact source type and point it to your GitHub repository (forked earlier). You may need to create a GitHub service endpoint.

[Screenshot: the GitHub artifact source]

      • Select the repo root as the folder to deploy
        [Screenshot: the selected package folder]
      • Advanced:(optional)
        • Deployment script: The task gives you additional flexibility to run a deployment script on the Azure App Service. For example, you can run a script to update dependencies (for example, via the Composer extension) on the Azure App Service instead of packaging the dependencies in the build step.
          [Screenshot: the Composer extension deployment script]
        • Take App Offline: If you run into locked file problems when you test the release, try selecting this check box.
    5. Type a name for the new release definition and, optionally, change the name of the environment from Default Environment to QA. Also, set the deployment condition on the environment to “Automatically start after release creation”.
    6. Save the new release definition. Create a new release and verify that the application has been deployed correctly.

    Related Topics

    1. Configure PHP in Azure App Service Web Apps

    Visual Studio 2017 Preview 15.2


    Today we are releasing Visual Studio 2017 Preview 15.2. For information on what this preview contains, please refer to the Visual Studio 2017 Preview release notes.

    If you haven’t heard about our new Preview releases, do take a few minutes to learn about them on the Visual Studio Preview page on visualstudio.com.

    As always, we welcome your feedback. For problems, let us know via the Report a Problem option in the upper right corner, either from the installer or the Visual Studio IDE itself. Track your feedback on the developer community portal. For suggestions, let us know through UserVoice.

     

    Sarika Calla, Principal Program Manager, Visual Studio

    Sarika Calla runs the Visual Studio release engineering team with responsibility for making Visual Studio releases available to our customers around the world.

    Office Online Server April 2017 release


    We are excited to announce our second major update to Office Online Server (OOS), which includes support for Windows Server 2016 as well as several improvements. OOS allows organizations to provide users with browser-based versions of Word, PowerPoint, Excel and OneNote, among other capabilities offered in Office Online, from their own datacenter.

    In this release, we officially offer support for Windows Server 2016, which has been highly requested. If you are running Windows Server 2016, you can now install OOS on it. Please verify that you have the latest version of the OOS release to ensure the best experience.

    In addition, this release includes the following improvements:

    • Performance improvements to co-authoring in PowerPoint Online.
    • Equation viewing in Word Online.
    • New navigation pane in Word Online.
    • Improved undo/redo in Word Online.
    • Enhanced W3C accessibility support for users who rely on assistive technologies.
    • Accessibility checkers for all applications to ensure that all Office documents can be read and authored by people with different abilities.

    We encourage OOS customers to visit the Volume License Servicing Center to download the April 17, 2017 release. You must uninstall the previous version of OOS to install this release. We only support the latest OOS version—with bug fixes and security patches available from Microsoft Updates Download Center.

    Customers with a Volume Licensing account can download OOS from the Volume License Servicing Center at no cost and will have view-only functionality—which includes PowerPoint sharing in Skype for Business. Customers that require document creation and edit and save functionality in OOS need to have an on-premises Office Suite license with Software Assurance or an Office 365 ProPlus subscription. For more information on licensing requirements, please refer to our product terms.


    The week in .NET – Happy birthday .NET with Robin Cole, TinyORM, 911 Operator


    Previous posts:

    On .NET

    This week on the show, we’ll speak with Don Schenck about Red Hat. We’ll take questions on Gitter, on the dotnet/home channel and on Twitter. Please use the #onnet tag. It’s OK to start sending us questions in advance if you can’t do it live during the show.

    Happy birthday .NET with Robin Cole

    In February we got together with many Microsoft alumni and current employees for a huge .NET birthday bash. We spoke to Robin Cole, who joined Microsoft in 2005 and worked on many projects, including Expression and Visual Studio. In this quick interview, she shares her thoughts on developers and designers and the exciting future ahead.

    Package of the week: TinyORM

    TinyORM is a new micro-ORM for .NET that automates connection and transaction management and is simple and easy to use correctly.

    Game of the Week: 911 Operator

    911 Operator is an indie simulation game. Ever wanted to see what it was like to be a 911 operator? Well, now you can! In 911 Operator, you’ll manage emergency lines by answering incoming calls and reacting appropriately. Give first aid instructions, dispatch emergency respondents or even choose to ignore the call which could very well be from a prankster. In 911 Operator, you can play in any city of the world by using Free Play mode to download real maps, which of course includes real addresses, streets and emergency infrastructure.


    911 Operator was created by Jutsu Games using C# and Unity. It is available on Steam for PC, Mac and Linux.

    Meetup of the week: Global Azure Bootcamp in Miami, FL

    The dotnetmiami user group hosts their Global Azure Bootcamp this Saturday at 9:00AM in Miami.

    .NET

    ASP.NET

    C#

    F#

    New F# language Suggestions:

    There was a major F# conference two weeks ago, F# eXchange. You can view all of the talks online here. If you wish to see all the new and exciting areas where F# is going, please watch them. They’re entirely free.

    Check out F# Weekly for more great content from the F# community.

    VB

    Xamarin

    Microsoft Engineering is offering a limited number of technical sessions to help your team build better apps faster, and avoid the common pitfalls in going mobile. The Go Mobile Tech Workshops are dedicated sessions for your team covering everything from your technology stack and architecture to the latest in Visual Studio 2017 and DevOps best practices. These workshops help your team get ahead with current projects and prepare for what is coming next in app development.

    Apply here.

    Azure

    UWP

    Game Development

    And this is it for this week!

    Contribute to the week in .NET

    As always, this weekly post couldn’t exist without community contributions, and I’d like to thank all those who sent links and tips. The F# section is provided by Phillip Carter, the gaming section by Stacey Haffner, the Xamarin section by Dan Rigby, and the UWP section by Michael Crump.

    You can participate too. Did you write a great blog post, or just read one? Do you want everyone to know about an amazing new contribution or a useful library? Did you make or play a great game built on .NET?
    We’d love to hear from you, and feature your contributions on future posts:

    This week’s post (and future posts) also contains news I first read on The ASP.NET Community Standup, on Weekly Xamarin, on F# weekly, and on The Morning Brew.


    Warren Buffett Shareholder Letters: Sentiment Analysis in R


    Warren Buffett — known as the "Oracle of Omaha" — is one of the most successful investors of all time. Wherever the winds of the market may blow, he always seems to find a way to deliver impressive returns for his investors and his company, Berkshire Hathaway. Every year he authors his famous "shareholder letter" with his musings about the market and investment strategy, and — perhaps reflecting his continued success — this sentiment analysis of his letters by data scientist Michael Toth shows that the tone has been generally positive over time. Only five of the forty years of letters show an average negative sentiment: those correspond to market downturns in 1987, 1990, 2001/2002 and 2008.

    [Chart: average sentiment of Berkshire Hathaway shareholder letters by year]

    Michael used the R language to generate a sentiment score for each letter, and the process was surprisingly simple (you can find the R code here). The letters are published as PDF documents, from which the text can be extracted using the pdf_text function in the pdftools package. Then you can use the tidytext package to decompose the letters into individual words, whose Bing sentiment score can be calculated using its get_sentiments function. From there, a simple ggplot2 bar chart is used to show the average sentiment scores for each letter.

    For more on the sentiment of Warren Buffett's shareholder letters, including an analysis of the most-used positive and negative words, follow the link to the complete blog post below.

    Michael Toth: Sentiment Analysis of Warren Buffett's Letters to Shareholders

    Building a Telepresence App with HoloLens and Kinect


    When does the history of mixed reality start? There are lots of suggestions, but 1977 always shows up as a significant year. That’s the year millions of children – many of whom would one day become the captains of Silicon Valley – first experienced something they wouldn’t be able to name for another decade or so.

    That something was the plea of an intergalactic princess that set off a Star Wars film franchise still going strong today: “Help me, Obi-Wan Kenobi, you’re my only hope.” It’s a fascinating validation of Marshall McLuhan’s dictum that the medium is the message. While the content of Princess Leia’s message is what we have an emotional attachment to, it is the medium of the holographic projection – today we would call it “augmented reality” or “mixed reality” – that we remember most vividly.

    While this post is not going to provide an end-to-end blueprint for your own Princess Leia hologram, it will provide an overview of the technical terrain, point out some of the technical hurdles and point you in the right direction. You’ll still have to do a lot of work, but if you are interested in building a telepresence app for the HoloLens, this post will help you get there.

    An external camera and network connection

    The HoloLens is equipped with inside-out cameras. In order to create a telepresence app, however, you are going to need a camera that can face you and take videos of you – in other words, an outside-in camera. This post is going to use the Kinect v2 as an outside-in camera because it is widely available, very powerful and works well with Unity. You may choose to use a different camera that provides the features you need, or even use a smartphone device.

    The HoloLens does not allow third-party hardware to plug into its mini-USB port, so you will also need some sort of networking layer to facilitate inter-device communication. For this post, we’ll be using the HoloToolkit’s sharing service – again, because it is just really convenient to do so and even has a dropdown menu inside of the Unity IDE for starting the service. You could, however, build your own custom socket solution as Mike Taulty did or use the Sharing with UNET code in the HoloToolkit Examples, which uses a Unity provided networking layer.

    In the long run, the two choices that will most affect your telepresence solution are what sort of outside-in cameras you plan to support and what sort of networking layer you are going to use. These two choices will determine the scalability and flexibility of your solution.

    Using the HoloLens-Kinect project

    Many telepresence HoloLens apps today depend in some way on Michelle Ma’s open-source HoloLens-Kinect project. The genius of the app is that it glues together two libraries, the Unity Pro plugin package for Kinect with the HoloToolkit sharing service, and uses them in unintended ways to arrive at a solution.

    Even though the Kinect plugin for Unity doesn’t work in UWP (and the Kinect cannot be plugged into a HoloLens device in any case), it can still run when deployed to Windows or when running in the IDE (in which case it is using the .NET 3.5 framework rather than the .NET Core framework). The trick, then, is to run the Kinect integration in Windows and then send messages to the HoloLens over a wireless network to get Kinect and the device working together.

    On the network side, the HoloToolkit’s sharing service is primarily used to sync world anchors between different devices. It also requires that a service be instantiated on a PC to act as a communication bus between different devices. The sharing service doesn’t have to be used as intended, however. Since the service is already running on a PC, it can also be used to communicate between just the PC and a single HoloLens device. Moreover, it can be used to send more than just world anchors – it can really be adapted to send any sort of primitive values – for instance, Kinect joint positions.

    To use Ma’s code, you need two separate Unity projects: one for running on a desktop PC and the other for running on the HoloLens. You will add the Kinect plugin package to the desktop app. You will add the sharing prefab from the HoloToolkit to both projects. In the app intended for the HoloLens, add the IP address of your machine to the Server Address field in the Sharing Stage component.

    The two apps are largely identical. On the PC side, the app takes the body stream from the Kinect and sends the joint data to a script named BodyView.cs. BodyView creates spheres for each joint when it recognizes a new body and then repositions these joints whenever it receives updated Kinect data.

    
    private GameObject CreateBodyObject(ulong id)
    {
        GameObject body = new GameObject("Body:" + id);
        for (int i = 0; i < 25; i++)
        {
            GameObject jointObj = GameObject.CreatePrimitive(PrimitiveType.Sphere);
    
            jointObj.transform.localScale = new Vector3(0.3f, 0.3f, 0.3f);
            jointObj.name = i.ToString();
            jointObj.transform.parent = body.transform;
        }
        return body;
    }
    
    
    private void RefreshBodyObject(Vector3[] jointPositions, GameObject bodyObj)
    {
        for (int i = 0; i < 25; i++)
        {
            Vector3 jointPos = jointPositions[i];
    
            Transform jointObj = bodyObj.transform.FindChild(i.ToString());
            jointObj.localPosition = jointPos;
        }
    }
    
    

    As this is happening, another script called BodySender.cs intercepts this data and sends it to the sharing service. On the HoloLens device, a script named BodyReceiver.cs gets this intercepted joint data and passes it to its own instance of the BodyView class that animates the dot man made up of sphere primitives.

    The code used to adapt the sharing service for transmitting Kinect data is contained in Ma’s CustomMessages2 class, which is really just a straight copy of the CustomMessages class from the HoloToolkit sharing example with a small modification that allows joint data to be sent and received:

    
    
    public void SendBodyData(ulong trackingID, Vector3[] bodyData)
    {
        // If we are connected to a session, broadcast our info
        if (this.serverConnection != null && this.serverConnection.IsConnected())
        {
            // Create an outgoing network message to contain all the info we want to send
            NetworkOutMessage msg = CreateMessage((byte)TestMessageID.BodyData);
    
            msg.Write(trackingID);
    
            foreach (Vector3 jointPos in bodyData)
            {
                AppendVector3(msg, jointPos);
            }
    
            // Send the message as a broadcast
            this.serverConnection.Broadcast(
                msg,
                MessagePriority.Immediate,
                MessageReliability.UnreliableSequenced,
                MessageChannel.Avatar);
        }
    }
    
    

    Moreover, once you understand how CustomMessages2 works, you can pretty much use it to send any kind of data you want.

    Be one with The Force

    Another thing the Kinect is very good at is gesture recognition. HoloLens currently supports a limited number of gestures and is constrained by what the inside-out cameras can see – mostly just your hands and fingers. You can use the Kinect-HoloLens integration above, however, to extend the HoloLens’ repertoire of gestures to include the user’s whole body.

    For example, you can recognize when a user raises her hand above her head simply by comparing the relative positions of these two joints. Because this pose recognition only requires the joint data already transmitted by the sharing service and doesn’t need any additional Kinect data, it can be implemented completely on the receiver app running in the HoloLens.

    
    private void DetectGesture(GameObject bodyObj)
    {
        string HEAD = "3";
        string RIGHT_HAND = "11";
    
        // detect gesture involving the right hand and the head
        var head = bodyObj.transform.FindChild(HEAD);
        var rightHand = bodyObj.transform.FindChild(RIGHT_HAND);
    
        // if right hand is half a meter above head, do something
        if (rightHand.position.y > head.position.y + .5)
            _gestureCompleteObject.SetActive(true);
        else
            _gestureCompleteObject.SetActive(false);
    }
    
    

    In this sample, a hidden item is shown whenever the pose is detected. It is then hidden again whenever the user lowers her right arm.

    The Kinect v2 has a rich literature on building custom gestures and even provides a tool for recording and testing gestures called the Visual Gesture Builder that you can use to create unique HoloLens experiences. Keep in mind that while many gesture solutions can be run directly in the HoloLens, in some cases, you may need to run your gesture detection routines on your desktop and then notify your HoloLens app of special gestures through a further modified CustomMessages2 script.

    As fun as dot man is to play with, he isn’t really that attractive. If you are using the Kinect for gesture recognition, you can simply hide him by commenting a lot of the code in BodyView. Another way to go, though, is to use your Kinect data to animate a 3D character in the HoloLens. This is commonly known as avateering.

    Unfortunately, you cannot use joint positions for avateering. The relative sizes of a human being’s limbs are often not going to be the same as those on your 3D model, especially if you are trying to animate models of fantastic creatures rather than just humans, so the relative joint positions will not work out. Instead, you need to use the rotation data of each joint. Rotation data, in the Kinect, is represented by an odd mathematical entity known as a quaternion.

    Quaternions

    Quaternions are to 3D programming what midichlorians are to the Star Wars universe: They are essential, they are poorly understood, and when someone tries to explain what they are, it just makes everyone else unhappy.

    The Unity IDE doesn’t expose quaternions directly. Instead it uses rotations around the X, Y and Z axes (pitch, yaw and roll) when you manipulate objects in the Scene Viewer. These are also known as Euler angles.

    There are a few problems with this, however. Using the IDE, if I try to rotate the arm of my character using the yellow drag line, it will actually rotate both the green axis and the red axis along with it. Somewhat more alarming, as I try to rotate along just one axis, the Inspector windows show that my rotation around the Z axis is also affecting the rotation around the X and Y axes. The rotation angles are actually interlocked in such a way that even the order in which you make changes to the X, Y and Z rotation angles will affect the final orientation of the object you are rotating. Another interesting feature of Euler angles is that they can sometimes end up in a state known as gimbal locking.

    These are some of the reasons that avateering is done using quaternions rather than Euler angles. To better visualize how the Kinect uses quaternions, you can replace dot man’s sphere primitives with arrow models (there are lots you can find in the asset store). Then, grab the orientation for each joint, convert it to a quaternion type (quaternions have four fields rather than the three in Euler angles) and apply it to the rotation property of each arrow.

    
    private static Quaternion GetQuaternionFromJointOrientation(Kinect.JointOrientation jointOrientation)
    {
        return new Quaternion(jointOrientation.Orientation.X, jointOrientation.Orientation.Y, jointOrientation.Orientation.Z, jointOrientation.Orientation.W);
    }
    private void RefreshBodyObject(Vector3[] jointPositions, Quaternion[] quaternions, GameObject bodyObj)
    {
        for (int i = 0; i < 25; i++)
        {
            Vector3 jointPos = jointPositions[i];
    
            Transform jointObj = bodyObj.transform.FindChild(i.ToString());
            jointObj.localPosition = jointPos;
            jointObj.rotation = quaternions[i];
        }
    }
    
    

    These small changes result in the arrow man below who will actually rotate and bend his arms as you do.

    For avateering, you basically do the same thing, except that instead of mapping identical arrows to each rotation, you need to map specific body parts to these joint rotations. This post is using the male model from Vitruvius avateering tools, but you are welcome to use any properly rigged character.

    Once the character limbs are mapped to joints, they can be updated in pretty much the same way arrow man was. You need to iterate through the joints, find the mapped GameObject, and apply the correct rotation.

    
    private Dictionary<int, string> RigMap = new Dictionary<int, string>()
    {
        {0, "SpineBase"},
        {1, "SpineBase/SpineMid"},
        {2, "SpineBase/SpineMid/Bone001/Bone002"},
        // etc ...
        {22, "SpineBase/SpineMid/Bone001/ShoulderRight/ElbowRight/WristRight/ThumbRight"},
        {23, "SpineBase/SpineMid/Bone001/ShoulderLeft/ElbowLeft/WristLeft/HandLeft/HandTipLeft"},
        {24, "SpineBase/SpineMid/Bone001/ShoulderLeft/ElbowLeft/WristLeft/ThumbLeft"}
    };
    
    private void RefreshModel(Quaternion[] rotations)
    {
        for (int i = 0; i < 25; i++)
        {
            if (RigMap.ContainsKey(i))
            {
                Transform rigItem = _model.transform.FindChild(RigMap[i]);
                rigItem.rotation = rotations[i];
            }
        }
    }
    
    

    This is a fairly simplified example, and depending on your character rigging, you may need to apply additional transforms on each joint to get them to the expected positions. Also, if you need really professional results, you might want to look into using inverse kinematics for your avateering solution.

    If you want to play with working code, you can clone Wavelength’s Project-Infrared repository on github; it provides a complete avateering sample using the HoloToolkit sharing service. If it looks familiar to you, this is because it happens to be based on Michelle Ma’s HoloLens-Kinect code.

    Looking at point cloud data

    To get even closer to the Princess Leia hologram message, we can use the Kinect sensor to send point cloud data. Point clouds are a way to represent depth information collected by the Kinect. Following the pattern established in the previous examples, you will need a way to turn Kinect depth data into a point cloud on the desktop app. After that, you will use shared services to send this data to the HoloLens. Finally, on the HoloLens, the data needs to be reformed as a 3D point cloud hologram.

    The point cloud example above comes from the Brekel Pro Point Cloud v2 tool, which allows you to read, record and modify point clouds with your Kinect.

    The tool also includes a Unity package that replays point clouds, like the one above, in a Unity for Windows app. The final steps of transferring point cloud data over the HoloToolkit sharing server to HoloLens is an exercise that will be left to the reader.

    If you are interested in a custom server solution, however, you can give the open source LiveScan 3D – HoloLens project a try.

    HoloLens shared experiences and beyond

    There are actually a lot of ways to orchestrate communication for the HoloLens of which, so far, we’ve mainly discussed just one. A custom socket solution may be better if you want to institute direct HoloLens-to-HoloLens communication without having to go through a PC-based broker like the sharing service.

    Yet another option is to use a framework like WebRTC for your communication layer. This has the advantage of being an open specification, so there are implementations for a wide variety of platforms such as Android and iOS. It is also a communication platform that is used, in particular, for video chat applications, potentially giving you a way to create video conferencing apps not only between multiple HoloLenses, but also between a HoloLens and mobile devices.

    In other words, all the tools for doing HoloLens telepresence are out there, including examples of various ways to implement it. It’s now just a matter of waiting for someone to create a great solution.


    Cleaning up the Visual Studio 2017 package cache


    With the ability to disable or move the package cache for Visual Studio 2017 and other products installed with the new installer, packages are removed for whatever instance(s) you are installing, modifying, or repairing.

    If you have a lot of instances and want to clean all of them up easily from the command line – perhaps scripting it for users in an organization – you can combine tools such as vswhere or the VSSetup PowerShell module with the installer at %ProgramFiles(x86)%\Microsoft Visual Studio\Installer\vs_installer.exe.

    Batch script with vswhere

    You can get the installation path for all instances and call the installer for each to disable the cache (only necessary once, but for simplicity of the script we’ll pass it for each instance) and modify – which will basically just remove package payloads – or re-enable the cache and repair the packages to re-download packages.

    Note that the following sample is intended for use within a batch script. If typing on the command line only use one “%”. Run this within an elevated command prompt to avoid being prompted to elevate each time vs_installer.exe is launched.

    @echo off
    setlocal enabledelayedexpansion
    for /f "usebackq delims=" %%i in (`vswhere -all -property installationPath`) do (
      if /i "%1"=="cache" (
        set args=repair --cache
      ) else (
        set args=modify --nocache
      )
      start /wait /d "%ProgramFiles(x86)%\Microsoft Visual Studio\Installer" vs_installer.exe !args! --installPath "%%i" --passive --norestart
      rem Use delayed expansion so the exit code is evaluated per iteration inside the loop
      if "!ERRORLEVEL!"=="3010" set REBOOTREQUIRED=1
    )
    if "%REBOOTREQUIRED%"=="1" (
      echo Please restart your machine
      exit /b 3010
    )
    

    PowerShell script with VSSetup

    While you can also use vswhere within PowerShell easily (e.g. vswhere -format json | convertfrom-json), this example uses the VSSetup PowerShell module you can easily obtain in Windows 10 with: install-module -scope currentuser VSSetup.
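    As an aside, the vswhere route mentioned above could look roughly like this from PowerShell (a minimal sketch, assuming vswhere.exe is on your PATH):

    # Sketch: enumerate instance installation paths with vswhere instead of the VSSetup module
    $instances = & vswhere.exe -all -format json | ConvertFrom-Json
    $instances | ForEach-Object { $_.installationPath }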

    Put the following example into a script and run it from an elevated PowerShell host to avoid being prompted to elevate each time vs_installer.exe is launched.

    param (
      [switch] $Cache
    )
    $start_args = if ($Cache) {
      'repair', '--cache'
    } else {
      'modify', '--nocache'
    }
    get-vssetupinstance -all | foreach-object {
      # $args is an automatic variable in PowerShell, so use a different name for the argument list
      $vsargs = $start_args + '--installPath', "`"$($_.InstallationPath)`"", '--passive', '--norestart'
      # Start-Process does not set $LASTEXITCODE; use -PassThru and check the exit code directly
      $proc = start-process -wait -passthru -filePath "${env:ProgramFiles(x86)}\Microsoft Visual Studio\Installer\vs_installer.exe" -args $vsargs
      if ($proc.ExitCode -eq 3010) {
        $REBOOTREQUIRED = 1
      }
    }
    if ($REBOOTREQUIRED) {
      "Please restart your machine"
      exit 3010
    }
    

    Both of these examples will remove all instances’ packages or put them back, depending on the command-line arguments you pass to the scripts.

    Android and iOS development with C++ in Visual Studio


    When it comes to building mobile applications, many developers write most or a part of the apps in C++. Why? Those who are building computationally intensive apps such as games and physics simulations choose C++ for its unparalleled performance, and the others choose C++ for its cross-platform nature and the ability to leverage existing C/C++ libraries in their mobile applications. Whether you’re targeting Universal Windows Platform (UWP), Android, or iOS, Visual Studio enables building cross-platform C++ mobile applications with full editing and debugging capabilities all in one single IDE.

    In this blog post, we will focus on how to build Android and iOS apps with C++ in Visual Studio. First we will take a look at how to acquire the tools for Android and iOS development, then we will create a few C++ mobile apps using the built-in templates. Next we will use the Visual Studio IDE to write C++ and Java code, then we will use the world-class Visual Studio debugger to catch issues in C++ and Java code. Finally, we will talk about how the C++ mobile solution can be used in conjunction with Xamarin.

    Install Visual Studio for Android and iOS development

    First, download Visual Studio 2017 and launch the Visual Studio installer.

    To build Android or iOS applications, choose the “Mobile development with C++” workload under the “Mobile & Gaming” category.

    [Screenshot: the “Mobile development with C++” workload in the Visual Studio installer]

    Android development: By default, this workload includes the core Visual Studio editor, the C++ debugger, GCC and Clang compilers, Android SDKs and NDKs, Android build tools, Java SDK, and C++ Android development tools. You could choose to install the Google Android Emulator in the Optional Component list if you don’t have an Android device for testing. This should give you everything you need to start building Android applications.

    iOS development: if you’re also targeting iOS, check “C++ iOS development tools” in the Optional Component list and you would be good to go.

    Create a new Android application using project templates

    If you plan to start by targeting Android first and worry about other platforms later, the built-in Visual Studio Android project templates – Native-Activity Application, Static Library, and Dynamic Shared Library – could be a great starting point. If you’d rather start with a cross-platform solution to target multiple mobile platforms, jump to the next section, Build an OpenGLES Application on Android and iOS, where we’ll talk about building an app that targets both platforms with shared C++ code.

    You can find the Android templates under Visual C++ -> Cross Platform -> Android node.

    [Screenshot: the Android project templates in the New Project dialog]

    Here we’re going to create a new Native Activity Application (Android), which is popular for creating games and graphical-intensive apps. Once the project is created, in the Solution Platforms dropdown, choose the right architecture that matches the Android emulator or device that you’re using, and then press F5 to run the app.

    [Screenshot: selecting the architecture in the Solution Platforms dropdown]

    By default, Visual Studio uses the Clang toolchain to compile for Android. The app should build and run successfully, and you should see the app changing colors in the background. This article Create an Android Native Activity App discusses the Native Activity project in more details.

    Build an OpenGLES Application on Android and iOS

    The OpenGL ES Application project template under Visual C++->Cross Platform node is a good starting point for a mobile app targeting both Android and iOS. OpenGL ES (OpenGL for Embedded Systems or GLES) is a 2D and 3D graphics API that is supported on many mobile devices. This template creates a simple iOS app and an Android Native Activity app which has C++ code in common that uses OpenGL ES to display the same animated rotating cube on each platform.

    [Screenshot: the OpenGL ES Application project template]

    The created OpenGL ES Application solution includes three library projects in the Libraries folder, one for each platform and the other one for shared C++ code, and two application projects for Android and iOS respectively.

    [Screenshot: the OpenGL ES solution in Solution Explorer]

    Now let’s run this app on both Android and iOS.

    Build and run the app on Android

    The solution created by the template sets the Android app as the default project. Just as with the Android Native Activity app we discussed earlier, in the Solution Platforms dropdown, select the architecture that matches the Android emulator or device that you’re using, and then press F5 to run the app. The OpenGL ES app should build and run successfully, and you will see a colored 3D spinning cube.

    Build and run the app on iOS

    The iOS project created in the solution can be edited in Visual Studio, but because of licensing restrictions, it must be built and deployed from a Mac. Visual Studio communicates with a remote agent running on the Mac to transfer project files and execute build, deployment, and debugging commands. You can set up your Mac by following the instructions in Install and Configure Tools to Build Using iOS.

    Once the remote agent is running on the Mac and Visual Studio is paired to it, we can build and run the iOS app. In the Solution Platforms dropdown in Visual Studio, choose the right architecture for the iOS simulator (x86) or the iOS device. In Solution Explorer, open the context menu for the OpenGLESApp1.iOS.Application project and choose Build. Then choose iOS Simulator on the toolbar to run the app in the iOS Simulator on your Mac. You should see the same colored 3D spinning cube in the iOS Simulator.

    [Screenshot: the app running in the iOS Simulator]

    This article Build an OpenGL ES Application on Android and iOS includes more details about the OpenGLES project.

    Visual Studio to target all mobile platforms

    If you’re building an app to target multiple mobile platforms (Android, iOS, UWP) and wish to share the common code in C++, you can achieve this by having one single Visual Studio solution and leverage the same code-authoring and debugging experience all in the same IDE. With Visual Studio, you can easily share and re-use your existing C++ libraries through the shared project component to target multiple platforms. The following screenshot shows a single solution with 4 projects, one for each mobile platform and one shared project for common C++ code.

    7-shared-project

To learn more, please refer to how Halfbrick, makers of the popular mobile games Fruit Ninja and Jetpack Joyride, uses Visual Studio for a C++ cross-platform mobile development experience.

    Write cross-platform C++ code with the full power of Visual Studio IDE

    With Visual Studio, you can write cross-platform C++ code using the same powerful IntelliSense and code navigation features, making code writing much more efficient. These editing capabilities not only light up in the common code, but are context-aware of the target platform when you write platform-specific code.

    Member list and Quick Info, as shown in the following screenshot, are just two examples of the IntelliSense features Visual Studio offers. Member list shows you a list of valid members from a type or namespace. Typing in “->” following an object instance in the C++ code will display a list of members, and you can insert the selected member into your code by pressing TAB, or by typing a space or a period. Quick Info displays the complete declaration for any identifier in your code. IntelliSense is implemented based on the Clang toolchain when targeting the Android platform. In the following screenshot, Visual Studio is showing a list of the available Android-specific functions when the Android Native Activity project is active.

    8-editing

Auto-complete, squiggles, reference highlighting, syntax colorization, and code snippets are some of the other useful productivity features that provide great assistance in writing and editing code.

    Navigating in large codebases and jumping between multiple code files can be a tiring task. Visual Studio offers many great code navigation features, including Go To Definition, Go To Line/Symbols/Members/Types, Find All References, View Call Hierarchy, Object Browser, and many more, to boost your productivity.

The Peek Definition feature, as shown in the following screenshot, brings the definition into the current code file, allowing you to view and edit code without switching away from the code that you’re writing. You can invoke Peek Definition from the right-click context menu, or with the Alt+F12 shortcut, for a method that you want to explore. In the example in the screenshot, Visual Studio brings the definition of the __android_log_print method, which is defined in the Android SDK log.h file, into an embedded window in the current cpp file, making reading and writing Android code more efficient.

    9-peek-definition

    Debug C++ code with the world-class Visual Studio debugger

Troubleshooting issues in the code can be time-consuming. Use the Visual Studio debugger to help find and fix issues faster. Set breakpoints in your Android C++ code and press F5 to launch the debugger. When the breakpoint is hit, you can watch the value of variables and complex expressions in the Autos and Watch windows as well as in the data tips on mouse hover, view the call stack in the Call Stack window, and step into and out of functions easily. In the example in the screenshot below, the Autos window shows values changing in the Android sensorManager and accelerometerSensor objects.

    10-debugging

The Android debugging experience in Visual Studio also supports debugging pre-built Android applications created with other IDEs, other basic debugger capabilities (tracepoints and conditional breakpoints), and advanced features such as debugger visualizations (Natvis support) and attaching to a running Android application.

    You can find more information about the C++ debugger in this blog post C++ Debugging and Diagnostics.

    Java debugging and language support for Android

Whether you’re writing Java or C++ code in your Android apps, Visual Studio has it covered. Visual Studio includes a Java debugger that enables debugging Java source files in your Android projects, and with the Visual Studio Java Language Service for Android extension, you can also take advantage of the IntelliSense and browsing capabilities for Java files in the Visual Studio IDE.

    Editing Java code

First, install the Visual Studio Java Language Service for Android extension. It provides colorization (both syntactic and semantic), error and warning squiggles, as well as code outlining and semantic highlighting in your Java files. You also get IntelliSense assistance, such as Member List, Parameter Help, and Quick Info, making writing Java code more efficient. In the following screenshot, Visual Studio provides a member list for the android.util.Log class.

    11-java-editing

Another handy feature for larger codebases, or for navigating third-party libraries for which you have the source code available, is Go To Definition (F12), which takes you to the symbol’s definition location if it is available.

    Debugging Java code

To turn on Java debugging for your Android projects in your next debugging session, change the Debug Type dropdown in the Debug Target toolbar to “Java Only”, as shown in the following screenshot.

    12-java-setting

Now you can set line breakpoints, including conditions or hit counts for the breakpoints, anywhere in the Java code. When a breakpoint is hit, you can view variables in the Locals and Autos windows, see the call stack in the Call Stack window, and check log output in the Logcat window.

    13-java-debugging

The blog post Java debugging and language support in Visual Studio for Android has more details on this topic.

    Build Xamarin Android Native Applications

Xamarin is a popular cross-platform solution for creating rich native apps using C# across mobile platforms while maximizing code reuse. With Xamarin, you can create apps with native user interfaces and get native performance on each mobile platform. If you want to leverage Xamarin for writing user interfaces in C# while re-using your existing C/C++ libraries, Visual Studio fully supports building and debugging Xamarin Android apps that reference C++ code. The blog post Developing Xamarin Android Native Applications describes this scenario in more detail.

    Referencing C++ libraries in Xamarin iOS apps can be achieved by following this blog post Calling C/C++ libraries from Xamarin code.
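Under the covers, both scenarios ultimately rely on .NET’s P/Invoke mechanism to reach into native code. As a rough illustration (the library name and exported function below are hypothetical, not taken from the posts linked above), a Xamarin C# project might declare a binding to a native C/C++ function like this:

```csharp
using System.Runtime.InteropServices;

public static class FastMath
{
    // Hypothetical native library ("libfastmath.so" on Android) built from your shared C/C++ code.
    // The exported C function is assumed to have the signature: int add(int a, int b);
    [DllImport("fastmath", EntryPoint = "add")]
    public static extern int Add(int a, int b);
}

// Example usage from the C# app code:
// int sum = FastMath.Add(2, 3);
```

The native library itself is built from the shared C++ sources and packaged with the app; the linked posts walk through the platform-specific packaging details.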

    Try out Visual Studio 2017 for mobile development with C++

    Download Visual Studio 2017, try it out and share your feedback. For problems, let us know via the Report a Problem option in the upper right corner of the VS title bar. Track your feedback on the developer community portal. For suggestions, let us know through UserVoice.

    ASP.NET – Overposting/Mass Assignment Model Binding Security


This little post is just a reminder that while Model Binding in ASP.NET is very cool, you should be aware of the properties (and semantics of those properties) that your object has, and whether or not your HTML form includes all your properties, or omits some.

    OK, that's a complex - and perhaps poorly written - sentence. Let me back up.

    Let's say you have this horrible class. Relax, yes, it's horrible. It's an example. It'll make sense in a moment.

public class Person
{
    public int ID { get; set; }
    public string First { get; set; }
    public string Last { get; set; }
    public bool IsAdmin { get; set; }
}

    Then you've got an HTML Form in your view that lets folks create a Person. That form has text boxes/fields for First, and Last. ID is handled by the database on creation, and IsAdmin is a property that the user doesn't need to know about. Whatever. It's secret and internal. It could be Comment.IsApproved or Product.Discount. You get the idea.

    Then you have a PeopleController that takes in a Person via a POST:

[HttpPost]
[ValidateAntiForgeryToken]
public async Task<IActionResult> Create(Person person)
{
    if (ModelState.IsValid)
    {
        _context.Add(person);
        await _context.SaveChangesAsync();
        return RedirectToAction("Index");
    }
    return View(person);
}

    If a theoretical EvilUser found out that Person had an "IsAdmin" property, they could "overpost" and add a field to the HTTP POST and set IsAdmin=true. There's nothing in the code here to prevent that. ModelBinding makes your code simpler by handling the "left side -> right side" boring code of the past. That was all that code where you did myObject.Prop = Request.Form["something"]. You had lines and lines of code digging around in the QueryString or Form POST.

    Model Binding gets rid of that and looks at the properties of the object and lines them up with HTTP Form POST name/value pairs of the same names.
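To make that concrete, here is a simplified sketch of the difference (illustrative only, not the framework’s actual implementation):

```csharp
// The old, hand-written "left side -> right side" code:
var person = new Person();
person.First = Request.Form["First"];
person.Last = Request.Form["Last"];

// What the model binder effectively does for you: for every public, settable
// property on Person, look for a posted field with the same name and copy the
// value across (converting types as needed). That convenience is exactly why a
// posted IsAdmin=true field gets picked up too, unless you prevent it.
```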

    NOTE: Just a friendly reminder that none of this "magic" is magic or is secret. You can even write your own custom model binders if you like.

    The point here is that folks need to be aware of the layers of abstraction when you use them. Yes, it's convenient, but it's hiding something from you, so you should know the side effects.

    How do we fix the problem? Well, a few ways. You can mark the property as [ReadOnly]. More commonly, you can use a BindAttribute on the method parameters and just include (whitelist) the properties you want to allow for binding:

    public async Task<IActionResult> Create([Bind("First,Last")] Person person)

    Or, the correct answer. Don't let models that look like this get anywhere near the user. This is the case for ViewModels. Make a model that looks like the View. Then do the work. You can make the work easier with something like AutoMapper.
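As a rough sketch of that approach (the ViewModel class and mapping below are illustrative, not code from this post):

```csharp
// Expose only what the HTML form actually edits.
public class PersonEditViewModel
{
    public string First { get; set; }
    public string Last { get; set; }
}

[HttpPost]
[ValidateAntiForgeryToken]
public async Task<IActionResult> Create(PersonEditViewModel model)
{
    if (ModelState.IsValid)
    {
        // Map only the safe fields onto the entity; IsAdmin can never be bound from the form.
        var person = new Person { First = model.First, Last = model.Last };
        _context.Add(person);
        await _context.SaveChangesAsync();
        return RedirectToAction("Index");
    }
    return View(model);
}
```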

    Some folks find ViewModels to be too cumbersome for basic stuff. That's valid. There are those that are "All ViewModels All The Time," but I'm more practical. Use what works, use what's appropriate, but know what's happening underneath so you don't get some scriptkiddie overposting to your app and a bit getting flipped in your Model as a side effect.

    Use ViewModels when possible or reasonable, and when not, always whitelist your binding if the model doesn't line up one to one (1:1) with your HTML Form.

    What are your thoughts?


    Sponsor: Check out JetBrains Rider: a new cross-platform .NET IDE. Edit, refactor, test, build and debug ASP.NET, .NET Framework, .NET Core, or Unity applications. Learn more and get access to early builds!



    © 2017 Scott Hanselman. All rights reserved.
         

    Announcing Azure Analysis Services general availability


    Today at the Data Amp event, we are announcing the general availability of Microsoft Azure Analysis Services, the latest addition to our data platform in the cloud. Based on the proven analytics engine in SQL Server Analysis Services, Azure Analysis Services is an enterprise grade OLAP engine and BI modeling platform, offered as a fully managed platform-as-a-service (PaaS). Azure Analysis Services enables developers and BI professionals to create BI Semantic Models that can power highly interactive and rich analytical experiences in BI tools and custom applications.

    Why Azure Analysis Services?

    The success of any modern data-driven organization requires that information is available at the fingertips of every business user, not just IT professionals and data scientists, to guide their day-to-day decisions. Self-service BI tools have made huge strides in making data accessible to business users. However, most business users don’t have the expertise or desire to do the heavy lifting that is typically required - finding the right sources of data, importing the raw data, transforming it into the right shape, and adding business logic and metrics - before they can explore the data to derive insights. With Azure Analysis Services, a BI professional can create a semantic model over the raw data and share it with business users so that all they need to do is connect to the model from any BI tool and immediately explore the data and gain insights. Azure Analysis Services uses a highly optimized in-memory engine to provide responses to user queries at the speed of thought.

    Integrated with the Azure data platform

    Azure Analysis Services is the latest addition to the Azure data platform. It integrates with many Azure data services enabling customers to build sophisticated analytics solutions.

    • Azure Analysis Services can consume data from Azure SQL Database and Azure SQL Data Warehouse. Customers can build enterprise data warehouse solutions in Azure using a hub-and-spoke model, with the SQL data warehouse at the center and multiple BI models around it targeting different business groups or subject areas.
    • With more and more customers adopting Azure Data Lake and HDInsight, Azure Analysis Services will soon offer the ability to build BI models on top of these big data platforms, enabling a similar hub-and-spoke model as with Azure SQL Data Warehouse.
    • In addition to the above, Azure Analysis Services can also consume data from on-premises data stores such as SQL Server, Oracle, and Teradata. We are working on adding support for several more data sources, both cloud and on-premises.
    • Azure Data Factory is a data integration service that orchestrates the movement and transformation of data, a core capability in any enterprise BI/analytics solution. Azure Analysis Services can be integrated into any Azure Data Factory pipeline by including an activity that loads data into the model. Azure Automation and Azure Functions can also be used for doing lightweight orchestration of models using custom code.
    • Power BI and Excel are industry leading data exploration and visualization tools for business users. Both can connect to Azure Analysis Services models and offer a rich interactive experience. In addition, third party BI tools such as Tableau are also supported.

    AzureAS

    How are customers using Azure Analysis Services?

    Since we launched the public preview of Azure Analysis Services last October, thousands of developers have been using it to build BI solutions. We want to thank all our preview customers for trying out the product and giving us valuable feedback. Based on this feedback, we have made several quality, reliability, and performance improvements to the service. In addition, we introduced Scale Up & Down and Backup & Restore to allow customers to better manage their BI solutions. We also introduced the B1, B2, and S0 tiers to offer customers more pricing flexibility.

    Following are some customers and partners that have built compelling BI solutions using Azure Analysis Services.

• Milliman is one of the world's largest providers of actuarial and related products and services. They built a revolutionary, industry-first financial modeling product called Integrate, using Azure to run highly complex and mission-critical computing tasks in the cloud.

    “Once the complex data movement and transformation processing is complete, the resulting data is used to populate a BI semantic model within Azure Analysis Services, that is easy to use and understand. Power BI allows users to quickly create and share data through interactive dashboards and reports, providing a rich immersive experience for users to visualize and analyze data in one place, simply and intuitively. The combination of Power BI and Azure Analysis Services enables users of varying skills and backgrounds to be able to deliver to the ever-growing BI demands needed to run their business and collaborate on mission critical information on any device.”

    Paul Maher, Principal and CTO, Milliman Life Technology Solutions

    “Another great use case for Azure Analysis Services is leveraging its powerful modeling capabilities to bring together numerous disparate corporate data sources. An initiative at Milliman is currently in design leveraging various Finance data sets in order to create a broader scope and more granular access to critical business information. Providing a cohesive and simple-to-access data source for all levels of users gives the business leaders a new tool – whether they use Excel or Power BI for their business analytics.”

    Andreas Braendle, CIO, Milliman

    • Contidis is a company in Angola that is building the new Candando supermarket chain. They created a comprehensive BI solution using Power BI and Azure Analysis Services to help their employees deliver better customer service, uncover fraud, spot inventory errors, and analyze the effectiveness of store promotions.

    “Since we implemented our Power BI solution with Azure Analysis Services and Azure SQL Data Warehouse, we’ve realized a big improvement in business insight and efficiency. Our continued growth is due to many factors, and Power BI with Azure Analysis Services is one of them.”

    Renato Correia, Head of IT and Innovation, Contidis

    • DevScope is a Microsoft worldwide partner who is helping customers build solutions using Azure Analysis Services.

“One of the great advantages of using Azure Analysis Services and Power BI is that it gives us the flexibility to start small and scale up only as fast as we need to, paying only for the services we use. We also have a very dynamic security model with Azure Analysis Services and Azure Active Directory, and in addition to providing row-level security, we use Analysis Services to monitor report usage and send automated alerts if someone accesses a report or data record that they shouldn’t."

    Rui Romano, BI Team Manager, DevScope

    Azure Analysis Services is now generally available in 14 regions across the globe: Southeast Asia, Australia Southeast, Brazil South, Canada Central, North Europe, West Europe, West India, Japan East, UK South, East US 2, North Central US, South Central US, West US, and West Central US. We will continue to add regions based on customer demand, including government and national clouds.

    Please use the following resources to learn more about Azure Analysis Services, get your questions answered, and give us feedback and suggestions about the product.

    Join us at the Data Insights Summit (June 12-13, 2017) or at one of the user group meetings where you can hear directly from our engineers and product managers.

    Microsoft Cognitive Services – General availability for Face API, Computer Vision API and Content Moderator


    This post was authored by the Cognitive Services Team​.

    Microsoft Cognitive Services enables developers to create the next generation of applications that can see, hear, speak, understand, and interpret needs using natural methods of communication. We have made adding intelligent features to your platforms easier.

    Today, at the first ever Microsoft Data Amp online event, we’re excited to announce the general availability of Face API, Computer Vision API and Content Moderator API from Microsoft Cognitive Services.

    • Face API detects human faces and compares similar ones, organizes people into groups according to visual similarity, and identifies previously tagged people and their emotions in images.
    • Computer Vision API gives you the tools to understand the contents of any image. It creates tags that identify objects, beings like celebrities, or actions in an image, and crafts coherent sentences to describe it. You can now detect landmarks and handwriting in images. Handwriting detection remains in preview.
    • Content Moderator provides machine assisted moderation of text and images, augmented with human review tools. Video moderation is available in preview as part of Azure Media Services.

    Let’s take a closer look at what these APIs can do for you.

    Bring vision to your app

Previously, users of Face API could obtain attributes such as age, gender, facial points, and head pose. Now, it’s also possible to obtain emotions in the same Face API call, which addresses scenarios where, for example, both age and emotion are requested at the same time. Learn more about Face API in our guides.
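For instance, a detection call that requests several attributes at once might look roughly like the following sketch (the endpoint and parameter names follow the Face API v1.0 pattern of the samples later in this post; treat them as assumptions and confirm against the Face API reference):

```csharp
using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

static class FaceDetectionSample
{
    // Sketch only: requests age, gender, and emotion attributes in a single call.
    static async Task DetectAsync(string imageFilePath)
    {
        var client = new HttpClient();
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "putyourkeyhere");

        // Assumed endpoint and parameters; verify against the Face API documentation.
        string uri = "https://westus.api.cognitive.microsoft.com/face/v1.0/detect"
                   + "?returnFaceAttributes=age,gender,emotion";

        var content = new ByteArrayContent(File.ReadAllBytes(imageFilePath));
        content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");

        HttpResponseMessage response = await client.PostAsync(uri, content);
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```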

    Recognizing landmarks

    We’ve added more richness to Computer Vision API by integrating landmark recognition. Landmark models, as well as Celebrity Recognition, are examples of Domain Specific Models. Our landmark recognition model recognizes 9,000 natural and man-made landmarks from around the world. Domain Specific Models is a continuously evolving feature within Computer Vision API.

    Let’s say I want my app to recognize this picture I took while traveling:

    Landmark image

You might have an idea of where this picture was taken, but how could a machine easily know?

In C#, we can leverage these capabilities by making a simple REST API call like the following. (By the way, samples in other languages are at the bottom of this post.)

    using System;
    using System.IO;
    using System.Net.Http;
    using System.Net.Http.Headers;
    
    namespace CSHttpClientSample
    {
        static class Program
        {
            static void Main()
            {
                Console.Write("Enter image file path: ");
                string imageFilePath = Console.ReadLine();
    
                MakeAnalysisRequest(imageFilePath);
    
                Console.WriteLine("nnHit ENTER to exit...n");
                Console.ReadLine();
            }
    
            static byte[] GetImageAsByteArray(string imageFilePath)
            {
                FileStream fileStream = new FileStream(imageFilePath, FileMode.Open, FileAccess.Read);
                BinaryReader binaryReader = new BinaryReader(fileStream);
                return binaryReader.ReadBytes((int)fileStream.Length);
            }
    
            static async void MakeAnalysisRequest(string imageFilePath)
            {
                var client = new HttpClient();
    
                // Request headers. Replace the second parameter with a valid subscription key.
                client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "putyourkeyhere");
    
                // Request parameters. You can change "landmarks" to "celebrities" on requestParameters and uri to use the Celebrities model.
                string requestParameters = "model=landmarks";
                string uri = "https://westus.api.cognitive.microsoft.com/vision/v1.0/models/landmarks/analyze?" + requestParameters;
                Console.WriteLine(uri);
    
                HttpResponseMessage response;
    
                // Request body. Try this sample with a locally stored JPEG image.
                byte[] byteData = GetImageAsByteArray(imageFilePath);
    
                using (var content = new ByteArrayContent(byteData))
                {
                    // This example uses content type "application/octet-stream".
                    // The other content types you can use are "application/json" and "multipart/form-data".
                    content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
                    response = await client.PostAsync(uri, content);
                    string contentString = await response.Content.ReadAsStringAsync();
                    Console.WriteLine("Response:n");
                    Console.WriteLine(contentString);
                }
            }
        }
    }
    

The successful response, returned in JSON, would be the following:

    ```json
    {
      "requestId": "b15f13a4-77d9-4fab-a701-7ad65bcdcaed",
      "metadata": {
        "width": 1024,
        "height": 680,
        "format": "Jpeg"
      },
      "result": {
        "landmarks": [
          {
            "name": "Colosseum",
            "confidence": 0.9448209
          }
        ]
      }
    }
    ```
    

    Recognizing handwriting

    Handwriting OCR is also available in preview in Computer Vision API. This feature detects text in a handwritten image and extracts the recognized characters into a machine-usable character stream.
    It detects and extracts handwritten text from notes, letters, essays, whiteboards, forms, etc. It works with different surfaces and backgrounds such as white paper, sticky notes, and whiteboards. No need to transcribe those handwritten notes anymore; you can snap an image instead and use Handwriting OCR to digitize your notes, saving time, effort, and paper clutter. You can even decide to do a quick search when you want to pull the notes up again.

    You can try this out yourself by uploading your sample in the interactive demonstration.

Let’s say that I want to recognize the handwriting on the whiteboard:

    Whiteboard image

An inspirational quote I’d like to keep.

    In C#, I would use the following:

    using System;
    using System.IO;
    using System.Collections;
    using System.Collections.Generic;
    using System.Net.Http;
    using System.Net.Http.Headers;
    
    namespace CSHttpClientSample
    {
        static class Program
        {
            static void Main()
            {
                Console.Write("Enter image file path: ");
                string imageFilePath = Console.ReadLine();
    
                ReadHandwrittenText(imageFilePath);
    
                Console.WriteLine("nnnHit ENTER to exit...");
                Console.ReadLine();
            }
    
            static byte[] GetImageAsByteArray(string imageFilePath)
            {
                FileStream fileStream = new FileStream(imageFilePath, FileMode.Open, FileAccess.Read);
                BinaryReader binaryReader = new BinaryReader(fileStream);
                return binaryReader.ReadBytes((int)fileStream.Length);
            }
    
            static async void ReadHandwrittenText(string imageFilePath)
            {
                var client = new HttpClient();
    
                // Request headers - replace this example key with your valid subscription key.
                client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "putyourkeyhere");
    
                // Request parameters and URI. Set "handwriting" to false for printed text.
                string requestParameter = "handwriting=true";
                string uri = "https://westus.api.cognitive.microsoft.com/vision/v1.0/recognizeText?" + requestParameter;
    
                HttpResponseMessage response = null;
                IEnumerable<string> responseValues = null;
                string operationLocation = null;
    
                // Request body. Try this sample with a locally stored JPEG image.
                byte[] byteData = GetImageAsByteArray(imageFilePath);
                var content = new ByteArrayContent(byteData);
    
                // This example uses content type "application/octet-stream".
                // You can also use "application/json" and specify an image URL.
                content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
    
                try {
                    response = await client.PostAsync(uri, content);
                    responseValues = response.Headers.GetValues("Operation-Location");
                }
                catch (Exception e)
                {
                    Console.WriteLine(e.Message);
                }
    
                foreach (var value in responseValues)
                {
                    // This value is the URI where you can get the text recognition operation result.
                    operationLocation = value;
                    Console.WriteLine(operationLocation);
                    break;
                }
    
                try
                {
                    // Note: The response may not be immediately available. Handwriting recognition is an
                    // async operation that can take a variable amount of time depending on the length
                    // of the text you want to recognize. You may need to wait or retry this operation.
                    response = await client.GetAsync(operationLocation);
    
                    // And now you can see the response in JSON:
                    Console.WriteLine(await response.Content.ReadAsStringAsync());
                }
                catch (Exception e)
                {
                    Console.WriteLine(e.Message);
                }
            }
        }
    }
    

Upon success, the OCR results returned include the text and bounding boxes for regions, lines, and words, as shown in the following JSON:

```json
{
      "status": "Succeeded",
      "recognitionResult": {
        "lines": [
          {
            "boundingBox": [
              542,
              724,
              1404,
              722,
              1406,
              819,
              544,
              820
            ],
            "text": "You must be the change",
            "words": [
              {
                "boundingBox": [
                  535,
                  725,
                  678,
                  721,
                  698,
                  841,
                  555,
                  845
                ],
                "text": "You"
              },
              {
                "boundingBox": [
                  713,
                  720,
                  886,
                  715,
                  906,
                  835,
                  734,
                  840
                ],
                "text": "must"
              },
              {
                "boundingBox": [
                  891,
                  715,
                  982,
                  713,
                  1002,
                  833,
                  911,
                  835
                ],
                "text": "be"
              },
              {
                "boundingBox": [
                  1002,
                  712,
                  1129,
                  708,
                  1149,
                  829,
                  1022,
                  832
                ],
                "text": "the"
              },
              {
                "boundingBox": [
                  1159,
                  708,
                  1427,
                  700,
                  1448,
                  820,
                  1179,
                  828
                ],
                "text": "change"
              }
            ]
          },
          {
            "boundingBox": [
              667,
              905,
              1766,
              868,
              1771,
              976,
              672,
              1015
            ],
            "text": "you want to see in the world !",
            "words": [
              {
                "boundingBox": [
                  665,
                  901,
                  758,
                  899,
                  768,
                  1015,
                  675,
                  1017
                ],
                "text": "you"
              },
              {
                "boundingBox": [
                  752,
                  900,
                  941,
                  896,
                  951,
                  1012,
                  762,
                  1015
                ],
                "text": "want"
              },
              {
                "boundingBox": [
                  960,
                  896,
                  1058,
                  895,
                  1068,
                  1010,
                  970,
                  1012
                ],
                "text": "to"
              },
              {
                "boundingBox": [
                  1077,
                  894,
                  1227,
                  892,
                  1237,
                  1007,
                  1087,
                  1010
                ],
                "text": "see"
              },
              {
                "boundingBox": [
                  1253,
                  891,
                  1338,
                  890,
                  1348,
                  1006,
                  1263,
                  1007
                ],
                "text": "in"
              },
              {
                "boundingBox": [
                  1344,
                  890,
                  1488,
                  887,
                  1498,
                  1003,
                  1354,
                  1005
                ],
                "text": "the"
              },
              {
                "boundingBox": [
                  1494,
                  887,
                  1755,
                  883,
                  1765,
                  999,
                  1504,
                  1003
                ],
                "text": "world"
              },
              {
                "boundingBox": [
                  1735,
                  883,
                  1813,
                  882,
                  1823,
                  998,
                  1745,
                  999
                ],
                "text": "!"
              }
            ]
          }
        ]
      }
}
```

    To easily get started in your preferred language, please refer to the following:

    For more information about our use cases, don’t hesitate to take a look at our customer stories, including a great use of our Vision APIs with GrayMeta.

    Happy coding!


    Introducing H2O.ai on Azure HDInsight


We are excited to announce that H2O’s AI platform is now available on the Azure HDInsight Application Platform. Users can now run H2O.ai’s open source solutions on Azure HDInsight, which provides reliable open source analytics backed by an industry-leading SLA.

To learn more about the H2O integration with HDInsight, register for the webinar held by the H2O and Azure HDInsight teams.

    HDInsight and H2O to make data science on big data easier

    Azure HDInsight is the only fully-managed cloud Hadoop offering that provides optimized open source analytical clusters for Spark, Hive, MapReduce, HBase, Storm, Kafka, and R Server backed by a 99.9% SLA. Each of these big data technologies and ISV applications, such as H2O, are easily deployable as managed clusters with enterprise-level security and monitoring.

The data science ecosystem has grown rapidly in the last few years, and H2O’s AI platform provides an open source machine learning framework that works with Spark, sparklyr, and PySpark. H2O’s Sparkling Water allows users to combine the fast, scalable machine learning algorithms of H2O with the capabilities of Spark. With Sparkling Water, users can drive computation from Scala/R/Python and utilize the H2O Flow UI, providing an ideal machine learning platform for application developers.

Setting up an environment to perform advanced analytics on top of big data is hard, but with H2O Artificial Intelligence for HDInsight, customers can get started with just a few clicks. This solution installs Sparkling Water on an HDInsight Spark cluster so you can exploit all the benefits of both Spark and H2O. The solution can access data from Azure Blob storage and/or Azure Data Lake Store in addition to all the standard data sources that H2O supports. It also provides Jupyter Notebooks with built-in examples for an easy jumpstart, and a user-friendly H2O Flow UI to monitor and debug applications.

    Getting started

With the industry-leading Azure cloud platform, getting started with H2O on HDInsight takes just a few clicks. Customers can install H2O during the creation of a new HDInsight cluster by simply selecting “H2O Artificial Intelligence for HDInsight” from the list of applications when creating the cluster and agreeing to the license terms.


Customers can also deploy H2O on an existing HDInsight Spark cluster by clicking the “Applications” link:


Sparkling Water integrates H2O’s fast, scalable machine learning engine with Spark. It provides utilities to publish Spark data structures (RDDs, DataFrames) as H2O frames and vice versa, as well as a Python interface that enables the use of Sparkling Water directly from PySpark, among other capabilities. The architecture for H2O on HDInsight is shown below:


After installing H2O on HDInsight, you can use Jupyter notebooks, which are built into Spark clusters, to write your first H2O on HDInsight applications. Simply open Jupyter Notebook and you will see a folder named “H2O-PySparkling-Examples”, which contains a few getting-started examples.


H2O Flow is an interactive, web-based computational user interface where you can combine code execution, text, mathematics, plots, and rich media into a single document. It provides a richer visualization experience for machine learning models, with native support for hyperparameter tuning, ROC curves, and more.

    H2O Flow

    Together with this combined offering of H2O on HDInsight, customers can easily build data science solutions and run them at enterprise grade and scale. Azure HDInsight provides the tools for a user to create a Data Science environment with underlying big data frameworks like Hadoop and Spark, while H2O’s technology brings a set of sophisticated, fully distributed algorithms to rapidly build and deploy highly accurate models at scale.

H2O.ai is now available in the Microsoft Azure Marketplace and as an HDInsight application. For more technical details, please refer to the H2O documentation and this technical blog post on the HDInsight blog.

    Resources

    Summary

    We are pleased to announce the expansion of HDInsight Application Platform to include H2O.ai. By deploying H2O on HDInsight, customers can easily build analytical solutions and run them at enterprise grade and scale.

    Modernizing the DOM tree in Microsoft Edge


    The DOM is the foundation of the web platform programming model, and its design and performance impacts the rest of the browser pipeline. However, its history and evolution is far from a simple story.

    What we think of as “the DOM” is really the cooperation of several subsystems, such as JS binding, events, editing, spellchecking, HTML attributes, CSSOM, text, and others, all working together. Of these subsystems, the DOM “tree” is at the center.

    A diagram of the web platform pipeline, with the DOM Tree and cooperating components (CSS Cascade, DOM API & Capabilities, and Chakra JavaScript) highlighted.

    A diagram of the web platform pipeline. This post focuses on the DOM tree and cooperating components.

    Several years ago, we began a long journey to update to a modern DOM “tree” (node connectivity structures). By modernizing the core tree, which we completed in Microsoft Edge 14, we landed a new baseline and the scaffolding to deliver on our promise of a fast and reliable DOM. With Windows 10 Creators Update and Microsoft Edge 15, the journey we started is beginning to bear fruit.

    Circular tree map showing "DOM Tree" at the center, surrounded by "JS Binding," "Editing," "Spellcheck," "Events," and "Attributes."

    “The DOM” is really the cooperation of several subsystems that make up the web programming model.

    We’re just scratching the surface, but want to take this opportunity to geek out a bit, and share some of the internal details of this journey, starting with the DOM’s arcane history and showcasing some of our accomplishments along the way.

    The history of the Internet Explorer DOM tree

    When web developers today think of the DOM, they usually think of a tree that looks something like this:

    A diagram of a simple tree

    A simple tree

    However nice and simple (and obvious) this seems, the reality of Internet Explorer’s DOM implementation was much more complicated.

    Simply put, Internet Explorer’s DOM was designed for the web of the 90s. When the original data structures were designed, the web was primarily a document viewer (with a few animated GIFs and other images thrown in). As such, algorithms and data structures more closely resembled those you might see powering a document viewer like Microsoft Word. Recall in the early days of the web that there was no JavaScript to allow scripting a web page, so the DOM tree as we know it didn’t exist. Text was king, and the DOM’s internals were designed around fast, efficient text storage and manipulation. Content editing (WYSIWYG) was already a feature at the time, and the manipulation paradigm centered around the editing cursor for character insertion and limited formatting.

    A text-centric design

As a result of its text-centric design, the principal structure of the DOM was the text backing store, a complex system of text arrays that could be efficiently split and joined with minimal or no memory allocations. The backing store represented both text and tags as a linear progression, addressable by a global index or Character Position (CP). Inserting text at a given CP was highly efficient and copy/pasting a range of text was centrally handled by an efficient “splice” operation. The figure below visually illustrates how a simple markup containing “hello world” was loaded into the text backing store, and how CPs were assigned for each character and tag.

    Diagram of the text backing store, with special positional placeholders for non-text entities such as tags and the insertion point.

    The text backing store, with special positional placeholders for non-text entities such as tags and the insertion point.

    To store non-textual data (e.g. formatting and grouping information), another set of objects was separately maintained from the backing store: a doubly-linked list of tree positions (TreePos objects). TreePos objects were the semantic equivalent of tags in HTML source markup – each logical element was represented by a begin and end TreePos. This linear structure made it very fast to traverse the entire DOM “tree” in depth-first pre-order traversal (as required for nearly every DOM search API and CSS/Layout algorithm). Later, we extended the TreePos object to include two other kinds of “positions”: TreeDataPos (for indicating a placeholder for text) and PointerPos (for indicating things like the caret, range boundary points, and eventually for “new” features like generated content nodes).

    Each TreePos object also included a CP object, which acted as the tag’s global ordinal index (useful for things like the legacy document.all API). CPs were used to get from a TreePos into the text backing store, easily compare node order, or even find the length of text by subtracting CP indices.

    To tie it all together, a TreeNode bound pairs of tree positions together and established the “tree” hierarchy expected by the JavaScript DOM as illustrated below.

    Diagram showing the dual representation of the DOM as both text and (possibly overlapping) nodes

    The dual representation of the DOM as both text and (possibly overlapping) nodes

    Adding layers of complexity

The foundation of CPs caused much of the complexity of the old DOM. For the whole system to work properly, CPs had to be up-to-date. Thus, CPs were updated after every DOM manipulation (e.g. entering text, copy/paste, DOM API manipulations, even clicking on the page, which sets an insertion point in the DOM). Initially, DOM manipulations were driven primarily by the HTML parser, or by user actions, and the CPs-always-up-to-date model was perfectly rational. But with the rise of JavaScript and DHTML, these operations became much more common and frequent.

    To compensate, new structures were added to make these updates efficient, and the splay tree was born, adding an overlapping series of tree connections onto TreePos objects. The added complexity helped with performance—at first; global CP updates could be achieved with O(log n) speed. Yet, a splay tree is really only optimized for repeated local searches (e.g., for changes centered around one place in the DOM tree), and did not prove to be a consistent benefit for JavaScript and its more random-access patterns.

    Another design phenomenon was that the previously-mentioned “splice” operations that handled copy/paste, were extended to handle all tree mutations. The core “splice engine” worked in three steps, as illustrated in the figure below.

    Timeline diagram of the splice engine algorithm

    The splice engine algorithm

    In step 1, the engine would “record” the splice by traversing the tree positions from the start of the operation to the end. A splice record was then created containing command instructions for this action (a structure re-used in the browser’s Undo stack). In step 2, all nodes (i.e., TreeNode and TreePos objects) associated with the operation were deleted from the tree. Note that in the IE DOM tree, TreeNode/TreePos objects were distinct from the script-referenced Element objects to facilitate overlapping tags, so deleting them was not a functional problem. Finally, in step 3, the splice record was used to “replay” (re-create) new objects in the target location. For example, to accomplish an appendChild DOM operation, the splice engine created a range around the node (from the TreeNode‘s begin TreePos to its end), “spliced” the range out of the old location, and created new nodes to represent the node and its children in the new location. As you can imagine, this created a lot of memory allocation churn, in addition to the inefficiencies of the algorithm.

    No encapsulation

    These are just a few of the examples of the complexity of the Internet Explorer DOM. To add insult to injury, the old DOM had no encapsulation, so code from the Parser all the way to the Display systems had CP/TreePos dependencies, which required many dev-years to detangle.

    With complexity comes errors, and the DOM code base was a reliability liability. According to an internal investigation, from IE7 to IE11, approximately 28% of all IE reliability bugs originated from code in core DOM components. This complexity also manifested as a tax on agility, as each new HTML5 feature became more expensive to implement as it became harder to retrofit concepts into the existing architecture.

    Modernizing the DOM tree in Microsoft Edge

The launch of Project Spartan created the perfect opportunity to modernize our DOM. Free from platform vestiges like docmodes and conditional comments, we began a massive refactoring effort. Our first, and most critical, target: the DOM’s core tree.

    We knew the old text-centric model was no longer relevant; we needed a DOM tree that actually was a tree internally in order to match the expectations of the modern DOM API. We needed to dismantle the layers of complexity that made it nearly impossible to performance-tune the tree and the other surrounding systems. And finally, we had a strong desire to encapsulate the new tree to avoid creating cross-component dependencies on core data structures. All of this effort would lead to a DOM tree with the right model in place, primed and ready for additional improvements to come.

    To make the transition to the modern DOM as smooth as possible (and to avoid building a new DOM tree in isolation and attempting to drop and stabilize untested code at the end of the project—a.k.a. the very definition of “big bang integration”), we transitioned the existing codebase in-place in three phases. The first phase of the project defined our tree component boundary with corresponding APIs and contracts. We chose to design the APIs as a set of “reader” and “writer” functions that operated on nodes. Instead of APIs that look like this:

    parent.appendChild(child);
    element.nextSibling;

    our APIs looked like this:

    TreeWriter::AppendChild(parent, child);
     TreeReader::GetNextSibling(element);

    This API design discourages callers from thinking about tree objects as actors with their own state. As a result, a tree object is only an identity in the API, allowing for more robust contracts and hiding representational details, which proved useful in phase 3.

    The second phase migrated all code that depended on legacy tree internals to use the newly established component boundary APIs instead. During this migration, the implementation of the tree API would continue to be powered by the legacy structures. This work took the most time and was the least glamorous; it took several dev-years to detangle consumers of the old tree structures and properly encapsulate the tree. Staging the project this way let us release EdgeHTML 12 and 13 with our fully-tested incremental changes, without disrupting the shipping schedule.

    In the third and final phase, with all external code using the new tree component boundary APIs, we began to refactor and replace the core data structures. We consolidated objects (e.g., the separate TreePos, TreeNode, and Element objects), removed the splay tree and splice engine, dropped the concept of PointerPos objects, and removed the text backing storage (to name a few). Finally, we could rid the code of CPs.

    The new tree structure is simple and straightforward; it uses four pointers instead of the usual five to maintain connectivity: parent, first-child, next, and previous sibling (last-child is computed as the parent’s first-child’s previous sibling) and we could hide this last-child optimization behind our TreeReader APIs without changing a single caller. Re-arranging the tree is fast and efficient, and we even saw some improvements in CPU performance on public DOM APIs, which were nice side-effects of the refactoring work.

    Diagram of Microsoft Edge’s new DOM tree structure, showing all four possible pointers

    Microsoft Edge’s new DOM tree structure, showing all four possible pointers.
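To make the pointer layout concrete, here is a rough conceptual sketch of such a node (the real EdgeHTML implementation is C++ and certainly differs in detail; this C# fragment only illustrates the four stored pointers and the computed last-child):

```csharp
// Conceptual sketch only; not EdgeHTML's actual implementation.
class TreeNode
{
    public TreeNode Parent;
    public TreeNode FirstChild;
    public TreeNode NextSibling;
    public TreeNode PreviousSibling;

    // Last-child is not stored: as described above, it is computed as the
    // parent's first child's previous sibling, so no fifth pointer is needed.
    public TreeNode LastChild => FirstChild?.PreviousSibling;
}
```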

    With the new DOM tree, reliability also improved significantly, dropping from 28% of all reliability issues to just around 10%, and at the same time providing secondary benefits of reducing time spent debugging and improving team agility.

    The next steps in the journey

    While this feels like the end of our journey, in fact it’s just the beginning. With our DOM tree APIs in place and powered by a simple tree, we turned our attention to the other subsystems that comprise the DOM, with an eye towards two classes of inefficiencies: inefficient implementations inside the subsystems, and inefficient communication between them.

    Circular tree map showing "DOM Tree" at the center, surrounded by "JS Binding," "Editing," "Spellcheck," "Events," and "Attributes."

    The DOM tree is at the center of many cooperating components that make up the web programming model.

    For example, one of our top slow DOM APIs (even after the DOM tree work) has historically been querySelectorAll. This is a general-purpose search API, and uses the selectors engine to search the DOM for specific elements. Not surprisingly, many searches involve particular element attributes as search criteria (e.g., an element’s id, or one of its class identifiers). As soon as the search code entered the attributes subsystem, it ran into a whole new class of inefficiencies, completely unrelated to those addressed by the new DOM tree.

    For the attributes subsystem, we are simplifying the storage mechanism for element content attributes. In the early days of the web, DOM attributes were primarily directives to the browser about how to display a piece of markup. A great example of this is the colspan attribute:

    <tr>
         <td colspan="2">Total:</td>
         <td>$12.34</td>
     </tr>

    colspan has semantic meaning to the browser and thus has to be parsed. Given that pages weren’t very dynamic back then and attributes were generally treated like enums, IE created an attribute system that was optimized around eager parsing for use in formatting and layout.

    Today’s app patterns, however, heavily use attributes like id, class, and data-*, which are treated less like browser directives and more like generic storage:

    <li id="cart" data-customerid="a8d3f916577aeec" data-market="en-us">
         <b>Total:</b>
         <span class="total">$12.34</span>
     </li>

    Thus, we’re deferring most work beyond the bare minimum necessary to store the string. Additionally, since UI frameworks often encourage repeated CSS classes across elements, we plan to atomize strings to reduce memory usage and improve performance in APIs like querySelector.
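Atomizing (interning) a string means storing each distinct value once and handing out a small identifier for it, so repeated class names cost a single entry and comparisons become cheap identity checks. A rough sketch of the idea (purely illustrative; not EdgeHTML’s implementation) looks like this:

```csharp
using System.Collections.Generic;

// Illustrative atom table: each distinct string is stored once and identified
// by a small integer handle, so comparing two atoms is an integer compare.
class AtomTable
{
    private readonly Dictionary<string, int> _ids = new Dictionary<string, int>();
    private readonly List<string> _strings = new List<string>();

    public int Atomize(string value)
    {
        if (!_ids.TryGetValue(value, out int id))
        {
            id = _strings.Count;
            _strings.Add(value);
            _ids.Add(value, id);
        }
        return id;
    }

    public string Resolve(int atom) => _strings[atom];
}
```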

Though we still have plenty of work planned, with Windows 10 Creators Update, we’re happy to share that we’ve made significant progress!

    Show me the money

Reliably measuring and improving performance is hard, and the pitfalls of benchmarking are well documented. To get the most holistic view of browser performance possible, the Microsoft Edge team uses a combination of user telemetry, controlled measurement of real-world scenarios, and synthetic benchmarks to guide our optimizations.

    Venn Diagram with three labelled circles: "User telemetry," "Synthetic benchmarks," and "Performance Lab"

    User telemetry “paints with a broad brush”, but by definition measures the most impactful work. Below is an example of our build-over-build tracking of the firstChild API across our user base. This data isn’t directly actionable, since it doesn’t provide all the details of the API call (i.e. what shape and size is the DOM tree) needed for performance tuning, but it’s the only direct measurement of the user’s experience and can provide feedback for planning and retrospectives.

    Screen capture showing Build-to-build user performance telemetry for firstChild

    Build-to-build user performance telemetry for firstChild (lower is better)

    We highlighted our Performance lab and the nitty-gritty details of measuring browser performance a while ago, and while the tests themselves and the hardware in the lab has changed since then, the methodology is still relevant. By capturing and replaying real-world user scenarios in complex sites and apps like Bing Maps and Office 365, we’re less likely to overinvest in narrowly applicable optimizations that don’t benefit users. This graph is an example of our reports for a simulated user on Bing Maps. Each data point is a build of the browser, and hovering provides details about the statistical distribution of measurements and links to more information for investigating changes.

    Screen capture showing build to build telemetry from the Performance Lab

    A graph of build-over-build performance of a simulated user on Bing Maps in the Performance Lab

    Our Performance lab’s fundamental responsibility is to provide the repeatability necessary to test and evaluate code changes and implementation options. That repeatability also serves as the platform for synthetic benchmarks.

In the benchmark category, our most exciting improvement is in Speedometer. Speedometer simulates using the TodoMVC app for several popular web frameworks including Ember, Backbone, jQuery, Angular, and React. With the new DOM tree in place and improvements across other browser subsystems, like the Chakra JavaScript engine, the time to run the Speedometer benchmark decreased by 30%; in the Creators Update, our performance focus netted another improvement of 35% (note that Speedometer’s scores are a measure of speed and thus an inverse function of time).

    Chart showing Microsoft Edge scores on the Speedometer benchmark over the past four release. Edge 12: 5.44. Edge 13: 37.83. Edge 14: 53.92. Edge 15: 82.67.

    Release-over-release performance on the Speedometer benchmark (higher is better)

    Of course the most important performance metric is the user’s perception, so while totally unscientific, we’ve been super excited to see others notice our work!

    @toddreifsteck @thejohnjansen 2 more big things I've noticed: 1) faster DOM manipulation and…

    — Bryan Crow (@BryanTheCrow) March 22, 2017

    kudos to Microsoft, latest Edge version is nearly 50% faster on https://t.co/KFX8Y4SpDI (and now crushes Firefox)

    — Jeff Atwood (@codinghorror) March 31, 2017

    Latest Edge also 2x faster for Ember. Finally, some change I can believe in! Good work Edge team. ???? pic.twitter.com/nj6Qld4aTW

    — Jeff Atwood (@codinghorror) March 31, 2017

    We’re not done yet, and we know that Microsoft Edge is not yet the fastest on the Speedometer benchmark. Our score will continue to improve as a side effect of our performance work and we’ll keep the dev community updated on our progress.

    Conclusion

    A fast DOM is critical for today’s web apps and experiences. Windows 10 Creators Update is the first of a series of releases focused on performance on top of a re-architected DOM tree. At the same time, we’ll continue to improve our performance telemetry and community resources like the CSS usage and API catalog.

    We’re just beginning to scratch the surface of what’s possible with our new DOM tree, and there’s still a long journey ahead, but we’re excited to see where it leads and to share it with you! Thanks!

    Travis Leithead & Matt Kotsenas, Program Managers, Microsoft Edge

    The post Modernizing the DOM tree in Microsoft Edge appeared first on Microsoft Edge Dev Blog.

    New Offline Books for Visual Studio 2017 Available for Download


Today we are happy to announce that new offline books for Visual Studio 2017 are now available for download. Now you can easily download content published on MSDN and Docs for consumption on the go, without needing an active internet connection. We are also hosting the book generation and fetching services entirely on Microsoft Azure, which makes them more performant and reliable – we will be continuously updating the content, so you will no longer be stuck with outdated books or have to wait 6 months for the next release. The process to create and update an offline book now takes hours instead of months!

    The new offline books continue to integrate directly with Visual Studio, allowing you to rely on the familiar in-context help (F1) and many features of the Help Viewer, such as indexed search, favorites and tables of contents that mirror those of online pages.

    Adding Help Viewer to your Visual Studio installation

    Starting with Visual Studio 2017, Help Viewer is now an optional component that you have to manually select during installation. With the new Visual Studio installer, this is a two-click process: simply select Individual Components, and click on Help Viewer under Code tools.

    Visual Studio 2017 Offline Books Help Viewer Install

    Available Books

    In addition to your usual developer content, such as books covering Visual C#, Visual F# and others, we have added brand new content to the list, including:

    • ASP.NET Core
    • ASP.NET API Reference
    • NuGet
    • Scripting Language Reference

    All these books are available in the Manage Content section of Help Viewer – click on Add next to the books that you are interested in and select Update at the bottom of the screen.

    Visual Studio 2017 Offline Books Help Viewer

    Feedback

    We are constantly looking to improve our offline content story. If you encounter any issues with the Help Viewer app, let us know via the Report a Problem option in the installer or in Visual Studio itself. If you have any suggestions, bug reports or ideas related to the content in offline books, please submit them on our UserVoice site – we will address them as soon as possible!

    Den Delimarsky, Program Manager, docs.microsoft.com
    @DennisCode

    Den drives the .NET, UWP and sample code experiences on docs.microsoft.com. He can be found occasionally writing about security and bots on his blog.

    Windows Developers at Microsoft Build 2017


    Microsoft Build 2017 kicks off on May 10 in Seattle, with an expected capacity crowd of over 5,000 developers—plus countless more online. Join us for the live-streamed keynotes, announcements, technical sessions and more. You’ll be among the first to hear about new developments that will help you engage your users, keep their information safe and reach them in more places. Big things have been unveiled and promoted at Microsoft Build over the years and this year’s conference won’t be any different!

    There will be quite a bit of content specifically relevant to Windows developers:

    • Improvements that help you immediately engage your users with beautiful UI and natural inputs
    • Team collaboration and connectedness to streamline and improve your development experience
    • Services that make it easier to reach customers and learn what they want from your software
    • Connected screens and experiences that make your end-to-end experience stickier and more engaging
    • Mixed reality and creating deeply immersive experiences

    Sign up for a Save-the-Date reminder on our Build site for Windows Developers and we’ll keep you in the loop as new details and information come in. When you sign up, you’ll also gain the ability to:

    • Save sessions for later viewing
    • Create and share content collections
    • Discuss what you’ve seen and heard with other developers
    • Upvote content you like and track trending sessions

    You’ll find sign-up, sessions and content at https://developer.microsoft.com/windows/projects/events/build/2017.

    The post Windows Developers at Microsoft Build 2017 appeared first on Building Apps for Windows.

    Microsoft R Server 9.1 now available


During today's Data Amp online event, Joseph Sirosh announced the new Microsoft R Server 9.1, which is available for customers now. In addition, the updated Microsoft R Client, which has the same capabilities for local use, is available free for everyone on both Windows and — new to this update — Linux.

    This release adds many new capabilities to Microsoft R, including: 

    Based on R 3.3.3. This update is based on the latest release of open source R from the R Foundation, bringing many improvements to the core language engine.

Microsoft R Client now available on Linux. The Microsoft R Client is available free and has all of the same capabilities as Microsoft R Server for a standalone machine (or you can use it to push heavy workloads to a remote Microsoft R Server, SQL Server, or compute cluster). With this release it is now also available on Linux (specifically, Ubuntu, Red Hat / CentOS and SUSE) in addition to Windows.

    New function for "pleasingly parallel" R computations on data partitions. The new rxExecBy function allows you to apply any R function to partitions of a data set, and compute on the partitions in parallel. (You don't have to manually split the data first; this happens automatically at the data source, without the need to move or copy the data.) You can use any parallel platform supported in Microsoft R: multiple cores in a local context, multiple threads in SQL Server, or multiple nodes in a Spark cluster.
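To make this concrete, below is a minimal sketch of rxExecBy; the flightData source and the Airline, ArrDelay and DepDelay columns are hypothetical, and the per-partition function follows the (keys, data) calling pattern described in the RevoScaleR documentation.

    library(RevoScaleR)

    # Per-partition function: invoked once per key value, in parallel.
    # 'keys' holds this partition's key values; 'data' is a data source for its rows.
    fitByAirline <- function(keys, data) {
      df <- rxImport(data)                        # materialize this partition locally
      coef(lm(ArrDelay ~ DepDelay, data = df))    # return one model's coefficients
    }

    # Split by Airline and apply fitByAirline to every partition in parallel.
    results <- rxExecBy(inData = flightData,      # e.g. an RxXdfData source (hypothetical)
                        keys   = c("Airline"),
                        func   = fitByAirline)

Because the partitioning happens at the data source, the same call can run unchanged after switching the compute context to SQL Server or a Spark cluster.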

    New functions for sentiment scoring and image featurization have been added to the built-in MicrosoftML package. The new getSentiment function returns a sentiment score (on a scale from "very negative" to "very positive") for a piece of English text. (You can also featurize text in English, French, German, Dutch, Italian, Spanish and Japanese as you build your own models.) The new featurizeImage function decomposes an image into a few numeric variables (using a selection of ResNet recognizers) which can then be used as the basis of a predictive model. Both functions are based on deep neural network models generated from thousands of compute-hours of training by Microsoft Research.
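As a rough illustration, sentiment scoring is exposed as an ML transform that can be applied through rxFeaturize; the data frame, the column names and the exact vars mapping below are assumptions to be checked against the MicrosoftML documentation.

    library(MicrosoftML)

    reviews <- data.frame(
      Text = c("The new release is wonderful", "This was a terrible experience"),
      stringsAsFactors = FALSE
    )

    # Adds a Score column with a sentiment score per row (higher = more positive).
    scored <- rxFeaturize(data = reviews,
                          mlTransforms = getSentiment(vars = list(Score = "Text")))

Image featurization plugs into the same mlTransforms pipeline, typically chained after image-loading and pixel-extraction transforms, with featurizeImage selecting which pre-trained ResNet model to use.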

    New machine learning models in Hadoop and Spark. The new machine learning functions introduced with Version 9.0 (such as FastRank gradient-boosted trees and GPU-accelerated deep neural networks) are now available in the Hadoop and Spark contexts in addition to standalone servers and within SQL Server. 
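In practice, using these learners on a cluster is mostly a matter of switching the compute context. The sketch below assumes it runs on an edge node of a Spark cluster and uses a hypothetical XDF dataset on HDFS with an ArrDelay15 label column; cluster connection details are left at their defaults.

    library(RevoScaleR)
    library(MicrosoftML)

    # Point RevoScaleR at the Spark cluster (connection details left at defaults).
    rxSetComputeContext(RxSpark())

    # Hypothetical training data stored as XDF on HDFS.
    flights <- RxXdfData("/share/flights", fileSystem = RxHdfsFileSystem())

    # Gradient-boosted trees (FastRank), trained where the data lives.
    model <- rxFastTrees(ArrDelay15 ~ DepDelay + Distance,
                         data = flights, type = "binary")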

Production-scale deployment of R models. The new publishService function creates a real-time web service from models trained with certain RevoScaleR and MicrosoftML functions; the service is independent of any underlying R interpreter and can return results with millisecond latency. There are also new tools for launching and managing asynchronous batch R jobs.
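A minimal sketch of that deployment flow, with a placeholder endpoint, credentials and model (publishService and remoteLogin come from the mrsdeploy package), might look like this:

    library(mrsdeploy)
    library(RevoScaleR)

    # Authenticate against an operationalized R Server (placeholder URL and credentials).
    remoteLogin("https://rserver.contoso.com:12800",
                username = "admin", password = "<password>")

    # Any supported RevoScaleR/MicrosoftML model; this one is purely illustrative.
    model <- rxLinMod(ArrDelay ~ DepDelay, data = flightSample)

    # Publish as a real-time service: scoring does not spin up an R interpreter per request.
    api <- publishService(name = "arrivalDelay",
                          v = "v1.0.0",
                          model = model,
                          serviceType = "Realtime")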

    Updates for R in SQL Server. There are also improvements for R Services in SQL Server 2016, in addition to the new R functions described above. There are new tools to manage installed R packages on SQL Server, and faster single-row scoring. Even more new capabilities are coming in SQL Server 2017 (now in preview for both Windows and Linux), including the ability to use both R and Python for in-database computations.

    For more on the updates in Microsoft R Server 9.1, check the official blog post by Nagesh Pabbisetty linked below.

    SQL Server Blog: Introducing Microsoft R Server 9.1 release
