
New and improved Event and CSS inspection for Microsoft Edge DevTools


Editor’s note: This is the first post in a series highlighting what’s new and improved in the Microsoft Edge DevTools with EdgeHTML 16.

EdgeHTML 16 is now rolling out to devices around the world as part of the Windows 10 Fall Creators Update—and with it come some great improvements to the Microsoft Edge DevTools.

In this post, we’ll walk through a slew of new updates to the Elements tab (formerly known as the DOM Explorer) with improvements to CSS at-rules, Event Listeners, Pseudo-Elements and the overall user experience.

CSS at-rules

In EdgeHTML 16, we’ve completely redesigned how at-rules appear in the Elements tab. Previously, CSS at-rules were either missing or displayed incorrectly. The DevTools now show @supports, @media and @keyframes styles in separate sections within the Styles pane:

Screen capture showing at-rules in the Elements tab

The new at-rules interface inspecting @keyframes on the CodePen example above

We also added a new pane for inspecting the rendered font of an element. In the Fonts pane, you can now see whether a font is served from the local machine or over the network, and what fallback font face, if any, was used. If the font is served over the network, the DevTools will show the corresponding @font-face rule:

Screen capture of text reading "Arial (Local, System, 67 chars)"

You can now inspect the rendered font of an element in the Fonts pane.

Ancestor Event Listeners

The DevTools now allow you to inspect all event listeners on ancestor elements up to the current Window with the Ancestors option. You can also group event listeners by Element to see how the events will fire at each level of the element tree. This enables you to determine where a rogue event listener might be firing higher up the tree.

Screen capture showing the new Ancestors option.

You can now inspect all event listeners on ancestor elements up to the current Window with the Ancestors option.

Screen capture of Ancestor Event Listeners grouped by Element

Ancestor Event Listeners can be grouped by Element or Event.

Pseudo-Elements

In the Styles pane, the DevTools now group styles by their respective pseudo-element, supporting ::before, ::after, ::first-letter, ::first-line and ::selection. This should make it easier to determine which style is winning the cascade.

Screen capture of the Styles pane with styles grouped by pseudo-element

Styles are now grouped by their respective pseudo-element.

User Experience

This release also includes some tweaks to the user experience of the DevTools to make it easier for developers who work in multiple browsers.

In particular, the Layout pane has been removed and the box model is now at the top of the Computed pane, freeing up space for two new panes: Fonts and DOM breakpoints.

Screen capture of the new Computed pane with the box model

The box model is now at the top of the Computed pane.

Finally, to make it easier on developers who switch between multiple browsers, we’ve added two new ways to launch the DevTools: Ctrl+Shift+I and Ctrl+Shift+J. Ctrl+Shift+I will launch the DevTools just as the F12 keybinding does today. Ctrl+Shift+J will launch the DevTools (if not already open) and take you directly to the Console.

You can try out and file feedback for these new features starting with EdgeHTML 16, and find us on Twitter @EdgeDevTools to let us know what you think!

Clay Martin, Program Manager, Microsoft Edge DevTools

The post New and improved Event and CSS inspection for Microsoft Edge DevTools appeared first on Microsoft Edge Dev Blog.


Improve website performance by optimizing images


We all want our web applications to load as fast as possible to give the best possible experience to the users. One of the steps to achieve that is to make sure the images we use are as optimized as possible.

If we can reduce the file size of the images then we can significantly reduce the weight of the website. This is important for various reasons, including:

  • Less bandwidth needed == cheaper hosting
  • The website loads faster
  • Faster websites have higher conversion rates
  • Less data needed to load your page on mobile devices (mobile data can be expensive)

Optimizing images is always better for the user, and therefore for you too, but it’s easy to forget and a bit cumbersome to do by hand. So, let’s look at a couple of options that are simple to use.

All these options use great optimization algorithms that are capable of reducing the file size of images by up to 75% without any noticeable quality loss.

Gulp

If you are already using Gulp, the gulp-imagemin package is a good option. Once configured, it will automatically optimize the images as part of your build.
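A typical task might look like the following sketch (paths are illustrative):

// gulpfile.js: assumes gulp and gulp-imagemin are installed as dev dependencies
const gulp = require('gulp');
const imagemin = require('gulp-imagemin');

// Optimize every image under src/images and write the results to dist/images
gulp.task('images', () =>
  gulp.src('src/images/*')
    .pipe(imagemin())
    .pipe(gulp.dest('dist/images'))
);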

Pros:

  • Can be automated as part of a build
  • Uses industry standard optimization algorithms
  • Supports both lossy and lossless optimization
  • It is open source

Cons:

  • Requires some configuration
  • Increases the build time, sometimes by a lot
  • Doesn’t optimize dynamically added images

Visual Studio Image Optimizer

The Image Optimizer extension for Visual Studio is one of the most popular extensions due to its simplicity of use and strong optimization algorithms.

Pros:

  • Remarkably simple to use – no configuration
  • Uses industry standard optimization algorithms
  • Supports both lossy and lossless optimization
  • It is open source

Cons:

  • No build time support
  • Doesn’t optimize dynamically added images

Azure Image Optimizer

Installing the Azure.ImageOptimizer NuGet package into any ASP.NET application will automatically optimize images once the app is deployed to Azure App Services with zero code changes to the web application. It uses the same algorithms as the Image Optimizer extension for Visual Studio.

To try out the Azure Image Optimizer you’ll need an Azure subscription. If you don’t already have one you can get started for free.

This is the only solution that optimizes images added dynamically at runtime, such as user-uploaded profile pictures.

Pros:

  • Remarkably simple to use
  • Uses industry standard optimization algorithms
  • Supports both lossy and lossless optimization
  • Optimizes dynamically added images
  • Set it and forget it
  • It is open source

Cons:

  • Only works on Azure App Service

To understand how the Azure Image Optimizer works, check out the documentation on GitHub. Spoiler alert – it is an Azure Webjob running next to your web application.

Final thoughts

There are many more options for image optimization that I didn’t cover, but it doesn’t really matter how you choose to optimize the images. The important part is that you optimize them.

My personal preference is to use the Image Optimizer extension for Visual Studio to optimize the known images and combine that with the Azure.ImageOptimizer NuGet package to handle any dynamically added images at runtime.

For more information about image optimization techniques check out Addy Osmani’s very comprehensive eBook Essential Image Optimization.

Introducing Azure Automation watcher tasks public preview


Azure Automation has the ability to integrate and automate processes across Azure and on-premises environments using a hybrid worker. This hybrid management capability has been extended to now deliver an automatic response to events in your datacenter using watcher tasks.

Watcher tasks deliver the ability to author a watcher runbook that polls a system in your environment and then calls an action runbook to process any events found. Typical scenarios for this new functionality include:

  • Look for new files that arrive in a folder and take action based on the content of the files.
  • Monitor for new requests in SharePoint and fulfill the request using an action runbook.
  • Watch for new email in an Office 365 folder and escalate to a ticketing system.
  • Monitor ITSM systems for incidents or requests and respond automatically.
  • Watch for alerts generated in a monitoring system and perform remediation tasks based on the alert type.

Watcher tasks can be authored for specific needs in the organization by using the extensive PowerShell modules available from Microsoft, partners, and customers. These modules can be imported from the PowerShell Gallery to integrate into existing systems to watch for events. Custom integration can be written in PowerShell to connect and monitor systems for actions to take.
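As a rough sketch (the folder path is illustrative, and Invoke-AutomationWatcherAction is assumed to be available on the hybrid worker to hand data to the action runbook), a watcher runbook that polls a folder for new files might look like this:

# Watcher runbook sketch: look for files created since the last poll and hand them off
$watchFolder = "C:\WatchedFolder"              # illustrative path on the hybrid worker
$lastPoll    = (Get-Date).AddMinutes(-5)       # assumes the watcher runs on a 5-minute cadence

Get-ChildItem -Path $watchFolder -File |
    Where-Object { $_.CreationTime -gt $lastPoll } |
    ForEach-Object {
        # Trigger the action runbook with the path of each new file
        Invoke-AutomationWatcherAction -Message "New file detected" -Data $_.FullName
    }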

You can get access to this new functionality from the Azure Automation account.

WatchersList

Watcher tasks consist of a watcher and action runbook that are authored in PowerShell and run on a Windows hybrid worker.

Pricing for Automation watcher tasks is available on the pricing page.

To learn more about Azure Automation watcher tasks, check out the watcher task tutorial.

Elastic database tools for Java


We are excited to announce the release of a new version of the Elastic Database Client Library for Azure SQL database supporting Java. The client library provides tools to help developers scale out the data tier of their applications using sharding, including support for multi-tenancy patterns for Software as a Service (SaaS) applications.

The key capabilities offered by the client library include:

  • Shard map management: Allowing you to define groups of shards for your application and manage key ranges for your shards.
  • Data-dependent routing: Enabling you to route incoming requests to the appropriate shard.
  • Multi-shard query: Supporting query across multiple shards for tasks such as data collection or reporting.

Like the existing C# version, the Java version is open source, and we welcome contributions from the community.

Key scenarios

High-volume OLTP: The client library enables a single cloud-based OLTP application to process massive data volumes, and to support high-end transaction processing needs by distributing data across potentially thousands of database shards.

Multi-tenant SaaS applications: The client library simplifies database management for highly scalable multi-tenant cloud applications. It can be used with a database-per-tenant model, where a database is provisioned for each tenant providing high tenant isolation, and with multi-tenant databases, where many tenants reside in each shard and share resources for cost effectiveness.

Continuous data collection: For applications designed to capture telemetry, or using an Internet-of-Things (IoT) data ingestion pattern, the client library enables seamless scale over time by allowing for regular creation of new shards for new date ranges. Newer shards can use higher service tiers and scale down over time as usage is reduced.

Next steps

  • To add the library to your Maven project, simply add the following dependency in your POM file.
<dependency>
    <groupId>com.microsoft.azure</groupId>
    <artifactId>elastic-db-tools</artifactId>
    <version>1.0.0</version>
</dependency>
  • To get started with the sample project, follow the instructions to download the sample project.
  • For more information on Azure SQL database tools for managing scaled out databases, see the documentation.
  • To make contributions to the code, follow instructions on GitHub.
  • Please submit feedback, questions, or comments to SaaSFeedback@microsoft.com.

Announcing Azure Location Based Services public preview


Today we announced the Public Preview availability of Azure Location Based Services (LBS). LBS is a portfolio of geospatial service APIs natively integrated into Azure that enable developers, enterprises, and ISVs to create location-aware apps and IoT, mobility, logistics, and asset tracking solutions. The portfolio currently comprises services for Map Rendering, Routing, Search, Time Zones, and Traffic. In partnership with TomTom and in support of our enterprise customers, Microsoft has added native location capabilities to the Azure public cloud.

Azure LBS offers a robust set of geospatial services atop a global geographic data set. These services comprise five primary REST services and a JavaScript Map Control. Each service has a unique set of capabilities atop the base map data, and all are built in unison and in accordance with Azure standards, making it easy to use the services together interoperably. Additionally, Azure LBS is fully hosted and integrated into the Azure cloud, meaning the services are compliant with all Azure fundamentals for privacy, usability, global readiness, accessibility, and localization. Users can manage all Azure LBS account information from within the Azure portal and are billed like any other Azure service.

Azure LBS uses key-based authentication. To get a key, go to the Azure portal and create an Azure LBS account. By creating an Azure LBS account, you automatically generate two Azure LBS keys. Both keys will authenticate requests to the various Azure LBS services. Once you have your account and your keys, you’re ready to start accessing Azure Location Based Services. The API model is simple to use: just parameterize your URL request to get rich responses from the service:

Sample Address Search Request: atlas.microsoft.com/search/address/json?api-version=1&query=1 Microsoft Way, Redmond, WA

Azure LBS enters public preview with five distinct services: Render (for maps), Route (for directions), Search, Time Zones, and Traffic, plus a JavaScript Map Control. Each of these services is described in more detail below.

Azure Map Control

The Azure Map Control is a JavaScript web control with built-in capabilities for fetching Azure LBS vector map tiles, drawing data atop them, and interacting with the map canvas. The Azure Map Control allows developers to layer their data atop Azure LBS maps in both vector and raster layers: if enterprise customers have coordinates for points, lines and polygons, or geo-annotated maps of a manufacturing plant, a shopping mall or a theme park, they can overlay these rasterized maps as a new layer atop the Azure Map Control. The map control has listeners for clicking the map canvas and getting coordinates from the pixels, allowing customers to send those coordinates to the services to search for businesses around that point, find the nearest address or cross street, generate a route to or from that point, or even connect to their own database to find geospatially referenced information important to their business near that point.

Azure Location Based Services Map Control


The Azure Map Control makes it simple for developers to jumpstart their development. By adding a few lines of code to any HTML document, you get a fully functional map.

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, user-scalable=no">
    <title>Hello Azure LBS</title>
    <link href="https://atlas.microsoft.com/sdk/css/atlas.min.css?api-version=1.0" rel="stylesheet" type="text/css">
    <script src="https://atlas.microsoft.com/sdk/js/atlas.min.js?api-version=1.0"></script>
    <style>
      html, body { width: 100%; height: 100%; padding: 0; margin: 0; }
      #map { width: 100%; height: 100%; }
    </style>
  </head>
  <body>
    <div id="map"></div>
    <script>
      var map = new atlas.Map("map", {
        "subscription-key": "[AZURE_LBS_KEY]",
        center: [-122.33, 47.64],
        zoom: 12,
        language: "en-US"
      });
    </script>
  </body>
</html>

In the above code sample, be sure to replace [AZURE_LBS_KEY] with your actual Azure LBS Key created with your Azure LBS Account in the Azure portal.

Render Service

The Azure LBS Render Service is used for fetching maps. The Render Service is the basis for maps in Azure LBS and powers the visualizations in the Azure Map Control. Users can request vector-based map tiles to render data and apply styling on the client. The Render Service also provides raster maps if you want to embed a map image into a web page or application. Azure LBS maps have high-fidelity geographic information for over 200 regions around the world and are available in 35 languages and two versions of neutral ground truth.

Azure Location Based Services Render Service

The Azure LBS cartography was designed from the ground up and created with the enterprise customer in mind. There is less information at lower levels of detail (zoomed out) and higher-fidelity information as you zoom in. The design is meant to inspire enterprise customers to render their data atop Azure LBS maps without additional detail bleeding through and disrupting the value of customer data.

Routing Service

The Azure LBS Routing Service is used for getting directions, and not just point A to point B directions. The Azure LBS Routing Service has a slew of map data available to the routing engine, allowing it to modify the calculated directions based on a variety of scenarios. First, the Routing Service provides customers the standard routing capabilities they would expect, with a step-by-step itinerary. The route can be calculated as the fastest or shortest, or to avoid highly congested roads or traffic incidents. Traffic-based routing comes in two flavors: “historic,” which is great for future route-planning scenarios when users would like a general idea of what traffic tends to look like on a given route; and “live,” which is ideal for active routing scenarios when a user is leaving now and wants to know where traffic exists and the best ways to avoid it.

Azure LBS Routing will also allow for commercial vehicle routing, providing alternate routes made just for trucks. Commercial vehicle routing supports parameters such as vehicle height, weight, number of axles, and hazardous material contents, all to choose the best, safest, and recommended roads for transporting a haul. The Routing Service provides a variety of travel modes, including walking, biking, motorcycle, taxi, and van routing.

Azure Location Based Services Route Service

Customers can also specify up to 50 waypoints along their route if they have pre-determined stops to make. If customers are looking for the best order in which to stop along their route, they can pass up to 20 waypoints into the Routing Service and have Azure LBS determine the best order for the stops and generate an itinerary for them.

Using the Azure LBS Route Service, customers can also specify arrival times when they need to be at a specific location by a certain time. Using a massive amount of traffic data (nearly a decade of probe data captured per road geometry at high-frequency intervals), Azure LBS can tell customers the best time of departure for a given day of the week and time. Additionally, Azure LBS can use current traffic conditions to notify customers of a road change that may impact their route and provide updated times and/or alternate routes.

Azure LBS can also take into consideration the engine type being used. By default, Azure LBS assumes a combustion engine; however, if an electric engine is in use, Azure LBS will accept input parameters for power settings and generate the most energy-efficient route.

The Routing Service also allows multiple alternate routes to be generated in a single query, which saves on over-the-wire transfer. Customers can also specify that they would like to avoid specific route types such as toll roads, freeways, ferries, or carpool roads.

Sample Commercial Vehicle Route Request: atlas.microsoft.com/route/directions/json?api-version=1&query=52.50931,13.42936:52.50274,13.43872&travelMode=truck

Search Service

The Azure LBS Search Service provides the ability for customers to find real-world objects and their respective locations. The Search Service provides three major functions:

  1. Geocoding: Finding addresses, places and landmarks
  2. POI Search: Finding businesses based on a location
  3. Reverse Geocoding: Finding addresses or cross streets based on a location

Azure Location Based Services Search Service

With the Search Service, customers can find addresses and places from around the world. Azure LBS supports address-level geocoding in 38 regions, cascading to house-number, street-level, and city-level geocoding for other regions of the world. Customers can pass addresses into the service in a structured address format, or they can use an unstructured form when they want to allow their customers to search for addresses, places, or businesses in a single query. Users can restrict their searches by region or bounding box, and can supply a specific coordinate to influence the search results and improve quality. By reversing the query and providing a coordinate, say from a GPS receiver, customers can get the nearest address or cross street returned from the service.

The Azure LBS Search Service also allows customers to query for business listings. The Search Service contains hundreds of categories and hundreds of sub-categories for finding businesses or points of interest around a specific point or within a bounding area. Customers can query for businesses based on brand name or general category and filter those results based on location, bounding box or region.

Sample POI Search Request (Key Required): atlas.microsoft.com/search/poi/category/json?api-version=1&query=electric%20vehicle%20station&countrySet=FRA
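That same request can be issued from code. For example, here is a small sketch using the browser’s fetch API (the subscription-key query parameter name is an assumption here, matching the key name used by the map control above):

// Sketch: call the Azure LBS Search Service for electric vehicle stations in France
var key = "[AZURE_LBS_KEY]"; // your Azure LBS key from the Azure portal
var url = "https://atlas.microsoft.com/search/poi/category/json" +
          "?api-version=1&query=electric%20vehicle%20station&countrySet=FRA" +
          "&subscription-key=" + key;

fetch(url)
  .then(function (response) { return response.json(); })
  .then(function (data) { console.log(data.results); }); // inspect the returned points of interest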

Time Zone Service

The Azure LBS Time Zone Service is a first of its kind, providing the ability to query time zones and times for locations around the world. Customers can now submit a location to Azure LBS and receive the respective time zone, the current time in that time zone, and the offset from Coordinated Universal Time (UTC). The Time Zone Service provides access to historical and future time zone information, including changes for daylight saving time. Additionally, customers can query for a list of all time zones and the current version of the data, allowing customers to optimize their queries and downloads. For IoT customers, the Azure LBS Time Zone Service allows for POSIX output, so users can download information to devices that only infrequently access the internet. Additionally, for Microsoft Windows users, Azure LBS can transform Windows time zone IDs to IANA time zone IDs.

Sample Time Zone Request (Key Required): atlas.microsoft.com/timezone/byCoordinates/json?api-version=1&query=32.533333333333331,-117.01666666666667

Traffic Service

The Azure LBS Traffic Service provides our customers with the ability to overlay and query traffic flow and incident information. In partnership with TomTom, Azure LBS has access to a best-in-class traffic product with coverage in 55 regions around the world. The Traffic Service provides the ability to natively overlay traffic information atop the Azure Map Control for a quick and easy means of viewing traffic issues. Additionally, customers have access to traffic incident information: real-time issues happening on the road, collected through probe information on the roads. The traffic incident information provides additional detail such as the type of incident and the exact location. The Traffic Service will also provide our customers with details of incidents and flow, such as the distance and time from one’s current position to the “back of the line” and, once a user is in the congestion, the distance and time until they’re out of it.

Azure Location Based Services Traffic Service

Sample Traffic Flow Segment Request: atlas.azure-api.net/traffic/flow/segment/json?api-version=1&unit=MPH&style=absolute&zoom=10&query=52.41072,4.84239

Azure Location Based Services are available now in public preview via the Azure portal. Get your account created today.

Last week in Azure: Migrating VMware environments to Azure, and more


Due to the Thanksgiving holiday in the US, last week was a light news week for Azure. But that doesn’t mean nothing new was announced. On Tuesday, Corey Sanders announced new services that will make it easier than ever to migrate VMware-based environments to Azure. Central to this is Azure Migrate, a free service that became broadly available yesterday. This is just one set of investments in Azure’s comprehensive hybrid capabilities to help you along your cloud journey. See below for a summary of what else happened prior to the holiday break.

Compute & Containers

Transforming your VMware environment with Microsoft Azure – The post from Corey Sanders mentioned above that goes into detail about the new services that will help you migrate your VMware environments to Azure.

Jenkins on Azure update - ACI experiment and AKS support – Learn how you can get your Jenkins build agents running in Azure with the Jenkins in the Cloud tutorial, in which you’ll use the Azure Container agent to add on-demand capacity and use Azure Container Instances (ACI) to build the Spring PetClinic sample application.

Data

#AzureSQLDW cost savings with optimized for elasticity and Azure Functions – part 1 – Learn how you can push data quickly into Azure Analysis Services for optimized performance by using Azure Functions to create schedule-based scaling to control when you want to have your data warehouse scaled up, down, or paused. This enables you to control cost by having the compute level match query activity patterns.

Azure SQL Databases Disaster Recovery 101 – Geo Restore and Geo Replication provide two different solutions for coping with disaster recovery. In this post you’ll learn how these solutions differ based on data loss, recovery time, and cost. Understanding these differences will help you choose between the two solutions.

Management

Azure Advisor - your personalized best practices service got better – Azure Advisor has a new dashboard to help you review its recommendations across multiple subscriptions.

Public preview of new Azure Policy features – Several new features for Azure Policies announced at Ignite (Azure Compute session, Azure Governance session, and Azure Resource Manager session) are now in public preview.

Time to migrate off Access Control Service – Access Control Service is retiring and will be discontinued next year on November 7, 2018. Read this post for details about the retirement schedule, how to determine if you’re using this service, and how to migrate if you are.

Additional news

Announcing the public preview of Azure Media Clipper – A public preview of new capabilities in Azure Media Services (AMS) for composing media clips and stitching together rendered videos. Azure Media Clipper is a free JavaScript library that enables web developers to provide their users with an interface for creating clips.

R3 on Azure: Strengthening our partnership – Microsoft is expanding its strategic partnership with R3 to integrate R3’s distributed ledger platform, Corda, and R3Net more deeply with Azure.

Microsoft showcases latest industrial IoT innovations at SPS 2017 – Learn about several IoT product announcements that we made at SPS IPC Drives 2017 in Nuremberg last week.

Content, SDKs, and samples

The Developer’s Guide to Microsoft Azure - 2nd Edition – This is a major revision to the original Developer’s Guide to Azure. This free e-book provides a comprehensive technical overview of Azure for developers from my colleague Michael Crump and Pluralsight author Barry Luijbregts.

Azure Shows

Using Web App for Containers in a Multi-Tier Application – Ahmed Elnably joins Scott Hanselman to discuss a sample project, Developer Finder, that was created to showcase how to use Web App for Containers in the context of a multi-tiered application.

Azure Analysis Services: Desktop PowerBI to the Cloud – The new web modeling experience for Azure Analysis Service can supersize the models that you have built for Power BI. In this episode, Josh Caplan will show how you can take data models that were built inside the Power BI desktop and easily convert them to Azure Analysis Services models. You can then use all the Power of Azure Analysis Services to scale your model to hundreds or even thousands of users.

The Azure Podcast: Episode 205 - SQL Vulnerability Assessment – Extremely informative discussion with Ronit Reger, a Senior PM on the SQL team, about this new service for Azure SQL that makes it easier for customers to find out if their database is vulnerable to attacks. A must-have for anyone using SQL in Azure.

Cloud Tech 10 - 27th November 2017 - Jenkins with ACI, SQL DW with Functions and more! – Each week, Mark Whitby, a Cloud Solution Architect at Microsoft UK, covers what's happening with Microsoft Azure in 10 minutes or less. In this episode:

  • Using Azure Container Instances as build agents for Jenkins
  • Improvements to Azure Advisor
  • Scale Azure SQL Data Warehouse using Azure Functions
  • Public Preview of Azure Policy

VSTS Update – Nov 28


This week we will be deploying our sprint 126 work.  This deployment is probably one of the most complicated in a while.  Quite a lot of it actually rolled out at our Connect(); event the week before Thanksgiving and then Thanksgiving week stalled the completion of the deployments so we are just getting back to it this week.   All the ruckus around Connect(); and Thanksgiving has caused the sprint 126 deployment to run up against the sprint 127 deployment.  The sprint 127 deployment will begin to hit public rings within a week.

Also, to be quite honest, the rush to get all of our new capabilities deployed and enabled in time for our Connect(); event led to some stability problems I’m not happy about.  This is part of what caused us to be more cautious during Thanksgiving week.  We are also spending time now really looking at all the incidents we had, root causing them and figuring out how not to have it happen again.  I apologize for any down time we inflicted on you.  We’re working to adjust processes to catch issues earlier.

You can read the release notes for the sprint 126/Connect(); releases to get all the details.  The thing that should probably most jump out at you is how much there is.  It’s 43 new features and countless bug fixes and small improvements.  That is a new record for us in a single sprint delivery.

I’m not going to try to recap the highlights here because there are so many *big* improvements that I’d just end up rewriting the release notes.  I encourage you to go read them and feel free to pass on any feedback you have.

Thanks,

Brian

Azure Analysis Services integration with Azure Diagnostic Logs


We are pleased to announce that Azure Analysis Services is integrated with Azure Monitor Resource Diagnostic Logs. Diagnostic logging is a key feature for IT owned BI implementations. We have taken steps to ensure you can confidently run diagnostic logging on production Azure Analysis Services servers without a performance penalty.

Various scenarios are supported, including the following:

  • Auditing
  • Monitoring of server health
  • Derivation of usage metrics
  • Understanding which user groups are using which datasets and when
  • Detection of long-running or problematic queries
  • Detection of users experiencing errors

Traditionally, customers have used SSAS Extended Events (xEvents) on premises. This normally involved xEvent session management, output to a binary XEL file, use of special system functions in SQL Server to access the data within the files, and complex parsing of XML output. Having done all that, the data could be stored somewhere and subsequently consumed for analysis. It was often not automatically integrated with other usage data such as performance counter metrics and logs from other components of the architecture. Azure diagnostic logging makes this process simpler and easier for Azure Analysis Services.

Set up diagnostic logging

To set it up, select the “Diagnostic logs” blade for an Azure Analysis Services server in the Azure portal. Then click the add diagnostic setting link.

01 Azure Portal

The diagnostic settings blade is displayed.

02 Diagnostic Settings

Here you can define up to 3 targets for diagnostic logs.

  1. Archive to a storage account: Log files are stored in JSON format (not XEL files).
  2. Stream to an event hub: This allows broad integration with, for example, big-data systems.
  3. Send to Log Analytics: This leverages the particularly useful Azure Log Analytics, which provides built-in analysis, dashboarding, and notification capabilities.

The following log categories are available for selection.

  • The engine category instructs Azure Analysis Services to log the following xEvents. Unlike xEvents in SSAS, it is not possible to select individual xEvents. The Log Analytics model assumes it is relatively inexpensive to log all the events and ask questions later. We have had feedback from the community that these are the most valuable xEvents, and we have excluded verbose events that can affect server performance. Further xEvents may of course be added in the future, especially when releasing new features.

(XEvent) Category | Event Name
------------------|------------------------------
Security Audit    | Audit Login
Security Audit    | Audit Logout
Security Audit    | Audit Server Starts And Stops
Progress Reports  | Progress Report Begin
Progress Reports  | Progress Report End
Progress Reports  | Progress Report Current
Queries           | Query Begin
Queries           | Query End
Commands          | Command Begin
Commands          | Command End
Errors & Warnings | Error
Discover          | Discover End
Notification      | Notification
Session           | Session Initialize
Locks             | Deadlock
Query Processing  | VertiPaq SE Query Begin
Query Processing  | VertiPaq SE Query End
Query Processing  | VertiPaq SE Query Cache Match
Query Processing  | Direct Query Begin
Query Processing  | Direct Query End

  • The service category includes the following service-level events.

Operation name    | Occurs when
------------------|------------------------------------------------------------
CreateGateway     | User configures a gateway on server
ResumeServer      | Resume a server
SuspendServer     | Pause a server
DeleteServer      | Delete a server
RestartServer     | User restarts a server through SSMS or PowerShell
GetServerLogFiles | User exports server log through PowerShell
ExportModel       | User exports model in Azure Portal. For example, "Open in Power BI Desktop", "Open in Visual Studio"

  • The All Metrics category logs events for metric readings. These are the same metrics displayed in the Metrics blade of the Azure portal for an Azure Analysis Services server.

03 Metrics

Metrics and server events are integrated with xEvents in Log Analytics for side-by-side analysis. Log Analytics can also be configured to receive events from a range of other Azure services providing a holistic view of diagnostic logging data across customer architectures. Adding the diagnostic setting can be done from PowerShell using the Set-AzureRmDiagnosticSetting cmdlet.
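For example, a sketch of that PowerShell call, sending the Engine category to a Log Analytics workspace (the resource IDs below are placeholders), might look like this:

# Sketch: enable the Engine log category for an Azure Analysis Services server
$serverId    = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.AnalysisServices/servers/<server-name>"
$workspaceId = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"

Set-AzureRmDiagnosticSetting -ResourceId $serverId -WorkspaceId $workspaceId -Enabled $true -Categories Engine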

Consume diagnostic logs in Log Analytics

With some log data already generated, navigate to the Log Analytics section of the Azure portal and select the target “OMS workspace”. Then click on Log Search.

04 Log Analytics

Click on all collected data to get started.

05 Log Search

Then, click on AzureDiagnostics and Apply. AzureDiagnostics includes engine and service events.

06 Azure Diagnostics

Notice that a Log Analytics query is being constructed on the fly. The EventClass_s field contains xEvent names, which may look familiar if you have used xEvents on premises. Click EventClass_s or one of the event names and Log Analytics will continue constructing a query based on interaction in the user interface. Log Analytics has the ability to save searches for later reuse.

This post describes only the tip of the iceberg regarding consuming Log Analytics data. For example, Operations Management Suite provides a website with enhanced query, dashboarding, and alerting capabilities on Log Analytics data.

Search query sample

The following sample query returns queries submitted to Azure Analysis Services that took over 5 minutes (300,000 milliseconds) to complete. The generic xEvent columns are normally stored as strings and therefore end with the "_s" suffix. In order to filter on the queries that took over 5 minutes, it is necessary to cast Duration_s to a numeric value. This can be achieved using the toint() syntax.

search * | where ( Type == "AzureDiagnostics" ) | where ( EventClass_s == "QUERY_END" ) | where toint(Duration_s) > 300000

Query scale out

When using scale out, you can identify read-only replicas because the ServerName_s field values have the replica instance number appended to the name. The resource field contains the Azure resource name, which matches the server name that the users see. Additionally, the IsQueryScaleoutReadonlyInstance_s field equals true for replicas.

08 Scale out

Consume diagnostic logs in Power BI

The feature that probably unlocks Log Analytics data for most BI professionals is the Power BI button. Simply click to download a text file that contains an M expression, which can be pasted into a blank query in Power BI Desktop. The expression contains the current Log Analytics query and consumes from the Log Analytics REST API.

09 Power BI button

This enables a variety of analytical reports such as the following one, showing the information below.

  • The S4 server is not hitting the 100 GB memory limit.
  • During the time range, the QPU is maxed out. S4 servers are limited to 400 QPUs.
  • Long running queries were taking place during processing/data refresh operations.
  • Users received timeout errors due to contention between long running queries and processing operations.
  • The server may be a good candidate for query scale out.

10 Power BI report

We hope you’ll agree that Azure Analysis Services integration with Azure Monitor Resource Diagnostic Logs provides a rich capability for auditing and monitoring, side-by-side analysis of xEvent data with other data such as metrics data, and is easier to set up than xEvents on premises. Enjoy!


Configuring HTTPS in ASP.NET Core across different platforms


As the web moves to be more secure by default, it’s more important than ever to make sure your websites have HTTPS enabled. And if you’re going to use HTTPS in production, it’s a good idea to develop with HTTPS enabled so that your development environment is as close to your production environment as possible. In this blog post we’re going to go through how to set up an ASP.NET Core app with HTTPS for local development on Windows, Mac, and Linux.

This post is primarily focused on enabling HTTPS in ASP.NET Core during development using Kestrel. When using Visual Studio you can alternatively enable HTTPS in the Debug tab of your app to easily have IIS Express enable HTTPS without it going all the way to Kestrel. This closely mimics what you would have if you’re handling HTTPS connections in production using IIS. However, when running from the command-line or in a non-Windows environment you must instead enable HTTPS directly using Kestrel.

The basic steps we will use for each OS are:

  1. Create a self-signed certificate that Kestrel can use
  2. Optionally trust the certificate so that your browser will not warn you about using a self-signed certificate
  3. Configure Kestrel to use that certificate

You can also reference the complete Kestrel HTTPS sample app.

Create a certificate

Windows

Use the New-SelfSignedCertificate Powershell cmdlet to generate a suitable certificate for development:

New-SelfSignedCertificate -NotBefore (Get-Date) -NotAfter (Get-Date).AddYears(1) -Subject "localhost" -KeyAlgorithm "RSA" -KeyLength 2048 -HashAlgorithm "SHA256" -CertStoreLocation "Cert:\CurrentUser\My" -KeyUsage KeyEncipherment -FriendlyName "HTTPS development certificate" -TextExtension @("2.5.29.19={critical}{text}","2.5.29.37={critical}{text}1.3.6.1.5.5.7.3.1","2.5.29.17={critical}{text}DNS=localhost")

Linux & Mac

For Linux and Mac we will use OpenSSL. Create a file https.config with the following data:
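The exact contents can vary; a minimal https.config that works with the commands below might look like the following (the key file name, subject, and extensions here are assumptions):

[ req ]
default_bits       = 2048
default_md         = sha256
default_keyfile    = key.pem
prompt             = no
encrypt_key        = no
distinguished_name = req_distinguished_name
x509_extensions    = v3_req

[ req_distinguished_name ]
commonName = localhost

[ v3_req ]
basicConstraints = critical, CA:false
keyUsage         = critical, keyEncipherment
extendedKeyUsage = critical, serverAuth
subjectAltName   = critical, DNS:localhost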

Run the following command to generate a private key and a certificate signing request:

openssl req -config https.config -new -out csr.pem

Run the following command to create a self-signed certificate:

openssl x509 -req -days 365 -extfile https.config -extensions v3_req -in csr.pem -signkey key.pem -out https.crt

Run the following command to generate a pfx file containing the certificate and the private key that you can use with Kestrel:

openssl pkcs12 -export -out https.pfx -inkey key.pem -in https.crt -password pass:<password>

Trust the certificate

This step is optional, but without it the browser will warn you that your site may be unsafe. You will see something like the following if your browser doesn’t trust your certificate:

Windows

To trust the generated certificate on Windows you need to add it to the current user’s trusted root store:

  1. Run certmgr.msc
  2. Find the certificate under Personal/Certificates. The “Issued To” field should be localhost and the “Friendly Name” should be HTTPS development certificate
  3. Copy the certificate and paste it under Trusted Root Certification Authorities/Certificates
  4. When Windows presents a security warning dialog to confirm you want to trust the certificate, click on “Yes”.

Linux

There is no centralized way of trusting a certificate on Linux, so you can do one of the following:

  1. Add the URL you are using to your browser's exclude list
  2. Trust all self-signed certificates on localhost
  3. Add the https.crt to the list of trusted certificates in your browser.

How exactly to achieve this depends on your browser/distro.

Mac

Option 1: Command line

Run the following command:

sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain https.crt

Some browsers, such as Chrome, require you to restart them before this trust will take effect.

Option 2: Keychain UI

If you open the “Keychain Access” app you can drag your https.crt into the Login keychain.

Configure Kestrel to use the certificate we generated

To configure Kestrel to use the generated certificate, add the following code and configuration to your application.

Application code

This code will read a set of HTTP server endpoint configurations from a custom section in your app configuration settings and then apply them to Kestrel. The endpoint configurations include settings for configuring HTTPS, like which certificate to use. Add the code for the ConfigureEndpoints extension method to your application and then call it when setting up Kestrel for your host in Program.cs:
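The full helper supports both certificate files and the Windows certificate store; as a simplified sketch that handles only file-based certificates (with configuration keys that mirror the HttpServer:Endpoints examples below), it might look roughly like this:

using System;
using System.Net;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;

public static class KestrelHostBuilderExtensions
{
    // Reads endpoint definitions from the "HttpServer:Endpoints" configuration section
    // and configures a Kestrel listener for each one.
    public static IWebHostBuilder ConfigureEndpoints(this IWebHostBuilder builder)
    {
        return builder.UseKestrel((context, options) =>
        {
            var endpoints = context.Configuration.GetSection("HttpServer:Endpoints").GetChildren();
            foreach (var endpoint in endpoints)
            {
                var port = endpoint.GetValue<int>("Port");
                var scheme = endpoint.GetValue<string>("Scheme");

                options.Listen(IPAddress.Any, port, listenOptions =>
                {
                    if (string.Equals(scheme, "https", StringComparison.OrdinalIgnoreCase))
                    {
                        // Load the certificate file and password named in configuration
                        listenOptions.UseHttps(
                            endpoint.GetValue<string>("FilePath"),
                            endpoint.GetValue<string>("Password"));
                    }
                });
            }
        });
    }
}

With that in place, the host in Program.cs can be built with something like WebHost.CreateDefaultBuilder(args).ConfigureEndpoints().UseStartup&lt;Startup&gt;().Build().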

Windows sample configuration

To configure your endpoints and HTTPS settings on Windows you could then put the following into your appsettings.Development.json, which configures an HTTPS endpoint for your application using a certificate in a certificate store:
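For example (the property names are illustrative for this sketch; StoreName and StoreLocation point at the certificate created earlier in the current user’s personal store):

{
  "HttpServer": {
    "Endpoints": {
      "Http": {
        "Host": "localhost",
        "Port": 8080,
        "Scheme": "http"
      },
      "Https": {
        "Host": "localhost",
        "Port": 8443,
        "Scheme": "https",
        "StoreName": "My",
        "StoreLocation": "CurrentUser",
        "Subject": "localhost"
      }
    }
  }
}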

Linux and Mac sample configuration

On Linux or Mac your appsettings.Development.json would look something like this, where your certificate is specified using a file path:
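For example (again with illustrative property names, apart from the Password key referenced below):

{
  "HttpServer": {
    "Endpoints": {
      "Http": {
        "Host": "localhost",
        "Port": 8080,
        "Scheme": "http"
      },
      "Https": {
        "Host": "localhost",
        "Port": 8443,
        "Scheme": "https",
        "FilePath": "https.pfx",
        "Password": "<certificate password>"
      }
    }
  }
}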

You can then use the user secrets tool, environment variables, or some secure store such as Azure KeyVault to store the password of your certificate using the HttpServer:Endpoints:Https:Password configuration key instead of storing the password in a file that goes into source control.

For example, to store the certificate password as a user secret during development, run the following command from your project:

dotnet user-secrets set HttpServer:Endpoints:Https:Password <password>

To override the certificate password using an environment variable, create an environment variable named HttpServer:Endpoints:Https:Password (or HttpServer__Endpoints__Https__Password if your system does not allow :) with the value of the certificate password.

Run your application

When running from Visual Studio you can change the default launch URL for your application to use the HTTPS address by modifying the launchSettings.json file:
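For example, a sketch of a profile that opens the browser at the HTTPS address (the profile name and port are illustrative):

{
  "profiles": {
    "MyApp": {
      "commandName": "Project",
      "launchBrowser": true,
      "launchUrl": "https://localhost:8443/",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    }
  }
}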

Redirect from HTTP to HTTPS

When you set up your site to use HTTPS by default, you typically still want to allow HTTP requests, but have them redirected to the corresponding HTTPS address. In ASP.NET Core this can be accomplished using the URL rewrite middleware. Place the following code in the Configure method of your Startup class:
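A minimal version of that registration, using the Microsoft.AspNetCore.Rewrite package, might look like this:

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Rewrite;

public class Startup
{
    public void Configure(IApplicationBuilder app)
    {
        // Redirect any incoming HTTP request to the corresponding HTTPS address
        app.UseRewriter(new RewriteOptions().AddRedirectToHttps());

        // ... the rest of the request pipeline (static files, MVC, and so on)
    }
}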

Conclusion

With a little bit of work you can set up your ASP.NET Core 2.0 site to always use HTTPS. For a future release we are working to simplify setting up HTTPS for ASP.NET Core apps, and we plan to enable HTTPS in the project templates by default. We will share more details on these improvements as they become publicly available.

How to download embedded videos with F12 Tools in your browser


I got an email this week asking how to download some of my Azure Friday video podcast videos from http://friday.azure.com as well as some of the Getting Started Videos from Azure.com.

NOTE: Respect copyright and consider what you’re doing and WHY before you use this technique to download videos that may have been embedded for a reason.

I told them to download the videos with F12 tools, and they weren't clear how. I'll use an Azure Friday video for the example. Do be aware that there are a ton of ways to embed video on the web and this doesn't get around ones that REALLY don't want to be downloaded. This won't help you with Netflix, Hulu, etc.

First, I'll visit the site with the video I want in my browser. I'll use Chrome but this also works in Edge or Firefox with slightly different menus.

Then press F12 to bring up the Developer Tools pane and click Network. In Edge, click Content Type, then Media.

Download embedded videos with F12

Click the "clear" button to set up your workspace. That's the International No button there in the Network pane. Now, press Play and get ready.

Look in the Media list for something like ".mp4" or something that looks like the video you want. It'll likely have an HTTP Response in the 20x range.

Download 200

In Chrome, right click on the URL and select Copy as CURL. If you're on Windows pick cmd.exe and bash if you're on Linux/Mac.

Downloading with CURL

You'll get a crazy long command put into your clipboard. It's not all needed but it's a very convenient feature the browser provides, so it's worth using.

Get Curl: If you don't have the "curl" command you'll want to download "curl.exe" from here https://curl.haxx.se/dlwiz/ and, if you like, put it in your PATH. If you have Windows, get the free bundled curl version with installer here.

Open a terminal/command prompt - run cmd.exe on Windows - and paste in the command. If the browser you're using only gives you the URL and not the complete "curl" command, the command you're trying to build is basically curl [url] -o [outputfile.mp4]. It's best if you can get the complete command like the one Chrome provides, as it may include authentication cookies or other headers that omitting may prevent your download from working.

image

BEFORE you press enter, make sure you add "-o youroutputfilename.mp4". Also, if you get an error about security and certificates, you may need to add "--insecure."

Downloading a streaming video file with CURL

In the screenshot above I'm saving the file as "test.mp4" on my desktop.

There are several ways to download embedded videos, including a number of online utilities that come and go, but this technique has been very reliable for me.





© 2017 Scott Hanselman. All rights reserved.
     

How to generate a Secret Santa list with R


Several recent blog posts have explored the Secret Santa problem and provided solutions in R. This post provides a roundup of various solutions and how they are implemented in R. 

If you wanted to set up a "Secret Santa" gift exchange at the office, you could put everyone's name into a hat and have each participant draw a name at random. The problem is that someone might draw their own name, but if that happens you can just reshuffle all the names back into the hat and start the process over. That's essentially what the R code below, from a blog post by David Selby, does:

🙈🎁 w/ code:
"Secret Santa in R" ✏ @TeaStats https://t.co/HFrh8w6CWU #rstats pic.twitter.com/cGfNA45CbG

— Mara Averick (@dataandme) November 24, 2017
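In sketch form (a simplified version, not Selby’s exact code), that draw-and-reshuffle approach looks something like this:

# Keep drawing names from the hat until nobody is assigned to themselves
secret_santa <- function(names) {
  repeat {
    recipients <- sample(names)
    if (all(recipients != names)) break   # reject any draw with a self-assignment
  }
  data.frame(santa = names, recipient = recipients)
}

secret_santa(c("Alice", "Bob", "Carol", "Dave"))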

That's not an entirely satisfying solution (at least to me), with all of the having to check for self-giving and restarting if so. Thomas Lumley calculates that the chance of requiring a do-over is about 63% (for more than 2 participants, anyway), and on average you'd need about 2.7 tries to get a "valid" gift list. (This is an example of the negative binomial distribution in action: we keep on drawing a set of names from the hat, until we get a set that has no-one giving to themselves. You can generate 100 examples of this playing out in R with rnbinom(100,1, exp(-1))+1 — the +1 is for the final successful draw from the hat.)

An easier way might be simply to seat all the participants in random order in a circle and assign them to give a gift to the person on their right. This is easy to do in R, and doing it in code has the benefit of keeping the recipients secret from each other. First, let's select 10 names from the babynames dataset:

> library(babynames)
> santas <- sample(babynames$name, 10, prob=babynames$prop) 

(Using prob=babynames$prop selects names according to their prevalence in US births 1880-2015.) Then, it's a simple matter of reordering the names at random, and assigning a gift to the next in line:

> p <- sample(santas)
> cbind(santa=p, recipient=c(tail(p,-1),p[1]))
      santa       recipient
 [1,] "Sherman"   "Shayna"
 [2,] "Shayna"    "Elizabeth"
 [3,] "Elizabeth" "Mary"
 [4,] "Mary"      "Kathleen"
 [5,] "Kathleen"  "Russell"
 [6,] "Russell"   "James"
 [7,] "James"     "Arlene"
 [8,] "Arlene"    "Ruth"
 [9,] "Ruth"      "Darryl"
[10,] "Darryl"    "Sherman"

Now, this "sit in a circle" process isn't quite the same as the "keep on drawing names from the hat" process. In the above example, the 10 names form a cycle, and it will never happen that Arlene gives to Ruth and Ruth gives to Arlene. Nonetheless, I think it's the simplest and fairest way of generating a Secret Santa list.

Thinking of the gift-giving relationship as a graph, the process above generates a single Hamiltonian cycle through the recipients, and never splits the group into multiple smaller gift-giving loops. Tristan Mahr explores the graph-theory nature of the Secret Santa problem in a blog post. There, he uses the DiagrammeR package to solve variants of the Secret Santa problem by constructing graphs, which can be created and visualized quite easily in R.

Santagraph

Finally, Sarah Lotspeich and Lucy D'Agostino McGowan take the whole process one step further and show how to generate emails to each participant using the ponyexpress package, notifying them of their Secret Santa recipient.

ADL Tools for Visual Studio Code (VSCode) supports Python & R Programming


We are thrilled to introduce support for Azure Data Lake (ADL) Python and R extensions within Visual Studio Code (VSCode). This means you can easily add Python or R scripts as custom code extensions in U-SQL scripts, and submit such scripts directly to ADL with one click. For data scientists who value the productivity of Python and R, ADL Tools for VSCode offers a fast and powerful code editing solution. VSCode makes it simple to get started and provides easy integration with U-SQL for data extract, data processing, and data output.

With ADL Tools for VSCode, you can choose your preferred language and use already familiar techniques to build your custom code. For example, developers using Python can now use REFERENCE ASSEMBLY to bring in the needed Python libraries and leverage built-in reducers to run Python code on each job execution vertex. You can also embed Python code, which accepts a pandas DataFrame as input and returns a pandas DataFrame as output, into your U-SQL script. Data scientists using R can perform massively parallel execution of R code for data science scenarios such as merging various data files, parallel feature engineering, and partitioned data model building. To facilitate code clarity and reuse, the tools also let you write code-behind in different languages for a single U-SQL file.
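As an illustration, a U-SQL script with embedded Python might look roughly like the following sketch (the input path, columns, and output are illustrative; the reducer pattern assumes the Python extension’s usqlml_main entry point, which takes and returns a pandas DataFrame):

REFERENCE ASSEMBLY [ExtPython];

DECLARE @pyScript string = @"
def usqlml_main(df):
    # df is a pandas DataFrame holding the rows for one vertex
    df['name_length'] = df['name'].str.len()
    return df
";

@input =
    EXTRACT id int, name string
    FROM "/samples/names.tsv"
    USING Extractors.Tsv();

@result =
    REDUCE @input ON id
    PRODUCE id int, name string, name_length int
    USING new Extension.Python.Reducer(pyScript:@pyScript);

OUTPUT @result
TO "/samples/names_with_length.tsv"
USING Outputters.Tsv();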

Key customer benefits

  • Local editor authoring and execution experience for Python Code-Behind to support distributed analytics.
  • Local editor authoring and execution experience for R Code-Behind to support distributed analytics.
  • Flexible mechanism to allow you to write single or multiple Python, R, and C# Code-Behind as part of a single U-SQL file.
  • Dynamic Code-Behind to embed Python and R script into your U-SQL script.
  • Integration with Azure Data Lake for Python and R with easy U-SQL job submissions.

How to develop U-SQL with Python and R

  • Right-click the U-SQL script file, select ADL: Generate Python Code Behind File, and a xxx.usql.py file is generated in your working folder. Then write your Python code.

Python Command

Python Script

  • Right-click the U-SQL script file, select ADL: Generate R Code Behind File, and a xxx.usql.r file is generated in your working folder. Then write your R code. 

R Command

 R Script

How to install or update

First, install Visual Studio Code and download Mono 4.2.x (for Linux and Mac). Then get the latest Azure Data Lake Tools by going to the VSCode Extension repository or the VSCode Marketplace and searching “Azure Data Lake Tools”.

Extension

Second, please complete the one-time set up to register Python and R extensions assemblies for your ADL account. See instructions at Develop U-SQL with Python, R, and CSharp for Azure Data Lake Analytics in Visual Studio Code.

 

For more information about Azure Data Lake Tool for VSCode, please use the following resources:

 

Learn more about today’s announcements on the Azure blog and the Big Data blog. Discover more on the Azure service updates page.

If you have questions, feedback, comments, or bug reports, please use the comments below or send a note to hdivstool@microsoft.com.

R 3.4.3 released


R 3.4.3 has been released, as announced by the R Core team today. As of this writing, only the source distribution (for those that build R themselves) is available, but binaries for Windows, Mac and Linux should appear on your local CRAN mirror within the next day or so.

This is primarily a bug-fix release. It fixes an issue with incorrect time zones on macOS High Sierra, and some issues with handling Unicode characters. (Incidentally, representing international and special characters is something that R takes great care in handling properly. It's not an easy task: a 2003 essay by Joel Spolsky describes the minefield that is character representation, and not much has changed since then.) You can check out the complete list of changes here. Whatever your platform, R 3.4.3 should be backwards-compatible with other R versions in the R 3.4.x series, and so your scripts and packages should continue to function as they did before.

The codename for this release is "Kite-Eating Tree", and as with all R codenames this is a reference to a classic Peanuts episode. If you're interested in the source of other R release names, Lucy D'Agostino McGowan provides the Peanuts references for R release names back to R 2.14.0.

Kite-eating-tree

r-devel mailing list: R 3.4.3 is released

November extensions round-up: Actionable Insights for Agile teams and a widget for easy access to all teams in your project

The round-up is back for November! I’m sorry for not posting in October. My wife and I welcomed our second child, a girl, into the world. I was at home spending time with them because they grow up so fast 🙂 But now I’m back, and some really exciting extensions went live in the Marketplace... Read More

Announcing public preview of Wiki search

Search wiki pages Over time as teams document more content in wiki pages, finding relevant content becomes increasingly difficult. To maximize collaboration, you need the ability to easily discover content across all your projects. Now you can use wiki search to quickly find relevant wiki pages by title or page content across all projects in your... Read More

Top stories from the VSTS community – 2017.12.01

Here are top stories we found in our streams this week related to DevOps, VSTS, TFS and other interesting topics. TOP STORIES #LearningVSTS – Creating My First Visual Studio Team Services Site and Team Project – Mickey GoussetBack when I first started blogging about and using Visual Studio Team System, my blogging style was to... Read More

Announcing JUnit Support for Visual Studio Code


Today, we’re pleased to release a new extension to our Visual Studio Code Java extension family – Test Runner/Debugger for Java. It’s a lightweight test runner/debugger with the features below, which we hope you will like; a sketch of the kind of test class the runner recognizes follows the list.

  • Recognize JUnit4 tests
  • Run test
  • Debug test
  • View test status and run summary

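For reference, a plain JUnit 4 test class along these lines (a hypothetical example, not taken from the extension's documentation) is what the runner picks up: a public class on the test classpath with @Test-annotated methods.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Hypothetical JUnit 4 test class; each @Test method below should show up
// in the test view and be runnable or debuggable individually.
public class CalculatorTest {

    @Test
    public void addsTwoNumbers() {
        assertEquals(4, 2 + 2);
    }

    @Test
    public void subtractsTwoNumbers() {
        assertEquals(0, 2 - 2);
    }
}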

Like the Debugger for Java, this is also an open-source project. Please check out the GitHub page https://github.com/Microsoft/vscode-java-test/ for more details and to share feedback.

Along with the new extension, we’re also updating our Debugger extension to version 0.4.0. With this release, we’re adding a few useful tools to make the Java debugging experience in VS Code more enjoyable.

Launch in terminal

Since the standard VS Code debug console does not allow input, we now provide an alternative that lets you launch your application in an external or integrated terminal within VS Code, so you can type values when your program reads input. It’s a simple configuration in launch.json.
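As a rough sketch, assuming the console attribute the extension exposes for this purpose (the mainClass value here is a made-up placeholder):

// launch.json sketch: run the program in the integrated terminal so it can read stdin.
{
    "type": "java",
    "name": "Launch MyApp (integrated terminal)",
    "request": "launch",
    "mainClass": "com.example.MyApp",
    "console": "integratedTerminal"
}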


Stop on Entry

With a simple configuration, you can now ask the debugger to stop at the first line of your code when it launches and step through from there, without needing to set a breakpoint beforehand.
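A minimal sketch, assuming the stopOnEntry flag the extension exposes for this (mainClass is again a placeholder):

// launch.json sketch: break on the first line of the main class at launch.
{
    "type": "java",
    "name": "Launch MyApp (stop on entry)",
    "request": "launch",
    "mainClass": "com.example.MyApp",
    "stopOnEntry": true
}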


Other changes

This new release also includes these additional updates:

  1. Multi-root workspace support
  2. Bug fixes.

Details can be found on our extension Marketplace page and in our debugging tutorial. For our next release, we’re working on three other highly requested features: step filtering, expression evaluation, and hot code replacement. Please stay tuned; we will enable those soon! You can find more details in our changelog, and if you would like to share your thoughts with us, just join the Gitter discussion or submit an issue!

Try it out

If you’re looking for a performant editor for your Java project, please give it a try.

Xiaokai He, Program Manager, Java Tools and Services
@XiaokaiHe

Xiaokai is a program manager working on Java tools and services. He’s currently focusing on making Visual Studio Code great for Java developers, as well as supporting Java in various Azure services.

Windows 10 SDK Preview Build 17046 now available


Today, we released a new Windows 10 Preview Build of the SDK to be used in conjunction with Windows 10 Insider Preview (Build 17046 or greater). Preview SDK Build 17046 contains bug fixes and changes to the API surface area that are still under development.

The Preview SDK can be downloaded from the developer section on Windows Insider.

For feedback and updates to the known issues, please see the developer forum. For new developer feature requests, head over to our Windows Platform UserVoice.

Things to note:

  • This build works in conjunction with previously released SDKs and Visual Studio 2017. You can install this SDK and still continue to submit apps that target the Windows 10 Creators Update build or earlier to the Store.
  • The Windows SDK will now formally only be supported by Visual Studio 2017 and later. You can download Visual Studio 2017 here.

Known Issues

  • All tests run with the Windows App Certification Kit will fail. During installation, please uncheck the Windows App Certification Kit.

What’s New:

  • C++/WinRT Now Available:
    The C++/WinRT headers and the cppwinrt compiler (cppwinrt.exe) are now included in the Windows SDK. The compiler comes in handy if you need to consume a third-party WinRT component or if you need to author your own WinRT components with C++/WinRT. The easiest way to get working with it after installing the Windows Insider Preview SDK is to start the Visual Studio Developer Command Prompt and run the compiler in that environment. Authoring support is currently experimental and subject to change. Stay tuned, as we will publish more detailed instructions on how to use the compiler in the coming week. The ModernCPP blog has a deeper dive into the CppWinRT compiler. Please give us feedback by creating an issue at https://github.com/microsoft/cppwinrt. A minimal consumption sketch follows below.
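The following is a minimal consumption sketch, not an official sample: it assumes the C++/WinRT projection headers are on the include path and that you link against windowsapp.lib, as when building from the Developer Command Prompt with the Insider Preview SDK installed.

#include <cstdio>
#include <winrt/Windows.Foundation.h>

using namespace winrt;
using namespace Windows::Foundation;

int main()
{
    init_apartment();                        // initialize the WinRT apartment
    Uri uri(L"https://blogs.windows.com/");  // consume a projected WinRT type
    printf("Domain: %ls\n", uri.Domain().c_str());
}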

Breaking Changes

  • New MIDL key words.

As a part of the “modernizing IDL” effort, several new keywords are added to the midlrt tool. These new keywords will cause build breaks if they are encountered in IDL files.

The new keywords are:

  • event
  • set
  • get
  • partial
  • unsealed
  • overridable
  • protected
  • importwinmd

If any of these keywords is used as an identifier, it will generate a build failure indicating a syntax error.

The error will be similar to:
1 >d:ossrconecorecomcombaseunittestastatestserverstestserver6idlremreleasetest.idl(12) : error MIDL2025 : [msg]syntax error [context]: expecting a declarator or * near “)”

To fix this, add an “@” prefix in front of the offending identifier. That causes MIDL to treat the element as an identifier instead of a keyword.
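As a purely illustrative sketch (a hypothetical declaration, not taken from any SDK IDL file):

// "set" used as a parameter name now collides with a new keyword:
HRESULT UpdateState([in] long set);     // error MIDL2025: syntax error
// Escaping it with "@" keeps it as an identifier and fixes the build:
HRESULT UpdateState([in] long @set);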

API Updates and Additions

When targeting new APIs, consider writing your app to be adaptive so that it runs correctly on the widest range of Windows 10 devices. Please see Dynamically detecting features with API contracts (10 by 10) for more information.
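For example, here is a rough C++/WinRT sketch of such a runtime check, probing for Windows.System.User.GetDefault (one of the additions listed below) before calling it:

#include <winrt/Windows.Foundation.Metadata.h>
#include <winrt/Windows.System.h>

using namespace winrt;
using namespace Windows::Foundation::Metadata;

void UseDefaultUserIfAvailable()
{
    // Probe for the new static method so the same binary still runs on
    // devices running an earlier release that lacks it.
    if (ApiInformation::IsMethodPresent(L"Windows.System.User", L"GetDefault"))
    {
        auto user = Windows::System::User::GetDefault();
        // ... use the new API ...
    }
    else
    {
        // Fall back to behavior supported on earlier releases.
    }
}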

The following APIs have been added to the platform since the release of 16299.


namespace Windows.ApplicationModel {
  public enum StartupTaskState {
    EnabledByPolicy = 4,
  }
}
namespace Windows.ApplicationModel.Background {
  public sealed class MobileBroadbandPcoDataChangeTrigger : IBackgroundTrigger
  public sealed class TetheringEntitlementCheckTrigger : IBackgroundTrigger
}
namespace Windows.ApplicationModel.Calls {
  public enum PhoneCallMedia {
    AudioAndRealTimeText = 2,
  }
  public sealed class VoipCallCoordinator {
    VoipPhoneCall RequestNewAppInitiatedCall(string context, string contactName, string contactNumber, string serviceName, VoipPhoneCallMedia media);
    VoipPhoneCall RequestNewIncomingCall(string context, string contactName, string contactNumber, Uri contactImage, string serviceName, Uri brandingImage, string callDetails, Uri ringtone, VoipPhoneCallMedia media, TimeSpan ringTimeout, string contactRemoteId);
  }
  public sealed class VoipPhoneCall {
    void NotifyCallAccepted(VoipPhoneCallMedia media);
  }
}
namespace Windows.ApplicationModel.Chat {
  public sealed class RcsManagerChangedEventArgs
  public enum RcsManagerChangeType
  public sealed class RcsNotificationManager
}
namespace Windows.ApplicationModel.Store.Preview.InstallControl {
  public sealed class AppInstallManager {
    IAsyncOperation<IVectorView<AppInstallItem>> SearchForAllUpdatesAsync(string correlationVector, string clientId, AppUpdateOptions updateOptions);
    IAsyncOperation<IVectorView<AppInstallItem>> SearchForAllUpdatesForUserAsync(User user, string correlationVector, string clientId, AppUpdateOptions updateOptions);
    IAsyncOperation<AppInstallItem> SearchForUpdatesAsync(string productId, string skuId, string correlationVector, string clientId, AppUpdateOptions updateOptions);
    IAsyncOperation<AppInstallItem> SearchForUpdatesForUserAsync(User user, string productId, string skuId, string correlationVector, string clientId, AppUpdateOptions updateOptions);
    IAsyncOperation<IVectorView<AppInstallItem>> StartProductInstallAsync(string productId, string flightId, string clientId, string correlationVector, AppInstallOptions installOptions);
    IAsyncOperation<IVectorView<AppInstallItem>> StartProductInstallForUserAsync(User user, string productId, string flightId, string clientId, string correlationVector, AppInstallOptions installOptions);
  }
  public sealed class AppInstallOptions
  public sealed class AppInstallStatus {
    bool IsStaged { get; }
  }
  public sealed class AppUpdateOptions
}
namespace Windows.ApplicationModel.UserActivities {
  public sealed class UserActivity {
    public UserActivity(string activityId);
  }
  public sealed class UserActivityChannel {
    public static void DisableAutoSessionCreation();
  }
  public sealed class UserActivityVisualElements {
    string AttributionDisplayText { get; set; }
  }
}
namespace Windows.Devices.PointOfService {
  public sealed class BarcodeScannerReport {
    public BarcodeScannerReport(uint scanDataType, IBuffer scanData, IBuffer scanDataLabel);
  }
  public sealed class ClaimedBarcodeScanner : IClosable {
    bool IsVideoPreviewShownOnEnable { get; set; }
    void HideVideoPreview();
    IAsyncOperation<bool> ShowVideoPreviewAsync();
  }
  public sealed class UnifiedPosErrorData {
    public UnifiedPosErrorData(string message, UnifiedPosErrorSeverity severity, UnifiedPosErrorReason reason, uint extendedReason);
  }
}
namespace Windows.Devices.PointOfService.Provider {
  public sealed class BarcodeScannerDisableScannerRequest
  public sealed class BarcodeScannerDisableScannerRequestEventArgs
  public sealed class BarcodeScannerEnableScannerRequest
  public sealed class BarcodeScannerEnableScannerRequestEventArgs
  public sealed class BarcodeScannerGetSymbologyAttributesRequest
  public sealed class BarcodeScannerGetSymbologyAttributesRequestEventArgs
  public sealed class BarcodeScannerHideVideoPreviewRequest
  public sealed class BarcodeScannerHideVideoPreviewRequestEventArgs
  public sealed class BarcodeScannerProviderConnection : IClosable
  public sealed class BarcodeScannerProviderTriggerDetails
  public sealed class BarcodeScannerSetActiveSymbologiesRequest
  public sealed class BarcodeScannerSetActiveSymbologiesRequestEventArgs
  public sealed class BarcodeScannerSetSymbologyAttributesRequest
  public sealed class BarcodeScannerSetSymbologyAttributesRequestEventArgs
  public sealed class BarcodeScannerStartSoftwareTriggerRequest
  public sealed class BarcodeScannerStartSoftwareTriggerRequestEventArgs
  public sealed class BarcodeScannerStopSoftwareTriggerRequest
  public sealed class BarcodeScannerStopSoftwareTriggerRequestEventArgs
  public enum BarcodeScannerTriggerState
  public sealed class BarcodeSymbologyAttributesBuilder
}
namespace Windows.Globalization {
  public static class ApplicationLanguages {
    public static IVectorView<string> GetLanguagesForUser(User user);
  }
  public sealed class Language {
    LanguageLayoutDirection LayoutDirection { get; }
  }
  public enum LanguageLayoutDirection
}
namespace Windows.Graphics.Imaging {
  public enum BitmapPixelFormat {
    P010 = 104,
  }
}
namespace Windows.Management.Deployment {
  public sealed class PackageManager {
    IAsyncOperationWithProgress<DeploymentResult, DeploymentProgress> RequestAddPackageAsync(Uri packageUri, IIterable<Uri> dependencyPackageUris, DeploymentOptions deploymentOptions, PackageVolume targetVolume, IIterable<string> optionalPackageFamilyNames, IIterable<Uri> relatedPackageUris, IIterable<Uri> packageUrisToInstall);
  }
}
namespace Windows.Media.Audio {
  public sealed class AudioGraph : IClosable {
    IAsyncOperation<CreateMediaSourceAudioInputNodeResult> CreateMediaSourceAudioInputNodeAsync(MediaSource mediaSource);
    IAsyncOperation<CreateMediaSourceAudioInputNodeResult> CreateMediaSourceAudioInputNodeAsync(MediaSource mediaSource, AudioNodeEmitter emitter);
  }
  public sealed class AudioGraphSettings {
    double MaxPlaybackSpeedFactor { get; set; }
  }
  public sealed class AudioStateMonitor
  public sealed class CreateMediaSourceAudioInputNodeResult
  public sealed class MediaSourceAudioInputNode : IAudioInputNode, IAudioInputNode2, IAudioNode, IClosable
  public enum MediaSourceAudioInputNodeCreationStatus
}
namespace Windows.Media.Capture {
  public sealed class CapturedFrame : IClosable, IContentTypeProvider, IInputStream, IOutputStream, IRandomAccessStream, IRandomAccessStreamWithContentType {
    BitmapPropertySet BitmapProperties { get; }
    CapturedFrameControlValues ControlValues { get; }
  }
  public enum KnownVideoProfile {
    HdrWithWcgPhoto = 8,
    HdrWithWcgVideo = 7,
    HighFrameRate = 5,
    VariablePhotoSequence = 6,
    VideoHdr8 = 9,
  }
  public sealed class MediaCaptureSettings {
    IDirect3DDevice Direct3D11Device { get; }
  }
  public sealed class MediaCaptureVideoProfile {
    IVectorView<MediaFrameSourceInfo> FrameSourceInfos { get; }
    IMapView<Guid, object> Properties { get; }
  }
  public sealed class MediaCaptureVideoProfileMediaDescription {
    IMapView<Guid, object> Properties { get; }
    string Subtype { get; }
  }
}
namespace Windows.Media.Capture.Frames {
  public sealed class AudioMediaFrame
  public sealed class MediaFrameFormat {
    AudioEncodingProperties AudioEncodingProperties { get; }
  }
  public sealed class MediaFrameReference : IClosable {
    AudioMediaFrame AudioMediaFrame { get; }
  }
  public sealed class MediaFrameSourceController {
    AudioDeviceController AudioDeviceController { get; }
  }
  public sealed class MediaFrameSourceInfo {
    string ProfileId { get; }
    IVectorView<MediaCaptureVideoProfileMediaDescription> VideoProfileMediaDescription { get; }
  }
  public enum MediaFrameSourceKind {
    Audio = 4,
    Image = 5,
  }
}
namespace Windows.Media.Core {
  public sealed class MediaBindingEventArgs {
    void SetDownloadOperation(DownloadOperation downloadOperation);
  }
  public sealed class MediaSource : IClosable, IMediaPlaybackSource {
    DownloadOperation DownloadOperation { get; }
    public static MediaSource CreateFromDownloadOperation(DownloadOperation downloadOperation);
  }
}
namespace Windows.Media.Devices {
  public sealed class VideoDeviceController : IMediaDeviceController {
    VideoTemporalDenoisingControl VideoTemporalDenoisingControl { get; }
  }
  public sealed class VideoTemporalDenoisingControl
  public enum VideoTemporalDenoisingMode
}
namespace Windows.Media.DialProtocol {
  public sealed class DialReceiverApp {
    IAsyncOperation<string> GetUniqueDeviceNameAsync();
  }
}
namespace Windows.Media.MediaProperties {
  public static class MediaEncodingSubtypes {
    public static string P010 { get; }
  }
  public enum MediaPixelFormat {
    P010 = 2,
  }
}
namespace Windows.Media.Playback {
  public sealed class MediaPlaybackSession {
    MediaRotation PlaybackRotation { get; set; }
    MediaPlaybackSessionOutputDegradationPolicyState GetOutputDegradationPolicyState();
  }
  public sealed class MediaPlaybackSessionOutputDegradationPolicyState
  public enum MediaPlaybackSessionVideoConstrictionReason
}
namespace Windows.Media.Streaming.Adaptive {
  public sealed class AdaptiveMediaSourceDiagnosticAvailableEventArgs {
    string ResourceContentType { get; }
    IReference<TimeSpan> ResourceDuration { get; }
  }
  public sealed class AdaptiveMediaSourceDownloadCompletedEventArgs {
    string ResourceContentType { get; }
    IReference<TimeSpan> ResourceDuration { get; }
  }
  public sealed class AdaptiveMediaSourceDownloadFailedEventArgs {
    string ResourceContentType { get; }
    IReference<TimeSpan> ResourceDuration { get; }
  }
  public sealed class AdaptiveMediaSourceDownloadRequestedEventArgs {
    string ResourceContentType { get; }
    IReference<TimeSpan> ResourceDuration { get; }
  }
}
namespace Windows.Networking.BackgroundTransfer {
  public sealed class DownloadOperation : IBackgroundTransferOperation, IBackgroundTransferOperationPriority {
    void MakeCurrentInTransferGroup();
  }
  public sealed class UploadOperation : IBackgroundTransferOperation, IBackgroundTransferOperationPriority {
    void MakeCurrentInTransferGroup();
  }
}
namespace Windows.Networking.Connectivity {
  public sealed class CellularApnContext {
    string ProfileName { get; set; }
  }
  public sealed class ConnectionProfileFilter {
    IReference<Guid> PurposeGuid { get; set; }
  }
  public sealed class WwanConnectionProfileDetails {
    WwanNetworkIPKind IPKind { get; }
    IVectorView<Guid> PurposeGuids { get; }
  }
  public enum WwanNetworkIPKind
}
namespace Windows.Networking.NetworkOperators {
  public sealed class MobileBroadbandAntennaSar {
    public MobileBroadbandAntennaSar(int antennaIndex, int sarBackoffIndex);
  }
  public sealed class MobileBroadbandModem {
    IAsyncOperation<MobileBroadbandPco> TryGetPcoAsync();
  }
  public sealed class MobileBroadbandModemIsolation
  public sealed class MobileBroadbandPco
  public sealed class MobileBroadbandPcoDataChangeTriggerDetails
  public sealed class TetheringEntitlementCheckTriggerDetails
}
namespace Windows.Networking.Sockets {
  public sealed class ServerMessageWebSocket : IClosable
  public sealed class ServerMessageWebSocketControl
  public sealed class ServerMessageWebSocketInformation
  public sealed class ServerStreamWebSocket : IClosable
  public sealed class ServerStreamWebSocketInformation
}
namespace Windows.Networking.Vpn {
  public sealed class VpnNativeProfile : IVpnProfile {
    string IDi { get; set; }
    VpnPayloadIdType IdiType { get; set; }
    string IDr { get; set; }
    VpnPayloadIdType IdrType { get; set; }
    bool IsImsConfig { get; set; }
    string PCscf { get; }
  }
  public enum VpnPayloadIdType
}
namespace Windows.Security.Authentication.Identity.Provider {
  public enum SecondaryAuthenticationFactorAuthenticationMessage {
    CanceledByUser = 22,
    CenterHand = 23,
    ConnectionRequired = 20,
    DeviceUnavaliable = 28,
    MoveHandCloser = 24,
    MoveHandFarther = 25,
    PlaceHandAbove = 26,
    RecognitionFailed = 27,
    TimeLimitExceeded = 21,
  }
}
namespace Windows.Services.Maps {
  public sealed class MapRouteDrivingOptions {
    IReference<DateTime> DepartureTime { get; set; }
  }
  public sealed class PlaceInfo {
    public static PlaceInfo CreateFromAddress(string displayAddress);
    public static PlaceInfo CreateFromAddress(string displayAddress, string displayName);
  }
}
namespace Windows.Services.Store {
  public sealed class StoreCanAcquireLicenseResult
  public enum StoreCanLicenseStatus
  public sealed class StoreContext {
    bool CanSilentlyDownloadStorePackageUpdates { get; }
    IAsyncOperation<StoreCanAcquireLicenseResult> CanAcquireStoreLicenseForOptionalPackageAsync(Package optionalPackage);
    IAsyncOperationWithProgress<StorePackageUpdateResult, StorePackageUpdateStatus> TrySilentDownloadAndInstallStorePackageUpdatesAsync(IIterable<StorePackageUpdate> storePackageUpdates);
    IAsyncOperationWithProgress<StorePackageUpdateResult, StorePackageUpdateStatus> TrySilentDownloadStorePackageUpdatesAsync(IIterable<StorePackageUpdate> storePackageUpdates);
  }
}
namespace Windows.Storage.Provider {
  public interface IStorageProviderUriSource
  public enum StorageProviderResolveContentUriResult
}
namespace Windows.System {
  public sealed class AppActivationResult
  public sealed class AppDiagnosticInfo {
    IAsyncOperation<AppActivationResult> ActivateAsync();
  }
  public sealed class AppResourceGroupInfo {
    IAsyncOperation<bool> TryResumeAsync();
    IAsyncOperation<bool> TrySuspendAsync();
    IAsyncOperation<bool> TryTerminateAsync();
  }
  public sealed class User {
    public static User GetDefault();
  }
  public enum UserType {
    SystemManaged = 4,
  }
}
namespace Windows.System.Diagnostics {
  public sealed class DiagnosticInvoker {
    IAsyncOperationWithProgress<DiagnosticActionResult, DiagnosticActionState> RunDiagnosticActionFromStringAsync(string context);
  }
}
namespace Windows.System.Diagnostics.DevicePortal {
  public sealed class DevicePortalConnection {
    ServerMessageWebSocket GetServerMessageWebSocketForRequest(HttpRequestMessage request);
    ServerMessageWebSocket GetServerMessageWebSocketForRequest(HttpRequestMessage request, SocketMessageType messageType, string protocol);
    ServerMessageWebSocket GetServerMessageWebSocketForRequest(HttpRequestMessage request, SocketMessageType messageType, string protocol, uint outboundBufferSizeInBytes, uint maxMessageSize, MessageWebSocketReceiveMode receiveMode);
    ServerStreamWebSocket GetServerStreamWebSocketForRequest(HttpRequestMessage request);
    ServerStreamWebSocket GetServerStreamWebSocketForRequest(HttpRequestMessage request, string protocol, uint outboundBufferSizeInBytes, bool noDelay);
  }
  public sealed class DevicePortalConnectionRequestReceivedEventArgs {
    bool IsWebSocketUpgradeRequest { get; }
    IVectorView<string> WebSocketProtocolsRequested { get; }
    Deferral GetDeferral();
  }
}
namespace Windows.System.RemoteSystems {
  public static class KnownRemoteSystemCapabilities {
    public static string NearShare { get; }
  }
}
namespace Windows.System.UserProfile {
  public static class GlobalizationPreferences {
    public static GlobalizationPreferencesForUser GetForUser(User user);
  }
  public sealed class GlobalizationPreferencesForUser
}
namespace Windows.UI.ApplicationSettings {
  public sealed class AccountsSettingsPane {
    public static IAsyncAction ShowAddAccountForUserAsync(User user);
    public static IAsyncAction ShowManageAccountsForUserAsync(User user);
  }
  public sealed class AccountsSettingsPaneCommandsRequestedEventArgs {
    User User { get; }
  }
}
namespace Windows.UI.Composition {
  public sealed class BounceScalarNaturalMotionAnimation : ScalarNaturalMotionAnimation
  public sealed class BounceVector2NaturalMotionAnimation : Vector2NaturalMotionAnimation
  public sealed class BounceVector3NaturalMotionAnimation : Vector3NaturalMotionAnimation
  public class CompositionLight : CompositionObject {
    bool IsEnabled { get; set; }
  }
  public sealed class Compositor : IClosable {
    string Comment { get; set; }
    BounceScalarNaturalMotionAnimation CreateBounceScalarAnimation();
    BounceVector2NaturalMotionAnimation CreateBounceVector2Animation();
    BounceVector3NaturalMotionAnimation CreateBounceVector3Animation();
  }
  public sealed class PointLight : CompositionLight {
    Vector2 AttenuationCutoff { get; set; }
  }
  public sealed class SpotLight : CompositionLight {
    Vector2 AttenuationCutoff { get; set; }
  }
}
namespace Windows.UI.Composition.Core {
  public sealed class CompositorController : IClosable
}
namespace Windows.UI.Composition.Desktop {
  public sealed class HwndTarget : CompositionTarget
}
namespace Windows.UI.Input.Spatial {
  public sealed class SpatialInteractionController {
    BatteryReport TryGetBatteryReport();
  }
}
namespace Windows.UI.Xaml {
  public sealed class BringIntoViewOptions {
    double HorizontalAlignmentRatio { get; set; }
    double HorizontalOffset { get; set; }
    double VerticalAlignmentRatio { get; set; }
    double VerticalOffset { get; set; }
  }
  public sealed class BringIntoViewRequestedEventArgs : RoutedEventArgs
  public sealed class EffectiveViewportChangedEventArgs
  public enum FocusVisualKind {
    Reveal = 2,
  }
  public class FrameworkElement : UIElement {
    event TypedEventHandler<FrameworkElement, EffectiveViewportChangedEventArgs> EffectiveViewportChanged;
    void InvalidateViewport();
    virtual bool IsViewport();
  }
  public class UIElement : DependencyObject {
    public static RoutedEvent BringIntoViewRequestedEvent { get; }
    public static RoutedEvent ContextRequestedEvent { get; }
    KeyboardAcceleratorPlacementMode KeyboardAcceleratorPlacementMode { get; set; }
    public static DependencyProperty KeyboardAcceleratorPlacementModeProperty { get; }
    DependencyObject KeyboardAcceleratorToolTipTarget { get; set; }
    public static DependencyProperty KeyboardAcceleratorToolTipTargetProperty { get; }
    DependencyObject KeyTipTarget { get; set; }
    public static DependencyProperty KeyTipTargetProperty { get; }
    event TypedEventHandler<UIElement, BringIntoViewRequestedEventArgs> BringIntoViewRequested;
    virtual void OnBringIntoViewRequested(BringIntoViewRequestedEventArgs e);
    virtual void OnKeyboardAcceleratorInvoked(KeyboardAcceleratorInvokedEventArgs args);
  }
}
namespace Windows.UI.Xaml.Automation.Peers {
  public sealed class AutoSuggestBoxAutomationPeer : FrameworkElementAutomationPeer, IInvokeProvider {
    void Invoke();
  }
  public class CalendarDatePickerAutomationPeer : FrameworkElementAutomationPeer, IInvokeProvider, IValueProvider
}
namespace Windows.UI.Xaml.Controls {
  public class AppBarButton : Button, ICommandBarElement, ICommandBarElement2 {
    string KeyboardAcceleratorText { get; set; }
    public static DependencyProperty KeyboardAcceleratorTextProperty { get; }
    AppBarButtonTemplateSettings TemplateSettings { get; }
  }
  public class AppBarToggleButton : ToggleButton, ICommandBarElement, ICommandBarElement2 {
    string KeyboardAcceleratorText { get; set; }
    public static DependencyProperty KeyboardAcceleratorTextProperty { get; }
    AppBarToggleButtonTemplateSettings TemplateSettings { get; }
  }
  public class MenuFlyoutItem : MenuFlyoutItemBase {
    string KeyboardAcceleratorText { get; set; }
    public static DependencyProperty KeyboardAcceleratorTextProperty { get; }
    MenuFlyoutItemTemplateSettings TemplateSettings { get; }
  }
  public class NavigationView : ContentControl {
    string PaneTitle { get; set; }
    public static DependencyProperty PaneTitleProperty { get; }
    event TypedEventHandler<NavigationView, object> PaneClosed;
    event TypedEventHandler<NavigationView, NavigationViewPaneClosingEventArgs> PaneClosing;
    event TypedEventHandler<NavigationView, object> PaneOpened;
    event TypedEventHandler<NavigationView, object> PaneOpening;
  }
  public sealed class NavigationViewPaneClosingEventArgs
  public sealed class ScrollViewer : ContentControl {
    bool IsResponsiveToOcclusions { get; set; }
    public static DependencyProperty IsResponsiveToOcclusionsProperty { get; }
  }
  public enum WebViewPermissionType {
    Screen = 5,
  }
}
namespace Windows.UI.Xaml.Controls.Maps {
  public sealed class MapControl : Control {
    string Region { get; set; }
    public static DependencyProperty RegionProperty { get; }
  }
  public class MapElement : DependencyObject {
    bool IsEnabled { get; set; }
    public static DependencyProperty IsEnabledProperty { get; }
  }
}
namespace Windows.UI.Xaml.Controls.Primitives {
  public sealed class AppBarButtonTemplateSettings : DependencyObject
  public sealed class AppBarToggleButtonTemplateSettings : DependencyObject
  public class ListViewItemPresenter : ContentPresenter {
    bool DisableTilt { get; set; }
    public static DependencyProperty DisableTiltProperty { get; }
  }
  public sealed class MenuFlyoutItemTemplateSettings : DependencyObject
}
namespace Windows.UI.Xaml.Input {
  public sealed class GettingFocusEventArgs : RoutedEventArgs {
    bool TryCancel();
    bool TrySetNewFocusedElement(DependencyObject element);
  }
  public sealed class KeyboardAcceleratorInvokedEventArgs {
    KeyboardAccelerator KeyboardAccelerator { get; }
  }
  public enum KeyboardAcceleratorPlacementMode
  public sealed class LosingFocusEventArgs : RoutedEventArgs {
    bool TryCancel();
    bool TrySetNewFocusedElement(DependencyObject element);
  }
}


A case study in messy data analysis: the Australian same-sex marriage survey


Last month the Australian people signaled their approval of legalizing same-sex marriage by a 62%:38% margin in a national survey. (On a personal note, I was elated and relieved by the result: my husband and I have discussed eventually retiring to Australia, and with this decision our marriage would be recognized there.) While fears of a surprise Brexit-like electoral backlash proved unfounded, researchers including R user Miles McBain explored the results for correlations to demographic variables. This process wasn't as simple as it might have been though: the Australian Bureau of Statistics released the results as a pair of Excel files that violate just about every good practice for sharing data in spreadsheets:

[Image: Survey results]

Miles shares the R code he used to extract useful data from this spreadsheet in a blog post that makes a great case study in dealing with messy data using R. The post demonstrates how he used the read_excel function (from the readxl package) to extract specific sub-tables from the spreadsheet by specifying row and column ranges, and then used the dplyr package to clean up and merge the data. If you want to explore the data yourself, you can find the R code and the source data in this GitHub repository.
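As a rough sketch of the approach (this is not Miles's actual code; the file name, sheet, cell range, and column names below are made up for illustration):

library(readxl)
library(dplyr)

# Pull one sub-table out of the workbook by giving an explicit sheet and
# cell range, supplying clean column names as we go.
responses <- read_excel(
  "australian_marriage_law_postal_survey.xlsx",
  sheet = 2,
  range = "A8:E158",
  col_names = c("district", "yes", "yes_pct", "no", "no_pct")
)

# Tidy up: drop padding rows and compute a few derived columns.
clean <- responses %>%
  filter(!is.na(district)) %>%
  mutate(total = yes + no,
         yes_share = yes / total)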

In a follow-up post, Miles combines the same-sex marriage survey data with Australian Census data to explore various demographic relationships. Unlike the US Census data (which is easily accessible in R thanks to the tidycensus package), there's no interface package for Australian Census data. (Selected tables are available in the Census2016 package, however.) Instead, Miles demonstrates how to use R to download and extract data from the "Census DataPacks" (CSV data files and Excel data dictionaries) provided by the Australian Bureau of Statistics. Yet more data wrangling allows Miles to create summary charts of the responses, such as this chart of the proportion voting No against the percentage of the district population declaring a religious affiliation, broken down by state. As you might expect, districts with more religious populations voted No at greater rates.

[Image: Religious affiliation]

Both of these posts provide great examples of working with government data, which is often provided in inconvenient formats with messy structures. Follow the links below for step-by-step guides, including the R code used to extract the data, structure it for analysis, and create useful charts.

Medium (Miles McBain): Tidying the Australian Same Sex Marriage Postal Survey Data with R
Medium (Miles McBain): Combining Australian Census data with the Same Sex Marriage Postal Survey in R

Because it’s Friday: The Whole of the Moon


As we've noted before, the Solar System is a big place. You can watch a voyage from the Sun to Jupiter, and it takes 45 minutes at the speed of light. A scale model of the Solar System, with the Sun the size of a weather balloon, is 3.5 miles across ... and that's not even including Pluto. And this virtual scale model, a browser-based rendition by Josh Worth with the moon just one pixel in size, is no less impressive. I can't do it justice here — the site surely holds the record for the widest horizontal scrollbar on any page on the Web — so here's a little snippet of the Earth-Moon system:

[Image: One pixel moon]

While you can use the astrological symbols at the top to jump to the planets, try manually scrolling to get the full effect of the vast spaces between them. There are also some useful tidbits to discover along the way...

That's all from us here at the blog for this week. Have a great weekend, and we'll see you back here on Monday. Enjoy!
