
Introducing WinAppDriver UI Recorder


A new open-sourced tool is now available for the Windows Application Driver (WinAppDriver) community: the WinAppDriver UI Recorder tool. This tool will enable users to easily create automated UI tests.

For those of you not familiar with WinAppDriver, it is a UI automation service for Windows 10 that users can use to test their applications. We recently released our v1.1 preview that you can read more about here.

What is UI Recorder

For many in the WinAppDriver community, Inspect has been the most common tool for users to select UI elements and view their attribute data. Though Inspect serves its intended purpose of viewing accessibility data, it falls behind when it comes to supporting scenarios specifically for UI automation, such as being able to generate XPath queries.

For situations such as these, the WinAppDriver UI Recorder tool hopes to fill in the gaps from Inspect and serve as its alternative.

For its initial release, the UI Recorder tool enables the following two key scenarios:

1) Inspecting UI elements and retrieving their XPath expressions

2) Generating C# code for certain actions (mouse click) when “Record” is active

  • Generated Code can be pasted into the UI Recorder Template folder for WinAppDriver playback

We’re hoping that with this tool, users will have a simpler and more intuitive approach in writing automation scripts for WinAppDriver.

Getting Started

The code for the UI Recorder is open-sourced and available on WinAppDriver’s GitHub repo here. It’s recommended to use Visual Studio 2017 to start building and compiling it. Once compiled, you can immediately start using UI Recorder.

In addition to access to the source, a zipped executable can be found here on our GitHub Releases section.

Using UI Recorder

The UI Recorder tool aims to provide an intuitive, simple user interface that is divided into two panels.

UI Recorder tracks both keyboard and mouse interactions against an application interface, each interaction representing a UI action. When recording is active, both the top and bottom panels are dynamically updated with UI element information every time a new UI action takes place. The Top Panel shows the generated XPath query of the currently selected UI element, and the Bottom Panel shows the raw XML information for the same element. You can navigate to the C# Code tab on the bottom panel to view the generated C# code for the recorded action, which you can use in a WinAppDriver test.

The following animation provides an example of the recording process:


The recorded code can be copied to the clipboard and pasted into the WinAppDriver UI Recorder template project to be replayed.
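
To illustrate playback, here is a minimal C# sketch of a WinAppDriver test that drives a recorded XPath action. This is not output generated by the UI Recorder itself: it assumes the Appium.WebDriver NuGet package and a WinAppDriver instance listening on the default endpoint, and the application ID and XPath below are placeholders to replace with your own values.

using System;
using OpenQA.Selenium.Appium;
using OpenQA.Selenium.Appium.Windows;

class RecordedPlayback
{
    static void Main()
    {
        var options = new AppiumOptions();
        // Placeholder application id; point this at your own app under test.
        options.AddAdditionalCapability("app", "Microsoft.WindowsCalculator_8wekyb3d8bbwe!App");

        // WinAppDriver listens on http://127.0.0.1:4723 by default.
        using (var session = new WindowsDriver<WindowsElement>(new Uri("http://127.0.0.1:4723"), options))
        {
            // Paste the XPath expression produced by the UI Recorder here (placeholder below).
            var element = session.FindElementByXPath("//Button[@Name=\"Five\"]");
            element.Click();
        }
    }
}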

Providing Feedback

With the UI Recorder tool being open-source, we highly encourage the community to submit any PRs with changes or enhancements, and to post suggestions on how they would like to see the UI Recorder grow.

Please use the GitHub Issues Board to provide any feedback on the UI Recorder tool – we look forward to hearing about any suggestions, feature requests, or bug reports!

Staying Informed

To stay up to date with WinAppDriver news follow @mrhassanuz.

Summary

A new tool for WinAppDriver is available now: UI Recorder. Users now have an intuitive way to automate UI with WinAppDriver, generating not only XPath expressions on the fly, but also C# code, by recording UI action events such as mouse clicks.



Event trigger based data integration with Azure Data Factory


Event driven architecture (EDA) is a common data integration pattern that involves production, detection, consumption, and reaction to events. Today, we are announcing support for event based triggers in your Azure Data Factory (ADF) pipelines. Many data integration scenarios require Data Factory customers to trigger pipelines based on events. A typical event could be a file landing in, or getting deleted from, your Azure storage account. Now you can simply create an event based trigger in your data factory pipeline.


As soon as the file arrives in your storage location and the corresponding blob is created, the trigger fires and runs your data factory pipeline. You can create an event based trigger on blob creation, blob deletion, or both in your data factory pipelines.


With the “Blob path begins with” and “Blob path ends with” properties, you can tell us for which containers, folders, and blob names you wish to receive events. You can also use a wide variety of patterns for both “Blob path begins with” and “Blob path ends with” properties. At least one of these properties is required. The short sketch after the examples below illustrates the matching semantics.


Examples:

  • Blob path begins with (/containername/) – Will receive events for any blob in the container.
  • Blob path begins with (/containername/foldername) – Will receive events for any blobs in the containername container and foldername folder.
  • Blob path begins with (/containername/foldername/file.txt) – Will receive events for a blob named file.txt in the foldername folder under the containername container.
  • Blob path ends with (file.txt) – Will receive events for a blob named file.txt at any path.
  • Blob path ends with (/containername/file.txt) – Will receive events for a blob named file.txt under container containername.
  • Blob path ends with (foldername/file.txt) – Will receive events for a blob named file.txt in foldername folder under any container.
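
To make the matching rules above concrete, here is a small, self-contained C# sketch. It is only an illustration of the “begins with” / “ends with” semantics against a blob path; it is not how Data Factory evaluates the filter internally, and it does not model the wildcard patterns mentioned above.

using System;

class BlobPathFilter
{
    // Illustrative only: mimics the "Blob path begins with" / "Blob path ends with"
    // semantics described above against a full blob path.
    static bool Matches(string blobPath, string beginsWith = null, string endsWith = null)
    {
        if (string.IsNullOrEmpty(beginsWith) && string.IsNullOrEmpty(endsWith))
            return false; // at least one of the two properties is required

        bool beginsOk = string.IsNullOrEmpty(beginsWith) ||
                        blobPath.StartsWith(beginsWith, StringComparison.OrdinalIgnoreCase);
        bool endsOk = string.IsNullOrEmpty(endsWith) ||
                      blobPath.EndsWith(endsWith, StringComparison.OrdinalIgnoreCase);
        return beginsOk && endsOk;
    }

    static void Main()
    {
        var path = "/containername/foldername/file.txt";

        Console.WriteLine(Matches(path, beginsWith: "/containername/"));           // True
        Console.WriteLine(Matches(path, beginsWith: "/containername/foldername")); // True
        Console.WriteLine(Matches(path, endsWith: "file.txt"));                    // True
        Console.WriteLine(Matches(path, endsWith: "/othercontainer/file.txt"));    // False
    }
}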

Our goal is to continue adding features and improve the usability of Data Factory tools. Get more information and detailed steps on event based triggers in data factory.

Get started building pipelines easily and quickly using Azure Data Factory. If you have any feature requests or want to provide feedback, please visit the Azure Data Factory forum.

Cost Reporting ARM APIs across subscriptions for EA customers


Azure enterprise customers today manage their subscriptions on the EA portal and use the EA hierarchy to group and report on usage and costs by subscription. Until today, the only APIs available for the enterprise hierarchy were the key based APIs; this month we are releasing ARM supported APIs for the enrollment hierarchy. This will enable users with the required privileges to make API calls to the individual nodes in the management hierarchy and get the most current cost and usage information.

The benefits of these APIs are an improved security posture, seamless onboarding to the cost APIs, and the continued investment in planned work on the ARM APIs, like budgets. Departments today support rudimentary spending limits, but in the coming weeks we will be supporting budgets, which were recently announced for subscriptions and resource groups, on EA hierarchy nodes as well. The ARM APIs also standardize the pattern and enable AD based authentication.

Hierarchy Updates

As part of this release the ARM API introduces a few new terms:

  • Enrollments in the ARM APIs are Billing Accounts
  • Departments continue on as Departments
  • Accounts in the ARM APIs are referred to as Enrollment Accounts

This release of ARM APIs above the subscription scope supports all currently supported functions: usage details, monetary balances, marketplace charges, and price sheet. The price sheet API also supports calls at a subscription grain to get the specific prices for that subscription based on the offer type. Each of these calls by default applies to the current (open) billing period, with the option to call the API for a specific billing period in the past. Here’s the detailed list of operations and scopes with links to the documentation, followed by a hedged sketch of calling one of these APIs:

 

                      Billing Account (Enrollment)    Department    Enrollment Account (Account)
Usage Details         Supported                       Supported     Supported
Monetary Balance      Supported                       N/A           N/A
Marketplace Charges   Supported                       Supported     Supported
Price Sheets          Supported
Budgets               Planned                         Planned       Planned
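
As a hedged illustration of calling one of these APIs, the sketch below issues a plain HTTP GET for usage details at the billing account (enrollment) scope. The URL pattern and api-version are assumptions to verify against the documentation linked below, and the enrollment number and Azure AD token are placeholders.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class UsageDetailsSample
{
    static async Task Main()
    {
        var billingAccountId = "<enrollment-number>";   // placeholder
        var accessToken = "<azure-ad-bearer-token>";    // placeholder, acquired via Azure AD

        // Assumed URL pattern and api-version; check the documentation linked below.
        var url = "https://management.azure.com/providers/Microsoft.Billing/" +
                  $"billingAccounts/{billingAccountId}/providers/" +
                  "Microsoft.Consumption/usageDetails?api-version=2018-06-30";

        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", accessToken);

            var response = await client.GetAsync(url);
            response.EnsureSuccessStatusCode();
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}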

Links

Documentation Page

Resumable Online Index Create is in public preview for Azure SQL DB


We are delighted to announce that Resumable Online Index Create (ROIC) is now available for public preview in Azure SQL DB. The feature allows you to pause an index create operation and resume it later from where it was paused or failed, rather than having to restart the operation from the beginning. Additionally, this feature creates indexes using only a small amount of log space. You can use the new feature in the following scenarios:

  • Resume an index create operation after an index create failure, such as after a database failover or after running out of disk space. There is no need to restart the operation from the beginning. This can save a significant amount of time when creating indexes for large tables.
  • Pause an ongoing index create operation and resume it later. For example, you may need to temporarily free up system resources in order to execute a high priority task or you may have a single maintenance window that is too short to complete the operation for a large index. Instead of aborting the index create process, you can pause the index create operation and resume it later without losing prior progress.
  • Create large indexes without using a lot of log space and a long-running transaction that blocks other maintenance activities. This helps log truncation and avoids out of log errors that are possible for long running index create operations.

With this release, we extend the resumable functionality, adding index create to the already available resumable online index rebuild.

Examples (T-SQL commands):

Create resumable index with MAXDOP=2

CREATE  INDEX test_idx on test_table (col1) WITH (ONLINE=ON, MAXDOP=2, RESUMABLE=ON ) 

Pause a running resumable online index creation

ALTER INDEX test_idx on test_table PAUSE  

Resume a paused online index creation

ALTER INDEX test_idx on test_table RESUME  

Abort a running or paused resumable online index creation

ALTER INDEX test_idx on test_table ABORT

For more information about ROIC please review the following documents:

For further communication on this topic please contact the ResumableIDXPreview@microsoft.com alias.

Join the Bing Maps team at Microsoft Inspire 2018 in Las Vegas


The Bing Maps team will be at Microsoft Inspire 2018 in Las Vegas, Nevada, July 15 through 18. If you are registered for the event, stop by the Bing Maps booth and attend our sessions.

Bing Maps sessions details:

How Partners are Building Intelligent Experiences with Search and Geospatial

Breakout session: July 18, 2018 at 2:30 pm

The Bing APIs offer a wide range of solutions for customers to enable customized apps and services. From Cognitive Services search engine services (web, image, visual, news, video, entity, and more) to geospatial mapping and fleet management solutions, these enterprise grade APIs allow customers to create customized, intelligent, and immersive experiences harnessing the power of Bing. Join this session and learn how partners are using Bing APIs.

Open Source Tools and Developing with Bing Maps and Search APIs

Theater session: July 16, 2018 at 3:30 pm

Open source tools for developers are changing the way companies can integrate the latest location technologies from Bing Maps or intelligent search from using Bing APIs. In this session we'll review several of these new tools and provide insight on how you can accelerate your geospatial application development.

Mobile Workforce Location Tracking with Bing Maps

Theater session: July 16, 2018 at 4:00 pm

If your business relies on coordinating your mobile workforce, this theater session is essential viewing. We'll look at Bing Maps Fleet Tracker, which leverages the mobile phones your staff is already carrying to provide a real-time dashboard of where your assets are located. You'll learn how to do a one-click deployment to have your tracking solution, including location based alerts, live on Azure in just minutes.

If you are not able to attend Microsoft Inspire 2018, we will share news and updates on the blog after the conference and post recordings of the Bing Maps sessions on https://www.microsoft.com/maps.

For more information about the Bing Maps Platform, go to https://www.microsoft.com/maps/choose-your-bing-maps-API.aspx.

- Bing Maps Team

AI, Machine Learning and Data Science Roundup: June 2018


A monthly roundup of news about Artificial Intelligence, Machine Learning and Data Science. This is an eclectic collection of interesting blog posts, software announcements and data applications I've noted over the past month or so.

Open Source AI, ML & Data Science News

Intel open-sources NLP Architect, a Python library for deep learning with natural language.

Gym Retro, an open source platform for reinforcement learning research on video games.

Facebook open-sources DensePose, a toolkit to transform 2D images of people into a 3-D surface map of the human body.

MLflow, an open source machine learning platform from Databricks, has been released.

Industry News

In a 12-minute documentary video and accompanying Wired article, Facebook describes how it uses Machine Learning to improve quality of the News Feed.

In the PYPL language rankings, Python ranks #1 in popularity and R is #7; both are rising.

Google announces its ethical principles for AI applications, and AI applications it will not pursue.

Wolfram Research launches the Wolfram Neural Network Repository, with implementations of around 70 neural net models.

Google Cloud introduces preemptible pricing for GPUs, with discounts compared to GPUs attached to non-preemptible VMs.

Conversational AIs conducting full-duplex conversations: Microsoft XiaoIce and Google Duplex.

Microsoft News

Microsoft's head of AI research, Harry Shum, on "raising" ethical AI.

Microsoft has acquired Semantic Machines, a startup focused on conversational AI.

Microsoft is developing a bias-detection tool, with the goal of reducing discrimination in applied AI.

Microsoft's Bot Builder SDKv4 further simplifies the process of developing conversational bots.

Cognitive Services Labs, which offers previews of emerging Microsoft Cognitive Services technologies, adds labs for anomaly detection, ink analysis and more.

ML.NET 0.2, a cross-platform open source machine learning framework for .NET developers, has been released.

Microsoft R Open 3.5.0 has been released.

Azure Databricks now provides a machine-learning runtime and GPU support.

Learning resources

A tutorial on visualizing machine learning models with LIME, a package for R and Python.

A visual introduction to Machine Learning, Part II: Model Tuning and the Bias-Variance Tradeoff, with an in-depth and graphically elegant look at decision trees.

Materials for five new AI-oriented courses have been published to the LearnAI Materials site.

A Developer's Guide to Building AI Applications: a free e-book from O'Reilly and Microsoft.

Microsoft Professional Program for Artificial Intelligence, a free, self-paced on-line certification in AI skills.

The Azure AI Lab provides complete worked applications for generative image synthesis, cognitive search, automated drone flight, artistic style transfer and machine reading comprehension.

CVAE-GAN: a new generative algorithm for synthesizing novel realistic images, like faces.

Land cover mapping with aerial images, using deep learning and FPGAs.

Improving medical imaging diagnostics, using the new Azure Machine Learning Python package for Computer Vision.

Find previous editions of the monthly AI roundup here

Visual Search from Bing now lets you search what you see

Today we're launching new intelligent Visual Search capabilities that build upon the visual technology already in Bing so you can search the web using your camera. Now you can search, shop, and learn more about your world through the photos you take.

These new Visual Search capabilities are available today in the US on the Bing app for iOS and Android, and for Microsoft Launcher (Android only). They’ll also begin rolling out today for Microsoft Edge for Android, and will be coming soon to Microsoft Edge for iOS and Bing.com. Just click the camera button to get started:


                         
For example, imagine you see a landmark or flower and want to learn more. Simply take a photo using one of the apps, or upload a picture from your camera roll. Bing will identify the object in question and give you more information by providing additional links to explore.


                        
You can even shop from your photos for fashion and home furnishings. Let’s say you see a friend's jacket you like, but don't know its brand or where to purchase. Upload a pic into the app's search box and Bing will return visually-similar jackets, prices, and details for where to purchase.



We’ll be working hard over the coming months to add more capabilities to Visual Search, so your input on these features is greatly appreciated, as always. We hope you’re as excited by Visual Search as we are!

- The Bing Team

Visual Studio Code C/C++ extension June 2018 Update



Today we’re very happy to announce the availability of the June 2018 update to the C/C++ extension for Visual Studio Code! In this update, we are continuing our efforts to make IntelliSense configuration easier by auto-detecting compile_commands.json files for IntelliSense, significantly improving recursive search performance, making browse.path optional, and adding “compilerPath” support for MSVC.

IntelliSense auto-detects compile_commands.json

In this update, compile_commands.json files in the workspace will be detected to auto-configure IntelliSense, eliminating the need to manually specify includes and defines.

The compile_commands.json file is a compilation database that consists of an array of “command objects”, where each command object specifies one way a translation unit is compiled in the project. Its format is specified in the Clang documentation, and it can be generated by many build systems, such as CMake. The C/C++ extension added support for it in the October 2017 update, but it was an optional setting that required manually setting the path to the file. This latest update adds auto-detection to make use of such files. If multiple files are found, you will be presented with a dropdown to choose the appropriate one. The following screenshot shows an example of the message suggesting the use of a compile_commands.json file to auto-configure IntelliSense and the dropdown with two choices.

Once selected, IntelliSense will be fully powered by using the information in that file with no further configuration required. If you need to change the path, you can find the “compileCommands” setting in the c_cpp_properties.json file (access via Command Palette -> C/Cpp: Edit Configurations…).

Performance improvement for IntelliSense path recursive search

Support for recursive search of “includePath” was added in the May 2018 update. This latest update significantly improves the search performance on large folders. By intelligently cutting down the number of paths that need to be processed by the IntelliSense engine, we were able to make search faster – in particular, subsequent folder opening is orders of magnitude faster. Due to these improvements, recursive search is now the default behavior for newly-opened folders. To opt out, simply remove “**” from each path in the c_cpp_properties.json file.

Browse.path is now optional

For the longest time, we had two path settings: “includePath” for the IntelliSense engine, and “browse.path” for the Tag Parser. The IntelliSense engines document explains the difference between the two. While it could be useful at times to have different sets of paths, in many cases they end up being duplicated. In this update, we’re making “browse.path” optional, which means it won’t be populated in newly-created c_cpp_properties.json files. The “includePath” values will be used for the Tag Parser in addition to the IntelliSense engine, if “browse.path” is not present.

compilerPath setting added support for MSVC (Microsoft Visual C++ Compiler)

“compilerPath” is a setting introduced previously that allows users to specify a compiler from which IntelliSense can retrieve system includes and defines. This update added support for MSVC – this means the “compilerPath” setting in the default Windows configuration will use the latest MSVC installed on the machine for includes  and its value can be changed to a different version if needed.

Tell us what you think

Download the C/C++ extension for Visual Studio Code, try it out and let us know what you think. File issues and suggestions on GitHub. If you haven’t already provided us feedback, please take this quick survey to help shape this extension for your needs.


New Work Hubs

In our recent post “New navigation for Visual Studio Team Services” we shared an early look at our plans for the upcoming year. For the Work hubs in VSTS, we’re investing in ways that address usability issues many of you have shared with us. In this post, we’ll describe a few of the first steps... Read More

A guide to working with character data in R


R is primarily a language for working with numbers, but we often need to work with text as well. Whether it's formatting text for reports, or analyzing natural language data, R provides a number of facilities for working with character data. Handling Strings with R, a free (CC-BY-NC-SA) e-book by UC Berkeley's Gaston Sanchez, provides an overview of the ways you can manipulate characters and strings with R.

There are many useful sections in the book, but a few selections include:

Note that the book does not cover analysis of natural language data, for which you might want to check out the CRAN Task View on Natural Language Processing or the book Text Mining with R: A Tidy Approach. It's also sadly silent on the topic of character encoding in R, a topic that often causes problems when dealing with text data, especially from international sources. Nonetheless, the book is a really useful overview of working with text in R, and has been updated extensively since it was last published in 2014. You can read Handling Strings with R at the link below.

Gaston Sanchez: Handling Strings with R

.NET Core 2.1 June Update


We released .NET Core 2.1.1. This update includes .NET Core SDK 2.1.301, ASP.NET Core 2.1.1 and .NET Core 2.1.1.

See .NET Core 2.1.1 release notes for complete details on the release.

Quality Updates

CLI

  • [4050c6374] The “pack” command under ‘buildCrossTargeting’ for ‘Microsoft.DotNet.MSBuildSdkResolver’ now throws a “NU5104” warning/error because the SDK stage0 was changed to “2.1.300” [change was intended].
  • [ea539c7f6] Add retry when Directory.Move (#9313)

CoreCLR

  • [13ea3c2c8e] Fix alternate stack for Alpine docker on SELinux (#17936) (#17975)
  • [88db627a97] Update g_highest_address and g_lowest_address in StompWriteBarrier(WriteBarrierOp::StompResize) on ARM (#18107)
  • [0ea5fc4456] Use sysconf(_SC_NPROCESSORS_CONF) instead of sysconf(_SC_NPROCESSORS_ONLN) in PAL and GC on ARM and ARM64

CoreFX

  • [3700c5b793] Update to a xUnit Performance Api that has a bigger Etw buffer size. … (#30328)
  • [6b38470265] Use _SC_NPROCESSORS_CONF instead of _SC_NPROCESSORS_ONLN in Unix_ProcessorCountTest on ARM/ARM64 (#30132)
  • [fe653a068c] check SwitchingProtocol before ContentLength (#29948) (#29993)
  • [f11f3e1fcf] Fix handling of cursor position when other ESC sequences already in stdin (#29897) (#29923)
  • [77a4a19622] [release/2.1] Port nano test fixes (#29995)
  • [7ce9270ac7] Fix Sockets hang caused by concurrent Socket disposal (#29786) (#29846)
  • [ed23f5391f] Fix terminfo number reading with 32-bit integers (#29655) (#29765)
  • [1c34018f14] Fix getting attributes for sharing violation files (#29790) (#29832)
  • [bc71849976] [release/2.1] Fix deadlock when waiting for process exit in Console.CancelKeyPress (#29749)
  • [adc1c4d0d5] Fix WebSocket split UTF8 read #29834 (#29840) (#29853)

WCF

  • [0a99dd88] Add net461 as a supported framework for S.SM.Security.
  • [45855085] Generate ThisAssembly.cs, update the version and links for svcutil.xmlserializer (#2893)
  • [68457365] Target svcutil.xmlserializer app at dotnetcore. (#2855)

Getting the Update

The .NET Core 2.1 June 2018 Update is available from the .NET Core 2.1.1 download page.

You can always download the latest version of .NET Core at .NET Downloads.

Docker Images

.NET Docker images have been updated for today’s release. The following repos have been updated.

Note: Look at the “Tags” view in each repository to see the updated Docker image tags.

Note: You must re-pull base images in order to get updates. docker build --pull does this automatically.

.NET Core End of Life Updates

.NET Core Supported OS Lifecycle Policy is regularly updated as supported operating systems reach end of life.

Previous .NET Core Updates

The last few .NET Core updates follow:

Because it’s Friday: The lioness sleeps tonight


Handlers for the lion enclosure at San Diego Zoo have developed a novel way to provide stimulation for their big cats: let them play tug-of-war with people outside. People plural that is — it turns out that a young lioness is no match for a trio of pro wrestlers:

🤼‍♂️ How many #NXT #WWE superstar wrestlers does it take to win in tug of war with a 2 1/2 year old lion cub? Apparently more than 3! #NXTSanAntonio #SAZoo pic.twitter.com/avyPVwRYjN

— San Antonio Zoo & Zoo School🦏 (@SanAntonioZoo) May 19, 2018

That's all for this week. Have a great weekend (and a very happy Pride!) and we'll be back next week.

Using Flurl to easily build URLs and make testable HttpClient calls in .NET


I posted about using Refit along with ASP.NET Core 2.1's HttpClientFactory earlier this week. Several times when exploring this space (on Twitter, googling around, and in my own blog comments) I come upon Flurl, as in "Fluent URL."

Not only is that a killer name for an open source project, Flurl is very active, very complete, and very interesting. By the way, take a look at the https://flurl.io/ site for a great example of a good home page for a well-run open source library. Clear, crisp, unambiguous, with links on how to Get It, Learn It, and Contribute. Not to mention extensive docs. Kudos!

Flurl is a modern, fluent, asynchronous, testable, portable, buzzword-laden URL builder and HTTP client library for .NET.

You had me at buzzword-laden! Flurl embraces the .NET Standard and works on .NET Framework, .NET Core, Xamarin, and UWP - so, everywhere.

To use just the URL builder, install Flurl. For the kitchen sink (recommended), you'll install Flurl.Http. In fact, Todd Menier was kind enough to share what a Flurl implementation of my SimpleCastClient would look like! Just to refresh you, my podcast site uses the SimpleCast podcast hosting API as its back-end.

My super basic typed implementation that "has a" HttpClient looks like this. To be clear this sample is WITHOUT FLURL.

public class SimpleCastClient
{
    private HttpClient _client;
    private ILogger<SimpleCastClient> _logger;
    private readonly string _apiKey;

    public SimpleCastClient(HttpClient client, ILogger<SimpleCastClient> logger, IConfiguration config)
    {
        _client = client;
        _client.BaseAddress = new Uri($"https://api.simplecast.com"); //Could also be set in Startup.cs
        _logger = logger;
        _apiKey = config["SimpleCastAPIKey"];
    }

    public async Task<List<Show>> GetShows()
    {
        try
        {
            var episodesUrl = new Uri($"/v1/podcasts/shownum/episodes.json?api_key={_apiKey}", UriKind.Relative);
            _logger.LogWarning($"HttpClient: Loading {episodesUrl}");
            var res = await _client.GetAsync(episodesUrl);
            res.EnsureSuccessStatusCode();
            return await res.Content.ReadAsAsync<List<Show>>();
        }
        catch (HttpRequestException ex)
        {
            _logger.LogError($"An error occurred connecting to SimpleCast API {ex.ToString()}");
            throw;
        }
    }
}

Let's explore Todd's expression of the same client using the Flurl library!

Note we set up a client in Startup.cs, use the same configuration, and also put in some nice aspect-oriented events for logging the befores and afters. This is VERY nice and you'll note it pulls my cluttered logging code right out of the client!

// Do this in Startup. All calls to SimpleCast will use the same HttpClient instance.
FlurlHttp.ConfigureClient(Configuration["SimpleCastServiceUri"], cli => cli
    .Configure(settings =>
    {
        // keeps logging & error handling out of SimpleCastClient
        settings.BeforeCall = call => logger.LogWarning($"Calling {call.Request.RequestUri}");
        settings.OnError = call => logger.LogError($"Call to SimpleCast failed: {call.Exception}");
    })
    // adds default headers to send with every call
    .WithHeaders(new
    {
        Accept = "application/json",
        User_Agent = "MyCustomUserAgent" // Flurl will convert that underscore to a hyphen
    }));

Again, this setup code lives in Startup.cs and is a one-time thing. The headers and user agent are all dealt with once there, in a one-line chained "fluent" manner.

Here's the new SimpleCastClient with Flurl.

using Flurl;
using Flurl.Http;

public class SimpleCastClient
{
    // look ma, no client!
    private readonly string _baseUrl;
    private readonly string _apiKey;

    public SimpleCastClient(IConfiguration config)
    {
        _baseUrl = config["SimpleCastServiceUri"];
        _apiKey = config["SimpleCastAPIKey"];
    }

    public Task<List<Show>> GetShows()
    {
        return _baseUrl
            .AppendPathSegment("v1/podcasts/shownum/episodes.json")
            .SetQueryParam("api_key", _apiKey)
            .GetJsonAsync<List<Show>>();
    }
}

See in GetShows() how we're also using the URL builder fluent extensions in the Flurl library. See that _baseUrl is actually a string? We all know that we're supposed to use System.Uri but it's such a hassle. Flurl adds extension methods to strings so that you can seamlessly transition from the string representations of Urls/Uris (that we all use) and build up a query string, and in this case, a GET that returns JSON.

Very clean!

Flurl also prides itself on making HttpClient testing easier as well. Here's a more sophisticated example of a library from their site:

// Flurl will use 1 HttpClient instance per host
var person = await "https://api.com"
    .AppendPathSegment("person")
    .SetQueryParams(new { a = 1, b = 2 })
    .WithOAuthBearerToken("my_oauth_token")
    .PostJsonAsync(new
    {
        first_name = "Claire",
        last_name = "Underwood"
    })
    .ReceiveJson<Person>();

This example is doing a post with an anonymous object that will automatically turn into JSON when it hits the wire. It also receives JSON as the response. Even the query params are created with a C# POCO (Plain Old CLR Object) and turned into name=value strings automatically.

Here's a test Flurl-style!

// fake & record all http calls in the test subject
using (var httpTest = new HttpTest()) {
    // arrange
    httpTest.RespondWith(200, "OK");
    // act
    await sut.CreatePersonAsync();
    // assert
    httpTest.ShouldHaveCalled("https://api.com/*")
        .WithVerb(HttpMethod.Post)
        .WithContentType("application/json");
}

Flurl.Http includes a set of features to easily fake and record HTTP activity. You can make a whole series of assertions about your APIs:

httpTest.ShouldHaveCalled("http://some-api.com/*")
    .WithVerb(HttpMethod.Post)
    .WithContentType("application/json")
    .WithRequestBody("{\"a\":*,\"b\":*}") // supports wildcards
    .Times(1);

All in all, it's an impressive set of tools that I hope you explore and consider for your toolbox! There's a ton of great open source like this with .NET Core and I'm thrilled to do a small part to spread the word. You should too!


Sponsor: Check out dotMemory Unit, a free unit testing framework for fighting all kinds of memory issues in your code. Extend your unit testing with the functionality of a memory profiler.



© 2018 Scott Hanselman. All rights reserved.
     

Supercharging the Git Commit Graph

Have you ever run gitk and waited a few seconds before the window appears? Have you struggled to visualize your commit history into a sane order of contributions instead of a stream of parallel work? Have you ever run a force-push and waited seconds for Git to give any output? You may be having performance... Read More

Silicon development on Microsoft Azure


This week at the Design Automation Conference (DAC), we look forward to joining the conversation on “Why Cloud, Why Now,” for silicon development workflows.

Cloud computing is enabling digital transformation across industries. Silicon, or semiconductors, are a foundational building block for the technology industry, and new opportunities are emerging in cloud computing for silicon development. The workflows for silicon development have always pushed the limits of compute, storage, and networking. Over time, the silicon development flow has been greatly expanded to handle the increasing size, density, and manufacturing complexity of the industry. This has pushed, and continues to push, the envelope for high performance compute (HPC) and storage infrastructure.


Azure provides a globally available, high performance computing (HPC) platform that is secure, reliable, and scalable to meet the current and emerging infrastructure needs of the silicon design and development workflow based on EDA software.

  • Compute: Silicon development is compute and memory intensive. At times, it utilizes up to thousands of cores, demands the ability to quickly move and manage massive data sets for design and collaboration. Azure customers can choose from a range of compute- and memory-optimized Linux and Windows VMs to run their workflows.
  • Storage: Azure Storage offers multiple storage options for handling high performance NFS scenarios in cloud-only and hybrid environments. For workflow steps like silicon RTL verification that can generate millions and millions of small (4 to 16K) temp files, Azure Avere provides high performance NFS support. Azure Avere also offers a single global namespace for hybrid or cloud-only deployment.
  • Networking: Azure offers custom networking options to allow for fast, scalable, and secure network connectivity between customer premises and global Azure regions.
  • Scalability: Azure offers nearly unlimited scalability. Given the cyclical nature of the silicon industry, using Azure enables organizations to rapidly increase and/or decrease the number of cores needed, while only having to pay for the resources that are used.
  • Security: Azure offers a wide array of security tools and capabilities, to enable customers to secure their platform, maintain privacy and controls, meet compliance requirements, and ensure transparency.
  • Global presence: Azure has more regions globally than any other cloud provider, offering the scale needed to bring applications closer to users around the world, preserving data residency, and providing comprehensive compliance and resiliency options for customers. Using Azure’s footprint, the cost, the time, and the complexity of operating a global semiconductor infrastructure can be reduced.
  • Scheduling and orchestration: Azure supports commonly used tools in the silicon industry, and also provides alternatives like Azure Batch, a high-performance cloud native job scheduler. Azure CycleCloud is a site-installed, web-based orchestration tool for HPC clusters and workflows and helps create, manage, use, and optimize dynamic clustered compute environments. It can also set policies that govern access and use of Azure, which helps manage resources and costs.

Microsoft is both a developer and consumer of silicon. We are partnering with our internal silicon development organizations as well as the industry – design, manufacturing, and tools – to address current and emerging infrastructure needs of the silicon design and development workflow that is based on electronic design automation (EDA) software.

More detailed information on configuring Azure services for Silicon development is available in this white paper.

We look forward to sharing more details on using Azure for silicon development at DAC 2018 this week. Join us for the following sessions or contact our Silicon industry team at AzureForSilicon@microsoft.com.

  • Monday, June 25, 2018 at 12:30 PM Pacific Time: “Why Cloud, Why Now”
  • Monday, June 25, 2018 at 4:30 PM Pacific Time: Panel discussion – Cloud computing for EDA
  • Tuesday, June 26, 2018 at 2:30 PM Pacific Time: Panel discussion – EDA on the cloud: Are we ready?

Customer 360 Powered by Zero2Hero now available on Azure Marketplace


With today’s fast moving technology and abundance of data sources, gaining a complete view of your customer is increasingly challenging and critical. This includes campaign interaction, opportunities for marketing optimization, current engagement, and recommendations for next best action.

To continuously drive business growth, financial services organizations are especially focused on innovation and speed-to-market in this area, as they look to overcome the added challenge of implementing and integrating best-of-breed solutions jointly, to quickly gain that 360-degree view of the customer.

To address these needs in an accelerated way, Bardess is bringing together the technology of Cloudera, Qlik, and Trifacta, along with their own accelerators and industry expertise, to deliver rapid value to customers.


Customer 360 Powered by Zero2Hero, the first in a new series of integrated solutions coming to Azure Marketplace and AppSource as Consulting Services offers, is now available.

What is Customer 360 Powered by Zero2Hero?

By combining Cloudera’s modern platform for machine learning and analytics, Qlik’s powerful, agile business intelligence and analytics suite, Trifacta’s data preparation platform, and Bardess accelerators, organizations can uncover insights and easily build comprehensive views of their customers across multiple touch points and enterprise systems.

The solution offers a complete platform for Customer 360 workloads available on Microsoft Azure in minutes, enabling:

  • Modernized data management, optimized for the cloud, to transform complex data into clear and actionable insights with Cloudera Enterprise.
  • Democratization of your analytics by empowering business users to prep their data for analysis using Trifacta.
  • Identification of patterns, relationships and outliers in vast amounts of data in visually compelling ways using Qlik.
  • Artificial Intelligence (AI), Machine Learning (ML), predictive, prescriptive and geospatial capabilities to fully leverage data assets using Cloudera Data Science Workbench.
  • Building, testing, deploying, and management of workloads in the cloud through Microsoft Azure.
  • Accelerated implementation and industry best practices through Bardess services.

Learn more about Customer 360 Powered by Zero2Hero on the Azure Marketplace, and look for more integrated solutions, offering best-of-breed technology via a unified buying and implementation experience enabled by a systems integrator, coming to the Azure Marketplace in July.

4 month retirement notice: Access Control Service


The Access Control Service, otherwise known as ACS, is officially being retired. ACS will remain available for existing customers until November 7, 2018. After this date, ACS will be shut down, causing all requests to the service to fail.

This blog post is a follow up to the initial announcement of the retirement of ACS service.

Who is affected by this change?

This retirement affects any customer who has created one or more ACS namespaces in their Azure subscriptions. For instance, this may include Service Bus customers that have created an ACS namespace indirectly when creating a Service Bus namespace. If your apps and services do not use ACS, then you have no action to take.

What action is required?

If you are using ACS, you will need a migration strategy. The correct migration path for you depends on how your existing apps and services use ACS. We have published migration guidance to assist. In most cases, migration will require code changes on your part.

If you are uncertain whether your apps and services are using ACS, you are not alone. After the retirement of ACS from the Azure portal in April 2018, you had to contact Azure support to list your namespaces. Moving forward, we are pleased to announce this will no longer be the case.

Access Control Service PowerShell now available

ACS PowerShell provides a direct replacement for the ACS functionality in the classic Azure portal. For more details, please follow the instructions to download from the PowerShell gallery.

How to list and delete your ACS namespaces

Once you have installed ACS PowerShell, you can follow these simple steps to determine and ultimately delete your ACS namespaces:

1. Connect to ACS using the Connect-AcsAccount cmdlet.

2. List your available Azure subscriptions using the Get-AcsSubscription cmdlet.

3. List your ACS namespaces using the Get-AcsNamespace cmdlet

The Azure customers most likely to find ACS namespaces are those who signed up for Azure Service Bus prior to 2014. These namespaces can be identified by their -sb suffix. The Service Bus team has provided migration guidance and will continue to publish updates to their blog.

4. Disable your ACS namespace using the Disable-AcsNamespace cmdlet.

This step is optional. If you believe that you have completed migration, it is recommended that you disable your namespace prior to deletion. After being disabled, requests will receive a 404 response from https://{your-namespace}.accesscontrol.windows.net (a small verification sketch follows these steps). The namespace is otherwise untouched and can be restored using the Enable-AcsNamespace cmdlet.

5. Delete your ACS namespace using the Remove-AcsNamespace cmdlet.

This step will permanently remove your namespace and is not recoverable.
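
As a small, hedged C# sketch (not part of the official guidance), you could confirm that a disabled namespace now responds with 404 before deleting it. The namespace name below is a placeholder.

using System;
using System.Net.Http;
using System.Threading.Tasks;

class AcsNamespaceCheck
{
    static async Task Main()
    {
        var acsNamespace = "<your-namespace>"; // placeholder

        using (var client = new HttpClient())
        {
            var response = await client.GetAsync($"https://{acsNamespace}.accesscontrol.windows.net");

            // A disabled namespace is expected to return 404 Not Found.
            Console.WriteLine($"Status: {(int)response.StatusCode} {response.StatusCode}");
        }
    }
}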

Contact us

For more information about the retirement of ACS, please check our ACS migration guidance first. If none of the migration options will work for you, or if you still have questions or feedback about ACS retirement, please contact us at acsfeedback@microsoft.com.

Healthcare on 5G


Today many healthcare computational workloads still exist at or near the point of care because of high latency, low bandwidth, or challenges with wireless power requirements and limited battery capabilities. Limitations on the numbers of connections per cell are also poised to stunt the future growth of IoT (Internet of Things).

4G LTE has typical latencies of 50-100ms, bandwidth less than 50Mbps, and in the range of thousands of connections per cell (maximum). This has forced users to spend capital buying expensive, powerful hardware to be co-located at or close to the point of need, and to secure and maintain that hardware over its lifetime. In the case of IoT and wearables these limitations have either prevented certain use cases or significantly limited capabilities.

5G has a latency of less than 1ms, bandwidth of up to 10 Gbps, and up to a million connections per square KM! This is going to pave the way for many new innovations in healthcare. I discuss a few of these below.

Healthcare AR / VR from the Cloud

Microsoft HoloLens today enables MR (Mixed Reality) in healthcare. The AR (Augmented Reality) and VR (Virtual Reality) in healthcare market was valued at USD 769 million in 2017 and is expected to reach USD 4,998 million by 2023, representing a CAGR of 36.6 percent during the forecast period. 5G latencies are low enough, and bandwidth high enough, that the computational workloads associated with AR / VR / MR can run in the cloud and be delivered to the patient's or healthcare worker's headset via 5G. This can include both workloads to generate models — for example, a VR model for a specific patient created using images taken of that patient — as well as workloads to display the model to the user and enable the user to navigate through and interact with the model in real time. The net effect of enabling these technologies from the cloud is that AR / VR / MR will require less capital outlay for user hardware; the only hardware required by the user will be the “thin client” to drive the headset display and audio, and the 5G radio. This will make AR / VR / MR economically feasible for a myriad of use cases that promise to improve healthcare, from patient engagement and education, to healthcare worker training, surgical planning and simulation, treatment for PTSD, and many others.


IoMT (Internet of Medical Things) powers healthcare analytics / AI / ML from the Cloud

Healthcare market research in 2017 estimates 4.5 billion IoMT devices existed in 2015, accounting for 30.3 percent of all IoT devices globally. This number is expected to grow to 20-30 billion IoMT devices by 2020! These will include patient wearables, as well as IoMT devices in healthcare environments and in patients’ homes. In many cases, IoMT devices will be connected to the cloud via 5G—which is capable of supporting millions of connections per square KM. 5G will also enable IoMT devices to connect with low bandwidth, and low-power options will enable these devices to include lower power radios with longer lasting batteries. In turn, 5G will enable the proliferation of IoMT devices around and on patients, while greatly increasing the data available to advanced analytics, artificial intelligence and machine learning. In turn, those processes will deliver new insights to healthcare workers in real time, empowering them to deliver improved healthcare and patient outcomes. Microsoft Azure Sphere will secure these solutions end-to-end from IoMT devices to the cloud.

Autonomous driving for healthcare

Every year 3.6 million American patients miss doctor appointments due to a lack of reliable transportation, resulting in “no show” rates as high as 30 percent. Uber Health and Lyft Concierge already remove transportation as a barrier, enabling healthcare organizations to provide patients a convenient transportation option. This improves attendance at doctor appointments, and thereby healthcare. Autonomous driving will further reduce the cost of such transportation and improve patient safety during transportation. 5G will enable computational workloads that support autonomous driving such as optimal routing, maintenance, patient engagement/entertainment, and communications. 5G will also facilitate others moving to the cloud, further reducing the cost of autonomous driving, and ultimately helping to reduce the cost of healthcare.

These are just a few of the workloads that 5G will enable to be delivered from the cloud, helping to reduce healthcare costs, and improving the quality of care and patient outcomes. Many new healthcare use cases will emerge as 5G becomes more broadly available and new types of affordable, scalable, cloud-based computational workloads become possible.

5G Rolling Out in 2018!

All major wireless providers are working on 5G rollout. Verizon is planning to offer 5G in multiple cities in the US including Sacramento and Los Angeles starting in Q4 2018!

Are you factoring 5G into your IT / Cloud plans?

As healthcare organizations plan IT for the next 3-5 years, it is important to take 5G into consideration, as it opens new opportunities to move workloads to the cloud, both for the use cases discussed in this blog and many more.

I recommend you read this blog to find out more about how Microsoft is enabling healthcare organizations to transform their business and optimize patients' health outcomes using the Azure intelligent, trusted, and secure health cloud platform. See Microsoft Health Solutions for more about how Microsoft is empowering healthcare workers to reduce the cost of healthcare, and improve patient outcomes. See Azure for Health for more about how Microsoft is enabling healthcare organizations to transform their business to lower healthcare costs, improve patient and clinician experiences, and improve patient outcomes

I post regularly about new developments on social media. If you would like to follow me you can find me on LinkedIn and Twitter. What other opportunities and challenges do you see with 5G and cloud computing in healthcare? We welcome your feedback and questions.

Structured streaming with Azure Databricks into Power BI & Cosmos DB


In this blog we'll discuss the concept of Structured Streaming and how a data ingestion path can be built using Azure Databricks to enable the streaming of data in near-real-time. We'll touch on some of the analysis capabilities which can be called directly from within Databricks utilising the Text Analytics API, and also discuss how Databricks can be connected directly into Power BI for further analysis and reporting. As a final step, we cover how streamed data can be sent from Databricks to Cosmos DB as the persistent storage.

Structured streaming is a stream processing engine which allows computation to be expressed and applied incrementally on streaming data (e.g. a Twitter feed). In this sense it is very similar to the way in which batch computation is executed on a static dataset. Computation is performed incrementally via the Spark SQL engine, which updates the result as a continuous process as the streaming data flows in.

[Architecture diagram: a Twitter feed streamed through Event Hubs into Azure Databricks, which calls the Text Analytics API and outputs to Power BI and Cosmos DB]

The above architecture illustrates a possible flow on how Databricks can be used directly as an ingestion path to stream data from Twitter (via Event Hubs to act as a buffer), call the Text Analytics API in Cognitive Services to apply intelligence to the data and then finally send the data directly to Power BI and Cosmos DB.

The concept of structured streaming

All data which arrives from the data stream is treated as an unbounded input table. For each new piece of data in the data stream, a new row is appended to the unbounded input table. The entirety of the input isn't stored, but the end result is equivalent to retaining the entire input and executing a batch job.


The input table allows us to define a query on itself, just as if it were a static table, which will compute a final result table written to an output sink. This batch-like query is automatically converted by Spark into a streaming execution plan via a process called incremental execution.

Incremental execution is where Spark natively calculates the state required to update the result every time a record arrives. We are able to utilize built in triggers to specify when to update the results. For each trigger that fires, Spark looks for new data within the input table and updates the result on an incremental basis.

Queries on the input table will generate the result table. For every trigger interval (e.g. every three seconds) new rows are appended to the input table, which through the process of Incremental Execution, update the result table. Each time the result table is updated, the changed results are written as an output.


The output defines what gets written to external storage, whether this be directly into the Databricks file system, or in our example CosmosDB.

To implement this within Azure Databricks the incoming stream function is called to initiate the StreamingDataFrame based on a given input (in this example Twitter data). The stream is then processed and written as parquet format to internal Databricks file storage as shown in the below code snippet:

val streamingDataFrame = incomingStream.selectExpr("cast (body as string) AS Content")
  .withColumn("body", toSentiment($"Content"))

import org.apache.spark.sql.streaming.Trigger.ProcessingTime
val result = streamingDataFrame
  .writeStream.format("parquet")
  .option("path", "/mnt/Data")
  .option("checkpointLocation", "/mnt/sample/check")
  .start()


Mounting file systems within Databricks (CosmosDB)

Several different file systems can be mounted directly within Databricks such as Blob Storage, Data Lake Store and even SQL Data Warehouse. In this blog we’ll explore the connectivity capabilities between Databricks and Cosmos DB.

Fast connectivity between Apache Spark and Azure Cosmos DB accelerates the ability to solve fast moving Data Sciences problems where data can be quickly persisted and retrieved using Azure Cosmos DB. With the Spark to Cosmos DB connector, it’s possible to solve IoT scenarios, update columns when performing analytics, push-down predicate filtering, and perform advanced analytics against fast changing data against a geo-replicated managed document store with guaranteed SLAs for consistency, availability, low latency, and throughput.

image

  • From within Databricks, a connection is made from the Spark master node to the Cosmos DB gateway node to get the partition information from Cosmos DB.
  • The partition information is translated back to the Spark master node and distributed amongst the worker nodes.
  • This allows the Spark worker nodes to interact directly with the Cosmos DB partitions when a query comes in. The worker nodes are able to extract the data that is needed and bring the data back to the Spark partitions within the Spark worker nodes.

Communication between Spark and Cosmos DB is significantly faster because the data movement is between the Spark worker nodes and the Cosmos DB data nodes.

Using the Azure Cosmos DB Spark connector (currently in preview) it is possible to connect directly into a Cosmos DB storage account from within Databricks, enabling Cosmos DB to act as an input source or output sink for Spark jobs as shown in the code snippet below:

import com.microsoft.azure.cosmosdb.spark.CosmosDBSpark
import com.microsoft.azure.cosmosdb.spark.config.Config

// Placeholder values; supply your own Cosmos DB endpoint, key, database, regions, and collection.
val writeConfig = Config(Map(
  "Endpoint"         -> "<cosmosdb-endpoint>",
  "MasterKey"        -> "<cosmosdb-master-key>",
  "Database"         -> "<database>",
  "PreferredRegions" -> "<preferred-regions>",
  "Collection"       -> "<collection>",
  "WritingBatchSize" -> "<writing-batch-size>"
))

import org.apache.spark.sql.SaveMode
sentimentdata.write.mode(SaveMode.Overwrite).cosmosDB(writeConfig)

Connecting Databricks to PowerBI

Microsoft Power BI is a business analytics service that provides interactive visualizations with self-service business intelligence capabilities, enabling end users to create reports and dashboards by themselves without having to depend on information technology staff or database administrators.

Azure Databricks can be used as a direct data source with Power BI, which enables the performance and technology advantages of Azure Databricks to be brought beyond data scientists and data engineers to all business users.

Power BI Desktop can be connected directly to an Azure Databricks cluster using the built-in Spark connector (Currently in preview). The connector enables the use of DirectQuery to offload processing to Databricks, which is great when you have a large amount of data that you don’t want to load into Power BI or when you want to perform near real-time analysis as discussed throughout this blog post.


This connector utilises a JDBC/ODBC connection via DirectQuery, enabling the use of a live connection into the mounted file store for the streaming data entering via Databricks. From Databricks we can set a schedule (e.g. every 5 seconds) to write the streamed data into the file store, and from Power BI pull this down regularly to obtain a near-real-time stream of data.

From within Power BI, various analytics and visualisations can be applied to the streamed dataset bringing it to life!


Want to have a go at building this architecture out? For more examples of Databricks see the official Azure documentation:

Please read more on Stream Analytics with Power BI.

Backup your applications on Azure Stack with Azure Backup


Earlier this month, we announced Azure Stack expanding availability to 92 countries. Today, we are announcing the capability to back up files and application data using Microsoft Azure Backup Server. Azure Stack tenants can now take app-consistent backups of the data in their Azure Stack VMs, store them on the stack for operational recoveries, and send the data to Azure for long-term retention and offsite copy needs.


Key benefits

Application consistent backups for SQL, SharePoint, and Exchange

App-consistent backups mean that Azure Backup makes sure that, while taking a backup, the memory is flushed and no IOs are pending. This means that in addition to being recoverable, your applications are completely consistent at the time of backup. With Azure Backup Server, you can take app-consistent backups of your applications, ensuring that the data, when recovered, is valid and consistent.

Item Level Recovery for local recovery points

Quick operational recoveries can be triggered directly from the Microsoft Azure Backup Server running on your stack, with Item Level Recovery. This means that you can recover a single file from a backed-up 50 GB volume without having to recover the whole volume at a staging location.

Long term retention of offsite copies in Azure

Most organizations today have a requirement to keep an offsite copy of their data which could be recovered in cases such as a datacentre going down. In addition, long-term retention is often a legal requirement as well. Seamless integration with Azure means that you do not need to worry about managing cumbersome tapes, and that you have the power to recover the data with a click whenever needed.

Centralized view in Azure Portal

With Azure Backup, you can view all your Microsoft Azure Backup Servers, the items they are protecting, the storage they consume and the cloud recovery points centrally in an Azure Portal. This enables you to get information about all your Azure Backup entities, including the Azure VMs, systems protected with MARS agent, and SQL in Azure VMs from one central place.

Security features

Azure Backup protects your data from malicious attacks and helps you recover via the three-pronged strategy of Prevention, Alerting, and Recovery. This means that critical operations such as changing the passphrase require multi-factor authentication. Further, operations such as stopping protection with deletion of cloud data trigger an alert, and the data is retained for 14 days even after delete is triggered, allowing you to recover the data from your cloud backups.

Download Microsoft Azure Backup Server now! Find more details about how to plan and install your Microsoft Azure Backup Server. You can also find more information about Microsoft Azure Backup Server pricing.

Related links and additional content
