
Windows 10 SDK Preview Build 17672 available now!


Today, we released a new Windows 10 Preview Build of the SDK to be used in conjunction with Windows 10 Insider Preview (Build 17672 or greater). The Preview SDK Build 17672 contains bug fixes and changes to APIs that are still under development.

The Preview SDK can be downloaded from the developer section on Windows Insider.

For feedback and updates to the known issues, please see the developer forum. For new developer feature requests, head over to our Windows Platform UserVoice.

Things to note:

  • This build works in conjunction with previously released SDKs and Visual Studio 2017. You can install this SDK and still continue to submit apps that target the Windows 10 Creators Update (or earlier) to the Store.
  • The Windows SDK will now formally only be supported by Visual Studio 2017 and greater. You can download Visual Studio 2017 here.
  • This build of the Windows SDK will install on Windows 10 Insider Preview and supported Windows operating systems.

Known Issues

Windows Device Portal

Please note that there is a known issue in this Windows Insider build that prevents the user from enabling Developer Mode through the For developers settings page.

Unfortunately, this means that you will not be able to remotely deploy a UWP application to your PC or use Windows Device Portal on this build. There are no known workarounds at the moment. Please skip this flight if you rely on these features.

Missing Contract File

The contract Windows.System.SystemManagementContract is not included in this release. In order to access the following APIs, please use a previous Windows IoT extension SDK with your project.

This bug will be fixed in a future preview build of the SDK.

The following APIs are affected by this bug:


namespace Windows.Services.Cortana {
  public sealed class CortanaSettings     
}
namespace Windows.System {
  public enum AutoUpdateTimeZoneStatus
  public static class DateTimeSettings
  public enum PowerState
  public static class ProcessLauncher
  public sealed class ProcessLauncherOptions
  public sealed class ProcessLauncherResult
  public enum ShutdownKind
  public static class ShutdownManager
  public struct SystemManagementContract
  public static class TimeZoneSettings
}

API Spotlight:

Check out LauncherOptions.GroupingPreference.


namespace Windows.System {
  public sealed class FolderLauncherOptions : ILauncherViewOptions {
    ViewGrouping GroupingPreference { get; set; }
  }
  public sealed class LauncherOptions : ILauncherViewOptions {
    ViewGrouping GroupingPreference { get; set; }
  }
}

This release contains the new LauncherOptions.GroupingPreference property to assist your app in tailoring its behavior for Sets. Watch the presentation here.
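
Here's a minimal sketch of how an app might opt into view grouping when launching a URI. The target URI and the specific ViewGrouping value are illustrative assumptions, not taken from this post.

using System;
using System.Threading.Tasks;
using Windows.System;
using Windows.UI.ViewManagement;

public static class SetsLaunchSample
{
    public static async Task OpenLinkGroupedWithSourceAsync()
    {
        var options = new LauncherOptions
        {
            // Hint that the launched view should be grouped with the calling app's Set.
            // ViewGrouping.WithSource is an assumed enum value used for illustration.
            GroupingPreference = ViewGrouping.WithSource
        };

        await Launcher.LaunchUriAsync(new Uri("https://example.com"), options);
    }
}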

What’s New:

MC.EXE

We’ve made some important changes to the C/C++ ETW code generation of mc.exe (Message Compiler):

The “-mof” parameter is deprecated. This parameter instructs MC.exe to generate ETW code that is compatible with Windows XP and earlier. Support for the “-mof” parameter will be removed in a future version of mc.exe.

As long as the “-mof” parameter is not used, the generated C/C++ header is now compatible with both kernel-mode and user-mode, regardless of whether “-km” or “-um” was specified on the command line. The header will use the _ETW_KM_ macro to automatically determine whether it is being compiled for kernel-mode or user-mode and will call the appropriate ETW APIs for each mode.

  • The only remaining difference between “-km” and “-um” is that the EventWrite[EventName] macros generated with “-km” have an Activity ID parameter while the EventWrite[EventName] macros generated with “-um” do not have an Activity ID parameter.

The EventWrite[EventName] macros now default to calling EventWriteTransfer (user mode) or EtwWriteTransfer (kernel mode). Previously, the EventWrite[EventName] macros defaulted to calling EventWrite (user mode) or EtwWrite (kernel mode).

  • The generated header now supports several customization macros. For example, you can set the MCGEN_EVENTWRITETRANSFER macro if you need the generated macros to call something other than EventWriteTransfer.
  • The manifest supports new attributes.
    • Event “name”: non-localized event name.
    • Event “attributes”: additional key-value metadata for an event such as filename, line number, component name, function name.
    • Event “tags”: 28-bit value with user-defined semantics (per-event).
    • Field “tags”: 28-bit value with user-defined semantics (per-field – can be applied to “data” or “struct” elements).
  • You can now define “provider traits” in the manifest (e.g. provider group). If provider traits are used in the manifest, the EventRegister[ProviderName] macro will automatically register them.
  • MC will now report an error if a localized message file is missing a string. (Previously MC would silently generate a corrupt message resource.)
  • MC can now generate Unicode (utf-8 or utf-16) output with the “-cp utf-8” or “-cp utf-16” parameters.

API Updates and Additions

When targeting new APIs, consider writing your app to be adaptive so that it runs correctly on the widest range of Windows 10 devices. Please see Dynamically detecting features with API contracts (10 by 10) for more information.
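
For example, an adaptive check with the existing ApiInformation helpers might look like the sketch below before touching the new GroupingPreference property (a hedged illustration, not code from this post; the ViewGrouping value is assumed).

using Windows.Foundation.Metadata;
using Windows.System;

public static class AdaptiveLaunchSample
{
    public static LauncherOptions CreateOptions()
    {
        var options = new LauncherOptions();

        // Only touch the new property when the running OS actually exposes it.
        if (ApiInformation.IsPropertyPresent("Windows.System.LauncherOptions", "GroupingPreference"))
        {
            // Safe on this device; ViewGrouping.WithSource is an assumed value for illustration.
            options.GroupingPreference = Windows.UI.ViewManagement.ViewGrouping.WithSource;
        }

        return options;
    }
}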

The following APIs have been added to the platform since the release of 17134.


namespace Windows.ApplicationModel {
  public sealed class AppInstallerFileInfo
  public sealed class LimitedAccessFeatureRequestResult
  public static class LimitedAccessFeatures
  public enum LimitedAccessFeatureStatus
  public sealed class Package {
    IAsyncOperation<PackageUpdateAvailabilityResult> CheckUpdateAvailabilityAsync();
    AppInstallerFileInfo GetAppInstallerFileInfo();
  }
  public enum PackageUpdateAvailability
  public sealed class PackageUpdateAvailabilityResult
}
namespace Windows.ApplicationModel.Calls {
  public sealed class VoipCallCoordinator {
    IAsyncOperation<VoipPhoneCallResourceReservationStatus> ReserveCallResourcesAsync();
  }
}
namespace Windows.ApplicationModel.Store.Preview.InstallControl {
  public enum AppInstallationToastNotificationMode
  public sealed class AppInstallItem {
    AppInstallationToastNotificationMode CompletedInstallToastNotificationMode { get; set; }
    AppInstallationToastNotificationMode InstallInProgressToastNotificationMode { get; set; }
    bool PinToDesktopAfterInstall { get; set; }
    bool PinToStartAfterInstall { get; set; }
    bool PinToTaskbarAfterInstall { get; set; }
  }
  public sealed class AppInstallManager {
    bool CanInstallForAllUsers { get; }
  }
  public sealed class AppInstallOptions {
    AppInstallationToastNotificationMode CompletedInstallToastNotificationMode { get; set; }
    bool InstallForAllUsers { get; set; }
    AppInstallationToastNotificationMode InstallInProgressToastNotificationMode { get; set; }
    bool PinToDesktopAfterInstall { get; set; }
    bool PinToStartAfterInstall { get; set; }
    bool PinToTaskbarAfterInstall { get; set; }
    bool StageButDoNotInstall { get; set; }
  }
  public sealed class AppUpdateOptions {
    bool AutomaticallyDownloadAndInstallUpdateIfFound { get; set; }
  }
}
namespace Windows.Devices.Enumeration {
  public sealed class DeviceInformationPairing {
    public static bool TryRegisterForAllInboundPairingRequestsWithProtectionLevel(DevicePairingKinds pairingKindsSupported, DevicePairingProtectionLevel minProtectionLevel);
  }
}
namespace Windows.Devices.Lights {
  public sealed class LampArray
  public enum LampArrayKind
  public sealed class LampInfo
  public enum LampPurpose : uint
}
namespace Windows.Devices.Sensors {
  public sealed class SimpleOrientationSensor {
    public static IAsyncOperation<SimpleOrientationSensor> FromIdAsync(string deviceId);
    public static string GetDeviceSelector();
  }
}
namespace Windows.Devices.SmartCards {
  public static class KnownSmartCardAppletIds
  public sealed class SmartCardAppletIdGroup {
    string Description { get; set; }
    IRandomAccessStreamReference Logo { get; set; }
    ValueSet Properties { get; }
    bool SecureUserAuthenticationRequired { get; set; }
  }
  public sealed class SmartCardAppletIdGroupRegistration {
    string SmartCardReaderId { get; }
    IAsyncAction SetPropertiesAsync(ValueSet props);
  }
}
namespace Windows.Devices.WiFi {
  public enum WiFiPhyKind {
    He = 10,
  }
}
namespace Windows.Graphics.Capture {
  public sealed class GraphicsCaptureItem {
    public static GraphicsCaptureItem CreateFromVisual(Visual visual);
  }
}
namespace Windows.Graphics.Imaging {
  public sealed class BitmapDecoder : IBitmapFrame, IBitmapFrameWithSoftwareBitmap {
    public static Guid HeifDecoderId { get; }
    public static Guid WebpDecoderId { get; }
  }
  public sealed class BitmapEncoder {
    public static Guid HeifEncoderId { get; }
  }
}
namespace Windows.Media.Core {
  public sealed class MediaStreamSample {
    IDirect3DSurface Direct3D11Surface { get; }
    public static MediaStreamSample CreateFromDirect3D11Surface(IDirect3DSurface surface, TimeSpan timestamp);
  }
}
namespace Windows.Media.Devices.Core {
  public sealed class CameraIntrinsics {
    public CameraIntrinsics(Vector2 focalLength, Vector2 principalPoint, Vector3 radialDistortion, Vector2 tangentialDistortion, uint imageWidth, uint imageHeight);
  }
}
namespace Windows.Media.MediaProperties {
  public sealed class ImageEncodingProperties : IMediaEncodingProperties {
    public static ImageEncodingProperties CreateHeif();
  }
  public static class MediaEncodingSubtypes {
    public static string Heif { get; }
  }
}
namespace Windows.Media.Streaming.Adaptive {
  public enum AdaptiveMediaSourceResourceType {
    MediaSegmentIndex = 5,
  }
}
namespace Windows.Security.Authentication.Web.Provider {
  public sealed class WebAccountProviderInvalidateCacheOperation : IWebAccountProviderBaseReportOperation, IWebAccountProviderOperation
  public enum WebAccountProviderOperationKind {
    InvalidateCache = 7,
  }
  public sealed class WebProviderTokenRequest {
    string Id { get; }
  }
}
namespace Windows.Security.DataProtection {
  public enum UserDataAvailability
  public sealed class UserDataAvailabilityStateChangedEventArgs
  public sealed class UserDataBufferUnprotectResult
  public enum UserDataBufferUnprotectStatus
  public sealed class UserDataProtectionManager
  public sealed class UserDataStorageItemProtectionInfo
  public enum UserDataStorageItemProtectionStatus
}
namespace Windows.Services.Cortana {
  public sealed class CortanaActionableInsights
  public sealed class CortanaActionableInsightsOptions
}
namespace Windows.Services.Store {
  public sealed class StoreContext {
    IAsyncOperation<StoreRateAndReviewResult> RequestRateAndReviewAppAsync();
  }
  public sealed class StoreRateAndReviewResult
  public enum StoreRateAndReviewStatus
}
namespace Windows.Storage.Provider {
  public enum StorageProviderHydrationPolicyModifier : uint {
    AutoDehydrationAllowed = (uint)4,
  }
}
namespace Windows.System {
  public sealed class FolderLauncherOptions : ILauncherViewOptions {
    ViewGrouping GroupingPreference { get; set; }
  }
  public sealed class LauncherOptions : ILauncherViewOptions {
    ViewGrouping GroupingPreference { get; set; }
  }
}
namespace Windows.System.UserProfile {
  public sealed class AssignedAccessSettings
}
namespace Windows.UI.Composition {
  public sealed class AnimatablePropertyInfo : CompositionObject
  public enum AnimationPropertyAccessMode
  public enum AnimationPropertyType
  public class CompositionAnimation : CompositionObject, ICompositionAnimationBase {
    void SetAnimatableReferenceParameter(string parameterName, IAnimatable source);
  }
  public enum CompositionBatchTypes : uint {
    AllAnimations = (uint)5,
    InfiniteAnimation = (uint)4,
  }
  public sealed class CompositionGeometricClip : CompositionClip
  public class CompositionObject : IAnimatable, IClosable {
    void GetPropertyInfo(string propertyName, AnimatablePropertyInfo propertyInfo);
  }
  public sealed class Compositor : IClosable {
    CompositionGeometricClip CreateGeometricClip();
  }
  public interface IAnimatable
}
namespace Windows.UI.Composition.Interactions {
  public sealed class InteractionTracker : CompositionObject {
    IReference<float> PositionDefaultAnimationDurationInSeconds { get; set; }
    IReference<float> ScaleDefaultAnimationDurationInSeconds { get; set; }
    int TryUpdatePositionWithDefaultAnimation(Vector3 value);
    int TryUpdateScaleWithDefaultAnimation(float value, Vector3 centerPoint);
  }
}
namespace Windows.UI.Notifications {
  public sealed class ScheduledToastNotification {
    public ScheduledToastNotification(DateTime deliveryTime);
    IAdaptiveCard AdaptiveCard { get; set; }
  }
  public sealed class ToastNotification {
    public ToastNotification();
    IAdaptiveCard AdaptiveCard { get; set; }
  }
}
namespace Windows.UI.Shell {
  public sealed class TaskbarManager {
    IAsyncOperation<bool> IsSecondaryTilePinnedAsync(string tileId);
    IAsyncOperation<bool> RequestPinSecondaryTileAsync(SecondaryTile secondaryTile);
    IAsyncOperation<bool> TryUnpinSecondaryTileAsync(string tileId);
  }
}
namespace Windows.UI.StartScreen {
  public sealed class StartScreenManager {
    IAsyncOperation<bool> ContainsSecondaryTileAsync(string tileId);
    IAsyncOperation<bool> TryRemoveSecondaryTileAsync(string tileId);
  }
}
namespace Windows.UI.ViewManagement {
  public sealed class ApplicationView {
    bool IsTabGroupingSupported { get; }
  }
  public sealed class ApplicationViewTitleBar {
    void SetActiveIconStreamAsync(RandomAccessStreamReference activeIcon);
  }
  public enum ApplicationViewWindowingMode {
    CompactOverlay = 3,
    Maximized = 4,
  }
  public enum ViewGrouping
  public sealed class ViewModePreferences {
    ViewGrouping GroupingPreference { get; set; }
  }
}
namespace Windows.UI.ViewManagement.Core {
  public sealed class CoreInputView {
    bool TryHide();
    bool TryShow();
    bool TryShow(CoreInputViewKind type);
  }
  public enum CoreInputViewKind
}
namespace Windows.UI.Xaml.Controls {
  public class NavigationView : ContentControl {
    bool IsTopNavigationForcedHidden { get; set; }
    NavigationViewOrientation Orientation { get; set; }
    UIElement TopNavigationContentOverlayArea { get; set; }
    UIElement TopNavigationLeftHeader { get; set; }
    UIElement TopNavigationMiddleHeader { get; set; }
    UIElement TopNavigationRightHeader { get; set; }
  }
  public enum NavigationViewOrientation
  public sealed class PasswordBox : Control {
    bool CanPasteClipboardContent { get; }
    public static DependencyProperty CanPasteClipboardContentProperty { get; }
    void PasteFromClipboard();
  }
  public class RichEditBox : Control {
    RichEditTextDocument RichEditDocument { get; }
  }
  public sealed class RichTextBlock : FrameworkElement {
    void CopySelectionToClipboard();
  }
  public class SplitButton : ContentControl
  public sealed class SplitButtonClickEventArgs
  public enum SplitButtonOrientation
  public sealed class TextBlock : FrameworkElement {
    void CopySelectionToClipboard();
  }
  public class TextBox : Control {
    bool CanPasteClipboardContent { get; }
    public static DependencyProperty CanPasteClipboardContentProperty { get; }
    bool CanRedo { get; }
    public static DependencyProperty CanRedoProperty { get; }
    bool CanUndo { get; }
    public static DependencyProperty CanUndoProperty { get; }
    void CopySelectionToClipboard();
    void CutSelectionToClipboard();
    void PasteFromClipboard();
    void Redo();
    void Undo();
  }
  public sealed class WebView : FrameworkElement {
    event TypedEventHandler<WebView, WebViewWebResourceRequestedEventArgs> WebResourceRequested;
  }
  public sealed class WebViewWebResourceRequestedEventArgs
}
namespace Windows.UI.Xaml.Controls.Primitives {
  public class FlyoutBase : DependencyObject {
    FlyoutShowMode ShowMode { get; set; }
    public static DependencyProperty ShowModeProperty { get; }
    public static DependencyProperty TargetProperty { get; }
    void Show(FlyoutShowOptions showOptions);
  }
  public enum FlyoutPlacementMode {
    BottomLeftJustified = 7,
    BottomRightJustified = 8,
    LeftBottomJustified = 10,
    LeftTopJustified = 9,
    RightBottomJustified = 12,
    RightTopJustified = 11,
    TopLeftJustified = 5,
    TopRightJustified = 6,
  }
  public enum FlyoutShowMode
  public sealed class FlyoutShowOptions : DependencyObject
}
namespace Windows.UI.Xaml.Hosting {
  public sealed class XamlBridge : IClosable
}
namespace Windows.UI.Xaml.Markup {
  public sealed class FullXamlMetadataProviderAttribute : Attribute
}
namespace Windows.Web.UI.Interop {
  public sealed class WebViewControl : IWebViewControl {
    event TypedEventHandler<WebViewControl, object> GotFocus;
    event TypedEventHandler<WebViewControl, object> LostFocus;
  }
  public sealed class WebViewControlProcess {
    string Partition { get; }
  }
  public sealed class WebViewControlProcessOptions {
    string Partition { get; set; }
  }
}

The post Windows 10 SDK Preview Build 17672 available now! appeared first on Windows Developer Blog.


Control Azure Data Lake costs using Log Analytics to create service alerts


Azure Data Lake customers use the Data Lake Store and Data Lake Analytics to store and run complex analytics on massive amounts of data. However, it is challenging to manage costs, keep up to date with activity in the accounts, and proactively know when usage thresholds are nearing certain limits. Using Log Analytics with Azure Data Lake, we can address these challenges and know when costs are increasing or when certain activities take place.


In this post, you will learn how to use Log Analytics with your Data Lake accounts to create alerts that can notify you of Data Lake activity events and when certain usage thresholds are reached. It is easy to get started!

Step 1: Connect Azure Data Lake and Log Analytics

Data Lake accounts can be configured to generate diagnostics logs, some of which are automatically generated (e.g. regular Data Lake operations such as reporting current usage, or whenever a job completes). Others are generated based on requests (e.g. when a new file is created, opened, or when a job is submitted). Both Data Lake Analytics and Data Lake Store can be configured to send these diagnostics logs to a Log Analytics account where we can query the logs and create alerts based on the query results.

To send diagnostics logs to a Log Analytics account, follow the steps outlined in the blog post Struggling to get insights for your Azure Data Lake Store? Azure Log Analytics can help!

Step 2: Create a query that can identify a specific event or aggregated threshold

Specific key questions about the state or usage of your Azure Data Lake account can be generally answered with a query that parses usage or metric logs. To query the logs in Log Analytics, in the account home (OMS Workspace), click on Log Search.


In the Log Search blade, you can start typing queries using Log Analytics Query Language:


There are two main types of queries that can be used in Log Analytics to configure alerts:

  • Queries that return individual events. These show one entry per row (e.g. every time a file is opened).
  • Queries that aggregate values or metrics over a specific window of time as a threshold, either by aggregating single events (e.g. 10 files opened in the past five minutes) or the values of a metric (e.g. total AUs assigned to jobs).

Here are some sample queries; the first two return events while the third aggregates values:

  • This query returns a new entry every time a new Data Lake Store folder is created in the specified Azure Data Lake Store (ADLS) account:
AzureDiagnostics
| where Category == "Requests"
| where ResourceProvider == "MICROSOFT.DATALAKESTORE"
| where Resource == "[Your ADLS Account Name]"
| where OperationName == "mkdirs"
  • This query returns a new entry every time a job fails in any of the Data Lake Analytics accounts configured to the Log Analytics workspace:
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.DATALAKEANALYTICS"
| where OperationName == "JobEnded"
| where ResultType == "CompletedFailure"
  • This query returns a list of jobs submitted by users in a 24-hour interval, including user account and sum of jobs submitted in the 24h interval:
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.DATALAKEANALYTICS"
| where OperationName == "SubmitJob"
| summarize AggregatedValue = count(identity_s) by bin(TimeGenerated, 24h), identity_s


Queries like these will be used in the next step when configuring alerts.

Step 3: Create an alert to be notified when the event is detected or when the threshold is reached

Using a query such as those shown in the previous step, Log Analytics can be used to create an alert that will notify users via e-mail, text message, or webhook when the event is captured or metric threshold is reached. Check out this blog post for creating a new alert: Simple Trick to Stay on top of your Azure Data Lake: Create Alerts using Log Analytics.

Please note that the alerts will be slightly delayed and you can read more details regarding the delays and Log Analytics SLAs in Understanding alerts in Log Analytics.


Tell us what you think

Setting up alerts in Log Analytics can help you understand usage and manage costs as utilization increases. The process to set up alerts allows enough flexibility to adapt to your specific needs. Are you looking for a specific metric or usage activity? Reach out and let us know in the comments, or on our feature requests UserVoice. Check out the Azure Data Lake blog, where we regularly share updates and tips on how to get the most out of your Azure Data Lake accounts.

Real Browser Integration Testing with Selenium Standalone, Chrome, and ASP.NET Core 2.1


I find your lack of tests disturbing

Buckle up kids, this is nuts and I'm probably doing it wrong. ;) And it's 2am and I wrote this fast. I'll come back tomorrow and fix the spelling.

I want to have lots of tests to make sure my new podcast site is working well. As mentioned before, I've been updating the site to ASP.NET Core 2.1.

Here are some posts if you want to catch up:

I've been doing my testing with XUnit and I want to test in layers.

Basic Unit Testing

Simply create a Razor Page's Model in memory and call OnGet or WhateverMethod. At this point you are NOT calling Http, there is no WebServer.

public IndexModel pageModel;


public IndexPageTests()
{
var testShowDb = new TestShowDatabase();
pageModel = new IndexModel(testShowDb);
}

[Fact]
public async void MainPageTest()
{
// FAKE HTTP GET "/"
IActionResult result = await pageModel.OnGetAsync(null, null);

Assert.NotNull(result);
Assert.True(pageModel.OnHomePage); //we are on the home page, because "/"
Assert.Equal(16, pageModel.Shows.Count()); //home page has 16 shows showing
Assert.Equal(620, pageModel.LastShow.ShowNumber); //last test show is #620
}

Moving out a layer...

In-Memory Testing with both Client and Server using WebApplicationFactory

Here we are starting up the app and calling it with a client, but the "HTTP" of it all is happening in memory/in process. There are no open ports, there's no localhost:5000. We can still test HTTP semantics though.

public class TestingFunctionalTests : IClassFixture<WebApplicationFactory<Startup>>

{
public HttpClient Client { get; }
public ServerFactory<Startup> Server { get; }

public TestingFunctionalTests(ServerFactory<Startup> server)
{
Client = server.CreateClient();
Server = server;
}

[Fact]
public async Task GetHomePage()
{
// Arrange & Act
var response = await Client.GetAsync("/");

// Assert
Assert.Equal(HttpStatusCode.OK, response.StatusCode);
}
...
}

Testing with a real Browser and real HTTP using Selenium Standalone and Chrome

THIS is where it gets interesting with ASP.NET Core 2.1 as we are going to fire up both the complete web app, talking to the real back end (although it could talk to a local test DB if you want) as well as a real headless version of Chrome being managed by Selenium Standalone and talked to with the WebDriver. It sounds complex, but it's actually awesome and super useful.

First I add references to Selenium.Support and Selenium.WebDriver to my Test project:

dotnet add package Selenium.Support

dotnet add package Selenium.WebDriver

Make sure you have node and npm then you can get Selenium Standalone like this:

npm install -g selenium-standalone@latest

selenium-standalone install

Chrome is being controlled by automated test software

Selenium, to be clear, puts your browser on a puppet's strings. Even Chrome knows it's being controlled! It's using the (soon to be standard, but clearly de facto standard) WebDriver protocol. Imagine if your browser had a localhost REST protocol where you could interrogate it and click stuff! I've been using Selenium for over 11 years. You can even test actual Windows apps (not in the browser) with WinAppDriver/Appium but that's for another post.

Now for this part, bear with me because my ServerFactory class I'm about to make is doing two things. It's setting up my ASP.NET Core 2.1 app and actually running it so it's listening on https://localhost:5001. It's assuming a few things that I'll point out. It also (perhaps questionably) launches Selenium Standalone from within its constructor. Questionable, to be clear, and there's other ways to do this, but this is VERY simple.

If it offends you, remembering that you do need to start Selenium Standalone with "selenium-standalone start" you could do it OUTSIDE your test in a script.

Perhaps do the startup/teardown work in a PowerShell or Shell script. Start it up, save the process id, then stop it when you're done. Note I'm also checking code coverage here with Coverlet but that's not related to Selenium - I could just "dotnet test."

#!/usr/local/bin/powershell

$SeleniumProcess = Start-Process "selenium-standalone" -ArgumentList "start" -PassThru
dotnet test /p:CollectCoverage=true /p:CoverletOutputFormat=lcov /p:CoverletOutput=./lcov .hanselminutes.core.tests
Stop-Process -Id $SeleniumProcess.Id

Here my SeleniumServerFactory is getting my Browser and Server ready.

SIDEBAR NOTE: I want to point out that this is NOT perfect and it's literally the simplest thing possible to get things working. It's my belief, though, that there are some problems here and that I shouldn't have to fake out the "new TestServer" in CreateServer there. While the new WebApplicationFactory is great for in-memory unit testing, it should be just as easy to fire up your app and use a real port for things like Selenium testing. Here I'm building and starting the IWebHostBuilder myself (!) and then making a fake TestServer only to satisfy the CreateServer method, which I think should not have a concrete class return type. For testing, ideally I could easily get either an "InMemoryWebApplicationFactory" and a "PortUsingWebApplicationFactory" (naming is hard). Hopefully this is somewhat clear and something that can be easily adjusted for ASP.NET Core 2.1.x.

My app is configured to listen on both http://localhost:5000 and https://localhost:5001, so you'll note where I'm getting that last value (in an attempt to avoid hard-coding it). We also are sure to stop both Server and Browser in Dispose() at the bottom.

public class SeleniumServerFactory<TStartup> : WebApplicationFactory<Startup> where TStartup : class

{
public string RootUri { get; set; } //Save this for use by tests

Process _process;
IWebHost _host;

public SeleniumServerFactory()
{
ClientOptions.BaseAddress = new Uri("https://localhost"); //will follow redirects by default

_process = new Process() {
StartInfo = new ProcessStartInfo {
FileName = "selenium-standalone",
Arguments = "start",
UseShellExecute = true
}
};
_process.Start();
}

protected override TestServer CreateServer(IWebHostBuilder builder)
{
//Real TCP port
_host = builder.Build();
_host.Start();
RootUri = _host.ServerFeatures.Get<IServerAddressesFeature>().Addresses.LastOrDefault(); //Last is https://localhost:5001!

//Fake Server we won't use...this is lame. Should be cleaner, or a utility class
return new TestServer(new WebHostBuilder().UseStartup<TStartup>());
}

protected override void Dispose(bool disposing)
{
        base.Dispose(disposing);
        if (disposing) {
            _host.Dispose();
_process.CloseMainWindow(); //Be sure to stop Selenium Standalone
        }
    }
}

But what does a complete series of tests look like? I have a Server, a Browser, and a (theoretically optional) HttpClient. Focus on the Browser and Server.

At the point when a single test starts, my site is up (the Server) and an invisible headless Chrome (the Browser) is actually being puppeted with local calls via WebDriver. All this is hidden from you - if you want. You can certainly see Chrome (or other browsers) get automated, but what's nice about Selenium Standalone with hidden/headless Browser testing is that my unit tests now also include these complete Integration Tests and can run as part of my Continuous Integration Build.

Again, layers. I test classes, then move out and test Http Request/Response interactions, and finally the site is up and I'm making sure I can navigate and that data is loading. I'm automating the "smoke tests" that I used to do myself! And I can make as many of these as I'd like now that the scaffolding work is done.

public class SeleniumTests : IClassFixture<SeleniumServerFactory<Startup>>, IDisposable

{
public SeleniumServerFactory<Startup> Server { get; }
public IWebDriver Browser { get; }
public HttpClient Client { get; }
public ILogs Logs { get; }

public SeleniumTests(SeleniumServerFactory<Startup> server)
{
Server = server;
Client = server.CreateClient(); //weird side effecty thing here. This call shouldn't be required for setup, but it is.

var opts = new ChromeOptions();
opts.AddArgument("--headless"); //Optional, comment this out if you want to SEE the browser window
opts.SetLoggingPreference(OpenQA.Selenium.LogType.Browser, LogLevel.All);

var driver = new RemoteWebDriver(opts);
Browser = driver;
Logs = new RemoteLogs(driver); //TODO: Still not bringing the logs over yet
}

[Fact]
public void LoadTheMainPageAndCheckTitle()
{
Browser.Navigate().GoToUrl(Server.RootUri);
Assert.StartsWith("Hanselminutes Technology Podcast - Fresh Air and Fresh Perspectives for Developers", Browser.Title);
}

[Fact]
public void ThereIsAnH1()
{
Browser.Navigate().GoToUrl(Server.RootUri);

var headerSelector = By.TagName("h1");
Assert.Equal("HANSELMINUTES PODCASTrnby Scott Hanselman", Browser.FindElement(headerSelector).Text);
}

[Fact]
public void KevinScottTestThenGoHome()
{
Browser.Navigate().GoToUrl(Server.RootUri + "/631/how-do-you-become-a-cto-with-microsofts-cto-kevin-scott");

var headerSelector = By.TagName("h1");
var link = Browser.FindElement(headerSelector);
link.Click();
Assert.Equal(Browser.Url.TrimEnd('/'),Server.RootUri); //WTF
}

public void Dispose()
{
Browser.Dispose();
}
}

Here's a build, unit test/selenium test with code coverage actually running. I started running it from PowerShell. The black window in the back is Selenium Standalone doing its thing (again, could be hidden).

Two consoles, one with PowerShell running XUnit and one running Selenium

If I comment out the "--headless" line, I'll see this as Chrome is automated. Cool.

Chrome is loading my site and being automated

Of course, I can also run these in the .NET Core Test Explorer in either Visual Studio Code, or Visual Studio.

image

Great fun. What are your thoughts?


Sponsor: Check out JetBrains Rider: a cross-platform .NET IDE. Edit, refactor, test and debug ASP.NET, .NET Framework, .NET Core, Xamarin or Unity applications. Learn more and download a 30-day trial!



© 2018 Scott Hanselman. All rights reserved.
     

Do more with Chef and Microsoft Azure


We’re committed to making Azure work great with the open source tools you know and love, and if you’re using Chef products or open source projects, there’s never been a better time to try Azure. We’ve had a rich history of partnership and collaboration with Chef to deliver automation tools that help you with cloud adoption. Today, at ChefConf, the Chef and Azure teams are excited to announce the inclusion of Chef InSpec, directly in Azure Cloud Shell, as well as the new Chef Developer Hub in Azure Docs.

InSpec in Azure Cloud Shell

In addition to other open source tools like Ansible and Terraform that are already available, today we are announcing the availability of Chef InSpec, pre-installed and ready to use for every Azure user in the Azure Cloud Shell. This makes bringing your InSpec tests to Azure super simple; in fact, it's the easiest way to try out InSpec – no installation or configuration required.

 


Figure 1: InSpec Exec within Azure Cloud Shell

Chef Developer Hub for Azure

We are launching the new Chef Developer Hub so Azure customers can more easily implement their solutions using Chef open source software. Whether you're using Chef, InSpec, or Habitat, you'll find five-minute quick starts, tutorials, and reference materials to help get you started and successfully build a solution. All of our docs are open source and hosted on GitHub. We look forward to getting your feedback, which you can give directly from within the docs or via GitHub.

Habitat support for Azure Kubernetes Service and Azure Container Registry

Earlier this month at Microsoft Build, Chef announced new integrations for Habitat, with our fully managed container registry and Kubernetes services, Azure Container Registry (ACR) and Azure Kubernetes Services (AKS).

With these new integrations you can publish to ACR directly from the Habitat Builder service. This allows you to have a seamless, integrated workflow using Habitat, from pushing code to GitHub, right through to your service being deployed into AKS, complete with integrated monitoring and management.

 


Figure 2: How to deploy services from GitHub to ACR and AKS

We’re excited to build on our partnership and take the next steps with Chef to deliver new automation solutions that will benefit Azure and Azure Stack customers, enabling cloud success. If you’re at ChefConf, drop by our booth to chat and see a demo of the new integrations. If you’re not able to join us in person, make sure you check out the new Chef Developer Hub to see how easy it is to bring your Chef solutions to Azure. 

Load confidently with SQL Data Warehouse PolyBase Rejected Row Location


Every row of your data is an insight waiting to be found. That is why it is critical you can get every row loaded into your data warehouse. When the data is clean, loading data into Azure SQL Data Warehouse is easy using PolyBase. It is elastic, globally available, and leverages Massively Parallel Processing (MPP). In reality, clean data is a luxury that is not always available. In those cases you need to know which rows failed to load and why.

In Azure SQL Data Warehouse the Create External Table definition has been extended to include a Rejected_Row_Location parameter. This value represents the location in the External Data Source where the Error File(s) and Rejected Row(s) will be written.

CREATE EXTERNAL TABLE [dbo].[Reject_Example]
(
[Col_one] TINYINT NULL,
[Col_two] VARCHAR(100) NULL,
[Col_three] NUMERIC(2,2) NULL
)
WITH
(
DATA_SOURCE = EDS_Reject_Row
,LOCATION = 'Read_Directory'
,FILE_FORMAT = CSV
,REJECT_TYPE = VALUE
,REJECT_VALUE = 100
,REJECTED_ROW_LOCATION='Reject_Directory'
)

What happens when data is loaded?

When a user runs a Create Table as Select (CTAS) on the table above, PolyBase creates a directory on the External Data Source at the Rejected_Row_Location if one doesn’t exist. A child directory is created with the name “_rejectedrows”. The “_” character ensures that the directory is escaped for other data processing, unless explicitly named in the location parameter. Within this directory there is a folder created based on the time of load submission in the format YearMonthDay-HourMinuteSecond (ex. 20180330-173205). In this folder, two types of files are written, the _reason file and the data file.

The reason files and the data files both have the queryID associated with the CTAS statement. Because the data and the reason are in separate files, corresponding files have a matching suffix.

Next Steps

We are excited to offer this new capability to all SQL DW customers. For syntax, take a look at the CREATE EXTERNAL TABLE (Transact-SQL) documentation. Download the latest version of SQL Server Management Studio (SSMS).

An update on the integration of Avere Systems into the Azure family


It has been three months since we closed on the acquisition of Avere Systems. Since that time, we’ve been hard at work integrating the Avere and Microsoft families, growing our presence in Pittsburgh and meeting with customers and partners at The National Association of Broadcasters Show.

It's been exciting to hear how Avere has helped businesses address a broad range of compute and data challenges, helping produce blockbuster movies and life-saving drug therapies faster than ever before with hybrid and public cloud options. I've also appreciated having the opportunity to address our customers' questions and concerns and thought it might be helpful to share the most common ones with the broader Azure/Avere community:

  • When will Avere be available on Microsoft Azure?
    • We are on track to release Microsoft Avere vFXT to the Azure Marketplace later this year. With this technology, Azure customers will be able to run compute-intensive applications completely on Azure or to take advantage of our scale on an as-needed basis.
  • Will Microsoft continue to support the Avere FXT physical appliance?
    • Yes, we will continue to invest in, upgrade and support the Microsoft Avere FXT physical appliance, which customers tell us is particularly important for their on-premise and hybrid environments.
  • Will Microsoft continue to support Avere vFXT on other public cloud platforms?
    • Yes, we will continue supporting current offerings after we ship Microsoft Avere vFXT on Microsoft Azure.

Avere was a strategic acquisition for Microsoft, with a great team and technology. Ron Bianchini, the co-founder and ex-CEO of Avere Systems, is an award-winning serial entrepreneur, former professor of computer engineering at Carnegie Mellon University and now a distinguished engineer at Microsoft. And Mike Kazar, co-founder and ex-CTO of Avere Systems, Microsoft Partner Software Engineer, and an acknowledged expert in network file systems, was named the recipient of the 2013 IEEE Reynold B. Johnson Information Storage Systems Award for his outstanding contributions to information storage systems.

It’s exciting to have Ron, Mike and the whole Avere team onboard at Microsoft. You can learn more about Avere here and we’ll continue to update you on product availability, roll-out plans and more through the Azure blog.

Serverless real-time notifications in Azure using Azure #CosmosDB



There were lots of announcements at the Microsoft Build 2018 conference, but one that caught my eye was the preview release of Azure SignalR, a Platform-as-a-Service (PaaS) offering that lets you implement real-time messages and notifications quite easily, without worrying about instances or hosts.

So it made me wonder, could I build something using my favorite globally-distributed and serverless database, Azure Cosmos DB, and Azure’s serverless compute offering, Azure Functions? It turns out others were interested in this topic too.

Real-time, really?

For those of you that do not know, SignalR is a library that’s been around since 2013 for the ASP.NET Framework, recently rewritten for ASP.NET Core under the name of SignalR Core, that allows you to easily create real-time applications and push content to clients through the Websocket protocol, gracefully falling back to other alternatives depending on the client. It works great for games, dashboards/monitoring apps, collaborative apps, mapping/tracking apps, or any app requiring notifications.

By leveraging the Websocket protocol, content can be pushed to clients without the overhead of opening multiple HTTP connections and over a single two-way TCP channel that is maintained for the entire session.
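
For context, this is roughly what a minimal self-hosted SignalR Core hub looks like (the hub class, method, and event names here are illustrative, not from this post); the rest of this post is about removing the need to host and scale something like this yourself.

using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

// Clients connected to this hub receive pushed messages without polling.
public class ChatHub : Hub
{
    public Task SendMessage(string user, string message) =>
        // Broadcast to every connected client; clients handle the "newMessage" event.
        Clients.All.SendAsync("newMessage", user, message);
}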

Going serverless!

One requirement of SignalR was that you, the developer, had to have a host to manage and maintain connections with clients, usually deployed as a Web instance. In Azure, you can do it in any of the PaaS offerings like App Service and Cloud Services. While it wasn’t a complex task, it still meant you had to maintain and ensure the service could handle a number of connections and the load based on your design and user load definitions.

With Azure SignalR, the host is provided for you. You only need to connect the clients and servers to the exposed service endpoint. You can even scale the service based on your needs transparently.

If you add a serverless database plus Azure Functions as a serverless platform to run all your code, then the result would be… Serverless, Serverless everywhere!

Streamlining notifications

The architecture would need:

  • A client that can save data and receive real-time notifications. For simplicity, a web client was the fastest option, and to showcase the idea we picked a common chat application.
  • An API that can receive data from the clients and save them to the database.
  • Some mechanism that can act upon new data and notify all clients in real-time.

Since I was using Azure Functions (in Consumption Plan) as my compute layer, I distributed all functionality in four functions:

  • FileServer which acts as a serverless file server for static files that will let a web client browse and obtain the files in the www folder, as if there was a web host continually running (but shhh, there isn’t!). The web client is using Azure SignalR’s npm package for connectivity and transport protocol resolution. Alternatively, static assets could be served through Azure CDN too.
  • SaveChat, which will receive chat messages from the connected web clients and save it to Cosmos DB using Output bindings.
  • SignalRConfiguration, which will send the required information to the web client to initialize SignalR’s Websocket connection.
  • FeedToSignalR, which will trigger a CosmosDB Trigger, based on new data in Azure Cosmos DB, and broadcast it through Azure SignalR to all connected clients.
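
As a rough sketch of what a function like FeedToSignalR can look like (this is not the exact code from the repo; it assumes the Azure Functions Cosmos DB trigger and the SignalR Service output binding, and the database, collection, hub, and target names are placeholders):

using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Azure.Documents;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.SignalRService;

public static class FeedToSignalRSketch
{
    [FunctionName("FeedToSignalR")]
    public static Task Run(
        [CosmosDBTrigger(
            databaseName: "chat",
            collectionName: "messages",
            ConnectionStringSetting = "CosmosDBConnection",
            LeaseCollectionName = "leases",
            CreateLeaseCollectionIfNotExists = true)]
        IReadOnlyList<Document> newMessages,
        [SignalR(HubName = "chat")] IAsyncCollector<SignalRMessage> signalRMessages)
    {
        // Fan each new Cosmos DB document out to every connected SignalR client.
        return Task.WhenAll(newMessages.Select(doc =>
            signalRMessages.AddAsync(new SignalRMessage
            {
                Target = "newMessage",
                Arguments = new object[] { doc }
            })));
    }
}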

In order to support custom routes, in particular for the static web host, I implemented Azure Functions Proxies through a proxies.json file. So, when the user browses the base URL, it is instead calling one of the HTTP-triggered functions.

The complete flow is as follows:

  • When the web client loads the static resources (e.g., browses the base URL), it pulls the SignalR configuration from SignalRConfiguration.
  • It will then negotiate with Azure SignalR the best transport protocol that the browser supports and connect.


  • When the user writes a message, it will save it to Azure Cosmos DB via an Ajax call to SaveChat.


  • Each chat line is stored as a document in Azure Cosmos DB.


  • The FeedToSignalR will trigger and broadcast it to all Azure SignalR connected clients.


The final result is a complete serverless flow that consumes resources only when needed, and doesn’t need to maintain any host or extra service layer.

While this is just a chat app, it can easily be implemented in other flows like IoT, dashboards, system-wide broadcasts, and a big etcetera, dream big!

Next steps

The code is public on GitHub and it even includes a nice button that will deploy the complete architecture for you in your Azure subscription in one click!


Clone, fork, and use it as a base for your new or current projects. Just remember that Azure SignalR is still in preview; the libraries or service APIs might change in the future. You can try Azure Cosmos DB for free today, no sign-up or credit card required. Stay up-to-date on the latest Azure Cosmos DB news and features by following us on Twitter #CosmosDB, @AzureCosmosDB.

Create enterprise subscription experience in Azure portal public preview


Typically, Azure Enterprise Agreement (EA) subscriptions were created in the EA Portal and management of services was completed in the Azure portal. Our goal is to converge on the Azure portal as the primary avenue for users to manage their Azure services and subscriptions.

We are making available the public preview of the create subscription experience in the Azure portal. This capability directly aligns with the ability to create multiple enterprise subscriptions using the Create Subscription API. This experience is fully integrated in the Azure portal and will enable you to quickly get an EA subscription created without any programming.

Getting started

The following steps only apply to EA and EA Dev/Test subscriptions. The majority of users will be able to access the user experience below. There will be some users who do not meet the prerequisites to create a subscription in the Azure portal. For those users, the “+Add” button will open a separate window to create new subscriptions.

The steps for using the create enterprise subscription experience in the Azure portal are as follows:

  1. If you are not an account owner, get added by an EA enrollment admin.
  2. Navigate to the Subscriptions extension in the Azure portal.
  3. Click the “+ Add” button in the top left corner of the experience.
  4. Fill out the new subscription name and offer. You will be able to change the subscription name later using the rename capability, if needed.
  5. Click the “Create” button.


A notification will appear in the upper right-hand corner, indicating that the new subscription is being created.


It might take some time for the new subscription to be created. After a few minutes, be sure to refresh the Subscription extension to get the latest subscription list.

More resources for this topic


Azure AD Authentication for Azure Storage now in public preview


We are excited to announce the preview of Azure AD Authentication for Azure Blobs and Queues. This capability is one of the features most requested by enterprise customers looking to simplify how they control access to their data as part of their security or compliance needs. This capability is available in all public regions of Azure.

Azure Storage supports several mechanisms that give you flexibility to control who can access your data, as well as how, when, and from where they can access it. With AAD authentication, customers can now use Azure's role-based access control framework to grant specific permissions to users, groups and applications down to the scope of an individual blob container or queue. This capability extends the existing Shared Key and SAS Tokens authorization mechanisms which continue to be available.

Developers can also leverage Managed Service Identity (MSI) to give Azure resources (Virtual Machines, Function Apps, Virtual Machine Scale Set etc.) an automatically managed identity in Azure AD. Administrators can assign roles to these identities and run applications securely, without having any credentials in your code.
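
As an illustration, accessing blobs with an Azure AD identity instead of a storage key could look like the sketch below. It assumes the newer Azure.Identity and Azure.Storage.Blobs packages (which postdate this announcement), placeholder account and container names, and that the identity has been granted an RBAC role such as Storage Blob Data Reader.

using System;
using System.Threading.Tasks;
using Azure.Identity;
using Azure.Storage.Blobs;

public static class BlobAadAuthSketch
{
    public static async Task ListBlobsAsync()
    {
        // DefaultAzureCredential resolves to the managed identity when running in Azure,
        // or to developer credentials when running locally.
        var credential = new DefaultAzureCredential();

        var blobService = new BlobServiceClient(
            new Uri("https://mystorageaccount.blob.core.windows.net"),
            credential);

        var container = blobService.GetBlobContainerClient("my-container");

        // Listing succeeds only if the identity has an appropriate data-plane role.
        await foreach (var blob in container.GetBlobsAsync())
        {
            Console.WriteLine(blob.Name);
        }
    }
}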



Administrators can grant permissions and use AAD Authentication with any Azure Resource Manager storage account using the Azure portal, Azure PowerShell, CLI or the Microsoft Azure Authorization Resource Provider API. This feature is available for all redundancy types of Azure Storage.

As with most previews, this should not be used for production workloads and there will be no production SLA until the feature becomes Generally Available.

Find out more about Azure AD Authentication for Storage.

ASP.NET Core Performance Improvements

This is a guest post by Mike Rousos

I recently had an opportunity to help a developer with an ASP.NET Core app that was functionally correct but slow when under a heavy user load. We found a few different factors contributing to the app’s slowdown while investigating, but the majority of the issues were some variation of blocking threads that could have run in a non-blocking way. It was a good reminder for me just how crucial it is to use non-blocking patterns in multi-threaded scenarios like a web app.

Beware of Locks

One of the first problems we noticed (through CPU analysis with PerfView) was that a lot of time was spent in logging code paths. This was confirmed with ad hoc exploration of call stacks in the debugger which showed many threads blocked waiting to acquire a lock. It turns out some common logging code paths in the application were incorrectly flushing Application Insights telemetry. Flushing App Insights requires a global lock and should generally not be done manually during the course of an app’s execution. In this case, though, Application Insights was being flushed at least once per HTTP request and, under load, this became a large bottleneck!

You can see this sort of pattern in the images below from a small repro I made. In this sample, I have an ASP.NET Core 2.0 web API that enables common CRUD operations against an Azure SQL database with Entity Framework Core. Load testing the service running on my laptop (not the best test environment), requests were processed in an average of about 0.27 seconds. After adding a custom ILoggerProvider calling Console.WriteLine inside of a lock, though, the average response time rose to 1.85 seconds – a very noticeable difference for end users. Using PerfView and a debugger, we can see that a lot of time (66% of PerfView’s samples) is spent in the custom logging method and that a lot of worker threads are stuck there (delaying responses) while waiting for their turn with the lock.


Something’s up with this logging call

 


Threads waiting on lock acquisition

ASP.NET Core’s Console logger used to have some locking like this in versions 1.0 and 1.1, causing it to be slow in high-traffic scenarios, but these issues have been addressed in ASP.NET Core 2.0. It is still a best practice to be mindful of logging in production, though.

For very performance-sensitive scenarios, you can use LoggerMessage to optimize logging even further. LoggerMessage allows defining log messages ahead of time so that message templates don't need to be parsed every time a particular message is logged. More details are available in our documentation, but the basic pattern is that log messages are defined as strongly-typed delegates:

// This delegate logs a particular predefined message
private static readonly Action<ILogger, int, Exception> _retrievedWidgets =
    LoggerMessage.Define<int>(
        LogLevel.Information,
        new EventId(1, nameof(RetrievedWidgets)),
        "Retrieved {Count} widgets");

// A helper extension method to make it easy to call the 
// LoggerMessage-produced delegate from an ILogger
public static void RetrievedWidgets(this ILogger logger, int count) =>
    _retrievedWidgets(logger, count, null);

Then, that delegate is invoked as needed for high-performance logging:

var widgets = await _dbContext.Widgets.AsNoTracking().ToListAsync();
_logger.RetrievedWidgets(widgets.Count);

Keep Asynchronous Calls Asynchronous

Another issue our investigation uncovered in the slow ASP.NET Core app was similar: calling Task.Wait() or Task.Result on asynchronous calls made from the app’s controllers instead of using await. By making controller actions async and awaiting these sorts of calls, the executing thread is freed to go serve other requests while waiting for the invoked task to complete.
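
To make the difference concrete, here is a contrived sketch (the IWidgetService interface and its GetWidgetsAsync method are hypothetical names used only for illustration) contrasting the blocking pattern with its asynchronous equivalent:

using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

public interface IWidgetService
{
    Task<IReadOnlyList<string>> GetWidgetsAsync();
}

[Route("api/widgets")]
public class WidgetsController : Controller
{
    private readonly IWidgetService _widgetService;

    public WidgetsController(IWidgetService widgetService) => _widgetService = widgetService;

    // Anti-pattern: .Result blocks the request thread until the task completes.
    [HttpGet("blocking")]
    public IActionResult GetWidgetsBlocking()
    {
        var widgets = _widgetService.GetWidgetsAsync().Result;
        return Ok(widgets);
    }

    // Preferred: awaiting frees the thread to serve other requests in the meantime.
    [HttpGet]
    public async Task<IActionResult> GetWidgets()
    {
        var widgets = await _widgetService.GetWidgetsAsync();
        return Ok(widgets);
    }
}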

I reproduced this issue in my sample application by replacing async calls in the action methods with synchronous alternatives. At first, this only caused a small slowdown (0.32 second average response instead of 0.27 seconds) because the async methods I was calling in the sample were all pretty quick. To simulate longer async tasks, I updated both the async and synchronous versions of my sample to have a Task.Delay(200) in each controller action (awaited in the async version and .Wait()-ed in the synchronous one). In the async case, average response time went from 0.27s to 0.46s, which is more or less what we would expect if each request has an extra pause of 200ms. In the synchronous case, though, the average time went from 0.32 seconds to 1.47 seconds!

The charts below demonstrate where a lot of this slowdown comes from. The green lines in the charts represent requests served per second and the red lines represent user load. In the first chart (which was taken while running the async version of my sample), you can see that as users increase, more requests are being served. In the second chart (corresponding to the Task.Wait() case), on the other hand, there's a strange pattern of requests per second remaining flat for several minutes after user load increases and only then increasing to keep up. This is because the existing pool of threads serving requests couldn't keep up with more users (since they were all blocked on Task.Wait() calls) and throughput didn't improve until more threads were created.


Asynchronous RPS compared to user load

 


Synchronous RPS compared to user load

 

Attaching a debugger to both scenarios, I found that 75 managed threads were being used in the async test but 232 were in use in the synchronous test. Even though the synchronous test did eventually add enough threads to handle the incoming requests, calling Task.Result and Task.Wait can cause slowdowns when user load changes. Analyzers (like AsyncFixer) can help to find places where asynchronous alternatives can be used and there are EventSource events that can be used to find blocking calls at runtime, if needed.

Wrap-Up

There were some other perf issues in the application I helped investigate (server GC wasn’t enabled in ASP.NET Core 1.1 templates, for example, something that has been corrected in ASP.NET Core 2.0), but one common theme of the problems we found was around blocking threads unnecessarily. Whether it’s from lock contention or waiting on tasks to finish, it’s important to keep threads unblocked for good performance in ASP.NET Core apps.

If you’d like to dig into your own apps to look for perf trouble areas, check out the Channel9 PerfView tutorials for an overview of how PerfView can help uncover CPU and memory-related perf issues in .NET applications.

London Midland Firstline Workers streamline operations and stay connected with Office 365


Today’s post was written by Kirk Trewin, head of fleet production at London Midland.

Profile picture of Kirk Trewin, head of fleet production at London Midland.

London Midland trains are an important part of our customers’ everyday lives. Passengers rely on our service for their essential journeys; ensuring our trains run on time means we keep our passengers’ days running smoothly. This commitment to great service led us to invest in Office 365 and Surface Hub. We’re using this technology to keep our digital transformation on track—helping to empower employees to reduce downtime, connect better with customers, and ensure safe and reliable operations.

We are experiencing an increase in footfall across our rail network and this expansion requires London Midland employees to work smarter. It’s critical that everything we do is precise and accurate, even seemingly simple procedures like opening doors to let passengers on and off a train are the result of complex processes that must work perfectly. So, behind the scenes at London Midland, informed decisions need to be made as quickly as possible. The reliability of our service hangs in the balance, and with it, our customers’ trust. If a train requires repair, we use Office 365 apps to bring a wealth of information straight to our technicians via mobile devices, so they can communicate with colleagues and refer to training materials when and where they need to.

Our Firstline Workers use Skype for Business to stay connected with colleagues in the head office, and the ability to share videos and photographs from the field makes the decision-making process incredibly efficient. For example, if a technician identifies corrosion on a unit, a video call makes it easy to consult coworkers and arrive at a solution. In the past, a technician had to gather evidence from a train, create a report, and present it to colleagues at the office. This process could take days, but with integrated collaboration tools from Office 365, we are down to minutes. Our trains spend less time in the depot awaiting a verdict, reducing service downtime.

We were searching for technology that made life easier for technicians, who represent a valuable segment of our Firstline Workforce. We’ve introduced Surface Hubs to streamline their access to available, consistent training materials. It’s been a huge success; technicians can refresh their knowledge or consult a training video instead of flipping through a 2,000-page manual or waiting to consult another busy colleague. Looking ahead, we’re producing digital, interactive schematics for Surface Hub, so our technicians can test the right solution for a technical problem on a unit. Virtual troubleshooting means technicians can make more informed diagnoses, further reducing unnecessary downtime.

Ours is, by definition, a mobile business—London Midland employees need to stay productive across our network. Office 365 tools like OneDrive have been game changers, and employees are embracing cloud-based document sharing and collaboration, working securely from anywhere, on any device. This has truly streamlined the way we do business. Our meetings are more productive, mainly due to OneNote, a digital notetaking app, which makes recording and sharing action items easy and intuitive.

For a distributed, fast-moving workforce, staying connected can be a challenge. We believe a connected, engaged workforce is more productive and empowered. Our company intranet, built using SharePoint Online, is where employees go for up-to-the-minute information about the company. This is particularly beneficial for Firstline Workers who interact with our customers at our 150 stations and on the trains. Our stations used to be equipped with a ticket issuing system and nothing more. Today, platform and train crews are connected to the wireless network, our intranet, and all the Office 365 productivity apps to work more efficiently. With a wealth of data at their fingertips, Firstline Workers are providing customers with accurate information and better service. And across the business, we use Yammer to share strategies and solve problems from the ground up.

We use Surface Hubs and Power BI to drive key performance indicators, so our fleet engineers can track a particular class of vehicle to see how it performs in the field, or drill into details to offer solutions for the production department. We have always stored a huge amount of data in charts and reports, but with information displayed in user-friendly dashboards on Surface Hub, we gain new insight into our operations and can improve performance indicators, such as miles per technical incident, by making faster, more informed decisions.

The more information we have, and the more open our lines of communication, the better equipped we are to streamline every aspect of our service. As we strive to make our fleet the most reliable in the country, we’re using modern, digital tools to work more effectively toward ensuring a safe, efficient service for our customers.

—Kirk Trewin

Read the case study to learn more about how London Midland is using Office 365 and Surface Hub to find innovative ways to serve passengers.

The post London Midland Firstline Workers streamline operations and stay connected with Office 365 appeared first on Microsoft 365 Blog.

Use packages reliably with upstreams for VSTS feeds

Software packages are a crucial part of development in languages ranging from C# to JavaScript to Python to Go. They help you iterate faster, avoid solving a problem that’s been solved many times before, and allow you to focus on your unique value. But they can also add uncertainty and risk to your development process.... Read More

Updated Microsoft Store App Developer Agreement and GDPR


As of May 23rd, the Microsoft Store team has updated the Microsoft Store App Developer Agreement (ADA). The next time you log in to the Dev Center dashboard, you will be prompted to reaccept the ADA before you can update or manage your apps.

In the new version, we are making a few changes to clarify the restrictions around using and storing personal information in accordance with the General Data Protection Regulation (GDPR). For more information, you can view the full ADA and change history.

Note: This ADA update DOES NOT include the new Microsoft Store revenue model announced at Microsoft Build 2018.

What is GDPR?

On May 25, 2018, a European privacy law, the General Data Protection Regulation (GDPR), is set to take effect.

The GDPR imposes new rules on companies, government agencies, non-profits, and other organizations that offer goods and services to people in the European Union (EU), or that collect and analyze data tied to EU residents. The GDPR applies no matter where you are located.

Under GDPR, what constitutes personal information?

You can refer to the European Commission’s official website on data protection for more information, but we suggest you confer with your own legal or regulatory compliance team to address any specific questions you have.

Learn more about GDPR

To learn more about GDPR, please visit the European Commission’s official website on data protection.

We also encourage you to visit Microsoft.com/GDPR for resources and best practices for GDPR compliance.  You can even assess your GDPR compliance with a quick, interactive 10-question evaluation.

The post Updated Microsoft Store App Developer Agreement and GDPR appeared first on Windows Developer Blog.

.NET Framework May 2018 Preview of Quality Rollup for Windows 10 April 2018 Update (version 1803)


Today, we are releasing the May 2018 Preview of Quality Rollup for Windows 10 April 2018 Update (version 1803).

Quality and Reliability

This release contains the following quality and reliability improvements.

CLR

  • Resolves an issue in WindowsIdentity.Impersonate where handles were not being explicitly cleaned up. [581052]
  • Resolves an issue in deserialization when using a collection that ignores casing (for example, ConcurrentDictionary). [524135]
  • Removes case where floating-point overflow occurs in the thread pool’s hill climbing algorithm. [568704]
  • Resolves instances of high CPU usage with background garbage collection. This can be observed with the following two functions on the stack: clr!*gc_heap::bgc_thread_function, ntoskrnl!KiPageFault. Most of the CPU time is spent in the ntoskrnl!ExpWaitForSpinLockExclusiveAndAcquire function. This change updates background garbage collection to use the CLR implementation of write watch instead of the one in Windows. [574027]
  • Resolves a floating-point overflow in the thread pool’s hill climbing algorithm. [569602]

WPF

  • A crash can occur during shutdown of an application that hosts WPF content in a separate AppDomain. (A notable example of this is an Office application hosting a VSTO add-in that uses WPF.) [543980]
  • Addresses an issue that caused XAML Browser Applications (XBAPs) targeting .NET 3.5 to sometimes be loaded incorrectly using the .NET 4.x runtime. [555344]
  • A WPF application can crash due to a NullReferenceException if a Binding (or MultiBinding) used in a DataTrigger (or MultiDataTrigger) belonging to a Style (or Template, or ThemeStyle) reports a new value, but whose host element gets GC’d in a very narrow window of time during the reporting process. [562000]
  • A WPF application can crash due to a spurious ElementNotAvailableException. This can arise if you:
    1. Change TreeView.IsEnabled
    2. Remove an item X from the collection
    3. Re-insert the same item X back into the collection
    4. Remove one of X’s subitems Y from its collection
    (Step 4 can happen any time relative to steps 2 and 3, as long as it’s after step 1. Steps 2–4 must occur before the asynchronous call to UpdatePeer posted by step 1; this will happen if steps 1–4 all occur in the same button-click handler.) [555225]
  • In certain .NET applications, timing issues in the finalizer thread could potentially cause exceptions during AppDomain or process shutdown. [606469]
  • Corrects an issue in WPF applications when inputting Japanese characters via the IME pad. [515186]
  • ComboBox grouped items now report children correctly via UIAutomation. [504282]

Note: Additional information on these improvements is not available. The VSTS bug number provided with each improvement is a unique ID that you can give Microsoft Customer Support, include in StackOverflow comments or use in web searches.

Getting the Update

The Security and Quality Rollup is available via Windows Update, Windows Server Update Services, and Microsoft Update Catalog.

Microsoft Update Catalog

You can get the update via the Microsoft Update Catalog.

Product Version                                Preview of Quality Rollup KB
Windows 10 April 2018 Update (version 1803)    Catalog: 4100403
.NET Framework 4.7.2                           4100403

Previous Monthly Rollups

The last few .NET Framework Monthly updates are listed below for your convenience:

Transact capabilities for SaaS apps now available in Azure Marketplace


Increasingly, customers are turning to cloud marketplaces to discover, trial, and buy cloud solutions. Software as a service (SaaS) apps are a core part of those customer needs. Azure Marketplace has long offered SaaS apps for discovery and trial. At Build, we announced that SaaS apps can now be transacted within Azure Marketplace.

What does this mean for partners?

ISVs building and selling SaaS applications built for Azure can now not only list or offer trials, but also monetize their SaaS applications directly with customers. This allows partners:

To expose offers easily

  • Simple listing with a Contact Me option
  • Easy integration of a trial experience from Azure Marketplace
  • Monetize with a subscription API service

To offer more procurement options

  • Offer simple, flat monthly pre-paid billing
  • Streamline billing for customers through consolidated Azure billing and invoicing
  • Spend less time wrestling with enterprise procurement

To get access to a global customer base and a global salesforce

  • Gather leads immediately into a CRM
  • Let the marketplace facilitate co-selling with Microsoft sellers and help customers find, try, and buy partner SaaS applications

What does this mean for customers?

For IT pros and developers looking for a SaaS offer or subscription, Azure Marketplace allows those users to discover, try, and now subscribe to SaaS solutions. This means customers can:

Find, try, and buy SaaS applications

  • Find dozens of SaaS solutions to meet more business needs and enhance their Cloud Solutions
  • Try solutions with integrated login experience (AAD trial enabled) with access to free trials and downloads
  • Subscribe to SaaS applications with subscription offers

Reduce the friction of procurement and payment

  • Flat monthly pre-paid billing ($/mo)
  • Reduce procurement overhead with billing all delivered through Microsoft
  • Manage subscriptions in one place

Easily manage subscriptions

  • Manage all app subscriptions within Azure Management
  • Cancel easily at any time

Get started with SaaS subscriptions

You can discover SaaS services in both Azure Marketplace and the Azure portal, and you can subscribe to a SaaS service in the Azure portal.

At the time of launch, the supported billing model is a flat monthly fee per subscription of the SaaS service. We are working on enabling additional business models in the future.

You can use the new ‘Software as a service (SaaS)’ experience to discover and manage all your SaaS services.

Once a SaaS service has been subscribed to, it can be in one of the following states:

  • Pending – You have subscribed to the SaaS service in Azure but have not started using it yet; your monthly recurring payment has not yet begun.
  • Subscribed – You have subscribed to the SaaS service in Azure and started consuming the SaaS service. You will be charged the flat monthly fee every month, unless you delete your account in the SaaS service or delete your SaaS service in Azure portal.
  • Unsubscribed – You have unsubscribed or deleted the account directly in the SaaS service. You will not be billed once you have unsubscribed from the SaaS service.

Integration with Azure marketplace to enable SaaS transactions is achieved through the following simple steps:

  • Notify Azure whenever a user, who came to the SaaS service from Azure, signs up for a new account.
  • Notify Azure whenever a registered user from Azure changes the plan (for example, the user moves from a ‘basic’ plan to a ‘premium’ plan).
  • Notify Azure whenever a registered user unsubscribes or deletes the account.
  • Receive and act on notifications from Azure, if the user has unsubscribed from the SaaS service in Azure.

Each of these actions is enabled via APIs in Azure marketplace. If you are interested in publishing your SaaS service in Azure, you can start your onboarding into Azure Marketplace.
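As an illustration of the first three notifications, here is a minimal, hypothetical sketch of a plan-change call from a SaaS service’s backend. The endpoint URL, payload shape, and authentication are placeholders, not the actual Azure Marketplace API contract; the real endpoints and schemas are provided during marketplace onboarding.


using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class MarketplaceNotifier
{
    // Placeholder endpoint; the real Azure Marketplace notification URL and schema will differ.
    private const string NotificationEndpoint =
        "https://marketplace.example.com/api/subscriptions/{0}/plan";

    private static readonly HttpClient Client = new HttpClient();

    // Notify Azure that a registered user changed plans (for example, 'basic' to 'premium').
    public static async Task NotifyPlanChangeAsync(string subscriptionId, string newPlanId, string accessToken)
    {
        var request = new HttpRequestMessage(
            HttpMethod.Post,
            string.Format(NotificationEndpoint, subscriptionId))
        {
            Content = new StringContent(
                "{ \"planId\": \"" + newPlanId + "\" }",
                Encoding.UTF8,
                "application/json")
        };
        request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);

        var response = await Client.SendAsync(request);
        response.EnsureSuccessStatusCode();
    }
}

The same shape would apply to the sign-up and unsubscribe notifications listed above.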


New container images in Azure Marketplace


Azure has a thriving ecosystem where partners can publish many different offer types including virtual machine images, SaaS, solution templates, and managed applications, allowing customers to solve their unique problems in the way which best fits their scenario.

At Build, we announced a new category of offer in Azure Marketplace: container images.

Azure customers can now discover and acquire secure and certified container images in Azure Marketplace to build container-based solutions. All images available in Azure Marketplace are certified and validated against container runtimes in Azure, such as the managed Azure Kubernetes Service (AKS), allowing customers to build with confidence and deploy with flexibility.

For ISVs who offer their applications as container-based images, this new offer type provides the opportunity to publish their solutions and reach Azure users who are building container-based architectures. It helps ISVs publish with confidence, with validation through the Azure Certified program across industry-standard container formats and the different Azure container services.

Get started with Azure Marketplace container images

Container images can be discovered in Azure Marketplace, as well as in the Azure portal under the ‘Container apps’ section.

Here you can browse the range of container images available, and once you select one you can view the details of the container image and subscribe to it. This copies the image to an instance of Azure Container Registry, either a new or an existing one, within your Azure subscription.

Once the image is available in your Azure Container Registry, it can be used like any other container image. For example, you can launch an instance of the container in Azure Container Instance, or you can build your custom image based on the image from Azure marketplace and integrate it in your CI/CD pipeline.

You can also opt in for auto updates. Enrolling for auto update will push newer versions of the container image to your Azure Container Registry as and when they are published by the image publisher.

We have partnered with Bitnami and Couchbase as initial launch partners, and there is an entire list of container images in Azure Marketplace. All images at the time of launch are available either for free or as BYOL (bring your own license).

If you are an Azure partner and are interested in helping us shape the container partner ecosystem, start by onboarding to Marketplace as a Publisher.

Devs imagine, create, and code the future at Microsoft Build


Microsoft Build

On Monday, May 7, more than 6,000 developers from more than 70 countries descended on the Washington State Convention Center in Seattle, Washington, for Microsoft Build. From experienced coders to eight-year-old prodigies, the attendees were united by a passion for building apps for the intelligent cloud. To catch up on sessions you missed, check out the on-demand content.

CEO Satya Nadella kicked things off by talking about how the intelligent cloud will revolutionize every aspect of our lives. Alongside Executive Vice President for Cloud and Enterprise Group Scott Guthrie and Corporate Vice President for Windows Joe Belfiore, Nadella showed developers how they can use Microsoft Azure and Microsoft 365 to create transformational multisense and multidevice experiences.

Scott Guthrie at Microsoft Build

In one keynote demonstration, attendees got to see how an audio-video device using Microsoft 365 and AI services could transform a common business meeting. Through the use of facial recognition, each attendee was greeted by name as they entered the room. The device even transcribed their speech in real-time, automatically assigning the text to the right speaker. Power BI was used to visualize data. Cortana, together with Microsoft Graph, created a summary of action items, automatically attaching the files that were mentioned.

With so much going on, here are some of the key takeaways we heard from this year’s attendees:

1. A chance to build new skills

Attendees flocked to breakout sessions, workshops, and demos to learn skills that could enrich and advance their careers, honing their expertise in the hands-on labs and instructor-led workshops. And for anyone looking to be wowed by the latest tech, the exhibit hall was the place to be. Developers experienced HoloLens, programmed their own AI-powered drones, and explored cutting-edge IoT solutions amid myriad other activities and ideas.

2. Rich opportunities to build solutions—and careers

From API gateways to XAML, Microsoft Build featured all the innovative tools, platforms, and products developers need to create groundbreaking cloud-based solutions. There were tons of opportunities to dive deep into technologies like Azure Containers, Power BI, and Cognitive Services, and then discuss concepts with the Microsoft engineers behind these products and platforms.

Of all the announcements made at the event, one of the most popular was the news that developers will now get to keep up to 95 percent of the revenue for apps sold in the Microsoft Store. It was another reason why the audience left the event excited by the opportunities presented by the intelligent cloud.

3. A great place to build unique relationships

Inspiration wasn’t confined to the breakouts. Lively debates and brainstorms reverberated throughout the halls as developers traded tips, proposed real-world “what-if” scenarios, and challenged one another. It proved that attending Microsoft Build is just the beginning. The innovative solutions that arise from ideas sparked there are the true goal. An attendee celebration at Seattle Center capped the camaraderie of the conference with superheroes and superstars at MoPOP, sculptures of blown glass at Chihuly Garden and Glass, silent discoes, and savvy gamers.

Microsoft Build attendees

Thank you

The Microsoft cloud has the potential to transform lives on a huge scale. That power comes with an obligation. As Nadella reminded us in his keynote, “We have the responsibility to ensure that these technologies are empowering everyone.” Developers like you will make that vision a reality, so thank you for making Microsoft Build a fun, successful event. We’ll see you again next year—and while you’re waiting, check out the on-demand content to catch any sessions you may have missed!

Accelerate your SAP on Azure HANA project with SUSE Microsoft Solution Templates


This blog post has been co-authored by Sebastian Max Dusch.

Following the launch of M-SERIES/SAP HANA certified Virtual Machines (VMs) on Azure, we are happy to announce a new marketplace offering in partnership with SUSE. The offering leverages Azure Resource Manager Templates to automate the rapid provisioning of an SAP HANA infrastructure in Azure with SUSE technology.

The templates deploy VMs that are optimized for SAP HANA using the latest pay-as-you-go version of the SUSE Linux Enterprise Server for SAP Applications operating system. The templates are based on a simplified SAP sizing in four t-shirt sizes: Demo, Small, Medium, and Large.

Each template first lays the network foundation by implementing an Azure Virtual Network, enabling subnet customization and Network Security Groups. All templates use Premium (SSD) Managed Disks and can be deployed in 2-tier, 3-tier, and 3-tier high availability (HA) architectures. The Demo and Small templates are targeted at non-productive SAP HANA workloads. The Medium and Large templates can be used for productive SAP HANA workloads and are based on best practices for storage configuration, utilizing Write Accelerator technology for sub-millisecond writes of the SAP HANA transaction log.

An overview of each template is shown below with an extended overview of the 3-tier HA template showing VM sizes for each t-shirt and database storage for the “Large” t-shirt. Please refer to the linked technical document on the SUSE website for the complete technical overview.

2-tier

3-tier

3-tier HA

The 3-tier HA template will deploy the VMs for the SAP HANA database (DB), xSCS, HA-NFS, and Application servers in Availability Sets with Azure Internal Load Balancers deployed for the DB, xSCS and HA-NFS clusters.

With the proper post-processing activities this will allow you to reach an infrastructure SLA of 99.95 percent.


3-tier HA: VM SKU

T-Shirt   VM SKU    VM Nr   Instance
Demo      E4s_v3    2       DB
          D2_v3     2       xSCS
          D2_v3     2       HA-NFS
          D2_v3     2       APP
Small     E64_v3    1       DB
          D2_v3     2       xSCS
          D2_v3     2       HA-NFS
          E64_v3    5       APP
Medium    M64s      1       DB
          D2_v3     2       xSCS
          D2_v3     2       HA-NFS
          E64_v3    5       APP
Large     M128s     2       DB
          D2_v3     2       xSCS
          D2_v3     2       HA-NFS
          E64_v3    10      APP

3-tier HA: DB disk layout for Large t-shirt template

For detailed technical solution information, please refer to the SUSE best practice documentation.

In future releases the SUSE Microsoft Solution Templates will be extended to include automated configuration of the SUSE Pacemaker cluster integrated with Azure Load Balancer, Availability Zone deployments and M-SERIES/SAP HANA scale-out.

Click to get started and accelerate your SAP on Azure HANA project.

Emerging AI Patterns


One of the top conversations that we have with businesses all over the world is how digital transformation is impacting every part of their operations. This wave of “Digital Transformation” impacts every business and every industry; from media to sports, from finance to healthcare, and from Fortune 1000 organizations to small businesses. This wave of transformation is being driven by the ongoing advances in computing building blocks of the last few decades: compute, storage, and networks. At the same time as companies are continuing down the path of digital transformation, there is a new generation of software building blocks that will drive even greater transformation for businesses in the future of which AI is one of the core drivers. AI will help to transform all industries, including transportation, manufacturing, retail, agriculture, and more and create opportunities we have yet to consider.

In this post I will talk about how Microsoft is helping businesses transform with AI. This moves beyond creating the AI platform, or using AI to enhance an existing application or service, and into how people are starting to think about using AI to change core business processes. If you have been following along, this is the third and final part in a 3-part series on AI. If you want to revisit the other two posts before continuing, Part 1 provides some context on why this time is different for AI. It is followed by Part 2, which looks at how Microsoft thinks about AI and the tools and services that we make available for you to create your own AI solutions, and in turn how we use it to enhance our own products and services.

In this blog we’ll dive more into the patterns we see emerging for the use of AI across industries and experiences. As we talk about AI with business and industry leaders, we see 4 patterns emerging on how to apply AI to businesses and business scenarios that provide a useful frame for the conversation.

  1. Virtual Agents – The first pattern is the use of virtual agents to interact with employees, customers, and partners on behalf of a company. These agents can help answer questions, provide support and over time become a proactive representative of your company and your brand. Today Microsoft uses a virtual agent in customer support that will engage in close to 50 million conversations this year.
  2. Ambient Intelligence – The second pattern is anchored on tracking people and objects in a physical space. In many ways this is using AI to map a physical space and its activity to a digital space, and then allowing actions on top of the digital graph. Many people will think of the “pick up and go” retail shopping experience as a prime example, but this pattern is also applicable to safety, manufacturing, construction scenarios, and more. Think about a warehouse that can detect a person walking in one aisle and a forklift driving in another that are on a collision course; the AI can prevent the impending accident. We also showed this pattern applied to an office/meeting scenario at BUILD.
  3. AI-Assisting Professionals – AI can be used to help almost any professional be more effective. For example, we can help people in finance with forecasting. Large companies manage forecasts by having their front-line sellers predict what they are going to do, then having many layers of reviewers to help judge the forecast. Using historical data and global insights from Bing, LinkedIn, and custom data sources, an AI system can reliably forecast how a subsidiary will do while removing all of the middle layers of judgement and the time consumed doing that. We also see AI starting to assist doctors in areas like genomics and public health. There are great examples in sales, marketing, legal and practically every other profession.
  4. Autonomous Systems – The fourth pattern that we see is for autonomous systems. You might think of self-driving cars when you think about autonomous systems, but it also extends to robotic process automation and network protection. Threats to a network can be hard to identify when they are happening and the lag before responding can result in a lot of damage. Having the network automatically respond as a threat is happening can minimize the risk and free the team to focus on other tasks.

In this post I will cover these 4 patterns along with how Microsoft is helping businesses achieve their goals.

Virtual Agents

At Ignite 2017 we discussed how AI will be used to help large businesses with customer support. Since that time, the Microsoft AI Solution for Customer Service, which is being used by Microsoft, HP, Macy’s, Australian Government Department of Human Services, and others has shown tremendous success in using AI to transform customer engagement. Looking at the initial results from the early adopters we have seen great improvements for their businesses.

  • Microsoft – where possible we are testing AI Solutions within our own business processes before releasing software for others to use (e.g. forecasting and customer care). We built this solution to solve our own problems first and it is trusted to handle one of the largest enterprise support organizations in the world. With the addition of a virtual agent, it has created some impressive results. Over a 6-month period we saw a 2x increase in users successfully being able to help themselves with a 3x decrease in transfers to agents. Users dislike having to repeat themselves as they get transferred between agents, but with this solution the state transfers with the call creating a better experience for customers and our call center employees, at scale with over 100,000 virtual agent sessions per day.
  • HP – one of the early users of this solution has scaled up to handle 70% of support cases with AI and maintains a greater than 85% accurate dialog rate. To achieve this level of accuracy the service was initially trained on over 1 million chat logs and 50 KB of support articles to create a sophisticated dialog.
  • Macy’s – gives an example of how an AI solution for customer support can evolve to become a brand ambassador for the company. The virtual agent is integrated into both the web and mobile web experiences for shopping, so users can use the agents where they are. Macy’s integrated their backend APIs for promotions, alerts, and more into the agent so that the agent could go beyond the corpus of data that it was trained on to pull up account details, shipping updates, and more providing broad and personalized information.

Last year at Ignite when we discussed the Microsoft AI Solution for Customer Service it was framed in the context of helping users get the support they need while also improving the experience for agents who would get to focus on more value-added support cases. Since then there has been an expansion into more personalized experiences as demonstrated by what Macy’s is doing. Looking forward, business agents will be moving towards conversational commerce and blending proactive conversation capability with the current reactive capability.

Today online commerce is primarily a self-help experience, with the user either having to know exactly what they are looking for or spending a lot of time researching. With a virtual agent there is the opportunity to have a virtual personal shopper by your side who can help you with your online experience. As you search for products the agent can intelligently recommend relevant intents based on what you have been looking at. As you select options that are intelligently designed to guide you through the purchase experience, the information that you need is presented in adaptive cards that contain rich yet focused information, so that you can make better decisions without information overload. Since the agent maintains historical information, a conversation can be resumed later, and transactions can be completed entirely through the agent. This pattern is by far one of the furthest along and an area where we see a lot of activity today. Companies can choose from a set of tools in the platform today to get started with initial agents before moving up to a full customer care solution.

Ambient Intelligence

Computing is often thought of in the context of devices, but with the use of sensors a physical space can be digitized, creating an environment where people, objects, and activities can be detected and tracked. When AI is added to the digitized environment, it becomes possible to reimagine how a room interacts with the people and objects in it. For example, using facial recognition to know when a person has entered a room is valuable not just for personalization but also for safety. If the fire alarm goes off, knowing who was in the building and has left, or if anyone is still inside, can be quite useful. The next level beyond tracking people is to track people’s interactions with the objects around them. This capability makes it possible to create several interesting retail experiences, including grab-and-go shopping, personalized offers, immediate on-the-spot assistance when needed, and more. In manufacturing we can use this type of capability for health and safety scenarios, such as keeping a person safe by flagging when they are about to pick up something that might be too heavy, or making the problem of losing things less burdensome by flagging where a lost item might be, with instructions on how to find it. There are many interesting scenarios enabled by the ambient pattern that are just starting to be explored.

We see real-world examples of this today using mixed reality to keep mission-critical systems up and running. Preserving perishable goods is a great example. Cows produce 6 gallons of milk daily, so it is important that the milk is packaged on time to avoid waste. However, when a milk packaging line fails because of a faulty part, it can take several days to get it up and running again, resulting in a lot of spoiled milk. Tetra Pak uses cloud-connected predictive analytics on packaging line data to predict maintenance issues and reduce downtime, and mixed reality headsets cut repair time by letting remote experts guide service engineers.

AI Assisting Professionals

AI presents a great opportunity to augment human ingenuity by providing proactive, timely support to people so that they can focus their energy on their most important tasks. This shows up in a variety of ways, including digital assistants like Cortana, Alexa, Siri, and Google Home that many of us use today to do tasks for us. In business settings we have seen many early uses of AI helping busy professionals, including Bing for Business, which takes your company’s organizational chart, internal sites, and cloud documents and integrates them into your search experience so that you have both the public and private data that you need. AI helps professionals in other ways more specific to different industries. Attorneys who need to assess the risk of an event will spend a lot of time going through contracts to find the impact of any exposure to a negative event; machine reading techniques can be used to understand the details of all their contracts and then surface the problems. Journalists can use AI as a virtual editor that looks beyond rules-based spellchecking to make writing suggestions. Finance professionals can use AI to make better sales forecasts, and sellers can use AI to target sales prospects and determine how to close a deal. Marketers can use AI to predict new trends in customer interest before investing a lot of money to experiment. We are working on a variety of areas where we can apply AI to assist professionals, and one of my favorite examples is the work we are doing to help medical professionals.

Microsoft Research is known for world class advances in computer science, but they also focus on medical, health, and genomics. Their approach is to apply novel computational tools and analytical techniques to make healthcare more impactful and to assist patients. One application of AI to healthcare is Project InnerEye.

Today expert medical practitioners spend a lot of time analyzing 3D images to distinguish tumors from healthy tissue. Using years of Microsoft AI research in computer vision and machine learning, including the deep decision forests used in Kinect and HoloLens, and applying it to more medical images than the average medical practitioner could analyze on their own, Project InnerEye can assist in identifying the presence of a tumor. To ensure that the medical practitioner remains in charge, the results can be adjusted by the experts until everyone is comfortable with the result. Since we are a platform company, we are also making the technology available to third-party medical software companies so that medical practitioners can use the tools they are most comfortable with today.

Autonomous Systems

When most people think of autonomous systems, visions of self-driving cars come up, but there are many other ways to apply this technology as well. When a network is under attack, it can take a long time for someone to notice, and even longer to determine the root cause and then address it. Machine learning does a great job of detecting patterns in a vast amount of data, so it can be used to quickly identify attacks in real time, and AI can then be used to find the most effective approach for addressing the problem. The network admin will be notified and can control the process if needed, but speeding up this process limits the amount of damage an attacker could cause. At the RSA 2018 Conference, Mark Russinovich shared how Microsoft’s investment in AI has created new security capabilities for protecting all of our customers.

For security, autonomous systems can help create secure products by finding security flaws during development. Microsoft Security Risk Detection is our offering for customers to use AI to find security flaws.

DocuSign is a company that enables people to sign documents from any device, which means they take security very seriously. Like most software developers, DocuSign uses a variety of software components in their products, including ones from third parties, and the security of their product is only as strong as all of the components put together. Using Microsoft Security Risk Detection, DocuSign can automatically check the security of all their components across multiple virtual machines with no extra work.

“We were able to automatically run millions of test cases across multiple virtual machines, entirely automated, with no extra work …. We’re always looking for services that add value, and that scale of automation is a great added value.” — John Heasman, Senior Director, Software Security, DocuSign

It is an exciting time in the industry, as artificial intelligence is not just a topic of conversation but is becoming the starting point for thinking about what will become the next wave(s) of digital and/or AI-based transformation. The examples presented here across the four patterns are just a few, but we see the patterns as an interesting point to start the conversation. Three weeks ago I started this series by discussing why this time is different for AI compared to other periods of AI excitement in the New Generation of Software Building Blocks post. Last week we focused on Microsoft’s approach to AI and how we are making it available to others across the platform, our products, and solutions. Today we presented four patterns for the use of AI in businesses and industries that we hope provide a useful framing around the topic. As you think through your own use cases, consider how these patterns can apply to your business. Thank you for taking the time to read the series, and if there are any other topics that we should explore, please let us know.

 

Cheers,

Guggs

Announcing Enhanced Security for Bing Maps API Keys


The Bing Maps Developer portal has shipped a new feature allowing you to restrict your Bing Maps API keys to a set of domains that you specify. With this feature, customers can define a strict set of Referrer values or IP address ranges that the key will be validated against. Requests originating from your allow list will process normally, while requests from outside of your list will return an access denied response.

Adding domain security to your API key is completely optional, and keys left as-is will continue to function as they do today. The allow list for a key is independent of all your other keys, enabling you to have distinct rules for each key.

Currently we only support exact referrer name matching, meaning that if your browser or request header sends https://www.contoso.com/, your allow list entry must be exactly https://www.contoso.com/. We will support short URLs in a future release.
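For illustration, here is a minimal sketch of a request that presents a referrer to the Bing Maps REST Locations service so you can verify a key’s allow list behavior. The key and referrer values are placeholders, and the exact query parameters should be confirmed against the Bing Maps REST Services documentation.


using System;
using System.Net.Http;
using System.Threading.Tasks;

class ReferrerCheck
{
    static async Task Main()
    {
        // Placeholder key; substitute a key that has referrer restrictions enabled.
        const string key = "YOUR_BING_MAPS_KEY";

        var request = new HttpRequestMessage(
            HttpMethod.Get,
            "https://dev.virtualearth.net/REST/v1/Locations?query=Seattle&key=" + key);

        // The referrer must exactly match an entry on the key's allow list,
        // for example https://www.contoso.com/ (exact matching; no wildcards).
        request.Headers.Referrer = new Uri("https://www.contoso.com/");

        using (var client = new HttpClient())
        {
            var response = await client.SendAsync(request);
            // A request whose referrer is not on the allow list should receive an access denied response.
            Console.WriteLine((int)response.StatusCode + " " + response.ReasonPhrase);
        }
    }
}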

Note: Wildcards are not supported when specifying IP addresses. However, you can specify an IP range.

To set your Allow list, follow these simple steps:

  • Sign into the developer portal with your Microsoft Account
  • Select the My Account → My Keys menu choice to show a list of all of your keys
  • Click the ‘Enable Security’ link for the key you wish to set restrictions on
  • For a referrer rule, specify a Rule name and Referrer, then hit the green ‘plus’ button to add it
  • For an IP Range Rule, click the IP Range tab, then enter a rule name and your desired starting and ending IP addresses
  • You can continue to add as many rules as needed for each key
  • Press the close button when you are finished.

Application Key Security Settings

That’s it! Note that it can take up to 30 minutes for these changes to take effect.

If you have questions or feedback for the team, please reach out to our Bing Maps Enterprise Support team at bmesupp@microsoft.com.

- Bing Maps Team
