
A year of bringing AI to the edge

This post is co-authored by Anny Dow, Product Marketing Manager, Azure Cognitive Services.

In an age where low latency and data security can be the lifeblood of an organization, containers make it possible for enterprises to meet both needs while harnessing artificial intelligence (AI).

Since introducing Azure Cognitive Services in containers this time last year, businesses across industries have unlocked new productivity gains and insights. The combination of the most comprehensive set of domain-specific AI services on the market with containers enables enterprises to apply AI to more scenarios with Azure than with any other major cloud provider. Organizations ranging from healthcare to financial services have transformed their processes and customer experiences as a result.

 

These are some of the highlights from the past year:

Employing anomaly detection for predictive maintenance

Airbus Defense and Space, one of the world’s largest aerospace and defense companies, has tested Azure Cognitive Services in containers to develop a proof of concept for predictive maintenance. The company runs Anomaly Detector to immediately spot unusual behavior in voltage levels and mitigate unexpected downtime. By employing advanced anomaly detection in containers without further burdening its data science team, Airbus can scale this critical capability across the business globally.

“Innovation has always been a driving force at Airbus. Using Anomaly Detector, an Azure Cognitive Service, we can solve some aircraft predictive maintenance use cases more easily.”  —Peter Weckesser, Digital Transformation Officer, Airbus
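
Because the container exposes the same REST surface as the hosted Anomaly Detector service, existing clients only need to change the base URL. Here is a minimal C# sketch of calling a local container (the hostname, port, and series values are illustrative, and a real request needs at least 12 data points):

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class AnomalyCheck
{
    static async Task Main()
    {
        // Illustrative series, truncated for brevity: the real API
        // requires at least 12 points per request.
        var body = @"{
            ""granularity"": ""minutely"",
            ""series"": [
                { ""timestamp"": ""2019-11-01T00:00:00Z"", ""value"": 220.1 },
                { ""timestamp"": ""2019-11-01T00:01:00Z"", ""value"": 220.3 },
                { ""timestamp"": ""2019-11-01T00:02:00Z"", ""value"": 410.7 }
            ]
        }";

        using var client = new HttpClient();

        // Same route as the cloud API, served by the local container.
        var response = await client.PostAsync(
            "http://localhost:5000/anomalydetector/v1.0/timeseries/last/detect",
            new StringContent(body, Encoding.UTF8, "application/json"));

        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}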

Automating data extraction for highly regulated businesses

As enterprises grow, they accumulate thousands of hours of repetitive but critically important work every week, and high-value domain specialists spend too much of their time on it. Today, innovative organizations use robotic process automation (RPA) to help manage, scale, and accelerate processes, and in doing so free people to create more value.

Automation Anywhere, a leader in robotic process automation, partners with companies eager to streamline operations by applying AI. IQ Bot, its RPA software, automates data extraction from documents of various types. By deploying Cognitive Services in containers, Automation Anywhere can now handle documents on-premises and at the edge for highly regulated industries:

“Azure Cognitive Services in containers gives us the headroom to scale, both on-premises and in the cloud, especially for verticals such as insurance, finance, and health care where there are millions of documents to process.” —Prince Kohli, Chief Technology Officer for Products and Engineering, Automation Anywhere

For more about Automation Anywhere's partnership with Microsoft to democratize AI for organizations, check out this blog post.

Delighting customers and employees with an intelligent virtual agent

Lowell, one of the largest credit management services in Europe, wants credit to work better for everybody. So it works hard to make every consumer interaction as painless as possible with AI. Partnering with Crayon, a global leader in cloud services and solutions, Lowell set out to fix the outdated processes that kept the company’s highly trained credit counselors too busy with routine inquiries and created friction in the customer experience. Lowell turned to Cognitive Services to create an AI-enabled virtual agent that now handles 40 percent of all inquiries—making it easier for service agents to deliver greater value to consumers and better outcomes for Lowell clients.

With GDPR requirements, chatbots weren’t an option for many businesses before containers became available. Now companies like Lowell can ensure that data handling meets stringent compliance standards while running Cognitive Services in containers. As Carl Udvang, Product Manager at Lowell, explains:

"By taking advantage of container support in Cognitive Services, we built a bot that safeguards consumer information, analyzes it, and compares it to case studies about defaulted payments to find the solutions that work for each individual."

One-to-one customer care at scale in data-sensitive environments has become easier to achieve.

Empowering disaster relief organizations on the ground

A few years ago, there was a major Ebola outbreak in Liberia, and a team from USAID was sent to help mitigate the crisis. Their first task on the ground was to find and categorize information such as the state of healthcare facilities, Wi-Fi networks, and population density centers. They tracked this information manually and had to extract insights from a complex corpus of data to determine the best course of action.

With the rugged versions of Azure Stack Edge, teams responding to such crises can carry a device running Cognitive Services in their backpacks. They can upload unstructured data like maps, images, and pictures of documents, and then extract content, translate it, draw relationships among entities, and apply a search layer. With these cloud AI capabilities available offline, at their fingertips, response teams can find the information they need in a matter of moments. In Satya Nadella’s Ignite 2019 keynote, Dean Paron, Partner Director of Azure Storage and Edge, walks us through how Cognitive Services on Azure Stack Edge can be applied in such disaster relief scenarios (starting at 27:07).

Transforming customer support with call center analytics

Call centers are a critical customer touchpoint for many businesses, and being able to derive insights from customer calls is key to improving customer support. With Cognitive Services, businesses can transcribe calls with Speech to Text, analyze sentiment in real-time with Text Analytics, and develop a virtual agent to respond to questions with Text to Speech. However, in highly regulated industries, businesses are typically prohibited from running AI services in the cloud due to policies against uploading, processing, and storing any data in public cloud environments. This is especially true for financial institutions.
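
To give a feel for the developer experience, here is a minimal sketch of transcribing from the default microphone against a local speech-to-text container, assuming the Microsoft.CognitiveServices.Speech SDK and a container listening on port 5000 (the host URI and port are illustrative):

using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;

class TranscribeDemo
{
    static async Task Main()
    {
        // Point the Speech SDK at an on-premises container
        // instead of the cloud endpoint.
        var config = SpeechConfig.FromHost(new Uri("ws://localhost:5000"));

        using var recognizer = new SpeechRecognizer(config);
        var result = await recognizer.RecognizeOnceAsync();
        Console.WriteLine(result.Text);
    }
}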

A leading bank in Europe addressed regulatory requirements and brought the latest transcription technology to their own on-premises environment by deploying Cognitive Services in containers. Through transcribing calls, customer service agents could not only get real-time feedback on customer sentiment and call effectiveness, but also batch process data to identify broad themes and unlock deeper insights on millions of hours of audio. Using containers also gave them flexibility to integrate with their own custom workflows and scale throughput at low latency.

What's next?

These stories touch on just a handful of the organizations leading innovation by bringing AI to where data lives. As running AI anywhere becomes more mainstream, the opportunities for empowering people and organizations will only be limited by the imagination.

Visit the container support page to get started with containers today.

For a deeper dive into these stories, follow the links above.


Multi-protocol access on Data Lake Storage now generally available

We are excited to announce the general availability of multi-protocol access for Azure Data Lake Storage. Azure Data Lake Storage is a unique cloud storage solution for analytics that offers multi-protocol access to the same data. This no-compromise solution allows both the Azure Blob Storage API and the Azure Data Lake Storage API to access data in a single storage account. You can store all your different types of data in one place, giving you the flexibility to make the best use of your data as your use case evolves. The general availability of multi-protocol access creates the foundation for object storage capabilities on Data Lake Storage, bringing together the best of both object storage and the Hadoop Distributed File System (HDFS) to enable scenarios that until today were not possible without copying data.
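
To make “same data, two APIs” concrete, here is a minimal sketch that reads one account through both endpoints, assuming the Azure.Storage.Blobs and Azure.Storage.Files.DataLake packages (the container/filesystem name "data" is illustrative):

using System;
using Azure.Storage.Blobs;
using Azure.Storage.Files.DataLake;

class MultiProtocolDemo
{
    static void Main()
    {
        var connectionString = "<storage-account-connection-string>";

        // The object-storage (Blob API) view of the account...
        var blobService = new BlobServiceClient(connectionString);
        foreach (var blob in blobService.GetBlobContainerClient("data").GetBlobs())
            Console.WriteLine($"blob: {blob.Name}");

        // ...and the hierarchical (Data Lake Storage API) view of the same data.
        var lakeService = new DataLakeServiceClient(connectionString);
        foreach (var path in lakeService.GetFileSystemClient("data").GetPaths(recursive: true))
            Console.WriteLine($"adls: {path.Name}");
    }
}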

Multi-protocol access generally available

Broader ecosystem of applications and features

Multi-protocol access provides a powerful foundation for enabling integrations and features on Data Lake Storage. Existing object storage applications and connectors can now be used to access data stored in Data Lake Storage with no changes, which has vastly accelerated the integration of Azure services and the partner ecosystem with Data Lake Storage. We are also announcing the general availability of multiple Azure service integrations with Data Lake Storage, including Azure Stream Analytics, IoT Hub, Azure Event Hubs Capture, Azure Data Box, and Logic Apps. These Azure services now integrate seamlessly with Data Lake Storage. Real-time scenarios are now enabled by easily ingesting streaming data into Data Lake Storage via IoT Hub, Stream Analytics, and Event Hubs Capture.

Ecosystem partners have also been quick to leverage multi-protocol access in their applications. Here is what our partners are saying:

“Multi-protocol access is a massive paradigm shift that enables cloud analytics to run on a single account for both blob data and analytics data. We believe that multi-protocol access helps customers rapidly achieve integration with Azure Data Lake Storage using our existing blob connector. This brings tremendous value to customers without needing to do costly re-development efforts.” - Rob Cornell, Head of Cloud Alliances, Talend

Our customers are excited that their existing blob applications and workloads “just work” with the multi-protocol capability. No changes are required to their existing blob applications, saving them precious development and validation resources. We have customers today running multiple workloads seamlessly against the same data using both the blob connector and the Azure Data Lake Storage connector.

We are also making the ability to tier data between the hot and cool tiers generally available for Data Lake Storage. This is great for analytics customers who want to keep frequently used analytics data in the hot tier and move less frequently used data to cooler tiers for cost efficiency. As we continue our journey, we will enable more capabilities on Data Lake Storage in upcoming releases. Stay tuned for more announcements!
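
For example, with the Azure.Storage.Blobs SDK, demoting a file to the cool tier is a single call (the account, container, and blob names below are illustrative):

using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

class TieringDemo
{
    static void Main()
    {
        // Move an older, less frequently used analytics file to the cool tier.
        var blob = new BlobClient(
            "<storage-account-connection-string>", "data", "logs/2018/archive.csv");
        blob.SetAccessTier(AccessTier.Cool);
    }
}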

Get started with multi-protocol access

Visit our multi-protocol access documentation to get started. For additional information see our preview announcement. To learn more about pricing, see our pricing page.

Preview: Live transcription with Azure Media Services

Azure Media Services provides a platform with which you can broadcast live events. You can use our APIs to ingest, transcode, and dynamically package and encrypt your live video feeds for delivery via industry-standard protocols like HTTP Live Streaming (HLS) and MPEG-DASH. You can also use our APIs to integrate with CDNs and deliver to millions of concurrent viewers. Customers are using this platform for scenarios ranging from multi-day sporting events and entire seasons of professional sports, to webinars and town-hall meetings.

Live transcription is a new preview feature in our v3 APIs that lets you enhance the streams delivered to your viewers with machine-generated text transcribed from spoken words in the audio feed. This feature can be enabled for any type of Live Event that you create in our service, including pass-through Live Events, where you configure a live encoder upstream to generate and push a multi-bitrate live feed into the service (visualized in the diagram below).

Figure 1. Schematic diagram for live transcription

When a live contribution feed is sent to the service, it extracts the audio signal, decodes it, and calls the Azure Cognitive Services speech-to-text API to get the speech transcribed. The resulting text is then packaged into formats suitable for delivery via streaming protocols. For the HTTP Live Streaming (HLS) protocol with media packaged into MPEG Transport Stream (TS) fragments, the text is packaged into WebVTT fragments. For delivery via MPEG-DASH or HLS with CMAF, the text is wrapped in IMSC1.1-compatible TTML and then packaged into MPEG-4 Part 30 (ISO/IEC 14496-30) fragments.
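
Transcription is configured on the Live Event itself. Below is a rough sketch using the Microsoft.Azure.Management.Media v3 .NET SDK; the transcription parameters follow the preview's generated models and may differ between SDK versions, and the resource names are illustrative:

using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.Management.Media;
using Microsoft.Azure.Management.Media.Models;

static class LiveTranscriptionDemo
{
    public static async Task CreateAsync(IAzureMediaServicesClient client)
    {
        var liveEvent = new LiveEvent(
            location: "West US 2",
            input: new LiveEventInput(streamingProtocol: LiveEventInputProtocol.RTMP),
            // Opt this event in to machine-generated transcription.
            transcriptions: new List<LiveEventTranscription>
            {
                new LiveEventTranscription(language: "en-US")
            });

        await client.LiveEvents.CreateAsync(
            "myResourceGroup", "myMediaAccount", "myLiveEvent", liveEvent, autoStart: false);
    }
}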

You can use Azure Media Player (version 2.3.3 or newer) to play the video and display the text on a wide variety of browsers and devices. You can also play back the streams in the iOS native player. If you are building an app for Android devices, playback of transcriptions has been verified by NexPlayer; you can contact them to request a demo.


Figure 2. Display of live transcription on Azure Media Player

The live transcription feature is now available in preview in the West US 2 region. Read the full article here to learn how to get started with this preview feature.

Windows 10 SDK Preview Build 19028 available now!

Today, we released a new Windows 10 Preview Build of the SDK to be used in conjunction with Windows 10 Insider Preview (Build 19028 or greater). The Preview SDK Build 19028 contains bug fixes and in-development changes to the API surface area.

The Preview SDK can be downloaded from the developer section on Windows Insider.

For feedback and updates to the known issues, please see the developer forum. For new developer feature requests, head over to our Windows Platform UserVoice.

Things to note:

  • This build works in conjunction with previously released SDKs and Visual Studio 2017 and 2019. You can install this SDK and still continue to submit your apps that target Windows 10 build 1903 or earlier to the Microsoft Store.
  • The Windows SDK is now formally supported only by Visual Studio 2017 and greater. You can download Visual Studio 2019 here.
  • This build of the Windows SDK will install only on Windows 10 Insider Preview builds.
  • To assist with script access to the SDK, the ISO can also be accessed through the following static URL: https://software-download.microsoft.com/download/sg/Windows_InsiderPreview_SDK_en-us_19028_1.iso.

Tools Updates

Message Compiler (mc.exe)

  • Now detects the Unicode byte order mark (BOM) in .mc files. If the .mc file starts with a UTF-8 BOM, it will be read as a UTF-8 file. Otherwise, if it starts with a UTF-16LE BOM, it will be read as a UTF-16LE file. Otherwise, if the -u parameter was specified, it will be read as a UTF-16LE file. Otherwise, it will be read using the current code page (CP_ACP). (A sketch of this precedence follows this list.)
  • Now avoids one-definition-rule (ODR) problems in MC-generated C/C++ ETW helpers caused by conflicting configuration macros (e.g. when two .cpp files with conflicting definitions of MCGEN_EVENTWRITETRANSFER are linked into the same binary, the MC-generated ETW helpers will now respect the definition of MCGEN_EVENTWRITETRANSFER in each .cpp file instead of arbitrarily picking one or the other).
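
For illustration, the BOM-detection precedence described in the first item can be expressed in a few lines. This is a C# sketch of the documented logic only; mc.exe itself is a native tool, so the code is purely explanatory:

using System.Text;

static class McEncodingSketch
{
    public static Encoding Detect(byte[] file, bool utf16Requested /* -u */)
    {
        if (file.Length >= 3 && file[0] == 0xEF && file[1] == 0xBB && file[2] == 0xBF)
            return Encoding.UTF8;           // UTF-8 BOM
        if (file.Length >= 2 && file[0] == 0xFF && file[1] == 0xFE)
            return Encoding.Unicode;        // UTF-16LE BOM
        if (utf16Requested)
            return Encoding.Unicode;        // -u parameter was specified
        return Encoding.Default;            // current code page (CP_ACP)
    }
}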

Windows Trace Preprocessor (tracewpp.exe)

  • Now supports Unicode input (.ini, .tpl, and source code) files. Input files starting with a UTF-8 or UTF-16 byte order mark (BOM) will be read as Unicode. Input files that do not start with a BOM will be read using the current code page (CP_ACP). For backwards-compatibility, if the -UnicodeIgnore command-line parameter is specified, files starting with a UTF-16 BOM will be treated as empty.
  • Now supports Unicode output (.tmh) files. By default, output files will be encoded using the current code page (CP_ACP). Use command-line parameters -cp:UTF-8 or -cp:UTF-16 to generate Unicode output files.
  • Behavior change: tracewpp now converts all input text to Unicode, performs processing in Unicode, and converts output text to the specified output encoding. Earlier versions of tracewpp avoided Unicode conversions and performed text processing assuming a single-byte character set. This may lead to behavior changes in cases where the input files do not conform to the current code page. In cases where this is a problem, consider converting the input files to UTF-8 (with BOM) and/or using the -cp:UTF-8 command-line parameter to avoid encoding ambiguity.

TraceLoggingProvider.h

  • Now avoids one-definition-rule (ODR) problems caused by conflicting configuration macros (e.g. when two .cpp files with conflicting definitions of TLG_EVENT_WRITE_TRANSFER are linked into the same binary, the TraceLoggingProvider.h helpers will now respect the definition of TLG_EVENT_WRITE_TRANSFER in each .cpp file instead of arbitrarily picking one or the other).
  • In C++ code, the TraceLoggingWrite macro has been updated to enable better code sharing between similar events using variadic templates.

Signing your apps with Device Guard Signing

Windows SDK Flight NuGet Feed

We have stood up a NuGet feed for the flighted builds of the SDK. You can now test preliminary builds of the Windows 10 WinRT API Pack, as well as a microsoft.windows.sdk.headless.contracts NuGet package.

We use the following feed to flight our NuGet packages.

Microsoft.Windows.SDK.Contracts, which can be used to add the latest Windows Runtime API support to your .NET Framework 4.5+ and .NET Core 3.0+ libraries and apps.

The Windows 10 WinRT API Pack enables you to add the latest Windows Runtime APIs support to your .NET Framework 4.5+ and .NET Core 3.0+ libraries and apps.
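
Once the contracts package is referenced, Windows Runtime types light up in ordinary desktop projects. A small sketch, assuming the Microsoft.Windows.SDK.Contracts package has been added to a .NET Framework or .NET Core app (the JSON payload is illustrative):

using System;
using Windows.Data.Json; // WinRT API surfaced through the contracts package

class ContractsDemo
{
    static void Main()
    {
        // Use a Windows Runtime type from a desktop .NET app.
        var json = JsonObject.Parse("{\"package\":\"contracts\"}");
        Console.WriteLine(json.GetNamedString("package"));
    }
}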

Microsoft.Windows.SDK.Headless.Contracts provides a subset of the Windows Runtime APIs for console apps, excluding the APIs associated with a graphical user interface. This NuGet package is used in conjunction with Windows ML container development. Check out the Getting Started guide for more information.

Breaking Changes

Removal of api-ms-win-net-isolation-l1-1-0.lib

In this release api-ms-win-net-isolation-l1-1-0.lib has been removed from the Windows SDK. Apps that were linking against api-ms-win-net-isolation-l1-1-0.lib can switch to OneCoreUAP.lib as a replacement.

Removal of IRPROPS.LIB

In this release irprops.lib has been removed from the Windows SDK. Apps that were linking against irprops.lib can switch to bthprops.lib as a drop-in replacement.

Removal of WUAPICommon.H and WUAPICommon.IDL

In this release we have moved the enum tagServerSelection from WUAPICommon.H to wuapi.h and removed the WUAPICommon.H header. If you would like to use the enum tagServerSelection, you will need to include wuapi.h or wuapi.idl.

API Updates, Additions and Removals

The following APIs have been added to the platform since the release of Windows 10 SDK, version 1903, build 18362.

Additions:

 

namespace Windows.AI.MachineLearning {
  public sealed class LearningModelSessionOptions {
    bool CloseModelOnSessionCreation { get; set; }
  }
}
namespace Windows.ApplicationModel {
  public sealed class AppInfo {
    public static AppInfo Current { get; }
    Package Package { get; }
    public static AppInfo GetFromAppUserModelId(string appUserModelId);
    public static AppInfo GetFromAppUserModelIdForUser(User user, string appUserModelId);
  }
  public interface IAppInfoStatics
  public sealed class Package {
    StorageFolder EffectiveExternalLocation { get; }
    string EffectiveExternalPath { get; }
    string EffectivePath { get; }
    string InstalledPath { get; }
    bool IsStub { get; }
    StorageFolder MachineExternalLocation { get; }
    string MachineExternalPath { get; }
    string MutablePath { get; }
    StorageFolder UserExternalLocation { get; }
    string UserExternalPath { get; }
    IVectorView<AppListEntry> GetAppListEntries();
    RandomAccessStreamReference GetLogoAsRandomAccessStreamReference(Size size);
  }
}
namespace Windows.ApplicationModel.AppService {
  public enum AppServiceConnectionStatus {
    AuthenticationError = 8,
    DisabledByPolicy = 10,
    NetworkNotAvailable = 9,
    WebServiceUnavailable = 11,
  }
  public enum AppServiceResponseStatus {
    AppUnavailable = 6,
    AuthenticationError = 7,
    DisabledByPolicy = 9,
    NetworkNotAvailable = 8,
    WebServiceUnavailable = 10,
  }
  public enum StatelessAppServiceResponseStatus {
    AuthenticationError = 11,
    DisabledByPolicy = 13,
    NetworkNotAvailable = 12,
    WebServiceUnavailable = 14,
  }
}
namespace Windows.ApplicationModel.Background {
  public sealed class BackgroundTaskBuilder {
    void SetTaskEntryPointClsid(Guid TaskEntryPoint);
  }
  public sealed class BluetoothLEAdvertisementPublisherTrigger : IBackgroundTrigger {
    bool IncludeTransmitPowerLevel { get; set; }
    bool IsAnonymous { get; set; }
    IReference<short> PreferredTransmitPowerLevelInDBm { get; set; }
    bool UseExtendedFormat { get; set; }
  }
  public sealed class BluetoothLEAdvertisementWatcherTrigger : IBackgroundTrigger {
    bool AllowExtendedAdvertisements { get; set; }
  }
}
namespace Windows.ApplicationModel.ConversationalAgent {
  public sealed class ActivationSignalDetectionConfiguration
  public enum ActivationSignalDetectionTrainingDataFormat
  public sealed class ActivationSignalDetector
  public enum ActivationSignalDetectorKind
  public enum ActivationSignalDetectorPowerState
  public sealed class ConversationalAgentDetectorManager
  public sealed class DetectionConfigurationAvailabilityChangedEventArgs
  public enum DetectionConfigurationAvailabilityChangeKind
  public sealed class DetectionConfigurationAvailabilityInfo
  public enum DetectionConfigurationTrainingStatus
}
namespace Windows.ApplicationModel.DataTransfer {
  public sealed class DataPackage {
    event TypedEventHandler<DataPackage, object> ShareCanceled;
  }
}
namespace Windows.Devices.Bluetooth {
  public sealed class BluetoothAdapter {
    bool IsExtendedAdvertisingSupported { get; }
    uint MaxAdvertisementDataLength { get; }
  }
}
namespace Windows.Devices.Bluetooth.Advertisement {
  public sealed class BluetoothLEAdvertisementPublisher {
    bool IncludeTransmitPowerLevel { get; set; }
    bool IsAnonymous { get; set; }
    IReference<short> PreferredTransmitPowerLevelInDBm { get; set; }
    bool UseExtendedAdvertisement { get; set; }
  }
  public sealed class BluetoothLEAdvertisementPublisherStatusChangedEventArgs {
    IReference<short> SelectedTransmitPowerLevelInDBm { get; }
  }
  public sealed class BluetoothLEAdvertisementReceivedEventArgs {
    BluetoothAddressType BluetoothAddressType { get; }
    bool IsAnonymous { get; }
    bool IsConnectable { get; }
    bool IsDirected { get; }
    bool IsScannable { get; }
    bool IsScanResponse { get; }
    IReference<short> TransmitPowerLevelInDBm { get; }
  }
  public enum BluetoothLEAdvertisementType {
    Extended = 5,
  }
  public sealed class BluetoothLEAdvertisementWatcher {
    bool AllowExtendedAdvertisements { get; set; }
  }
  public enum BluetoothLEScanningMode {
    None = 2,
  }
}
namespace Windows.Devices.Bluetooth.Background {
  public sealed class BluetoothLEAdvertisementPublisherTriggerDetails {
    IReference<short> SelectedTransmitPowerLevelInDBm { get; }
  }
}
namespace Windows.Devices.Display {
  public sealed class DisplayMonitor {
    bool IsDolbyVisionSupportedInHdrMode { get; }
  }
}
namespace Windows.Devices.Input {
  public sealed class PenButtonListener
  public sealed class PenDockedEventArgs
  public sealed class PenDockListener
  public sealed class PenTailButtonClickedEventArgs
  public sealed class PenTailButtonDoubleClickedEventArgs
  public sealed class PenTailButtonLongPressedEventArgs
  public sealed class PenUndockedEventArgs
}
namespace Windows.Devices.Sensors {
  public sealed class Accelerometer {
    AccelerometerDataThreshold ReportThreshold { get; }
  }
  public sealed class AccelerometerDataThreshold
  public sealed class Barometer {
    BarometerDataThreshold ReportThreshold { get; }
  }
  public sealed class BarometerDataThreshold
  public sealed class Compass {
    CompassDataThreshold ReportThreshold { get; }
  }
  public sealed class CompassDataThreshold
  public sealed class Gyrometer {
    GyrometerDataThreshold ReportThreshold { get; }
  }
  public sealed class GyrometerDataThreshold
  public sealed class Inclinometer {
    InclinometerDataThreshold ReportThreshold { get; }
  }
  public sealed class InclinometerDataThreshold
  public sealed class LightSensor {
    LightSensorDataThreshold ReportThreshold { get; }
  }
  public sealed class LightSensorDataThreshold
  public sealed class Magnetometer {
    MagnetometerDataThreshold ReportThreshold { get; }
  }
  public sealed class MagnetometerDataThreshold
}
namespace Windows.Foundation.Metadata {
  public sealed class AttributeNameAttribute : Attribute
  public sealed class FastAbiAttribute : Attribute
  public sealed class NoExceptionAttribute : Attribute
}
namespace Windows.Globalization {
  public sealed class Language {
    string AbbreviatedName { get; }
    public static IVector<string> GetMuiCompatibleLanguageListFromLanguageTags(IIterable<string> languageTags);
  }
}
namespace Windows.Graphics.Capture {
  public sealed class GraphicsCaptureSession : IClosable {
    bool IsCursorCaptureEnabled { get; set; }
  }
}
namespace Windows.Graphics.DirectX {
  public enum DirectXPixelFormat {
    SamplerFeedbackMinMipOpaque = 189,
    SamplerFeedbackMipRegionUsedOpaque = 190,
  }
}
namespace Windows.Graphics.Holographic {
  public sealed class HolographicFrame {
    HolographicFrameId Id { get; }
  }
  public struct HolographicFrameId
  public sealed class HolographicFrameRenderingReport
  public sealed class HolographicFrameScanoutMonitor : IClosable
  public sealed class HolographicFrameScanoutReport
  public sealed class HolographicSpace {
    HolographicFrameScanoutMonitor CreateFrameScanoutMonitor(uint maxQueuedReports);
  }
}
namespace Windows.Management.Deployment {
  public sealed class AddPackageOptions
  public enum DeploymentOptions : uint {
    StageInPlace = (uint)4194304,
  }
  public sealed class PackageManager {
    IAsyncOperationWithProgress<DeploymentResult, DeploymentProgress> AddPackageByUriAsync(Uri packageUri, AddPackageOptions options);
    IVector<Package> FindProvisionedPackages();
    PackageStubPreference GetPackageStubPreference(string packageFamilyName);
    IAsyncOperationWithProgress<DeploymentResult, DeploymentProgress> RegisterPackageByUriAsync(Uri manifestUri, RegisterPackageOptions options);
    IAsyncOperationWithProgress<DeploymentResult, DeploymentProgress> RegisterPackagesByFullNameAsync(IIterable<string> packageFullNames, RegisterPackageOptions options);
    void SetPackageStubPreference(string packageFamilyName, PackageStubPreference useStub);
    IAsyncOperationWithProgress<DeploymentResult, DeploymentProgress> StagePackageByUriAsync(Uri packageUri, StagePackageOptions options);
  }
  public enum PackageStubPreference
  public enum PackageTypes : uint {
    All = (uint)4294967295,
  }
  public sealed class RegisterPackageOptions
  public enum RemovalOptions : uint {
    PreserveRoamableApplicationData = (uint)128,
  }
  public sealed class StagePackageOptions
  public enum StubPackageOption
}
namespace Windows.Media.Audio {
  public sealed class AudioPlaybackConnection : IClosable
  public sealed class AudioPlaybackConnectionOpenResult
  public enum AudioPlaybackConnectionOpenResultStatus
  public enum AudioPlaybackConnectionState
}
namespace Windows.Media.Capture {
  public sealed class MediaCapture : IClosable {
    MediaCaptureRelativePanelWatcher CreateRelativePanelWatcher(StreamingCaptureMode captureMode, DisplayRegion displayRegion);
  }
  public sealed class MediaCaptureInitializationSettings {
    Uri DeviceUri { get; set; }
    PasswordCredential DeviceUriPasswordCredential { get; set; }
  }
  public sealed class MediaCaptureRelativePanelWatcher : IClosable
}
namespace Windows.Media.Capture.Frames {
  public sealed class MediaFrameSourceInfo {
    Panel GetRelativePanel(DisplayRegion displayRegion);
  }
}
namespace Windows.Media.Devices {
  public sealed class PanelBasedOptimizationControl
  public sealed class VideoDeviceController : IMediaDeviceController {
    PanelBasedOptimizationControl PanelBasedOptimizationControl { get; }
  }
}
namespace Windows.Media.MediaProperties {
  public static class MediaEncodingSubtypes {
    public static string Pgs { get; }
    public static string Srt { get; }
    public static string Ssa { get; }
    public static string VobSub { get; }
  }
  public sealed class TimedMetadataEncodingProperties : IMediaEncodingProperties {
    public static TimedMetadataEncodingProperties CreatePgs();
    public static TimedMetadataEncodingProperties CreateSrt();
    public static TimedMetadataEncodingProperties CreateSsa(byte[] formatUserData);
    public static TimedMetadataEncodingProperties CreateVobSub(byte[] formatUserData);
  }
}
namespace Windows.Networking.BackgroundTransfer {
  public sealed class DownloadOperation : IBackgroundTransferOperation, IBackgroundTransferOperationPriority {
    void RemoveRequestHeader(string headerName);
    void SetRequestHeader(string headerName, string headerValue);
  }
  public sealed class UploadOperation : IBackgroundTransferOperation, IBackgroundTransferOperationPriority {
    void RemoveRequestHeader(string headerName);
    void SetRequestHeader(string headerName, string headerValue);
  }
}
namespace Windows.Networking.Connectivity {
  public enum NetworkAuthenticationType {
    Owe = 12,
  }
}
namespace Windows.Networking.NetworkOperators {
  public sealed class NetworkOperatorTetheringAccessPointConfiguration {
    TetheringWiFiBand Band { get; set; }
    bool IsBandSupported(TetheringWiFiBand band);
    IAsyncOperation<bool> IsBandSupportedAsync(TetheringWiFiBand band);
  }
  public sealed class NetworkOperatorTetheringManager {
    public static void DisableNoConnectionsTimeout();
    public static IAsyncAction DisableNoConnectionsTimeoutAsync();
    public static void EnableNoConnectionsTimeout();
    public static IAsyncAction EnableNoConnectionsTimeoutAsync();
    public static bool IsNoConnectionsTimeoutEnabled();
  }
  public enum TetheringWiFiBand
}
namespace Windows.Networking.PushNotifications {
  public static class PushNotificationChannelManager {
    public static event EventHandler<PushNotificationChannelsRevokedEventArgs> ChannelsRevoked;
  }
  public sealed class PushNotificationChannelsRevokedEventArgs
  public sealed class RawNotification {
    IBuffer ContentBytes { get; }
  }
}
namespace Windows.Security.Authentication.Web.Core {
  public sealed class WebAccountMonitor {
    event TypedEventHandler<WebAccountMonitor, WebAccountEventArgs> AccountPictureUpdated;
  }
}
namespace Windows.Security.Isolation {
  public sealed class IsolatedWindowsEnvironment
  public enum IsolatedWindowsEnvironmentActivator
  public enum IsolatedWindowsEnvironmentAllowedClipboardFormats : uint
  public enum IsolatedWindowsEnvironmentAvailablePrinters : uint
  public enum IsolatedWindowsEnvironmentClipboardCopyPasteDirections : uint
  public struct IsolatedWindowsEnvironmentContract
  public struct IsolatedWindowsEnvironmentCreateProgress
  public sealed class IsolatedWindowsEnvironmentCreateResult
  public enum IsolatedWindowsEnvironmentCreateStatus
  public sealed class IsolatedWindowsEnvironmentFile
  public static class IsolatedWindowsEnvironmentHost
  public enum IsolatedWindowsEnvironmentHostError
  public sealed class IsolatedWindowsEnvironmentLaunchFileResult
  public enum IsolatedWindowsEnvironmentLaunchFileStatus
  public sealed class IsolatedWindowsEnvironmentOptions
  public static class IsolatedWindowsEnvironmentOwnerRegistration
  public sealed class IsolatedWindowsEnvironmentOwnerRegistrationData
  public sealed class IsolatedWindowsEnvironmentOwnerRegistrationResult
  public enum IsolatedWindowsEnvironmentOwnerRegistrationStatus
  public sealed class IsolatedWindowsEnvironmentProcess
  public enum IsolatedWindowsEnvironmentProcessState
  public enum IsolatedWindowsEnvironmentProgressState
  public sealed class IsolatedWindowsEnvironmentShareFolderRequestOptions
  public sealed class IsolatedWindowsEnvironmentShareFolderResult
  public enum IsolatedWindowsEnvironmentShareFolderStatus
  public sealed class IsolatedWindowsEnvironmentStartProcessResult
  public enum IsolatedWindowsEnvironmentStartProcessStatus
  public sealed class IsolatedWindowsEnvironmentTelemetryParameters
  public static class IsolatedWindowsHostMessenger
  public delegate void MessageReceivedCallback(Guid receiverId, IVectorView<object> message);
}
namespace Windows.Storage {
  public static class KnownFolders {
    public static IAsyncOperation<StorageFolder> GetFolderAsync(KnownFolderId folderId);
    public static IAsyncOperation<KnownFoldersAccessStatus> RequestAccessAsync(KnownFolderId folderId);
    public static IAsyncOperation<KnownFoldersAccessStatus> RequestAccessForUserAsync(User user, KnownFolderId folderId);
  }
  public enum KnownFoldersAccessStatus
  public sealed class StorageFile : IInputStreamReference, IRandomAccessStreamReference, IStorageFile, IStorageFile2, IStorageFilePropertiesWithAvailability, IStorageItem, IStorageItem2, IStorageItemProperties, IStorageItemProperties2, IStorageItemPropertiesWithProvider {
    public static IAsyncOperation<StorageFile> GetFileFromPathForUserAsync(User user, string path);
  }
  public sealed class StorageFolder : IStorageFolder, IStorageFolder2, IStorageFolderQueryOperations, IStorageItem, IStorageItem2, IStorageItemProperties, IStorageItemProperties2, IStorageItemPropertiesWithProvider {
    public static IAsyncOperation<StorageFolder> GetFolderFromPathForUserAsync(User user, string path);
  }
}
namespace Windows.Storage.Provider {
  public sealed class StorageProviderFileTypeInfo
  public sealed class StorageProviderSyncRootInfo {
    IVector<StorageProviderFileTypeInfo> FallbackFileTypeInfo { get; }
  }
  public static class StorageProviderSyncRootManager {
    public static bool IsSupported();
  }
}
namespace Windows.System {
  public sealed class UserChangedEventArgs {
    IVectorView<UserWatcherUpdateKind> ChangedPropertyKinds { get; }
  }
  public enum UserWatcherUpdateKind
}
namespace Windows.UI.Composition.Interactions {
  public sealed class InteractionTracker : CompositionObject {
    int TryUpdatePosition(Vector3 value, InteractionTrackerClampingOption option, InteractionTrackerPositionUpdateOption posUpdateOption);
  }
  public enum InteractionTrackerPositionUpdateOption
}
namespace Windows.UI.Input {
  public sealed class CrossSlidingEventArgs {
    uint ContactCount { get; }
  }
  public sealed class DraggingEventArgs {
    uint ContactCount { get; }
  }
  public sealed class GestureRecognizer {
    uint HoldMaxContactCount { get; set; }
    uint HoldMinContactCount { get; set; }
    float HoldRadius { get; set; }
    TimeSpan HoldStartDelay { get; set; }
    uint TapMaxContactCount { get; set; }
    uint TapMinContactCount { get; set; }
    uint TranslationMaxContactCount { get; set; }
    uint TranslationMinContactCount { get; set; }
  }
  public sealed class HoldingEventArgs {
    uint ContactCount { get; }
    uint CurrentContactCount { get; }
  }
  public sealed class ManipulationCompletedEventArgs {
    uint ContactCount { get; }
    uint CurrentContactCount { get; }
  }
  public sealed class ManipulationInertiaStartingEventArgs {
    uint ContactCount { get; }
  }
  public sealed class ManipulationStartedEventArgs {
    uint ContactCount { get; }
  }
  public sealed class ManipulationUpdatedEventArgs {
    uint ContactCount { get; }
    uint CurrentContactCount { get; }
  }
  public sealed class RightTappedEventArgs {
    uint ContactCount { get; }
  }
  public sealed class SystemButtonEventController : AttachableInputObject
  public sealed class SystemFunctionButtonEventArgs
  public sealed class SystemFunctionLockChangedEventArgs
  public sealed class SystemFunctionLockIndicatorChangedEventArgs
  public sealed class TappedEventArgs {
    uint ContactCount { get; }
  }
}
namespace Windows.UI.Input.Inking {
  public sealed class InkModelerAttributes {
    bool UseVelocityBasedPressure { get; set; }
  }
}
namespace Windows.UI.Text {
  public enum RichEditMathMode
  public sealed class RichEditTextDocument : ITextDocument {
    void GetMath(out string value);
    void SetMath(string value);
    void SetMathMode(RichEditMathMode mode);
  }
}
namespace Windows.UI.ViewManagement {
  public sealed class UISettings {
    event TypedEventHandler<UISettings, UISettingsAnimationsEnabledChangedEventArgs> AnimationsEnabledChanged;
    event TypedEventHandler<UISettings, UISettingsMessageDurationChangedEventArgs> MessageDurationChanged;
  }
  public sealed class UISettingsAnimationsEnabledChangedEventArgs
  public sealed class UISettingsMessageDurationChangedEventArgs
}
namespace Windows.UI.ViewManagement.Core {
  public sealed class CoreInputView {
    event TypedEventHandler<CoreInputView, CoreInputViewHidingEventArgs> PrimaryViewHiding;
    event TypedEventHandler<CoreInputView, CoreInputViewShowingEventArgs> PrimaryViewShowing;
  }
  public sealed class CoreInputViewHidingEventArgs
  public enum CoreInputViewKind {
    Symbols = 4,
  }
  public sealed class CoreInputViewShowingEventArgs
  public sealed class UISettingsController
}

The post Windows 10 SDK Preview Build 19028 available now! appeared first on Windows Developer Blog.

AI, Machine Learning and Data Science Roundup: November 2019

A roundup of news about Artificial Intelligence, Machine Learning and Data Science. This is an eclectic collection of interesting blog posts, software announcements and data applications from Microsoft and elsewhere that I've noted recently.

Open Source AI, ML & Data Science News

Python 3.8 is now available. From now on, new versions of Python will be released on a 12-month cycle, in October of each year.

Python takes the #2 spot in GitHub's annual ranking of programming language popularity, displacing Java and trailing only JavaScript.

PyTorch 1.3 is now available, with improved performance, deployment to mobile devices, "Captum" model interpretability tools, and Cloud TPU support.

The Gradient documents the growing dominance of PyTorch, particularly in research.

Keras Tuner, hyperparameter optimization for Keras, is now available on PyPI.

ONNX, the open exchange format for deep learning models, is now a Linux Foundation project.

AI Inclusive, a newly-formed worldwide organization to promote diversity in the AI community.

Industry News

Databricks announces the MLflow Model Registry, for sharing and collaborating on machine learning models with MLflow.

Flyte, Lyft's cloud-native machine learning and data processing platform, has been released as open source.

RStudio introduces Package Manager, a commercial RStudio extension to help organizations manage binary R packages on Linux systems.

Exploratory, a new commercial tool for data science and data exploration, built on R.

GCP releases Explainable AI, a new tool to help humans understand how a machine learning model reaches its conclusions.

Google proposes Model Cards, a standardized way of sharing information about ML models, based on this paper.

GCP AutoML Translation is now generally available, and the GCP Translation API is now available in Basic and Advanced editions.

GCP Cloud AutoML is now integrated with the Kaggle data science competition platform.

Amazon Rekognition adds Custom Labels, allowing users to train the image classification service to recognize new objects with as few as 10 training images per label.

Amazon SageMaker can now use hundreds of free and paid machine learning models offered in the AWS Marketplace.

The AWS Step Functions Data Science SDK, for building machine learning workflows in Python running on AWS infrastructure, is now available.

Microsoft News

Azure Machine Learning service has released several major updates.

Visual Studio Code adds several improvements for Python developers, including support for interacting with and editing Jupyter notebooks.

ONNX Runtime 1.0 is now generally available, for embedded inference of machine learning models in the open ONNX format.

Many new capabilities have been added to Cognitive Services.

Bot Framework SDK v4 is now available, and a new Bot Framework Composer has been released on GitHub for visual editing of conversation flows.

SandDance, Microsoft's interactive visual exploration tool, is now available as open source.

Learning resources

An essay about the root causes of problems with diversity in NLP models: for example, "hers" not being recognized as a pronoun. 

Videos from the Artificial Intelligence and Machine Learning Path, a series of six application-oriented talks presented at Microsoft Ignite.

A guide to getting started with PyTorch, using Google Colab's Free GPU offer.

Public weather and climate datasets, provided by Google.

Applications

The Relightables: capture humans in a custom light stage, then drop the video into a 3-D scene with realistic lighting.

How Tesla builds and deploys its driving automation models with PyTorch (presentation at PyTorch DevCon).

OpenAI has released the full GPT-2 language generation model.

Spleeter, a pre-trained TensorFlow model to separate a music track into vocal and instrument audio files.

Detectron2, a PyTorch reimplementation of Facebook's popular object-detection and image-segmentation library.

Find previous editions of the AI roundup here.

Embracing nullable reference types

Probably the most impactful feature of C# 8.0 is Nullable Reference Types (NRTs). It lets you make the flow of nulls explicit in your code, and warns you when you don’t act according to intent.

The NRT feature holds you to a higher standard on how you deal with nulls, and as such it issues new warnings on existing code. So that those warnings (however useful) don’t break you, the feature must be explicitly enabled in your code before it starts complaining. Once you do that on existing code, you have work to do to make that code null-safe and satisfy the compiler that you did.

How should you think about when to do this work? That’s the main subject of this post, and we propose below that there’s a “nullable rollout phase” until .NET 5 ships (November 2020), wherein popular libraries should strive to embrace NRTs.

But first a quick primer.

Remind me – what is this feature again?

Up until now, in C# we allow references to be null, but we also allow them to be dereferenced without checks. This leads to what is by far the most common exception – the NullReferenceException – when nulls are accidentally dereferenced. An undesired null coming from one place in the code may lead to an exception being thrown later, from somewhere else that dereferences it. This makes null bugs hard to discover and annoying to fix. Can you spot the bug?

static void M(string s) 
{ 
    Console.WriteLine(s.Length);
}
static void Main(string[] args)
{
    string s = (args.Length > 0) ? args[0] : null;
    M(s);
}

In C# 8.0 we want to help get rid of this problem by being stricter about nulls. This means we’re going to start complaining when values of ordinary reference types (string, object, IDisposable etc) are null. However, new warnings on existing code aren’t something we can just do, no matter how good it is for you! So NRT is an optional feature – you have to turn it on to get new warnings. You can do that either at the project level, or directly in the source code with a new directive:

#nullable enable

If you put this on the example above (e.g. at the top of the file) you’ll get a warning on this line:

    string s = (args.Length > 0) ? args[0] : null; // WARNING!

saying you shouldn’t assign the right-hand-side value to the string variable s because it might be null! Ordinary reference types have become non-nullable! You can fix the warning by giving a non-null value:

    string s = (args.Length > 0) ? args[0] : "";

If you want s to be able to be null, however, that’s fine too, but you have to say so, by using a nullable reference type – i.e. tagging a ? on the end of string:

    string? s = (args.Length > 0) ? args[0] : null;

Now the warning on that line goes away, but of course it shows up on the next line where you’re now passing something that you said may be null (a string?) to something that doesn’t want a null (a string):

    M(s); // WARNING!

Now again you can choose whether to change the signature of M (if you own it) to accept nulls or whether to make sure you don’t pass it a null to begin with.

C# is pretty smart about this. Let’s only call M if s is not null:

    if (s != null) M(s);

Now the warning disappears. This is because C# tracks the null state of variables across execution flow. In this case, even though s is declared to be a string?, C# knows that it won’t be null inside the true-branch of the if, because we just tested that.

In summary the nullable feature splits reference types into non-nullable reference types (such as string) and nullable reference types (such as string?), and enforces their null behavior with warnings.

This is enough of a primer for the purposes of this post. If you want to go deeper, please visit the docs on Nullable Reference Types, or check some of the earlier posts on the topic (Take C# 8.0 for a spin, Introducing Nullable Reference Types in C#).

There are many more nuances to how you can tune your nullable annotations, and we use a good many of them in our “nullification” of the .NET Core Libraries. The post Try out Nullable Reference Types explores those in great detail.

How and when to become “null-aware”?

Now to the meat of this post. When should you adopt nullable reference types? How to think about that? Here are some observations about the interaction between libraries and clients. Afterwards we propose a shared timeline for the whole ecosystem – the “nullable rollout phase” – to guide the adoption based on what you are building.

What happens when you enable nullable reference types in your code?

You will have to go over your signatures to decide, in each place where you have a reference type, whether to leave it non-nullable (e.g. string) or make it nullable (e.g. string?). Does your method handle null arguments gracefully (or even meaningfully), or does it immediately check and throw? If it throws on null, you want to keep the parameter non-nullable to signal that to your callers. Does your method sometimes return null? If so, you want to make the return type nullable to “warn” your callers about it.
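
For example, a hypothetical Catalog type (the names here are illustrative, not from the post) might end up annotated like this: throw-on-null parameters stay non-nullable, and a lookup that can miss returns a nullable reference.

#nullable enable
using System;
using System.Collections.Generic;

public class Product { }

public class Catalog
{
    private readonly Dictionary<string, Product> _items = new Dictionary<string, Product>();

    // Throws on null, so the parameter stays non-nullable:
    // callers are warned if they might pass null.
    public void Add(string name, Product product)
    {
        if (name is null) throw new ArgumentNullException(nameof(name));
        _items[name] = product;
    }

    // Can miss, so the return type is nullable: callers are warned
    // if they dereference the result without checking it first.
    public Product? Find(string name) =>
        _items.TryGetValue(name, out var product) ? product : null;
}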

You’ll also start getting warnings when you use those members wrong. If you dereference the result of a method that returns string? and you don’t check it for null first, then you’ll have to fix that.

What happens when you call libraries that have the feature enabled?

If you yourself have the feature enabled, and a library you depend on has already been compiled with the feature on, then it too will have nullable and non-nullable types in its signatures, and you will get warnings if you use those in the wrong way.

This is one of the core values of NRTs: libraries can accurately describe the null behavior of their APIs, in a way that is checkable in client code at the call site. This raises expressiveness on API boundaries so that everyone can get a handle on the safe propagation and dereferencing of nulls. Nobody likes null reference exceptions or argument-null exceptions! This helps you write the code right the first time, and avoid the sources of those exceptions before you even compile and run the code.

What happens when you call libraries that have not enabled the feature?

Nothing! If a library was not compiled with the feature on, your compiler cannot assume one way or the other about whether types in the signatures were supposed to be nullable or not. So it doesn’t give you any warnings when you use the library. In nullable parlance, the library is “null-oblivious”. So even though you have opted in to getting the null checking, it only goes as far as the boundary to a null-oblivious library.

When that library later comes out in a new version that does enable the feature, and you upgrade to that version, you may get new warnings! All of a sudden, your compiler knows what is “right” and “wrong” in the consumption of those APIs, and will start telling you about the “wrong”!

This is good of course. But if you adopt NRTs before the libraries you depend on, it does mean that you’ll get some churn as they “come online” with their null annotations.

The nullable rollout phase

Here comes the big ask of you. In order to minimize the impact and churn, I want to recommend that we all think about the next year’s time until .NET 5 (November 2020) as the “nullable rollout phase”, where certain behaviors are encouraged. After that, we should be in a “new normal” where NRTs are everywhere, and everyone can use this feature to track and be explicit about nullability.

What should library authors do?

We strongly encourage authors of libraries (and similar infrastructure, such as code generators) to adopt NRTs during the nullable rollout phase. Pick a time that’s natural according to your shipping schedule, and that lets you get the work done, but do it within the next year. If your clients pester you to do it quicker, you can tell them “No! Go away! It’s still the nullable rollout phase!”

If you do go beyond the nullable rollout phase, however, your clients start having a point that you are holding back their adoption, and causing them to risk churn further down the line.

As a library writer you always face a dilemma between reach of your library and the feature set you can depend on in the runtime. In some cases you may feel compelled to split your library in two so that one version can target e.g. the classic .NET Framework, while a “modern” version makes use of e.g. new types and features in .NET Core 3.1.

However, with Nullable Reference Types specifically, you should be able to work around this. If you multitarget your library (e.g. in Visual Studio) to .NET Standard 2.0 and .NET Core 3.1, you will get the reach of .NET Standard 2.0 while benefitting from the nullable annotations of the .NET Core 3.1 libraries.

You also have to set the language version to C# 8.0, of course, and that is not a supported scenario when one of the target versions is below .NET Core 3.0. However, you can still do it manually in your project settings, and unlike many C# 8.0 features, the NRT feature specifically happens to not depend on specific elements of .NET Core 3.1. But if you try to use other language features of C# 8.0 while targeting .NET Standard 2.0, all bets are off!
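
Concretely, that combination boils down to a few project-file properties. A sketch of the relevant MSBuild settings (a manual edit, since the IDE does not offer C# 8.0 for the .NET Standard 2.0 target):

<PropertyGroup>
  <TargetFrameworks>netstandard2.0;netcoreapp3.1</TargetFrameworks>
  <!-- Opt in to C# 8.0 and nullable reference types by hand;
       supported tooling only offers this for .NET Core 3.x targets. -->
  <LangVersion>8.0</LangVersion>
  <Nullable>enable</Nullable>
</PropertyGroup>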

What should library users do?

You should be aware that there’s a nullable rollout phase where things will be in flux. If you don’t mind the flux, by all means turn the feature on right away! It may be easier to fix bugs gradually, as libraries come online, rather than in bulk.

If you do want to save up the work for one fell swoop, however, you should wait for the nullable rollout phase to be over, or at least for all the libraries you depend on to have enabled the feature.

It’s not fair to nag your library providers about nullability annotations until the nullable rollout phase is over. Engaging them to help get it done, through OSS or as early adopters or whatever, is of course highly encouraged, as always.

What will Microsoft do?

We will also aim to be done with null-annotating our core libraries when .NET 5 comes around – and we are currently on track to do so. (Tracking issue: Annotate remainder of .NET Core assemblies for nullable reference types).

We will also keep a keen eye on the usage and feedback during this time, and we will feel free to make adjustments anywhere in the stack, whether library, compilers or tooling, in order to improve the experience based on what we hear. Adjustments, not sweeping changes. For instance, this and this issue were already addressed by this and this fix.

When .NET 5 rolls around, if we feel the nullable rollout phase has been a success, I could see us turning the feature on by default for new projects in Visual Studio. If the ecosystem is ready for it, there is no reason why any new code should ignore the improved safety and reliability you get from nullability annotations!

At that point, the mechanisms for opt-in and opt-out become effectively obsolete – a mechanism to deal with legacy code.

Call to action

Make a plan! How are you going to act on nullable reference types? Try it out! Turn it on in your code and see what happens. Scary many warnings? That may happen until you get your signatures annotated right. After that, the remaining warnings are about the quality of your consuming code, and those are the reward: an opportunity to fix the places where your code is probably not null safe!

And as always: Have fun exploring!

Happy hacking,

Mads Torgersen, C# lead designer

The post Embracing nullable reference types appeared first on .NET Blog.

The open source Carter Community Project adds opinionated elegance to ASP.NET Core routing

I blogged about NancyFX 6 years ago and since then lots of ASP.NET open source frameworks that build upon - and improve! - web development on .NET have become popular.

There's more than one way to serve an angle bracket (or curly brace), my friends!

Jonathan Channon and the Carter Community (JC was a core Nancy contributor as well) have been making a thin layer of extension methods and conventions on top of ASP.NET Core to make URL routing "more elegant." Carter adds and formalizes a more opinionated framework and also adds direct support for the amazing FluentValidation.

One of the best things about ASP.NET Core is its extensibility model and Carter takes full advantage of that. Carter is ASP.NET.

You can add Carter to your existing ASP.NET Core app by just "dotnet add package carter" and adding it to your Startup.cs:

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddCarter();
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseRouting();
        app.UseEndpoints(builder => builder.MapCarter());
    }
}

At this point you can make a quick "microservice" - in this case just handle an HTTP GET - in almost no code, and it's super clear to read:

public class HomeModule : CarterModule
{
    public HomeModule()
    {
        Get("/", async (req, res) => await res.WriteAsync("Hello from Carter!"));
    }
}

Or you can add Carter as a template so you can later "dotnet new carter." Start by adding the Carter Template with "dotnet new -i CarterTemplate" and now you can make a new boilerplate starter app anytime.

There's a lot of great sample code on the Carter Community GitHub. Head over to https://github.com/CarterCommunity/Carter/tree/master/samples and give them more Stars!

Carter can also cleanly integrate with your existing ASP.NET apps because, again, it's extensions and improvements on top of ASP.NET. Here's how you can add Carter to an ASP.NET Core app that's using Controllers in the MVC pattern:

public void Configure(IApplicationBuilder app)
{
    app.UseRouting();
    app.UseEndpoints(builder =>
    {
        builder.MapDefaultControllerRoute();
        builder.MapCarter();
    });
}

Then easily handle a GET by returning a list of things as JSON like this:

this.Get<GetActors>("/actors", async (req, res) =>
{
    var people = actorProvider.Get();
    await res.AsJson(people);
});
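
The FluentValidation integration follows the same low-ceremony style: validators are plain FluentValidation classes that Carter discovers and applies during model binding. A sketch (the Actor model below is hypothetical, and the exact binding helper, e.g. req.BindAndValidate<Actor>(), varies between Carter versions):

using FluentValidation;

public class Actor
{
    public string Name { get; set; }
    public int Age { get; set; }
}

// A standard FluentValidation validator; Carter picks these up
// automatically and applies them when a module binds an Actor.
public class ActorValidator : AbstractValidator<Actor>
{
    public ActorValidator()
    {
        RuleFor(a => a.Name).NotEmpty();
        RuleFor(a => a.Age).InclusiveBetween(0, 130);
    }
}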

 

Again, check out Carter on GitHub and follow https://twitter.com/CarterLibs on Twitter!






Azure IoT Tools November Update: standalone simulator for Azure IoT Edge development and more!

Welcome to the November update of Azure IoT Tools!

In this November release, you will see the new standalone simulator for Azure IoT Edge development, support for Vcpkg in IoT Plug and Play development, and more new features.

Deploy Event Grid module on Azure IoT Edge

Event Grid on IoT Edge brings the power and flexibility of Azure Event Grid to the edge for all pub/sub and event-driven scenarios. There are several ways to deploy the Event Grid module in VS Code:

1. When adding a new module to your new or existing IoT Edge solution, there is now a new option to choose Azure Event Grid.

2. When adding a new module to your new or existing IoT Edge solution and selecting Module from Azure Marketplace, you can see Azure Event Grid on IoT Edge.

3. In VS Code command palette, type and select Azure IoT Edge: Show Sample Gallery. You can open a new sample with pub/sub Functions along with Event Grid module.

Click here to learn more about Azure Event Grid on IoT Edge.

Standalone simulator for Azure IoT Edge development

For Azure IoT Edge developers, the Azure IoT EdgeHub Dev Tool provides a local development experience with a simulator for creating, developing, testing, running, and debugging Azure IoT Edge modules and solutions. However, the Azure IoT EdgeHub Dev Tool runs on top of a Python environment, and not every Azure IoT Edge developer - especially those using Windows as their development environment - has Python and pip installed. Therefore, we have shipped a standalone simulator for the Azure IoT EdgeHub Dev Tool so that developers who use Windows as their development environment no longer need to set up a Python environment. The standalone simulator has already been integrated into the latest release of Azure IoT Tools for Visual Studio Code, so when you use Azure IoT Tools for Visual Studio Code, the simulator is used automatically with no extra setup.

Support Vcpkg for IoT Plug and Play development

Vcpkg is a cross-platform library manager that helps you manage C and C++ libraries on Windows, Linux, and macOS. With Vcpkg support for IoT Plug and Play development, developers can easily leverage Vcpkg to manage the Azure IoT C device SDK as well as other C/C++ dependencies.

Previously, source code was the only way to include the Azure IoT C device SDK. Now, developers can generate the IoT Plug and Play device code stub via either Vcpkg or source code.
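For example, assuming the standard Vcpkg workflow and the azure-iot-sdk-c port name published in the Vcpkg catalog, pulling in the SDK is a single command once Vcpkg is bootstrapped:

vcpkg install azure-iot-sdk-c

This fetches and builds the SDK along with its C/C++ dependencies, so your project no longer has to vendor the SDK source.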

For more details with the step-by-step instructions, you can check out this tutorial to see how to create an IoT Plug and Play device via Vcpkg.

Configure an Embedded Linux C project using containerized device toolchain

We released the preview experience of the containerized toolchain a few months ago, aiming to simplify the toolchain acquisition effort for device developers working on C/C++ projects for Embedded Linux, which require the cross-compiling toolchain, device SDK, and dependent libraries to be set up properly. Instead of doing this on the local machine, which could lead to a messed-up environment, we provide a couple of common container images for devices with various architectures (e.g., ARMv7, ARM64, and x86).

Now you can take this feature further by configuring an existing C/C++ project to compile in the container and then deploy to your target device. If you want to customize the container further, you can add the extra device libraries and packages your device requires.

Check the tutorials to learn how to use it for your existing code base.

Try it out

Please don't hesitate to give it a try, and if you're new to Azure, remember you can sign up for a free Azure account to get $200 free Azure credit and access to over 25 always-free services (including Azure IoT Hub)! If you have any feedback, feel free to reach us at https://github.com/microsoft/vscode-azure-iot-tools/issues. We will continuously improve our IoT developer experience to empower every IoT developer on the planet to achieve more!

The post Azure IoT Tools November Update: standalone simulator for Azure IoT Edge development and more! appeared first on Visual Studio Blog.

Top Stories from the Microsoft DevOps Community – 2019.11.29


While our American colleagues are busy enjoying their Thanksgiving break, I wanted to post about something I'm extremely thankful for. No, not the two days without any meetings this week (although that was awesome), but the incredible DevOps community building exciting things with the help of Azure.


Open Source Cloud Summit Johannesburg – IoT Edge Lab

While folks in the US were busy eating pumpkin pie and fixing their relatives' laptops on Thanksgiving, the community in Johannesburg was holding an Open Cloud Summit. Some amazing posts came out of the #OSSSummitJHB hashtag, but my personal favorite was the Azure IoT Edge Hands On Lab from MVP Allan Pead. Allan has run this lab at a couple of IoT Hackdays this month and I'm very jealous - I definitely want to give it a go. In this lab you learn how to do CI/CD to a Raspberry Pi based robot using Azure Pipelines. For more information take a look at the Hands On Lab repo on GitHub.

100 Days of Infrastructure as Code in Azure

Ryan Irujo, Pete Zerger and Tao Yang have been learning different areas of Infrastructure as Code in Azure and this week they have been digging more into YAML Pipelines. It’s definitely worth following along with them by adding a watch on their GitHub repo so that you get notified of changes. (Also don’t forget to sign up for the beta of the new GitHub Mobile app if you want to manage your notifications on the go)

How to Configure CI/CD in Azure DevOps

Over on the excellent Redgate Hub sysadmin blog, Joydip Kanjilal posted a very comprehensive run-through of the process of setting up a basic CI/CD pipeline for a .NET Core app with Visual Studio 2019, Azure Pipelines, and Azure. While it's a demo I do often, and one with plenty of help available, it's great to see such a simple and detailed walk-through of this 'bread and butter' pipeline aimed at the community of sysadmins. While you are there, be sure to check out the excellent Redgate extensions for Azure DevOps, which make doing CD with SQL Server databases a lot easier.

Use GitHub Actions to deploy code to Azure

Popular tech columnist Simon Bisson wrote up how to use the new GitHub Actions for Azure to deploy straight from GitHub to your Azure service of choice. After reading his article, if you want to learn more about the GitHub Actions for Azure, check out the blog post from last week - note that there is even an action to trigger Azure Pipelines, which can come in handy should you want to do your CI build using GitHub Actions and then trigger a release using Azure Pipelines.

3 Ways to run Automated Tests on Azure DevOps

On the TechFabric blog, Seleznov Ihor has posted a deep-dive into three ways to run automated tests in Azure Pipelines: unit tests, UI tests, and API tests, in this case with a .NET Core application.

Continuous Infrastructure in GCP using Azure Pipelines

Ashish Raj has been on a roll lately with Azure DevOps content, and this week was no different with a great look into using GCP with Azure Pipelines and Terraform. His short (15m) video on YouTube is well worth a watch if multi-cloud deployments with Terraform are something you are looking into.

The Unicorn Project

Last but not least, one final thing to be thankful for is that Gene Kim's latest book, The Unicorn Project, is now available. As with The Phoenix Project, Gene explains how DevOps principles work in practice using a fictional narrative that works really well and keeps you engaged. This time the story of Parts Unlimited is told from the position of the engineering teams on the ground, facing hard choices and trying to do the right thing while facing difficult deadlines and fighting for the very survival of the business. Many of the incidents and scenarios ring true from my time as a consultant (the mention of CSV BOMs made me shiver thinking about the time that tripped me up), but also from times even here at Microsoft where we've let technical debt build up and had to recognize that fact and pay it back down. I would encourage everyone to read the book and buy several copies for folks on your team, as you'll quickly find yourself looking at situations at work and thinking 'What Would Maxine Do'. The term 'digital transformation' can be overused and full of buzzwords, but this book does a great job of explaining what it actually means and what it feels like to go through it. Even better, as it's a narrative, the audiobook version works really well too and is narrated by the award-winning professional actor/producer Frankie Corzo, making it a great listen on the go.

Enjoy the rest of the holiday weekend if you are in the US. Don’t forget, if you’ve written an article about Azure DevOps or find some great content about DevOps on Azure, please share it with the #AzureDevOps hashtag on Twitter!

The post Top Stories from the Microsoft DevOps Community – 2019.11.29 appeared first on Azure DevOps Blog.

Application Gateway Ingress Controller for Azure Kubernetes Service


Today we are excited to offer a new solution to bind Azure Kubernetes Service (AKS) and Application Gateway. The new solution provides an open source Application Gateway Ingress Controller (AGIC) for Kubernetes, which makes it possible for AKS customers to leverage Application Gateway to expose their cloud software to the Internet.

Bringing together the benefits of Azure Kubernetes Service, our managed Kubernetes service that makes it easy to operate advanced Kubernetes environments, and Azure Application Gateway, our native, scalable, and highly available L7 load balancer, has been highly requested by our customers.

How does it work?

Application Gateway Ingress Controller runs in its own pod on the customer's AKS cluster. The Ingress Controller monitors a subset of Kubernetes resources for changes. The state of the AKS cluster is translated to Application Gateway-specific configuration and applied to Azure Resource Manager. The continuous re-configuration of Application Gateway ensures an uninterrupted flow of traffic to AKS services. The diagram below illustrates the flow of state and configuration changes from the Kubernetes API, via Application Gateway Ingress Controller, to Resource Manager and then Application Gateway.

Much like the most popular Kubernetes Ingress Controllers, the Application Gateway Ingress Controller provides several features, leveraging Azure’s native Application Gateway L7 load balancer. To name a few:

  • URL routing
  • Cookie-based affinity
  • Secure Sockets Layer (SSL) termination
  • End-to-end SSL
  • Support for public, private, and hybrid web sites
  • Integrated web application firewall


The architecture of the Application Gateway Ingress Controller differs from that of a traditional in-cluster L7 load balancer. The architectural differences are shown in this diagram:


  • An in-cluster load balancer performs all data path operations using the Kubernetes cluster's compute resources, competing for resources with the business apps it is fronting. In-cluster ingress controllers create Kubernetes Service resources and leverage kubenet for network traffic. Compared with the Application Gateway Ingress Controller, traffic flows through an extra hop.
  • The Application Gateway Ingress Controller leverages AKS advanced networking, which allocates an IP address for each pod from the subnet shared with Application Gateway. Application Gateway has direct access to all Kubernetes pods, eliminating the need for data to pass through kubenet. For more information on this topic, see our "Network concepts for applications in Azure Kubernetes Service" article, specifically the "Comparing network models" section.

Solution performance

As a result of Application Gateway having direct connectivity to the Kubernetes pods, the Application Gateway Ingress Controller can achieve up to 50 percent lower network latency vs in-cluster ingress controllers. Application Gateway is a managed service, backed by Azure virtual machine scale sets. As a result, Application Gateway does not use AKS compute resources for data path processing. It does not share or interfere with the resources allocated to the Kubernetes deployment. Autoscaling Application Gateway at peak times, unlike an in-cluster ingress, will not impede the ability to quickly scale up the apps’ pods. And of course, switching from in-cluster L7 ingress to Application Gateway will immediately decrease the compute load used by AKS.

We compared the performance of an in-cluster ingress controller and the Application Gateway Ingress Controller on a three-node AKS cluster with a simple web app running 22 pods per node. A total of 66 web app pods shared resources with three in-cluster ingresses - one per node. We configured Application Gateway with an instance count of two. We used Apache Bench to create a total of 100K requests with concurrency set at 3K requests (an example invocation follows the results below). We launched Apache Bench twice: once pointing it to the SLB fronting the in-cluster ingress controller, and a second time connecting to the public IP of Application Gateway. On this very busy AKS cluster we recorded the mean latency across all requests:

  • Application Gateway: 480ms per request
  • In-cluster Ingress: 710ms per request
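For reference, an Apache Bench invocation matching the run above would look roughly like this, where the URL is a placeholder for the SLB or Application Gateway public IP:

ab -n 100000 -c 3000 http://<public-ip>/

The -n flag sets the total number of requests and -c sets the concurrency level.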

As the data above shows, under heavy load the in-cluster ingress controller has approximately 48 percent higher latency per request compared to the Application Gateway ingress. Running the same benchmark on the same cluster but with two web app pods per node, a total of six pods, we observed the in-cluster ingress controller performing with approximately 17 percent higher latency than Application Gateway.

What’s next?

Application Gateway Ingress Controller is now stable and available for use in production environments. The project is maturing quickly, and we are working actively to add new capabilities. We are working on enhancing the product with features that customers have been asking for, such as using certificates stored on Application Gateway, mutual TLS authentication, gRPC, and HTTP/2. We invite you to try the new Application Gateway Ingress Controller for AKS, follow our progress, and most importantly - give us feedback on GitHub.

Azure Cost Management updates – November 2019


Whether you're a new student, thriving startup, or the largest enterprise, you have financial constraints and you need to know what you're spending, where, and how to plan for the future. Nobody wants a surprise when it comes to the bill, and this is where Microsoft Azure Cost Management comes in.

We're always looking for ways to learn more about your challenges and how Cost Management can help you better understand where you're accruing costs in the cloud, identify and prevent bad spending patterns, and optimize costs to empower you to do more with less. Here are a few of the latest improvements and updates based on your feedback.

Let's dig into the details.

Cost Management now available for Cloud Solution Providers

In case you missed it, as of November 1, Cloud Solution Provider (CSP) partners can now see and manage costs for their customers using Azure Cost Management in the Azure portal by transitioning them to Azure plan subscriptions via Microsoft Customer Agreement. Partners can also enable Azure Cost Management for customers to allow them to see and manage the cost of their subscriptions.

If you're working with a CSP partner to manage your Azure subscriptions, talk to them about getting you onboarded and your subscriptions switched over to the new Azure plan using Microsoft Customer Agreement. Not only will this allow you to see and manage costs in the Azure portal, but you'll also be able to use some Azure services that aren't currently available to your classic CSP subscriptions. As an example, some organizations have dependencies on external solutions that still require classic services, including virtual machines. To work around this, organizations are creating separate pay-as-you-go subscriptions for those resources. This adds additional overhead to manage separate billing accounts with Microsoft and your partner. Once you've switched over to Azure plan subscriptions, you may be able to consolidate any existing CSP and non-CSP subscriptions into a single billing account, managed by your partner. In general, you'll have the same benefits and offerings at the same time as everyone else using Microsoft Customer Agreement. Make sure you talk to your partner today!

If you're a CSP provider, enabling Cost Management for your customers involves three steps:

  1. Confirm acceptance of the Microsoft Customer Agreement on behalf of your customers
    Present the Microsoft Customer Agreement to your customers and, once they've agreed, confirm the customer's official acceptance in Partner Center or via the API/SDK.
  2. Transition your customers to Azure plan
    The last step for you, as the partner, to see and manage cost in the Azure portal is to transition existing CSP offers to an Azure plan. You'll need to do this once for each reseller and direct customer.
  3. Enable Azure Cost Management for your customers
    In order for your customers to see and manage costs in Azure Cost Management, they need to have access to view charges for their subscriptions. This can be enabled from the Azure portal for each customer and shows them their cost based on pay-as-you-go prices; it does not include partner discounts or any discounts you may offer. Please ensure your customers understand the cost will not match your invoice if you offer additional discounts or use custom prices.

To learn more about what you'll see after enabling Azure Cost Management for your customers, read Get started with Azure Cost Management for partners.

What's new in Cost Management Labs

With Cost Management Labs, you get a sneak peek at what's coming in Azure Cost Management and can engage directly with us to share feedback and help us better understand how you use the service so we can deliver more tuned and optimized experiences. Here are a few features you can see in Cost Management Labs:

  • Get started quicker with the cost analysis Home view
    Cost Management offers five built-in views to get started with understanding and drilling into your costs. The Home view gives you quick access to those views so you get to what you need faster.
  • Performance optimizations in cost analysis and dashboard tiles - now available in the public portal
    Whether you're using tiles pinned to the dashboard or the full experience, you'll find cost analysis loads faster than ever.
  • NEW: Show view name on pinned cost analysis tiles - now available in the public portal
    When you pin cost analysis to the dashboard, it now shows the name of the view you pinned. To change it, simply save the view with the desired name and pin cost analysis again!
  • NEW: Quick access to cost analysis help and support - now available in the public portal
    Have a question? Need help? The quickstart tutorial is now one click away in cost analysis. And if you run into an issue, create a support request from cost analysis to send additional context, helping you submit and resolve your issue quicker than ever.
    Use the 'Quickstart tutorial' command at the top of cost analysis to see documentation and 'New support request' to create a support request with additional context to resolve your issue quicker

Of course, that's not all. Every change in Cost Management is available in Cost Management Labs a week before it's in the full Azure portal. We're eager to hear your thoughts and understand what you'd like to see next. What are you waiting for? Try Cost Management Labs today.

Customizing the name on dashboard tiles

You already know you can save and share views in cost analysis. You'll typically start by saving a customized view in cost analysis so others can use it. You might share a link so they can jump directly into the view from outside the portal or share an image of the view to include in an email or presentation. But if you really want to keep an eye on specific perspectives of your cost every time you sign in to the portal, the best option is to pin your view to the dashboard.

Azure portal dashboard with tiles for all the built-in views available in Azure Cost Management

Pinning is easy: Just click the pin icon in the top-right corner of cost analysis and you're done. When you pin your view, the tile shows the name of your view, the scope it represents, and the main chart or table from cost analysis. If you have an older tile you need to rename, open it in cost analysis, click Save as to change the name of the view, then pin it again.

Enjoy and let us know what you'd like to see next!

Upcoming changes to Azure usage data

Many organizations use the full Azure usage and charges data to understand what's being used, identify what charges should be internally billed to which teams, and look for opportunities to optimize costs with Azure reservations and Azure Hybrid Benefit. If you're doing any analysis or have set up integration based on product details in the usage data, please update your logic for the affected services. All of these changes take effect December 1.

Also, remember the key-based EA billing APIs have been replaced by new Azure Resource Manager APIs. The key-based APIs will still work through the end of your enrollment, but will no longer be available when you renew and transition into Microsoft Customer Agreement. Please plan your migration to the latest version of the UsageDetails API to ease your transition to Microsoft Customer Agreement at your next renewal.
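As a hedged sketch of what a call to the Resource Manager-based UsageDetails API looks like (the api-version and scope below are assumptions based on the public REST reference, and token acquisition is elided):

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class UsageDetailsSketch
{
    static async Task Main()
    {
        var subscriptionId = "<subscription-id>"; // placeholder
        var token = "<bearer-token>";             // acquire via Azure AD; elided here

        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", token);

            // Resource Manager-based successor to the key-based EA billing API.
            var url = $"https://management.azure.com/subscriptions/{subscriptionId}" +
                      "/providers/Microsoft.Consumption/usageDetails?api-version=2019-10-01";

            var response = await client.GetAsync(url);
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}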

Save up to 72 percent with Azure reservations – now available for 16 services

Azure reservations help you save up to 72% compared to pay-as-you-go rates when you commit to one or three years of usage. You may know Azure Advisor tells you when you can save money with virtual machine reservations, but did you know with the addition of six new services, you can now purchase reservations for a total of 16 services? Here's the full list as of today:

  • Virtual machines and managed disks
  • Blob storage
  • App Service
  • SQL database and data warehouse
  • Azure Database for MySQL, MariaDB, and PostgreSQL
  • Cosmos DB
  • Data Explorer
  • Databricks
  • SUSE and Red Hat Linux
  • Azure Red Hat OpenShift
  • Azure VMWare solution by CloudSimple

What services would you like to see next? Learn more about Azure reservations and start saving today!

New videos

If you weren't able to make it to Microsoft Ignite 2019 or didn't catch the Azure Cost Management sessions, they're now available online and open for everyone.

If you're looking for something a little shorter, you can also check out these videos.

Subscribe to the Azure Cost Management YouTube channel to stay in the loop with new videos as they're released and let us know what you'd like to see next.

Documentation updates

There were many documentation updates. Here are a few you might be interested in.

Want to keep an eye on all of the documentation updates? Check out the Cost Management doc change history in the azure-docs repository on GitHub. If you see something missing, select Edit at the top of the document and submit a quick pull request.

What's next?

These are just a few of the big updates from last month. We're always listening and making constant improvements based on your feedback, so please keep the feedback coming.

Follow @AzureCostMgmt on Twitter and subscribe to the YouTube channel for updates, tips, and tricks. And, as always, share your ideas and vote up others in the Cost Management feedback forum.

SAP HANA backup using Azure Backup is now generally available


Today, we are sharing the general availability of Microsoft Azure Backup’s solution for SAP HANA databases in the UK South region.

Azure Backup is Azure's native backup solution, which is BackInt certified by SAP. This offering aligns with Azure Backup's mantra of zero-infrastructure backups, eliminating the need to deploy and manage backup infrastructure. You can now seamlessly back up and restore SAP HANA databases running on Microsoft Azure Virtual Machines (VMs) - M-series virtual machines are also supported - and leverage the enterprise management capabilities that Azure Backup provides.

Benefits

  • 15-minute Recovery Point Objective (RPO): Recovery of critical data to within the last 15 minutes is possible.
  • One-click, point-in-time restores: Easy to restore production data on SAP HANA databases to alternate servers. Chaining of backups and catalogs to perform restores is all managed by Azure behind the scenes.
  • Long-term retention: For rigorous compliance and audit needs, you can retain your backups for years, based on the retention duration, beyond which the recovery points will be pruned automatically by the built-in lifecycle management capability.
  • Backup Management from Azure: Use Azure Backup’s management and monitoring capabilities for improved management experience.

Watch this space for more updates on GA rollout to other regions. We are currently in preview in these Azure geos.

Getting started

Faster and cheaper: SQL on Azure continues to outshine AWS


Over a million on-premises SQL Server databases have moved to Azure, representing a massive shift in where customers are collecting, storing, and analyzing their data.

Modernizing your databases provides the opportunity to transform your data architecture. SQL Server on Azure Virtual Machines allows you to maintain control over your database and operating system while still benefiting from cloud flexibility and scale. For some, this represents a step in the journey to a fully-managed database, while others choose this deployment option for compatibility with on-premises workloads such as SQL Server Reporting Services.

Whatever the reason, migrating SQL workloads to Azure Virtual Machines is a popular option. Azure customers benefit from our unique built-in security and manageability capabilities, which automate tasks like patching and backups. In addition to providing these unparalleled innovations, it is important to provide customers with the best price-performance possible. Once again, SQL Server on Azure Virtual Machines comes out on top.

SQL Server on Azure leads in price-performance

GigaOm, an independent research firm, recently published a study comparing throughput performance between SQL Server on Azure Virtual Machines and SQL Server on AWS EC2. Azure emerged as the clear leader across both Windows and Linux for mission-critical workloads, up to 3.4 times faster and up to 87 percent less expensive than AWS EC2.1

GigaOm Report

The images above are performance and price-performance comparisons from the GigaOm report. The performance metric is throughput (transactions per second, tps); higher is better. The price-performance metric is three-year pricing divided by throughput (tps); lower is better.

With Azure Ultra Disk, GigaOm was able to achieve 80,000 input/output operations per second (IOPS) on a single disk, maxing out the virtual machine's throughput limit and well exceeding the capabilities of AWS provisioned IOPS.2

A key reason why Azure price-performance is superior to AWS is Azure BlobCache, which provides free reads. Given that most online transaction processing (OLTP) workloads today come with a ten-to-one read-to-write ratio, this provides customers with significant savings.

Unmatched innovation from the team that brought SQL Server to the world

With a proven track record of over 25 years, the engineering team behind SQL Server continues to drive security and innovation to meet our customers' changing needs. Whether executing on-premises, in the cloud, or on the edge, the result is the most comprehensive, consistent, and secure solution for your data.

Azure SQL Virtual Machines offer unique built-in security and manageability, including automatic security patching and automated high-availability, and database recovery to a specific point in time. Azure’s unique security capabilities include advanced data security for SQL Server on Azure Virtual Machines, which enables both vulnerability assessments and advanced threat protection. Customers self-installing SQL Server on virtual machines in the cloud can now register with our resource provider to enable this same functionality.

Get started with SQL in Azure today

Migrate from SQL Server on-premises to SQL Server 2019 in Azure Virtual Machines today. Get started with preconfigured Azure SQL Virtual Machine images on Red Hat Enterprise Linux, SUSE Linux Enterprise Server, Ubuntu, and Windows in minutes. Take advantage of the Azure Hybrid Benefit to reuse your existing on-premises Windows server and SQL Server licenses in Azure for significant savings.

When you add it up, SQL databases are simply best on Azure. Learn more about why SQL Server is best on Azure, and use $200 in Azure credits with a free account3 or Azure Dev or Test credits4 for additional cost savings.

 


1Price-performance claims based on data from a study commissioned by Microsoft and conducted by GigaOm in October 2019. The study compared price performance between SQL Server 2017 Enterprise Edition on Windows Server 2016 Datacenter edition in Azure E64s_v3 instance type with 4x P30 1TB Storage Pool data (Read-Only Cache) + 1x P20 0.5TB log (No Cache) and the SQL Server 2017 Enterprise Edition on Windows Server 2016 Datacenter edition in AWS EC2 r4.16xlarge instance type with 1x 4TB gp2 data + 1x 1TB gp2 log. Benchmark data is taken from a GigaOm Analytic Field Test derived from a recognized industry standard, TPC Benchmark™ E (TPC-E). The Field Test does not implement the full TPC-E benchmark and as such is not comparable to any published TPC-E benchmarks. The Field Test is based on a mixture of read-only and update intensive transactions that simulate activities found in complex OLTP application environments. Price-performance is calculated by GigaOm as the cost of running the cloud platform continuously for three years divided by transactions per second throughput. Prices are based on publicly available US pricing in West US for SQL Server on Azure Virtual Machines and Northern California for AWS EC2 as of October 2019. The pricing incorporates three-year reservations for Azure and AWS compute pricing, and Azure Hybrid Benefit for SQL Server and Azure Hybrid Benefit for Windows Server and License Mobility for SQL Server in AWS, excluding Software Assurance costs.  Price-performance results are based upon the configurations detailed in the GigaOm Analytic Field Test.  Actual results and prices may vary based on configuration and region.

2Claims based on data from a study commissioned by Microsoft and conducted by GigaOm in October 2019. The study compared price-performance between SQL Server 2017 Enterprise Edition on Windows Server 2016 Datacenter edition in Azure E64s_v3 instance type with 1x Ultra 1.5TB with 650MB per sec throughput and the SQL Server 2017 Enterprise Edition on Windows Server 2016 Datacenter edition in AWS EC2 r4.16xlarge instance type with 1x 1.5TB io1 provisioned log + data. Benchmark data is taken from a GigaOm Analytic Field Test derived from a recognized industry standard, TPC Benchmark™ E (TPC-E). The Field Test does not implement the full TPC-E benchmark and as such is not comparable to any published TPC-E benchmarks. The Field Test is based on a mixture of read-only and update intensive transactions that simulate activities found in complex OLTP application environments. Price-performance is calculated by GigaOm as the cost of running the cloud platform continuously for three years divided by transactions per second throughput. Prices are based on publicly available US pricing in north Europe for SQL Server on Azure Virtual Machines and Ireland for AWS EC2 as of October 2019. Price-performance results are based upon the configurations detailed in the GigaOm Analytic Field Test.  Actual results and prices may vary based on configuration and region.

3Additional information about $200 Azure free account available at https://azure.microsoft.com/en-us/free/.

4Dev or Test Azure credits and pricing available for paid Visual Studio subscribers only.

HP announces Education Edition laptops—built for schools, designed for learning


Enabling collaborative bot development across your organization for any user


This post was co-authored by Omar Aftab, Partner Director of Program Management, Power Virtual Agents.

Conversational artificial intelligence (AI) is enabling organizations to improve their business in areas like customer service and employee engagement by automating some of the most commonly requested services, which frees up employees to take on more value-adding activities. While the benefits of conversational AI are well established, determining who in an organization should build these solutions is not always clear.

As is true of many applications, conversational AI solutions (or bots) can be built using software-as-a-service (SaaS) or platform-as-a-service (PaaS) offerings. Consequently, organizations are forced to decide between empowering business users who are closest to the business problems or empowering developers with coding experience to have full control over how these solutions are built, without many options to bridge the gap and allow for collaboration between the two. However, with the integration of Bot Framework Skills into Microsoft Power Virtual Agents (a graphical interface offering for business users creating bots, now generally available), Microsoft uniquely empowers both business users and developers to collaborate seamlessly in building conversational AI solutions.

In the bot building journey, bot builders across the organization should not work in silos. If a business user is building a bot but wants to add a nuanced scenario, they should be able to collaborate with a developer who can customize the bot further. Similarly, developers building a bot can also leverage bots that have been built by business users as a skill.

Microsoft offers an end-to-end, no-cliffs bot building experience with Power Virtual Agents and Bot Framework.

  • Power Virtual Agents provides a no-code experience for bot development – ideal for business users and domain experts to easily build a bot, without having to worry about the technical aspects of bot development.
  • Bot Framework is an open-source SDK and tools purpose-built for bot development – ideal for developers who want to build a bot using a code experience and want full control of technical aspects of bot development, including language model ownership, and visual design. Additionally, Azure Bot Service allows developers to host and deploy their bots to popular channels like Teams and other messaging platforms where users will interact with the bot.
  • Bot Framework Skills offer a no-cliffs bot building experience - no matter your starting point. With Bot Framework Skills, Power Virtual Agents users have a no-cliffs bot building experience because they can collaborate with Bot Framework developers to extend their bots with custom capabilities. Equally important, Bot Framework developers can extend their bot as a skill and allow subject matter experts to update bot conversations.

An image showing how Power Virtual Agents and Microsoft Bot Framework expand on each other for ease of collaboration between developers and business users.

For example, suppose an organization is creating a travel bot using Power Virtual Agents. The business users build out the dialogs with a UI-based experience that allows the bot to handle customers’ intents, such as Check miles and rewards, Check flight status, Update account information, and Book a flight.

A screenshot showing a test chat with an example travel bot.

However, what if someone in the organization has already built a Book a Flight skill with custom language models using Bot Framework and Language Understanding service as illustrated below?

A screenshot showing the already built Book a Flight skill

In this scenario, business experts can collaborate with the developer who has built this flight booking skill by selecting it as an action in Power Virtual Agents.

A screenshot showing a test chat with an example bot using the Book a Flight skill

As conversational AI adoption continues to grow, we believe it is important for organizations to take an interdisciplinary team approach to bot development. For this reason, Microsoft offers an end-to-end, no-cliffs bot building experience that empowers business subject matter experts and developers alike to collaborate.

Achieve operational excellence in the cloud with Azure Advisor


Many customers have questions when it comes to managing cloud operations. How can I implement real-time cloud governance at scale? What’s the best way to monitor my cloud workloads? How can I get help when I need it?

Azure offers a great deal of guidance when it comes to optimizing your cloud operations. At the organizational level, the Microsoft Cloud Adoption Framework for Azure can help you design and implement your approach to management and governance in the cloud. At the cloud resource level, Azure Advisor provides personalized recommendations to help you optimize your Azure workloads for a variety of objectives—including cost savings, security, performance, and availability—based on your usage and configurations.

Recently, Advisor introduced a new recommendation category—operational excellence—to help you follow best practices for process and workflow efficiency, resource manageability, and deployment.

Introducing a new Azure Advisor recommendation category: operational excellence

Azure Advisor now offers a new category of recommendations—operational excellence—to help you optimize your cloud process and workflow efficiency, resource manageability, and deployment practices. You can get these recommendations from Advisor in the operational excellence tab of the Advisor dashboard. They’re also available via Advisor’s CLI and API.
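For example, with the Azure CLI you can pull the same recommendations with something along these lines (check "az advisor recommendation list --help" for the exact parameters available in your CLI version): az advisor recommendation list --category OperationalExcellence --output table.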

Screenshot of Azure Advisor in the Azure portal, showing the new operational excellence category.
The operational excellence category is launching with nine recommendations, and more are on the way. Examples include creating Azure Service Health alerts to be notified when Azure service issues affect you; repairing invalid log alert rules; and following best practices using Azure Policy, such as tag management, geo-compliance requirements, and specifying permitted virtual machine (VM) SKUs for deployment. Together, these recommendations will help you optimize your cloud operations practices.

New operational excellence recommendations

Here’s a quick round-up of the new operational excellence recommendations in Advisor at launch:

  • Create Azure Service Health alerts to be notified when Azure service issues affect you.
  • Design your storage accounts to prevent hitting the maximum subscription limit.
  • Ensure you have access to Azure cloud experts when you need it.
  • Repair invalid log alert rules.
  • Follow best practices using Azure Policy, including tag management, geo-compliance requirements, and VM audits for managed disks.

For more detailed information on Advisor’s operational excellence recommendations, refer to our documentation. Be sure to check back regularly, as we’re constantly adding new recommendations.

Review your operational excellence recommendations today

Visit Advisor in the Azure portal here to start optimizing your cloud workloads for operational excellence. For more in-depth guidance, visit our documentation. Let us know if you have a suggestion for Advisor by submitting an idea here.

Extended filesystem programming capabilities in Azure Data Lake Storage


Since the general availability of Azure Data Lake Storage Gen2 in February 2019, customers have been getting insights at cloud scale faster than ever before. Integration with analytics engines is critical for their analytics workloads, and equally important is the ability to programmatically ingest, manage, and analyze data. This ability is critical for key areas of enterprise data lakes such as data ingestion, event-driven big data platforms, machine learning, and advanced analytics. Programmatic access is possible today using Azure Data Lake Storage Gen2 REST APIs or Blob REST APIs. In addition, customers can enable continuous integration and continuous delivery (CI/CD) pipelines using Blob PowerShell and CLI capabilities via multi-protocol access. As part of the journey to enable our developer ecosystem, our goal is to make customer application development easier than ever before.

We are excited to announce the public preview of .NET SDK, Python SDK, Java SDK, PowerShell, and CLI for filesystem operations for Azure Data Lake Storage Gen2. Customers who are used to the familiar filesystem programming model can now implement this model using .NET, Python, and Java SDKs. Customers can also now incorporate these filesystem operations into their CI/CD pipelines using PowerShell and CLI, thereby enriching CI/CD pipeline automation for big data workloads on Azure Data Lake Storage Gen2. As part of this preview, the SDKs, PowerShell, and CLI include support for CRUD operations for filesystems, directories, files, and permissions through filesystem semantics for Azure Data Lake Storage Gen2.
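As a quick illustration of the filesystem programming model with the .NET SDK, here is a minimal sketch using the Azure.Storage.Files.DataLake preview package; the account name, key, and paths below are placeholders, and method shapes may change while the SDK is in preview:

using System;
using System.IO;
using System.Text;
using System.Threading.Tasks;
using Azure.Storage;
using Azure.Storage.Files.DataLake;

class DataLakeQuickstart
{
    static async Task Main()
    {
        var accountName = "<storage-account>"; // placeholder
        var accountKey = "<account-key>";      // placeholder

        var serviceClient = new DataLakeServiceClient(
            new Uri($"https://{accountName}.dfs.core.windows.net"),
            new StorageSharedKeyCredential(accountName, accountKey));

        // Create a filesystem, a directory, and a file using filesystem semantics.
        var fileSystem = serviceClient.GetFileSystemClient("telemetry");
        await fileSystem.CreateAsync();

        var directory = fileSystem.GetDirectoryClient("raw/2019/11");
        await directory.CreateAsync();

        var file = directory.GetFileClient("readings.json");
        await file.CreateAsync();

        // Writes are an append followed by a flush to the final position.
        var payload = Encoding.UTF8.GetBytes("{\"temperature\": 21.5}");
        using (var stream = new MemoryStream(payload))
        {
            await file.AppendAsync(stream, offset: 0);
        }
        await file.FlushAsync(position: payload.Length);
    }
}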

Detailed reference documentation for all these filesystem semantics are provided in the links below. These links will also help you get started and provide feedback.

This public preview is available globally in all regions. Your participation and feedback are critical to help us enrich your development experience. Join us in our journey.

Research: A concrete way to measure IT’s impact on an employee’s at-work experience

Improving Tracking Prevention in Microsoft Edge


Today, we’re excited to announce some improvements to our tracking prevention feature that have started rolling out with Microsoft Edge 79. In our last blog post about tracking prevention in Microsoft Edge, we mentioned that we are experimenting with ways that our Balanced mode can be further improved to provide even greater privacy protections by default without breaking sites. We are looking to strike a balance between two goals:

  1. Blocking more types of trackers – Microsoft Edge’s tracking prevention feature is powered by Disconnect’s tracking protection lists. We wanted to build off our initial implementation of tracking prevention in Microsoft Edge 78 and maximize the protections we offered by default by exploring blocking other categories of trackers (such as those in the Content category) in Balanced mode. These changes resulted in Microsoft Edge 79 blocking ~25% more trackers than Microsoft Edge 78.
  2. Maintaining compatibility on the web – We knew that blocking more categories of trackers (especially those in the Content category) had the potential to break certain web workflows such as federated login or embedded social media content.

We learned through experimentation that it is possible to manage these tradeoffs by relaxing tracking prevention for organizations with which a user has established a relationship. To determine this list, we built on-device logic that combines users’ personal site engagement scores with the observation that some organizations own multiple domains that they use to deploy functionality across the web. It’s worth mentioning that this compatibility mitigation only applies to Balanced mode; Strict mode will continue to block the largest set of trackers without any mitigations.

Illustration of sharks circling a webpage

Site engagement

The Chromium project’s site engagement score is a measure of how engaged a specific user is with a specific site. Site engagement scores can range from 0 (meaning a user has no relationship with a site) to 100 (meaning that a user is extremely engaged with a site). Activities such as browsing to a site repeatedly/over several days, spending time interacting with a site, and playing media on a site all cause site engagement scores to increase, whereas not visiting a site causes site engagement scores to decay exponentially over time. You can view your own site engagement scores by navigating to edge://site-engagement.

It’s also worth noting that site engagement scores are computed on your device and never leave it. This means that they are not synced across your devices or sent to Microsoft at any time.
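As a toy illustration only - this is not Chromium's actual formula, just a way to picture the accrue-and-decay behavior - the exponential decay could be modeled like this:

// Illustrative toy model; Chromium's real site engagement scoring is more involved.
double DecayedScore(double currentScore, int daysSinceLastVisit)
{
    const double dailyDecayFactor = 0.98; // hypothetical 2 percent decay per idle day
    return currentScore * Math.Pow(dailyDecayFactor, daysSinceLastVisit);
}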

Through local experimentation, we found that a site engagement score of 4.1 was a suitable threshold to define a site that a user has an active relationship with. While this value is subject to change based on user feedback and future experiments, it was selected as an initial value for two reasons:

  1. It is low enough to ensure successful interactions with a site that a user has not previously had a history of engagement with.
  2. It is high enough to ensure that sites a user visits infrequently will drop off the list relatively quickly.

While site engagement helps signal which sites are important to individual users, allowing third party storage access/resource loads from only these sites would not consider the fact that organizations can serve content that users care about from multiple domains, which can still result in site breakages.

Combining site engagement with organizations

In our last blog post about tracking prevention, we introduced the concept of an organization, that is, a single company that can own multiple domains related to their business (such as Org1 owning “org1.test” and “org1-cdn.test”). We also shared that in order to keep sites working smoothly, our tracking prevention implementation groups such domains together and exempts storage/resource blocks when a domain in one organization requests resources from another domain in that same organization.

In order to keep sites that users engage with working as expected while also increasing the types of trackers that we block by default, we combined the concept of an organization together with site engagement to create a new mitigation. This mitigation takes effect whenever a user has established an ongoing relationship with a given site (currently defined by a site engagement score of 4.1 or greater). For example, consider the following organization which owns two domains:

Social Org

  • social.example
  • social-videos.example

A user will be considered to have a relationship with Social Org if they have established a site engagement score of at least 4.1 with any one of its domains.

If another site, content-embedder.example, includes third-party content (say an embedded video from social-videos.example) from any of Social Org’s domains that would normally be restricted by tracking prevention, it will be temporarily allowed as long as the user’s site engagement score with Social Org’s domains is maintained above the threshold.

If a site does not belong to an organization, a user will need to establish a site engagement score of at least 4.1 with it directly before any storage access/resource load blocks imposed by tracking prevention will be lifted.
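Putting the pieces together, the decision logic described above can be sketched as follows. This is a simplified illustration of the rules in this post, not Microsoft Edge's actual implementation; the 4.1 threshold is the value quoted above:

using System.Collections.Generic;

static class TrackingPreventionSketch
{
    const double EngagementThreshold = 4.1;

    // Simplified sketch: should storage/resource blocks be relaxed for this domain?
    // Note: per the post, Cryptomining and Fingerprinting trackers are always blocked.
    public static bool ShouldExempt(
        string requestedDomain,
        IReadOnlyDictionary<string, double> engagementScores,   // on-device, 0-100
        IReadOnlyDictionary<string, string[]> organizations)    // org -> owned domains
    {
        foreach (var org in organizations)
        {
            if (System.Array.IndexOf(org.Value, requestedDomain) >= 0)
            {
                // Engagement with ANY domain in the org exempts all of its domains.
                foreach (var domain in org.Value)
                {
                    if (engagementScores.TryGetValue(domain, out var score) &&
                        score >= EngagementThreshold)
                        return true;
                }
                return false;
            }
        }

        // Domains outside any organization need direct engagement.
        return engagementScores.TryGetValue(requestedDomain, out var direct) &&
               direct >= EngagementThreshold;
    }
}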

What does this mean?

By exempting sites and organizations that you have an ongoing and established relationship with from tracking prevention, we can ensure that the web services and applications you care about continue to work as you expect across the web. Leveraging site engagement also allows us to only unblock content that is likely to be important to you and reflects your current needs. This ensures that actions such as briefly visiting a site or seeing a popup aren’t enough to unblock content by themselves. If content does get unblocked due to you interacting with a site, it is always unblocked in a temporary manner that is proportional to how highly engaged you are with that site/its parent organization. By combining these exemptions with more strict blocking of trackers by default, we can provide higher levels of protection while still maintaining compatibility on the ever-evolving set of sites that you engage with.

It’s worth noting that tracking prevention, when enabled, will always block storage access and resource loads for sites that fall into the Fingerprinting or Cryptomining categories on Disconnect’s tracking protection lists. We will also not apply the site engagement-based mitigation outlined above for our most privacy-minded users who opt into tracking prevention’s Strict mode.

Illustration of a tugboat towing a webpage

Putting everything together: What’s changed?

The best way to learn what’s changed with tracking prevention in Microsoft Edge 79 is to take a look at the table below:

  • Along the top are the categories of trackers as defined by Disconnect’s tracking protection list categories.
  • Along the left side are comparisons of the improvements made to our tracking prevention feature broken down into Basic, Balanced, and Strict.
  • The letter “S” in a cell denotes that storage access is blocked.
  • The letter “B” in a cell denotes that both storage access and resource loads (i.e. network requests) are blocked.
  • A “-“ in a cell denotes that no block will be applied to either storage access or resource loads.
  • The “Same-Org Mitigation” refers to the first mitigation that we introduced in our previous blog post and recapped above.
  • The “Org Engagement Mitigation” refers to the second mitigation based on site engagement that we introduced earlier in this post.
| | Advertising | Analytics | Content | Cryptomining | Fingerprinting | Social | Other | Same-Org Mitigation | Org Engagement Mitigation |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Basic (Edge 78) | - | - | - | B | B | - | - | Enabled | Not impl. |
| Basic (Edge 79) | - | - | - | B | B | - | - | Enabled | N/A |
| Balanced (Edge 78) | S | - | - | B | B | S | - | Enabled | Not impl. |
| Balanced (Edge 79) | S | S | - | B | B | S | S | Enabled | Enabled (1) |
| Strict (Edge 78) (2) | B | B | - | B | B | B | B | Enabled | Not impl. |
| Strict (Edge 79) (2) | B | B | S | B | B | B | B | Enabled | Disabled |

  1. Does not apply to the Cryptomining or Fingerprinting categories.
  2. Strict mode blocks more resource loads than Balanced. This can result in Strict mode appearing to block fewer tracking requests than Balanced, since the trackers making the requests are never even loaded to begin with.

With our recent updates in Microsoft Edge 79, we have seen, on average, 25% more trackers blocked in Balanced mode. Close monitoring of user feedback and engagement time also showed no signs of negative compatibility impact, suggesting that the org engagement mitigation is effective at minimizing breakage on sites that users actively engage with. While this does mean that top sites have the org engagement mitigation applied more often, we believe this is an acceptable tradeoff versus compatibility, especially as more top sites are starting to give users mechanisms to transparently view, control, and delete their data.

As with all our features, we’ll continue to monitor telemetry and user feedback channels to learn more and continually improve tracking prevention in future releases. We are also exploring additional compatibility mitigations such as the Storage Access API, which we intend to experiment with in a future version of Microsoft Edge.

InPrivate Changes

In our previous blog post, we mentioned that users browsing in InPrivate will automatically get Strict mode protections. By listening to the feedback our users provided, we found that this led to unexpected behavior (such as causing sites that worked in a normal browsing window to fail to load in InPrivate) and broke some important use cases. That's why in Microsoft Edge 79, your current tracking prevention settings will be carried over to InPrivate sessions.

We are currently experimenting in our Canary and Dev channels with a switch at the bottom of our settings panel (which you can reach by navigating to edge://settings/privacy) that will allow you to re-enable Strict mode protections in InPrivate by default:

Screen capture showing the "Tracking Prevention" settings pane in Microsoft Edge

See blocked trackers

We’ve also made it easier for you to view the trackers that Microsoft Edge has blocked for you. Navigate to edge://settings/privacy/blockedTrackers to test out this new experience today!

Send us feedback

We’d love to hear your thoughts on our next iteration of tracking prevention. If something looks broken, or if you have feedback to share on these changes, we’d love to hear from you. Please send us feedback using the “smiley face” in the top right corner of the browser.

Screen capture showing the "smiley face" button to send feedback in Microsoft Edge

Send feedback at any time with the Send a Smile button in Microsoft Edge

As always, thanks for being a part of this journey towards a more private web!

–  Scott Low, Senior Program Manager
–  Brandon Maslen, Senior Software Engineer

The post Improving Tracking Prevention in Microsoft Edge appeared first on Microsoft Edge Blog.
