
Announcing the Insider Dev Tour 2018!


Hello friends!

Yes, the rumor-mill was right (this time!). It’s time for the Insider Dev Tour.

Each year after Build, we run a world-wide event tour to bring all the latest technology to you, in person. This year we’ve extended the event to even more locations through our partnerships with Windows, the Windows Insider program, and our developer and Insider MVPs.

The Insider Dev Tour is for developers and code-curious folks interested in building intelligent experiences today using the latest Microsoft technologies. It’s also for those who want a peek into the kinds of tech that will be super-important in every industry. This event is open to anyone who can read code or WANTS to read code – beginner, expert, student, or hobbyist developer.

You’ll learn about Artificial Intelligence, the latest for desktop development, Microsoft 365, Progressive Web Apps, Office for developers, Mixed Reality, Microsoft Graph, and much more. You’ll learn little-known tricks and tips that will help you be more efficient and awesome in your careers, no matter what those careers might be.

The tour is an opportunity to connect directly with leads and engineers from Redmond, as well as regional industry leads and Microsoft Developer MVPs. We’re excited to meet you in person.

The event agenda and session details are all posted on the registration site. We hope to see you there!

Find your local city and Register Now! 

Thank you—and see you there!

Pete & Dona

#InsiderDevTour

The post Announcing the Insider Dev Tour 2018! appeared first on Windows Developer Blog.


Trends in the workplace—white paper on the cultural transformation


Digital transformation is reshaping our global economy, including the way people work individually and in teams. As companies seek to empower people to do their best work, a cultural transformation isn’t just inevitable—it’s essential. Microsoft is partnering with customers to foster a modern workplace that is productive, responsive, creative, and secure. To learn more, read the white paper “New Culture of Work.”

The post Trends in the workplace—white paper on the cultural transformation appeared first on Microsoft 365 Blog.

Mind Bytes: Solving Societal Challenges with Artificial Intelligence


By Francesca Lazzeri (@frlazzeri), Data Scientist at Microsoft

Artificial intelligence (AI) solutions are playing a growing role in our everyday life and are being adopted broadly, in private and public domains. While the notion of AI has been around for over sixty years, real-world AI scenarios and applications have only increased in the last decade due to three simultaneous developments: improved computing power, the capability to capture and store massive amounts of data, and faster algorithms.

AI solutions help determine the ads you see online, the movie you will watch with your family, and the routes you may take to get to work. Beyond the most popular apps, these systems are also being implemented in critical areas such as health care, immigration policy, finance, and the workplace. The design and implementation of these AI tools present deep societal challenges that will shape our present and near future.

To identify and contribute to the current dialog around the emerging societal challenges that AI is bringing, we attended Mind Bytes at the University of Chicago. Mind Bytes is an annual research computing symposium and exposition designed to showcase cutting-edge research and applications in the field of AI to more than 200 attendees. Some of the most interesting demos and posters presented were:

  • An Online GIS Platform to Support China’s National Park System establishment – Manyi Wang
  • 17 Years in Chicago Neighborhoods: Mapping Crime Trajectories in 801 Census Tracts – Liang CAI
  • Characterizing the Ultrastructural Determinants of Biophotonic Reflectivity in Cephalopod Skin: A Challenge for 3D Segmentation – Stephen Senft, Teodora Szasz, Hakizumwami B. Runesha, Roger T. Hanlon
  • Exploring Spatial Distribution of Risk Factors for Teen Pregnancy – Emily Orenstein and Iris Mire

The Mind Bytes panel on Solving Societal Challenges with Artificial Intelligence was a great occasion for us to interact with students and many other AI experts from the field, and to discuss how we can work together to ensure that AI is developed in a responsible manner, so that people will trust it and deploy it broadly, both to increase business and personal productivity and to help solve societal problems.

Specifically, the panel focused on three fundamental areas of discussion related to current and future AI aspects:

  • In what areas do you see AI most successfully applied?
  • What is the major challenge that you think should be met before getting the full benefit of AI?
  • What can researchers and students do now to build a system able to address those challenges?

The following sections aim to answer these questions in more detail and reflect on the latest academic and industry research. AI is already with us, and we are now faced with important choices about how it will be designed and applied. Most promisingly, the approaches observed at Mind Bytes demonstrate that there is growing interest in developing AI that is attuned to underlying issues of fairness and equality.

In what areas do you see AI most successfully applied?

Today’s AI allows faster and deeper progress in every field of human endeavor, and it is crucial to enabling the digital transformation that is at the heart of global economic development. Every aspect of a business or organization, from engaging with customers to transforming products and services, optimizing operations and empowering employees, can benefit from this digital transformation.

AI also has the potential to help society overcome some of its most challenging issues, such as reducing poverty, improving education, delivering healthcare, and eradicating rare diseases.

Another field where AI can have a significant positive impact is in serving the more than 1 billion people in the world with disabilities. One example of how AI can make a difference is a Microsoft app called Seeing AI, which can assist people with blindness and low vision as they navigate daily life. Seeing AI was developed by a team that included a Microsoft engineer who lost his sight at 7 years of age. This powerful app demonstrates the potential for AI to empower people with disabilities by collecting images from the user’s surroundings and describing what is happening around them.

What is the major challenge that you think should be met before getting the full benefit of AI?

As AI begins to augment human understanding and decision-making in fields like education, healthcare, transportation, agriculture, energy, and manufacturing, it will increase the need to address one of today’s most crucial societal challenges: advancing inclusion in our society.

The threat of bias rises when AI systems are applied to critical societal areas like healthcare and education. While all possible consequences of such biases are worrying, finding pragmatic solutions can be a very complex process. Biased AI can be the result of many different factors, for example the goals AI developers have in mind during development and whether the systems developed are representative enough of different parts of the population.

Most importantly, AI solutions learn from training data. Training data can be imperfect or skewed, often drawing on incomplete samples that are poorly defined before use. Additionally, because of the necessary labelling and feature engineering processes, human biases and cultural assumptions can also be transmitted through classification choices. All of these technical challenges can result in the exclusion of sub-populations from what AI is able to see and learn from.

Data is also very expensive, and data at scale is hard to collect and use. Most of the time, data scientists who want to train a model end up using easily available data, often crowd-sourced, scraped, or otherwise gathered from existing apps and websites. This type of data tends to favor socioeconomically privileged populations, who have faster and easier access to connected devices and online services.

What can researchers and students do now to build a system able to address those challenges?

We believe that researchers and students must work together to ensure that AI-based technologies are designed and deployed in a way that will earn the trust of the people who use them and whose data is collected to build those AI solutions. It is vital for the future of our society to design AI to be reliable and to create solutions that reflect ethical values deeply rooted in important and timeless principles.

For example, when AI systems provide guidance on medical treatment, loan applications or employment, they should make the same recommendations for everyone with similar symptoms, financial circumstances or professional qualifications. The design of any AI system starts with the choice of training data, which is the first place where unfairness can arise. Training data should sufficiently represent the world in which we live, or at least the part of the world where the AI system will operate.

Students should develop analytical techniques to detect and address potential unfairness. We believe the following three steps will support the creation and utilization of healthy AI solutions:

  • Systematic evaluation of the quality and fitness of the data and models used to train and operate AI based products and services.
  • Involvement of domain experts in the design and operation of AI systems used to make substantial decisions about people.
  • A robust feedback mechanism so that users can easily report performance issues they encounter.
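As a concrete, deliberately simplified illustration of the first step, the sketch below computes per-group selection rates from a model's decisions and flags groups that fall well behind the best-served group. It is only a sketch: the data shape, the reporting format, and the 0.8 threshold are assumptions made for illustration, not an established fairness standard.

using System;
using System.Collections.Generic;
using System.Linq;

public static class FairnessCheck
{
    // outcomes: (group, positiveDecision) pairs taken from a model's predictions.
    public static void ReportSelectionRates(IEnumerable<(string Group, bool Positive)> outcomes)
    {
        // Selection rate per group: the fraction of positive decisions.
        Dictionary<string, double> rates = outcomes
            .GroupBy(o => o.Group)
            .ToDictionary(g => g.Key, g => g.Average(o => o.Positive ? 1.0 : 0.0));

        double best = rates.Values.Max();
        foreach (KeyValuePair<string, double> entry in rates)
        {
            // Flag groups whose selection rate is far below the best-served group (assumed 0.8 cutoff).
            string flag = entry.Value < 0.8 * best ? "  <-- review for potential bias" : "";
            Console.WriteLine($"{entry.Key}: {entry.Value:P1}{flag}");
        }
    }
}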

Finally, we believe that when AI applications are used to suggest actions and decisions that will impact people's lives, it is important that the affected populations understand how those decisions were made, and that the AI developers who design and deploy those solutions are accountable for how they operate.

These standards are critical to addressing the societal impacts of AI and building trust as the technology becomes more and more a part of the products and services that people use at work and at home every day.

Accelerate innovation with Consulting Services in Azure Marketplace


We are excited to announce that Azure customers can now easily maximize the potential of the intelligent cloud through newly released Azure Marketplace Consulting Services offerings. This announcement builds on the success of Consulting Services offers for Dynamics 365 and Power BI that are listed in AppSource, and it is part of our commitment to making the cloud marketplace the premier place to find, try, and buy trusted cloud solutions.

Whether you are seeking someone to help you learn new skills in Azure, to handle the unique parameters of migrating a sensitive workload, or to create high-impact visual storytelling to drive business decision making, Azure Marketplace makes accessing this expertise easier than ever. The partners in Azure Marketplace can help you get started with confidence, rightsize, and optimize your cloud so you can accelerate your pace of innovation. These partners will provide tailored services and augment your internal capabilities, enabling you to grow your business quickly and securely.

The new offers provide assessments, briefings, implementations, proofs of concept, and workshops by Microsoft partners with a Silver or Gold cloud competency, such as Bright Wolf and Wipro. Each of these offers clearly presents what outcomes you can expect, delivered by a partner with the experience to provide deep personalization.

Partners share our excitement. Peter Bourne, the CEO of Bright Wolf, shared this insight with us: “Our customers seek customized, differentiated Azure IoT solutions to grow their business, not an off-the-shelf or one-size-fits-all product. The consulting services offering helps them find experienced Microsoft partners like Bright Wolf directly and quickly, simplifying the process for everyone involved.”

These services launched on May 7th for customers in the United States. We plan to make these offers available in Canada, the United Kingdom, and Australia in the next few months, with additional countries coming online soon after.

Take advantage of these new consulting services by simply navigating to the Azure Marketplace Consulting Services page to browse the offers catalog. If you see something you like, click the Contact Me button to have a partner representative reach out to you to learn about your needs.

Partners interested in this opportunity can learn more about Azure Marketplace and AppSource and submit Azure-focused consulting offers.

Windows 10 SDK Preview Build 17666 now available!


Today, we released a new Windows 10 Preview Build of the SDK to be used in conjunction with Windows 10 Insider Preview (Build 17666 or greater). The Preview SDK Build 17666 contains bug fixes and under-development changes to the API surface area.

The Preview SDK can be downloaded from the developer section on Windows Insider.

For feedback and updates to the known issues, please see the developer forum. For new developer feature requests, head over to our Windows Platform UserVoice.

Things to note:

  • This build works in conjunction with previously released SDKs and Visual Studio 2017. You can install this SDK and still continue to submit your apps that target the Windows 10 Creators build or earlier to the Store.
  • The Windows SDK will now formally only be supported by Visual Studio 2017 and greater. You can download Visual Studio 2017 here.
  • This build of the Windows SDK will install on Windows 10 Insider Preview and supported Windows operating systems.

Known Issues

Installation on an operating system that is not a Windows 10 Insider Preview build is not supported and may fail.

The contract Windows.System.SystemManagementContract is not included in this release. In order to access the following APIs, please use a previous Windows IoT extension SDK with your project.

This bug will be fixed in a future preview build of the SDK.

The following APIs are affected by this bug:


namespace Windows.Services.Cortana {
  public sealed class CortanaSettings     
}
namespace Windows.System {
  public enum AutoUpdateTimeZoneStatus
  public static class DateTimeSettings
  public enum PowerState
  public static class ProcessLauncher
  public sealed class ProcessLauncherOptions
  public sealed class ProcessLauncherResult
  public enum ShutdownKind
  public static class ShutdownManager
  public struct SystemManagementContract
  public static class TimeZoneSettings
}

API Spotlight:

Check out LauncherOptions.GroupingPreference.

 
namespace Windows.System {
  public sealed class FolderLauncherOptions : ILauncherViewOptions {
    ViewGrouping GroupingPreference { get; set; }
  }
  public sealed class LauncherOptions : ILauncherViewOptions {
    ViewGrouping GroupingPreference { get; set; }
  }
}

This release contains the new LauncherOptions.GroupingPreference property to assist your app in tailoring its behavior for Sets. Watch the presentation here.
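As a rough sketch of how an app might opt into this (the specific ViewGrouping member name below is an assumed placeholder, so check the enum in the SDK before relying on it):

using System;
using System.Threading.Tasks;
using Windows.System;
using Windows.UI.ViewManagement;

public static class LaunchHelper
{
    public static async Task OpenDocsAsync()
    {
        var options = new LauncherOptions();

        // Hint how the launched view should be grouped with the calling app in Sets.
        // ViewGrouping.Preferred is an assumed member name, used here for illustration only.
        options.GroupingPreference = ViewGrouping.Preferred;

        await Launcher.LaunchUriAsync(new Uri("https://docs.microsoft.com"), options);
    }
}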

What’s New:

MC.EXE

We’ve made some important changes to the C/C++ ETW code generation of mc.exe (Message Compiler):

The “-mof” parameter is deprecated. This parameter instructs MC.exe to generate ETW code that is compatible with Windows XP and earlier. Support for the “-mof” parameter will be removed in a future version of mc.exe.

As long as the “-mof” parameter is not used, the generated C/C++ header is now compatible with both kernel-mode and user-mode, regardless of whether “-km” or “-um” was specified on the command line. The header will use the _ETW_KM_ macro to automatically determine whether it is being compiled for kernel-mode or user-mode and will call the appropriate ETW APIs for each mode.

  • The only remaining difference between “-km” and “-um” is that the EventWrite[EventName] macros generated with “-km” have an Activity ID parameter while the EventWrite[EventName] macros generated with “-um” do not have an Activity ID parameter.

The EventWrite[EventName] macros now default to calling EventWriteTransfer (user mode) or EtwWriteTransfer (kernel mode). Previously, the EventWrite[EventName] macros defaulted to calling EventWrite (user mode) or EtwWrite (kernel mode).

  • The generated header now supports several customization macros. For example, you can set the MCGEN_EVENTWRITETRANSFER macro if you need the generated macros to call something other than EventWriteTransfer.
  • The manifest supports new attributes.
    • Event “name”: non-localized event name.
    • Event “attributes”: additional key-value metadata for an event such as filename, line number, component name, function name.
    • Event “tags”: 28-bit value with user-defined semantics (per-event).
    • Field “tags”: 28-bit value with user-defined semantics (per-field – can be applied to “data” or “struct” elements).
  • You can now define “provider traits” in the manifest (e.g. provider group). If provider traits are used in the manifest, the EventRegister[ProviderName] macro will automatically register them.
  • MC will now report an error if a localized message file is missing a string. (Previously MC would silently generate a corrupt message resource.)
  • MC can now generate Unicode (utf-8 or utf-16) output with the “-cp utf-8” or “-cp utf-16” parameters.

API Updates and Additions

When targeting new APIs, consider writing your app to be adaptive in order to run correctly on the widest number of Windows 10 devices. Please see Dynamically detecting features with API contracts (10 by 10) for more information.
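As a minimal sketch of that adaptive pattern, assuming the TaskbarManager additions listed below, an app can probe for a member before calling it and simply skip the call on older builds:

using Windows.Foundation.Metadata;
using Windows.UI.Shell;
using Windows.UI.StartScreen;

public static class TaskbarHelper
{
    // Only call the newer taskbar API when it exists on the running build;
    // on older builds the app skips the call instead of failing at runtime.
    public static async void PinTileIfSupported(SecondaryTile tile)
    {
        if (ApiInformation.IsMethodPresent("Windows.UI.Shell.TaskbarManager", "RequestPinSecondaryTileAsync"))
        {
            await TaskbarManager.GetDefault().RequestPinSecondaryTileAsync(tile);
        }
    }
}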

The following APIs have been added to the platform since the release of 17134.


namespace Windows.ApplicationModel {
  public sealed class AppInstallerFileInfo
  public sealed class LimitedAccessFeatureRequestResult
  public static class LimitedAccessFeatures
  public enum LimitedAccessFeatureStatus
  public sealed class Package {
    IAsyncOperation<PackageUpdateAvailabilityResult> CheckUpdateAvailabilityAsync();
    AppInstallerFileInfo GetAppInstallerFileInfo();
  }
  public enum PackageUpdateAvailability
  public sealed class PackageUpdateAvailabilityResult
}
namespace Windows.ApplicationModel.Calls {
  public sealed class VoipCallCoordinator {
    IAsyncOperation<VoipPhoneCallResourceReservationStatus> ReserveCallResourcesAsync();
  }
}
namespace Windows.ApplicationModel.Store.Preview.InstallControl {
  public enum AppInstallationToastNotificationMode
  public sealed class AppInstallItem {
    AppInstallationToastNotificationMode CompletedInstallToastNotificationMode { get; set; }
    AppInstallationToastNotificationMode InstallInProgressToastNotificationMode { get; set; }
    bool PinToDesktopAfterInstall { get; set; }
    bool PinToStartAfterInstall { get; set; }
    bool PinToTaskbarAfterInstall { get; set; }
  }
  public sealed class AppInstallManager {
    bool CanInstallForAllUsers { get; }
  }
  public sealed class AppInstallOptions {
    AppInstallationToastNotificationMode CompletedInstallToastNotificationMode { get; set; }
    bool InstallForAllUsers { get; set; }
    AppInstallationToastNotificationMode InstallInProgressToastNotificationMode { get; set; }
    bool PinToDesktopAfterInstall { get; set; }
    bool PinToStartAfterInstall { get; set; }
    bool PinToTaskbarAfterInstall { get; set; }
    bool StageButDoNotInstall { get; set; }
  }
  public sealed class AppUpdateOptions {
    bool AutomaticallyDownloadAndInstallUpdateIfFound { get; set; }
  }
}
namespace Windows.Devices.Enumeration {
  public sealed class DeviceInformationPairing {
    public static bool TryRegisterForAllInboundPairingRequestsWithProtectionLevel(DevicePairingKinds pairingKindsSupported, DevicePairingProtectionLevel minProtectionLevel);
  }
}
namespace Windows.Devices.Lights {
  public sealed class LampArray
  public enum LampArrayKind
  public sealed class LampInfo
  public enum LampPurpose : uint
}
namespace Windows.Devices.Sensors {
  public sealed class SimpleOrientationSensor {
    public static IAsyncOperation<SimpleOrientationSensor> FromIdAsync(string deviceId);
    public static string GetDeviceSelector();
  }
}
namespace Windows.Devices.SmartCards {
  public static class KnownSmartCardAppletIds
  public sealed class SmartCardAppletIdGroup {
    string Description { get; set; }
    IRandomAccessStreamReference Logo { get; set; }
    ValueSet Properties { get; }
    bool SecureUserAuthenticationRequired { get; set; }
  }
  public sealed class SmartCardAppletIdGroupRegistration {
    string SmartCardReaderId { get; }
    IAsyncAction SetPropertiesAsync(ValueSet props);
  }
}
namespace Windows.Devices.WiFi {
  public enum WiFiPhyKind {
    He = 10,
  }
}
namespace Windows.Media.Core {
  public sealed class MediaStreamSample {
    IDirect3DSurface Direct3D11Surface { get; }
    public static MediaStreamSample CreateFromDirect3D11Surface(IDirect3DSurface surface, TimeSpan timestamp);
  }
}
namespace Windows.Media.Devices.Core {
  public sealed class CameraIntrinsics {
    public CameraIntrinsics(Vector2 focalLength, Vector2 principalPoint, Vector3 radialDistortion, Vector2 tangentialDistortion, uint imageWidth, uint imageHeight);
  }
}
namespace Windows.Media.Streaming.Adaptive {
  public enum AdaptiveMediaSourceResourceType {
    MediaSegmentIndex = 5,
  }
}
namespace Windows.Security.Authentication.Web.Provider {
  public sealed class WebAccountProviderInvalidateCacheOperation : IWebAccountProviderBaseReportOperation, IWebAccountProviderOperation
  public enum WebAccountProviderOperationKind {
    InvalidateCache = 7,
  }
  public sealed class WebProviderTokenRequest {
    string Id { get; }
  }
}
namespace Windows.Security.DataProtection {
  public enum UserDataAvailability
  public sealed class UserDataAvailabilityStateChangedEventArgs
  public sealed class UserDataBufferUnprotectResult
  public enum UserDataBufferUnprotectStatus
  public sealed class UserDataProtectionManager
  public sealed class UserDataStorageItemProtectionInfo
  public enum UserDataStorageItemProtectionStatus
}
namespace Windows.Services.Cortana {
  public sealed class CortanaActionableInsights
  public sealed class CortanaActionableInsightsOptions
}
namespace Windows.Services.Store {
  public sealed class StoreContext {
    IAsyncOperation<StoreRateAndReviewResult> RequestRateAndReviewAppAsync();
  }
  public sealed class StoreRateAndReviewResult
  public enum StoreRateAndReviewStatus
}
namespace Windows.Storage.Provider {
  public enum StorageProviderHydrationPolicyModifier : uint {
    AutoDehydrationAllowed = (uint)4,
  }
}
namespace Windows.System {
  public sealed class FolderLauncherOptions : ILauncherViewOptions {
    ViewGrouping GroupingPreference { get; set; }
  }
  public sealed class LauncherOptions : ILauncherViewOptions {
    ViewGrouping GroupingPreference { get; set; }
  }
}
namespace Windows.UI.Composition {
  public enum CompositionBatchTypes : uint {
    AllAnimations = (uint)5,
    InfiniteAnimation = (uint)4,
  }
  public sealed class CompositionGeometricClip : CompositionClip
  public sealed class Compositor : IClosable {
    CompositionGeometricClip CreateGeometricClip();
  }
}
namespace Windows.UI.Notifications {
  public sealed class ScheduledToastNotification {
    public ScheduledToastNotification(DateTime deliveryTime);
    IAdaptiveCard AdaptiveCard { get; set; }
  }
  public sealed class ToastNotification {
    public ToastNotification();
    IAdaptiveCard AdaptiveCard { get; set; }
  }
}
namespace Windows.UI.Shell {
  public sealed class TaskbarManager {
    IAsyncOperation<bool> IsSecondaryTilePinnedAsync(string tileId);
    IAsyncOperation<bool> RequestPinSecondaryTileAsync(SecondaryTile secondaryTile);
    IAsyncOperation<bool> TryUnpinSecondaryTileAsync(string tileId);
  }
}
namespace Windows.UI.StartScreen {
  public sealed class StartScreenManager {
    IAsyncOperation<bool> ContainsSecondaryTileAsync(string tileId);
    IAsyncOperation<bool> TryRemoveSecondaryTileAsync(string tileId);
  }
}
namespace Windows.UI.ViewManagement {
  public sealed class ApplicationView {
    bool IsTabGroupingSupported { get; }
  }
  public sealed class ApplicationViewTitleBar {
    void SetActiveIconStreamAsync(RandomAccessStreamReference activeIcon);
  }
  public enum ViewGrouping
  public sealed class ViewModePreferences {
    ViewGrouping GroupingPreference { get; set; }
  }
}
namespace Windows.UI.ViewManagement.Core {
  public sealed class CoreInputView {
    bool TryHide();
    bool TryShow();
    bool TryShow(CoreInputViewKind type);
  }
  public enum CoreInputViewKind
}
namespace Windows.UI.Xaml.Controls {
  public class NavigationView : ContentControl {
    bool IsTopNavigationForcedHidden { get; set; }
    NavigationViewOrientation Orientation { get; set; }
    UIElement TopNavigationContentOverlayArea { get; set; }
    UIElement TopNavigationLeftHeader { get; set; }
    UIElement TopNavigationMiddleHeader { get; set; }
    UIElement TopNavigationRightHeader { get; set; }
  }
  public enum NavigationViewOrientation
  public sealed class PasswordBox : Control {
    bool CanPasteClipboardContent { get; }
    public static DependencyProperty CanPasteClipboardContentProperty { get; }
    void PasteFromClipboard();
  }
  public class RichEditBox : Control {
    RichEditTextDocument RichEditDocument { get; }
  }
  public sealed class RichTextBlock : FrameworkElement {
    void CopySelectionToClipboard();
  }
  public class SplitButton : ContentControl
  public sealed class SplitButtonClickEventArgs
  public enum SplitButtonOrientation
  public sealed class TextBlock : FrameworkElement {
    void CopySelectionToClipboard();
  }
  public class TextBox : Control {
    bool CanPasteClipboardContent { get; }
    public static DependencyProperty CanPasteClipboardContentProperty { get; }
    bool CanRedo { get; }
    public static DependencyProperty CanRedoProperty { get; }
    bool CanUndo { get; }
    public static DependencyProperty CanUndoProperty { get; }
    void CopySelectionToClipboard();
    void CutSelectionToClipboard();
    void PasteFromClipboard();
    void Redo();
    void Undo();
  }
  public sealed class WebView : FrameworkElement {
    event TypedEventHandler<WebView, WebViewWebResourceRequestedEventArgs> WebResourceRequested;
  }
  public sealed class WebViewWebResourceRequestedEventArgs
}
namespace Windows.UI.Xaml.Controls.Primitives {
  public class FlyoutBase : DependencyObject {
    FlyoutShowMode ShowMode { get; set; }
    public static DependencyProperty ShowModeProperty { get; }
    public static DependencyProperty TargetProperty { get; }
    void Show(FlyoutShowOptions showOptions);
  }
  public enum FlyoutPlacementMode {
    BottomLeftJustified = 7,
    BottomRightJustified = 8,
    LeftBottomJustified = 10,
    LeftTopJustified = 9,
    RightBottomJustified = 12,
    RightTopJustified = 11,
    TopLeftJustified = 5,
    TopRightJustified = 6,
  }
  public enum FlyoutShowMode
  public sealed class FlyoutShowOptions : DependencyObject
}
namespace Windows.UI.Xaml.Hosting {
  public sealed class XamlBridge : IClosable
}
namespace Windows.UI.Xaml.Markup {
  public sealed class FullXamlMetadataProviderAttribute : Attribute
}

The post Windows 10 SDK Preview Build 17666 now available! appeared first on Windows Developer Blog.

.NET Framework May 2018 Preview of Quality Rollup


Today, we are releasing the May 2018 Preview of Quality Rollup.

Quality and Reliability

This release contains the following quality and reliability improvements.

CLR

  • Resolves an issue in WindowsIdentity.Impersonate where handles were not being explicitly cleaned up. [581052]
  • Resolves an issue in deserialization when using a collection, for example, ConcurrentDictionary, by ignoring casing. [524135]
  • Removes a case where floating-point overflow occurs in the thread pool’s hill climbing algorithm. [568704]
  • Resolves instances of high CPU usage with background garbage collection. This can be observed with the following two functions on the stack: clr!*gc_heap::bgc_thread_function, ntoskrnl!KiPageFault. Most of the CPU time is spent in the ntoskrnl!ExpWaitForSpinLockExclusiveAndAcquire function. This change updates background garbage collection to use the CLR implementation of write watch instead of the one in Windows. [574027]

Networking

  • Fixed a problem with connection limit when using HttpClient to send requests to loopback addresses. [539851]

WPF

  • A crash can occur during shutdown of an application that hosts WPF content in a separate AppDomain. (A notable example of this is an Office application hosting a VSTO add-in that uses WPF.) [543980]
  • Addresses an issue that caused XAML Browser Applications (XBAPs) targeting .NET 3.5 to sometimes be loaded incorrectly using the .NET 4.x runtime. [555344]
  • A WPF application can crash due to a NullReferenceException if a Binding (or MultiBinding) used in a DataTrigger (or MultiDataTrigger) belonging to a Style (or Template, or ThemeStyle) reports a new value, but whose host element gets GC’d in a very narrow window of time during the reporting process. [562000]
  • A WPF application can crash due to a spurious ElementNotAvailableException. [555225]
    This can arise if:
    1. Change TreeView.IsEnabled
    2. Remove an item X from the collection
    3. Re-insert the same item X back into the collection
    4. Remove one of X’s subitems Y from its collection
      (Step 4 can happen any time relative to steps 2 and 3, as long as it’s after step 1. Steps 2-4 must occur before the asynchronous call to UpdatePeer, posted by step 1; this will happen if steps 1-4 all occur in the same button-click handler.)

Note: Additional information on these improvements is not available. The VSTS bug number provided with each improvement is a unique ID that you can give Microsoft Customer Support, include in StackOverflow comments or use in web searches.

Getting the Update

The Preview of Quality Rollup is available via Windows Update, Windows Server Update Services, and Microsoft Update Catalog.

Microsoft Update Catalog

You can get the update via the Microsoft Update Catalog.

Product version and the corresponding Preview of Quality Rollup KB:

Windows 8.1, Windows RT 8.1, and Windows Server 2012 R2 (Catalog: 4103473)
  • .NET Framework 3.5: 4095875
  • .NET Framework 4.5.2: 4098974
  • .NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2: 4098972

Windows Server 2012 (Catalog: 4098968)
  • .NET Framework 3.5: 4095872
  • .NET Framework 4.5.2: 4098975
  • .NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2: 4098971

Windows 7 and Windows Server 2008 R2 (Catalog: 4103472)
  • .NET Framework 3.5.1: 4095874
  • .NET Framework 4.5.2: 4098976
  • .NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2: 4096234

Windows Server 2008 (Catalog: 4103474)
  • .NET Framework 2.0, 3.0: 4095873
  • .NET Framework 4.5.2: 4098976
  • .NET Framework 4.6: 4096234

Previous Monthly Rollups

The last few .NET Framework Monthly updates are listed below for your convenience:

Azure Marketplace new offers: April 16-30


We continue to expand the Azure Marketplace ecosystem. From April 16 to 30, 15 new offers successfully met the onboarding criteria and went live. See details of the new offers below:

(Basic) Apache NiFi 1.6 on Centos 7.4

(Basic) Apache NiFi 1.6 on Centos 7.4: A CentOS 7.4 VM running a basic install of Apache NiFi 1.6 using default configurations. Once the virtual machine is deployed and running, Apache NiFi can be accessed by opening a web browser and entering: http://<IP>:8080/nifi in the address bar.


Debian Web Server and mariadb: A ready-to-deploy Debian Web Server with mariadb databases. A web server includes several parts that control how web users access hosted files. MariaDB is a fork of the MySQL relational database management system.


Jamcracker CSB Service Provider Version5: This service provider appliance is a cloud brokerage solution for SaaS and IaaS products. It automates order management, provisioning, and billing, and integrates to support ITSM, billing, ERP, and identity systems including Microsoft Active Directory.


MCubo Energy: MCubo Energy is a powerful platform that uses its own “best practices” to maximize your energy savings while safeguarding the environment. The proactive use of analytic tools, reports, and alerts can help your company achieve return on investment in a very short period of time.


MicroStrategy Enterprise Platform VM: The MicroStrategy Enterprise Platform offers a complete set of business intelligence and analytics capabilities. Use MicroStrategy to build and deploy analytical and data discovery applications in the form of personalized reports, real-time dashboards, and more.


Panzura Freedom CloudFS 7.1.1.0: With 10x performance and scale, the Panzura Freedom Family represents a breakthrough in managing explosive growth in unstructured data. The Panzura CloudFS™ underpins the Freedom Family and is a scale-out, distributed file system built for the cloud.


Kubernetes Sandbox Certified by Bitnami: Bitnami Kubernetes Sandbox provides a complete, easy-to-deploy development environment for containerized apps. It is a realistic environment to learn and develop services in Kubernetes. We monitor all components and libraries for vulnerabilities.


Percona Server for MySQL: Percona Server for MySQL's self-tuning algorithms and support for extremely high-performance hardware deliver excellent performance and reliability. It delivers greater value to MySQL users with optimized performance, greater scalability, and availability.


RimauWAF web Application Firewall: Rimau Web Application Firewall (WAF) protects web application systems and websites from hackers, layer 7 DDoS attacks, SQL injection attacks, scanning attacks, and more. Powered by open-source technology and OWASP rules, with a user-friendly interface panel.


Ubuntu Server: An easy-to-use Ubuntu Server for developers on the Microsoft Azure platform. Certified by Microsoft to host Windows Server 2012 and Windows Server 2008 R2 as guests, under its Server Virtualization Validation Program (SVVP).


WordPress Multisite Certified by Bitnami: WordPress Multisite is the same software that powers Wordpress.com, enabling administrators to host and manage multiple websites from the same WordPress instance. These websites can all have unique domain names while sharing assets.


You Green Trial: YouGreen is the internal network dedicated to green issues. You can sign on and register as a user to understand the impact of your behavior, participate in the community formed by your colleagues, and discuss sustainability ideas and initiatives with others (tips and quiz).

 

Microsoft Azure Applications


F5 BIG-IP O365 Federation IdP: Deploying BIG-IP Access Policy Manager provides secure, federated identity management from your existing Active Directory to your Office 365 applications, eliminating the complexity of additional layers of Active Directory Federation Services servers and proxy servers.


Forcepoint Next Generation Firewall: Forcepoint NGFW (next generation firewall) gives you the scalability, protection, and visibility you need to more efficiently manage and protect traffic into and out of your Microsoft Azure network, as well as among various components of your cloud environment.


Unifi Data Platform 2.6 on Azure HDInsight: This platform is a comprehensive suite of self-service data discovery and preparation tools to empower business users. Unifi predicts what the business user wants to visualize, then connects the data natively to the BI tool for fast, accurate results.

Improving the responsiveness of critical scenarios by updating auto load behavior for extensions


The Visual Studio team partners with extension authors to provide a productive development environment for users, who rely on a rich ecosystem of quality extensions. Today, we’re introducing an update to extension auto load based on feedback from our community of developers, who need to quickly start Visual Studio and load their solution while deferring other functionality to load in the background.

As part of ongoing performance efforts to guarantee a faster startup and solution load experience for all users, Visual Studio will change how auto loaded packages work during startup and solution load scenarios. Please see the upcoming changes for extension authors below and let us know if you have any questions; the team is actively answering questions on the ExtendVS channel on Gitter.

Upcoming changes:

In Visual Studio 2015, we added support for asynchronous packages (AsyncPackage base class) and asynchronous auto load. Extensions have been opting into asynchronous load to reduce performance issues since then. However, there are still some extensions that are loading synchronously, and it is negatively impacting the performance of Visual Studio.

In light of that, changes are coming to start the process of turning off synchronous auto load support. This will improve the user experience and guarantee a consistent startup and solution load experience, providing a responsive IDE. As part of this, auto load behavior in a future Visual Studio update will change as follows:

  1. Async packages that load in the background have a smaller performance impact than synchronously loaded packages, but the cost is still non-zero due to IO contention with the foreground thread when starting Visual Studio or opening a solution. To optimize startup and solution load scenarios specifically, the IDE will not auto load async packages during those scenarios, even on background threads. Instead, the IDE will push all auto load requests into a queue. Once startup or solution load is completed, the IDE will start loading queued packages asynchronously as it detects pauses in user activity. This could mean that a package is never automatically loaded in that session if it’s a short session, or that packages which were queued to be loaded during startup might not load before a user opens a solution.
    Please note that this covers all auto load requests regardless of the source UI context. For example, synchronous auto load requests from any UI context (e.g. SolutionHasSingleProject) or rule-based UI contexts that were previously activated while a solution was being loaded will not be added to the queue. Other sources of package loads, such as project factory queries and service queries, will not be impacted by this change.
  2. All packages that utilize auto load rules will have to support background load and implement asynchronous initialization. The IDE will no longer synchronously auto load packages in any UI context, including rule-based UI contexts.

While asynchronous load support was added in Visual Studio 2015, we know many extensions also want to support Visual Studio 2013 in a single package. In order to make that possible, we have provided a sample that shows how to create a Visual Studio package that loads synchronously in Visual Studio 2013 but also supports asynchronous load in Visual Studio 2015 and above.
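For reference, a minimal sketch of an async, background-loadable package that auto-loads when a solution exists might look like the following (this is not the official sample; the GUID and the UI context are illustrative placeholders):

using System;
using System.Runtime.InteropServices;
using System.Threading;
using Microsoft.VisualStudio.Shell;
using Task = System.Threading.Tasks.Task;

[PackageRegistration(UseManagedResourcesOnly = true, AllowsBackgroundLoading = true)]
[ProvideAutoLoad(UIContextGuids80.SolutionExists, PackageAutoLoadFlags.BackgroundLoad)]
[Guid("11111111-2222-3333-4444-555555555555")]
public sealed class MyExtensionPackage : AsyncPackage
{
    protected override async Task InitializeAsync(
        CancellationToken cancellationToken,
        IProgress<ServiceProgressData> progress)
    {
        // Do as much initialization as possible here, off the UI thread.
        await Task.Yield();

        // Switch to the main thread only for work that truly requires it.
        await JoinableTaskFactory.SwitchToMainThreadAsync(cancellationToken);
    }
}

The important pieces are AllowsBackgroundLoading on PackageRegistration and the BackgroundLoad flag on ProvideAutoLoad; without both, the package is still treated as a synchronous auto-load.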

Timing:

The Visual Studio team is committed to working with extension owners to help make these changes as soon as possible and with as little disruption as possible for end-users. So, the changes will be phased in over multiple updates:

Visual Studio 2017, version 15.7:

  • The Visual Studio Marketplace is posting a reminder during submission of a non-compliant extension (i.e., an extension that auto-loads but is not an async-package that supports background load).
  • The Visual Studio SDK includes a new analyzer that will issue a build reminder for non-compliant extensions.

Visual Studio 2017, version 15.8:

  • Async packages that support background load will be loaded after Visual Studio startup and solution load are completed (this is update #1 mentioned above).

In a later update, Visual Studio will completely disable auto-loading of synchronous extensions (update #2 mentioned above). End users will see a notification in Visual Studio informing them about extensions that were impacted.

Impact on package implementations:

These changes may require updates to existing packages that utilize the ProvideAutoLoad attribute and inherit from the Package base class, including but not limited to:

  • Synchronous packages (those inheriting from the Package base class) must be converted to support asynchronous loading and enable background load. We also encourage package owners to move initialization code to the thread pool as much as possible to ensure users continue to see a responsive IDE. We will be monitoring extensions and UI delays to track responsiveness issues caused by auto loaded packages. You can find more information on diagnosing package auto load performance in our guidance on Microsoft Docs.
    In order to catch potential issues with async conversion, we encourage all package owners to install the latest SDK and Threading analyzers from NuGet into their projects.
  • If your package needs to utilize the main thread because it calls into UI-thread-bound Visual Studio APIs that take a long time to execute, please let us know, as we are looking for opportunities to convert such services to implement async methods or be free-threaded to avoid responsiveness issues when loading packages in the background.
  • Packages that used to load at the beginning of solution load and rely on solution events will need to change implementation as they will no longer receive such events. Instead the package can enumerate the contents of a solution when the extension is loaded. See code sample.
  • Similarly, packages that used to load at startup and relied on solution events will have to handle the case where they are loaded after solution load is completed. It is possible for solution load to occur during or shortly after startup, giving the IDE no chance to load startup packages (altering the load behavior of packages from previous versions of Visual Studio).
  • Packages that register command status handlers will need to ensure their default command states are valid. With these changes there will be a timeframe where the QueryStatus handlers are not registered. Generally, we encourage package owners to utilize rule-based UI contexts as much as possible to determine command states via metadata instead of code-based QueryStatus handlers, and we will be looking for feedback on what additional terms can be added to rule-based UI contexts to help move away from code-based handlers.

Testing async packages that auto load in the background:

Update#1 mentioned above will change the timing of when async packages auto-load in the background. To help you test your package with this behavior, Visual Studio 2017 versions 15.6 and 15.7 include the new auto load manager in the product behind a feature flag (in version 15.8 Preview 2 and later this will be enabled by default). With this feature flag enabled, Visual Studio will defer auto-loading of async, background loadable packages until startup and solution load complete and Visual Studio is idle for some time. Synchronous auto-loading packages will have no change in behavior.

To enable the new auto load behavior, you can run both of the following commands in your Visual Studio installation directory:

    vsregedit set <VSROOT> HKCU FeatureFlags\Shell\AutoLoadRestrictions Value dword 1

    vsregedit set <VSROOT> HKLM AutoLoadPackages AllowSynchronousLoads dword 1

You can use the following command to change the idle time to a large value to aid in testing your extension. For instance, to set the idle time to 60 seconds:

    vsregedit set <VSROOT> HKLM AutoLoadPackages MinimumInputIdleTime dword 60000

To go back to existing behavior, you can run:

    vsregedit set <VSROOT> HKCU FeatureFlags\Shell\AutoLoadRestrictions Value dword 0

Resources:

Mads Kristensen, Senior Program Manager
@mkristensen

Mads Kristensen is a senior program manager on the Visual Studio Extensibility Team and has published over 100 free Visual Studio extensions over the course of the past 5 years.


Reimagine accessibility and foster inclusion in the modern workplace


On May 17, 2018, Microsoft joins in marking the seventh Global Accessibility Awareness Day (GAAD), a day dedicated to raising awareness of accessibility in the digital world. In honor of this day, we are releasing a short film: Empower every person: reimagining accessibility.

This film features accessibility experts from Microsoft and our partners: US Business Leadership Network, Be. Accessible, TD Bank (Canada), and Rochester Institute of Technology. It introduces best practices for building more modern and inclusive workplace environments and shows how accessible-by-design technologies empower every person to create, communicate, and collaborate. It also showcases new capabilities in Microsoft 365 being unveiled today that make it easier to create accessible content.

As more people across the world join in marking this day and take actions every day to create a more accessible digital world, people of all abilities will be able to fully participate and contribute. More than one billion people need assistive products to be independent and productive, but only 1 in 10 have access. Delivering experiences that empower every person to achieve more is what energizes us at Microsoft to do our best work, and I invite everyone to bring that energy to making accessibility non-negotiable in their places of work and across the whole digital landscape.

I continue to be inspired by the ever-increasing number of organizations prioritizing accessibility, and collectively there has been clear progress. However, given adults with disabilities have twice the unemployment rate of those without, more progress is needed to enable the transformative change we all want. At Microsoft, we found that addressing accessibility requires attention in all stages of product development: design, implementation, and testing. Making things accessible from the get-go is not only affordable, but also beneficial for a broad set of people. Given this, our mainstream technologies—such as Microsoft 365—include built-in assistive technologies and accessibility features.

Empowering people with disabilities to create, consume, and share content in their preferred way is a key part of the Microsoft 365 vision for accessibility. In line with this vison, we created new Ease of Access settings in Windows 10 and built-in settings, such as Read Aloud and Dictate in Office 365. These capabilities are designed to support people with a range of access needs: vision, hearing, and interaction. For example, to make interactions more efficient for keyboard users, we have introduced text suggestions that suggest the top three words while typing in on-screen keyboards as well as hardware keyboards. Adoption metrics are a great indicator of product value. Three years ago, we introduced Learning Tools as an add-in to help the 1 in 5 people who exhibit signs of dyslexia. After embedding Learning Tools into mainstream Office 365 applications and the Microsoft Edge browser, we have over 10 million monthly active users. Inclusively designed tools are clearly beneficial for everyone, and I continue to be energized by the powerful stories of inclusion in action told by people whose lives are transformed through Learning Tools and other accessibility features built into Microsoft 365.

Diversity is a strength for any business, and diverse teams must be able to seamlessly collaborate. Another key part of the Microsoft 365 vision for accessibility is to empower everyone to create accessible content and provide equal access to information to people with disabilities such as blindness, low vision, or dyslexia. AI is already infused in Microsoft 365 to help with several aspects of image, audio, and video accessibility. With automatic alt-text for images in Word and PowerPoint, we give you a head start by providing descriptions for images recognizable by Computer Vision. The Presentation Translator add-in for PowerPoint enables you to display live subtitles in more than 60 languages. Additionally, Microsoft Stream generates automatic transcripts for videos in English and Spanish using AI to convert speech to text.

In the coming months, we will ship new features to Microsoft 365 that will make it even more efficient for everyone to create accessible content and ensure diverse teams can collaborate inclusively.

  • Accessibility Checker—Already discoverable next to Spelling Checker in several Office 365 PC and Mac applications, the Accessibility Checker will be enhanced to run proactively in the background. It will alert you in real-time of issues that make your content difficult for people with disabilities to access. For example, it will alert you of low-contrast text that is difficult to read because the font color is too similar to the background color.
  • MailTip—A MailTip will be offered in Outlook for PCs to remind those who collaborate with you to check the accessibility of their content if you indicate that you prefer accessible content, similar to the MailTip available in Outlook Web Access today.
  • Recommended Actions—A new Recommended Actions menu will be introduced within the Accessibility Checker to make it easier to fix flagged issues. It will recommend actions such as Add a description, Mark as decorative, and Suggest a description for me for a picture in a document that is missing alternative text.

This GIF shows the Accessibility Checker being run from the Review tab in a Word document with black text on a grey background and an image of a forest. The Accessibility Checker inspection results show that the image is missing alternative text. To fix the issue, the user clicks the recommended action, Add a description, which opens the Alt Text pane. The user types the image description in the text box. The user then clicks the Low-contrast text warning in the Accessibility Checker inspection results, clicks the recommended action, and changes the page color to white. The inspection results now show no more accessibility issues.

We’re on a journey at Microsoft to design, build, and launch more accessible products to foster digital inclusion in the modern workplace. I invite you to join us on this journey as we reimagine accessibility. Visit the Microsoft accessibility site to learn more about our approach. Share your learnings with #ReimaginingAccessibility and continue the conversation with @MSFTEnable on Twitter.

The post Reimagine accessibility and foster inclusion in the modern workplace appeared first on Microsoft 365 Blog.

Announcing TypeScript 2.9 RC


Today we’re excited to announce TypeScript 2.9’s Release Candidate and get some early feedback on it. To get started with the RC, you can access it through NuGet, or use npm with the following command:

npm install -g typescript@rc

You can also get editor support in your editor of choice.

Let’s jump into some highlights of the Release Candidate!

Support for symbols and numeric literals in keyof and mapped object types

TypeScript’s keyof operator is a useful way to query the property names of an existing type.

interface Person {
    name: string;
    age: number;
}

// Equivalent to the type
//  "name" | "age"
type PersonPropertiesNames = keyof Person;

Unfortunately, because keyof predates TypeScript’s ability to reason about unique symbol types, keyof never recognized symbolic keys.

const baz = Symbol("baz");

interface Thing {
    foo: string;
    bar: number;
    [baz]: boolean; // this is a computed property type
}

// Error in TypeScript 2.8 and earlier!
// `typeof baz` isn't assignable to `"foo" | "bar"`
let x: keyof Thing = baz;

TypeScript 2.9 changes the behavior of keyof to factor in both unique symbols as well as numeric literal types. As such, the above example now compiles as expected. keyof Thing now boils down to the type "foo" | "bar" | typeof baz.

With this functionality, mapped object types like Partial, Required, or Readonly also recognize symbolic and numeric property keys, and no longer drop properties named by symbols:

type Partial<T> = {
    [K in keyof T]?: T[K]
}

interface Thing {
    foo: string;
    bar: number;
    [baz]: boolean;
}

type PartialThing = Partial<Thing>;

// This now works correctly and is equivalent to
//
//   interface PartialThing {
//       foo?: string;
//       bar?: number;
//       [baz]?: boolean;
//   }

Unfortunately this is a breaking change for any usage where users believed that for any type T, keyof T would always be assignable to a string. Because symbol- and numeric-named properties invalidate this assumption, we expect some minor breaks which we believe to be easy to catch. In such cases, there are several possible workarounds.

If you have code that’s really meant to only operate on string properties, you can use Extract<keyof T, string> to restrict symbol and number inputs:

function useKey<T, K extends Extract<keyof T, string>>(obj: T, k: K) {
    let propName: string = k;
    // ...
}

If you have code that’s more broadly applicable and can handle more than just strings, you should be able to substitute string with string | number | symbol, or use the built-in type alias PropertyKey.

function useKey<T, K extends keyof T>(obj: T, k: K) {
    let propName: string | number | symbol = k; 
    // ...
}

Alternatively, users can revert to the old behavior under the --keyofStringsOnly compiler flag, but this is meant to be used as a transitionary flag.

import() types

One long-running pain-point in TypeScript has been the inability to reference a type in another module, or the type of the module itself, without including an import at the top of the file.

In some cases, this is just a matter of convenience – you might not want to add an import at the top of your file just to describe a single type’s usage. For example, to reference the type of a module at an arbitrary location, here’s what you’d have to write before TypeScript 2.9:

import * as _foo from "foo";

export async function bar() {
    let foo: typeof _foo = await import("foo");
}

In other cases, there are simply things that users can’t achieve today – for example, referencing a type within a module in the global scope is impossible today. This is because a file with any imports or exports is considered a module, so adding an import for a type in a global script file will automatically turn that file into a module, which drastically changes things like scoping rules and strict mode within that file.

That’s why TypeScript 2.9 is introducing the new import(...) type syntax. Much like ECMAScript’s proposed import(...) expressions, import types use the same syntax, and provide a convenient way to reference the type of a module, or the types which a module contains.

// foo.ts
export interface Person {
    name: string;
    age: number;
}

// bar.ts
export function greet(p: import("./foo").Person) {
    return `
        Hello, I'm ${p.name}, and I'm ${p.age} years old.
    `;
}

Notice we didn’t need to add a top-level import to specify the type of p. We could also rewrite our example from above where we awkwardly needed to reference the type of a module:

export async function bar() {
    let foo: typeof import("./foo") = await import("./foo");
}

Of course, in this specific example, foo could have been inferred, but this might be more useful with something like the TypeScript language server plugin API.

Breaking changes

keyof types include symbolic/numeric properties

As mentioned above, key queries/keyof types now include names that are symbols and numbers, which can break some code that assumes keyof T is assignable to string. Users can avoid this by using the --keyofStringsOnly compiler option:

// tsconfig.json
{
    "compilerOptions": {
        "keyofStringsOnly": true
    }
}

Trailing commas not allowed on rest parameters

#22262
This break was added for conformance with ECMAScript, as trailing commas are not allowed to follow rest parameters in the specification.

Unconstrained type parameters are no longer assignable to object in strictNullChecks

#24013
The following code now errors:

function f<T>(x: T) {
    const y: object | null | undefined = x;
}

Since generic type parameters can be substituted with any primitive type, this is a precaution TypeScript has added under strictNullChecks. To fix this, you can constrain the type parameter to object:

// We can add an upper-bound constraint here.
//           vvvvvvvvvvvvvvv
function f<T extends object>(x: T) {
    const y: object | null | undefined = x;
}

never can no longer be iterated over

#22964

Values of type never can no longer be iterated over, which may catch a good class of bugs. Users can avoid this behavior by using a type assertion to cast to the type any (i.e. foo as any).
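
A minimal sketch of the kind of code affected, along with the suggested workaround (the variable names are hypothetical):

declare const values: never;

// Error in 2.9: values of type 'never' cannot be iterated over.
for (const value of values) {
    console.log(value);
}

// Workaround: assert to 'any' if the iteration is intentional.
for (const value of values as any) {
    console.log(value);
}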

What’s next?

We try to keep our plans easily discoverable on the TypeScript roadmap for everything else that’s coming in 2.9 and beyond. TypeScript 2.9 proper should arrive towards the end of the month, but to make that successful, we need all the help we can get, so download the RC today and let us know what you think!

Feel free to drop us a line on GitHub if you run into any problems, and let others know how you feel about this RC on Twitter and in the comments below!

Building, Running, and Testing .NET Core and ASP.NET Core 2.1 in Docker on a Raspberry Pi (ARM32)


I love me some Raspberry Pi. They are great little learning machines and are super fun for kids to play with. Even if those kids are adults and they build a 6 node Kubernetes Raspberry Pi Cluster.

Open source .NET Core runs basically everywhere - Windows, Mac, and a dozen Linuxes. However, there is an SDK (that compiles and builds) and a Runtime (that does the actual running of your app). In the past, the .NET Core SDK (to be clear, the ability to "dotnet build") wasn't supported on ARMv7/ARMv8 chips like the Raspberry Pi. Now it is.

.NET Core is now supported on Linux ARM32 distros, like Raspbian and Ubuntu!

Note: .NET Core 2.1 is supported on Raspberry Pi 2+. It isn’t supported on the Pi Zero or other devices that use an ARMv6 chip. .NET Core requires ARMv7 or ARMv8 chips, like the ARM Cortex-A53. Folks on the Azure IoT Edge team use the .NET Core Bionic ARM32 Docker images to support developers writing C# with Edge devices.

There are two ways to run .NET Core on a Raspberry Pi.

One, use Docker. This is literally the fastest and easiest way to get .NET Core up and running on a Pi. It sounds crazy but Raspberry Pis are brilliant little Docker container capable systems. You can do it in minutes, truly. You can install Docker quickly on a Raspberry Pi with just:

curl -sSL https://get.docker.com | sh

sudo usermod -aG docker pi

After installing Docker you'll want to log in and out. You might want to try a quick sample to make sure .NET Core runs! You can explore the available Docker tags at https://hub.docker.com/r/microsoft/dotnet/tags/ and you can read about the .NET Core Docker samples here https://github.com/dotnet/dotnet-docker/tree/master/samples/dotnetapp

Now I can just docker run and then pass in "dotnet --info" to find out about dotnet on my Pi.

pi@raspberrypi:~ $ docker run --rm -it microsoft/dotnet:2.1-sdk dotnet --info

.NET Core SDK (reflecting any global.json):
Version: 2.1.300-rc1-008673
Commit: f5e3ddbe73

Runtime Environment:
OS Name: debian
OS Version: 9
OS Platform: Linux
RID: debian.9-x86
Base Path: /usr/share/dotnet/sdk/2.1.300-rc1-008673/

Host (useful for support):
Version: 2.1.0-rc1
Commit: eb9bc92051

.NET Core SDKs installed:
2.1.300-rc1-008673 [/usr/share/dotnet/sdk]

.NET Core runtimes installed:
Microsoft.NETCore.App 2.1.0-rc1 [/usr/share/dotnet/shared/Microsoft.NETCore.App]

To install additional .NET Core runtimes or SDKs:
https://aka.ms/dotnet-download

This is super cool. There I'm on the Raspberry Pi (RPi) and I just ask for the dotnet:2.1-sdk and because they are using "multiarch" docker files, Docker does the right thing and it just works. If you want to use .NET Core on ARM32 with Docker, you can use any of the following tags.

Note: The first three tags are multi-arch and bionic is Ubuntu 18.04. The codename stretch is Debian 9. So I'm using 2.1-sdk and it's working on my RPi, but I can be specific if I'd prefer.

  • 2.1-sdk
  • 2.1-runtime
  • 2.1-aspnetcore-runtime
  • 2.1-sdk-stretch-arm32v7
  • 2.1-runtime-stretch-slim-arm32v7
  • 2.1-aspnetcore-runtime-stretch-slim-arm32v7
  • 2.1-sdk-bionic-arm32v7
  • 2.1-runtime-bionic-arm32v7
  • 2.1-aspnetcore-runtime-bionic-arm32v7

Try one in minutes like this:

docker run --rm microsoft/dotnet-samples:dotnetapp

Here it is downloading the images...

Docker on a Raspberry Pi

In previous versions of .NET Core's Dockerfiles it would fail if you were running an x64 image on ARM:

standard_init_linux.go:190: exec user process caused "exec format error"

Different processors! But thanks to multi-arch images (per https://github.com/dotnet/announcements/issues/14 from Kendra at Microsoft), it just works with 2.1.

Docker has a multi-arch feature that microsoft/dotnet-nightly recently started utilizing. The plan is to port this to the official microsoft/dotnet repo shortly. The multi-arch feature allows a single tag to be used across multiple machine configurations. Without this feature, each architecture/OS/platform requires a unique tag. For example, the microsoft/dotnet:1.0-runtime tag is based on Debian and microsoft/dotnet:1.0-runtime-nanoserver is based on Nano Server. With multi-arch there will be one common microsoft/dotnet:1.0-runtime tag. If you pull that tag from a Linux container environment you will get the Debian based image, whereas if you pull that tag from a Windows container environment you will get the Nano Server based image. This helps provide tag uniformity across Docker environments, thus eliminating confusion.

In these examples above I can:

  • Run a preconfigured app within a Docker image like:
    • docker run --rm microsoft/dotnet-samples:dotnetapp
  • Run dotnet commands within the SDK image like:
    • docker run --rm -it microsoft/dotnet:2.1-sdk dotnet --info
  • Run an interactive terminal within the SDK image like:
    • docker run --rm -it microsoft/dotnet:2.1-sdk

As a quick example, here I'll jump into a container and new up a quick console app and run it, just to prove I can. This work will be thrown away when I exit the container.

pi@raspberrypi:~ $ docker run --rm -it microsoft/dotnet:2.1-sdk

root@063f3c50c88a:/# ls
bin boot dev etc home lib media mnt opt proc root run sbin srv sys tmp usr var
root@063f3c50c88a:/# cd ~
root@063f3c50c88a:~# mkdir mytest
root@063f3c50c88a:~# cd mytest/
root@063f3c50c88a:~/mytest# dotnet new console
The template "Console Application" was created successfully.

Processing post-creation actions...
Running 'dotnet restore' on /root/mytest/mytest.csproj...
Restoring packages for /root/mytest/mytest.csproj...
Installing Microsoft.NETCore.DotNetAppHost 2.1.0-rc1.
Installing Microsoft.NETCore.DotNetHostResolver 2.1.0-rc1.
Installing NETStandard.Library 2.0.3.
Installing Microsoft.NETCore.DotNetHostPolicy 2.1.0-rc1.
Installing Microsoft.NETCore.App 2.1.0-rc1.
Installing Microsoft.NETCore.Platforms 2.1.0-rc1.
Installing Microsoft.NETCore.Targets 2.1.0-rc1.
Generating MSBuild file /root/mytest/obj/mytest.csproj.nuget.g.props.
Generating MSBuild file /root/mytest/obj/mytest.csproj.nuget.g.targets.
Restore completed in 15.8 sec for /root/mytest/mytest.csproj.

Restore succeeded.
root@063f3c50c88a:~/mytest# dotnet run
Hello World!
root@063f3c50c88a:~/mytest# dotnet exec bin/Debug/netcoreapp2.1/mytest.dll
Hello World!

If you try it yourself, you'll note that "dotnet run" isn't very fast. That's because it does a restore, build, and run. Compilation isn't super quick on these tiny devices. You'll want to do as little work as possible. Rather than a "dotnet run" all the time, I'll do a "dotnet build" then a "dotnet exec" which is very fast.

If you're going to do Docker and .NET Core, I can't stress enough how useful the resources are over at https://github.com/dotnet/dotnet-docker.

Building .NET Core Apps with Docker

Develop .NET Core Apps in a Container

  • Develop .NET Core Applications - This sample shows how to develop, build and test .NET Core applications with Docker without the need to install the .NET Core SDK.
  • Develop ASP.NET Core Applications - This sample shows how to develop and test ASP.NET Core applications with Docker without the need to install the .NET Core SDK.

Optimizing Container Size

ARM32 / Raspberry Pi

I found the samples to be super useful...be sure to dig into the Dockerfiles themselves as it'll give you a ton of insight into how to structure your own files. Being able to do Multistage Dockerfiles is crucial when building on a small device like a RPi. You want to do as little work as possible and let Docker cache as many layers with its internal "smarts." If you're not thoughtful about this, you'll end up wasting 10x the time building image layers every build.

Dockerizing a real ASP.NET Core Site with tests!

Can I take my podcast site and Dockerize it and build/test/run it on a Raspberry Pi? YES.

FROM microsoft/dotnet:2.1-sdk AS build

WORKDIR /app

# copy csproj and restore as distinct layers
COPY *.sln .
COPY hanselminutes.core/*.csproj ./hanselminutes.core/
COPY hanselminutes.core.tests/*.csproj ./hanselminutes.core.tests/
RUN dotnet restore

# copy everything else and build app
COPY . .
WORKDIR /app/hanselminutes.core
RUN dotnet build


FROM build AS testrunner
WORKDIR /app/hanselminutes.core.tests
ENTRYPOINT ["dotnet", "test", "--logger:trx"]


FROM build AS test
WORKDIR /app/hanselminutes.core.tests
RUN dotnet test


FROM build AS publish
WORKDIR /app/hanselminutes.core
RUN dotnet publish -c Release -o out


FROM microsoft/dotnet:2.1-aspnetcore-runtime AS runtime
WORKDIR /app
COPY --from=publish /app/hanselminutes.core/out ./
ENTRYPOINT ["dotnet", "hanselminutes.core.dll"]

Love it. Now I can "docker build ." on my Raspberry Pi. It will restore, test, and build. If the tests fail, the Docker build will fail.

See how there's an extra section up there called "testrunner" and then after it is "test?" That testrunner section is a no-op. It sets an ENTRYPOINT but it is never used...yet. The ENTRYPOINT is an implicit run if it is the last line in the Dockerfile. That's there so I can "Run up to it" if I want to.

I can just build and run like this:

docker build -t podcast .

docker run --rm -it -p 8000:80 podcast

NOTE/GOTCHA: Note that the "runtime" image is microsoft/dotnet:2.1-aspnetcore-runtime, not microsoft/dotnet:2.1-runtime. That aspnetcore one pre-includes the binaries I need for running an ASP.NET app; that way I can just include a single reference to "<PackageReference Include="Microsoft.AspNetCore.App" Version="2.1.0-rc1-final" />" in my csproj. If I didn't use the aspnetcore-runtime base image, I'd need to manually pull in all the ASP.NET Core packages that I want. Using the base image might make the resulting image files larger, but it's a balance between convenience and size. It's up to you. You can manually include just the packages you need, or pull in the "Microsoft.AspNetCore.App" meta-package for convenience. My resulting "podcast" image ended up 205 megs, so not too bad, but of course if I wanted I could trim in a number of ways.

Or, if I JUST want test results from Docker, I can do this! That means I can run the tests in the Docker container, mount a volume between the Linux container and (theoretical) Windows host, and then open the resulting .trx file in Visual Studio!

docker build --pull --target testrunner -t podcast:test .

docker run --rm -v D:\github\hanselminutes-core\TestResults:/app/hanselminutes.core.tests/TestResults podcast:test

Check it out! These are the test results from the tests that ran within the Linux Container:

XUnit Tests from within a Docker Container on Linux viewed within Visual Studio on Windows

Here's the result. I've now got my Podcast website running in Docker on an ARM32 Raspberry Pi 3 with just an hour's work (writing the Dockerfile)!

It's my podcast site running under Docker on .NET Core 2.1 on a Raspberry Pi

Cross-platform for the win!


Sponsor: Check out JetBrains Rider: a cross-platform .NET IDE. Edit, refactor, test and debug ASP.NET, .NET Framework, .NET Core, Xamarin or Unity applications. Learn more and download a 30-day trial!



© 2018 Scott Hanselman. All rights reserved.
     

Detect malicious activity using Azure Security Center and Azure Log Analytics


This blog post was authored by the Azure Security Center team. ​

We have heard from our customers that investigating malicious activity on their systems can be tedious and knowing where to start is challenging. Azure Security Center makes it simple for you to respond to detected threats. It uses built-in behavioral analytics and machine learning to detect threats and generates alerts for the attempted or successful attacks. As discussed in a previous post, you can explore the alerts of detected threats through the Investigation Path, which uses Azure Log Analytics to show the relationship between all the entities involved in the attack. Today, we are going to explain to you how Security Center’s ability to detect threats using machine learning and Azure Log Analytics can help you keep pace with rapidly evolving cyberattacks.

Investigate anomalies on your systems using Azure Log Analytics

One method is to look at the trends of processes, accounts, and computers to understand when anomalous or rare processes and accounts are run on computers, which can indicate potentially malicious or unwanted activity. Run the query below against your data and note that whatever comes up is anomalous or rare over the last 30 days. The query shows the processes run by computers and account groups over a week, highlights what is new, and compares it to the behavior over the last 30 days. This technique can be applied to any of the logs provided in the Advanced Azure Log Analytics pane. In this example, I am using the SecurityEvent table.

Please note that the exclusion filters in the query (the Process !contains lines) are an example of filtering your own results for noise and are not specifically required. The reason I have included them is to make it clear that there will be certain items that are not run often and show up as anomalous when using this or similar queries; these are specific to your environment and may need manual exclusion to help focus the investigation. Please build your own list of “known good” items to filter out based on your environment.

let T = SecurityEvent
| where TimeGenerated >= ago(30d)
| extend Date = startofday(TimeGenerated)
| extend Process = ProcessName
| where Process != ""
| where Process != "-"
| where Process !contains "\Windows\System"
| where Process !contains "\Program Files\Microsoft\"
| where Process !contains "\Program Files\Microsoft Monitoring Agent\"
| where Process !contains "\ProgramData\"
| where Process !contains "\Windows\WinSxS\"
| where Process !contains "\Windows\SoftwareDistribution\"
| where Process !contains "\mpsigstub.exe"
| where Process !contains "\WindowsAzure\GuestAgent"
| where Process !contains "\Windows\Servicing\TrustedInstaller.exe"
| where Process !contains "\Windows\Microsoft.Net\"
| where Process !contains "\Packages\Plugins\"
| project Date, Process, Computer, Account
| summarize count() by Date, Process, Computer, Account
| sort by count_ desc nulls last;
T
| evaluate activity_counts_metrics(Process, Date, startofday(ago(30d)), startofday(now()), 1d, Process, Computer, Account)
| extend WeekDate = startofweek(Date)
| project WeekDate, Date, Process, NewForWeek = new_dcount, Account, Computer
| join kind= inner
(
      T
      | evaluate activity_engagement(Process, Date, startofday(ago(30d)), startofday(now()),1d, 7d)
      | extend WeekDate = startofweek(Date)
      | project WeekDate, Date, Distribution1day = dcount_activities_inner, Distribution7days = dcount_activities_outer, Ratio = activity_ratio*100
)
on WeekDate, Date
| where NewForWeek == 1 and Ratio < 100
| project WeekDate, Date, Process, Account, Computer , NewForWeek, Distribution1day, Distribution7days, Ratio
| render barchart kind=stacked

When the above query is run, you will receive a TABLE similar to the item below, although the dates and referenced processes will be different. In this example, we can see when a specific process, computer, and account has not been seen before based on week over week data for the last 30 days. Specifically, we can see portping.exe showed up in the week of 4/15 and on the date of 4/16 for the first time in 30 days.

Table 1

You can also view the results in CHART mode and change the pivot of the bar CHART as seen below. For example, use the drop down and pivot on Computer instead of process and see the computers that launched this process.


Hover to see the specific computer and how many processes showed up for the first time.

Potential Anomaly Count

In the query above, we look at the items that ran across more than one day, i.e., those with a ratio of less than 100. This is a way to parse the data and more easily understand the scope of when a process runs on a given computer. By looking at rare items that have run across multiple days, you can potentially detect manual activity by an attacker who is probing your environment for information that will further expand their attack surface.

We can alternatively look at the processes that ran on only 1 day of the last 30 days, which can be done by choosing only a ratio of 100 in the above query; simply change the related line to this:

| where NewForWeek == 1 and Ratio == 100

The above change to the query results in a different set of hits for rare processes and may indicate usage of a scripted attack to rapidly gather data from this system or several systems, or may just indicate attacker activity on a single day.

Lastly, we see several interactive processes run, which indicate an interactive logon, for example SQL Mgmt Studio process Ssms.exe. Potentially, this is an unexpected logon to this system and this query can help expose this type of anomaly in addition to unexpected processes.

Table 2

Once you have identified a computer or account you want to investigate, you can then dig in further on the full data for that computer. This can be done by opening a secondary query window and filtering only on the computer or account that you are interested in. Examples of this would be as follows. At that point, you can see what occurred around the anomalous or rare process execution time. We will select the portping.exe process and narrow the scope of the dates to allow for a closer look.  From the table above, we can see the Date[UTC] circled below. This date is rounded to the nearest day for the query to work properly, but this along with the computer and account used should allow us to focus in on the timeframe of when this was run on the computer.

Table 3

To focus in on the timeframe, we will use that date to provide our single day range. We can pass the range into the query by using standard date formats indicated below. Click on the + highlighted in yellow and paste the below query into your window.

In the results, the distinct time is marked in red. We will use that in a subsequent query.

SecurityEvent
| where TimeGenerated >= datetime(2018-04-16 00:00:00.000) and TimeGenerated <= datetime(2018-04-16 23:59:59.999)
| where Computer contains "Contoso-2016" and Account contains "ContosoAdmin"
| where Process contains "portping.exe"
| project TimeGenerated, Computer, Account, Process, CommandLine


Now that we have the exact time, we can look at activity occurring with smaller time frames around that date. We usually use +5 minute and -5 minute blocks. For example:

SecurityEvent
| where TimeGenerated >= datetime(2018-04-16 19:10:00.000) and TimeGenerated <= datetime(2018-04-16 19:21:00.000)
| where Computer contains "Contoso-2016" and Account contains "ContosoAdmin"
//| where Process contains "portping.exe"
| project TimeGenerated, Computer, Account, Process, CommandLine

In the results below, we can easily see that someone was logged into the system via RDP. We know this because RDPClip.exe is being launched, which indicates they were copying and pasting between their host and the remote system.

Additionally, we see after the portping.exe activity that they are attempting to modify accounts or password functionality with the command netplwiz.exe or control userpasswords2.

They are then running Procmon.exe to see what other processes are running on the system. Generally this is done to understand what is available to the attacker to further exploit.


At this point, this machine should be taken offline and investigated more deeply to understand the extent of the compromise.

Find hidden techniques commonly deployed by attackers using Azure Log Analytics

Most security experts have seen the techniques attackers use to hide the usage of commands on a system to avoid detection. While there are certainly methods to avoid even showing up on the command line, the obfuscation technique used below is regularly used by various levels of attackers.

Below we will decode a base64 encoded string in the command line data and look for common PowerShell methods that are used in attacks.

SecurityEvent
| where TimeGenerated >= ago(30d)
| where Process contains "powershell.exe" and CommandLine contains " -enc"
|extend b64 = extract("[A-Za-z0-9|+|=|/]{30,}", 0,CommandLine)
|extend utf8_decode=base64_decodestring(b64)
|extend decode =  replace ("\x00","", utf8_decode)
|where decode contains 'Gzip' or decode contains 'IEX' or decode contains 'Invoke' or decode contains '.MemoryStream'
| summarize by Computer, Account, decode, CommandLine

Table 4

As you can see, the results provide you with details about what was in the encoded command line and potentially what an attacker was attempting to do.

You can now use the details in the above query to see what was running at the same time by adding the time and computer to the same table. This allows you to easily connect it with other activity on the system, a process described in detail just above. One thing to note is that you can add these filters automatically by expanding the event with the arrow in the first column of the row, then hovering over TimeGenerated and clicking the + button.

Time Generated

This will add in an entry like so into your query window:

| where TimeGenerated == todatetime('2018-04-24T02:00:00Z')

Modify the range of time like this:

SecurityEvent
| where TimeGenerated >= ago(30d)
| where Computer == "XXXXXXX"
| where TimeGenerated >= todatetime('2018-04-24T02:00:00Z')-5m and TimeGenerated <= todatetime('2018-04-24T02:00:00Z')+5m
| project TimeGenerated, Account, Computer, Process, CommandLine, ParentProcessName
| sort by TimeGenerated asc nulls last

Table 5

Lastly, connect this to your various alerts by joining to the alerts from the last 30 days to see which alerts are associated:

SecurityEvent
| where TimeGenerated >= ago(30d)
| where Process contains "powershell.exe"  and CommandLine contains " -enc"
| extend b64 = extract( "[A-Za-z0-9|+|=|/]{30,}", 0,CommandLine)
| extend utf8_decode=base64_decodestring(b64)
| extend decode =  replace ("\x00","", utf8_decode)
| where decode contains 'Gzip' or decode contains 'IEX' or decode contains 'Invoke' or decode contains '.MemoryStream'
| summarize by TimeGenerated, Computer=toupper(Computer), Account, decode, CommandLine
| join kind= inner (
      SecurityAlert | where TimeGenerated >= ago(30d)
      | extend ExtProps = parsejson(ExtendedProperties)
      | extend Computer = toupper(tostring(ExtProps["Machine Name"]))
      | project Computer, AlertName, Description
) on Computer

Table 6

Security Center uses Azure Log Analytics to help you detect anomalies in your data as well as expose common hiding techniques used by attackers. By exploring more of your data through directed queries like those presented above, you may find anomalies that are both malicious and benign, but in doing so you will have made your environment more secure and gained a better understanding of the activity occurring on the systems and resources in your subscription.

Learn more about Azure Security Center

To learn more about Azure Security Center’s detection capabilities, visit our threat detection documentation.

To learn more about Azure Advanced Threat Protection, visit our threat protection documentation.

To learn more about integration with Windows Defender Advanced Threat Protection, visit our threat protection integration documentation.

To stay up-to-date with the latest announcements on Azure Security Center, read and subscribe to our blog.

Previewing support for same-site cookies in Microsoft Edge


Yesterday’s Windows Insider Preview build (build 17672) introduces support for the SameSite cookies standard in Microsoft Edge, ahead of a planned rollout in Microsoft Edge and Internet Explorer. Same-site cookies enable more protection for users against cross-site request forgery (CSRF) attacks.

Historically, sites such as example.com that make “cross-origin” requests to other domains such as microsoft.com have generally caused the browser to send microsoft.com’s cookies as part of the request. Normally, the user benefits by being able to reuse some state (e.g., login state) across sites no matter from where that request originated. Unfortunately, this can be abused, as in CSRF attacks. Same-site cookies are a valuable addition to the defense in depth against CSRF attacks.

Sites can now set the SameSite attribute on cookies of their choosing via the Set-Cookie header or by using the document.cookie JavaScript property, opting those cookies out of the default browser behavior of being sent with cross-site requests – either for all cross-site requests (via the “strict” value) or for all but the less sensitive ones, such as top-level link navigations (via the “lax” value).

More specifically, if the strict value is specified when a same-site cookie is set, the cookie will not be sent for any cross-site request, including clicks on links from external sites. If a site stores its logged-in state as a SameSite=Strict cookie, when a user clicks such a link it will initially appear as if the user is not logged in.

On the other hand, if the lax value is specified when a same-site cookie is set, the cookie will not be sent for cross-origin sub-resource requests such as images. However, SameSite=Lax cookies will be sent when navigating from an external site, such as when a link is clicked.
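
As a rough illustration (the cookie names and values here are hypothetical), a site can opt in from the server via the Set-Cookie header, or from script via document.cookie:

// Response headers set by the server:
//   Set-Cookie: session_id=abc123; Secure; HttpOnly; SameSite=Strict
//   Set-Cookie: site_prefs=dark_mode; Secure; SameSite=Lax

// The same attribute can be applied from script:
document.cookie = "site_prefs=dark_mode; Secure; SameSite=Lax";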

This feature is backwards compatible―that is, browsers that don’t support same-site cookies will safely ignore the additional attribute and will simply use the cookie as a regular cookie.

We continuously work to improve our support of standards towards a more interoperable web. Although same-site cookies is not yet a finalized standard at the Internet Engineering Task Force (IETF), we believe the feature is stable and compelling enough to warrant an early implementation as the standardization process progresses.

To broaden the security benefits of this feature, we plan to service Microsoft Edge and Internet Explorer 11 on the Windows 10 Creators Update and newer to support same-site cookies as well, allowing sites to rely on same-site cookies as a defense against CSRF and other related cross-site timing and cross-site information-leakage attacks.

— Ali Alabbas, Program Manager, Microsoft Edge
— Gabriel Montenegro, Program Manager, Windows Networking
— Brent Mills, Program Manager, Internet Explorer

The post Previewing support for same-site cookies in Microsoft Edge appeared first on Microsoft Edge Dev Blog.

.NET Framework May 2018 Preview of Quality Rollup for Windows 10


Today, we are releasing the May 2018 Preview of Quality Rollup for Windows 10 1703 (Creators Update) and Windows 10 1607 (Anniversary Update).

Quality and Reliability

This release contains the following quality and reliability improvements.

CLR

  • Resolves an issue in deserialization when using a collection, for example, ConcurrentDictionary by ignoring casing. [524135]
  • Resolves instances of high CPU usage with background garbage collection. This can be observed with the following two functions on the stack: clr!*gc_heap::bgc_thread_function, ntoskrnl!KiPageFault. Most of the CPU time is spent in the ntoskrnl!ExpWaitForSpinLockExclusiveAndAcquire function. This change updates background garbage collection to use the CLR implementation of write watch instead of the one in Windows. [574027]

Networking

  • Fixed a problem with connection limit when using HttpClient to send requests to loopback addresses. [539851]

WPF

  • A crash can occur during shutdown of an application that hosts WPF content in a separate AppDomain. (A notable example of this is an Office application hosting a VSTO add-in that uses WPF.) [543980]
  • Addresses an issue that caused XAML Browser Applications (XBAP’s) targeting .NET 3.5 to sometimes be loaded using .NET 4.x runtime incorrectly. [555344]
  • A WPF application can crash due to a NullReferenceException if a Binding (or MultiBinding) used in a DataTrigger (or MultiDataTrigger) belonging to a Style (or Template, or ThemeStyle) reports a new value, but whose host element gets GC’d in a very narrow window of time during the reporting process. [562000]
  • A WPF application can crash due to a spurious ElementNotAvailableException. This can arise if:
    1. Change TreeView.IsEnabled
    2. Remove an item X from the collection
    3. Re-insert the same item X back into the collection
    4. Remove one of X’s subitems Y from its collection
    (Step 4 can happen any time relative to steps 2 and 3, as long as it’s after step 1. Steps 2-4 must occur before the asynchronous call to UpdatePeer, posted by step 1; this will happen if steps 1-4 all occur in the same button-click handler.) [555225]

Note: Additional information on these improvements is not available. The VSTS bug number provided with each improvement is a unique ID that you can give Microsoft Customer Support, include in StackOverflow comments or use in web searches.

Getting the Update

The Preview of Quality Rollup is available via Windows Update, Windows Server Update Services, and Microsoft Update Catalog.

Microsoft Update Catalog

You can get the update via the Microsoft Update Catalog.

Product Version – Preview of Quality Rollup KB
  • Windows 10 1703 (Creators Update) – Catalog 4103722
  • .NET Framework 4.7, 4.7.1 – 4103722
  • Windows 10 1607 (Anniversary Update) – Catalog 4103720
  • .NET Framework 4.6.2, 4.7, 4.7.1 – 4103720

Previous Monthly Rollups

The last few .NET Framework Monthly updates are listed below for your convenience:

Let me tell you what you missed at BUILD


If you weren't able to attend last week's BUILD conference in Seattle, you can always catch up on the keynotes and the session talks online, or read this recap by Charlotte Yarkoni. Or, if you have 45 minutes on Wednesday next week, you can join Tim Heuer and me as we recap some of the biggest announcements of the conference, and a few things you might have missed as well. We'll focus mainly on the AI and Machine Learning related announcements, including:

The webinar will be at 10AM Pacific time on Wednesday, May 23 when you'll also have the opportunity to ask us live questions. You can register for the webinar below, which will also give you access to the on-demand recording if you can't make the live sessions. We hope to see you there!

Azure Webinar Series: Top Azure Takeaways from Microsoft Build

 


Announcing support for Pen with Windows Application Driver v1.1 – Preview available now


Windows Application Driver (WinAppDriver) is continuing Microsoft’s investment in UI test automation tools for Windows 10, and now, we’re excited to announce the next release of WinAppDriver—version 1.1!

A preview is available today— bringing support for Pen automation. The full v1.1 release is also on the horizon and will feature support for Multi-Touch in addition to Pen.

What is WinAppDriver?

For those of you who aren’t familiar with WinAppDriver or UI automation, WinAppDriver is an open-standards based UI automation service designed to work with all kinds of Windows 10 applications including WPF, WinForms, legacy Win32, and UWP. By complying with an open-standard, WinAppDriver users will be able to leverage the robust UI automation ecosystem already provided by Appium and Selenium.

What’s new in the v1.1 Preview

In v1.1, we’re aligning with the W3C WebDriver standard, and as a result, implementing the Actions API to bring in advanced input device support.

The Preview release of v1.1 includes the following:

  1. WinAppDriver updated for Pen, including support for advanced Pen functionality:
    1. Pressure
    2. Tilt X & Tilt Y
    3. Twist
    4. Barrel button
    5. Eraser support
  2. Appium-Dotnet-Driver NuGet Package
    1. This is a preview Nuget package with updated bindings to enable Pen automation on WinAppDriver.
  3. Samples & Documentation on GitHub

Note that the full release will also include support for Multi-Touch – more details on that below.

Getting Started with Pen

You can download the preview version of WinAppDriver on our GitHub page here: https://github.com/Microsoft/WinAppDriver/releases.

To get started on using Pen, we highly recommend checking out our new Sticky Notes sample here.

Let’s sketch out a quick smile 🙂

To demonstrate something a little more complex than a few strokes on a sticky note, we tried drawing out a smiley face through the following steps.

Step 1 – Using Pen to draw a basic circle

Using Pen to draw a circle.


//Initiate a Pen object using the custom Dotnet Driver Bindings.
PointerInputDevice penDevice = new PointerInputDevice(PointerKind.Pen);
ActionSequence drawSequence = new ActionSequence(penDevice, 0);

//Set starting position of circle by its center point.
var centerX = canvasCoordinate.X + canvasSize.Width / 5 + 285;
var centerY = canvasCoordinate.Y + canvasSize.Height / 5 + 270;

//Radius of circle.
var radius = 200;

/* This value dictates the number of strokes used to complete the circle. The more steps, the less blocky the circle will be. Note: Sticky Notes will interpolate between the steps anyway, so it will not appear blocky if the Pen is not lifted. */
var steps = 50;

// These two variables calculate the X and Y coordinates around the center of the circle for a given step.
int xValue = (int)(centerX + radius * Math.Cos(2 * Math.PI * 0 / steps));
int yValue = (int)(centerY + radius * Math.Sin(2 * Math.PI * 0 / steps));
drawSequence.AddAction(penDevice.CreatePointerMove(CoordinateOrigin.Viewport, xValue, yValue, TimeSpan.Zero));
drawSequence.AddAction(penDevice.CreatePointerUp(PointerButton.PenFrontTip));

/* Draw the circle by calculating coordinates around the center point and brushing through them. */
for (var i = 0; i <= steps; i++)
{
    xValue = (int)(centerX + radius * Math.Cos(2 * Math.PI * i / steps));
    yValue = (int)(centerY + radius * Math.Sin(2 * Math.PI * i / steps));
    drawSequence.AddAction(penDevice.CreatePointerDown(PointerButton.PenContact));
    drawSequence.AddAction(penDevice.CreatePointerMove(CoordinateOrigin.Viewport, xValue, yValue, TimeSpan.Zero));
}

//Lift the pen up once the circle is drawn.
drawSequence.AddAction(penDevice.CreatePointerUp(PointerButton.PenContact));

//Final step is to execute the sequence.
newStickyNoteSession.PerformActions(new List<ActionSequence> { drawSequence });

Step 2 – Adding in the smile

We’ll have to get a little clever with this part, and modify the original for-loop from step 1 and add supplementary code following it. This will continue the same sequence.


/* Loop modified to iterate past the opening step and closing step of the circle. This creates an opening for the smiley "mouth". */
for (var i = 0; i < steps; i++)
{
    xValue = (int)(centerX + radius * Math.Cos(2 * Math.PI * i / steps));
    yValue = (int)(centerY + radius * Math.Sin(2 * Math.PI * i / steps));
    // Keep the pen up for the first couple of steps to leave a gap for the mouth.
    if (i > 1)
    {
        drawSequence.AddAction(penDevice.CreatePointerDown(PointerButton.PenContact));
    }
    drawSequence.AddAction(penDevice.CreatePointerMove(CoordinateOrigin.Viewport, xValue, yValue, TimeSpan.Zero));
}
drawSequence.AddAction(penDevice.CreatePointerUp(PointerButton.PenContact));
/* The following vars will calculate X & Y coordinates for start and end point of the smile. */
var xSmile = (int)(centerX + radius * Math.Cos(2 * Math.PI * 1 / steps));
var ySmile = (int)(centerY + radius * Math.Sin(2 * Math.PI * 1 / steps));
var xSmile2 = (int)(centerX + radius * Math.Cos(2 * Math.PI * (steps - 1) / steps));
var ySmile2 = (int)(centerY + radius * Math.Sin(2 * Math.PI * (steps - 1) / steps));

/* Continue previous sequence and execute the Pen actions. */
drawSequence.AddAction(penDevice.CreatePointerMove(CoordinateOrigin.Viewport, xSmile, ySmile, TimeSpan.Zero));
drawSequence.AddAction(penDevice.CreatePointerDown(PointerButton.PenContact));
drawSequence.AddAction(penDevice.CreatePointerMove(CoordinateOrigin.Viewport, centerX, centerY, TimeSpan.FromMilliseconds(400), new PenInfo { Pressure = .600f }));
drawSequence.AddAction(penDevice.CreatePointerUp(PointerButton.PenContact));
drawSequence.AddAction(penDevice.CreatePointerDown(PointerButton.PenContact));
drawSequence.AddAction(penDevice.CreatePointerMove(CoordinateOrigin.Viewport, xSmile2, ySmile2, TimeSpan.FromMilliseconds(400)));

// Execute sequence.
newStickyNoteSession.PerformActions(new List<ActionSequence> { drawSequence });

Note the additional pressure applied to the “smile strokes”; this is to add depth to the smile. The sketch should appear as the following:

Adding a smiley face to a circle with Pen.
Step 3 – Eyes

If we wanted to finish up and add eyes, it would look something like the following:

Adding eyes to a smiley face with Pen.

There are a lot of clever ways you can go about adding the eyes, so we’ll let you decide which way is best. The full code for our design will be included as part of the Sticky Notes sample on GitHub!

Details on full 1.1

The full release of v1.1 will come with the following additions:

WinAppDriver

  1. Pen—support for Pen will be carried over to full 1.1 release from the Preview.
  2. Multi-Touch—support for Multi-Touch will be added in as well through the Actions API. The following touch modifiers will be supported:
    1. Pressure
    2. Twist

New Samples & Bindings

Samples from the Preview will be further expanded to demonstrate Pen and Multi-Touch functionality. The samples will incorporate a private Appium-Dotnet-Driver Nuget feed that will enable Actions implementation via the new bindings. We’re looking into having these changes be merged into the official Appium .Net Driver, and eventually be rolled-up to the Selenium Namespace in the future.

Release Date

We’re targeting this June for the full release of v1.1—stay tuned to our GitHub board for more info!

Moving Forward

The WinAppDriver team will continue to work on adding new features, resolving bugs, and improving performance. We’ve been looking into popular community requests, and as such, have a couple of cool things in the pipeline for 1.2 and beyond—one in particular being to improve performance with XPath handling.

We’ll also be releasing a new tool for the community – more details on this to follow in the coming weeks. Stay tuned!

How do I provide feedback?

Please provide feedback on our Github issue board – we look forward to hearing about any suggestions, feature requests, or bug reports!

https://github.com/Microsoft/WinAppDriver/issues

If you have any cool sketches done through v1.1 that you’d like to share—do so on the GitHub board! It may even be featured in a future blog post!

Stay Informed

To stay up to date with WinAppDriver news follow @mrhassanuz.

Summary

The v1.1 Preview is available now—enabling users to automate Pen scenarios. Full release for v1.1 to follow, and with it will bring support for Multi-Touch as well. Head over to our releases page on GitHub to download the preview, and get a jump-start on Pen automation by reviewing our updated samples.

The post Announcing support for Pen with Windows Application Driver v1.1 – Preview available now appeared first on Windows Developer Blog.

Announcing SQL Advanced Threat Protection (ATP) and SQL Vulnerability Assessment general availability


We are delighted to announce the general availability of SQL Vulnerability Assessment for Azure SQL Database! SQL Vulnerability Assessment (VA) provides you a one-stop-shop to discover, track and remediate potential database vulnerabilities. It helps give you visibility into your security state, and includes actionable steps to investigate, manage and resolve security issues, and enhance your database fortifications. VA is available for Azure SQL Database customers as well as for on-premises SQL Server customers via SSMS.

If you have data privacy requirements or need to comply with data protection regulations like the European Union General Data Protection Regulation (EU GDPR), then VA is your built-in solution to simplify these processes and monitor your database protection status. For dynamic database environments where changes are frequent and hard to track, VA is invaluable in detecting the settings that can leave your database vulnerable to attack.


New SQL Advanced Threat Protection (ATP)

VA is being released to general availability (GA) as part of a new security package for your Azure SQL Database, called SQL Advanced Threat Protection (ATP). ATP provides a single go-to location for discovering, classifying and protecting sensitive data, managing your database vulnerabilities, and detecting anomalous activities that could indicate a threat to the database.

 


With one click, you can enable ATP on your entire database server, applying to all databases on the server. ATP includes SQL Threat Detection (already generally available), SQL Vulnerability Assessment, and SQL Information Protection (currently in preview). You can try it for free with a 60-day free trial period. For more information, please see our pricing page.

Existing Threat Detection customers will continue to receive the Threat Detection service for the same price as before, with the additional benefits of the entire ATP package. All other customers will be required to opt-in to the new ATP service.

What’s new in VA?

SQL Vulnerability Assessment is an easy-to-use service that helps you monitor that your database maintains a high level of security at all times and that your organizational policies are met. It provides a comprehensive security report along with actionable remediation steps for each issue found, making it easy to proactively manage your database security posture even if you are not a security expert.

With the GA announcement, SQL VA supports some new and valuable capabilities.

  • Automated scheduled scans – Configure VA to automatically run a scan for you once a week and send you an email with a result summary.
  • Exportable report – One click to create and download an Excel report of the complete assessment results.
  • Augmented rule set – A broader set of checks covering both database and server-level vulnerabilities, which impact the overall security of the database system.
  • Scan history – View a complete history of all scans run on a database, with an ability to drill down into the details of each historic scan result.

These new capabilities join the existing VA feature set, including the baseline capability that enables you to customize the assessment to your environment. Once you define a security baseline based on your assessment results, then only deviations from your customized baseline are reported, making this a fully tailored experience for your environment.

Get started today – turn on ATP!

We encourage you to enable SQL Advanced Threat Protection and try out Vulnerability Assessment today, to start proactively improving your database security posture. Track and monitor your database security settings, so that you never again lose visibility and control of potential risks to the safety of your data.

Check out the SQL Advanced Threat Protection documentation to get started, and Getting Started with Vulnerability Assessment for more details on managing your vulnerability assessment.

Try it out, and let us know what you think!

New Azure Network Watcher integrations and Network Security Group Flow Logging updates


Azure Network Watcher provides you the ability to monitor, diagnose, and gain insights into your network in Azure.

Among its suite of capabilities, Network Watcher offers the ability to log network traffic through Network Security Group (NSG) Flow Logging. When NSG Flow Logging is enabled, you gain access to Network flow-level data that has endless applications in security, compliance, and traffic monitoring use cases. Deeper analysis of this NSG flow data is available in Network Watcher using Traffic Analytics, which is currently in preview.

Since Azure Network Watcher’s inception, we have continuously partnered with leaders in the SIEM and Log Management industry to provide a rich ecosystem of tools that seamlessly integrate with and understand your network in Azure. I would like to highlight two of the most recent partners, offering customers additional choice and value through integration with Azure. On top of our growing ecosystem, we have now enabled the option to send NSG Flow Log data across subscriptions, which greatly enhances log management in larger environments.

McAfee Cloud Workload Security integration

Recently, McAfee announced the general availability of the Cloud Workload Security (CWS) Platform in Azure including integration with Network Watcher. CWS automates the discovery and defense of elastic workloads and containers, eliminating blind spots, delivering advanced threat defense, and simplifying cloud management. McAfee CWS now leverages Network Watcher NSG Flow Logging data to provide comprehensive insights to your network traffic and management of security group configuration across your Azure subscriptions.


More about this integration and McAfee CWS can be found here.

Integration with RedLock

On April 17th, RedLock announced support for Network Watcher through their Cloud 360 Platform. The Cloud 360 Platform provides visibility across a customer’s entire environment and leverages Azure APIs to help ensure that their enterprise is compliant and secure.


More about the RedLock and the integration can be found here.

NSG Flow Logging Data Across Subscriptions

Previously, NSG Flow Logs could only be sent to storage accounts located in the same region and subscription as the NSG. We heard from customers running centralized monitoring teams managing multiple subscriptions that consolidation of logs into as few storage accounts as possible was one of the most desired features requested for the future roadmap of Network Watcher, so we made it happen! Now, you can configure NSG Flow Logs to be sent to a storage account located in a different subscription, provided you have the appropriate privileges and the storage account is located in the same region as the NSG. The NSG and the destination storage account must also share the same Azure Active Directory tenant.


More information about Network Watcher can be found here.

If you have feedback on the Network Watcher service or would like to partner with us, please reach out to us at AzureNetworkWatcher@microsoft.com

Azure the cloud for all – highlights from Microsoft BUILD 2018


Last week, the Microsoft Build conference brought developers lots of innovation and was action packed with in-depth sessions. During the event, my discussions in the halls ranged from containers to dev tools, IoT to Azure Cosmos DB, and of course, AI. The pace of innovation available to developers is amazing. And, in case there was simply too much for you to digest, I wanted to pull together some key highlights and top sessions to watch, starting with a great video playlist with highlights from the keynotes.

Empowering developers through the best tools

Build is for devs, and all innovation in our industry starts with code! So, let’s start with dev tools. Day one of Build marked the introduction of the .NET Core 2.1 release candidate. .NET Core 2.1 improves on previous releases with performance gains and many new features. Check out all the details in the release blog and this great session from Build showing what you can use today:

  • .NET Overview & Roadmap: In this session, Scott Hanselman and Scott Hunter talked about all things .NET, including new .NET Core 2.1 features made available at Build.

Scott Hanselman and Scott Hunter sharing new .NET Core 2.1.

With AI being top of mind in the tech industry, we were excited to share our work on Visual Studio IntelliCode, which helps enable developers by providing intelligent suggestions improving code quality and productivity. We also announced the public preview of Live Share, which lets developers collaborate on their code and problem solve across Visual Studio and Visual Studio Code on Windows, Mac and Linux. Jason Warner, SVP Technology at GitHub, also joined Scott Guthrie on stage to talk about Microsoft’s commitment to open source and some of the work our teams have been doing.  This included the announcement that if you’re building mobile apps on GitHub you can now use Visual Studio App Center to set up and automate your continuous integration process in just a few clicks, check it out.

Some of the sessions not to miss are:

Containers + Serverless

Applications that span the cloud and edge will naturally take advantage of containers and a serverless, event-driven approach for scale. With so much industry focus on container orchestrators, especially Kubernetes, there was a lot of focus at Build on Azure Kubernetes Service (AKS), which, as Gabe Monroy, PM Lead for Containers, points out in his blog post, Kubernetes on Azure: Industry’s best end-to-end Kubernetes experience, has grown more than 10x over the last year. There were a lot of new advances for AKS; most important for me, we shared that the service will be made generally available in the coming weeks! Here are a couple of great sessions to check out:

  • Why Kubernetes on Azure: Build 2018: This session shows how to simplify the deployment, management, and operations of Kubernetes using AKS, as well as a wide variety of tools in the Kubernetes ecosystem for CI/CD, observability, storage and networking.
  • Iteratively Develop Microservices with Speed on Kubernetes: This session showed how to rapidly iterate and debug code directly in Kubernetes using familiar dev tools like Visual Studio Code and Visual Studio with the programming language of your choice.

If you’re looking to modernize an existing application using containers you’ll definitely want to check out this session from Corey Sanders, Corporate Vice President of Azure Compute:

  • App Modernization with Microsoft Azure: Learn how Azure helps modernize applications faster with containers, how to use serverless to add additional functionality, and how to incorporate DevOps throughout your apps lifecycle.

“Hey, you, get on my Cloud.” -Corey Sanders

Internet of Things

A significant amount of our recently announced $5 billion investment in the Internet of Things (IoT) is in new innovation, so there was a lot of new IoT tech to show at Build. Some of the top announcements were that we are open-sourcing the Azure IoT Edge runtime to give customers more transparency and control over their code, and a partnership with DJI, the world’s largest drone company, to bring the Edge to more devices. Sam George, Partner Director, Azure IoT, showed off some of these devices in an awesome demo in Satya’s Vision Keynote with a drone flying on stage!

Sam George about to take flight at the Build keynote IoT demo.

It’s also worth reading his recap blog post, Microsoft Azure IoT Edge – Extending cloud intelligence to edge devices.

If you want to learn more about what you can do today to build your own IoT solution, check out Azure IoT School to get started quickly with solution accelerators for common IoT scenarios, such as remote monitoring, predictive maintenance, and connected factory. From Build, here are some great IoT sessions:

Data + AI

The confluence of cloud, data, and AI is driving unprecedented change. The ability to utilize data and turn it into breakthrough insights is foundational to innovation today. Data is also vital to every app and experience we build today. And, modern apps require databases with greater scale, performance, and flexibility – enter Azure CosmosDB. Azure CosmosDB is a globally distributed, multi-model database service. CosmosDB was celebrating its first birthday at Build 2018 and it’s been a busy year for the team! We made lots of CosmosDB-related announcements at Build, including a preview of multi-master write capability. This capability unlocks new use-cases where multiple writes can happen across the globe and each synchronizes simultaneously across locations. Read through the blog post from Rimma Nehme, Group Program Manager, Azure CosmosDB, for the full run-down.

From an AI perspective, modern apps also require new machine learning and AI capabilities, with the ability to see, hear, predict, and reason over data. Whether you’re looking for a super-efficient, pre-built AI approach or are a data scientist looking to build custom AI models, only Azure provides the full range of AI services.

Here are some of the top sessions on data and AI worth checking out:

  • Technical overview of Azure Cosmos DB: In this technical overview of Azure CosmosDB you'll learn how easy it is to get started building planet-scale applications with Azure CosmosDB. We’ll then take a closer look at important design aspects around global distribution, consistency, and server-side partitioning. Learn how to model your data to fit your app’s needs using tools and APIs you love.
  • How to migrate your existing MongoDB and Cassandra Apps to Azure CosmosDB: Bring your MongoDB and Cassandra applications to Azure Cosmos DB and benefit from turnkey global distribution and guaranteed low latency at cloud scale. Learn how easy it is to migrate your existing NoSQL applications to Azure CosmosDB by using the MongoDB API and Cassandra API.
  • Leveraging Azure Databricks to minimize time to insight by combining Batch and Stream processing pipelines: See how you can create simple pipelines that allow you to merge real-time data with massive batch datasets. With data-driven, automated decision-making processes infused into intelligent applications, this session will enable you to develop intelligence integration directly against your in-flight data.
  • Demystifying Machine and Deep Learning for Developers: To build the next set of personalized and engaging applications, more and more developers are adding machine learning to their applications. In this session, you'll learn the basics behind machine learning and deep learning, and you'll walk out with all the things you need to build an image classifier for your application.

There is so much more to share, I could keep going. I haven’t even touched on new Azure Stack features to help developers build intelligent hybrid applications, and there’s so much more available for developers to see and digest. Inside Azure Datacenter Architecture with Mark Russinovich is on track to be one of the most viewed of all the Build sessions. Mark is our Azure CTO and in this session he takes you on a tour of Azure’s datacenter architecture and innovations, covering everything from datacenter designs to how we are using FPGAs to accelerate networking and machine learning. Of course, you can check out and watch all of the sessions at Microsoft Build Live.

Thank you to everyone who travelled to visit us at home in Washington. It was awesome to host you all! See you soon and don’t forget to register for Microsoft Ignite, our biggest tech conference of the year, on September 24–28, 2018 in Orlando. You’ll get five packed days of training, product deep dives, hands-on experiences and networking. Hope to see you there!

Microsoft’s Approach to AI


Although AI has been around for decades it is only recently that companies and organizations are starting to adopt it at scale.  In my previous post I wrote about the new generation of technology building blocks that will shape the future of digital experiences with a specific focus on Artificial Intelligence, and what is driving the rapid expansion of this capability. In this post I will build on that conversation to discuss how Microsoft currently approaches AI from a business perspective.  Before we go into the details, be sure to revisit the first post in this series for context on why AI is taking off now.

At Microsoft, there are teams working on AI projects across the company, and this work generally falls into three core categories of platform and product investment.

AI Platform & Services – We are focused on building a new set of AI services and tools to make AI accessible to every organization. At Microsoft we are platform builders by trade; that is, we build the infrastructure that others use to build their products and services on. In the case of Artificial Intelligence, we are creating the infrastructure, services, and tools that allow developers and data scientists to infuse AI into their applications and services, as well as build new and unique solutions that are AI-based.

Infusing AI – The second area of focus is looking at how we can make our core products better by infusing them with AI.  A generation ago, industry embraced the Internet, weaving it into virtually every product and service. The same is now happening with AI.  In many cases you might not even know it’s there – but it’s helping individuals and organizations connect and make more informed decisions.

Business Solutions – The third area is looking at how we start with deep AI capabilities to build a new generation of AI-based business solutions.  We are looking at this first in terms of how we can use AI to help make people and organizations more productive, starting with some of our own processes that are used to run Microsoft.

These three areas represent an exciting period for bringing AI to life in a very pragmatic way.  These efforts build on top of, and will continue to build on, the core set of research that is going on in the field of Artificial Intelligence by our Research group and others in the industry.

AI Platform & Services

It’s invigorating to watch the birth of a new platform or platform layer, which is what we are witnessing in AI right now. A set of tools and services that started out as pure research (see last week’s post) is being normalized for use by the broad base of developers and data scientists. This is an important step in the democratization of AI, as it is still difficult today for organizations to find and hire people who are versed in the science of AI. Previous AI systems were primarily sophisticated rules-based engines, but with the growth and success of deep neural networks, and the ability to rapidly test and deploy AI algorithms, the baseline background for AI practitioners has changed. As a result, the state of the art in AI is advancing quickly, making it harder to find people who can be subject matter experts across the growing set of AI capabilities. A search for AI research papers released in 2018 already turns up over 800 published papers. In that light, a platform layer that brings AI to traditional development teams in a consistent fashion is critical for getting them started with AI for their use cases.

The Microsoft AI platform consists of infrastructure to provide AI at scale, services that provide core AI capabilities through a common set of APIs, and tools for practitioners who want to be hands-on in the creation of their own custom AI models. Customers can now test and train at scale on CPUs and GPUs, and soon will be able to tap the power of FPGAs – software-programmable chips such as those powering Microsoft’s Project Brainwave and Jabil’s factory of the future, as featured at our Build developer conference just last week. FPGAs already offer a 132x speed improvement over CPUs and, being software programmable, they offer a balance of performance and future-proofing. As new innovations happen in AI methods, they can be deployed to FPGAs via software more quickly than by building custom silicon such as the ASICs powering TPUs, so time to market for these innovations can be significantly reduced. FPGAs now power Bing search, where we’re seeing 5x lower hardware latency than TPUs for real-time AI, and we are working to deploy FPGAs for customers as part of the Intelligent Cloud and intelligent edge devices.

  1. Pre-Trained AI Services – For most developers and organizations starting their AI journey, the core of this new platform is a set of Azure-powered Cognitive Services that help developers bring human cognition (the ability to speak, hear, translate, see, reason, and so on) into their applications. Turning research breakthroughs into a consistent set of developer services takes time, but we are seeing the fruit of that work in a growing set of cohesive services with consistent APIs, sample code, documentation, language support, and more. These services give developers pre-built AI models, exposed as services for vision, speech, language, conversation, search, and knowledge. Many developers use these services to create great user experiences and solve business problems rather than spending the time to train their own deep neural networks. The core services work for a wide variety of developer needs and are also customizable for the additional specifics an experience might require. Azure-powered services such as Custom Vision make it possible to upload labeled images, train with a few dozen of them, evaluate and refine the classifier for a given task or process, and then deploy it to devices running Windows, iOS, or Android. The same can be done with custom language domains, say for lawyers or doctors. Cognitive Services provide the best of general-purpose capability and customizability with speed to solution (a minimal REST sketch of calling a pre-built vision service follows this list).
  2. Conversational AI – Hand in hand with Cognitive Services, we are seeing a transition to more conversational user interaction as people start to infuse AI into their solutions. The goal of the Azure Bot Service is to let developers add naturally fluid conversations as part of an overall user experience, across canvases and state. With the Azure Bot Service and LUIS for language understanding, you can create, deploy, and manage a bot that interacts with your users on the channels they already use: your apps, Facebook Messenger, Slack, Skype, Microsoft Teams, websites, Cortana, and more. To create a more natural conversational experience, the Azure Bot Service can be combined with Cognitive Services such as translation or Custom Vision to expand your bot’s capabilities over time (a LUIS prediction sketch follows this list).
  3. Tools and Open Formats – While these services and tools can be accessed independently, they are also being integrated into core tools such as the Azure Portal and Visual Studio. Developers and data scientists who want to work a layer deeper on the fundamental science can do so through support for key frameworks and tools within our platform. Since AI is still rapidly evolving, some developers will be interested in researching new techniques or doing something very specific, like creating a generative adversarial network (GAN) or a very deep neural network. In those cases, people will want to use tools like Cognitive Toolkit, TensorFlow, Caffe, MXNet, Chainer, and more to train and deploy their own custom models. Azure is committed to supporting a broad range of these AI-specific toolchains. To help in this space, Microsoft co-founded the Open Neural Network Exchange (ONNX) with Facebook, an open format for exchanging deep learning models between different toolchains. Since announcing ONNX, other tech companies have joined, including Amazon, AMD, ARM, Baidu, Huawei, IBM, Intel, Nvidia, Qualcomm, and many more (an export sketch follows this list).
  4. Extending to the Intelligent Edge – There is also a lot of great learning around the importance of edge support for AI as we delve further into real-world solutions and use cases. In this area Microsoft is focused on enabling AI at the intelligent edge through a variety of tools and offerings, and there were several announcements around Build highlighting the effort to create an intelligent edge that supports real-world AI scenarios.
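
As a concrete illustration of the pre-trained services above, here is a minimal sketch of calling the Computer Vision analyze endpoint over REST from Python. The region, subscription key, image URL, and API version are placeholders and assumptions; check your own resource’s endpoint before running it.

```python
import requests

ENDPOINT = "https://westus.api.cognitive.microsoft.com/vision/v2.0/analyze"  # assumed region/version
KEY = "<your-cognitive-services-key>"                                        # placeholder

response = requests.post(
    ENDPOINT,
    params={"visualFeatures": "Description,Tags"},
    headers={"Ocp-Apim-Subscription-Key": KEY,
             "Content-Type": "application/json"},
    json={"url": "https://example.com/sample.jpg"},  # placeholder image URL
)
response.raise_for_status()
analysis = response.json()

# Assumes the service returned at least one caption and some tags
print(analysis["description"]["captions"][0]["text"])
print([tag["name"] for tag in analysis["tags"]])
```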
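
Similarly, a bot built with the Azure Bot Service typically hands each utterance to LUIS for intent detection. The sketch below queries the classic LUIS v2.0 prediction endpoint directly; the region, app ID, and key are placeholders, and the endpoint shape is an assumption that may differ for newer LUIS resources.

```python
import requests

REGION = "westus"                     # placeholder
APP_ID = "<your-luis-app-id>"         # placeholder
KEY = "<your-luis-endpoint-key>"      # placeholder

url = f"https://{REGION}.api.cognitive.microsoft.com/luis/v2.0/apps/{APP_ID}"
response = requests.get(url, params={"subscription-key": KEY,
                                     "q": "book me a flight to Seattle tomorrow"})
response.raise_for_status()
prediction = response.json()

# Top intent and any recognized entities for the utterance
print(prediction["topScoringIntent"]["intent"], prediction.get("entities", []))
```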
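
Finally, to show what the ONNX interchange format looks like in practice, here is a minimal export sketch. It uses PyTorch as the source toolchain purely as an example (PyTorch is not one of the frameworks named above), and the tiny model is a stand-in for whatever network you have trained.

```python
import torch
import torch.nn as nn

# A tiny stand-in model; in practice this would be your trained network
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 3))
model.eval()

dummy_input = torch.randn(1, 4)  # example input that fixes the graph's input shape

torch.onnx.export(
    model, dummy_input, "model.onnx",
    input_names=["features"], output_names=["scores"],
)
# model.onnx can now be loaded by other ONNX-aware runtimes and toolchains
# for inference or further conversion.
```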

Infusing AI into our Products and Services

Our second area of focus as a company is how we can make our products better by infusing them with AI. The goal, of course, is to create a better customer experience; if it’s done well, you don’t notice that the products are infused with AI, because they simply work better. This work can happen either behind the scenes or directly in the experience. Behind the scenes, we use AI to run networks like Skype and Azure more effectively and to protect people’s data and content in the cloud or on devices. Within our products, we also strive to create unique experiences that would not be possible without AI. For example, Resume Assistant in Word helps the author create a resume that highlights the top skills that are competitive with other people in the same field. PowerPoint Designer enables anyone creating a slide with images to make it visually interesting regardless of their artistic ability. Excel Insights helps visualize patterns in your data, and the new Acronyms feature in Word helps you master company lingo with machine learning. Using natural language processing and language understanding to allow real-time conversations between people speaking two different languages over Skype is bridging communities around the world. The same tools can be used to enable closed captioning of a speech in PowerPoint, and even to do the captioning in multiple languages at the same time. In Windows Hello, the ability to securely log in to your PC just by looking at it is based on AI. There is a growing number of use cases, and if AI is properly infused into a product it should just work better.

Solutions

The third area we focus on is how companies can use AI to improve their overall business. Whether it’s how products are built, how customers and partners are helped, or how employees are cared for, AI can support a broad set of standard use cases. In this area Microsoft has been using AI internally on a variety of processes ranging from forecasting to marketing to customer care and more. As we have spent time in this area and worked with a broad range of customers on their specific solutions, we’ve started to see a set of patterns emerge that reflect much of the focus of AI within organizations.

I find these patterns of virtual agents, ambient intelligence, AI-assisted professionals, and autonomous systems to be helpful in having conversations with customers and partners about the potential opportunities to get started with AI for their business. In my next post I will go deeper into these four patterns.

When it comes to AI, Microsoft is invested in creating a platform that allows developers and companies to infuse AI into their services and products; we are also using AI to create better products for our customers; and finally, we are using AI to help run our own company better and to help other companies do the same. This is Microsoft’s approach to making AI available to others. Join me next week for the third and final part of this series, where I will delve a little deeper into how we are seeing customers adopt AI. If you missed the first part of the series, you can read it here.

 

Cheers,

Guggs
