
.NET Core May 2018 Update

Today, we are releasing the .NET Core May 2018 Update. This update includes .NET Core 2.1.200 SDK and ASP.NET Core 2.0.8.

Security

Microsoft is releasing this security advisory to provide information about a vulnerability in .NET Core and .NET native version 2.0. This advisory also provides guidance on what developers can do to update their applications to remove this vulnerability.

Microsoft is aware of a denial of service vulnerability that exists when .NET Framework and .NET Core improperly process XML documents. An attacker who successfully exploited this vulnerability could cause a denial of service against a .NET Framework, .NET Core, or .NET native application.

The update addresses the vulnerability by correcting how .NET Framework, .NET Core, and .NET native applications handle XML document processing.

If your application is an ASP.NET Core application, you are also advised to update to ASP.NET Core 2.0.8.

CVE-2018-0765: .NET Core Denial Of Service Vulnerability

Getting the Update

The .NET Core May 2018 Update is available from the .NET Core download page and the Microsoft.AspNetCore.All package on NuGet.

You can always download the latest version of .NET Core at .NET Downloads.

Docker Images

.NET Docker images have been updated for today’s release. The following repos have been updated.

Note: Look at the “Tags” view in each repository to see the updated Docker image tags.

Note: You must re-pull base images in order to get updates. The Docker client does not pull updates automatically.
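For example, to refresh a locally cached image you might run commands along these lines (the repository and tag names here are illustrative; check the “Tags” view in each repo for the exact updated tags):

docker pull microsoft/dotnet:2.1-sdk
docker pull microsoft/aspnetcore:2.0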

Previous .NET Core Updates

The last few .NET Core updates follow:


Early preview of Visual Studio support for Windows 10 on ARM development

Today, we are pleased to announce that Visual Studio 15.8 Preview 1 contains an early preview of the SDK and tools to allow you to create your own 64-bit ARM (ARM64) apps. These tools answer the requests of many eager developers, and the development made possible with these tools represents the next step in the evolution of the Always Connected PC running Windows 10 on ARM.

Earlier this year, our partners released the first Always Connected PCs powered by Qualcomm Snapdragon 835 processors. These Always Connected PCs are thin, light, fast, and designed with instant-on 4G LTE connectivity and unprecedented battery life – now measured in days and weeks, not hours. Thanks to an x86 emulation layer, these Always Connected PCs also allow customers to tap into the wide ecosystem and legacy of Windows apps.

Developers interested in targeting this new ARM-based platform can use these early preview tools to build apps that run natively on ARM processors rather than relying on the emulation layer. While the algorithms that make emulation possible are engineered to optimize performance, running your app natively allows your customers to get the most performance and capability from your app on this new category of devices.

Since this is an early preview, there isn’t official support yet for the ARM64 apps built with these tools. You won’t be able to submit ARM64 packages to the Microsoft Store, though you can post preview versions of ARM64 Win32 apps to your website. Stay tuned to this blog for more information as more support becomes available. In the meantime, you can use these preview tools to get a head start on developing your ARM64 apps and provide feedback before the tools are finalized.

In this post we’ll look at how to set up your environment to build ARM64 apps, whether you’re building Universal Windows Platform (UWP) apps or C++ Win32 apps.

C++ UWP App Instructions

To build ARM64 UWP apps based on C++, start by setting up your development environment:

1) Download Visual Studio’s latest preview at https://www.visualstudio.com/vs/
2) Choose the “Universal Windows Platform development” workload
3) Select the “C++ Universal Windows Platform tools” optional component
4) In Individual Components, select “C++ Universal Windows Platform tools for ARM64”

The Visual Studio Installer should look like this when everything that is required is selected:

The Visual Studio Installer when everything that is required is selected.

After installing, you can get started on your app:

5) Open your C++ Project in Visual Studio, or create a new one
6) Right click on your Solution and select Properties, then navigate to Configuration Properties and select “Configuration Manager”
7) Under “Active solution platform:” select “<New…>” and call it ARM64. Copy settings from “ARM” and check the box to create new project platforms
8) If individual projects within the solution do not allow adding ARM64 as a platform, it may be because of dependencies. To work around this, you can modify those project files directly, copying the ARM configuration and changing the platform to create an ARM64 configuration (see the sketch after this list).
9) Save everything
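As a rough sketch, the copied configuration in a .vcxproj might look like the following, assuming your project already has a Debug|ARM configuration (the individual values, such as the toolset, are illustrative and should match your existing ARM entries):

<ItemGroup Label="ProjectConfigurations">
  <ProjectConfiguration Include="Debug|ARM64">
    <Configuration>Debug</Configuration>
    <Platform>ARM64</Platform>
  </ProjectConfiguration>
</ItemGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|ARM64'" Label="Configuration">
  <!-- Copied from the Debug|ARM property group, with only the platform changed -->
  <ConfigurationType>Application</ConfigurationType>
  <UseDebugLibraries>true</UseDebugLibraries>
  <PlatformToolset>v141</PlatformToolset>
</PropertyGroup>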

ARM64 will now be available as a configuration for the project to build. Note that only Debug builds are supported for ARM64 at this time. You can create a package for sideloading or use remote debugging (see instructions) to run the app on a Windows 10 on ARM PC.

.NET Native UWP App Instructions

To build ARM64 UWP apps that rely on .NET Native, start by setting up your development environment.

1) Download Visual Studio’s latest preview at https://www.visualstudio.com/vs/preview/
2) Choose the “Universal Windows Platform development” workload
3) Open your project or create a new one. Note that the Target Platform Minimum Version must be set to at least “Windows 10 Fall Creators Update (Build 16299)”
4) Open the project file in your favorite editor and add a property group targeting ARM64. You can copy an existing property group, such as Debug|ARM, and modify it to support ARM64 with the changes highlighted below:

Sample code: copy an existing property group, such as Debug|ARM, and modify it to support ARM64.

The UseDotNetNativeToolchain property is required to enable the .NET Native toolchain.
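As a sketch, the copied property group might look like the following in a UWP .csproj, assuming an existing Debug|ARM group (the individual property values are illustrative and should mirror your ARM group):

<PropertyGroup Condition="'$(Configuration)|$(Platform)' == 'Debug|ARM64'">
  <PlatformTarget>ARM64</PlatformTarget>
  <DebugSymbols>true</DebugSymbols>
  <OutputPath>bin\ARM64\Debug\</OutputPath>
  <DefineConstants>DEBUG;TRACE;NETFX_CORE;WINDOWS_UWP</DefineConstants>
  <!-- Required to enable the .NET Native toolchain for this configuration -->
  <UseDotNetNativeToolchain>true</UseDotNetNativeToolchain>
</PropertyGroup>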

5) Save the project file and reload it in Visual Studio
6) Right click on your Solution and select Properties, then navigate to Configuration Properties and select “Configuration Manager”
7) Under “Active solution platform:” select “<New…>” and call it ARM64. Copy settings from “ARM” and check the box to create new project platforms
8) Update to latest ARM64 version of the tools:

a. Right-click your project and select ‘Manage NuGet Packages…’
b. Ensure “Include Prerelease” is selected
c. Select the Microsoft.NETCore.UniversalWindowsPlatform package
d. Update to the following version: 6.2.0-Preview1-26502-02

ARM64 will now be available as a configuration for the project to build. Note that only Debug builds of ARM64 are supported at this time. You can create a package for sideloading or use remote debugging (see instructions) to run the app on a Windows 10 on ARM PC.

C++ Win32 App Instructions

Visual Studio 15.8 Preview 1 also includes an early level of support for rebuilding your C++ Win32 apps as ARM64 to run on Windows 10 on ARM.

1) Download Visual Studio’s latest preview at https://www.visualstudio.com/vs/preview/
2) Choose the “Desktop development with C++” workload
3) In Individual Components, select “Visual C++ compilers and libraries for ARM64”
4) Open your project in Visual Studio or create a new one
5) Right click on your Solution and select Properties, then navigate to Configuration Properties and select “Configuration Manager”
6) Under “Active solution platform:” select “<New…>” and call it ARM64. Copy settings from “ARM” and check the box to create new project platforms.
7) Save everything
8) Open the project file in your favorite editor. Under <PropertyGroup Label=”Globals”>, declare support for ARM64 by adding WindowsSDKDesktopARM64Support, as highlighted below:

Sample code depicting declaring support for ARM64 by adding WindowsSDKDesktopARM64Support.
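As a sketch, the Globals property group might end up looking like this (the other properties, shown as a comment, are whatever already exists in your project file):

<PropertyGroup Label="Globals">
  <!-- existing Globals properties (ProjectGuid, RootNamespace, WindowsTargetPlatformVersion, ...) -->
  <WindowsSDKDesktopARM64Support>true</WindowsSDKDesktopARM64Support>
</PropertyGroup>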

9) Save and reload the project

You will now be able to build the project as ARM64 and either remote debug on a Windows 10 on ARM PC or copy over the EXE files and run them directly.

You can also use the Desktop Bridge (see instructions) to wrap the built ARM64 binaries into a Windows app package that can be installed on Windows 10 on ARM PCs.

Conclusion

We’re excited to open up Windows 10 on ARM to developers looking to build great apps compiled natively for the platform.

Visual Studio 15.8 Preview 1 provides an early preview of the full support that will be coming later this year. You can expect more updates as we work to bring these tools to an official release and open the Store to accept submissions for ARM64 packages.

We hope you give these tools a try on your apps and would love to hear from you. If you hit any issues, have any questions, or have feedback to share, head to our Windows 10 on ARM development page at http://aka.ms/Win10onARM or leave comments below.

The post Early preview of Visual Studio support for Windows 10 on ARM development appeared first on Windows Developer Blog.

Visual Studio Code C/C++ extension May 2018 Update – IntelliSense configuration just got so much easier!

This morning we shipped the May 2018 update of the C/C++ extension for Visual Studio Code, the most significant update to this extension in its 2-year history! 😊 The team has been working extremely hard for the past month to bring many new features into this release. We are super excited about it and hope you will like it too!

In this update, we primarily focused on drastically reducing the amount of configuration you have to do to get a great IntelliSense experience, and we also added a few other features to boost your productivity. This update includes:

  • Recursive search of includePath
  • Auto-detection of system includes and defines for WSL, MinGW, and Cygwin
  • Global IntelliSense settings
  • Auto-detection of libraries installed by Vcpkg for IntelliSense
  • Auto-complete for #include
  • C/C++ code snippets
  • Support for WSL on Windows builds 17110 and higher

You can find the full list of changes in the release notes.

Recursive search of includePath

With this update, you can now specify the IntelliSense includePath to be searched recursively by appending “**” at the end of a path. This eliminates the need to list every individual path that headers live in, which is especially useful when headers are located in different sub-directories.

The following is an example of using the new syntax “**” for recursive search in the configuration file.
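A minimal sketch of what this can look like in c_cpp_properties.json (the folder paths are illustrative, and the configuration version number may differ in your file):

{
    "configurations": [
        {
            "name": "Win32",
            "includePath": [
                "${workspaceFolder}/**",
                "C:/mylibs/include/**"
            ],
            "intelliSenseMode": "msvc-x64"
        }
    ],
    "version": 4
}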

Simply append “**” to a path to opt in to recursive search, or remove “**” to opt out.

During recursive search, IntelliSense will automatically resolve headers when there’s no ambiguity. Use care when searching large folders recursively, especially when they contain ambiguously named headers. It is recommended to add explicit paths earlier in the include path lookup to help IntelliSense resolve headers to the desired location.

Also, this new syntax doesn’t apply to the “browse.path” setting, which performs recursive search by default when no wildcards are appended.

Auto-detection of system includes and defines for WSL, MinGW, and Cygwin

The March 2018 update added auto-detection of system includes and defines for Mac and Linux. This update extends the feature to the Windows platform. If you develop on Windows targeting a Windows subsystem such as WSL (Windows Subsystem for Linux), MinGW, or Cygwin, the extension now automatically searches for compilers installed in common locations on these systems and queries them for their default includes and defines to resolve system headers.

In the following example, the compiler is auto-detected on WSL at the location “/usr/bin/gcc”. System headers will be automatically resolved with no manual configuration needed, unless you would like to change any of the default settings.

On Windows, the compiler auto-detection logic first searches for MSVC, followed by WSL, MinGW, and then Cygwin.

Global IntelliSense settings

This release allows you to set any of the IntelliSense settings in your user or workspace settings.json file. This means that these settings automatically apply to any project (if set as a user setting) or to all projects in the same workspace (if set as a workspace setting). This comes in handy when multiple projects share a common set of includes, defines, compiler, etc.

For example, the following user setting will ensure that for all folders and workspaces opened on your system, “c:/users/me/mylibs/include” is searched for headers.
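A minimal sketch of that user setting in settings.json (the C_Cpp.default.* prefix is how the extension exposes these global IntelliSense defaults; the path is the one from the example above):

{
    "C_Cpp.default.includePath": [
        "c:/users/me/mylibs/include"
    ]
}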

Note that settings defined in c_cpp_properties.json will still be honored and override any user or workspace settings. To use the defaults specified in the user or workspace setting, include “${default}” in the includePath setting in c_cpp_properties.json files.

Vcpkg integration

Vcpkg is a C++ library manager for Linux, macOS, and Windows with over 350 open-source libraries already supported. If your apps use third-party libraries, vcpkg provides an easy way to acquire them (see the instructions on GitHub). Once libraries are installed through vcpkg, run the “vcpkg integrate install” command to make vcpkg visible to VS Code, and the extension will then automatically use the headers for IntelliSense.
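For example, assuming you want a library such as sqlite3 (any library name works here), the flow looks roughly like this from a command prompt in the vcpkg folder:

vcpkg install sqlite3
vcpkg integrate install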

Auto-complete for #include

The extension now provides auto-complete suggestions for #include statements, including system headers and user headers.

Here is an example:

C and C++ code snippets

We added a set of C/C++ snippets that make it quicker to type C or C++ constructs like class definitions or long if-then-elseif-else statements.

You can use the “editor.snippetSuggestions” setting to change where the snippets show up in the suggestion list.
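For example, to surface snippets at the top of the suggestion list, you could add the following to settings.json (other valid values include "bottom", "inline", and "none"):

{
    "editor.snippetSuggestions": "top"
}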

Support for WSL on Windows builds 17110 and higher

Last but not least, we updated the extension to work with the Windows 10 April 2018 Update, which introduced case-sensitive file naming in a Windows environment for the very first time.

Tell us what you think

Download the C/C++ extension for Visual Studio Code, try it out and let us know what you think. File issues and suggestions on GitHub. If you haven’t already provided us feedback, please take this quick survey to help shape this extension for your needs.

 

 

Announcing new Async Java SDK for Azure #CosmosDB

We’re excited to announce a new asynchronous Java SDK for Cosmos DB’s SQL API, open sourced on GitHub. This SDK leverages the popular RxJava library to add a new async API surface area for composing event-based programs with observable sequences. It also features an improved user experience and is lighter weight than our previous synchronous Java SDK (yielding a 2x performance improvement on the client side)!


You can add the library from Maven using:

<dependency>
    <groupId>com.microsoft.azure</groupId>
    <artifactId>azure-cosmosdb</artifactId>
    <version>1.0.1</version>
</dependency>

Connect to Cosmos DB

The new SDK uses a convenient builder pattern to specify connectivity options:

asyncClient = new AsyncDocumentClient.Builder()
                         .withServiceEndpoint(HOST)
                         .withMasterKey(MASTER_KEY)
                         .withConnectionPolicy(ConnectionPolicy.GetDefault())
                         .withConsistencyLevel(ConsistencyLevel.Eventual)
                         .build();

Insert an item

To execute and coordinate Cosmos DB data operations asynchronously and get their results, you use observables:

Document doc = 
      new Document(String.format("{ 'id': 'doc%d', 'counter': '%d'}", 1, 1));

Observable<ResourceResponse<Document>> createDocumentObservable =
      asyncClient.createDocument(collectionLink, doc, null, false);

createDocumentObservable
      .single()           // we know there will be one response
      .subscribe(documentResourceResponse -> {
            System.out.println(documentResourceResponse.getRequestCharge());
      });

Note that the createDocument request will be issued only once .subscribe is called on the corresponding observable result.

Query

In Cosmos DB, queries can return multiple pages of data. To read all the pages efficiently, simply subscribe:

Observable<FeedResponse<Document>> documentQueryObservable = asyncClient
                .queryDocuments(getCollectionLink(), "SELECT * FROM root", options);

// forEach(.) is an alias for subscribe(.)
documentQueryObservable.forEach(page -> {
     for (Document d : page.getResults()) {    
            System.out.println(d.toJson());
     }
});

We just barely scratched the surface. Learn more about Azure Cosmos DB async SDK for Java.

If you are using Azure Cosmos DB, please feel free to reach out to us at AskCosmosDB@microsoft.com any time. If you are not yet using Azure Cosmos DB, you can try Azure Cosmos DB for free today; no sign-up or credit card is required. If you need any help or have questions or feedback, please reach out to us any time. For the latest Azure Cosmos DB news and features, stay up to date by following us on Twitter #CosmosDB, @AzureCosmosDB. We look forward to seeing what you will build with Azure Cosmos DB!

- Your friends at Azure Cosmos DB

Azure Networking May 2018 announcements

This week is Microsoft Build 2018, our premier event of the year for our gifted developer audience. With a strong appetite for technology and a desire to learn and immerse themselves in new ways to build and create cloud applications, thousands of software professionals and coders are coming to Seattle this week. We’d like to take this opportunity to let you know about new networking services we have launched, as well as enhancements we have made.

As businesses of all sizes increasingly move their mission-critical workloads to Azure, new opportunities arise to simplify the overall network experience, from security, management, and monitoring to key areas such as reliability and performance. By launching new services such as DDoS Protection, VNet access to Azure services, zone-aware Application Gateways, a new global-scale CDN offering, and a new, super-fast Load Balancer, we continue to enhance the networking capabilities of Azure and, more importantly, develop new services and technologies to help customers run, manage, and achieve more when running their most demanding workloads.

Azure DDoS Protection

Last month we announced the general availability (GA) of the Azure DDoS Protection Standard service that provides enhanced DDoS mitigation capabilities for your application and resources deployed in virtual networks (VNets). DDoS Protection can be simply enabled with no application or resource changes so that your services benefit from the same DDoS technologies we use to protect Microsoft. With dedicated monitoring and machine learning that automatically configures DDoS protection policies continuously tuned to your application’s traffic profiles, your services are fully protected.

Azure DDoS

Please watch our getting started video. More details are in the Azure DDoS Protection service documentation.

VNet Service Endpoints

VNet Service Endpoints extend your VNet private address space to Azure services. This allows you to limit access to business-critical Azure resources to only your VNets, fully removing Internet access. All traffic through a service endpoint stays in Azure.

We announced GA of VNet service endpoints for Azure Storage and Azure SQL Database earlier this year. We continue to expand the services accessible via service endpoints and now announce GA for "Azure Cosmos DB service endpoints". Azure Cosmos DB is the first service to allow cross region access control support where customers can restrict access to globally distributed Azure Cosmos DB accounts from subnets located in multiple regions. For more information, please see VNet Service Endpoints documentation.

VNet Service Endpoints

VNet Service Endpoints restricts Azure services to be accessed only from a VNet

Azure DNS enhancements

Azure DNS Private Zones is now in public preview, providing secure and reliable name resolution for your VNets without needing custom DNS servers. You can bring DNS zones to your VNet. You also have the flexibility to use custom domain names, such as your company’s domain name. Private zones provide name resolution both within a VNet and across VNets. You can have private zones span across VNets in the same region as well as across regions and subscriptions.

You can configure zones with a split-horizon view allowing for a private and a public DNS zone to share the same name. This is a common scenario when you want to validate your workloads in a local test environment, before rolling out in production. Since name resolution is confined to configured VNets, you can prevent DNS exfiltration.

DNS

When you designate a VNet as a Registration VNet, Azure dynamically registers DNS records in the private zone for all virtual machines (VMs) in this VNet. For more details on Private Zones, please see this overview as well as common scenarios.

Azure DNS now provides metrics via Azure Monitor to enable customers to configure and receive alerts. For details, please see our documentation.

Connection Monitoring

Network Watcher Connection Monitor, now generally available, helps you easily monitor and alert on connectivity and latency from a VM to another VM, FQDN, URI, or IPv4 address with per minute granularity. This reduces the time to detect connectivity problems. You get insights into whether a connectivity issue is due to a platform or a user configuration problem to quickly pinpoint and rectify the problem.

Connection Monitor

To learn more please visit the Network Watcher Connection Monitor documentation.

Traffic View: Improving performance

In March we announced the general availability of Traffic View, which provides information such as the geographic location of your user base, the latency experienced from these locations, and traffic volume. The visualizations provided through the Azure portal or analytics tools such as Power BI allow you to optimize the placement of your workloads and learn about your users’ network experience. For more details, please see the Traffic View documentation.

traffic view

Global VNet Peering

Global VNet Peering, which seamlessly connects your VNets across Azure regions, is now generally available. You can peer your VNets from across the world. Global VNet peering can be configured in under a minute. Once peered, the VNets appear as one global VNet from a connectivity perspective. Resources within the peered VNets can communicate with each other directly via Microsoft’s global network. Global VNet Peering enables data replication across your VNets so that you can copy data from one location to another for better disaster recovery. To learn more, please visit the VNet Peering documentation.

Azure VNet

Application Security Groups

Application Security Groups (ASG) are now generally available. ASGs enable you to centralize policy configuration and simplify your security management. With ASGs you can define fine-grained network security policies based on workloads, applications, or environments using monikers assigned to virtual machines. This enables you to implement a zero-trust model, limiting access to the application flows that are explicitly permitted. To learn more please see the Application Security Group documentation.

Application security groups

Azure Application Gateway and WAF enhancements

Application Gateway and Web Application Firewall (WAF) provide an Application Delivery Controller as a service. HTTP/2 support and enhanced metrics are now available. HTTP/2-aware clients benefit from connection multiplexing and related security and performance enhancements. Communication from Application Gateway to the backend remains over HTTP 1.1.

Enhanced metrics include performance counters such as total connections, total requests, failed requests, summarized response status code distribution, healthy/unhealthy host count, and throughput. These enhancements enable customers to build dashboards and set alerting to better monitor application workloads.

We are also announcing the managed preview for zone redundant Application Gateways and Static VIP. Zone redundant Application Gateways allow you to choose the availability zone or zones where Application Gateway instances are deployed. Static VIP is guaranteed not to change even if the Application Gateway is shut down or restarted. This managed preview also includes infrastructure improvements reducing provisioning time, faster updates, and better SSL connections per second per core. The preview is available in US East 2 with more regions coming soon. Please refer to our preview announcement for updates. Customers can sign up for this preview by following the steps in the preview documentation.

Azure CDN by Microsoft

During Build, we are announcing the public preview of Microsoft as a provider of Azure CDN. You can now use and deliver content from Microsoft’s own global CDN network for all your web-facing content. Running at the edge of Microsoft's global network, this new native option is being added alongside existing provider options from Verizon and Akamai and gives you access to the same CDN platform used by Bing and Office 365.

CDN

Microsoft's global network

Azure CDN from Microsoft provides access to 54 global edge POPs in 33 countries and 16 regional cache POPs at network hubs across our fast and reliable anycast network. This translates to an average RTT of 50 ms from users to a CDN POP in more than 60 countries.

Standard Load Balancer

Azure Standard Load Balancer is now generally available with increased scale, access to all VMs in the VNet, availability zone support, and private IP load balancing within the VNet. Standard Load Balancer can distribute application traffic to up to 1,000 VMs in the backend pool, 10x more than the Basic Load Balancer.

Standard Load Balancer

Any VM in the VNet can now be configured to join the backend pool. Customers can combine multiple scale sets, availability sets, and individual VMs in a single backend pool. Multiple frontends can be used simultaneously for outbound flows, increasing the possible number of concurrent outbound connections. We also provide controls over which frontends are chosen when multiple frontends are present. For more information please see the Load Balancer and Outbound Connections documentation.

Standard Load Balancer is designed for use with Availability Zones. Customers can enable zone-redundant frontends for public and internal Load Balancers with a single IP address or align their frontends with IP addresses in specific zones. Cross-zone load balancing delivers traffic to VMs anywhere within the region. The zone-redundant frontend is served simultaneously from all zones. More details can be found in the Standard Load Balancer and Availability Zones documentation.

A new type of load balancing rule for internal Standard Load Balancers provides per flow load balancing across all ports of a private IP address. You can create n-active configurations for network virtual appliances and other scenarios requiring the distribution of traffic across a large number of ports. More details can be found in the HA Ports documentation.

Standard Load Balancer has new telemetry and alerting fully integrated into Azure Monitor providing insights into traffic volumes, inbound connection attempts, outbound connection volume, active in-band data path measurements, and health probes. All of this can be consumed by Azure’s Operations Management Suite and others. For more details, please visit the Standard Load Balancer diagnostics and monitoring documentation.

Summary

As enterprises deploy more workloads into the cloud, the need for better manageability, monitoring, availability, and network security increases. We will continue to provide new, easy-to-use, and more comprehensive networking services. Stay tuned for more Azure networking services in the coming months! As always, we are very interested in your feedback.

Azure confidential computing

Last September, I had the privilege to publicly announce our Azure confidential computing efforts, where Microsoft Azure became the first cloud platform to enable new data security capabilities that protect customer data while in use. The Azure team, alongside Microsoft Research, Intel, Windows, and our Developer Tools group, has been working together to bring Trusted Execution Environments (TEEs) such as Intel SGX and Virtualization Based Security (VBS, previously known as Virtual Secure Mode) to the cloud. TEEs protect data being processed from access outside the TEE. We’re ready to share more details about our confidential cloud vision and the work we’ve done since the announcement.

Many companies are moving their mission-critical workloads and data to the cloud, and the security benefits that public clouds provide are in many cases accelerating that adoption. In its 2017 CloudView study, International Data Corporation (IDC) found that 'improving security' was one of the top drivers for companies to move to the cloud. However, security concerns still remain a commonly cited blocker for moving extremely sensitive IP and data scenarios to the cloud. The Cloud Security Alliance (CSA) recently published the latest version of its Treacherous 12 Threats to Cloud Computing report. Not surprisingly, data breaches ranked among the top cloud threats, and the report included three additional data security concerns: breaches caused by system vulnerabilities, malicious insiders, and shared technology vulnerabilities.

Azure Confidential Computing is aimed at protecting data while it’s processed in the cloud. It is the cornerstone of our 'Confidential Cloud' vision, which includes the following principles:

  • Top data breach threats are mitigated
  • Data is fully in the control of the customer, whether at rest, in transit, or in use, even though the infrastructure is not
  • Code running in the cloud is protected and verifiable by the customer
  • Data and code are opaque to the cloud platform, or put another way the cloud platform is outside of the trusted computing base

While today this technology may be applied to a subset of data processing scenarios, we expect as it matures that it will become the new norm for all data processing, both in the cloud and on the edge.

Delivering on this vision requires us to innovate across hardware, software, and services that support confidential computing:

  1. Hardware: Over the past several years, we have worked closely with silicon partners to add features that isolate applications during computation and to make those features accessible in multiple operating systems. Through our close partnership, we are making the latest Intel secure enclave innovations available to customers as soon as they are ready.

    We are excited to announce availability of the latest generation of Intel Xeon Processors with Intel SGX technology in our East US Azure region. You will have access to hardware-based features and functionality in the cloud before they are broadly available on-premises.

  2. Compute: We are extending our Azure compute platform to deploy and manage compute instances that are enabled with TEEs.

    We are introducing a new family of virtual machines (DC-series) that are enabled with the latest generation of Intel Xeon Processors with Intel SGX technology. With this release, you are able to run SGX-enabled applications in the cloud to protect the confidentiality and integrity of your code and data.

  3. Development: We are working closely with partners to drive an API for Windows and Linux that is consistent across TEEs, both hardware- and software-based, so that confidential application code is portable. In addition, we’re working on tooling and debugging support for developing and testing confidential applications.

    You are able to build C/C++ applications with the Intel SGX SDK and additional enclave APIs.

  4. Attestation: Verifying the identity of code running in TEEs is necessary to establish trust with that code and determine whether to release secrets to it. We are partnering with silicon vendors to design and host attestation services that make verification simple and highly available.
  5. Services/Use cases: Virtual machines provide the building block on top of which to enable new secure business scenarios and use cases. We are actively working across Microsoft to develop services and products that leverage confidential computing, including:
    1. Protecting data confidentiality and integrity through SQL Server Always Encrypted
    2. Creating a trusted distributed network among a set of untrusted participants with our Confidential Consortium Blockchain Framework for highly scalable and confidential blockchain networks
    3. Confidentially combining multiple data sources to support secure multi-party machine learning scenarios
  6. Research: Microsoft Research has been working closely with the Azure team and silicon partners to identify and prevent TEE vulnerabilities. For example, we are actively researching advanced techniques to harden TEE applications to prevent information leaks outside the TEE, whether direct or indirect. We will bring this research to market in the form of tooling and runtimes for your use in developing confidential code.

You can sign up today to request preview access to get started with our confidential compute platform, software, tooling, and developer community. Join us in building cloud applications and services that mitigate the 'treacherous threats of cloud computing'. We look forward to hearing your feedback and partnering with you to build the future of confidential cloud computing.

Bing Maps Location Recognition API and Autosuggest API Previews Released

We are excited to announce the private preview release of two new Bing Maps APIs – the Bing Maps Location Recognition API and the Bing Maps Autosuggest API. Both APIs will be generally available this summer.

Bing Maps Location Recognition Preview

By fully recognizing their users' locations, apps and services can build next-generation experiences and analytics. Developers and analysts can use the Bing Maps Location Recognition API to build rich experiences like store-specific notifications or coupons and visit tracking, and to identify new customer segments based on the types of locations users visit.

Given a location (latitude, longitude), the Bing Maps Location Recognition API returns a list of entities situated at the location. Note that search by query is not supported. The different components of the API response provide a comprehensive description of the location. The API response consists of:

  • Business entities situated at the location. A wide variety of entity types are supported (e.g. restaurants, hotels, parks, gym, shopping malls and more).
  • Natural entities at the location (e.g. beaches, islands, lakes and 9 other types).
  • Reverse geocoded address of the input location, with neighborhood and street intersection information where available.
  • Type of property (e.g. residential, commercial) situated at the location.

Supported business types/categories

  • Arts & entertainment
  • Automotive
  • Banking & finance
  • Beauty & spa
  • Business-to-business
  • Education
  • Food & drink
  • Government
  • Healthcare
  • News & media
  • Professional services
  • Real estate
  • Religion
  • Retail
  • Sports & recreation
  • Travel

Note: granular categories are available under these top-level categories (e.g. different cuisines are available under ‘restaurants’ which itself is under ‘food & drink’).

Supported natural entity types

  • Bay
  • Beach
  • Canal
  • Forest
  • Island
  • Lake
  • Mountain
  • Plateau
  • Reserve
  • River
  • Sea
  • Valley

Example scenario: Trip labelling in MileIQ

Leading mileage-tracking app MileIQ is using the Location Recognition API to make it easy for users to label the start and end points of their trips. The Location Recognition API provides the names of the locations a user visited during a trip, and with these suggestions MileIQ users can easily label their trips.

MileIQ Trip Labelling

Example scenario: Location sharing in Swiftkey

Popular keyboard app Swiftkey is using the Location Recognition API to power its new location sharing feature. For a user's given location, the Location Recognition API provides the reverse-geocoded address that users can share with their friends and family.

Swiftkey Location Sharing

Getting started

The following describes the query parameters the Location Recognition API supports and the fields included in a typical response (note that these are subject to change once we transition from private preview to general availability).

Query parameters

 

  • point – Required. The coordinates of the location that you want to reverse geocode, specified by a latitude and a longitude (for more information, see the definition of Point in Location and Area Types). Example: 47.64054, -122.12934
  • radius – Optional. Search radius in km. The search is performed within a circle of the specified radius, centered on the given location. The default search radius is 0.25 km; the maximum supported radius is 2 km.
  • distanceUnit – Optional. Unit for the radius parameter; use kilometer (km) or mile (mi). The default value is km.
  • top – Optional. Number of results to return. If more results are found within the specified search radius, the number of results is limited by the value of the top parameter. The maximum supported value is 20.
  • key – Required. A valid Bing Maps key.
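As a rough illustration only, a request might look something like the URL below; the exact endpoint path and parameter spelling are subject to change before general availability, and the key is a placeholder:

https://dev.virtualearth.net/REST/v1/LocationRecog/47.64054,-122.12934?radius=0.5&top=5&distanceUnit=km&key=YOUR_BING_MAPS_KEY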

 

Response

 

  • Businesses and points of interest – Up to 20 local businesses (e.g. restaurants, offices, transit) and points of interest situated at the input location.
  • Natural points of interest – Natural entities (e.g. beach, forest, lake) situated at the input location.
  • Address – Reverse-geocoded address of the input location, with neighborhood information where available.

Bing Maps Autosuggest Preview

Bing Maps Autosuggest is a service built by the Location Intelligence team at Bing Maps. Given a partial query, the service returns a list of suggested entities the user is most likely searching for. We return entities for roads, addresses, places, and businesses (businesses are currently constrained to en-US). This service is currently in preview and will be available this summer.

Below are details regarding the expected parameters and output. Note that none of this will be finalized until we transition from preview to full launch. First, here’s the list of parameters expected to be part of the API signature:

Query parameters

  • query (alias: q) – Required. The query prefix the user has typed, e.g. “1 Micro” (potentially the prefix for “1 Microsoft Way, Redmond, WA 98052”).
  • userLocation (alias: ul) – Required (unless either ucmv or urmv is specified). The location the user is searching from and the confidence radius of accuracy, in meters, of that point, e.g. 48.604311,-122.981998,5000. This is the location of the downstream user, not the server the service is being called from. userLocation becomes a secondary signal if either userCircularMapView or userMapView is passed in as well.
  • userCircularMapView (alias: ucmv) – Optional. The geographic area to search for local places, expressed as a circle (lat,long,radius), where lat and long are the coordinates of the circle’s center, e.g. 48.604311,-122.981998,5000. userCircularMapView is mutually exclusive with userMapView. Both are used as ranking features; they are not physical constraints on the suggestions.
  • userMapView (alias: urmv) – Optional. Allows the returned suggestions to be bounded to a bounding box. The box is defined by 4 values: south latitude, west longitude, north latitude, east longitude (the coordinates of two opposite corners). E.g. 29.8171041, -122.981995, 48.604311, -95.5413725. userMapView is mutually exclusive with userCircularMapView. Both are used as ranking features; they are not physical constraints on the suggestions.
  • maxRes (alias: mr) – Optional. Maximum number of suggestions to return. Default is 7, maximum is 10, minimum is 1.
  • types – Optional. Specify the types of locations to return. The types are Place, Address, and Business, e.g. “Place, Address”. By default all 3 are on.
  • culture (alias: c) – Optional. The language to use for user interface strings, e.g. en-GB. By default this is set to en-US.
  • userRegion (alias: m) – Optional. The market the user is in, denoted by its 2-letter country code abbreviation, e.g. DE for Germany. The default is US.
  • countryFilter (alias: cf) – Optional. Constrain suggestions to only those in a single country via a 2-character ISO country code, e.g. “DE”. The default is none.
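As a rough illustration only, a request might look something like the URL below; the exact endpoint path and parameter names are subject to change before the full launch, and the key is a placeholder:

https://dev.virtualearth.net/REST/v1/Autosuggest?query=1%20Micro&userLocation=48.604311,-122.981998,5000&maxRes=5&key=YOUR_BING_MAPS_KEY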

Below are fields you can expect in the output. Responses include some subset of what’s below depending on the entity type.

Output fields

  • ResourceSets – Container of returned structured suggestions.
  • Resources – A structured entity suggestion.
  • Error – Defines an error that occurred.
  • __type – The type of suggested result, e.g. Place.
  • houseNumber – Just the house number of a street address, e.g. 2100. This may be blank for road entities.
  • streetName – Just the street name of a street address, e.g. Westlake Ave N.
  • addressLine – The street address, e.g. 2100 Westlake Ave N.
  • locality – The city where the street address is located, e.g. Seattle.
  • adminDistrict2 – The name of the county where the street address is located, e.g. King.
  • adminDistrict – The state or province code where the street address is located. This could be the two-letter code (e.g. WA) or the full name (e.g. Washington).
  • countryRegion – The name of the country/region where the street address is located, e.g. United States.
  • countryRegionIso2 – The ISO code for the country/region where the street address is located, e.g. US.
  • neighborhood – The neighborhood where the street address is located, e.g. Westlake.
  • postalCode – The zip code or postal code where the street address is located, e.g. 98052.
  • formattedAddress – The complete address of the location, e.g. 7625 170th Ave NE, Redmond, WA 98052.
  • entityType – A list of hints that indicate the type of entity. The list may contain a single hint such as PopulatedPlace or a list of hints such as Place, LocalBusiness, Restaurant. Each successive hint in the array narrows the entity's type.
  • name – Name of a place or business, e.g. Washington, Starbucks. All POI have names; however, places that are typically found in the geochain

Finally, check out an example of how one could visualize these suggestions quite quickly. In the example below, there were originally 5 POI markers for the suggestions, and we used an eraser tool to hide 2 of them.

Bing Maps Autosuggest Visualizer

Enroll in our preview program

Contact maplic@microsoft.com for enrollment in the private preview programs for both Bing Maps Location Recognition API and Bing Maps Autosuggest API.

We hope you enjoy exploring our new APIs as much as we enjoyed building them. As always, we welcome your feedback. Let us know what you think about the APIs and what you would like to see in future releases by reaching out to the team at Bing Maps Forum.

- Bing Maps Team

Bringing a modern WebView to your .NET WinForms and WPF Apps

One of the founding principles of Microsoft Edge is that it is evergreen, with automatic updates to Windows 10 continually refreshing both Microsoft Edge itself and web content throughout Windows 10. However, until recently, WinForms and WPF apps didn’t have access to the Microsoft Edge-powered WebView.

Earlier this week at Build 2018, Kevin Gallo previewed a new WebView for Windows 10, bringing the latest Microsoft Edge rendering engine to .NET WinForms and WPF apps for the first time in an upcoming release of Windows 10. In today’s post, we’ll provide more details on the benefits this brings to developers, and how you can preview this new capability in the Windows 10 April 2018 Update.

A WebView for the modern web

In our recent blog post on Progressive Web Apps (PWAs), we described the powerful experiences enabled by bringing a modern web platform to UWP, allowing developers to write HTML, CSS, and JS that seamlessly spans both browser as well as native app experiences.

As the growth of the web platform in general, and EdgeHTML in particular, has accelerated in recent releases, the performance and compatibility gap with the Trident-powered WebBrowser control has increased, and many of our customers have asked for a way to incorporate the latest version of EdgeHTML into WinForms and WPF apps. We’re happy to address this feedback with a preview of the all-new EdgeHTML-powered WebView, bringing the last three years of performance, reliability, and security enhancements to WinForms and WPF apps for the first time.

Getting started

Working with the WebViewControl and WebView API may feel foreign to native .NET developers, so we’re building additional controls to simplify the experience and provide a more familiar environment. These controls wrap the WebViewControl to enable the control to feel more like a native .NET WinForms or WPF control, and provide a subset of the members from that class.

The WinForms and WPF controls are available today as a preview in the 3.0 release of the Windows Community Toolkit in the Microsoft.Toolkit.Win32.UI.Controls package. This means that upgrading from the Trident-powered WebBrowser control to the EdgeHTML-powered WebView in your WinForms or WPF app can be as easy as dragging in a new control from the toolbox.

Using WebView for WPF

Once you install the NuGet package, the WebView control appears in Windows Community Toolkit section of the Toolbox when the WPF Designer is open in Visual Studio or Blend.

Using the Designer

Drag the control from the Toolbox as you would any other control.

Programmatically adding WebView

The WPF version of the control is in the Microsoft.Toolkit.Win32.UI.Controls.WPF namespace.

Using WebView for WinForms

Using the Designer

First, we’ll need to add a WinForms control from a NuGet package to the Toolbox in Visual Studio. In a future release, Visual Studio will do this automatically.

  • First, open the Visual Studio Toolbox, then right-click anywhere in the toolbox, and select Choose Items…
  • In the .NET Framework Components tab of the Choose Toolbox Items dialog box, click the Browse button to locate the Microsoft.Toolkit.Win32.UI.Controls.dll in your NuGet package folder.
    For help finding that folder, see Managing the global packages, cache, and temp folders.
  • After the DLL is added to the list of Toolbox controls, WebView is automatically enabled.
  • Close the Choose Toolbox Items dialog box.

The WebView control appears in the All Windows Forms section of the Toolbox when the Windows Forms Designer is open.

Programmatically adding WebView

After installing the NuGet package, you can add the WebView to your application like any other control. The WinForms version of the control is in the Microsoft.Toolkit.Win32.UI.Controls.WinForms namespace.
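A minimal sketch inside a Form, assuming default designer scaffolding (the URL is illustrative; the ISupportInitialize calls mirror what the designer generates for this control):

// Create the control, initialize it, dock it to fill the form, and navigate.
var webView = new Microsoft.Toolkit.Win32.UI.Controls.WinForms.WebView();
((System.ComponentModel.ISupportInitialize)webView).BeginInit();
webView.Dock = System.Windows.Forms.DockStyle.Fill;
Controls.Add(webView);
((System.ComponentModel.ISupportInitialize)webView).EndInit();
webView.Navigate(new Uri("https://www.microsoft.com"));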

We’re just getting started!

The current WinForms and WPF WebView implementations are an early preview, with some limitations when compared to the UWP WebView control. For the complete list of these limitations, see Known Issues of the WebView control for Windows Forms and WPF applications on GitHub.

You can get started with WebView today using the Windows 10 April 2018 update or the latest Windows Insider Preview builds, where we’ll be adding more improvements as we continue towards a stable release. Please share your feedback or suggestions via @MSEdgeDev on Twitter, in the Windows Community Toolkit project on GitHub, or in the comments below.

Happy Coding!

Kirupa Chinnathambi, Senior Program Manager, Microsoft Edge
Richard Murillo, Principal Architect, Microsoft Edge

The post Bringing a modern WebView to your .NET WinForms and WPF Apps appeared first on Microsoft Edge Dev Blog.


In case you missed it: April 2018 roundup

In case you missed them, here are some articles from April of particular interest to R users.

Microsoft R Open 3.4.4, based on R 3.4.4, is now available.

An R script by Ryan Timpe converts a photo into instructions for rendering it as LEGO bricks.

R functions to build a random maze in Minecraft, and have your avatar solve the maze automatically.

A dive into some of the internal changes bringing performance improvements to the new R 3.5.0.

AI, Machine Learning and Data Science Roundup, April 2018

An analysis with R shows that Uber has overtaken taxis for trips in New York City.

News from the R Consortium: new projects, results from a survey on licenses, and R-Ladies is promoted to a top-level project.

A talk, aimed at Artificial Intelligence developers, making the case for using R.

Bob Rudis analyzes data from the R-bloggers.com website, and lists the top 10 authors.

An R-based implementation of Silicon Valley's "Not Hotdog" application.

An R package for creating interesting designs with generative algorithms.

And some general interest stories (not necessarily related to R):

As always, thanks for the comments and please send any suggestions to me at davidsmi@microsoft.com. Don't forget you can follow the blog using an RSS reader, via email using blogtrottr, or by following me on Twitter (I'm @revodavid). You can find roundups of previous months here.

Azure Event Hubs for Kafka Ecosystems in public preview

Organizations need data-driven strategies to increase competitive advantage. Customers want to stream data and analyze it in real time to get valuable insights faster. To meet these big data needs, you need a massively scalable, distributed, event-driven messaging platform with multiple producers and consumers. Apache Kafka and Azure Event Hubs provide such distributed platforms.

How is Event Hubs different from Kafka?

Kafka and Event Hubs are both designed to handle large-scale stream ingestion driven by real-time events. Conceptually, both are a distributed, partitioned, and replicated commit log service. Both use a partitioned consumer model offering huge scalability for concurrent consumers. Both use a client-side cursor concept and scale to very high workloads.

Apache Kafka is software that you install and run. Azure Event Hubs is a fully managed service in the cloud. While Kafka is popular for its wide ecosystem and its on-premises and cloud presence, Event Hubs offers you the freedom of not having to manage servers or networks or worry about configuring brokers.

Talk to Event Hubs, like you would with Kafka and unleash the power of PaaS!

Today we are happy to marry these two powerful distributed streaming platforms to offer you Event Hubs for Kafka Ecosystems.

With this integration, you are provided with a Kafka endpoint. This endpoint enables you to configure your existing Kafka applications to talk to Azure Event Hubs, an alternative to running your own Kafka clusters. Azure Event Hubs for Kafka Ecosystem supports Apache Kafka 1.0 and later.
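In practice, that typically means pointing your existing Kafka client configuration at your Event Hubs namespace over SASL_SSL on port 9093, roughly along these lines (the namespace name and connection string are placeholders):

bootstrap.servers=mynamespace.servicebus.windows.net:9093
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="{EVENT HUBS CONNECTION STRING}";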

This integration not only allows you to talk to Azure Event Hubs without changing your Kafka applications, it also allows you to work with some of the most demanding features of Event Hubs, like Capture, Auto-Inflate, and Geo Disaster-Recovery.

For those of you new to Event Hubs, conceptually they map as below:

  • Kafka Cluster → Event Hubs Namespace
  • Kafka Topic → Event Hub
  • Kafka Partition → Partition
  • Kafka Consumer Group → Consumer Group
  • Kafka Offset → Offset

 

What to expect for this preview?

For the public preview of this new integration, the supported Kafka feature set is limited, and the following features are not currently supported:

  • Idempotent producer
  • Transaction
  • Compression
  • Size based retention
  • Log compaction
  • Adding partitions to an existing topic
  • HTTP Kafka API support
  • Kafka Connect
  • Kafka Streams

Next steps

Enjoyed this blog? Follow us as we update the list of features we will support. Leave us your valuable feedback, questions, or comments below.

Happy event-ing!

Announcing .NET Core 2.1 RC 1 Go Live AND .NET Core 3.0 Futures

I just got back from the Microsoft BUILD Conference where Scott Hunter and I announced both .NET Core 2.1 RC1 AND talked about .NET Core 3.0 in the future.

.NET Core 2.1 RC1

First, .NET Core 2.1's Release Candidate is out. This one has a Go Live license and it's very close to release.

You can download and get started with .NET Core 2.1 RC 1, on Windows, macOS, and Linux:

You can see complete details of the release in the .NET Core 2.1 RC 1 release notes. Related instructions, known issues, and workarounds are included in the release notes. Please report any issues you find in the comments or at dotnet/core #1506. ASP.NET Core 2.1 RC 1 and Entity Framework 2.1 RC 1 are also releasing today. You can develop .NET Core 2.1 apps with Visual Studio 2017 15.7, Visual Studio for Mac 7.5, or Visual Studio Code.

Here's a deep dive on the performance benefits, which are SIGNIFICANT. It's also worth noting that you can get 2x+ speed improvements for your builds/compiles by adopting the .NET Core 2.1 RC SDK while continuing to target earlier .NET Core releases, like 2.0, for the runtime.
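For example, a project can stay on the 2.0 runtime while being built with the 2.1 RC SDK simply by keeping its existing target framework; a minimal csproj sketch:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp2.0</TargetFramework>
  </PropertyGroup>
</Project>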

  • Go Live - You can put this version in production and get support.
  • Alpine Support - There are docker images at 2.1-sdk-alpine and 2.1-runtime-alpine.
  • ARM Support - We can compile on Raspberry Pi now! .NET Core 2.1 is supported on Raspberry Pi 2+. It isn’t supported on the Pi Zero or other devices that use an ARMv6 chip. .NET Core requires ARMv7 or ARMv8 chips, like the ARM Cortex-A53. There are even Docker images for ARM32
  • Brotli Support - new lossless compression algo for the web.
  • Tons of new Crypto Support.
  • Source Debugging from NuGet Packages (finally!) called "SourceLink."
  • .NET Core Global Tools:
dotnet tool install -g dotnetsay
dotnetsay

In fact, if you have Docker installed go try an ASP.NET Sample:

docker pull microsoft/dotnet-samples:aspnetapp
docker run --rm -it -p 8000:80 --name aspnetcore_sample microsoft/dotnet-samples:aspnetapp

.NET Core 3.0

This is huge. You'll soon be able to take your existing WinForms and WPF app (I did this with a 12 year old WPF app!) and swap out the underlying runtime. That means you can run WinForms and WPF on .NET Core 3 on Windows.

Why is this cool?

  • WinForms/WPF apps can be self-contained and run in a single folder.

No need to install anything, just xcopy deploy. WinFormsApp1 can't affect WPFApp2 because they can each target their own .NET Core 3 version. Updates to the .NET Framework on Windows are system-wide and can sometimes cause problems with legacy apps. You'll now have total control and can update apps one at a time, and they can't affect each other. C#, F#, and VB already work with .NET Core 2.0. You will be able to build desktop applications with any of those three languages with .NET Core 3.

Secondly, you'll get to use all the new C# 7.x+ (and beyond) features sooner than ever. .NET Core moves fast but you can pick and choose the language features and libraries you want. For example, I can update BabySmash (my .NET 3.5 WPF app) to .NET Core 3.0 and use new C# features AND bring in UWP Controls that didn't exist when BabySmash was first written! WinForms and WPF apps will also get the new lightweight csproj format. More details here and a full video below.

  • Compile to a single EXE

Even more, why not compile the whole app into a single EXE. I can make BabySmash.exe and it'll just work. No install, everything self-contained.

.NET Core 3 will still be cross platform, but WinForms and WPF remain "W is for Windows" - the runtime is swappable, but they still P/Invoke into the Windows APIs. You can look elsewhere for .NET Core cross-platform UI apps with frameworks like Avalonia, Ooui, and Blazor.

Diagram showing that .NET Core will support Windows UI Frameworks

You can check out the video from BUILD here. We show 2.1, 3.0, and some amazing demos like compiling a .NET app into a single exe and running it on a computer from the audience, as well as taking the 12 year old BabySmash WPF app and running it on .NET Core 3.0 PLUS adding a UWP Touch Ink Control!

Lots of cool stuff coming today AND tomorrow with open source .NET Core!


Sponsor: Check out JetBrains Rider: a cross-platform .NET IDE. Edit, refactor, test and debug ASP.NET, .NET Framework, .NET Core, Xamarin or Unity applications. Learn more and download a 30-day trial!




Fully-integrated experience simplifying Language Understanding in conversational AI systems


Creating an advanced conversational system is now a simple task with the powerful tools integrated into Microsoft’s Language Understanding Service (LUIS) and Bot Framework. LUIS brings together cutting-edge speech, machine translation, and text analytics on the most enterprise-ready platform for creating conversational systems. In addition to these features, LUIS is currently GDPR, HIPAA, and ISO compliant, enabling it to deliver exceptional service across global markets.

Talk or text?

Bots and conversational AI systems are quickly becoming a ubiquitous technology enabling natural interactions with users. Speech remains one of the most widely used input forms and comes naturally when thinking of conversational systems. This requires the integration of speech recognition within the language understanding of conversational systems. Individually, speech recognition and language understanding are among the most difficult problems in cognitive computing. Introducing the context of language understanding improves the quality of speech recognition. Through intent-based speech priming, the context of an utterance is interpreted using the language model to improve the performance of both speech recognition and language understanding. Intent-based speech recognition priming uses the utterances and entity tags in your LUIS models to improve accuracy and relevance while converting audio to text. Incorrectly recognized spoken phrases or entities can be rectified by adding the associated utterance to the LUIS model or correctly labeling the entity.

In this release, we have simplified the process of integrating speech priming into LUIS. You no longer have to use multiple keys or interact through other middleware. This more streamlined integration also reduces the latency that your users experience when using speech as an input to your conversational system. All you need to do is enable speech priming in the publish settings of your LUIS application. Speech priming will be invoked on the same subscription key used in LUIS and transferred to the speech APIs seamlessly.

Publish app

Text Analytics: Understand your text?

LUIS continues to bring together different technologies to help understand your users. It already includes the power of Bing Spell Check, and now we are adding functionality from the Text Analytics Cognitive Service. Integrating text analytics into your LUIS model to enable sentiment detection on utterances is a simple configuration step. Through this integration your bot can tell you if your customer is happy or sad. Text analytics also enables the detection of key phrases within utterances without requiring labeling or training. These advanced natural language processing tools enable better, more personalized interaction with your customers.

The JSON object returns the sentiment of the utterance as a value from 0 to 1, with values closer to 1 being more positive and values closer to 0 more negative. Additionally, adding the key phrases pre-built entity enables identifying key phrases in the returned object.

Text analytics
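
As a rough sketch of how a bot might consume that sentiment value, the snippet below parses an illustrative response fragment with Newtonsoft.Json; the sentimentAnalysis, label, and score property names are shown for illustration only and should be checked against the actual LUIS response.

using System;
using Newtonsoft.Json.Linq;

class SentimentSketch
{
    static void Main()
    {
        // Illustrative fragment of a LUIS response with sentiment enabled;
        // the real payload also contains the query, intents, and entities.
        var json = @"{
            ""query"": ""my order never arrived"",
            ""sentimentAnalysis"": { ""label"": ""negative"", ""score"": 0.12 }
        }";

        var result = JObject.Parse(json);
        var score = (double)result["sentimentAnalysis"]["score"];

        // Values closer to 1 are positive, values closer to 0 are negative.
        Console.WriteLine(score >= 0.5 ? "Customer seems happy" : "Customer seems unhappy");
    }
}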

Create a Global Bot, without being "lost in translation"

Are you worried about the effort it would take you to design a bot that speaks multiple languages? With the integration of Machine Translation middleware into the Bot Framework, you don't need to worry any more. Using a bot created in one language across 60+ languages takes only a few lines of code, making it much simpler to build and improve models. With personalization and customization included in the middleware, enabling a global bot is a simple task. Combining translation middleware with LUIS and QnA Maker is simple and includes passing the utterance after translation. The middleware also specifies whether the response should be translated back to the user's language or returned in the bot's native language.

Public void

The translation middleware also includes the ability to identify patterns that shouldn’t be translated to the target language (such as names of locations or entities that are meaningful in their own terms). For example, if the user says “My name is …” in a language that is not native to the bot, you want to avoid translating the name, which is done using a pattern in every language. The code snippet reflects this pattern for French.

Startup

The middleware also includes a localization converter for currencies and dates to distinct cultures by adding LocaleConverterMiddleware. The Machine Translation middleware is currently released as preview with Bot Framework SDK V4.  

Generalizing the Model using Regex and Pattern features

In this release LUIS is introducing two features that enable an improved language understanding of entities and intents. Regex entities allow the identification of an entity in the utterance based on a regular expression. For example, a flight number regular expression includes two or three characters and then 4 digits. The Delta Airlines flight regular expression could be expressed as DL[0-9]{4}. Defining this regular expression entity will allow LUIS to extract matching entities from an utterance.

Patterns, on the other hand, allow developers to define intents they can represent effectively without the need to provide extensive utterances. This is especially effective for capturing the wide variety of common ways of expressing an utterance. Consider for example a shopping application: the pattern “add the {Shopping.Item} to my {Shopping.Cart}” is a common way of expressing the intent Shopping.AddToCart.

Patterns are especially useful when there are similarities between utterances that reflect different intents. Take for example the utterances in a human resources domain “Who does Mike Jones report to?” and “Who reports to Mike Jones?”. The two statements contain the same tokens yet reflect different intents. This could be captured either by introducing many utterances, or by simply expressing these common utterances through their respective patterns “Who does {Employee} report to?” and “Who reports to {Employee}?”. Introducing these patterns alongside the utterances allows LUIS to identify the most suitable intent to fit an utterance. Moreover, patterns can also capture the distinct roles of entities. As an example, consider the pattern “Book a ticket from {Location:origin} to {Location:Destination}” in a flight booking application. This pattern captures the distinct roles of the locations included in the utterance.

Additionally, patterns can encompass entities of variable length, represented as Pattern.any entities. The entities are detected first, prior to the matching of the pattern. In turn the pattern “Where is the {FormName} and who needs to sign it after I read it?” will match form names that extend across multiple tokens.

Image

Involve your personal data scientist to help improve your model

The tools provided in Cognitive Services enable developers without a machine learning (ML) background to develop conversational systems. Once the bot is built, developers are faced with multiple options to further improve their models. Some of these options are rooted in ML, and in turn developers with limited ML experience might not fully explore these options. 

Dispatch Command Line tool

In this release of LUIS and Bot Framework, we have taken our goal of democratizing ML further. We are incorporating tools to provide personalized data science guidance to developers on their existing applications. This includes identifying the areas where the current model falls short and providing suggestions to help improve the trained model. It also allows for automatically generating different architectures of multiple conversational models, which include LUIS and QnA Maker, through the dispatcher tool. This might mean creating a hierarchical architecture with a dispatching model or consolidating multiple models into one LUIS model. Collectively, these tools use LUIS to realize the different architectures and dissect the models to help provide the most suitable guidance to developers. The dispatch tool is currently released in preview on Bot Framework SDK V4.

These are some of the highlights of the features that have been introduced to LUIS and Bot Framework. Through integrating the different tools, and compliance with GDPR, HIPAA, and ISO, LUIS and Bot Framework are distinguishing themselves as the most enterprise-ready platform. These new additions make understanding customers and reaching new markets and users a few lines of code away. They also make bot interactions more natural to your users.

For more information please visit:

We look forward to your feedback and the amazing bots you create.

Enhancements in Application Insights Profiler and Snapshot Debugger


We are pleased to announce a series of improvements on Application Insights Profiler and Snapshot Debugger. Profiler identifies the line of code that slowed down the web app performance under load. Snapshot Debugger captures runtime exception call stack and local variables to identify the issue in code. To ensure users can easily and conveniently use the tools, we delivered the following new features for Profiler and Snapshot Debugger:

Application Insights enablement with Profiler and Snapshot Debugger

With the newly enhanced Application Insights enablement experience, Profiler and Snapshot Debugger are default options to be turned on with Application Insights.

  • Enabling Snapshot Debugger without redeploying your web app: For ASP.NET Core web apps, Snapshot Debugger is a simple, default option when enabling App Insights. It used to require modifying the project to install a NuGet package and add exception tracking code. Now it’s done via an ASP.NET Core hosting startup light-up through an App Setting, so no redeploy is required. ASP.NET support will be available very soon.
  • Enabling Profiler with Application Insights in one step: Enabling Profiler used to be done in a separate Profiler Configuration pane, which required extra steps. This is no longer needed.

Profiler

  • On-demand profiler: Trigger a profiler session on your web app anytime, as needed. Before, Profiler would run randomly 5% of the time, which could miss capturing critical traces. With the new on-demand profiler feature, this problem is solved because users can capture traces whenever they need to.

  • Profiler for ASP.NET Core on Linux: Profiler now works on App Service Linux ASP.NET Core 2.0 Docker images. More platforms will be supported in the future.

Snapshot Debugger

  • Snapshot health check: Smartly diagnose why web app runtime exceptions do not have an associated snapshot. Easily and quickly troubleshoot Snapshot Debugger with more insight and visibility.

Enabling Profiler and Snapshot Debugger is now easier than ever

We enhanced the App Insights enablement experience for App Services. Suppose you have deployed a Web Application to an App Services resource. Later, you notice your web app is being slow or throwing exceptions. You would want to enable App Insights on your Web App to monitor and diagnose what’s going on. Of course, you don’t want to redeploy the web app just to enable monitoring service.

With the new enablement experience, you can easily find the entry point to enable App Insights under Settings | Application Insights. The added section Code level diagnostics is on by default to enable Profiler and Snapshot Debugger for diagnosing slow app performance and runtime exceptions.

Profiler can be enabled easily like this because the Profiler agent is installed in the new App Insights site extension and enabled by an App Setting. Snapshot Debugger is enabled through an ASP.NET Core hosting startup light-up: the runtime will include an assembly if an environment variable is set.

The UI for the new enablement experience allows everything to be configured in one step:

Enablement_UI

The following App Settings are added to App Services for enabling Profiler and Snapshot Debugger:

Enablement_AppSettings

Capture Interesting Profiler Traces On-Demand

We are excited to introduce the new on-demand profiler triggering feature. To make sure critical traces are not missed, you can go to the Profiler configuration pane and click the Profile Now button to start the profiler as needed. You can trigger a profiler run in the following situations:

  • You want to get started using profiler by capturing the first traces to test everything is working.

  • You want to efficiently and reliably capture traces during a load test run.

  • You need to promptly capture traces for performance issues going on now.

In addition, you get more visibility into how profiler has been running from the Profiler run history list.

You can learn more at Manually trigger Profiler.

OnDemand-Profiler

Investigate performance for ASP.NET Core on Linux using Profiler

Leveraging the Event Pipe technology, we can now capture traces for ASP.NET Core web apps running inside a Linux container hosted on App Service. The profiler runs in-proc in ASP.NET Core to capture traces, which introduces less overhead. The current preview release is for evaluation purposes only. Try it out by following Profile ASP.NET Core Azure Linux web apps with Application Insights Profiler.

Linux_profiler

Snapshot Health Check to Quickly Understand and Solve Issues

To address one of the top pieces of customer feedback, that snapshots sometimes cannot be seen for exceptions, we built a new feature to help users diagnose the reasons for missing snapshots. The service performs a health check on Snapshot Debugger based on user input. When a snapshot is missing, instead of showing nothing on the End-to-End trace viewer blade, we show a link to help users troubleshoot what’s going on. We hope this quickly helps our customers root-cause and fix issues. We always strive to enable our customers’ success.

Learn more at troubleshoot snapshot debugger.

healthcheck2

SnapshotHealthCheck

Next steps

Enable Profiler and Snapshot Debugger on your web app today! Send feedback at serviceprofilerhelp@microsoft.com or snapshothelp@microsoft.com. You can also directly reply to this post.

Azure SQL Data Warehouse now supports automatic creation of statistics


We are pleased to announce that Azure SQL Data Warehouse (Azure SQL DW) now supports automatic creation of column level statistics. Azure SQL DW is a fast, flexible, and secure analytics platform for the enterprise.

Modern systems such as Azure SQL DW rely on cost-based optimizers to generate quality execution plans for user queries. Even though Azure SQL DW implements a cost-based optimizer, the system has relied on developers and administrators to create statistics objects manually. When all queries are known in advance, determining what statistics objects need to be created is an achievable task. However, when the system is faced with ad-hoc and random queries, which is typical for data warehousing workloads, system administrators may struggle to predict what statistics need to be created, leading to potentially suboptimal query execution plans and longer query response times. One way to mitigate this problem is to create statistics objects on all the table columns in advance. However, that approach comes with a penalty, as statistics objects need to be maintained during the table loading process, causing longer loading times.

Azure SQL DW now supports automatic creation of statistics objects providing greater flexibility, productivity, and ease of use for system administrators and developers, while ensuring the system continues to offer quality execution plans and best response times. The picture below shows a sample statistics histogram that the Azure SQL DW query optimizer uses during the query optimization phase.

Automatic statistics creation is enabled by default for all new data warehouses that are created. For existing instances, this option is disabled by default and users need to opt in to enable it. Just like in SQL Server, the auto create option exists at the database level.

To enable or disable automatic statistics creation in SQL DW, execute the following statement:

ALTER DATABASE { database_name } SET { AUTO_CREATE_STATISTICS { OFF | ON } } [;]

As a best practice and guidance, we recommend setting the AUTO_CREATE_STATISTICS option to ON.

For more information about statistics objects in Azure SQL DW, including automatic statistics creation process, refer to our documentation.

Spark + AI Summit: Data scientists and engineers put their trust in Microsoft Azure Databricks


Microsoft will have a major presence at Spark + AI Summit 2018 in San Francisco, the premier event for the Apache Spark community. Rohan Kumar, Corporate Vice President of Azure Data, will deliver a keynote on how Azure Databricks combines the best of the Apache® Spark™ analytics platform and Microsoft Azure Data Services to help customers unleash the power of data and reimagine possibilities that will improve our world.

Azure Databricks, a fast, easy, and collaborative Apache Spark-based analytics platform optimized for Azure, was made generally available in March 2018. To learn more about the announcement, read Rohan Kumar’s blog about how Azure Databricks can help customers accelerate innovation and simplify the process of building Big Data & AI solutions. At Spark + AI Summit, we have a number of sessions showcasing the great work our customers and partners are doing and how Azure Databricks is helping them achieve productivity at scale.

Sign up for training on Spark!

On Monday, June 4, 2018 there are a number of full-day training courses on Apache Spark ranging from beginner to advanced that will enhance your skill set and even prepare you for certification on Spark.

Apache Spark essentials

This 1-day course is for data engineers, analysts, architects, data scientists, software engineers, IT operations, and technical managers interested in a brief hands-on overview of Apache Spark.

Apache Spark tuning and best practices

This 1-day course is for data engineers, analysts, architects, dev-ops, and team-leads interested in troubleshooting and optimizing Apache Spark applications. It covers troubleshooting, tuning, best practices, anti-patterns to avoid, and other measures to help tune and troubleshoot Spark applications and queries.

Data science with Apache Spark

The Data Science with Apache Spark workshop will show how to use Apache Spark to perform exploratory data analysis (EDA), develop machine learning pipelines, and use the APIs and algorithms available in the Spark MLlib DataFrames API. It is designed for software developers, data analysts, data engineers, and data scientists.

Understand and apply deep learning with Keras, TensorFlow, and Apache Spark

This deep learning workshop introduces the conceptual background as well as implementation for key architectures in neural network machine learning models. We will see how and why deep learning has become such an important and popular technology, and how it is similar to and different from other machine learning models as well as earlier attempts at neural networks.

1/2-day prep course + Databricks Developer Certification: Apache Spark 2.x

This 1/2 day lecture is for anyone seeking to become a Databricks Certified Apache Spark Developer or Databricks Certified Apache Spark Systems Architect. It includes test-taking strategies, sample questions, preparation guidelines, and exam requirements. The primary goal of this course is to help potential applicants understand the breadth and depth to which individuals will be tested and to provide guidelines as to how to prepare for the exam.

We hope you will take advantage of this opportunity to build new skills and enhance existing ones on Apache Spark.

Come see us

At Spark + AI Summit 2018 in June, we’ll be showcasing Microsoft’s commitment to helping our customers drive analytics at scale and increase productivity with security, as well as the groundbreaking work of customers and partners that are using our technology to digitally transform their businesses. If you plan to attend, be sure to stop by our booth #201 to say hello and learn more about our data and AI services on Azure, especially Azure Databricks. We will also have our Azure experts in the booth to answer all your questions and help you get started on Microsoft Azure. Sign up for our webinar on the basics of Apache Spark on Azure Databricks on May 31, 2018 from 1:00 PM - 2:00 PM Pacific Time. Register for Spark + AI Summit today.


In case you missed it: 10 of your questions from our GDPR webinars


During the last few months, I’ve spoken with a lot of Azure customers, both in person and online, about how to prepare for the May 25, 2018 deadline for compliance with the EU’s General Data Protection Regulation (GDPR). The GDPR imposes new rules on companies, government agencies, non-profits, and other organizations that offer goods and services to people in the European Union (EU), or that collect and analyze data tied to EU residents. The GDPR applies no matter where you are located. The GDPR will dramatically shift the landscape for data collection and analysis, since under the GDPR, many practices that were commonplace will be forbidden, and companies must take care in assessing their exposure and how to comply.

I recently participated in a Microsoft series of webinars about the GDPR and its implications for IT teams and cloud computing. We got a lot of questions from the audience in these webinars, so I thought I would respond to some of the most frequently asked ones that we thought you might find helpful, along with links to the on-demand webinars.

Q: Does the GDPR allow me to send data outside the EU?

A: GDPR applies globally, so no matter where your company stores or processes personal data, even within the EU, it must comply with GDPR guidelines.

Q: Does GDPR apply to internal sites, such as corporate intranets, as well?

A: Yes. Whether you’re storing personal data about consumers or employees, you must still abide by GDPR guidelines.

Q: What are the GDPR requirements around classifying data?

A: GDPR doesn’t explicitly require data classification, but given the rights that it grants to EU citizens, and the requirements of any company storing a citizen’s personal data, classifying data is practically non-negotiable. For example, companies must inform individuals about all of the personal data they have on file, and must get their consent before processing it. Companies must also ensure that they are taking appropriate measures to protect that data, and can only store it for the prescribed purpose and period of time for which an individual gave their consent.

So there’s really no feasible way to abide by these requirements and responsibilities without cataloging your data and knowing the location of any personal data that falls under GDPR jurisdiction.

Q: Does GDPR require encryption?

A: Not in a prescriptive manner. Instead, it gives you guidelines and strongly suggests that you encrypt.

Q: Has the EU established any best practices about what it means to be compliant?

A: The EU has published guidelines, but keep in mind that GDPR is just the baseline—each country has the authority to include additional requirements. And GDPR is more about giving you guidance, rather than providing highly prescriptive instructions.

Q: How does Brexit impact this?

A: Unfortunately, the UK is no longer considered to be on the same level as the EU member countries. As such, the UK will no longer be considered adequate in abiding by terms of data protection laws. However, the UK is doing its part to comply with GDPR.

Q: Will there be an official GDPR certification?

A: Eventually, but it won’t be completed for at least a couple of months after GDPR is implemented. In the meantime, you can build on top of ISO 27001, and Microsoft has its own gap analysis to help companies figure out how to become compliant.

Q: Are any independent groups giving assessments?

A: A coalition of cloud infrastructure service providers, called CISPE, has developed its own code of conduct that’s intended to help companies get started. In December, the Cloud Security Alliance released its code of conduct, which we are evaluating. In the meantime, we are sticking with ISO 27001 and staying in contact with the EU’s Data Protection Authority.

Q: Do data retention requirements override an individual’s right to have their data deleted?

A: Yes, there are a few exceptions where personal data must be kept for tax or legal reasons to run your business. However, the whole notion of companies having carte blanche permission to collect and keep data has been done away with.

Q: Is IP in scope for data subject rights?

A: Yes. In fact, IP is in scope with the EU’s existing DPA regulations, but GDPR significantly broadens the definition of personal data to include any information that can be connected with a known person. Examples include browser history and social media activity.

It also makes special provisions for information related to an individual’s physical and mental health, such as genetic and biometric data.

 

I hope these questions get you thinking about what you can do to prepare for GDPR. We have a lot more GDPR information available on our main GDPR site, including an Azure GDPR page, our white paper, How Microsoft Azure Can Help Organizations Become Compliant with the EU GDPR, and our Get Started: Support for GDPR Accountability set of resources.  

Of course none of the above should be considered legal advice, and we encourage you to bring any concerns you have to your company's legal counsel. 

Missed the live webinars? View these sessions at your convenience to get caught up:

  1. Azure and the GDPR: What you need to know to become compliant
  2. Azure and the GDPR: Technical deep dive on implementing GDPR requirements in Azure
  3. Azure and the GDPR: Using Compliance Manager to track GDPR compliance in Azure

Extract management insights from SQL Data Warehouse with SQL Operations Studio


SQL Operations Studio can be leveraged with Azure SQL Data Warehouse (SQL DW) to create rich, customizable dashboard widgets surfacing insights into your data warehouse. This unlocks key scenarios around managing and tuning your data warehouse to ensure it is optimized for consistent performance. Previously, developers had to manually and continuously execute complex DMV queries to extract insights from their data warehouse, leading to a repetitious process when following development and tuning best practices with SQL DW. Now with SQL Operations Studio, customized insight widgets can be embedded directly within the query tool, enabling you to seamlessly monitor and troubleshoot issues with your data warehouse.

The following widgets can be generated by using the provided T-SQL monitoring scripts within SQL Operations Studio for common data warehouse insights.

Data Skew

Detect data skew across distributions to help identify and troubleshoot query performance issues:

Data_Skew

Columnstore health and statistics

Leverage views to help maximize columnstore row group quality and ensure table statistics are up to date for optimal query performance:

Columnstore Health and Statistics

User Activity

Identify and understand workload patterns through active sessions queries, queued queries, loads, and backups:

User Activity

Resource Bottlenecks

Ensure adequate resources are allocated such as memory and TempDB:

Resource Bottlenecks

Next Steps

There are countless custom insight widgets that could be written for SQL DW through T-SQL scripts. Download and contribute scripts to the SQL Data Warehouse samples GitHub and install the latest version of SQL Operations Studio to get started.

Customer success stories with Azure Backup: Somerset County Council


This is a continuation of our customer success story blog series for Azure Backup. In the previous case study we covered Russell Reynolds; here we will discuss how the United Kingdom’s Somerset County Council were able to improve their backup and restore efficiency and reduce their backup cost using Azure Backup.

Customer background

United Kingdom’s Somerset County Council provides government services to its 550,000 residents. It is one of the oldest local governments in the world, established about 700 A.D. Somerset had been using an in-house storage manager platform for their data backup and restore on-premises.

“The biggest problems we had were with flexibility and scalability. We had racks and racks of disks, and we had to wait a long time to get new hardware. The complexities with the product itself also introduced many challenges,” says Dean Cridland, Senior IT Officer at Somerset County Council. In addition, as the data footprint grew, IT staff struggled to hit their daily backup SLA. So they were looking for a modern backup solution that could meet their ever-growing data footprint requirements, meet their backup SLA, and align with their strategy of moving to the cloud.

How Azure Backup helped

Somerset deployed Azure Backup Server to back up diverse workloads, which included VMware virtual machines, Microsoft SQL Server databases, files/folders, system state for Windows Server, and Exchange. With Azure Backup’s seamless integration with Azure, Somerset was able to minimize the on-premises backup footprint by keeping the data locally only for a day and then moving it to Azure, where it’s retained for 30 days. They implemented a 1-Gbps Azure ExpressRoute connection, 70 percent of which is utilized for backup and restore, helping them to meet their backup SLAs.

They found the UI to be quite simple and intuitive, which helped increase IT efficiency. “System and file recovery is a much simpler and faster process with our Azure solution,” says Cridland. “This simplicity is critical because, in a true disaster recovery event, you don't want to be wasting time trying to understand things.” The simple UI combined with minimal on-premises infrastructure also helped to improve agility, as they can now rapidly scale their backup environment up or down. With the new solution it takes less than a week to deploy a new backup server, which previously took months. “With our Azure hybrid cloud solution, we can now meet increasing government requirements for disaster recovery and business continuity,” explains Cridland.

Azure Backup Server comes with a pay-as-you-go model. Moreover, with minimum on-premises infrastructure they only need to provision the resources required today—versus paying for hardware that was sized to support future requirements. This helped them significantly reduce their costs. “Compared with our previous backup solution, our Azure solution costs significantly less to run,” says Andy Kennel, ICT Operations Manager at Somerset. “That, combined with the capital savings associated with the StorSimple devices, have produced a six-digit cost savings.”

The Somerset IT team works closely with Microsoft product teams, which is a win-win situation: Microsoft helps them optimize their backups and in turn they provide feedback. “As we continue our digital transformation, wide aspects of our infrastructure are changing, and so are our processes and personnel. Navigating all this with a company like Microsoft is paying dividends. We have a more cohesive IT solution and we can discuss technology roadmaps with product teams from a technical and business perspective. They tell us what’s coming, and we tell them what we need. This kind of dialogue is unique, and it helps us with our long-term strategy, so we can provide better services and leadership for our community,” says Cridland.

Summary

Azure Backup provided a simple and intuitive solution for their diverse workload backups and gives them the agility to back up their ever-growing data to Azure. It also helped them significantly reduce their costs with a pay-as-you-go model, so they don’t need to provision extra storage to meet future requirements.

Source: Azure case study from Somerset County Council

Related links and additional content

Shift Left with SonarCloud Pull Request Integration

One of our DevOps “habits” is to Shift Left and move quality upstream. Including additional validations earlier in the DevOps pipeline means identifying potential issues before they become a problem. For teams using pull requests, catching issues while the PR is active is ideal – the code hasn’t been merged yet, so it’s easy to...

Visual Studio and Unity 2018.1, even better together


The Visual Studio team is excited about the Unity 2018.1 release: It’s the start of a new release cycle packed with great new features like the Scriptable Render Pipeline and the C# Job System. You can read the full blog post by Unity for all the details on what’s new in the 2018.1 release.

First and foremost, we’re thrilled Unity chose Visual Studio as the default editor for both Windows and macOS so that developers get the same great editing and debugging experience in Visual Studio across PC and Mac. With this new release, Visual Studio for Mac is now included in the installer, instead of MonoDevelop on a Mac.

Download and Install Unity dialog showing Visual Studio for Mac

With this release, the Unity scripting runtime now also supports .NET 4.6 APIs and C# 6 by default. This gives you access to modern .NET libraries, SDKs, and tools. All Visual Studio products are already fully compatible with the new runtime for development and debugging.

Configuration dialog showing Unity scripting runtime now also supports .NET 4.6 APIs and C#6 by default
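
To give a feel for what the new runtime enables, here is a small, hypothetical Unity script using C# 6 features such as expression-bodied members, string interpolation, and nameof:

using UnityEngine;

public class SpawnLogger : MonoBehaviour // hypothetical example script
{
    public int spawnCount = 3;

    // Expression-bodied property (C# 6)
    string Description => $"{nameof(SpawnLogger)} on {gameObject.name}";

    void Start()
    {
        // String interpolation (C# 6) instead of string.Format
        Debug.Log($"{Description} will spawn {spawnCount} objects");
    }
}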

On top of this, additional features are supported thanks to the new runtime:

  • You can set the next statement while debugging (Ctrl+⇧+F10 on Windows, ⌘+⇧+F10 on macOS).
  • Improved support for evaluation (especially for generics, nullables, and collections).
  • Support for the DebuggerHidden and DebuggerStepThrough attributes (see the sketch after this list).
  • Better debugging performance and stability.
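
The DebuggerHidden and DebuggerStepThrough attributes mentioned above live in System.Diagnostics and mark code the debugger should skip over. A minimal sketch (the class and method names are made up):

using System.Diagnostics;
using UnityEngine;

public class DebuggingDemo : MonoBehaviour
{
    void Update()
    {
        LogFrame();
    }

    // With the new runtime, the debugger steps over this helper instead of into it.
    [DebuggerStepThrough]
    void LogFrame()
    {
        Debug.Log($"Frame {Time.frameCount}");
    }
}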

As you can see in the screenshot above, two .NET profiles are now available:

  • The .NET Standard 2.0 profile is your best choice for cross-platform and optimized build size.
  • The .NET 4.x profile gives you access to the full API, suitable for backwards compatibility.

You can learn more about the updated runtime support in this recent Unity blog post.

We’ve worked closely with Unity on this release and we are very pleased that all Unity developers can now benefit from a more reliable and faster programming and debugging experience with Visual Studio!

Jb Evain, Principal Software Engineer Manager
@jbevain

Jb runs the Visual Studio Tools for Unity experience. He has a passion for developer tools and programming languages, and has been working in developer technologies for over a decade.
