Quantcast
Channel: Category Name
Viewing all 10804 articles
Browse latest View live

Announcing F# 4.5 Preview

$
0
0

We’re very excited to announce that we’ll be shipping a new language version of F# soon. The version will be F# 4.5.

F# 4.5 has been developed entirely via an open RFC (requests for comments) process, with significant contributions from the community, especially in feature discussions and demonstrating use cases. You can view all RFCs that correspond with this release:

This post will talk a little bit about some of the features in this new language version.

Get started

First, install:

If you create a .NET desktop F# project in Visual Studio (from the F# desktop development component), then you will need to update your FSharp.Core package to 4.5.1 with the NuGet UI.

When .NET Core SDK 2.1.400 and Visual Studio 2017 version 15.8 are released, the referenced FSharp.Core will be 4.5.1 for all new projects and you will not need to perform this second step.

Versioning Alignment

The first thing you may notice about F# 4.5 is that it’s higher than F# 4.1. But you may not have expected it to be four decimal places higher! The following table details this change for F# 4.5:

F# language version FSharp.Core binary version FSharp.Core NuGet package
Old world F# 4.1 4.4.1.0 4.x.y, where x >= 2 for .NET Standard support
New world F# 4.5 4.5.x.0 4.5.x

There is a good reason for this change. As you can see, prior to F# 4.5, each item has a different version! The reasons for this are historical, but the end result is that it has been horribly confusing for F# users for a long time. So, we decided to clear it up.

From here on out, the major and minor versions for these three assets will be synced, as per the RFC we wrote detailing this change.

Side-by-side F# compilers deployed by Visual Studio

In previous versions of Visual Studio, you may have noticed that the F# Compiler SDK was bumped from 4.1 to 10.1. This is because the F# compiler evolves more rapidly than the language, often to fix bugs or improve performance. In the past, this was effectively ignored, with the compiler SDK still containing the same version number as the language version. This inaccurate representation of artifacts on your machine drove an effort to separate the versioning of the compiler and tools separately from the language they implement. As per the RFC detailing this change, the F# compiler and tools will use semantic versioning.

Additionally, the F# compiler SDK used to install as a singleton on your machine. If you had multiple side-by-side Visual Studio 2017 versions installed on your machine, the tools for all installations would pick up the same F# compiler, regardless of if they were ever intended to use that compiler. This means that something like using both release and preview version of Visual Studio could silently opt-in your release build of Visual Studio to use preview bits of F#! This resulted in multiple issues where users were confused about the behavior on their machine.

For F# 4.5, the compiler SDK version will be 10.2. When installed with Visual Studio, this will be fully side-by-side with that version of Visual Studio. Additional Visual Studio installations will not pick this higher compiler SDK version.

Accounting for this change on Windows build servers:

You may be doing one of the following things to install F# on a Windows build server:

  • Installing the full Visual Studio IDE
  • Installing the F# Compiler SDK MSI

Neither of these options have been recommended for some time, but are still available with F# 4.1.

For using F# 4.5 in a Windows build server, we recommend (in order of preference), Using the .NET SDK, the FSharp.Compiler.Tools package, or the Visual Studio Build Tools SKU. This change will be documented in the official F# docs and the F# Software Foundation guides page by the time F# 4.5 is out of preview.

Span support

The largest piece of F# 4.5 is a feature set aligned with the new Span feature in .NET Core 2.1. The F# feature set is comprised of:

  • The voidptr type.
  • The NativePtr.ofVoidPtr and NativePtr.toVoidPtr functions in FSharp.Core.
  • The inref<'T> and outref<'T>types, which are readonly and write-only versions of byref<'T>, respectively.
  • The ability to produce IsByRefLike structs (examples of such structs: Span<'T>and ReadOnlySpan<'T>).
  • The ability to produce IsReadOnly structs.
  • Implicit de-reference of byref<'T> and inref<'T> returns from functions and methods.
  • The ability to write extension methods on byref<'T>, inref<'T>, and outref<'T>(note: not optional type extensions).
  • Comprehensive safety checks to prevent unsoundness in your code.

The main goals for this feature set are:

  • Offer ways to interoperate with and product high-performance code in F#.
  • Full parity with .NET Core performance innovations.
  • Better code generation, especially for byref-like constructs.

What this boils down into is a feature set that allows for safe use of performance-oriented constructs in a very restrictive manner. When programming with these features, you will find that they are far more restrictive than you might initially anticipate. For example, you cannot define an F# record type that has a Span inside of it. This is because a Span is a “byref-like” type, and byref-like types can only contained in other byref-like types. Allowing such a thing would result in unsound F# code that would fail at runtime! Because of this, we implement strict safety checks in the F# compiler to prevent you from writing code that is unsound.

If you’ve been following along with the Span<'T> and ref work in C# 7.3, a rough syntax guide is as follows:

C# F#
out int arg arg: byref<int>
out int arg arg: outref<int>
in int arg arg: inref<int>
ref readonly int Inferred or arg: inref<int>
ref expr &expr

The following sample shows a few ways you can use Span<'T> with F#:

Safety rules for byrefs

As previously mentioned, byrefs and byref-like structs are quite restrictive in how they can be used. This is because the goal of this feature set is to make low-level code in the style of pointer manipulation safe and predictable. Doing so is only possible by restricting usage of certain types to appropriate contexts and performing scope analysis on your code to ensure soundness.

A quick summary of some of the safety rules:

  • A let-bound value cannot have its reference escape the scope it was defined in.
  • byref-like structs cannot be instance or static members of a class or normal struct.
  • byref-like structs cannot by captured by any closure construct.
  • byref-like structs cannot be used as a generic type parameter.

As a reminder, Span<'T> and ReadOnlySpan<'T> are byref-like structs and are subject to these rules.

Bug fixes that are not backwards compatible

There are two bugs fixes as a part of this feature set that are not backwards compatible with F# 4.1 code that deals with consuming C# 7.x ref returns and performs “evil struct replacement”.

Implicit dereference of byref-like return values

F# 4.1 introduced the ability for F# to consume byref returns. This was done strictly for interoperation with C# ref returns, and F# could not produce such a return from F# constructs.

However, in F# 4.1, these values were not implicitly dereferenced in F# code, unlike how they were in equivalent C#. This meant that if you attempted to translate C# code that consumed a ref return into equivalent F#, you’d find that the type you got back from the call was a pointer rather than a value.

Starting with F# 4.5, this value is now implicitly dereferenced in F# code. In addition to bringing this feature set in line with C# behavior, this allows for assignment to byref returns from F# functions, methods, and properties as one would expect if they had learned about this feature with C# 7 and higher.

To avoid the implicit dereference, simply apply the & operator to the value to make it a byref.

Disabling evil struct replacement on immutable structs

F# 4.1 (and lower) had a bug in the language where an immutable struct could define a method that completely replaced itself when called. This so-called “evil struct replacement” behavior is considered a bug now that F# has a way to represent ReadOnly structs. The this pointer on a struct will now be an inref<MyStruct>, and an attempt to modify the this pointer will now emit an error.

You can learn more about the full design and behavior of this feature set in the RFC.

New keyword: match!

Computation Expressions now support the `match!` keyword, shortening somewhat common boilerplate existing in lots of code today.

This F# 4.1 code:

Can now be written with match! in F# 4.5:

This feature was contributed entirely by John Wostenberg in the F# OSS community. Thanks, John!

Relaxed upcast requirements with yield in F# sequence, list and array expressions

A previous requirement to upcast to a supertype when using yield used to be required in F# sequence, list, and array expressions. This restriction was already unnecessary for these expressions since F# 3.1 when not using yield, so this makes things more consistent with existing behavior.

Relaxed indentation rules for list and array expressions

Since F# 2.0, expressions delimited by ‘}’ as an ending token would allow “undentation”. However, this was not extended to array and list expressions, thus resulting in confusing warnings for code like this:

The solution would be to insert a new line for the named argument and indent it one scope, which is unintuitive. This has now been relaxed, and the confusing warning is no more. This is especially helpful when doing reactive UI programming with a library such as Elmish.

F# enumeration cases emitted as public

To help with profiling tools, we now emit F# enumeration cases as public under all circumstances. This makes it easier to analyze the results of running performance tools on F# code, where the label name holds more semantic information than the backing integer value. This is also aligned with how C# emits enumerations.

Better async stack traces

Starting with F# 4.5 and FSharp.Core 4.5.0, stack traces for async computation expressions:

  • Reported line numbers now correspond to the failing user code
  • Non-user code is no longer emitted

For example, consider the following DSL and its usage with an FSharp.Core version prior to 4.5.0:

Note that both the f1 and f2 functions are called twice. When you look at the result of this in F# Interactive, you’ll notice that stack traces will never list names or line numbers that refer to the actual invocation of these functions! Instead, they will refer to the closures that perform the call:

This was confusing in F# async code prior to F# 4.5, which made diagnosing problems with async code difficult in large codebases.

With FSharp.Core 4.5.0, we selectively inline certain members so that the closures become part of user code, while also selectively hiding certain implementation details in relevant parts of FSharp.Core from the debugger so that they don’t accidentally muddy up stack traces.

The result is that names and line numbers that correspond to actual user code will now be present in stack traces.

To demonstrate this, we can apply this technique (with some additional modifications) to the previously-mentioned DSL:

When ran again in F# Interactive, the printed stack trace now shows names and line numbers that correspond to user calls to functions, not the underlying closures:

As mentioned in the RFC for this feature, there are other problems inherent to the space, and other solutions that may be pursued in a future F# version.

Additional FSharp.Core improvements

In addition to the improved Async stack traces, there were a small number of improvements to FSharp.Core.

  • Map.TryGetValue (RFC)
  • ValueOption<'T> (RFC)
  • FuncConvert.FromFunc and FuncConvert.FromAction APIs to enable accepting Func<’A, ‘B> and Action<’A, ‘B> instances from C# code. (RFC)

The following F# code demonstrates the usage of the first two:

The FuncConvert API additions aren’t that useful for F#-only code, but they do help with C# to F# interoperability, allowing the use of “modern” C# constructs like Action and Func to convert into F# functions.

The road to official release

This preview is very, very stable. In fact, after extensive testing, we feel that it’s stable enough for us to consider it a proper release, but due to the timing of the .NET SDK and Visual Studio releases, we’re releasing it now as a preview. Soon, when Visual Studio 2017 update 15.8 and the corresponding .NET Core 2.1 SDK update release, we will declare F# 4.5 as fully released and it will be fully included in both places.

Cheers, and happy coding!


SQL Server on Linux or in Docker plus cross-platform SQL Operations Studio

$
0
0

imageI recently met some folks that didn't know that SQL Server 2017 also runs on Linux but they really needed to know. They had a single Windows desktop and a single Windows Server that they were keeping around to run SQL Server. They had long-been a Linux shop and was now fully containerzed...except for this machine under Anna's desk. (I assume The Cloud is next...pro tip: Don't have important servers under your desk). You can even get a license first and decide on the platform later.

You can run SQL Server on a few Linux flavors...

or, even better, run it on Docker...

Of course you'll want to do the appropriate volume mapping to keep your database on durable storage. I'm digging being able to spin up a full SQL Server inside a container on my Windows machine with no install.

I've got Docker for Windows on my laptop and I'm using Shayne Boyer's "Docker Why" repo to make the point. Look at his sample DockerCompose that includes both a web frontend and a backend using SQL Server on Linux.

version: '3.0'

services:

mssql:
image: microsoft/mssql-server-linux:latest
container_name: db
ports:
- 1433:1433
volumes:
- /var/opt/mssql
# we copy our scripts onto the container
- ./sql:/usr/src/app
# bash will be executed from that path, our scripts folder
working_dir: /usr/src/app
# run the entrypoint.sh that will import the data AND sqlserver
command: sh -c ' chmod +x ./start.sh; ./start.sh & /opt/mssql/bin/sqlservr;'
environment:
ACCEPT_EULA: 'Y'
SA_PASSWORD: P@$$w0rdP@$$w0rd

Note his starting command where he's doing an initial population of the database with sample data, then running sqlservr itself. The SQL Server on Linux Docker container includes the "sqlcmd" command line so you can set up the database, maintain it, etc with the same command line you've used on Windows. You can also configure SQL Server from Environment Variables so it makes it easy to use within Docker/Kubernetes. It'll take just a few minutes to get going.

Example:

/opt/mssql-tools/bin/sqlcmd -S localhost -d Names -U SA -P $SA_PASSWORD -I -Q "ALTER TABLE Names ADD ID UniqueIdentifier DEFAULT newid() NOT NULL;"

I cloned his repo (and I have .NET Core 2.1) and did a "docker-compose up" and boom, running a front end under Alpine and backend with SQL Server on Linux.

101→ C:Usersscott> docker ps

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e5b4dae93f6d namesweb "dotnet namesweb.dll" 38 minutes ago Up 38 minutes 0.0.0.0:57270->80/tcp, 0.0.0.0:44348->443/tcp src_namesweb_1
5ddffb76f9f9 microsoft/mssql-server-linux:latest "sh -c ' chmod +x ./…" 41 minutes ago Up 39 minutes 0.0.0.0:1433->1433/tcp mssql

Command lines are nice, but SQL Server is known for SQL Server Management Studio, a nice GUI for Windows. Did they release SQL Server on Linux and then expect everyone use Windows to manage it? I say nay nay! Check out the cross-platform and open source SQL Operations Studio, "a data management tool that enables working with SQL Server, Azure SQL DB and SQL DW from Windows, macOS and Linux." You can download SQL Operations Studio free here.

SQL Ops Studio is really impressive. Here I am querying SQL Server on Linux running within my Docker container on my Windows laptop.

SQL Ops Studio - Cross platform SQL management

As I'm digging in and learning how far cross-platform SQL Server has come, I also checked out the mssql extension for Visual Studio Code that lets you develop and execute SQL against any SQL Server. The VS Code SQL Server Extension is also open source!

Go check it SQL Server in Docker at https://github.com/Microsoft/mssql-docker and try Shayne's sample at https://github.com/spboyer/docker-why


Sponsor: Scale your Python for big data & big science with Intel® Distribution for Python. Near-native code speed. Use with NumPy, SciPy & scikit-learn. Get it Today!



© 2018 Scott Hanselman. All rights reserved.
     

Bing Maps Routing API knows the shortest route that visits all waypoints!

$
0
0

Often referred to as ‘the traveling salesman problem’, planning how to most efficiently travel to a number of stops is an age-old problem. It’s difficult enough to optimize the stops for a single driver or delivery person’s day, and the challenge compounds quickly when trying to optimize routes for a fleet of drivers, your mobile salesforce, or a large team of repair personnel in the field.

A powerful addition to the Bing Maps routing API is now available to help! With waypoint optimization you can pass in up to 25 waypoints (stops) and they will be re-ordered and optimized to minimize travel time or distance.

Let’s look at a real-world use case to better understand the benefits of route optimization. Let’s say you run a delivery business in Kansas City. Your day begins downtown and ends in North Kansas City. Along the way you have 6 stops to make. The Route shown here in the first map was calculated based on the order the stops were passed in. That’s 131 miles and 2 hours 54 minutes of driving. In the second map, we see the result of specifying the ‘optimizeWaypoints=true' flag on the API call. Much better! We just saved 27 miles and 30 minutes of driving.

Before optimization

Bing Maps Route Optimization

After optimization

Bing Maps Route Optimization

The route optimization performed is based on actual road-distance, which will yield much more accurate results compared to algorithms that cut corners and use simple straight-line ‘crow flies’ distance between stops. Geographic constraints like rivers, lakes and divided highways can really throw these calculations off. Consider the simple route below. The straight-line distance from A to B is less than half a mile. But that body of water sitting between them makes the actual driving distance 2 miles. For enterprise-grade results, you definitely want to avoid any route optimization that use straight-line distance calculations behind the scenes!

Bing Maps Route Optimization

Now you’re probably thinking, this is great, but how much is this added functionality going to cost me? Here’s the best news – there is no additional charge for an optimized route! Each API call to the routing service is 1 billable transaction, whether the optimize flag is specified or not. And that includes passing in up to 25 waypoints for optimization. For requirements of more than 25 waypoints, please contact Bing Maps team at maplic@microsoft.com.

If you already know which stops a given driver in your fleet is going to make, this new optimization capability can bring big cost and time savings to your organization. But what if you have dozens of drivers needing to make 100 stops? Which stops should be assigned to which drivers to provide each driver with the most efficient itinerary?

The Bing Maps Distance Matrix API is ideal for solving this complex logistics problem. You can use it to compute optimal travel time and distances from all origins to destinations at scale, and then combing with optimization algorithms find the best stops for each driver to make. You can then use the waypoint optimization function to find the best driving directions with turn by turn instructions for each driver’s itinerary.

Learn more about the Distance Matrix API on our website and check out the developer docs. Also, for code samples, go to GitHub.

Note: If you are using the Bing Maps JavaScript control for route calculation, these new optimization parameters are not yet exposed there. We are working on that and will have news here on the blog when it is ready, but for now these route optimization capabilities are only in our REST API.

A great place to get started is with the documentation for the Route API. And this quick tutorial will show you how to create the REST URL optimizing 5 stops along a route.

- Bing Maps Team

aRt with code

$
0
0

Looking for something original to decorate your wall? Art With Code, created by Harvard University bioinformatician Jean Fan, provides a collection of R scripts to generate artistic images in the style of famous artworks, for example this randomly-generated piece in the style of Mondrian:

Mondrian

Other art generators include "Tunnel" (rotated and scaled designs in the style of Päivi Julin), Pointillism (sylizes photographs as a dots), and Pixel (an interesting multi-scale filter for photos). The transformations using only the built-in R grid and lattice packages, which made it easy for me to try Pixel on a photo of my own dog, Max:

Max-pixel.jpg

You can find examples of all of these styles and the R scripts used to generate them at the link below.

Jean Fan: Art With Code  (via the author)

Because it’s Friday: Street Orientation

$
0
0

Most cities in the US have a grid-based street structure. But it's rarely a perfect grid: sometimes the vagaries of history, geography, or convenience lead to deviations from right angles. And sometimes, rival urban planners simply disagree:

When two city planners hate each other: pic.twitter.com/LSb7k8KoaW

— Wilson 🏳️‍🌈🌹 (@the_sidecarist) March 8, 2018

Geoff Boeing, an urban planning postdoc at UC Berkeley, developed an interesting way to visualize the "griddedness" of cities, by summarizing at the distribution of street segments in a polar plot (you can see the Python code behind the charts here):

City-streets

I was surprised to see Seattle with such a regular north-south/east-west distribution, whereas in fact each Seattle neighborhood has its one grid aligned to its own orientation (often dictated by the shape of the local coastline). But of course the polar chart depends on how much of the city you include, and I suppose the one above includes just the downtown streets. But you can create your own charts using any zone you like, using this online tool by Randy Olson:

Seattle

That's all for the blog for this week. Have a great weekend, and we'll be back on Monday. See you then!

 

Top Stories from the Microsoft DevOps Community – 2018.07.27

$
0
0
For many of us in the northern hemisphere, things are really heating up — both the temperature and the move into DevOps in the cloud. This week we saw some great posts on DevOps adoption, including cloud migration, moving to VSTS from on-premises TFS, and modernizing workflows with Azure DevOps Projects. Add DevOps To Your... Read More

How to upgrade your financial analysis capabilities with Azure

$
0
0

In corporate finance and investment banking, risk analysis is a crucial job. To assess risk, analysts review research, monitor economic and social conditions, stay informed of regulations, and create models for the investment climate. In short, the inputs into an analysis make for a highly complex and dynamic calculation, one that requires enormous computing power. The vast number of calculations and the way the math is structured typically allows for high degrees of parallelization across many separate processes. To satisfy such a need, grid computing employs any number of machines working together to execute a set of parallelized tasks — which is perfect for risk analysis. By using a networked group of computers that work together as a virtual supercomputer, you can assemble and use vast computer grids for specific time periods and purposes, paying, only for what you use. Also, by splitting tasks over multiple machines, processing time is significantly reduced to increase efficiency and minimize wasted resources.

image
The Azure Industry Experiences team has recently authored two documents to help those involved in banking scenarios. We show how to implement a risk assessment solution that takes advantage of cloud grid computing technologies.

The first document is a short overview for technical decision makers, especially those considering a burst-to-cloud scenario. The second is a solution guide. It is aimed at solution architects, lead developers and others who want a deeper technical illumination of the strategy and technology.

Recommended next steps

  1. Read the Risk Grid Computing Overview
  2. Read the Risk Grid Computing Solution Guide

Azure Monitor: Route AAD Activity Logs using diagnostic settings

$
0
0

Today in partnership with the Azure Active Directory (AAD) team we are excited to announce the public preview of AAD Activity Logs using Azure Monitor diagnostic settings. Azure Monitor diagnostic settings enable you to stream log data from an Azure service to three destinations: an Azure storage account, an Event Hubs namespace, and/or a Log Analytics workspace. This allows you to easily route logs from any Azure service to a data archive, SIEM tool, or custom log processing tool. With today’s announcement, you will now be able to route your AAD audit and sign in logs to these same destinations, centralizing all of your Azure service logs in one pipeline.

Until now, all log data handled by Azure Monitor came from an Azure resource deployed within an Azure subscription. We often describe this type of data as “resource-level log data,” and it is configured using a resource diagnostic setting. AAD log data is the first type of log data from a tenant-level service made available through Azure Monitor. Tenant-level services aren’t deployed as resources within an Azure subscription, rather they function across an entire AAD tenant. To handle this new type of “tenant-level log data,” Azure Monitor has introduced a new type of diagnostic setting, a tenant diagnostic setting. For AAD logs, you can setup a tenant diagnostic setting by navigating to Audit Logs in the AAD area of the portal and clicking “Export Data Settings.”

image

This will pull up the familiar Azure Monitor diagnostic setting experience, where you can create, modify, or delete diagnostic settings.

image

image To learn more about the feature and get started, check out Alex Simons’s post on the Enterprise Mobility and Security blog. Please also be aware that during the public preview AAD activity logs cannot yet be routed to Log Analytics, but we are working to enable this by October 2018. For a full list of services that expose logs through Azure Monitor, visit our documentation.


Announcing public preview of Azure IoT Hub manual failover feature

$
0
0

Today, we are announcing the public preview offering for Manual failover. This feature is part of IoT Hub cloud service and allows customers to failover an IoT hub instance from its primary Azure region to its corresponding geo-paired region.

Failures are possible in any software system especially when it's a distributed system and it is important to plan for failures. The IoT Hub service implements redundancies at various layers of its implementation to safeguard its customers against transient failures and failures scoped within a datacenter. The service offers a high level of SLA using these redundancies. However, region wide failures or extended outages, although remote are still possible.

The IoT Hub service provides cross regional automatic disaster recovery as a default mitigation for such failures. The recovery time objective for this recovery process is 2 – 26 hours. IoT solutions which cannot afford to be down for so long, can now use the IoT Hub manual failover feature to failover their IoT hubs from one region to another in a self-serve manner. The recovery time objective for IoT Hub manual failover is 10 min – 2 hours.

More details about this feature can be found in the article that outlines the high availability and disaster recovery features of the IoT Hub service. Use the How-to guide for manual failover as a step-by-step guide to perform manual failover for your hub. You can also check out the Internet of Things Show on Channel 9 to learn more about this feature.

Azure cloud business value for retail and consumer goods explained

$
0
0

For brick and mortar retailers, the world has been overturned. Online retailers have been demolishing their market share and icons of commerce are struggling. But what helped online retailers can help the offline. The cloud can also be used by brick and mortar retailers. In fact, the brick and mortar experience, transformed with cloud technology, can be a real advantage in competition with online only.

Reasons for retailers and consumer brands to move to the cloud

Cloud technologies are enabling new capabilities and those new powers are disrupting the business models of traditional retailers and sellers of consumer goods. The cloud is at the heart of digital transformation.

  • It is changing the way technology is implemented and managed.
  • It offers the benefit of massive scale, increased business speed, and organizational agility.
  • It makes possible economic benefits related to variable expense, maintenance and deployment.
  • It enables seamless consumer experiences between offline and online.
  • It encourages differentiated experiences that wow customers.

Now you have the key to competing in today’s landscape. For these reasons, it is no longer a question of “if,” but “when” and “how” to move to the cloud for most brands.

179697361

Business value of the cloud

Born-in-the-cloud retailers are entering the marketplace by solving long-standing consumer challenges in new and innovative ways. Modern technology capabilities allow them to accelerate benefits to both the consumer and business objectives. These new experiences raise the bar on what’s possible. They elevate consumers’ expectations by delivering relevancy and convenience, often at a fraction of the ecosystem footprint of long-standing retailers.

Each organization’s journey to the cloud will be unique. There will be a variety of reasons and benefits that should be acknowledged. However, here are the four major categories for cloud business value: cost, agility, performance, and new sources of value.

Evolved cost structure and transparency

Innovation doesn’t stop because of an organization’s budgeting cycle. Your internal processes should not impact your speed and agility to deliver improved experiences to your consumers.  If it does, as a leader you should add those processes to your list of things to evolve.

The cloud enables and encourages a continuous planning approach. It allows you to reap the full benefits of the cloud despite the traditional annual budgeting cycles. The dominant conversation related to cost becomes the shift from CapEx to OpEx. This fundamentally changes how organizations budget and pay for technology. Since fixed costs associated with shared infrastructure are distributed, the cloud enables greater visibility into the true cost of individual applications. The shift to variable expense offers the organization the ability to begin executing more quickly. And the organization becomes more agile through a fail-fast approach, especially given the lower barrier to initiatives. This enables you to experiment and deliver new concepts to customers. And for some brands, the ability to continually test and learn before committing to significant investments is extremely valuable. Especially when determining the relevancy of the offer and viability of the concept.

Improved agility, speed and productivity

Developing and deploying via on-premises infrastructures (datacenters) can take weeks to months. The cloud provides greater agility and speed-to-consumer. Development teams can be more productive and can quickly develop services that reach global markets. Azure offers near-instant provisioning, allowing projects to move quickly without the need to over-provision resources. As an added bonus, infrastructure planning costs disappear.

The flexibility of the cloud enables organizations to deploy new approaches more effectively. It lets you deliver value to customers and productivity to the organization. Profits accrue with the adoption of agile software development methodologies, DevOps, CI/CD, and modern SOA and PaaS-based architectures.

Azure cloud, made to order

Azure is designed with the developer in mind. Applications can be built with the language of choice, including Node.js, Java, and .NET. Development tools are available for PC or MAC. Visual Studio and Visual Studio Code are premier environments with built-in features for Azure. For example, mobile app development is accelerated by integrating the development lifecycle with Visual Studio App Center. Features include automated builds, and testing for cross-platform, hybrid, and native apps on iOS and Android.

Most compliant

Azure’s infrastructure has been developed to support global demand. Azure is available in 54 global Azure regions, more than any cloud provider. Azure has 70+ compliance offerings—the largest portfolio in the industry. Azure meets a broad set of international and industry-specific compliance standards, such as General Data Protection Regulation (GDPR), as well as country-specific standards, including Australia IRAP, UK G-Cloud, and Singapore MTCS. See Compliance for Microsoft cloud services.

Security matters

Security is essential to you and your customers. Here is a short list of how Azure offers improvements in reliability and security over on-premises infrastructure.

  • The Azure Security Center spans on-premises and cloud workloads. From a single dashboard, you can monitor and manage all of your resources.
  • The Azure Advisor is a free service that gives you the best advice based on the most current data. Azure Active Directory helps you to manage user identities and create intelligence-driven access policies to secure your resources.
  • Site Recovery gives you some assurance that you can recover from a disaster.
  • Individual services have security features. For example, see the security features of Azure SQL Database.

Possibilities to wow customers

The cloud enables unlimited computing scale and storage while removing boundaries. This freedom is a distinct advantage over on-premises infrastructure. This opens a wealth of new opportunities. It frees your organization’s creatives. They can imagine, prototype, and deliver new experiences that wow customers, leading to new business model opportunities.

These cloud capabilities, plus the availability of data and digital networks, provide an opportunity for modern technologies such as artificial intelligence, IoT, machine learning, and AR/VR to thrive. These technologies enable you to innovate and experiment. This leads to competitive advantages, many of which are only available in the cloud, and that are cost-prohibitive if implemented on-premises.

This is where it gets exciting for retail and consumer goods brands who are focused on delivering new and/or improved digital experiences. The cloud opens possibilities as new data signals are captured and used to provide insights fuelled with artificial intelligence.

Recommended next steps

Now that we discussed the business value of the cloud, it’s time to explore the platform and tooling that makes up Microsoft’s Azure cloud.  With a strong cloud foundation, you can start exploring the various industry options.

I post regularly about new developments on social media.  If you would like to follow me you can find me on Linkedin and Twitter.

Responsibilities of a partner/system integrator in managing Azure Cosmos DB

$
0
0

Note that this post was co-authored by Arvind Rao and Sneha Gunda.

Many customers are using Azure Cosmos DB all around the world. This article lists the actions a partner can perform in different areas of Azure Cosmos DB such as security, performance management, and more.

Security

Data security is a shared responsibility between the customer, and the database provider. Depending on the database provider, the amount of responsibility a customer carries can vary. If the customer chooses a PaaS cloud database provider such as Azure Cosmos DB, the workload to manage security reduces considerably. However there are some areas where the partner can add value by implementing security best practices offered by Azure Cosmos DB to help customer prevent, detect, and respond to database breaches.

Role of a partner

  • The partner has an opportunity to play database administrator (DBAs) role and help manage databases, collections, users, and permissions.
  • The partner can facilitate primary key rotation process to keep the connection to Azure Cosmos DB accounts secure.
  • The partner can create/manage user resources and permissions of the databases by using the master key for the account.
  • The partner can help configure and troubleshoot the IP access control policy.
  • The partner can help review the diagnostic logs to find any security issues.

Monitoring

Monitoring is the next focused area after customer start using one or more Azure Cosmos DB databases. Azure Cosmos DB provides full suite of metrics to monitor throughput, storage, availability, latency, and consistency of the databases.

Role of a partner

  • The partner can help monitor health of a database by using performance metrics.
  • The partner can help setup appropriate alerts to report health anomalies.
  • The partner can help configure diagnostic logging.
  • The partner can help review the logs for security anomalies.
  • The partner can help review the indexes and queries to provide appropriate changes.

Performance management

Performance Management focuses on managing required throughput and storage distribution across partitions, customizing index policies of a collection, query analysis and client metric analysis, and more.

Role of a partner

  • The partner can help improve the performance of a database by researching the throttling issues and applying the right fix like re-partitioning or adjusting the index policies.
  • The partner can help debugging slow client response or long running queries with the help of metrics and diagnostic logging information.
  • The partner can help review the consistency model and suggest appropriate consistency level for the server side to get more throughput.
  • The partner can help review TTL configuration and suggest appropriate configuration to choose a right size for the container.
  • The partner can help analyze the local demand to expand the databases to other regions to provide low latency reads.
  • The partner can provide guidance on the real-time integration or notification for any changes to Azure Cosmos DB.

Back up, restore, and business continuity

Azure Cosmos DB provides automatic online backup and supports explicit as well as policy driven failovers that allow customer to control the end-to-end system behavior in the event of failures.

Role of a partner

  • Azure Cosmos DB takes snapshots of your data for every four hours at the partition level. At any given time, only the last two snapshots are retained. The partner can help maintain additional backup snapshots if needed.
  • The partner can help restore a database from an online backup.
  • The partner can help address data corruption issues.
  • The partner can help by configuring automatic failovers.
  • The partner can perform manual failovers if needed.

Plan your next trip – Customize Bing itineraries to make them your own

$
0
0

Just as every person is unique, we know every trip is unique.  A one-size-fits-all solution to travel is a start, but a great trip is one that caters to your interests.

Earlier this year we introduced itineraries on Bing Maps for popular travel destinations (only available for US and UK users at this time).  We are excited to announce that you can now customize these itineraries to make them your own.

To try it out, go to Bing Maps on your desktop browser and search for itineraries in the destination of your choice or a specific itinerary, for example 4 day New York itinerary.  From there you can…

  • Add the attractions you want to visit
  • Remove or re-order attractions to optimize your day
  • Add or remove days to fit your schedule
  • Save your itinerary to My Places for future editing
  • Share your itinerary with friends and family
  • Take your itinerary on the go – view it on your mobile phone

Pro tip: Use the same link we give you to share with friends and family as a convenient shortcut to your itinerary on your mobile device.

4 day new york itinerary

Edit itinerary - My Places

Edit Itinerary

These new itinerary capabilities are available today.  We'll be working hard over the coming months to expand our travel planning offering on Bing so your input is greatly appreciated.  If you have suggestions or feedback, share them using the Feedback link on the page.

- The Bing Team

A Certification for R Package Quality

$
0
0

Cii_badge-300x300There are more than 12,000 packages for R available on CRAN, and many others available on Github and elsewhere. But how can you be sure that a given R package follows best development practices for high-quality, secure software?

Based on a recent survey of R users related to challenges in selecting R packages, the R Consortium now recommends a way for package authors to self-validate that their package follows best practices for development. The CII Best Practices Badge Program, developed by the Linux Foundation's Core Infrastructure Initiative, defines a set of criteria that open-source software projects should follow for quality and security. The criteria relate to open-source license standards, documentation, secure development and delivery practices, version control and bug reporting practices, build and test processes, and much more. R packages that meet these standards are a signal to R users that can be relied upon, and as the standards document notes:

There is no set of practices that can guarantee that software will never have defects or vulnerabilities; even formal methods can fail if the specifications or assumptions are wrong. Nor is there any set of practices that can guarantee that a project will sustain a healthy and well-functioning development community. However, following best practices can help improve the results of projects. For example, some practices enable multi-person review before release, which can both help find otherwise hard-to-find technical vulnerabilities and help build trust and a desire for repeated interaction among developers from different organizations.

Developers can self-validate their packages (free of charge) using an online tool to check their adherence to the standards and get a "passing", "silver" or "gold" rating. Passing projects are eligible to display the badge (shown above) as a sign of quality and security, and almost 200 projects qualify as of this writing. That includes a number of R packages that have already gone through the certification process, including ggplot2, R Consortium projects covr and DBI, and the built-in R package Matrix Matrix. For more details on how R packages can get to a passing certification, and the R Consortium survey they led to the recommendation, see the R Consortium blog post at the link below.

R Consortium: Should R Consortium Recommend CII Best Practices Badge for R Packages: Latest Survey Results

 

Announcing TypeScript 3.0

$
0
0
TypeScript 3.0 is here! Today marks a new milestone in the TypeScript journey, serving JavaScript users everywhere.

If you’re unfamiliar with TypeScript, it’s not too late to learn about it now! TypeScript is an extension of JavaScript that aims to bring static types to modern JavaScript. The TypeScript compiler reads in TypeScript code, which has things like type declarations and type annotations, and emits clean readable JavaScript with those constructs transformed and removed. That code runs in any ECMAScript runtime like your favorite browsers and Node.js. At its core, this experience means analyzing your code to catch things like bugs and typos before your users run into them; but it brings more than that. Thanks to all that information and analysis TypeScript can provide a better authoring experience, providing code completion and navigation features like Find all References, Go to Definition, and Rename in your favorite editor.

To get started with the language itself, check out typescriptlang.org to learn more. And if you want to try TypeScript 3.0 out now, you can get it through NuGet or via npm by running

npm install -g typescript

You can also get editor support for

Other editors may have different update schedules, but should all have excellent TypeScript support soon as well.

The 3.0 Journey

When we released TypeScript 2.0, we took a brief look back at how each release leading up to TypeScript 2.0 brought the language to where it is today. Between TypeScript 1.0 and up until 2.0, the language added union types, type guards, modern ECMAScript support, type aliases, JSX support, literal types, and polymorphic this types. If we include TypeScript 2.0 with its introduction of non-nullable types, control flow analysis, tagged union support, this-types, and a simplified model around .d.ts file acquisition, that era truly defined the fundamentals of using TypeScript.

So what have we done since? What, apart from new ECMAScript features like the long-await-ed async/await, generators, and object rest/spread brought us to TypeScript 3.0?

TypeScript 2.1 was a foundational release that introduced a static model for metaprogramming in JavaScript. The key query (keyof), indexed access (T[K]), and mapped object ({ [K in keyof T]: T[K] }) types have been instrumental in better modeling libraries like React, Ember, Lodash, and more.

TypeScript 2.2 and 2.3 brought support for mixin patterns, the non-primitive object type, and generic defaults, used by a number of projects like Angular Material and Polymer. TypeScript 2.3 also shipped a feature for fine-grained control of this types that allowed TypeScript to work well with libraries like Vue, and added the checkJs flag to enable type-checking on JavaScript files.

TypeScript 2.4 and 2.6 tightened up the story for strict checking on function types, addressing some of the longest-standing feedback about our type system through --strictFunctionTypes which enforced contravariance on parameters. 2.7 continued the trend of strictness with --strictPropertyInitialization checks in classes.

TypeScript 2.8 introduced conditional types, a powerful tool for statically expressing decisions based on types, and 2.9 generalized keyof and provided easier imports for types.

Which brings us to TypeScript 3.0! Despite the new big number, 3.0 has few breaking changes (meaning it should be very easy to upgrade) and introduces a new flexible and scalable way to structure your projects, powerful new support for operating on parameter lists, new types to enforce explicit checks, better JSX support, an overall better error UX, and much more!

What’s New?

Project references

It’s fairly common to have several different build steps for a library or application. Maybe your codebase has a src and a test directory. Maybe you have your front-end code in a folder called client with your Node.js back-end code in a folder called server, and each imports code from a shared folder. And maybe you use what’s called a “monorepo” and have many many projects which depend on each other in non-trivial ways.

One of the biggest features that we’ve worked on for TypeScript 3.0 is called “project references”, and it aims to make working with these scenarios easier.

Project references allow TypeScript projects to depend on other TypeScript projects – specifically, allowing tsconfig.json files to reference other tsconfig.json files. Specifying these dependencies makes it easier to split your code into smaller projects, since it gives TypeScript (and tools around it) a way to understand build ordering and output structure. That means things like faster builds that work incrementally, and support for transparently navigating, editing, and refactoring across projects. Since 3.0 lays the foundation and exposes the APIs, any build tool should be able to provide this.

What’s it look like?

As a quick example, here’s what a tsconfig.json with project references looks like:

// ./src/bar/tsconfig.json
{
    "compilerOptions": {
        // Needed for project references.
        "composite": true,
        "declaration": true,

        // Other options...
        "outDir": "../../lib/bar",
        "strict": true, "module": "esnext", "moduleResolution": "node",
    },
    "references": [
        { "path": "../foo" }
    ]
}

There are two new fields to notice here: composite and references.

references simply specifies other tsconfig.json files (or folders immediately containing them). Each reference is currently just an object with a path field, and lets TypeScript know that building the current project requires building that referenced project first.

Perhaps equally important is the composite field. The composite field ensures certain options are enabled so that this project can be referenced and built incrementally for any project that depends on it. Being able to intelligently and incrementally rebuild is important, since build speed is one of the reasons you might break up a project in the first place. For example, if project front-end depends on shared, and shared depends on core, our APIs around project references can be used to detect a change in core, but to only rebuild shared if the types (i.e. the .d.ts files) produced by core have changed. That means a change to core doesn’t completely force us to rebuild the world. For that reason, setting composite forces the declaration flag to be set as well.

--build mode

TypeScript 3.0 will provide a set of APIs for project references so that other tools can provide this fast incremental behavior. As an example, gulp-typescript already leverages it! So project references should be able to integrate with your choice of build orchestrators in the future.

However, for many simple apps and libraries, it’s nice not to need external tools. That’s why tsc now ships with a new --build flag.

tsc --build (or its nickname, tsc -b) takes a set of projects and builds them and their dependencies. When using this new build mode, the --build flag has to be set first, and can be paired with certain other flags:

  • --verbose: displays every step of what a build requires
  • --dry: performs a build without emitting files (this is useful with --verbose)
  • --clean: attempts to remove output files given the inputs
  • --force: forces a full non-incremental rebuild for a project

Controlling output structure

One subtle but incredibly useful benefit of project references is logically being able to map your input source to its outputs.

If you’ve ever tried to share TypeScript code between the client and server of your application, you might have run into problems controlling the output structure.

For example, if client/index.ts and server/index.ts both reference shared/index.ts for the following projects:

src
├── client
│   ├── index.ts
│   └── tsconfig.json
├── server
│   ├── index.ts
│   └── tsconfig.json
└── shared
    └── index.ts

…then trying to build client and server, we’ll end up with…

lib
├── client
│   ├── client
│   │   └── index.js
│   └── shared
│       └── index.js
└── server
    ├── server
    │   └── index.js
    └── shared
        └── index.js

rather than

lib
├── client
│   └── index.js
├── shared
│   └── index.js
└── server
    └── index.js

Notice that we ended up with a copy of shared in both client and server. We unnecessarily spent time building shared twice and introduced an undesirable level of nesting in lib/client/client and lib/server/server.

The problem is that TypeScript greedily looks for .ts files and tries to include them in a given compilation. Ideally, TypeScript would understand that these files don’t need to be built in the same compilation, and would instead jump to the .d.ts files for type information.

Creating a tsconfig.json for shared and using project references does exactly that. It signals to TypeScript that

  1. shared should be built independently, and that
  2. when importing from ../shared, we should look for the .d.ts files in its output directory.

This avoids triggering a double-build, and also avoids accidentally absorbing all the contents of shared.

Further work

To get a deeper understanding of project references and how you can use them, read up more our issue tracker. In the near future, we’ll have documentation on project references and build mode.

We’re committed to ensuring that other tool authors can support project references, and will continue to improve the editing experience around project references. Our intent is for project references to feel as seamless as authoring code with a single tsconfig.json. If you do end up using project references, we’d appreciate any and all feedback to do just that.

Extracting and spreading parameter lists with tuples

We often take it for granted, but JavaScript lets us think about parameter lists as first-class values – either by using arguments or rest-parameters (e.g. ...rest).

function call(fn, ...args) {
    return fn(...args);
}

Notice here that call works on functions of any parameter length. Unlike other languages, JavaScript doesn’t force us to define a call0, call1, call2, etc. as follows:

function call0(fn) {
    return fn();
}

function call1(fn, param1) {
    return fn(param1);
}

function call2(fn, param1, param2) {
    return fn(param1, param2);
}

function call3(fn, param1, param2, param3) {
    return fn(param1, param2, param3);
}

Unfortunately, for a while there wasn’t a great well-typed way to express this statically in TypeScript without declaring a finite number of overloads:

// TODO (billg): 5 overloads should *probably* be enough for anybody?
function call<T1, T2, T3, T4, R>(fn: (param1: T1, param2: T2, param3: T3, param4: T4) => R, param1: T1, param2: T2, param3: T3, param4: T4): R
function call<T1, T2, T3, R>(fn: (param1: T1, param2: T2, param3: T3) => R, param1: T1, param2: T2, param3: T3): R
function call<T1, T2, R>(fn: (param1: T1, param2: T2) => R, param1: T1, param2: T2): R
function call<T1, R>(fn: (param1: T1) => R, param1: T1): R;
function call<R>(fn: () => R, param1: T1): R;
function call(fn: (...args: any[]) => any, ...args: any[]) {
    return fn(...args);
}

Oof! Another case of death by a thousand overloads! Or at least, as many overloads as our users asked us for.

TypeScript 3.0 allows us to better model scenarios like these by now allowing rest parameters to be generic, and inferring those generics as tuple types! Instead of declaring each of these overloads, we can say that the ...args rest parameter from fn must be a type parameter that extends an array, and then we can re-use that for the ...args that call passes:

function call<TS extends any[], R>(fn: (...args: TS) => R, ...args: TS): R {
    return fn(...args);
}

When we call the call function, TypeScript will try to extract the parameter list from whatever we pass to fn, and turn that into a tuple:

function foo(x: number, y: string): string {
    return (x + y).toLowerCase();
}

// The `TS` type parameter is inferred as `[number, string]`
call(foo, 100, "hello");

When TypeScript infers TS as [number, string] and we end up re-using TS on the rest parameter of call, the instantiation looks like the following

function call(fn: (...args: [number, string]) => string, ...args: [number, string]): string

And with TypeScript 3.0, using a tuple in a rest parameter gets flattened into the rest of the parameter list! The above boils down to simple parameters with no tuples:

function call(fn: (arg1: number, arg2: string) => string, arg1: number, arg2: string): string

So in addition to catching type errors when we pass in the wrong arguments:

function call<TS extends any[], R>(fn: (...args: TS) => R, ...args: TS): R {
    return fn(...args);
}

call((x: number, y: string) => y, "hello", "world");
//                                ~~~~~~~
// Error! `string` isn't assignable to `number`!

and inference from other arguments:

call((x, y) => { /* .... */ }, "hello", 100);
//    ^  ^
// `x` and `y` have their types inferred as `string` and `number` respectively.

we can also observe the tuple types that these functions infer from the outside:

function tuple<TS extends any[]>(...xs: TS): TS {
    return xs;
}

let x = tuple(1, 2, "hello"); // has type `[number, number, string]`

There is a subtler point to note though. In order to make all of this work, we needed to expand what tuples could do…

Richer tuple types

To make tuples model parameter lists (as we just discussed), we had to rethink tuple types a bit. Before TypeScript 3.0, the best that tuples could model was the order and count of a set of parameters.

However, parameter lists aren’t just ordered lists of types. For example, parameters at the end can be optional:

// Both `y` and `z` are optional here.
function foo(x: boolean, y = 100, z?: string) {
    // ...
}

foo(true);
foo(true, undefined, "hello");
foo(true, 200);

The last parameter can be a rest parameter.

// `rest` accepts any number of strings - even none!
function foo(...rest: string[]) {
    // ...
}

foo();
foo("hello");
foo("hello", "world");

And finally, there is one mildly interesting property about parameter lists which is that they can be empty:

// Accepts no parameters.
function foo() {
    // ...
}

foo();

So to make it possible for tuples to correspond to parameter lists, we needed to model each of these scenarios.

First, tuples now allow trailing optional elements:

/**
 * 2D, or potentially 3D, coordinate.
 */
type Coordinate = [number, number, number?];

The Coordinate type creates a tuple with an optional property named 2 – the element at index 2 might not be defined! Interestingly, since tuples use numeric literal types for their length properties, Coordinate‘s length property has the type 2 | 3.
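
For instance, here’s a small sketch (not from the original examples) of how the optional element plays out:

const xy: Coordinate  = [10, 20];      // OK - the third element is optional
const xyz: Coordinate = [10, 20, 30];  // OK

const len = xyz.length;                // has type `2 | 3`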

Second, tuples now allow rest elements at the end.

type OneNumberAndSomeStrings = [number, ...string[]];

Rest elements introduce some interesting open-ended behavior to tuples. The above OneNumberAndSomeStrings type requires its first property to be a number, and permits 0 or more strings. Indexing with an arbitrary number will return a string | number since the index won’t be known. Likewise, since the tuple length won’t be known, the length property is just number.
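
Here’s a short sketch of that open-ended behavior:

const value: OneNumberAndSomeStrings = [1, "hello", "world"];

const first = value[0];    // `number`
const second = value[1];   // `string`

const i: number = 1;
const element = value[i];  // `string | number`, since the index isn't known statically

const len: number = value.length;  // just `number`, not a literal type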

Of note, when no other elements are present, a tuple with a single rest element is equivalent to the plain array type itself:

type Foo = [...number[]]; // Equivalent to `number[]`.

Finally, tuples can now be empty! While it’s not that useful outside of parameter lists, the empty tuple type can be referenced as []:

type EmptyTuple = [];

As you might expect, the empty tuple has a length of 0 and indexing with a number returns the never type.
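
A quick sketch of that behavior:

const empty: EmptyTuple = [];

const len: 0 = empty.length;  // `length` has the literal type `0`

const i: number = 0;
const nothing = empty[i];     // has type `never`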

Improved errors and UX

Over time we’ve heard more and more demand from our community regarding better error messages. While we’re by no means done, we heard you in TypeScript 3.0 and have invested a bit here.

Related error spans

Part of the goal of providing a good error message is also guiding a user towards a way to fix the error, or providing a way to intuit why the error message was given in the first place. Much of the time, there can be a lot of information or multiple reasons an error message might surface. Of those reasons, we might find they come from different parts of the code.

Related error spans are a new way to surface that information to users. In TypeScript 3.0, error messages can point to related locations in your code so that users can reason about the cause and effect of an error.

Using import * as express syntax can cause an error when calling express(). Here, the provided error tells the user not just that the call is invalid, but that it has occurred because of the way the user imported express.

In some sense, related error messages can give a user not just an explanation, but also breadcrumbs to see where things went wrong.

An error on a potentially misspelled property now also informs the user of where the most likely candidate originated.

These spans will also appear in the terminal when running tsc with --pretty mode enabled, though our team is still iterating on the UI and would appreciate feedback!

Improved messages and elaboration

Around TypeScript 2.9, we started investing more in our error messages, and with 3.0 we really tried to tackle a core set of cases that could give a smarter, cleaner, and more accurate error experience. This includes things like picking better types with mismatches in union types, and cutting right to the chase for certain error messages.

We believe this effort has paid off and will provide significantly shorter and cleaner error messages.

Error messages for the equivalent code/issue in JSX compared between TypeScript 2.8 and TypeScript 3.0. In TypeScript 3.0, the message is dramatically shorter and has a related span, while still providing context.

The unknown type

The any type is the most-capable type in TypeScript – while it encompasses the type of every possible value, it doesn’t force us to do any checking before we try to call, construct, or access properties on these values. It also lets us assign values of type any to values that expect any other type.

This is mostly useful, but it can be a bit lax.

let foo: any = 10;

// All of these will throw errors, but TypeScript
// won't complain since `foo` has the type `any`.
foo.x.prop;
foo.y.prop;
foo.z.prop;
foo();
new foo();
upperCase(foo);
foo `hello world!`;

function upperCase(x: string) {
    return x.toUpperCase();
}

There are often times where we want to describe the least-capable type in TypeScript. This is useful for APIs that want to signal “this can be any value, so you must perform some type of checking before you use it”. This forces users to safely introspect returned values.

TypeScript 3.0 introduces a new type called unknown that does exactly that. Much like any, any value is assignable to unknown; however, unlike any, unknown is assignable to almost nothing else without a type assertion. You also can’t access any properties off of an unknown, nor can you call/construct them.
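
To illustrate just the assignability rules described above (a small sketch, not from the original examples):

let value: unknown;

value = 10;          // OK - anything is assignable to `unknown`
value = "hello";     // OK
value = { x: 0 };    // OK

let num: number = value;    // Error! `unknown` isn't assignable to `number` without a check or assertion.
let anything: any = value;  // OK - `unknown` is assignable to `any` (and to `unknown` itself).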

As an example, swapping the earlier example to use unknown instead of any turns all usages of foo into errors:

let foo: unknown = 10;

// Since `foo` has type `unknown`, TypeScript
// errors on each of these locations.
foo.x.prop;
foo.y.prop;
foo.z.prop;
foo();
new foo();
upperCase(foo);
foo `hello world!`;

function upperCase(x: string) {
    return x.toUpperCase();
}

Instead, we’re now forced to either perform checking, or use a type assertion to convince the type-system that we know better.

let foo: unknown = 10;

function hasXYZ(obj: any): obj is { x: any, y: any, z: any } {
    return !!obj &&
        typeof obj === "object" &&
        "x" in obj && "y" in obj && "z" in obj
}

// Using a user-defined type guard...
if (hasXYZ(foo)) {
    // ...we're allowed to access certain properties again.
    foo.x.prop;
    foo.y.prop;
    foo.z.prop;
}

// We can also just convince TypeScript we know what we're doing
// by using a type assertion.
upperCase(foo as string);

function upperCase(x: string) {
    return x.toUpperCase();
}

Note that if you’ve been using a type like {} | null | undefined to achieve similar behavior, unknown usually has more desirable behavior in constructs like conditional types, since conditional types distribute across unions:

type Arrayify<T> = T extends any ? Array<T> : never;

type A = Arrayify<{} | null | undefined>; // null[] | undefined[] | {}[]
type B = Arrayify<unknown>;               // unknown[]

Support for defaultProps in JSX

Note: at the time of writing, React’s .d.ts files may not yet support this functionality.

If you’ve ever used default initializers in modern TypeScript/JavaScript, you might know how handy they can be for function callers. They give us a useful syntax to let callers use functions more easily by not requiring certain arguments, while letting function authors ensure that their values are always defined in a clean way.

function loudlyGreet(name = "world") {
    // Thanks to the default initializer, `name` will always have type `string` internally.
    // We don't have to check for `undefined` here.
    console.log("HELLO", name.toUpperCase());
}

// Externally, `name` is optional, and we can potentially pass `undefined` or omit it entirely.
loudlyGreet();
loudlyGreet(undefined);

In React, a similar concept exists for components and their props. When creating a new element using a component, React looks up a property called defaultProps, to fill in values for props that are omitted.

// Some non-TypeScript JSX file

import * as React from "react";
import * as ReactDOM from "react-dom";

export class Greet extends React.Component {
    render() {
        const { name } = this.props;
        return <div>Hello {name.toUpperCase()}!</div>;
    }

    static defaultProps = {
        name: "world",
    };
}

//      Notice no `name` attribute was specified!
//                                     vvvvvvvvv
const result = ReactDOM.renderToString(<Greet />);
console.log(result);

Notice that in <Greet />, name didn’t have to be specified. When a Greet element is created, name will be initialized with "world" and this code will print <div>Hello world!</div>.

Unfortunately, TypeScript didn’t understand that defaultProps had any bearing on JSX invocations. Instead, users would often have to declare properties optional and use non-null assertions inside of render:

export interface Props { name?: string }
export class Greet extends React.Component<Props> {
    render() {
        const { name } = this.props;

        // Notice the `!` ------v
        return <div>Hello {name!.toUpperCase()}!</div>;
    }
    static defaultProps = { name: "world"}
}

Or they’d use some hacky type-assertions to fix up the type of the component before exporting it.

That’s why in TypeScript 3.0, the language supports a new type alias in the JSX namespace called LibraryManagedAttributes. Despite the long name, this is just a helper type that tells TypeScript what attributes a JSX tag accepts. The short story is that using this general type, we can model React’s specific behavior for things like defaultProps and, to some extent, propTypes.

export interface Props {
    name: string
}

export class Greet extends React.Component<Props> {
    render() {
        const { name } = this.props;
        return <div>Hello {name.toUpperCase()}!</div>;
    }
    static defaultProps = { name: "world"}
}

// Type-checks! No type assertions needed!
let el = <Greet />

Keep in mind that there are some limitations. defaultProps that explicitly specify their type as something like Partial<Props>, or stateless function components (SFCs) whose defaultProps are declared with Partial<Props>, will make all props optional. As a workaround, you can omit the type annotation entirely for defaultProps on a class component (like we did above), or use ES2015 default initializers for SFCs:

function Greet({ name = "world" }: Props) {
    return <div>Hello {name.toUpperCase()}!</div>;
}

One last thing to note is that while the support is built into TypeScript, the current .d.ts files on DefinitelyTyped are not currently leveraging it – therefore @types/react may not have the change available yet. We are currently waiting on stabilization throughout DefinitelyTyped to ensure that the change is minimally disruptive.

/// <reference lib="..." /> directives

One of the issues we’ve seen in the community is that polyfills – libraries that provide newer APIs in older runtimes – often have their own declaration files (.d.ts files) that attempt to define those APIs themselves. While this is sometimes fine, these declarations are global, and may cause issues with TypeScript’s built-in lib.d.ts depending on users’ compiler options like --lib and --target. For example, declarations for core-js might conflict with the built-in lib.es2015.d.ts.

To solve this, TypeScript 3.0 provides a new way for files to declare the built-in APIs which they expect to be present using a new reference directive: /// <reference lib="..." />.

For example, a polyfill for ES2015’s Promise might now simply contain the lines

/// <reference lib="es2015.promise" />
export {};

With this comment, even if a TypeScript 3.0 consumer has explicitly used a target that doesn’t bring in lib.es2015.promise.d.ts, importing the above library will ensure that Promise is present.
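
To sketch how a consumer benefits (the package name below is hypothetical, chosen purely for illustration):

// "es2015-promise-polyfill" is a hypothetical package whose .d.ts contains the
// `/// <reference lib="es2015.promise" />` directive shown above.
import "es2015-promise-polyfill";

// Even if this project's --target/--lib settings don't include es2015.promise,
// the lib reference brings the Promise declarations into scope.
const ready: Promise<void> = Promise.resolve();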

Editor Productivity

For those who are unfamiliar, TypeScript leverages its syntactic and semantic knowledge to provide services for writing code more easily. It acts as the engine for TypeScript and JavaScript underneath editors like Visual Studio, Visual Studio Code, and any other editor with a TypeScript plugin to provide the things users love like code completion, Go to Definition, and even quick fixes and refactorings. TypeScript 3.0 continues to deliver here.

Named import refactorings

Occasionally, qualifying every import with the module it came from can be cumbersome.

import * as dependency from "./dependency";

// look at all this repetition!

dependency.foo();
dependency.bar();
dependency.baz();

On the other hand, if we individually import the things we use, we might find that after many uses it’s become unclear for new readers where these imports originated from.

import { foo, bar, baz } from "./dependency";

// way lower in the file...

foo();
bar();
baz();

Regardless of which style you choose now, you might change your mind later. TypeScript 3.0 provides refactorings to convert between named imports and namespace imports, so that the switch never feels daunting.

Closing JSX tag completions and outlining spans

TypeScript now provides two new productivity features around JSX:

  • providing completions for JSX closing tags
  • providing collapsible outlining spans for JSX

Quick fixes for unreachable code and unused labels

TypeScript will now provide quick fixes to remove any unreachable code, as well as remove unused labels.

Breaking changes

You can always keep an eye on upcoming breaking changes in the language as well as in our API.

We expect TypeScript 3.0 to have very few impactful breaking changes. Language changes should be minimally disruptive, and most breaks in our APIs are oriented around removing already-deprecated functions.

unknown is a reserved type name

Since unknown is a new built-in type, it can no longer be used in type declarations like interfaces, type aliases, or classes.
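
For example, a declaration like the following, which compiled in earlier versions, is now an error:

type unknown = string | number;  // Error! `unknown` is now a reserved, built-in type name.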

API breaking changes

  • The deprecated internal method LanguageService#getSourceFile has been removed, as it has been deprecated for two years. See #24540.
  • The deprecated function TypeChecker#getSymbolDisplayBuilder and associated interfaces have been removed. See #25331. The emitter and node builder should be used instead.
  • The deprecated functions escapeIdentifier and unescapeIdentifier have been removed. Due to changing how the identifier name API worked in general, they have been identity functions for a few releases, so if you need your code to behave the same way, simply removing the calls should be sufficient. Alternatively, the typesafe escapeLeadingUnderscores and unescapeLeadingUnderscores should be used if the types indicate they are required (as they are used to convert to or from branded __String and string types).
  • The TypeChecker#getSuggestionForNonexistentProperty, TypeChecker#getSuggestionForNonexistentSymbol, and TypeChecker#getSuggestionForNonexistentModule methods have been made internal, and are no longer part of our public API. See #25520.

Going forward

TypeScript owes so much of its success to its community. We’re indebted to our contributors who’ve worked on the compiler, the language service, DefinitelyTyped, and tooling integration that leveraged any combination of them. But we’re also grateful for our users who’ve consistently given us the feedback we needed and pushed us to improve. Going forward, we foresee bringing more value to the type system and tooling experience, polishing the existing work on project references, and making TypeScript (both the language and the project) more approachable by whatever means we can. But in addition to that, we want to explore what we can do to empower more tool authors and users in the JavaScript community – to bring value to users who could still get value from using TypeScript even without directly using TypeScript.

Keep an eye on our roadmap as these ideas become specifics, and feel free to drop us a line to give us feedback, whether via the comments below, over Twitter, or by filing an issue. We’re always trying to do better.

For everyone who’s been a part of the TypeScript journey so far – thank you. We look forward to bringing you the best experience we can. And for everyone else, we hope you’ll start exploring and loving TypeScript as much as we do.

Happy hacking!

The TypeScript Team

Introducing Web Authentication in Microsoft Edge

Today, we are happy to introduce support for the Web Authentication specification in Microsoft Edge, enabling better, more secure user experiences and a passwordless experience on the web.

With Web Authentication, Microsoft Edge users can sign in with their face, fingerprint, PIN, or portable FIDO2 devices, leveraging strong public-key credentials instead of passwords.
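
To give a sense of what this looks like in code, here’s a minimal TypeScript sketch of registering a new credential via the Web Authentication API. The relying party name, user details, and challenge below are placeholder values (in a real flow the challenge and user handle come from your server), and the DOM type names assume an up-to-date lib.dom:

// A minimal registration sketch - the values below are placeholders for illustration.
const publicKey: PublicKeyCredentialCreationOptions = {
    challenge: new Uint8Array(32),                        // normally generated by your server
    rp: { name: "Example Corp" },                         // the relying party (your site)
    user: {
        id: new Uint8Array(16),                           // server-assigned user handle
        name: "user@example.com",
        displayName: "Example User",
    },
    pubKeyCredParams: [{ type: "public-key", alg: -7 }],  // ES256
};

navigator.credentials.create({ publicKey })
    .then(credential => {
        // Send the new credential to your server for verification and storage.
        console.log("Created credential:", credential);
    })
    .catch(err => console.error("Registration failed:", err));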

A web without passwords

Staying secure on the web is more important than ever. We trust web sites to process credit card numbers, save addresses and personal information, and even to handle sensitive records like medical information. All this data is protected by an ancient security model—the password. But passwords are difficult to remember, and are fundamentally insecure—often re-used, and vulnerable to phishing and cracking.

For these reasons, Microsoft has been leading the charge towards a world without passwords, with innovations like Windows Hello biometrics and pioneering work with the FIDO Alliance to create an open standard for passwordless authentication – Web Authentication.

We started this journey in 2016, when we shipped the industry’s first preview implementation of the Web Authentication API in Microsoft Edge. Since then, we have been updating our implementation as we worked with other vendors and the FIDO Alliance to develop the standard. In March, the FIDO Alliance announced that the Web Authentication APIs had reached Candidate Recommendation (CR) status in the W3C, a major milestone for the maturity and interoperability of the specification.

Authenticators in Microsoft Edge

Beginning with build 17723, Microsoft Edge supports the CR version of Web Authentication. Our implementation provides the most complete support for Web Authentication to date, with support for a wider variety of authenticators than other browsers.

Windows Hello allows users to authenticate without a password on any Windows 10 device, using biometrics—face and fingerprint recognition—or a PIN to sign in to web sites. With Windows Hello face recognition, users can log in to sites that support Web Authentication in seconds, with just a glance.

Animation showing a purchase using Web Authentication via Windows Hello

Users can also use external FIDO2 security keys to authenticate with a removable device and their biometrics or PIN. For websites that are not ready to move to a completely passwordless model, backwards compatibility with FIDO U2F devices can provide a strong second factor in addition to a password.
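
When the user later signs in, the site requests an assertion from a previously registered credential. A minimal sketch (again with placeholder values, not production code) might look like this:

// A minimal sign-in (assertion) sketch - the values below are placeholders.
const publicKey: PublicKeyCredentialRequestOptions = {
    challenge: new Uint8Array(32),       // normally generated by your server
    allowCredentials: [{
        type: "public-key",
        id: new Uint8Array(64),          // the credential ID registered earlier
    }],
    userVerification: "preferred",
};

navigator.credentials.get({ publicKey })
    .then(assertion => {
        // Send the assertion to your server so it can verify the signature.
        console.log("Got assertion:", assertion);
    })
    .catch(err => console.error("Sign-in failed:", err));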

We’re working with industry partners on lighting up the first passwordless experiences around the web. At RSA 2018, we shared a sneak peek of how these APIs could be used to approve a payment on the web with your face. Passwordless authentication experiences like this are the foundation of a world without passwords.

Getting started

We’re excited to get implementation into the hands of more developers to see what you build. To get started with Web Authentication in Microsoft Edge, check out more information on our implementation in the Web Authentication dev guide, or install Windows Insider Preview build 17723 or higher to try it out for yourself!

– Angelo Liao, Program Manager, Microsoft Edge
– Ibrahim Damlaj, Program Manager, Windows Security



Azure.Source – Volume 42

Now in preview

Azure App Service now supports Java SE on Linux - Support for Java SE 8-based applications on Linux in Azure App Service is now available in public preview. Now you can build and deploy Java web apps on a highly scalable, self-patching web hosting service where bug fixes and security updates will be maintained by Microsoft. Additional performance features include scaling to support millions of users with multiple instances, applications, and regions in a dynamic scaling intelligent configuration.

Build richer applications with the new asynchronous Azure Storage SDK for Java - Azure Storage SDK v10 for Java is currently in Preview and supports Blob storage only. Azure Storage SDK v10 for Java adopts the next-generation Storage SDK design providing thread-safe types that were introduced earlier with the Storage Go SDK release. This new SDK is built to effectively move data without any buffering on the client, and provides interfaces close to the ones in the Storage REST APIs.

Also in preview

Now generally available

Security Center’s adaptive application controls are generally available - Adaptive application controls help you define the set of applications that are allowed to run on configured groups of virtual machines (VM), which helps you audit and block unwanted applications. By default, Security Center enables application control in Audit mode. After validating that the whitelist has not had any adverse effects on your workload, you can change the protection mode to Enforce mode. This feature is available in the standard pricing tier of Security Center.

News and updates

New recommendations in Azure Advisor - Azure Advisor is a free service that analyzes your Azure usage and provides recommendations on how you can optimize your Azure resources to reduce costs, boost performance, strengthen security, and improve reliability. Several new Azure Advisor recommendations help you get the most out of your Azure subscriptions, such as when to use Reserved Instances to save over pay-as-you-go costs, when you have subscriptions missing Azure Service Health alerts, when you should consider upgrading to a support plan that includes technical support, recommendations to solve common configuration issues with Traffic Manager profiles, and more.

Screenshot of Azure Advisor recommendations in the Azure portal.

Build secure Oozie workflows in Azure HDInsight with Enterprise Security Package - You can now use Oozie in domain-joined Hadoop clusters. Oozie is a workflow and coordination scheduler for Hadoop that accelerates and eases big data implementations. Integrated with the Hadoop stack, Oozie supports several types of Hadoop jobs, but was previously unsupported on domain-joined clusters.

Accelerated and Flexible Restore Points with SQL Data Warehouse - SQL Data Warehouse (SQL DW) is a fully managed and secure analytics platform for the enterprise, optimized for running complex queries fast across petabytes of data. We just released accelerated and flexible restore points, which help you quickly restore a data warehouse and offer data protection from accidental corruption and deletion, as well as disaster recovery. You can now restore across regions and servers using any restore point instead of selecting geo-redundant backups, which are taken every 24 hours.

Top feature requests added with Azure Blockchain Workbench 1.2.0 - The second update of Azure Blockchain Workbench, which released to public preview at Build in May, is now available. You can either deploy a new instance of Workbench through the Azure Portal or upgrade your existing deployment to 1.2.0 using an upgrade script. This release includes a number of improvements and bug fixes, some of which are in response to customer feedback and suggestions.

Additional news and updates

The Azure Podcast

    The Azure Podcast | Episode 239 - Kubernetes Developer Tooling - We talk to Azure Engineer Michelle Noorali about her passion with the Cloud Native Computing Foundation and the work she does to enable Developers to work easily with Kubernetes in Azure and any cloud.

    Azure Tips & Tricks

    Azure Tips and Tricks | Add logic to your Testing in Production sites with PowerShell

    Technical content and training

    Orchestrating production-grade workloads with Azure Kubernetes Service - Brian Redmond, Cloud Architect, Azure Global Black Belt Team digs into the top scenarios that Azure customers are building on Azure Kubernetes Service on the third anniversary of Kubernetes. Azure Kubernetes Service (AKS) manages your hosted Kubernetes environment, making it quick and easy to deploy and manage containerized applications without container orchestration expertise. It also eliminates the burden of ongoing operations and maintenance by provisioning, upgrading, and scaling resources on demand, without taking your applications offline.

    Workflow diagram showing lift and shift to containers

    Feeding IoT device telemetry data to Kafka-based applications - With the newly released support for Kafka streams in Event Hubs, it is now possible for Azure IoT Hub customers to easily feed their IoT device telemetry data into Kafka-based applications for further downstream processing or analysis. You can start using IoT Hub's native support for messaging, device and configuration management early, and defer the decision to migrate your telemetry processing applications to natively use Event Hubs at a later time. This post covers applicable customer scenarios and how to use Kafka-based applications with IoT Hub telemetry.

    The IoT Show | Kafka Integration with Azure IoT Hub - Whether you are a Kafka aficionado or you are simply curious about how Azure IoT Hub allows you to easily consume IoT device data from Kafka, this new episode of the IoT Show is for you!

    Free course on the Log Analytics query language (KQL) now available - Some of the most commonly asked questions we get in Azure Log Analytics and Application Insights are around the query language. These come both from beginners who need a hand getting started, and intermediate users who want to know what advanced capabilities are available to them. We teamed-up with Pluralsight to provide a free course on KQL. Register and get started today.

    Azure Friday

    Azure Friday | ACR Build: Automate Docker builds with OS and framework patching - Steve Lasker joins Lara Rubbelke to discuss ACR Build, a cloud-native container build solution enabling pre-check, git commit and base image update docker builds for OS and framework patching.

    Azure Friday | Azure Stream Analytics: Managing timelines and coding on IoT Edge - Jean-Sébastien Brunner joins Lara Rubbelke to discuss new features that enable you to implement the intelligent cloud, intelligent edge vision for streaming analytics: Stream Analytics running real-time analytics with custom code on IoT Edge. We also discuss substreams, which is a new time management feature for independently processing the timeline of each device (very useful for an IoT scenario), and the recently announced Session Window.

    Events

    Containerize Your Applications with Kubernetes on Azure - In this webinar on Tuesday, August 14th (10:00-11:00am Pacific), you’ll see an end-to-end Kubernetes development experience on Azure, showcasing an integrated development environment for building apps. This includes application scaffolding, inner-loop workflows, application-management frameworks, CI/CD pipelines, log aggregation, and monitoring and application metrics.

    Customers and partners

    Azure Marketplace new offers: June 16–30 - The Azure Marketplace is the premier destination for all your software needs – certified and optimized to run on Azure. Find, try, purchase, and provision applications & services from hundreds of leading software providers. In the second half of June we published 22 new offers, including virtual machine images, web applications, container solutions, and consulting services.

    Announcing availability of Azure Managed Application in AzureGov - Azure Managed Applications enable Managed Service Provider (MSP), Independent Software Vendor (ISV) partners, and enterprise IT teams to deliver fully managed turnkey cloud solutions that can be made available through the enterprise Service Catalog of a specific end-customer. Customers can quickly deploy managed applications in their own subscription and rely on the partner or central IT team for maintenance operations and support across the lifecycle.

    Marketplace news from Inspire 2018 - Key marketplace sessions and content from Microsoft Inspire is now available for on-demand viewing, including: Grow your business with AppSource and Azure Marketplace, Best practices for successful GTM in Azure Marketplace and AppSource, Grow your PaaS or SaaS business in Azure Marketplace or AppSource, and Optimize your Microsoft Marketplace listing to attract new customers.

    Avoid Big Data pitfalls with Azure HDInsight and these partner solutions - Big data and the analytical application lifecycle span a number of steps, including ingestion, prep, storage, processing, analysis, and visualization. All of these steps need to meet enterprise requirements around governance, access control, monitoring, security, and more. Stitching together an application that comprises everything is a complicated task, which is why we worked closely with a select set of ISVs to certify their solutions with Azure HDInsight and other analytical services so customers can deploy them with a single click. Check out this post for an overview of each of these solutions.

    Internet of Things Show

    Internet of Things Show | First look at Maps in Azure IoT Central - Azure Maps is now fully integrated in Azure IoT Central applications, offering plenty of geo-location and geocoding features. Check out this cool demo by Miriam Berhane Russom, PM in the Azure IoT Central team.

    Internet of Things Show | Azure IoT Hub Manual Failover - Disasters can happen and you should always be ready for failures in any distributed systems. Cloud services are no exception and Roopesh Manda, PM in the Azure IoT team, tells us how the IoT Hub will ensure your IoT application resiliency and how you can use the new Manual Failover feature of the service to test catastrophic scenarios.

    Industries

    Insurance | IoT: the catalyst for better risk management in insurance - Insurance companies that embrace digital transformation and technologies that include the Internet of Things (IoT), Artificial Intelligence (AI), Machine Learning (ML), and Big Data will lead the industry. Learn more in this post about thought leader Matteo Carbone's book, All the Insurance Players Will Be Insurtech.

    Retail | How to move your e-commerce infrastructure to Azure - Moving an existing e-commerce solution to the cloud presents many benefits for an enterprise: it enables scalability, it offers customers 24/7 accessibility, and it becomes easier to integrate cloud services. But moving an e-commerce solution to the cloud is a significant task, with costs that must be understood by a decision maker. The Migrating your e-commerce solution to Azure overview explains the scope of an Azure migration with the goal of informing you of the options. The first phase begins with IT pros moving the components to the cloud; once on Azure, the overview describes the steps the e-commerce team can take to increase ROI and take advantage of the cloud.

    Healthcare | Current use cases for machine learning in healthcare - Machine learning (ML) is causing quite the buzz at the moment, and it’s having a huge impact on healthcare. Payers, providers and pharmaceutical companies are all seeing applicability in their spaces and are taking advantage of ML today. This post provides a quick overview of key topics in ML, and how it is being used in healthcare.

    A Cloud Guru | Azure This Week

    A Cloud Guru | Azure This Week - 27 July 2018 - In this episode of Azure This Week, Lars looks at the public preview of Azure Service Fabric Mesh, Azure Security Center integration and the general availability of Azure File Sync. He also looks at Azure Cloud Shell which is now embedded inside Visual Studio Code.

    Intermittent Webmaster Tools API Issues Resolved

    The Bing Webmaster team recently received feedback that our APIs were intermittently failing, and we deeply regret any inconvenience caused by the API failures. We recognize the frustrations that this may have caused. Upon investigation, we discovered a technical glitch that led to the API call failures; it is now resolved. We are very grateful to you, our users, who brought this to our attention, and thank you for your continued feedback and support.

    We're Listening

    Bing and Bing Webmaster Tools are actively listening to you and we value your feedback. Your feedback is important to how we continually improve Bing, and it helps notify us of potential issues. It’s easy to provide feedback: just look for the Feedback button or link at the bottom of each page. It’s in the footer or the lower-right corner and it looks something like this:
    Feedback button and link is in the bottom right corner of the footer navigation window

    We are using advances in technology to make it easier to quickly find what you are looking for – from answers to life's big questions or an item in an image you want to learn more about. At Bing, our goal is to help you get answers with less effort.  We appreciate your feedback and the more that you can send, the more we can use it to improve Bing. Have a suggestion?

    Tell us! The more feedback the merrier.

    Please let us know.
    The Bing Webmaster Tools Team

    Expert tips on hardening security with Azure security

    Note: This blog was authored by the Microsoft Threat Intelligence Center.

    Microsoft Azure provides a secure foundation for customers to host their infrastructure and applications. Microsoft’s secure foundation spans across physical, infrastructure, and operational security. Part of our operational security includes over 3,500 cybersecurity experts across different teams that are dedicated to security research and development. The Microsoft Threat Intelligence Center is just one of the security teams at Microsoft that encounters and mitigates against threats across the security landscape.

    On today’s episode of Microsoft Mechanics, you’ll see how the work of the Microsoft Threat Intelligence Center is helping to secure Azure and the global security landscape. This team works to identify issues such as peer to peer networking software, standard botnet and ransomware attacks, and adversary-based threats from hackers or nation state sponsored groups.

    The team also has a broad view across many geographies and a view of the services that run in Azure. With this insight, the team can see common attack patterns. These patterns can be at the network level, service level, app level, or OS level. As soon as an exploit is detected, the Microsoft Threat Intelligence Center works with other teams at Microsoft to build mitigations into our products and services. In addition, the team creates threat intelligence reports that provide detailed information on things like what the attack was, where it happened, the environment(s) that were impacted and steps you need to take to remediate the attack. 

    In addition to seeing how the Microsoft Threat Intelligence Center mitigates attacks targeting the Azure platform, you’ll learn how that intelligence is fed back into our services and how you can strengthen your organizational security using these tools. For example, you can use Azure Security Center to get a centralized, real-time monitoring view into the security state of your hybrid cloud resources, and quickly take action against issues. You can also use Security Center’s Just-in-Time VM Access to protect against threats such as brute force attacks by reducing access to virtual machine management ports to only when it is needed. Security Center’s Investigation Path will help you explore all the entities involved in an attack, such as a SQL injection, and quickly remediate against it.

    We hope that you find today’s overview helpful. Please let us know your thoughts, and feel free to post your questions.

    .NET Framework July 2018 Update

    Today, we released the July 2018 Update.

    Quality and Reliability

    This release contains the following quality and reliability improvements.

    CLR

    • Applications that rely on COM components were failing to load or run correctly because of “access denied,” “class not registered,” or “internal failure occurred for unknown reasons” errors described in 4345913. [651528]

    Note: Additional information on these improvements is not available. The VSTS bug number provided with each improvement is a unique ID that you can give Microsoft Customer Support, include in StackOverflow comments or use in web searches.

    Getting the Update

    The Update is available via Microsoft Update Catalog only.

    Microsoft Update Catalog

    You can get the update via the Microsoft Update Catalog.

    Product Version                                  Update KB
    Windows 10 1607 (Anniversary Update)             Catalog 4346877
      .NET Framework 3.5, 4.6.2, 4.7, 4.7.1          4346877
    Windows 8.1 and Windows Server 2012 R2
      .NET Framework 3.5                             Catalog 4346745
      .NET Framework 4.5.2                           Catalog 4346408
      .NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1   Catalog 4346406
    Windows Server 2012
      .NET Framework 3.5                             Catalog 4346742
      .NET Framework 4.5.2                           Catalog 4346739
      .NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1   Catalog 4346405
    Windows 7 and Windows Server 2008 R2
      .NET Framework 3.5.1                           Catalog 4346744
      .NET Framework 4.5.2                           Catalog 4346410
      .NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1   Catalog 4346407
    Windows Server 2008
      .NET Framework 2.0                             Catalog 4346743
      .NET Framework 4.5.2                           Catalog 4346410
      .NET Framework 4.6                             Catalog 4346407

    Previous Monthly Rollups

    The last few .NET Framework Monthly updates are listed below for your convenience:

    Azure management groups now in general availability

    I am very excited to announce the general availability of Azure management groups to all our customers. Management groups allow you to organize your subscriptions and apply governance controls, such as Azure Policy and Role-Based Access Controls (RBAC), to the management groups. All subscriptions within a management group automatically inherit the controls applied to the management group. No matter if you have an Enterprise Agreement, Certified Solution Partner, Pay-As-You-Go, or any other type of subscription, this service gives all Azure customers enterprise-grade management at large scale for no additional cost.

    With the GA launch of this service, we introduce new functionality to Azure that allows customers to group subscriptions together so that you can apply a policy or RBAC role to multiple subscriptions, and their resources, with one assignment. Management groups not only allow you to group subscriptions but also allows you to group other management groups to form a hierarchy. The following diagram shows an example of creating a hierarchy for governance using management groups.

    Diagram: an example management group hierarchy for governance

    By creating a hierarchy like this, you can apply a policy (for example, limiting VM locations to the US West region) to the “Infrastructure Team” management group to enable internal compliance and security policies. This policy will be inherited by both EA subscriptions under that management group and will apply to all VMs under those subscriptions. Because the policy is inherited from the management group by the subscriptions, it cannot be altered by the resource or subscription owner, allowing for improved governance.

    By using management groups, you can reduce your workload and reduce the risk of error by avoiding duplicate assignments. Instead of applying multiple assignments across numerous resources and subscriptions, you can apply one assignment on the one management group that contains the target resources. This saves time when applying assignments, creates a single point of maintenance, and allows better control over who can manage the assignment.

    Another scenario where you would use management groups is to provide user access to multiple subscriptions. By moving multiple subscriptions under a management group, you can create one RBAC assignment on the management group, and that access is inherited by all the subscriptions beneath it. Without the need to script RBAC assignments over multiple subscriptions, one assignment on the management group can give users access to everything they need.

    As we continue to develop management groups within Azure, new and existing services will be integrated to provide even more functionality.

    Get started today

    To get started, visit the management group documents to see the great functionality that you can start using right away. If you would rather dive right in, go right to management groups in the Azure portal and select “Start using management groups” to start your new hierarchy.  

