
Use the official range-v3 with MSVC 2017 version 15.9


We’re happy to announce that the ongoing conformance work in the MSVC compiler has reached a new milestone: support for Eric Niebler’s range-v3 library. It’s no longer necessary to use the range-v3-vs2015 fork that was introduced for MSVC 2015 Update 3 support; true upstream range-v3 is now usable directly with MSVC 2017.

The last push to achieve range-v3 support involved Microsoft-sponsored changes in both the MSVC compiler and range-v3. The compiler changes involved fixing about 60 historically blocking bugs, of which 30+ were alias template bugs in /permissive- mode. The range-v3 changes added support for building the test suite with MSVC, along with workarounds for roughly a dozen minor bugs that we will be working on fixing in future releases of MSVC.

How do I get range-v3 to try it out?

The range-v3 changes haven’t yet flowed into a release, so for now MSVC users should use the master branch, which you can get from the range-v3 GitHub repository or via vcpkg.

Note that range-v3’s master branch is under active development, so it’s possible that the head of the branch may be unusable at times. Releases after 0.4.0 will have MSVC support; until then, the commit at 01ccd0e5 is known to be good. Users of vcpkg should have no issues: the range-v3 packager will ensure that vcpkg installs a known-good release.

What’s next

  • Continue fixing bugs that get reported from range-v3 usage and development.

In closing

This range-v3 announcement follows our previous announcement about supporting Boost.Hana in 15.8. The C++ team here at Microsoft is strongly motivated to continue improving our support for open source libraries.

We’d love for you to download Visual Studio 2017 version 15.9 and try out all the new C++ features and improvements. As always, we welcome your feedback. We can be reached via the comments below or via email (visualcpp@microsoft.com). If you encounter other problems with MSVC or have a suggestion for Visual Studio 2017, please let us know through Help > Send Feedback > Report A Problem / Provide a Suggestion in the product, or via Developer Community. You can also find us on Twitter (@VisualC) and Facebook (msftvisualcpp).

If you have any questions, please feel free to post in the comments below. You can also send any comments and suggestions directly to cacarter@microsoft.com, or @CoderCasey.

 


Collaborate with others and keep track of to-dos with new AI features in Word


Focus is a simple but powerful thing. When you’re in your flow, your creativity takes over, and your work is effortless. When you’re faced with distractions and interruptions, progress is slow and painful. And nowhere is that truer than when writing.

Word has long been the standard for creating professional-quality documents. Technologies like Editor—Word’s AI-powered writing assistant—make it an indispensable tool for the written word. But at some point in the writing process, you’ll need some information you don’t have at your fingertips, even with the best tools. When this happens, you likely do what research tells us many Word users do: leave a placeholder in your document and come back to it later to stay in your flow.

Today, we’re starting to roll out new capabilities to Word that help users create and fill in these placeholders without leaving the flow of their work. For example, type TODO: finish this section or <<insert closing here>>, and Word recognizes these placeholders and tracks them as to-dos. When you come back to the document, you’ll see a list of your remaining to-dos, and you can click each one to navigate back to the right spot.

Screenshot of a Word document open using the AI-powered To-Do feature.

Once you’ve created your to-dos, Word can also help you complete them. If you need help from a friend or coworker, just @mention them within a placeholder. Word sends them a notification with a “deep link” to the relevant place in the document. Soon, they’ll be able to reply to the notification with their contributions, and those contributions will be inserted directly into the document—making it easy to complete the task with an email from any device.

Over time, Office will use AI to help fill in many of these placeholders. In the next few months, Word will use Microsoft Search to suggest content for a to-do like <<insert chart of quarterly sales figures>>. You will be able to pick from the results and insert content from another document with a single click.

These capabilities are available today for Word on the Mac for Office Insiders (Fast) as a preview. We’ll roll these features out to all Office 365 subscribers soon for Word for Windows, the Mac, and the web.

Get started as an Office for Mac Insider

Office Insider for Mac has two speeds: Insider Fast and Insider Slow. To get access to this and other new feature releases, you’ll need a subscription to Office 365. To select a speed, open Microsoft AutoUpdate and, on the Help menu, select Check for Updates.

As always, we would love to hear from you. Please send us your thoughts at UserVoice or visit us on Twitter or Facebook. You can also let us know how you like the new features by clicking the smiley face icon in the upper-right corner of Word.


Understanding the Whys, Whats, and Whens of ValueTask


The .NET Framework 4 saw the introduction of the System.Threading.Tasks namespace, and with it the Task class. This type and the derived Task<TResult> have long since become a staple of .NET programming, key aspects of the asynchronous programming model introduced with C# 5 and its async / await keywords. In this post, I’ll cover the newer ValueTask/ValueTask<TResult> types, which were introduced to help improve asynchronous performance in common use cases where decreased allocation overhead is important.

Task

Task serves multiple purposes, but at its core it’s a “promise”, an object that represents the eventual completion of some operation. You initiate an operation and get back a Task for it, and that Task will complete when the operation completes, which may happen synchronously as part of initiating the operation (e.g. accessing some data that was already buffered), asynchronously but complete by the time you get back the Task (e.g. accessing some data that wasn’t yet buffered but that was very fast to access), or asynchronously and complete after you’re already holding the Task (e.g. accessing some data from across a network). Since operations might complete asynchronously, you either need to block waiting for the results (which often defeats the purpose of the operation having been asynchronous to begin with) or you need to supply a callback that’ll be invoked when the operation completes. In .NET 4, providing such a callback was achieved via ContinueWith methods on the Task, which explicitly exposed the callback model by accepting a delegate to invoke when the Task completed:

SomeOperationAsync().ContinueWith(task =>
{
    try
    {
        TResult result = task.Result;
        UseResult(result);
    }
    catch (Exception e)
    {
        HandleException(e);
    }
});

But with the .NET Framework 4.5 and C# 5, Tasks could simply be awaited, making it easy to consume the results of an asynchronous operation, and with the generated code being able to optimize all of the aforementioned cases, correctly handling things regardless of whether the operation completes synchronously, completes asynchronously quickly, or completes asynchronously after already (implicitly) providing a callback:

TResult result = await SomeOperationAsync();
UseResult(result);

Task as a class is very flexible and has resulting benefits. For example, you can await it multiple times, by any number of consumers concurrently. You can store one into a dictionary for any number of subsequent consumers to await in the future, which allows it to be used as a cache for asynchronous results. You can block waiting for one to complete should the scenario require that. And you can write and consume a large variety of operations over tasks (sometimes referred to as “combinators”), such as a “when any” operation that asynchronously waits for the first to complete.
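
For instance, here is a minimal sketch of that dictionary-as-cache idea (the PageCache class and its members are illustrative names, not a specific library API). Because a Task<TResult> can be awaited any number of times, every caller asking for the same key can simply await the same stored task:

using System.Collections.Concurrent;
using System.Net.Http;
using System.Threading.Tasks;

// Illustrative sketch: caching Task<TResult> instances so that concurrent and
// future callers all await the same asynchronous result.
public class PageCache
{
    private static readonly HttpClient s_client = new HttpClient();

    private readonly ConcurrentDictionary<string, Task<string>> _pages =
        new ConcurrentDictionary<string, Task<string>>();

    // The first caller for a given URL starts the download; everyone else
    // (including concurrent callers) awaits the same Task<string>.
    public Task<string> GetPageAsync(string url) =>
        _pages.GetOrAdd(url, u => s_client.GetStringAsync(u));
}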

However, that flexibility is not needed for the most common case: simply invoking an asynchronous operation and awaiting its resulting task:

TResult result = await SomeOperationAsync();
UseResult(result);

In such usage, we don’t need to be able to await the task multiple times. We don’t need to be able to handle concurrent awaits. We don’t need to be able to handle synchronous blocking. We don’t need to write combinators. We simply need to be able to await the resulting promise of the asynchronous operation. This is, after all, how we write synchronous code (e.g. TResult result = SomeOperation();), and it naturally translates to the world of async / await.

Further, Task does have a potential downside, in particular for scenarios where instances are created a lot and where high-throughput and performance is a key concern: Task is a class. As a class, that means that any operation which needs to create one needs to allocate an object, and the more objects that are allocated, the more work the garbage collector (GC) needs to do, and the more resources we spend on it that could be spent doing other things.

The runtime and core libraries mitigate this in many situations. For example, if you write a method like the following:

public async Task WriteAsync(byte value)
{
    if (_bufferedCount == _buffer.Length)
    {
        await FlushAsync();
    }
    _buffer[_bufferedCount++] = value;
}

in the common case there will be space available in the buffer and the operation will complete synchronously. When it does, there’s nothing special about the Task that needs to be returned, since there’s no return value: this is the Task-based equivalent of a void-returning synchronous method. Thus, the runtime can simply cache a single non-generic Task and use that over and over again as the result task for any async Task method that completes synchronously (that cached singleton is exposed via `Task.CompletedTask`). Or for example, if you write:

public async Task<bool> MoveNextAsync()
{
    if (_bufferedCount == 0)
    {
        await FillBuffer();
    }
    return _bufferedCount > 0;
}

in the common case, we expect there to be some data buffered, in which case this method simply checks _bufferedCount, sees that it’s larger than 0, and returns true; only if there’s currently no buffered data does it need to perform an operation that might complete asynchronously. And since there are only two possible Boolean results (true and false), there are only two possible Task<bool> objects needed to represent all possible result values, and so the runtime is able to cache two such objects and simply return a cached Task<bool> with a Result of true, avoiding the need to allocate. Only if the operation completes asynchronously does the method then need to allocate a new Task<bool>, because it needs to hand back the object to the caller before it knows what the result of the operation will be, and needs to have a unique object into which it can store the result when the operation does complete.
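
You can see this caching with a quick check like the following (a sketch; exactly which tasks get cached is a runtime implementation detail and may vary across versions):

using System;
using System.Threading.Tasks;

class Program
{
    // No awaits, so this async method completes synchronously and the async
    // method builder can hand back a cached, already-completed Task<bool>.
    static async Task<bool> AlwaysTrueAsync() => true;

    static void Main()
    {
        Task<bool> t1 = AlwaysTrueAsync();
        Task<bool> t2 = AlwaysTrueAsync();

        // Typically prints True: both calls return the same cached Task<bool>.
        Console.WriteLine(ReferenceEquals(t1, t2));
    }
}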

The runtime maintains a small such cache for other types as well, but it’s not feasible to cache everything. For example, a method like:

public async Task<int> ReadNextByteAsync()
{
    if (_bufferedCount == 0)
    {
        await FillBuffer();
    }

    if (_bufferedCount == 0)
    {
        return -1;
    }

    _bufferedCount--;
    return _buffer[_position++];
}

will also frequently complete synchronously. But unlike the Boolean case, this method returns an Int32 value, which has ~4 billion possible results, and caching a Task<int> for all such cases would consume potentially hundreds of gigabytes of memory. The runtime does maintain a small cache for Task<int>, but only for a few small result values, so for example if this completes synchronously (there’s data in the buffer) with a value like 4, it’ll end up using a cached task, but if it completes synchronously with a value like 42, it’ll end up allocating a new Task<int>, akin to calling Task.FromResult(42).

Many library implementations attempt to mitigate this further by maintaining their own cache as well. For example, the MemoryStream.ReadAsync overload introduced in the .NET Framework 4.5 always completes synchronously, since it’s just reading data from memory. ReadAsync returns a Task<int>, where the Int32 result represents the number of bytes read. ReadAsync is often used in a loop, often with the number of bytes requested the same on each call, and often with ReadAsync able to fully fulfill that request. Thus, it’s common for repeated calls to ReadAsync to return a Task<int> synchronously with the same result as it did on the previous call. As such, MemoryStream maintains a cache of a single task, the last one it returned successfully. Then on a subsequent call, if the new result matches that of its cached Task<int>, it just returns the cached one again; otherwise, it uses Task.FromResult to create a new one, stores that as its new cached task, and returns it.
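
As a rough sketch of that pattern, inside a MemoryStream-like Stream subclass (the _lastReadTask field is an illustrative name, not MemoryStream’s actual implementation), the caching looks something like this:

private Task<int> _lastReadTask; // the most recently returned task (illustrative field name)

public override Task<int> ReadAsync(byte[] buffer, int offset, int count, CancellationToken cancellationToken)
{
    if (cancellationToken.IsCancellationRequested)
    {
        return Task.FromCanceled<int>(cancellationToken);
    }

    try
    {
        // Reading from memory completes synchronously.
        int bytesRead = Read(buffer, offset, count);

        // Reuse the previously returned task if it carried the same result;
        // otherwise create a new one and remember it for the next call.
        Task<int> cached = _lastReadTask;
        return cached != null && cached.Result == bytesRead
            ? cached
            : (_lastReadTask = Task.FromResult(bytesRead));
    }
    catch (Exception e)
    {
        return Task.FromException<int>(e);
    }
}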

Even so, there are many cases where operations complete synchronously and are forced to allocate a Task<TResult> to hand back.

ValueTask<TResult> and synchronous completion

All of this motivated the introduction of a new type in .NET Core 2.0, also made available for previous .NET releases via the System.Threading.Tasks.Extensions NuGet package: ValueTask<TResult>.

ValueTask<TResult> was introduced in .NET Core 2.0 as a struct capable of wrapping either a TResult or a Task<TResult>. This means it can be returned from an async method, and if that method completes synchronously and successfully, nothing need be allocated: we can simply initialize this ValueTask<TResult> struct with the TResult and return that. Only if the method completes asynchronously does a Task<TResult> need to be allocated, with the ValueTask<TResult> created to wrap that instance (to minimize the size of ValueTask<TResult> and to optimize for the success path, an async method that faults with an unhandled exception will also allocate a Task<TResult>, so that the ValueTask<TResult> can simply wrap that Task<TResult> rather than always having to carry around an additional field to store an Exception).

With that, a method like MemoryStream.ReadAsync that instead returns a ValueTask<int> need not be concerned with caching, and can instead be written with code like:

public override ValueTask<int> ReadAsync(byte[] buffer, int offset, int count)
{
    try
    {
        int bytesRead = Read(buffer, offset, count);
        return new ValueTask<int>(bytesRead);
    }
    catch (Exception e)
    {
        return new ValueTask<int>(Task.FromException<int>(e));
    }
}

ValueTask<TResult> and asynchronous completion

Being able to write an async method that can complete synchronously without incurring an additional allocation for the result type is a big win. This is why ValueTask<TResult> was added to .NET Core 2.0, and why new methods that are expected to be used on hot paths are now defined to return ValueTask<TResult> instead of Task<TResult>. For example, when we added a new ReadAsync overload to Stream in .NET Core 2.1 in order to be able to pass in a Memory<byte> instead of a byte[], we made the return type of that method be ValueTask<int>. That way, Streams (which very often have a ReadAsync method that completes synchronously, as in the earlier MemoryStream example) can now be used with significantly less allocation.

However, when working on very high-throughput services, we still care about avoiding as much allocation as possible, and that means thinking about reducing and removing allocations associated with asynchronous completion paths as well.

With the await model, for any operation that completes asynchronously we need to be able to hand back an object that represents the eventual completion of the operation: the caller needs to be able to hand off a callback that’ll be invoked when the operation completes, and that requires having a unique object on the heap that can serve as the conduit for this specific operation. It doesn’t, however, imply anything about whether that object can be reused once an operation completes. If the object can be reused, then an API can maintain a cache of one or more such objects, and reuse them for serialized operations, meaning it can’t use the same object for multiple in-flight async operations, but it can reuse an object for non-concurrent accesses.

In .NET Core 2.1, ValueTask<TResult> was augmented to support such pooling and reuse. Rather than just being able to wrap a TResult or a Task<TResult>, a new interface was introduced, IValueTaskSource<TResult>, and ValueTask<TResult> was augmented to be able to wrap that as well. IValueTaskSource<TResult> provides the core support necessary to represent an asynchronous operation to ValueTask<TResult> in a similar manner to how Task<TResult> does:

public interface IValueTaskSource<out TResult>
{
    ValueTaskSourceStatus GetStatus(short token);
    void OnCompleted(Action<object> continuation, object state, short token, ValueTaskSourceOnCompletedFlags flags);
    TResult GetResult(short token);
}

GetStatus is used to satisfy properties like ValueTask<TResult>.IsCompleted, returning an indication of whether the async operation is still pending or whether it’s completed and how (success or not). OnCompleted is used by the ValueTask<TResult>’s awaiter to hook up the callback necessary to continue execution from an await when the operation completes. And GetResult is used to retrieve the result of the operation, such that after the operation completes, the awaiter can either get the TResult or propagate any exception that may have occurred.

Most developers should never have a need to see this interface: methods simply hand back a ValueTask<TResult> that may have been constructed to wrap an instance of this interface, and the consumer is none the wiser. The interface is primarily there so that developers of performance-focused APIs are able to avoid allocation.

There are several such APIs in .NET Core 2.1. The most notable are Socket.ReceiveAsync and Socket.SendAsync, with new overloads added in 2.1, e.g.

 public ValueTask<int> ReceiveAsync(Memory<byte> buffer, SocketFlags socketFlags, CancellationToken cancellationToken = default);

This overload returns a ValueTask<int>. If the operation completes synchronously, it can simply construct a ValueTask<int> with the appropriate result, e.g.

int result = …;
return new ValueTask<int>(result);

If it completes asynchronously, it can use a pooled object that implements this interface:

IValueTaskSource<int> vts = …;
return new ValueTask<int>(vts);

The Socket implementation maintains one such pooled object for receives and one for sends, such that as long as no more than one of each is outstanding at a time, these overloads will end up being allocation-free, even if they complete operations asynchronously. That’s then further surfaced through NetworkStream. For example, in .NET Core 2.1, Stream exposes:

public virtual ValueTask<int> ReadAsync(Memory<byte> buffer, CancellationToken cancellationToken);

which NetworkStream overrides. NetworkStream.ReadAsync just delegates to Socket.ReceiveAsync, so the wins from Socket translate to NetworkStream, and NetworkStream.ReadAsync effectively becomes allocation-free as well.

Non-generic ValueTask

When ValueTask<TResult> was introduced in .NET Core 2.0, it was purely about optimizing for the synchronous completion case, in order to avoid having to allocate a Task<TResult> to store the TResult already available. That also meant that a non-generic ValueTask wasn’t necessary: for the synchronous completion case, the Task.CompletedTask singleton could just be returned from a Task-returning method, and was returned implicitly by the runtime for async Task methods.

With the advent of enabling even asynchronous completions to be allocation-free, however, a non-generic ValueTask becomes relevant again. Thus, in .NET Core 2.1 we also introduced the non-generic ValueTask and IValueTaskSource. These provide direct counterparts to the generic versions, usable in similar ways, just with a void result.
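
As a small sketch of how a hot-path method might take advantage of that (the member names here are illustrative), the synchronous path can return a default ValueTask with no allocation at all, while the asynchronous path wraps the Task from the real implementation:

public ValueTask FlushIfNeededAsync()
{
    if (_bufferedCount == 0)
    {
        // Synchronous completion: a default ValueTask represents an already
        // successfully completed operation, with no allocation.
        return default;
    }

    // Asynchronous path: wrap the Task produced by the real async implementation.
    return new ValueTask(FlushCoreAsync());
}

private async Task FlushCoreAsync()
{
    await _stream.WriteAsync(_buffer, 0, _bufferedCount);
    _bufferedCount = 0;
}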

Implementing IValueTaskSource / IValueTaskSource<T>

Most developers should never need to implement these interfaces. They’re also not particularly easy to implement. If you decide you need to, there are several implementations internal to .NET Core 2.1 that can serve as a reference, such as the pooled objects Socket uses for the ReceiveAsync/SendAsync overloads discussed above.

To make this easier for developers who do want to do it, in .NET Core 3.0 we plan to introduce a ManualResetValueTaskSourceCore<TResult> type that encapsulates all of this logic: a struct that can be embedded in another object implementing IValueTaskSource<TResult> and/or IValueTaskSource, with that wrapper type simply delegating to the struct for the bulk of its implementation. You can learn more about this in the associated issue in the dotnet/corefx repo at https://github.com/dotnet/corefx/issues/32664.
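
As a rough sketch of what such a wrapper might look like, assuming the API shape discussed in that issue (this is not a shipped API at the time of writing), the IValueTaskSource<TResult> members simply forward to the embedded struct:

using System;
using System.Threading.Tasks;
using System.Threading.Tasks.Sources;

internal sealed class ReusableValueTaskSource<TResult> : IValueTaskSource<TResult>
{
    // Mutable struct that implements the bulk of the logic; do not mark it readonly.
    private ManualResetValueTaskSourceCore<TResult> _core;

    // Producer side: complete the current operation.
    public void SetResult(TResult result) => _core.SetResult(result);
    public void SetException(Exception error) => _core.SetException(error);

    // Called between operations so the same instance can be handed out again.
    public void Reset() => _core.Reset();

    // Expose the pending operation as a ValueTask<TResult> tied to the current version.
    public ValueTask<TResult> AsValueTask() => new ValueTask<TResult>(this, _core.Version);

    // IValueTaskSource<TResult> members delegate straight to the core struct.
    public ValueTaskSourceStatus GetStatus(short token) => _core.GetStatus(token);
    public TResult GetResult(short token) => _core.GetResult(token);
    public void OnCompleted(Action<object> continuation, object state, short token, ValueTaskSourceOnCompletedFlags flags) =>
        _core.OnCompleted(continuation, state, token, flags);
}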

Valid consumption patterns for ValueTasks

From a surface area perspective, ValueTask and ValueTask<TResult> are much more limited than Task and Task<TResult>. That’s ok, even desirable, as the primary method for consumption is meant to simply be awaiting them.

However, because ValueTask and ValueTask<TResult> may wrap reusable objects, there are actually significant constraints on their consumption when compared with Task and Task<TResult>, should someone veer off the desired path of just awaiting them. In general, the following operations should never be performed on a ValueTask / ValueTask<TResult>:

  • Awaiting a ValueTask / ValueTask<TResult> multiple times. The underlying object may have been recycled already and be in use by another operation. In contrast, a Task / Task<TResult> will never transition from a complete to incomplete state, so you can await it as many times as you need to, and will always get the same answer every time.
  • Awaiting a ValueTask / ValueTask<TResult> concurrently. The underlying object expects to work with only a single callback from a single consumer at a time, and attempting to await it concurrently could easily introduce race conditions and subtle program errors. It’s also just a more specific case of the above bad operation: “awaiting a ValueTask / ValueTask<TResult> multiple times.” In contrast, Task / Task<TResult> do support any number of concurrent awaits.
  • Using .GetAwaiter().GetResult() when the operation hasn’t yet completed. The IValueTaskSource / IValueTaskSource<TResult> implementation need not support blocking until the operation completes, and likely doesn’t, so such an operation is inherently a race condition and is unlikely to behave the way the caller intends. In contrast, Task / Task<TResult> do enable this, blocking the caller until the task completes.

If you have a ValueTask or a ValueTask<TResult> and you need to do one of these things, you should use .AsTask() to get a Task / Task<TResult> and then operate on that resulting task object. After that point, you should never interact with that ValueTask / ValueTask<TResult> again.

The short rule is this: with a ValueTask or a ValueTask<TResult>, you should either await it directly (optionally with .ConfigureAwait(false)) or call AsTask() on it directly, and then never use it again, e.g.

// Given this ValueTask<int>-returning method…
public ValueTask<int> SomeValueTaskReturningMethodAsync();
…
// GOOD
int result = await SomeValueTaskReturningMethodAsync();

// GOOD
int result = await SomeValueTaskReturningMethodAsync().ConfigureAwait(false);

// GOOD
Task<int> t = SomeValueTaskReturningMethodAsync().AsTask();

// WARNING
ValueTask<int> vt = SomeValueTaskReturningMethodAsync();
... // storing the instance into a local makes it much more likely it'll be misused,
    // but it could still be ok

// BAD: awaits multiple times
ValueTask<int> vt = SomeValueTaskReturningMethodAsync();
int result = await vt;
int result2 = await vt;

// BAD: awaits concurrently (and, by definition then, multiple times)
ValueTask<int> vt = SomeValueTaskReturningMethodAsync();
Task.Run(async () => await vt);
Task.Run(async () => await vt);

// BAD: uses GetAwaiter().GetResult() when it's not known to be done
ValueTask<int> vt = SomeValueTaskReturningMethodAsync();
int result = vt.GetAwaiter().GetResult();

There is one additional advanced pattern that some developers may choose to use, hopefully only after measuring carefully and finding it provides meaningful benefit. Specifically, ValueTask / ValueTask<TResult> do expose some properties that speak to the current state of the operation, for example the IsCompleted property returning false if the operation hasn’t yet completed, and returning true if it has (meaning it’s no longer running and may have completed successfully or otherwise), and the IsCompletedSuccessfully property returning true if and only if it’s completed and completed successfully (meaning attempting to await it or access its result will not result in an exception being thrown). For very hot paths where a developer wants to, for example, avoid some additional costs only necessary on the asynchronous path, these properties can be checked prior to performing one of the operations that essentially invalidates the ValueTask / ValueTask<TResult>, e.g. await, .AsTask(). For example, in the SocketsHttpHandler implementation in .NET Core 2.1, the code issues a read on a connection, which returns a ValueTask<int>. If that operation completed synchronously, then we don’t need to worry about being able to cancel the operation. But if it completes asynchronously, then while it’s running we want to hook up cancellation such that a cancellation request will tear down the connection. As this is a very hot code path, and as profiling showed it to make a small difference, the code is structured essentially as follows:

int bytesRead;
{
    ValueTask<int> readTask = _connection.ReadAsync(buffer);
    if (readTask.IsCompletedSuccessfully)
    {
        bytesRead = readTask.Result;
    }
    else
    {
        using (_connection.RegisterCancellation())
        {
            bytesRead = await readTask;
        }
    }
}

This pattern is acceptable, because the ValueTask<int> isn’t used again after either .Result is accessed or it’s awaited.

Should every new asynchronous API return ValueTask / ValueTask<TResult>?

In short, no: the default choice is still Task / Task<TResult>.

As highlighted above, Task and Task<TResult> are easier to use correctly than are ValueTask and ValueTask<TResult>, and so unless the performance implications outweigh the usability implications, Task / Task<TResult> are still preferred. There are also some minor costs associated with returning a ValueTask<TResult> instead of a Task<TResult>, e.g. in microbenchmarks it’s a bit faster to await a Task<TResult> than it is to await a ValueTask<TResult>, so if you can use cached tasks (e.g. your API returns Task or Task<bool>), you might be better off performance-wise sticking with Task and Task<bool>. ValueTask / ValueTask<TResult> are also multiple words in size, so when they are awaited and a field for them is stored in a calling async method’s state machine, they’ll take up a little more space in that state machine object.

However, ValueTask / ValueTask<TResult> are great choices when a) you expect consumers of your API to only await them directly, b) allocation-related overhead is important to avoid for your API, and c) either you expect synchronous completion to be a very common case, or you’re able to effectively pool objects for use with asynchronous completion. When adding abstract, virtual, or interface methods, you also need to consider whether these situations will exist for overrides/implementations of that method.

What’s Next for ValueTask and ValueTask<TResult>?

For the core .NET libraries, we’ll continue to see new Task / Task<TResult>-returning APIs added, but we’ll also see new ValueTask / ValueTask<TResult>-returning APIs added where appropriate. One key example of the latter is the new IAsyncEnumerator<T> support planned for .NET Core 3.0. IEnumerator<T> exposes a bool-returning MoveNext method, and the asynchronous IAsyncEnumerator<T> counterpart exposes a MoveNextAsync method. When we initially started designing this feature, we thought of MoveNextAsync as returning a Task<bool>, which could be made very efficient via cached tasks for the common case of MoveNextAsync completing synchronously. However, given how wide-reaching we expect async enumerables to be, given that they’re based on interfaces that could end up with many different implementations (some of which may care deeply about performance and allocations), and given that the vast, vast majority of consumption will be through await foreach language support, we switched to having MoveNextAsync return a ValueTask<bool>. This allows the synchronous completion case to be fast, but also lets optimized implementations use reusable objects to make the asynchronous completion case low-allocation as well. In fact, the C# compiler takes advantage of this when implementing async iterators to make them as allocation-free as possible.
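
For reference, this is roughly the shape being discussed (a sketch of the planned interfaces; the details may shift before .NET Core 3.0 ships):

public interface IAsyncEnumerable<out T>
{
    IAsyncEnumerator<T> GetAsyncEnumerator();
}

public interface IAsyncEnumerator<out T> : IAsyncDisposable
{
    // ValueTask<bool> keeps the common "another item is already buffered" case
    // allocation-free, while still allowing pooled IValueTaskSource<bool>-backed
    // implementations for the asynchronous case.
    ValueTask<bool> MoveNextAsync();

    T Current { get; }
}

public interface IAsyncDisposable
{
    ValueTask DisposeAsync();
}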

Static Data Masking for Azure SQL Database and SQL Server


The SQL Security team is pleased to share the public preview release of Static Data Masking. Static Data Masking is a data protection feature that helps users sanitize sensitive data in a copy of their SQL databases.  

Static Data Masking

Use cases

Static Data Masking is designed to help organizations create a sanitized copy of their databases where all sensitive information has been altered in a way that makes the copy sharable with non-production users. Static Data Masking can be used for:

 
  • Development and testing
  • Analytics and business reporting
  • Troubleshooting
  • Sharing the database with a consultant, a research team, or any third-party
 

Static Data Masking facilitates compliance with security requirements such as the separation between production and dev/test environments. For organizations subject to GDPR, the feature is a convenient tool to remove all personal information while preserving the structure of the database for further processing.

How Static Data Masking works

With Static Data Masking, the user configures how masking operates for each column selected inside the database. Static Data Masking will then replace data in the database copy with new, masked data generated according to that configuration. Original data cannot be unmasked from the masked copy. Static Data Masking performs an irreversible operation.

In the example below, all entries in the FirstName column have been nullified. The LastName column contains randomly generated strings. In the EmailAddress column, names have been replaced with randomly generated strings, but the domain extension has been maintained. A similar approach applies to the Phone column, where the area code has been preserved but the last seven digits have not.

Before and after

To learn more about Static Data Masking, please refer to our documentation.

Static Data Masking vs. Dynamic Data Masking

Data masking is the process of applying a mask on a database to hide sensitive information and replace it with new or scrubbed data. Microsoft offers two masking options: Static Data Masking and Dynamic Data Masking.

Static Data Masking

  • Happens on a copy of the database
  • Original data not retrievable
  • Mask occurs at the storage level
  • All users have access to the same masked data

Dynamic Data Masking

  • Happens on the original database
  • Original data intact
  • Mask occurs on-the-fly at query time
  • Mask varies based on user permission

How to download Static Data Masking

Static Data Masking ships with SQL Server Management Studio 18.0. The latest preview of SQL Server Management Studio 18.0 is available for download today.

Compatibility

Static Data Masking is compatible with SQL Server (SQL Server 2012 and newer), Azure SQL Database (DTU and vCore-based hosting options, excluding Hyperscale), and SQL Server on Azure Virtual Machines.

The team is actively looking for feedback so please do share your thoughts at static-data-masking@microsoft.com.

The importance of Azure Stack for DevOps


This blog post was co-authored by Steve Buchanan, Cloud & Datacenter MVP.

DevOps focuses on aligning culture, people, processes, and technology. It is sometimes thought that the technology part does not play a critical role in DevOps. This is wrong! Tools and technology help facilitate DevOps methodology and processes. Having the wrong tools and technology when trying to roll out DevOps can make it a challenge and can even become a blocker. Cloud platforms enable DevOps and are often the catalyst for rolling out DevOps. A recent Gartner report says that 75 percent of organizations plan to pursue a hybrid cloud strategy. An organization that implements a hybrid cloud strategy will need a consistent DevOps model across both an on-premises and public cloud. Microsoft Azure Stack extends Azure cloud services and capabilities to the on-premises environment, which is why it is so valuable for DevOps. Now let's dive in to see what DevOps on Azure Stack looks like.

Azure Stack and Azure give you the ability to stand up a hybrid continuous integration/continuous delivery (CI/CD) pipeline. With hybrid CI/CD, workloads can land on either an on-premises or a public cloud, and they can be moved. Code that’s written for Azure Stack is interchangeable, so apps and services can be developed in a consistent way across public and on-premises clouds.

Azure Stack offers IaaS, PaaS, SQL as a service, container technology, microservice technology, and IoT capabilities. Services that exist on Azure PaaS and also exist on Azure Stack include App Service, Azure Functions, Service Fabric, and Azure Kubernetes Service (AKS). These services are often used when developing and running modern applications, which are typically developed and maintained by DevOps teams using CI/CD. Azure Stack supports the same non-Microsoft and Microsoft-based DevOps tooling that Azure does, including Git, GitHub, Visual Studio, Bitbucket, OneDrive, and Dropbox. You can also use other DevOps tooling with Azure Stack, such as GitLab, Octopus Deploy, Jenkins, and many more. Below is an example of a hybrid cloud CI/CD pipeline.

Diagram: example of a hybrid cloud CI/CD pipeline.

Azure Stack also serves as a good hybrid platform for infrastructure as code (IaC) and configuration as code (CaC). IaC is often facilitated through Azure Resource Manager (ARM), which can deploy infrastructure for both IaaS and PaaS. For IaaS, ARM can deploy the infrastructure needed by applications, such as virtual machines (VMs), networking, and storage. You can also use non-Microsoft IaC tooling such as Terraform, which was recently announced to support Azure Stack. By taking advantage of CaC tools, you will be able to further configure servers and the applications running on those servers. For CaC, you can use Microsoft and non-Microsoft tools such as Desired State Configuration (Microsoft-based) or Chef, Puppet, and Ansible (non-Microsoft).

In my experience of managing environments, IaC on-premises has usually been a challenge. This has been a challenge even with the introduction of the hypervisor because of having to rely on scripting that was not initially designed for server management. PowerShell made this easier on the server level for sure, but automating the network and storage required other methods. Likewise, CaC became a little easier with the introduction of tools like Chef, Puppet, and PowerShell DSC.

Having a true cloud such as Azure Stack on-premises is exciting because it is complemented by ARM on-premises. I can take advantage of ARM on-premises to automate at all three levels: server, storage, and networking. Having the combined power of ARM with CaC tools completes the picture. Being able to have an actual end-to-end cloud on-premises, just as we do in the public cloud, brings developers and IT pros closer to the IT nirvana we have all been waiting for.

In addition to the hybrid CI/CD with Azure Stack, you can share other services across Azure and Azure Stack for a full hybrid cycle. These services include Azure Active Directory for identity, VPN/ExpressRoute for connectivity, and Azure Site Recovery for DR.

A common use of hybrid CI/CD is to use either Azure or Azure Stack as the test environment and the other as the production environment. For example, you can develop and deploy a web application to Azure App Service for testing and move it to App Service running on Azure Stack for production when ready. You can do something similar when you have a containerized application running on Kubernetes. The containers can be developed locally, pushed to Azure Stack when ready for testing, and then pushed to Azure for production, all running on Kubernetes the entire time without any changes.

You can deploy a set of IaaS web server virtual machines, blob storage, a SQL database, virtual network, subnet, and a load balancer to Azure for development, while also deploying to Azure Stack at the same time for production. Then you can deploy the web app code to IIS and a SQL DAC package to Azure SQL on Azure and Azure Stack.

In my current role, I am fortunate to get the opportunity to help organizations that are looking to digitally transform but are not able to put all their workloads in a public cloud. This is no longer a blocker because now we can bring the cloud platform that will foster DevOps directly into their datacenters.

Currently, I am in the early stages of helping an organization architect a hybrid app that spans on-premises (Azure Stack) and public cloud (Azure) environments. This will enable the organization to deploy most of the application in a public cloud while leaving highly regulated parts of the application on-premises. The beauty here is that it’s the same exact platform across both and the developers only have to write the code once.

As we’ve seen, Azure Stack is valuable for DevOps because it ties on-premises and public cloud environments together. This gives organizations the ultimate agility in landing modern applications where they want them to go, as well as the ability to move those modern applications as needed.

Get started today

Try DevOps on Azure Stack for yourself with these helpful links:

The smart way to a smarter city: Three critical considerations


The concept of smart buildings is generating lots of excitement, but what does the transformation to intelligent cities look like at ground level for those in charge of making it happen? This post is the first in a series of three illustrating how Internet of Things technology can change urban areas for the better, both for those who run them and those who live in them. First up, a rundown of the most common barriers to adoption for administrators embarking on the journey to a smarter city.

From traffic management to sewer maintenance to building management and infrastructure, cities around the world are using the Internet of Things (IoT) to improve the lives of citizens and visitors. Networks of sensors connected to the cloud provide insights that make urban areas safer, more adaptive, and more economically robust. McKinsey estimates that the economic impact of IoT in cities could be as much as $1.6 trillion per year by 2025.

However, creating genuine impact with IoT isn’t easy. Cities are complex places, with layers of different IT systems; citizens, data, and infrastructure whose security needs protection; inflexible budgets; and intermeshing operations. Choosing the right platform is a lot more complicated when you don’t have the luxury of ripping out and replacing existing physical and digital infrastructure.

Smart city research from IDC, performed in partnership with Microsoft, highlights many ways metro area administrators can overcome these challenges to realize the incredible potential of IoT. Let’s take a look at three big themes that emerge.

1. Interoperability: Getting new and old IT talking

IDC’s research shows that city departments account for over 40 percent of smart city project decision-making and 30 percent of project funding. This siloed management approach is a real drawback, when 40 to 60 percent of the value of IoT solutions lies in interoperability.

Breaking through these barriers is one of the first hurdles you’ll have to clear on your way to creating a responsive, technology-enabled city. For example, real-time data from smart traffic lights and tollbooths can help cities improve planning and reduce commute times. Combine this with data from other systems—emergency services, road maintenance, event management, or even social media—and now you’re talking about transformative insights and efficiencies. Automatically call in cleaning crews when the football team wins a big game. Dynamically price parking and tolls in response to significant events. Give multiple departments access to video feeds so they can effectively respond to incidents.

In the past, this type of interoperability could require months of effort just to get proprietary, closed systems to play nicely together. The right technology choices make this much easier. Cloud-based IoT solutions built on open standards can accommodate many divergent technologies. Using application programming interfaces (APIs), programs can be made to talk to each other without rearchitecting them. In addition, using a common data schema can make incorporating multiple sources easier. Many cities are also creating the role of Chief Digital Officer to oversee the moving parts involved and drive collaboration and teamwork.

2. Security: Safer is smarter

Sharing data across networks requires a new focus on security. As the Harvard Business Review puts it, “as local governments pursue smart initiatives, realizing the full potential of these digitally connected communities starts with implementing cybersecurity best practices from the ground up.” This means not only protecting personal information but preventing hackers from gaining access to critical systems that control power, water, emergency services, and so on.

Cloud-based IoT can help address these issues by providing a platform built and managed with security in mind. Look for a solution that offers tools for secure device provisioning and management, data connectivity, and cloud processing and storage. Knowing these tools are baked in gives city leaders peace of mind and the freedom to focus on innovation.

3. Trust: For the public good

IoT technology will go nowhere without gaining public trust. The stakes are high for citizens, who rely on government systems and services for their safety and well-being. If they don’t have confidence in new systems, they won’t embrace them. (This is especially true when it comes to handling crises. Look for the second article in this series to see how cities are planning for disasters and improving emergency response with smart city tech.)

Bring the community into the process of IoT adoption early. This can reduce project risk by ensuring the solution will meet their needs. The public brings invaluable ideas to the table—insights rooted in how they live and use the city. Plus, high community engagement will drive broad awareness and support for smart city solutions.

Building buy-in is especially important to realizing the potential for citizen engagement. As McKinsey puts it in its article Smart cities: Digital solutions for a more livable future, “Above all, the people who live and work in a city should play a role in shaping its future. Digitization is shifting power to consumers in industry after industry, and the same pattern is emerging in smart cities.” IoT plays a role by empowering people to make data-driven decisions on a day-to-day basis.

The global competition for talent, investment, and tourism isn’t getting any easier. Cities are investing in IoT to become cleaner, safer, and more enjoyable places to live even as they grow. The choices they make today will have long-lasting consequences for the people they serve. Microsoft is committed to helping cities become more prosperous, sustainable, and inclusive through the application of intelligent cloud and edge technology. Want to know how? Come talk to us at Smart City Expo or learn about our CityNext initiative.

Tips and tricks for migrating on-premises Hadoop infrastructure to Azure HDInsight


Today, we are excited to share our tips and tricks series on how to migrate on-premises Hadoop infrastructure to Microsoft Azure HDInsight.

Every day, thousands of customers run their mission-critical big data workloads on Azure HDInsight. Many of our customers migrate workloads to HDInsight from on-premises deployments due to its enterprise-grade capabilities and support for open source workloads such as Hive, Spark, Hadoop, Kafka, HBase, Phoenix, Storm, and R.


This six-part guide takes you through the migration process: it not only shows you how to move your Hadoop workloads to Azure HDInsight, but also shares best practices for optimizing your architecture, infrastructure, storage, and more. This guide was written in collaboration with the Azure Customer Advisory team, based on a wealth of experience helping many customers with Hadoop migrations.

About HDInsight

Azure HDInsight is an easy, cost-effective, enterprise-grade service for open source analytics that enables customers to easily run popular open source frameworks including Apache Hadoop, Spark, Kafka, and others. The service is available in 27 public regions and Azure Government Clouds in the US and Germany. Azure HDInsight powers mission-critical applications in a wide variety of sectors and enables a wide range of use cases including ETL, streaming, and interactive querying.

Additional resources

AzureR: R packages to control Azure services


by Hong Ooi, senior data scientist, Microsoft Azure

This post is to announce a new family of packages we have developed as part of the CloudyR project for talking to Azure from R: AzureR.

As background, some of you may remember the AzureSMR package, which was written a few years back as an R interface to Azure. AzureSMR was very successful and gained a significant number of users, but it was never meant to be maintainable in the long term. As more features were added it became more unwieldy until its design limitations became impossible to ignore.

The AzureR family is a refactoring/rewrite of AzureSMR that aims to fix the earlier package's shortcomings.

The core package of the family is AzureRMR, which provides a lightweight yet powerful interface to Azure Resource Manager. It handles authentication (including automatically renewing when a session token expires), managing resource groups, and working with individual resources and templates. It also calls the Resource Manager REST API directly, so you don't need to have PowerShell or Python installed; it depends only on commonly used R packages like httr, jsonlite and R6.

Here's what some code using AzureRMR looks like. Note that it uses R6 classes to represent Azure objects, hence the use of $ to call methods.

library(AzureRMR)

az <- az_rm$new(tenant="{tenant_id}",
    app="{app_id}",
    password="{password}")

# get a subscription and resource group
sub <- az$get_subscription("{subscription_id}")
rg <- sub$get_resource_group("rgName")

# get a resource (storage account)
stor <- rg$get_resource(type="Microsoft.Storage/storageAccounts",
    name="myStorage")

# method chaining works too
stor <- az$
    get_subscription("{subscription_id}")$
    get_resource_group("rgName")$
    get_resource(type="Microsoft.Storage/storageAccounts",
        name="myStorage")

# create a new resource group and resource
rg2 <- sub$create_resource_group("newRgName", location="westus")

stor2 <- rg2$create_resource(type="Microsoft.Storage/storageAccounts",
    name="myNewStorage",
    kind="Storage", 
    sku=list(name="Standard_LRS", tier="Standard"))

# delete them
stor2$delete(confirm=FALSE)
rg2$delete(confirm=FALSE)

You can use AzureRMR to work with any Azure service, but a key idea behind it is that it's extensible: you can write new packages that provide extra functionality relevant to specific Azure offerings. This deals with one of the main shortcomings of AzureSMR: as a monolithic package, it became steadily more difficult to add new features as time went on. The packages that extend AzureRMR in this way constitute the AzureR family.

Currently, the AzureR family includes the following packages:

  • AzureVM is a package for working with virtual machines and virtual machine clusters. It allows you to easily deploy, manage and delete a VM from R. A number of templates are provided based on the Data Science Virtual Machine, or you can also supply your own template.
  • AzureStor is a package for working with storage accounts. In addition to a Resource Manager interface for creating and deleting storage accounts, it also provides a client interface for working with the storage itself. You can upload and download files and blobs, list file shares and containers, list files within a share, and so on. It supports authenticated access to storage via either a key or a shared access signature (SAS).
  • AzureContainers is a package for working with containers in Azure: specifically, it provides an interface to Azure Container Instances, Azure Container Registry and Azure Kubernetes Service. You can easily create an image, push it to an ACR repo, and then deploy it as a service to AKS. It also provides lightweight shells to docker, kubectl and helm (assuming these are installed).

I'll be writing future blog posts to describe each of these in further detail.

All the AzureR packages are part of the CloudyR Project, which aims to make R work better with cloud services of all kinds. They are not (yet) on CRAN, but you can obtain them with devtools::install_github.

library(devtools)
install_github("cloudyr/AzureRMR")
install_github("cloudyr/AzureVM")
install_github("cloudyr/AzureStor")
install_github("cloudyr/AzureContainers")

If you use Azure with R, I hope you'll find these packages to be of significant benefit to your workflow. If you have any questions or comments, please contact me at hongooi@microsoft.com.


XAML Islands – A deep dive – Part 2


Welcome to the second post of our Xaml Islands deep dive adventure! In the first blog post, we talked a little bit about the history of this amazing feature, how the Xaml Islands infrastructure works and how to use it, and also a little bit about how you can leverage binding in your Island controls.

In this second blog post, we’ll take a quick look at how to use the wrapper NuGet packages and how to host your custom controls inside Win32 apps.

Wrappers

Creating custom wrappers around UWP controls can be a cumbersome task, and you probably don’t want to do that. For simple things such as Buttons, it should be fine, but the moment you want to wrap complex controls, it can take a long time. To make things a little bit less complicated, some of our most requested controls are already wrapped for you! The current iteration brings you the InkCanvas, the InkToolbar, the MapControl and the MediaPlayerElement. So now, if your WPF app is running on a Windows 10 machine, you can have the amazing and easy-to-use UWP InkCanvas with an InkToolbar inside your WPF App (or WinForms)! You could even use the InkRecognizer to detect shapes, letters and numbers based on the strokes of that InkCanvas.

How much code does it take to integrate with the InkCanvas? Not much at all!


<Window
...
xmlns:uwpControls="clr-namespace:Microsoft.Toolkit.Wpf.UI.Controls;assembly=Microsoft.Toolkit.Wpf.UI.Controls">

<Grid>
    <Grid.RowDefinitions>
        <RowDefinition Height="Auto"/>
        <RowDefinition Height="*"/>
    </Grid.RowDefinitions>
    <uwpControls:InkToolbar TargetInkCanvas="{x:Reference Name=inkCanvas}"/>
    <uwpControls:InkCanvas Grid.Row="1" x:Name="inkCanvas" />
</Grid>
</Window>

Most of it is just the Grid definition so, in fact, we added very few lines of code (only 2). And that would give your users an amazing experience that is enabled by XAML Islands and the new UWP Controls.

Xaml Islands and Ink Canvas.

Custom Control – Managed Code

Everything I explained so far is for platform controls, but what if you want to wrap your own custom UWP UserControl and load it using WindowsXamlHost? Would it work? Yes! XAML controls, when instantiated in the context of an Island, handle resources in a very smart way, meaning that the ms-appx protocol just works, even if you are not running your Win32 process inside a packaged APPX. The root of the ms-appx protocol will map its path to your executable path.

As of right now, you can’t just create a UWP Library and reference it in your WPF or WinForms project, so the whole process of using a custom control is manual. When you develop a UWP app (C#, for example), you are compiling against a UWP flavor of the .NET Core Framework, not the full .NET Framework. In order for your custom control to work in a WPF or WinForms app that is based on the full .NET Framework, you must recompile the artifacts of the UWP Library using the full .NET Framework toolset, by copying them to your WPF/WinForms project. There is very good documentation about this right here that describes all the necessary steps. Remember that your WPF/WinForms project does not target, by default, any specific Windows 10 version, so you need to manually add references to some WinMD and DLL files. Again, this is all covered in Enhance your desktop application for Windows 10, which describes how to use Windows 10 APIs in your Desktop Bridge Win32 app. By referencing the WinMDs and DLLs, you will also be able to build these compilation artifacts from the UWP Library in the WPF/WinForms project (full .NET Framework).

NOTE: There is a whole different process for native code (C++/WinRT), which I’m not going to get into in this blog post.

You also can’t build these artifacts as-is. You need to inform the build system to disable type information reflection and x:Bind diagnostics. That’s because the generated code won’t be compatible with the .NET Framework. You can make it work by adding these properties to your UWP Library project:


<PropertyGroup>
  <EnableTypeInfoReflection>false</EnableTypeInfoReflection>
  <EnableXBindDiagnostics>false</EnableXBindDiagnostics>
</PropertyGroup>

Now, you could just manually copy the required files to the WPF/WinForms project, but then you would have multiple copies of them. You can automate that process with a post-build step, just as the documentation does. If you do it that way, though, it will not work if you try to pack your app inside an APPX, because the files will not get copied. To improve on that, I created a custom MSBuild snippet that does it for you. The advantage of the MSBuild snippet is that it adds the C# files, as well as the compilation outputs from the library, all in the right place. All you must do is copy this script and it will just work.

NOTE: Keep in mind that this will be handled by Visual Studio in the future, so you’ll have to remove whichever workaround you chose once that happens.

This is the snippet:


  <PropertyGroup>
    <IslandPath Condition="$(IslandPath) == ''">..\$(IslandLibrary)</IslandPath>
    <IslandDirectoryName>$([System.IO.Path]::GetFileName($(IslandPath.TrimEnd('\'))))</IslandDirectoryName>
  </PropertyGroup>
  <ItemGroup>
    <IslandLibraryCompile Include="$(IslandPath)**\*.xaml.cs;$(IslandPath)obj\$(Configuration)\**\*.g.cs;$(IslandPath)obj\$(Configuration)\**\*.g.i.cs"/>
  </ItemGroup>
  <ItemGroup>
    <Compile Include="@(IslandLibraryCompile)">
      <LinkBase>$(IslandDirectoryName)\%(RecursiveDir)</LinkBase>
    </Compile>
  </ItemGroup>
  <ItemGroup>
    <IslandLibraryContent Include="$(IslandPath)**\*.*" Exclude="$(IslandPath)**\*.user;$(IslandPath)**\*.csproj;$(IslandPath)**\*.cs;$(IslandPath)**\*.xaml;$(IslandPath)**\obj\**;$(IslandPath)**\bin\**"/>
    <IslandLibraryContent Include="$(IslandPath)obj\$(Configuration)\**\*.xbf"/>
  </ItemGroup>
  <ItemGroup>
    <None Include="@(IslandLibraryContent)">
      <Link>$(IslandDirectoryName)\%(RecursiveDir)%(Filename)%(Extension)</Link>
      <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
    </None>
  </ItemGroup>

This MSBuild snippet copies files, based on the IslandLibrary property path, into the project where the snippet resides. The IslandLibraryCompile item includes:

  • All the .xaml.cs files. These let you reuse the code-behind of your custom controls.
  • All the generated .g.cs and .g.i.cs files. Everything you do under the “x:” prefix is actually generated code, and these files are where that generated code ends up. They are the partial classes that hold the fields for all the x:Name elements in their corresponding XAML files, along with the code that connects those fields to their actual instances. They also reference the .xaml file that is loaded whenever the InitializeComponent method is called, usually at the beginning of the control’s constructor. You can treat these files as a black box; it is interesting to understand what is inside them, though not necessarily how they work.

The IslandLibraryContent includes:

  • All the content files of your project. This copies all the files required for your project to run, like PNGs, JPGs, etc. It already copies them to the right folders so ms-appx:/// URIs will “just work”™. There are better ways of doing this, but this covers the basic needs of the most common scenarios.
  • All the generated .xbf files. XBF stands for XAML Binary Format; it is a compiled version of your .xaml files that loads much faster (no XML parsing, for example). Even though the .g.i.cs files might look like they try to load the .xaml files, the XAML infrastructure always tries to load the .xbf files first, for performance, and only falls back to the .xaml files if it can’t find them. This MSBuild script does not copy the .xaml files since they bring no advantage compared to the .xbf files.

To make sure that your developer experience is optimal, you also have to add a solution-level project dependency from the WPF/WinForms project to the UWP Library project. This way, whenever you change any of the UWP Library’s files, you can just build the WPF/WinForms project and the newest artifacts will already be in place, in the correct order of project compilation. All these steps are going away in a future version of Visual Studio, when the tooling gets updated. These steps are described here in the documentation.

With these files included in the project’s build infrastructure and with the build dependency added, your WindowsXamlHost should work just fine if you set its InitialTypeName to your custom control’s fully qualified name. You can check out the sample project here.
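
For reference, here is a minimal sketch of what hosting the control from WPF code-behind could look like. The library name (MyUwpLibrary), control name (MyUserControl), and the RootGrid element are placeholder assumptions, not names from the sample project:

using System.Windows;
using Microsoft.Toolkit.Wpf.UI.XamlHost;

public partial class MainWindow : Window
{
    public MainWindow()
    {
        InitializeComponent();

        // Point the host at the custom control's fully qualified name.
        var host = new WindowsXamlHost
        {
            InitialTypeName = "MyUwpLibrary.MyUserControl" // hypothetical UWP control
        };

        // ChildChanged fires once the wrapped UWP control has been instantiated;
        // grab the instance here if you need to set properties or hook up events.
        host.ChildChanged += (sender, args) =>
        {
            var uwpControl = ((WindowsXamlHost)sender).Child;
            // Cast uwpControl to your control's type as needed.
        };

        RootGrid.Children.Add(host); // RootGrid is assumed to exist in MainWindow.xaml
    }
}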

With this MSBuild snippet, even apps packaged with the “Windows Application Packaging Project” template should work. If you want to know more, check out this blog post.

October 2018 Limitations

Again, this release is a preview, so nothing you see here is production-ready code. Several capabilities are not in place yet; to name a few:

  • Wrapped Controls properly responding to changes in DPI and scale.
  • Accessibility tools that work seamlessly across the application and hosted controls.
  • Inline inking, @Places, and @People for input controls.

For a complete list, check the docs.

What’s next?

The version just released is a preview, not the final stable version. We’re still actively working on improving XAML Islands. We would love for you to test out the product and provide feedback on UserVoice or at XamlIslandsFeedback@microsoft.com, but we are not currently recommending it for production use.

The post XAML Islands – A deep dive – Part 2 appeared first on Windows Developer Blog.

A preview of UX and UI changes in Visual Studio 2019

Azure Marketplace new offers – Volume 24

We continue to expand the Azure Marketplace ecosystem. From October 1 to October 15, 2018, 31 new offers successfully met the onboarding criteria and went live. See details of the new offers below:

Virtual machines

Bocada Backup Reporting Automation Software

Bocada Backup Reporting Automation Software: Bocada’s agentless, single-pane backup reporting automation solution gives data protection managers and backup administrators a seamless way to oversee backup health across on-premises and cloud environments.

DocuTRAK 3.5

DocuTRAK 3.5: DocuTRAK 3.5 is an out-of-the-box enterprise correspondence management and documents workflow solution.

Foglight for Virtualization Enterprise Edition 8.8

Foglight for Virtualization Enterprise Edition 8.8: Visualize, analyze, and optimize your virtual infrastructure. Foglight delivers controlled optimization that promotes capacity planning to expose the impact of planned, VMware-initiated, and user-invoked changes.

IBM DataPower Virtual Edition 7.7

IBM DataPower Virtual Edition 7.7: Rent IBM DataPower Gateway Virtual Edition on Microsoft Azure by the hour for production or non-production environments.

IBM WebSphere Application Server Liberty 18.0

IBM WebSphere Application Server Liberty 18.0: Build and deploy awesome applications with the latest version of WebSphere Liberty. Built for developers yet ready for production, it’s a combination of IBM technology and open-source software, with fast startup times and more.

IBM WebSphere MQ 9.1

IBM WebSphere MQ 9.1: This is a pre-configured image of IBM WebSphere MQ, IBM's messaging integration middleware. It's simple to install and easily integrates with your current infrastructure.

IPM+ Enterprise Energy Management Platform

IPM+ Enterprise Energy Management Platform: IPM+ enables software-based energy metering and energy savings in your computing and datacenter environments. IPM+ has been endorsed by the corporate executive community of Fortune 500 customers.

IZAR@NET

IZAR@NET: IZAR@NET gives water and energy providers a flexible solution for remotely reading consumption meters and supplying data analytics for smart cities.

JAMS V7 (BYOL) - Server 2016

JAMS V7 (BYOL) - Server 2016: JAMS is an enterprise batch scheduling and workload automation solution. Automate jobs on Windows, Linux, UNIX, z/OS, System I, and OpenVMS, with support for jobs running on databases, ERPs, CRMs, scripted processes, and more.

Leap Orbit Storage-backed SFTP Appliance

Leap Orbit Storage-backed SFTP Appliance: This SFTP appliance enables users to directly transfer files to a configured Azure storage account. This service empowers users to migrate from legacy SFTP processes to modern cloud processes.

Matillion ETL for Snowflake

Matillion ETL for Snowflake: With just a few clicks, build your jobs in Matillion to facilitate your data loads into Snowflake from more than 40 sources, including Azure Blob storage and Microsoft SQL relational databases.

Modern Requirements4TFS 2018 Update 2.1

Modern Requirements4TFS 2018 Update 2.1: Extend Visual Studio Team Services and Team Foundation Server. Simulation4TFS helps users manage project requirements visually by creating mockups, while Traceability4TFS helps users track changes made to work items.

Modernization Platform as a Service (ModPaaS)

Modernization Platform as a Service (ModPaaS): ModPaaS provides a collaborative platform for legacy application modernization. ModPaaS can be used in a self-service manner, with assistance from Modern Systems specialists, or in a fully managed approach.

Objectivity D*ChoC

Objectivity D*ChoC: In many systems, it is important to know where the data came from, who acted upon the data, and the validity of any derived data. D*ChoC gives you the ability to measure the accuracy of the decision-making process.

RSK Orchid MainNet Beta

RSK Orchid MainNet Beta: Add a new node to the RSK Orchid MainNet Beta Network. Once the instance is started, the node will be automatically configured and join the network.

Session Border Controllers (SBC) for Azure

Session Border Controllers (SBC) for Azure: Looking to enable Microsoft Teams direct routing or connect SIP trunks to Skype for Business Server? AudioCodes’ Mediant Session Border Controllers deliver connectivity, security, and quality assurance for VoIP networks.

SQL Server 2017 Ent w/ VulnerabilityAssessment

SQL Server 2017 Ent w/ VulnerabilityAssessment: This image includes many database engine features and performance improvements. Utilities like SQL vulnerability assessment through SQL Server Management Studio, VS Code, and FTP client have been provided.

SQL Server 2017 Web with Vulnerability Assessment

SQL Server 2017 Web with Vulnerability Assessment: This image includes many database engine features and performance improvements. Utilities like SQL vulnerability assessment through SQL Server Management Studio, VS Code, and FTP client have been provided.

Web applications

CentOS 7.4 Hardened - Antivirus & Auto Updates

CentOS 7.4 Hardened - Antivirus & Auto Updates: CentOS 7.4 (Community Enterprise Operating System) is a Linux distribution that aims to provide a free, enterprise-class, community-supported computing platform functionally compatible with its upstream source.

CMS Made Simple on Ubuntu 14.04 LTS

CMS Made Simple on Ubuntu 14.04 LTS: CMS Made Simple is an open-source package built using PHP that provides website developers with a simple utility for building semi-static websites.

Confidential Compute VM Deployment

Confidential Compute VM Deployment: Protect your data while it's in use by running secure enclaves on DC-series Azure virtual machines. This security capability provides the missing piece for full data protection at rest, in transit, and in use.

E-mail Converter

E-mail Converter: E-mail Converter for Microsoft Azure and SharePoint automatically converts e-mails to SharePoint list items so that teams can collaborate on them.

Gradle on Ubuntu 14.04 LTS

Gradle on Ubuntu 14.04 LTS: Gradle is an open-source build automation system that introduces a Groovy-based domain-specific language instead of the XML form used by Apache Maven for declaring the project configuration.

Joomla on Ubuntu 16.04 LTS-Auto Updates + Antivirus

Joomla on Ubuntu 16.04 LTS-Auto Updates + Antivirus: Joomla is an open-source content management system for publishing web content. It is built on a model-view-controller web application framework that can be used independently of the CMS.

Kong Cluster

Kong Cluster: Kong is an open-source API gateway and microservice management layer. It is designed for high availability, fault tolerance, and distributed systems.

Striim

Striim: The Striim platform is an enterprise-grade streaming data integration solution. Striim makes it easy to continuously ingest and process high volumes of streaming data from diverse sources, whether on-premises or in the cloud.

Striim for Real-Time Data Integration to CosmosDB

Striim for Real-Time Data Integration to CosmosDB: Striim for Azure CosmosDB simplifies the real-time collection and movement of data from a wide variety of on-premises sources, including enterprise databases via log-based change data capture (CDC), into Azure Storage.

Striim for Real-Time Integration to PostgreSQL

Striim for Real-Time Integration to PostgreSQL: This solution enables Azure PostgreSQL customers to quickly build streaming data pipelines to nonintrusively and continuously ingest data, transforming and enriching the data to deliver into PostgreSQL.

Wordpress on Ubuntu 14.04 LTS

Wordpress on Ubuntu 14.04 LTS: WordPress is a free and open-source content management system based on PHP and MySQL. Features include a plugin architecture and a template system.

Container solutions

Kubeapps Dashboard Container Image

Kubeapps Dashboard Container Image: Kubeapps is a web-based UI for deploying and managing applications in Kubernetes clusters.

SonarQube Container Image

SonarQube Container Image: SonarQube is an open-source tool for continuous code quality that performs automatic reviews of code to detect bugs and vulnerability issues for more than 20 programming languages.

Announcing ML.NET 0.7 (Machine Learning .NET)

ML.NET icon

We’re excited to announce the release of ML.NET 0.7 – the latest version of the cross-platform and open source machine learning framework for .NET developers (ML.NET 0.1 was released at //Build 2018). This release focuses on enabling better support for recommendation-based ML tasks, enabling anomaly detection, enhancing the customizability of machine learning pipelines, enabling the use of ML.NET in x86 apps, and more.

This blog post provides details about the following topics in the ML.NET 0.7 release:

Enhanced support for recommendation tasks with Matrix Factorization

Recommendation icon

Recommender systems enable producing a list of recommendations for products in a catalog, songs, movies, and more. We have improved support for creating recommender systems in ML.NET by adding Matrix factorization (MF), a common approach to recommendations when you have data on how users rated items in your catalog. For example, you might know how users rated some movies and want to recommend which other movies they are likely to watch next.

We added MF to ML.NET because it is often significantly faster than Field-Aware Factorization Machines (FFM, which we added in ML.NET 0.3) and it supports continuous ratings (e.g. 1-5 stars) instead of boolean values (“liked” or “didn’t like”). Even though we just added MF, you might still want to use FFM if you want to take advantage of information beyond the rating a user assigns to an item (e.g. movie genre, movie release date, user profile). A more in-depth discussion of the differences can be found here.

Sample usage of MF can be found here. The example is general but you can imagine that the matrix rows correspond to users, matrix columns correspond to movies, and matrix values correspond to ratings. This matrix would be quite sparse as users have only rated a small subset of the catalog.

ML.NET’s MF uses LIBMF.
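
To make this more concrete, here is a rough sketch of a matrix factorization pipeline. It is written against the current ML.NET API surface (MLContext, the Recommendation catalog, and the MatrixFactorization trainer from the Microsoft.ML.Recommender package); the exact type and method names in the 0.7 preview differed slightly, and the file name and column names below are placeholders:

using Microsoft.ML;
using Microsoft.ML.Data;

public class MovieRating
{
    [LoadColumn(0)] public float UserId { get; set; }
    [LoadColumn(1)] public float MovieId { get; set; }
    [LoadColumn(2)] public float Label { get; set; }   // the rating, e.g. 1-5 stars
}

public static class RecommendationSketch
{
    public static ITransformer Train()
    {
        var mlContext = new MLContext();

        // "ratings.csv" is a placeholder file with one userId,movieId,rating triple per line.
        var data = mlContext.Data.LoadFromTextFile<MovieRating>(
            "ratings.csv", hasHeader: true, separatorChar: ',');

        // Matrix factorization expects key-typed row/column indices,
        // so map the raw ids to keys first.
        var pipeline = mlContext.Transforms.Conversion.MapValueToKey("UserIdKey", nameof(MovieRating.UserId))
            .Append(mlContext.Transforms.Conversion.MapValueToKey("MovieIdKey", nameof(MovieRating.MovieId)))
            .Append(mlContext.Recommendation().Trainers.MatrixFactorization(
                labelColumnName: nameof(MovieRating.Label),
                matrixColumnIndexColumnName: "UserIdKey",
                matrixRowIndexColumnName: "MovieIdKey"));

        return pipeline.Fit(data);
    }
}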

Enabled anomaly detection scenarios – detecting unusual events


Anomaly detection icon

Anomaly detection enables identifying unusual values or events. It is used in scenarios such as fraud detection (identifying suspicious credit card transactions) and server monitoring (identifying unusual activity).

ML.NET 0.7 enables detecting two types of anomalous behavior:

  • Spike detection: spikes are attributed to sudden yet temporary bursts in values of the input data. These could be outliers due to outages, cyber-attacks, viral web content, etc.
  • Change point detection: change points mark the beginning of more persistent deviations in the behavior of the data. For example, if product sales are relatively consistent and then the product becomes more popular (monthly sales double), there is a change point where the trend changes.

These anomalies can be detected on two types of data using different ML.NET components:

    • IidSpikeDetector and IidChangePointDetector are used on data assumed to come from one stationary distribution (each data point is independent of previous data, such as the number of retweets of each tweet).
    • SsaSpikeDetector and SsaChangePointDetector are used on data that has seasonal/trend components (perhaps ordered by time, such as product sales).

Sample code using anomaly detection with ML.NET can be found here.
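
As a rough illustration of the spike detection flavor, the sketch below uses the current ML.NET time-series API (DetectIidSpike from the Microsoft.ML.TimeSeries package); in the 0.7 preview the same component surfaced as IidSpikeDetector, so treat the exact names as assumptions, and the SalesPoint data class is made up for the example:

using System.Collections.Generic;
using Microsoft.ML;
using Microsoft.ML.Data;

public class SalesPoint
{
    public float Sales { get; set; }
}

public class SpikePrediction
{
    // Each row is [alert, raw score, p-value]; alert == 1 marks a detected spike.
    [VectorType(3)] public double[] Prediction { get; set; }
}

public static class SpikeDetectionSketch
{
    public static IEnumerable<SpikePrediction> Detect(IEnumerable<SalesPoint> history)
    {
        var mlContext = new MLContext();
        var data = mlContext.Data.LoadFromEnumerable(history);

        // Flag values that are unlikely given the recent history of the series.
        var pipeline = mlContext.Transforms.DetectIidSpike(
            outputColumnName: nameof(SpikePrediction.Prediction),
            inputColumnName: nameof(SalesPoint.Sales),
            confidence: 95,
            pvalueHistoryLength: 30);

        var transformed = pipeline.Fit(data).Transform(data);
        return mlContext.Data.CreateEnumerable<SpikePrediction>(transformed, reuseRowObject: false);
    }
}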

Improved customizability of ML.NET pipelines

Pipeline icon

ML.NET offers a variety of data transformations (e.g. processing text, images, categorical features, etc.). However, some use cases require application-specific transformations, such as calculating cosine similarity between two text columns. We have now added support for custom transforms so you can easily include custom business logic.

The CustomMappingEstimator allows you to write your own methods to process data and bring them into the ML.NET pipeline. Here is what it would look like in the pipeline:

var estimator = mlContext.Transforms.CustomMapping<MyInput, MyOutput>(MyLambda.MyAction, "MyLambda")
    .Append(...)
    .Append(...)

Below is the definition of what this custom mapping will do. In this example, we convert the text label (“spam” or “ham”) to a boolean label (true or false).

public class MyInput
{
    public string Label { get; set; }
}

public class MyOutput
{
    public bool Label { get; set; }
}

public class MyLambda
{
    [Export("MyLambda")]
    public ITransformer MyTransformer => ML.Transforms.CustomMappingTransformer<MyInput, MyOutput>(MyAction, "MyLambda");

    [Import]
    public MLContext ML { get; set; }

    public static void MyAction(MyInput input, MyOutput output)
    {
        output.Label = input.Label == "spam";
    }
}

A more complete example of the CustomMappingEstimator can be found here.

x86 support in addition to x64

Pipeline icon

With this release of ML.NET you can now train and use machine learning models on x86 / 32-bit architecture devices. Previously, ML.NET was limited to x64 devices.
Note that some components that are based on external dependencies (e.g. TensorFlow) will not be available in x86.

NimbusML – experimental Python bindings for ML.NET

Python logo

NimbusML provides experimental Python bindings for ML.NET. We have seen feedback from the external community and internal teams regarding the use of multiple programming languages. We wanted to enable as many people as possible to benefit from ML.NET and help teams work together more easily. NimbusML not only enables data scientists to train and use machine learning models in Python (with components that can also be used in scikit-learn pipelines), but it also enables saving models which can then easily be used in .NET applications through ML.NET (see here for more details).

In case you missed it: provide your feedback on the new API

ML.NET 0.6 introduced a new set of APIs for ML.NET that provide enhanced flexibility. These APIs are still evolving in 0.7 and upcoming versions, and we would love to get your feedback so you can help shape the long-term API for ML.NET.

Want to get involved? Start by providing feedback through issues at the ML.NET GitHub repo!

Additional resources

  • The most important ML.NET concepts for understanding the new API are introduced here.
  • A cookbook (How to guides) that shows how to use these APIs for a variety of existing and new scenarios can be found here.
  • A ML.NET API Reference with all the documented APIs can be found here.

Get started!

Get started icon

If you haven’t already, get started with ML.NET here. Next, explore some other great resources:

We look forward to your feedback and welcome you to file issues with any suggestions or enhancements in the ML.NET GitHub repo.

This blog was authored by Gal Oshri and Cesar de la Torre

Thanks,

The ML.NET Team

Python in Visual Studio Code – October 2018 Release

We are pleased to announce that the October 2018 release of the Python Extension for Visual Studio Code is now available. You can download the Python extension from the Marketplace, or install it directly from the extension gallery in Visual Studio Code. You can learn more about Python support in Visual Studio Code in the documentation.

In this release we have closed a total of 49 issues, including:

  • Jupyter support: import notebooks and run code cells in a Python Interactive window
  • Use new virtual environments without having to restart Visual Studio Code
  • Code completions in the debug console window
  • Improved completions in the language server, including recognition of namedtuple and generic types

Keep on reading to learn more!

Jupyter Support with the Python Interactive window

The extension now contains new editor-centric interactive programming capabilities built on top of Jupyter. To get started, make sure Jupyter is installed in your environment (e.g. set your environment to Anaconda) and type #%% into a Python file to define a cell. You will notice that a “Run Cell” code lens appears above the #%% line:

Clicking Run Cell will open the Python Interactive window to the right and run your code. You can define further cells or press Shift+Enter to run the current cell and automatically create a new one (or advance to the next cell).

Separately, if you were to open a Jupyter Notebook file (.ipynb), you will now be prompted to import the notebook as Python code:

Cells in the Jupyter Notebook will be converted to cells in a Python file by adding #%% lines. You can run the cells to view the notebook output in Visual Studio code, including plots:

Check out our blog post Data Science with Python in Visual Studio Code for a deep dive of the new capabilities!

Completions in the Debug Console

When stopped at a breakpoint and typing expressions into the debug console, you now get completions appearing:

Completions are based on variables available at runtime in the current scope.

Automatic detection of new virtual environments

The Python extension will now detect new virtual environments that are created in your workspace root while Visual Studio Code is running. You can create virtual environments from the terminal, and they are immediately available to select by clicking on the interpreter selector in the status bar, or by using the Python: Select Interpreter command.

In the screenshot above, a new virtual environment named “env” was created in the terminal and then set as the active environment, as indicated by 'env' in the status bar. Previously you had to reload Visual Studio Code for the new environment to be available.

Other Changes and Enhancements

We have also added small enhancements and fixed issues requested by users that should improve your experience working with Python in Visual Studio Code. The full list of improvements is listed in our changelog; some notable changes include:

Be sure to download the Python extension for Visual Studio Code now to try out the above improvements. If you run into any problems, please file an issue on the Python VS Code GitHub page.

Data Science with Python in Visual Studio Code

This post was written by Rong Lu, a Principal Program Manager working on Data Science tools for Visual Studio Code

Today we’re very excited to announce the availability of Data Science features in the Python extension for Visual Studio Code! With the addition of these features, you can now work with data interactively in Visual Studio Code, whether it is for exploring data or for incorporating machine learning models into applications, making Visual Studio Code an exciting new option for those who prefer an editor for data science tasks.

These features are currently shipping as experimental. We’re starting our Visual Studio Code investments in the data science space with two main use cases in mind:

  • Exploring data and experimenting with ideas in Visual Studio Code. Just like how you would use Jupyter Notebooks to explore data, with Visual Studio Code you can accomplish the same but using a familiar editor with your favorite settings. You can define and run individual cells using the IPython kernel, visualize data frames, interact with plots, restart kernels, and export to Jupyter Notebooks.
  • Import Jupyter Notebooks into Python code. When it comes time to turn experimentation into reproducible, production-ready Python code, Visual Studio Code can make that transition very easy. Run the “Import Jupyter Notebook” command in the editor and code will be extracted into a Python file, then all the rich features that make you productive are at your fingertips – including AI-powered IntelliSense (IntelliCode), integrated debugger, Visual Studio Live Share, refactoring, multi-file management, and Git source control.

Now, let’s take a closer look at how Visual Studio Code works in these two scenarios.

Exploring data and experimenting with ideas in Visual Studio Code

Above is an example of a Python file that simply loads data from a csv file and generates a plot that outlines the correlation between data columns. With the new Data Science features, now you can visually inspect code results, including data frames and interactive plots.

A few things to note:

  1. Just as you organize Jupyter Notebooks using cells, you can define code cells in your Python code by using “#%%” and Markdown cells by using “#%% [markdown]”.
  2. “Run cell” links light up once “#%%” markers are detected. The first time the “Run cell” link is clicked (or Shift+Enter is pressed), the “Python Interactive” window opens on the right and a Jupyter server starts in the background. Code in the cells is then sent to the Jupyter server to execute, and results are rendered in the window.
  3. Using the top toolbar in the Python Interactive window, you can clear results, restart the IPython kernel, and export results to a Jupyter Notebook.
  4. You can navigate back to the source code by clicking the “Go To Code” button in each cell.

Import Jupyter Notebooks into Python code

If you have existing Jupyter Notebooks that are ready to be turned into production-ready Python modules, simply bring them into Visual Studio Code by running the command “Python: Import Jupyter Notebook”: this will extract Python code as well as Markdown blocks from the notebook, and put everything into a Python file.

Here is an example of a Jupyter Notebook and the generated Python file. Each code cell becomes a code section with annotation “#%%”, and each Markdown cell turns into a comment section with “#%% [markdown]” annotation. Both cell types are runnable in Visual Studio Code, which means you can reproduce the exact same results that you would see in a Jupyter Notebook.

Try it out today

We’re rolling out these features as experimental in the latest Python extension for Visual Studio Code that shipped today. Please give it a try and let us know what you think by taking a 2-minute survey to help shape the features for your needs.

A quick side note: this is an evolution of the Visual Studio Code Neuron extension that we worked on with students from Imperial College London this summer. The Neuron extension received a lot of positive feedback recently, and now we’re taking their awesome work further by building more of these capabilities into the Python extension.

Have fun playing with data in Visual Studio Code! 😊

T-mobile uses R for Customer Service AI

T-Mobile, the global telecommunications company, is using R in production to automatically classify text messages sent to customer service and route them to an agent who can help. The AI@T-Mobile team used the keras library in R to build a natural language processing engine with TensorFlow, and deployed it to production as a Docker container. The MRAN Time Machine ensures the container gets fixed R package versions for reproducibility.

While you may think first of Python for building production AI models, the keras library in R has already proven to be a first-class interface for building AI models. The T-Mobile team was able to build this system in just four months and on a small budget because they were able to explore in R and immediately deploy in R:

"Despite being an incredibly popular language for exploratory analysis, data scientists are repeatedly told that R is not sufficient for machine learning – especially if those ML models are destined for production. Though much exploratory analysis and modeling is done in R, these models must be rebuilt in Python to meet DevOps and enterprise requirements. Our team doesn’t see the value in this double work. We explore in R and immediately deploy in R. Our APIs are neural network models using R and TensorFlow in docker containers that are small and maintainable enough to make our DevOps team happy!"

Ai-at-tmobile

If you'd like to try this out yourself, the T-Mobile team has published an open source version of their message-classification container on GitHub, and you can read the details in the blog post by the team leads at the link below.

T-Mobile blog: Enterprise Web Services with Neural Networks Using R and TensorFlow


Top Stories from the Microsoft DevOps Community – 2018.11.09

Woo! It’s Friday, so as you’re getting ready for the weekend, maybe you should get caught up on some of the news from around the Azure DevOps community.

Source Control for Data Science – using Azure DevOps / VSTS with Jupyter Notebooks: Everybody needs good version control practices, especially data scientists. Lee Stott shows some... Read More

Because it’s Friday: Do the Robot

The engineers at Boston Dynamics are at it again. Here's one of the dog robots, dancing (and moonwalking!) to Uptown Funk (play with sound for best effect):

I assume this robot was programmed directly, but it won't be long before we're able to observe a professional dancer and automatically replicate their moves with a robot. It's already possible digitally: in the video below, the dancer on the right is synthetically animated based on a real dancer's moves:

That's all from the blog for this week. We'll be back next week: have a great weekend!

When should you right click publish

Some people say ‘friends don’t let friends right click publish’ but is that true? If they mean that there are great benefits to setting up a CI/CD workflow, that’s true and we will talk more about these benefits in just a minute. First, let’s remind ourselves that the goal isn’t always coming up with the best long-term solution.

Technology moves fast, and as developers we are constantly learning and experimenting with new languages, frameworks, and platforms. Sometimes we just need to prototype something quickly in order to evaluate its capabilities. That’s a classic scenario where right click publish in Visual Studio provides the right balance between how much time you are going to spend (just a few seconds) and the options that become available to you (quite a few, depending on the project type), such as publish to IIS, FTP & Folder (great for xcopy deployments and integration with other tools).

Continuing with the theme of prototyping and experimenting, right click publish is the perfect way for existing Visual Studio customers to evaluate Azure App Service (PaaS). By following the right click publish flow you get the opportunity to provision new instances in Azure and publish your application to them without leaving Visual Studio:

When the right click publish flow has been completed, you immediately have a working application running in the cloud:

Platform evaluations and experiments take time and during that time, right click publish helps you focus on the things that matter. When you are ready and the demand rises for automation, repeatability and traceability that’s when investing into a CI/CD workflow starts making a lot of sense:

  • Automation: builds are kicked off and tests are executed as soon as you check in your code
  • Repeatability: it’s impossible to produce binaries without having the source code checked in
  • Traceability: each build can be traced back to a specific version of the codebase in source control which can then be compared with another build and figure out the differences

The right time to adopt CI/CD typically coincides with a maturity milestone, either for the application or for the team that is building it. If you are the only developer working on your application you may feel that setting up CI/CD is overkill, but automation and traceability can be extremely valuable even to a single developer once you start shipping to your customers and have to support multiple versions in production.

With a CI/CD workflow you are guaranteed that all binaries produced by a build can be linked back to the matching version of the source code. You can go from a customer bug report to looking at the matching source code easily, quickly and with certainty. In addition, the automation aspects of CI/CD save you valuable time performing common tasks like running tests and deploying to testing and pre-production environments, lowering the overhead of good practices that ensure high quality.

As always, we want to see you successful, so if you run into any issues using publish in Visual Studio or setting up your CI/CD workload, let me know in the comment section below and I’ll do my best to get your question answered.

Terminus and FluentTerminal are the start of a world of 3rd party OSS console replacements for Windows

Folks have been trying to fix (or supercharge) the console/command line on Windows since Day One. There are a ton of open source projects over the years that try to take over or improve on "conhost.exe" (the thing that handles consoles like Bash/PowerShell/cmd on Windows). Most of these 3rd party consoles have weird or subtle issues. For example, I like Hyper as a terminal but it doesn't support Ctrl-C at the command line. I use that hotkey often enough that this small bug means I just won't use that console at all.

Per the CommandLine blog:

One of those weaknesses is that Windows tries to be "helpful" but gets in the way of alternative and 3rd party Console developers, service developers, etc. When building a Console or service, developers need to be able to access/supply the communication pipes through which their Terminal/service communicates with command-line applications. In the *NIX world, this isn't a problem because *NIX provides a "Pseudo Terminal" (PTY) infrastructure which makes it easy to build the communication plumbing for a Console or service, but Windows does not...until now!

Looks like the Windows Console team is working on making 3rd party consoles better by creating this new PTY mechanism:

We've heard from many, many developers, who've frequently requested a PTY-like mechanism in Windows - especially those who created and/or work on ConEmu/Cmder, Console2/ConsoleZ, Hyper, VSCode, Visual Studio, WSL, Docker, and OpenSSH.

Very cool! Until it's ready I'm going to continue to try out new consoles. A lot of people will tell you to use the cmder package that includes ConEmu. There's a whole world of 3rd party consoles to explore. Even more fun are the choices of color schemes and fonts to explore.

For a while I was really excited about Hyper. Hyper is - wait for it - an Electron app that uses HTML/CSS for rendering the console. This is a pretty heavyweight approach, which means you're looking at 200+ megs of memory for a console rather than 5 megs or so for something native. However, it is a clever way to just punt and let a browser renderer handle all the complex font management. For web folks it's also totally extensible and skinnable.

As much as I like Hyper and its look, the inability to support hitting "Ctrl-C" at the command line is just too annoying. It appears it's a very well-understood issue that will ultimately be solved by the ConPTY work as the underlying issue is a deficiency in the node-pty library. It's also a long-running issue in the VS Code console support. You can watch the good work that's starting in this node-pty PR that will fix a lot of issues for node-based consoles.

Until this all fixes itself, I'm personally excited (and using) these two terminals for Windows that you may not have heard of.

Terminus

Terminus is open source over at https://github.com/Eugeny/terminus and works on any OS. It's immediately gorgeous, and while it's in alpha, it's very polished. Be sure to explore the settings and adjust things like Blur/Fluent, Themes, opacity, and fonts. I'm using FiraCode Retina with Ligatures for my console and it's lovely. You'll have to turn ligature support on explicitly under Settings | Appearance.

Terminus is a lovely console replacement

Terminus also has some nice plugins. I've added Altair, Clickable-Links, and Shell-Selector to my loadout. The shell selector makes it easy on Windows 10 to have PowerShell, Cmd, and Ubuntu/Bash open all at the same time in multiple tabs.

I did do a little editing of the default config file to set up Ctrl-T for new tab and Ctrl-W for close-tab for my personal taste.

FluentTerminal

FluentTerminal is a Terminal Emulator based on UWP. Its memory usage on my machine is about 1/3 of Terminus and under 100 megs. As a Windows 10 UWP app it looks and feels very native. It supports ALT-ENTER Fullscreen, and tabs for as many consoles as you'd like. You can right-click and color specific tabs which was a nice surprise and turned out to be useful for on-the-fly categorization.

image

FluentTerminal has a nice themes setup and includes a half-dozen to start, plus supports imports.

It's not yet in the Windows Store (perhaps because it's in active development) but you can easily download a release and install it with a PowerShell install.ps1 script.

I have found the default Keybindings very intuitive with the usual Ctrl-T and Ctrl-W tab managers already set up, as well as Shift-Ctrl-T for opening a new tab for a specific shell profile (cmd, powershell, wsl, etc).

Both of these are great new entries in the 3rd party terminal space and I'd encourage you to try them both out and perhaps get involved on their respective GitHubs! It's a great time to be doing console work on Windows 10!


Sponsor: Check out the latest JetBrains Rider with built-in spell checking, enhanced debugger, Docker support, full C# 7.3 support, publishing to IIS and more advanced Unity support.



© 2018 Scott Hanselman. All rights reserved.
     

Start asking your questions for the Azure Monitor AMA now!

The Azure Monitor team is hosting a special Ask Me Anything session on Twitter, Thursday, November 15, 2018 from 8:30 AM to 10:30 AM Pacific Time. You can tweet to @AzureMonitor or @AzureSupport with #AzureMonitorAMA.

What's an AMA session?

AMA stands for Ask Me Anything – this is an opportunity for Twitter users to send questions directly to the Azure Monitor Engineering team. We will be on standby to answer any questions you have about our products, services, or even our team!

Why are you doing an AMA?

We want to hear more from our customers and the community. We want to know how you use Azure, Azure Monitor, Application Insights, and Log Analytics, and how your experience has been. Your questions provide insights into how we can make the service better.

How do I ask questions on Twitter?

You can ask us your questions by putting “#AzureMonitorAMA” in your tweet. Your question can span multiple tweets by replying to the first tweet you post with this hashtag. You can also directly message @AzureMonitor or @AzureSupport if you want to keep your questions private. You can start posting your questions before the scheduled time of the AMA, but we will start answering only during the event window. This is to help customers who are in different time zones and who can’t attend the event during this specific hour of the day. You can catch us for a live conversation during the scheduled hours, and if there are further follow-ups, we will continue the dialogue after the event. Go ahead and tweet us!

Who will be there?

Program Managers and Developers from the Azure Monitor team will be participating. Have any questions about the following topics? Bring them to the AMA.

  • Application Insights
  • Log Analytics
  • Azure Data Explorer
  • Azure Monitor Alerts
  • Application performance monitoring
  • Infrastructure monitoring - VMs
  • Monitoring containers and Kubernetes
  • Monitoring Service Fabric
  • Anything monitoring related!

What kinds of questions can I ask?

You really can ask anything you’d like, but here’s a list of question ideas to get you started:

  • What’s the difference between events, logs, and metrics?
  • Can I use Azure Monitor with my applications running on-premises and on other clouds?
  • How does Azure Monitor fit in to my existing DevOps toolchain?
  • Why is monitoring important for a DevOps organization?
  • How can I try Azure Monitor for free?

We're looking forward to having a conversation with you!
