
std::string_view: The Duct Tape of String Types


Visual Studio 2017 contains support for std::string_view, a type added in C++17 to serve some of the roles previously served by const char * and const std::string& parameters. string_view is neither a “better const std::string&” nor a “better const char *”; it is neither a superset nor a subset of either. std::string_view is intended to be a kind of universal “glue” — a type describing the minimum common interface necessary to read string data. It doesn’t require that the data be null-terminated, and doesn’t place any restrictions on the data’s lifetime. This gives you type erasure for “free”, as a function accepting a string_view can be made to work with any string-like type, without making the function into a template, or constraining the interface of that function to a particular subset of string types.

tl;dr

string_view solves the “every platform and library has its own string type” problem for parameters. It can bind to any sequence of characters, so you can just write your function as accepting a string view:

void f(wstring_view); // string_view that uses wchar_t's

and call it without caring what string-like type the calling code is using (for (char*, length) argument pairs, just add {} around them):

// pass a std::wstring:
std::wstring s = /* ... */;       f(s);

// pass a C-style null-terminated string (string_view is not null-terminated):
const wchar_t* ns = L"";          f(ns);

// pass a C-style character array of len characters (excluding null terminator):
const wchar_t* cs = /* ... */; size_t len = /* ... */; f({cs, len});

// pass a WinRT string
winrt::hstring hs;       f(hs);

f is just an ordinary function; it doesn’t have to be a template.

string_view as a Generic String Parameter

Today, the most common “lowest common denominator” used to pass string data around is the null-terminated string (or as the standard calls it, the Null-Terminated Character Type Sequence). This has been with us since long before C++, and provides clean “flat C” interoperability. However, char* and its support library are associated with exploitable code, because length information is an in-band property of the data and susceptible to tampering. Moreover, the null used to delimit the length prohibits embedded nulls and causes one of the most common string operations, asking for the length, to be linear in the length of the string.
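
As a quick sketch (illustrative, not from the original article), the difference is easy to see with a buffer containing an embedded null:

#include <cstring>
#include <string_view>

int main() {
    // A hypothetical buffer with an embedded null:
    const char buf[] = {'c', 'a', 't', '\0', 'd', 'o', 'g'};
    const size_t cLen = std::strlen(buf);        // linear scan; stops at the null: 3
    // string_view stores the length out-of-band: size() is O(1), and the
    // embedded null is just data:
    const std::string_view sv{buf, sizeof(buf)}; // length 7
    return static_cast<int>(sv.size() - cLen);   // 4
}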

Sometimes const std::string& can be used to pass string data and erase the source, because it accepts std::string objects, const char * pointers, and string literals like “meow”. Unfortunately, const std::string& creates “impedance mismatches” when interacting with code that uses other string types. If you want to talk to COM, you need to use BSTR. If you want to talk to WinRT, you need HSTRING. For NT, UNICODE_STRING, and so on. Each programming domain makes up its own string type, lifetime semantics, and interface, but a lot of text processing code out there doesn’t care about that. Allocating entire copies of the data to process, just to make differing string types happy, is suboptimal for performance and reliability.

Example: A Function Accepting std::wstring and winrt::hstring

Consider the following program. It has a library function compiled in a separate .cpp, which doesn’t handle all string types explicitly but still works with any string type.

// library.cpp
#include <stddef.h>
#include <string_view>
#include <algorithm>

size_t count_letter_Rs(std::wstring_view sv) noexcept {
    return std::count(sv.begin(), sv.end(), L'R');
}
// program.cpp
// compile with: cl /std:c++17 /EHsc /W4 /WX
//    /I"%WindowsSdkDir%Include\%UCRTVersion%\cppwinrt" .\program.cpp .\library.cpp
#include <stddef.h>
#include <string.h>
#include <iostream>
#include <stdexcept>
#include <string>
#include <string_view>

#pragma comment(lib, "windowsapp")
#include <winrt/base.h>

// Library function; the caller doesn't need to know the implementation
size_t count_letter_Rs(std::wstring_view) noexcept;

int main() {
    std::wstring exampleWString(L"Hello wstring world!");
    exampleWString.push_back(L'\0');
    exampleWString.append(L"ARRRR embedded nulls");
    winrt::hstring exampleHString(L"Hello HSTRING world!");

    // Performance and reliability are improved vs. passing std::wstring, as
    // the following conversions don't allocate and can't fail:
    static_assert(noexcept(std::wstring_view{exampleWString}));
    static_assert(noexcept(std::wstring_view{exampleHString}));

    std::wcout << L"Rs in " << exampleWString
        << L": " << count_letter_Rs(exampleWString) << L"n";

    // note hstring -> wstring_view implicit conversion when calling
    // count_letter_Rs
    std::wcout << L"Rs in " << std::wstring_view{exampleHString}
        << L": " << count_letter_Rs(exampleHString) << L"\n";
}

Output:

>.\program.exe
Rs in Hello wstring world! ARRRR embedded nulls: 4
Rs in Hello HSTRING world!: 1

The preceding example demonstrates a number of desirable properties of string_view (or wstring_view in this case):

  • vs. making count_letter_Rs some kind of template: Compile time and code size are reduced, because only one instance of count_letter_Rs need be compiled. The interfaces of the string types in use need not be uniform, allowing types like winrt::hstring, MFC CString, or QString to work, as long as a suitable conversion function is added to the string type.
  • vs. const char *: By accepting string_view, count_letter_Rs need not do a strlen or wcslen on the input. Embedded nulls work without problems, and there’s no chance of in-band null manipulation errors introducing bugs.
  • vs. const std::string&: As described in the comment above, string_view avoids a separate allocation and potential failure mode, because it passes a pointer to the string’s data, rather than making an entire owned copy of that data.

string_view For Parsers

Another place where non-allocating, non-owning string pieces exposed as string_view can be useful is in parsing applications. For example, the C++17 std::filesystem::path implementation that comes with Visual C++ uses std::wstring_view internally when parsing and decomposing paths. The resulting string_views can be returned directly from functions like std::filesystem::path::filename() when a copy is needed, while functions like std::filesystem::path::has_filename(), which don’t actually need to make copies, are natural to write.

inline wstring_view parse_filename(const wstring_view text)
	{	// attempt to parse text as a path and return the filename if it exists; otherwise,
		// an empty view
	const auto first = text.data();
	const auto last = first + text.size();
	const auto filename = find_filename(first, last); // algorithm defined elsewhere
	return wstring_view(filename, last - filename);
	}

class path
	{
public:
	// [...]
	path filename() const
		{	// parse the filename from *this and return a copy if present; otherwise,
			// return the empty path
		return parse_filename(native());
		}
	bool has_filename() const noexcept
		{	// parse the filename from *this and return whether it exists
		return !parse_filename(native()).empty();
		}
	// [...]
	};

In the std::experimental::filesystem implementation written before string_view, path::filename() contains the parsing logic, and returns a std::experimental::filesystem::path. has_filename is implemented in terms of filename, as depicted in the standard, allocating a path to immediately throw it away.

Iterator Debugging Support

In debugging builds, MSVC’s string_view implementation is instrumented to detect many kinds of buffer management errors. The valid input range is stamped into string_view’s iterators when they are constructed, and unsafe iterator operations are blocked with a message describing what the problem was.

// compile with cl /EHsc /W4 /WX /std:c++17 /MDd .\program.cpp
#include <crtdbg.h>
#include <string_view>

int main() {
    // The next 3 lines cause assertion failures to go to stdout instead of popping a dialog:
    _set_abort_behavior(0, _WRITE_ABORT_MSG);
    _CrtSetReportMode(_CRT_ASSERT, _CRTDBG_MODE_FILE);
    _CrtSetReportFile(_CRT_ASSERT, _CRTDBG_FILE_STDOUT);

    // Do something bad with a string_view iterator:
    std::string_view test_me("hello world");
    (void)(test_me.begin() + 100); // dies
}
>cl /nologo /MDd /EHsc /W4 /WX /std:c++17 .\test.cpp
test.cpp

>.\test.exe
xstring(439) : Assertion failed: cannot seek string_view iterator after end

Now, this example might seem a bit obvious, because we’re clearly incrementing the iterator further than the input allows, but catching mistakes like this can make debugging much easier in something more complex. For example, a function expecting to move an iterator to the next ‘)’:

// compile with cl /EHsc /W4 /WX /std:c++17 /MDd .\program.cpp
#include <crtdbg.h>
#include <string_view>

using std::string_view;

string_view::iterator find_end_paren(string_view::iterator it) noexcept {
    while (*it != ')') {
        ++it;
    }

    return it;
}

int main() {
    _set_abort_behavior(0, _WRITE_ABORT_MSG);
    _CrtSetReportMode(_CRT_ASSERT, _CRTDBG_MODE_FILE);
    _CrtSetReportFile(_CRT_ASSERT, _CRTDBG_FILE_STDOUT);
    string_view example{"malformed input"};
    const auto result = find_end_paren(example.begin());
    (void)result;
}
>cl /nologo /EHsc /W4 /WX /std:c++17 /MDd .\program.cpp
program.cpp

>.\program.exe
xstring(358) : Assertion failed: cannot dereference end string_view iterator

Pitfall #1: std::string_view doesn’t own its data, or extend lifetime

Because string_view doesn’t own its actual buffer, it’s easy to write code that assumes data will live a long time. An easy way to demonstrate this problem is to have a string_view data member. For example, a struct like the following is dangerous:

struct X {
    std::string_view sv; // Danger!
    explicit X(std::string_view sv_) : sv(sv_) {}
};

because a caller can expect to do something like:

int main() {
    std::string hello{"hello"};
    X example{hello + " world"}; // forms string_view to string destroyed at the semicolon
    putchar(example.sv[0]); // undefined behavior
}

In this example, the expression `hello + " world"` creates a temporary std::string, which is converted to a std::string_view before the constructor of X is called. X stores a string_view to that temporary string, and that temporary string is destroyed at the end of the full expression constructing `example`. At this point, it would be no different if X had tried to store a const char * which was deallocated. X really wants to extend the lifetime of the string data here, so it must make an actual copy.
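
A sketch of a safe alternative: store an owning std::string instead, making the copy explicit (in C++17, std::string can be constructed directly from a string_view):

struct X {
    std::string s; // owns a copy of the data; no lifetime problem
    explicit X(std::string_view sv_) : s(sv_) {}
};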

There are of course conditions where a string_view member is fine; if you’re implementing a parser and are describing a data structure tied to the input, this may be OK, as std::regex does with std::sub_match. Just be aware that string_view’s lifetime semantics are more like that of a pointer.

Pitfall #2: Type Deduction and Implicit Conversions

Attempting to generalize functions to different character types by accepting basic_string_view instead of string_view or wstring_view prevents the intended use of implicit conversion. If we modify the program from earlier to accept a template instead of wstring_view, the example no longer works.

// program.cpp
// compile with: cl /std:c++17 /EHsc /W4 /WX
//    /I"%WindowsSdkDir%Include\%UCRTVersion%\cppwinrt" .\program.cpp
#include <stddef.h>
#include <string.h>
#include <algorithm>
#include <iostream>
#include <locale>
#include <stdexcept>
#include <string>
#include <string_view>

#pragma comment(lib, "windowsapp")
#include <winrt/base.h>

template<class Char>
size_t count_letter_Rs(std::basic_string_view<Char> sv) noexcept {
    return std::count(sv.begin(), sv.end(),
        std::use_facet<std::ctype<Char>>(std::locale()).widen('R'));
}

int main() {
    std::wstring exampleWString(L"Hello wstring world!");
    winrt::hstring exampleHString(L"Hello HSTRING world!");
    count_letter_Rs(exampleWString); // no longer compiles; can't deduce Char
    count_letter_Rs(std::wstring_view{exampleWString}); // OK
    count_letter_Rs(exampleHString); // also no longer compiles; can't deduce Char
    count_letter_Rs(std::wstring_view{exampleHString}); // OK
}

In this example, we want exampleWString to be implicitly converted to a basic_string_view<wchar_t>. However, for that to happen, template argument deduction must deduce Char == wchar_t, so that we get count_letter_Rs<wchar_t>. Template argument deduction runs before overload resolution or attempting to find conversion sequences, so it has no idea that basic_string is related to basic_string_view; type deduction fails, and the program does not compile. As a result, prefer accepting a specialization of basic_string_view like string_view or wstring_view, rather than a templatized basic_string_view, in your interfaces.
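
If you do need to serve multiple character types, one possible workaround (a sketch, not from the original post) is to keep the template as the shared implementation and add thin non-template overloads for the specializations you support; implicit conversions can bind to the overloads:

size_t count_letter_Rs(std::string_view sv) noexcept {
    return count_letter_Rs<char>(sv);
}

size_t count_letter_Rs(std::wstring_view sv) noexcept {
    return count_letter_Rs<wchar_t>(sv);
}

// count_letter_Rs(exampleWString); // OK again: binds to the wstring_view overload
// count_letter_Rs(exampleHString); // OK again, via hstring's conversion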

In Closing

We hope string_view will serve as an interoperability bridge to allow more C++ code to seamlessly communicate. We are always interested in your feedback. Should you encounter issues please let us know through Help > Report A Problem in the product, or via Developer Community. Let us know your suggestions through UserVoice. You can also find us on Twitter (@VisualC) and Facebook (msftvisualcpp).


How do you even know this crap?


This post won't be well organized, so lower your expectations first. When Rob Conery first wrote "The Imposter's Handbook" I was LOVING IT. It's a fantastic book written for imposters by an imposter. Remember, I'm the original phony.

Now he's working on The Imposter's Handbook: Season 2 and I'm helping. The book is currently in Presale and we're releasing PDFs every 2 to 3 weeks. Some of the ideas from the book will come from blog posts like or similar to this one. Since we are using Continuous Delivery and an Iterative Process to ship the book, some of the blog posts (like this one) won't be fully baked until they show up in the book (or not). See how I equivocated there? ;)

The next "Season" of The Imposter's Handbook is all about the flow of information. Information flowing through encoding, encryption, and transmission over a network. I'm also interested in the flow of information through one's brain as they move through the various phases of being a developer. Bear with me (and help me in the comments!).

I was recently on a call with two other developers, and it would be fair to say that we were of varied skill levels. We were doing some HTML and CSS work that I would say I'm competent at, but by no means an expert. Since our skill levels didn't fall on a single axis, we'd really need some Dungeons & Dragons cards to express our competencies.

D&D Cards from Battle Grip

I might be HTML 8, CSS 6, Computer Science 9, Obscure Trivia 11, for example.

We were asked to make a little banner with some text that could be later closed with some iconography that would represent close/dismiss/go away.

  • One engineer suggested "Here's some text + ICON.PNG"
  • The next offered a more scalable option with "Here's some text + ICON.SVG"

Both are fine ideas that would work, while perhaps later having DPI or maintenance issues, but truly, perfectly cromulent ideas.

I have never been given this task, I am not a designer, and I am a mediocre front-end person. I asked what they wanted it to look like and they said "maybe a square with an X or a circle with an X or a circle with a line."

I offered, "You know, there MUST be a Unicode Glyph for that. I mean, there's one for poop." Apparently I say poop in business meetings more than any other middle manager at the company, but that's fodder for another blog post.

We searched and lo and behold we found ☒ and ⊝ and simply added them to the end of the string. They scale visibly, require no downloads or extra dependencies, and can be colored and styled nicely because they are text.
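
To make that concrete, here's a sketch of the idea (not the actual banner we built); because the glyph is just text, it inherits color, styles with CSS, and scales with font-size:

<div class="banner">
  Here's some text
  <button class="dismiss" aria-label="Dismiss">&#x2612;</button> <!-- ☒ -->
</div>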

One of the engineers said "how do you even know this crap?" I smiled and shrugged and we moved on to the doing.

To be clear, this post isn't self-congratulatory. Perhaps you had the same idea. This interaction was all of 10 minutes long. But I'm interested in the HOW did I know this? Note that I didn't actually KNOW that these glyphs existed. I knew only that they SHOULD exist. They MUST.

How many times have you been coding and said "You know, there really must be a function/site/tool that does x/y/z?" All the time, right? You don't know the answers but you know someone must have AND must have solved it in a specific way such that you could find it. A new developer doesn't have this intuition - this sense of technical smell - yet.

How is technical gut and intuition and smell developed? Certainly by doing, by osmosis, by time, by sleeping, and waking, and doing it again.

I think it's exposure. It's exposure to a diverse set of technical problems that all build on a solid base of fundamentals.

Rob and I are going to try to expand on how this technical confidence gets developed in The Imposter's Handbook: Season 2 as topics like Logic, Binary and Logical Circuits, Compression and Encoding, Encryption and Cryptanalysis, and Networking and Protocols are discussed. But I want to also understand how/if/when these topics and examples excite the reader...and most importantly do they provide the reader with that missing Tetris Piece of Knowledge that moves you from a journeyperson developer to someone who can more confidently wear the label Computer Science 9, Obscure Trivia 11.


What do you think? Sound off in the comments and help me and Rob understand!


Sponsor: Preview the latest JetBrains Rider with its built-in spell checking, initial Blazor support, partial C# 7.3 support, enhanced debugger, C# Interactive, and a redesigned Solution Explorer.



© 2018 Scott Hanselman. All rights reserved.


Moving from Hosted XML Process to Inherited Process – Generally Available

Last month we announced a private preview that allows you to move projects that use a Hosted XML process to an Inherited process. We received a fair share of feedback, watched the telemetry, and made some fixes. This week we announced that the feature is now generally available! If you are using Hosted XML, and... Read More

Reduce your exposure to brute force attacks from the virtual machine blade


Attackers commonly target open ports on Internet-facing virtual machines (VMs), spanning from port scanning to brute force and DDoS attacks. In case of a successful brute force attack, an attacker can compromise your VM and establish a foothold into your environment. Once an attacker is in your environment, he can profit from the compute of that machine or use its network access to perform lateral attacks on other networks.

One way to reduce exposure to an attack is to limit the amount of time that a port on your virtual machine is open. Ports only need to be open for a limited amount of time for you to perform management or maintenance tasks. Just-In-Time VM Access helps you control the time that the ports on your virtual machines are open. It leverages network security group (NSG) rules to enforce a secure configuration and access pattern.

Today we are excited to announce the public preview of configuring Just-In-Time VM Access from the virtual machine blade to make it even easier for you to reduce your exposure to threats.

Just-In-Time VM access

In one simple click, a Just-In-Time VM access policy is applied to a VM. This will configure a policy that locks down the machine's RDP or SSH ports, depending on the OS of the respective VM. When an authorized user wants access to the ports for management or maintenance purposes, he or she can use Just-In-Time VM Access to request access to those ports for up to 3 hours. After 3 hours, the management ports will automatically be locked down to help reduce those ports' susceptibility to an attack.

While setting Just-in-Time VM Access is already available as a feature in Azure Security Center, we added it to the virtual machine experience to make it easier for you to protect your management ports from attacks while you are configuring other settings in the virtual machine blade.

To get started with Just-in-Time VM Access, you can start your free 60-day trial of Azure Security Center today. If you are currently using the Security Center Free tier, you can simply upgrade your subscription to the Standard Tier to take advantage of Just-In-Time VM Access.

Just-in-time access

To learn more about Just-in-Time VM Access, visit the documentation.

Respond to threats faster with Security Center’s Confidence Score


Azure Security Center provides you with visibility across all your resources running in Azure and alerts you of potential or detected issues. The volume of alerts can be challenging for a security operations team to address individually, so security analysts have to prioritize which alerts they want to investigate. Investigating alerts can be complex and time consuming, and as a result, some alerts are ignored.

Security Center can help your team triage and prioritize alerts with a new capability called Confidence Score. The Confidence Score automatically investigates alerts by applying industry best practices, intelligent algorithms, and processes used by analysts to determine whether a threat is legitimate and provides you with meaningful insights.

How is the Azure Security Center Confidence Score triggered?

Alerts are generated due to detected suspicious processes running on your virtual machines. Security Center reviews and analyzes these alerts on Windows virtual machines running in Azure. It performs automated checks and correlations using advanced algorithms across multiple entities and data sources across the organization and all your Azure resources.

Results of Azure Security Center Confidence Score

The Confidence Score ranges from 1 to 100 and represents the confidence that the alert should be investigated. The higher the score, the higher the confidence that this alert indicates true malicious activity. Additionally, the Confidence Score provides a list of the top reasons why the alert received its score. The Confidence Score makes it easier for the security analyst to prioritize his or her response to alerts and address the most pressing attacks first, ultimately reducing the amount of time it takes to respond to attacks and breaches.

You can find the Confidence Score in the Security alerts blade. The alerts and incidents are ordered based on Security Center’s confidence that they are legitimate threats. Here, you can see that the incident Suspicious screensaver process execution received a confidence score of 91.

confidence score list

When drilling down in the Security alert blade, in the Confidence section, you can view the observations that contributed to the confidence score and gain insights related to the alert. This enables you to get more insight into the nature of the activities that caused the alert.

Suspicious Screensaver process executed

Use Security Center’s Confidence Score to prioritize alert triage in your environment. The confidence score saves you time and effort by automatically investigating alerts, applying industry best practices and intelligent algorithms, and acting as a virtual analyst to determine which threats are real and where you need to focus your attention.

Create your Azure free account and take advantage of this capability and many more. Get started with Azure Security Center today.

LibMan CLI Released


The Command Line Interface (CLI) is now available for Microsoft Library Manager (LibMan) and can be downloaded via NuGet. Look for Microsoft.Web.LibraryManager.Cli.
The LibMan CLI is cross-platform, so you’ll be able to use it anywhere that .NET Core is supported (Windows, Mac, Linux).

Install the LibMan CLI

To install LibMan, type:

> dotnet tool install --global Microsoft.Web.LibraryManager.Cli

Once the LibMan CLI is installed, you can start using LibMan from the root of your web project (or any folder).

Using LibMan from the Command Line

When using LibMan CLI, begin commands with “libman” then follow with the action you wish to invoke.
For example, to install all files from the latest version of jquery, type:

> libman install jquery

Follow the prompts to select the provider and destination. Type the values you want, or press [Enter] to accept the defaults.

The install operation creates a libman.json file in the current directory if one does not already exist, then adds the new library configuration. It then downloads the files and places them in the destination folder. See the example below.

LibMan CLI example
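
For illustration, accepting the defaults for jquery might produce a libman.json similar to this sketch (the provider, version, and destination will reflect what you chose at the prompts):

{
  "version": "1.0",
  "defaultProvider": "cdnjs",
  "libraries": [
    {
      "library": "jquery@3.3.1",
      "destination": "wwwroot/lib/jquery"
    }
  ]
}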

To learn more about the LibMan CLI, refer to the LibMan CLI documentation on the Library Manager Wiki.

Happy coding!

Justin Clareburt, Senior Program Manager, Visual Studio

Justin Clareburt (justcla) Profile Pic Justin Clareburt is the Web Tools PM on the Visual Studio team. He has over 20 years of Software Engineering experience and brings to the team his expert knowledge of IDEs and a passion for creating the ultimate development experience.

Follow Justin on Twitter @justcla78

ASP.NET Core 2.2.0-preview1 now available


Today we’re very happy to announce that the first preview of the next minor release of ASP.NET Core and .NET Core is now available for you to try out. We’ve been working hard on this release over the past months, along with many folks from the community, and it’s now ready for a wider audience to try it out and provide the feedback that will continue to shape the release.

How do I get it?

You can download the new .NET Core SDK for 2.2.0-preview1 (which includes ASP.NET 2.2.0-preview1) from https://www.microsoft.com/net/download/dotnet-core/2.2.

Visual Studio requirements

Customers using Visual Studio should also install and use the Preview channel of Visual Studio 2017 (15.9 Preview 1 at the time of writing) in addition to the SDK when working with .NET Core 2.2 and ASP.NET Core 2.2 projects.

Azure App Service Requirements

If you are hosting your application on Azure App Service, you should follow these instructions to install the site extension for hosting your 2.2.0-preview1 applications.

Impact to machines

Please note that this is a preview release and there are likely to be known issues and as-yet-undiscovered bugs. While the .NET Core SDK and runtime installs are side-by-side, your default SDK will become the latest one. If you run into issues working on existing projects using earlier versions of .NET Core after installing the preview SDK, you can force specific projects to use an earlier installed version of the SDK using a global.json file as documented here. Please log an issue if you run into such cases, as SDK releases are intended to be backwards compatible.
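
For example, a global.json like the following pins the projects beneath it to an earlier SDK (the version shown is illustrative; use one you have installed):

{
  "sdk": {
    "version": "2.1.401"
  }
}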

What’s new in 2.2

We’re publishing a series of posts here that go over some of the new feature areas in detail. We’ll update the post with links to these posts as they go live over the coming days:

  • API Controller Conventions
  • Endpoint Routing
  • Health Checks
  • HTTP/2 in Kestrel
  • Improvements to IIS hosting
  • SignalR Java client

In addition to these feature areas, we’ve also:

  • Updated our web templates to Bootstrap 4 and Angular 6

For a detailed list of all features, bug fixes, and known issues refer to our release notes.

Migrating an ASP.NET Core 2.1 project to 2.2

To migrate an ASP.NET Core project from 2.1.x to 2.2.0-preview1, open the project’s .csproj file and change the value of the <TargetFramework> element to netcoreapp2.2. You do not need to do this if you’re targeting .NET Framework 4.x.
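
For example, the relevant part of the project file changes from:

<TargetFramework>netcoreapp2.1</TargetFramework>

to:

<TargetFramework>netcoreapp2.2</TargetFramework>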

Giving Feedback

The main purpose of providing previews is to solicit feedback so we can refine and improve the product in time for the final release. Please help provide us feedback by logging issues in the appropriate repository at https://github.com/aspnet or https://github.com/dotnet. We look forward to receiving your feedback!


Using gganimate to illustrate the luminance illusion


Many illusions are based on the fact that our perceptions of color or brightness of an object are highly dependent on the background surrounding the object. For example, in this image (an example of the Cornsweet illusion) the upper and lower blocks are exactly the same color, according to the pixels displayed on your screen.

Mind = blown. These two blocks are exactly the same shade of grey. Hold your finger over the seam and check. pic.twitter.com/OqAnforGqs

— David Smith (@revodavid) December 5, 2013

Here's another simpler representation of the principle, created by Colin Fay (in response to this video made with colored paper). In the animation below, the rectangle moving from left to right remains the same color throughout (a middling gray). But as the background around it changes, our perception of the color changes as well.

Lightness_illusion

Colin created this animation in R using the gganimate package (available on GitHub from author Thomas Lin Pedersen), and the process is delightfully simple. It begins with a chart of 10 "points", each being the same grey square equally spaced across the shaded background. Then, a simple command animates the transitions from one point to the next, and interpolates between them smoothly:

library(gganimate)
gg_animated <- gg + 
  transition_time(t) + 
  ease_aes('linear')
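
For context, here is a sketch of how the base chart gg might be built (assumed for illustration; see the linked post for Colin's actual code). Ten identical grey squares, one per time step t, sit over a dark-to-light background gradient:

library(ggplot2)

bg <- data.frame(x = seq(0, 11, length.out = 200))   # background gradient
df <- data.frame(t = 1:10, x = 1:10, y = 0.5)        # the "moving" square
gg <- ggplot() +
  geom_tile(data = bg, aes(x = x, y = 0.5, fill = x), height = 1) +
  scale_fill_gradient(low = "black", high = "white", guide = "none") +
  geom_point(data = df, aes(x = x, y = y),
             shape = 15, size = 12, colour = "grey60") +
  theme_void()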

You can find the complete R source code behind the animation at the blog post linked below, along with an interesting discussion of luminance and how you should consider it when choosing color scales for your data visualizations.

RTask: Remaking ‘Luminance-gradient-dependent lightness illusion’ with R

ASP.NET Core 2.2.0-preview1: HTTP/2 in Kestrel


As part of the 2.2.0-preview1 release, we’ve added support for HTTP/2 in Kestrel.

What is HTTP/2?

HTTP/2 is a major revision of the HTTP protocol. Some of the notable features of HTTP/2 are support for header compression and fully multiplexed streams over the same connection. While HTTP/2 preserves HTTP’s semantics (HTTP headers, methods, etc) it is a breaking change from HTTP/1.x on how this data is framed and sent over the wire.

As a consequence of this change in framing, servers and clients need to negotiate the protocol version used. While it is possible for the server and the client to have prior knowledge of the protocol, all major browsers support ALPN as the only way to establish an HTTP/2 connection.

Application-Layer Protocol Negotiation (ALPN)

Application-Layer Protocol Negotiation (ALPN) is a TLS extension that allows the server and client negotiate the protocol version used as part of their TLS handshake.

How do I use it?

In 2.2.0-preview1 of Kestrel, HTTP/2 is enabled by default (we may change this in subsequent releases). Since most browsers already support HTTP/2, any request you make will already happen over HTTP/2, provided certain conditions are met:

  • The request is made over an HTTPS connection.
  • The native crypto library used by .NET Core on your platform supports ALPN

In the event that either of these conditions is unmet, the server and client will transparently fall back to using HTTP/1.1.

The default binding in Kestrel advertises support for both HTTP/1.x and HTTP/2 via ALPN. You can always configure additional bindings via KestrelServerOptions. For example,

WebHost.CreateDefaultBuilder()
    .ConfigureKestrel(options =>
    {
        options.Listen(IPAddress.Any, 8080, listenOptions =>
        {
            listenOptions.Protocols = HttpProtocols.Http1AndHttp2;
            listenOptions.UseHttps("testcert.pfx", "testPassword");
        }); 
    })
    .UseStartup<Startup>();

If you do not enable HTTPS/TLS then Kestrel will be unable to use ALPN to negotiate HTTP/2 connections.

It is possible to establish an HTTP/2 connection in Kestrel using prior knowledge on all platforms (since we don’t rely on ALPN). However, no major browser supports prior-knowledge HTTP/2 connections. This approach does not allow for graceful fallback to HTTP/1.x.

WebHost.CreateDefaultBuilder()
    .ConfigureKestrel(options =>
    {
        options.Listen(IPAddress.Any, 8080, listenOptions =>
        {
            listenOptions.Protocols = HttpProtocols.Http2;
        }); 
    })
    .UseStartup<Startup>();

Caveats

As mentioned earlier, it is only possible to negotiate an HTTP/2 connection if the native crypto library on your server supports ALPN.

ALPN is supported on:

  • .NET Core on Windows 8.1/Windows Server 2012 R2 or higher
  • .NET Core on Linux with OpenSSL 1.0.2 or higher (e.g., Ubuntu 16.04)

ALPN is not supported on:

  • .NET Framework 4.x on Windows
  • .NET Core on Linux with OpenSSL older than 1.0.2
  • .NET Core on OS X

What’s missing in Kestrel’s HTTP/2?

  • Server Push: An HTTP/2-compliant server is allowed to send resources to a client before they have been requested by the client. This is a feature we’re currently evaluating, but haven’t planned to add support for yet.
  • Stream Prioritization: The HTTP/2 standard allows for clients to send a hint to the server to express preference for the priority of processing streams. Kestrel currently does not act upon hints sent by the client.
  • HTTP Trailers: Trailers are HTTP headers that can be sent after the message body in both HTTP requests and responses.

What’s coming next?

In ASP.NET Core 2.2,

  • Hardening work on HTTP/2 in Kestrel. As HTTP/2 allows multiplexed streams over the same TCP connection, we need to introduce HTTP/2 specific limits as part of the hardening.
  • Performance work on HTTP/2.

Feedback

The best place to provide feedback is by opening issues at https://github.com/aspnet/KestrelHttpServer/issues.

ASP.NET Core 2.2.0-preview1: Healthchecks


What is it?

We’re adding a health checks service and middleware in 2.2.0 to make it easy to use ASP.NET Core in environments that require health checks – such as Kubernetes. The new features are a set of libraries defining an IHealthCheck abstraction and service, as well as a middleware for use in ASP.NET Core.

Health checks are used by a container orchestrator or load balancer to quickly determine if a system is responding to requests normally. A container orchestrator might respond to a failing health check by halting a rolling deployment, or restarting a container. A load balancer might respond to a health check by routing traffic away from the failing instance of the service.

Typically health checks are exposed by an application as a simple HTTP endpoint used by monitoring systems. Creating a dedicated health endpoint allows you to specialize the behavior of that endpoint for the needs of monitoring systems.

How to use it?

Like many ASP.NET Core features, health checks come with a set of services and a middleware.

public void ConfigureServices(IServiceCollection services)
{
...

    services.AddHealthChecks(); // Registers health checks services
}

public void Configure(IApplicationBuilder app)
{
...

    app.UseHealthChecks("/healthz");

...
}

This basic configuration will register the health checks services and will create a middleware that responds to the URL path “/healthz” with a health response. By default no health checks are registered, so the app is always considered healthy if it is capable of responding to HTTP.

You can find a few more samples in the repo.

Understanding liveness and readiness probes

To understand how to make the most out of health checks, it’s important to understand the difference between a liveness probe and a readiness probe.

A failed liveness probe says: The application has crashed. You should shut it down and restart.

A failed readiness probe says: The application is OK but not yet ready to serve traffic.

The set of health checks you want for your application will depend on both what resources your application uses and what kind of monitoring systems you interface with. An application can use multiple health checks middleware to handle requests from different systems.

What health checks should I add?

For many applications the most basic configuration will be sufficient. For instance, if you are using a liveness probe-based system like Docker’s built-in HEALTHCHECK directive, then this might be all you want.

// Startup.cs
public void ConfigureServices(IServiceCollection services)
{
...

    services.AddHealthChecks(); // Registers health checks services
}

public void Configure(IApplicationBuilder app)
{
...

    app.UseHealthChecks("/healthz");

...
}


# Dockerfile
...

HEALTHCHECK CMD curl --fail http://localhost:5000/healthz || exit 1

If your application is running in Kubernetes, you may want to support a readiness probe that health checks your database. This will allow the orchestrator to know when a newly-created pod should start receiving traffic.

public void ConfigureServices(IServiceCollection services)
{
...
    services
        .AddHealthChecks()
        .AddCheck(new SqlConnectionHealthCheck("MyDatabase", Configuration["ConnectionStrings:DefaultConnection"]));
...
}

public void Configure(IApplicationBuilder app)
{
    app.UseHealthChecks("/healthz");
}

...
spec:
  template:
    spec:
      containers:
        - name: myapp              # hypothetical container name
          ports:
            - containerPort: 80
          readinessProbe:
            # an http probe
            httpGet:
              path: /healthz
              port: 80
            # length of time to wait for a pod to initialize
            # after pod startup, before applying health checking
            initialDelaySeconds: 30
            timeoutSeconds: 1

Customization

The health checks middleware supports customization along a few axes. All of these features can be accessed by passing in an instance of HealthCheckOptions, as sketched after the list below.

  • Filter the set of health checks run
  • Customize the HTTP response
  • Customize the mappings of health status -> HTTP status codes
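
A hedged sketch of what that customization might look like (the option names shown reflect the evolving 2.2 API surface and may differ in this preview):

public void Configure(IApplicationBuilder app)
{
    app.UseHealthChecks("/healthz", new HealthCheckOptions
    {
        // Filter the set of health checks run for this endpoint:
        Predicate = registration => registration.Name == "MyDatabase",

        // Customize the mapping of health status -> HTTP status codes:
        ResultStatusCodes =
        {
            [HealthStatus.Healthy] = StatusCodes.Status200OK,
            [HealthStatus.Degraded] = StatusCodes.Status200OK,
            [HealthStatus.Unhealthy] = StatusCodes.Status503ServiceUnavailable
        }
    });
}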

What is coming next?

In a future preview we plan to add official support for health checks based on an ADO.NET DbConnection or Entity Framework Core DbContext.

We expect that the way that IHealthCheck instances interact with Dependency Injection will be improved. The current implementation doesn’t provide good support for interacting with services of varying lifetimes.

We’re working with the authors of Polly to try and integrate health checks for Polly’s circuit breakers.

We plan to also provide guidance and examples for using the health check service with push-based health systems.

How can you help?

There are a few areas where you can provide useful feedback during this preview. We’re interested in any thoughts you have, of course; these are a few specific things we’d like opinions on.

Is the IHealthCheck interface general and useful enough to be used broadly?

  • Including other health check systems
  • Including health checks written by other libraries and frameworks
  • Including health checks written by application authors

The best place to provide feedback is by logging issues on https://github.com/aspnet/Diagnostics

Caveats and notes

The health check middleware doesn’t disable caching for responses to the health endpoint. We plan to add this in the future, but it didn’t make it into the preview build.

Azure SQL Data Warehouse Gen2 now generally available in France and Australia


Today, we announce the broader regional availability of the industry-leading performance provided by Azure SQL Data Warehouse (Azure SQL DW). Azure SQL DW is a fast, flexible, and secure analytics platform offering you a SQL-based view across data. It is elastic, enabling you to provision a cloud data warehouse and scale to terabytes in minutes.

Compute Optimized Gen2 tier is now rolled out to three additional Azure regions: Australia Central, Australia Central 2, and France Central. These new locations bring the worldwide availability count for the Compute Optimized Gen2 tier to 22 regions.

Compute Optimized Gen 2 Regional Availability


Azure SQL DW Gen2 brings the best of Microsoft software and hardware innovations to dramatically improve query performance and concurrency. Our customers now get up to 5 times better performance, on average, for query workloads, 4 times more concurrency, and 5 times higher computing power compared to the previous generation. Azure SQL DW can also serve 128 concurrent queries from a single cluster, the maximum for any cloud data warehousing service.

Begin today and experience the speed, scale, elasticity, security, and ease of use of a cloud-based data warehouse for yourself. You can see this blog post for more info on the capabilities and features of SQL Data Warehouse.

Share your feedback

We would love to hear from you about what features you would like us to add. Please let us know on our feedback site what features you want most. Users who suggest or vote for feedback will receive periodic updates on their request and will be the first to know when the feature is released. Also, you can connect with us if you have any product questions via StackOverflow, or via our MSDN forum.

Learn more

Check out the many resources available for learning more about SQL Data Warehouse.

ASP.NET Core 2.2.0-preview1: Open API Analyzers & Conventions


What is it?

Open API (alternatively known as Swagger) is a language-agnostic specification for describing REST APIs. The Open API ecosystem has tools that allow for discovering, testing, and producing client code using the specification. Support for generating and visualizing Open API documents in ASP.NET Core MVC is provided via community-driven projects such as NSwag and Swashbuckle.AspNetCore. Visit https://docs.microsoft.com/en-us/aspnet/core/tutorials/web-api-help-pages-using-swagger?view=aspnetcore-2.1 to learn more about Open API (Swagger) and for details on configuring your applications to use it.

For 2.2, we’re investing in tooling and runtime experiences to allow developers to produce better Open API documents. This work ties in with ongoing work to perform client code SDK generation during build.

How to use it?

Analyzer

For 2.2, we’re introducing a new API-specific analyzers NuGet package – Microsoft.AspNetCore.Mvc.Api.Analyzers. These analyzers work with controllers annotated with the ApiController attribute introduced in 2.1, while building on API conventions that we’re also introducing in this release. To start using this, install the package:


<PackageReference Include="Microsoft.AspNetCore.Mvc.Api.Analyzers"
    Version="2.2.0-preview1-final"
    PrivateAssets="All" />

Open API documents contain each status code and response type an operation may return. In MVC, you use attributes such as ProducesResponseType and Produces to document these. The analyzer inspects controllers annotated with ApiController and identifies actions that do not entirely document their responses. You should see this as warnings (squiggly lines) highlighting return types that aren’t documented as well as warnings in the output. In Visual Studio, this should additionally appear under the “Warnings” tab in the “Error List” dialog. You now have the opportunity to address these warnings using code fixes.

Let’s look at the analyzer in action:

The analyzer identified that the action returned a 404 but did not document it using a ProducesResponseTypeAttribute. We used a code fix to document this. The added attributes would now become available for Swagger / Open API tools to consume. It’s a great way to identify areas of your application that are lacking swagger documentation and correct it.
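
As a hedged sketch of the kind of action the animation showed (Pet and FindPet are hypothetical):

[ApiController]
[Route("api/[controller]")]
public class PetsController : ControllerBase
{
    // Without the two ProducesResponseType attributes, the analyzer warns
    // that the 200 and 404 responses are undocumented; the code fix adds them:
    [ProducesResponseType(200)]
    [ProducesResponseType(404)]
    [HttpGet("{id}")]
    public ActionResult<Pet> GetById(long id)
    {
        var pet = FindPet(id); // hypothetical lookup helper
        if (pet == null)
        {
            return NotFound(); // 404
        }

        return pet; // 200
    }

    private static Pet FindPet(long id) => null; // hypothetical lookup helper
}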

Conventions

If your controllers follow some common patterns, e.g. they are all primarily CRUD endpoints, and you aren’t already using ProducesResponseType or Produces to document them, you could consider using API conventions. Conventions let you define the most common “conventional” return types and status codes that you return from your actions, and apply them to individual actions or controllers, or all controllers in an assembly. Conventions are a substitute for decorating individual actions with ProducesResponseType attributes.

By default, ASP.NET Core MVC 2.2 ships with a set of default conventions – DefaultApiConventions – that’s based on the controller that ASP.NET Core scaffolds. If your actions follow the pattern that scaffolding produces, you should be successful using the default conventions.

At runtime, ApiExplorer understands conventions. ApiExplorer is MVC’s abstraction to communicate with Open API document generators. Attributes from the applied convention get associated with an action and will be included in the action’s Swagger documentation. API analyzers also understand conventions. If your action is unconventional, i.e. it returns a status code that is not documented by the applied convention, it will produce a warning, encouraging you to document it.

There are 3 ways to apply a convention to a controller action:

  • Applying the ApiConventionType attribute as an assembly level attribute. This applies the specified convention to all controllers in an assembly.
[assembly: ApiConventionType(typeof(DefaultApiConventions))]
  • Using the ApiConventionType attribute on a controller.
[ApiConventionType(typeof(DefaultApiConventions))]
[ApiController]
[Route("/api/[controller]")]
public class PetsController : ControllerBase
{
    ...
}
  • Using ApiConventionMethod. This attribute accepts both the type and the convention method.
// PUT: api/Pets/5
[ApiConventionMethod(typeof(DefaultApiConventions), nameof(DefaultApiConventions.Put))]
[HttpPut("{id}")]
public async Task<ActionResult<Pet>> PutPet(long id, Pet pet)
{
    ...
}

Like many other features in MVC, more specific attributes will supersede less specific ones. An API metadata attribute such as ProducesResponseType or Produces applied to an action will stop any convention attributes from applying. An ApiConventionMethod attribute will supersede an ApiConventionType attribute applied to the method’s controller or the assembly; and an ApiConventionType attribute applied to a controller will supersede ones applied to the assembly.

Authoring conventions

A convention is a static type with methods. These methods are annotated with ProducesResponseType or ProducesDefaultResponseType attributes.

public static class MyAppConventions
{
    [ProducesResponseType(200)]
    [ProducesResponseType(404)]
    public static void Find(int id)
    {

    }
}

Applying this convention to an assembly would result in the convention method applying to any action with the name Find and having exactly one parameter named id, as long as they do not have other more specific metadata attributes.

In addition to ProducesResponseType and ProducesDefaultResponseType, two additional attributes – ApiConventionNameMatch and ApiConventionTypeMatch – can be applied to the convention method that determines the methods they apply to.

[ProducesResponseType(200)]
[ProducesResponseType(404)]
[ApiConventionNameMatch(ApiConventionNameMatchBehavior.Prefix)]
public static void Find(
    [ApiConventionNameMatch(ApiConventionNameMatchBehavior.Suffix)]
    int id)
{ }

The ApiConventionNameMatchBehavior.Prefix applied to the method indicates that the convention can match any action as long as its name starts with the prefix “Find”. This includes methods such as Find, FindPet, or FindById. The ApiConventionNameMatchBehavior.Suffix applied to the parameter indicates that the convention can match methods with exactly one parameter whose name ends in the suffix id. This includes parameters such as id or petId. ApiConventionTypeMatch can be similarly applied to types to constrain the type of the parameter. A params[] argument can be used to indicate remaining parameters that do not need to be explicitly matched.

An easy way to get started authoring a custom convention is to start by copying the body of DefaultApiConventions and modifying it. Here’s a link to the source of the type: https://raw.githubusercontent.com/aspnet/Mvc/release/2.2/src/Microsoft.AspNetCore.Mvc.Core/DefaultApiConventions.cs

Feedback

This is one of our earliest forays into trying to use tooling to enhance runtime experiences. We’re interested in any thoughts you have about this, as well as your experiences using this in your applications. The best place to provide feedback is by opening issues at https://github.com/aspnet/Mvc.

Additional help

Speech Services August 2018 update


We are pleased to announce the release of another update to the Cognitive Services Speech SDK (version 0.6.0). With this release, we have added support for Java on Windows 10 (x64) and Linux (x64). We are also extending support for .NET Standard 2.0 to the Linux platform. The changes are highlighted in the table below. The samples section of the SDK has been updated with samples showcasing the use of the newly supported languages. UWP support was added in the Speech SDK version 0.5.0 release, and starting now, UWP apps built with the Speech SDK can be published to the Microsoft Store.

Speech SDK August Update

We also included several bug fixes which were reported by early adopters. Most notably, this should fix errors in long-running speech transcriptions, as well as reduce the number of in-use socket connections and threads.

Other functional changes, breaking changes and bug fixes can be found in the Speech SDK’s release notes. For questions regarding Speech SDK and Speech Services, please visit our support page.

There are also changes that impact the Speech Devices SDK. To provide a little bit of background, the Speech Devices SDK is for our devices solution. It consumes the Speech SDK, and uses the Speech SDK to send the audio to the Speech Service for speech recognition. The Speech Devices SDK also has an advanced audio processing algorithm that’s fine-tuned for the Roobo dev kits, significantly improving the quality of the audio data collected via the dev kits’ microphones, for high speech recognition accuracy.

The custom keyword spotting feature is only available for the Speech Devices SDK. In this release, the custom keyword spotting feature’s KeywordRecognitionModel function now supports fromFile() and fromStream() with a zip package that contains the kws.table file as well as optional keyword adaptation files. We generally release an update to the Speech Devices SDK shortly after the release of the Speech SDK. For any update specific to the Speech Devices SDK, please visit the Speech Devices SDK’s release notes.

Please let us know if you have questions, by visiting our support page.

Cross-subscription disaster recovery for Azure virtual machines

$
0
0

Today, we are glad to announce cross-subscription disaster recovery (DR) support for Azure virtual machines using Azure Site Recovery (ASR). You can now configure DR for Azure IaaS applications to a different subscription within the same Azure Active Directory tenant.

An Azure subscription is the basic unit where all Azure resources are contained. It also defines several limits within Azure, such as the number of cores, resources, etc. Many organizations use multiple Azure subscriptions in their Azure account for billing or management purposes. For example, some organizations use different subscriptions for production, staging, and disaster recovery environments for ease of management and to adhere to the subscription quota limits. With the new capability, you can replicate your virtual machines to a different Azure region of your choice within a geographical cluster, across subscriptions. This helps you meet the business continuity and disaster recovery requirements for your IaaS applications without altering the subscription topology of your Azure environment.

Configuring DR across subscriptions is very simple. By default, the target subscription will be the same as the source virtual machine's subscription. You can customize and select the target subscription of your choice, and all the other settings, such as resource group and virtual network, are auto-populated from the selected target subscription.

cross-sub-blog

Disaster recovery between Azure regions is available in all Azure regions where ASR is available. Get started with Azure Site Recovery today.

Related links and additional content


ASP.NET Core 2.2.0-preview1: SignalR Java Client


In ASP.NET Core 2.2 we are introducing a Java Client for SignalR. The first preview of this new client is available now. This client supports connecting to an ASP.NET Core SignalR Server from Java code, including Android apps.

The API for the Java client is very similar to that of the already existing .NET and JavaScript clients but there are some important differences to note.

The HubConnection is initialized the same way, with the HubConnectionBuilder type.

HubConnection hubConnection = new HubConnectionBuilder()
        .withUrl("www.example.com/myHub")
        .configureLogging(LogLevel.Information)
        .build();

Just like in the .NET client, we can send an invocation using the send method.

hubConnection.send("Send", input);

Handlers for server invocations can be registered using the on method. One major difference here is that the argument types must be specified as parameters, due to differences in how Java handles generics.

hubConnection.on("Send", (message) -> {
      // Your logic here
}, String.class);
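
Putting it together, a minimal usage sketch (hedged: names follow the preview API, and as noted in the limitations below, the APIs are currently synchronous):

hubConnection.start();                       // blocks until connected
hubConnection.send("Send", "Hello world!");  // invoke the hub method "Send"
hubConnection.stop();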

Installing the Java Client

If you’re using Gradle you can add the following line to your build.gradle file:

implementation 'com.microsoft.aspnet:signalr:0.1.0-preview1-35029'

If you’re using Maven you can add the following lines to the dependencies section of your pom.xml file:

<dependency>
  <groupId>com.microsoft.aspnet</groupId>
  <artifactId>signalr</artifactId>
  <version>0.1.0-preview1-35029</version>
</dependency>

For a complete sample, see https://github.com/aspnet/SignalR-samples/tree/master/AndroidJavaClient

This is an early preview release of the Java client so there are many features that are not yet supported. We plan to close all these gaps before the RTM release:

  • Only primitive types can be accepted as parameters and return types.
  • The APIs are synchronous.
  • Only the “Send” call type is supported at this time, “Invoke” and streaming return values are not supported.
  • The client does not currently support the Azure SignalR Service.
  • Only the JSON protocol is supported.
  • Only the WebSockets transport is supported.

Video: Azure Machine Learning in plain English


Data Scientist and author Siraj Raval recently released a 12-minute video overview of Azure Machine Learning (embedded at the end of this post). The video begins with an overview of cloud computing and Microsoft Azure generally, before getting into the details of some specific Azure services for machine learning.

For those of you familiar with similar services from AWS, or just generally baffled by some of the Microsoft service names (and if so you're not alone there — believe me!), Siraj has also produced a plain English codex for Microsoft Azure services, with a description, corresponding service name in other systems, and what the service maybe should have been called in the first place.

Azure in plain English

Take a look at Siraj's video, below:

 

Visual Studio Code C/C++ extension August 2018 Update


Late last week we shipped the August 2018 update to the C/C++ extension for Visual Studio Code. This update included support for “Just My Code” symbol search, a gcc-x64 option in the intelliSenseMode setting, and many bug fixes. You can find the full list of changes in the release notes.

“Just My Code” symbol search

Keyboard shortcut Ctrl+T in Visual Studio Code lets you jump to any symbol in the entire workspace.

We have heard feedback that it is sometimes desirable to have system header symbols excluded from this search. In this update, we enabled “Just My Code” symbol search to filter out system symbols, which offers a cleaner result list and significantly speeds up symbol search in large codebases; as such, we’ve made this behavior the default.

If you need symbol search to also include system headers, simply toggle the C_Cpp.workspaceSymbols setting in the VS Code Settings file (File > Preferences > Settings).
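
For example, adding this to your settings.json restores workspace-wide symbol search (a sketch; check the setting’s description in the Settings editor for the exact accepted values):

{
    "C_Cpp.workspaceSymbols": "All"
}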

Tell us what you think

Download the C/C++ extension for Visual Studio Code, try it out and let us know what you think. File issues and suggestions on GitHub. If you haven’t already provided us feedback, please take this quick survey to help shape this extension for your needs.

The Chartmaker Directory: Data visualizations in every tool

$
0
0

Working with a new data visualization tool, and wondering how to create a specific type of chart? The Chartmaker Directory (designed by Andy Kirk) indexes more than 35 tools and over 50 charts, and provides links to examples from each combination. Here's an intentionally small detail from the index, where each hollow dot in a column represents a sample chart created with a tool (the rightmost column is R, for example), and solid dots also include code.

Chartmaker detail

The visualization tools include applications like Excel, Power BI, and Tableau; languages and libraries including R, Stata, and Python's matplotlib; and frameworks like D3. The data visualizations range from the standard to the esoteric, and follow the taxonomy of the book Data Visualisation (also by Andy Kirk). The chart categories are color coded by row: categorical (including bar charts, dot plots); hierarchical (donut charts, treemaps); relational (scatterplots, sankey diagrams); temporal (line charts, stream graphs) and spatial (choropleths, cartograms).

Follow the link below to explore the indexed tools and visualizations. If you know of an example or solution that isn't listed, you can contribute your own links by clicking the plus sign in the top left of the index.

Visualizing Data: The Chartmaker Directory

Air France elevates customer service and empowers employees with Office 365



Today’s post was written by Amel Hammouda, chief transformation officer at Air France and a member of the Air France executive committee.

Whether they’re traveling overseas for business, taking a chance on a whirlwind romance, or reuniting with old friends, passengers count on Air France to deliver an exceptional customer experience on their journeys. Every year, Air France flies upwards of 87.3 million passengers to hundreds of destinations around the world. While we are internationally recognized for the lengths we go to for our customers, we are equally committed to making sure our employees have a great experience on the job.

In fact, we see a direct correlation between empowered employees and satisfied customers. This idea drives our strategic business plan, which aims to foster an innovative mindset for the benefit of our customers and our employees. This means ensuring that our employees have the right tools to contribute their ideas and enthusiasm—from unique ways to present in-flight meals to project management on an international scale. We’re building a modern, mobile workplace with Microsoft Office 365, so we can tap into the enthusiasm of our dynamic, engaged employees. Continuing to innovate great customer service will help us maintain our advantage in today’s competitive airline industry.

It’s often said that you never know who you might meet on an airplane, and this spirit of potential is also at the core of our corporate culture. As we continue to build on a culture of innovation, we know that game-changing ideas can come from anywhere in the business. With Office 365, we have tools to break down silos and leverage our collective brainpower. More than 46,000 employees use the Yammer corporate social network to share their ideas and innovate through improved collaboration for the benefit of our customers. Many of our most successful Yammer initiatives have been entirely grassroots, like the viral “I Love My Job” project that started with a single flight attendant using Yammer to share pictures and stories about her job. This grew into an incredible network of Firstline Workers who use Yammer to share best practices, tips, and tricks—innovations that we can use to take customer service to the next level.

Consistency and accuracy occur naturally in a well-connected workforce. With our cloud-based communication tools taking off in the company, it is easier for Air France employees to provide accurate answers to customers’ questions, and even anticipate their needs. The more knowledge we share across the company, the more unified we are in our approach to service, and the more reliable we are in the eyes of our customers. Today, Air France is a more interconnected, productive organization because we have the technology to communicate and collaborate effortlessly.

Dynamic communication tools like Yammer don’t just improve employee connections, they also bring tangible business wins and help the focus remain on customers. The speed at which information can spread throughout the company translates to more agile decision-making and efficiency gains. Yammer was recently used by flight attendants who encountered a problem with snack packaging on a particular route. From posts across the company, it quickly became clear that flight attendants elsewhere noticed the same frustrating defect. Within 48 hours, Air France had negotiated a refund and replacement from the supplier—a win for both flight attendants and customers.

Air France operates a complex network of flights all over the globe; distributing information across our geographically dispersed enterprise can be equally complex. Working with intelligent communication applications like Microsoft Teams helps us to connect a mobile workforce that is always on the move. Our digital champions, employees who are passionate about promoting digital culture, use Skype for Business video calls to drive adoption of our new cloud tools. My team uses Microsoft SharePoint Online for document sharing and storage. I have seen firsthand how these tools can help us transcend department boundaries and drive projects across the company.

Our strategic priorities include two main lines of action: to continuously improve the customer experience and do the same for employees. Thanks to our modern, connected workplace, we are accelerating progress in both strategic directions and we expect great synergies over the next few years.

Read the case study to learn more about how Air France uses Office 365 to improve customer service and empower employees.

