
Videos from NYC R Conference


The videos from the NYC R Conference have been published, and there are so many great talks to explore. I highly recommend checking them out: you'll find a wealth of interesting R applications, informative deep dives on using R (and a few other tools as well), and some very entertaining deliveries. In this post, I want to highlight a couple of talks in particular.

The talk by Jonathan Hersh (Chapman University), "Applying Deep Learning to Satellite Images to Estimate Violence in Syria and Poverty in Mexico," is both fascinating and a real technical achievement. In the first part of the talk, he uses convolutional neural networks to identify buildings, roads, and vegetation in satellite images, and uses that to predict poverty rates across all of Mexico. In the second part, he uses a similar technique to identify violent hotspots in Syria, this time by identifying damaged buildings and roads in the satellite images. In both cases, analyzing satellite images is far less expensive (and more consistent) than collecting samples at the ground level. The neural networks were implemented in Theano running on a GPU-enabled Microsoft Azure server.

Moving from applications to platforms, my colleague Marck Vaisman gave a great talk on R for Big Data in the cloud. His talk offers some practical considerations related to working with large data sets in R, and details some cloud technologies for big data that you can connect with R, like Spark and sparklyr (which is easy to use in the cloud with Azure Databricks). 

My talk on R and Minecraft is there as well, if you want to check it out. That talk and all of the rest from the conference are available for viewing at the link below.

New York R Conference: 2018 


Intelligent Search: Video summarization using machine learning


Videos account for some of the richest and most delightful content on the web. But it can be difficult to tell which cat video is really going to make you LOL. Sifting through them can be a frustrating and time-consuming process, which is why we decided to help by building a smart preview feature to improve our search results and help our users find videos on the web more effectively. The idea is simple: hover on a video-result thumbnail and you see a short preview of the video that tells you whether it's the one you are looking for. You can try this out with a query on the video vertical – like funny cats.


The concept may be simple, but the execution is not. Video summarization is among the hardest technical challenges. Things that are intuitive to human beings, like “the main scene,” are inherently case-dependent and difficult for machines to internalize or generalize. Here’s how we use data and some machine learning magic to solve this technically challenging problem.

Overview

There are broadly two approaches to video summarization: static and dynamic. Static summarization techniques try to find the important frames (still images) from different parts of the video and splice them together into a kind of storyboard. Dynamic summarization techniques divide the video into small segments (chunks) and try to select and combine the most important ones into a fixed-duration summary.

We chose the static approach for reasons of efficiency and utility. We had data indicating that over 80% of viewers hovered on the thumbnail for less than 10 seconds (i.e., users don't have the patience to watch long previews). We therefore thought it would be useful to provide a set of four diverse thumbnails that could summarize the video at a single glance; UX constraints kept us from adding more thumbnails than that. Our problem thus became twofold: selecting the most relevant thumbnail (hereinafter the ‘primary thumbnail’) and selecting the four-thumbnail set that summarizes the video.

Step One: Selecting the primary thumbnail

Here’s how we created a machine-learning pipeline for selecting the primary thumbnail of any video. First and foremost, you need labelled data, and lots of it. To teach our machines what good and bad thumbnails look like, we randomly sample 30 frames (frame = still image) from the video and show them to our judges. The judges evaluate these frames using a subjective evaluation that considers attributes such as image quality, representativeness, attractiveness, etc., and assign each frame a label based on its quality: Good, Neutral, or Bad (scored as 1, 0.5, and 0, respectively). A point to note: our training data is not query specific, i.e., the judges evaluate each thumbnail in isolation, not in the context of a query. This training data, along with a host of features extracted from these images (more on that in a bit), is used to train a boosted-trees regression model that tries to predict the label of an unseen frame from its features. The boosted-trees model outputs a score between 0 and 1 that helps us decide the best frame to use as the primary thumbnail for the video.
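
To make that step concrete, here's a minimal sketch of the frame-scoring model, using scikit-learn's gradient-boosted trees as a stand-in for our production learner; the feature columns and numbers are purely illustrative.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# One row per judged frame: [contrast, blurriness, noise, n_faces, scene_length]
X_train = np.array([
    [0.8, 0.1, 0.05, 1, 120],   # judged "Good"    -> 1.0
    [0.4, 0.6, 0.30, 0, 15],    # judged "Bad"     -> 0.0
    [0.6, 0.3, 0.10, 2, 60],    # judged "Neutral" -> 0.5
])
y_train = np.array([1.0, 0.0, 0.5])

# Boosted-trees regression on the Good/Neutral/Bad labels.
model = GradientBoostingRegressor(n_estimators=200, max_depth=3)
model.fit(X_train, y_train)

# Score 30 sampled frames of an unseen video; the argmax becomes the primary thumbnail.
frames = np.random.rand(30, 5)   # placeholder feature matrix
scores = model.predict(frames)
primary = int(np.argmax(scores))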

What features turned out to be useful in selecting a good thumbnail? Core image-quality features proved very useful (i.e., features like the level of contrast, blurriness, the level of noise, etc.). We also used more sophisticated features powered by face detection (the number of faces detected, face size and position relative to the frame, etc.), along with motion-detection and frame-difference/frame-similarity features. Visually similar and temporally co-located frames are grouped into video sequences called scenes, and the scene length of the corresponding frame is also used as a feature – this turns out to be helpful in deciding whether the selected thumbnail is a good one. Finally, we also use deep neural networks (DNNs) to train high-dimensional image vectors on the image-quality labels; these vectors capture the quality of the frame layout (factors like the zoom level – the absence of extreme close-ups and extreme zoom-outs, etc.). The frame with the highest predicted score is selected as the primary thumbnail shown to the user.
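
As a rough illustration of a few of those core image-quality features, here is a sketch using OpenCV; the formulas are simple stand-ins, not our production definitions.

import cv2

def frame_features(path):
    # Load the frame and convert to grayscale for the quality metrics.
    img = cv2.imread(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    contrast = gray.std()                              # global contrast proxy
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()  # low variance suggests blur
    # Face features from OpenCV's bundled Haar cascade detector.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    n_faces = len(cascade.detectMultiScale(gray))
    return [contrast, sharpness, n_faces]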


Step Two: Selecting the remaining thumbnails for the video summary

The next step is to create a four-thumbnail set that provides a good representative summary of the video. A key requirement is comprehensiveness, and it brings many technical challenges. First, we could have simply taken the four frames with the highest scores from the previous step and called that a summary. But that won't work in most cases: there's a high chance the four top-scored frames come from the exact same scene, in which case they don't do a good job of summarizing the whole video. Second, from a computational-cost point of view, it is impractical to evaluate all possible four-frame candidate sets. Third, it's hard to collect training data about the four frames that best summarize a video, because users can't realistically pick the 4 best frames out of a video with thousands of frames. Here's how we handle each of these problems.

To deal with comprehensiveness, we introduce a similarity factor in the objective function. The new objective function for the expanded thumbnail set not only tries to maximize the total image-quality score, but also adds a tuning parameter for similarity. The weight for this parameter is learned from judges' labelled data (more on that below). The similarity factor currently has a negative weight, i.e., a set of 4 high-quality frames that are mutually diverse will generally be considered a better summary than a corresponding set whose frames are similar.

We deal with computational complexity by solving the problem greedily. As stated before, it's not possible to evaluate every possible combination of 4-frame summaries. Moreover, the best combination of 4 frames need not contain the primary thumbnail (it's possible that the best combination excludes it). But since we've already taken great pains to select the primary thumbnail, it greatly simplifies our task to use it as a starting point and select just three more thumbnails, each chosen to maximize the total score. That's greedy optimization.
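
Here is a small sketch of that greedy selection, assuming the per-frame quality scores and pairwise similarities have already been computed by the models described above; the similarity term enters as a penalty, reflecting its negative weight.

# Greedy set selection: start from the primary thumbnail, then repeatedly add
# the frame whose quality score, minus a penalty for resembling the frames
# already chosen, is highest. quality[f] is the boosted-trees score and
# similarity[f][c] is a pairwise visual-similarity score in [0, 1].
def select_summary(frames, quality, similarity, primary, k=4, w_sim=0.5):
    chosen = [primary]
    while len(chosen) < k:
        best, best_obj = None, float("-inf")
        for f in frames:
            if f in chosen:
                continue
            obj = quality[f] - w_sim * max(similarity[f][c] for c in chosen)
            if obj > best_obj:
                best, best_obj = f, obj
        chosen.append(best)
    return chosen

Each pass evaluates only the remaining frames against the set built so far, so the cost grows linearly with the number of frames per added thumbnail instead of combinatorially.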

Here’s how we generate training data for learning the weights for similarity and the other features. We show judges two sets of 4 frames, side by side (the frames are randomly selected from the video), and ask them for a side-by-side judgment: label the pair as “left better,” “right better,” or “equal.” This training data is then used to derive the thumbnail-set model by training the new objective function (total image-quality score plus the similarity term) on the 4-frame sets. As it turned out, the learned weight for similarity is indeed negative, i.e., in general, more visually diverse frame-sets make better summaries. That’s how we select the 4-thumbnail set.
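
One simple way to learn such weights from side-by-side preferences is a logistic model over the difference of the two sets' aggregate features; this sketch illustrates the idea and is not our production trainer.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is one judged pair: aggregate features of the left and right
# 4-frame sets, as [total_quality, mean_pairwise_similarity].
left  = np.array([[3.2, 0.9], [2.8, 0.2], [3.5, 0.7]])
right = np.array([[3.0, 0.3], [2.9, 0.8], [3.1, 0.1]])
label = np.array([0, 1, 0])   # 1 = "left better", 0 = "right better"

# Fit on feature differences; the signs of the learned coefficients give the
# objective-function weights (positive on quality, negative on similarity).
clf = LogisticRegression().fit(left - right, label)
print(clf.coef_)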


Here are some examples that show the improved performance of our new model over baseline methods available in static video summarization.



Outcome: Creating a playable preview video from 4 thumbnails

Of course, with all this technical wizardry we can't forget our main objective: generating a playable video clip from these 4 thumbnails to help our users better locate web videos. We do that by extracting a small clip around each of the four frames. How we find the boundaries to snip is probably the subject of another blog. The result of all of this is a Smart Preview that helps users know what's in the video, which means all of us can spend less time searching and more time watching the videos we want. As with earlier features like the coding answer, the goal is to build an intelligent search engine and save users' time. Try it out on our video vertical.

Improvements on ASP.NET Core deployments on Zeit’s now.sh and making small container images


Back in March of 2017 I blogged about Zeit and their cool deployment system "now." Zeit will take any folder and deploy it to the web easily. Better yet, if you have a Dockerfile in that folder, Zeit will just use it for the deployment.


Zeit's free Open Source account has a limit of 100 megs for the resulting image, and with the right Dockerfile, a starter ASP.NET Core app comes in at less than 77 megs. You just need to be smart about a few things. Additionally, it's running in a somewhat constrained environment, so ASP.NET's assumptions around file watchers can occasionally cause you to see errors like:

Unhandled Exception: System.IO.IOException:
The configured user limit (8192) on the number of inotify instances has been reached.
   at System.IO.FileSystemWatcher.StartRaisingEvents()
   at System.IO.FileSystemWatcher.StartRaisingEventsIfNotDisposed()

The fix is the DOTNET_USE_POLLING_FILE_WATCHER environment variable. While it is set by default in the "FROM microsoft/dotnet:2.1-sdk" image, it's not set at runtime. That's dependent on your environment.

Here's my Dockerfile for a simple project called SuperZeit. Note that the project is structured with a SLN file, which I recommend.

Let me call out a few things.

  • First, we're doing a Multi-stage build here.
    • The SDK is large. You don't want to deploy the compiler to your runtime image!
  • Second, the first copy commands just copy the sln and the csproj.
    • You don't need the source code to do a dotnet restore! (Did you know that?)
    • Not deploying source means that your docker builds will be MUCH faster as Docker will cache the steps and only regenerate things that change. Docker will only run dotnet restore again if the solution or project files change. Not the source.
  • Third, we are using the aspnetcore-runtime image here. Not the dotnetcore one.
    • That means this image includes the binaries for .NET Core and ASP.NET Core. We don't need or want to include them again.
    • If you were doing a publish with the -r switch, you'd be doing a self-contained build/publish. You'd end up copying TWO .NET Core runtimes into a container! That'll cost you another 50-60 megs and it's just wasteful. If you want to do that anyway:
    • Go explore the very good examples in the .NET Docker Repo on GitHub https://github.com/dotnet/dotnet-docker/tree/master/samples
    • Optimizing Container Size
  • Finally, since some container systems like Zeit have modest settings for inotify instances (to avoid abuse, plus most folks don't use them as often as .NET Core does) you'll want to set ENV DOTNET_USE_POLLING_FILE_WATCHER=true which I do in the runtime image.

So starting from this Dockerfile:

FROM microsoft/dotnet:2.1-sdk-alpine AS build

WORKDIR /app

# copy csproj and restore as distinct layers
COPY *.sln .
COPY superzeit/*.csproj ./superzeit/
RUN dotnet restore

# copy everything else and build app
COPY . .
WORKDIR /app/superzeit
RUN dotnet build

FROM build AS publish
WORKDIR /app/superzeit
RUN dotnet publish -c Release -o out

FROM microsoft/dotnet:2.1-aspnetcore-runtime-alpine AS runtime
ENV DOTNET_USE_POLLING_FILE_WATCHER=true
WORKDIR /app
COPY --from=publish /app/superzeit/out ./
ENTRYPOINT ["dotnet", "superzeit.dll"]

Remember the layers of the Docker images, as if they were a call stack:

  • Your app's files
  • ASP.NET Core Runtime
  • .NET Core Runtime
  • .NET Core native dependencies (OS specific)
  • OS image (Alpine, Ubuntu, etc)

For my little app I end up with a 76.8 meg image. If I want, I can add the experimental .NET IL Trimmer. It won't make a difference with this app as it's already pretty simple, but it could with a larger one.

BUT! What if we changed the layering to this?

  • Your app's files along with a self-contained copy of ASP.NET Core and .NET Core
  • .NET Core native dependencies (OS specific)
  • OS image (Alpine, Ubuntu, etc)

Then we could do a self-contained deployment and then trim the result! Richard Lander has a great Dockerfile example.

See how he's doing the package addition with the dotnet CLI via "dotnet add package" and the subsequent trim within the Dockerfile (as opposed to adding it to your local development copy's csproj).

FROM microsoft/dotnet:2.1-sdk-alpine AS build

WORKDIR /app

# copy csproj and restore as distinct layers
COPY *.sln .
COPY nuget.config .
COPY superzeit/*.csproj ./superzeit/
RUN dotnet restore

# copy everything else and build app
COPY . .
WORKDIR /app/superzeit
RUN dotnet build

FROM build AS publish
WORKDIR /app/superzeit
# add IL Linker package
RUN dotnet add package ILLink.Tasks -v 0.1.5-preview-1841731 -s https://dotnet.myget.org/F/dotnet-core/api/v3/index.json
RUN dotnet publish -c Release -o out -r linux-musl-x64 /p:ShowLinkerSizeComparison=true

FROM microsoft/dotnet:2.1-runtime-deps-alpine AS runtime
ENV DOTNET_USE_POLLING_FILE_WATCHER=true
WORKDIR /app
COPY --from=publish /app/superzeit/out ./
ENTRYPOINT ["dotnet", "superzeit.dll"]

Now at this point, I'd want to see how small the IL Linker made my ultimate project. The goal is to be less than 75 megs. However, I think I've hit this bug so I will have to head to bed and check on it in the morning.

The project is at https://github.com/shanselman/superzeit and you can just clone and "docker build" and see the bug.

However, if you check the comments in the Dockerfile and just use "FROM microsoft/dotnet:2.1-aspnetcore-runtime-alpine AS runtime" it works fine. I just think I can get it even smaller than 75 megs.

Talk to you soon, Dear Reader! (I'll update this post when I find out about that bug...or perhaps my bug!)


Sponsor: Preview the latest JetBrains Rider with its built-in spell checking, initial Blazor support, partial C# 7.3 support, enhanced debugger, C# Interactive, and a redesigned Solution Explorer.



© 2018 Scott Hanselman. All rights reserved.
     

2018 State of DevOps Report Now Available

The new State of DevOps Report is available, and it is a must-read. This is the best empirical analysis I have read of the practices that make organizations effective at software delivery. Here’s one of my favorite examples from the study: At Microsoft, we feel privileged to be able to co-sponsor the 2018 State of... Read More

Announcing general availability of Azure IoT Hub’s integration with Azure Event Grid


We’re proud to see more and more customers using Azure IoT Hub to control and manage billions of devices, send data to the cloud and gain business insights. We are excited to announce that IoT Hub integration with Azure Event Grid is now generally available, making it even easier to transform these insights into actions by simplifying the architecture of IoT solutions. Some key benefits include:

  • Easily integrate with modern serverless architectures, such as Azure Functions and Azure Logic Apps, to automate workflows and downstream processes.
  • Enable alerting with quick reaction to creation, deletion, connection, and disconnection of devices.
  • Eliminate the complexity and expense of polling services, and integrate events with third-party applications using webhooks for scenarios such as ticketing, billing, and database updates.

Together, these two services help customers easily integrate event notifications from IoT solutions with other powerful Azure services or 3rd party applications. These services add important device lifecycle support with events such as device created, device deleted, device connected, and device disconnected, in a highly reliable, scalable, and secure manner.

Here is how it works:
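
IoT Hub publishes the device lifecycle events to Event Grid, which pushes them to whatever subscriber you configure – an Azure Function, a Logic App, or your own webhook. As a rough illustration, here is a minimal Python webhook (standard library only) that completes Event Grid's validation handshake and logs the four device lifecycle event types; the port and handling logic are illustrative.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class EventGridHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Event Grid delivers an array of events per POST.
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        response = {}
        for event in body:
            if event["eventType"] == "Microsoft.EventGrid.SubscriptionValidationEvent":
                # Handshake: echo the validation code to prove we own the endpoint.
                response["validationResponse"] = event["data"]["validationCode"]
            elif event["eventType"] in ("Microsoft.Devices.DeviceCreated",
                                        "Microsoft.Devices.DeviceDeleted",
                                        "Microsoft.Devices.DeviceConnected",
                                        "Microsoft.Devices.DeviceDisconnected"):
                print(event["eventType"], event["subject"])  # react to the event here
        payload = json.dumps(response).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    HTTPServer(("", 8080), EventGridHandler).serve_forever()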


As of today, this capability is available in the following regions:

  • Asia Southeast
  • Asia East
  • Australia East
  • Australia Southeast
  • Central US
  • East US 2
  • West Central US
  • West US
  • West US 2
  • South Central US
  • Europe West
  • Europe North
  • Japan East
  • Japan West
  • Korea Central
  • Korea South
  • Canada Central
  • Central India
  • South India
  • Brazil South
  • UK West
  • UK South
  • East US, coming soon
  • Canada East, coming soon

    Azure Event Grid became generally available earlier this year and currently has built-in integration with the following services:

    Azure Event Grid service integration

    As we work to deliver more events from Azure IoT Hub, we are excited for you to try this capability and build more streamlined IoT solutions for your business. Try this tutorial to get started.

    We would love to hear more about your experiences with the preview and get your feedback! Are there other IoT Hub events you would like to see made available? Please continue to submit your suggestions through the Azure IoT User Voice forum.

    Microsoft Azure, the cloud for high performance computing


    Today, we continue to see customers leveraging Azure to push through new frontiers in high-performance and accelerated computing. From NeuroInitiative's quest to accelerate drug discovery for Parkinson's and Alzheimer's diseases to EFS's development of self-driving car technologies, a vast number of customers are leveraging Azure for breakthrough innovation.

    We continue to invest in delivering the broadest range of accelerated and high-performance computing (HPC) capabilities in the public cloud. From InfiniBand-enabled virtual machine families for artificial intelligence and HPC to hyperscale services like Cray supercomputing, Azure enables customers to deliver the full spectrum of AI and machine learning applications.

    Azure CycleCloud – the simplest way to execute HPC on Azure

    We are excited to announce the general availability of Azure CycleCloud, a tool for creating, managing, operating, and optimizing HPC clusters of any scale in Azure.

    With Azure CycleCloud, we are making it even easier for everyone to deploy, use, and optimize HPC burst, hybrid, or cloud-only clusters. For users running traditional HPC clusters, using schedulers including SLURM, PBS Pro, Grid Engine, LSF, HPC Pack, or HTCondor, this will be the easiest way to get clusters up and running in the cloud, and manage the compute/data workflows, user access, and costs for their HPC workloads over time. 

    With a few clicks, HPC IT administrators can deploy high-performance clusters of compute, storage, filesystem, and application capability in Azure. Azure CycleCloud’s role-based policies and governance features make it easy for their organizations to deliver the hybrid compute power where needed while avoiding runaway costs. Users can rely on Azure CycleCloud to orchestrate their job and data workflows across these clusters.


    Customers including GE, Johnson & Johnson, and Ramboll leverage CycleCloud technology to deploy HPC clusters, control access and costs, and simplify management for compute and data workloads on the cloud. As an example of an innovative HPC simulation and AI workload, Silicon Therapeutics is using Azure CycleCloud to orchestrate a large Slurm HPC cluster with GPUs to simulate a large number of proteins to assess if and how these proteins can be targeted in their drug design projects.

    Silicon Therapeutics – next generation drug discovery using quantum physics and machine learning

    Silicon Therapeutics has created a unique quantum-physics simulation technology to identify targets and design drugs to fight diseases that have been considered difficult for traditional approaches. These challenging protein targets typically involve large changes in their shape ("conformational changes") associated with their biological function.

    The company’s proprietary platform couples biological data with the dynamic nature of proteins to identify new disease targets. The integration of experimental data with physics-based simulations and machine learning can be performed at the genome scale, which is extremely computationally demanding, but tractable in the modern era of computing. Once targets have been identified, the platform is used to study thousands of molecules at the atomic level to gain insights that are used to guide the design of new, better drug candidates, which they synthesize and test in the lab.

    Here, Silicon Therapeutics ran molecular dynamics simulations on thousands of targets—both to explore “flexibility” and to identify potential “hotspots” for designing new medicines. The simulations entailed millions of steps computing interactions between tens of thousands of atoms, which they ran on thousands of proteins.

    The computations consumed five years of GPU compute time, yet ran in only 20 hours on 2,048 NCv1 GPU instances in Azure. The auto-scaling capabilities of Azure CycleCloud created a Slurm cluster using Azure's NCv1 VMs with full-performance NVIDIA K80 GPUs, and a BeeGFS file system. This environment mirrored their internal cluster, so their on-premises jobs could run seamlessly in Azure without any bottlenecks. The search for potential protein "hotspots" where drug candidates might be able to fight disease generated over 50 TB of data. At peak, the 2,048 K80 GPUs drove over 25 GB/second of bandwidth between the BeeGFS file system and the compute nodes.

    Using CycleCloud, Silicon Therapeutics could run the same platform they ran in-house, simply scaling a Slurm HPC cluster with low-priority GPU execute nodes and an 80 TB BeeGFS parallel filesystem to execute the molecular dynamics simulations and machine learning workloads in the search for potential new drug candidates.

    “In our work, where simulations are central to our decisions, time-to-solution is critical. Even with our significant internal compute resources, the Microsoft Azure cloud offers the opportunity to scale up resources with minimal effort. Running thousands of GPUs, as in this work, was a smooth process, and the Azure support team was excellent,” says Woody Sherman, CSO at Silicon Therapeutics.

    Azure CycleCloud is free to download and use to help get innovative HPC workloads like this one running on Azure, with easy management and cost control. If you have HPC and AI workloads that need to leverage Azure’s specialized compute capabilities to get answers back faster, try it for free today!

    NVIDIA GPU Cloud with Azure

    As GPUs provide outstanding performance for AI and HPC, Microsoft Azure provides a variety of virtual machines enabled with NVIDIA GPUs. Starting today, Azure users and cloud developers have a new way to accelerate their AI and HPC workflows with powerful GPU-optimized software that takes full advantage of supported NVIDIA GPUs on Azure.

    Containers from the NVIDIA GPU Cloud (NGC) container registry are now supported on NVIDIA Volta- and Pascal-powered Azure NCv3, NCv2, and ND virtual machines. This brings together the power of NVIDIA GPUs in Azure cloud infrastructure with the comprehensive library of deep learning and HPC containers from NGC.

    The NGC container registry includes NVIDIA tuned, tested, and certified containers for deep learning software such as Microsoft Cognitive Toolkit, TensorFlow, PyTorch, and NVIDIA TensorRT. Through extensive integration and testing, NVIDIA creates an optimal software stack for each framework – including required operating system patches, NVIDIA deep learning libraries, and the NVIDIA CUDA Toolkit – to allow the containers to take full advantage of NVIDIA GPUs. The deep learning containers from NGC are refreshed monthly with the latest software and component updates.

    NGC also provides fully tested, GPU-accelerated applications and visualization tools for HPC, such as NAMD, GROMACS, LAMMPS, ParaView, and VMD. These containers simplify deployment and get you up and running quickly with the latest features.

    To make it easy to use NGC containers with Azure, a new image called NVIDIA GPU Cloud Image for Deep Learning and HPC is available on Azure Marketplace. This image provides a pre-configured environment for using containers from NGC on Azure. Containers from NGC on Azure NCv2, NCv3, and ND virtual machines can also be run with Azure Batch AI by following these GitHub instructions.

    To access NGC containers from this image, simply sign up for a free account and then pull the containers into your Azure instance. To learn more about accelerating HPC and AI projects with Azure and NGC, sign up for the webinar on October 2nd.

    Azure: Investing to make HPC, AI, and GPU in the cloud easy

    Microsoft is committed to making Azure the cloud of choice for HPC. Azure CycleCloud and NVIDIA GPUs ease integration and make it simpler to manage and scale. Near-term developments around hybrid cloud performance with the Avere vFXT will enhance your ability to minimize latency while leveraging on-premises NAS or Azure Blob storage alongside Azure CycleCloud and Azure Batch workloads. With this portfolio of HPC solutions in hand, we're excited to see the new innovations you create!

    Q&A: How to specialize std::sort by binding the comparison function


    This post is part of a regular series of posts where the C++ product team here at Microsoft and other guests answer questions we have received from customers. The questions can be about anything C++ related: Visual C++, the standard language and library, the C++ standards committee, isocpp.org, CppCon, etc. Today’s Q&A is by Herb Sutter.

    Question

    A reader recently asked: I am trying to specialize std::sort by binding the comparison function.
    I first tried:

    auto sort_down = bind(sort<>,_1,_2,[](int x, int y){return x > y;});

    It couldn’t infer the parameter types. So then I tried:

    auto sort_down = bind(sort<vector<int>::iterator,function<int(int)>>,
                          _1,_2,[](int x, int y){return x > y;});

    Is there a straightforward way to do this?
    Another example:

    auto f = bind(plus<>(), _1, 1);

    Here bind has no trouble deducing the template arguments, but when I use a function template for the original callable, it's not so happy. I just want to be consistent with this usage.

    Answer

    First, the last sentence is excellent: We should definitely be aiming for a general consistent answer where possible so we can spell the same thing the same way throughout our code.

    In questions about binders, the usual answer is to use a lambda function directly instead – and usually a generic lambda is the simplest and most flexible. A lambda additionally lets you more directly express how to take its parameters when it’s invoked – by value, by reference, by const, and so forth, instead of resorting to things like std::ref as we do when we use binders.

    For the second example, you can write f as a named lambda this way:

    auto f = [](const auto& x){ return x+1; };

    For the first example, you can write sort_down as a named lambda like this:

    auto sort_down = [](auto a, auto b){ return sort(a, b, [](int x, int y){return x > y;}); };

    Note the way to give a name to a lambda: assign it to an auto variable, which you can give any name you like. In this case I take a and b by value because we know they’re intended to be iterators which are supposed to be cheap to copy.

    The nice thing about lambdas is they allow exactly what you asked for: consistency. To be consistent, code should use lambdas exclusively, never bind. As of C++14, which added generic lambdas, lambdas can now do everything binders can do and more, so there is never a reason to use the binders anymore.

    Note that the old binders bind1st and bind2nd were deprecated in C++11 and removed in C++17. Granted, we have not yet deprecated or removed std::bind itself, but I wouldn’t be surprised to see that removed too. Although bind can be convenient and it’s not wrong to use it, there is no reason I know of to use it in new code that is not now covered by lambdas, and since lambdas can do things that binders cannot, we should encourage and use lambdas consistently.

    As a side point, notice that the “greater than” comparison lambda

    [](int x, int y){return x > y;}

    expects integer values only, and thanks to the glories of the C integer types it can give the wrong results due to truncation (e.g., if passed a long long) and/or sign conversion (e.g., a 32-bit unsigned 3,000,000,000 is greater than 5, but when converted to signed is less than 5). It would be better written as

    [](const auto& x, const auto& y){return x > y;}

    or in this case

    std::greater<>{}

    Thanks to Stephan Lavavej for comments on this answer.

    Your questions?

    If you have any question about C++ in general, please comment about it below. Someone in the community may answer it, or someone on our team may consider it for a future blog post. If instead your question is about support for a Microsoft product, you can provide feedback via Help > Report A Problem in the product, or via Developer Community.

    Join the Bing Maps APIs team at Microsoft Ignite 2018 in Orlando, Florida


    The Bing Maps team will be at Microsoft Ignite 2018, in Orlando, Florida, September 24th through the 28th. If you are registered for the event, stop by the Bing Maps APIs for Enterprise booth in the Modern Workplace area of the Expo, to learn more about the latest features and updates to our Bing Maps platform, as well as attend our sessions.

    Bing Maps APIs sessions details:

    Theater session ID: THR1127

    Microsoft Bing Maps APIs - Solutions Built for the Enterprise

    The Microsoft Bing Maps APIs platform provides mapping services for the enterprise, with advanced data visualization, website and mobile application solutions, fleet and logistics management and more. In this session, we’ll provide an overview of the Bing Maps APIs platform (what it is and what’s new) and how it can add value to your business solution.

    Theater session ID: THR1128

    Cost effective, productivity solutions with fleet management tools from Microsoft Bing Maps APIs
    The Bing Maps API platform includes advanced fleet and asset management solutions, such as the Distance Matrix, Truck Routing, Isochrone, and Snap-to-Road APIs that can help your business reduce costs and increase productivity. Come learn more about our fleet management solutions as well as see a short demo on how you can quickly set up and deploy a fleet tracking solution.

    If you are not able to attend Microsoft Ignite 2018, we will share news and updates on the blog after the conference and post recordings of the Bing Maps APIs sessions on http://www.microsoft.com/maps.

    For more information about the Bing Maps Platform, go to https://www.microsoft.com/maps/choose-your-bing-maps-API.aspx.

    - Bing Maps Team


    #ifdef WINDOWS – MIDL 3 with Larry Osterman


    Microsoft Interface Definition Language (MIDL) 3.0 is a simplified, modern syntax for declaring Windows Runtime types inside Interface Definition Language (IDL) files (.idl files). It is a particularly convenient way to declare C++/WinRT runtime classes.

    In this video, Larry Osterman, lead developer on the COM team in Windows, gave us a deep dive into MIDL and how it all ties into the Windows Runtime. Larry shared how MIDL started and how it got to where it is today. Watch the full video above and feel free to reach out on Twitter or in the comments below with questions or comments.

    Happy coding!

    #ifdef WINDOWS is a periodic dev show by developers for developers focused on Windows development through interviews of engineers working on the Windows platform. Learn why and how features and APIs are built, how to be more successful building Windows apps, what goes into building for Windows and ultimately how to become a better Windows developer. Subscribe to the YouTube channel for notifications about new videos as they are posted, and make sure to reach out on Twitter for comments and suggestions.

    The post #ifdef WINDOWS – MIDL 3 with Larry Osterman appeared first on Windows Developer Blog.

    Improving your productivity in the Visual Studio Editor


    Over the last few updates to Visual Studio 2017, we've been hard at work adding new features to boost your productivity while you're writing code. Many of these are the result of your direct feedback: UserVoice requests, Developer Community tickets, and conversations with developers like you.

    We are so excited to share these features with you and look forward to your feedback!

    Multi-Caret Support

    One of our top UserVoice items asked for the ability to create multiple insertion and selection points, often shortened to be called multi-caret or multi-cursor support. Visual Studio Code users told us they missed this feature when working in Visual Studio. We heard you opened single files in Visual Studio Code to leverage this feature or installed extensions such as MixEdit, but in Visual Studio 2017 Version 15.8, you won’t need to do this anymore. We’ve added native support for some of the top requested features in the multi-caret family and we’re just getting started.

    There are three main features we’d like to highlight. First, you can add multiple insertion points or carets. With Ctrl + Alt + Click, you can add additional carets to your document, which allows you to add or delete text in multiple places at once.

    GIF showing how to add carets in multiple locations

    Second, with Ctrl + Alt + . you can add additional selections that match your current selection. We think of this as an alternative to find and replace, as it allows you to add your matching selections one by one while also verifying the context of each additional selection. If you’d like to skip over a match, use (Ctrl + Shift + Alt + .) to move the last matching selection to the next instance.

    Lastly, you can also grab all matching selections in a document at once (Ctrl + Alt + Shift + ,) providing a scoped find and replace all.

    Quick Commands

    Just like papercuts, smaller missing commands hurt when you add them up! We heard your pain, so in the past few releases, we’ve tried to address some of the top features you’ve asked for.

    Duplicate line

    The reduction of even a single keystroke adds up when multiplied across our user base, and one place we saw an opportunity to optimize your workflow was in duplicating code. The classic Copy + Paste worked in many cases, but we also heard in feedback that you wanted a way to duplicate a selection without affecting your clipboard. One scenario where this often popped up was when you wanted to clone a method and rename it by pasting a name you had previously copied.

    To solve this issue, we introduced Duplicate Code (Ctrl + D) in Visual Studio 2017 version 15.6, which streamlines the process of duplicating your code while leaving your clipboard untouched. If nothing is selected, Ctrl + D duplicates the line the cursor is in and inserts it right below the line in focus. If you'd like to duplicate a specific set of code, simply select the portion you want to duplicate before invoking the duplicate code command.

    Expand/Contract Selection

    How do you quickly select a code block? In the past, you could incrementally add to your selection word by word, or perhaps you used a series of Shift plus arrow keystrokes. Maybe you even took that extra second to lift your hand off the keyboard and reach for the mouse. Whatever the way, you wanted something better. In Visual Studio 2017 version 15.5, we introduced Expand/Contract Selection, which allows you to grow your selection to the next logical code block (Shift + Alt + +) and shrink it by the same block if you happen to select too much (Shift + Alt + -).

    gif showing expand/contract selection, which allows you to grow your selection to the next logical code block and decrease it by the same block

    Moving between issues in your document

    You’ve been able to navigate to Next Error via Ctrl + Shift + F12, but we heard this experience was sometimes jarring: Next Error might jump you all around the solution as it progressed through issues in the order they appeared in the Error List. With Next/Previous Issue (Alt + PgUp/PgDn) you can navigate to the next issue (error, warning, suggestion) in the current document. This allows you to move between issues in sequential rather than severity order and gives you more context as you move through your issues.

    Go To All – Recent Files and File Member search

    You can now view and prioritize search results from recent files. When you turn on the recent files filter, Go To All shows a list of files opened during that session and prioritizes results from those recent files for your search term.

    Additionally, Go To Member is now scoped to the current file by default. You can toggle this default scope back to solution level by turning off Scope to Current Document (Ctrl + Alt + C).

    Go To Last Edited Location

    We all know the feeling of starting to write a feature and then realizing we need some more information from elsewhere in the solution. So we open another file from Solution Explorer or Go To Definition in a few places, and suddenly we're far from where we started, with no easy way back unless we remember the name of the file we were working in originally. In Visual Studio 2017 version 15.8, you can now go back to your last edited location via Edit > Go To > Go To Last Edit Location (Ctrl + Shift + Backspace).

    Expanded Navigation Context Menu

    Keyboard profiles for Visual Studio Code and ReSharper

    Learning keyboard shortcuts takes time and builds up specific muscle memory so that once you learn one set, it can be difficult to retrain yourself when the shortcuts change or create mappings that match your previous shortcuts. This problem came to light as we heard from users who frequently switch between Visual Studio and Visual Studio Code, and those who used ReSharper in the past. To help, we’ve added two new keyboard profiles, Visual Studio Code and ReSharper (Visual Studio), which we hope will increase your productivity in Visual Studio.


    C# Code Clean-up

    Last, but certainly not least, in Visual Studio 2017 version 15.8, we've configured Format Document to perform additional code cleanup on a file, like removing and sorting usings or applying code-style preferences. Code cleanup will respect settings configured in an .editorconfig file or, lacking that rule or file, those set in Tools > Options > Text Editor > C# > [Code Style & Formatting]. Rules configured as "none" in an .editorconfig will not participate in code cleanup and will have to be fixed individually via the Quick Actions and Refactorings menu.

    Options dialog showing format document options for C# Code Clean-up

    Update and Give Feedback

    With Visual Studio 2017 version 15.8, you'll have access to all the features above and more, so be sure to update and take advantage of everything Visual Studio has to offer.

    As you test out these new features, use the Send Feedback button inside Visual Studio to provide direct feedback to the product team. This can be anything from an issue you’re encountering or a request for a new productivity feature. We want to hear all of it so we can build the best Visual Studio for you!

    Allison Buchholtz-Au, Program Manager, Visual Studio Platform

    Allison is a Program Manager on the Visual Studio Platform team, focusing on streamlining source control workflows and supporting both our first and third party source control providers.

    Two seconds to take a bite out of mobile bank fraud with Artificial Intelligence


    The future of mobile banking is clear. People love their mobile devices, and banks are making big investments to enhance their apps with digital features and capabilities. As mobile banking grows, so does the one aspect of it that can be wrenching for customers and banks alike: mobile device fraud.


    Problem: To implement near real-time fraud detection

    Most mobile fraud occurs through a compromise called a SIM swap attack in which a mobile number is hacked. The phone number is cloned and the criminal receives all the text messages and calls sent to the victim’s mobile device. Then login credentials are obtained through social engineering, phishing, vishing, or an infected downloaded app. With this information, the criminal can impersonate a bank customer, register for mobile access, and immediately start to request fund transfers and withdrawals.

    Artificial Intelligence (AI) models have the potential to dramatically improve fraud detection rates and detection times. One approach is described in the Mobile bank fraud solution guide. It's a behavior-based AI approach and can be much more responsive to changing fraud patterns than rules-based or other approaches.

    The solution: A pipeline that detects fraud in less than two seconds

    Latency and response times are critical in a fraud detection solution. The time it takes a bank to react to a fraudulent transaction translates directly to how much financial loss can be prevented. The sooner the detection takes place, the less the financial loss.

    To be effective, detection needs to occur in less than two seconds. This means less than two seconds to process an incoming mobile activity, build a behavioral profile, evaluate the transaction for fraud, and determine if an action needs to be taken. The approach described in this solution is based on:

    • Feature engineering to create customer and account profiles.
    • Azure Machine Learning to create a fraud classification model.
    • Azure PaaS services for real-time event processing and end-to-end workflow.

    The architecture: Azure Functions, Azure SQL, and Azure Machine Learning

    Most steps in the event processing pipeline start with a call to Azure Functions because functions are serverless, easily scaled out, and can be scheduled.

    The power of data in this solution comes from mobile messages that are standardized, joined, and aggregated with historical data to create behavior profiles. This is done using the in-memory technologies in Azure SQL.  

    Training of a fraud classifier is done with Azure Machine Learning Studio (AML Studio) and custom R code to create account-level metrics.
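
    To make the flow concrete, here is a rough Python sketch of the scoring path for a single incoming mobile event under the two-second budget. The feature names, the profile_store lookup, and the scikit-learn-style model are illustrative stand-ins for the guide's Azure Functions, in-memory Azure SQL profiles, and AML Studio classifier.

    import time

    def score_event(event, profile_store, model, threshold=0.8):
        start = time.monotonic()

        # 1. Join the standardized message with the account's behavior profile.
        profile = profile_store.get(event["account_id"],
                                    {"avg_amount": 0.0, "transfers_today": 0})

        # 2. Feature engineering: compare this event to usual behavior.
        features = [
            event["amount"],
            event["amount"] - profile["avg_amount"],  # deviation from the norm
            profile["transfers_today"],
            1 if event["new_device"] else 0,          # SIM-swap signal: unfamiliar device
        ]

        # 3. Classify and decide, tracking latency against the 2-second budget.
        fraud_probability = model.predict_proba([features])[0][1]
        action = "block_and_verify" if fraud_probability >= threshold else "allow"
        return action, fraud_probability, time.monotonic() - start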

    Recommended next steps

    Read the Mobile bank fraud solution guide to learn details on the architecture of the solution. The guide explains the logic and concepts and gets you to the next stage in implementing a mobile bank fraud detection solution. We hope you find this helpful and we welcome your feedback.

    Monitor all Azure Backup protected workloads using Log Analytics


    We are excited to share that Azure Backup now lets you monitor all the workloads it protects by leveraging the power of Log Analytics (LA). This allows enterprises to monitor key backup parameters across Recovery Services vaults and subscriptions, irrespective of which Azure Backup solution you are using. In addition, you can configure custom alerts and actions for custom monitoring requirements for all Azure Backup workloads with this LA-based solution.

    This solution now covers all workloads protected by Azure Backup including Azure VMs, SQL in Azure VM backups, System Center Data Protection Manager connected to Azure (DPM-A), Microsoft Azure Backup Server (MABS), and file-folder backup from Azure backup agent.

    Here’s how you get all the benefits.

    Configure diagnostic settings

    If you have already configured Log Analytics workspace to monitor Azure Backup, skip to the Deploy solution template section.

    You can open the diagnostic settings window from the Azure Recovery Services vault or from Azure Monitor. In the Diagnostic settings window, select “Send data to log analytics,” choose the relevant LA workspace, select the “AzureBackupReport” log, and click “Save.”

    Be sure to choose the same workspace for all the vaults so that you get a centralized view in the workspace. After completing the configuration, allow 24 hours for initial data push to complete.

    Deploy solution template

    Once the data is in the workspace, we need a set of graphs to visualize the monitoring data. Deploy the Azure quick-start template to the workspace configured above to get a default set of graphs, explained below. Make sure you give the same resource group, workspace name and workspace location to properly identify the workspace and then install this template on it.

    If you are already using this template as outlined in a previous blog and have edited it, just add the relevant Kusto queries from the deployment JSON on GitHub. If you didn't edit the template, re-deploy it onto the same workspace to view the updated version.

    Once deployed, you will view an overview tile for Azure Backup in the workspace dashboard. Clicking on the overview tile will take you to the solution dashboard and provide you all the information shown below.


    Monitor Azure Backup data

    Monitor backups and restores

    Monitor regular daily backups for all Azure Backup protected workloads. With this update, you can even monitor log backups for your SQL databases, whether they are running within Azure IaaS VMs or locally on-premises and protected by DPM or MABS.


    Monitor all datasources

    Monitor a spike or reduction in the number of backed-up datasources using the active datasources graph. The active datasources attribute is split across all Azure Backup types; the legend beside the pie graph shows the top three types. The list beneath the pie chart displays the top 10 active datasources, for example, the datasources on which the greatest number of jobs were run in the specified time frame.


    Monitor Azure Backup alerts

    Azure Backup generates alerts automatically when a backup and/or a restore job fails. You are now able to view all such alerts generated in a single place.


    However, be sure to select the relevant time range to monitor, such as the proper start and end dates.


    Generate custom alerts

    Clicking any single row in the graphs above leads to a more detailed view in the Log Search window, from which you can generate a custom alert for that scenario.


    To learn more, visit our documentation on how to configure alerts.

    Summary

    You can configure LA workspaces to receive key backup data across multiple Recovery Services vaults and subscriptions and deploy customizable solutions on workspaces to view and configure actions for business-critical events. This solution is key for any enterprise to keep a watchful eye over their backups and ensure that all actions are taken for successful backups and restores.


    Protecting privacy in Microsoft Azure: GDPR, Azure Policy Updates


    Today more than ever, privacy is of critical importance in the technology industry. Microsoft has an enduring commitment to protect data privacy, not as an afterthought, but built into Microsoft Azure from the ground up. Microsoft designed Azure with industry-leading security controls, compliance tools, and privacy policies to safeguard your data in the cloud, including the categories of personal data identified by the GDPR. These also help you comply with other important global and regional privacy standards such as ISO/IEC 27018, EU-U.S. Privacy Shield, EU Model Clauses, HIPAA/HITECH, and HITRUST.

    When you build on Azure's secure foundation, you accelerate your move to the cloud by achieving compliance more readily, allowing you to enable privacy-sensitive cloud scenarios, such as financial and health services, with confidence.

    In this episode we describe key tools in Azure to help you achieve your privacy goals that include:

    • The Azure Data Subject Requests for the GDPR portal, which provides step-by-step guidance on how to comply with GDPR requirements to find and act on personal data that resides in Azure. This capability to execute data subject requests is available through the Azure portal on our public and sovereign clouds, as well as through pre-existing APIs and UIs across the breadth of our online services.
    • Azure Policy, which is deeply integrated into Azure Resource Manager, helps your organization enforce policy across resources. With Azure Policy you can define policies at an organizational level to manage resources and prevent developers from accidentally allocating resources in violation of those policies. You can use Azure Policy in a wide range of compliance scenarios, such as ensuring that your data is encrypted or remains in a specific region to comply with the GDPR.
    • Compliance Manager, which is a free workflow-based risk assessment tool, can help you manage regulatory compliance within the shared responsibility model of the cloud. It delivers a dashboard view of standards, regulations, and assessments that contain Microsoft control implementation details and test results as well as customer-managed controls. This enables you to track, assign, and verify your organization's regulatory compliance activities.
    • Azure Information Protection, which offers file-share scanning for on-premises servers to discover sensitive data, can enable you to label, classify, and protect it thereby improving organizational data governance.
    • Azure Security Center, which provides unified security management and advanced threat protection. Integration with Azure Policy enables you to apply security policies across hybrid cloud workloads to enable encryption, limit organizational exposure to threats, and respond to attacks.
    • Azure Security and Compliance GDPR Blueprint, which can help you build and launch cloud applications that meet GDPR requirements. You can leverage our common reference architectures, deployment guidance, GDPR article implementation mappings, customer responsibility matrices, and threat models to simplify adoption of Azure in support of your GDPR compliance initiatives.

    Learn more on the Service Trust Portal about how Microsoft can help you meet GDPR requirements. Read more about our steadfast commitment to privacy at Microsoft.

    Team Foundation Server (TFS) Reporting – Which reports do you use?

    If you are using Team Foundation Server (TFS) and SSRS Reporting today, we want to hear from you! We want to know which of the TFS Reports we offer today are most valuable to you. Your feedback will help us guide the VSTS Analytics Service roadmap. Analytics is the future of reporting for both Visual... Read More

    Tips for analyzing Excel data in R


    If you're familiar with analyzing data in Excel and want to learn how to work with the same data in R, Alyssa Columbus has put together a very useful guide: How To Use R With Excel. In addition to walking you through installing and setting up R and the RStudio IDE, it provides a wealth of useful tips for working with Excel data in R, including:

    • To import Excel data into R, use the readxl package
    • To export Excel data from R, use the openxlsx package
    • How to remove symbols like "$" and "%" from currency and percentage columns in Excel, and convert them to numeric variables suitable for analysis in R
    • How to do computations on variables in R, and a list of common Excel functions (like RAND and VLOOKUP) with their R equivalents
    • How to emulate common Excel chart types (like histograms and line plots) using R plotting functions

    Conversely, you can also use R within Excel. The guide suggests BERT (Basic Excel R Toolkit), which allows you to apply R functions to Excel data via the Excel formula interface:


    With BERT, you can also open an R console within Excel, and use R commands to manipulate data within the spreadsheet. BERT is open-source and available here, and you can see the detailed guide to using Excel data in R at the link below.

    RPubs: How To Use R With Excel (via author Alyssa Columbus)


    Use the official Boost.Hana with MSVC 2017 Update 8 compiler


    We would like to share a progress update to our previous announcement regarding enabling Boost.Hana with the MSVC compiler. As a quick background: Louis Dionne, the Boost.Hana author, and we jointly agreed to provide a version of Boost.Hana in vcpkg to promote usage of the library among more C++ users in the Visual C++ community. We identified a set of blocking bugs and workarounds, called them out in our previous blog, and stated that as we fixed the remaining bugs, we would gradually update the version of Boost.Hana in vcpkg, ultimately removing our fork and replacing it with the master repo. We can conduct this development publicly in vcpkg without hindering new users who take a dependency on the library.

    Today, we’re happy to announce that the vcpkg version of Boost.Hana now just points to the official master repo, instead of our fork!!!

    With the VS2017 Update 8 MSVC compiler, the official Boost.Hana repo at this pull request or later will build clean. We recommend you take the dependency via vcpkg.

    For full transparency, below is where we stand with respect to active bugs and used source workarounds as of August 2018:

    Source workarounds in place

    There are 3 remaining workarounds in Boost.Hana official repo for active bugs in VS2017 Update 8 compiler:

    // Multiple copy/move ctors
    #define BOOST_HANA_WORKAROUND_MSVC_MULTIPLECTOR_106654
    
    // Forward declaration of class template member function returning decltype(auto)
    #define BOOST_HANA_WORKAROUND_MSVC_DECLTYPEAUTO_RETURNTYPE_662735
    
    // Parser incorrectly parses a comparison operation as a template id
    // This issue only impacts /permissive- or /std:c++17
    #define BOOST_HANA_WORKAROUND_MSVC_RDPARSER_TEMPLATEID_616568
    

    We removed 23 source workarounds that are no longer necessary as of the VS2017 Update 8 release. See the full details for more information.

    // Fixed by commit f4e60b2ecc169b0a5ec51d713125801adae24bc2, 20180323
    // Note, the workaround requires /Zc:externConstexpr
    #define BOOST_HANA_WORKAROUND_MSVC_NONTYPE_TEMPLATE_PARAMETER_INTERNAL
    
    // Fixed by commit c9999d916f1d73bc852de709607b2ca60e76a4c9, 20180513
    #define BOOST_HANA_WORKAROUND_MSVC_CONSTEXPR_NULLPTR
    #define BOOST_HANA_WORKAROUND_MSVC_CONSTEXPR_ARRAY_399280
    
    // error C2131: expression did not evaluate to a constant
    // test/_include/auto/for_each.hpp
    #define BOOST_HANA_WORKAROUND_MSVC_FOR_EACH_DISABLETEST
    
    // testfunctionalplaceholder.cpp
    #define BOOST_HANA_WORKAROUND_MSVC_CONSTEXPR_ADDRESS_DISABLETEST
    #define BOOST_HANA_WORKAROUND_MSVC_CONSTEXPR_ARRAY_DISABLETEST
    
    // Fixed by commit 5ef87ec5d20b45552784a40fe455c04c257c7b08, 20180516
    // Generic lambda preparsing and static capture
    #define BOOST_HANA_WORKAROUND_MSVC_GENERIC_LAMBDA_NAME_HIDING_616190
    
    // Fixed by commit 9c4869e61b5ad301f1fe265193241d2c74729a1c, 20180518
    // ICE when try to give warning on the format string for printf
    // examplemiscprintf.cpp
    #define BOOST_HANA_WORKAROUND_MSVC_PRINTF_WARNING_506518
    
    // Fixed by commit 095130d02c8805517bbaf93d92415041eecbca00, 20180521
    // decltype behavior difference when comparing character array and std::string
    // testorderable.cpp
    #define BOOST_HANA_WORKAROUND_MSVC_DECLTYPE_ARRAY_616099
    
    // Fixed by commit a488f9dccbfb4ceade4104c0d8d00e25d6ac7d88, 20180521
    // Member with array type
    // testissuesgithub_365.cpp
    #define BOOST_HANA_WORKAROUND_MSVC_GITHUB365_DISABLETEST
    
    // Fixed by commit 7a572ef6535746f1cee5adaa2a41edafca6cf1bc, 20180522
    // Member with the same name as the enclosing class
    // testissuesgithub_113.cpp
    #define BOOST_HANA_WORKAROUND_MSVC_PARSEQNAME_616018_DISABLETEST
    
    // Fixed by commit 3c9a06971bf4c7811db1a21017ec509a56d60e59, 20180524
    #define BOOST_HANA_WORKAROUND_MSVC_VARIABLE_TEMPLATE_EXPLICIT_SPECIALIZATION_616151
    
    // error C3520: 'Args': parameter pack must be expanded in this context
    // exampletutorialintegral-branching.cpp
    #define BOOST_HANA_WORKAROUND_MSVC_LAMBDA_CAPTURE_PARAMETERPACK_616098_DISABLETEST
    
    // Fixed by commit 5b1338ce09f7827e5b9248bcba2f519043044fef, 20180529
    // Narrowing warning on constant float
    // examplecoreconvertembedding.cpp
    #define BOOST_HANA_WORKAROUND_MSVC_NARROWING_CONVERSION_FLOAT_616032
    
    // Fixed by commit be8778ab26957ae7c6a36376a9ae2d049d64a095, 20180611
    // Pack expansion of decltype
    // examplehash.cpp
    #define BOOST_HANA_WORKAROUND_MSVC_PACKEXPANSION_DECLTYPE_616094
    
    // Fixed by commit 5fd2bf807a0320167c72d9960b32d823a634c04d, 20180613
    // Parser error when using '{}' in template arguments
    #define BOOST_HANA_WORKAROUND_MSVC_PARSE_BRACE_616118
    
    // Fixed by commit ce4f90349574b4acc955cf1eb04d7dc6a03a568e, 20180614
    // Generic lambda and sizeof...
    // testtypeis_valid.cpp
    #define BOOST_HANA_WORKAROUND_MSVC_GENERIC_LAMBDA_RETURN_TYPE_269943
    
    // Return type of generic lambda is emitted as a type token directly after pre-parsing
    #define BOOST_HANA_WORKAROUND_MSVC_GENERIC_LAMBDA_RETURN_TYPE_610227
    
    // Fixed by commit 120bb866980c8a1abcdd41653fa084d6c8bcd327, 20180615
    // Nested generic lambda
    // testindex_if.cpp
    #define BOOST_HANA_WORKAROUND_MSVC_NESTED_GENERIC_LAMBDA_615453
    
    // Fixed by commit 884bd374a459330721cf1d2cc96d231de3bc68f9, 20180615
    // Explicit instantiation involving decltype
    // exampletutorialintrospection.cpp
    #define BOOST_HANA_WORKAROUND_MSVC_DECLTYPE_EXPLICIT_SPECIALIZATION_508556
    
    // Fixed by commit ff9ef6d9fe43c54f7f4680a2701ad73de18f9afb, 20180620
    // constexpr function isn't evaluated correctly in SFINAE context
    #define BOOST_HANA_WORKAROUND_MSVC_SFINAE_CONSTEXPR_616157
    
    // Fixed by commit 19c35b8c8a9bd7dda4bb44cac1d9d446ed1b20ac, 20180625
    // Pack expansion of decltype
    // testdetailvariadicat.cpp
    // testdetailvariadicdrop_into.cpp
    #define BOOST_HANA_WORKAROUND_MSVC_PACKEXPANSION_DECLTYPE_616024
    

    Bugs remaining in the compiler

    • There are 3 active bugs with the VS2017 Update 8 release, down from 25 active bugs in the Update 7 release.
    • We plan to fix these remaining bugs by the VS2017 Update 9 release later this year.

    What’s next…

    • Throughout the remaining updates of Visual Studio 2017, we will continue to fix the remaining MSVC bugs that block the upstream version of the Boost.Hana library.
    • We will continue to provide status updates on our progress. The next update will come when we release VS2017 Update 9.
    • We will ensure that users who take a dependency on this library in vcpkg are not affected by our work.
    • Where are we with enabling Range-v3 with MSVC?
      • Similarly, we are tracking all Range-v3 blocking bugs in the compiler and fixing them. Our plan is to fix them all by the VS2017 Update 9 release.

    In closing

    We’d love for you to download Visual Studio 2017 version 15.8 and try out all the new C++ features and improvements. As always, we welcome your feedback. We can be reached via the comments below or via email (visualcpp@microsoft.com). If you encounter other problems with MSVC in Visual Studio 2017, please let us know through Help > Report A Problem in the product, or via Developer Community. Let us know your suggestions through UserVoice. You can also find us on Twitter (@VisualC) and Facebook (msftvisualcpp).

    Thank you,

    Xiang Fan, Ulzii Luvsanbat

    .NET Framework August 2018 Preview of Quality Rollup


    Today, we are releasing the August 2018 Preview of Quality Rollup.

    Quality and Reliability

    This release contains the following quality and reliability improvements.

    ASP.NET

    • Resolves an issue where an ASP.NET web application running continuously under high load on a high-end web server (40+ CPU cores) may suffer high thread contention, which can cause high CPU usage. [624745]

    CLR

    • Fixes an issue that results in a System.InvalidProgramException for some very large XSLT transforms. This may also fix similar issues in some other very large methods. [604943]
    • Addresses an issue where the CultureAwareComparer type was not able to correctly serialize and deserialize across different versions of the .NET Framework, as described in the advisory on serializing/deserializing a CultureAwareComparer with .NET Framework 4.6+. [637591]

    SQL

    • Resolves an issue where a SqlClient login could use an infinite timeout because a small millisecond timeout was truncated to zero when converted to seconds. [631196]

    WCF

    • Fixes a race condition in AsyncResult that could close a WaitHandle before Set() was called; when this happened, the process crashed with an ObjectDisposedException. [590542]
    • Enables customers using .NET Framework 2.0, 3.0, 3.5, or 3.5.1 to run their programs under TLS 1.1 or TLS 1.2. [639940]

    WPF

    • In multi-threaded WPF applications that process large packages simultaneously, there was potential for a deadlock when one of these files was closing and another started to consume large amounts of memory. [602405]
    • Under certain conditions, WPF applications (like SCCM) using WindowChromeWorker experience high CPU usage or hangs. [621651]

    Note: Additional information on these improvements is not available. The VSTS bug number provided with each improvement is a unique ID that you can give Microsoft Customer Support, include in StackOverflow comments, or use in web searches.

    Getting the Update

    The Preview of Quality Rollup is available via Windows Update, Windows Server Update Services, Microsoft Update Catalog, and Docker.

    Microsoft Update Catalog

    You can get the update via the Microsoft Update Catalog. For Windows 10, .NET Framework updates are part of the Windows 10 Monthly Rollup.

    The following table is for Windows 10 and Windows Server 2016+.

    Windows 10 1803 (April 2018 Update): Catalog KB 4346783
    • .NET Framework 3.5: 4346783
    • .NET Framework 4.7.2: 4346783
    
    Windows 10 1709 (Fall Creators Update): Catalog KB 4343893
    • .NET Framework 3.5: 4343893
    • .NET Framework 4.7.1, 4.7.2: 4343893
    
    Windows 10 1703 (Creators Update): Catalog KB 4343889
    • .NET Framework 3.5: 4343889
    • .NET Framework 4.7, 4.7.1, 4.7.2: 4343889
    
    Windows 10 1607 (Anniversary Update): Catalog KB 4343884
    • .NET Framework 3.5: 4343884
    • .NET Framework 4.6.2, 4.7, 4.7.1, 4.7.2: 4343884

    The following table is for earlier Windows and Windows Server versions.

    Windows 8.1, Windows RT 8.1, and Windows Server 2012 R2: Catalog KB 4346082
    • .NET Framework 3.5: 4342310
    • .NET Framework 4.5.2: 4342317
    • .NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2: 4342315
    
    Windows Server 2012: Catalog KB 4346081
    • .NET Framework 3.5: 4342307
    • .NET Framework 4.5.2: 4342318
    • .NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2: 4342314
    
    Windows 7 and Windows Server 2008 R2: Catalog KB 4346080
    • .NET Framework 3.5.1: 4342309
    • .NET Framework 4.5.2: 4342319
    • .NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2: 4342316
    
    Windows Server 2008: Catalog KB 4346083
    • .NET Framework 2.0, 3.0: 4342308
    • .NET Framework 4.5.2: 4342319
    • .NET Framework 4.6: 4342316

    Previous Monthly Rollups

    The last few .NET Framework Monthly updates are listed below for your convenience:

    Interesting bugs – MSB3246: Resolved file has a bad image, no metadata, or is otherwise inaccessible. Image is too small.


    I got a very strange warning recently when building a .NET Core app with "dotnet build."

    MSB3246: Resolved file has a bad image, no metadata, or is otherwise inaccessible. 
    
    Image is too small.

    (Photo: "Interesting Bug," used under CC: https://flic.kr/p/4SpmL6Eek)
    
    It's clear that something is "too small," but what? A file, I guess? Maybe it's the wrong size?

    The error code is MSB3246, which is nice and googleable/searchable, but it was confusing because I couldn't figure out which file, specifically. It just felt vague.

    BUT!

    I had recently been overclocking my machine (overly aggressively, gulp, about 40%) and had a very nasty hard reboot. As a result, I had a few dozen files get orphaned - specifically, the files were zero'ed out! Zero is small, right?

    Turns out you can pass parameters over to MSBuild from "dotnet build" and see what MSBuild is doing internally. For example, you could pass:

    /fileLoggerParameters:verbosity=diagnostic

    but that's long. So how about:

    dotnet build /flp:v=diag

    Cool. What deep logging do I see now?

    Primary reference "deliberately.zero.bytes.dll". (TaskId:41)
    
    13:36:52.397 1:7>C:\Program Files\dotnet\sdk\2.1.400\Microsoft.Common.CurrentVersion.targets(2110,5): warning MSB3246: Resolved file has a bad image, no metadata, or is otherwise inaccessible. Image is too small. [S:\work\zero-byte-ref\zero-byte-ref.csproj]
    Resolved file path is "S:\work\zero-byte-ref\deliberately.zero.bytes.dll". (TaskId:41)
    Reference found at search path location "{RawFileName}". (TaskId:41)

    Now with "verbose" turned on I can see that one of the references is zero'ed out/corrupted/bad. I reinstalled .NET Core in my case and double-checked all the DLLs/assemblies that I was bringing in - I also ran chkdsk /f - and I was back in business!
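
    One aside: with /flp, MSBuild's file logger writes to msbuild.log in the current directory by default. The switch also takes a logfile parameter if you want the log somewhere else (the file name below is arbitrary; on Linux/macOS shells, quote the whole switch, since the semicolon is shell syntax there):

    dotnet build /flp:verbosity=diagnostic;logfile=build-diag.log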

    I hope this helps someone who might stumble on error MSB3246 and wonder what's up.

    Even better, thanks to Rainer Sigwald, who filed a bug against MSBuild to make the error message clearer. In the future I'll be able to debug this without changing verbosity!




    © 2018 Scott Hanselman. All rights reserved.
         

    Library Manager Release in 15.8


    Microsoft Library Manager (LibMan) is now available in the general release of Visual Studio 2017 as of v15.8. LibMan first previewed earlier this year and, after a much-anticipated wait, is now available in the stable release of Visual Studio 2017, bundled as a default component of the ASP.NET and web development workload.

    In the announcement about the preview, we showed off the LibMan manifest (libman.json), providers for the file system and CDNJS, and the menu options for Restore, Clean, and Enable Restore-on-Build. As part of the release in v15.8, we’ve also added:
    • a new dialog for adding library files
    • a new library provider (UnPkg)
    • the LibMan CLI (a cross-platform dotnet global tool)

    What is LibMan?

    LibMan is a tool that helps you find common client-side library files and add them to your web project. If you need to pull JavaScript or CSS files into your project from libraries like jQuery or bootstrap, you can use LibMan to search various global providers to find and download the files you need.

    Library Manager in Visual Studio

    To learn more about LibMan, refer to the official Microsoft Docs: Client-side library acquisition in ASP.NET Core with LibMan.
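
    To make the manifest concrete, a libman.json along these lines (the version number and paths are illustrative) records which files LibMan should fetch and where to put them:

    {
      "version": "1.0",
      "defaultProvider": "cdnjs",
      "libraries": [
        {
          "library": "jquery@3.3.1",
          "destination": "wwwroot/lib/jquery"
        }
      ]
    }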

    What’s new?



    New dialog for adding library files

    We’ve added tooling inside Visual Studio to add library files to a web project. Inside a web project, you can right-click any folder (or the project root) and select Add > Client-Side Library…
    This will launch the Add Client-Side Library dialog, which provides a convenient interface for browsing the libraries and files available from the various providers, as well as setting the target location for the files in your project.

    LibMan Add Files Dialog

    New Provider: UnPkg

    Along with CDNJS and FileSystem, we’ve built an UnPkg provider. Based on the UnPkg.com website, which sits on top of the npm repo, the UnPkg provider opens access to many more libraries than just those referenced by the CDNJS catalogue.

    LibMan CLI available on NuGet

    Timed with the release of Visual Studio 2017 v15.8, the LibMan command-line interface (CLI) has been developed as a global tool for the dotnet CLI and is now available on NuGet: look for Microsoft.Web.LibraryManager.Cli.

    You can install the LibMan CLI with the following command:

    > dotnet tool install -g Microsoft.Web.LibraryManager.Cli
    

    The CLI is cross-platform, so you’ll be able to use it anywhere that .NET Core is supported (Windows, Mac, Linux). You can perform a variety of LibMan operations including install, update, and restore, plus local cache management.
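
    A sketch of a typical session (the library name and destination are illustrative; run libman --help for the authoritative switch list):

    > libman init --default-provider cdnjs
    > libman install jquery --destination wwwroot/lib/jquery
    > libman restore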

    LibMan CLI example

    To learn more about the LibMan CLI, see the blog post: LibMan CLI Release or refer to the official Microsoft Docs: Use the LibMan command-line interface (CLI) with ASP.NET Core

    Related Links

    Happy coding!

    Justin Clareburt, Senior Program Manager, Visual Studio

    Justin Clareburt (justcla) is the Web Tools PM on the Visual Studio team. He has over 20 years of software engineering experience and brings to the team his expert knowledge of IDEs and a passion for creating the ultimate development experience.

    Follow Justin on Twitter @justcla78

    New to Microsoft 365 in August—tools to achieve more in the modern workplace


    This month, we introduced new features and updates in Microsoft 365 that help teams streamline management of tasks, make it easier for IT admins to manage Windows 10 devices, and empower small to medium-sized businesses to grow.

    Keep everyone on the same page with @Mentions in Office apps—Today, we’re introducing @Mentions in Word, PowerPoint, and Excel. This capability makes it easier to work together on shared documents, presentations, and worksheets by giving you the ability to get someone’s attention directly within the comments. If you’re @Mentioned in a document, you’ll receive an email notification, so you know exactly where your input is needed. @Mentions will start rolling out in September to Word Online and PowerPoint Online and to Word and PowerPoint for Windows and Mac for Insiders and will be coming to iOS, Android, and Excel in the next few months. Learn more in this support article.


    Tag coworkers with @Mentions directly in Word.

    Enable remote actions on Windows 10 devices—FastTrack for Microsoft 365 now offers deployment support for co-management on your Windows 10 devices to help you get the most out of your Microsoft 365 subscriptions. Co-management enables a Windows 10 device to be managed by Configuration Manager and Microsoft Intune at the same time, giving IT admins the ability to enable remote actions on devices—including factory reset and selective wipe for lost or stolen devices. Get started by going to our FastTrack website.

    New capabilities to help small and medium businesses grow

    We are introducing several new capabilities in Microsoft 365 to help small and medium-sized businesses improve teamwork and build their business.

    Discover and securely share videos with Microsoft Stream—This quarter, we’re making Microsoft Stream available to small and medium-sized businesses by bringing it to Microsoft 365 Business, Office 365 Business Premium, and Office 365 Business Essentials. Stream is an intelligent video service, which makes it easy to discover and securely share videos from across the organization, with features like auto-generated and searchable transcripts, and AI-powered speaker detection.

    Discover and securely share videos across the organization–all in one place–with searchable speech-to-text and AI-powered speaker detection.

    Get paid faster and manage mileage expenses more easily with new integrations—Now you can collect pre-payment from customers when they book an appointment with you via Microsoft Bookings—a great way to reduce no-shows. You can also now provide customers a direct link to Microsoft Pay from within invoices generated by Microsoft Invoicing, providing a convenient and secure online payment option. In addition, mileage tracking app MileIQ now integrates with Xero and AutoReimbursement.com to help you log mileage and manage vehicle reimbursements quickly and easily. These capabilities will roll out to small and medium-sized business customers in the U.S. in the coming weeks.

    Reduce no-shows by collecting pre-payments with Microsoft Bookings.

    Hire the best talent with LinkedIn Jobs—Eligible small and medium-sized business customers can now receive $50 off their first LinkedIn Jobs post. LinkedIn Jobs reaches more than 500 million LinkedIn members—most of whom aren’t visiting job boards but are open to new opportunities—enabling you to reach relevant candidates not accessible elsewhere. The platform matches your role with the right candidates and provides recommended matches that get smarter over time. This offer applies to customers with a subscription to Microsoft 365 Business and Office 365 Business Premium, Business, and Business Essentials who use the LinkedIn Jobs pay-per-click pricing model. In the coming weeks, information about this offer, including terms, will be emailed directly to eligible customers.

    Other updates

    • Access live events in Microsoft 365 to connect leaders and employees across your organization and engage with communities, content, and communications.
    • Ink and text annotations are now available for the Office Lens mobile app.
    • Starting October 2, 2018, get even more out of Office 365 Home and Office 365 Personal by installing Office on an unlimited number of devices. Find out more on our blog.
    • New capabilities in Microsoft 365 combine the power of artificial intelligence and machine learning with content stored in OneDrive and SharePoint to help you be more productive, make more informed decisions, and keep files more secure.

