
Mitigating speculative execution side-channel attacks in Microsoft Edge and Internet Explorer


Today, Google Project Zero published details of a class of vulnerabilities which can be exploited by speculative execution side-channel attacks. These techniques can be used via JavaScript code running in the browser, which may allow attackers to gain access to memory in the attacker’s process.

Microsoft has issued security updates (KB4056890) with mitigations for this class of attacks. As part of these updates, we are making changes to the behavior of supported versions of Microsoft Edge and Internet Explorer 11 to mitigate the ability to successfully read memory through this new class of side-channel attacks.

Initially, we are removing support for SharedArrayBuffer from Microsoft Edge (originally introduced in the Windows 10 Fall Creators Update), and reducing the resolution of performance.now() in Microsoft Edge and Internet Explorer from 5 microseconds to 20 microseconds, with variable jitter of up to an additional 20 microseconds. These two changes substantially increase the difficulty of successfully inferring the content of the CPU cache from a browser process.

We will continue to evaluate the impact of the CPU vulnerabilities published today, and introduce additional mitigations accordingly in future servicing releases.  We will re-evaluate SharedArrayBuffer for a future release once we are confident it cannot be used as part of a successful attack.

— John Hazen, Principal PM Lead, Microsoft Edge



Visual Studio 2017 Throughput Improvements and Advice


As C++ programs get larger and larger and the optimizer becomes more complex, the compiler's build time, or throughput, increasingly comes into focus. It's something that needs to be continually addressed as new patterns emerge and take hold (such as "unity" builds in gaming). It's something we're focusing on here on the Visual C++ team; it became a major point of emphasis during the most recent 15.5 release and will continue to be going forward. I want to take a few minutes to update everyone on some of the specific changes we've made to help with your compile times, and provide a few tips on how you can change your project, or use technologies baked into the tools, to help with your build times.

Please note that not all of these changes are aimed at providing small across-the-board improvements. Typically we're targeting long-pole corner cases, trying to bring compile times there down closer to the expected mean for a project of that size. We've recently started focusing on AAA game titles as a benchmark. There is more work to be done.

There are three pieces of the toolset which need to be improved individually. First there is the “front end” of the compiler, implemented in “c1xx.dll”. It’s the set of code which takes a .cpp file and produces a language independent intermediate language, or IL, which is then fed into the back end of the compiler. The compiler back end is implemented in “c2.dll”. It reads the IL from the front end and produces an obj file from it, which contains the actual machine code. Finally, the linker (link.exe) is the tool which takes the various obj files from the back end as well as any lib files you give it, and mashes them together to produce a final binary.

Compiler Front End Throughput

In many projects, the front end of the compiler is the bottleneck for build throughput. Luckily it parallelizes well, either by using the /MP switch (which will spawn multiple cl.exe processes to handle multiple input files), externally via MSBuild or other build systems, or perhaps even distributed across machines with a tool like IncrediBuild. Effective distribution and parallelization of building your project should be the first step you take to improve your throughput.
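As a quick illustration, here is roughly what parallelizing the front end with /MP looks like from a plain command line (a minimal sketch; the file names are hypothetical, and build systems such as MSBuild normally pass this switch for you):

cl /MP4 /c file1.cpp file2.cpp file3.cpp file4.cpp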

The second step you should take is to make sure you are making effective use of PCH files. A PCH file is essentially a memory dump of cl.exe with a fully parsed .h file, saving the trouble of redoing that work each time it is included. You'd be surprised how much this matters; header files (such as windows.h, or some DirectX headers) can be massive once they are fully preprocessed, and often make up the vast majority of a post-processed source file. PCH files can make a world of difference here. The key is to only include files which are infrequently changed, making sure PCH files are a net win for you.
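For reference, a typical precompiled-header setup looks something like the following (a sketch only; the header name and file names are hypothetical, and Visual Studio project files normally set the /Yc and /Yu flags for you):

// pch.h - include only headers that rarely change
#include <windows.h>
#include <vector>
#include <string>

// pch.cpp - compiled once to build the PCH
#include "pch.h"

cl /c /Yc"pch.h" pch.cpp        (creates the PCH)
cl /c /Yu"pch.h" widget.cpp     (every other .cpp uses it and starts with #include "pch.h")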

The final piece of advice here is actually to limit what you #include. Outside of PCH files, #including a file is actually a rather expensive process involving searching every directory in your include path. It’s a lot of File I/O, and it’s a transitive process that needs to be repeated each time. That’s why PCH files help so much. Inside Microsoft, people have reported a lot of success by doing a “include what you use” pass over their projects. Using the /showIncludes option here can give you an idea as to how expensive this is, and help guide you to only include what you use.

Finally, I want you to be aware of the /Bt option to cl.exe. This will output the time spent in the front end (as well as back end and linker) for each source file you have. That will help you identify the bottlenecks and know what source files you want to spend time optimizing.
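In practice, both switches can be combined on a single troublesome source file to see where the time goes (a sketch; the file name is a placeholder):

cl /c /showIncludes /Bt bigfile.cpp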

Here are a few things we changed in the front end to help with performance.

Refreshed PGO counts

PGO, or “profile guided optimization”, is a back end compiler technology used extensively across Microsoft. The basic idea is you generate a special instrumented build of your product, run some test cases to generate profiles, and recompile/optimize based on that collected data.

We discovered that we were using older profile data when compiling and optimizing the front end binary (c1xx.dll). When we reinstrumented and recollected the PGO data we saw a 10% performance boost.

The lesson here is, if you’re using PGO in order to provide a performance boost to your product, make sure you periodically recollect your training data!

Remove usages of __assume

__assume(0) is a hint to the back end of the compiler that a certain code path (the default case of a switch statement, for example) is unreachable. Many products will wrap this up in a macro, named something like UNREACHABLE, implemented so that debug builds will assert and ship builds will pass this hint to the compiler. The compiler might do things such as removing branches or switches which target that statement.
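The pattern usually looks roughly like this (a sketch; the macro name and the surrounding function are illustrative, not taken from any particular product):

#include <cassert>

#ifdef NDEBUG
#define UNREACHABLE() __assume(0)    // ship builds: tell the optimizer this path never runs
#else
#define UNREACHABLE() assert(false)  // debug builds: fail loudly if it ever does
#endif

int ToValue(char c) {
    switch (c) {
    case 'a': return 1;
    case 'b': return 2;
    default:  UNREACHABLE();         // callers "guarantee" only 'a' or 'b' is passed
    }
    return 0;                        // not reached in practice; avoids a missing-return warning
}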

It stands to reason, then, that if at runtime an __assume(0) statement actually is reachable, bad code generation can result. This causes problems in a lot of different ways (and some people argue it might cause security issues), so we did an experiment to see what the impact was of simply removing all __assume(0) statements by redefining that macro. If the regression was small, perhaps it wasn't worth having it in the product, given the other issues it causes.

Much to our surprise, the front end actually got 1-2% faster with __assume statements removed. That made the decision pretty easy. The root cause here appears to be that although __assume can be an effective hint to the optimizer in many cases, it can actually inhibit other optimizations (particularly newer optimizations). Improving __assume is an active work item for a future release as well.

Improve winmd file loading

Various changes were made to how winmd files were loaded, for a gain of around 10% of load time (which is perhaps 1% of total compile time). This only impacts UWP projects.

Compiler Back End

The compiler back end includes the optimizer. There are two classes of throughput issues here, “general” problems (where we do a bunch of work in hopes of a 1-2% across the board win), and “long poles” where a specific function causes some optimization to go down a pathological path and take 30 seconds or longer to compile – but the vast majority of people are not impacted. We care about and work on both.

If you use /Bt to cl.exe and see an outlier which takes an unusual amount of time in c2.dll (the back end), the next step is to compile just that file with /d2cgsummary. Cgsummary (or “code generation summary”) will tell you what functions are taking all of the time. If you’re lucky, the function isn’t on your critical performance path, and you can disable optimizations around the function like this:

#pragma optimize("", off)   // turn the optimizer off for the functions that follow
void foo() {
...
}
#pragma optimize("", on)    // restore the optimization settings from the command line

The optimizer then won't run on that function. Get in touch with us and we'll see if we can fix the throughput issue.

Beyond just turning off the optimizer for functions with pathological compile times, I also need to warn against too liberal a use of __forceinline. Often customers need to use __forceinline to get the inliner to do what they want, and in those cases the advice is to be as targeted as possible. The back end of the compiler takes __forceinline very, very seriously. It's exempt from all inline budget checks (the cost of a __forceinline function doesn't even count against the inline budget) and is always honored. We've seen many cases over the years where liberally applying __forceinline for code quality (CQ) reasons can cause a major bottleneck. Basically, this is because unlike other compilers we always inline pre-optimized functions directly from the front end's IL. This is sometimes an advantage, as we can make different optimization decisions in different contexts, but one disadvantage is that we end up redoing a lot of work. If you have a deep __forceinline "tree", this can quickly get pathological. This is the root cause of long compile times in places like TensorFlow and libsodium. This is something we are looking to address in a future release.

Look into Incremental Link Time Code Generation (iLTCG) for your LTCG builds. Incremental LTCG is a relatively new technology that allows us to only do code generation on the functions (and dependencies, such as their inliners) which have changed in an LTCG build. Without it, we actually redo code generation on the entire binary, even for a minor edit. If you’ve abandoned the use of LTCG because of the hit it causes to the inner dev loop, please take another look at it with iLTCG.

One final piece of advice, which again applies more to LTCG builds (where a single link.exe process does code generation rather than it being distributed across cl.exe processes): consider adjusting the default core-scaling strategy via /cgthreads#. As you'll see below, we've made changes to scale better here, but the default is still to use 4 cores. In the future we'll look at increasing the default core count, or even making it dynamic with the number of cores on the machine.
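For example, on a machine with plenty of cores you might try something along these lines (a sketch; the values are illustrative, not recommendations):

cl /c /cgthreads8 module.cpp
link /LTCG /CGTHREADS:8 *.obj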

Here are some recent changes made to the back end that will help you build faster for free:

Inline Reader Cache

In some other compilers, inlining is implemented by keeping all inline-candidate functions in memory after they have been optimized. Inlining is then just a matter of copying that memory into the appropriate spot in the current function.

In VC++, however, we implement inlining slightly differently. We actually re-read the unoptimized version of an inlinee from disk. This clearly can be a lot slower, but at the same time may use a lot less memory. This can become a bottleneck, especially in projects with a ton of __forceinline calls to work through.

To help mitigate this, we took a small step towards the “in memory” inlining approach of other compilers. The back end will now cache a function after it’s been read for inlining a certain number of times. Some experimenting showed that N=100 was a good balance between throughput wins and memory usage. This can be configured by passing /d2FuncCache# to the compiler (or /d2:-FuncCache# to the linker for LTCG builds). Passing 0 disables this feature, passing 50 means that a function is only cached after it’s been inlined 50 times, etc.
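For instance, tuning the inline reader cache threshold might look like this (a sketch; the threshold of 50 is just the example value mentioned above, and the file names are placeholders):

cl /c /d2FuncCache50 widget.cpp
link /LTCG /d2:-FuncCache50 *.obj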

Type System Building Improvements

This applies to LTCG builds. At the start of an LTCG build, the compiler back end attempts to build a model of all of the types used in the program for use in a variety of optimizations, such as devirtualization. This is slow, and takes a ton of memory. In the past, when issues have been hit involving the type system we’ve advised people to just turn it off via passing /d2:-notypeopt to the linker. Recently we made some improvements to the type system which we hope will mitigate this issue once and for all. The actual changes are pretty basic, and they involve how we implement bitsets.

Better scaling to multiple cores

The compiler back end is multithreaded. But there are some restrictions: we compile in a “bottom up” order – meaning a function is only compiled once all of its callees are compiled. This is so a function can use information collected during its callees’ compilation to optimize better.

There has always been a limit on this: functions above a certain size are exempt, and simply begin compiling immediately without using this bottom up information. This is done to prevent compilation from bottle-necking on a single thread as it churns through the last few remaining massive functions which couldn’t start sooner because of a large tree of dependencies.

We have reevaluated the “large function” limit, and lowered it significantly. Previously, only a few functions existed in all of Microsoft which triggered this behavior. Now we expect a few functions per project might. We didn’t measure any significant CQ loss with this change, but the throughput wins can be large depending on how much a project was previously bottlenecking on its large functions.

Other inlining improvements

We’ve made changes to how symbol tables are constructed and merged during inlining, which provide an additional small benefit across the board.

Finer grained locking

Like most projects we continually profile and examine locking bottlenecks, and go after the big hitters. As a result we’ve improved the granularity of our locking in a few instances, in particular how IL files are mapped and accessed and how symbols are mapped to each other.

New data structures around symbol tables and symbol mappings

During LTCG, a lot of work is done to properly map symbols across modules. This part of the code was rewritten using new data structures to provide a boost. This helps especially in “unity” style builds, common in the gaming industry, where these symbol key mappings can get rather large.

Multithread Additional Parts of LTCG

Saying the compiler is multithreaded is only partially true. We’re speaking about the “code generation” portion of the back end – by far the largest chunk of work to be done.

LTCG builds, however, are a lot more complicated. They have a few other parts to them. We recently did the work to multithread another one of these parts, giving up to a 10% speedup in LTCG builds. This work will continue into future releases.

Linker Improvements

If you’re using LTCG (and you should be), you’ll probably view the linker as the bottleneck in your build system. That’s a little unfair, as during LTCG the linker just invokes c2.dll to do code generation – so the above advice applies. But beyond code generation, the linker has its traditional job to do of resolving references and smashing objs together to produce a final binary.

The biggest thing you can do here is to use "fastlink". Fastlink is actually a new PDB format, and is invoked by passing /debug:fastlink to the linker. This greatly reduces the work that needs to be done to generate a PDB file during linking.

On your debug builds, you should be using /INCREMENTAL. Incremental linking allows the linker to only update the objs which have been modified, rather than rebuild the entire binary. This can make a dramatic difference in the “inner dev loop” where you’re making some changes, recompiling/linking and testing, and repeating. Similar to fastlink, we’ve made tons of stability improvements here. If you’ve tried it before but found it to be unstable, please give it another chance.

Some recent linker throughput improvements you’ll get for free include:

New ICF heuristic

ICF, or identical COMDAT folding, is one of the biggest bottlenecks in the linker. This is the phase where any identical functions are folded together to save space, and any references to those functions are redirected to the single remaining instance as well.

This release, ICF got a bit of a rewrite. The summary is we now rely on a strong hashing function for equality rather than doing a memcmp. This speeds up ICF significantly.

Fallback to 64 bit linker

The 32 bit linker has an address space problem for large projects. It often memory maps files as a way of accessing them, and if the file is large this isn’t always possible as memory mapping requires contiguous address space. As a backup, the linker falls back on a slower buffered I/O approach where it reads parts of the file as needed.

It’s known that the buffered I/O codepath is much, much slower compared to doing memory mapped I/O. So we’ve added new logic where the 32 bit linker attempts to restart itself as a 64 bit process before falling back to the buffered I/O.

Fastlink Improvements

/DEBUG:fastlink is a relatively new feature which significantly speeds up debug info generation – a major portion of overall link time. We suggest everyone read up on this feature and use it if at all possible. In this release we’ve made it faster and more stable, and we are continuing to invest in fastlink in future releases. If you initially used it but moved away because of a bad experience, please give it another shot! We have more improvements on the way here in 15.6 and beyond.

Incremental Linking fallback

One of the complaints we've heard about incremental linking is that it can sometimes be slower than full linking, depending on how many objs or libs have been modified. We're now more aggressive about detecting this situation and bailing out directly to a full link.

Conclusion

This list isn’t by any means exhaustive, but it’s a good summary of a few of the larger throughput focused changes over the past few months. If you’ve ever been frustrated with the compile time or link time of VC++ before, I’d encourage you to give it another shot with the 15.5 toolset. And if you do happen to have a project which is taking an unreasonably long time to compile compared to other projects of similar size or on other toolsets, we’d love to take a look!

And remember, you can use /d2cgsummary to cl.exe or /d2:-cgsummary to the linker to help diagnose code generation throughput issues. This includes info about the inliner reader cache discussed above. And for the toolset at large, pass /Bt to cl.exe and it’ll break down the time spent in each phase (front end, back end, and linker). The linker itself will output its time breakdown when you pass it /time+, including how much time is spent during ICF.
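Putting the diagnostic switches from this post together, a quick investigation of a slow build might look something like this (a sketch; file names are placeholders):

cl /Bt /c slowfile.cpp                      (time spent per phase for one file)
cl /d2cgsummary /c slowfile.cpp             (which functions dominate code generation)
link /time+ /LTCG /d2:-cgsummary *.obj      (linker time breakdown, including ICF)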

Asynchronous refresh with the REST API for Azure Analysis Services


Azure Analysis Services unlocks datasets with potentially billions of rows for non-technical business users to perform interactive analysis. Such large datasets can benefit from features such as asynchronous refresh.

We are pleased to introduce the REST API for Azure Analysis Services. Using any programming language that supports REST calls, you can now perform asynchronous data-refresh operations. This includes synchronization of read-only replicas for query scale out. Please see the blog post Introducing query replica scale-out for Azure Analysis Services for more information on query scale out.

Data-refresh operations can take some time depending on various factors, including data volume and level of optimization using partitions. These operations have traditionally been invoked with existing methods such as using TOM (Tabular Object Model), PowerShell cmdlets for Analysis Services, or TMSL (Tabular Model Scripting Language). The traditional methods may require long-running HTTP connections. A lot of work has been done to ensure the stability of these methods, but given the nature of HTTP, it may be more reliable to avoid long-running HTTP connections from client applications.

The REST API for Azure Analysis Services enables data-refresh operations to be carried out asynchronously. It therefore does not require long-running HTTP connections from client applications. Additionally, there are other built-in features for reliability such as auto retries and batched commits.

Please visit our documentation page for details on how to use the REST API for Azure Analysis Services. It covers how to perform asynchronous refreshes, check their status, and cancel them if necessary. Similar information is provided for query-replica synchronization. Additionally, a C# code sample, RestApiSample, is provided on GitHub.
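To give a flavor of the API, here is a minimal C# sketch of kicking off an asynchronous refresh. The server, model, and region names are placeholders, the JSON body is the simplest possible one, and acquiring the Azure AD access token is assumed to have happened already; see the documentation page and the RestApiSample for the authoritative version.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class RefreshSketch
{
    static async Task Main()
    {
        // Assumed endpoint shape: https://<region>.asazure.windows.net/servers/<server>/models/<model>/
        var baseUrl = "https://westus.asazure.windows.net/servers/myserver/models/AdventureWorks/";
        using (var client = new HttpClient { BaseAddress = new Uri(baseUrl) })
        {
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", "<Azure AD access token>");

            // POST /refreshes starts the refresh and returns immediately.
            var body = new StringContent("{ \"Type\": \"Full\", \"CommitMode\": \"transactional\" }",
                                         Encoding.UTF8, "application/json");
            var response = await client.PostAsync("refreshes", body);
            response.EnsureSuccessStatusCode();

            // The operation's status URL comes back in the Location header; poll it with GET.
            Console.WriteLine(response.Headers.Location);
        }
    }
}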

Whitepaper: Selecting the right secure hardware for your IoT deployment


How do you go about answering perplexing questions such as: What secure hardware should I use? How do I gauge the level of security? How much security do I really need, and hence how much of a premium should I place on secure hardware? We've published a new whitepaper to shed light on this subject.

In our relentless commitment to securing IoT deployments worldwide, we continue to raise awareness of the true nature of security: it is a journey, never an endpoint. Challenges emerge, vulnerabilities evolve, and solutions age, triggering the need for renewal if you are to maintain the desired level of security.

Securing your deployment comprises three main phases: planning, architecture, and execution. For IoT, these are further broken down into sub-phases that include design assessment, risk assessment, model assessment, development, and deployment, as shown in Figure 1. The decision process at each phase is equally important, and it must take all other phases into consideration for optimal efficacy. This is especially true when choosing the right secure hardware, also known as secure silicon or a Hardware Security Module (HSM), to secure an IoT deployment.
 


Figure 1: The IoT Security Lifecycle


Choosing the right secure hardware for an IoT deployment requires that you understand what you are protecting against (risk assessment), which drives part of the requirements for the choice. The other part of the requirements entails logistical considerations like provisioning, deployment, and retirement, as well as tactical considerations like maintainability. These requirements in turn drive architecture and development strategies, which then allow you to make the optimal choice of secure hardware. While this prescription is not an absolute guarantee of security, following these guidelines lets you claim due diligence for a holistic approach to choosing the right secure hardware, and hence gives you the greatest chance of achieving your security goals.

The choice itself requires knowledge of the available secure hardware options as well as their attributes, such as compliance with protocols and standards. We've developed a whitepaper, The Right Secure Hardware for Your IoT Deployment, to walk through the secure hardware decision process. This whitepaper covers the architecture decision phase of the IoT security lifecycle. It is the second whitepaper in the IoT security lifecycle decision-making series, following the previously published whitepaper, Evaluating Your IoT Security, which covers the planning phase.
 
Download IoT Security Lifecycle whitepaper series:

  1. Evaluating Your IoT Security.
  2. The Right Secure Hardware for Your IoT Deployment.

What strategies do you use in selecting the right hardware to secure your IoT devices and deployment? We invite you to share your thoughts in comments below.

Using Qubole Data Service on Azure to analyze retail customer feedback


It has been a busy season for many retailers. During this time, retailers are using Azure to analyze various types of data to help accelerate purchasing decisions. The Azure cloud not only gives retailers the compute capacity to handle peak times, but also the data analytic tools to better understand their customers.

Many retailers have a treasure trove of information in the thousands, or millions, of product reviews provided by their customers. Often, it takes time for particular reviews to show their value because customers "vote" for helpful or not helpful reviews over time. Using machine learning, retailers can automate identifying useful reviews in near real-time and leverage that insight quickly to build additional business value.

But how might a retailer without deep big data and machine learning expertise even begin to conduct this type of advanced analytics on such a large quantity of unstructured data? We will be holding a workshop in January to show you how easy that can be through the use of Azure and Qubole’s big data service.

Using these technologies, anyone can quickly spin up a data platform and train a machine learning model utilizing Natural Language Processing (NLP) to identify the most useful reviews. Moving forward, a retailer can then identify the value of reviews as they are generated by the user base and gain insights that can impact many aspects of their business.

Join Microsoft, Qubole, and Precocity for a half-day, hands on lab experience where we will show how to:

  • Leverage Azure cloud-based services and Qubole Data Service to increase the velocity of managing advanced analytics for retail
  • Ingest a large retail review data set from Azure and leverage Qubole notebooks to explore the data in a retail context
  • Demonstrate the autoscaling capability of a Qubole Spark cluster during a Natural Language Processing (NLP) pipeline
  • Train a machine learning model at scale using Open Source technologies like Apache Spark and score new customer reviews in real-time
  • Demonstrate use of Azure’s Event Hub and CosmosDB coupled with Spark Streaming to predict helpfulness of customer reviews in real-time

This workshop can be the basis of creating business value from reviews for other purposes including:

  • Detecting fake review fraud
  • Identifying positive product characteristics
  • Identifying influencers
  • Uncovering new feature attributes for a product to inform merchandising

Register today for our event in Dallas, Texas on January 30th, 2018.

Space is limited, so register early!

Divide and parallelize large data problems with Rcpp


by Błażej Moska, computer science student and data science intern

Got stuck with too large a dataset? R speed drives you mad? Divide, parallelize and go with Rcpp!

One of the frustrating moments while working with data is when you need results urgently, but your dataset is large enough to make that impossible. This often happens when we need to use an algorithm with high computational complexity. I will demonstrate this with an example I've been working with.

Suppose we have a large dataset consisting of association rules. For some reason we want to slim it down: whenever two rules' consequents are the same and one rule's antecedent is a subset of the other rule's antecedent, we want to keep only the smaller rule (the probability of obtaining a smaller set is greater than the probability of obtaining a bigger set). This is illustrated below:

{A,B,C}=>{D}

{E}=>{F}

{A,B}=>{D}

{A}=>{D}

How can we achieve that? For example, using the pseudo-algorithm below:

For i = 1 to n:
  For j = i+1 to n:
    if consequents[i] == consequents[j]:
      if antecedent[i] contains antecedent[j]:
        flag rule i with 1    (the larger rule, to be removed)
      else if antecedent[j] contains antecedent[i]:
        flag rule j with 1    (the larger rule, to be removed)
      otherwise leave the flags at 0

How many operations do we need to perform with this simple algorithm?

For the first i we need to iterate (n-1) times, for the second i (n-2) times, for the third i (n-3) and so on, reaching finally (n-(n-1)). This leads to (proof can be found here):

\[ \sum_{i=1}^{n-1} i = \frac{n(n-1)}{2} \]

So the above has an asymptotic complexity of \(O(n^2)\). It means, more or less, that the computational cost grows with the square of the size of the data. Well, for a dataset containing around 1,300,000 records this becomes a serious issue. With R I was unable to perform the computation in a reasonable time. Since a compiled language performs better with simple arithmetic operations, the second idea was to use Rcpp. Yes, it is faster, to some extent, but with such a large dataframe I was still unable to get results in a satisfying time. So are there any other options?

Yes, there are. If we take a look at our dataset, we can see that it can be aggregated in such a way that each individual "chunk" consists of records with exactly the same consequents:

{A,B}=>{D}

{A}=>{D}

{C,G}=>{F}

{Y}=>{F}

After such a division I got 3300 chunks, so the average number of observations per chunk was around 400. The next step was to run the algorithm sequentially for each chunk. Since our algorithm has quadratic complexity, it is faster to do it that way than on the whole dataset at once. While R failed again, Rcpp finally returned a result (after 5 minutes). But there is still room for improvement. Since our chunks can be processed independently, we can perform the computation in parallel using, for example, the foreach package (which I demonstrated in a previous article). While passing R functions to foreach is a simple task, parallelizing Rcpp takes a little more work. We need to do the steps below:

  1. Create a .cpp file which includes all of the functions needed (see the sketch after this list)
  2. Create a package using Rcpp. This can be achieved using for example:
    Rcpp.package.skeleton("nameOfYourPackage",cpp_files = "directory_of_your_cpp_file")
  3. Install your Rcpp package from source:
    install.packages("directory_of_your_rcpp_package", repos=NULL, type="source")
  4. Load your library:
    library(name_of_your_rcpp_package)
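For step 1, the .cpp file might look roughly like the following (a sketch only; the function name and signature are placeholders for the real rule-pruning routine described above):

// remove_redundant.cpp
#include <Rcpp.h>
using namespace Rcpp;

// [[Rcpp::export]]
IntegerVector flag_redundant(List antecedents, CharacterVector consequents) {
    int n = antecedents.size();
    IntegerVector flags(n);   // initialized to 0
    // ... quadratic scan over pairs of rules within one chunk,
    //     following the pseudo-algorithm shown earlier ...
    return flags;
}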

Now you can use your Rcpp function in foreach:

library(doParallel); registerDoParallel(cores = 4)   # register a parallel backend first

results <- foreach(k = 1:length(chunks),             # 'chunks' is the list of per-consequent chunks
                   .packages = c("name_of_your_rcpp_package")) %dopar% {
                     your_cpp_function(chunks[[k]])
                   }

Even with foreach I waited forever for the R results, but Rcpp gave them in approximately 2.5 minutes. Not too bad!

Here are some conclusions. Firstly, it's worth knowing more languages/tools than just R. Secondly, there is often an escape from the large-dataset trap. There is little chance that somebody will need to do exactly the same task as in the example above, but a much higher probability that someone will face a similar problem, with a possibility to solve it in the same way.

Maximize your VM’s Performance with Accelerated Networking – now generally available for both Windows and Linux


We are happy to announce that Accelerated Networking (AN) is generally available (GA) and widely available for Windows and the latest distributions of Linux, providing up to 30 Gbps of networking throughput, free of charge!

AN provides consistent ultra-low network latency via Azure's in-house programmable hardware and technologies such as SR-IOV. By moving much of Azure's software-defined networking stack off the CPUs and into FPGA-based SmartNICs, compute cycles are reclaimed by end user applications, putting less load on the VM, decreasing jitter and inconsistency in latency.

With the GA of AN, region limitations have been removed, making the feature widely available around the world. Supported VM series include D/DSv2, D/DSv3, E/ESv3, F/FS, FSv2, and Ms/Mms.

The deployment experience for AN has also been improved since public preview. Many of the latest Linux images available in the Azure Marketplace, including Ubuntu 16.04, Red Hat Enterprise Linux 7.4, CentOS 7.4 (distributed by Rogue Wave Software), and SUSE Linux Enterprise Server 12 SP3, work out of the box with no further setup steps needed. Windows Server 2016 and Windows Server 2012 R2 also work out of the box.

All the information needed to deploy a VM with AN can be found here: Windows AN VM or Linux AN VM.
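As a rough idea of what a deployment looks like, the Azure CLI sketch below creates a NIC with accelerated networking enabled and attaches it to a VM from one of the supported series. The resource names and the VM size are placeholders; see the linked Windows and Linux guides for the authoritative steps.

az network nic create \
  --resource-group myRG \
  --name myNic \
  --vnet-name myVnet \
  --subnet mySubnet \
  --accelerated-networking true

az vm create \
  --resource-group myRG \
  --name myVM \
  --image UbuntuLTS \
  --size Standard_DS3_v2 \
  --nics myNic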

Top stories from the VSTS community – 2018.01.05

Here are top stories we found in our streams this week related to DevOps, VSTS, TFS and other interesting topics. TOP STORIES Tag your GitHub source code in VSTS pipeline – Mikael Krief: To allow me to tag my GitHub source code I developed and published an extension in the Visual Studio Marketplace, which through a... Read More

Because it’s Friday: Harry Potter was the time to come


Type "Harry Potter" as a text on your phone. Now press the predictive text button a few times. I got "Harry Potter was the time to come", but my phone has been trained on my texts and you'll likely get something different. But if you train the predictive algorithm on the complete Harry Potter series, you get this, from Botnik:


Predictive text has been used to generate songs before, and AIs have been used to create movie scripts, but they all seemed a bit stilted. The plot of this one is nonsense, but I'm amazed at how well this one reads: it definitely has that Harry Potter style. 

You can try out the predictive keyboards for Narration and Dialogue here, but the results aren't as impressive as the text above, at least if you keep selecting the first option (I get "Said harry quickly as possible skulduggery behind him" if I keep pressing '1'). I suspect the 12 contributors listed here had a significant hand in selecting the words and curating the dialogue. But still, kudos — it's downright hilarious in places.

That's all from us for this week. See you on Monday!

Announcing Preview 1 of ASP.NET MVC 5.2.4, Web API 5.2.4, and Web Pages 3.2.4


Today we are releasing Preview 1 of ASP.NET MVC 5.2.4, Web API 5.2.4, and Web Pages 3.2.4 on NuGet. This release contains some minor bug fixes and a couple of new features specifically targeted at enabling .NET Standard support for the ASP.NET Web API Client.

You can find the full list of features and bug fixes for this release in the release notes.

To update an existing project to use this preview release, run the following commands from the NuGet Package Manager Console for each of the packages you wish to update:

Install-Package Microsoft.AspNet.Mvc -Version 5.2.4-preview1
Install-Package Microsoft.AspNet.WebApi -Version 5.2.4-preview1
Install-Package Microsoft.AspNet.WebPages -Version 3.2.4-preview1

ASP.NET Web API Client support for .NET Standard

The ASP.NET Web API Client package provides strongly typed extension methods for accessing Web APIs using a variety of formats (JSON, XML, form data, custom formatter). This saves you from having to manually serialize or deserialize the request or response data. It also enables using .NET types to share type information about the request or response with the server and client.

This release adds support for .NET Standard 2.0 to the ASP.NET Web API Client. .NET Standard is a standardized set of APIs that when implemented by .NET platforms enables library sharing across .NET implementations. This means that the Web API client can now be used by any .NET platform that supports .NET Standard 2.0, including cross-platform ASP.NET Core apps that run on Windows, macOS, or Linux. The .NET Standard version of the Web API client is also fully featured (unlike the PCL version) and has the same API surface area as the full .NET Framework implementation.

For example, let’s use the new .NET Standard support in the ASP.NET Web API Client to call a Web API from an ASP.NET Core app running on .NET Core. The code below shows an implementation of a ProductsClient that uses the Web API client helper methods (ReadAsAsync<T>(), Post/PutAsJsonAsync<T>()) to get, create, update, and delete products by making calls to a products Web API:
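A minimal sketch of such a client, assuming a hypothetical Product model and an api/products endpoint, might look like this:

using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

// Hypothetical model shared between client and server.
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
}

public class ProductsClient
{
    private readonly HttpClient _client; // assumed to have BaseAddress set to the Web API host

    public ProductsClient(HttpClient client)
    {
        _client = client;
    }

    public async Task<IEnumerable<Product>> GetProductsAsync()
    {
        var response = await _client.GetAsync("api/products");
        response.EnsureSuccessStatusCode();
        // ReadAsAsync<T> picks a formatter (JSON, XML, ...) based on the response content type.
        return await response.Content.ReadAsAsync<IEnumerable<Product>>();
    }

    public async Task<Product> GetProductAsync(int id)
    {
        var response = await _client.GetAsync($"api/products/{id}");
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsAsync<Product>();
    }

    public async Task CreateProductAsync(Product product)
    {
        // PostAsJsonAsync serializes the product to JSON for us.
        var response = await _client.PostAsJsonAsync("api/products", product);
        response.EnsureSuccessStatusCode();
    }

    public async Task UpdateProductAsync(Product product)
    {
        var response = await _client.PutAsJsonAsync($"api/products/{product.Id}", product);
        response.EnsureSuccessStatusCode();
    }

    public async Task DeleteProductAsync(int id)
    {
        var response = await _client.DeleteAsync($"api/products/{id}");
        response.EnsureSuccessStatusCode();
    }
}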

Note that all the serialization and deserialization is handled for you. The ReadAsAsync<T>() methods will also handle selecting an appropriate formatter for reading the response based on its content type (JSON, XML, etc.).

This ProductsClient can then be used to call the Products Web API from your Razor Pages in an ASP.NET Core 2.0 app running on .NET Core (or from any .NET platform that supports .NET Standard 2.0). For example, here’s how you can use the ProductsClient from the page model for a page that lets you edit the details for a product:
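A rough sketch of such a page model, reusing the hypothetical ProductsClient above, might be:

using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.RazorPages;

public class EditProductModel : PageModel
{
    private readonly ProductsClient _products;

    public EditProductModel(ProductsClient products)
    {
        _products = products;
    }

    [BindProperty]
    public Product Product { get; set; }

    public async Task OnGetAsync(int id)
    {
        // Load the current details of the product being edited.
        Product = await _products.GetProductAsync(id);
    }

    public async Task<IActionResult> OnPostAsync()
    {
        // Push the edited values back to the Web API.
        await _products.UpdateProductAsync(Product);
        return RedirectToPage("./Index");
    }
}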

For more details on using the ASP.NET Web API Client see Call a Web API From a .NET Client (C#).

Please try out Preview 1 of ASP.NET MVC 5.2.4, Web API 5.2.4, and Web Pages 3.2.4 and let us know what you think! Any feedback can be submitted as issues on GitHub.

Enjoy!

VSTS will no longer allow creation of new MSA users with custom domain names backed by AzureAD

On September 15, 2016, the Azure Active Directory (Azure AD) team blocked the ability to create new Microsoft accounts using email addresses in domains that are configured in Azure AD. Many VSTS customers expressed concern when this change happened. As a result, we worked with the Azure AD team to get a temporary exception for... Read More

#Azure #SQLDW, the cost benefits of on-demand data warehousing


Prices illustrated below are based on East US 2 as of December 18th, 2017. For price changes and updates, visit the Azure Analysis Services, SQL Database, and SQL Data Warehouse pricing pages.

Azure SQL Data Warehouse is Microsoft's SQL analytics platform, the backbone of your Enterprise Data Warehouse. The service is designed to allow customers to elastically and independently scale compute and storage. It acts as a hub to your data marts and cubes for an optimized and tailored performance of your EDW. Azure SQL DW offers guaranteed 99.9 percent high availability, PB scale, compliance, advanced security, and tight integration with upstream and downstream services so you can build a data warehouse that fits your needs. Azure SQL DW is the only data warehouse service enabling enterprises to gain insights from data everywhere, with global availability in more than 30 regions.


This is the last blog post in our series detailing the benefits of Hub and Spoke data warehouse architecture on Azure. On-premises, a Hub and Spoke architecture was hard and expensive to maintain. In the cloud, the cost of such architecture can be much lower as you can dynamically adjust compute capacity to what you need, when you need it. Azure is the only platform that enables you to create a high performing data warehouse that is cost optimized for your needs. You will see in this blog post how you can save up to 50 percent on cost by leveraging a Hub and Spoke design while increasing the overall performance and time to insights of your analytics solutions.

With the Microsoft Azure data platform you can build the data warehouse solution you want with workload isolation, advanced security, and virtually unlimited concurrency. All of this can be done at an incredibly low cost if you leverage Azure Functions to build on-demand data warehousing. Imagine a company that wants to create a central data repository from a variety of source systems and push the combined data to multiple customers (e.g., ISVs), suppliers (e.g., retail), or business units/departments. In this case study, the customer expects strong activity on its data warehouse from 8 AM to 8 PM during workdays. The performance ratio between high- and low-activity times is around 5x. They expect their curated data lake, SQL Data Warehouse, to be 10 TB after compression and to have peak-time needs of 1,500 DWUs. For dashboarding and reports the solution will use Analysis Services, caching around 1 percent of the data. Thanks to SQL DB Elastic Pools or Azure Analysis Services, the company can add concurrency, advanced security, and workload isolation between their end users. SQL DB Elastic Pool offers a wide range of performance and cost, with the cost per database starting at $0.60 with the Basic tier.

The figure below illustrates the various benefits from moving to a Hub and Spoke Model. Microsoft Azure is the only platform offering the ability to build the data warehouse that fits your unique data warehousing needs.


Figure 1 - Benefits from a Hub and Spoke Architecture

In step one, we have the traditional data warehouse; this is the starting point. Every data warehouse has inherent limits that will be encountered as more and more people connect to the service. In this example, with no auto-scaling and a rigid level of provisioning, you could spend $15k/month.

In step two, we introduce Azure Functions to use the full elasticity of SQL DW. In this simple example, we leverage a time-trigger function and ask SQL DW to run at 1,500 DWUs at peak time (workdays 8 AM-8 PM) and at 300 DWUs the rest of the time. You can go deeper on performance levels and add auto-scaling and auto-pausing/resuming to make your data warehouse fully elastic. In this example the cost goes down to $8k/month.

Step three is a great example of the breadth of customization you can build around SQL DW using SQL DB or Azure Analysis Services. No other data warehouse enables such a high level of customization, because they cannot be expanded in this way. With that model, there is virtually no limit to the concurrency and performance of your data warehouse. Here are a few examples of what you can do:

  • For high performance, interactive dash boarding and reports with pre-aggregate queries, Azure Analysis Services will be the right choice.
  • Do you want to provide predictable performance to a large department at fast speed? SQL DB Premium Single Database will be the right choice.
  • If you are an ISV, do you have a large number of customers that you need to accommodate at a free subscription level? A Basic SQL DB Elastic Pool offers a cost per database of less than $1/month.

Deploy in a SQL Data Warehouse Hub Spoke Template with SQL Databases.

In the example below, the cost of the data warehouse varies from $10k/month to $15.5k/month depending on what tier and service you pick. Remember that by offloading the performance from SQL DW to data marts or caching layers, you can dramatically reduce your DWU provisioning (while increasing concurrency). Also remember that you can leverage Azure Functions to start automating the level of performance you need at a specific point in time. Learn more about using Azure Functions to automate SQL DW Compute Levels.

In step four, you can further optimize the performance of your data marts by connecting them to Azure Analysis Services for caching. In this example, the cost is between $16k and $21.5k/month with the opportunity to be even lower if you offload the performance needs on your data marts.


Figure 2 - Summary of the benefits to build a Hub and Spoke Data Warehouse

In summary, we moved from a static and monolithic data warehouse costing $28k per month to an elastic Hub & Spoke data warehouse optimized for performance and accessed by thousands of users with a potential cost saving of 50 percent. We can guarantee you that each of the services will continue further integrating with each other to provide the best data warehouse experience.

If you need our help for a POC, contact us directly by submitting a SQL Data Warehouse Information Request. Stay up-to-date on the latest Azure SQL DW news and features by following us on Twitter @AzureSQLDW. Next week, we will feature the deeper integration between Azure Analysis Services and SQL DW.

Visual Studio Code Java Debugger Adding Step Filter and Expression Evaluation


Happy new year! We'd like to thank you all for using Visual Studio Code for your Java development as well as for sharing your feedback. Within just three months, we've published 5 releases of our Debugger for Java extension for Visual Studio Code and received 400K+ downloads. With our new 0.5.0 release, we're adding two exciting new features: Expression Evaluation and Step Filters.

Expression Evaluation

The debugger now enables you to evaluate expressions in the watch window as well as in the debug console at runtime. You can now see the value of simple variables, single-line expressions, and short code fragments within the running context. You can then monitor and validate how the value changes while your code is being executed. See below.

VS Code Java Debugger Adding Expression Evaluation

Step Filters

Step filters are commonly used to filter out types that you do not wish to see or step through while debugging. With this feature, you can configure the packages to filter in your launch.json so they can be skipped when you step through. See below.

VS Code Java Debugger Adding Step Filters

Other updates

This release also includes a few other enhancements:

  1. Publish the binaries to the Maven central repository
  2. Adopt new Visual Studio Code 1.19.0 debug activation events
  3. Improve search performance by looking up the stack frame's associated source file from source containers directly instead of leveraging the original JDT search engine
  4. Bug fixes

You can find more details in our changelog.

Try it out

If you're trying to find a performant editor for your Java project, please give Visual Studio Code a try.

Xiaokai He, Program Manager
@XiaokaiHe

Xiaokai is a program manager working on Java tools and services. He's currently focused on making Visual Studio Code great for Java developers, as well as supporting Java in various Azure services.

ASP.NET Single Page Applications Angular Release Candidate


I was doing some Angular then remembered that the ASP.NET "Angular Project Template" has a release candidate and is scheduled to release sometime soon in 2018.

Starting with just a .NET Core 2.0 install plus Node v6 or later, I installed the updated angular template. Note that this isn't the angular/react/redux templates that came with .NET Core's base install.

I'll start by adding the updated SPA (single page application) template:

dotnet new --install Microsoft.DotNet.Web.Spa.ProjectTemplates::2.0.0-rc1-final

Then from a new directory, just

dotnet new angular

Then I can open it in either VSCode or Visual Studio Community (free for Open Source). If you're interested in the internals, open up the .csproj project file and note the checks for ensuring Node is installed, running npm, and running WebPack.

If you've got the Angular "ng" command line tool installed you can do the usual ng related stuff, but you don't need to run "ng serve" because ASP.NET Core will run it automatically for you.

I set development mode with "SET ASPNETCORE_Environment=Development" then do a "dotnet build." It will also restore your npm dependencies as part of the build. The client side app lives in ./ClientApp.

C:\Users\scott\Desktop\my-new-app> dotnet build

Microsoft (R) Build Engine version 15.5 for .NET Core
Copyright (C) Microsoft Corporation. All rights reserved.

Restore completed in 73.16 ms for C:\Users\scott\Desktop\my-new-app\my-new-app.csproj.
Restore completed in 99.72 ms for C:\Users\scott\Desktop\my-new-app\my-new-app.csproj.
my-new-app -> C:\Users\scott\Desktop\my-new-app\bin\Debug\netcoreapp2.0\my-new-app.dll
v8.9.4
Restoring dependencies using 'npm'. This may take several minutes...

"dotnet run" then starts the ng development server and ASP.NET all at once.

My ASP.NET Angular Application

If we look at the "Fetch Data" menu item, you can see an example of how Angular and open source ASP.NET Core work together. Here's the Weather Forecast *client-side* template:

<p *ngIf="!forecasts"><em>Loading...</em></p>


<table class='table' *ngIf="forecasts">
  <thead>
    <tr>
      <th>Date</th>
      <th>Temp. (C)</th>
      <th>Temp. (F)</th>
      <th>Summary</th>
    </tr>
  </thead>
  <tbody>
    <tr *ngFor="let forecast of forecasts">
      <td>{{ forecast.dateFormatted }}</td>
      <td>{{ forecast.temperatureC }}</td>
      <td>{{ forecast.temperatureF }}</td>
      <td>{{ forecast.summary }}</td>
    </tr>
  </tbody>
</table>

And the TypeScript:

import { Component, Inject } from '@angular/core';
import { HttpClient } from '@angular/common/http';

@Component({
  selector: 'app-fetch-data',
  templateUrl: './fetch-data.component.html'
})
export class FetchDataComponent {
  public forecasts: WeatherForecast[];

  constructor(http: HttpClient, @Inject('BASE_URL') baseUrl: string) {
    http.get<WeatherForecast[]>(baseUrl + 'api/SampleData/WeatherForecasts').subscribe(result => {
      this.forecasts = result;
    }, error => console.error(error));
  }
}

interface WeatherForecast {
  dateFormatted: string;
  temperatureC: number;
  temperatureF: number;
  summary: string;
}

Note the URL. Here's the back-end. The request is serviced by ASP.NET Core. Note the interface as well as the TemperatureF server-side conversion.

[Route("api/[controller]")]
public class SampleDataController : Controller
{
    private static string[] Summaries = new[]
    {
        "Freezing", "Bracing", "Chilly", "Cool", "Mild", "Warm", "Balmy", "Hot", "Sweltering", "Scorching"
    };

    [HttpGet("[action]")]
    public IEnumerable<WeatherForecast> WeatherForecasts()
    {
        var rng = new Random();
        return Enumerable.Range(1, 5).Select(index => new WeatherForecast
        {
            DateFormatted = DateTime.Now.AddDays(index).ToString("d"),
            TemperatureC = rng.Next(-20, 55),
            Summary = Summaries[rng.Next(Summaries.Length)]
        });
    }

    public class WeatherForecast
    {
        public string DateFormatted { get; set; }
        public int TemperatureC { get; set; }
        public string Summary { get; set; }

        public int TemperatureF
        {
            get
            {
                return 32 + (int)(TemperatureC / 0.5556);
            }
        }
    }
}

Pretty clean and straightforward. Not sure about the DateTime.Now, but for the most part I understand this and can see how to extend it. Check out the docs on this release candidate and also note that it includes updated React and Redux templates as well!



Last week in Azure: Securing Azure infrastructure from CPU vulnerability, and more


As we ushered in 2018, our industry was busy responding to an industry-wide, hardware-based security vulnerability related to CPU data cache timing. We updated the Azure infrastructure to address this vulnerability at the hypervisor level. You can get more details about the steps taken to address this vulnerability here: Securing Azure customers from CPU vulnerability.

As announced last month (Azure portal update for classic portal users), today is the official sunset date for the Azure classic portal, manage.windowsazure.com. The Azure portal (portal.azure.com), which launched in December 2015, is now the place to find all of our services and your resources.

Data

Azure Data Lake tools integrates with VSCode Data Lake Explorer and Azure Account - Learn about the Azure Data Lake Tools extension in VS Code, which provides a lightweight, cross-platform, authoring experience for U-SQL scripts.

Asynchronous refresh with the REST API for Azure Analysis Services - Introduces the REST API for Azure Analysis Services, which enables you to perform asynchronous data-refresh operations instead of using long-running HTTP connections from client applications.

Build richer apps with your time series data - Check out the new Time Series Insights (TSI) developer tools, which include an Azure Resource Manager (ARM) template, API code samples, and developer docs.

Azure #CosmosDB: Recap of 2017 - Rimma Nehme, GPM for Azure Cosmos DB + Open Source Software Analytics, provides a look back on a year's worth of Azure Cosmos DB milestones.

Azure Analysis Services features on Azure Friday - Christian Wade stops by to chat with Scott Hanselman about how to use Azure Analysis Services diagnostic logging and query scale out. These features provide high scalability and monitoring for IT-owned "corporate BI" and are much easier to set up in Azure than on-premises.

Other Headlines

Migration checklist when moving to Azure App Service - If you're preparing to move your application to Azure PaaS services, especially Azure App Service, be sure to review this collection of items to consider. In addition, check out the Azure App Service Migration Assistant (available for both Windows and Linux sites).

Maximize your VM’s Performance with Accelerated Networking – now generally available for both Windows and Linux - Accelerated Networking (AN) is generally available (GA) and widely available for Windows and the latest distributions of Linux providing up to 30Gbps in networking throughput, free of charge.

New Content

Designing, building, and operating microservices on Azure - The AzureCAT patterns and practices team published new guidance about microservices titled, Designing, building, and operating microservices on Azure.

Whitepaper: Selecting the right secure hardware for your IoT deployment - The second white paper of the IoT Security Lifecycle series, The Right Secure Hardware for Your IoT Deployment, is now available for download.

Service updates

Azure shows


Azure Advisor Updates - Matt Wagner joins Scott Hanselman to talk about Azure Advisor, your personalized cloud service for Azure best practices that helps you to improve availability, enhance protection, optimize performance of your Azure resources, and maximize the return on your IT budget. In this episode, you'll learn about the latest set of improvements to Advisor that enable you to attain a comprehensive view of Advisor's advice across all your subscriptions and to customize Advisor to the needs of your specific organization.

Azure Analysis Services Scale Out & Diagnostics - Christian Wade stops by to chat with Scott Hanselman about how to use Azure Analysis Services diagnostic logging and query scale out. These features provide high scalability and monitoring for IT-owned "corporate BI" and are much easier to set up in Azure than on-premises.

The Azure Podcast: Episode 210 - CPU Vulnerability - Evan talks about the hot issue of the CPU vulnerability that's been addressed by Microsoft in Windows on Azure and on-premises. He discusses the reason for the reboots of all the Azure servers and how customers can alleviate the impact of these reboots.


Learn your way around the R ecosystem


One of the most powerful things about R is the ecosystem that has emerged around it. In addition to the R language itself and the many packages that extend it, you have a network of users, developers, governance bodies, software vendors and service providers that provide resources in technical information and support, companion applications, training and implementation services, and more.

I gave the talk above at the useR! conference last year, but never posted the slides before because they weren't particularly useful without the associated talk track. Mark Sellors from Mango Solutions has thankfully filled the gap with his new Field Guide to the R Ecosystem. This concise and useful document (which Mark introduces here) provides an overview of R and its packages and APIs, developer tools, data connections, commercial vendors of R, and the user and developer community. The document assumes no background knowledge of R, and is useful for anyone thinking of getting into R, or for existing R users to learn about the resources available in the wider ecosystem. You can read Field Guide to the R Ecosystem at the link below.

Mark Sellors: Field Guide to the R Ecosystem

 

Exploring the Azure IoT Arduino Cloud DevKit


Someone gave me an Azure IoT DevKit, and it was lovely timing as I'm continuing to learn about IoT. As you may know, I've done a number of Arduino and Raspberry Pi projects, and plugged them into various and sundry clouds, including AWS, Azure, as well as higher-level hobbyist systems like AdaFruit IO (which is super fun, BTW. Love them.)

The Azure IoT DevKit is brilliant for a number of reasons, but one of the coolest things is that you don't need a physical one...they have an online simulator! Which is very Inception. You can try out the simulator at https://aka.ms/iot-devkit-simulator. You can literally edit your .ino Arduino files in the browser, connect them to your Azure account, and then deploy them to a virtual DevKit (seen on the right). All the code and how-tos are on GitHub as well.

When you hit Deploy it'll create a Free Azure IoT Hub. Be aware that if you already have a free one you may want to delete it (as you can only have a certain number) or change the template as appropriate. When you're done playing, just delete the entire Resource Group and everything within it will go away.

The Azure IoT DevKit in the browser is amazing

Right off the bat you'll have the code to connect to Azure, get tweets from Twitter, and display them on the tiny screen! (Did I mention there's a tiny screen?) You can also "shake" the virtual IoT kit, and exercise the various sensors. It wouldn't be IoT if it didn't have sensors!

It's a tiny Arduino device with a screen!

This is just the simulator, but it's exactly like the real MXChip IoT DevKit. (Get one here) They are less than US$50 and include WiFi, Humidity & Temperature, Gyroscope & Accelerometer, Air Pressure, Magnetometer, Microphone, and IrDA, which is a ton for a small dev board. It's also got a tiny 128x64 OLED color screen! Finally, the board can also go into AP mode, which lets you easily put it online in minutes.
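
To give a flavor of how you'd read those sensors from a sketch, here's a hedged example that pulls temperature and humidity from the on-board sensor and shows them on the OLED. The DevI2C and HTS221Sensor class names, the D14/D15 pins, and the getTemperature/getHumidity calls are my recollection of the DevKit samples, so double-check them against the board package headers.

    // Read the DevKit's temperature/humidity sensor and show the values on the OLED.
    // Class names, pins, and signatures here are assumptions based on the DevKit samples.
    #include "Arduino.h"
    #include "HTS221Sensor.h"

    static DevI2C *i2c;
    static HTS221Sensor *sensor;

    void setup()
    {
      i2c = new DevI2C(D14, D15);      // on-board I2C pins (assumed)
      sensor = new HTS221Sensor(*i2c);
      sensor->init(NULL);
      sensor->enable();
    }

    void loop()
    {
      float tempC = 0.0f, humidity = 0.0f;
      sensor->getTemperature(&tempC);
      sensor->getHumidity(&humidity);

      char line[32];
      snprintf(line, sizeof(line), "T: %.1f C  H: %.0f %%", tempC, humidity);
      Screen.print(1, line);           // OLED helper from the board package (assumed)

      delay(2000);
    }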

I love these well-designed elegant little devices. It also shows up as an attached disk and it's easy to upgrade the firmware.

Temp and Humidity on the Azure IoT DevKit

You can then dev against the real device with free VS Code if you like. You'll need:

  • Node.js and Yarn: Runtime for the setup script and automated tasks.
  • Azure CLI 2.0 MSI - Cross-platform command-line experience for managing Azure resources. The MSI contains dependent Python and pip.
  • Visual Studio Code (VS Code): Lightweight code editor for DevKit development.
  • Visual Studio Code extension for Arduino: Extension that enables Arduino development in Visual Studio Code.
  • Arduino IDE: The extension for Arduino relies on this tool.
  • DevKit Board Package: Tool chains, libraries, and projects for the DevKit.
  • ST-Link Utility: Essential tools and drivers.

But this Zip file sets it all up for you on Windows, and head over here for Homebrew/Mac instructions and more details.

I was very impressed with the Arduino extension for VS Code. No disrespect to the Arduino IDE, but you'll likely outgrow it quickly. This free add-on to VS Code gives you IntelliSense and integrated Arduino debugging.

Once you have the basics done, you can graduate to the larger list of projects at https://microsoft.github.io/azure-iot-developer-kit/docs/projects/ that include lots of cool stuff to try out like a cloud based Translator, Door Monitor, and Air Traffic Control Simulator.

All in all, I was super impressed with the polish of it all. There's a LOT to learn, to be clear, but this was a very enjoyable weekend of play.


Sponsor: Get the latest JetBrains Rider for debugging third-party .NET code, Smart Step Into, more debugger improvements, C# Interactive, new project wizard, and formatting code in columns.



© 2017 Scott Hanselman. All rights reserved.
     

Azure Security Center and Microsoft Web Application Firewall Integration


Web applications are increasingly becoming targets of attacks such as cross-site scripting, SQL injection, and application DDoS. While OWASP provides guidance on writing applications that are more resistant to such attacks, following it requires rigorous maintenance and patching at multiple layers of the application topology. Microsoft Web Application Firewall (WAF) and Azure Security Center (ASC) can help secure web applications against such vulnerabilities.

Microsoft WAF is a feature of Azure Application Gateway (a layer-7 load balancer) that protects web applications against common web exploits using OWASP core rule sets. Azure Security Center scans Azure resources for vulnerabilities and recommends mitigation steps for those issues. One such vulnerability is the presence of web applications that are not protected by WAF. Currently, Azure Security Center recommends a WAF deployment for public-facing IPs that have an associated network security group with open inbound web ports (80 and 443). Azure Security Center can provision an Application Gateway WAF for an existing Azure resource, or add a new resource to an existing web application firewall. By integrating with WAF, Azure Security Center can analyze its logs and surface important security alerts.

In some cases, the security admin may not have the resource permissions to provision WAF from ASC, or the application owner may have already configured WAF as part of the app deployment. To accommodate these scenarios, we are pleased to announce that Azure Security Center will now automatically discover Microsoft WAF instances that were not provisioned through ASC. Previously provisioned WAF instances will be displayed in the ASC security solutions pane under discovered solutions, where the security admin can integrate them with Azure Security Center. Connecting existing Microsoft WAF deployments allows customers to take advantage of ASC detections regardless of how WAF was provisioned. Additional configuration settings, such as custom firewall rule sets, are available in the WAF console, which is linked directly from Security Center. This article on configuring Microsoft WAF provides more guidance on the provisioning process.


We would love to hear your feedback! If you have suggestions or questions, please leave a comment at the bottom of the post or reach out to ascpartnerssupport@microsoft.com.

Interested in learning more about Azure Security Center?

Intro to Azure Security Center

Azure Security Center FAQ

Cortana coming to more devices in 2018 through Devices SDK and new reference designs


2017 was an exciting and productive year for Cortana. From launching the Harman Kardon Invoke intelligent speaker to releasing our Skills Kit and introducing many new features and capabilities, we continued to grow the Cortana ecosystem to help people get things done across work and life.

To kick off 2018, we’re making it easier than ever for OEMs and developers to build for Cortana with the Cortana Devices SDK. Just last week, Johnson Controls announced the new Cortana-enabled JCI Glas thermostat, built using the Cortana Devices SDK. We’ve also partnered with industry leaders including Allwinner, Synaptics, TONLY and Qualcomm to develop reference designs for new Cortana experiences. Sign up today to start building with the Devices SDK and bring Cortana to your own designs, or leverage one of the many reference designs from our Cortana Device Program partners. Additionally, skills created using the Cortana Skills Kit are available on devices built with the SDK, allowing users to seamlessly access their favorite Cortana skills across all devices.

With the support of both existing and new partnerships, we’re continuing to bring Cortana to even more places in the office, at home and on the go. Regardless of the device or context, our goal is to put Cortana everywhere you need assistance, whether that is on your PC, phone, Xbox, mixed reality headsets, intelligent home speakers, thermostats and even more in the future. You’ll continue to see Cortana integrated on your favorite devices and services throughout the year to come.

More on the reference designs from our Device Program partners:

Allwinner

The Allwinner Tech R16 Quad Core IoT solution is now available with Cortana, enabling partners to build voice-first IoT devices. The reference designs come with the software integration needed to help device partners deliver high-quality Cortana devices while reducing cycle time and overhead.

Synaptics

Synaptics is an industry leader in far-field voice processing for consumer IoT, smart speakers, PCs and beyond. Synaptics provides Cortana and Skype certified solutions that greatly reduce development time-to-market and provide a high-quality user experience. Learn more about Synaptics designs here.

TONLY

TONLY has collaborated closely with Microsoft to design, develop and manufacture Cortana devices that optimize Skype audio requirements.

Qualcomm

Qualcomm and Microsoft are excited to enable partners to build Mesh Networking and Smart Audio products that bring Cortana value to customers, through reference designs on the Qualcomm® Smart Audio Platform and Qualcomm® Mesh Networking Platform. Learn more about Qualcomm designs here.

The post Cortana coming to more devices in 2018 through Devices SDK and new reference designs appeared first on Building Apps for Windows.

In case you missed it: December 2017 roundup


In case you missed them, here are some articles from December of particular interest to R users.

Hadley Wickham's Shiny app for making eggnog.

Using R to analyze the vocal range of pop singers.

A video tour of the data.table package from its creator, Matt Dowle.

The European R Users Meeting (eRum) will be held in Budapest, May 14-18.

Winners of the ASA Police Data Challenge student visualization contest.

An introduction to seplyr, a re-skinning of the dplyr package to a standard R evaluation interface.

How to run R in the Windows Subsystem for Linux, along with the rest of the Linux ecosystem.

A chart of Bechdel scores, showing representation of women in movies over time.

The British Ecological Society's Guide to Reproducible Science advocates the use of R and Rmarkdown.

Eight modules from the Microsoft AI School cover Microsoft R and SQL Server ML Services.

And some general interest stories (not necessarily related to R):

As always, thanks for the comments and please send any suggestions to me at davidsmi@microsoft.com. Don't forget you can follow the blog using an RSS reader, via email using blogtrottr, or by following me on Twitter (I'm @revodavid). You can find roundups of previous months here.
