
Everyone should get a Dashcam


A clean dashcam installation

I've put dashcams in both my car and my wife's car. They've already captured two accidents: one where I was rear-ended and one where someone fell asleep at the wheel a few cars ahead of me on the freeway.

After these two experiences, I will never drive a car without a dashcam again. Case in point: being rear-ended. I was at a red light, it turned green, and as I accelerated I got nailed from behind, pushing me into the intersection. The gent jumped out and started yelling and waving his arms, saying I backed up (!), and I said, "I'm sorry, but I've got a dashcam both front and back." He got really quiet, and then we exchanged information. When I called the insurance company on Monday and told them I had not only dashcam footage but footage stamped with the date and time, GPS coordinates, and speed, in 1080p both front and back, including the face and license plate of the other driver...I had a check that Thursday afternoon.

I was driving at night on I-5 from Seattle to Portland and noticed a truck two or three (long) car lengths ahead of me start to drift, drift, drift off to the side...and then suddenly jerk hard to the left, cross all lanes of traffic, and slam into the median in a shower of sparks, eventually coming to rest on top of the center barrier. While I wasn't involved in the accident, I pulled over and Dropboxed the video to the cops right there. The officer on duty said that dashcam footage made things 100% easier.

A cropped and somewhat compressed version of this video is embedded below, and also linked here. Now, it was late at night and I've cropped it, but you can see the car get "sleepy" and slowly float across all lanes to the right, hit the right side, then overcompensate and hit the center. This contradicted the driver's statement that he was hit by another car.

Disclaimer: This is older DR650 footage in the dead of night that's been cropped to remove identifying info. Check out this example Dashcam footage of a DR750 for a better sense of what to expect.

I've put Blackvue dashcams in both our cars. I put a Blackvue DR750S-2CH with a Power Magic in my car. The Power Magic powers the dashcam while the car is parked, capturing anything that happens even when the car is off, and it shuts itself off if it detects that it's discharging the 12V battery below a set voltage. I like the DR750 because it's 60fps 1080p on the front, and it can optionally buffer the video to memory so it's not beating on the SD card and shortening its life. It also has g-force and impact sensors, so as you get in the car it'll say (literally speak) "an impact was detected while in parking mode."

My wife didn't care about these more advanced features so she got the Blackvue DR650S-2CH. It's last year's model but still does 1080p front and back. There's a main wire that handles power for the main unit (either from a 12V cigarette lighter or the Power Magic), then there's a long, long wire that you'll fish through the plastic panels of your car to power and run the back camera.

It only took me about two hours per car to install the cameras, and installation consisted mostly of hiding wires in the existing plastic panels and pushing them out of sight. The final look is very clean and requires zero maintenance.

The camera has wifi built-in and there's a free app to download. You connect your phone (whenever necessary) to the camera's wifi and download videos as needed. That's why it was super easy for me to Dropbox the footage without connecting to a PC. That said, there are Blackvue desktop apps that will show you maps with your position and speed and allow you to stitch footage together. You can also stamp date, time, speed, and custom text to the footage so it's embedded in the resulting MP4s.

I've had zero issues with my dashcams, and as I said, I'm sold. It's a no-brainer and frankly, it should be built into every car. I'll be installing a dashcam in whatever car my soon-to-be teenager drives, count on it.

Maybe you won't get into an accident (hopefully!) but you could catch a meteor on your dashcam!

* I use Amazon links to products. When you use them, you're supporting this blog! Thanks!


Sponsor: Get the latest JetBrains Rider for debugging third-party .NET code, Smart Step Into, more debugger improvements, C# Interactive, new project wizard, and formatting code in columns.



© 2017 Scott Hanselman. All rights reserved.
     

Brand Detection in Microsoft Video Indexer


We are delighted to announce a new capability in Microsoft Video Indexer: Brand Detection from speech and from visual text! If you are not yet familiar with Video Indexer, you may want to take a look at a few examples on our portal.

Having brands in the video index gives you insight into the names of products and organizations that appear in a video or audio asset, without your having to watch it. In particular, it enables you to search over large amounts of video and audio. Customers find Brand Detection useful in a wide variety of business scenarios, such as content archiving and discovery, contextual advertising, social media analysis, retail competitive analysis, and many more.

Out-of-the-box brand detection

Let us take a look at an example. In this Microsoft Build 2017 Day 2 presentation, the brand "Microsoft Windows" appears multiple times: sometimes in the transcript, sometimes as visual text, and never verbatim. Video Indexer detects with high precision that a term is indeed a brand based on its context, covering over 90,000 brands out of the box, with the list constantly updated. At 02:25, Video Indexer detects the brand from speech, and then again at 02:40 from visual text that is part of the Windows logo.


Talking about windows in the context of construction will not cause the word "windows" to be detected as a brand, and the same goes for Box, Apple, Fox, and so on; advanced machine learning algorithms disambiguate brands from ordinary words based on context. Brand Detection works for all our supported languages. Click here for the full Microsoft Build 2017 Day 2 keynote video and index.

Bring your own brands

We already have mining algorithms in place that discover new brands, and we will be updating the brand catalog regularly. However, we also allow customization, which includes adding brands to and excluding brands from the index. The brand customization screen can be reached from the customization button in the upper-right corner of the VI portal main page. Let us add 'Mod Pizza' as a brand to be included.


The result of indexing after the addition of the new custom brand can be seen in the customized index linked below.


Click here for the full Microsoft Build 2017 Day 2 keynote video and customized index.

Using the Brand Customization API

With the Brands Web API (under Api > Partner > Customization), you can customize brand detection. You can enable and disable the detection of out-of-the-box brands, as well as add your own custom brands for VI to detect or ignore.

To add your own brand for detection or filtering, all you need is the brand name. It is recommended to also add its Wikipedia page, if one exists, for improved results. You can also specify the categories to which the brand belongs. From that moment forward, every video that is indexed will account for your customization.
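As a purely illustrative sketch, adding a custom brand over REST with Python's requests library might look roughly like the following. The endpoint URL, header, and field names below are placeholders inferred from the description above, not the documented API surface; check the VI developer portal for the exact route and payload.

import requests

# All values below are placeholders -- substitute your real endpoint, key, and brand details.
BRANDS_API = "https://<your-video-indexer-endpoint>/Brands"   # hypothetical route, not the documented one
SUBSCRIPTION_KEY = "<your-api-key>"

brand = {
    "name": "Mod Pizza",                                        # the brand name is the only required field
    "referenceUrl": "https://en.wikipedia.org/wiki/MOD_Pizza",  # optional: Wikipedia page for better results
    "tags": ["Food", "Restaurants"],                            # optional: categories the brand belongs to
    "enabled": True,                                            # True = detect, False = ignore
}

response = requests.post(
    BRANDS_API,
    headers={"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY},    # typical Azure API Management auth header
    json=brand,
)
response.raise_for_status()
print(response.json())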

Conclusion

Brand Detection is a new capability in Video Indexer, which enables you to index brand mentions in speech and visual text, based on a large built-in brands catalog as well as with customization. Brands are disambiguated from other terms using context.

Please visit our API documentation on the VI developer portal for more details on how to use brand detection in VI.

Welcoming Progressive Web Apps to Microsoft Edge and Windows 10


A little over a year ago, we outlined our vision to bring Progressive Web Apps (PWAs) to the more than half a billion devices running Windows 10. We believe PWAs are key to the web’s future, and couldn’t be more excited about their potential to enable more immersive web app experiences across all device form factors.

Today, we’re excited to take a major step from vision to reality, starting with some updates on previewing PWAs in Windows and our roadmap to bring PWAs to the Microsoft Store.

Beginning with EdgeHTML 17.17063, we have enabled Service Workers and push notifications by default in preview builds of Microsoft Edge—you can learn more about those features in Ali’s post, “Service Worker: Going beyond the page.” This completes the suite of technologies (including Fetch networking and the Push and Cache APIs) that lays the technical foundation for PWAs on Windows 10.

Over the coming weeks, we’re also kicking off some experiments with crawling and indexing quality PWAs from the Web to list them in the Microsoft Store, where users can find them just like any other app on Windows 10.

In this post, we’ll give a quick introduction to Progressive Web Apps – what they are, the problems they solve, and how we’ll be enabling them across Windows 10. We’ll explore how our indexing experiments will ramp to an end-to-end PWA discovery experience later this year, and how we’ll empower developers to differentiate their PWAs on Windows – including allowing developers to claim and monetize their PWAs in the Store, interact with customers via reviews and telemetry, and enhance the app with WinRT capabilities.

Let’s dive in!

What’s a Progressive Web App, anyway?

The excitement about PWAs in the developer community is almost palpable – but amongst all that excitement, it can be hard to pin down a single, concise, authoritative definition of a “Progressive Web App.” For the purposes of this discussion, here’s how we define a PWA:

Progressive Web Apps are just great web sites that can behave like native apps—or, perhaps, Progressive Web Apps are just great apps, powered by Web technologies and delivered with Web infrastructure.

Technologically speaking, PWAs are web apps, progressively enhanced with modern web technologies (Service Worker, Fetch networking, Cache API, Push notifications, Web App Manifest) to provide a more app-like experience.

Unlike a “packaged” web app experience, PWAs are hosted on your servers and can be updated without issuing new updates to an app store. Additionally, new web standards (such as Service Worker) enable interoperable ways to implement push notifications, support for offline scenarios, background refreshing, and more, without platform-specific code.

It’s beyond the scope of this post to give a full crash course in the component technologies of a PWA (for that, we highly encourage you to check out Progressive Web Apps on MDN for a starter). But at a high level, these features are built to enable native-like capabilities – offline, background wake/refresh, instant loading, push notifications, and installability.

Progressive Web Apps in Microsoft Edge and Windows 10

So what about PWAs in Microsoft Edge and Windows 10?

We’ve announced before in several venues that we’re all-in on PWAs. In fact, as hinted above, we want to take PWAs on Windows to the next level by making them first-class app citizens in Windows. This follows from our general philosophy that the web platform, powered by EdgeHTML, is a core part of the Universal Windows Platform on Windows 10. Because of this, any device running EdgeHTML 17 gets full access to the technologies and characteristics of Progressive Web Apps.

On other platforms, PWAs primarily originate from inside the browser, and can escape the browser in response to various prompts or menu options. We’re taking things one step further on Windows! Because a PWA can be a first-class citizen in the Microsoft Store, a user will be able to engage fully with an installed PWA—from discovery, to installation, to execution—without ever opening the browser.

Just for kicks, here is @davatron5000's @godaytrip as a #PWA on a preview build of Windows 10! (inspired by: https://t.co/Flm63mmu6K) pic.twitter.com/t2Kr5MlTOX

— Kirupa (@kirupa) February 1, 2018

On the other hand, in the browser context, all the benefits of being a PWA should still accrue to the web site, empowering the user to choose how and where they want to engage with the experience.

Progressive Web Apps in the Microsoft Store

The first and most obvious distinction here is that we believe PWAs should be discoverable everywhere apps are discoverable – this means they should appear in the Microsoft Store alongside native apps.

In the next release of Windows 10, we intend to begin listing PWAs in the Microsoft Store. Progressive Web Apps installed via the Microsoft Store will be packaged as an appx in Windows 10 – running in their own sandboxed container, without the visual or resource overhead of the browser.

This has a number of benefits to users: PWAs installed via the store will appear in “app” contexts like Start and Cortana search results, and have access to the full suite of WinRT APIs available to UWP apps. They can differentiate their experience on Windows 10 with enhancements like access to local calendar and contacts data (with permission) and more.

It also has exciting benefits to developers! Listing a PWA in the Store gives developers the opportunity to get more insight into their users with channels like reviews and ratings in the Store, analytics on installs, uninstalls, shares, and performance, and more. It also provides more natural and discoverable access to your web experience on devices where the browser is a less natural entry point, such as Xbox, Windows Mixed Reality, and other non-PC form factors.

The road from the Web to the Microsoft Store

PWAs provide a natural signal of intent to be treated as “app-like” in the Web App Manifest, which allows us to leverage Bing’s web crawler in combination with our Store catalog to identify the best candidates for indexing.

The Microsoft Store has a two-pronged approach to publishing Progressive Web Apps:

  1. Developers can proactively submit Progressive Web Apps to the Microsoft Store
  2. The Microsoft Store, powered by the Bing crawler, will automatically index selected quality Progressive Web Apps

Submitting to the Microsoft Store with PWA Builder

Proactively submitting a PWA to the Microsoft Store requires generating an AppX containing your PWA and publishing it to your Dev Center account.

The easiest way to generate an AppX with your PWA is the free PWA Builder tool. PWA Builder can generate a complete AppX for publishing using your existing site and Web App Manifest – both website and CLI options are available.

PWA Builder logo

PWA Builder takes data from your site and uses that to generate cross-platform Progressive Web Apps.

Publishing manually gives you full access to the benefits above—fine-grained control over how your app appears in the Microsoft Store, access to and the ability to respond to feedback (reviews and comments), insights into telemetry (installs, crashes, shares, etc.), and the ability to monetize your app. This also gets you access to all the other benefits of the Microsoft Dev Center, including promotion and distribution in the Microsoft Store for Business and the Microsoft Store for Education.

Automatically indexing quality Progressive Web Apps with the Bing Crawler

We’ve been using the Bing Crawler to identify PWAs on the web for nearly a year, and as we’ve reviewed the nearly 1.5 million candidates, we’ve identified a small initial set of Progressive Web App experiences which we’ll be indexing for Windows 10 customers to take for a spin over the coming weeks.

Diagram with three steps, reading: 1. Crawl and Index. 2. Convert to APPX. 3. Searchable/Browseable in Store.

We will crawl and index selected PWAs from the web to be available as apps in the Microsoft Store

Over the coming months, we’ll be ramping up our automatic indexing in the Microsoft Store from a few initial candidates to a broader sample. Throughout this process, we’ll continue to vet our quality measures for PWAs, to make sure we’re providing a valuable, trustworthy, and delightful experience to our mutual customers on Windows devices.

Whether automatically indexed by the Store or manually submitted by the site owner, the Web App Manifest provides the starting set of information for the app’s Store page: name, description, icons, and screenshots. Developers should aim to provide complete and high-quality information in the manifest. Once in the Store, the publisher will have the option of claiming their apps to take complete control of their Store presence.

Quality signals for Progressive Web Apps

We’re passionate about making the Microsoft Store a home to trustworthy, quality app experiences. With that in mind, we’ve identified a set of quality measures for developers to keep in mind as you build PWAs.

We won’t ingest every app that meets these criteria, but will be including them in our considerations for candidates as we gradually expand our program.

  • Web App Manifests should suggest quality: In our initial crawl of sites looking for PWAs, we discovered over 1.5 million manifests across 800k domains. Looking at a selection of these sites, we discovered that not all are good candidates for ingestion. Some aren’t PWAs at all, and others have a boilerplate manifest generated by tools like favicon generators. We will be looking for non-boilerplate manifests that include a name, a description, and at least one icon that is larger than 512px square (a minimal sketch of such a check appears after this list).
  • Sites should be secure: Access to the Service Worker family of APIs requires an HTTPS connection on Windows and other platforms.
  • Service Workers should be an enhancement: We’ll look for a Service Worker as a signal for ingesting PWAs, but we also expect experiences to degrade gracefully if Service Worker is unsupported, as it may be on older browsers or other platforms. You can get started building a basic Service Worker with PWA Builder; Mozilla also has great recipes if you are looking for somewhere to start.
  • Sites should consider automated testing for quality: There are a number of tools out there for this, including our sonarwhal, Lighthouse, aXe, and more.
  • PWAs must be compliant with Microsoft Store policies: PWAs will need to meet the standards of the Microsoft Store, just like any other app. We will not ingest PWAs that violate laws or Store policies.
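As a purely illustrative sketch of the first criterion above (not Microsoft's actual ingestion logic), a minimal check of a parsed Web App Manifest for a name, a description, and a 512px-or-larger icon could look like this in Python; the sample manifest values are hypothetical:

def looks_like_quality_manifest(manifest: dict) -> bool:
    # Reject bare/boilerplate manifests that lack a name or description.
    if not manifest.get("name") or not manifest.get("description"):
        return False
    # Require at least one icon that is 512px square or larger.
    for icon in manifest.get("icons", []):
        for size in icon.get("sizes", "").split():      # e.g. "192x192 512x512"
            try:
                width, height = (int(n) for n in size.lower().split("x"))
            except ValueError:
                continue                                 # skip values like "any"
            if width >= 512 and height >= 512:
                return True
    return False

# A minimal manifest (hypothetical values) that would pass the check.
manifest = {
    "name": "Contoso Notes",
    "description": "Take and sync notes, online or offline.",
    "icons": [{"src": "/icons/app-512.png", "sizes": "512x512", "type": "image/png"}],
}
print(looks_like_quality_manifest(manifest))             # True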

Once we have shipped these technologies to mainstream Windows customers with EdgeHTML 17, we will gradually expand our indexing of high-quality Progressive Web Apps into the Microsoft Store based on quality measures and the value they add to the Windows ecosystem.

PWA or UWP?

Given the overlap in terms of capabilities, we often get asked about the recommended approach: PWA or UWP. We see this as a false dichotomy! In fact, on Windows 10, the Universal Windows Platform fully embraces Progressive Web Apps, because EdgeHTML is a foundational component of UWP.

For developers who are building a fully-tailored UWP experience, building from the ground up with native technologies may make the most sense. For developers who want to tailor an existing web codebase to Windows 10, or provide a first-class cross-platform experience with native capabilities and enhancements, PWA provides an on-ramp to the Universal Windows Platform that doesn’t require demoting or forking existing web resources.

When evaluating native app development in relation to Progressive Web Apps, here are some of the questions we recommend asking:

  • Are there native features the Web can’t offer that are critical to the success of this product?
  • What is the total cost (time and money) of building and maintaining each platform-specific native app?
  • What are the strengths of my dev team, and how easy will it be to assemble a new team with the necessary skills to build each native app as opposed to a PWA?
  • How critical will immediate app updates (e.g., adding new security features) be?

In other words, the choice between PWA and native should be evaluated on a case-by-case basis. For example:

  • If you are looking to craft an experience that takes full advantage of each platform you release it on and you want to agonize over every UX detail in order to differentiate your product… native might be the best choice for you.
  • If you are maintaining a product on multiple native platforms in addition to the Web and they are all largely the same in terms of look & feel and capabilities, it may make more sense to focus all of your efforts on the Web version and go PWA.
  • If you are planning a brand-new product and the Web provides all of the features you need (especially when you also consider the additional APIs provided via the host OS), building a PWA is probably going to be a faster, more cost-effective option.

For a more in-depth discussion, check out our video from Microsoft Edge Web Summit 2017: PWA, HWA, Electron, oh my! Making sense of the evolving web app landscape.

Testing your Progressive Web Apps in Microsoft Edge and Windows 10

Service Worker, Push, and other technologies are enabled by default in current Insider builds of Microsoft Edge, and we intend to enable them by default when EdgeHTML 17 ships to stable builds of Windows 10 later this year.

You can get started testing your PWA in Microsoft Edge today by downloading a recent build of Windows 10 via the Windows Insider Program, or using a free VM. We’ll be sharing more about Service Worker debugging features in the Microsoft Edge DevTools in a future post—stay tuned!

Service Worker features will be enabled for the UWP platform (including installed PWAs) with the upcoming release of Windows 10, but are currently not available to published apps in the Store, including on Windows Insider Preview builds. In the meantime, you can test them in Insider builds by sideloading your AppX using the install script provided by PWA Builder tools, or by running your PWA inside Microsoft Edge.

What’s next for Progressive Web Apps on Windows?

Over the coming months, we’re laser-focused on polishing our initial implementation of the core technologies behind PWAs in EdgeHTML and the Universal Windows Platform. Service Worker, Push, Web App Manifest, and especially Fetch are foundational technologies with a potentially dramatic impact on the compatibility and reliability of existing sites and apps, so real-world testing with our Insider population is paramount.

In our initial implementation, we’ll be focused on those two components—the Service Worker family of technologies in Microsoft Edge, and PWAs in the Microsoft Store. Looking forward, we’re excited about the potential of PWA principles to bring the best of the web to native apps, and the best of native apps to the web through tighter integrations between the browser and the desktop. We look forward to hearing your feedback on our initial implementation and experimenting further in future releases.

In the meantime, we encourage you to try out your favorite PWAs in Microsoft Edge today, and get started testing your installable PWA on Windows, both via PWA Builder and in Microsoft Edge! We look forward to hearing your feedback and to digging in to any bugs you may encounter.

Here’s to what’s next!

Kyle, Kirupa, Aaron, and Iqbal

The post Welcoming Progressive Web Apps to Microsoft Edge and Windows 10 appeared first on Microsoft Edge Dev Blog.

A new experiment: Browser-based web apps with .NET and Blazor


Today I’m excited to announce a new experimental project from the ASP.NET team called Blazor. Blazor is an experimental web UI framework based on C#, Razor, and HTML that runs in the browser via WebAssembly. Blazor promises to greatly simplify the task of building fast and beautiful single-page applications that run in any browser. It does this by enabling developers to write .NET-based web apps that run client-side in web browsers using open web standards.

If you already use .NET, this completes the picture: you’ll be able to use your skills for browser-based development in addition to existing scenarios for server and cloud-based services, native mobile/desktop apps, and games. If you don’t yet use .NET, our hope is that the productivity and simplicity benefits of Blazor will be compelling enough that you will try it.

Why use .NET for browser apps?

Web development has improved in many ways over the years but building modern web applications still poses challenges. Using .NET in the browser offers many advantages that can help make web development easier and more productive:

  • Stable and consistent: .NET offers standard APIs, tools, and build infrastructure across all .NET platforms that are stable, feature rich, and easy to use.
  • Modern innovative languages: .NET languages like C# and F# make programming a joy and keep getting better with innovative new language features.
  • Industry-leading tools: The Visual Studio product family provides a great .NET development experience on Windows, Linux, and macOS.
  • Fast and scalable: .NET has a long history of performance, reliability, and security for web development on the server. Using .NET as a full-stack solution makes it easier to build fast, reliable and secure applications.

Browser + Razor = Blazor!

Blazor is based on existing web technologies like HTML and CSS, but you use C# and Razor syntax instead of JavaScript to build composable web UI. Note that it is not a way of deploying existing UWP or Xamarin mobile apps in the browser. To see what this looks like in action, check out Steve Sanderson’s prototype demo at NDC Oslo last year. You can also try out a simple Blazor app running in Azure.

Blazor will have all the features of a modern web framework including:

  • A component model for building composable UI
  • Routing
  • Layouts
  • Forms and validation
  • Dependency injection
  • JavaScript interop
  • Live reloading in the browser during development
  • Server-side rendering
  • Full .NET debugging both in browsers and in the IDE
  • Rich IntelliSense and tooling
  • Ability to run on older (non-WebAssembly) browsers via asm.js
  • Publishing and app size trimming

WebAssembly changes the Web

Running .NET in the browser is made possible by WebAssembly, a new web standard for a “portable, size- and load-time-efficient format suitable for compilation to the web.” WebAssembly enables fundamentally new ways to write web apps. Code compiled to WebAssembly can run in any browser at native speeds. This is the foundational piece needed to build a .NET runtime that can run in the browser. No plugins or transpilation needed. You run normal .NET assemblies in the browser using a WebAssembly based .NET runtime.

Last August, our friends on Microsoft’s Xamarin team announced their plans to bring a .NET runtime (Mono) to the web using WebAssembly and have been making steady progress. The Blazor project builds on their work to create a rich client-side single page application framework written in .NET.

A new experiment

While we are excited about the promise Blazor holds, it’s an experimental project, not a committed product. During this experimental phase, we expect to engage deeply with early Blazor adopters to hear your feedback and suggestions. This time allows us to resolve technical issues associated with running .NET in the browser and to ensure we can build something that developers love and can be productive with.

Where it’s happening

The Blazor repo is now public and is where you can find all the action. It’s a fully open source project: you can see all the development work and issue tracking in the public repo.

Please note that we are very early in this project. There aren’t any installers or project templates yet and many planned features aren’t yet implemented. Even the parts that are already implemented aren’t yet optimized for minimal payload size. If you’re keen, you can clone the repo, build it, and run the tests, but only the most intrepid pioneers would attempt to write app code with it today. If you are that intrepid pioneer, please do dig into the sources. Feedback and suggestions can be provided through the Blazor repo issue tracker. In the months ahead, we hope to publish pre-alpha project templates and tooling that will let a wider audience try it out.

Please also check out the Blazor FAQ to learn more about the project.

Thanks!

Vcpkg: introducing installation options with Feature Packages


We are happy to announce a new feature for vcpkg in version 0.0.103: Feature Packages.

Vcpkg is a package manager that helps you acquire and build open-source libraries on Windows; vcpkg currently offers over 600 C++ libraries for VS2017 and VS2015.

With Feature Packages, you have more control over how you build a library, because you can specify different options (features). Lots of open-source libraries offer different options and features to select at build time. For example, you may want to build OpenCV with CUDA to utilize the GPU, or build HDF5 with MSMPI to enable parallel execution. Previously, you needed to edit the port file to build with a given set of options. With feature packages, these options can be specified easily at installation time.

How to use feature packages

We support optional packages via this syntax: vcpkg install library[feature]

> vcpkg install hdf5              (install without parallel support)
> vcpkg install hdf5[parallel]    (install with parallel support)

hdf5 now exposes options, so the search command will display more information:


> vcpkg search hdf5
hdf5 1.10.1-1 HDF5 is a data model, library, …
hdf5[parallel] parallel support for HDF5

Now hdf5 has been installed with the parallel option activated, so the list command will display more information as well:

> vcpkg list hdf5
hdf5:x86-windows 1.10.1-1 HDF5 is a data model, …
hdf5[parallel]:x86-windows with parallel support for HDF5

Note that each feature package will be listed on a separate line.

Behind the scenes

All the feature packages for a given library are listed in the CONTROL file.

This file also lists the dependencies for each feature package.

For example:

Source: hdf5
Version: 1.10.1-1
Description: HDF5 is a data model, library, and file format for …
Build-Depends: zlib, szip


Feature: parallel
Description: parallel support for HDF5
Build-Depends: msmpi

A library can support any number of features.

Having feature packages enabled means more subtleties when you remove or update a package. We worked hard to find the right algorithm to address a collection of edge cases. For example, installing a feature may imply installing additional dependencies, or rebuilding those dependencies in a certain way.

> vcpkg install hdf5[parallel]
The following packages will be built and installed:
hdf5[core,parallel]:x86-windows
* msmpi[core]:x86-windows
* szip[core]:x86-windows
Additional packages (*) will be modified to complete this operation.

Vcpkg maintainers, you can now add options to your port file

With feature packages, you now have the ability to define different options when creating a port file. Now is the right time to update your port files and enable options for your libraries.

In order to update your port file, you can react to a feature using the CMake directive:

if(featurename IN_LIST FEATURES)

Here are some resources:

Activate parallel in HDF5

Cuda and options for OpenCV

VTK with Python

Documentation

We have updated our documentation about feature packages.

Thanks to Daniel Shaw for his effort and dedication to implement this feature with us.

As always, your feedback and comments really matter to us; open an issue on GitHub or reach out to us at vcpkg@microsoft.com with any comments and suggestions.

The AI Show: Data Science Virtual Machine


The Data Science Virtual Machine was featured on a recent episode of the AI Show with Seth Juarez and Gopi Kumar. If you want a quick and easy way to spin up a virtual machine with all of the data science tools you'll ever need — including R and RStudio — already installed and ready to go, this video explains what the Data Science Virtual Machine is used for and (at 21:00) how to launch one in the Azure portal.

The Data Science Virtual Machine is available in both Windows and Linux flavors. Even if you don't yet have an Azure subscription, you can test drive the DSVM for free.

Toward a More Intelligent Search: Bing Multi-Perspective Answers

In December, we launched several new Intelligent Answers that go beyond the traditional Q&A style of search and offer answers to more complicated questions. We received so much interest in providing multiple perspectives to an answer that we wanted to share more details on how it came about and what we are doing to make it possible.

I’m a big yoga fan and have been practicing it for years. A few months ago, I decided to try hot yoga for the first time. When I got to the studio, I noticed that the temperature was way hotter than expected, and I forgot to bring a water bottle.
 
The class was amazing, but I felt dizzy and lightheaded. I couldn’t help but wonder… is hot yoga good for me?
 
I decided to Bing my question to learn more about it. At the very top, Bing had the following answer:

“Why Hot Yoga? There are countless benefits for those who incorporate hot yoga in their lives. We like to think of it as a ‘work in’ as opposed to a ‘work out’. The reason being that someone who practices yoga consistently will notice the changes on the outside occur first, such as weight loss, muscle toning and even clear, radiant skin.”

This was good. I quickly received an answer from a relevant site confirming that hot yoga is healthy. But I knew there might be some drawbacks to hot yoga as well. I repeated my search as {is hot yoga bad for you} and got another answer, this time showing that hot yoga could be dangerous.

That’s when it struck me: There are many questions that don’t have just one answer, but multiple valid perspectives on a given topic.  Should I repeat my search with the word “bad” or “good” in it every time I wanted to get a comprehensive picture of a topic and hear the other side?  How would I even know how and when to do that?  Should I assume that this single answer Bing returned for me was the best or the only answer? Is that the most authoritative page to answer my question?
 
 

How do we solve this problem?


Search results are often limited to a list of blue links, sometimes with one link and a snippet of text highlighted at the top. But there are many questions where getting just one point of view is not sufficient, convenient, or comprehensive.

Many times, you want to hear different perspectives, points of view and opinions about a question you have. Today, when you use a search engine to find answers to your questions, your experience can quickly turn into an echo chamber because search results often cover just one side of the spectrum of possible answers to your questions, which can lead to “confirmation bias.”

As announced at the Microsoft AI event in December 2017, we believe that your search engine should inform you when there are different viewpoints to answer a question you have, and it should help you save research time while expanding your knowledge with the rich content available on the Web. Therefore, we built a new experience for you.
 
Try “is hot yoga good for you” today and you will now see both perspectives.



Clicking on the right-hand side helped me learn that I need to drink lots of water, stretch properly and cover the yoga mat with a towel to avoid breeding germs on it.
 

How does Multi-Perspective QnA work?


Based on processing billions of queries and web pages, Bing intelligently understands when your question or topic of interest has various valid perspectives, and it can show you snippets explaining those opinions, drawn from passages or short blurbs of text extracted from many different websites. Thus, we can now keep you well informed on topics that have opposing viewpoints. Using several different models in combination, we prioritize reputable content from authoritative, high-quality websites that are relevant to the subject in question, have easily discoverable content, and have minimal to no distractions on the site.
 
When you issue a search query like “is coffee good for you”, passage candidates from web pages are selected by our Web Search and Question Answering engine. After that, we build clusters over the passages to determine similarity and sentiment using deep recurrent neural network (Deep RNN) models. Lastly, we rank the most relevant passages from each cluster, based on sentiment analysis and several other features, to deliver the most relevant results from web sources. In short: select candidate passages, cluster them, then rank within each cluster.


 
One of the key challenges in this project was building the Deep RNN, since existing state-of-the-art sentiment classifiers were limited in their ability to extract the correct sentiment from web passages. For example, take the passage below:
 
"You could burn more fat. Caffeine is found in almost every over-the-counter fat-burning supplement commercially available today. And for good reason. It's been shown to increase metabolism by 3 to 11 percent, and to increase the burning of fat from 10 to 29 percent, depending on your body type.”
 
Existing classifiers would report that the passage had a negative sentiment since there are words such as “burn” and “fat”. However, our deep neural network-based models are specifically built to handle web data. As a result, they correctly detect that the passage above has a positive sentiment: it describes the benefits of coffee.
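To make that limitation concrete, here is a toy, purely illustrative word-counting classifier in Python (nothing like the Deep RNN models described above, and the lexicons are invented for the example). Fed the passage above, it reports a negative sentiment simply because words like "burn" and "fat" appear in its negative lexicon:

# Toy lexicons for illustration only -- real classifiers are far more sophisticated.
NEGATIVE_WORDS = {"burn", "burning", "fat", "dizzy", "dangerous"}
POSITIVE_WORDS = {"good", "great", "benefit", "benefits", "healthy"}

def naive_sentiment(text: str) -> str:
    words = [w.strip(".,;:'\"") for w in text.lower().split()]
    score = sum(w in POSITIVE_WORDS for w in words) - sum(w in NEGATIVE_WORDS for w in words)
    return "positive" if score > 0 else "negative"

passage = ("You could burn more fat. Caffeine is found in almost every over-the-counter "
           "fat-burning supplement commercially available today. And for good reason. "
           "It's been shown to increase metabolism by 3 to 11 percent, and to increase "
           "the burning of fat from 10 to 29 percent, depending on your body type.")

# Prints "negative": the surface words dominate, even though the passage describes a benefit of coffee.
print(naive_sentiment(passage))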

So, when you search “is coffee good for you” on Bing, we show you passages as search results that offer you two different perspectives on this topic instead of just one side. This allows you to see this topic from multiple viewpoints, enriching your understanding and allowing you to form your own opinion.
 
Try it today!
 
Not sure if “cholesterol is good”?


 
Should you cut down on your coffee intake?
 


Want to learn why video games are good? Just don’t show this to kids!


 
This is just the beginning. We will expand this functionality to address many more questions you have, increase coverage, and expand beyond the US, starting with the United Kingdom in the next few months.
 
At Bing, we apply cutting-edge AI technology to help you easily find the best variety of viewpoints on the Web for the topics that matter the most to you. If you want to learn about something new or need to find several opinions about a topic, Bing will be there to help you every step of the way.

Try it today and share your feedback directly from www.bing.com!

Happy 2018!
-Mir Rosenberg on behalf of the Bing Multi-Perspective Team
 

Azure Search service upgrades: New hardware, unlimited document counts, and more!


Today we are happy to announce performance upgrades to all paid service tiers in Azure Search. For the exact same price, these upgraded Azure Search services have roughly double the compute power of the previous hardware configuration that backed Azure Search. Additionally, services in the Standard tier began using SSD storage under the hood, compared to HDD storage used previously.

What does this mean for Azure Search?

With these service upgrades, we removed the document count limits from the Basic and Standard pricing tiers. This means that only storage limits are enforced in new Azure Search services. Depending on a scenario’s workload, these upgraded services may also benefit from faster indexing and querying performance at the same exact price points.

The Basic tier now supports an increase in the number of indexes from 5 to 15. Also in the Basic tier, Azure Search can now support up to 15 data sources and indexers per service. For the Standard 3 High Density pricing tier (great for multitenant scenarios), we were able to remove the 200 million document per partition limit, only enforcing per-index limits.

Unlimited document counts

These upgrades allow Azure Search to no longer enforce document count limits on Basic and Standard services. However, storage limits are still enforced and our testing and benchmarking have revealed some practical limits to the new hardware:


Practical document counts per partition:

Tier         Small documents (<10 fields, <1KB)   Large documents (50 fields, 50KB)
Standard 1   ~25 million documents                ~1 million documents
Standard 2   ~100 million documents               ~4 million documents
Standard 3   ~200 million documents               ~8 million documents

Specific storage usage depends on several factors such as number of fields, size of the content in the fields, the specific search attributes defined in the index schema, and others.
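If you want to see how an existing index compares against these practical limits, one quick check is the index's document count via the Azure Search REST API. A minimal sketch with Python's requests follows; the service URL, index name, and key are placeholders, and the api-version shown is one from this time frame that may need updating:

import requests

SERVICE_URL = "https://<your-service>.search.windows.net"   # placeholder service URL
INDEX_NAME = "<your-index>"                                  # placeholder index name
API_KEY = "<your-admin-or-query-key>"                        # placeholder key

# $count returns the number of documents currently in the index.
response = requests.get(
    f"{SERVICE_URL}/indexes/{INDEX_NAME}/docs/$count",
    params={"api-version": "2017-11-11"},
    headers={"api-key": API_KEY},
)
response.raise_for_status()
print("Documents in index:", response.text)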

Supported regions

The new hardware configurations and relaxation of limits are available in most regions where Azure Search is offered. Today, the upgraded hardware is available in the following regions: Brazil South, Canada Central, Central India, East US, North Central US, North Europe, South Central US, Southeast Asia, UK South, West Europe, and West US.

Azure Search began using the upgraded hardware for newly created services in these regions beginning in late 2017. To learn more about which limits apply to an existing service, use the Azure portal to view limit information on your service's overview page.

The following regions are still using the old hardware configuration of Azure Search and enforce the previous limits: Australia East, East Asia, Japan West, and West Central US.

Next steps

To create an Azure Search service with the new hardware and limits, just provision a Basic or Standard tier Azure Search service in one of the supported regions. The newly created service will automatically use the upgraded hardware.

Read more about Azure Search and its capabilities and visit our documentation. Please visit our pricing page to learn about the various pricing tiers of Azure Search.


Diagnosing Errors on your Cloud Apps


One of the most frustrating experiences is when you have your app working on your local machine, but when you publish it, it inexplicably fails. Fortunately, Visual Studio provides handy features for working with apps running in Azure. In this blog I’ll show you how to leverage the capabilities of Cloud Explorer to diagnose issues in Azure.

If you’re interested in developing apps in the cloud, we’d love to hear from you. Please take a minute to complete our one question survey.

Prerequisites

– If you want to follow along, you’ll need Visual Studio 2017 with Azure development workload installed.
– This blog assumes you have an Azure subscription and have an App running in Azure App Services. If you don’t have an Azure subscription, click here to sign up for free credits.
– For the purposes of this blog, we’ve developed a simple one-page web app. The source is available here.

Open the solution

If you have your app running on Azure, open the solution in Visual Studio.
Alternatively, clone the source for the sample app and open it in Visual Studio.
Publish the app to Microsoft Azure App Services.

Connect to your Azure subscription with Cloud Explorer

Cloud Explorer is a powerful tool that ships with the Azure development workload in Visual Studio 2017. We can use Cloud Explorer to view and interact with the resources in our Azure subscription.

To view your Azure resources in Cloud Explorer, enable the subscription in the Account Manager tab.
– Open Cloud Explorer (View -> Cloud Explorer)
– Press the Account Management button in the Cloud Explorer toolbar.
– Choose the Azure subscription that you are working with, then press Apply.

Cloud Explorer - Account Manager

Your Azure subscription now appears in the Cloud Explorer. You can toggle the grouping of elements by Resource Groups or Resource Types using the drop-down selector at the top of the window.

View Streaming Logs

When I ran my app after publishing, there was an error. The error message shown on the web page was not very descriptive. So what can I do? How can I get more information about what’s going wrong?

One easy way to diagnose issues on the server is to inspect the application logs. Using Cloud Explorer, you can access the streaming logs of any App Service in your subscription. The streaming logs output a concatenation of all the application logs saved on the App Service. The default log level is “Error”.

To view streaming logs for your application running on Azure App Services:
– Expand the subscription node and select your App Service.
– Click View Streaming Logs in the Actions panel.

Cloud Explorer - View Streaming Logs

The Output window opens with a new log stream from the App Service running on the cloud.

– If you’re using the sample app, refresh the page in the web browser and wait for the page to complete rendering.
This might take ten seconds or more, as the server waits for the fetch operation to time out before returning the result.

You can read the log messages to see what’s happening on the server.

Streaming Logs - Showing Errors

If you switch to Verbose output logging, you see a lot more.

Streaming Logs - Verbose view

Notice the [Error] that appears in the streaming logs: “Exception occurred while attempting to list files on server.”
It doesn’t tell us much, but at least now we can start looking in the ListBlobFiles.StorageHelper for clues.

We know it works locally, so we’ll need to debug the version running on the cloud to see why it’s failing.
For that, we need remote debugging. Once again, Cloud Explorer to the rescue!

Remote Debugging App Service running on Azure

Using Cloud Explorer, you can attach a remote debugger to applications running on Azure. This lets you control the flow of execution by breaking and stepping through the code. It also provides an opportunity to view the value of variables and method returns by utilizing Visual Studio’s debugger tooltips, autos, watches, call stack and other diagnostic tools.

Publish a Debug version of the Web App

Before you can attach a debugger to an application on Azure, there must be a debug version of the code running on the App Service. So, we’ll re-publish the app with Debug release configuration. Then we’ll attach a remote debugger to the app running in the cloud, set breakpoints and step through the code.

• Open the publish summary page (Right-click project, choose “Publish…”)
• Select (or create) the publish profile for your web app
• Click Settings
• When the settings dialog opens, go to the Settings tab.
• In the Configurations drop-down, select “Debug”.
• Save the publish profile settings.
• Press Publish to republish the web app

Publish Debug Configuration

Attach Remote Debugger

You can attach a remote debugger to allow you to step through the code that’s running on your Azure App Service. This lets you see the values of variables and watch the flow of control in your app.

To attach a remote debugger:
• In the Cloud Explorer, select the web app.
• Click Attach Debugger in the Actions panel.

Visual Studio will switch over to Debug mode. Now you can set breakpoints in the code and watch the execution as the program runs.

Set breakpoint and execute the code

If you’re following along from the sample, try this:

• Set breakpoints in the GetBlobFileListAsync() method of the StorageHelper.cs
• Refresh the page in the web browser
• Execution will stop at your first breakpoint.
• Hover your mouse cursor over the _storageConnectionString variable and inspect its value.
• Notice that the connection string is “UseDevelopmentStorage=true”.

Remote Debugging in Visual Studio

Problem found! We’re referencing our local storage emulator (“UseDevelopmentStorage=true”), which won’t work in the cloud.
To fix it, we’ll need to provide a connection string to the app running in the cloud that points to our Blob storage container.

Complete the debugging session.
– Press F5 to allow the request to complete.
– Then press Shift+F5 to stop the remote debugging session.

Next steps

Re-publish with Release configuration
Once you’ve finished debugging and your app is working as expected, you can republish a Release version of the app for better performance.
Go to the Publish page, find the Publish Profile, select “Settings…” and change the configuration to “Release”.

Related Links

Get started with Azure Blob storage using .NET
https://docs.microsoft.com/en-us/azure/storage/blobs/storage-dotnet-how-to-use-blobs

Use the Azure Storage Emulator for development and testing
https://docs.microsoft.com/en-us/azure/storage/common/storage-use-emulator

Introduction to Razor Pages in ASP.NET Core
https://docs.microsoft.com/en-us/aspnet/core/mvc/razor-pages/?tabs=visual-studio

ASP.NET Core – Simpler ASP.NET MVC Apps with Razor Pages
MSDN Magazine article by Steve Smith
https://msdn.microsoft.com/en-us/magazine/mt842512.aspx

Upload image data in the cloud with Azure Storage
https://docs.microsoft.com/en-us/azure/storage/blobs/storage-upload-process-images

Azure Blob Storage Samples for .NET
https://github.com/Azure-Samples/storage-blob-dotnet-getting-started

In case you missed it: January 2018 roundup


In case you missed them, here are some articles from January of particular interest to R users.

Josh Katz and Peter Aldhous used R to analyze the content and presentation of the most recent State of the Union speech from the US president.

Slides for my presentation "Speeding up R with Parallel Processing in the Cloud", with applications of the doAzureParallel and sparklyr packages.

An example of using the doAzureParallel package to speed up a statistical simulation.

5 lines of R code to create a list of US Representatives from a Wikipedia table.

A package to visualize routes from activities recorded with the Strava app.

The call for papers and registration are now open for useR!2018 in Brisbane.

Microsoft R Open 3.4.3 is now available.

A simple command-line tool to launch a cluster in Azure for use with sparklyr.

A review of cloud-based tools for building intelligent applications with R.

A guide to implementing deep neural networks from scratch in R.

R leaps to its highest position — 8th — in the TIOBE language rankings.

A field guide to the ecosystem surrounding R.

Using the Rcpp package to parallelize an association rules problem.

Various R tricks used at Etsy to speed up an A/B testing system.

Some useful advice from Jenny Bryan on setting up a reproducible R workflow.

And some general interest stories (not necessarily related to R):

As always, thanks for the comments and please send any suggestions to me at davidsmi@microsoft.com. Don't forget you can follow the blog using an RSS reader, via email using blogtrottr, or by following me on Twitter (I'm @revodavid). You can find roundups of previous months here.

Azure #CosmosDB Graph API now generally available


Azure Cosmos DB Graph API is the first cloud database to provide graph functionality over a globally distributed managed service. This has enabled users to explore new ways of consuming their data with the use of the Gremlin language while still benefitting from global distribution, elastic scalability in storage and throughput, guaranteed low latency, consistency models, and enterprise-ready SLAs of Azure Cosmos DB.

In December, Azure Cosmos DB Graph API became generally available. This release includes several critical updates to the performance and latency, as well as expanding the application platforms that can be used with it. Here is a brief recap of the features included in the general availability release of Azure Cosmos DB Graph API.

Increased service performance and stability

Several performance and stability improvements have been applied to the Azure Cosmos DB Graph API service. These updates benefit Gremlin query processing performance, as well as the connectivity experience when using any of the open-source Gremlin connectors. Additional fixes were also applied to previously known Gremlin error-parsing issues.

Newly added support for Python and PHP application platforms!

Azure Cosmos DB Graph API now supports connections from Python and PHP applications through the Apache TinkerPop™-recommended open-source drivers gremlin-python and gremlin-php. Learn how to get started with the Python Quickstart and PHP Quickstart.
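For Python, a minimal connection sketch with the gremlin-python driver looks roughly like the following; the account name, database, graph, and key are placeholders, and the Python Quickstart linked above has the full walkthrough:

from gremlin_python.driver import client, serializer

# Placeholder values -- substitute your own Graph API account, database, graph, and primary key.
gremlin_client = client.Client(
    "wss://<your-account>.gremlin.cosmosdb.azure.com:443/",
    "g",
    username="/dbs/<your-database>/colls/<your-graph>",
    password="<your-primary-key>",
    message_serializer=serializer.GraphSONSerializersV2d0(),   # Cosmos DB expects GraphSON v2
)

# Add a vertex, then read it back with a Gremlin traversal.
gremlin_client.submit("g.addV('person').property('id', 'thomas').property('age', 44)").all().result()
print(gremlin_client.submit("g.V('thomas').values('age')").all().result())   # e.g. [44]

gremlin_client.close()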

Bulk import library in private preview

We are working on a library that can be used to efficiently bulk import nodes and edges. Today, we’re announcing that we have a private preview for this library, and we’re looking for users to test it out. If you’re interested in participating in this preview, please fill out this form.

Preview account migrations

In the next couple of months we will automatically migrate preview accounts to the updated service. These migrations will be done in batches and instructions with the necessary changes will be sent via email.

Please continue to provide feedback on what you want to see next in our service. Try out the Gremlin API for free today with our Cosmos DB for free experience. If you need any help or have questions or feedback, please reach out to us on the developer forums on Stack Overflow, and follow us on Twitter @AzureCosmosDB and #CosmosDB for the latest news and announcements.

- Your friends at Azure Cosmos DB.

Microsoft offers SAP HANA supportable VMs in UK with the Azure M/B/V3-series


Microsoft becomes the first hyperscale cloud provider to offer SAP HANA supportable VMs in the UK.

Azure Virtual Machines (VMs) customers can now take advantage of the Azure M/V3/B-series of VM sizes available in the UK South region. We’re also excited to announce that Azure is the first hyperscale cloud provider to offer VMs optimized for large in-memory workloads such as SAP HANA in the UK.

New Azure M series – The Azure M-series is perfectly suited for your large in-memory workloads like SAP HANA and SQL Hekaton. With the M-series, these databases can load large datasets into memory and utilize fast memory access with massive virtual CPU (vCPU) parallel processing to speed up queries and enable real-time analytics.

Learn more about M-Series.

 

Size     vCPUs   Memory (GiB)   Local SSD (GiB)   Max data disks
M64s     64      1024           2048              32
M64ms    64      1792           2048              32
M128s    128     2048           4096              64
M128ms   128     3800           4096              64

 
New Azure B series – B-series VMs provide the lowest cost option for customers with flexible vCPU requirements. These are useful for workloads like web servers, small databases, and development or test environments where CPU utilization is low most of the time, but spikes for short durations. B-Series VMs offer consistent baseline CPU performance and let you build up credits which can be used for peak CPU usage. These sizes provide you with optimal cost and value flexibility.

Learn more about B-Series.

New Azure V3 series – Dv3 and Ev3 VMs are some of the first VMs to enable nested virtualization and Hyper-V containers. These new sizes introduce Hyper-Threading Technology running on the Intel Broadwell E5-2673 v4 2.3 GHz processor and the Intel Haswell 2.4 GHz E5-2673 v3. The shift from physical cores to vCPUs is a key architectural change that will enable the full potential of the latest processors to support even larger virtual machine sizes.

Learn more about V3-Series.

For more information, please visit the Virtual Machines page and the Virtual Machines pricing page.

Windows Developer Day Returns on March 7th!


Windows Developer Day is back! Join us via livestream on March 7th starting at 9:00 AM PST to find out what’s being released in the next Windows 10 Update. Tune in to the keynote by Kevin Gallo, Vice President of the Windows Developer Platform, and the live Q&A session to be the first to hear about the newest features and updates.

Learn what’s coming for developers in the next Windows 10 Update 

No matter what you’re working on, you’ll find new features and improvements to make your software more compelling:

  • Building for the modern workplace – Upgrade and redefine your code. We’ll discuss how we’re evolving our platform to make it easier than ever to update your existing Windows applications with new functionality.
  • Making your applications part of the intelligent edge – The ability to have software quickly make complex calculations and inferences is critical for building applications in a fast-changing market. Learn how you can enable your application to be a native part of the intelligent edge.

Windows Developer Day is the only place to find out what’s coming for developers in the next Windows 10 Update, so RSVP today!

The post Windows Developer Day Returns on March 7th! appeared first on Windows Developer Blog.

Goats up high


Let me start with a little background.  Every year, we have new goats born on the farm – generally in March/April.  Last year, we had about 10 or so.  They live in the field with their moms until breeding season – ~October.  During breeding season, we have to remove the young goats from the adult herd so they aren’t accidentally bred – they are too young at that point.  We don’t have a great fenced area to put them in right now, so we put them in a large stall in the barn.

All of that happened this year as usual.

The other piece of information you need to know is that goats establish a dominance hierarchy and then exercise their dominance.  They do both by butting other goats with their head quite forcefully.  You may have an image of Rams (male sheep) with big horns smacking the snot out of each other – they do call them Rams for a reason.  Well, goats really aren’t any different.  Male goats behave pretty much the same way and, although doe (female) behavior is slightly different, the basic idea is the same.

So, a couple of months ago, we went out to the barn in the morning, as usual, to do chores and found that one of the young goats was hobbling around on 3 legs.  We soon discovered that her leg was broken – we suspect a result of getting rammed in just the wrong way.

My wife, being a veterinarian, knew just what to do.  She splinted and bandaged the leg and put her back with the goats.  All seemed fine.  The next morning when we came out to do chores, we discovered shards and shreds of bandages lying everywhere and the goat with nothing on her leg.  It seems the Great Pyrenees (livestock guardian dog) we keep with them to protect them from predators really didn’t like this odd contraption on the goat’s leg and chewed it off.  There wasn’t a scratch on the goat – so the dog seems to have been very careful, but, nonetheless, the splint was destroyed.  I’m having a hard time picturing the goat sitting patiently while the dog gnawed the splint off – but who knows, no one was there to watch.

So my wife repeated the procedure of splinting and wrapping the goat’s leg.  OK, but now we decided it was probably best not to put the injured goat back in with the group, so we put her in a separate stall by herself.  Unfortunately, we didn’t have a spare stall that was free, so we had to use one that had a huge stack of hay in it.  I thought, oh well, the goat will nibble at the hay; it’s one goat and a lot of hay, so no big deal, right?

After a few weeks things seemed to be going OK and the goat’s leg was healing and we were feeling bad about her being stuck alone.  We didn’t want to put her back with the larger group so we decided to take one of the smallest goats, one that was near the bottom of the dominance hierarchy, and put her in the same stall.  That went well too.  They got along really well; they had all the hay they could possibly eat; the world was good.

You know there’s a “but” coming, right?

One afternoon, I walked through the barn and saw something strange in my peripheral vision.  I stopped to look closely.

Both goats, including the one with the broken leg (and a splint still on it), had managed to get on top of the stack of hay.  That hay stack is probably 7 feet tall (6 feet on the side they were on).  Holy cow, how did they do that?  There was no ramp or anything.  They would have had to jump/climb almost vertically the whole way.  I was astonished, and, admittedly, amused.  That was the end of their stay in that stall – I had to make other arrangements.

When I was first getting goats, a friend of mine who had some experience with goats said the following about fences for goats: “If a giraffe can’t get over it and water can’t get under it, it will hold a goat.”  I have to admit, it’s pretty amazing how much ingenuity goats have.

For a good end to the story, the goat’s leg is fully healed, the splint is off and she’s back with the group in the field.

I hope you enjoyed the story…

Brian

Why should I care about Kubernetes, Docker, and Container Orchestration?


A person at work chatted me, commenting on my recent blog posts on the Raspberry Pi Kubernetes Clusters that are being built, and wondered "why should I care about Kubernetes or Docker or any of that stuff?"

WOCinTech Chat pic used under CC

Great question, and I'm figuring it out myself. There's lots of resources out there but none that spoke my language, so here's my thoughts and how I explain it.

"Hey, I have this great new blog app!"

"Fab, gimme!"

"Sure, first make sure you have this version of Windows/Linux, this version of .NET/Python/Node, and these prerequisites."

"Hang on, lemme call you next week when that's handled.

This is how software was built for years. Now let's deploy it.

"Here's the code/dlls/application zipped up."

"Lemme FTP/SFTP/Drag this from one Explorer Window to another."

"Is this version of that file set to this?"

"Wait, what?"

"Make sure that system/boss/dll/nounjs is version 4.5.4.1, they patched it."

"Ok, Imma shush* into production."

Again, we've all been there, even if we refuse to admit it. It's 2018 and there's more folks doing this than you care to admit.

Enter Virtual Machines! Way better, right? Here's a USB key with a file that is EVERYTHING you need. Handled.

"Forget that, use this. It's better than a computer, it's a Virtual Machine. But be aware, It doesn't know it's Virtual, so respect the lie."

"OK, email it to me."

"Well, it's 32 gigs. Lemme UPS it."

Your app is only 100 megs, and this VM is tens of gigs. Why does a 150 pound person need a 6000lb Hummer? Isolation, I guess.

"The app is getting more complex, but it's cool. There's four VMs now. One for the DB, one for Redis, and a front end one, and the shopping cart gets one. It's microservices!"

"I'm loving it."

"Here's a 2 TB drive."

Nice that we're breaking it up, but not so nice that we're getting bloated. Now we have to run apt upgrade/windows update on all these things and maintain them. Why drive a Hummer when I can get a Lyft?

"Ok I got them all running on this beefy machine under my desk."

"Cool, we're moving to the cloud."

"Sigh. I need to update all these connection strings and start uploading VMs."

"It'll be great. It's like a machine under your desk, except your desk is in the cloud."

"What's the cloud?"

"It's a server room you can't see. Basically it's the computers under your desk. But invisible."

Most VM infrastructure is pretty sloppy. It's hard coded IP addresses, it's poorly named VMs living in the same subnets, then we'll move them to the cloud (lift and shift!) but then they are still messy, but they're in the Cloud™, right?

"You know, all these VMs are heavy. I have to maintain and move a bunch of stuff that ISN'T the app. Containers are the way. Just define the app's base requirement and share everything else."

"I've been hearing about this. I can type "docker run hello-world" and on any machine it'll load the hello world image (based on Ubuntu) from a central hub and run it in a mostly isolated way. Guaranteed to work and run, even as time passes."

"Nice, because more and more parts of our app are in .NET Core on Linux, but there's also some Python and node."

"Yep and it'll all just run as the prerequisites are clearly listened in the container...and the prereqs are in fact references to other container images."

"It's containers all the way down."

Now the DB, Redis, the front end, and the shopping cart can all be defined in some simple text files. Rather than your Host OS (the main computer...the metal) loading up a bunch of Guest OSes (literally copies!) and then loading all the apps and prerequisites, you'll share OSes, and when appropriate, the binaries and libraries.
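
As a loose sketch of what those "simple text files" can look like (the image names and ports below are hypothetical, not from this post), a docker-compose file for that four-part app might be:

version: "3"
services:
  db:
    image: postgres:10               # the database gets its own container
  redis:
    image: redis:4-alpine
  frontend:
    image: mystore/frontend:latest   # hypothetical image name
    ports:
      - "80:80"
    depends_on:
      - db
      - redis
  cart:
    image: mystore/cart:latest       # hypothetical image name
    depends_on:
      - db
      - redis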

"OK, now we have a bunch of containers running in Docker, but sometimes they go down or stop."

"Run them again?"

"It's more that that, we need to sometimes have 3 shopping cart containers, and other times we need 2 or more DB containers. Plus their IPs sometimes change"

"So we need something to keep them running, scale or auto-scale them, as well manage networking and naming/dns."

Enter a container orchestrator. There's Docker Swarm, Mesos Marathon, Azure Service Fabric, and others, but for this post we'll use Kubernetes.

"So Kubernetes runs my containers, keeps them running, and helps manage the network?"

"Yes, and no. Parts of Kubernetes - or k8s, as cool people like me who have been using it for nearly 3 hours say - are part of the master components, like etcd for key value storage, and the kube-scheduler for selecting what node to run a "pod" on (a pod is cooler to say than container, but sometimes a pod is more than one container. Still, very cool.)

"I'll need to make a glossary."

"Darn tootin' you will."

Kubernetes has basically pluggable everything. Don't like their networking setup? There's literally over a dozen options. Want better charts and graphs? Whole world of options.

Just as one Dockerfile can declare what's needed to run an app, a Kubernetes YAML file describes not only the containers, but the ports needed, the number of replicas of each (think web farm), names, environment variables, and more. Here's a file that shows a front end, back end, and load balancer. Everything is there; connection strings become internal DNS lookups, every service has a load balancer (if you like), and you can scale manually or auto-scale.
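
The post links to a full example; as a minimal stand-in sketch (the names, image, replica count, and ports here are hypothetical), a front end with a load-balanced service might be declared like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3                        # think web farm: three copies of the container
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: mystore/frontend:latest   # hypothetical image
          ports:
            - containerPort: 80
          env:
            - name: REDIS_HOST             # connection strings become internal DNS lookups
              value: redis
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer                 # every service can get a load balancer if you like
  selector:
    app: frontend
  ports:
    - port: 80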

"Ok so why should I care?"

"A few reasons. In the past, to install our app I'd need to give you a Word document and a weekend. Now you type kubectl apple theapp.yaml and it's running in less than a minute."

"I'm still billing for the weekend."

Simply stated, we are at the beginning of a new phase of DevOps. One that is programmatic, elastic, and declarative. It's consistent and clear and modular.

I recommend you check out Julia Evans' "Reasons Kubernetes is cool" as well as reading up on how to make a Kubernetes cluster (and the management VMs are free) in Azure.

* I'm trying to make shush a thing. We don't Es Es Eaytch into machines! We shush in! It's pronounced somewhere between shush and shoosh. Make sure you throw in a little petit jeté when you say it.

* Pic used under CC


Sponsor: Unleash a faster Python! Supercharge your application's performance on future-forward Intel® platforms with the Intel® Distribution for Python. Available for Windows, Linux, and macOS. Get the Intel® Distribution for Python* now!




Microsoft Azure IP Advantage: Our first year


One year ago, we announced Azure IP Advantage, the industry’s leading program to help cloud service customers stay focused on their digital transformation journey and avoid IP issues. The program has been a tremendous success so far with many customers telling us that it is a key differentiator for Azure and that they choose Azure in part because of the value they get from these benefits.

Here are some of the highlights from our first year:

  • Customers around the world find that Azure IP Advantage has been a valuable deterrent against IP lawsuits, which is especially important as cloud-related patent litigation has increased over the past four years. Mobike, the world’s largest bicycle-sharing company, headquartered in China and a customer of our partner 21Vianet, explains the benefit of offering IP protection programs to Azure clients: “Azure IP Advantage helps us by reducing potential IP risks as we march into new markets. From technologies to patent offerings, Microsoft is providing comprehensive protection for us to thrive on cloud without worry.”
  • Microsoft expanded Azure IP Advantage to China in partnership with 21Vianet, ensuring that Azure customers in China enjoy the same great IP protection benefits as customers in the rest of the world.
  • Microsoft invests about $10 million a year to maintain the 10,000 patents that are available to customers under the program. This is money that our customers do not need to spend themselves! When they select Azure to deploy their workloads and run their apps, customers benefit from access to our portfolio.
  • Those 10,000 patents include more than 500 patents related to artificial intelligence. Customers are increasingly taking advantage of Microsoft Cognitive Services to add AI capabilities to their apps. These innovations have been harvested from Microsoft’s world-class development community over many years. We will continue to prioritize these assets in the Azure IP Advantage portfolio for the benefit of customers, as the importance of artificial intelligence as a service grows. 

Azure’s adoption rate has grown 90% year-over-year as customers continue to embrace the benefits of running workloads in the cloud. We’ve seen how the additional benefits of Azure IP Advantage have become vital to them as well.

The core benefits of the program are straightforward. Azure IP Advantage provides customers with uncapped indemnification for Microsoft cloud services, including the open source components that power these services. Eligible customers also have access to 10,000 Microsoft patents to deter and defend against patent lawsuits by operating companies targeting their applications running on Azure. We’ve seen competitors try to match some aspects of this offering since we launched it, but none to date have come close.

As we move into the second year of Azure IP Advantage, we look forward to working with our customers to continue improving the benefits available to them through our expertise in using IP to minimize risk. 

If you are an Azure customer and have thoughts on the program and how it could be improved, please contact us at ipadvant@microsoft.com. It’s been an exciting year for the Azure IP Advantage program, and we’re looking forward to another great one in 2018.

First System Center Semi-Annual Channel release now available


I am excited to announce that System Center, version 1801 is now available. Based on customer feedback, we are delivering new features and enhancements in this release including improved Linux monitoring support, more efficient VMware backup, additional support for Windows Server, and improved user experience and performance. 

System Center, version 1801 is the first of our Semi-Annual Channel releases delivering new capabilities at a faster cadence. Semi-Annual Channel releases have an 18-month support policy. In addition, we will continue to release in the Long-Term Servicing Channel (LTSC) at a lower frequency. The LTSC will continue to provide 5 years of mainstream support followed by 5 more years of extended support.

What’s in System Center, version 1801?

System Center, version 1801 focuses on enhancements and features for System Center Operations Manager, Virtual Machine Manager, and Data Protection Manager. Additionally, security and bug fixes, as well as support for TLS 1.2, are available for all System Center components including Orchestrator, Service Management Automation, and Service Manager.

I am pleased to share the capabilities included in this release:

  • Support for additional Windows Server features in Virtual Machine Manager: Customers can now set up nested virtualization, software load balancer configuration, and storage QoS configuration and policy, as well as migrate VMware UEFI VMs to Hyper-V VMs. In addition to supporting Windows Server, version 1709, we have added support for host monitoring, host management, fallback HGS, configuration of encrypted SDN virtual networks, management of Shielded Linux VMs on Hyper-V, and backup capabilities.
  • Linux monitoring in Operations Manager: Linux monitoring has been significantly improved with the addition of a customizable FluentD-based Linux agent. Linux log file monitoring is now on par with that of Windows Server (Yes, we heard you! Kick the tires, it really works).
  • Improved web console experience in Operations Manager: The System Center Operations Manager web console is now built on HTML5 for a better experience and support across browsers.
  • Updates and recommendations for third-party Management Packs: System Center Operations Manager has been extended to support the discovery and update of third-party MPs.
  • Faster, cost-effective VMware backup: Using our Modern Backup Storage technology in Data Protection Manager, customers can backup VMware VMs faster and cut storage costs by up to 50%.
  • And much more including Linux Kerberos support and improved UI responsiveness when dealing with many management packs in Operations Manager. In Virtual Machine Manager, we have enabled SLB guest cluster floating IP support, added Storage QoS at VMM cloud, added Storage QoS extended to SAN storage, enabled Remote to VMs in Enhanced Session mode, added seamless update of non-domain host agent, and made host Refresher up to 10X faster.

This release also delivers consistent evaluation and licensing experiences across components.

Customers should consider supplementing System Center with Azure security & management capabilities for enhanced on-premises management and for the management of Azure resources. We have included the following updates in System Center, version 1801:

  • Service Map integration with Operations Manager: Using the Distributed Application Diagram function in SCOM, you can automatically see application, server, and network dependencies deduced from Service Map. This deeper endpoint monitoring from SCOM is surfaced in the diagram view for better diagnostics workflows.
  • Manage Azure ARM VMs and special regions: Using a Virtual Machine Manager add-in, you can now manage Azure ARM VMs, Azure Active Directory, and more regions (China, US Government, and Germany).
  • Service Manager integration with Azure: Using the Azure ITSM integration with Azure Action Groups you can set up rules to create incidents automatically in System Center Service Manager for alerts fired on Azure and non-Azure resources.

Get System Center, version 1801

Try System Center, version 1801 today at the Evaluation Center or the Volume Licensing Service Center.

What’s next?

In a couple of months, we’ll share information about the second release in our Semi-Annual Channel as well as the next release of the Long-Term Servicing Channel. Be sure to follow this blog for updates on the next release and other product news.

As always, we would love to hear what capabilities and enhancements you’d like to see in our next releases. Please share your suggestions, and vote on submitted ideas, through our UserVoice channels.

    DataExplorer: Fast Data Exploration With Minimum Code


    by Boxuan Cui, Data Scientist at Smarter Travel

    Once upon a time, there was a joke:

    In Data Science, 80% of time spent prepare data, 20% of time spent complain about need for prepare data.

    — Big Data Borat (@BigDataBorat) February 27, 2013

    According to a Forbes article, cleaning and organizing data is the most time-consuming and least enjoyable data science task. DataExplorer is one tool whose sole mission is to minimize that 80%, and to make the process enjoyable. As a result, one fundamental design principle is to be extremely user-friendly. Most of the time, one function call is all you need.

    Data manipulation is powered by data.table, so tasks involving big datasets usually complete in a few seconds. In addition, the package is flexible enough with input data classes, so you should be able to throw in any data.frame-like objects. However, certain functions require a data.table class object as input due to the update-by-reference feature, which I will cover in a later part of the post.

    Now enough said and let's look at some code, shall we?


    Take the BostonHousing dataset from the mlbench library:

    library(mlbench)
    data("BostonHousing", package = "mlbench")

    Initial Visualization

    Without knowing anything about the data, my first 3 tasks are almost always:

    library(DataExplorer)
    plot_missing(BostonHousing) ## Are there missing values, and what is the missing data profile?
    plot_bar(BostonHousing) ## How does the categorical frequency for each discrete variable look like?
    plot_histogram(BostonHousing) ## What is the distribution of each continuous variable?

    While there are not many interesting insights from plot_missing and plot_bar, below is the output from plot_histogram.

    Histogram

    Upon scrutiny, the variable rad looks discrete, and I want to group crim, zn, indus and b into bins as well. Let's do so:

    ## Set `rad` to factor
    BostonHousing$rad <- as.factor(BostonHousing$rad)
    
    ## Create new discrete variables
    for (col in c("crim", "zn", "indus", "b")) {
      BostonHousing[[paste0(col, "_d")]] <- as.factor(ggplot2::cut_interval(BostonHousing[[col]], 2))
    }
    
    ## Plot bar chart for all discrete variables
    plot_bar(BostonHousing)

    Bar

    At this point, we have much better understanding of the data distribution. Now assume we are interested in medv (median value of owner-occupied homes in USD 1000's), and would like to build a model to predict it. Let's plot it against all other variables:

    plot_boxplot(BostonHousing, by = "medv")    

    Boxplot

    plot_scatterplot(
    subset(BostonHousing, select = -c(crim, zn, indus, b)),
    by = "medv", size = 0.5)

    Scatterplot_1
    Scatterplot_2

    plot_correlation(BostonHousing)

    Correlation

    And this is how you slice & dice your data, and analyze correlation with merely 3 lines of code.

    Feature Engineering

    Feature engineering is a crucial step in building better models. DataExplorer provides a couple of functions to ease the process. All of them require a data.table as the input object, because it is lightning fast. However, if you don't feel like coding in data.table syntax, you may adopt the following process:

    ## Set your data to `data.table` first
    your_data <- data.table(your_data)
    
    ## Apply DataExplorer functions
    group_category(your_data, ...)
    drop_columns(your_data, ...)
    set_missing(your_data, ...)
    
    ## Set data back to the original object
    class(your_data) <- "original_object_name"

    Let's return to the BostonHousing dataset. For the rest of this section, we'll assume the data has been converted to a data.table already.

    library(data.table)
    BostonHousingDT <- data.table(BostonHousing)

    Remember those transformed continuous variables? Let's drop them:

    drop_columns(BostonHousingDT, c("crim", "zn", "indus", "b"))

    Note: Because data.table updates by reference, the original object is updated without the need to re-assign a returned object.

    Let's take a look at the discrete variable rad:

    plot_bar(BostonHousingDT$rad)

    Rad_bar

    I think categories other than 4, 5 and 24 are too sparse, and might skew my model fit. How could I group all the sparse categories together?

    group_category(BostonHousingDT, "rad", 0.25, update = FALSE)
    
    #    rad cnt       pct   cum_pct
    # 1:  24 132 0.2608696 0.2608696
    # 2:   5 115 0.2272727 0.4881423
    # 3:   4 110 0.2173913 0.7055336

    Looks like grouping by bottom 25% of rad would give me what I need. Let's do so:

    group_category(BostonHousingDT, "rad", 0.25, update = TRUE)
    plot_bar(BostonHousingDT$rad)

    Grouped_rad_bar

    In addition to categorical frequency, you may also play with the measure argument to group by the sum of a different variable. See ?group_category for more example use cases.
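
    For instance, a quick sketch of that idea (using medv as the measure here is just an illustrative assumption, not from the original post):

    ## Group sparse `rad` categories by the sum of `medv` instead of by frequency (illustrative only)
    group_category(BostonHousingDT, "rad", 0.25, measure = "medv", update = FALSE)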

    Data Report

    To generate a report of your data:

    create_report(BostonHousing)

    Currently, there is not much you can customize here, but I plan to support customization of the generated report, so stay tuned for more features!


    I hope you enjoyed exploring the Boston housing data with me, and finally here are some additional resources about the DataExplorer package:

    New Feature: Free-form pricing in Dev Center


    Pricing your app or add-ons correctly is key to your success on Microsoft Store. As a community, you have been asking for increased flexibility when pricing your products, and the Dev Center team is happy to announce that as of today, all developers can now use the Free-form pricing feature. With the introduction of this feature, you now have the freedom to set the price of your app, game, or add-on to any value you choose in a market’s local currency.

    Think 7 is your lucky number?  You can change your game’s United States price to USD $7.77. Looking to offer a special add-on for Singles’ Day in China? You can set your add-on’s China price to CNY ¥11.11.

    The pricing possibilities are endless, within the same valid ranges as the price tiers (USD $0.99-$1,999.99).  Note that free-form pricing can only be used to override the base price in a single market. (For groups of markets, you can still override the base price with another price tier).

    Using free-form prices

    It all starts from the Pricing section of the Pricing and availability page for your submission in Dev Center. Once you’re on this page, you can use the free-form pricing feature in 2 easy steps:

    1) Select the market where you want to override the base price with a free-form price 

    • Click Select markets for base price override, select the specific market, and then click Create.

    2) Override the base price

    • Select Free-form price from the drop-down menu and then enter your value.

    • The entered free-form price must be within the valid range (for example, in USD the range is $0.99 to $1,999.99) and must be in the correct format for that currency. Validation warnings will alert you to any errors in your free-form price.

    It’s that easy!  You can view detailed documentation here.

    The post New Feature: Free-form pricing in Dev Center appeared first on Windows Developer Blog.

    OMS Monitoring solution for Azure Backup using Azure Log analytics


    We previously announced the preview of Azure Backup reporting and gave customers the ability to generate their own reports and build customizations using Power BI. Today, we are pleased to let you know that you can leverage the same workflow to build your own Microsoft Operations Management Suite (OMS) monitoring solution for Azure Backup in the upgraded OMS workspace. The OMS monitoring solution allows you to monitor key backup parameters such as backup and restore jobs, backup alerts, and cloud storage usage across Recovery Services vaults and subscriptions. You can then utilize OMS log analytics capabilities to raise further alerts for events that you deem important for the business to be notified of. You could even open tickets through webhooks or ITSM integration using the OMS log analytics capabilities.

    Here’s how you do it…

    Configuring Diagnostic settings

    You can open the Diagnostic settings window from the Azure Recovery Services vault, or from the Azure portal: click the “Monitor” service, then “Diagnostic settings” in the Settings section. Specify the relevant subscription, resource group, and Recovery Services vault. In the Diagnostic settings window, as shown below, select “Send data to log analytics” and then select the relevant OMS workspace. You can choose any existing Log Analytics workspace, so that all vaults send their data to the same workspace.

    Please select the relevant log, “AzureBackupReport” in this case, to be sent to the log analytics workspace. Click “Save” to save the setting.

    DiagnosticSettings
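
    If you prefer to script this step, a hedged Azure CLI sketch might look like the following; the resource IDs are placeholders for your own Recovery Services vault and Log Analytics workspace:

    # Placeholder IDs; point these at your own vault and workspace
    az monitor diagnostic-settings create \
      --name BackupToLogAnalytics \
      --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.RecoveryServices/vaults/<vault-name>" \
      --workspace "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>" \
      --logs '[{"category": "AzureBackupReport", "enabled": true}]'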

    After you have completed the configuration, you should wait 24 hours for the initial data push to complete.

    Deploying solution to Azure OMS

    The OMS monitoring solution template for Azure Backup is a community driven project where you can deploy the base template to Azure and then customize it to fit your needs. To learn more, visit the Azure quick-start template and deploy the OMS monitoring solution for Azure backup to the workspace configured above.

    Monitoring Azure Backup data

    The overview tile in the dashboard reflects the key parameter, which is the backup jobs and their status.

    OverviewTile

    Clicking the overview tile takes you to the dashboard, where the solution’s information is categorized into job and alert status, and active machines and their storage usage.

    KeyBackupJobsParameters

    ActiveStorageParams

    Note: Make sure you select the right date range at the top of the screen to filter the data for the required time interval.

    TimeFilter

    Log search capabilities

    You can click on each tile to get more details about the queries used to create it and configure it to meet your requirements. Clicking further on values appearing in the tiles will lead you to the Log Analytics screen, where you can raise alerts for configurable event thresholds and automate actions to be performed when those thresholds are met or crossed.
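
    As an illustration, assuming the backup data lands in the standard AzureDiagnostics table (verify the table and column names in your own workspace), a simple starting query to count backup events by operation might be:

    // Illustrative only; confirm table and column names in your workspace
    AzureDiagnostics
    | where Category == "AzureBackupReport"
    | summarize count() by OperationName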

    LogAnalyticsScreen

    To learn more, visit our documentation on how to configure alerts.

    Summary

    You can configure OMS workspaces to receive key backup data across multiple Recovery Services vaults and subscriptions, and deploy customizable solutions on workspaces to view and configure actions for business critical events. This solution is key for any enterprise to keep a watchful eye over their backups and ensure that all actions are taken for successful backups and restores.

    Related links and additional content
