
Getting started with IoT: how to connect, secure, and manage your “things”


The Internet of Things (IoT) is one of the go-to solutions for executives looking for more and better insights about their business. Collectively, IoT is made up of a network of devices and sensors, otherwise known as things, which connect to a company’s network and the cloud by various means. These devices generate data about the organization and its operations, and stream it to data stores and apps where it can be analyzed and acted upon. The resulting insights enable organizations to take action in response to something that has already happened, or which is expected to.

But the value of the data an IoT solution generates depends largely on how effectively you deploy and manage the devices. In addition to their breadth of form factors (from an incredibly small footprint to the size of a manufacturing assembly line), devices also have numerous capabilities and can be controlled at a minute scale. Once installed, they’re designed to perform their jobs without having to be physically touched again. Some operating characteristics may include:

  • Automatic operation
  • Limited power
  • Limited connectivity
  • Difficult to access
  • Only accessible through the backend
  • Susceptible to being tampered with by the public
  • Managed by special protocols

To help connect devices to an IoT solution, Microsoft Azure IoT provides three distinct, but inter-related, technologies:

  • The Azure IoT Hub: the gateway through which all data passes on its way to a data store, business application, or other destination.
  • Azure IoT Edge: a runtime that helps connect devices to Azure IoT Hub and enables the running of services such as AI and data analysis on the device itself.
  • Azure IoT Hub Device Provisioning Service: a service which simplifies the provisioning of devices to Azure IoT Hub, making it possible to establish a connection between the device and the hub without any human intervention.

You can either choose from a list of devices in the Azure IoT device catalog, or use your own devices and connect them using a variety of Azure IoT Software Development Kits for C, Python, Node, Java or C#/.NET.

Securing your devices

Given their frequently remote and public locations, devices are especially vulnerable to attack. And the access they can provide to extensive pools of additional data makes them potentially lucrative targets. To help secure the devices and data, Azure IoT Hub supports a variety of device identification mechanisms, including security tokens and x.509 certificates that are self-signed or issued by a certificate authority.
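To make the security-token option concrete, here is a minimal sketch of building a shared access signature (SAS) token of the form IoT Hub accepts, using only the Python standard library. The hub name, device ID, and key below are made up for illustration; in practice the Azure IoT device SDKs generate these tokens for you.

```python
import base64
import hashlib
import hmac
import time
import urllib.parse

def generate_sas_token(resource_uri, device_key, ttl_seconds=3600):
    """Build a SharedAccessSignature token for the given resource.

    The string-to-sign is the URL-encoded resource URI plus an expiry
    timestamp, signed with HMAC-SHA256 using the base64-decoded key.
    """
    expiry = int(time.time()) + ttl_seconds
    encoded_uri = urllib.parse.quote_plus(resource_uri)
    string_to_sign = f"{encoded_uri}\n{expiry}".encode("utf-8")
    key = base64.b64decode(device_key)
    signature = base64.b64encode(
        hmac.new(key, string_to_sign, hashlib.sha256).digest()
    )
    return (
        f"SharedAccessSignature sr={encoded_uri}"
        f"&sig={urllib.parse.quote_plus(signature)}&se={expiry}"
    )

# Hypothetical hub/device names and a made-up key, for illustration only.
token = generate_sas_token(
    "myhub.azure-devices.net/devices/device-001",
    base64.b64encode(b"not-a-real-device-key").decode(),
)
print(token)
```

The device presents this token when it connects; because the token embeds an expiry, a leaked token is only useful for a limited window.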

X.509 certificates simplify supply chain logistics by using cryptographic chains of trust to validate each device, rather than requiring that you manage the security of numerous private keys.

Azure IoT Hub takes a different approach to securing back-end applications: they authenticate using shared access policies only.

Provisioning your devices

In an Azure IoT application, the device lifecycle starts with provisioning: to securely connect and authenticate with Azure IoT Hub, the device needs an identity and must have its credentials onboard. Devices can be provisioned manually (creating the identity in the hub and placing the unique credentials on the device). For a single-hub solution, manual provisioning makes sense. However, for solutions deployed across multiple regions, or multi-tenant solutions, the scalability and automation capabilities of the Device Provisioning Service (DPS) have powerful advantages.

To get started with provisioning:

  1. Create an account on Azure Portal
  2. Select Create a Resource on the Azure Portal home page (under the Internet of Things tab) and select IoT Hub
  3. Select Create a Resource a second time and search the IoT Marketplace for Device Provisioning Service (visit the DPS quickstart guide for more information)
  4. Establish a connection between your two resources (IoT Hub and DPS): click All Resources and select the DPS instance you configured, select Linked IoT Hubs on the DPS summary blade, and click the + Add button at the top. In the Add link to IoT hub blade, choose the current subscription (or enter the name of another subscription and select it from the dropdown), then click Save (see these step-by-step instructions for more information)
  5. Create device enrollments in DPS. Once the device turns on and goes through provisioning, DPS will automatically create the device identities in IoT Hub

Once DPS is set up, you must flash all your devices with firmware that will connect to DPS and provide a piece of information about the device's identity that DPS will recognize, such as a unique value from a TPM. Once the device requests provisioning and proves its identity, DPS creates the device identity in IoT Hub and sends its credentials to the device. The device can then use the stored credentials to connect to its assigned IoT Hub.
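The enrollment-then-provision sequence above can be sketched as a toy model. To be clear, this is not the real DPS API (real devices talk to DPS through the Azure IoT SDKs); it only illustrates the order of operations: create an enrollment, the device proves its identity, DPS creates the identity in the assigned hub, and the credentials flow back to the device.

```python
class FakeDPS:
    """Toy model of the Device Provisioning Service flow described above."""

    def __init__(self, linked_hubs):
        self.linked_hubs = linked_hubs      # hub name -> {device_id: key}
        self.enrollments = {}               # registration id -> (proof, hub)

    def create_enrollment(self, registration_id, proof, hub):
        # Step 5 above: an enrollment pairs an expected identity proof
        # (e.g. a value derived from a TPM) with a target hub.
        self.enrollments[registration_id] = (proof, hub)

    def provision(self, registration_id, proof):
        expected_proof, hub = self.enrollments[registration_id]
        if proof != expected_proof:
            raise PermissionError("device failed to prove its identity")
        # DPS creates the device identity in the assigned IoT Hub...
        device_key = f"key-for-{registration_id}"
        self.linked_hubs[hub][registration_id] = device_key
        # ...and hands the credentials back for the device to store.
        return hub, device_key

dps = FakeDPS(linked_hubs={"hub-west": {}})
dps.create_enrollment("device-001", proof="tpm-ek-hash", hub="hub-west")
hub, key = dps.provision("device-001", proof="tpm-ek-hash")
print(hub, key)
```

Note that the device never holds hub credentials before provisioning succeeds; a device presenting the wrong proof is rejected and no identity is created.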

Managing your devices

With your devices secured and provisioned, now comes the job of managing your devices—e.g., reboots, factory resets, configuration, firmware updates, and reporting progress and status—as well as monitoring the data streams to identify and extract insights. A critical element in this process is how you manage all the metadata associated with your devices, including information about what each device is as well as how you want the device to be configured vs how the device reports itself as configured. IoT Hub gives you the ability to manage this information across all your devices via a digital representation of the device called the device twin.

Device twins are incredibly useful in managing the IoT devices spread across your company. They can be used to:

  • Monitor a device’s configuration data for early detection of potential malware attacks.
  • Monitor data on the performance of a device, or group of devices, across the company and complete remote maintenance to avoid system outages. 
  • Perform widespread installation and upgrades of sensors across a security network to maintain surveillance.

You can also target a set of devices based on shared characteristics in order to send a scoped message, apply a configuration or schedule a software update, all using the device twin.
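The desired-versus-reported distinction at the heart of the device twin can be illustrated with a small sketch. The twin document below is hypothetical, shaped loosely like IoT Hub's JSON; the point is the comparison: "desired" is how you want the device configured, "reported" is how the device says it is configured, and any difference is drift to act on.

```python
def twin_drift(twin):
    """Return properties where reported configuration differs from desired."""
    desired = twin["properties"]["desired"]
    reported = twin["properties"]["reported"]
    return {
        key: {"desired": value, "reported": reported.get(key)}
        for key, value in desired.items()
        if reported.get(key) != value
    }

# A hypothetical twin document, for illustration only.
twin = {
    "deviceId": "device-001",
    "properties": {
        "desired": {"telemetryIntervalSec": 30, "firmware": "2.1.0"},
        "reported": {"telemetryIntervalSec": 30, "firmware": "2.0.4"},
    },
}
print(twin_drift(twin))
```

Here the device has applied the desired telemetry interval but still reports the old firmware, which is exactly the kind of signal you would use to schedule an update across a targeted set of devices.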

At first glance, IoT may still seem like a foreign concept compared to other technologies in your stack, but you will find many familiar tools and resources.

Learn more about building out your IoT capabilities and see how easy it is to get started with your first deployment. Download the Azure IoT developer guide and explore how to provision, secure, and manage devices with Azure IoT Hub.


Speech services July 2018 update


A lot has happened since we announced that Speech services is in preview: we have since released the Cognitive Services Speech SDK June 2018 update.

Today, we are excited to announce that we have just released the 0.5.0 version of the Speech SDK. With this update, we have added support for UWP (on Windows version 1709), .NET Standard 2.0 (on Windows), and Java on Android 6.0 (Marshmallow, API level 23) or higher. We have made some feature changes and done some bug fixes. Most notably, we now support long-running audio and automatic reconnection. This will make the Speech service more resilient overall, in the event of timeout, network failures or service errors. We’ve also improved the error messages to make it easier to handle the errors. Please visit the Release Notes page for details. We will continue to add support for more platforms and programming languages, as we work toward making the Speech SDK generally available this fall.


Besides the Speech SDK, Custom Voice has also released a new feature to support more training data formats. All ‘.wav’ files (RIFF) with a sampling rate equal to or higher than 16 kHz are now accepted. Furthermore, we have extended support to more plain-text encoding types (ANSI/UTF-8/UTF-8-BOM/UTF-16-LE/UTF-16-BE). For more details, visit our docs about how to prepare data and customize voice fonts. A new document has been released to help you create high-quality audio samples of human speech, with a focus on issues that you are likely to encounter during your voice training data preparation. For more details, see how to record voice samples for a custom voice.
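If you want to pre-screen your training clips before upload, the sampling-rate rule above can be checked with Python's standard-library `wave` module. This is an illustrative sketch (the helper names are ours, not part of any Microsoft tooling), and it only checks the rate, not the other quality guidelines:

```python
import io
import wave

def meets_custom_voice_rate(wav_bytes):
    """Check a RIFF '.wav' sample against the rule above:
    sampling rate of 16 kHz or higher."""
    with wave.open(io.BytesIO(wav_bytes)) as w:
        return w.getframerate() >= 16000

def make_wav(rate, seconds=1):
    # Generate a silent mono 16-bit PCM clip at the given rate, for testing.
    buf = io.BytesIO()
    with wave.open(buf, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(rate)
        w.writeframes(b"\x00\x00" * rate * seconds)
    return buf.getvalue()

print(meets_custom_voice_rate(make_wav(16000)))  # True
print(meets_custom_voice_rate(make_wav(8000)))   # False
```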

Interested in the Microsoft Speech services? You can try them out for free. To learn more and review sample code, please see our documentation page. Please follow us on Twitter @msspeech3 to be notified of future updates.

Azure Marketplace June container offers


We continue to expand the Azure Marketplace ecosystem. In June, 42 container offers from Bitnami successfully met the onboarding criteria and went live. See details of the new offers below:

ACME Solver Container Image

ACME Solver Container Image: ACME Solver is part of the Cert Manager project. It will periodically ensure certificates are valid and up to date, and it will attempt to renew certificates at an appropriate time before expiration.

AlertManager Container Image

AlertManager Container Image: AlertManager handles alerts sent by client applications, such as the Prometheus server. It takes care of deduplicating, grouping, and routing them to the correct receiver integrations.

Apache Solr Container Image

Apache Solr Container Image: Apache Solr is an extremely powerful, highly reliable open-source enterprise search platform built on Apache Lucene. It's flexible, scalable, and designed to quickly add value after launch.

Blackbox Exporter Container Image

Blackbox Exporter Container Image: The Blackbox Exporter allows black-box probing of endpoints over HTTP, HTTPS, DNS, TCP, and ICMP. Bitnami container images are secure, up-to-date, and packaged using industry best practices.

Cassandra Container Image

Cassandra Container Image: Apache Cassandra is an open-source distributed database management system designed to handle large amounts of data across many servers, providing high availability with no single point of failure.

Cert Manager Container Image

Cert Manager Container Image: Cert Manager is a Kubernetes add-on to automate the management and issuance of TLS certificates from various issuing sources. Bitnami container images use a container-optimized Debian base image.

Cluster Autoscaler Container Image

Cluster Autoscaler Container Image: Cluster Autoscaler is a component that automatically adjusts the size of a Kubernetes cluster so that all pods have a place to run and there are no unneeded nodes.

Codiad Container Image

Codiad Container Image: Codiad is a web-based integrated development environment framework with minimal resource requirements. Codiad was designed to be simple, allowing for fast, interactive development with less overhead than larger desktop editors.

Consul Container Image

Consul Container Image: Consul is a tool for discovering and configuring services in your infrastructure. Available Consul Container Image versions: 1, 1.0.7, 1.1.0, latest.

Discourse Container Image

Discourse Container Image: Discourse is an open-source discussion platform with built-in moderation and governance systems that let discussion communities protect themselves from bad actors even without official moderators.

DreamFactory Container Image

DreamFactory Container Image: DreamFactory instantly provides REST APIs and API management for your mobile, web, and IoT projects. With DreamFactory, you can stop hand-coding REST APIs and get to work building awesome applications.

Elasticsearch Exporter Container Image

Elasticsearch Exporter Container Image: Elasticsearch Exporter is analytics software and a Prometheus exporter for various metrics about Elasticsearch, written in Go.

Elasticsearch Production Container Image

Elasticsearch Production Container Image: Elasticsearch is a highly scalable open-source full-text search and analytics engine. It allows you to store, search, and analyze large volumes of data nearly in real time.

Fluent Bit Container Image

Fluent Bit Container Image: Fluent Bit is a fast and lightweight log processor and forwarder. It's made with a strong focus on performance to allow the collection of events from different sources without complexity.

Fluentd Container Image

Fluentd Container Image: Fluentd collects events from various data sources and writes them to files, relational database management systems, NoSQL, Infrastructure-as-a-Service, Software-as-a-Service, Hadoop, and so on.

Fluentd Exporter Container Image

Fluentd Exporter Container Image: Fluentd Exporter is a simple server that scrapes Fluentd metrics endpoint and exports them as Prometheus metrics.

Grafana Container Image

Grafana Container Image: Grafana is an open-source, feature-rich metrics dashboard and graph editor for Graphite, Elasticsearch, OpenTSDB, Prometheus, and InfluxDB.

Heapster Container Image

Heapster Container Image: Heapster enables container cluster monitoring and performance analysis for Kubernetes. Available Heapster Container Image versions: 1, 1.5.3, latest.

Ingress Shim Container Image

Ingress Shim Container Image: Ingress Shim is infrastructure software and part of the Cert Manager project. It automatically provisions TLS certificates for Ingress resources via annotations on your Ingresses.

JasperReports Container Image

JasperReports Container Image: JasperReports is a stand-alone and embeddable reporting server. It’s a central information hub, with reporting and analytics that can be embedded into web and mobile applications.

Kafka Container Image

Kafka Container Image: Apache Kafka is publish-subscribe messaging rethought as a distributed commit log. Available Kafka Container Image versions: 1, 1.1.0, latest.

Kibana Container Image

Kibana Container Image: Kibana is an open-source, browser-based analytics and search dashboard for Elasticsearch. Kibana strives to be easy to learn while also being flexible and powerful.

kube-state-metrics Container Image

kube-state-metrics Container Image: kube-state-metrics is a simple service that listens to the Kubernetes API server and generates metrics about the state of the objects.

Kubewatch Container Image

Kubewatch Container Image: Kubewatch is a Kubernetes watcher that currently publishes notifications to Slack. Run it in your Kubernetes cluster and you will get event notifications in a Slack channel.

Matomo Container Image

Matomo Container Image: Matomo, formerly known as Piwik, is a real-time web analytics program. It provides detailed reports on website visitors: the search engines and keywords they used, popular pages, and much more.

Moodle Container Image

Moodle Container Image: Moodle is an open-source online learning management system widely used at universities, schools, and corporations worldwide. It is modular and highly adaptable to any type of online learning.

NATS Container Image

NATS Container Image: NATS is a lightweight, open-source, and high-performance messaging system. It’s ideal for distributed systems and supports modern cloud architectures and pub-sub, request-reply, and queuing models.

Neo4j Container Image

Neo4j Container Image: Neo4j is a high-performance graph store with all the features expected of a mature and robust database, like a friendly query language and ACID transactions.

Node Exporter Container Image

Node Exporter Container Image: Node Exporter is a Prometheus exporter for hardware and OS metrics exposed by UNIX kernels, with pluggable metric collectors. Available Node Exporter Container Image versions: 0, 0.16.0, latest.

OAuth 2 Proxy Container Image

OAuth 2 Proxy Container Image: OAuth 2 Proxy is a reverse proxy and static file server that provides authentication using providers (Google, GitHub, and others) to validate accounts by email, domain, or group.

Odoo Container Image

Odoo Container Image: Odoo is an open-source enterprise resource planning and customer relationship management platform. Formerly known as OpenERP, Odoo can connect a wide variety of business operations, including sales, supply chain, finance, and project management.

Open Service Broker for Azure Container Image

Open Service Broker for Azure Container Image: Open Service Broker for Azure is the open-source, Open Service Broker-compatible API server that provisions managed services in the Microsoft Azure public cloud.

phpBB Container Image

phpBB Container Image: phpBB is a popular bulletin board that features robust messaging capabilities, such as flat message structure, subforums, topic split/merge/lock, user groups, full-text search, and attachments.

phpMyAdmin Container Image

phpMyAdmin Container Image: phpMyAdmin is a free software tool written in PHP and intended to handle the administration of MySQL over the web. phpMyAdmin supports a wide range of operations on MySQL and MariaDB.

PrestaShop Container Image

PrestaShop Container Image: PrestaShop is a powerful open-source e-commerce platform used by more than 250,000 online storefronts worldwide. It’s easily customizable, responsive, and includes powerful tools to drive online sales.

Prometheus Container Image

Prometheus Container Image: Prometheus is a systems and service monitoring system. It collects metrics from configured targets at given intervals, evaluates rule expressions, displays the results, and can trigger alerts.

Push Gateway Container Image

Push Gateway Container Image: The Push Gateway is an intermediary service that allows you to push metrics from jobs that can't be scraped. Available Push Gateway Container Image versions: 0, 0.4.0, 0.5.1, 0.5.2, latest.

Redis Sentinel Container Image

Redis Sentinel Container Image: Redis Sentinel provides high availability for Redis. Redis Sentinel also provides other collateral tasks, such as monitoring and notifications, and it acts as a configuration provider for clients.

Service Catalog Container Image

Service Catalog Container Image: The Service Catalog project is in incubation and will bring integration with service brokers to the Kubernetes ecosystem via the Open Service Broker API.

TensorFlow Inception Container Image

TensorFlow Inception Container Image: The Inception model is a TensorFlow model for image recognition. The TensorFlow Inception container image allows you to easily query a TensorFlow server.

User Broker Container Image

User Broker Container Image: User Broker is a part of the Service Catalog project, which is in incubation and will bring integration with service brokers to the Kubernetes ecosystem via the Open Service Broker API.

ZooKeeper Container Image

ZooKeeper Container Image: ZooKeeper provides a reliable, centralized register of configuration data and services for distributed applications.

AI and Digital Feedback Loops


I am here in sunny…aka “hot” Las Vegas for the 2018 Microsoft Inspire and Ready conferences. Despite the heat, I love this time of year for the opportunity to meet with many of Microsoft’s great partners to discuss Artificial Intelligence (AI) and the opportunities to work together in this area. Over the last year I’ve had hundreds of conversations with customers and partners on AI and what it can mean for their business and industry, which I documented in my last series of blog posts (see some of the themes here, here, and here). With the ongoing digitization of many line-of-business processes, there is a growing wealth of structured and unstructured data that can be used to provide insights that improve systems and processes across core business functions.

While we have often taken an “infrastructure first” approach in the software space (e.g., client, client-server, Internet), with a new generation of technology building blocks emerging, we will see a transition to an “experiences” and “processes” first approach in the coming decade. The ability to combine core business data with a new generation of “observation-driven” data signals will provide the starting point to drive intelligence into all of our business processes. This shift, in combination with scalable cloud compute and powerful AI algorithms, means organizations are better equipped than ever before to use their data to improve business outcomes, and I believe these data-powered ‘Digital Feedback Loops’ will fundamentally change how companies innovate business processes. When we couple this thinking with the AI patterns we discussed before, we can begin to see the alignment of AI with these core processes and loops.

In most cases, organizations start small by creating a feedback loop around a core business process. For example, an industrial manufacturer might begin using an instrumented manufacturing line to predict breakdowns and optimize production flow. Or a bank might begin aggregating its call center activity to engage customers with more personalized offers. However, the true power of this model stretches far beyond a single feedback loop. The beauty of our growing data and AI capabilities will be the opportunity to break down silos and synthesize data across product, customer, employee, and operational business processes. In this sense, the most powerful feedback loops will be the ones that are interconnected and that reinforce each other by bringing together disparate datasets.

Customer + Product

One of the most common examples of this interconnectedness is between the processes that support customer engagement and product development. Delivering good products and intelligently engaging customers are deeply linked and complementary. Customer understanding can always help a company better tailor a product for a given market/customer, if the feedback is woven into product planning and creation. In addition, this same customer feedback can be used to deliver more effective and personalized sales, marketing, and service.

When Progressive, one of the largest auto insurance companies in the USA, wanted to drive deeper engagement with its customers, it created Flo, a chatbot that helps customers with their questions. This is an area where we see the “Virtual Agents” pattern being one of the first implementations of AI in the customer feedback loop. Recognizing that many of its customers were using their mobile phones, Progressive used Azure Cognitive Services and the Bot Framework to create the Flo chatbot and made it available through Facebook Messenger. Now customers can get their questions answered, get quotes, and chit-chat with the bot. As Progressive grows Flo’s knowledge base over time across many customer interactions, it can use this data to improve Flo’s capabilities and even optimize the products (insurance offers) it delivers based on evolving customer needs.

”One of the great things about Bot Service is that, out of the box, we could use it to quickly put together the basic framework for our bot.“

Matt White: Marketing Manager, Personal Lines Acquisition Experience, Progressive Insurance

Developing useful bots is challenging, both because open-domain questions are hard to answer and because computers lack conversational skills. A chatbot like Flo, with its unique personality and the variety of questions it needs to handle, is difficult to build, but tools for language understanding (LUIS) and question answering (QnA Maker) make it easier to create innovative bot experiences quickly. Microsoft has also deeply researched social bots, which provides great insight into how to build a bot that people feel comfortable interacting with. For example, Xiaoice in China, with over 500 million users, is trained to have open-domain conversations in a natural way, with one of the longest conversations lasting over 29 hours.

Where Progressive began adopting its feedback loop with a focus on customer engagement, Tetra-Pak has taken a more product-centric approach. One of Tetra-Pak’s key business lines provides complex machinery to help food companies safely package their food. Cows typically provide 6 gallons of milk per day that needs to be packaged and delivered to customers promptly. Providing reliable products is extremely important to Tetra-Pak as any problems with the packaging process or breakdowns in the packaging line can result in large amounts of spoilage.

To manage these risks, Tetra-Pak instrumented its products and started using Azure to capture the telemetry. By analyzing this data, Tetra-Pak can proactively predict product breakdowns and identify the repairs needed to prevent any issues. Not only does this help Tetra-Pak better understand its own products and opportunities to improve them, but it also allows Tetra-Pak to engage its customers more intelligently. Service engineers can be proactively dispatched and can be armed with all the right information and tools they will need to address the issue at hand. This not only drives better outcomes for the customer, but it provides them with a better experience as well.

Employees

Just as the product and customer feedback loops are linked, so too are the feedback loops companies build around their employees. For example, Tetra-Pak has taken the scenario above even further by extending its investments in product instrumentation to empower its employees. By using HoloLens, all the details on the customer, along with telemetry from the asset being worked on, are available through the headset in real time. Ambient intelligence is the AI pattern where AI is applied to a physical space, and when telemetry is combined with this pattern it makes it easier for the service engineer to help the customer. Not only does this lower the time needed for a repair, but it reduces downtime and helps the service engineer gain more credibility with the customer through easy access to key product knowledge.

In the most compelling scenarios, companies are empowering their employees not just with the relevant data they need, but with AI to reason over that data and assist where needed. Publicis has 80,000 employees, and as in other large organizations, finding the person with the right skills can be a difficult task. To help solve this problem, Publicis teamed up with Microsoft to develop Marcel, an AI-powered platform for connecting Publicis’ employees. This is an example of the “AI assisting professionals” pattern, which Publicis will apply in three ways. First, AI will be used to identify the skills and connections of each employee, making it easier to search for and find the person with the skills needed to accomplish a specific task. Second, with a complete knowledge graph of the skills and abilities available within the company, collections of people with unique combinations of expertise can be surfaced for specific projects. Third, the ability to create new, ad-hoc working groups will allow employees to participate in projects beyond their immediate boundaries, creating more organic opportunities for employee growth.

Operations

The interconnected power of the feedback loop also extends to operations. As companies build increasingly robust data assets around their products, customers, and people, opportunities quickly appear to deliver products & services more efficiently and reliably. Think of how data + AI has changed the Taxi industry, where street-side hailing is often replaced by apps that use location data and complex algorithms to optimize the matching of riders with drivers.

Jabil, one of the world’s leading design and manufacturing solution providers, offers a powerful example of how the Digital Feedback Loop can also help companies optimize their operational processes. Using Azure Machine Learning, Jabil can analyze a given manufacturing process and identify high-risk areas where it will fail. By taking existing defect data and using ML techniques to create a custom “inspection model,” Jabil has “seen at least an 80 percent accuracy rate in the prediction of machine processes that will slow down or fail, contributing to a scrap and rework savings of 17 percent,” said Clint Belinsky, vice president, Global Quality, Jabil. (source)

Using data from sensors to optimize and maintain the health of a system is one application of the autonomous systems AI pattern. The first step is to instrument your system or process to capture key health metrics; the second is to use that data to trigger an automated system that fixes problems as they arise.
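The instrument-then-trigger pattern can be reduced to a very small sketch. Everything here is hypothetical (the readings, the threshold, and the remediation callback stand in for whatever your instrumented process exposes); the point is the shape of the loop: observe a metric, compare it to a health threshold, and trigger an automated action when it's crossed.

```python
def watch_health(metrics, threshold, remediate):
    """Watch a stream of health readings and trigger an automated
    fix whenever a reading crosses the threshold."""
    actions = []
    for reading in metrics:
        if reading > threshold:
            actions.append(remediate(reading))
    return actions

# Hypothetical vibration readings from an instrumented machine.
readings = [0.2, 0.3, 0.9, 0.4, 1.1]
actions = watch_health(
    readings,
    threshold=0.8,
    remediate=lambda r: f"scheduled maintenance (reading={r})",
)
print(actions)
```

In a production system the metric stream would come from device telemetry and the remediation would be a work order, a configuration push, or an automated restart rather than a string.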

Creating the Digital Feedback Loop

Although I have provided only a few examples of how customers are using Microsoft AI patterns and technologies to drive feedback loops across customers, employees, products, and operations, this is just the start. It is too early for AI to solve every problem, but it is too late not to get started. Whether you are attending Inspire in person or online, take some time to attend some of the AI sessions and think about how these AI technologies can be used for your own digital feedback loops. I am looking forward to seeing what you build.

Cheers,

Guggs

Highlights from the useR! 2018 conference in Brisbane


The fourteenth annual worldwide R user conference, useR!2018, was held last week in Brisbane, Australia, and it was an outstanding success. The conference attracted around 600 users from around the world and — as the first held in the Southern hemisphere — brought many first-time conference-goers to useR!. (There were also a number of beginning R users as well, judging from the attendance at the beginner's tutorial hosted by R-Ladies.) The program included 19 3-hour workshops, 6 keynote presentations, and more than 200 contributed talks, lightning talks, and posters on using, extending, and deploying R.

If you weren't able to make it to Brisbane, you can nonetheless relive the experience thanks to the recorded videos. Almost all of the tutorials, keynotes and talks are available to view for free, courtesy of the R Consortium. (A few remain to be posted, so keep an eye on the channel.) Here are a few of my personal highlights, based on talks I saw in Brisbane or have managed to catch online since then.

Keynote talks

Steph de Silva, Beyond Syntax: on the power and potentiality of deep open source communities. A moving look at how open source communities, and especially R, grow and evolve.

Bill Venables, Adventures with R. It was wonderful to see the story and details behind an elegantly designed experiment investigating spoken language, and this example was used to great effect to contrast the definitions of "Statistics" and "Data Science". Bill also included the best piece of advice to give anyone joining a specialized group: "Everyone here is smart; distinguish yourself by being kind".

Kelly O'Brian's short history of RStudio was an interesting look at the impact of RStudio (the IDE and the company) on the R ecosystem.   

Thomas Lin Pedersen, The Grammar of Graphics. A really thought-provoking talk about the place of animations in the sphere of data visualization, and an introduction to the gganimate package which extends ggplot2 in a really elegant and powerful way.

Danielle Navarro, R for Psychological Science. A great case study in introducing statistical programming to social scientists.

Roger Peng, Teaching R to New Users. A fascinating history of the R project, and how changes in the user community have been reflected in changes in programming frameworks. The companion essay summarizes the talk clearly and concisely.

Jenny Bryan, Code Smells. This was an amazing talk with practical recommendations for better R coding practices. The video isn't online yet, but the slides are available to view online.

Contributed talks

Bryan Galvin, Moving from Prototype to Production in R, a look inside the machine learning infrastructure at Netflix. Who says R doesn't scale?

Peter Dalgaard, What's in a Name? The secrets of the R build and release process, and the story behind their codenames.

Martin Maechler, Helping R to be (even more) Accurate. On R's near-obsessive attention to the details of computational accuracy.

Rob Hyndman, Tidy Forecasting in R. The next generation of time series forecasting methods in R. 

Nicholas Tierney, Maxcovr: Find the best locations for facilities using the maximal covering location problem. Giftastic!

David Smith, Speeding up computations in R with parallel programming in the cloud. My talk on the doAzureParallel package.

David Smith, The Voice of the R Community. My talk for the R Consortium with the results of their community survey.

In addition, several of my colleagues from Microsoft were in attendance (Microsoft was a proud Platinum sponsor of useR!2018) and delivered talks of their own:

Hong Ooi, SAR: a practical, rating-free hybrid recommender for large data

Angus Taylor, Deep Learning at Scale with Azure Batch AI

Miguel Fierro, Spark on Demand with AZTK

Fang Zhou, Jumpstart Machine Learning with Pre-Trained Models

Le Zhang, Build scalable Shiny applications for employee attrition prediction on Azure cloud

Looking back, looking ahead

Overall, I thought useR!2018 was a wonderful conference. Great talks, friendly people, and impeccably organized. Kudos to all of the organizing committee, and particularly Di Cook, for putting together such a fantastic event. Next year's conference will be held in Toulouse, France and already has a great set of keynote speakers announced. But in the meantime, you can catch up on the talks from useR!2018 at the R Consortium YouTube channel linked below.

YouTube: R Consortium

.NET Core Code Coverage as a Global Tool with coverlet


Last week I blogged about "dotnet outdated," an essential .NET Core "global tool" that helps you find out what NuGet package reference you need to update.

.NET Core Global Tools are really taking off right now. They are meant for devs - this isn't a replacement for chocolatey or apt-get - this is more like npm's global developer tools. They're putting together a better way to find and identify global tools, but for now Nate McMaster has a list of some great .NET Core Global Tools on his GitHub. Feel free to add to that list!

.NET tools can be installed like this:

dotnet tool install -g <package id>

So for example:

C:\Users\scott> dotnet tool install -g dotnetsay

You can invoke the tool using the following command: dotnetsay
Tool 'dotnetsay' (version '2.1.4') was successfully installed.
C:\Users\scott> dotnetsay

Welcome to using a .NET Core global tool!

You know, all the important tools. Seriously, some are super fun. ;)

Coverlet is a cross platform code coverage tool that's in active development. In fact, I automated my build with code coverage for my podcast site back in March. I combined VS Code, Coverlet, xUnit, plus a few Visual Studio Code extensions for a pretty nice experience! All free and open source.

I had to write a little PowerShell script because the "dotnet test" command for testing my podcast site with coverlet got somewhat unruly. Coverlet.msbuild was added as a package reference for my project.

dotnet test /p:CollectCoverage=true /p:CoverletOutputFormat=lcov /p:CoverletOutput=./lcov .\hanselminutes.core.tests

I heard last week that coverlet had initial support for being a .NET Core Global Tool, which I think would be convenient since I could use it anywhere on any project without added references.

dotnet tool install --global coverlet.console

At this point I can type "Coverlet" and it's available anywhere.

I'm told this is an initial build as a ".NET Global Tool" so there's always room for constructive feedback.

From what I can tell, I run it like this:

coverlet .\bin\Debug\netcoreapp2.1\hanselminutes.core.tests.dll --target "dotnet" --targetargs "test --no-build"

Note I have to pass in the already-built test assembly since coverlet instruments that binary and I need to say "--no-build" since we don't want to accidentally rebuild the assemblies and lose the instrumentation.

Coverlet can generate lots of coverage formats like opencover or lcov, and by default gives a nice ASCII table:

88.1% Line Coverage in Hanselminutes.core

I think my initial feedback (I'm not sure if this is possible) is smarter defaults. I'd like to "fall into the Pit of Success." That means that even if I mess up and don't read the docs, I still end up successful.

For example, if I type "coverlet test" while the current directory is a test project, it'd be nice if that implied all this as these are reasonable defaults.

.\bin\Debug\whatever\whatever.dll --target "dotnet" --targetargs "test --no-build"

It's great that there is this much control, but I think "dotnet test" is a fair default assumption, so ideally I could go into any folder with a test project and type "coverlet test" and get that nice ASCII table. Later I'd be more sophisticated and read the excellent docs, as there are lots of great options like setting coverage thresholds and the like.

I think the future is bright with .NET Global Tools. This is just one example! What's your favorite?


Sponsor: Preview the latest JetBrains Rider with its built-in spell checking, initial Blazor support, partial C# 7.3 support, enhanced debugger, C# Interactive, and a redesigned Solution Explorer.



© 2018 Scott Hanselman. All rights reserved.
     

Dashboard Updates Generally Available

We’re excited to announce updates to the new dashboard experience. This new experience lets you: Easily switch between your team’s dashboards Fine tune team permissions on individual dashboards Find and favorite the dashboards you need It is now generally available for VSTS customers and coming to TFS in the next major version. Get to your... Read More

Announcing .NET Framework 4.8 Early Access build 3632


We released the first Early Access build for the .NET Framework 4.8 last month (June-2018) and are happy to announce the next build (3632) for your feedback. This is one of the in-development builds of the next version of the .NET Framework and includes incremental fixes from the last build (3621).

You can download the installer and provide feedback using the links below:

This pre-release build (3632) includes quality and reliability fixes for BCL, Runtime, SQL Connectivity, Winforms, and WPF. You can view the complete list of improvements in this build in the release notes.

A few things to note:

  1. This build (3632) is not supported for production use. This is a Preview build and should be used for internal testing only.
  2. This build will upgrade any older .NET Framework 4.X installation on your machine. This means all .NET Framework 4 and later applications on your machine will run on the .NET Framework early access build upon installation.
  3. The build will only install on the following Windows and Windows Server versions and is not supported on other platforms:
  • Latest Windows 10 Insider Preview Build (RS5)
  • Windows 10 April 2018 Update
  • Windows 10 Fall Creators Update
  • Windows 10 Creators Update
  • Windows 10 Anniversary Update
  • Windows Server, version 1709
  • Windows Server 2016

We’d appreciate you taking the time to report any new issues to us on GitHub so we can address them before the official release.


Python in Visual Studio Code – June & July 2018 Release


We are pleased to announce that the June & July 2018 releases of the Python Extension for Visual Studio Code are now available from the marketplace and the gallery. You can download the Python extension from the marketplace, or install it directly from the extension gallery within Visual Studio Code. You can learn more about Python support in Visual Studio Code in the VS Code documentation.

Between these two releases we have closed a total of 156 issues including introducing a new experimental language server and gevent support in our experimental debugger.

Preview of Python language server

We are pleased to make available an opt-in preview of the Microsoft Python Language Server in Visual Studio Code. The language server is the IntelliSense engine from Visual Studio that implements the language server protocol, and brings the following benefits to Visual Studio Code developers.

  • Syntax errors as you type in code:

  • Warnings when modules are not found:

  • Using Typeshed files to fill in missing completions for modules
  • Improved performance for analyzing your workspace and presenting completions
  • Ability to detect syntax errors on your entire workspace, rather than just the current file.
  • Faster startup times
  • Faster imports
  • Better handling for a number of language constructs

To try out the new language server, go to File > Preferences > Settings and add the following option:

"python.jediEnabled": false

This will trigger a notification asking you to reload Visual Studio Code. Upon reload it will begin downloading the language server for your operating system. Our meta issue for the language server goes into more detail about the install process and provides an FAQ for troubleshooting issues.

After some time using the language server you will see a prompt to fill out a survey, so please let us know how it works for you.

gevent launch configuration for debugging

Contributed by Bence Nagy at the PyCon US 2018 sprints, the experimental debugger now supports a gevent launch configuration for code that has been monkey-patched by gevent. A predefined debugging template named "Python Experimental: Gevent" is available, as well as adding the setting "gevent": true to any launch configuration.
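As an illustration, a launch.json entry using that setting might look like the sketch below. The "Python Experimental: Gevent" name and the "gevent": true flag come from the text above; the "type" and "program" values are assumptions, so check the predefined template that the extension generates:

```json
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python Experimental: Gevent",
            "type": "pythonExperimental",
            "request": "launch",
            "program": "${file}",
            "gevent": true
        }
    ]
}
```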

Various Fixes and Enhancements

We have also added small enhancements and fixed issues requested by users that should improve your experience working with Python in Visual Studio Code. The full list of improvements is listed in our changelog, some notable improvements are:

  • Changed the keyboard shortcut for Run Selection/Line in Python Terminal to Shift+Enter. (#1875)
  • Changed the keyboard shortcut for Run Selection/Line in Python Terminal to not interfere with the Find/Replace dialog box. (#2068)
  • Added a setting to control automatic test discovery on save, python.unitTest.autoTestDiscoverOnSaveEnabled. (thanks Lingyu Li) (#1037)
  • Ensured navigation to definitions follows imports and is transparent to decoration. (thanks Peter Law) (#1638)
  • Fix to display all interpreters in the interpreter list when a workspace contains a Pipfile. (#1800)
  • Automatically add path mappings for remote debugging when attaching to the localhost. (#1829)
  • Add support for the "source.organizeImports" setting for "editor.codeActionsOnSave" (thanks Nathan Gaberel)

Be sure to download the Python extension for VS Code now to try out the above improvements. If you run into any issues be sure to file an issue on the Python VS Code GitHub page.

Introducing the Python Language Server


Visual Studio has long been recognized for the quality of its IntelliSense (code analysis and suggestions) across all languages, and has had support for Python since 2011. We are pleased to announce that we are going to be making the Python support available to other tools as the Microsoft Python Language Server. It is available first today in the July release of the Python Extension for Visual Studio Code, and we will later release it as a standalone component that you can use with any tool that works with the Language Server Protocol.

Background on IntelliSense and Language Servers

Ever since the days of Visual Basic, one of the core features of the Visual Studio series of IDEs has been IntelliSense: auto-completions for variables, functions, and other symbols that appear as you type in your code. Through a clever combination of static code analysis, precompiled databases and UI overlays, developers are regularly blown away at how productive they can be with an editor that truly understands their code.

IntelliSense being used in Visual Basic 6.0

Fast forward to today, and IntelliSense is still one of the most important features out there. More tools are requiring users to write code, and completions are practically a necessity in these editors. However, writing the static analysis necessary to provide a great experience is difficult, and most implementations are very closely tied to the editor they work with. Enter the language server protocol.

Language servers are standalone programs implementing the language server protocol, and were created to work with Visual Studio Code. Editors can start running a language server and use this JSON-based communication channel to provide and request information about the user’s code. All of the analysis and "smart" operations are handled by the server, allowing the editor to focus on presentation and interaction with the user.
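As a rough sketch of that JSON-based channel: each message is a JSON-RPC payload framed with a Content-Length header. The method name below is a standard Language Server Protocol method; the file URI and cursor position are made-up values for illustration:

```python
import json

def frame(message: dict) -> bytes:
    """Frame a JSON-RPC message with the LSP Content-Length header."""
    body = json.dumps(message).encode("utf-8")
    header = f"Content-Length: {len(body)}\r\n\r\n".encode("ascii")
    return header + body

# A completion request, roughly as an editor might send it.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "textDocument/completion",
    "params": {
        "textDocument": {"uri": "file:///home/user/app.py"},
        "position": {"line": 10, "character": 4},
    },
}

wire = frame(request)
print(wire.split(b"\r\n\r\n")[0])  # the Content-Length header
```

The server replies over the same channel with a framed JSON-RPC response containing the completion list, so the editor never needs to understand the language itself.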

Visual Studio Code uses language servers for most of its supported languages, including C++, C# and Go. From the editor’s point of view there are no differences between these languages - all the intelligence exists in the language server. This means that it is easy to add support for new languages to Visual Studio Code, and it does not require changing the editor at all. Language servers can also be used with plugins for Sublime Text, vim and more.

Introducing the Python Language Server

Previously, Python IntelliSense in Visual Studio was very specific to that IDE. We have been developing this support for nearly a decade. It has an impressively deep understanding of the Python language, but only Visual Studio users have been able to enjoy this work. Recently we have been refactoring our implementation to separate it from Visual Studio and make it available as a standalone program using the language server protocol.

From the point of view of the editor, language servers are a black box that is given text and gives back lists of more text. But the black box normally contains a process known as static type inferencing, where the language server determines ("infers") the type of each variable without actually running the code. For statically-typed languages, such as C#, this is often as simple as finding the variable definition and the type specified there. However, Python variables can change type any time they are assigned, and assignments can happen almost anywhere in any of the code that is run. This actually makes perfect static type inferencing impossible!

Python IntelliSense in Visual Studio 2017

(Technical aside: Variables are often thought of as "holes" into which only compatible values can "fit", where the shape of the hole is determined by its type. In Python, variables are names that are attached ("bound") to the value when it is assigned. Assigning a new name always re-binds the value regardless of whether the type is the same as the previous one. So just because you see "self.value = ‘a string’" in one place doesn’t mean that "self.value" will always be a string.)
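That rebinding behavior is easy to demonstrate. This small sketch (the names are invented for illustration) shows why a Python analyzer must track multiple possible types per name rather than one declared type:

```python
# The same name can be bound to values of different types over time.
value = "a string"
assert isinstance(value, str)

value = 42  # re-binding: the name now refers to an int
assert isinstance(value, int)

def configure(flag):
    # Even inside one function, the type of `result` depends on a
    # runtime condition, so inference must consider both branches.
    result = "default"
    if flag:
        result = 3.14
    return result

assert isinstance(configure(False), str)
assert isinstance(configure(True), float)
```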

Our Python Language Server uses iterative full-program analysis to track the types of all the variables in your project while simulating execution of all the code in your project. Normally this kind of approach can take hours for complex programs and require unlimited amounts of RAM, but we have used many tricks to make it complete quickly enough for IntelliSense use. We have also made the tradeoffs necessary to provide useful information despite it not being possible to perfectly infer all types in a Python program.

The end result is that we have a black box that takes Python code and provides all the information your editor needs for tooltips, completions, finding definitions and references, global variable renaming, and more. For performance, it runs with .NET Core on Windows, macOS and Linux, works with Python 2.5 through to Python 3.7 and supports the latest language features such as async/await, type annotations and type stub packages (including typeshed, a copy of which is included with the language server). It performs incremental updates as you type, and is already proven as a core feature of Visual Studio.

Benefits for Python in VS Code

Python IntelliSense in VS Code

Our July release of the Python extension for Visual Studio Code will include an early version of the Python Language Server. Features that are new for VS Code developers in this release include:

  • Syntax errors as you type in code
  • Warnings when modules are not found
  • Using typeshed files to fill in missing completions for modules
  • Improved performance for analyzing your workspace and presenting completions
  • Ability to detect syntax errors on your entire workspace, rather than just the current file.
  • Faster startup times
  • Faster imports
  • Better handling for a number of language constructs

All of these are already available in Visual Studio 2017 or will be in the next minor update.

Having a standalone, cross-platform language server means that we can continue to innovate and improve on our IntelliSense experience for Python developers in both Visual Studio and Visual Studio Code at the same time.

Be sure to check out our VS Code release announcement for more information. The standalone release of the Python Language Server will follow in the next few months, and will be available under the Apache 2.0 license.

Score one for the IT Pro: Azure File Sync is now generally available!


Azure File Sync replicates files from your on-premises Windows Server to an Azure file share. With Azure File Sync, you don’t have to choose between the benefits of cloud and the benefits of your on-premises file server - you can have both! Azure File Sync enables you to centralize your file services in Azure while maintaining local access to your data.

Visit our product page to learn more about Azure File Sync.

We created Microsoft Azure Files with the goal of making it easy for you to reap the benefits of cloud storage. We know from decades of experience building the Windows file server that file shares are useful for more than just application development; file shares are used for everything under the sun. With Azure Files, we are focused on building general purpose file shares that can replace all the file servers and NAS devices your organization has, and today, we are happy to share an important milestone on that journey: the general availability of Azure File Sync!
 
Azure File Sync grew out of our conversations with thousands of customers about the challenge of balancing the need for local and fast access to their frequently accessed data with the maintenance and time cost of managing on-premises storage. Azure File Sync replicates files from your on-premises Windows Server to an Azure file share, just like how you might have used DFS-R to replicate data between Windows Servers. Once you have a copy of your data in Azure, you can enable cloud tiering—the real magic of Azure File Sync—to store only the hottest and most recently accessed data on-premises. And since the cloud has a full copy of your data, you can connect as many servers to your Azure file share as you want, allowing you to establish quick caches of your data wherever your users happen to be. As mentioned above, in simple terms, Azure File Sync enables you to centralize your file services in Azure while maintaining local access to your data.

Having a copy of the data in the cloud enables you to do more. For example, you can nearly instantly recover from the loss of a server with our fast disaster recovery feature. No matter what happens to your local server – a bad update, damaged physical disks, or something worse, you can rest easy knowing the cloud has a fully resilient copy of your data. Simply connect your new Windows Server to your existing sync group, and your namespace will be pulled down right away for use.

When we announced Azure File Sync last year at Ignite, we thought it would help a lot of organizations on their cloud journeys but nothing could have prepared us for the groundswell of interest in Azure Files and Azure File Sync. For the general availability of Azure File Sync, we focused on addressing your feedback from using Azure File Sync in preview. Here are a few of our top innovations and enhancements in Azure File Sync since our initial preview:

  • Sync and cloud tiering performance, scale, and reliability improvements. For general availability, we have increased the performance of Azure File Sync upload by 2X and fast disaster recovery by 4X to 18X (depending on hardware). We have also rearchitected the cloud tiering backend to support faster and more reliable tiering, enabling us to support tiering as soon as we detect that the used volume space exceeds your volume free space percentage threshold.
  • Enhanced Azure File Sync portal experience. One of the top concerns we heard from customers during preview is that it was hard to understand the state of the system. We believe that you shouldn’t have to understand how our system is built or have a Ph.D. in computer science to understand the state of an Azure File Sync server endpoint. To this end, we are excited to introduce a revamped portal experience that more clearly displays the progress of sync uploads and surfaces only actionable error messages to you – keeping you focused on your day to day job! 
  • Holistic disaster recovery through integration with geo-redundant storage (GRS). Fast disaster recovery enables you to quickly recover in the event of a disaster on-premises, but what about a disaster affecting one of the datacenters serving an Azure region? For GA, we now integrate end-to-end with the geo-redundant storage (GRS) resiliency setting. This enables you to fearlessly adopt Azure File Sync to protect against disasters for your organization’s most valuable data!
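The cloud tiering trigger described above (tiering once used volume space exceeds the free-space policy) reduces to simple arithmetic. This sketch uses made-up numbers, and the function name is invented for illustration, not part of any Azure File Sync API:

```python
def should_tier(volume_size_gb: float, used_gb: float, free_space_policy_pct: float) -> bool:
    """Illustrative only: tier files once free space drops below the policy percentage."""
    free_pct = 100.0 * (volume_size_gb - used_gb) / volume_size_gb
    return free_pct < free_space_policy_pct

# A 1 TB volume with a 20% volume free space policy (example values, not defaults):
print(should_tier(1000, 850, 20))  # True, only 15% free
print(should_tier(1000, 700, 20))  # False, 30% free
```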

We’re just getting started!

For us, the general availability of Azure File Sync is just the start of the innovations we plan to bring to Azure Files and Azure File Sync. We have a whole series of new features and incremental improvements to deliver throughout the summer and fall, including support for and tighter integration with Windows Server 2019! Stay tuned – we think you’ll like what we have for you! See you at Ignite!

Visit our product page to learn more about Azure File Sync.

Adopting the word “organization”

In our interactions with users, we see a lot of confusion over the word account. Currently we use account to mean things like https://contoso.visualstudio.com and things like admin@contoso.com. To avoid this confusion, and to better align with the terminology of the broader developer and OSS community, we will be adopting the word organization in Visual... Read More

Come see us at EuroPython 2018!


Next week is the EuroPython conference from July 23-29 in Edinburgh, Scotland. Microsoft is a platinum sponsor of EuroPython this year, and we are looking forward to meeting you there! Be sure to come by our booth and check out our sessions. If you can’t make it, don’t worry, we’ll be sharing the content from the conference on our Python blog after the events are over, so stay tuned by following this blog.

Microsoft has a long history of working with Python, from support for Python in Visual Studio and Visual Studio Code (pictured below: our free, open source editor for macOS, Linux and Windows), to contributing to open source projects such as IronPython and CPython itself. We are excited to be at EuroPython to show and talk about the work we are doing with Python at Microsoft.

Python in Visual Studio Code

Come to our talks!

Members from our team are giving a variety of talks during the conference, see below for details!

First, check out Python on Windows is Okay, Actually by Steve Dower on Wednesday, July 25th at 14:00 in the Moorfoot room to learn how to make Python run better for everyone, including Windows developers.

Then, stop into our sponsored session From Zero to Azure with Python, Docker Containers, and Visual Studio Code on Wednesday July 25th at 15:30 in the Kilsyth room to learn how you can quickly get up and running with Python in Azure.

Come by Code Review Skills for Pythonistas by Nina Zakharenko on Thursday, July 26th at 12:10 in the Lammermuir room to learn how successful code review practices lead to happier teams and healthier code bases.

Finally, check out Get Productive with Python and VS Code by Dan Taylor on Friday, July 27th at 11:20 in the Smarkets room to learn how you can make the most of VS Code for Python development!

Join our Women & Non-Binary Community Breakfast

Microsoft Cloud Developer Advocates, Trans*Code, & CodeClan are excited to host a breakfast plus Python and people panel for women & non-binary attendees on the last day of EuroPython 2018, Friday, July 27th 8:00 AM – 9:00 AM in the Platform 5 Café at the Edinburgh International Conference Centre.

Please RSVP for this free event as soon as possible because we are anticipating this breakfast to sell out fast. We look forward to seeing you there and please reach out to Jenny.Morgan@Microsoft.com with any questions.

Come say ‘Hi’ at the booth!

We will have members from our team at the booth, be sure to stop by to talk about anything Python at Microsoft, including Python on Windows, Python in Visual Studio, Visual Studio Code, Python in Azure, Azure Notebooks, or Python itself!

We want to hear how Microsoft can help the Python community, if you’re at EuroPython next week be sure to stop by!

AltCover and ReportGenerator give amazing code coverage on .NET Core


I'm continuing to explore testing and code coverage on open source .NET Core. Earlier this week I checked out coverlet. There is also the venerable OpenCover and there's some cool work being done to get OpenCover working with .NET Core, but it's Windows only.

Today, I'm exploring AltCover by Steve Gilham. Some coverage tools use the .NET Profiling API at run-time; AltCover, instead, weaves IL into your assemblies for its coverage.

As the name suggests, it's an alternative coverage approach. Rather than working by hooking the .net profiling API at run-time, it works by weaving the same sort of extra IL into the assemblies of interest ahead of execution. This means that it should work pretty much everywhere, whatever your platform, so long as the executing process has write access to the results file. You can even mix-and-match between platforms used to instrument and those under test.

AltCover is a NuGet package but it's also available as a .NET Core Global Tool, which is awesome.

dotnet tool install --global altcover.global

This makes "altcover" a command that's available everywhere without adding it to my project.

That said, I'm going to follow the AltCover Quick Start and see how quickly I can get it set up!

I'll install it into my test project, hanselminutes.core.tests:

dotnet add package AltCover

and then run

dotnet test /p:AltCover=true

90.1% Line Coverage, 71.4% Branch Coverage

Cool. My tests run as usual, but now I've got a coverage.xml in my test folder. I could also generate LCov or Cobertura reports if I'd like. At this point my coverage.xml is nearly a half-meg! That's a lot of good information, but how do I see the results in a human readable format?

This is the OpenCover XML format and I can run ReportGenerator on the coverage file and get a whole bunch of HTML files. Basically an entire coverage mini website!

I downloaded ReportGenerator and put it in its own folder (this would be ideal as a .NET Core global tool).

c:\ReportGenerator\ReportGenerator.exe -reports:coverage.xml -targetdir:./coverage

Make sure you use a decent targetDir otherwise you might end up with dozens of HTML files littered in your project folder. You might also consider .gitignoring the resulting folder and coverage file. Open up index.htm and check out all this great information!
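For example, with the commands above writing coverage.xml and a ./coverage folder, the ignore entries might look like this:

```gitignore
# Generated coverage artifacts (AltCover output and the ReportGenerator site)
coverage.xml
coverage/
```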

Coverage Report says 90.1% Line Coverage

Note the Risk Hotspots at the top there! I've got a CustomPageHandler with a significant NPath Complexity and two Views with a significant Cyclomatic Complexity.

Also check out the excellent branch coverage as expressed here in the results of the coverage report. You can see that EnableAutoLinks was always true, so I only ever tested one branch. I might want to add a negative test here and explore whether there are any side effects when EnableAutoLinks is false.

Branch Coverage

Be sure to explore AltCover and its Full Usage Guide. There's a number of ways to run it, from global tools, dotnet test, MSBuild Tasks, and PowerShell integration!

There's a lot of great work here and it took me literally 10 minutes to get a great coverage report with AltCover and .NET Core. Kudos to Steve on AltCover! Head over to https://github.com/SteveGilham/altcover and give it a STAR, file issues (be kind) or perhaps offer to help out! And most of all, share cool Open Source projects like this with your friends and colleagues.


Sponsor: Preview the latest JetBrains Rider with its built-in spell checking, initial Blazor support, partial C# 7.3 support, enhanced debugger, C# Interactive, and a redesigned Solution Explorer.


© 2018 Scott Hanselman. All rights reserved.
     

A hex sticker wall, created with R


Bill Venables, member of the R Foundation, co-author of the Introduction to R manual, R package developer, and one of the smartest and nicest (a rare combo!) people you will ever meet, received some gifts from the useR!2018 conference committee in recognition of his longtime service to the R community. He received a book of reminiscences from those whose lives he has touched, and this blanket:

This morning we took care of some unfinished #user2018 business. 😊 @visnut @tslumley @_jessie_roberts pic.twitter.com/gW0wX2PLF2

— Miles McBain (@MilesMcBain) July 19, 2018

The design on the blanket is the "Hex Sticker Wall" that was on display during the conference, created by Mitchell O'Hara-Wild:

The #useR2018 #hexwall has been revealed! Read about how it was created in #rstats on this blog post: https://t.co/krYYOQ3N84

A huge thanks to everyone who has submitted stickers and provided feedback. I hope you enjoy the end result as much as I have had creating it! 🎉 pic.twitter.com/GnG9m2cZme

— Mitchell O'Hara-Wild (@mitchoharawild) July 11, 2018

The design for the wall was created — naturally — using R. Mitchell created an R script that will arrange a folder of hexagon images according to a specified layout. He then used a map of Australia to lay out the hex sticker coordinates within the map boundaries and plotted that to create the Hexwall. If you have a similar collection of hex images you could use the same technique to arrange them into any shape you like. The details are provided in the blog post linked below.

Mitchell O'Hara-Wild: UseR! 2018 Feature Wall


Advisory on July 2018 .NET Framework Updates


The July 2018 Security and Quality Rollup updates for .NET Framework were released earlier this month. We have received multiple customer reports of applications that fail to start or don’t run correctly after installing the July 2018 update. These reports are specific to applications that initialize a COM component and run with restricted permissions. You can reach out to Microsoft Support to get help.

We have stopped distributing the .NET Framework July 2018 updates on Windows Update and are actively working on fixing and re-shipping this month’s updates. If you installed the July 2018 update and have not yet seen any negative behavior, we recommend that you leave your systems as-is but closely monitor them and ensure that you apply upcoming .NET Framework updates.

As a team, we regret that this release was shipped with this flaw. This release was tested using our regular and extensive testing process. We discovered while investigating this issue that we have a test hole for the specific combination of COM activation and restricted permissions, including impersonation. We will be mitigating that gap going forward. Again, we are sorry for any inconvenience that this product flaw has caused.

We will continue to update this post and dotnet/announcement #74 as we have new information.

Technical Context

The .NET Framework runtime uses the process token to determine whether the process is running in an elevated context. The system calls involved can fail if the process lacks the required process-inspection permissions, which surfaces as an "access denied" error.
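The failure mode can be illustrated with a small simulation (this is not the actual runtime code; all class and function names are hypothetical): because the elevation check runs by inspecting the process token, a context that forbids token inspection causes the whole activation to fail with "access denied", even though the component itself is fine.

```python
class AccessDenied(Exception):
    """Stand-in for HRESULT 0x80070005 (E_ACCESSDENIED)."""

class ProcessToken:
    def __init__(self, elevated, inspectable):
        self._elevated = elevated
        self._inspectable = inspectable

    def query_elevation(self):
        # Mirrors the described behavior: the runtime must be allowed
        # to inspect the token before it can read the elevation flag.
        if not self._inspectable:
            raise AccessDenied("Access is denied. (E_ACCESSDENIED)")
        return self._elevated

def activate_com_component(token):
    """Sketch of the activation path: the elevation check happens first,
    so a denied token query fails the activation outright."""
    token.query_elevation()
    return "component activated"

# A restricted (e.g. impersonated) context may forbid token inspection.
restricted = ProcessToken(elevated=False, inspectable=False)
try:
    activate_com_component(restricted)
except AccessDenied as e:
    print(e)   # the symptom reported after the July 2018 update
```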

Workaround

Temporarily uninstall the July 2018 Security and Quality Rollup updates for .NET Framework to restore functionality until a new update has been released to correct this problem.

Symptoms

A COM component fails to load because of “access denied,” “class not registered,” or “internal failure occurred for unknown reasons” errors.

The most commonly reported failure results in the following error message:

Exception type: System.UnauthorizedAccessException
Message: Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED))

Sharepoint

When users browse to a SharePoint site they may see the following HTTP 403 message:

"The Web Site declined to show this webpage"

The SharePoint ULS Logs will contain a message like the following:

w3wp.exe (0x1894)         0x0B94  SharePoint Foundation  General 0000       High                UnauthorizedAccessException for the request. 403 Forbidden will be returned. Error=An error occurred creating the configuration section handler for system.serviceModel/extensions: Could not load file or assembly <AssemblySignature> or one of its dependencies. Access is denied. (C:\Windows\Microsoft.NET\Framework64\v2.0.50727\Config\machine.config line 180)

w3wp.exe (0x1894)         0x0B94  SharePoint Foundation  General b6p2      VerboseEx                Sending HTTP response 403:403 FORBIDDEN.      

w3wp.exe (0x1894)         0x0B94  SharePoint Foundation  General 8nca       Verbose                Application error when access /, Error=Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED))

When crawling a people content source, the request may fail with the following entry logged to the SharePoint ULS Log:

mssearch.exe (0x118C) 0x203C SharePoint Server Search Crawler:Gatherer Plugin cd11 Warning The start address sps3s://<URLtoSite> cannot be crawled.  Context: Application 'Search_Service_Application', Catalog 'Portal_Content'  Details:  Class not registered   (0x80040154)  

IIS-hosted Classic ASP calling CreateObject for .NET COM objects may receive the error "ActiveX component can't create object"

A .NET application that creates an instance of a .NET COM component within an impersonation context may receive the error "0x80040154 (REGDB_E_CLASSNOTREG)"

BizTalk Server Administration Console

BizTalk Server Administration Console fails to launch properly with the following errors:

An internal failure occurred for unknown reasons. (WinMgmt) 

Program Location:  

   at System.Runtime.InteropServices.Marshal.ThrowExceptionForHRInternal(Int32 errorCode, IntPtr errorInfo) 

   at System.Management.ManagementObject.Get() 

   at Microsoft.BizTalk.SnapIn.Framework.WmiProvider.SelectInstance

Use the following guidance as a workaround:

  • Add “NETWORK SERVICE” to the local Administrators group.

IIS with Classic ASP

IIS-hosted Classic ASP calling CreateObject for .NET COM objects may receive the following error: "ActiveX component can't create object". Use the following guidance as a workaround.

  • If your web site uses Anonymous Authentication, change the Web Site Anonymous Authentication credentials to use the “Application pool identity”
  • If your site uses Basic Authentication, log into the application once as the application pool identity and then create an instance of the .NET COM component. All subsequent activations for that .NET COM component should succeed, for any user.

.NET applications using COM and impersonation

.NET applications that create instances of a .NET COM component within an impersonation context may receive the following error: "0x80040154 (REGDB_E_CLASSNOTREG)". Use the following guidance as a workaround.

  • Create an instance of the .NET COM component prior to the impersonation context call. Later impersonated create instance calls should work as expected.
  • Run the .NET Application in the context of the impersonated user
  • Avoid using Impersonation when creating the .NET COM object
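The first workaround relies purely on ordering: once the component has been activated outside impersonation, later impersonated activations reuse the cached activation. A toy Python simulation of that ordering effect (hypothetical names throughout, not the .NET or COM API):

```python
_activated_classes = set()   # simulated per-process activation cache

class RegDbClassNotReg(Exception):
    """Stand-in for 0x80040154 (REGDB_E_CLASSNOTREG)."""

def create_instance(clsid, impersonating=False):
    """The first activation must happen with full permissions; after
    that, the cached activation lets impersonated callers succeed."""
    if clsid in _activated_classes:
        return f"instance of {clsid}"
    if impersonating:
        raise RegDbClassNotReg(f"0x80040154: {clsid} not registered")
    _activated_classes.add(clsid)
    return f"instance of {clsid}"

# Workaround: activate once before entering the impersonation context...
create_instance("MyComServer.Widget")
# ...so subsequent impersonated activations work as expected.
obj = create_instance("MyComServer.Widget", impersonating=True)
```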

Top Stories from the Microsoft DevOps Community – 2018.07.20

This week I’ve been busy talking with Open Source developers and users at OSCON, explaining how VSTS can enable their builds with our hosted (or on-premises!) build agents. Meanwhile, we’ve seen some incredible podcasts and blog posts about DevOps in Azure. The DevOps Lab: Azure Automation Runbooks with PowerShell Damian Brady sits down with MVP... Read More

Azure Marketplace June container offers


We continue to expand the Azure Marketplace ecosystem. In June, 42 container offers from Bitnami successfully met the onboarding criteria and went live. See details of the new offers below:

ACME Solver Container Image

ACME Solver Container Image: ACME Solver is part of the Cert Manager project. It will periodically ensure certificates are valid and up to date, and it will attempt to renew certificates at an appropriate time before expiration.

AlertManager Container Image

AlertManager Container Image: AlertManager handles alerts sent by client applications, such as the Prometheus server. It takes care of deduplicating, grouping, and routing them to the correct receiver integrations.

Apache Solr Container Image

Apache Solr Container Image: Apache Solr is an extremely powerful, highly reliable open-source enterprise search platform built on Apache Lucene. It's flexible, scalable, and designed to quickly add value after launch.

Blackbox Exporter Container Image

Blackbox Exporter Container Image: The Blackbox Exporter allows black-box probing of endpoints over HTTP, HTTPS, DNS, TCP, and ICMP. Bitnami container images are secure, up-to-date, and packaged using industry best practices.

Cassandra Container Image

Cassandra Container Image: Apache Cassandra is an open-source distributed database management system designed to handle large amounts of data across many servers, providing high availability with no single point of failure.

Cert Manager Container Image

Cert Manager Container Image: Cert Manager is a Kubernetes add-on to automate the management and issuance of TLS certificates from various issuing sources. Bitnami container images use a container-optimized Debian base image.

Cluster Autoscaler Container Image

Cluster Autoscaler Container Image: Cluster Autoscaler is a component that automatically adjusts the size of a Kubernetes cluster so that all pods have a place to run and there are no unneeded nodes.

Codiad Container Image

Codiad Container Image: Codiad is a web-based integrated development environment framework with minimal resource requirements. Codiad was designed to be simple, allowing for fast, interactive development with less overhead than larger desktop editors.

Consul Container Image

Consul Container Image: Consul is a tool for discovering and configuring services in your infrastructure. Available Consul Container Image versions: 1, 1.0.7, 1.1.0, latest.

Discourse Container Image

Discourse Container Image: Discourse is an open-source discussion platform with built-in moderation and governance systems that let discussion communities protect themselves from bad actors even without official moderators.

DreamFactory Container Image

DreamFactory Container Image: DreamFactory instantly provides REST APIs and API management for your mobile, web, and IoT projects. With DreamFactory, you can stop hand-coding REST APIs and get to work building awesome applications.

Elasticsearch Exporter Container Image

Elasticsearch Exporter Container Image: Elasticsearch Exporter is analytics software and a Prometheus exporter for various metrics about Elasticsearch, written in Go.

Elasticsearch Production Container Image

Elasticsearch Production Container Image: Elasticsearch is a highly scalable open-source full-text search and analytics engine. It allows you to store, search, and analyze large volumes of data nearly in real time.

Fluent Bit Container Image

Fluent Bit Container Image: Fluent Bit is a fast and lightweight log processor and forwarder. It's made with a strong focus on performance to allow the collection of events from different sources without complexity.

Fluentd Container Image

Fluentd Container Image: Fluentd collects events from various data sources and writes them to files, relational database management systems, NoSQL, Infrastructure-as-a-Service, Software-as-a-Service, Hadoop, and so on.

Fluentd Exporter Container Image

Fluentd Exporter Container Image: Fluentd Exporter is a simple server that scrapes Fluentd metrics endpoint and exports them as Prometheus metrics.

Grafana Container Image

Grafana Container Image: Grafana is an open-source, feature-rich metrics dashboard and graph editor for Graphite, Elasticsearch, OpenTSDB, Prometheus, and InfluxDB.

Heapster Container Image

Heapster Container Image: Heapster enables container cluster monitoring and performance analysis for Kubernetes, collecting compute resource usage and other cluster metrics and exporting them to a variety of backends. Available Heapster Container Image versions: 1, 1.5.3, latest.

Ingress Shim Container Image

Ingress Shim Container Image: Ingress Shim is infrastructure software and part of the Cert Manager project. It automatically provisions TLS certificates for Ingress resources via annotations on your Ingresses.

JasperReports Container Image

JasperReports Container Image: JasperReports is a stand-alone and embeddable reporting server. It’s a central information hub, with reporting and analytics that can be embedded into web and mobile applications.

Kafka Container Image

Kafka Container Image: Apache Kafka is publish-subscribe messaging rethought as a distributed commit log. Available Kafka Container Image versions: 1, 1.1.0, latest.

Kibana Container Image

Kibana Container Image: Kibana is an open-source, browser-based analytics and search dashboard for Elasticsearch. Kibana strives to be easy to learn while also being flexible and powerful.

kube-state-metrics Container Image

kube-state-metrics Container Image: kube-state-metrics is a simple service that listens to the Kubernetes API server and generates metrics about the state of the objects.

Kubewatch Container Image

Kubewatch Container Image: Kubewatch is a Kubernetes watcher that currently publishes notifications to Slack. Run it in your Kubernetes cluster and you will get event notifications in a Slack channel.

Matomo Container Image

Matomo Container Image: Matomo, formerly known as Piwik, is a real-time web analytics program. It provides detailed reports on website visitors: the search engines and keywords they used, popular pages, and much more.

Moodle Container Image

Moodle Container Image: Moodle is an open-source online learning management system widely used at universities, schools, and corporations worldwide. It is modular and highly adaptable to any type of online learning.

NATS Container Image

NATS Container Image: NATS is a lightweight, open-source, and high-performance messaging system. It’s ideal for distributed systems and supports modern cloud architectures and pub-sub, request-reply, and queuing models.

Neo4j Container Image

Neo4j Container Image: Neo4j is a high-performance graph store with all the features expected of a mature and robust database, like a friendly query language and ACID transactions.

Node Exporter Container Image

Node Exporter Container Image: Node Exporter is a Prometheus exporter for hardware and OS metrics exposed by UNIX kernels, with pluggable metric collectors. Available Node Exporter Container Image versions: 0, 0.16.0, latest.

OAuth 2 Proxy Container Image

OAuth 2 Proxy Container Image: OAuth 2 Proxy is a reverse proxy and static file server that provides authentication using providers (Google, GitHub, and others) to validate accounts by email, domain, or group.

Odoo Container Image

Odoo Container Image: Odoo is an open-source enterprise resource planning and customer relationship management platform. Formerly known as OpenERP, Odoo can connect a wide variety of business operations, including sales, supply chain, finance, and project management.

Open Service Broker for Azure Container Image

Open Service Broker for Azure Container Image: Open Service Broker for Azure is the open-source, Open Service Broker-compatible API server that provisions managed services in the Microsoft Azure public cloud.

phpBB Container Image

phpBB Container Image: phpBB is a popular bulletin board that features robust messaging capabilities, such as flat message structure, subforums, topic split/merge/lock, user groups, full-text search, and attachments.

phpMyAdmin Container Image

phpMyAdmin Container Image: phpMyAdmin is a free software tool written in PHP and intended to handle the administration of MySQL over the web. phpMyAdmin supports a wide range of operations on MySQL and MariaDB.

PrestaShop Container Image

PrestaShop Container Image: PrestaShop is a powerful open-source e-commerce platform used by more than 250,000 online storefronts worldwide. It’s easily customizable, responsive, and includes powerful tools to drive online sales.

Prometheus Container Image

Prometheus Container Image: Prometheus is a systems and service monitoring system. It collects metrics from configured targets at given intervals, evaluates rule expressions, displays the results, and can trigger alerts.

Push Gateway Container Image

Push Gateway Container Image: The Push Gateway is an intermediary service that allows you to push metrics from jobs that can't be scraped. Available Push Gateway Container Image versions: 0, 0.4.0, 0.5.1, 0.5.2, latest.

Redis Sentinel Container Image

Redis Sentinel Container Image: Redis Sentinel provides high availability for Redis. Redis Sentinel also provides other collateral tasks, such as monitoring and notifications, and it acts as a configuration provider for clients.

Service Catalog Container Image

Service Catalog Container Image: The Service Catalog project is in incubation and will bring integration with service brokers to the Kubernetes ecosystem via the Open Service Broker API.

TensorFlow Inception Container Image

TensorFlow Inception Container Image: The Inception model is a TensorFlow model for image recognition. The TensorFlow Inception container image allows you to easily query a TensorFlow server.

User Broker Container Image

User Broker Container Image: User Broker is a part of the Service Catalog project, which is in incubation and will bring integration with service brokers to the Kubernetes ecosystem via the Open Service Broker API.

ZooKeeper Container Image

ZooKeeper Container Image: ZooKeeper provides a reliable, centralized register of configuration data and services for distributed applications.

Score one for the IT Pro: Azure File Sync is now generally available!


Azure File Sync replicates files from your on-premises Windows Server to an Azure file share. With Azure File Sync, you don’t have to choose between the benefits of cloud and the benefits of your on-premises file server - you can have both! Azure File Sync enables you to centralize your file services in Azure while maintaining local access to your data.

Visit our planning guide to learn more about Azure File Sync.

We created Microsoft Azure Files with the goal of making it easy for you to reap the benefits of cloud storage. We know from decades of experience building the Windows file server that file shares are useful for more than just application development; file shares are used for everything under the sun. With Azure Files, we are focused on building general-purpose file shares that can replace all the file servers and NAS devices your organization has, and today we are happy to share an important milestone on that journey: the general availability of Azure File Sync!
 
Azure File Sync grew out of our conversations with thousands of customers about the challenge of balancing the need for local and fast access to their frequently accessed data with the maintenance and time cost of managing on-premises storage. Azure File Sync replicates files from your on-premises Windows Server to an Azure file share, just like how you might have used DFS-R to replicate data between Windows Servers. Once you have a copy of your data in Azure, you can enable cloud tiering—the real magic of Azure File Sync—to store only the hottest and most recently accessed data on-premises. And since the cloud has a full copy of your data, you can connect as many servers to your Azure file share as you want, allowing you to establish quick caches of your data wherever your users happen to be. As mentioned above, in simple terms, Azure File Sync enables you to centralize your file services in Azure while maintaining local access to your data.

Having a copy of the data in the cloud enables you to do more. For example, you can recover almost instantly from the loss of a server with our fast disaster recovery feature. No matter what happens to your local server, whether a bad update, damaged physical disks, or something worse, you can rest easy knowing the cloud has a fully resilient copy of your data. Simply connect your new Windows Server to your existing sync group, and your namespace will be pulled down right away for use.

When we announced Azure File Sync last year at Ignite, we thought it would help a lot of organizations on their cloud journeys but nothing could have prepared us for the groundswell of interest in Azure Files and Azure File Sync. For the general availability of Azure File Sync, we focused on addressing your feedback from using Azure File Sync in preview. Here are a few of our top innovations and enhancements in Azure File Sync since our initial preview:

  • Sync and cloud tiering performance, scale, and reliability improvements. For general availability, we have increased the performance of Azure File Sync upload by 2X and fast disaster recovery by 4X to 18X (depending on hardware). We have also rearchitected the cloud tiering backend to support faster and more reliable tiering, enabling us to support tiering as soon as we detect that the used volume space exceeds your volume free space percentage threshold.
  • Enhanced Azure File Sync portal experience. One of the top concerns we heard from customers during the preview was that it was hard to understand the state of the system. We believe that you shouldn't have to understand how our system is built, or have a Ph.D. in computer science, to understand the state of an Azure File Sync server endpoint. To this end, we are excited to introduce a revamped portal experience that more clearly displays the progress of sync uploads and surfaces only actionable error messages, keeping you focused on your day-to-day job!
  • Holistic disaster recovery through integration with geo-redundant storage (GRS). Fast disaster recovery enables you to quickly recover in the event of a disaster on-premises, but what about a disaster affecting one of the datacenters serving an Azure region? For general availability, we now integrate end-to-end with the geo-redundant storage (GRS) resiliency setting. This enables you to fearlessly adopt Azure File Sync to protect your organization's most valuable data against disasters!
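The cloud tiering trigger mentioned in the first bullet, evicting cold data once used volume space crosses the free-space threshold, can be sketched roughly as a least-recently-accessed eviction loop. This is a heavily simplified illustration with invented names, not the actual Azure File Sync algorithm:

```python
def tier_to_cloud(files, volume_size, free_percent_threshold):
    """Evict the least recently accessed files until used space drops
    below the volume free-space threshold.

    files: dict of name -> (size_bytes, last_access_timestamp).
    Returns the set of file names tiered to the cloud (in the real
    product these remain locally only as lightweight stubs).
    """
    max_used = volume_size * (1 - free_percent_threshold / 100)
    used = sum(size for size, _ in files.values())
    tiered = set()
    # Coldest first: smallest (oldest) last-access timestamp.
    for name, (size, _) in sorted(files.items(), key=lambda kv: kv[1][1]):
        if used <= max_used:
            break
        used -= size
        tiered.add(name)
    return tiered

files = {
    "report.docx": (200, 100),   # cold: accessed long ago
    "budget.xlsx": (300, 500),
    "notes.txt":   (100, 900),   # hot: accessed recently
}
# 1000-byte volume, keep at least 50% free -> at most 500 bytes used.
print(tier_to_cloud(files, volume_size=1000, free_percent_threshold=50))
```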

We’re just getting started!

For us, the general availability of Azure File Sync is just the start of the innovations we plan to bring to Azure Files and Azure File Sync. We have a whole series of new features and incremental improvements to deliver throughout the summer and fall, including support for and tighter integration with Windows Server 2019! Stay tuned – we think you’ll like what we have for you! See you at Ignite!

Visit our product page to learn more about Azure File Sync.

Build secure Oozie workflows in Azure HDInsight with Enterprise Security Package


Customers love to use Hadoop and often rely on Oozie, a workflow and coordination scheduler for Hadoop, to accelerate and ease their big data implementations. Oozie is integrated with the Hadoop stack, and it supports several types of Hadoop jobs. However, for users of Azure HDInsight with domain-joined clusters, Oozie was not a supported option. To get around this limitation, customers had to run Oozie on a regular cluster, which was costly and added administrative overhead. Today we are happy to announce that customers can now use Oozie in domain-joined Hadoop clusters too.

In domain-joined clusters, authentication happens through Kerberos and fine-grained authorization is through Ranger policies. Oozie supports impersonation of users and a basic authorization model for workflow jobs.

Moreover, Hive Server 2 actions submitted as part of an Oozie workflow are also logged and auditable through Ranger. Fine-grained authorization through Ranger is enforced on Oozie jobs only when Ranger policies are present; otherwise, coarse-grained authorization based on HDFS permissions (only available on ADLS Gen1) is enforced.
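That fallback order, consult a fine-grained Ranger policy when one matches the resource, otherwise fall back to coarse-grained HDFS-style permission checks, can be sketched as follows. This is an illustrative simplification with invented policy and permission shapes, not the actual Ranger evaluation engine:

```python
def authorize(user, resource, action, ranger_policies, hdfs_perms):
    """Fine-grained Ranger policy wins when one matches the resource;
    otherwise fall back to coarse-grained HDFS-style owner/permission
    checks (a simplification of the real evaluation order)."""
    for policy in ranger_policies:
        if policy["resource"] == resource:
            return user in policy.get(action, set())
    # Coarse-grained fallback: owner may do anything; others only read
    # if the permission string grants it.
    owner, mode = hdfs_perms[resource]
    if user == owner:
        return True
    return action == "read" and "r" in mode

policies = [{"resource": "/warehouse/sales",
             "read": {"analyst"}, "write": {"etl_svc"}}]
perms = {"/warehouse/sales": ("hive", "r--"),
         "/staging/raw":    ("etl_svc", "rw-")}

print(authorize("analyst", "/warehouse/sales", "read", policies, perms))   # Ranger path
print(authorize("analyst", "/staging/raw", "read", policies, perms))       # HDFS fallback
```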

Learn more about how to create an Oozie workflow and submit jobs in a domain-joined cluster, and how to use Oozie with Hadoop to define and run a workflow on Linux-based Azure HDInsight.

Try HDInsight now

We hope you take full advantage of today’s announcements and we are excited to see what you will build with Azure HDInsight. Read this developer guide and follow the quick start guide to learn more about implementing these pipelines and architectures on Azure HDInsight. Stay up-to-date on the latest Azure HDInsight news and features by following us on Twitter #HDInsight and @AzureHDInsight. For questions and feedback, please reach out to AskHDInsight@microsoft.com.

About HDInsight

Azure HDInsight is Microsoft’s premium managed offering for running open source workloads on Azure. Today, we are excited to announce several new capabilities across a wide range of OSS frameworks.

Azure HDInsight powers some of our top customers' mission-critical applications across a wide variety of sectors, including manufacturing, retail, education, nonprofit, government, healthcare, media, banking, telecommunications, and insurance, with use cases ranging from ETL to data warehousing, from machine learning to IoT, and more.
