
Container Tooling for Service Fabric in Visual Studio 2017


The latest version of the Service Fabric tools, which is part of Visual Studio 2017 Update 7 (15.7), includes the new container tooling for Service Fabric feature. This new feature makes debugging and deploying existing applications, in containers, on Service Fabric easier than ever before.

Containerize .NET Framework and .NET Core applications and run them on Service Fabric

You can now take an existing console or ASP.NET application, deploy it to a container image, and run and debug it in Service Fabric as a container on your local developer workstation. With a few clicks, you can make your existing .NET application run in a container in a Service Fabric environment. Simply right-click on your project in Solution Explorer and select Add --> Container Orchestrator Support. This will display a dialog box where you select Service Fabric and click OK.


Doing this will create a Dockerfile in your project and add the required Service Fabric files, as well as create a new Service Fabric application project in the solution. If your project is part of a solution with an existing Service Fabric application, it will be added to that application automatically. This must be done for each project in the solution you want to containerize for Service Fabric.

Debug containers running on Service Fabric

Not only can you easily containerize existing .NET projects with just a few clicks, you can also debug the code running inside the container instance as it runs in Service Fabric. Hit F5 and you will get full debugging support within Visual Studio for each service, even when it runs in a container! For the best debugging experience, use a one node cluster and “Refresh Application” mode.

Publish your containerized application to Service Fabric

When it's time to publish your application to Azure, the Publish dialog in Visual Studio will prompt you for an Azure Container Registry (ACR) to push your image(s) to, as well as the cluster connection information.


We have also updated the Visual Studio Team Services tasks for Service Fabric to support the above scenario: Deploy: Service Fabric Application Deployment.

To get the latest tools, install Service Fabric SDK version 3.1 and the latest Visual Studio 2017 release.

To learn more about how to set up and use the tools, take a look at these articles:

From all of us...

The Service Fabric and Visual Studio teams


Detecting unconscious bias in models, with R


There's growing awareness that the data we collect, and in particular the variables we include as factors in our predictive models, can lead to unwanted bias in outcomes: from loan applications, to law enforcement, and in many other areas. In some instances, such bias is even directly regulated by laws like the Fair Housing Act in the US. But even if we explicitly remove "obvious" variables like sex, age or ethnicity from predictive models, unconscious bias might still be a factor in our predictions as a result of highly-correlated proxy variables that are included in our model.

As a result, we need to be aware of the biases in our model and take steps to address them. For an excellent general overview of the topic, I highly recommend watching the recent presentation by Rachel Thomas, "Analyzing and Preventing Bias in ML". And for a practical demonstration of one way you can go about detecting proxy bias in R, take a look at the vignette created by my colleague Paige Bailey for the ROpenSci conference, "Ethical Machine Learning: Spotting and Preventing Proxy Bias". 

The vignette details general principles you can follow to identify proxy bias in an analysis, in the context of a case study analyzed using R. The case study considers data and a predictive model that might be used by a bank manager to determine the creditworthiness of a loan applicant. Even though race was not explicitly included in the adaptive boosting model (from the C5.0 package), the predictions are still biased by race:

Chart: model predictions by race, showing the bias
That's because zipcode, a variable highly associated with race, was included in the model. Read the complete vignette linked below to see how Paige modified the model to ameliorate that bias, while still maintaining its predictive power. All of the associated R code is available in the iPython Notebook.

GitHub (ropenscilabs): Ethical Machine Learning: Spotting and Preventing Proxy Bias (Paige Bailey)

No container image for Build Tools for Visual Studio 2017


After having written documentation about installing Build Tools for Visual Studio 2017 and working with partners to set up validation starting with Visual Studio 2017 Version 15.7, a common question from customers and partners alike is: are you going to publish a container image in a Docker registry? With DockerCon 2018 in full swing, there’s no better time to answer this question.

In short, we have no plans to publish such an image. A full Build Tools image is currently ~58GB and contains support for more workloads than probably any developer needs.

We are interested in common customer workload requirements and scenarios and are exploring options to simplify deployment of build environments. Perhaps someday we may publish one or more container images. This is why the documentation exists now: to aid our customers and partners in tailoring a container image to fit their needs. In Visual Studio 2017 version 15.7 we made significant improvements to the Build Tools SKU to increase the number of workloads it supports, since running the Visual Studio IDE (or even just using its build toolchain) in a container is not supported. This also greatly increased the size of a full install compared with previous releases, when installing into a container was not officially supported but mostly worked.

For example, to build the setup engine and installer shell, we would use a Dockerfile similar to the following:

# escape=`

# Copyright (C) Microsoft Corporation. All rights reserved.

# ARG FROM_IMAGE=microsoft/dotnet-framework:4.7.1-sdk-windowsservercore-1709
ARG FROM_IMAGE=microsoft/dotnet-framework:3.5-sdk-windowsservercore-1709
FROM ${FROM_IMAGE}

# Reset the shell.
SHELL ["cmd", "/S", "/C"]

# Set up environment to collect install errors.
COPY Install.cmd C:\TEMP\
ADD https://aka.ms/vscollect.exe C:\TEMP\collect.exe

# Install Node.js LTS
ADD https://nodejs.org/dist/v8.11.3/node-v8.11.3-x64.msi C:\TEMP\node-install.msi
RUN start /wait msiexec.exe /i C:\TEMP\node-install.msi /l*vx "%TEMP%\MSI-node-install.log" /qn ADDLOCAL=ALL

# Download channel for fixed install.
ARG CHANNEL_URL=https://aka.ms/vs/15/release/channel
ADD ${CHANNEL_URL} C:\TEMP\VisualStudio.chman

# Download and install Build Tools for Visual Studio 2017.
ADD https://aka.ms/vs/15/release/vs_buildtools.exe C:\TEMP\vs_buildtools.exe
RUN C:\TEMP\Install.cmd C:\TEMP\vs_buildtools.exe --quiet --wait --norestart --nocache `
    --channelUri C:\TEMP\VisualStudio.chman `
    --installChannelUri C:\TEMP\VisualStudio.chman `
    --add Microsoft.VisualStudio.Workload.ManagedDesktopBuildTools `
    --add Microsoft.Net.Component.3.5.DeveloperTools `
    --add Microsoft.Net.ComponentGroup.4.6.2.DeveloperTools `
    --add Microsoft.Net.ComponentGroup.TargetingPacks.Common `
    --add Microsoft.VisualStudio.Component.TestTools.BuildTools `
    --add Microsoft.VisualStudio.Workload.VCTools `
    --add Microsoft.VisualStudio.Component.VC.140 `
    --add Microsoft.VisualStudio.Component.VC.ATL `
    --add Microsoft.VisualStudio.Component.VC.CLI.Support `
    --add Microsoft.VisualStudio.Component.Windows10SDK.16299.Desktop `
    --add Microsoft.VisualStudio.ComponentGroup.NativeDesktop.WinXP `
    --add Microsoft.VisualStudio.Workload.NodeBuildTools `
    --add Microsoft.VisualStudio.Component.TypeScript.2.8 `
    --installPath C:\BuildTools

# Use developer command prompt and start PowerShell if no other command specified.
ENTRYPOINT C:\BuildTools\Common7\Tools\VsDevCmd.bat &&
CMD ["powershell.exe", "-NoLogo", "-ExecutionPolicy", "Bypass"]

This uses a slightly older Windows container image since the continuous integration agent runs Windows Server 2016.

As with this example, you can construct a Dockerfile with whatever tools and libraries you need and either publish it to your own container registry or have developers build and maintain images from source. You could even simplify the process of building and running within the image using docker-compose. The image from this Dockerfile took only ~10 minutes to build and is ~22.5GB expanded (which includes the base image size of ~9GB). Depending on your network bandwidth, building the image yourself could be faster, or at least put less strain on your network infrastructure.
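
As a quick reference, building and running an image from a Dockerfile like the one above takes only a couple of Docker commands. This is a minimal sketch; the tag name is a placeholder, and the -m switch gives the build container more memory than the Windows default, which the Visual Studio installer needs:

# Build the image from the Dockerfile in the current directory.
docker build -t buildtools2017:latest -m 2GB .

# Start an interactive developer command prompt inside the new image.
docker run -it buildtools2017:latest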

Daily planning made simple with Emily Ley


Emily Ley believes perfect is overrated. When life gets overly busy, rather than strive in vain to check every box every day, Ley’s approach is to scale back. Simplify. Make intentional choices. Plan purposefully. And don’t be afraid to fail.

These are the principles that drive Ley’s daily life as well as her namesake business. What started in South Florida in 2008 as a small design shop has blossomed into a thriving boutique brand that now ships artfully designed organizational products to eager customers worldwide—everything from planning tools to stationery, office necessities, small storage items, and more.

Her signature product, The Simplified Planner, got off the ground after her first son was born in 2011. Ley says she went in search of the perfect planner but was left dissatisfied. “Everything I found was full of extra checklists and other features I didn’t need,” she says. “All those boxes that went unchecked or empty every day made me feel even more like I wasn’t measuring up. What I desperately wanted was a fresh start every morning. A place to plan my day, keep track of my to-dos, and keep track of dinners. I just wanted something simple. So, I set out to create my own. With a Sharpie and blank paper in hand, The Simplified Planner was born.”

In addition to her own organizational products, Ley uses the productivity tools in Office 365 to keep her day-to-day creative activities on track. She says she and her team use Word to draft creative briefs and production plans—including things like what files need to be created, who’s responsible for what, and, of course, deadlines. “The Review features in Word are fantastic,” says Ley, who also wrote her two books—Grace Not Perfection: Embracing Simplicity, Celebrating Joy (2016), and A Simplified Life: Tactical Tools for Intentional Living (2017)—in Word. “They really help my editor and her team streamline the process of editing and proofreading.”

Excel also plays a crucial role in her daily life. In addition to using Excel as the family address book and to keep track of annual family budgets, Ley uses it to manage the manufacturing, ordering, and importing of her products. “With multiple manufacturers and multiple shipments—each with its own production, shipment, and receiving deadlines—things can get complicated quickly. The features in Excel really help to keep track of both the overarching view of things, as well as the details of each shipment,” says Ley.

She also notes the importance of being able to customize Excel sheets to make them more visual. “Being able to manipulate formulas as well as colors, borders, and the sizes of each cell helps to create spreadsheets that are dynamic as well as visually organized. This is especially helpful in a fast-paced industry. [You can] set it up once and use that template for future projects.”

Speaking of future projects, when asked how her growing family has informed the way she designs new items, the answer is surprising—or perhaps not, given Ley’s plan-purposefully philosophy. “As my family life has gotten busier, The Simplified Planner has gotten simpler. Every year, we look at its pages and examine each feature and each mark on the pages. ‘Does this absolutely need to be here?’ Or, ‘Could we simplify the page in an intentional way, making white space or margin for what really matters?’”

What really matters: it’s something Ley learned at an early age from her parents, and she tries to carry it through all aspects of her life today. She says the secret to staying organized is in focusing on the right things and in using the right tools—and in not trying to be perfect. “I really see mistakes and mess-ups as opportunities for great things. If we don’t let ourselves off the hook for being imperfect, then what fun is life?

“We often think about pursuing dreams as being really, really complicated or big. But the truth is, pursuing our passions is quite simple: one foot in front of the other. Little by little, our tiny decisions and actions can produce great joy in our lives.”

See what’s new in Office 365.

The post Daily planning made simple with Emily Ley appeared first on Microsoft 365 Blog.

Top Stories from the Microsoft DevOps Community – 2018.06.15

TechBeacon has created the DevOps 100 list – the top 100 leaders, practitioners, and experts in DevOps that you should be following. It’s no surprise that Microsoft employees made an impressive showing on this list: from the VSTS team, Sam Guckenheimer is the man to follow. For the DevOps Cloud Developer Advocates, Donovan Brown... Read More

Interpreting machine learning models with the lime package for R


Many types of machine learning classifiers, not least commonly-used techniques like ensemble models and neural networks, are notoriously difficult to interpret. If the model produces a surprising label for any given case, it's difficult to answer the question, "why that label, and not one of the others?".

One approach to this dilemma is the technique known as LIME (Local Interpretable Model-Agnostic Explanations). The basic idea is that while for highly non-linear models it's impossible to give a simple explanation of the relationship between any one variable and the predicted classes at a global level, it might be possible to assess which variables are most influential on the classification at a local level, near the neighborhood of a particular data point. A procedure for doing so is described in this 2016 paper by Ribeiro et al., and implemented in the R package lime by Thomas Lin Pedersen and Michael Benesty (a port of the Python package of the same name). 

You can read about how the lime package works in the introductory vignette Understanding Lime, but this limerick by Mara Averick also sums things up nicely:

There once was a package called lime,
Whose models were simply sublime,
It gave explanations for their variations,
One observation at a time.

"One observation at a time" is the key there: given a prediction (or a collection of predictions) it will determine the variables that most support (or contradict) the predicted classification.


The lime package also works with text data: for example, you may have a model that classifies the sentiment of a paragraph of text as "negative", "neutral", or "positive". In that case, lime will determine the words in that text that are most important to determining (or contradicting) the classification. The package also helpfully provides a Shiny app that makes it easy to test out different sentences and see the local effect of the model.

To learn more about the lime algorithm and how to use the associated R package, a great place to get started is the tutorial Visualizing ML Models with LIME from the University of Cincinnati Business Analytics R Programming Guide. The lime package is available on CRAN now, and you can always find the latest version at the GitHub repository linked below.

GitHub (thomasp): lime (Local Interpretable Model-Agnostic Explanations)

 

 

 

Because it’s Friday: Olive Garden Bot


Comedy writer Keaton Patti claims this commercial script for a US Italian restaurant chain was generated by a bot:

I forced a bot to watch over 1,000 hours of Olive Garden commercials and then asked it to write an Olive Garden commercial of its own. Here is the first page. pic.twitter.com/CKiDQTmLeH

— Keaton Patti (@KeatonPatti) June 13, 2018

Of course this wasn't bot-generated, but "what a bot might write" is fertile ground for comedy:

I forced a bot to read over 1,000 tweets claiming to be scripts written by bots online, then asked it to write a script itself. It wrote these two pages, then hung itself. pic.twitter.com/KbFXStJuAb

— Christine Love (@christinelove) June 1, 2018

That's all from us here at the blog for this week. Have a great weekend, and we'll be back next week.

 

Penny Pinching in the Cloud: Deploying Containers cheaply to Azure


I saw a tweet from a person on Twitter who wanted to know the easiest and cheapest way to get a web application that's in a Docker container up to Azure. There's a few ways and it depends on your use case.

Some apps aren't web apps at all, of course, and just start up in a stateless container, do some work, then exit. For a container like that, you'll want to use Azure Container Instances. I did a show and demo on this for Azure Friday.

Azure Container Instances

Using the latest Azure CLI  (command line interface - it works on any platform), I just do these commands to start up a container quickly. Billing is per-second. Shut it down and you stop paying. Get in, get out.

Tip: If you don't want to install anything, just go to https://shell.azure.com to get a bash shell, and you can do these commands there, even on a Chromebook.

I'll make a "resource group" (just a label to hold stuff, so I can delete it en masse later). Then "az container create" with the image. Note that that's a public image from Docker Hub, but I can also use a private Container Registry or a private one in Azure. More on that in a second.

Anyway, make a group (or use an existing one), create a container, and then either hit the IP I get back or I can query for (or guess) the full name. It's usually dns-name-label.location.azurecontainer.io.

> az group create --name someContainers --location westus

Location Name
---------- --------------
westus someContainers
> az container create --resource-group someContainers --name fancypantscontainer --image microsoft/aci-helloworld --dns-name-label fancy-container-demo --ports 80
Name ResourceGroup ProvisioningState Image IP:ports CPU/Memory OsType Location
------------------- --------------- ------------------- ------------------------ ---------------- --------------- -------- ----------
fancypantscontainer someContainers Pending microsoft/aci-helloworld 40.112.167.31:80 1.0 core/1.5 gb Linux westus
> az container show --resource-group someContainers --name fancypantscontainer --query "{FQDN:ipAddress.fqdn,ProvisioningState:provisioningState}" --out table
FQDN ProvisioningState
--------------------------------------------- -------------------
fancy-container-demo.westus.azurecontainer.io Succeeded

Boom, container in the cloud, visible externally (if I want) and per-second billing. Since I made and named a resource group, I can delete everything in that group (and stop billing) easily:

> az group delete -g someContainers 

This is cool because I can basically run Linux or Windows Containers in a "serverless" way. Meaning I don't have to think about VMs and I can get automatic, elastic scale if I like.

Azure Web Apps for Containers

ACI is great for lots of containers quickly, for bringing containers up and down, but I like my long-running web apps in Azure Web Apps for Containers. I run 19 Azure Web Apps today via things like Git/GitHub Deploy, publish from VS, or CI/CD from VSTS.

Azure Web Apps for Containers is the same idea, except I'm deploying containers directly. I can do a Single Container easily or use Docker Compose for multiple.

I wanted to show how easy it was to set this up so I did a video (cold, one take, no rehearsal, real accounts, real app) and put it on YouTube. It explains "How to Deploy Containers cheaply to Azure" in 21 minutes. It could have been shorter, but I also wanted to show how you can deploy from both Docker Hub (public) or from your own private Azure Container Registry.

I did all the work from the command line using Docker commands where I just pushed to my internal registry!

> docker login hanselregistry.azurecr.io

> docker build -t hanselregistry.azurecr.io/podcast .
> docker push hanselregistry.azurecr.io/podcast

Took minutes to get my podcast site running on Azure in Web Apps for Containers. And again - this is the penny pinching part - keep control of the App Service Plan (the VM underneath the App Service) and use the smallest one you can and pack the containers in tight.
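
If you prefer the command line for this part too, here's a rough sketch of creating the plan and the Web App for Containers with the Azure CLI. The plan and site names are placeholders, and the image is the one pushed above; for a private registry you would also supply credentials with az webapp config container set:

# Create a small Linux App Service plan (B1) to pack containers into.
az appservice plan create --name myContainerPlan --resource-group someContainers --sku B1 --is-linux

# Create the web app from the container image in the registry.
az webapp create --resource-group someContainers --plan myContainerPlan --name my-podcast-site --deployment-container-image-name hanselregistry.azurecr.io/podcast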

Watch the video, and note when I get to the part where I create an "App Service Plan." Again, that's the VM under a Web App/App Service. I have 19 smallish websites inside a Small (sometimes a Medium, I can scale it whenever) App Service. You should be able to fit 3-4 decent sites in small ones depending on memory and CPU characteristics of the site.

Click Pricing Plan and you'll get here:

Recommended Pricing tiers have many choices

Be sure to explore the Dev/Test tab on the left as well. When you're making a non-container-based App Service you'll see F1 and D1 for Free and Shared. Both are fine for small websites, demos, hosting your github projects, etc.

Free, Shared, or Basic Infrastructure

If you back up and select Docker as the "OS"...

Windows, Linux, or Docker

Again check out Dev/Test for less demanding workloads and note B1 - Basic.

B1 is $32.74

The first B1 is free for 30 days! Good to kick the tires. Then, as of the time of this post, it's US$32.74 (check pricing for different regions and currencies), but it has nearly 2 gigs of RAM. I can run several containers in there.

Just watch your memory and CPU and pack them in. Again, more money means better perf, but the original ask here was how to save money.

Low CPU and 40% memory

To sum up, ACI is great for per-second billing, spinning up n containers programmatically, and getting out fast, and App Service for Containers is an easy way to deploy your Dockerized apps. Hope this helps.


Sponsor: Check out dotMemory Unit, a free unit testing framework for fighting all kinds of memory issues in your code. Extend your unit testing with the functionality of a memory profiler.



© 2018 Scott Hanselman. All rights reserved.
     

Queries Hub Updates Generally Available

The New Queries Hub streamlines many of the existing features from the old hub and provides new capabilities to make it easier to get to the queries that are important to you. It is now generally available for VSTS customers and coming to TFS in the next major version. Expanded Directory pages: The left panel... Read More

Top 8 reasons to choose Azure HDInsight


Household names such as Adobe, Jet, ASOS, Schneider Electric, and Milliman are amongst hundreds of enterprises that are powering their Big Data Analytics using Azure HDInsight. Azure HDInsight launched nearly six years ago and has since become the best place to run Apache Hadoop and Spark analytics on Azure.

Here are the top eight reasons why enterprises are choosing Azure HDInsight for their big data applications:

1. Fully managed cluster service for Apache Hadoop and Spark workloads: Spin up Hive, Spark, LLAP, Kafka, HBase, Storm, or R Server clusters within minutes, deploy and run your applications, and let HDInsight do the rest (see the CLI sketch under "Get started" below). We will monitor the cluster and all the services, detect and repair common issues, and respond to issues 24/7.

2. Guaranteed high availability (99.9 percent SLA) at large scale: Run your most critical and time sensitive workloads across thousands of cores and TBs of memory under the assurance of an industry-leading availability SLA of 99.9 percent for the whole software stack. Your big data applications can run more reliably as your HDInsight service monitors the health and automatically recovers from failures.

3. Industry-leading end to end security and compliance: Protect your most sensitive enterprise data assets using the full spectrum of security technologies at your disposal. Isolate your HDInsight cluster within VNETs and take advantage of transparent data encryption. Develop rich role-based access policies using Apache Ranger and restrict access to your most critical data and applications. Achieve peace of mind knowing that your enterprise data assets are being handled and protected by a service that has received more than 30 industry standard certifications including ISO, SOC, HIPAA, PCI, and more.

4. Valuable applications available on Azure Marketplace: Pick from more than 30 popular Hadoop and Spark applications. Within several minutes, Azure HDInsight deploys the applications to the cluster.

5. Productive platform for analytics: Data engineers, data scientists, and BI analysts can build their Hadoop/Spark applications using their favorite development tools (Visual Studio, Eclipse, or IntelliJ), notebooks (Jupyter or Zeppelin), languages (Scala, Python, R, or C#), and frameworks (Java or .NET).

6. Enterprise-scale R for machine learning: Your data scientists can train more accurate models for better predictions in a shorter time by using Microsoft R Server for HDInsight. The multi-threaded math libraries and transparent parallelization in R Server make it possible to handle thousands of times more data, at up to 50 times the speed of open-source R.

7. Global availability: HDInsight is deployed in more than 26 public regions and multiple government clouds across the world, so you will always find it in a data center near you.

8. High value for a low price: We know that cost is a very important consideration when running big data analytics. So, all the above value of Azure HDInsight is now available at half the price.

Get started

There is more to come as we prepare to bring the latest innovations in the Hadoop and Spark world to Azure HDInsight. You can read this developer guide and follow the quickstart section to learn more about implementing your big data applications on Azure HDInsight. Follow us on @AzureHDInsight or HDInsight blog for the latest updates. For questions and feedback, reach out to AskHDInsight@microsoft.com.
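
As a rough illustration of reason 1 above, here is what spinning up a Spark cluster from the command line might look like. This is a hedged sketch, assuming the az hdinsight commands are available in your version of the Azure CLI; all names, passwords, and sizes are placeholders you would replace with your own:

# Resource group and storage account for the cluster.
az group create --name myHDIGroup --location westus2
az storage account create --name myhdistorage --resource-group myHDIGroup --location westus2

# Create a Spark cluster; HDInsight manages the nodes and services from here on.
az hdinsight create --name my-spark-cluster --resource-group myHDIGroup \
    --type spark \
    --http-user admin --http-password "<cluster-login-password>" \
    --ssh-user sshuser --ssh-password "<ssh-password>" \
    --workernode-count 3 --storage-account myhdistorage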

Announcing the general availability of Azure SQL Data Sync


We are delighted to announce the general availability (GA) of Azure SQL Data Sync! Azure SQL Data Sync allows you to synchronize data between Azure SQL Database and any other SQL endpoints, unidirectionally or bidirectionally. It enables hybrid SQL deployment and allows local data access from both Azure and on-premises applications. It also allows you to deploy your data applications globally with a local copy of data in each region, and keep data synchronized across all the regions. This can significantly improve application response time and reliability by eliminating the impact of network latency and connection failures.


What’s new in Azure SQL Data Sync

With the GA announcement, Azure SQL Data Sync supports some new capabilities:

  • Better configuration experience – A more reliable configuration workflow and a more intuitive user experience.
  • More reliable and faster database schema refresh – Load database schemas more efficiently using the new SMO library.
  • More secure data synchronization – We reviewed the end-to-end sync workflow and ensured user data is always encrypted at rest and in transit. The Data Sync service now meets GDPR compliance requirements.

Get started today – Try out Azure SQL Data Sync

If you are building a hybrid platform or global distributed application using Azure SQL Database and SQL Server, we encourage you to try out Azure SQL Data Sync today. Check out the Azure SQL Data Sync documentation to get started, and get more details from the following content:

If you have any ideas or suggestions for data sync service, please share your feedback with us.

Siphon: Streaming data ingestion with Apache Kafka


Data is at the heart of Microsoft’s cloud services, such as Bing, Office, Skype, and many more. As these services have grown and matured, the need to collect, process and consume data has grown with it as well. Data powers decisions, from operational monitoring and management of services, to business and technology decisions. Data is also the raw material for intelligent services powered by data mining and machine learning.

Most large-scale data processing at Microsoft has been done using a distributed, scalable, massively parallelized storage and computing system that is conceptually similar to Hadoop. This system supported data processing using a batch processing paradigm. Over time, the need for large scale data processing at near real-time latencies emerged, to power a new class of ‘fast’ streaming data processing pipelines.

Siphon – an introduction

Siphon was created as a highly available and reliable service to ingest massive amounts of data for processing in near real-time. Apache Kafka is a key technology used in Siphon, as its scalable pub/sub message queue. Siphon handles ingestion of over a trillion events per day across multiple business scenarios at Microsoft. Initially Siphon was engineered to run on Microsoft’s internal data center fabric. Over time, the service took advantage of Azure offerings such as Apache Kafka for HDInsight, to operate the service on Azure.

Here are a few of the scenarios that Siphon supports for Microsoft:

O365 Security: Protecting Office 365 customers’ data is a critical part of the business. A critical aspect of this is detecting security incidents in near real-time, so that threats can be responded to in a timely manner. For this, a streaming processing pipeline processes millions of events per second to identify threats. The key scenario requirements include:

  • Ingestion pipeline that reliably supports multiple millions of events/second
  • Reliable signal collection with integrated audit and alert
  • Support O365 compliance certifications such as SOC and ISO

For this scenario, Siphon supports ingestion of more than 7 million events/sec at peak, with a volume over a gigabyte per second.

O365 SharePoint Online: To power analytics, product intelligence, as well as data-powered product features, the service requires a modern and scalable data pipeline for connecting user activity signals to the downstream services that consume these signals for various use cases for analytics, audit, and intelligent features. The key requirements include:

  • Signals are needed in near real-time, with end to end latency of a few seconds
  • Pipeline needs to scale to billions of events per day 
  • Support O365 compliance and data handling requirements

Siphon powers the data pub/sub for this pipeline and is ramping up in scale across multiple regions. Once the service was in production in one region, it was an easy task to replicate it in multiple regions across the globe.

MileIQ: MileIQ is an app that enables automated mileage tracking. On the MileIQ backend, there are multiple scenarios requiring scalable message pub/sub:

  • Dispatching events between micro-services
  • Data integration to the O365 Substrate
  • ETL data for analytics

MileIQ is onboarding to Siphon to enable these scenarios, which require near real-time pub/sub for tens of thousands of messages/second, with guarantees on reliability, latency, and data loss.

Siphon architecture

Siphon provides reliable, high-throughput, low-latency data ingestion capabilities, to power various streaming data processing pipelines. It functions as a reliable and compliant enterprise-scale ‘Data Bus.’ Data producers can publish data streams once, rather than to each downstream system; and data consumers can subscribe to data streams they need. Data can be consumed either via streaming platforms like Apache Spark Streaming, Apache Storm, and more, or through Siphon connectors that stream the data to a variety of destinations.

A simplified view of the Siphon architecture:

Siphon architecture diagram
The core components of Siphon are the following:

  • Siphon SDK: Data producers send data to Siphon using this SDK, which supports schematizing, serializing, batching, retrying, and failover.
  • Collector: This is a service with an HTTPS endpoint for receiving the data. It provides authentication, routing, throttling, monitoring, and load balancing/failover.
  • Apache Kafka: One or more Kafka clusters are deployed as needed for the scenario requirements.
  • Connectors: A service that supports config-driven movement of data from Siphon to various destinations, with support for filtering, data transformation, and adapting to the destination’s protocol.

These components are deployed in various Microsoft data centers / Azure regions to support business scenarios. The entire system is managed as a multi-user/multi-tenant service with a management layer including monitoring and alerting for system health, as well as an auditing system for data completeness and latency.

Siphon’s journey to HDInsight

When the Siphon team considered what building blocks they needed to run the service on Azure, the Apache Kafka for HDInsight service was an attractive component to build on. The key benefits are:

  • Managed service: The HDInsight service takes care of Apache Kafka cluster creation, keeping the clusters up and running, and routine maintenance and patching, with an overall SLA of 99.9 percent.
  • Compliance: HDInsight meets a number of security and compliance requirements and is a good foundation from which Siphon could build additional capabilities needed to meet the stringent needs of services like Office 365.
  • Cost: Innovations such as integration of the Kafka nodes with Azure Managed Disks enable increased scale and reduced cost without sacrificing reliability.
  • Flexibility: HDInsight gives the flexibility to customize the cluster both in terms of the VM type and disks used, as well as installation of custom software, and tuning the overall service for the appropriate cost and performance requirements.

Siphon was an early internal customer for the Apache Kafka for HDInsight (preview) service. Implementation of the Azure Managed Disk integration enabled lowering the overall cost for running this large scale ‘Data Bus’ service.

Siphon currently has more than 30 HDInsight Kafka clusters (with around 600 Kafka brokers) deployed in Azure regions worldwide and continues to expand its footprint. Cluster sizes range from 3 to 50 brokers, with a typical cluster having 10 brokers, with 10 disks attached to each broker. In aggregate, these Siphon clusters support ingesting over 4 GB of data per second at peak volumes.

Apache Kafka for HDInsight made it easy for Siphon to expand to new geo regions to support O365 services, with automated deployments bringing down the time to add Siphon presence in a new Azure region to hours instead of days.
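
For readers who want to experiment with the underlying building block, working with a Kafka on HDInsight cluster from a head-node SSH session looks roughly like the sketch below. The topic name is a placeholder, and the ZooKeeper and broker host lists come from your own cluster (for example, via Ambari):

# Create a topic with 3 replicas and 8 partitions (size these to your workload).
/usr/hdp/current/kafka-broker/bin/kafka-topics.sh --create \
    --zookeeper "$ZOOKEEPER_HOSTS" --replication-factor 3 --partitions 8 --topic siphon-demo

# Publish a few test events, then read them back.
/usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh \
    --broker-list "$KAFKA_BROKERS" --topic siphon-demo
/usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh \
    --bootstrap-server "$KAFKA_BROKERS" --topic siphon-demo --from-beginning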

Conclusion

Data is the backbone of Microsoft's massive scale cloud services such as Bing, Office 365, and Skype. Siphon is a service that provides a highly available and reliable distributed Data Bus for ingesting, distributing and consuming near real-time data streams for processing and analytics for these services. Siphon relies on Apache Kafka for HDInsight as a core building block that is highly reliable, scalable, and cost effective. Siphon ingests more than one trillion messages per day, and plans to leverage HDInsight to continue to grow in rate and volume.

Get started with Apache Kafka on Azure HDInsight.

Dive into blockchain for healthcare with the HIMSS blockchain webinar


Excitement around the potential of blockchain in healthcare has reached all-time highs and is accelerating. There is a hunger building across the industry for real use cases with real healthcare organizations transacting on blockchain, and for proof points, case studies, or success stories that outline business values sought, results achieved, what worked well, and what needs improvement. Only through such hands-on experience can we validate the true potential of blockchain in healthcare, refine its application, improve any deficiencies identified, build trust, and scale it, both in terms of the networks of healthcare organizations participating in various blockchain initiatives and in terms of applying it to additional healthcare use cases.

The HIMSS Blockchain Work Group is a forum of leaders from across healthcare, including providers, payers, pharmaceuticals, life sciences, and industry experts worldwide, collaborating to advance the applications of blockchain in healthcare. I have the honor of participating in this group. At the recent HIMSS18 conference in Las Vegas I had the privilege of moderating a session, organized by the HIMSS Blockchain Work Group, with a panel of worldwide experts on blockchain in healthcare. If you missed that event you can now view the video on demand. At this event we had a great, candid, deep discussion of the real value and challenges we are currently seeing in the application of blockchain across healthcare. The event was a great success and we had a full house, with all seats taken and three rows of people standing at the back! We also had more questions than we could get to during the time allocated. Clearly there is great energy, interest, and excitement about blockchain in healthcare, and opportunity for further collaboration across the industry to accelerate the realization of the associated business benefits, from reducing healthcare costs to improving patient outcomes.

I was honored to moderate a follow-on webinar session, also hosted by the HIMSS Blockchain Working Group, with this same panel of healthcare blockchain experts on June 4: Blockchain Reset - Seeing Through the Hype and Starting Down the Path. In this webinar each of the panel of experts shared their direct experience in piloting or preparing to pilot blockchain in healthcare this year. We heard about a range of healthcare use cases where blockchain has compelling business value, from provider credentialing, to provider directory, clearinghouse, and identity by consensus. We also discussed deployment considerations that are key to success including: privacy, security, compliance, interoperability, integration, identity, performance, and deployment options from on premises to the cloud. If you missed this webinar it is now available on demand at: Blockchain Reset: Seeing through the Hype and Starting down the Path, Part 2.

If you are in healthcare and interested in blockchain I highly recommend viewing this webinar on demand to hear from experts at the forefront of piloting blockchain in healthcare. If you have any additional feedback or questions, feel free to reach out to me.

Many healthcare organizations are talking about and considering blockchain. Those that get started first, testing and incrementally improving to evolve and maximize healthcare business benefits, will emerge as early leaders. If you would like to get started with prototyping blockchain for your healthcare use case, I recommend taking a look at Azure Blockchain Workbench, which enables rapid prototyping and deployment to the Azure cloud, letting you focus on your pilot, business goals, and application logic rather than blockchain technology.

If you have a different healthcare use case in mind and would like to see if it is suitable for blockchain, I also recommend taking a look at the blog Blockchain in Health: Beyond the Hype in a Trusted Cloud. It outlines how to test your use case against the FITS (Fraud-Intermediaries-Throughput-Stability) model as well as the healthcare Quadruple Aim Outcomes to identify its business value proposition(s).

Microsoft is actively working with multiple healthcare organizations and industry partners on the application of blockchain in healthcare. Reach out to me if you would like to connect to introduce and explore synergies and opportunities for collaboration.

Blockchain in healthcare is fast evolving. I post regularly about new developments on social media. If you would like to follow me you can find me on LinkedIn and Twitter.

What other input, questions, or feedback do you have on healthcare use cases for blockchain? Feel free to leave comments below.

Azure Marketplace new offers: May 16-31


We continue to expand the Azure Marketplace ecosystem. From May 16th to 31st, 31 new offers successfully met the onboarding criteria and went live. See details of the new offers below:

AionNode

AionNode: Fully functional AionNode with running Kernel.

AISE TensorFlow CPU Production

AISE TensorFlow CPU Production: A fully integrated deep learning software stack with TensorFlow, an open-source software library for machine learning, and Python, a high-level programming language for general-purpose programming for running on CPU.

Azure AD Connect Server 2016

Azure AD Connect Server 2016: Add your Active Directory details and begin syncing to Azure Active Directory. Azure AD Connect will integrate your on-premises directories with Azure Active Directory. Provide a common identity for your users for Office 365, Azure, and SaaS applications.

cuberubuntu

cuberubuntu: cuberubuntu images.

F5 BIG-IQ Virtual Edition

F5 BIG-IQ Virtual Edition: F5 BIG-IQ provides a central point of control for F5 physical and virtual devices and for the solutions that run on them. It simplifies management, helps ensure compliance, and gives you the tools you need to deliver applications securely and effectively.

FileZilla Secure FTP Server Windows 2016

FileZilla Secure FTP Server Windows 2016: FileZilla Server is a full-featured FTP server with support for secure SSL/TLS connections, IP security, anti-FXP options, per-user speed limits, user groups, and MODE-Z compression. It provides users with a plain but easy-to-use interface.

GigaSECURE Cloud 5.3.01 - Hourly (100 pack)

GigaSECURE Cloud 5.3.01 - Hourly (100 pack): GigaSECURE Cloud delivers intelligent network traffic visibility for workloads running in Azure and enables increased security, operational efficiency, and scale across virtual networks. Optimize costs with up to 100 percent visibility.

hive - Azure Self Service Portal

hive - Azure Self Service Portal: hive has removed bottlenecks, eliminated human errors, and reduced the VM request time from one week to one hour. Our workflows include: Lists all Virtual Machines in Azure IaaS; Allows users to request new VMs without Azure Portal access; and more!

JEUS 8 Enterprise Edition

JEUS 8 Enterprise Edition: JEUS has been certified from J2EE 1.4 to Java EE 6 and Java EE 7. Based on 500 customers in 2003, it was a leading WAS solution in the Korean market. In 2016, we had more than 2,700 customers and were among the leaders in market share for five years.

JEUS 8 Standard Edition

JEUS 8 Standard Edition: JEUS is a web application server (WAS) for developing and operating applications. JEUS has been certified from J2EE 1.4 to Java EE 6 and Java EE 7. Based on 500 customers in 2003, it has become a leading WAS solution in the Korean market.

KONG Certified by Bitnami

Kong Certified by Bitnami: Kong is an open-source API gateway and microservice management layer. It is designed for high availability, fault-tolerance, and distributed systems. Bitnami certifies that our images are secure, up to date, and packaged using industry-best practices.

LiquidFiles

LiquidFiles: Secure messages: Send files of unlimited size securely to your customers, partners, or internally. File drops: Pre-defined URLs you can keep in your email signature, post on your web page, post in forums, or send in private messages. Plus file requests, sharing, and more!

Looker Analytics Platform

Looker Analytics Platform: Looker is a web-based business intelligence platform that brings people and data together, in context. Purpose-built for the next generation of analytic databases in the cloud, Looker puts actionable data in the hands of the people who need it most.

Midfin Systems Neon Cloud Gateway

Midfin Systems Neon Cloud Gateway: Neon is a hybrid cloud networking solution that allows you to seamlessly and securely extend your on-premises networks to public clouds. Unlike traditional VPN solutions, Neon stretches your on-premises networks to the Azure cloud.

Neural Designer

Neural Designer: Neural Designer is a software tool for advanced analytics. It is based on neural networks, which are considered the most powerful technique for data analysis. Neural Designer stands out in terms of performance. It is developed in C++ for better memory management.

Postgres Pro Enterprise Database 10

Postgres Pro Enterprise Database 10: Postgres Pro Enterprise is a high-performance PostgreSQL-based database. It includes features on top of PostgreSQL to make the database enterprise-ready, including data compression to minimize data footprint, allowing databases to run faster.

Postgres Pro Standard Database 10

Postgres Pro Standard Database 10: Postgres Pro is a PostgreSQL fork. Each new Postgres Pro version is an actual PostgreSQL version with some patches committed to PostgreSQL community and Postgres Professional extensions and patches, which are open source in most cases.

Pulse Connect Secure

Pulse Connect Secure: Enable secure access from any device to enterprise apps and services in the datacenter or cloud. This is the entry-level Pulse Connect Secure product with 200 concurrent user support.

Solar inCode

Solar inCode: Reveal source code vulnerabilities without the source code. Just deploy the inCode VM in your Azure infrastructure, email incode-presale@solarsecurity.ru to request your license, then put the license file license.xml in the folder /home/incode/files on the virtual machine.

Stratusphere UX

Stratusphere UX: Stratusphere UX provides complete Windows desktop monitoring, diagnostics, performance validation, and optimization. The solution supports all Windows-based delivery approaches, including virtual and mixed-platform (Microsoft RDS/RDMI) desktop environments.

totemomail email encryption gateway

totemomail email encryption gateway: totemomail offers easy and user-friendly encryption for your email and is the solution that covers every possible scenario: It encrypts emails with internal and external communication partners, enables the sending of large attachments, and more!

Web Safety Proxy

Web Safety Proxy: Web Safety is a web filtering proxy appliance for Microsoft Azure. It may be used to block potentially illegal or malicious file downloads, remove annoying advertisements, prevent access to various categories of the web sites, and block resources with explicit content.

Xpert BI 2.6.1.0

Xpert BI 2.6.1.0: Xpert BI is a Microsoft SQL server-based tool that automates and improves all aspects of data warehousing and business intelligence and transformation processes, also known as ELT. Xpert BI extracts data from various sources and creates a central structured database.

 

Microsoft Azure Applications

AltitudeCDN OmniCache (eCDN)

AltitudeCDN OmniCache (eCDN): AltitudeCDN OmniCache is an intelligent enterprise video caching solution for live video and video on demand. This software-defined networking solution usually deploys on existing infrastructure and supports caching of all streaming protocols.

Backup Exec 20 - Standard

Backup Exec 20 - Standard: Veritas Backup Exec provides simple, rapid, and secure offsite backup to Azure for your in-house virtual and physical environments and also protects cloud-based workloads in Azure. Veritas and Microsoft have been working together for more than 25 years.

Cisco vEdge Cloud Router Solution - 4 Nics

Cisco vEdge Cloud Router Solution (4 Nics): Cisco vEdge Cloud is a software router platform that supports an entire range of capabilities available on the physical vEdge router platforms. The vEdge Cloud router is a VM that can be deployed in hybrid cloud computing environments.

Conduit

Conduit: Conduit provides businesses with a secure data transit solution that connects data systems and sources to any business intelligence visualization tools. Conduit unifies your data analytics initiatives, enables real-time data delivery, and operates in the cloud and on-premises.

OutSystems Trial

OutSystems Trial: Thousands of customers worldwide trust OutSystems, a low-code platform for rapid application development. Engineers with an attention to detail crafted the OutSystems platform to help organizations build enterprise-grade apps and transform their business faster.

Pivotal Greenplum - BYOL - by Pivotal Software Inc

Pivotal Greenplum (BYOL) by Pivotal Software Inc: Advanced analytics meets traditional business intelligence with Pivotal Greenplum, a fully featured, multi-cloud, massively parallel processing (MPP) data platform based on the open-source Greenplum Database.

Pivotal Greenplum - Hourly - by Pivotal Software Inc

Pivotal Greenplum (Hourly) by Pivotal Software Inc: Pivotal Greenplum provides comprehensive and integrated analytics on multi-structured data. Powered by advanced cost-based query optimizers, Pivotal Greenplum delivers analytical query performance on massive volumes of data.

STASHit

STASHit: Centralized Windows event logging made easy. Need to log events like password resets or access to specific folders/files to comply to your internal or GDPR rules? No problem. Save your events in a dedicated, long-term Microsoft Azure Log Analytics workspace.

Azure Blockchain Workbench 1.1.0 extends capabilities and monitoring


Last month, we announced the public preview release of Azure Blockchain Workbench, which greatly accelerates blockchain application development. Since launch, we’ve seen a tremendous amount of engagement and enthusiasm from our customers and partners with thousands of different blockchain apps processing tens of thousands of transactions. The feedback has been great as well, and today we’re excited to announce the first major update to Workbench, which we’re calling version 1.1.0. As part of this release, we are making an upgrade script available which you can use to update your existing Workbench deployment. Of course, you can always get the latest version from Azure Portal directly if you want to do a new deployment.

This update includes the following improvements:

Multi workflow and contract support

You may have noticed that our Blockchain Workbench configuration documentation and Workbench API reference the ability to have multiple workflows within one application. Our initial release didn't provide the UI to showcase more than one workflow per application, but we've now added this feature. With 1.1.0, multiple workflows for a single application now show up in the Workbench UI.

In addition to this UI update, we have published a new Bazaar Marketplace sample application on our Workbench GitHub showcasing the use of multiple workflows and smart contracts. Try it out and let us know what you think.

Monitoring improvements

As described in the Blockchain Workbench architecture document, the Workbench DLT watcher monitors events occurring on the attached blockchain network. If something goes wrong with this service, Workbench will no longer be able to process and understand transactions going through the blockchain. In that state, the UI looks like this:

Screenshot of the Workbench UI in this state

With 1.1.0, we’ve improved the reliability of the watcher, which means if there is a disruption with the service, Workbench can recover and process all transactions missed during the disruption.

Usability and polish

We made several improvements to the overall usability of Workbench. New items will now show a new icon, which will make it easier for you to see new contracts, actions, members, etc.


Another improvement relates to how you find people when assigning roles or assignments within contracts. We now have a richer search algorithm, which will make it easier to find people when you only have a partial name.

Improvements and bug fixes

We also made several other improvements to Workbench. Some of the top bugs we addressed are:

  • Workbench is now able to support Azure Active Directories of any size.
  • All users who have the right to create contracts can create a contract, even if the user is not an administrator.
  • Deployment of Workbench is more reliable as we’ve addressed the top failure cases.
  • Database views have been fixed to return the right set of data to address a bug where a few columns did not display the correct information in a couple of views.

Please use our Blockchain User Voice to provide feedback and suggest features/ideas for Workbench. Your input is helping make this a great service.  We look forward to hearing from you!


Azure.Source – Volume 36


Now in preview

Azure Blob Storage lifecycle management in public preview - Both Blob-Level Tiering and Archive Storage help you optimize storage performance and cost. Blob-Level Tiering enables you to transition blobs between the Hot, Cool, and Archive tiers without moving data between accounts. In response to customer feedback, you can now automate blob tiering and retention with lifecycle management policies. Azure Blob Storage lifecycle management offers a rich, rule-based policy which you can use to transition your data to the best access tier and to expire data at the end of its lifecycle.
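
Lifecycle management policies themselves are just JSON rule sets applied at the storage-account level. As a hedged sketch of how you might apply one from the Azure CLI (assuming the management-policy commands are available in your CLI version; the account, group, and file names are placeholders, and policy.json would contain your rules, for example tier-to-Cool after 30 days and tier-to-Archive after 90):

# Apply the lifecycle rules defined in policy.json to a storage account.
az storage account management-policy create \
    --account-name mystorageaccount --resource-group myResourceGroup \
    --policy @policy.json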

Exciting advances in Azure Alerts – From better alert management to Smart Groups - Three new features in Azure Monitor enable you to enumerate alerts at scale across log, metric, or activity log alerts, filter alerts across subscriptions, manage alert states, look at alert instance-specific details, and troubleshoot issues faster using Smart Groups that automatically group related alerts. Observe alerts across Azure deployments with the new alert enumeration experience and API. Change the state of an alert to reflect the current situation of the issue in your environment with alert state management. To reduce alert noise and help mitigate events faster, Smart Groups encapsulate multiple related alerts.

Screenshot showing Azure Alerts Smart Groups in the Azure portal

Azure Alerts Smart Groups in the Azure portal

Hallelujah! Azure AD delegated application management roles are in public preview! - You can improve your security posture and reduce the potential for unfortunate mistakes by eliminating the need to grant people the Global Administrator role for things like configuring enterprise applications. We added support for per-application ownership, which enables you to grant full management permissions on a per-application basis. We also introduced a role that enables you to selectively grant people the ability to create application registrations.

Also in preview

Now generally available

Azure Kubernetes Service (AKS) GA – New regions, more features, increased productivity - Azure Kubernetes Service (AKS) is now generally available. We added five new regions including Australia East, UK South, West US, West US 2, and North Europe, so AKS is now generally available in ten regions across three continents with ten more regions coming online in the coming months.
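
For anyone who wants to kick the tires right away, here is a minimal sketch of standing up an AKS cluster with the Azure CLI; the group, cluster name, and region are placeholders:

# Create a resource group and a three-node AKS cluster.
az group create --name myAksGroup --location westus2
az aks create --resource-group myAksGroup --name myAksCluster --node-count 3 --generate-ssh-keys

# Fetch credentials for kubectl and confirm the nodes are ready.
az aks get-credentials --resource-group myAksGroup --name myAksCluster
kubectl get nodes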

Azure Friday | Episode 440 - Azure Kubernetes Service (AKS) GA - Brendan Burns joins Lara Rubbelke to discuss GA of Azure Kubernetes Service (AKS). Developers can drastically simplify how they build and run container-based solutions without deep Kubernetes expertise.

Also generally available

News and updates

Participate in the 15th Developer Economics Survey - The Developer Economics Q2 2018 survey is here in its 15th edition to shed light on the future of the software industry. Every year more than 40,000 developers around the world participate in this survey, so this is a chance to be part of something big, voice your thoughts, and make your own contribution to the developer community.

Quick Recovery Time with SQL Data Warehouse using User-Defined Restore Points - SQL Data Warehouse, a flexible and secure analytics platform for the enterprise optimized for running complex queries fast across petabytes of data, now supports user-defined restore points. You can initiate snapshots before and after significant operations on your data warehouse to ensure that each restore point is logically consistent, which limits the impact and reduces the recovery time of restoring the data warehouse should this be needed.

Container Tooling for Service Fabric in Visual Studio 2017 - The latest version of the Service Fabric tools, which is part of Visual Studio 2017 Update 7 (15.7), includes the new container tooling for Service Fabric feature. This new feature makes debugging and deploying existing applications, in containers, on Service Fabric easier than ever before.

Debugging Containers on Service Fabric in Visual Studio 2017

TLS Configuration now fixed to block 1.0 - We recently announced that all Azure App Service and Azure Functions apps could update TLS configuration. However, after deployment, an edge case scenario was identified involving SNI-SSL. This scenario led to SSL-analyzing tools, such as SSL Labs, showing that TLS 1.0 was still accepted even when a higher minimum version was selected. The deployment to solve the issue for SNI-SSL is complete. Reporting tools should now correctly indicate that lower versions of TLS (mainly TLS 1.0) are blocked.

Additional news and updates

Name changes and GUID migrations

The Azure Podcast

The Azure Podcast: Episode 233 - Live from Build 2018 - Serverless & IoT with Jeff Hollan - In this last of the shows recorded by our team at BUILD 2018, the guys talk to Senior Azure PM Jeff Hollan on all things IoT & Serverless.

Technical content and training

Eight Essentials for Hybrid Identity: #2 Choosing the right authentication method - The second essential for hybrid identity: when setting up single sign-on for your employees and partners to all their SaaS applications and on-premises apps and resources, the first thing to do is establish identities in the cloud. With identities as your control plane, authentication is the foundation for cloud access. Choosing the right authentication method is a crucial decision, but also one that’s different for every organization and might change over time. Catch up on the first essential here, Eight Essentials for Hybrid Identity: #1 A new identity mindset.

SmartHotel360 Microservices on Azure Kubernetes Service - To help you learn how to deploy microservices written in any framework to AKS, we updated the SmartHotel360 back-end microservices source code and deployment process to optimize it for AKS. Clone, fork, or download the AKS and Azure Dev Spaces demo on GitHub to learn more.

Bing Visual Search and Entity Search APIs for video apps - Bing Visual Search API enables you to use an image as a query to get information about what entities are in the image, along with a list of visually similar images from the image index built by Bing. Bing Entity Search API enables you to bring rich contextual information about people, places, things, and local businesses to any application, blog, or website for a more engaging user experience. By combining the power of these two APIs, you can build a more engaging experience in your video app.

Azure tips & tricks

Cloning Web Apps Using an Azure App Service

Configure a Backup for your Azure App Service and Database

Events

Service Fabric Community Q&A 25th Edition - We will be holding our 25th monthly community Q&A call this Thursday, June 21 at 10 AM Pacific Time (GMT-7). In addition, we will hold another call at 4 PM Pacific Time (GMT-7) this Thursday, June 21 to accommodate folks in other time zones.

Microsoft Azure Data welcomes attendees to ACM SIGMOD/PODS 2018 - ACM SIGMOD/PODS 2018 took place last week in Houston, Texas. In this blog post, Rohan Kumar, Corporate Vice President, Azure Data, shares some of the exciting work in data that’s going on in the Azure Data team at Microsoft, and invites conference attendees to take a closer look.

Azure Friday

Azure Friday | Episode 439 - Java in App Service on Linux - Shrirang Shirodkar joins Donovan Brown to discuss what's new in App Service for Java developers. You'll see how Java developers can build web apps for Linux without having to deal with custom Docker containers (i.e., how to use the Tomcat Docker containers provided out of the box), and how to use the new deployment mechanism in Kudu, 'wardeploy' (for both Linux and Windows). You'll learn what the problems are with existing deployment mechanisms and how wardeploy solves them.

Azure Friday | Episode 441 - Service Fabric Extension for VS Code - Peter Pogorski chats with Scott Hanselman about building Service Fabric applications with the Service Fabric for VS Code extension. This episode introduces the process of creating and debugging Service Fabric applications with the new Service Fabric extension for VS Code. The extension enables you to create, build, and deploy Service Fabric applications (e.g., C#, Java, Containers, and Guests) to local or remote clusters.

Customers and partners

Rendering in Azure with Qube 7 - Rendering is the most compute-intensive part of a movie production. To address this complexity, companies like PipelineFX offer render management solutions that simplify the management and control of render pipelines. PipelineFX recently announced their latest software release, Qube 7. Read this post to learn more about the 30-day trial of Qube and our low-priority Virtual Machines for getting a lot of rendering done without spending a lot of money.

New free Go-To-Market Services for all marketplace publishers - Starting March 1, 2018 all new listings in Azure Marketplace and AppSource were extended a set of free Go-To-Market (GTM) services. These benefits help our partners jumpstart discoverability of their offer in marketplaces. No action is required by partners to initiate these benefits. Upon listing, Microsoft will reach out to the partner to kick-off discussions.

Publish your solutions to Azure Government – what, why and how - The Azure Marketplace is the premier destination for all your software needs, optimized to run on Azure. The Azure Government Marketplace includes many of the same solutions for use in Azure Government, an exclusive instance of Microsoft Azure that enables government customers to safely transfer mission-critical workloads to the cloud. See this post to learn how partners can use this platform to reach new government customers.

The IoT Show

The IoT Show | Azure IoT Edge updates - Azure IoT Edge has been out for 6 months now and Chipalo Street, PM in the Azure IoT Edge team, comes back to the IoT Show to give us an update on all that happened since our intelligent edge technology was released.

The IoT Show | Custom Vision AI on Azure IoT Edge - Running artificial intelligence directly on IoT devices seems like something straight out of sci-fi movies, right? It's now something you can do pretty easily with Azure IoT Edge. And thanks to the Custom Vision Cognitive Service, you don't even need to be a data scientist to create AI Custom Vision models!

Developer spotlight

Tutorial: Deploy your ASP.NET Core App to Azure Kubernetes Service (AKS) with the Azure DevOps Project - The Azure DevOps Project presents a simplified experience where you bring your existing code and Git repository, or choose from one of the sample applications to create a continuous integration (CI) and continuous delivery (CD) pipeline to Azure.

Quickstart: Create a Kubernetes dev space with Azure Dev Spaces (.NET Core and Visual Studio) - Follow this quickstart to learn how to set up Azure Dev Spaces with a managed Kubernetes cluster in Azure, iteratively develop code in containers using Visual Studio, and debug code running in your cluster.

Quickstart: Create a Kubernetes dev space with Azure Dev Spaces (Node.js and VS Code) - Follow this quickstart to learn how to set up Azure Dev Spaces with a managed Kubernetes cluster in Azure, iteratively develop code in containers using VS Code, and debug code running in your cluster.

Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster - In this quickstart, you deploy an AKS cluster using the Azure portal. A multi-container application consisting of a web front end and a Redis instance is then run on the cluster. Once completed, the application is accessible over the internet.

A Cloud Guru: This week in Azure

Azure This Week - 15 June 2018 - In this episode of Azure This Week, James takes a look at Azure AD Conditional Access for blocking legacy authentication, backup for SQL Server instances running on Azure VMs, the updates included in the newest API release for Azure Container Instances and the announcement that Azure Kubernetes Service (AKS) is now generally available.

Artificial intelligence + human intelligence: Training data breakthrough


At Bing, AI is the foundation of our services and experiences. If you have ever been involved in any machine learning or AI project, you know that frequently the key to success is good training data (a set of labeled examples that helps train the algorithm). Getting enough high-quality training data is often the most challenging piece of building an AI-based service. Typically, data labeled by humans is of high quality (has relatively few mistakes) but comes at high cost - both in terms of money and time. On the other hand, automatic approaches allow for cheaper data generation in large quantities but result in more labeling errors ('label noise'). In this blog we describe a new, creative way of combining human-based and automatic labeling to generate large quantities of much lower-noise training data for visual tasks than was ever possible before.


This new technology, developed here at Microsoft and the subject of today’s post, has been at the foundation of the quality you have seen in various Bing multimedia services. To share the discoveries and empower a broader AI community to build better experiences for all of us, we will also present this technique at this year's Computer Vision and Pattern Recognition Conference (CVPR) taking place in Salt Lake City between June 19th and 21st.


Background

To explain the challenge in a bit more detail, let's use an example: imagine that we are trying to build an image classification system that could determine what category of object is shown in a given image. We want to be able to detect a number of such categories. For instance, we may want to detect different types of foods visible in the image. To build it, we need a lot of example images of food with corresponding category names. Once we train the model on our training data, we will be able to present the system with an image and it will produce a category corresponding to the type of food detected in the image.


We could try to use human labeling to create our training data: take example pictures of food for each of our categories and use that. It would work, but it would also be very expensive considering that we may need thousands if not millions of examples: each type of food can be portrayed in a multitude of different ways, and there are hundreds if not thousands of different food categories.


The typical approach to solving this problem would be to 'scrape' an existing search engine. What this means is that we would first collect a list of names of food categories we want to be able to identify in the images, possibly with their synonyms and variations. Then we would issue each such name as a query to an image search engine and collect resulting images, expecting that they would in fact portray food of the category we used in the query. Thus, if we wanted to be able to identify tuna in our images we could issue queries such as 'tuna', ‘tuna dishes’, or 'maguro' (tuna in Japanese), and collect corresponding images for training. We would do the same for any other category we wish to detect. This process can be automated so we can easily generate a lot of data for many categories.


The challenge

As you can see above, search engines are unfortunately not perfect – no matter which engine you use, if you scroll down far enough in the results you will start seeing some mistakes. Mistakes in the training data mislead our classifier during the training process. This in turn leads to inferior performance of the final system. So how can we correct those mistakes - and avoid having to look through all the scraped images to find the errors? One approach that is popular in the community would be to train another model ('cleaning model') specifically to find images which don't match their category (are 'mis-labeled'). The diagram below shows how such a cleaning model would be used.


This can be done, but we would need some number of correctly and incorrectly labeled examples for each food category, so that during training the 'cleaning model' can learn, for each category, what correctly and incorrectly labeled images look like and can later spot the errors. That is all doable if we have relatively few categories. But it becomes very problematic if we want to detect thousands, hundreds of thousands, or even more categories. Just labeling a few examples for each category manually would again be prohibitively expensive.


While AI scientists have come up with specialized 'outlier detection models' that could help here, they usually either require labeling all categories, which presents the scalability problem described above, or they require no labeling at all but then suffer from poor accuracy. Preferably we would like to have a bit of both: infuse the system with some amount of human intelligence in the form of labels while controlling the cost, and use it to improve the quality of an otherwise automatic cleaning model. Can this be done? That's exactly where our new approach comes in!


The solution

In our 'dream' solution, we would like a small subset of the labels generated by the web scraping process (described earlier) to be verified by humans, but not necessarily enough to cover all categories (also referred to as ‘classes’) or all possible types of errors. We would then like the system to learn from this data to detect labeling flaws for many other types of categories, specifically those for which we did not have labeled examples. With recent advances in AI, this is now possible.


The crux of the solution is to teach the AI model what a typical error looks like for a few example categories (such as 'Donuts', 'Onion Soup', 'Waffle' in the picture below) and then use techniques such as 'transfer learning' and ‘domain adaptation’ to get the model to intelligently use those 'learnings' in all the other categories where human labeled examples are not available ('Hamburger').


The way this is realized is that, behind the scenes, while the system is being trained, one part of the model effectively learns how to automatically ‘select’ (assign weights to) the images best representing each of the categories, and map them to a single high-dimensional vector. This forms the 'class embedding vector', a vector representing the given category. This part of the network is fed noisy, web-scraped training data during training.


At the same time, another part of the model is learning to embed each example image (call it 'query image') into the same high dimensional space. Both parts of the network are trained at the same time ('joint embedding framework') and the training process is further constrained in such a way as to nudge the system to make the class embedding vector and the query image vector similar to one another if the image is in fact a member of the category, and further apart if it is not.


Thanks to this arrangement, the system can now learn that some of the noisy, web-scraped examples that it was fed are probably incorrect and shouldn't be trusted. In the process, it also learns general patterns about the nature of each category and the best ways of finding representative images for it, and those patterns turn out to keep working well even on categories where no human-verified labels were present!


We realize that it may take a moment to wrap your head around what exactly is happening here, but such is the nature and beauty of modern AI today. Amazingly, the system works very well, and it learns to quite reliably identify correct and incorrect labels in all categories whether there were any human verified labels available or not.


Closing thoughts

The approach discussed in this post is already proving very effective in producing clean training data for image-related tasks. We believe it will be equally useful when applied to video, text, or speech. Our teams are already looking into it. Continue to check our blog for new developments! 

If you want a more in-depth look at this technology, check out our Microsoft Research blog post here or read the corresponding paper for all the technical details, which we'll be presenting soon at CVPR.

In the meantime, know that every time you use Bing you are consuming the fruits of these amazing technologies. And if you are also an AI fan, just go ahead and try out some of the described techniques to generate quality training data for your own projects!


Happy Training!

The Bing Team


Staying up-to-date with .NET Container Images


This post describes the container images that we produce and update for you, which you can use with Docker, Kubernetes, and other systems. When you are using .NET and Docker together, you are probably using the official .NET container images from Microsoft. We’ve made many improvements over the last year to the .NET images that make it easier for you to containerize .NET applications.

Last week during DockerCon 2018, I posted an update about Using .NET and Docker Together. It demonstrates how you can use Docker with .NET, for production, development and testing. Those scenarios are all based on the .NET container images on Docker Hub.

Faster Software Delivery

Docker is a game changer for acquiring and using .NET updates. Think back to just a few years ago. You would download the latest .NET Framework as an MSI installer package on Windows and not need to download it again until we shipped the next version. Fast forward to today. We push updated container images to Docker Hub multiple times a month. Every time you pull .NET images, you are getting updated software, an update to .NET and/or the underlying operating system, either Windows or Linux.

This new model of software delivery is much faster and creates a much stronger connection between software producer and consumer. It also gives you more control, but requires a bit more knowledge on how you acquire software, through Docker. It is important that you understand the Docker repos and tags that the software provider — in this case Microsoft — uses so that you get the exact versions and updates you want. This post is intended to provide you with the information you need to select the best versions of .NET images and tags for your needs.

Official images from Docker

Docker maintains official images, for operating systems and application platforms. These images are maintained by a combination of Docker, community developers and the operating system or application platform maintainers who are expert in Docker and the given technology (like Alpine).

Official images are:

  • Correctly and optimally configured.
  • Regularly maintained.
  • Shareable (in memory) with other applications.

.NET images are built using official images. We build on top of Alpine, Debian, and Ubuntu official images for x64 and ARM. By using official images, we leave the cost and complexity of regularly updating operating system base images and packages like OpenSSL, for example, to the developers that are closest to those technologies. Instead, our build system is configured to automatically build, test and push .NET images whenever the official images that we use are updated. Using that approach, we’re able to offer .NET Core on multiple Linux distros at low cost and release updates to you within hours. There are also memory savings. A combination of .NET, Java, and Node.js apps run on the same host machine with the latest official Debian image, for example, will share the Debian base image in memory.

.NET Images from Microsoft

.NET images are not part of the Docker official images because they are maintained solely by Microsoft. Windows Server images, which we also build on top of, are the same. Similar to Docker official images, we have a team of folks maintaining .NET images that are expert in both .NET and Docker. This results in the same benefits as described for Docker official images, above.

We maintain .NET images with the following model:

  • Push same-day image updates, when a new .NET version or base operating system image is released
  • Push images to Docker Hub only after successful validation in our VSTS CI system
  • Produce images that match the .NET version available in Visual Studio
  • Make pre-release software available for early feedback and use on microsoft/dotnet-nightly

We rely on Docker official maintainers to produce quality images in a timely manner so that our images are always up-to-date. We know you rely on us to do the same thing for .NET. We also know that many of you automatically rebuild your images, and the applications contained within them, when a new .NET image is made available. It is very important that this process works well, enabling your applications to always be running on the latest patched version of .NET and the rest of the software stack you have chosen to use. This is part of how we work together to ensure that .NET applications are secure and reliable in production.

.NET Docker Hub Repos

Docker Hub is a great service that stores the world’s public container images. When we first started pushing images to Docker Hub, we created fine-grained repositories. Having many fine-grained repos has its advantages, but discoverability is not one of them. We heard feedback that it was hard to find .NET images. To help with that, we reduced the number of repos we use. The current set of .NET Docker Hub repos follows:

.NET Core repos:

.NET Framework repos:

  • microsoft/dotnet-framework – includes .NET Framework runtime and sdk images.
  • microsoft/aspnet – includes ASP.NET runtime images, for ASP.NET Web Forms and MVC, configured for IIS.
  • microsoft/wcf – includes WCF runtime images configured for IIS.
  • microsoft/iis – includes IIS on top of the Windows Server Core base image. It works for, but is not optimized for, .NET Framework applications. The microsoft/aspnet and microsoft/wcf repos are recommended instead for running the respective application types.

Deprecated repositories

.NET Image Tags

The image tag you use in your Dockerfile is perhaps the most important artifact within your various Docker-related assets. The tag defines the underlying software that you expect to be using when you run docker build and that your app will be running on in production. The tag gives you a lot of control over the images you pull, but can also be a source of pain if the tag you are using doesn’t align with your needs.

There are four main options for tags, from most general to most specific, as illustrated in the Dockerfile sketch after this list:

  • latest version — The latest tag aligns with the default version of the software available from a repository (for whatever the repository maintainer considers the default – may or may not be the actual latest version). When you don’t specify a tag, your request pulls the latest tag. From one pull to the next, you might get software and/or the underlying operating system that has been updated by a major version. For example, when we shipped .NET Core 2.0, the latest tag (for Linux) jumped from .NET Core 1.1 to 2.0 and from Debian 8 to 9. Latest is great for experimentation, but not a good choice for anything else. You don’t want your application to use latest in an automated build.
  • minor version — A major.minor tag, such as 2.0-runtime or 2.1-sdk, locks you to a specific family of updates of software. These example tags will receive only .NET Core 2.0 or .NET Core 2.1 updates, respectively. You can expect to get patch updates to the software and the underlying operating system when you build with docker build --pull. We recommend this form of tag for most cases. It balances the competing concerns of ease of use and the risk of updates. The 4.7.2-sdk .NET Framework tag, for example, matches this tag style.
  • patch version — A major.minor.patch tag, such as 2.0.7-runtime, locks you to a specific patch version of software. This is great from a predictability standpoint, but you need to update your Dockerfile every time you want to update to a new patch version. That’s a lot of work and requires quick action if you need to deploy a .NET security update across multiple applications. The 4.7.2-sdk-20180523-windowsservercore-1803 .NET Framework tag, for example, matches this tag style. We do not update the .NET contents in patch version images, but we may push new images for the tag due to underlying base image changes. As a result, do not consider patch version tags to be immutable.
  • digest — You can reference an image digest directly. This approach gives you the most predictability. Tags can be and are overwritten with new images, while digests cannot be. Tags can also be deleted. We recommend using digests in case an application breaks due to an image update and you need to go back to a “last known good” image. It can be challenging to determine the digest for an image you are no longer using. You can add logging to your image-building infrastructure to collect this information as an insurance policy for unforeseen breakage.
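As a minimal Dockerfile sketch of these options (the application name and the local ./publish folder are hypothetical, the commented patch tag reuses the 2.0.7-runtime example from the list above, and the digest value is a placeholder rather than a real digest):

```dockerfile
# Recommended for most cases: a major.minor tag picks up .NET and OS patch
# updates whenever you rebuild with `docker build --pull`.
FROM microsoft/dotnet:2.1-runtime

# Alternatives (commented out):
# FROM microsoft/dotnet:2.0.7-runtime               # patch version tag: predictable, but you must bump it yourself
# FROM microsoft/dotnet@sha256:<known-good-digest>  # digest pin: immutable "last known good" reference (placeholder value)

WORKDIR /app
COPY ./publish .
ENTRYPOINT ["dotnet", "myapp.dll"]
```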

Tags are a contract on the .NET version you want and the degree of change you expect. We do our best to satisfy that contract each time we ship. We do two main things to produce quality container images: CI validation and code-review. CI validation runs on several operating systems with each pull request to .NET repos. This level of pre-validation provides us with confidence on the quality of the Docker images we push to Docker Hub.

For each major and minor .NET version, we may take a new major operating system version dependency. As I mentioned earlier, we adopted Debian 9 as the base image for .NET Core 2.0. We stayed with Debian 9 for .NET Core 2.1, since Debian 10 (AKA “Buster”) has not been released. Once we adopt an underlying operating system major version, we will not change it for the life of that given .NET release.

Each distro has its own model for patches. For .NET patches, we will adopt minor Debian releases (Debian 9.3 -> 9.4), for example. If you look at .NET Core Dockerfiles, you can see the dependency on various Linux operating systems, such as Debian and Ubuntu. We make decisions that make sense within the context of the Windows and Linux operating systems that we support and the community that uses them.

Windows versioning with Docker works differently than Linux, as does the way multi-arch manifest tags work. In short, when you pull a .NET Core or .NET Framework image on Windows, you will get an image that matches the host Windows version, if you use the multi-arch tags we expose (more on that later). If you want a different version, you need to use the specific tag for that Windows version. Some Azure services, like Azure Container Instances (ACI), only support Windows Server 2016 (at the time of writing). If you are targeting ACI, you need to use a Windows Server 2016 tag, such as 4.7.2-runtime-windowsservercore-ltsc2016 or 2.1-aspnetcore-runtime-nanoserver-sac2016, for .NET Framework and ASP.NET Core respectively.
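For instance, a minimal Dockerfile sketch targeting ACI would pin the Windows Server 2016 ASP.NET Core tag called out above instead of relying on a multi-arch tag (the app name and local ./publish folder are hypothetical):

```dockerfile
# Pin to a Windows Server 2016 tag when the target service (e.g., ACI at the time
# of writing) only supports that Windows version.
FROM microsoft/dotnet:2.1-aspnetcore-runtime-nanoserver-sac2016
WORKDIR /app
COPY ./publish .
ENTRYPOINT ["dotnet", "MyWebApp.dll"]
```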

.NET Core Tag Scheme

There are multiple kinds of images in the microsoft/dotnet repo:

  • sdk — .NET Core SDK images, which include the .NET Core CLI, the .NET Core runtime and ASP.NET Core.
  • aspnetcore-runtime — ASP.NET Core images, which include the .NET Core runtime and ASP.NET Core.
  • runtime — .NET Core runtime images, which include the .NET Core runtime.
  • runtime-deps — .NET Core runtime dependency images, which include only the dependencies of .NET Core and not .NET Core itself. This image is intended for self-contained applications and is only offered for Linux. For Windows, you can use the operating system base image directly for self-contained applications, since all .NET Core dependencies are satisfied by it.

We produce Docker images for the following operating systems:

  • Windows Nano Server 2016+
  • Debian 8+
  • Alpine 3.7
  • Ubuntu 18.04

.NET Core supports multiple chips:

  • x64
  • ARM32v7

Note: ARM64v8 images will be made available at a later time, possibly with .NET Core 3.0.

.NET Core tags follow a scheme, which describes the different combinations of kinds of images, operating systems and chips that .NET Core supports.

[version]-[kind]-[os]-[chip]

Note: This scheme is new with .NET Core 2.1. Earlier versions use a similar but slightly different scheme.

The following .NET Core 2.1 tags are examples of this scheme:

  • 2.1.300-sdk-alpine3.6
  • 2.1.0-aspnetcore-runtime-stretch-slim
  • 2.1.0-runtime-nanoserver-1803
  • 2.1.0-runtime-deps-bionic-arm32v7

Note: You might notice that some of these tags use strange names. “bionic” and “stretch” are the version names for Ubuntu 18.04 and Debian 9, respectively. They are the tag names used by the ubuntu and debian repos, respectively. “stretch-slim” is a smaller variant of “stretch”. We use smaller images when they are available. “nanoserver-1803” represents the Spring 2018 update of Windows Nano Server. “arm32v7” describes a 32-bit ARM-based image. ARMv7 is a 32-bit instruction set defined by ARM Holdings.

There are also short-forms of .NET Core tags. The short-forms are short in two different ways: they use two-part version numbers and skip the operating system. In most cases, the short-form tags are the ones you want to use, because they are simpler, are serviced, and are multi-arch, so they are portable across operating systems.

We recommend you use the following short-forms of .NET Core tags in your Dockerfiles, as shown in the multi-stage sketch after this list:

  • 2.1-sdk
  • 2.1-aspnetcore-runtime
  • 2.1-runtime
  • 2.1-runtime-deps
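Here is a minimal multi-stage Dockerfile sketch that uses two of these short-form tags: the SDK image to build and publish, and the ASP.NET Core runtime image to run the result. The project layout (a single project at the repo root) and the DLL name are hypothetical.

```dockerfile
# Build stage: the SDK image includes the .NET Core CLI, runtime, and ASP.NET Core.
FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /app/publish

# Runtime stage: the smaller ASP.NET Core runtime image for production.
FROM microsoft/dotnet:2.1-aspnetcore-runtime
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "MyWebApp.dll"]
```

Rebuilding with docker build --pull then picks up the latest serviced 2.1 images without any change to the Dockerfile.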

As discussed above, some Azure services only support Windows Server 2016 (not Windows Server, version 1709+). If you use one of those, you may not be able to use short tags unless you happen to only build images on Windows Server 2016.

.NET Framework Tag Scheme

There are multiple kinds of images in the microsoft/dotnet-framework repo:

  • sdk — .NET Framework SDK images, which include the .NET Framework runtime and SDK.
  • runtime — .NET Framework runtime images, which include the .NET Framework runtime.

We produce Docker images for the following Windows Server versions:

  • Windows Server Core, version 1803
  • Windows Server Core, version 1709
  • Windows Server Core 2016

.NET Framework tags follow a scheme, which describes the different combinations of kinds of images, and Windows Server versions that .NET Framework supports:

[version]-[kind]-[timestamp]-[os]

The .NET Framework version number doesn’t use the major.minor.patch scheme. The third part of the version number does not represent a patch version. As a result, we added a timestamp in the tag to create unique tag names.

The following .NET Framework tags are examples of this scheme:

  • 4.7.2-sdk-20180615-windowsservercore-1803
  • 4.7.2-runtime-20180615-windowsservercore-1709
  • 3.5-sdk-20180615-windowsservercore-ltsc2016

There are also short-forms of .NET Framework tags. The short-forms are short in two different ways: they skip the timestamp and skip the Windows Server version. In most cases, those are the tags you will want to use, because they are simpler, are serviced, and are multi-arch, so they are portable across Windows versions.

We recommend you use the following short-form tags in your Dockerfiles, using .NET Framework 4.7.2 and 3.5 as examples (a multi-stage sketch follows the list):

  • 4.7.2-sdk
  • 4.7.2-runtime
  • 3.5-sdk
  • 3.5-runtime
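A rough multi-stage sketch for a .NET Framework console application follows. The solution name, project layout, and build output path are hypothetical, and it assumes the SDK image's MSBuild and NuGet tooling; web applications would typically start from microsoft/aspnet instead.

```dockerfile
# Build stage: the .NET Framework SDK image provides MSBuild and NuGet.
FROM microsoft/dotnet-framework:4.7.2-sdk AS build
WORKDIR /src
COPY . .
RUN nuget restore MyConsoleApp.sln && msbuild MyConsoleApp.sln /p:Configuration=Release

# Runtime stage: .NET Framework runtime only.
FROM microsoft/dotnet-framework:4.7.2-runtime
WORKDIR /app
COPY --from=build /src/MyConsoleApp/bin/Release .
ENTRYPOINT ["MyConsoleApp.exe"]
```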

As discussed above, some Azure services only support Windows Server 2016 (not Windows Server, version 1709+). If you use one of those, you may not be able to use short tags unless you happen to only build images on Windows Server 2016.

The microsoft/aspnet and microsoft/wcf repos use a variant of this tag scheme and may move to this scheme in the future.

Security Updates and Vulnerability Scanning

As you’ve probably picked up at this point, we update .NET images regularly so that you have the latest .NET and operating system patches available. For Windows, our image updates are similar in nature to the regular “Patch Tuesday” releases that the Windows team makes available on Windows Update. In fact, we update our Windows-based images with the latest Windows patches every Patch Tuesday. You don’t run Windows Update in a container. You rebuild with the latest container images that we make available on Docker Hub.

The update experience is more nuanced on Linux. We support multiple Linux distros that can be updated at any time. There is no specific schedule. In addition, there is usually a set of published vulnerabilities (AKA CVEs) that are unpatched, where no fix is available. This situation isn’t specific to using Linux in containers but using Linux generally.

We have had customers ask us why .NET Core Debian-based images fail their vulnerability scans. I use anchore.io for scanning and have validated the same scans that customers shared with us. The vulnerabilities are coming from the base images we use.

You can look at the same scans I look at:

One of my observations is that the scan results change significantly over time for Debian and Ubuntu. For Alpine, they remain stable, with few vulnerabilities reported.

We employ three approaches to partially mitigate this challenge:

  • Rebuild and republish .NET images on the latest Linux distro patch updates immediately (within hours).
  • Support the latest distro major versions as soon as they are available. We have observed that the latest distros are often patched quicker.
  • Support Alpine, which is much smaller and so has fewer components that can have vulnerabilities.

These three approaches give you a lot of choice. We recommend you use .NET Core images that use the latest version of your chosen Linux distro. If you have a deeper concern about vulnerabilities on Linux, use the latest patch version of the .NET Core 2.1 Alpine images. If you are still unhappy with the situation, then consider using our Nano Server images.
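As a sketch of the Alpine option, the base image below is assembled from the [version]-[kind]-[os] scheme described earlier and the Alpine version listed in this post; verify the exact tag on Docker Hub before using it, and treat the app name and ./publish folder as hypothetical.

```dockerfile
# Alpine-based runtime: a much smaller image, so fewer components that can carry
# vulnerabilities. Tag name assembled from the scheme above; confirm it on Docker Hub.
FROM microsoft/dotnet:2.1-runtime-alpine3.7
WORKDIR /app
COPY ./publish .
ENTRYPOINT ["dotnet", "myapp.dll"]
```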

Using Pre-release Images

We maintain a pre-release Docker Hub repository, dotnet-nightly. Nightly builds of .NET Core 2.1 images were available in that repo before .NET Core 2.1 shipped. There are nightly builds of the .NET Core 1.x and 2.x servicing branches in that repo currently. Before long, you’ll see nightly builds of .NET Core 2.2 and 3.0 that you can test.

We also offer .NET Core with pre-release versions of Linux distros. We offered Ubuntu 18.04 (AKA “bionic”) before it was released. We currently offer .NET Core images with pre-release versions of Debian 10 (AKA “buster”) and the Alpine edge branch.

Closing

We want to make it easy and intuitive for you to use the official .NET images that we produce, for Windows and Linux. Docker provides a great software delivery system that makes it easy to stay up-to-date. We hope that you now have the information you need to configure your Dockerfiles and build system to use the tag style that provides you with the servicing characteristics and image consistency that makes sense for your environment.

We will continue to make changes to .NET container images as we receive feedback and as the Docker feature set changes. We post regular updates at dotnet/announcements. “watch” that repo to keep up with the latest updates.

If you are new to Docker, check out Using .NET and Docker Together. It explains how to use .NET with Docker in a variety of scenarios. We also offer samples for both .NET Core and .NET Framework that are easy to use and learn from.

Tell us how you are using .NET and Docker together in the comments. We’re interested to hear about the way you use .NET with containers and how you would like to see it improved.

Updates to Adobe Document Cloud bring integrated PDF services to Office 365


Last September, we expanded our strategic partnership with Adobe to focus on integrations between Adobe Sign and Office 365 products such as Microsoft Teams, SharePoint, Outlook, and more.

We’ve seen our customers make great use of the combination. For example, the State of Hawaii saved a significant amount of employee time while also improving document status tracking—providing a double win over previous paper-based processes.

Building on this success, today the Adobe Document Cloud team announced new capabilities that deepen the integration with Office 365 and can save you and your team time. PDF services integrations provide new fidelity when working with PDF documents as part of Office 365. Once integrated by your administrator, PDF services provide rich previews of PDF documents right within OneDrive and your SharePoint sites.

A screenshot displays a non-disclosure agreement in the Adobe Document Cloud.

In addition to many reporting, sharing, and collaboration scenarios, PDF files are frequently used to create final or archived versions of content spanning many different files. With PDF services and the newly introduced Combine Files by Adobe functionality, you can select several files and pull them into one PDF with just a couple of clicks within SharePoint document libraries.

A screenshot displays a launch team group in SharePoint.

PDF services are now available in the ribbon for online versions of Word, Excel, and PowerPoint—making the creation of high-quality, full fidelity PDFs from these applications even easier.

PDF services, along with capabilities that are part of Adobe Sign and upcoming Adobe Reader enhancements, are all part of Adobe Document Cloud. All share a commitment to productive integrations across Office 365—and we hope to see your team benefit from these integrations as well.

If you are an administrator, get started integrating Adobe Document Cloud with Office 365 with this guide. Adobe Document Cloud and Office 365 provide great complementary functionality, and you can learn more about this and the Adobe Sign integrations with Office 365. We look forward to seeing continued productivity improvements across the millions of joint customers that Adobe Document Cloud and Microsoft Office 365 share.

The post Updates to Adobe Document Cloud bring integrated PDF services to Office 365 appeared first on Microsoft 365 Blog.

New Navigation for Visual Studio Team Services

I’m excited to share the new navigation we’re working on for Visual Studio Team Services (VSTS) to modernize the user experience and give you more flexibility. As Lori mentioned in her blog post, our goal is to create an integrated suite that also gives you the flexibility to pick and choose the services that work best for you. That goal is a common customer request and at... Read More