
Announcing the public preview of Windows Container Support in Azure App Service


Windows Server Containers in Web App are now available in public preview! Web App for Containers caters to developers who want more control over what is installed in their containers.

By supporting Windows Containers on Azure App Service, we enable new opportunities for Application Modernization:

  • Lift and Shift to PaaS – Ideal for customers interested in migrating .NET Framework and .NET Core applications to Azure, and going straight to a PaaS service to get the many productivity benefits from the App Service platform.
  • Applications with dependencies – Deploying an app within a Windows Container allows customers to install custom dependencies. Examples include installation of custom fonts or component libraries which must be installed into the Global Assembly Cache (GAC).
  • Relaxed security restrictions – When deploying a containerized application, the Windows Container is an isolation and security boundary. As a result, calls to libraries that would normally be blocked by Azure App Service will instead succeed when running inside of a Windows Container. For example, many PDF generation libraries make calls to graphics device interface (GDI) APIs. With Windows Containers, these calls will succeed.
  • Third-party application migration – Customers often have business critical applications developed by third parties with which the company no longer has a relationship. Containerizing these types of applications unlocks the opportunity to migrate applications to Azure App Service.

New SKUs for Windows Containers

With the introduction of Windows Container support, we are also adding three new premium SKUs exclusively for App Service Plans hosting applications deployed using Windows Containers. These new SKUs all provide Dv3 series capabilities, offering customers more choice for their applications. The new Premium Container Tier offers customers three options for running their containers:

  • Small (2 vCPUs, 8 GB memory)
  • Medium (4 vCPUs, 16 GB memory)
  • Large (8 vCPUs, 32 GB memory)

Pricing tiers

To allow customers to explore and evaluate our offering for hosting Windows Containers on Azure App Service, we are initially offering a free preview of these new SKUs during the month of August. Preview pricing will take effect starting September 8, 2018.

Which base images are supported?

Windows Containers always derive from a base Windows image, and Windows Containers deployed on Azure App Service are no different. As a service, we cache several base images on our infrastructure and we advise customers to use those images as the base of their containers to enable faster application startup times. Customers are also free to use their own base images, though using different base images will lead to longer application startup times.

Customers deploying .NET Framework Applications must choose a base image based on the Windows Server Core 2016 Long Term Servicing Channel release. Customers deploying .NET Core Applications must choose a base image based on the Windows Server Nano 2016 Long Term Servicing Channel release. At this point, Azure App Service does not support deployment of applications in containers based on the Windows Server 1709 release.

Cached base images:

  • microsoft/iis:windowsservercore-ltsc2016, latest
  • microsoft/iis:nanoserver-sac2016
  • microsoft/aspnet:4.7.2-windowsservercore-ltsc2016, 4.7.2, latest
  • microsoft/dotnet:2.1-aspnetcore-runtime
  • microsoft/dotnet:2.1-sdk

    Public preview capabilities

    • Deploy containerized applications using Docker Hub, Azure Container Registry, or private registries.
    • Incrementally deploy apps into production with deployment slots and slot swaps.
    • Scale out automatically with auto-scale.
    • Enable application logs and use the App Service Log Streaming feature to see logs from your application.
    • Use PowerShell and WinRM to connect remotely into your containers.


    To get started, please refer to this quick start.

    We want to hear from you!

    Windows Container support for Azure App Service provides you with even more ways to build, migrate, deploy, and scale enterprise grade web and API applications running on the Windows platform. We are planning to add even more capabilities during the public preview and we are very interested in your feedback as we move towards general availability.


    Announcing Windows Community Toolkit v4.0


    The Windows Community Toolkit recently reached over 1 million downloads across all NuGet packages. This is a very big milestone for the community, and I’m very excited to announce yet another major update to the Windows Community Toolkit, version 4.0. With the help of the Windows community, this update introduces several new additions and improvements, specifically:

    • New DataGrid with fluent design for all UWP developers
    • Two new Microsoft Graph controls. PowerBIEmbedded enables embedding Power BI dashboards in your UWP apps, and PlannerTaskList allows users to work with Microsoft Planner tasks
    • The Twitter, LinkedIn, and Microsoft Translator services have moved to the .NET Standard services package and are now available to even more developers, including desktop and Xamarin developers
    • Strong-named packages for those developers that require strong-named assemblies
    • Dark theme support for the sample app and theme chooser for each sample

    These are some of the biggest updates in this release, and I encourage you to view the full release notes on our GitHub. Let’s take a look at some of these updates in more detail.

    The latest updates to Windows Community Toolkit.

    New fluent DataGrid control

    We introduced a preview of a fluent DataGrid control for Windows 10 in version 3.0. In the past several months, we have added a few more features based on community feedback and improved the reliability and accessibility of the DataGrid control. We are now pleased to announce the general availability of the DataGrid control.

    The DataGrid control is a robust control that provides a flexible way to display a collection of data in rows and columns. It retains the programming model for DataGrid from Silverlight and WPF so it is familiar to XAML developers who have used the DataGrid control in older XAML technologies. Developers can now create highly flexible tabular visualization of data with editing, data validation and data shaping functionalities with a few simple lines of code in Windows 10:

    
    <controls:DataGrid x:Name="dataGrid1" 
        Height="600" Margin="12"
        AutoGenerateColumns="True"
        ItemsSource="{x:Bind MyViewModel.Customers}" /> 
    
    

    Create highly flexible tabular visualization of data with editing, data validation and data shaping functionalities with a few simple lines of code.

    Make sure to visit the DataGrid documentation to learn about the capabilities of the DataGrid control with detailed guidance documents and How-Tos with code samples. DataGrid comes in a standalone NuGet package that you can download and add a reference to.

    New Microsoft Graph controls

    Version 3 of the toolkit introduced a new category of UWP controls to enable developers to access the Microsoft Graph. With a few lines of code, developers can add UI to enable users to log in to the Microsoft Graph, search for coworkers and friends, browse SharePoint files and more. Version 4.0 introduces two new Microsoft Graph controls: PlannerTaskList and PowerBIEmbedded.

    The PlannerTaskList enables developers to integrate tasks directly in their apps and allow users to interact with the Microsoft Planner tasks. Teams and individuals depend on Microsoft Planner to get organized quickly, work together effortlessly, and stay on the same page.


    The PowerBIEmbedded control enables developers to embed a rich Power BI dashboard directly in their apps and allow users to interact with the data directly.

    Moved Twitter, LinkedIn, and Microsoft Translator services to .NET Standard

    The Windows Community Toolkit contains APIs to make it easy to work with web services such as Twitter, OneDrive, LinkedIn, Microsoft Graph and more. Originally available only to UWP developers, most services have moved to our .NET Standard services package (Microsoft.Toolkit.Services) with this update. These services are now available to any framework implementing .NET Standard 1.4 and above, which includes UWP, the .NET Framework (including WPF and WinForms), Xamarin, .NET Core and many more.

    Get started today

    As a reminder, you can get started by following this tutorial, or preview the latest features by installing the Windows Community Toolkit Sample App from the Microsoft Store. If you would like to contribute, please join us on GitHub! To join the conversation on Twitter, use the #windowstoolkit hashtag.

    Happy coding!

    The post Announcing Windows Community Toolkit v4.0 appeared first on Windows Developer Blog.

    Centra embraces transformation, improves patient care with Office 365 intelligent business tools


    Today’s post was written by Joseph (Jody) Hobbs, managing director of business applications and information security officer at Centra.

    Centra is proud to count itself among the early adopters of cloud technology in the healthcare field. Back in 2014, we saw cloud computing as a way to keep up with the rapid growth we were experiencing across the enterprise—and the challenge of adapting to industry changes under the Patient Protection and Affordable Care Act (ACA). Five years later, we’re still using Microsoft Cloud services to remain on the leading edge of business productivity software so that we can provide exceptional patient care.

    With Microsoft 365, we are better able to adapt to industry-wide changes introduced by ACA, such as the transition from a fee-for-service model to a quality-based model. This change made capturing data and analytics very important, because now reimbursement is based on quality of care, not quantity of services. We use Power BI, the data analytics tool from Microsoft Office 365 E5, to meet new healthcare reporting requirements and provide a wealth of data to our clinicians. They use this data to measure their performance against quality benchmarks to improve patient experiences and health outcomes.

    We also turned to Microsoft 365 to help address Centra data security and privacy policies. Microsoft accommodated our requirement for data to remain in the continental United States, which helps us comply with Health Insurance Portability and Accountability Act (HIPAA) regulations that are standard in the healthcare industry. We also found a great solution for emailing sensitive information by combining a Microsoft two-factor authentication solution with our existing encryption appliance. Microsoft invests an incredible amount in its security posture, more than we ever could, and this, along with the knowledge that our data is not intermingled with others’ data in the tenant, gives us peace of mind. And we use Office 365 Advanced Threat Protection, which gives us great insight into malicious activities aimed at our employees’ inboxes.

    Keeping our Firstline Workers flexible and mobile is another major priority. We plan to get all our clinical workers online with Office 365 to actualize our vision for a more productive, mobile workforce. We have almost 4,000 employees taking advantage of Office 365 ProPlus and downloading up to five instances of Office 365 on a range of devices. This makes it seamless for them to work from home or the office using the same powerful, cloud-based productivity apps.

    As Centra continues to grow from a network of hospitals to an assortment of health-related enterprises, adding everything from a college of nursing to our own insurance business, we see a cloud-based workplace solution as key to staying agile and making the most of our momentum. In Microsoft 365, we have found a solution that marries the strict security requirements of our industry with the needs of a workforce that demands anytime, anywhere access to colleagues and information. For Centra, change isn’t just a matter of increasing productivity or mobility—at the end of the day, our ability to stay up to date with the latest technology innovations means we are providing the best care possible.

    The post Centra embraces transformation, improves patient care with Office 365 intelligent business tools appeared first on Microsoft 365 Blog.

    Revoking potentially impacted tokens from ESLint vulnerability

    On the 24th of July 2018, we notified some customers via e-mail and on this blog about a planned action that we would start taking in relation to the malicious ESLint NPM package incident. If you received an email from us and/or see a banner like the one below, we have invalidated access tokens in...

    Are your Windows Forms and WPF applications ready for .NET Core 3.0?


    Download Portability Analyzer (2.37 MB)

    At Build 2018 we announced that we are enabling Windows desktop applications (Windows Forms and Windows Presentation Foundation (WPF)) with .NET Core 3.0. You will be able to run new and existing Windows desktop applications on .NET Core and enjoy all the benefits that .NET Core has to offer, such as application-local deployment and improved performance.

    It is important that .NET Core 3.0 includes all the APIs that your applications depend on. We are releasing a Portability Analyzer that will report the set of APIs referenced in your apps that are not yet available in .NET Core 3.0. This API list will be sent to Microsoft and will help us prioritize which APIs we should incorporate into the product before it ships.

    Please download and run the tool (PortabilityAnalyzer.exe) on your Windows Forms and WPF apps to see how ready your apps are for .NET Core 3.0 and to help us shape the .NET Core 3.0 API set.

    The Portability Analyzer

    The Portability Analyzer is an open source tool that simplifies your porting experience by identifying APIs that are not portable among the various .NET platforms. The tool has existed for a few years as a console application and a Visual Studio extension. We recently updated it with a Windows Forms UI, using an early build of .NET Core 3.0. You can see what it looks like in the following image.

    Running the tool will do two things:

    1. Generate an Excel spreadsheet that will report the level of compatibility that your project has with .NET Core 3.0, including the specific APIs that are currently unsupported.
    2. Send this same data to the .NET team at Microsoft so that we can determine which APIs are needed by the most people.

    The data we are collecting is the same as what is in the spreadsheet. None of your source code or binaries will be sent from your machine to Microsoft.

    To know which APIs our users need, we are asking you to run the tool; this will help us provide the best possible experience for porting your apps. At the same time, you will see how portable your apps are right now, since the tool generates a list of APIs referenced in your assemblies that might not be supported in .NET Core 3.0.

    We will prioritize adding new APIs in .NET Core 3.0 based on information we collect. Please help us help you by making sure your application’s API requirements are represented in the data that we use for prioritization. Please run the Portability Analyzer to ensure that your application is counted.

    Using Portability Analyzer

    Use the following instructions to run Portability Analyzer.

    1. Extract the archive anywhere on your local disk.
    2. Run PortabilityAnalyzer.exe.
    3. In the Path to application text box, enter the directory path to your Windows Forms or WPF app (either by typing a path or by clicking the Browse button and navigating to the folder).
    4. Click the Analyze button.
    5. After the analysis is complete, a report of how portable your app currently is to .NET Core 3.0 will be saved to your disk. You can open it in Excel by clicking the Open Report button.

    As an example, running the tool on the popular Paint.NET application produces a report showing its overall compatibility with .NET Core 3.0 and the specific APIs that are currently unsupported.

    Note: Your report might include a “Missing Assemblies” tab. This means that the analyzer could not find some assemblies that are referenced in your application. Make sure to find those assemblies and add them to the folder you’re analyzing. Otherwise you won’t get the full picture of your application.

    Using the console-based version

    If you would like to run the analysis for many applications, you can either run them one by one as described above or you can use the console version of the Portability Analyzer.

    Note: You will get one report per invocation of the ApiPort.exe tool. If you prefer to get one report per application, you can automate the invocation using a for loop in a batch file or the ForEach mechanism in PowerShell.

    To run the console app:

    1. Download and unzip Console Portability Analyzer.
    2. From the command prompt, run ApiPort.exe, specifying one or more directories, DLLs, or executables to analyze.

    You can find the portability report saved as an Excel file (.xlsx) in your current directory.

    Summary

    Please download and use Portability Analyzer on your desktop applications. It will help you determine how compatible your apps are with .NET Core 3.0. This information will help us plan the 3.0 release with the goal of making it easy for you to adopt .NET Core 3.0 for desktop apps.

    Download Portability Analyzer (2.37 MB)

    Thank you in advance for your help!

    New Azure #CosmosDB JavaScript SDK 2.0 now in public preview


    The Azure Cosmos DB team is excited to announce version 2.0 RC of the JavaScript SDK for SQL API, now in public preview!

    We are excited to get feedback through this RC before general availability, so please try it out and let us know what you think. You can get the latest version through npm with:

    npm install @azure/cosmos

    What is Azure Cosmos DB?

    Azure Cosmos DB is a globally distributed, multi-model database service. It offers turnkey global distribution, guarantees single-digit millisecond latencies at the 99th percentile, and elastic scaling of throughput and storage.

    For the SQL API, we support a JavaScript SDK to enable development against Azure Cosmos DB from JavaScript and Node.js projects. Version 2.0 of the SDK is written completely in TypeScript, and we’ve redesigned the object model and added support for promises. Let’s dive into these updates.
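    Throughout the examples below, assume a client constructed like this (an illustrative sketch, not code from this post, using the 2.0 RC constructor shape in which credentials are passed through an auth object, with placeholder values):

    import { CosmosClient } from "@azure/cosmos";

    // Placeholder values: substitute your own account endpoint and key.
    const endpoint = "https://your-account.documents.azure.com";
    const masterKey = "<your-account-key>";

    // One client serves all databases, containers, and items in the account.
    const client = new CosmosClient({ endpoint, auth: { masterKey } });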

    New object model

    Based on user feedback, we’ve redesigned the object model to make it easier to interact with and perform operations against Cosmos DB. 

    If you’re familiar with the previous version of the JavaScript SDK, you’ve likely noticed that the entire API surface hangs off DocumentDBClient. While that design made it easy to find the entry point for methods, it also came at the cost of a cluttered IntelliSense experience, as seen below.

    (Screenshot: IntelliSense showing the long, flat list of methods on DocumentDBClient.)

    We also got feedback that it was difficult to do operations off databases, collections, or documents since each method needed to reference the URL of that resource. 

    To address this, we’ve created a new top level CosmosClient class to replace DocumentDBClient, and split up its methods into modular Database, Container, and Items classes.

    For example, in the new SDK, you can create a new database, container, and add an item to it, all in 10 lines of code!

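    For example, a minimal sketch (assuming the 2.0 RC surface of client.databases, database.containers, and container.items, with placeholder database, container, and item names) looks roughly like this:

    import { CosmosClient } from "@azure/cosmos";

    async function createToDoItem(client: CosmosClient): Promise<void> {
      // Walk down the resource hierarchy: account -> database -> container -> item.
      const { database } = await client.databases.create({ id: "toDoList" });
      const { container } = await database.containers.create({ id: "items" });
      await container.items.create({ id: "1", task: "Try the new object model" });
    }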

    This is called a “builder” pattern, and it allows us to reference resources based on the resource hierarchy of Cosmos DB, which is similar to the way your brain thinks about Cosmos DB. For example, to create an item, we first reference its database and container, and call items.create().

    Containers and Items

    In addition, because Cosmos DB supports multiple API models, we’ve introduced the concepts of Container and Item into the SDK, which replace the previous Collection and Document concepts. In other words, what was previously known as a “Collection” is now called a “Container.”

    An account can have one or more databases, and a database consists of one or more containers. Depending on the API, the container is projected as either a collection (SQL or Mongo API), graph (Gremlin API), or table (Tables API).


    Support for promises

    Finally, we’ve added full support for promises so you no longer have to write custom code to wrap the SDK yourself. Now, you can use async/await directly against the SDK.

    To see the difference: in the previous SDK, creating a new database and collection and then adding a document meant wrapping the callback-based calls yourself, checking status codes, and passing each resource’s URL from one call to the next.

    In the new SDK, you can simply await the calls to Cosmos DB directly from inside an async function, as seen below.

    We’ve also added a convenience method createIfNotExists() for databases and containers, which wraps the logic to read the database, check the status code, and create it if it doesn’t exist.

    Here’s the same functionality, using the new SDK:

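    A sketch of that flow (same assumptions about the 2.0 RC surface as above, with createIfNotExists wrapping the read, status check, and create steps) might look like this:

    import { CosmosClient } from "@azure/cosmos";

    async function setup(client: CosmosClient): Promise<void> {
      // Each call is awaited directly; createIfNotExists only creates the resource when it is missing.
      const { database } = await client.databases.createIfNotExists({ id: "toDoList" });
      const { container } = await database.containers.createIfNotExists({ id: "items" });
      await container.items.create({ id: "1", task: "Use async/await with the new SDK" });
    }

    // Reusing the client constructed earlier.
    setup(client).catch(console.error);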

    Open source model

    The Azure Cosmos DB JavaScript SDK is open source, and our team is planning to do all development in the open. To that end, we will be logging issues, tracking feedback, and accepting PRs in GitHub.

    Getting started

    We hope this new SDK makes for a better developer experience. To get started, check out our quick start guide. We’d love to hear your feedback! Email cosmosdbsdkfeedback@microsoft.com or log issues in our GitHub repo.

    npm install @azure/cosmos

    Stay up-to-date on the latest Azure #CosmosDB news and features by following us on Twitter @AzureCosmosDB. We are really excited to see what you will build with Azure Cosmos DB!

    Security Bulletin for August 2018


    August 6, 2018:

    Microsoft is aware of a temporary denial of service (DoS) vulnerability (CVE-2018-5390) affecting the Linux Kernel. Virtual Machines running Linux may be vulnerable. The Azure Host platform remains secure from this vulnerability. We are working with various Linux distributions to ensure that they address this security issue.

    For guidance on (CVE-2018-5390) please refer to the Linux vendor security channels for your distribution. To learn more about the vulnerability, please visit Vulnerability Notes Database.

    We will continue to update this advisory as additional details become available.

    In case you missed it: July 2018 roundup


    In case you missed them, here are some articles from July of particular interest to R users.

    A program to validate quality and security for R packages: the Linux Foundation's CII Best Practices Badge Program.

    R scripts to generate images in the style of famous artworks, like Mondrian's.

    A 6-minute video tour of the AI and Machine Learning services in Azure, including R.

    The July roundup of AI, Machine Learning and Data Science news.

    An R package for tiling hexagon-shaped images, used to create a striking banner of hex stickers for useR!2018.

    Highlights and links to videos from the useR!2018 conference.

    Video and R scripts from a workshop on creating an app to detect images of hotdogs

    Microsoft has released a number of open data sets produced from its research programs.

    R 3.5.1 has been released.

    And some general interest stories (not necessarily related to R):

    As always, thanks for the comments and please send any suggestions to me at davidsmi@microsoft.com. Don't forget you can follow the blog using an RSS reader, via email using blogtrottr, or by following me on Twitter (I'm @revodavid). You can find roundups of previous months here.


    Docker recipes available for Visual Studio Build Tools


    Because Docker container images for Visual Studio Build Tools are very large, we have created a repository of “recipes”. These are Dockerfiles and, when necessary, supporting scripts that are purpose-built for solution types that require certain workloads and even other tools like package managers. I encourage developers and DevOps engineers to take a look and even contribute useful samples of unique workloads.

    As a reminder, Visual Studio Build Tools is licensed as a supplement to Visual Studio, so images must not be pushed to public repositories. These samples should always work and the resulting images can be pushed to private repositories.

    11 essential characteristics for being a good technical advocate or interviewer


    I was talking to my friend Rob Caron today. He produces Azure Friday with me - it's our weekly video podcast on Azure and the Cloud. We were talking about the magic for a successful episode, but then realized the ingredients that Rob came up with were generic enough that they are essential for anyone who is teaching or advocating for a technology.

    Personally I don't believe in "evangelism" and I hate the word. Not only does it evoke The Crusades, but it also implies that your religion (er, technology) is not only what's best for someone, but that it's the only solution. That's nonsense, of course. I like the word "advocate" because you're (hopefully) advocating for the right solution regardless of technology.

    Here are the 11 herbs and spices that are needed for a great technical talk, a good episode of a podcast or show, or a decent career talking and teaching about tech.

    1. Empathy for the guest – When talking to another person, never let someone flounder and fail – compensate when necessary so they are successful.
    2. Empathy for the audience – Stay conscious that you're delivering a talk/episode/post that people want to watch/read.
    3. Improvisation – Learn how to think on your feet and keep the conversation going (“Yes, and…”) Consider ComedySportz or other mind exercises.
    4. Listening – Don't just wait for your turn to speak, just to say something, and never interrupt to say it. Be present and conscious and respond to what you’re hearing.
    5. Speaking experience – Do the work. Hundreds of talks. Hundreds of interviews. Hundreds of shows. This ain’t your first rodeo. Being good means hard work and putting in the hours, over years. Whether it's 10 people at a lunch presentation or 2,000 people at a keynote, you know what to articulate.
    6. Technical experience – You have to know the technology. Strive to have context and personal experiences to reference. If you've never built/shipped/deployed something real (multiple times) you're just talking.
    7. Be a customer – You use the product, every day, and more than just to demo stuff. Run real sites, ship real apps, multiple times. Maintain sites, have sites go down and wake up to fix them. Carry the proverbial pager.
    8. Physical mannerisms – Avoid having odd personal tics and/or be conscious of your performance. I know what my tics are and I'm always trying to correct them. It's not self-critical, it's self-aware.
    9. Personal brand – I'm not a fan of "personal branding" but here's how I think of it. Show up. (So important.) You’re a known quantity in the community. You're reliable and kind. This lends credibility to your projects. Lend your voice and amplify others. Be yourself consistently and advocate for others, always.
    10. Confidence – Don't be timid in what you have to say BUT be perfectly fine with saying something that the guest later corrects. You're NOT the smartest person in the room. It's OK just to be a person in the room.
    11. Production awareness – Know how to ensure everything is set to produce a good presentation/blog/talk/video/sample (font size, mic, physical blocking, etc.) Always do tech checks. Always.

    These are just a few tips but they've always served me well. We've done 450 episodes of Azure Friday and I've done nearly 650 episodes of the Hanselminutes Tech Podcast. Please Subscribe!

    Related Links

    * pic from stevebustin used under CC.


    Sponsor: Preview the latest JetBrains Rider with its built-in spell checking, initial Blazor support, partial C# 7.3 support, enhanced debugger, C# Interactive, and a redesigned Solution Explorer.



    © 2018 Scott Hanselman. All rights reserved.
         

    Speech Devices SDK and Dev Kits news (August 2018)


    A few days ago we released v0.5.0 of the Speech Devices SDK to its download site. If you want access to it, please apply via the Microsoft Speech Devices SDK Sign Up Form.

    Please use the latest version of the Speech Devices SDK and its matching sample app. The Speech Devices SDK consumes the Speech SDK. In addition, it uses an advanced audio processing algorithm and enables a custom Keyword Spotting (KWS) feature. The Speech Devices SDK's version matches the version of the Speech SDK, so that all the APIs are consistent. Right now we only have support for Java; see the Java API reference. Please see the Speech Devices SDK’s release notes for details. We currently have one developer-centric Java sample app; the source code is posted in the GitHub repository under the Samples/Android example. We will post additional Java sample apps as they become available.

    The microphone array dev kits from our hardware provider, Roobo, have also gone on sale recently. Please visit ROOBO Dev Kits for Microsoft Speech Services for the hardware specs and product details. If you have questions regarding the device hardware, including ordering and shipping, please contact Roobo at rooboddk@roobo.com. Currently, the kits can only be shipped to the U.S. and China.

    Please also read the LinkedIn article about the dev kits by Xuedong Huang (technical fellow) of Microsoft Cloud & AI group.


    We can’t wait to see the cool devices and applications that you will build with the Microsoft Speech Devices SDK and the Roobo Smart Audio Dev Kits!

    For Speech Devices SDK or Speech Services questions, please visit our support page.

    Microsoft Azure at SIGGRAPH 2018


    If you are attending SIGGRAPH this year I hope you can join us at Booth 620 to hear the latest on how Azure can help you: 

    • Render more, faster with Avere.
    • Move your data to the cloud using our rugged, secure Azure Data Box.
    • Get better scalability, simplicity, and hybrid support with Azure High-Performance Computing.
    • Secure your media workloads. Security experts will be on hand to answer your questions.

    Microsoft’s Mixed Reality Capture Studios will be there, so you can experience the latest in holographic video. And, we’re excited to have our partners, Avid Technology, Nimble Collective, and Teradici to name a few, share their latest and greatest.

    • Avid is one of the world’s most comprehensive providers of solutions for media, enabling the creation and distribution of the most listened to, most watched and most loved media in the world. Avid will demo its MediaCentral Platform on Microsoft Azure. Find out how to reduce cost, centralize media/data management, secure content collaboration, increase workflow efficiencies, and improve time-to-market.

    • Nimble Collective will feature its cloud-based animation platform, an entire animation studio infrastructure complete with workflow and pipeline, streaming through your Internet browser. Nimble can reduce studio overhead up to 75 percent and give access to a globally distributed talent pool. Nimble will showcase their forthcoming enterprise cloud-based animation platform, Nimble Studio in 3D for animated film management and in 2D for a TV series workflow.
    • Learn why creative professionals worldwide depend on Teradici PC over IP remoting solutions for ultra-secure access to graphics-intensive 3D applications while media assets remain protected. Now you can work from anywhere, with assets secured in any data center or public cloud. Teradici will demo Cloud Access Software and Cloud Access Manager.

    If you want to dive deep into new technologies, consider adding these Tech Talks to your agenda to hear firsthand how studios are using Microsoft technology.

    • Tuesday, August 14, 2018 from 9:00 - 10:00 AM - Room 215/216: Searching your image store without needing a PHD: PixStor and Microsoft Cognitive Services
    • Tuesday, August 14, 2018 from 10:30 - 11:30 AM  - Room 215/216: Mr. X Production:  Workflow architecture and operational elements
    • Tuesday, August 14, 2018 from 12:30 - 1:30 PM - Room 215/216: Human Holograms for Mixed Reality and Beyond
    • Wednesday, August 15, 2018 from 12:30 - 1:30 PM - Room 215/216: What does it mean to be a fully cloud-based studio?

    You may also want to check out:

    • Birds of a Feather Sessions at SIGGRAPH 2018 are graphics-related, attendee-organized, informational discussions of shared interests, goals, technologies, and/or backgrounds.
    • The SIGGRAPH Computer Animation Festival as it celebrates storytelling through the prism of computer graphics. The SIGGRAPH Computer Animation Festival, made up of both an Electronic Theater and a VR Theater, has been recognized as a qualifying festival by the Academy of Motion Picture Arts and Sciences since 1999. Experience the world’s most innovative flat screen and virtual storytelling. Congratulations to this year’s winners!
    • StudioSysAdmins as they celebrate their 10 year anniversary. Join the Studio Mingle on Tuesday, August 24, 2018 from 6pm - 11pm at the Cypress Suite, Pan Pacific Hotel. This group has brought people together to help efficiently solve studio infrastructure/support issues. SSA has 1286 studio members representing 568 production studios! Microsoft is pleased to be a sponsor of this event. Everyone is welcome to attend, but please RSVP.

    When your busy agenda starts to take its toll and you need a moment – take in the waterfront. It really is incredible.

    See you in Vancouver.

    Getting started with IoT: driving business action through analytics and automation


    Whether you’re looking for in-the-moment insights or long-term trends, the value of information is only as good as its usefulness for your business. Some information must be acted upon immediately, but it can also be stored for long-term, big-picture analysis of trends over time. For each case there are different tools to use, and Azure IoT has you covered.

    Near real-time data analysis

    Azure Stream Analytics is a pay-as-you-go service that ingests data from a variety of sources. In addition to real-time data from IoT Hub, it can also take in historical data from Azure Blob Storage, and combine Event Hub data with other sources to highlight correlations or run comparative analyses.

    Using the Azure portal, you can initiate an Azure Stream Analytics job, direct it to the appropriate data set, and provide instructions on how to look for specific data, patterns, or relationships.

    Once Stream Analytics has completed a job, you can direct the results to Blob Storage, SQL Server, a Data Lake, or a database-as-a-service offering such as Cosmos DB. You can also send the data to HDInsight or Power BI for additional analysis and visualization.

    As its name suggests, Azure Time Series Insights is designed to store, index, query, and visualize any data that is chronological or sequential in nature. And it can do so at scale, whether processing terabytes of data or billions of events.

    Some of the most common applications of Time Series Insights are:

    • Conducting root-cause analysis
    • Building custom apps that analyze or visualize time series data
    • Sharing data from different locations with business leads to enable better collaboration

    If you’re dealing with large data sets, Apache Spark may be your tool of choice. It uses parallel processing, making it an ideal solution for long-term analytics jobs. To improve the efficiency of its analysis, it breaks data down into smaller chunks before processing, so you may see some added latency, though typically only a few seconds.

    Historical data analysis

    Looking for long-term trends can be a bit like reading tea leaves. You might consider using Azure Machine Learning (AML), an end-to-end solution for data science and advanced analytics. Using existing data sets, it can forecast future outcomes and trends. Over time, Azure Machine Learning will develop an understanding of the area of focus, providing the foundation for developing more effective experiments and gaining better outcomes.

    AML also offers specialized libraries of Python code to help accelerate development of machine learning models in particular areas, including computer vision, financial and demand forecasting, and text analysis.

    The original open source solution for big data analysis, Azure HDInsight is a fully managed service that is based on Hadoop. HDInsight supports several open source frameworks, making it a flexible, fast and cost-effective solution. And because it supports Kafka on HDInsight, you have the ability to create a managed, cost-effective streaming analytics solution that can scale to handle massive amounts of data.

    Key benefits of HDInsight include:

    • Integration with Azure Log Analytics gives you a single portal for monitoring the performance of your clusters
    • Compatibility with a variety of developer environments helps ensure that you can use your tools of choice
    • Excels at analyzing a variety of data sets, including historical, real-time, structured, unstructured, and big data

    In addition to driving action through data analytics, you can also create serverless compute models that trigger actions based on a particular event in IoT Hub. An earlier blog post discussed how to trigger actions using Event Grid. You can also use Azure Functions to run a snippet of code in response to events, such as uploading new files.

    Azure Functions frees you up from the hassle of writing an entire app and the supporting infrastructure. Instead, all you need to do is write a short snippet of code (using your language of choice) that focuses on the problem at hand. This streamlined approach makes Functions ideal for processing orders, maintaining files, or almost any other task that you need to run on a regular basis.
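    As a rough sketch of that model (assuming the Node.js programming model for Azure Functions, with a blobTrigger binding declared in the function's function.json whose path exposes a {name} token; the binding and variable names here are illustrative), the entire function can be just a few lines of TypeScript:

    import { AzureFunction, Context } from "@azure/functions";

    // Runs whenever a new file lands in the storage container configured in function.json.
    const onNewFile: AzureFunction = async function (context: Context, newBlob: Buffer): Promise<void> {
      context.log(`Processing new file "${context.bindingData.name}" (${newBlob.length} bytes)`);
      // Do the focused work here: validate the file, transform it, or hand it off for further processing.
    };

    export default onNewFile;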

    Whether you're dealing with real-time or historical data, or you need to automate a function, Azure IoT has the tools to meet your needs. To learn more about working with IoT, and how easy it is to get started with your first deployment, download the IoT developer guide.

    Now Available: Azure Sphere technical documentation


    In April we announced Azure Sphere to the world. Since then, we’ve been pleased with the deep customer interest and engaging media conversations. This dialogue has been an opportunity to share Microsoft’s vision and commitment to an end-to-end, secured IoT platform and to listen to feedback from partners, customers and consumers. Microsoft will continue to tirelessly advocate for IoT security that is not an afterthought. An IoT device must include security measures that span silicon, software and cloud from the moment that device is first conceived in a designer’s mind.

    Azure Sphere’s purpose is to help manufacturers easily and affordably build MCU-powered devices that are informed by Microsoft’s decades of security experience and protected through their connection to the Azure Sphere Security Service. With our public preview milestone just around the corner, we want to share more about the technology behind Azure Sphere. Just this week, we saw the release of the Azure Sphere documentation, which covers a range of topics from architecture (an example is included in this blog post) and key concepts to developing applications and deploying those applications over the air to devices in the field. Also included are a quickstart guide, tutorials and a selection of best practices – all of which will help you rapidly get up and running with your dev kits. We thank you for your patience; we expect dev kits to begin reaching you in the coming weeks.

    An example of Azure Sphere architecture from the Azure Sphere documentation

    Check back here for more details and announcements as we approach more public milestones. Until then, enjoy the docs and let us know what questions you have so we can address them in future blogs.

    Pre-order a dev kit today to be among the first in line when they start shipping.

    How Microsoft drives exabyte analytics on the world’s largest YARN cluster


    At Microsoft, like many companies using data for competitive advantage, opportunities for insight abound and our analytics needs were scaling fast – almost out of control. We invested in Yet Another Resource Negotiator (YARN) to meet the demands of an exabyte-scale analytics platform and ended up creating the world’s largest YARN cluster.

    In big data, how big is really big?

    YARN is known to scale to thousands of nodes, but what happens when you need tens of thousands of nodes? The Cloud and Information Services Lab (CISL) at Microsoft is a highly specialized team of experts that works on applied research and science initiatives focusing on data processing and distributed systems. This blog explains how CISL and the Microsoft Big Data team met the challenge of complex scale and resource management – and ended up implementing the world's largest YARN cluster to drive its exabyte-sized analytics.

    Exabyte-size analytics

    For more than a decade Microsoft has depended on an internal version of the publicly available Azure Data Lake for its own super-sized analytics. The volume of data and complexity of calculation have caused it to scale to several large clusters. To the best of our knowledge, Microsoft is currently running the largest YARN cluster in the world, at over 50 thousand nodes in a single cluster. Exabytes of data are processed daily from business units like Bing, AdCenter, MSN and Windows Live. More than 15,000 developers use it across the company.

    Microsoft’s journey began in 2007 with a manageably small number of machines. Today, in 2018, the system operates on hundreds of thousands of machines. It boasts individual clusters of up to 50 thousand nodes with more than 15,000 developers innovating on data that is counted in exabytes – and growing.

    The challenges of scale

    More users means multiple frameworks on the same cluster

    As a big data user base matures, users want to run more and more diverse applications on a shared data and compute infrastructure. If you build silos, you destroy utilization. Microsoft bet on YARN from the Apache Hadoop community to get around this. YARN is the resource management framework that enables arbitrary applications to share the same compute resources to access a common pool of data in a distributed file system. Since betting on YARN, Microsoft rolled up its sleeves and contributed innovations to the open source Apache Hadoop community, which led several Microsoft employees to become committers and PMC members for the project.

    Scale matters – use larger clusters to avoid fragmentation

    As data and application volumes grow, scale becomes an issue. The inexperienced solution to meet demand is to build more, smaller clusters. This, like the idea of silos for different applications, is unhealthy, slow and error prone, because resources become fragmented, harming utilization, and data must be continuously copied across clusters. Best practice is to consolidate work into a small number of large clusters. This is what motivated Microsoft to embrace YARN and extend it to support scalability to data-center-sized clusters (tens of thousands of nodes).

    Microsoft contributed to YARN a feature called "YARN Federation" [1-3], which allows us to operate multiple smaller clusters (each between 2-5 thousand nodes), and tie them together in a way that is transparent to users, and enables sophisticated load balancing via policies.

    Squeezing the last ounce of power – no core left behind

    In a standard cluster the central resource manager assigns resources. The process of centralized resource negotiation has inherent inefficiencies due to its heartbeat-based communications. The heartbeats set a fixed rhythm at which commands can be executed; while this allows large-scale coordination of an execution, it also limits the maximum utilization of resources. The inexperienced solution is to throw more nodes into the cluster to achieve the desired performance; this can incur unacceptable overheads. Since resources may become idle in between the heartbeats, Microsoft introduced in YARN the notion of opportunistic containers, inherited from our internal infrastructure. These containers are queued locally at the node and run exclusively on scavenged capacity – in other words, the resources that are available in between heartbeats. Opportunistic containers also enable resource overbooking. Just like airlines overbook a flight to ensure that it leaves full, opportunistic containers do the same and ensure that idle resources are fully utilized. Since they are scavengers they can easily be preempted by higher priority containers, and re-queued – or even promoted to high priority. At Microsoft, across our fleet, we estimate the benefits of using opportunistic containers to be in the order of 15-30 percent, which translates into hundreds of millions of dollars in savings per year. Microsoft has contributed this feature to YARN [4]; for a deeper technical discussion see [5-6].

    Taking it to the next level – use reservations to meet SLOs in a busy cluster

    As your customer base matures, people rely on the system for mission-critical work, which means more recurrent jobs with explicit deadlines. Existing technologies don’t support this use case. That’s why Microsoft introduced reservations in YARN to support time-based SLOs.

    Reservations guarantee that the resources you need will be available at the precise time you need them. Before, it was gruesome and painful for users. They had to get in line and carefully tune queue sizes and application priorities to obtain the timeliest execution of their job. Even with this they were always subject to the resources being freed up and hence could be made to wait. The system was inherently unreliable. Today, with the new “deadline reservations” capability, users get the resources they signed up for precisely at the time they need them.

    With this new capability, every reservation request comes with some flexibility (time or shape). Customers are incentivized to expose more flexibility to increase their chances of being admitted. Think about this in the context of booking a hotel room for your seashore vacation. You need your room to be available during some time period (when you can get vacation…) and you have preferences about rooms size and room amenities (king, queen, …). You increase your chances of finding a hotel room by indicating your flexibility on the characteristic of the room or time period.

    In the case of YARN the reservation system takes into account the flexibilities you indicate in order to provide you the most optimized resources that meet your need at the time you need them. By leveraging these flexibilities, it is able to densely pack the cluster agenda to achieve the highest possible utilization of the overall systems. This translates into lower operating costs and thus better price for users. This feature is also open-sourced by Microsoft to Apache Hadoop [7-8].

    From here your journey will only grow

    Wherever you are on your Big Data Analytics journey, expect it to grow. More sources of data are coming on line daily – and business analysts and data scientists are finding more ways to use that data. Efficiently operating Big Data infrastructure is hard – but we are making it easier for you.

    Microsoft invests continually in making its systems efficient. These efforts translate into benefits for the larger community in three ways:

    • This massive analytics system is used every day to drive improvement into Microsoft products and services.
    • The techniques exposed here benefit the developers and analysts who use the Azure services to provide critical insights to their own businesses.
    • And lastly, anyone using YARN benefits from the enhancements Microsoft has contributed to the open source community.

    Editor’s note:

    All of the contributions referenced above are available to the public in any release since Apache Hadoop 2.9.

    [1] The initial YARN Federation effort is tracked in JIRA
    [2] Ongoing extensions to Federation are tracked in JIRA
    [3] "Hydra: a federated resource manager for data-center scale analytics", to appear in NSDI 2019
    [4] Opportunistic container extensions are tracked in JIRA: Extend YARN to support distributed scheduling and Scheduling of OPPORTUNISTIC containers through YARN RM
    [5] "Efficient Queue Management for Cluster Scheduling," Jeff Rasley, Konstantinos Karanasos, Srikanth Kandula, Rodrigo Fonseca, Sriram Rao, Milan Vojnovic. European Conference on Computer Systems (EuroSys).
    [6] "Mercury: Hybrid Centralized and Distributed Scheduling in Large Shared Clusters," Konstantinos Karanasos, Sriram Rao, Carlo Curino, Chris Douglas, Kishore Chaliparambil, Giovanni Matteo Fumarola, Solom Heddaya, Raghu Ramakrishnan, Sarvesh Sakalanaga. USENIX Annual Technical Conference (USENIX ATC '15), USENIX – Advanced Computing Systems Association, July 1, 2015.
    [7] YARN Admission Control/Planner: enhancing the resource allocation model with time
    [8] "Reservation based scheduling: if you are late don't blame us!", SoCC 2014


    What is Artificial Intelligence?


    It has been said that Artificial Intelligence will define the next generation of software solutions. If you are even remotely involved with technology, you will almost certainly have heard the term with increasing regularity over the last few years. It is likely that you will also have heard different definitions for Artificial Intelligence offered, such as:

    “The ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings.” – Encyclopedia Britannica

    “Intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans.” – Wikipedia

    How useful are these definitions? What exactly are “tasks commonly associated with intelligent beings”? For many people, such definitions can seem too broad or nebulous. After all, there are many tasks that we can associate with human beings! What exactly do we mean by “intelligence” in the context of machines, and how is this different from the tasks that many traditional computer systems are able to perform, some of which may already seem to have some level of intelligence in their sophistication? What exactly makes the Artificial Intelligence systems of today different from sophisticated software systems of the past?


    It could be argued that any attempt to try to define “Artificial Intelligence” is somewhat futile, since we would first have to properly define “intelligence”, a word which conjures a wide variety of connotations. Nonetheless, this article attempts to offer a more accessible definition for what passes as Artificial Intelligence in the current vernacular, as well as some commentary on the nature of today’s AI systems, and why they might be more aptly referred to as “intelligent” than previous incarnations.

    Firstly, it is interesting and important to note that the technical difference between what used to be referred to as Artificial Intelligence over 20 years ago and traditional computer systems is close to zero. Prior attempts to create intelligent systems, known as expert systems at the time, involved the complex implementation of exhaustive rules that were intended to approximate intelligent behavior. For all intents and purposes, these systems did not differ from traditional computers in any drastic way other than having many thousands more lines of code. The problem with trying to replicate human intelligence in this way is that it requires far too many rules and ignores something very fundamental to the way intelligent beings make decisions, which is very different from the way traditional computers process information.

    Let me illustrate with a simple example. Suppose I walk into your office and I say the words “Good Weekend?” Your immediate response is likely to be something like “yes” or “fine thanks”. This may seem like very trivial behavior, but in this simple action you will have immediately demonstrated a behavior that a traditional computer system is completely incapable of. In responding to my question, you have effectively dealt with ambiguity by making a prediction about the correct way to respond. It is not certain that by saying “Good Weekend” I actually intended to ask you whether you had a good weekend. Here are just a few possible intents behind that utterance:

    • Did you have a good weekend?
    • Weekends are good (generally).
    • I had a good weekend.
    • It was a good football game at the weekend, wasn’t it?
    • Will the coming weekend be a good weekend for you?

    And more.


    The most likely intended meaning may seem obvious, but suppose that when you responded with “yes”, I had responded with “No, I mean it was a good football game at the weekend, wasn’t it?”. It would have been a surprise, but without even thinking, you would absorb that information into a mental model, correlate the fact that there was an important game last weekend with the fact that I said “Good Weekend?”, and adjust the probability of the expected response accordingly, so that you can respond correctly the next time you are asked the same question. Granted, those aren’t the thoughts that will pass through your head! You happen to have a neural network (aka “your brain”) that will absorb this information automatically and learn to respond differently next time.

    The key point is that even when you do respond next time, you will still be making a prediction about the correct way in which to respond. As before, you won’t be certain, but if your prediction fails again, you will gather new data, which leads to my definition of Artificial Intelligence:

    “Artificial Intelligence is the ability of a computer system to deal with ambiguity, by making predictions using previously gathered data, and learning from errors in those predictions in order to generate newer, more accurate predictions about how to behave in the future”.

    This is a somewhat appropriate definition of Artificial Intelligence because it is exactly what AI systems today are doing, and more importantly, it reflects an important characteristic of human beings which separates us from traditional computer systems: human beings are prediction machines. We deal with ambiguity all day long, from very trivial scenarios such as the above, to more convoluted scenarios that involve playing the odds on a larger scale. This is in one sense the essence of reasoning. We very rarely know whether the way we respond to different scenarios is absolutely correct, but we make reasonable predictions based on past experience.

    Just for fun, let’s illustrate the earlier example with some code in R! First, let’s start with some data that represents information in your mind about when a particular person has said “good weekend?” to you.

    (Sample data from greetings.csv: columns FootballGamePlayed, WorldCup, EnglandPlaying, and GoodWeekendResponse, with “yes” recorded as the response in most rows.)

    In this example, we are saying that GoodWeekendResponse is our score label (i.e., it denotes the appropriate response that we want to predict). For modelling purposes, there have to be at least two possible values, in this case “yes” and “no”. For brevity, the response in most cases is “yes”.

    We can fit the data to a logistic regression model:

    library(VGAM)
    # Load the historical "good weekend?" exchanges
    greetings <- read.csv('c:/AI/greetings.csv', header = TRUE)
    # Fit a multinomial logistic regression model that predicts GoodWeekendResponse
    fit <- vglm(GoodWeekendResponse ~ ., family = multinomial, data = greetings)

    Now what happens if we try to make a prediction from that model when the expected response is different from anything we have previously recorded? In this case, I am expecting the response to be “Go England!”. Below is some more code to add the prediction; for illustration we simply hard-code the new input data, and the output follows the code:

    # A new observation: England played a World Cup game, so the appropriate
    # response this time is "Go England!!" rather than "Yes".
    response <- data.frame(FootballGamePlayed = "Yes", WorldCup = "Yes",
                           EnglandPlaying = "Yes", GoodWeekendResponse = "Go England!!")
    greetings <- rbind(greetings, response)    # incorporate the actual response into the data
    fit <- vglm(GoodWeekendResponse ~ ., family = multinomial, data = greetings)    # re-fit the model
    prediction <- predict(fit, response, type = "response")
    prediction                                 # class probabilities for the new observation
    index <- which.max(prediction)             # index of the most probable response
    df <- colnames(prediction)
    df[index]                                  # name of the most probable response
    
                No Yes Go England!!
    1 3.901506e-09 0.5          0.5
    > index <- which.max(prediction)
    > df <- colnames(prediction)
    > df[index]
    [1] "Yes"

    The initial prediction “yes” was wrong, but note that in addition to predicting against the new data, we also incorporated the actual response back into our existing model. Also note that the new response value “Go England!” has been learnt, with a probability of 50 percent based on the current data. If we run the same piece of code again, the probability that “Go England!” is the right response based on prior data increases, so this time our model chooses to respond with “Go England!”, because it has finally learnt that this is most likely the correct response!

                No       Yes Go England!!
    1 3.478377e-09 0.3333333    0.6666667
    > index <- which.max(prediction)
    > df <- colnames(prediction)
    > df[index]
    [1] "Go England!!"

    Do we have Artificial Intelligence here? Well, clearly there are different levels of intelligence, just as there are with human beings. There is, of course, a good deal of nuance missing here, but nonetheless this very simple program is able to react, with limited accuracy, to incoming data related to one very specific topic, to learn from its mistakes, and to make adjustments based on predictions, all without the need to develop exhaustive rules to account for the different responses that are expected for different combinations of data. This is the same principle that underpins many AI systems today, which, like human beings, are mostly sophisticated prediction machines. The more sophisticated the machine, the more accurately it can make predictions based on a complex array of data used to train various models, and the most sophisticated AI systems of all are able to continually learn from faulty assertions in order to improve the accuracy of their predictions, thus exhibiting something approximating human intelligence.

    Machine learning

    You may be wondering, based on this definition, what the difference is between machine learning and Artificial Intelligence. After all, isn’t this exactly what machine learning algorithms do: make predictions based on data using statistical models? This very much depends on the definition of machine learning, but ultimately most machine learning algorithms are trained on static data sets to produce predictive models, so machine learning algorithms only facilitate part of the dynamic in the definition of AI offered above. Additionally, machine learning algorithms, much like the contrived example above, typically focus on specific scenarios, rather than working together to create the ability to deal with ambiguity as part of an intelligent system. In many ways, machine learning is to AI what neurons are to the brain: a building block of intelligence that can perform a discrete task, but that may need to be part of a composite system of predictive models in order to really exhibit the ability to deal with ambiguity across an array of behaviors that might approximate intelligent behavior.

    Practical applications

    There are a number of practical advantages to building AI systems, but as discussed and illustrated above, many of these advantages are pivoted around “time to market”. AI systems enable the embedding of complex decision making without the need to build exhaustive rules, which traditionally can be very time consuming to procure, engineer, and maintain. Developing systems that can “learn” and “build their own rules” can significantly accelerate organizational growth.

    Microsoft’s Azure cloud platform offers an array of discrete and granular services in the AI and Machine Learning domain that allow AI developers and Data Engineers to avoid reinventing wheels and to consume re-usable APIs. These APIs allow AI developers to build systems which display the type of intelligent behavior discussed above.

    If you want to dive in and learn how to start building intelligence into your solutions with the Microsoft AI platform, including pre-trained AI services like Cognitive Services and the Bot Framework, as well as deep learning tools like Azure Machine Learning, Visual Studio Code Tools for AI, and Cognitive Toolkit, visit AI School.

    New locations for Azure CDN now available

    Back in May during Microsoft Build, we made our own Content Delivery Network available to Azure customers. Building on our years of experience scaling enterprise-class services, we reached a significant milestone by opening up access to this vast infrastructure, which was previously not accessible to customers. In line with our mission to offer the broadest possible range of choice and options when deploying and scaling your web apps using Azure, Microsoft’s own CDN was a natural expansion supplementing our world-class partners Akamai and Verizon.

    Azure CDN provides a world-class platform to let you reduce load times, save bandwidth, and speed responsiveness across your business’s diverse workflows. Azure CDN from Microsoft enables Azure customers to use and deliver content from the same global CDN network leveraged by Microsoft properties such as Office 365, Skype, Bing, OneDrive, Windows, and Xbox.

    Connectivity within Microsoft’s network, along with new Regional Caching capabilities, enables more consistent, more predictable cache fill performance by providing multi-tier caching along with direct, private access to content in Azure from each CDN edge point of presence (POP). Azure CDN from Microsoft entered public preview providing access to 54 global Edge POPs in 33 countries and 16 Regional Cache POPs.

    Today we're happy to announce eight additional point-of-presence (POP) locations, adding three new countries to our global coverage.

    The new locations now online and available:

    North America

    • Seattle, WA, USA
    • Las Vegas, NV, USA
    • Houston, TX, USA

    Europe

    • Budapest, Hungary
    • Barcelona, Spain
    • Warsaw, Poland

    India

    • New Delhi, India

    Asia

    • Busan, South Korea

    Get started with Azure CDN from Microsoft today! If you are interested in exploring capabilities beyond the standard offerings, please contact us at cdnteam@microsoft.com.

    Additional resources

    Azure Data Factory Visual tools now supports GitHub integration

    GitHub is a development platform that allows you to host and review code, manage projects, and build software alongside millions of other developers, from open source to business. Azure Data Factory (ADF) is a managed data integration service in Azure that allows you to iteratively build, orchestrate, and monitor your Extract Transform Load (ETL) workflows. You can now integrate your Azure Data Factory with GitHub. The ADF visual authoring integration with GitHub allows you to collaborate with other developers and do source control and versioning of your data factory assets (pipelines, datasets, linked services, triggers, and more). Simply click ‘Set up Code Repository’ and select ‘GitHub’ from the Repository Type dropdown to get started.

    ADF-GitHub integration allows you to use either public GitHub or GitHub Enterprise, depending on your requirements. You can use OAuth authentication to log in to your GitHub account. ADF automatically pulls the repositories in your GitHub account for you to select from. You can then choose the branch that developers in your team will use for collaboration. You can also easily import all your current data factory resources into your GitHub repository.

    Once you enable ADF-GitHub integration, you can save your data factory resources to GitHub at any time. ADF automatically saves the code representation of your data factory resources (pipelines, datasets, and more) to your GitHub repository. Get more information and detailed steps on enabling Azure Data Factory-GitHub integration.

    Our goal is to continue adding features and improve the usability of Data Factory tools. Get started building pipelines easily and quickly using Azure Data Factory. If you have any feature requests or want to provide feedback, please visit the Azure Data Factory forum.

    Make your Visual Studio Team Services dashboard part of your conversation in Microsoft Teams

    Visual Studio Team Services (VSTS) Dashboards help keep track of your project and drive collaboration with your team. Starting today, you can bring your VSTS dashboards right where the conversation is happening in Microsoft Teams (MS Teams). MS Teams is Microsoft’s chat and collaboration platform. Add any of your dashboards to a Team channel tab using... Read More

    Enhance security and simplify network integration with Extension Host on Azure Stack

    We are excited to share a new capability we are bringing to Azure Stack to further enhance its security posture and simplify network integration for our customers. Today, each Azure service on Azure Stack adds functionality to the portal via a module called a portal extension. Each of these portal extensions uses a separate network port. As the number of Azure services increases, so does the number of ports that must be opened on a firewall that supports Azure Stack.

    Our customers told us we needed to improve this posture, and we’ve listened. We’re bringing the Extension Host solution to Azure Stack so that only one port (443) needs to be opened. This solution is already available on Azure; it funnels all requests through one port, reducing the ports that need to be opened on the firewall and allowing customers to communicate with these endpoints via proxy servers.

    In its first release, the User and Admin portal default extensions have moved to this model, thereby reducing the number of ports from 27 to one. Over time, additional services such as the SQL and MySQL providers will also be changed to use the Extension Host model.

    The implementation of Extension Host requires two wildcard SSL certificates, one for the Admin portal and one for the Tenant portal. Customers who have already deployed Azure Stack systems will need to provide these two additional certificates by the time the 1810 update is released, so there is still some time to acquire and prepare for this capability. The 1810 update will require these two certificates to be imported before the update can be applied. Effectively, the 1810 update will enable Extension Host.

    New deployments of Azure Stack will start requiring these two additional certificates sometime in September 2018. Please check with your Azure Stack hardware partner for specifics on when they require these additional certificates for new deployments.

    To prepare for the use of these certificates, we have enhanced the Azure Stack Readiness Checker tool. You can use it to validate certificates acquired for Extension Host, as well as to generate the appropriate certificate signing requests for these two certificates.

    Thanks for your continued feedback and support for Azure Stack. Please share any thoughts or questions in the comments section below.

    Additional Information
