
Snapshots on Exceptions while debugging with IntelliTrace

Have you ever encountered an exception in your application while debugging, and wanted to know exactly what the state of the app was at that point in time? Or, you’re debugging async code and you want to know the context in which an exception was thrown? Now, with a new feature in IntelliTrace step-back, you can!

Starting in Visual Studio Enterprise 2017 version 15.7, IntelliTrace will now automatically take snapshots on exception events, in addition to breakpoints and debugger steps. This enables you to go back to a previous exception event and see the state of the application at the time the exception was thrown.

Exception events are the most widely used IntelliTrace event, and we’re excited to address a top customer request.

Enabling snapshots on exception events

To enable this feature, go to Tools -> Options -> IntelliTrace settings, and select the option “IntelliTrace events and snapshots.” By default, IntelliTrace will collect a maximum of five snapshots on exception events between break states – e.g. between debugger steps or breakpoints. This number can be configured in IntelliTrace advanced settings.

Enable this feature in Tools - Options - IntelliTrace by selecting "IntelliTrace events and snapshots"

Debugging with snapshots on exceptions

Let’s walk through an example of how we can use this feature while debugging an application.

In my shuttle application, I see that when I press F5 to debug it, I get an error. The driver details are not appearing as expected.

The driver details and headshot picture do not show as expected

In Visual Studio, IntelliTrace has recorded an exception event in the Diagnostic Tools window. The camera icon indicates that there is a snapshot available.

Camera icon on the Exception indicates there is a snapshot available

To view the snapshot, double-click on the event, or select the event and click the ‘Activate Historical Debugging’ link.

Visual Studio is now in Historical Debugging mode at the line of code where the exception was thrown.

Visual Studio is in historical debugging mode

From here, you’ll be able to see the historical values of your Call Stack, Locals, and Watches, just as you would in regular, live debugging. You can also evaluate expressions in the Watch Window and hover over variables to see data tips. All these values are populated using the data from the snapshot.

Using this information, I can see from the Locals window that driver.name was null. In this query, we should have done the comparison on driver.id.

By changing the query on line 161 to:

selectedDriver = drivers.Where(d => driver.id.Equals(d.DriverId)).FirstOrDefault();

the issue is fixed. Now, the driver shows up!

The issue is fixed and now the driver headshot picture shows

To minimize redundant snapshots and improve performance, IntelliTrace only takes snapshots on exception ‘thrown’ events under certain conditions. The first time an exception is thrown, IntelliTrace takes a snapshot and marks it accordingly. If this exception is caught and then rethrown, or wrapped as an inner exception of another thrown exception, no snapshot is taken on those events, nor on any other exception events associated with the original exception. Additionally, IntelliTrace will not take snapshots on exceptions with the same type and call stack as an existing exception event that already has a snapshot.

Try it out

The IntelliTrace snapshots on exceptions feature is available in Visual Studio Enterprise 2017 version 15.7 and requires Windows 10 Anniversary Update or above. The feature is currently supported for ASP.NET, ASP.NET Core, .NET Core, WinForms, WPF, managed console apps, and managed class libraries.

We’d love to hear your feedback. To report issues, use the Report a Problem tool in Visual Studio. You’ll be able to track your issues in the Visual Studio Developer Community where you can ask questions and find answers. You can also make a product suggestion through UserVoice, or email the team directly at stepback@microsoft.com.

Deborah Chen, Program Manager, Visual Studio Diagnostics
@ChenDeborah

Deborah is a program manager on the Visual Studio Diagnostics team, working on IntelliTrace.


New features for extensions in the Windows 10 April 2018 Update

The Windows 10 April 2018 Update includes several incremental improvements to API support, functionality, and end-user discoverability for the extensions platform in Microsoft Edge. In this post, we’ll walk through the biggest improvements, and how you can get started enhancing your extensions with new features.

Extensions can now be enabled for InPrivate browsing

In previous releases, extensions could not be enabled during an InPrivate browsing session. Beginning with this release, users can now choose to allow extensions to run during InPrivate browsing on a case-by-case basis – either when the extension is initially installed (by selecting the “Allow for InPrivate browsing” checkbox), or at any later time by visiting the Settings page for a given extension.

When running InPrivate, extensions can run in either split or span mode as specified by the WebExtensions API. In span mode (the default), the extension spans both InPrivate and non-private windows; windows and tabs from an InPrivate instance are indicated to the extension with an incognito property. In split mode, a separate instance of the extension is created for InPrivate and normal browsing, and the two copies are isolated (the InPrivate copy cannot see non-private windows, and the non-private copy cannot see InPrivate windows).
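As a rough sketch, an extension opts into split mode through the standard WebExtensions incognito key in its manifest.json (other fields elided; span mode corresponds to the default value "spanning"):

{
  "name": "My Extension",
  "version": "1.0.0",
  "incognito": "split"
}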

Screen capture showing the OneNote Web Clipper in Microsoft Edge in an InPrivate window.

Extensions are now available when browsing InPrivate

Extensions in span mode can be debugged like a normal extension. Extensions in split mode can be debugged separately for each instance, as the background script is separate for normal and InPrivate sessions.

Screen capture showing the settings page for an extension.

The background script is separate for normal and InPrivate sessions.

Introducing the Notifications API for extensions

Extensions can now display interactive notifications, including basic messages, progress indicators, lists, and more. Developers can customize the appearance of these notifications by configuring the icon, text, buttons, and button icons.

Screen capture showing examples of progress, list, image and basic notifications

Examples of progress, list, image and basic notifications

In the sample below, we’ll demonstrate the process to create a basic notification. The first step is to define the notification options:
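A minimal sketch, with illustrative names, paths, and strings, might look like this:

// Options for a basic notification; type can also be "image", "list", or "progress"
var basicNotification = {
    type: "basic",
    iconUrl: "icons/notification.png", // illustrative path within the extension package
    title: "Download complete",
    message: "Your file has finished downloading."
};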

Next, we’ll add event listeners for various user interactions:
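For example, handlers along these lines (the logging is illustrative) react to clicks, button presses, and dismissal:

// Fires when the user clicks the body of the notification
browser.notifications.onClicked.addListener(function (notificationId) {
    console.log("Notification " + notificationId + " was clicked");
});

// Fires when the user clicks one of the notification's buttons
browser.notifications.onButtonClicked.addListener(function (notificationId, buttonIndex) {
    console.log("Button " + buttonIndex + " was clicked on " + notificationId);
});

// Fires when the notification is closed, either by the user or automatically
browser.notifications.onClosed.addListener(function (notificationId, closedByUser) {
    console.log("Notification " + notificationId + " was closed");
});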

The notification can now be invoked:
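For instance, using the options object sketched above ("download-complete" is an illustrative ID, and the callback style mirrors the Chrome-compatible API):

browser.notifications.create("download-complete", basicNotification, function (createdId) {
    console.log("Created notification " + createdId);
});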

Notifications sent from an extension use the standard Windows notification service, appearing in Action Center until an action is taken. Users have full control over notifications, and can choose to suppress notifications originating from a specific extension using either the extension’s menu in Microsoft Edge or by acting on an individual notification in the Windows Action Center.

Screen capture showing the option to "turn off notifications for this extension" in Action Center

Users can suppress notifications for an extension from the Action Center or the extension settings

Extension developers can determine if notifications are enabled or disabled using the getPermissionLevel() method:
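For example (callback style shown):

browser.notifications.getPermissionLevel(function (level) {
    // level is "granted" when notifications are enabled, "denied" when suppressed
    console.log("Notification permission level: " + level);
});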

The event onPermissionLevelChanged will be raised if the user changes the notification permission:
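For example:

browser.notifications.onPermissionLevelChanged.addListener(function (level) {
    console.log("Notification permission level changed to: " + level);
});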

Support for Tabs.reload()

Microsoft Edge extensions now support the tabs.reload() method in the tabs API, allowing extensions to reload a specific tab directly.
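For example, a small sketch that reloads the active tab while bypassing the cache:

browser.tabs.query({ active: true, currentWindow: true }, function (tabs) {
    browser.tabs.reload(tabs[0].id, { bypassCache: true });
});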

What’s next for extensions

We are continuing to make enhancements to the extensions platform for future releases. You can see a snapshot of our current roadmap on Microsoft Docs at Microsoft Edge Extension API roadmap. To get started building your extension for Microsoft Edge, check out Getting started with extensions.

We look forward to your feedback on these improvements! You can vote on the most important features for your extension development on the Microsoft Edge Developer UserVoice, or share your feedback in the comments below.

– Balaje Krishnan, Senior Program Manager, Microsoft Edge

The post New features for extensions in the Windows 10 April 2018 Update appeared first on Microsoft Edge Dev Blog.

Explore Build 2018 content with playlists

If you couldn’t attend Microsoft Build 2018, now is your opportunity to do so online with Microsoft Build Live 2018. Learn about the cloud, AI, IoT, and much more. But as you browse the hundreds of recordings available, you may find it overwhelming to find the things that are the most relevant to what you want to learn. That’s why we organized select content into playlists.

Playlist: Migrate existing apps to the cloud

Simply select the playlist that is most interesting to you, and you’ll get our top picks for the most relevant sessions, resources, and expert interviews:

New capabilities to enable robust GDPR compliance

Today marks the beginning of enforcement of the EU General Data Protection Regulation (GDPR), and I’m pleased to announce that we have released an unmatched array of new features and resources to help support compliance with the GDPR and the policy needs of Azure customers.

New offerings include the general availability of the Azure GDPR Data Subject Request (DSR) portal, Azure Policy, Compliance Manager for GDPR, Data Log Export, and the Azure Security and Compliance Blueprint for GDPR.

In our webcast today, President Brad Smith outlined our commitment to making sure that our products and services comply with the GDPR, including having more than 1,600 engineers across the company working on GDPR projects. As Brad noted, we believe privacy is a fundamental human right, and that individuals must be in control of their data. So I am pleased that Azure is part of keeping that commitment by being the only hyperscale cloud provider to offer the level of streamlined mechanisms and tools for GDPR compliance enforcement we are announcing today.

Azure Data Subject Request (DSR) portal enables you to fulfill GDPR requests. The DSR capability is generally available today through the Azure portal user interface, as well as through pre-existing application programming interfaces (APIs) and user interfaces (UIs) across the breadth of our online services. These capabilities allow customers to respond to requests to access, rectify, delete, and export personal data in the cloud. In addition, Azure enables customers to access system-generated logs as a part of Azure services.

Azure Policy enables you to set policies to conform to the GDPR. Azure Policy is generally available today at no additional cost to Azure customers. You can use Azure Policy to define and enforce policies that help your cloud environment become compliant with internal policies as well as external regulations.

Azure Policy is deeply integrated into Azure Resource Manager and applies across all resources in Azure. Individual policies can be grouped into initiatives to quickly implement multiple rules. You can also use Azure Policy in a wide range of compliance scenarios, such as ensuring that your data is encrypted or remains in a specific region as part of GDPR compliance. Microsoft is the only hyperscale cloud provider to offer this level of policy integration built into the platform at no additional charge.
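For example, the core of a custom policy that keeps resources in approved regions is a rule like the following sketch (the region list is illustrative; production policies typically take it as a parameter):

{
  "if": {
    "not": {
      "field": "location",
      "in": [ "westeurope", "northeurope" ]
    }
  },
  "then": {
    "effect": "deny"
  }
}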

Extend Azure Policies for the GDPR into Azure Security Center. Azure Security Center provides unified security management and advanced threat protection to help meet GDPR security requirements. With Azure Policy integrated into Security Center, you can apply security policies across your workloads, enable encryption, limit your exposure to threats, and help you respond to attacks.

The Azure Security and Compliance GDPR Blueprint accelerates your GDPR deployment. This new Azure Security and Compliance Blueprint will help you build and launch cloud-powered applications that meet GDPR requirements. It includes common reference architectures, deployment guidance, GDPR article implementation mappings, customer responsibility matrices, and threat models that enable you to quickly and securely implement cloud solutions.

Compliance Manager for Azure helps you assess and manage GDPR compliance. Compliance Manager is a free, Microsoft cloud services solution designed to help organizations meet complex compliance obligations, including the GDPR, ISO 27001, ISO 27018, and NIST 800-53. Generally available today for Azure customers, the Compliance Manager GDPR dashboard enables you to assign, track, and record your GDPR compliance activities so you can collaborate across teams and manage your documents for creating audit reports more easily. Azure is the only hyperscale cloud provider with this functionality.

Azure GDPR support and guidance help you stay compliant. Our GDPR sites on the Service Trust Portal and the Trust Center provide you with current information about Microsoft services that support the requirements of the GDPR. These include detailed guidance on conducting Data Protection Impact Assessments in Azure, fulfilling DSRs in Azure, and managing Data Breach Notification in Azure for you to incorporate into your own GDPR accountability program.

Global Regions help you meet your data residency requirements. Azure has more global regions than any other cloud provider, offering the scale you need to bring applications closer to people around the world, preserve data residency, and give customers the confidence that their data is under their control.

Microsoft has a long-standing commitment to privacy and was the first cloud provider to achieve certification for the EU Model Clauses and ISO/IEC 27018, and was the first to contractually commit to the requirements of the GDPR. Azure offers 11 privacy-focused compliance offerings, more than any other cloud provider. We are proud to be the first to offer customers this level of GDPR functionality.

Through the GDPR, Azure has strengthened its commitment to be first among cloud providers in providing a trusted, private, secure, and compliant cloud. We are continuing to build and release new features, tools, and supporting materials for our customers to comply with the GDPR and other important standards and regulations. We are proud to release these new capabilities and invite you to learn more in the Azure portal today.

Safeguard individual privacy rights under GDPR with the Microsoft intelligent cloud

Today’s post was written by Alym Rayani, director of Microsoft 365.

With the General Data Protection Regulation (GDPR) taking effect, today marks a milestone for individual privacy rights. We live in a time where digital technology is profoundly impacting our lives, from the way we connect with each other to how we interpret our world. Central to this digital transformation is the ability to store and analyze massive amounts of data to generate deeper insights and more personal customer experiences. This helps all of us achieve more than ever before, but it also leaves an extensive trail of data, including personal information and sensitive business records that need to be protected.

At Microsoft, our mission is to empower every person and every organization on the planet to achieve more. Trust is at the core of everything we do because we have long appreciated that people won’t use technology they don’t trust. We also believe that privacy is a fundamental human right that needs to be protected. As Julie Brill, Microsoft privacy lead, notes in her recent blog, Microsoft believes GDPR establishes important principles that are relevant globally.

In addition to our ongoing commitment to privacy, we made a number of investments over the last year to support GDPR and the privacy rights of individuals. Here is a recap of how you can use these capabilities to help your organization on the path to GDPR compliance.

Assess and manage compliance risk

Because achieving organizational compliance can be very challenging, understanding your compliance risk should be your first priority. Customers have told us about their challenges with the lack of in-house capabilities to define and implement controls and inefficiencies in audit preparation activities.

Compliance Manager and Compliance Score help you continuously monitor your compliance status. Compliance Manager captures and provides details for each Microsoft control implemented to meet specific requirements, including implementation and test plan details, and management responses if necessary. It also provides recommended actions your organization can take to enhance data protection capabilities and help you meet your compliance obligations.

Screenshot of the Compliance Manager dashboard.

Here’s a look at how Microsoft 365 customer Abrona uses Compliance Manager:

Protect personal data

At its core, GDPR is all about protecting the personal data of individuals—making sure there is proper security, governance, and management of such data to help prevent it from being misused or getting into the wrong hands. To help ensure that your organization is effectively protecting personal data as well as sensitive content relevant to organizational compliance needs, you need to implement solutions and processes that enable your organization to discover, classify, protect, and monitor data that is most important.

The information protection capabilities within Microsoft 365, such as Office 365 Data Governance and Azure Information Protection, provide an integrated classification, labeling, and protection experience—enabling more persistent protection of your data—no matter where it lives or travels. A proactive data governance strategy of classification of personal and sensitive data enables you to respond with precision when you need to find the relevant data to satisfy a regulatory request or requirement like a Data Subject Request (DSR) as a part of GDPR.

Screenshot of Protection settings the Security & Compliance Center.

Azure Information Protection scanner addresses hybrid and on-premises scenarios by allowing you to configure policies to automatically discover, classify, label, and protect documents in your on-premises repositories such as file servers and on-premises SharePoint servers. You can deploy the scanner in your own environment by following instructions in this technical guide.

Azure’s fully managed database services, like Azure SQL Database, help alleviate the burden of patching and updating the data platform, while bringing intelligent built-in features that help identify where sensitive data is stored. New technologies, like Azure SQL Data Discovery and Classification, provide advanced capabilities for discovering, classifying, labeling, and protecting the sensitive data at the database level. Protect personal data with technologies like Transparent Data Encryption (TDE) that offer Bring Your Own Key (BYOK) support with Azure Key Vault integration.

Let’s take a look at how Microsoft 365 customer INAIL leverages Azure Information Protection to classify, label, and protect their most sensitive data:

Respond with confidence

Ensuring processes are in place to efficiently manage and meet certain GDPR requirements, such as responding to DSRs or responding to data breaches, is a tough hurdle for many organizations.

To help you navigate the GDPR resources provided across cloud services, we introduced the Privacy tab in the Service Trust Portal last month. It provides you with the information you need to prepare for your own Data Protection Impact Assessments (DPIAs) on Microsoft Cloud services, the guidance for responding to DSRs, and the information about how Microsoft detects and responds to personal data breaches and how to receive notifications directly from Microsoft.

Watch the new Mechanics video to learn more about the GDPR resources in the Service Trust Portal.

Features to support DSRs

Several features help support DSRs across Microsoft Cloud services, including a Data Privacy tab in Office 365, an Azure DSR portal, and new DSR search capabilities in Dynamics 365.

The new Data Privacy tab, GDPR dashboard, and DSR experience in Office 365 are now generally available for all commercial customers. This experience is designed to provide you with the tools to efficiently and effectively execute a DSR for Office 365 content—such as Exchange, SharePoint, OneDrive, Groups, and now Microsoft Teams.

As Kelly Clay of GlaxoSmithKline says, “The GDPR 2016/679 is a regulation in E.U. law on data protection and privacy for all individuals within the European Union. GDPR also brings a new set of ‘digital rights’ for E.U. citizens in an age of an increase of the economic value of personal data in the digital economy. GDPR will require large data holders and data processors to manage DSRs, and organizations will need tools in Office 365 to manage DSRs.”

Patrick Oots of the law firm Shook, Hardy & Bacon has observed his client organizations’ steps toward GDPR compliance. “We are excited to see Microsoft investing in Office 365. As our clients prepare for GDPR, we see tremendous value in tools within the Data Privacy Portal to manage DSRs in response to Article 15. As data privacy law evolves, we remind our Office 365 clients of the overall importance of the proper implementation of information governance policies within the Security & Compliance Center to minimize risk.” Patrick further highlights how a proactive data governance strategy can help organizations react to regulations such as GDPR with precision when required.

Screenshot of the GDPR dashboard in the Security & Compliance Center.

The Azure DSR portal is now also generally available. Using the Azure DSR portal, tenant admins can identify information associated with a user and then correct, amend, delete, or export the user’s data. Admins can also identify information associated with a data subject and will be able to execute DSRs against system-generated logs (data Microsoft generates to provide a given service) for Microsoft Cloud services. Other new offerings from Azure include the general availability of Azure Policy, Compliance Manager for Azure GDPR, and the Azure Security and Compliance Blueprint for GDPR.

Learn more by reading the Azure blog post on GDPR features.

Screenshot of the "Get started with User Privacy" screen in Azure Directory.

To help customers respond to DSRs in Dynamics 365, we have two search capabilities: Relevance Search and the Person Search Report. Relevance Search gives you a fast and simple way to find what you are looking for, and is powered by Azure Search. The Person Search Report offers a prepackaged set of extendible entities, which Microsoft authored, to identify personal data used to define a person and the roles they might be assigned to.

You can learn more in the Dynamics 365 blog post.

Screenshot showcasing the Relevance Search capability in Dynamics 365.

The new Windows Privacy hub converges related content about Windows privacy on docs.microsoft.com. Here you can find new guidance to help IT decision makers get ready for GDPR, a list of Windows 10 services configuration settings used for personal data privacy protection, information to help you understand Windows diagnostic data, and much more.

Handling data breaches

The onset of GDPR also means stricter regulations that organizations must adhere to in the event of a data breach. Microsoft 365 has a robust set of capabilities, from Office 365 Advanced Threat Protection (ATP) to Azure ATP, that can help protect against and detect data breaches.

Get started today on your GDPR journey with Microsoft

Microsoft has extensive expertise in protecting data, championing privacy, and complying with complex regulations. We believe that the GDPR is an important step forward for clarifying and enabling individual privacy rights, and we provide GDPR-related assurances in our contractual commitments.

No matter where you are in your GDPR efforts, we are here to help on your journey to GDPR compliance. We have several resources available to help you get started today:

Learn more about how Microsoft can help you with the GDPR.

The post Safeguard individual privacy rights under GDPR with the Microsoft intelligent cloud appeared first on Microsoft 365 Blog.

M-Series certified for SAP HANA and available worldwide

Last December, we announced general availability of Azure M-series Virtual Machines with memory configuration from 1 TB to 4 TB, offering on-demand scalability and agility for workloads requiring large memory consumption. We recently announced SAP HANA certification for M-series VMs, enabling you to run production SAP HANA workloads with a pay-as-you-go, per second VM billing. Customers can now deploy entire SAP landscapes in minutes with Azure automation templates like this one from our partner, SUSE, offering unprecedented agility for SAP HANA applications with error-free setup following reference architecture guidance. With Azure Reserved VM Instances, you can save up to 72% on the M-series VMs when running in Azure. Agile, inexpensive, and certified. 

Given the excitement we have seen from customers on this SKU, we have been steadily launching M-series VMs all over the world. Today, I’m excited to announce that M-series VMs are now generally available in four more regions: Australia East, Australia Southeast, Japan East, and Japan West. This adds to availability in US East 2, US West 2, West Europe, UK South, Southeast Asia, and US Gov Virginia. M-series VMs are now generally available in 10 regions worldwide, with plans for 12 additional regions later this year, doubling our global capacity for SAP HANA certified VMs in 6 months. For details on M-series general availability by region, please check out more details about Azure service region availability.

Azure has the largest spectrum of SAP certified offerings in the public cloud, with 16 different configurations for customers to choose from. The M-series VM is certified for 4 TB or less, which matches the upper bound of what other clouds offer today. To support larger workloads, Azure pioneered a purpose-built bare-metal large instance, offering up to 20 TB in memory for a single node scale-up configuration, the largest available of any public cloud. Azure is also certified for a scale-out configuration for up to 60 TB with a 15+1 node configuration of 4 TB instances, offering the largest scale-out in any public cloud in a cost-effective configuration. For the scale-out BW deployment, given our unique bare metal offering, Azure is the only public cloud provider to offer a 99.99% SLA.

We continue to lead the industry in offering you the best choice, scale, and SLA in the public cloud and with our region expansion this year, we will offer the largest global footprint for SAP HANA certified VMs of any public cloud. 

Given this, we are seeing very exciting results from our customers. Rio Tinto, a global metals and mining company, migrated mission-critical SAP applications and realized a 30% reduction in IT operating costs with increased agility to meet peak SAP application usage. Penti, a fast-growing Turkish retailer, will run S/4HANA on Azure to ensure service availability, maximize data security, and reduce operational costs while increasing business agility. Guyana Goldfields, a Canadian gold producer, estimates a 40% reduction in SAP ERP maintenance costs by using Azure for S/4HANA and SAP applications.
 
If you want to learn more:

Thanks, 
Corey

Microsoft continues to be a leader in Gartner’s Cloud IaaS MQ

We’re in the midst of the cloud revolution. Customers – big and small – are betting on Azure and undergoing massive digital transformation. While industry stalwarts like Rio Tinto are powering their mission critical SAP deployments with powerful Azure infrastructure, startups like Audioburst are envisioning entire new businesses on Azure with AI technologies, and many others like Adobe are delighting their customers with better digital experiences. Every customer has a fascinating story on what they’re doing with Azure. And we’re proud to be the cloud – across IaaS and PaaS – underpinning this transformation and enabling IT.

Today, we’re honored to be recognized as a leader in Gartner’s IaaS MQ for the fifth consecutive year.

Many of our customers choose IaaS as the entry point to Azure, and I am happy to note that the recent RightScale State of the Cloud survey placed Azure ahead of any other cloud for enterprises beginning their cloud journey. The reason customers are choosing Azure is not just because Azure offers unparalleled security, the largest global datacenter footprint, true hybrid consistency, and a rich set of application services. It’s also because Microsoft leads across a broad spectrum of cloud services. This gives our customers the confidence that no matter their cloud need, Microsoft has them covered.

Here’s a list of the Gartner Magic Quadrants in which Microsoft is placed in the Leaders quadrant:

Magic Quadrant for Cloud Infrastructure as a Service, Worldwide
Magic Quadrant for the CRM Customer Engagement Center
Magic Quadrant for Enterprise Agile Planning Tools
Magic Quadrant for Enterprise Integration Platform as a Service
Magic Quadrant for Cloud Infrastructure as a Service, Japan
Magic Quadrant for Analytics and Business Intelligence Platforms
Magic Quadrant for Data Management Solutions for Analytics
Magic Quadrant for Operational Database Management Systems
Magic Quadrant for Content Services Platforms
Magic Quadrant for Meeting Solutions
Magic Quadrant for Content Collaboration Platforms
Magic Quadrant for Public Cloud Storage Services, Worldwide
Magic Quadrant for Unified Communications
Magic Quadrant for Sales Force Automation
Magic Quadrant for Mobile App Development Platforms
Magic Quadrant for Access Management, Worldwide
Magic Quadrant for Insight Engines

If you haven’t used Azure yet, I invite you to try it – and I can't wait to see what amazing things you'll end up doing with Azure.

 

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

What’s new in VSTS Sprint 134 Update

The Sprint 134 Update of Visual Studio Team Services (VSTS) has rolled out to all accounts. In this Update we continue to increase the breadth of services offered in Azure DevOps Projects to enable you to get started quickly. The (newly renamed) Azure Kubernetes Service (AKS), a fully managed Kubernetes container orchestration service, Azure Service... Read More

The sweet life with Jocelyn Delk Adams

When you’re a successful food blogger, cookbook author, TV celebrity, world traveler, devoted wife, expecting mother, and occasional tap dancer, life starts to stack up like a seven-layer cake. Jocelyn Delk Adams’ personal goal through all of it? Keep things as sweet as possible.

Profile picture of Jocelyn Delk Adams, entrepreneur and author of the book Grandbaby Cakes.

The mastermind behind the popular baking blog Grandbaby Cakes, Adams is a lightning bolt of activity. In addition to creating and cultivating a rapidly growing brand and loyal fan base, the energetic 30-something racks up frequent flyer miles like they’re going out of style, jetting to appear live on programs like The Today Show or The Cooking Channel’s Unique Sweets.

Yet even with her busy lifestyle, Adams’ favorite place is still in her own kitchen creating and testing recipes, most of which are based on techniques and tastes handed down to her from generations past. Adams says she relies on Office 365 apps in a variety of ways to stay on top of it all. For instance, she keeps track of her many food creations in Excel, using the Sort feature to quickly find what she’s looking for. And she says the Review feature in Word is indispensable for writing and editing the wealth of content she publishes on her blog, her website, and other brand outlets. (More on this below.)

As a young girl growing up in the suburbs of Chicago, Adams had plenty of influences in the kitchen. Her mother and aunt taught her the basics of cooking and baking at an early age. Her father, seeing his daughter’s rapidly growing culinary interest, encouraged her talent by purchasing whatever she wanted at the grocery store each week, so she could make meals for herself and the family. She also says her uncle, BB, is one of the best cooks in the family. “He seriously gets down,” she proclaims.

Yet Adams’ biggest education took place during visits to her grandmother Maggie’s home in Mississippi. It was there, alongside her “Big Mama” as the family affectionately (or should we say “confectionately”) calls Maggie, that a lifelong love of creating cakes and other dessert treats took hold. It was Big Mama’s original recipes and stories that formed the inspiration behind Adams’ book Grandbaby Cakes, published in 2015.

Adams feels a big reason for the book’s popularity is the strong connection it makes between her Southern-inspired recipes and her strong family roots. “It was an opportunity to truly share my story,” she says. “I’m so glad readers connected with that.”

Family is what got Adams started in the baking business, and it’s what keeps her motivated today. Yet as a young adult, baking wasn’t always in her career sights. Her first full-time job out of college was as a TV production assistant on the Judge Mathis show. From there she freelanced in casting for several big-budget films before eventually landing at an arts college, where she oversaw the production of a huge annual arts festival in downtown Chicago.

Through it all, however, were her grandmother’s recipes. And a natural entrepreneurial spirit. When Adams decided to pair her production experience with her unique kitchen history, Grandbaby Cakes was born. “Having a brand where family is central to its purpose really makes so much sense for me,” she says. “I’m so glad that I get to incorporate them into my first love and career in such a special way.” And while Adams’ recipes are based on Big Mama’s original creations, she likes to experiment as well, “adding new life to family recipes that could use a little spruce.”

Finding success in the crowded blogosphere and celebrity baking world requires exceptional dedication and organization. The elusive dividing line between work and life can be blurry at best, something Adams remains philosophical about. “When I realized that balance didn’t truly exist, I was finally able to prioritize and manage my schedule much better,” she says. “I believe in changing hats when I need to.”

Adams says she credits Excel for keeping her organized. “I have millions of spreadsheets devoted to everything from recipe ideas arranged by month or season to recipe test results to project budgets. Whether I’m making new seasonal recipe lists or invite lists for brand events, I can alphabetically sort—or sort by certain data. Talk about efficiency!” She also says she uses the Formulas feature in Excel constantly, for budgets and processing payments. “Formulas automatically calculate totals for me, which means I can concentrate on so many other things.”

As far as writing goes, she says she keeps all her recipes and chapters of her cookbook organized in Word. “The Comments feature is particularly useful for receiving edits from my publisher. We can implement Track Changes, which helps us keep tabs on any edits we make. It has totally revolutionized how I edit my transcripts.” Adams notes that for her, the ability to customize the Spelling & Grammar feature in Word is key. “I have some settings that automatically update words that I use frequently in my posts. This is such a time saver.”

The next chapter in the Grandbaby Cakes story may be the most exciting—the impending arrival of her “little BabyCakes” this winter. “My family and I are beyond excited. Because I’m having a girl, I can’t wait to get her in the kitchen and get her baking with her family,” says Adams. “We truly believe in passing down our traditions, so this will be no exception.”

No doubt adding a new family member will make a busy life even busier. But for now, Adams is taking it all in stride. “Family is always number one for me. And while I’m a super hard worker that constantly challenges myself to grow and soar more and more each year, my family comes before everything, and they totally should.”

That sounds like something that would make Big Mama proud.

See what’s new in Office 365.

The post The sweet life with Jocelyn Delk Adams appeared first on Microsoft 365 Blog.

Top stories from the VSTS community – 2018.05.25

Here are top stories we found in our streams this week related to DevOps, VSTS, TFS and other interesting topics, listed in no specific order: TOP STORIES Chip Off The Ol’ Blockchain – Part 3: How to Setup a CI/CD Pipeline with VSTS and Truffle – Blaize Stewart outlines how to use VSTS to create a simple... Read More

Reflections on the ROpenSci Unconference

I had an amazing time this week participating in the 2018 ROpenSci Unconference, the sixth annual ROpenSci hackathon bringing together people to advance the tools and community for scientific computing with R. It was so inspiring to be among such a talented and dedicated group of people — special kudos goes to the organizing committee for curating such a great crowd. (I heard there were over 200 nominations from which the 65 or so attendees were selected.)

The idea behind the unconference is to spend two full days hacking on projects of interest to the community. Before the conference begins, the participants suggest projects as Github issues and begin discussions there. On the first day of the conference (after an icebreaker), the participants vote for projects they'd be interested in working on, and then form up into groups of 2-6 people or so to work on them. And then everyone gets to work! You can get a sense of the activity by looking at the #runconf18 hashtag on Twitter (and especially the photos).

I joined the "Tensorflow Probability for R" team, where we worked mainly with the greta package, which uses Tensorflow Probability to implement a Markov chain Monte Carlo system to find solutions to complex statistical models. I hadn't used greta before, so I focused on trying out some simple examples to understand how greta works. In the process I learned that greta is a really powerful package: it solves many of the same problems as stan, but with a really elegant R interface that generalizes beyond Bayesian models. (Over the next couple of weeks I'll elaborate the examples for the greta package and write a more detailed blog post.)

At the end of the second day, everyone gets together to "report out" on their projects: a three minute presentation to review the progress from the hackathon. You can browse the list of projects here: follow the links to the Github repositories and check out the Readme.md files for details on each project.

A sincere thank you to all participants in #runconf18

This thread👇includes links to all project repos: https://t.co/2PhAz4zSuK#rstats pic.twitter.com/8SICcWkQ0v

— rOpenSci (@rOpenSci) May 25, 2018

On a personal note, it was also a great joy that my team could sponsor the unconference and provide the venue for this year's event. The Microsoft Reactor in Seattle turned out to be a great place to hold a hackathon like this, with plenty of space for everyone to form into small teams and find a comfortable place to work. This kind of event is exactly why we are opening Reactor spaces around the world, as spaces for the community to meet, gather and work, and it was great to see that vision realized in such a great way this week. 

Thanks once again to all of the participants and especially the organizers (with a special shout-out to Stefanie Butland) for such a wonderful event. I can't wait for the next one!

Internet-Scale Deep Learning for Bing Image Search

At Bing we are continuously innovating to provide our users with ever better, more accurate, and more beautiful results. We routinely embrace modern advances in multiple domains such as natural language processing, computer vision and AI to provide the best experience at internet scale across billions of queries and images. Recently we talked about how semantic vectors helped improve Bing Web Search. Today we want to share how end-to-end deployment of semantic, vector-based deep learning techniques in our image search stack makes Bing even more intelligent, and how it enables us to find more satisfying results to complex image search queries.

Today when a user issues a query such as 'pencil erasers that look like tools,’ Bing doesn't just retrieve images from pages containing that literal text. We also map the query into semantic space and then search for matches in an index of images that have also been mapped into that same semantic space. Deep learning is used to discover the semantic representation of queries and images such that related items are co-located in semantic space.  This approach allows Bing to provide high quality results even from pages where no such query terms are present.  We do this at internet scale, searching billions of images in tens of milliseconds for every image search query issued on Bing.

Looking inside an image

In image search a picture is really ‘worth a thousand words’ because describing a picture with natural language can be done in 'thousands’ of ways.  For example, the image below could be described as ‘man swimming along with a sperm whale,’ or ‘wildlife in the ocean.’  In Bing image search, we need to understand that both of these queries can be well answered by this image.  Capturing the semantic association between text and images is not straightforward, and this is where deep learning comes into play. 


Deep learning computer vision techniques let us look inside an image, understand its semantic meaning and represent it as a vector through a process called image embedding. This vector needs to simultaneously represent that the image above is about a whale, contains a man, has waves at the top, has a strong blue background, etc. That’s a lot of information to squeeze into one vector! Similarly, deep learning natural language processing techniques let us read the words of a query and represent it as a vector through a process called text embedding. A model is first trained using deep learning techniques to map (embed) a query and an image to a high dimensional vector space in such a way that the corresponding vectors are similar if the image is semantically relevant to the query, and further apart otherwise.
So we have some vectors of queries and images, but how does that let us search billions of images in tens of milliseconds to satisfy a Bing Image Search query? 

Deep image ranking

The goal of Bing Image Search is to retrieve the most relevant images for a given text query.  A greatly simplified description of this large-scale search operation consists of three major stages:

  1. A matching stage - to select candidate images from a huge index of images

  2. Multiple ranking stages - to use computationally expensive methods to score each candidate image independently and rank all images

  3. Multiple set ranking stages - to re-rank previous candidates lists considering information from the entire candidate set, not just independent images

In Deep Image Ranking we leverage image and text embedding vectors at each stage to allow us to better capture semantic intent of queries and images.  The graph below illustrates the overall infrastructure of our deep image ranking work, and we will introduce each component in detail.

Deep learning in matching

The matching stage selects a candidate set of images from billions of images to be subsequently processed by the ranking stages. During matching, it is essential to obtain a result set with high recall and moderate precision. To achieve this, query, image and page text are embedded by a deep learning model into embedding vectors. We then use an efficient vector search algorithm to scan a billions-scale index to find the N-best document vectors for a query vector. We have experimented with two approaches: (i) an optimized version of the classical Approximate Nearest Neighbor (ANN) search, and (ii) a quantized matching approach where dense vectors are first quantized into sparse vectors which allows us to execute vector search considerably faster but with a trade-off in precision. After evaluating several approaches balancing different levels of precision and performance, and various combinations of query, image and page text embedding vectors, we picked the one providing optimal combination of speed and quality. Let us look at an example in the diagram below: the left part shows that even though the page text ‘golden butterfly hd wallpapers’ is not exactly the same as the query ‘gold butterflies hd background’, we are able to retrieve the relevant image based on query embedding, page text embedding and image embedding, which are generated from the deep learning model as shown in the right diagram.

 

Deep learning in ranking

The next step is to rank all the candidate images according to their relevance to the query. The set of images we consider here is much smaller than in the previous step and therefore we can be more liberal with computations. Here we reuse the previously computed embedding vectors but do more exact calculations of semantic distance between the vectors. Using semantic matching of vectors is particularly invaluable at this stage when the text on the page hosting the image is not representative of the image, as we are able to directly ‘look at the image’ and consider its relevance to the query. For example, in the diagram below, the page contains the text ‘how to maintain a healthy and happy parrot’ however we are still able to recognize that the image matches the query ‘a parrot rides a tricycle’ because we are looking inside the image using image embedding.
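To make the idea concrete, here is a minimal, illustrative sketch (not Bing's production code) of scoring candidates by cosine similarity between a query embedding and each image embedding; the candidate shape is assumed:

function cosineSimilarity(a, b) {
    var dot = 0, normA = 0, normB = 0;
    for (var i = 0; i < a.length; i++) {
        dot += a[i] * b[i];
        normA += a[i] * a[i];
        normB += b[i] * b[i];
    }
    return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank candidates (each assumed to carry an embedding vector) by similarity to the query
function rankCandidates(queryVector, candidates) {
    return candidates
        .map(function (c) { return { image: c.image, score: cosineSimilarity(queryVector, c.vector) }; })
        .sort(function (x, y) { return y.score - x.score; });
}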

  

Deep learning in set ranking

The set ranking stages aim to further optimize the ranking order of a list of images for a given query by considering information from all images in the candidate list. For example, for the query ‘cat in a box,’ if 95 out of 100 images are relevant, while the remaining 5 images are of a ‘dog in a box,’ we want to consider the consistent information of the 95 good images and demote the 5 bad images. 

Once again, we can use deep learning here to embed the image and page text into vectors and then compute pair-wise distances between all images in the candidate list. We further compute higher-order features from these pairwise distances and use them to re-rank the candidate list.  For example, the diagram below shows one such approach where we consider the distance of each image from the cluster centroids estimated from the vectors of the top-N images. With these distances available we can now easily identify the few ‘dog in a box’ images as inconsistent with the majority of the good results and eliminate them.
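As an illustrative sketch of the centroid idea (again, not Bing's production code): average the vectors of the top-N results, then measure each candidate's distance from that centroid so outliers like the ‘dog in a box’ images can be demoted:

// Component-wise mean of a set of equal-length vectors
function centroid(vectors) {
    var c = new Array(vectors[0].length).fill(0);
    vectors.forEach(function (v) {
        for (var i = 0; i < v.length; i++) c[i] += v[i] / vectors.length;
    });
    return c;
}

function euclideanDistance(a, b) {
    var sum = 0;
    for (var i = 0; i < a.length; i++) sum += (a[i] - b[i]) * (a[i] - b[i]);
    return Math.sqrt(sum);
}

// Candidates far from the centroid of the top-N vectors are likely inconsistent
function outlierScores(topNVectors, candidates) {
    var center = centroid(topNVectors);
    return candidates.map(function (c) { return euclideanDistance(c.vector, center); });
}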


So, what is the end result?

Putting this all together, we now have an image ranking system that considers search in semantic space and allows more complex queries to be intelligently understood. Here are examples of some of the large improvements we have seen in image search quality:







Deep image ranking has improved Bing results across the board, but where it stands out is when there is insufficient text on the page, and where semantic understanding of the query is key to providing optimal image search results. However, there is no silver bullet, and there are still many complex queries that we are trying to improve. We continue to look for more ways to train our deep image ranking to provide great results to the hardest queries using the most elegant and scalable solutions. 

If you have any feedback, let us know using Bing Listens or simply click on ‘Feedback’ on Bing Image Search!

- Bing Image Search Relevance Team

 

Because it’s Friday: Bad road


Sometimes I think the potholes in the roads in Chicago are bad, but then a road like this puts things into perspective:

(Thanks to TH for the link.) Don't miss the shots looking back near the end to see how many people are in that vehicle!

That's all from the blog for this week. Have a great weekend, and we'll be back after the US holiday on Monday. Enjoy!

The year of Linux on the (Windows) Desktop – WSL Tips and Tricks


I've been doing a ton of work in bash/zsh/fish lately - Linuxing. In case you didn't know, Windows 10 can run Linux now. Sure, you can run Linux in a VM, but it's heavy and you need a decent machine. You can run a shell under Docker, but you'll need Hyper-V and Windows 10 Pro. You can even go to https://shell.azure.com and get a terminal anywhere - I do this on my Chromebook.

But mostly I run Linux natively on Windows 10. You can too. Just open PowerShell once as Administrator, run this command, and reboot:

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux

Then head over to the Windows Store and download Ubuntu, or Debian, or Kali, or whatever.

What's happening is you're running user-mode Linux without the Linux Kernel. The syscalls (system calls) that these unmodified Linuxes use are brokered over to Windows. Fork a Linux process? It's a pico-process in Windows and shows up in the Task Manager.

Want files you can edit from both Windows and Linux? Keep your files/code under /mnt/c/ and you can edit them from either OS. Don't use Windows to "reach into the Linux file system." There be dragons.


Once you've got a Linux installed (or many, as I do) you can manage them and use them in a number of ways.

Think this is stupid or foolish? Stop reading and keep running Linux and I wish you all the best. More power to you.

Want to know more? Want to look at new and creative ways you can get the BEST of the Windows UI and Linux command-line tools? Read on, friends.

wslconfig

WSL means "Windows Subsystem for Linux." Starting with Windows 10 version 1709 (that's 2017-09, the Fall Creators Update - run "winver" to see what you're running), you've got a command called "wslconfig." Try it out. It lists the distros you have and controls which one starts when you type "bash."

Check out below that my default for "bash"  is Ubuntu 16.04, but I can run 18.04 manually if I like. See how I move from cmd into bash and exit out, then go back in, seamlessly. Again, no VM.

C:\>wslconfig /l /all

Windows Subsystem for Linux Distributions:
Ubuntu (Default)
Ubuntu-18.04
openSUSE-42
Debian
kali-rolling

C:\>wslconfig /l
Windows Subsystem for Linux Distributions:
Ubuntu (Default)
Ubuntu-18.04
openSUSE-42
Debian
kali-rolling

C:\>bash
128 → $ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.4 LTS
Release: 16.04
Codename: xenial
128 → $ exit
logout

C:\>ubuntu1804
scott@SONOFHEXPOWER:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04 LTS
Release: 18.04
Codename: bionic
scott@SONOFHEXPOWER:~$

You can also pipe things into Linux commands by piping to wsl or bash like this:

C:\Users\scott\Desktop>dir | wsl grep "poop"

05/18/2018 04:23 PM <DIR> poop

If you're in Windows, running cmd.exe or powershell.exe, it's best to move into Linux by running wsl or bash as it keeps the current directory.

C:\Users\scott\Desktop>bash

129 → $ pwd
/mnt/c/Users/scott/Desktop
129 → $ exit
logout

Cool! Wondering what that number is before my Prompt? That's my blood sugar. But that's another blog post.

wsl.conf

There's a file at /etc/wsl.conf that lets you control things like whether your Linux of choice automounts your Windows drives. You can also control more advanced things like whether Windows autogenerates a hosts file or processes /etc/fstab. It's up to you!
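
Here's a minimal example of what that file can look like. The values shown are the documented defaults, so treat this as a starting point rather than something you must copy:

# /etc/wsl.conf
[automount]
enabled = true        # mount Windows fixed drives under /mnt
root = /mnt/          # where those drives get mounted
mountFsTab = true     # process /etc/fstab at launch

[network]
generateHosts = true        # let Windows autogenerate /etc/hosts
generateResolvConf = true   # let Windows autogenerate /etc/resolv.conf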

Distros

There's a half-dozen distros available (and more coming, I'm told), but YOU can also make/package your own Linux distribution for WSL with the packager/distro-launcher that's open-sourced on GitHub.

Docker and WSL

Everyone wants to know if you can run Docker "natively" on WSL. No, that's a little too "Inception," and as mentioned, the Linux Kernel is not present. The unmodified ELF binaries work fine but Windows does the work. BUT!

You can run Docker for Windows and click "Expose daemon on localhost:2375" and since Windows and WSL/Linux share the same port space, you CAN run the Docker client very happily on WSL.

After you've got Docker for Windows running in the background, install it in Ubuntu following the regular instructions. Then update your .bashrc to force your local docker client to talk to Docker for Windows:

echo "export DOCKER_HOST=tcp://0.0.0.0:2375" >> ~/.bashrc && source ~/.bashrc

There's lots of much longer and more detailed "Docker on WSL" tutorials, so if you'd like more technical detail, I'd encourage you to check them out! If you use a lot of Volume Mounts, I found Nick's write-up very useful.

Now when I run "docker images" or whatever from WSL I'm talking to Docker for Windows. Works great, exactly as you'd expect and you're sharing images and containers in both worlds.

128 → $ docker images

REPOSITORY                 TAG                      IMAGE ID       CREATED         SIZE
podcast                    test                     1bd29d0223da   9 days ago      2.07GB
podcast                    latest                   e9dd366f0375   9 days ago      271MB
microsoft/dotnet-samples   aspnetapp                80a65a6b6f95   11 days ago     258MB
microsoft/dotnet-samples   dotnetapp                b3d7f438bad3   2 weeks ago     180MB
microsoft/dotnet           2.1-sdk                  1f63052e44c2   2 weeks ago     1.72GB
microsoft/dotnet           2.1-aspnetcore-runtime   083ca6a642ea   2 weeks ago     255MB
microsoft/dotnet           2.1-runtime              6d25f57ea9d6   2 weeks ago     180MB
microsoft/powershell       latest                   708fb186511e   2 weeks ago     318MB
microsoft/azure-cli        latest                   92bbcaff2f87   3 weeks ago     423MB
debian                     jessie                   4eb8376dc2a3   4 weeks ago     127MB
microsoft/dotnet-samples   latest                   4070d1d1e7bb   5 weeks ago     219MB
docker4w/nsenter-dockerd   latest                   cae870735e91   7 months ago    187kB
glennc/fancypants          latest                   e1c29c74e891   20 months ago   291MB

Fabulous.

Coding and Editing Files

I need to hit this point again. Do not change Linux files using Windows apps and tools. However, you CAN share files and edit them with both Windows and Linux by keeping code on the Windows filesystem.

For example, my work is at c:\github so it's also at /mnt/c/github. I use Visual Studio Code and edit my code there (or vim, from within WSL) and I run the code from Linux. I can even run bash/wsl from within Visual Studio Code using its integrated terminal. Just hit "Ctrl+Shift+P" in Visual Studio Code and type "Select Default Shell."

Select Default Shell in Visual Studio Code

On Windows 10 Insiders edition, Windows now has a UI called "Sets" that will give you Tabbed Command Prompts. Here I am installing Ruby on Rails in Ubuntu next to two other prompts - Cmd and PowerShell. This is all default Windows - no add-ons or extra programs for this experience.

Tabbed Command Prompts

I'm using Rails as an example here because Ruby/Rails support on Windows with native extensions has historically been a challenge. There's been a group of people heroically (and thanklessly) trying to get Ruby on Rails working well on Windows, but today there is no need. It runs great on Linux under Windows.

I can also run Windows apps or tools from Linux as long as I use their full name with extension (like code.exe) or set an alias.

Here I've made an alias "code" that runs code in the current directory, then I've got VS Code running editing my new Rails app.

Editing a Rails app on Linux on Windows 10 with VS Code

I can even mix and match Windows and Linux when piping. This will likely make Windows people happy and deeply offend Linux people. Or, if you're non-denominational like me, you'll dig it!

$ ipconfig.exe | grep IPv4 | cut -d: -f2

172.21.240.1
10.159.21.24

Again, a reminder: modifying files that live outside /mnt/<x> (i.e., inside the Linux file system) with a Windows application is not supported. But edit stuff on /mnt/x with whatever and you're cool.

Sharing Sharing Sharing

If you have Windows 10 Build 17064 or newer (run ver from Windows, or "cmd.exe /c ver" from Linux), you can even share environment variables!

131 → $ cmd.exe /c ver


Microsoft Windows [Version 10.0.17672.1000]

There's a special environment variable called "WSLENV" that is a colon-delimited list of environment variables that should be included when launching WSL processes from Win32 or Win32 processes from WSL. Basically you give it a list of variables you want to roam/share. This will make it easy for things like cross-platform dual builds. You can even add a /p flag and it'll automatically translate paths between c:\windows style and /mnt/c/windows style.

Check out the example at the WSL Blog about how to share a GOPATH and use VSCode in Windows and run Go in both places.

You can also use a special built-in command-line tool called "wslpath" to translate path names between Windows and WSL. This is useful if you're sharing bash scripts, doing cross-platform scripts (I have PowerShell Core scripts that run in both places), or just need to programmatically switch path types.

131 → $ wslpath "d:\github\hanselminutes-core"

/mnt/d/github/hanselminutes-core
131 → $ wslpath "c:\Users\scott\Desktop"
/mnt/c/Users/scott/Desktop

There is no man page for wslpath yet, but copied from this GitHub issue, here's the gist:

wslpath usage:

-a force result to absolute path format
-u translate from a Windows path to a WSL path (default)
-w translate from a WSL path to a Windows path
-m translate from a WSL path to a Windows path, with ‘/’ instead of ‘\’

One final note: once you've installed a Linux distro from the Windows Store, it's on you to keep it up to date. The Windows Store won't run "apt upgrade" or ever touch your Linuxes once they have been installed. Additionally, you can have Ubuntu 16.04 and 18.04 installed side-by-side and it won't hurt anything.

Related Links

Are you using WSL?


Sponsor: Check out JetBrains Rider: a cross-platform .NET IDE. Edit, refactor, test and debug ASP.NET, .NET Framework, .NET Core, Xamarin or Unity applications. Learn more and download a 30-day trial!



© 2018 Scott Hanselman. All rights reserved.

Azure.Source – Volume 33


Microsoft continues to be a leader in Gartner's Cloud IaaS MQ - For the fifth consecutive year, Microsoft is recognized as a leader in Gartner's IaaS Magic Quadrant. Check out this post for the details and a link to not only this report, but a long list of other Gartner MQs that place Microsoft in the leader's quadrant.

Now in preview

Create enterprise subscription experience in Azure portal public preview - Learn about the public preview of a fully-integrated create subscription experience in the Azure portal for Azure Enterprise Agreement (EA) subscriptions. Previously, EA subscriptions were created in a separate EA portal.

Azure AD Authentication for Azure Storage now in public preview - This post announces the preview of Azure AD Authentication for Azure Blobs and Queues, which is one of the features most requested by enterprise customers looking to simplify how they control access to their data as part of their security or compliance needs. It is available in all public regions of Azure.


Azure AD Authentication for Azure Storage

Now generally available

New capabilities to enable robust GDPR compliance - If you haven't checked your inbox lately, May 25th marked the beginning of enforcement of the EU General Data Protection Regulation (GDPR). To help you support compliance, new offerings include the general availability of the Azure GDPR Data Subject Request (DSR) portal, Azure Policy, Compliance Manager for GDPR, Data Log Export, and the Azure Security and Compliance Blueprint for GDPR. This post includes links to multiple resources to learn more about what's available, including a link to an on-demand webcast from Microsoft President, Brad Smith.

Also generally available

News and updates

An update on the integration of Avere Systems into the Azure family - Get an update on Microsoft's acquisition of Avere Systems, which closed earlier this year. This post also includes answers to some of the most common questions we've heard from customers. Avere brought an innovative combination of file system and caching technologies to support the performance requirements for customers who run large-scale compute workloads, whether it’s building animations and special effects for the next blockbuster movie or discovering new treatments for life-threatening diseases.

Changes coming to PowerShell (preview) in Azure Cloud Shell - Thanks to the general availability of PowerShell Core 6 earlier this year, the PowerShell experience in Cloud Shell is switching to a Linux container running PowerShell Core 6. This experience will replace the Windows-based PowerShell experience. Check out this post for all of the details.

New updates for Microsoft Azure Storage Explorer - Storage Explorer is a great tool for managing contents of your Azure storage account with a client-side tool from Windows, macOS, and Linux. After the recent general availability for Storage Explorer, we also added new features in the latest 1.1 release to align with Azure Storage platform and accessibility support. Read this to learn what else is new with this update to the tool.


Microsoft Azure Storage Explorer

Load confidently with SQL Data Warehouse PolyBase Rejected Row Location - In Azure SQL Data Warehouse, the Create External Table definition has been extended to include a Rejected_Row_Location parameter to let you know which rows failed to load and why. PolyBase is a technology that accesses data outside of the database via T-SQL. PolyBase creates a directory on the External Data Source at the Rejected_Row_Location if one doesn’t exist.

Additional news and updates

Azure Friday

Azure Friday | Episode 435 - How I choose which services to use in Azure - Azure MVP Barry Luijbregts chats with Scott Hanselman about how he goes about choosing the right services in Azure to run his applications and store his data.

Technical content and training

Azure IoT Reference Architecture update - Earlier this month, we released an update to the Azure IoT Reference Architecture Guide, which provides proven production-ready architecture, with proven technology implementation choices, and links to Solution Accelerator reference architecture implementations. This post provides an overview of the changes made, and of course includes a link to the latest version of the guide: https://aka.ms/iotrefarchitecture.

Blue-Green deployments using Azure Traffic Manager - One key customer use case for Traffic Manager is to make their software deployments smoother with minimal impact to their users by implementing a Blue-Green deployment process using Traffic Manager’s weighted round-robin routing method. Blue-Green deployment is a software rollout method that can reduce the impact of interruptions caused due to issues in the new version being deployed. This blog shows you how to implement Blue-Green deployment using Traffic Manager.

Control Azure Data Lake costs using Log Analytics to create service alerts - In this post, you will learn how to use Log Analytics with your Data Lake accounts to create alerts that can notify you of Data Lake activity events and when certain usage thresholds are reached.

Serverless real-time notifications in Azure using Azure #CosmosDB - In this post, you will learn about a sample solution that creates a complete serverless scenario for a chat application that stores data in Azure Cosmos DB, uses Azure Functions for hosting & event processing, and uses Azure SignalR for websocket client messaging.

The Azure Podcast

The Azure Podcast: Episode 230 - Moving apps to Azure - More goodness from BUILD 2018 - this time a great discussion with Paul Yuknewicz and Cesar De La Torre on moving apps to Azure

Events

Devs imagine, create, and code the future at Microsoft Build - Check out some of the key takeaways we heard from this year’s attendees.

Explore Build 2018 content with playlists - If you couldn’t attend Microsoft Build 2018, now is your opportunity to do so online with Microsoft Build Live 2018. Learn about the cloud, AI, IoT, and much more using playlists to guide you select content. This post covers eight of the available playlists.

Customers and partners

M-Series certified for SAP HANA and available worldwide - We recently announced SAP HANA certification for M-series VMs, enabling you to run production SAP HANA workloads with a pay-as-you-go, per second VM billing. M-series VMs are now generally available in four more regions: Australia East, Australia Southeast, Japan East, Japan West. This adds to availability in US East 2, US West 2, West Europe, UK South, Southeast Asia and US Gov Virginia. Twelve additional regions are slated to roll out M-series VMs later this year.

Accelerate your SAP on Azure HANA project with SUSE Microsoft Solution Templates - Following the launch of M-SERIES/SAP HANA certified Virtual Machines (VMs) on Azure, we are happy to announce a new marketplace offering in partnership with SUSE. The offering leverages Azure Resource Manager Templates to automate the rapid provisioning of an SAP HANA infrastructure in Azure with SUSE technology. This post covers the templates available.

Do more with Chef and Microsoft Azure - Last week at ChefConf, the Chef and Azure teams announced the inclusion of Chef InSpec, directly in Azure Cloud Shell, as well as the new Chef Developer Hub in Azure Docs. Earlier this month at Microsoft Build, Chef announced new integrations for Habitat, with our fully-managed container registry and Kubernetes services, Azure Container Registry (ACR) and Azure Kubernetes Services (AKS).


Publish to ACR directly from the Habitat Builder build service

Accelerate data warehouse modernization with Informatica Intelligent Cloud Services for Azure - Last week at Informatica World, Microsoft and Informatica announced the availability of Informatica Intelligent Cloud Services (IICS) for Azure. Microsoft has partnered with Informatica, a leader in Enterprise Data Management, to help our customers accelerate data warehouse modernization. This service is available as a free preview on Azure today. Informatica provides a discovery-driven approach to data warehouse migration, which simplifies the process of identifying and moving data into Azure SQL Data Warehouse.

Transact capabilities for SaaS apps now available in Azure Marketplace - Azure Marketplace has long-offered SaaS apps for discovery and trial. At Build, we announced that SaaS apps can now be transacted within Azure Marketplace. ISVs building and selling SaaS applications built for Azure can now not only list or offer trials, but also monetize their SaaS applications directly with customers.

New container images in Azure Marketplace - At Build, we announced a new category of offer in Azure Marketplace - container images. Azure customers can now discover and acquire secure and certified container images in Azure marketplace to build a container-based solution. All images available in Azure marketplace are certified and validated against container runtimes in Azure like managed Azure Kubernetes Service (AKS). We partnered with Bitnami and Couchbase as our initial launch partners, but you can also onboard to Azure Marketplace as a publisher.

Azure tips & tricks

Adding Extensions to Web Apps in Azure App Service

Deployment Slots for Web Apps using Azure App Service

Developer spotlight

Build 2018 Playlist: Learn to build cloud-native apps - When you build a new application, you want to focus on the things that matter: your logic, your added value. You don't want to focus on load balancers, operating systems and virtual networks. When you create a new application with Microsoft Azure, it handles the plumbing for you. "Born in the cloud" apps can take advantage of everything that Azure has to offer. Make your logic "serverless" and only pay for it when it runs. Distribute your data all over the world with a click of a button to make your app performant for all users.

Build 2018 Playlist: Add intelligence to apps using ML and AI - There is tremendous value in your data that you can use to enhance your applications and your business. Access that value by ingesting, arranging, and analyzing the data. Microsoft offers powerful tools for preparing and analyzing data, regardless of where that data lives. Use the tools to create your own machine learning algorithms and expose the resulting models to apps. You can even enhance offline mobile apps by consuming AI as a service with Azure Cognitive Services.

Build 2018 Playlist: Deploy and manage apps at scale - No matter what applications you are building, you need to be able to deploy them fast and without errors to get new features to your users. To do that efficiently, you need to automate as much as you can. Microsoft Azure has incredible tools and guidance that enable you to implement Continuous Integration (CI) and Continuous Delivery (CD) for all application types with minimum effort.

Auto-Away Assist for NEST Thermostat - Let Nest know when you are in another room to assist Auto-Away using motion sensors, Particle.io, and Azure!

Azure IoT Workbench - The IoT Workbench extension makes it easy to code, build, deploy and debug your IoT project in Visual Studio Code, with a rich set of functionalities.

Azure IoT and Serverless Button Sample - This sample will walk you through building an IoT application to post a tweet to Twitter.

Internet of Things Show

Azure Maps intro for developers - Azure Maps is now publicly available. Previously known as Azure Location Based Services during its preview period, the service not only got a new name but also lots of great new features. Ricky Brundritt, PM in the Azure Maps team, walks Olivier through what's new and interesting for developers in using Azure Maps to build IoT applications with geospatial services APIs that integrate seamlessly with other Azure tools and services.

IoT White Boarded for the Top Floor - What is the Internet of Things and what is Microsoft doing about it? These are high level questions we have not addressed thoroughly yet on the IoT Show. Tom Davis, PM lead in the IoT Business Acceleration team, joins us for a whiteboard session to literally draw us the big picture.

The best of AppSource & Azure Marketplace at Build 2018


Did you miss the Build conference this month? We made some big announcements about our marketplace, including new features, new functionality, and new services. You can get a quick overview of these announcements in the "Azure Marketplace is reaching new audiences" blog post.

Got a little more time to spare and want to go deeper? Check out some of the key marketplace sessions and content on-demand.

Check out the full catalog of recorded sessions and content from Build including keynote sessions.

Native Ads on the Microsoft Ad Monetization Platform


We at Microsoft are committed to helping developers maximize their monetization through ads. The Microsoft ad mediation service and the Microsoft Advertising SDK are the two key components of Microsoft's new ad monetization platform, which dynamically optimizes ad partner configurations to drive the highest yield for developers and deliver innovative ad experiences for consumers.

Native Ads is a component-based ad format that gives publishers the flexibility of placing the individual components of an ad – title, image, logo, description, call-to-action text – to best fit the look and feel of the rest of the app. This enables developers to use their own fonts, colors, and animations to stitch unobtrusive user experiences into their app while earning high yield from advertising. For advertisers, this provides high-performing placements, since the ad experience is tightly built into the app and users tend to interact much more with such sponsored content. Click-through rates (CTR) thus tend to be higher with Native Ads, which results in better monetization compared to traditional ad experiences such as banner ads.

Last year, we announced an invitation-only pilot for Native Ads support. At Microsoft Build 2018, we were happy to announce that this capability is now generally available to all developers.

What’s new in Native Ads?

We have worked on enhancing the stability and completeness of the Native Ads experience since we announced the pilot. We have added several Native Ad partners, such as AppNexus, Microsoft app install ads, and Revcontent, to serve on the Native Ad inventory. We are actively working to bring additional demand partners, such as MSN Content Recommendations and Taboola, on board in the coming weeks.

How do I get started?

You need the latest version of the Microsoft Advertising SDK to get started with including Native Ads in your UWP applications. If you haven’t played around with the Microsoft Advertising SDK before this, please take a look at the Get Started guide. Also, please check out our guide and sample code on MSDN to quickly incorporate Native Ads into your application.

Examples of Native Ad integration:

PicsArt, a participant in our Native Ads preview program, was able to integrate Native Ads into their ‘Photo Studio’ application easily and provide a compelling and immersive Ad experience to their users.

 “Implementation of Native ads was a smooth and seamless experience. The creation of an ad unit to its implementation was simple thanks to Microsoft’s comprehensive documentation.” – PicsArt 

Example of a Native Ad in the PicsArt application ‘PicsArt Photo Studio’

Figure 1 Example of a Native Ad in the PicsArt application ‘PicsArt Photo Studio’ 

Good2Create, another participant in the pilot, was able to stitch Native Ads that blend beautifully into their application ‘Wallpaper Studio.’

Example of a Native Ad in the Good2Create application ‘Wallpaper Studio 10'

Figure 2 Example of a Native Ad in the Good2Create application ‘Wallpaper Studio 10’    

What’s next in Native Ads?

We will continue to build on the Native Ads story by adding more Native Ad partners and offering enhanced creative experiences to help developers monetize better using Native Ads. Please reach us at aiacare@microsoft.com with your questions and comments.


Guest Post: Bing Maps Advantages Lead to Development and Growth of RouteSavvy Route Planning Software


When US-based OnTerra Systems first set out to develop new route planning software for small to mid-sized fleets back in 2010, a significant array of Bing Maps advantages led OnTerra Systems management to use Bing Maps as the platform for RouteSavvy.

OnTerra Systems was already a long-time Microsoft partner, but the array of Bing Maps advantages made it an even easier business decision to use Bing Maps as the base for RouteSavvy, according to Steve Milroy, president and founder of OnTerra Systems.

“We chose Bing Maps as the foundation for RouteSavvy route planning software for many reasons,” said Mr. Milroy. “For starters, Bing Maps has a really strong focus on enterprise and business mapping applications,” he said. “Second, the licensing is cost-effective, which allows us to charge a lower price point for RouteSavvy and that has helped create amazing ROI on the product.”

The fact that Bing Maps continues to invest in, and refine its technology was another key factor, according to Mr. Milroy. “The new fleet management APIs that Bing introduced just in the past couple of months illustrates their commitment to investing in and advancing their mapping technology,” he said. “For example, there are powerful functions built into Bing’s new fleet management APIs that are way ahead of other mapping vendors.”

As a result of the Bing Maps technology layered into RouteSavvy, it has become one of the fastest-growing route planning software solutions on the market today.

The cost-effective pricing model for Bing Maps allows OnTerra Systems to price RouteSavvy at just $300 a year. With RouteSavvy as their route planner, businesses and organizations with small to mid-sized fleets input the addresses for the day's service calls, deliveries, or pick-ups, and RouteSavvy then calculates the most efficient route.

The main benefits of RouteSavvy route planning software are:

  • Reduced miles driven = reduced fuel costs
  • Improved productivity = handle more calls, deliveries, or pick-ups in a day
  • Ability to handle more calls in a normal workday = reduced overtime labor costs

“All these savings add up and go straight to the bottom line,” added Mr. Milroy.

RouteSavvy utilizes many of the Bing Maps APIs, including the maps and imagery, streetside, address geocoding, routing and new fleet management features.

RouteSavvy

Return ROI By Huge Margins

The fact that RouteSavvy is based on Bing Maps, with its affordable licensing model, is a major reason why this route planning software delivers tremendous return on investment. Here are some examples of how Bing’s affordable licensing has helped RouteSavvy offer tremendous ROI to its customers:

“Bing Maps continues to be a great platform for our RouteSavvy route planning software,” explained Steve Milroy. “Thanks to Bing Maps, RouteSavvy provides ROI that’s generated in days and weeks, not years.”

To learn more about the Bing Maps Fleet Management APIs, go to https://www.microsoft.com/en-us/maps/fleet-management.

- Tillman Saylor (Sales Manager, OnTerra Systems)

About OnTerra Systems & RouteSavvy:
OnTerra Systems is a long-time Microsoft partner that uses Bing Maps as the basis for many of its product offerings. RouteSavvy is a powerful, affordable route planning software tool designed for small to mid-sized fleets (up to 100 vehicles) in the business of service calls, deliveries, or pick-ups. For more information, visit: http://www.RouteSavvy.com.

About the Author:
Tillman Saylor is the Sales Manager of USA-based OnTerra Systems, which offers RouteSavvy.com route planning software, MapSavvy.com Web Mapping Service & Bing Maps licensing. He can be reached at: tillmans@onterrasystems.com.

Announcing the May 2018 Git Security Vulnerability

The Git community has disclosed an industry-wide security vulnerability in Git that can lead to arbitrary code execution when a user operates in a malicious repository. This vulnerability has been assigned CVE-2018-11234 and CVE-2018-11235 by Mitre, the organization that assigns unique numbers to track security vulnerabilities in software. Git 2.17.1 and Git for... Read More

