
Accelerating healthcare AI startups in the cloud


Frost & Sullivan has estimated the Artificial Intelligence (AI) market for healthcare IT in hospitals at $574 million USD in 2018, and expects it to grow at a CAGR of 65 percent to $4.3 billion USD by 2022. Similar explosive growth is predicted across other segments of healthcare. Machine Learning (ML) is a type of AI that has already seen successful and rapidly growing application in healthcare, and it shows major untapped potential to further improve healthcare going forward. Key use cases range from resource and asset optimization to readmission prevention, chatbots, anti-fraud, behavioral analytics, medical risk analytics, claims analytics, cybersecurity, and many more. The business value driving healthcare organizations to deploy AI solutions across these use cases spans cost reduction, improving patient outcomes, and improving the engagement and experiences of patients and healthcare professionals. Major opportunities for startups range from creating AI products and solutions for specific use cases and healthcare needs to services for education, customizing AI solutions, integrating them with existing enterprise systems and data stores, and managing and operating solutions. See the AI in Healthcare Guide for more information on these use cases and opportunities.

Below I review key goals of any healthcare AI startup, and how Microsoft Azure empowers startups to both meet their goals and maximize the benefits of AI to the healthcare organizations they serve.


Run lean

With finite funding, all startups look to run lean, maximizing their runway (how long the startup can survive if income and expenses stay constant) and their ability to achieve success. The major way to run lean is by avoiding the capital expense of data center equipment and expensive IT and cybersecurity staff (Azure has numerous built-in security measures; see Azure Security). Microsoft Azure enables you to acquire just the amount of data center capability you require and treat it as an operating expense rather than a capital expense. You can also focus your limited resources on creating your AI solution rather than building, securing, and managing low-level data center infrastructure. Microsoft Azure lets you start free, and supports a wide range of open source technologies, from tools such as Node.js to operating systems including Ubuntu, Debian, SUSE, and more.

Start quickly

With any compelling idea there is a finite window of opportunity, and time is of the essence for any startup, especially in AI, where innovation is moving at a brisk and accelerating pace. With Microsoft Azure you can deploy web apps and Virtual Machines (VMs) in seconds. Furthermore, rather than starting your cloud from a blank slate, you can bootstrap your solution in Microsoft Azure with blueprints that include example code, test data, automated deployment, documentation, and security and compliance support. This approach gives you a working reference solution in your Microsoft Azure cloud and takes you 50 to 90 percent of the way toward your end solution. From that beginning, you can get started on your pilot as soon as possible and rapidly close the remaining gap. See the AI in healthcare blueprint with HIPAA and HITRUST security and compliance support for further details.

Stay agile

Startups move quickly, learn fast, and must be able to pivot rapidly to keep competitive. Microsoft Azure provides agility for your AI solution, enabling you to make changes and scale with point-and-click efficiency using your secure, powerful web-based dashboard for managing your cloud environment.

Grow fast, go big

Microsoft Azure provides support for startups at every stage of their growth, from technology enablement and exploration to business growth through accelerators, venture funding, and connections with customers. With Azure you can also scale worldwide to 34 regions globally, supported by Azure’s industry-leading legal and regulatory compliance framework. For more on Microsoft’s commitment to startups, see the announcement Grow, build, and connect with Microsoft for Startups:

“We are committing $500 million over the next two years to offer joint sales engagements with startups, along with access to our technology, and new community spaces that promote collaboration across local and global ecosystems. Startups are an indisputable innovation engine, and Microsoft is partnering with founders and investors to help propel their growth,” said Charlotte Yarkoni, Corporate VP, Growth and Ecosystems at Microsoft.

Getting started

For much more on how to partner with Microsoft to power your healthcare AI startup to success, see Azure for Startups, which includes multiple case studies and success stories.

Further resources

  1. AI in Healthcare Use Cases Guide
  2. AI in Healthcare Solutions Guide
  3. AI in Healthcare Blueprint

Collaboration

What other use cases, challenges, solutions, and opportunities for startups are you seeing for AI in healthcare? We welcome any feedback or questions you may have below. AI in healthcare is a fast-moving field, and new developments emerge daily, many of which you cannot yet read about in a textbook. I post daily about these new developments and solutions in healthcare, AI, cloud computing, security, privacy, and compliance on social media. Reach out to connect with me on LinkedIn and Twitter.


Bring Your Own Keys for Apache Kafka on HDInsight


One of the biggest security and compliance requirements for enterprise customers is to encrypt their data at rest using their own encryption key. This is even more critical in a post-GDPR world. Today, we’re announcing the public preview of Bring Your Own Key (BYOK) for data at rest in Apache Kafka on Azure HDInsight.

Azure HDInsight clusters already provide several levels of security. At the perimeter level, traffic can be controlled via Virtual Networks and Network Security Groups. Kerberos authentication and Apache Ranger provide the ability to finely control access to Kafka topics. Further, all managed disks are protected via Azure Storage Service Encryption (SSE). However, for some customers it is vital that they own and manage the keys used to encrypt the data at rest. Some customers achieve this by encrypting all Kafka messages in their producer applications and decrypting them in their consumer applications. This process is cumbersome and involves custom logic. Moreover, it doesn’t allow for usage of community supported connectors.

With HDInsight Kafka’s support for Bring Your Own Key (BYOK), encryption at rest is a one-step process handled during cluster creation. Customers use a user-assigned managed identity with Azure Key Vault (AKV) to achieve this. AKV provides highly available, scalable, and secure storage for cryptographic keys.

The data engineer authorizes the managed identity to have read access from AKV and then enables BYOK in HDInsight by providing the Azure Key Vault URL associated with the encryption key.

All messages to the Kafka cluster, including replicas maintained by Kafka, are stored in Azure Managed Disks. With BYOK turned on, the attached Managed Disks are encrypted with a symmetric Data Encryption Key (DEK), which in turn is protected using the Key Encryption Key (KEK) from the customer’s key vault. The encryption and decryption processes are handled entirely by HDInsight. This setup is transparent to the customer; Kafka clients (producer and consumer applications) need not be modified. The cluster or key vault admin can safely rotate the keys in the key vault via the Azure portal or Azure CLI, and the HDInsight Kafka cluster will start using the new key within minutes.

Customers must enable soft delete for customer-managed keys, which helps protect against ransomware scenarios and accidental deletion.

With BYOK on HDInsight Kafka, enterprise customers can now be more confident than ever in the security of their cluster. This feature unlocks Kafka for customers for whom BYOK is a prerequisite for data at rest. There is no additional charge for enabling this feature.

To get started with BYOK on HDInsight Kafka, please refer to the documentation.

Follow us on @AzureHDInsight or HDInsight blog for the latest updates. For questions and feedback, reach out to AskHDInsight@microsoft.com.

About Azure HDInsight

Azure HDInsight is an easy, cost-effective, enterprise-grade service for open source analytics that enables customers to easily run popular open source frameworks including Apache Hadoop, Spark, Kafka, and others. The service is available in 27 public regions and Azure Government Clouds in the US and Germany. Azure HDInsight powers mission critical applications in a wide variety of sectors and enables a wide range of use cases including ETL, streaming, and interactive querying.

One month retirement notice: Access Control Service


Access Control Service is scheduled to be retired on November 7, 2018 with an extension option to February 4, 2019

The Access Control Service, otherwise known as ACS, is officially being retired. Since the ACS retirement announcement last year, many customers have reached out for our guidance and have completed their migration. Some customers communicated that their migration has started but will not complete before the deadline. We have decided to offer an extension to February 4, 2019. Failure to opt into the extension will result in your namespace being turned off on November 7, 2018. At that point, all requests to the namespace will fail.

What action is required?

If you are using ACS, you will need a migration strategy. The correct migration path for you depends on how your existing apps and services use ACS. We have published migration guidance to assist. In most cases, migration will require code changes on your part.

The Azure customers most likely to have ACS namespaces are those who signed up for Azure Service Bus prior to 2014. These namespaces can be identified by their -sb suffix. The Service Bus team has provided migration guidance and will continue to publish updates to their blog.

How to request an extension

If you cannot finish your migration by November 7, 2018 and need more time, we are offering a one-time extension to February 4, 2019. Here is how you can request the extension:

1. Determine your ACS namespace

  • Connect to ACS using the Connect-AcsAccount cmdlet. (You may need to change your execution policy by running Set-ExecutionPolicy before you can run the command.)
  • List your available Azure subscriptions using the Get-AcsSubscription cmdlet.
  • List your ACS namespaces using the Get-AcsNamespace cmdlet.

2. Navigate to your ACS namespace’s management portal via https://{your-namespace}.accesscontrol.windows.net.

3. Read the updated Terms of Use by clicking the “Read Terms” button, which will redirect you to a page with the updated Terms of Use.

4. Click the “Request Extension” button in the banner at the top of the page. The button will only be enabled after you have read the updated Terms of Use.

5. After the extension request has been registered, the page should refresh with a new banner at the top of the page.

Contact us

For more information about the retirement of ACS, please check our ACS migration guidance first. If none of the migration options will work for you, or if you still have questions or feedback about ACS retirement, please contact us at acsfeedback@microsoft.com. If you have specific Service Bus ACS questions, feel free to send email to ACS-SB@microsoft.com.

In case you missed it: September 2018 roundup


In case you missed them, here are some articles from September of particular interest to R users.

R code by Barry Rowlingson to replicate an XKCD comic about curve fitting.

The rayshader package creates 3-D relief maps in R with perspective, shadows, and depth of field.

The R Developer's Guide to Azure, with links to documentation for Azure cloud services integrating R.

A review of many commercial applications of R presented at EARL London 2018.

Roundup of AI, Machine Learning and Data Science news from September 2018.

A Shiny app using the Custom Vision API to identify pictures (or not) of hotdogs.

Two academic articles use survey techniques to estimate casualties from Hurricane Maria in Puerto Rico.

Yihui Xie describes the benefits of R Markdown documents in response to criticism of Jupyter Notebooks.

A video demonstrates the use of R and Python in Azure HDInsight.

Similarity analyses in R used to identify candidate authors for an anonymous op-ed in the New York Times.

A review of the book "SQL Server 2017 Machine Learning Services with R".

And some general interest stories (not necessarily related to R):

As always, thanks for the comments and please send any suggestions to me at davidsmi@microsoft.com. Don't forget you can follow the blog using an RSS reader, via email using blogtrottr, or by following me on Twitter (I'm @revodavid). You can find roundups of previous months here.

Microsoft joins LOT Network, helping protect developers against patent assertions


We are pleased to announce that Microsoft is joining the LOT Network, a growing, non-profit community of companies that is helping to lead the way toward addressing the patent troll problem, an issue that impacts businesses of all sizes.

Microsoft has seen this problem firsthand. We’ve faced hundreds of meritless patent assertions and lawsuits over the years, and we want to do more to help others dealing with this issue. In most cases, the opportunists behind these assertions were not involved in the research and development of the ideas that came to be embodied in patents. Many do not even understand the technical concepts described in them. In the most extreme cases, we’ve seen mass mailings and campaigns to extract value from small businesses that are not equipped to understand patents. Although these problems are less acute in the US today than in the past, in part because of changes in the law, the challenge persists for many businesses. Entrepreneur magazine cited a recent study showing that 40 percent of small companies involved in patent litigation reported “significant operational impact” from those suits, which some described as a “death knell.”

What does all of this mean for you if you’re a software developer or in the technology business? It means that Microsoft is taking another step to help stop patents from being asserted against you by companies running aggressive monetization campaigns. It also means that Microsoft is aligning with other industry leaders on this topic and committing to do more in the future to address IP risk. By joining the LOT Network, we are committing to license our patents for free to other members if we ever transfer them to companies in the business of asserting patents. This pledge has immediate value to the nearly 300 members of the LOT community today, which covers approximately 1.35 million patents.

This also means we are continuing on the path we started with the introduction of the Azure IP Advantage program in 2017. As part of that program, Microsoft said that it would defend and indemnify developers against claims of intellectual property infringement even if the service powering Azure was built on open source. We also said that if we transferred a patent to a company in the business of asserting patents, then Azure customers would get a license for free. Our LOT membership expands this pledge to other companies in the LOT Network.

Patents and intellectual property still play an important role in our industry because they protect breakthrough innovations and allow companies large and small to recoup research and development investments in areas like artificial intelligence, mixed reality, network security, and database management. However, these benefits are undermined when the system is abused by opportunists pursuing needless litigation. We all need to work together to prevent patent litigation abuse. We invite other companies to join the LOT Network! We look forward to working with LOT in the future on other ideas that benefit developers and customers facing IP risks.

Announcing Azure database services for MySQL and PostgreSQL for Azure Government customers


Today, we are excited to announce the general availability (GA) of Azure Database for MySQL and Azure Database for PostgreSQL for Azure Government customers. With this GA milestone, government agencies and their partners have even more options as they work to advance their missions. These fully managed, government-ready database-as-a-service offerings bring the community versions of MySQL and PostgreSQL to mission-critical government workloads with built-in high availability, a 99.99% availability SLA, elastic scaling for performance, and industry-leading security and compliance. Because they build on the community editions of MySQL and PostgreSQL, lifting and shifting to the cloud is easier than ever while maintaining full compatibility with the languages and frameworks of your choice.

Features include compute scale up to 32 vCores, a new Memory Optimized tier, the ability to scale storage online independently of compute without impacting application performance, greater flexibility in backup storage options, and industry compliance with ISO, SOC, and HIPAA. We are also compliant with the General Data Protection Regulation (GDPR). As with every Azure Government service, you also get the assurance of world-class security, compliance, and advanced threat protection services.

The Azure Government difference

Azure Government delivers a dedicated cloud enabling government agencies and their partners to transform mission-critical workloads to the cloud. As a government-only service, it was created from the ground up to serve the strict needs of the US government. It meets critical compliance standards and exceeds US government regulatory requirements. Azure Government is available in 8 announced regions across the United States, including two DoD regions certified at Impact Level 5.

Learn more

GoChain blockchain available on Azure


The team at GoChain is bringing their private, scalable blockchain offering to Microsoft Azure. GoChain is a smart contract blockchain based on a go-ethereum code fork, with changes to the protocol and consensus model primarily to increase the transaction speed of the network. This is achieved through core protocol changes as well as the introduction of a Proof of Reputation consensus mechanism. This functions similarly to current Proof of Authority based models in that verification of blocks is controlled by validation nodes agreed upon by the consortium.

GoChain is an Ethereum-compatible blockchain that gives enterprise customers and developers the ability to build and deploy decentralized applications and smart contracts. With this compatibility, existing Ethereum assets such as Solidity-based smart contracts can be reused with few changes required.

Enterprises are looking for blockchain solutions that allow them to take advantage of core blockchain attributes such as immutability, strong digital signatures, and distributed architecture. However, their needs differ from public chains in that they have controlled participants on the network. Solutions such as GoChain allow this type of deployment.

Upgradable Smart Contracts

An area the team at GoChain has focused on is enabling a friendlier option for upgrading existing smart contracts. Smart contracts are immutable by the very nature of the protocol. While this leads to a strong assurance and trust model, enterprises are asking for patterns to upgrade existing deployed contracts. A few options have emerged; however, GoChain is working on a model that moves functionality such as the ability to "pause" a smart contract from an inherited library or contract to the protocol level. Additionally, the team is enabling a better governance model around upgradability rules via a voting mechanism. Find more details on upgradeable smart contracts. GoChain is targeting January 2019 for this update.

Other roadmap items

In addition to the core protocol and upgradable smart contracts, GoChain is also working on the following areas for near-term releases:

• Multi-node deployment template.
• Prioritized peer broadcasting to authorized signers.
• Archive/backup of chain data off-node to cloud storage.
• WASM contract support.

How can I get started?

Check out the free offering in the Azure Marketplace.

std::any: How, when, and why


This post is part of a regular series of posts where the C++ product team here at Microsoft and other guests answer questions we have received from customers. The questions can be about anything C++ related: MSVC toolset, the standard language and library, the C++ standards committee, isocpp.org, CppCon, etc. Today’s post is by Casey Carter.

C++17 adds several new “vocabulary types” – types intended to be used in the interfaces between components from different sources – to the standard library. MSVC has been shipping implementations of std::optional, std::any, and std::variant since the Visual Studio 2017 release, but we haven’t provided any guidelines on how and when these vocabulary types should be used. This article on std::any is the second of a series that examines each of the vocabulary types in turn.

Storing arbitrary user data

Say you’re creating a calendar component that you intend to distribute in a library for use by other programmers. You want your calendar to be usable for solving a wide array of problems, so you decide you need a mechanism to associate arbitrary client data with days/weeks/months/years. How do you best implement this extensibility design requirement?

A C programmer might add a void* to each appropriate data structure:

struct day {
  // ...things...
  void* user_data;
};

struct month {
  std::vector<day> days;
  void* user_data;
};
and suggest that clients hang whatever data they like from it. This solution has a few immediately apparent shortcomings:

• You can always cast a void* to a Foo* whether or not the object it points at is actually a Foo. The lack of type information for the associated data means that the library can’t provide even a basic level of type safety by guaranteeing that later accesses to stored data use the same type as was stored originally:
  some_day.user_data = new std::string{"Hello, World!"};
  // ...much later
  Foo* some_foo = static_cast<Foo*>(some_day.user_data);
  some_foo->frobnicate(); // BOOM!

• void* doesn’t manage lifetime like a smart pointer would, so clients must manage the lifetime of the associated data manually. Mistakes result in memory leaks:
  delete some_day.user_data;
  some_day.user_data = nullptr;
  some_month.days.clear(); // Oops: hopefully none of these days had
                           // non-null user_data

• The library cannot copy the object that a void* points at since it doesn’t know that object’s type. For example, if your library provides facilities to copy annotations from one week to another, clients must copy the associated data manually. As was the case with manual lifetime management, mistakes are likely to result in dangling pointers, double frees, or leaks:
  some_month.days[0] = some_month.days[1];
  if (some_month.days[1].user_data) {
    // I'm storing strings in user_data, and don't want them shared
    // between days. Copy manually (note the cast back to the stored type):
    std::string const& src =
      *static_cast<std::string const*>(some_month.days[1].user_data);
    some_month.days[0].user_data = new std::string(src);
  }

The C++ Standard Library provides us with at least one tool that can help: shared_ptr<void>. Replacing the void* with shared_ptr<void> solves the problem of lifetime management:

struct day {
  // ...things...
  std::shared_ptr<void> user_data;
};

struct month {
  std::vector<day> days;
  std::shared_ptr<void> user_data;
};

since shared_ptr squirrels away enough type info to know how to properly destroy the object it points at. A client could create a shared_ptr<Foo>, and the deleter would continue to work just fine after converting to shared_ptr<void> for storage in the calendar:

some_day.user_data = std::make_shared<std::string>("Hello, world!");
// ...much later...
some_day = some_other_day; // the object at which some_day.user_data _was_
                           // pointing is freed automatically

This solution may help solve the copyability problem as well, if the client is happy to have multiple days/weeks/etc. hold copies of the same shared_ptr<void> – denoting a single object – rather than independent values. shared_ptr doesn’t help with the primary problem of type safety, however. Just as with void*, shared_ptr<void> provides no help tracking the proper type for associated data. Using a shared_ptr instead of a void* also makes it impossible for clients to “hack the system” to avoid memory allocation by reinterpreting integral values as void* and storing them directly; using shared_ptr forces us to allocate memory even for tiny objects like int.

Not just any solution will do

std::any is the smarter void*/shared_ptr<void>. You can initialize an any with a value of any copyable type:

std::any a0;
std::any a1 = 42;
std::any a2 = month{"October"};
        

Like shared_ptr, any remembers how to destroy the contained value for you when the any object is destroyed. Unlike shared_ptr, any also remembers how to copy the contained value and does so when the any object is copied:

std::any a3 = a0; // Copies the empty any from the previous snippet
std::any a4 = a1; // Copies the "int"-containing any
a4 = a0;          // copy assignment works, and properly destroys the old value
        

Unlike shared_ptr, any knows what type it contains:

assert(!a0.has_value());            // a0 is still empty
assert(a1.type() == typeid(int));
assert(a2.type() == typeid(month));
assert(a4.type() == typeid(void));  // type() returns typeid(void) when empty
        

and uses that knowledge to ensure that when you access the contained value – for example, by obtaining a reference with any_cast – you access it with the correct type:

assert(std::any_cast<int&>(a1) == 42);             // succeeds
std::string str = std::any_cast<std::string&>(a1); // throws bad_any_cast since
                                                   // a1 holds int, not string
assert(std::any_cast<month&>(a2).days.size() == 0);
std::any_cast<month&>(a2).days.push_back(some_day);
        

If you want to avoid exceptions in a particular code sequence and you are uncertain what type an any contains, you can perform a combined type query and access with the pointer overload of any_cast:

if (auto ptr = std::any_cast<int>(&a1)) {
  assert(*ptr == 42); // runs since a1 contains an int, and succeeds
}
if (auto ptr = std::any_cast<std::string>(&a1)) {
  assert(false);      // never runs: any_cast returns nullptr since
                      // a1 doesn't contain a string
}
        

The C++ Standard encourages implementations to store small objects with non-throwing move constructors directly in the storage of the any object, avoiding the costs of dynamic allocation. This feature is best-effort: there is no portable threshold below which any is guaranteed not to allocate. In practice, the Visual C++ implementation uses a larger any that avoids allocation for object types with non-throwing moves up to a handful of pointers in size, whereas libc++ and libstdc++ allocate for objects that are two or more pointers in size (see https://godbolt.org/z/RQd_w5).

How to select a vocabulary type (aka “What if you know the type(s) to be stored?”)

If you have knowledge about the type(s) being stored – beyond the fact that the types being stored must be copyable – then std::any is probably not the proper tool: its flexibility has performance costs. If there is exactly one such type T, you should reach for std::optional. If the types to store will always be function objects with a particular signature – callbacks, for example – you want std::function. If you only need to store types from some set fixed at compile time, std::variant is a good choice; but let’s not get ahead of ourselves – that will be the next article.

Conclusions

When you need to store an object of an arbitrary type, pull std::any out of your toolbox. Be aware that there are probably more appropriate tools available when you do know something about the type to be stored.

If you have any questions (Get it? “any” questions?), please feel free to post in the comments below. You can also send any comments and suggestions directly to the author via e-mail at cacarter@microsoft.com, or Twitter @CoderCasey. Thank you!


Visual Studio 2017 version 15.9 Preview 3


Today, we are releasing the third preview of Visual Studio 2017 version 15.9. You can download it here and share your feedback with our engineering teams. This release includes ARM64 support in UWP apps as well as improvements to Xamarin and TypeScript. Continue reading below for an overview of the fixes and new features. If you’d like to see the full list, check out the release notes for more details.

ARM64 Support for UWP Applications

You can now build ARM64 UWP applications for all languages. If you create a new application, the ARM64 configuration will be included in the project by default. For existing applications, you can add an ARM64 solution configuration (copied from the ARM configuration) using the configuration manager. Your UWP projects will automatically build for the correct architecture when you select that configuration.

Note: For C# and VB UWP applications, the Minimum Version of your application must be set to the Fall Creators Update (build 16299) or higher to build for ARM64. In addition, C# and VB applications must reference the latest .NET Core for Universal Windows Platform preview NuGet package (6.2.0 preview) or higher to build for ARM64. Only .NET Native is supported for building ARM64 UWP applications.

Visual Studio Tools for Xamarin

Visual Studio Tools for Xamarin now supports Xcode 10, which enables you to build and debug apps for iOS 12, tvOS 12, and watchOS 5. For example, iOS 12 adds Siri Shortcuts, allowing all types of apps to expose their functionality to Siri. Siri then learns when certain app-based tasks are most relevant to the user and uses this knowledge to suggest potential actions via shortcuts. An example is provided in the GIF below.

See how to get ready for iOS 12 and our introduction to iOS 12 for more details on the new features available.

Xamarin.Android Build Performance

This release also brings Xamarin.Android 9.1, in which we have included initial build performance improvements. For a further breakdown of these improvements between releases, see our Xamarin.Android 15.8 vs. 15.9 build performance comparison for details.

        C++ – Charconv for Floats

        We’ve implemented the shortest round-trip decimal overloads of floating-point to_chars() in C++17’s charconv header. For scientific notation, it is approximately 10 times (not percent) as fast as sprintf_s() “%.8e” for floats, and 30 times (not percent) as fast as sprintf_s() “%.16e” for doubles. This uses Ulf Adams’ new algorithm, Ryu.

        JavaScript and TypeScript Project References

We now support project references, which provide functionality for splitting a large TypeScript project into separate builds that reference each other. We have additionally rolled out the ability to easily update your project to the latest TypeScript 3.0. Finally, the language service now supports file renaming: if you rename your JavaScript or TypeScript file, it offers to fix references across your project.
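As a hedged sketch, a referencing project’s tsconfig.json might look like the following (the ../core path and outDir are hypothetical; referenced projects must set "composite": true):

```json
{
  "compilerOptions": {
    "composite": true,
    "declaration": true,
    "outDir": "./lib"
  },
  "references": [
    { "path": "../core" }
  ]
}
```

Building with `tsc --build` then compiles ../core first and only rebuilds projects whose inputs changed.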

        Try out the Preview

        As always, we invite you to download the preview and report bugs or suggest features by connecting with our product teams. The issues you report show us how to build better tools for your work and the ideas you share help us grow our tools alongside your projects.

        Angel Zhou Program Manager, Visual Studio

        Angel Zhou is a program manager on the Visual Studio release engineering team, which is responsible for making Visual Studio releases available to our customers around the world.

        Announcing Babylon.js 3.3!


        We are pleased to announce the launch of Babylon.js 3.3! This release adds amazing new UI and interaction features, major updates to particle system controls, new environment generation tools, procedural noise texture generation, 360-degree photo and video support, and more! For the full list of features, please visit our release notes. Some of our favorites are listed below.

        Example of 3D GUI feature.

        3D GUI

To simplify GUI construction for VR, we have added two features from the Microsoft Mixed Reality Toolkit (MRTK). The first is new 3D volume grids, which make it easy to lay out your interface for direct integration into your VR scene. The second adds power to your VR GUI design with methods for creating the standard MRTK holographic and 3D mesh buttons.

        Example of the new Transformation Gizmo.

        Transformation Gizmo

        Give your users complete control over the scene with the new transformation gizmo. Easily attach the gizmo to any object to unlock translation, rotation, or scaling with the standard or custom gizmos!

Example of updated particle systems.

        Particle Systems

We have introduced nearly 30 new controls to take your particle systems to the next level. With transformation over lifetime, emission control over the lifetime of the system, animation randomization for sprite sheets, and new emitter shapes, you have more control than ever to make your particle systems shine.

        Examples of improvements to Environment textures.

        Environment Textures

New improvements to our image-based lighting bring the quality of our real-time rendering closer to popular ray tracers for environment lighting. We went a step further to increase performance with the Environment Texture Tool, which reduces the size of your pre-filtered DDS files!

        Example of Typescript in the Playground.

        Typescript in the Playground

To make our playground development environment more flexible and more powerful, we now support TypeScript as well as JavaScript.

        Example of improved documentation.

        Improved Documentation

        We have been working hard on improving our users’ understanding of Babylon.js and now have our entire API completely documented. This has been a large undertaking, but the payoff is well worth the investment as you will now have more information right at your fingertips while writing your code.

        We hope you enjoy our new Babylon.js 3.3 release and can’t wait to see what you build with our new features!

        Demos

        The post Announcing Babylon.js 3.3! appeared first on Windows Developer Blog.

        Microsoft Ignite 2018 Bing Maps APIs session recordings now available


The Bing Maps team was in Orlando, Florida, September 24th through the 28th for Microsoft Ignite 2018. If you were not able to attend the event, the session recordings are now available for viewing at https://myignite.techcommunity.microsoft.com/videos.

        Microsoft Bing Maps APIs - Solutions Built for the Enterprise

        Session synopsis: The Microsoft Bing Maps APIs platform provides mapping services for the enterprise, with advanced data visualization, website and mobile application solutions, fleet and logistics management and more. In this session, we’ll provide an overview of the Bing Maps APIs platform (what it is and what’s new) and how it can add value to your business solution.

        View session recording

Cost effective, productivity solutions with fleet management tools from Microsoft Bing Maps APIs

Session synopsis: The Bing Maps API platform includes advanced fleet and asset management solutions, such as the Distance Matrix, Truck Routing, Isochrone, and Snap-to-Road APIs that can help your business reduce costs and increase productivity. Come learn more about our fleet management solutions as well as see a short demo on how you can quickly set up and deploy a fleet tracking solution.

        View session recording

        For more information about the Bing Maps Platform, go to https://www.microsoft.com/maps/choose-your-bing-maps-API.aspx.

        - Bing Maps Team

        Update on .NET Core 3.0 and .NET Framework 4.8


        In May, we announced .NET Core 3.0, the next major version of .NET Core that adds support for building desktop applications using WinForms, WPF, and Entity Framework 6. We also announced some exciting updates to .NET Framework which enable you to use the new modern controls from UWP in existing WinForms and WPF applications.

        Today, Microsoft is sharing a bit more detail on what we’re building and the future of .NET Core and .NET Framework.

        .NET Core 3.0 addresses three scenarios our .NET Framework developer community has asked for, including:

        • Side-by-side versions of .NET that support WinForms and WPF: Today there can only be one version of .NET Framework on a machine. This means that when we update .NET Framework on patch Tuesday or via updates to Windows there is a risk that a security fix, bug fix, or new API can break applications on the machine. With .NET Core, we solve this problem by allowing multiple versions of .NET Core on the same machine. Applications can be locked to one of the versions and can be moved to use a different version when ready and tested.

        • Embed .NET directly into an application: Today, since there can only be one version of .NET Framework on a machine, if you want to take advantage of the latest framework or language feature you need to install or have IT install a newer version on the machine. With .NET Core, you can ship the framework as part of your application. This enables you to take advantage of the latest version, features, and APIs without having to wait for the framework to be installed.

        • Take advantage of .NET Core features: .NET Core is the fast-moving, open source version of .NET. Its side-by-side nature enables us to quickly introduce new innovative APIs and BCL (Base Class Library) improvements without the risk of breaking compatibility. Now WinForms and WPF applications on Windows can take advantage of the latest .NET Core features, which also includes more fundamental fixes for an even better high-DPI support.
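As an illustration of the second point, here is a hedged sketch of a self-contained WPF project file targeting .NET Core 3.0 (the project itself is hypothetical; property names follow the Windows desktop SDK):

```xml
<Project Sdk="Microsoft.NET.Sdk.WindowsDesktop">
  <PropertyGroup>
    <OutputType>WinExe</OutputType>
    <TargetFramework>netcoreapp3.0</TargetFramework>
    <UseWPF>true</UseWPF>
    <!-- Ship the runtime inside the publish folder, so no
         machine-wide .NET install is required. -->
    <RuntimeIdentifier>win-x64</RuntimeIdentifier>
    <SelfContained>true</SelfContained>
  </PropertyGroup>
</Project>
```

Publishing with `dotnet publish -c Release` then produces an output folder that carries its own copy of the runtime, independent of whatever is installed on the machine.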

        .NET Framework 4.8 addresses three scenarios our .NET Framework developer community has asked for, including:

        • Modern browser and modern media controls: Today, .NET desktop applications use Internet Explorer and Windows Media Player for showing HTML and playing media files. Since these legacy controls don’t show the latest HTML or play the latest media files, we are adding new controls that take advantage of Microsoft Edge and newer media players to support the latest standards.

        • Access to touch and UWP Controls: UWP (Universal Windows Platform) contains new controls that take advantage of the latest Windows features and touch displays. You won’t have to rewrite your applications to use these new features and controls. We are going to make them available to WinForms and WPF so that you can take advantage of these new features in your existing code.

        • High DPI improvements: The resolution of displays is steadily increasing to 4K and now even 8K resolutions. We want to make sure your existing WinForms and WPF applications can look great on these displays.

        Given these updates, we’re hearing a few common questions, such as “What does this mean for the future of .NET Framework?” and “Do I have to move off .NET Framework to remain supported?” While we’ll provide detailed answers below, the key takeaway is that we will continue to move forward and support the .NET Framework, albeit at a slower pace.

        How Do We Think of .NET Framework and .NET Core Moving Forward?

        .NET Framework is the implementation of .NET that’s installed on over one billion machines and thus needs to remain as compatible as possible. Because of this, it moves at a slower pace than .NET Core. I mentioned above that even security and bug fixes can cause breaks in applications because applications depend on the previous behavior. We will make sure that .NET Framework always supports the latest networking protocols, security standards, and Windows features.

        .NET Core is the open source, cross-platform, and fast-moving version of .NET. Because of its side-by-side nature it can take changes that we can’t risk applying back to .NET Framework. This means that .NET Core will get new APIs and language features over time that .NET Framework cannot. At Build I did a demo showing how the file APIs were faster on .NET Core. If we put those same changes into .NET Framework we could break existing applications, and we don’t want to do that.

We will continue to make it easier to move applications to .NET Core. .NET Core 3.0 takes a huge step by adding WPF, WinForms, and Entity Framework 6 support, and we will keep porting APIs and features to help close the gap and make migration easier for those who choose to do so.

If you have existing .NET Framework applications, you should not feel pressured to move to .NET Core. Both .NET Framework and .NET Core will move forward, and both will be fully supported; .NET Framework will always be a part of Windows. But moving forward they will contain somewhat different features. Even inside of Microsoft we have many large product lines that are based on .NET Framework and will remain on .NET Framework.

        In conclusion, this is an amazing time to be a .NET developer. We are continuing to advance the .NET Framework with some exciting new features in 4.8 to make your desktop applications more modern. .NET Core is expanding into new areas like Desktop, IoT and Machine Learning. And we are making it easier and easier to share code between all the .NET’s with .NET Standard.

        Scott Hunter, Director of Program Management for .NET

Scott Hunter works for Microsoft as a Director of Program Management for .NET. This includes .NET Framework, .NET Core, managed languages, ASP.NET, Entity Framework, and .NET tooling. Before this, Scott was the CTO of several startups, including Mustang Software and Starbase, where he focused on a variety of technologies – but programming the web has always been his real passion.

        What’s new in Microsoft Edge in the Windows 10 October 2018 Update


        Yesterday, Yusuf Mehdi announced the Windows 10 October 2018 Update, the newest feature update for Windows 10. The October 2018 Update brings with it the best version of Microsoft Edge yet, with new features and enhancements, and updates the web platform to EdgeHTML 18.

        You can get the Windows 10 October 2018 Update yourself by checking for updates on your Windows 10 device. For developers on other platforms, we expect to offer updated EdgeHTML 18 virtual machines on the Microsoft Edge Dev Site shortly, as well as updated images for free remote testing via BrowserStack. We’ll update this post as soon as those resources are available.

        In this post, we’ll walk through some highlights of what’s new for end-users with the October 2018 Update, and the new capabilities for web developers in EdgeHTML 18.

        What’s new in Microsoft Edge

The October 2018 Update refines the look, feel, and functionality of Microsoft Edge throughout the product, including a refreshed menu and settings interface, more customizability, and new ways to learn and stay focused.

        We’ve highlighted a few of the most exciting features below – you can learn more about everything that’s new over at Microsoft Edge Tips! Web developers can skip ahead to the next section to see all the new developer features in EdgeHTML 18.

        Control whether media can play automatically

        Unexpected videos and sounds can be alarming and annoying, especially when it’s hard to tell where they’re coming from. We’ve heard a lot of feedback from users that want more control over when videos can play automatically on a page. Starting with the October 2018 Update, you can now control whether sites can autoplay media, so you’re never surprised.

        Screen capture showing the Autoplay settings in Microsoft Edge

        You can find Media Autoplay settings in the Advanced tab of Microsoft Edge’s Settings menu

        You can get started in Settings under “Advanced” > “Media Autoplay,” where you’ll find three options: Allow, Limit, and Block.

        • “Allow” is the default and will continue to play videos when a tab is first viewed in the foreground, at the site’s discretion.
        • “Limit” will restrict autoplay to only work when videos are muted, so you’re never surprised by sound. Once you click anywhere on the page, autoplay is re-enabled, and will continue to be allowed within that domain in that tab.
        • “Block” will prevent autoplay on all sites until you interact with the media content. Note that this may break some sites due to the strict enforcement – you may need to click multiple times for some video or audio to play correctly.

        You can also enable or block autoplay on a case-by-case basis by clicking Show site information in the address bar (Lock icon or Information icon) and changing the Media autoplay settings.

        Illustration showing the Site Information panel in Microsoft Edge

        Use the Site Information panel to adjust Media Autoplay and other permissions on a site-by-site basis

        Developers should refer to the Autoplay policies dev guide for details and best practices to ensure a good user experience with media hosted on your site.

        Refreshed menus and settings interface

        We heard your feedback that the Microsoft Edge settings were getting a little complex for a single page. In this release, we’ve made Settings easier to navigate, putting commonly used actions front and center, and providing more ways to customize the browser toolbar.

Your bookmarks, history, downloads, and more live in the redesigned Hub menu in Microsoft Edge. Just select the “Favorites” icon by the address bar and choose Reading List, Books, History, or Downloads to see what’s new.

        In the “Settings and more” (Settings and more icon) menu, options are now organized into groups, with icons for each entry and keyboard shortcuts (where applicable) for a faster and more scannable experience.

        We’ve also added the much-requested ability to customize which icons appear in the Microsoft Edge toolbar – you can remove them all for a tidier look, or add as many as you like to bring your favorite functionality to your fingertips. Just select the “Show in toolbar” option in the “Settings and more” (Settings and more icon) menu to get started.

        Stay focused with improvements to reading mode and learning tools

We’ve made a number of improvements to reading modes and learning tools in Microsoft Edge to help you stay focused and get things done.

        Now, when you’re browsing a web page in reading view, you can narrow the focus of the content by highlighting a few lines at a time to help tune out distractions. Just click or tap anywhere on the page, select Learning Tools (Learning tools icon) > Reading Preferences (Reading Preferences icon) and turn on Line focus.

        Illustration of Line Focus in Microsoft Edge

        Line Focus allows you to dim the areas of the page you’re not focused on, to help minimize distractions.

        As you read, you can now look up definitions for key words in Reading View, Books, and PDFs, using the new dictionary function. Simply select any single word to see the definition appear above your selection, even when you’re offline.

You can adjust which documents and sites the dictionary works in by going to the General tab in Settings and selecting options under “Show definitions inline for…”.

        And lots more…

        Those are just a few highlights – there’s lots more to discover throughout Microsoft Edge, including design refinements, improved PDF handling, a more powerful download manager, and more. Learn how to use the new features at Microsoft Edge Tips, or see the full list of everything that’s new over at the Microsoft Edge Changelog.

        What’s new for web developers in EdgeHTML 18

        The October 2018 Update includes EdgeHTML 18, the latest revision of the rendering engine for Microsoft Edge and the Windows platform.

        Web Authentication

        Microsoft Edge now includes unprefixed support for the new Web Authentication API (aka WebAuthN). Web Authentication provides an open, scalable, and interoperable solution to simplify authentication, enabling better and more secure user experiences by replacing passwords with stronger hardware-bound credentials. The implementation in Microsoft Edge allows the use of Windows Hello enabling users to sign in with their face, fingerprint, or PIN, in addition to external authenticators like FIDO2 Security Keys or FIDO U2F Security Keys, to securely authenticate to websites.

        Animation showing a purchase using Web Authentication via Windows Hello

        For more information, head over to the blog post Introducing Web Authentication in Microsoft Edge.

        New Autoplay policies

        With the Windows 10 October 2018 Update, Microsoft Edge provides customers with the ability to personalize their browsing preferences on websites that autoplay media with sound to minimize distractions on the web and conserve bandwidth. Users can customize media behavior with both global and per-site autoplay controls. Additionally, Microsoft Edge automatically suppresses autoplay of media in background tabs.

        Developers should check out the Autoplay policies guide for details and best practices to ensure a good user experience with media hosted on your site.

        Service Worker updates

For a refresher on what Service Workers are and how they work, check out the Service Worker API summary written by our partners over at MDN. We’ve made several updates to Service Worker support in EdgeHTML 18. The fetchEvent enables the Service Worker to use preloadResponse to promise a response, and the resultingClientId to return the ID of the Client that the current service worker is controlling. The NavigationPreloadManager interface provides methods for managing the preloading of resources, allowing you to make a request in parallel while a service worker is booting up, avoiding any time delay. Check out the newly supported API properties for preloading resources with a Service Worker.

        CSS masking, background blend, and overscroll

EdgeHTML 18 improves support for CSS Masking. This implementation further supports the CSS mask-image property with improved WebKit support, including webkitMask, webkitMaskComposite, webkitMaskImage, webkitMaskPosition, webkitMaskPositionX, webkitMaskPositionY, webkitMaskRepeat, webkitMaskSize, as well as more complete standards support, adding maskComposite, maskPosition, maskPositionX, maskPositionY, and maskRepeat.

Determining how an element’s background images should blend with each other also receives a standards-based update in this release: background-blend-mode will now be enabled by default.

CSS improvements can also be found in how Microsoft Edge handles what happens when the boundary of a scrolling area is reached, now supporting overscroll-behavior (including overscroll-behavior-x and overscroll-behavior-y), along with support for overflow-wrap.
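A hedged CSS sketch tying these properties together (the selectors and asset URLs are hypothetical):

```css
.hero {
  /* Standards-based masking; the webkit-prefixed forms are also supported. */
  mask-image: url(mask.svg);
  mask-repeat: no-repeat;

  /* Blend stacked background images with each other. */
  background-image: url(texture.png), url(photo.jpg);
  background-blend-mode: multiply;
}

.scroll-pane {
  /* Keep a nested scroller from chaining scroll to the page
     when its boundary is reached. */
  overscroll-behavior-y: contain;
}
```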

        Chakra Improvements

        EdgeHTML 18 includes improvements to the Chakra JavaScript engine to support new ES and WASM features, improve performance, and improve interoperability. Look for a separate blog post later this month recapping all the Chakra improvements in more detail.

        Developer Tools

        The latest update to Microsoft Edge DevTools adds a number of conveniences both to the UI and under the hood, including new dedicated panels for Service Workers and Storage, source file search tools in the Debugger, and new Edge DevTools Protocol domains for style/layout debugging and console APIs. We’ll be covering the improvements to the Microsoft Edge DevTools in more detail in a separate post coming soon – stay tuned!

        Web Notification properties

        Four new properties are now supported for web notifications: actions, badge, image, and maxActions, improving our ability to create notifications on the web that are compatible with existing notification systems, while remaining platform-independent.

        Listening to your feedback

        By popular demand, we’ve implemented support for several commonly requested APIs in EdgeHTML 18, including the DataTransfer.setDragImage() method used to set a custom image when dragging and dropping, and secureConnectionStart, a property of the Performance Resource Timing API, which can be used for returning a timestamp immediately before the browser starts the handshake process to secure the current connection.

        In addition, no one likes enumerating the attributes collection, so we’ve added support for Element.getAttributeNames to return the attribute names of the element as an Array of strings, as well as, Element.toggleAttribute to toggle a boolean attribute (removing if present and adding if not).

Building on our major accessibility enhancements in previous releases, we’ve added support for three new ARIA roles to allow users of assistive technologies to get semantic meaning when traversing SVG elements that map to these roles (graphics-document, graphics-object, and graphics-symbol).

        We’ve also added support for WebP images, improving interoperability with sites that serve them across the web.

        Progressive Web Apps

        Windows 10 JavaScript apps (web apps running in a WWAHost.exe process) now support an optional per-application background script that starts before any views are activated and runs for the duration of the process. With this, you can monitor and modify navigations, track state across navigations, monitor navigation errors, and run code before views are activated.

        When specified as the StartPage in your app manifest, each of the app’s views (windows) are exposed to the script as instances of the new WebUIView class, providing the same events, properties, and methods as a general (Win32) WebView. Your script can listen for the NewWebUIViewCreated event to intercept control of the navigation for a new view:

Windows.UI.WebUI.WebUIApplication.addEventListener("newwebuiviewcreated", newWebUIViewCreatedEventHandler);

        Any app activation with the background script as the StartPage will rely on the script itself for navigation.
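For reference, a hypothetical appxmanifest excerpt (the Id and file path are illustrative):

```xml
<Applications>
  <!-- StartPage points at the background script
       instead of an HTML page. -->
  <Application Id="App" StartPage="js/background.js" />
</Applications>
```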

        WebView

        Service workers

Service workers are now supported in the WebView control, in addition to the Microsoft Edge browser and Windows 10 JavaScript apps. All flavors of the Microsoft Edge webview (PWA, UWP, Win32) support service workers; however, please be aware that the Push API is not yet available for the UWP and Win32 versions.

x64 app architectures require Neutral (Any CPU) or x64 packages, as service workers are not supported in WoW64 processes. (To conserve disk space, the WoW64 versions of the required DLLs are not natively included in Windows.)

        Win32 WebView updates

        The EdgeHTML WebViewControl for Windows desktop (Win32) apps has been updated with several new features, including the ability to inject script upon page load before any other scripts on the page are run (AddInitializeScript) and know when a particular WebViewControl receives or loses focus (GotFocus/LostFocus).

        Additionally, you can now create a new WebViewControl as the opened window from window.open. The NewWindowRequested event still notifies an app when script inside the WebViewControl calls window.open as it always has, but with EdgeHTML 18 its NewWindowRequestedEventArgs include the ability to take a deferral (GetDeferral) in order to set a new WebViewControl (NewWindow) as the target for the window.open:

        WebViewControlProcess wvProc;
        WebViewControl webView;

        async void OnWebViewControlNewWindowRequested(WebViewControl sender, WebViewControlNewWindowRequestedEventArgs args)
        {
            if (args.Uri.Domain == "mydomain.com")
            {
                // Take a deferral so we can create the target WebView asynchronously.
                using (var deferral = args.GetDeferral())
                {
                    args.NewWindow = await wvProc.CreateWebViewControlAsync(
                        parentWindow, targetWebViewBounds);
                    deferral.Complete();
                }
            }
            else
            {
                // Prevent WebView from launching in the default browser.
                args.Handled = true;
            }
        }

        String htmlContent = "<html><script>window.open('http://mydomain.com')</script><body></body></html>";

        webView.NavigateToString(htmlContent);


        WebDriver

        We’re continuing to improve the interoperability and completeness of our WebDriver implementation, now passing 1222 of 1357 web platform tests (up from 783 in our June update).

WebDriver is now a Windows Feature on Demand (FoD), making it easier than ever to automate testing in Microsoft Edge and get the right version for your device. You will no longer need to match the build/branch/flavor manually when installing WebDriver; it will automatically update to match any new Windows 10 updates.

        You can install WebDriver by turning on Developer Mode, or install it as a standalone by going to Windows Settings > Apps > Apps & features > Manage optional features. For more information, check out the WebDriver announcement on the Windows Blog site.

        Get started testing EdgeHTML 18

        The October 2018 Update is now available for Windows 10 customers as an automatic update, rolling out to devices based on real-time feedback and telemetry and a machine-learning driven targeting model. If you have a device that you need to update right away for testing purposes, you can install the update manually using the Windows Update Assistant.

        For developers who don’t have a Windows 10 device handy, we’re working on updating our free virtual machines from Microsoft Edge Dev to EdgeHTML 18 in the near future. We’ve also partnered with BrowserStack to offer unlimited remote manual and automated testing in Microsoft Edge—we’ll update this post once EdgeHTML 18 is available on BrowserStack.

        We look forward to hearing what you think of this release! You can get in touch with our team directly via @MSEdgeDev on Twitter, or via the Feedback Hub app in Windows. And don’t forget to check out the latest on our roadmap over at https://status.microsoftedge.com, where you can help direct what we build next!

        Kyle Pflug, Senior Program Manager, Microsoft Edge
        Erika Doyle Navara, Senior Dev Writer, Microsoft Edge
        Matt Wojciakowski, Dev Writer, Microsoft Edge

        The post What’s new in Microsoft Edge in the Windows 10 October 2018 Update appeared first on Microsoft Edge Dev Blog.

        Announcing ASP.NET SignalR 2.4.0 Preview 1


We recently released the first preview of the upcoming 2.4.0 release of ASP.NET SignalR. As we mentioned in our previous blog post on the future of ASP.NET SignalR, we are releasing a new minor update to ASP.NET SignalR (the version of SignalR for System.Web and/or OWIN-based applications) that includes support for the Azure SignalR Service, as well as some bug fixes and minor features.

        We recommend you try upgrading to the preview even if you’re not interested in adopting the Azure SignalR Service at this time. Your feedback is critical to making sure we produce a stable and compatible update! You can find details about the release on the releases page of the ASP.NET SignalR GitHub repository.

        Azure SignalR Service support

        The Azure SignalR Service is a fully-managed service that can handle all your real-time experiences and allow you to easily scale your real-time web application. The Azure SignalR Service takes the load off your application so that you don’t have to handle all the persistent connections directly in your application. Instead, clients are re-routed to the service, which takes the burden of those persistent connections so your app can scale based on the actual throughput you need.

        Moving to the Azure SignalR Service provides major benefits to your application:

        1. Your app does not need a SignalR “backplane” (Redis, SQL Server, Azure Service Bus, etc.) anymore. You can completely remove this configuration from your application and scale without having to manage your backplane.
        2. SignalR traffic runs through the Azure SignalR Service, which takes the load off your app servers. The messages still flow to your app servers, but the persistent connections themselves are made to the service, which means you can scale based on the message throughput instead of having to scale based on the number of concurrent users.
        3. Since clients connect to the service, you no longer have to worry about concurrent connection limits on your server or in browsers.

        Convert an ASP.NET SignalR app to use the Azure SignalR Service

        Converting your existing ASP.NET SignalR app to use the Azure SignalR Service just takes a few steps and requires no changes to your client other than updating them to the latest version of ASP.NET SignalR.

        Before you can convert to use the Azure SignalR Service, you need to update your SignalR Server and Client to the latest preview build of ASP.NET SignalR 2.4.0 on NuGet.org (the latest build at the time of publishing is 2.4.0-preview1-20180920-03 but we’ll be shipping more previews so you should make sure you use the latest prerelease build).

After updating to SignalR 2.4.0, install the Microsoft.Azure.SignalR.AspNet NuGet package and change your app.MapSignalR() call to app.MapAzureSignalR(). Then provision an Azure SignalR Service instance, provide its connection string in your Web.config file, and you're good to go!
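
As a concrete sketch of that change, here is what an OWIN startup class might look like after the switch. This is illustrative rather than taken from the sample: the namespace and application-name argument are placeholders, and the connection string name shown in the comment follows the Azure SignalR SDK's configuration convention.

```csharp
// Startup.cs -- before the change, this method called app.MapSignalR().
using Owin;

namespace ChatApp
{
    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            // After installing Microsoft.Azure.SignalR.AspNet, route SignalR
            // traffic through the Azure SignalR Service instead of hosting
            // the persistent connections in this process.
            app.MapAzureSignalR(GetType().FullName);
        }
    }
}

// Web.config -- the SDK reads the service connection string from config
// (endpoint and key values below are placeholders):
// <connectionStrings>
//   <add name="Azure:SignalR:ConnectionString"
//        connectionString="Endpoint=https://<your-instance>.service.signalr.net;AccessKey=<key>;Version=1.0;" />
// </connectionStrings>
```

With this in place, clients keep connecting to your app's SignalR URL, but the actual persistent connections are redirected to the service.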

        For a detailed guide, see the aspnet-samples/ChatRoom Sample in the AzureSignalR-samples GitHub repo. The sample walks through converting an existing ASP.NET SignalR app (the aspnet-samples/ChatRoomLocal Sample) to use the Azure SignalR Service.

        Limitations

        • The JSONP protocol is not supported in Azure SignalR. Your browser clients must run on browsers that support Cross-Origin Resource Sharing in order to use Azure SignalR.
        • The Azure SignalR Service does not support Persistent Connections at this time.
        • The Azure SignalR SDK requires .NET Framework 4.6.1 or higher on the server.

        Conclusion

        We hope you’ll try out ASP.NET SignalR 2.4.0 Preview 1 and give us feedback! Even if you aren’t interested in migrating to Azure SignalR, there are a few bug fixes that are worth getting, and we’d appreciate it if you could try updating and let us know if you encounter any issues!

As always, if you have issues, bugs, or feature requests, please file them on GitHub. If you're having an issue with the Azure SignalR SDK specifically, file it on the Azure SignalR SDK Repository. If you're having an issue with SignalR on-premises, file it on the SignalR Repository. If you're not sure where your issue should go, that's OK, just file it in either one and we'll figure out where it belongs :).

        Also, if you’re interested in migrating your ASP.NET SignalR application to the Azure SignalR Service, the team would love to hear from you, please let us know at asrs@microsoft.com so we can learn more about your scenario!

        Ephemeral OS Disk in limited preview

Last week at Microsoft Ignite, we launched Ultra SSD, a new industry-leading high-performance disk type for IO-intensive workloads. Adding to that, today we are delighted to share the limited preview of Ephemeral OS Disk, a new type of OS disk created directly on the host node, providing local disk performance and faster boot/reset times.

Ephemeral OS Disk is supported for all virtual machines (VMs) and virtual machine scale sets (VMSS). This offering is based on your feedback asking for a lower-cost, higher-performance OS disk for stateless applications, enabling you to quickly deploy VMs and reset them to their original state.

Ephemeral OS Disk is ideal for stateless workloads that require consistent read/write latency to the OS disk, as well as frequent reimage operations to reset the VM(s) to the original state. This includes workloads such as website applications, game server hosting services, VM pools, computational jobs, and more. Ephemeral OS Disk also works well for workloads that leverage low-priority VM scale sets.

Key comparisons between Persistent OS Disk and Ephemeral OS Disk

• Size limit for OS disk: 2 TiB for persistent; for ephemeral, up to 30 GiB during the preview, and up to the VM cache size at general availability.
• VM sizes supported: all sizes for persistent; DSv1, DSv2, DSv3, Esv2, Fs, FsV2, and GS for ephemeral.
• Disk type support: managed and unmanaged OS disks for persistent; managed OS disks only for ephemeral.
• Region support: all regions for persistent; all regions excluding sovereign clouds for ephemeral.
• Specialized OS disk support: yes for persistent; no for ephemeral.
• Data persistence: persistent OS disk data is stored in Azure Storage; ephemeral OS disk data is stored on the local host machine and is not persisted to Azure Storage.
• Stop-deallocated state: VMs and VMSS instances with a persistent OS disk can be stop-deallocated and restarted from the stop-deallocated state; with an ephemeral OS disk they cannot be stop-deallocated.
• OS disk resize: supported during VM creation and after the VM is stop-deallocated for persistent; supported during VM creation only for ephemeral.
• Resizing to a new VM size: OS disk data is preserved for persistent; for ephemeral, OS disk data is deleted and the OS is re-provisioned.

        How to join preview

To enroll in the limited public preview, you will need to submit this form; subscriptions will be approved within 24 to 48 hours, starting October 10, 2018. You can check the status of your request by running the following PowerShell command:

        Get-AzureRmProviderFeature -FeatureName localdiffdiskpreview -ProviderNamespace Microsoft.Compute
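
Once enrolled, an ephemeral OS disk is requested at VM creation time through the OS disk's diffDiskSettings property. The fragment below is a hedged sketch of the relevant ARM template section, not a complete template: the surrounding values are illustrative, and note that ephemeral OS disks require ReadOnly caching.

```json
"storageProfile": {
  "osDisk": {
    "createOption": "FromImage",
    "caching": "ReadOnly",
    "diffDiskSettings": {
      "option": "Local"
    }
  }
}
```

Because the disk lives on the host node, omitting diffDiskSettings (or setting any other option) gives you a regular persistent managed OS disk instead.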

        We would love to get your feedback via Microsoft Teams or email EphemeralOSDiskvTeam@microsoft.com.


        Provide feedback on detected threats in Azure Security Center

        Azure Security Center automatically collects, analyzes, and integrates log data from your Azure resources to detect threats. Machine learning algorithms run against collected data and generate security alerts. A list of prioritized security alerts is shown in Security Center along with the information you need to quickly investigate the problem, as well as recommendations for how to remediate an attack.

However, the threat landscape is constantly changing, and different customers have different needs. Therefore, it is important to stay in contact with customers, continuously improve our threat detection capabilities, and provide customers with the right information to help them address a security threat. To that end, we have added Alerts Customer Feedback to Azure Security Center, which gives Security Center customers a channel to give feedback on the alerts they receive. This capability is currently in public preview and is accessible from the alert blade. At the bottom of the alert you will see the question "Was this useful?", as shown below:

        Alert blade

At this point, you can provide feedback at several levels of detail through a simple user interface. The first level is to indicate whether the alert was useful or not. Once an answer is provided, a drop-down list of reasons appears. Useful alerts can reflect detected malicious activity (a malicious true positive) or non-malicious activity that should still raise an alert (a benign true positive). The latter may result from a non-malicious action, such as a penetration testing activity or a benign login to a resource from an unusual location or in an unusual manner.

        Useful alert

Alerts that are not useful either should not have been fired or should have carried more relevant information for further investigation. The available options are shown below:

        Not useful alert

At the third level of detail, the user can provide additional free-text comments about the alert.

This feedback gives Security Center the input required to improve our detection mechanisms and allows the product and support groups to listen more closely to customer needs. It also lays the foundation for automated tuning of alerts to better meet customer needs.

        To learn more about the benefits of Security Center, visit our webpage.

        Use Hybrid Connections to Incrementally Migrate Applications to the Cloud

        As the software industry shifts to running software in the cloud, organizations are looking to migrate existing applications from on-premises to the cloud. Last week at Microsoft’s Ignite conference, Paul Yuknewicz and I delivered a talk focused on how to get started migrating applications to Azure (watch the talk free) where we walked through the business case for migrating to the cloud, and choosing the right hosting and data services.

If your application is a candidate for running in App Service, one of the most useful pieces of technology that we showed was Hybrid Connections. Hybrid Connections let you host part of your application in Azure App Service while calling back into resources and services not running in Azure (e.g. still on-premises). This enables you to try running a small part of your application in the cloud without moving your entire application and all of its dependencies at once, a process that is usually time-consuming and extremely difficult to debug when things don't work. So, in this post I'll show you how to host an ASP.NET front-end application in the cloud and configure a hybrid connection back to a service on your local machine.

        Publishing Our Sample App to the Cloud

        For the purposes of this post, I’m going to use the Smart Hotel 360 App sample that uses an ASP.NET front end that calls a WCF service which then accesses a SQL Express LocalDB instance on my machine.

The first thing I need to do is publish the ASP.NET application to App Service. To do this, right-click the "SmartHotel.Registration.Web" project and choose "Publish".

The publish target dialog already defaults to App Service, and I want to create a new one, so I will just click the "Publish" button.

This will bring up the "Create App Service" dialog. Next, I will click "Create" and wait a minute while the resources are created in the cloud and the application is published.

When it's finished publishing, my web browser will open to the published site. At this point, there will be an error loading the page since it cannot connect to the WCF service. To fix this, we'll add a hybrid connection.

        Create the Hybrid Connection

To create the Hybrid Connection, I navigate to the App Service I just created in the Azure Portal. One quick way to do this is to click the "Manage in Cloud Explorer" link on the publish summary page.

Right-click the site and choose "Open in Portal". (You can navigate there manually by logging into the Azure portal, clicking App Services, and choosing your site.)

        To create the hybrid connection:

        Click the “Networking” tab in the Settings section on the left side of the App Service page

        Click “Configure your hybrid connection endpoints” in the “Hybrid connections” section

        Next, click “Add a hybrid connection”

        Then click “Create a new hybrid connection”

        Fill out the “Create new hybrid connection” form as follows:

        • Hybrid connection Name: any unique name that you want
        • Endpoint Host: This is the machine URL your application is currently using to connect to the on-premises resource. In this case, this is “localhost” (Note: per the documentation, use the hostname rather than a specific IP address if possible as it’s more robust)
        • Endpoint Port: The port the on-premises resource is listening on. In this case, the WCF service on my local machine is listening on 2901
• Service Bus namespace: If you've previously configured hybrid connections, you can re-use an existing namespace; in this case, we'll create a new one and give it a name
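
For reference, the Endpoint Host and Port above should match the address the web app already uses to reach the on-premises service. In a WCF setup like this sample's, that address lives in the client section of the web app's Web.config; the fragment below is a hedged sketch, with the endpoint path and contract name being illustrative rather than taken from the sample.

```xml
<system.serviceModel>
  <client>
    <!-- The hybrid connection maps localhost:2901 back to this machine,
         so the cloud-hosted app can keep using this same address. -->
    <endpoint address="http://localhost:2901/Service.svc"
              binding="basicHttpBinding"
              contract="ServiceReference.IService" />
  </client>
</system.serviceModel>
```
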

Click "OK". It will take about 30 seconds to create the hybrid connection; when it's done, you'll see it appear on the Hybrid connections page.

        Configure the Hybrid Connection Locally

        Now we need to install the Hybrid Connection Manager on the local machine. To do this, click the “Download connection manager” on the Hybrid connections page and install the MSI.

After the connection manager finishes installing, launch the "Hybrid Connections Manager UI"; it should appear in your Windows Start menu if you type "Hybrid Connections". (If for some reason it doesn't appear in the Start menu, launch it manually from "C:\Program Files\Microsoft\HybridConnectionManager <version#>".)

Click the "Add a new Hybrid Connection" button in the Hybrid Connections Manager UI and log in with the same credentials you used to publish your application.

Choose the subscription you used to publish your application from the "Subscription" dropdown, choose the hybrid connection you just created in the portal, and click "Save".

In the overview, you should see the status change to "Connected". Note: if the state won't change from "Not Connected", I've found that rebooting my machine fixes it (it can take a few minutes to connect after the reboot).

        Make sure everything is running correctly on your local machine, and then when we open the site running in App Service we can see that it loads with no error. In fact, we can even put a breakpoint in the GetTodayRegistrations() method of Service.svc.cs, hit F5 in Visual Studio, and when the page loads in App Service the breakpoint on the local machine is hit!

        Conclusion

        If you are looking to move applications to the cloud, I hope that this quick introduction to Hybrid Connections will enable you to try moving things incrementally. Additionally, you may find these resources helpful:

        As always, if you have any questions, or problems let me know via Twitter, or in the comments section below.

        Getting started with Universal Packages

        At the end of last sprint we flipped the switch on a new feature for Azure Artifacts called Universal Packages. With Universal Packages teams can store artifacts that don’t neatly fit into the other kinds of package types that we support. A Universal Package is just a collection of files that you’ve uploaded to our... Read More

        Visual Studio 2017 and Visual Studio for Mac Support Updates

        $
        0
        0

        As we work to bring you Visual Studio 2019, our team will release the final update to Visual Studio 2017, version 15.9, in the coming months; you can try a preview of version 15.9 here. We’d love your feedback on this release as we finish it up; use Report-a-Problem to submit issues.

        Following our standard Visual Studio support policy, Visual Studio 2017 version 15.9 will be designated as the “Service Pack”. Once version 15.9 ships, customers still using version 15.0.x (RTM) will have one year to update to version 15.9 to remain in a supported state. (Customers using versions 15.1 through 15.8 must update to the latest version immediately to remain supported.) After January 14, 2020, all support calls, servicing, and security fixes will require a minimum installed version of 15.9 for the duration of the ten-year support lifecycle.

        You can install the most up-to-date version of Visual Studio 2017 by using the Notifications hub, the Visual Studio Installer, or from visualstudio.microsoft.com/downloads.

        We also plan to release Visual Studio 2017 for Mac version 7.7 in the coming months, and a final significant update to Visual Studio 2017 for Mac (version 7.8) in the first half of 2019, focused primarily on quality improvements. Visual Studio for Mac continues to follow the Microsoft Modern Lifecycle Policy, and Visual Studio 2017 for Mac version 7.8 will be superseded by Visual Studio 2019 for Mac version 8.0 once released. For instructions on updating, see Updating Visual Studio for Mac.

        More information is available on the Product Lifecycle and Servicing Information for Visual Studio and Team Foundation Server page and the Servicing for Visual Studio for Mac page.

        Paul Chapman, Senior Program Manager

        Paul is a program manager on the Visual Studio release engineering team, which is responsible for making Visual Studio releases available to our customers around the world.

        Remediating the October 2018 Git Security Vulnerability

        Today, the Git project has announced a security vulnerability: there is a security issue in recursively cloning submodules that can lead to arbitrary code execution. The Azure DevOps team encourages you to examine whether you are on an affected platform and, if so, upgrade your Git clients to the latest version. This includes Git clients... Read More