
The Year in .NET – Visual Studio 2017 RC and .NET Core updated, On .NET with Stephen Cleary and Luis Valencia, Ulterius, Inferno, Bastion, LoGeek Night


To read last week’s post, see The week in .NET – On .NET on MyGet – FlexViewer – I Expect You To Die.

The Week in .NET is now more than a year old! Our first post was published on December 1 of last year, and it had only 6 links. This week’s issue has more than 60! This is not me becoming less selective (I’m actually becoming more selective), it really is the community growing, and producing more quality content each week. My goals when I started these posts were the following:

  • Provide useful resources every week.
  • Show how productive the .NET community is.
  • Recognize the amazing work that’s being done by you all.

Thank you all for an amazing year. Thank you to all the great writers of code and blogs, without whom this could simply not exist. Thank you to Stacey, Phillip, Dan, and Rowan for sending me gaming, F#, Xamarin, and EF content every week. And finally, thanks to all of you who read and support us every week.

Visual Studio 2017 RC and .NET Core 1.0 updated

Yesterday, Visual Studio 2017 RC got an update, with further improvements to the csproj format. You can read all the .NET Core and csproj details in Updating Visual Studio 2017 RC – .NET Core Tooling improvements, and the ASP.NET changes in New Updates to Web Tools in Visual Studio 2017 RC. .NET Core 1.0 also got updated to 1.0.3, along with ASP.NET and Entity Framework Core.

On .NET

Last week, I published the first two of our MVP Summit interviews.

Stephen Cleary talked about his AsyncEx library:

Luis Valencia showed his IoT work with sensor data aggregation using Azure:

This week, we’ll be back in the studio to speak with Immo Landwerth, Karel Zikmund, and Wes Haggard about the way the .NET team manages the .NET Core open source projects and repositories. The show is on Thursdays and begins at 10AM Pacific Time on Channel 9. We’ll take questions on Gitter, on the dotnet/home channel and on Twitter. Please use the #onnet tag. It’s OK to start sending us questions in advance if you can’t do it live during the show.

App of the week: Ulterius

Ulterius is a complete remote computer access solution in your browser. It features hardware and process monitoring and management, remote shells (cmd, PowerShell, and bash), file system access, scheduling, webcam access, and remote desktop.

Ulterius' remote desktop feature

Ulterius is open source and built with .NET.

Package of the week: Inferno

Inferno is a modern, open-source, general-purpose .NET crypto library that has been professionally audited. It comes to us from Stan Drapkin, the author of Security Driven .NET.

Game of the week: Bastion

Bastion is an action role-playing game. Play as a young man who sets out on a journey towards the Bastion after waking up to find his world shattered to pieces by a catastrophe called the Calamity. Explore over 40 beautifully hand-painted environments as you discover the secrets of the Calamity while trying to reverse its effects. Bastion features a reactive narrator who marks your every move, upgradeable weapons, and character customization that lets you tailor gameplay to your style.

Bastion

Bastion was created by Supergiant Games using C# and their own custom engine. It is currently available on Steam, Xbox One, Xbox 360, PlayStation 4, PlayStation Vita, and the Apple App Store.

User group meeting of the week: LTS LoGeek Night in Wrocław, Poland

LoGeek Night is a full day of presentations and discussions in a relaxed atmosphere, with hot pizza and cold beer served. LoGeek Night is on Thursday, December 15, at the Wędrówki Pub in Wrocław.

The presentations include:

  • Łukasz Pyrzyk: .NET Core and Open Source in 2017.
  • Ivan Koshelev: Advanced queries in LINQ, IQueryable and Expression Trees with examples in Entity Framework 6.
  • Andrey Gordienkov: Transitive dependencies: they are not who we think they are.

.NET

There’s an awesome article this week by Matt Warren that juxtaposes research papers with their application in the .NET source code. It also shows examples of the reverse: academic papers that are the result of work on .NET. This is an excellent read that I highly recommend!

ASP.NET

F#

Check out the F# Advent Calendar for loads of great F# blog posts for the month of December.

Check out F# Weekly for more great content from the F# community.

Xamarin

Azure

Data

Games

And this is it for this week!

Contribute to the week in .NET

As always, this weekly post couldn’t exist without community contributions, and I’d like to thank all those who sent links and tips. The F# section is provided by Phillip Carter, the gaming section by Stacey Haffner, and the Xamarin section by Dan Rigby.

You can participate too. Did you write a great blog post, or just read one? Do you want everyone to know about an amazing new contribution or a useful library? Did you make or play a great game built on .NET?
We’d love to hear from you, and feature your contributions in future posts.

This week’s post (and future posts) also contains news I first read on The ASP.NET Community Standup, on Weekly Xamarin, on F# weekly, and on Chris Alcock’s The Morning Brew.


Cortana to open up to new devices and developers with Cortana Skills Kit and Cortana Devices SDK


We believe that everyone deserves a personal assistant. One to help you cope as you battle to stay on top of everything, from work to your home life. Calendars, communications and commitments. An assistant that is available everywhere you need it, working in concert with the experts you rely on to get things done.

We’re at the beginning of a technological revolution in artificial intelligence. The personal digital assistant is the interface where all the powers of that intelligence can become an extension of each one of us. Delivering on this promise will take a community that is equally invested in the outcome and able to share in the benefits.

Today we are inviting you to join us in this vision for Cortana with the announcement of the Cortana Skills Kit and Cortana Devices SDK.

The Cortana Skills Kit is designed to help developers reach the growing audience of 145 million Cortana users, helping users get things done while driving discovery and engagement across platforms: Windows, Android, iOS, Xbox and new Cortana-powered devices.

The Cortana Devices SDK will allow OEMs and ODMs to create a new generation of smart, personal devices – on no screen or the big screen, in homes and on wheels.

Developers and device manufacturers can sign up today to receive updates as we move out of private preview.

Cortana Skills Kit Preview

The Cortana Skills Kit will allow developers to leverage bots created with the Microsoft Bot Framework and publish them to Cortana as a new skill, to integrate their web services as skills and to repurpose code from their existing Alexa skills to create Cortana skills. It will connect users to skills when users ask, and proactively present skills to users in the appropriate context. And it will help developers personalize their experiences by leveraging Cortana’s understanding of users’ preferences and context, based on user permissions.

In today’s San Francisco event, we showed how early development partners are working with the private preview of the Cortana Skills Kit ahead of broader availability in February 2017.

  • Knowmail is applying AI to the problem of email overload and used the Bot Framework to build a bot which they’ve published to Cortana. Their intelligent solution works in Outlook and Office 365, learning your email habits in order to prioritize which emails to focus on while on-the-go in the time you have available.
  • We showed how Capital One, the first financial services company to sign on to the platform, leveraged existing investments in voice technology to enable customers to efficiently manage their money through a hands-free, natural language conversation with Cortana.
  • Expedia has published a bot to Skype using the Microsoft Bot Framework, and they demonstrated how the bot, as a new Cortana skill, will help users book hotels.
  • We demonstrated TalkLocal’s Cortana skill, which allows people to find local services using natural language. For example, “Hey Cortana, there’s a leak in my ceiling and it’s an emergency” gets TalkLocal looking for a plumber.

Developers can sign up today to stay up to date with news about the Cortana Skills Kit.

Cortana Devices SDK for device manufacturers

We believe that your personal assistant needs to help across your day wherever you are: home, at work and everywhere in between. We refer to this as Cortana being “unbound” – tied to you, not to any one platform or device. That’s why Cortana is available on Windows 10, on Android and iOS, on Xbox and across mobile platforms.

We shared last week that Cortana will be included in the IoT Core edition of the Windows 10 Creators Update, which powers IoT devices.

The next step in this journey is the Cortana Devices SDK, which makes Cortana available to all OEMs and ODMs to build smarter devices on all platforms.

It will carry Cortana’s promise in personal productivity everywhere and deliver real-time, two-way audio communications with Skype, Email, calendar and list integration – all helping Cortana make life easier, everywhere. And, of course, it will carry Cortana expert skills across devices.

We are working with partners across a range of industries and hardware categories, including some exciting work with connected cars. The devices SDK is designed for diversity, supporting cross platforms including Windows IoT, Linux, Android and more through open-source protocols and libraries.

One early device partner, Harman Kardon, a leader in premium audio, will have more news to share next year about their plans, but today provided a sneak peek at their new device coming in 2017.

The Cortana Devices SDK is currently in private preview and will be available more broadly in 2017. If you are an OEM or ODM interested in including Cortana in your device, please contact us using this form to receive updates on the latest news about the Cortana Devices SDK and to be considered for access to the early preview.

The post Cortana to open up to new devices and developers with Cortana Skills Kit and Cortana Devices SDK appeared first on Building Apps for Windows.

Project Springfield: a Cloud Service Built Entirely in F#


This post was written by William Blum, a Principal Software Engineering Manager on the Springfield team at Microsoft Research.

Earlier this year, Microsoft announced a preview of Project Springfield, one of the most sophisticated tools Microsoft has for rooting out potential security vulnerabilities in software. Project Springfield is a fuzz testing service which finds security-critical bugs in your code.

One of the amazing things about Springfield is that it’s built on top of Microsoft’s development platforms – F#, .NET, and Azure! This post will go over some of the what, why, and how of Project Springfield, F#, .NET, and Azure.

What is Project Springfield?

Project Springfield is Microsoft’s unique fuzz testing service for finding security-critical bugs in software. It helps you quickly adopt security practices and technology from Microsoft. The service leverages the power of the Azure cloud to scale security testing using a suite of Microsoft security testing tools. It’s currently in preview, and you can sign up at Project Springfield if you want to give it a try!

Microsoft's Springfield group. (Photography by Scott Eklund/Red Box Pictures)

William Blum (right) and Cheick Omar Keita (left) discuss the architecture of Springfield.

Why F#?

The reason why we chose F# for this project can be summarized as fast time to market.

In 2015, Microsoft Research NExT kicked off Project Springfield. The engineering team, consisting of three developers at the time, was given the ambitious goal to build an entirely new service from scratch and ship it to external customers in just three months.

Due to its conciseness, correctness, and interoperability with the entire .NET ecosystem, we believe that F# accelerated our development cycle and reduced our time to market. Some specific benefits of using F# we saw included scripting capabilities and interactive REPL to quickly prototype working code, Algebraic Data Types, Immutability by default, Pattern Matching, Higher-Order Functions, a powerful Asynchronous Programming model, and Type Providers.

How it was done

F# scripting allowed the team to quickly discover and interact with external .NET APIs. For instance, we routinely used F# interactive to learn how to use the Azure SDK:
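A sketch of the kind of script involved (the assembly path and API names here are assumptions for illustration, not the exact Springfield script):

```fsharp
// F# Interactive sketch — the #r path and SDK API shapes are assumptions
#r @"packages/Microsoft.Azure.Management.Compute/lib/net45/Microsoft.Azure.Management.Compute.dll"

open Microsoft.Azure.Management.Compute

// 'client' would be constructed from the subscription's credentials
let listAllVms (client: ComputeManagementClient) =
    client.VirtualMachines.ListAll()
    |> Seq.iter (fun vm -> printfn "%s" vm.Name)
```

Evaluating a script like this line by line in F# Interactive is how we explored unfamiliar APIs.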

The above script in F# Interactive will enumerate all Azure Virtual Machines in the specified Azure subscription.

Later on in the development process, the very same script code could easily be integrated into the final compiled code without any modification.

Functional Programming

Because F# is a functional programming language, we wrote our code in a functional style, which allowed us to eliminate a lot of boilerplate code.

For instance, when working with collections such as lists or arrays, we would use F# sequence operations from the Seq module to process data. Because F# supports partial application of function arguments and functions as first-class arguments, we can process sequences of data with simple, reliable, and composable code.

We find this style simple because it avoids explicit iteration and the temporary variables needed to track state, as in a C-style for loop. We value the increased reliability because it avoids common iteration pitfalls, such as out-of-bounds array indexing exceptions. Lastly, the pipeline operator in F# (|>) allows us to compose operations succinctly.
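A minimal sketch of this style, building a sorted histogram with the Seq module and the pipeline operator:

```fsharp
// Build a sorted histogram: no loops, no temporary mutable state
let histogram (words: seq<string>) =
    words
    |> Seq.countBy id                // (word, count) pairs
    |> Seq.sortByDescending snd      // most frequent first
    |> Seq.toList

// histogram ["a"; "b"; "a"]  →  [("a", 2); ("b", 1)]
```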

Sorted histogram using F# piping operator and Sequence module

The above snippet demonstrates quite a few functional programming features, like lambda functions, function composition, and functional purity, which are natural defaults for F#. The power of these constructs has even led to some integration into non-functional languages like C#.

Functional purity in particular is one of the biggest benefits to our codebase. Functional purity is a form of determinism indicating that a function’s output is fully determined by the value of its input. That is, for a given input, a pure function will always return the same output. Furthermore, a pure function does not affect its environment in any way (e.g., global variables, affecting files on disk…).

Upholding the property of functional purity in our codebase has led to code that’s easier to reason about and simpler for us to test. And because pure functions do not affect one another, they are easily parallelizable!

F# makes it easy to write functionally pure code by making all let-bound values immutable by default. In other words, you must explicitly opt in to impurity. When this is truly necessary (e.g., you must modify the environment in some way), you can easily do so by indicating mutability with the let mutable keyword. In Springfield, we have only five mutable variables in our entire F# codebase.

Conciseness

The example above highlights another aspect of functional languages: programs tend to be concise. The four lines of code from the program above would be expanded to many more if written in imperative style. This may look like a contrived example but at the scale of Springfield, it yields a codebase that is very easy to maintain.

In fact, we could quantify this phenomenon when we ported some of our components from other languages to F#. In order to remove some legacy dependencies, for instance, we ported a Perl script to a 37% smaller F# program. In a separate effort we ported 1,338 lines of PowerShell scripts to just 489 lines of F# (2.7 times smaller). In both cases, despite the code size reduction, the resulting F# program improved logging, readability and reliability (due in part to static type checking).

Correctness

Another reason why F# helped us to ship quickly is because we found that the functional paradigm F# uses helped us improve code correctness. One of the most compelling examples of how language constructs improve correctness is the use of Algebraic Data Types and Pattern Matching.

The quintessential example of this is how you represent and handle missing data in F#. In most mainstream languages, missing data is typically represented by a special null value. This has a big drawback: because null is implicit in most types you operate on, it’s easy to forget to check for the possibility of a null when consuming your data. This makes it easy to have reliability issues and bugs such as NullReferenceException errors at runtime. In many languages, such as C#, every object value is by default nullable, which means that the need to check for null is spread throughout an entire codebase.

In F#, the data types you define are non-nullable by default. If missing data is expected, then you wrap your existing type 'T in the Algebraic Data Type 'T option (or Option). F# Options are inhabited by two possible kinds of values: the None value, representing the absence of data, or Some v, where v is a valid value of type 'T.

By capturing the possible absence of data in the Option type itself, the compiler is able to enforce that you account for both the Some v and None cases whenever you try to consume an Option in your code. This is typically done with Pattern Matching and the match ... with construct. Here is an example taken from Springfield’s codebase:

Pattern matching works together with the type system to ensure that all cases are accounted for: the None case and the Some case.
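The idea can be sketched minimally as follows (this is an illustrative sketch, not Springfield’s actual snippet):

```fsharp
// tryFindVm returns a 'string option': absence is part of the type
let tryFindVm (name: string) (vms: string list) =
    vms |> List.tryFind (fun vm -> vm = name)

// The compiler requires both cases to be handled
match tryFindVm "fuzzer-01" ["fuzzer-01"; "fuzzer-02"] with
| Some vm -> printfn "Found %s" vm
| None    -> printfn "No such machine"
```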

This language feature alone helped us almost entirely eliminate null as a concern from our codebase, which ultimately saved us very precious time.

An expressive type system

Option types are just one example of the power of Algebraic Data Types. Used more generally, Algebraic Data Types allowed us to concisely define all the data structures involved in the system, and write correct and efficient code to manipulate those data structures. For instance, we use simple Discriminated Unions to define the size of the virtual machines provisioned in Azure for testing:
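Such a union might look like the following (the case names are illustrative, not Springfield’s actual definition):

```fsharp
/// Illustrative only — the actual Springfield case names may differ
type VmSize =
    | Small
    | Medium
    | Large
```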

We also use more complex structures to encode events and messages exchanged between the various components of the system.

For each test workload submitted to Springfield, thousands of messages are being created and exchanged between the various components of the service. Thanks to the powerful F# type system we can easily represent such complex information via F# Records and Discriminated Unions:

Once we represent incoming messages via the type system, we can use Pattern Matching dispatch on the incoming message.
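A sketch of what such a union and its dispatch might look like (the case names and payloads are invented for illustration):

```fsharp
open System

/// Illustrative message type — Springfield's actual cases differ
type EventType =
    | JobCreated   of jobId: Guid
    | JobCompleted of jobId: Guid * succeeded: bool
    | Heartbeat    of machineName: string

// The compiler warns if any EventType case is left unhandled
let dispatch event =
    match event with
    | JobCreated jobId             -> printfn "starting job %O" jobId
    | JobCompleted (jobId, true)   -> printfn "job %O succeeded" jobId
    | JobCompleted (jobId, false)  -> printfn "job %O failed" jobId
    | Heartbeat machineName        -> printfn "heartbeat from %s" machineName
```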

What’s nice about the above is that the compiler enforces that we account for all cases. Any queue message that is successfully deserialized into the F# discriminated union EventType is guaranteed to be accounted for by the dispatch function. Because we get that correctness guarantee, we don’t spend nearly as much time debugging. Features like F# types, used in conjunction with Pattern Matching, helped us tremendously in getting working code completed faster.

Another example: for reliability, service requests are implemented in our back end as finite state machines. The state of each machine is saved onto an Azure Queue, so that the service can resume from where it left off should a failure ever occur. Once again, F# lets us define our state machines very succinctly:
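A discriminated union makes each state, and the data it carries, explicit (the states below are illustrative):

```fsharp
open System

/// Illustrative states — the real Springfield machines differ
type JobState =
    | Queued
    | Provisioning of vmName: string
    | Fuzzing      of vmName: string * startedAt: DateTime
    | Completed    of succeeded: bool
```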

Finite state machines used in Springfield backend

Json serialization and open source contribution

In Springfield, we leveraged Json.NET to serialize and deserialize JSON messages. However, we found that the default output when serializing F# data types was too verbose for our needs. We built FSharpLu.Json, a small library which wraps and augments Json.NET when serializing F# data types, so we could more succinctly serialize F# data types like Options, Algebraic Data Types and Discriminated Unions.

For example the simple value Some [ None; Some 2; Some 3; None; Some 5 ] gets serialized by FSharpLu.Json to just [null, 2, 3, null, 5]. Without FSharpLu.Json, it would get serialized to the following:

For complex data types, like the Event type introduced earlier, the difference becomes more appreciable. The following event, for instance:

gets serialized with FSharpLu.Json to just

which better reflects the F# syntax, and is 47% more compact than the default Json.NET formatting:

We thought a JSON utility like this would be useful to the F# community, so we’ve open-sourced FSharpLu.Json on GitHub and released it on NuGet.

F# Type Providers + Azure

Springfield is built entirely on Azure. All the compute and network resources used to run the test workloads are dynamically provisioned in Azure through the Azure Resource Manager (ARM). Creating resources in ARM requires authoring two JSON files: one template JSON file defining all the resources you want to create (e.g., virtual machines), and one parameter JSON file with values used to customize the deployment (e.g., machine names).

Springfield allocates compute resources dynamically, so it needs to generate JSON parameter files at run-time, a task that can be error-prone. With F# Type Providers we can statically verify at compile time that our generated template parameters are valid. Because our ARM templates constantly evolve, this tremendously speeds up development and debugging.

With the Json Type Provider from FSharp.Data, just three lines of F# code suffice to automatically infer from the template parameters file (shown in the screenshot below) all the necessary types required to submit the deployment to Azure:
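The shape is roughly this (the sample file name and the inferred property names below are assumptions for illustration):

```fsharp
open FSharp.Data

// "template.parameters.json" stands in for the real ARM parameters file;
// the provider infers the types from this sample at compile time
type ArmParameters = JsonProvider<"template.parameters.json">

let parameters = ArmParameters.GetSample()
// Property names are inferred from the sample: a typo or a missing field
// is a compile-time error, not a deployment failure
printfn "%s" parameters.Parameters.VmName.Value
```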

JSON Type Providers for Azure Templates

Screenshot showing F# Intellisense catching a missing field from the template parameters (left), and the corresponding ARM template (right)

Strongly-typed logging, Asynchronous programming, and Active Patterns

To illustrate other areas where F# helped us build Springfield, let’s look at another snippet from our codebase. Below is the function we use to delete a resource group in Azure.
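A sketch with the same shape (the helper names and SDK call are assumptions, not Springfield’s exact code):

```fsharp
// Sketch only — helper names and the SDK call shape are assumptions
module Trace =
    let info fmt  = Printf.kprintf (fun s -> System.Diagnostics.Trace.TraceInformation s) fmt
    let error fmt = Printf.kprintf (fun s -> System.Diagnostics.Trace.TraceError s) fmt

let deleteResourceGroup (deleteAsync: string -> System.Threading.Tasks.Task)
                        (groupName: string) =
    async {
        Trace.info "Deleting resource group %s" groupName
        try
            do! deleteAsync groupName |> Async.AwaitTask
            return true
        with e ->
            Trace.error "Could not delete %s: %s" groupName e.Message
            return false
    }
```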

Strongly-typed logging

In the snippet of code above, C/C++ programmers will recognize the printf-like formatting with the use of %s in the calls to the logging functions Trace.info and Trace.error. According to game programmer John Carmack, printf format string errors were, after null-safety issues, the second biggest class of bugs in the C/C++ codebase of the video game Rage. Such errors occur when you pass an incorrect number of parameters to the printf function, or when the parameter types do not match format specifiers like %d and %s.

Because we rely heavily on trace logging to diagnose bugs and issues in Springfield, we cannot afford reliability issues in the logging functions themselves! Thanks to its powerful type system, F# eliminates the problem altogether: any mismatch between a format specification and its parameters is caught statically by the compiler! To take advantage of this, we simply defined our own trace logging helpers using the strongly-typed formatting module Printf. The underlying logging logic is then offloaded to other logging APIs, such as the .NET Framework’s System.Diagnostics.TraceInformation or the Azure SDK’s AppInsights.

We’ve open sourced the strongly-typed wrapper for System.Diagnostics.TraceInformation in the FSharpLu library and plan to open source the AppInsights wrapper in the future.

Strongly-typed logging to System.Diagnostics with Microsoft.FSharpLu.TraceLogging

Asynchronous programming

To achieve high scalability, online services like Springfield must make use of asynchronous code to further utilize hardware resources. Because this is a difficult task for programmers, language-level abstractions for asynchronous programming, which make this task easier, have recently begun to emerge in mainstream languages.

F# pioneered a language-level asynchronous programming model for the .NET platform in 2007. In practice this means that F# comes out of the box with state of the art support for asynchrony in the form of Asynchronous Workflows.

In Springfield, most of the IO-bound code is wrapped inside an async {..} block and makes use of the let! operator to asynchronously wait for the underlying IO operation to complete.
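A generic sketch of the shape (not Springfield code):

```fsharp
open System
open System.Net
open Microsoft.FSharp.Control.WebExtensions  // adds WebClient.AsyncDownloadString

let fetchLength (url: string) =
    async {
        use client = new WebClient()
        // let! suspends the workflow without blocking a thread
        let! page = client.AsyncDownloadString(Uri url)
        return page.Length
    }

// Several workflows can then run concurrently:
// urls |> List.map fetchLength |> Async.Parallel |> Async.RunSynchronously
```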

For example, in the delete snippet above, we use let! to asynchronously wait on the delete API from the Azure SDK. Asynchronous workflows are used pervasively in our services. Our back end event processing and our REST API are all asynchronous:

Asynchronous REST API to submit a Springfield job

The F# asynchronous programming model is implemented entirely in the F# Core Library using Computation Expressions, a language construct based on a sound theoretical foundation used to extend the language syntax in a very generic way.

Many common pitfalls faced by C# programmers when writing asynchronous code aren’t a concern when using the F# asynchronous programming model. To learn more, check out Tomas Petricek’s wonderful blog post, which explores the differences between the C# and F# models of asynchrony.

Handling asynchronous exceptions with Active patterns

One of the key behaviors of asynchronous and parallel programming in .NET is that exceptions sometimes get nested under, or grouped into exceptions of type System.AggregateException. In .NET languages like C#, exception handling dispatch is based solely on the type of the exception. In F#, the pattern matching construct lets you express complex conditions to filter on the exception you want to handle. For instance, in the delete function from the snippet above, we use pattern matching in combination with Active Patterns to concisely filter on aggregated exceptions:
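A sketch of the pattern (the active pattern below is illustrative; FSharpLu ships similar helpers):

```fsharp
open System

/// Matches an AggregateException that wraps exactly one inner exception
let (|SingleAggregate|_|) (e: exn) =
    match e with
    | :? AggregateException as ae when ae.InnerExceptions.Count = 1 ->
        Some ae.InnerExceptions.[0]
    | _ -> None

let runGuarded (work: Async<unit>) =
    async {
        try
            do! work
        with
        // The condition (and the exception filtered on) is expressed
        // directly in the match, unlike type-only dispatch in C#
        | SingleAggregate inner -> printfn "Inner failure: %s" inner.Message
        | e -> printfn "Other failure: %s" e.Message
    }
```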

Active pattern to match over aggregated exceptions

Pattern matching to filter over Azure SDK exception Hyak.Common.CloudException

F# as a scripting language

F# comes with a REPL environment that makes it a great alternative to other scripting languages like PowerShell. Since F# scripts execute on the .NET platform, they can leverage code from existing core assemblies. In Springfield, we have F# scripts that perform maintenance operations like usage monitoring and clean-up. Another advantage of F# scripts is that they are statically type-checked, an unusual thing for a scripting language! In practice this yields huge savings in debugging time. Foolish errors like typos in variable names or incorrect typing are immediately caught by IntelliSense in the IDE tooling available for F# – Visual Studio, Xamarin Studio, and Visual Studio Code with the Ionide suite of plug-ins. Refactoring code also becomes a piece of cake. This stands in stark contrast to the fragility of the PowerShell scripts our team had experienced.

These features of F# Scripting have been a huge benefit, allowing our team to replace PowerShell for our scripting needs in some components of the service.

We still use PowerShell for our deployments and resource management, mainly due to our reliance on Azure, or because some tools like Service Fabric only expose certain features through PowerShell. But whenever possible, we try to stick to F# scripting.
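Such a maintenance script looks much like compiled code, minus the project (the #r path and API shape below are assumptions for illustration):

```fsharp
// maintenance.fsx — illustrative only; path and API shape are assumptions
#r @"packages/Microsoft.Azure.Management.Resources/lib/net45/Microsoft.Azure.Management.Resources.dll"

open Microsoft.Azure.Management.Resources

// 'client' would be constructed from the subscription's credentials
let listResourceGroups (client: ResourceManagementClient) =
    client.ResourceGroups.List(null)
    |> Seq.iter (fun group -> printfn "%s" group.Name)
```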

Springfield .FSX script to list all resource groups in Azure

Scaling with .NET and Azure

Because F# is a .NET language, we can leverage the entire .NET ecosystem. In particular we use the Azure .NET SDK to access many Azure services such as the Resource Manager, Compute, Network, Files storage, Queues, KeyVault, and AppInsights. We also built our backend service using Service Fabric.

Read more about how Springfield used Azure here: https://azure.microsoft.com/blog/scaling-up-project-springfield-using-azure

Community libraries

What’s also great about F# is its vibrant community. In Springfield we leverage many open-source projects like Paket to simplify NuGet dependency management, FsCheck for automated test generation, type-providers from FSharp.Data and the cross-platform F# editor Ionide for Visual Studio Code. We also keep a close eye on other projects. For instance, we are considering Suave for future web-related components.

As mentioned earlier we’ve also contributed back to the community in the form of two F# libraries: FSharpLu and FSharpLu.Json.

What’s Next for Project Springfield

This article, hopefully, gives you a good overview of some aspects of F# that helped us build Springfield. When we started the project, we chose F# based on positive experiences building smaller projects. Throughout the development of Springfield we learnt that you can use it just as well to build a full-fledged online service!

The functional paradigm is now mainstream in the industry as indicated by the popularity of languages like F#, Scala, Swift, Elixir, and Rust; as well as the inclusion of functional programming constructs in languages such as C# and Java. Even C++ wants its own lambdas now! The reason, we believe, is that the correctness guarantees and expressivity of the functional paradigm yields a unique competitive advantage in a world where code must evolve rapidly to adapt to changing customer needs! For .NET developers, F# is the perfect language to make the jump!

To conclude, we want to call out the success we’ve had with F# as a recruiting tool. When building an engineering team to work on a codebase in a less popular language like F#, one of the biggest concerns is that you won’t be able to find enough people. Surprisingly, things turned out otherwise. Firstly, we found that many great candidates were interested in the position precisely because it used a functional programming language like F#. For some, it was pure love for the language, or frustration at not being able to use it in their current job (sometimes due to resistance in their current team). For others, it was curiosity about learning a new programming paradigm and a willingness to challenge themselves and do things differently. Secondly, we observed that, once hired, those engineers turned out to be great developers in any language, not just F#. We had no trouble recruiting engineers to work on Springfield, and even found the use of F# in the codebase a boon to hiring talented people!

Microsoft's Springfield group photographed on November 1, 2016. (Photography by Scott Eklund/Red Box Pictures)

Members of the Springfield Team. From left to right: Lena Hall, Patrice Godefroid, Stas Tishkin, David Molnar, Marc Greisen, William Blum, Marina Polishchuk

As for Springfield, we have plenty more work in the pipeline. Among other things, we are considering porting our backend to .NET Core, which F# will support in the forthcoming 4.1 release!

Learn more

SonarSource have announced their own SonarQube Team Services / TFS integration


Microsoft have been partnering with SonarSource for almost two years to bring SonarQube to .NET developers and to make it easy to analyze MSBuild and Java projects from Visual Studio Team Services, TFS and Visual Studio. The partnership, and Team Services extensibility, have now matured to the point that we have jointly decided that it was time for Microsoft to transfer ownership of the SonarQube MSBuild build tasks to SonarSource. They are better placed to keep the tasks up to date and consistent with the SonarQube vision. SonarSource have now announced the availability of their own SonarQube Team Services and TFS extension on the VSTS marketplace.

Concretely, what does this change for you?

In the past, we released the SonarQube Team Services build tasks “in the box”, so whenever we updated VSTS – every 3 weeks – we pushed updates to these tasks. The tasks were also shipped with the TFS on-premises product. The source code is in the vsts-tasks repository on GitHub along with the other tasks released by Microsoft. In future, Microsoft’s SonarQube tasks won’t be released in the service or the TFS product. Like many partners, SonarSource is now providing a dedicated SonarQube extension. This allows them to fully control the development and deployment of updates and fixes. Therefore, we are deprecating the MSBuild SonarQube tasks, and you will need to install the SonarQube extension to continue analyzing technical debt in your MSBuild projects.

What build tasks are affected?

The two tasks which are deprecated are the SonarQube for MSBuild tasks (‘SonarQube for MSBuild – Begin Analysis’ and ‘SonarQube for MSBuild – End Analysis’).

Note that we also integrated SonarQube into the Java build tasks for Maven and Gradle in order to enable code analysis feedback in Pull Requests. These integrations will remain as they are for now, and will continue to be released by Microsoft. SonarSource may in the future provide a replacement build task or tasks for Java with this capability.

What will be the deprecation experience?

If you are a Team Services user, when you run a build that contains SonarQube for MSBuild tasks, you’ll notice some build warnings:

clip_image002

The warnings contain hyperlinks that will help you migrate.

Also, if you try to add the former tasks to a build definition, you’ll notice the [DEPRECATED] prefix in their label:

clip_image003

On the other hand, if you are working on-premises with TFS 2017, you’ll see these changes starting with TFS 2017 Update1.

Moving to the new tasks

At some point, the Microsoft-owned tasks will be deleted. We recommend switching to SonarSource’s extension as soon as possible. This is straightforward – just install the SonarQube extension to your account and you’ll notice three new tasks in your library:

clip_image004

You’ll recognize the last two, but the first is new: SonarSource is introducing a new task named “SonarQube Scanner CLI” that supports analysis of projects outside the MSBuild and Java build technologies, a common request. You can now analyze your Node.js projects, for example.

Minor breaking changes

SonarSource have taken the opportunity to address shortcomings in the old tasks and to action some of your feedback. Consequently, there are two small breaking changes:

  • There is now a dedicated “SonarQube” endpoint instead of a generic one. This is an improvement, since you will now be able to find at a glance the service endpoints which are relevant to these tasks, without having to trawl through a long list of generic service end points.

clip_image005

You will be asked to input a token, which can be generated from your SonarQube dashboard.

clip_image007

The new endpoint will show up in the list of end points with a SonarQube icon, so that you can see it immediately.

  • The database connection input fields are no longer available in the build step. This will only matter if you are using a version of SonarQube lower than 5.2. In that case, we advise you to upgrade your SonarQube server, or use the Additional Settings field to configure these parameters.
    clip_image009

Support

Moving forward, support is moving entirely to SonarSource, who would love to get your feedback. If you have questions or suggestions about the SonarQube build tasks, please use Google Groups with the SonarQube tag: https://groups.google.com/forum/#!forum/sonarqube

Team Services December Extensions Roundup


It is the holiday season and we get to look back on a fantastic year for the Team Services Marketplace! Thanks to our growing publisher community there are 321 extensions in the Marketplace and November was one of the best months ever for our installation traffic. 2017 is full of potential as we continue to invest and grow our ecosystem. This month I’ve got two extensions for you, one of them is a must have for our Work Item users. Happy Holidays!

Work Item Search

See it in the Marketplace: https://marketplace.visualstudio.com/items?itemName=ms.vss-workitem-search

Big and small teams rejoice! The need to create small temporary queries to find that pesky work item you lost track of is gone! With Work Item Search, you get fast and flexible text search of all Work Item fields across all projects in your account.

  • Search all of your Work Item fields – You can easily search across all work item fields, including custom fields, which enables more natural searches. The snippet view indicates where matches were found.

image002

  • Inline search filters help you narrow it down, fast – The dropdown list of suggestions helps complete your search faster. For example, a search such as “AssignedTo: Chris WorkItemType: Bug State: Active” finds all active bugs assigned to a user named Chris.

image003

Code Coverage Widgets

See it in the Marketplace: https://marketplace.visualstudio.com/items?itemName=shanebdavis.code-coverage-dashboard-widgets

Dashboard widgets are great because so many interesting scenarios can be enabled there. Code Coverage Widgets adds another set of tools for those who want to stay on top of the quality of new code for a growing project. This widget displays the percentage of unit test code coverage based on a selected build definition. If a build definition does not have any unit test results recognized by the widget, or if it has not yet been configured, it will indicate so with a message displayed within the widget.

preview1

The widget has two customizable properties:

  • Title – Title of the widget as it is displayed on the dashboard.
  • Build Definition – The build definition you want code coverage displayed for on the widget.

configuration

You will be able to make your dashboard a build status powerhouse with Code Coverage Widgets!

microsoft-visualstudio-services-screenshots

    Are you using an extension you think should be featured here?

    I’ll be on the lookout for extensions to feature in the future, so if you’d like to see yours (or someone else’s) here, then let me know on Twitter!

    @JoeB_in_NC

    Connect(“demos”); // 2016: BikeSharing360 on GitHub


    Microsoft loves developers and is constantly investing in enabling the future of development with cloud-first, mobile-first solutions that serve any developer, any application, and any platform.

    During our Connect(); event this year we presented 15 demos in Scott Guthrie’s and Scott Hanselman’s keynotes. If you missed the keynotes, you can watch the recordings on Channel 9. I highly recommend it!

    New products, services, and tools we announced help bring innovation to your apps. We enjoy working on the demos for the keynotes and building real-world applications through which you can directly experience what’s possible using those technologies. This year, we built out a full intelligent bike sharing scenario for our Connect(); //2016 demos and are delighted to share all the source code with you today.

    clip_image002

    BikeSharing360 is a fictitious example of a smart bike sharing system with 10,000 bikes distributed in 650 stations located throughout New York City and Seattle. Their vision is to provide a modern and personalized experience to riders and to run their business with intelligence.

    In this demo scenario, we built several apps for both the enterprise and the consumer (bike riders).

    BikeSharing360 (Enterprise)

    New York, Seattle, and more coming soon!

    • Manage our business with intelligence
    • Own fleets of smart bikes we can track with IoT devices
    • Go mobile and get bike maintenance reports
    • Intelligent kiosks with face and speech recognition to help customers rent bikes easily
    • Intelligent customer service: AI-assisted customer service through bots

    Bike Riders (Consumer)

    • Go mobile! Go green! Save time, money & have fun!
    • Find and rent bikes and manage your rides
    • My rides: Discover and track your routes
    • Get personalized recommendations for events
    • Issues on the road? Chat with the BikeSharing360 bot, your customer service personal assistant

    BikeSharing360 Suite of Apps

    We want you to be inspired and learn how to use multiple tools, products, and our Microsoft application platform capabilities to unleash your productivity, help transform your businesses, and build deeply personalized apps for your customers.

    We built a suite of apps for the BikeSharing360 enterprise and bike riders. The following diagram provides a high-level overview of the apps we built:

    Watch the demos in action and download the code

    This time we are releasing multiple demo projects, split into seven different demo repos, now available on GitHub:

    Websites

    BikeSharing360: Websites on GitHub

    • Web Apps focused on bike rentals and corporate users
    • BikeSharing360 Public Web Site (MVC)
    • BikeSharing360 Public Web Site (ASP.NET Core)
    • BikeSharing360 Private Web Site (ASP.NET Core 1.1)

    Mobile apps

    BikeSharing360: Mobile apps on GitHub

    • BikeRider: Native mobile apps using Xamarin Forms for iOS, Android and UWP
    • Maintenance: Cordova cross-platform mobile app

    Watch demos in action:

    Backend services

    BikeSharing360: Backend services on GitHub

    • Backend microservices used in various Connect() demos (mainly in the Xamarin apps).
    • Azure Functions

    Watch demos in action:

    Single container apps

    BikeSharing360: Single container app on GitHub

    • Single Container App: Existing marketing site and publish to Azure App Service running Linux Docker Containers

    Watch demos in action:

    Multi container apps

    BikeSharing360: Multi container app on GitHub

    • Multi Container App: More complex app to demonstrate setting up Continuous Delivery with Visual Studio 2017 RC. The project was then deployed to Azure Container Services, through the Azure Container Registry.

    Watch demos in action

    • Watch Donovan Brown demo a single container app

    Cognitive Services kiosk app

    BikeSharing360: Cognitive Services kiosk app on GitHub

    • UWP Intelligent Kiosk with Cognitive Services (Face recognition API, Voice recognition)

    Watch demos in action:

    Bot app

    BikeSharing360: Bot app on GitHub

    • BikeSharing360 Intelligent Bot: Customer Services integrated with Language Understanding Intelligent Service (LUIS)

    Watch demos in action:

    You can also watch this Visual Studio Toolbox episode for an E2E overview of the BikeSharing360 demo apps:

    Even more demos from Connect();!

    Here are a few of our tooling demos showing the latest improvements on our Visual Studio family of products:

    It is a great time to be a developer. Create amazing apps and services that delight customers and build your business. With Microsoft’s intelligent Azure cloud, powerful data platform, and flexible developer tools, it is easier than ever to design, build, and manage breakthrough apps that work across platforms and devices.

    Enjoy BikeSharing360 from our demo team!

    Erika Ehrli Cabral, Senior Product Marketing Manager, Cloud Apps Dev and Data. @erikaehrli1

    Erika has been at Microsoft for over 12 years, working first in Microsoft Consulting and enjoying later on different roles where she created content and code samples for developers. In her current role, she is now focused on executive keynote demos and Visual Studio and Azure product marketing.

    Writing Declaration Files for @types


    A while back we talked about how TypeScript 2.0 made it easier to grab declaration files for your favorite library. Declaration files, if you’re not familiar, are just files that describe the shape of an existing JavaScript codebase to TypeScript. By using declaration files (also called .d.ts files), you can avoid misusing libraries and get things like completions in your editor.

    As a recap of that previous blog post, if you’re using an npm package named foo-bar and it doesn’t ship any .d.ts files, you can just run

    npm install -S @types/foo-bar

    and things will just work from there.

    But you might have asked yourself things like “where do these ‘at-types’ packages come from?” or “how do I update the .d.ts files I get from it?”. We’re going to try to answer those very questions.

    DefinitelyTyped

    The simple answer to where our @types packages come from is DefinitelyTyped. DefinitelyTyped is just a simple repository on GitHub that hosts TypeScript declaration files for all your favorite packages. The project is community-driven, but supported by the TypeScript team as well. That means that anyone can help out or contribute new declarations at any time.

    Authoring New Declarations

    Let’s say that we want to create declaration files for our favorite library. First, we’ll need to fork DefinitelyTyped, clone our fork, and create a new branch.

    git clone https://github.com/YOUR_USERNAME_HERE/DefinitelyTyped
    cd DefinitelyTyped
    git checkout -b my-favorite-library

    Next, we can run an npm install and create a new package using the new-package npm script.

    npm install
    npm run new-package my-favorite-library

    For whatever library you use, my-favorite-library should be replaced with the verbatim name that it was published with on npm.
    If for some reason the package doesn’t exist in npm, mention this in the pull request you send later on.

    The new-package script should create a new folder named my-favorite-library with the following files:

    • index.d.ts
    • my-favorite-library-tests.ts
    • tsconfig.json
    • tslint.json

    Finally, we can get started writing our declaration files. First, fix up the comments for index.d.ts by adding the library’s MAJOR.MINOR version, the project URL, and your username. Then, start describing your library. Here’s what my-favorite-library/index.d.ts might look like:

    // Type definitions for my-favorite-library x.x
    // Project: https://github.com/my-favorite-library-author/my-favorite-library
    // Definitions by: Your Name Here
    // Definitions: https://github.com/DefinitelyTyped/DefinitelyTyped

    export function getPerpetualEnergy(): any[];

    export function endWorldHunger(n: boolean): void;

    Notice we wrote this as a module – a file that contains explicit imports and exports. We’re intending to import this library through a module loader of some sort, using Node’s require() function, AMD’s define function, etc.

    Now, this library might have been written using the UMD pattern, meaning that it could either be imported or used as a global. This is rare in libraries for Node, but common in front-end code where you might use your library by including a <script> tag. So in this example, if my-favorite-library is accessible as the global MyFavoriteLibrary, we can tell TypeScript that with this one-liner:

    export as namespace MyFavoriteLibrary;

    So the body of our declaration file should end up looking like this:

    // Our exports:
    export function getPerpetualEnergy(): any[];
    export function endWorldHunger(n: boolean): void;

    // Make this available as a global for non-module code.
    export as namespace MyFavoriteLibrary;

    Finally, we can add tests for this package in my-favorite-library/my-favorite-library-tests.ts:

    import * as lib from "my-favorite-library";

    const energy = lib.getPerpetualEnergy()[14];

    lib.endWorldHunger(true);

    And that’s it. We can then commit, push our changes to GitHub…

    git add ./my-favorite-library
    git commit -m "Added declarations for 'my-favorite-library'."
    git push -u origin my-favorite-library

    …and send a pull request to the master branch on DefinitelyTyped.

    Once our change is pulled in by a maintainer, it should be automatically published to npm and available. The published version number will depend on the major/minor version numbers you specified in the header comments of index.d.ts.

    Sending Fixes

    Sometimes we might find ourselves wanting to update a declaration file as well. For instance, let’s say we want to fix up getPerpetualEnergy to return an array of booleans.

    In that case, the process is pretty similar. We can simply fork & clone DefinitelyTyped as described above, check out the master branch, and create a branch from there.

    git clone https://github.com/YOUR_USERNAME_HERE/DefinitelyTyped
    git checkout -b fix-fav-library-return-type

    Then we can fix up our library’s declaration.

    - export function getPerpetualEnergy(): any[];
    + export function getPerpetualEnergy(): boolean[];

    And fix up my-favorite-library’s test file to make sure our change can be verified:

    import * as lib from "my-favorite-library";

    // Notice we added a type annotation to 'energy' so TypeScript could check it for us.
    const energy: boolean = lib.getPerpetualEnergy()[14];

    lib.endWorldHunger(true);

    Dependency Management

    Many packages in the @types repo will end up depending on other type declaration packages. For instance, the declarations for react-dom will import react. By default, writing a declaration file that imports any library in DefinitelyTyped will automatically create a dependency for the latest version of that library.

    If you want to snap to some version, you can make an explicit package.json for the package you’re working in, and fill in the list of dependencies explicitly. For instance, the declarations for leaflet-draw depend on the @types/leaflet package. Similarly, the Twix declarations package has a dependency on moment itself (since Moment 2.14.0 now ships with declaration files).
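As an illustrative sketch (the version range here is made up), such a package.json might contain only a dependencies block:

```json
{
    "dependencies": {
        "@types/leaflet": "*"
    }
}
```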

    As a note, only the dependencies field of package.json is necessary, as the DefinitelyTyped infrastructure will provide the rest.

    Quicker Scaffolding with dts-gen

    We realize that for some packages, writing out every function in the API can be a pain. That’s why we wrote dts-gen, a neat tool that can quickly scaffold out declaration files. For APIs that are fairly straightforward, dts-gen can get the job done.

    For instance, if we wanted to create declaration files for the array-uniq package, we could use dts-gen instead of DefinitelyTyped’s new-package script. We can try this out by installing dts-gen:

    npm install -g dts-gen

    and then creating the package in our DefinitelyTyped clone:

    cd ./DefinitelyTyped
    npm install array-uniq
    
    dts-gen -d -m array-uniq

    The -d flag will create a folder structure like DefinitelyTyped’s new-package script. You can peek in and see that dts-gen figured out the basic structure on its own:

    export = array_uniq;

    declare function array_uniq(arr: any): any;

    You can even try this out with something like TypeScript itself!

    Keep in mind dts-gen doesn’t figure out everything – for example, it typically substitutes parameter and return values as any, and can’t figure out which parameters are optional. It’s up to you to make a quality declaration file, but we’re hoping dts-gen can help bootstrap that process a little better.

    dts-gen is still in early experimental stages, but it’s on GitHub and we’re looking for feedback and contributions!

    A Note About Typings, tsd, and DefinitelyTyped Branches

    If you’re not using tools like tsd or Typings, you can probably skip this section. If you’ve sent pull requests to DefinitelyTyped recently, you might have heard about a branch on DefinitelyTyped called types-2.0. The types-2.0 branch existed so that infrastructure for @types packages wouldn’t interfere with other tools.

    However, this was a source of confusion for new contributors and so we’ve merged types-2.0 with master. The short story is that all new packages should be sent to the master branch, which now must be structured for TypeScript 2.0+ libraries.

    Tools like tsd and Typings will continue to install existing packages that are locked on specific revisions.

    Next Steps

    Our team wants to make it easier for our community to use TypeScript and help out on DefinitelyTyped. Currently we have our guide on Publishing, but going forward we’d like to cover more of this information on our website proper.

    We’d also like to hear about resources you’d like to see improved, and information that isn’t obvious to you, so feel free to leave your feedback below.

    Hope to see you on DefinitelyTyped. Happy hacking!

    JBoss and WildFly extension for Visual Studio Team Services


    We are pleased to announce the new JBoss and WildFly extension available from the Visual Studio Marketplace for Visual Studio Team Services / Team Foundation Server.

    This extension provides a task to deploy your Java applications to an instance of JBoss Enterprise Application Platform (EAP) 7 or WildFly Application Server 8 and above over the HTTP management interface.  It also includes a utility to run CLI commands as part of your build/release process.  Check out this video for a demo.

    screenshot

    This extension is open sourced on GitHub so reach out to us with any suggestions or issues.  We welcome contributions.

    To learn more about how to deploy to legacy JBoss EAP 6, please refer to this guide.

    To learn more about Java and cross platform support in Visual Studio Team Services and Team Foundation Server, visit http://java.visualstudio.com or follow us on Twitter @JavaALM.


    Playing with an Onion Omega IoT to show live Blood Sugar on an OLED screen


    I've been playing with IoT stuff on my vacation. Today I'm looking at an Onion Omega. This is a US$19 computer that you can program with Python, Node.js, or C/C++. There's a current IndieGogo happening for the Onion Omega2 for $5. That's a $5 Linux computer with Wi-Fi. Realistically you'd want to spend more and get expansion docks, chargers, batteries, etc, but you get the idea. I got the original Omega along with the Bluetooth dongle, Arduino-compatible base, and a tiny OLED screen. A ton of stuff to play with for less than $100.

    Note that I am not affiliated with Onion at all and I paid for it with my own money, to use for fun.

    One of the most striking things about the Onion Omega line is how polished it is. There are lots of tiny Linux machines that basically drop you at the command line and say "OK, SSH in and here's root." The Onion Omega is far more polished.

    Onion Omega has a very polished Web UI

    The Omega can do that for you, but if you have Bonjour installed (for zeroconf networking) and can SSH in once to setup Wi-Fi, you're able to access this lovely web-based interface.

    Look at all the info about the Omega's memory, networking, device status, and more

    This clean, local web server and useful UI makes the Onion Omega extremely useful as a teaching tool. The Particle line of IoT products has a similarly polished web interface, but while the Onion uses a local web server and app, the Particle Photon uses a cloud-based app that bounces down to a local administrative interface on the device. There are arguments for each, but I remain impressed with how easy it was for me to update the firmware on the Omega and get a new experience. Additionally, I made a few mistakes and "bricked" it, and was able - just by following some basic instructions - to totally reflash and reset it to the defaults in about 10 minutes. Impressive docs for an impressive product.

    image

    Onion Omega based Glucose Display via NightScout

    So it's a cool product, but how quickly can I do something trivial, but useful? Well, I have a NightScout open source diabetes management server with an API that lets me see my blood sugar. The resulting JSON looks like this:

    [  
    {
    "_id":"5851b235b8d1fea108df8b",
    "sgv":135,
    "date":1481748935000,
    "dateString":"2016-12-14T20:55:35.000Z",
    "trend":4,
    "direction":"Flat",
    "device":"share2",
    "type":"sgv"
    }
    ]

    That number under "sgv" (serum glucose value) is 135 mg/dl. That's my blood sugar right now. I could get n values back from the Web API and plot a chart, but baby steps. Note also the "direction" for my sugars is "flat." It's neither rising nor falling in any major way.

    Let's add the OLED Display to the Onion Omega and show my sugars. Since it's an OpenWRT Linux machine, I can just add Python!

    opkg update
    opkg install python

    Some may (and will) argue that for a small IoT system, Linux is totally overkill. Sure, it likely is. But it's also very productive, fun to prototype with, and functional. Were I to go to market for real, I'd likely use something more hardened.

    As I said, I could SSH into the machine but since the Web UI is so nice, it includes an HTML-based terminal!

    A Terminal built in!

    The Onion Omega includes not just libraries for expansions like the OLED Display, but also command-line utilities. This script clears the display, initializes it, and displays some text. The value of that text will come from my yet-to-be-written Python script.

    #!/bin/sh    

    oled-exp -c

    VAR=$(python ./sugar_script.py)

    oled-exp -i
    oled-exp write "$VAR"

    Then in my Python script I could just print the value. OR, I can use the Python module for this OLED screen directly and do this:

    #!/usr/bin/env python                                                                                                        

    from OmegaExpansion import oledExp
    import urllib
    import json

    site="https://hanselsugars.azurewebsites.net/api/v1/entries/sgv.json?count=1"
    jfile=urllib.urlopen(site)
    jsfile=jfile.read()
    jsfile=jsfile.replace("\n","")
    jsfile=jsfile.replace("/","")
    jsfile=jsfile.replace("]","")
    jsfile=jsfile.replace("[","")

    a=json.loads(jsfile)
    sugar=a['sgv']
    direction=a['direction']
    info="\n" + str(sugar)+" mg/dl and "+direction

    oledExp.driverInit()
    oledExp.clear()
    oledExp.write(info)

    Now here's a pic of my live blood sugar on the Onion Omega with the OLED! I could set this to run on a timer and I'm off to the races.

    Photo Dec 14, 2 16 27 PM

    The next step might be to clean up the output, parse the date better, and perhaps even dynamically generate a sparkline and display the graphic on the small B&W OLED Screen.
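As a quick sketch of that sparkline idea (assuming you have fetched a list of recent sgv readings by raising the count parameter on the API; the readings below are made up), you could bucket each value into a block character and write the resulting string to the OLED:

```python
# Sketch: render recent glucose values (mg/dl) as a text sparkline.
# Each value is bucketed into one of eight block characters according
# to its position between the minimum and maximum observed readings.
BARS = "▁▂▃▄▅▆▇█"

def sparkline(values):
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1  # avoid division by zero when all values are equal
    return "".join(BARS[int((v - lo) / span * (len(BARS) - 1))] for v in values)

# Hypothetical readings; in practice these would come from the NightScout API.
readings = [110, 118, 126, 135, 131, 128, 135]
print(sparkline(readings))
```

The resulting string could then be passed to oledExp.write() just like the text in the script above.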

    Have you used a small Linux IoT device like the Onion Omega?


    Sponsor: Do you deploy the same application multiple times for each of your end customers? The team at Octopus have taken the pain out of multi-tenant deployments. Check out their latest 3.4 release



    © 2016 Scott Hanselman. All rights reserved.
         

    Code Style Configuration in the VS2017 RC Update


    Fighting like Cats and Dogs

    Visual Studio 2017 RC introduced code style enforcement and EditorConfig support. We are excited to announce that the update includes more code style rules and allows developers to configure code style via EditorConfig.

    What is EditorConfig?

    EditorConfig is an open source file format that helps developers configure and enforce formatting and code style conventions to achieve consistent, more readable codebases. EditorConfig files are easily checked into source control and are applied at repository and project levels. EditorConfig conventions override their equivalents in your personal settings, such that the conventions of the codebase take precedence over the individual developer.

    The simplicity and universality of EditorConfig make it an attractive choice for team-based code style settings in Visual Studio (and beyond!). We’re excited to work with the EditorConfig community to add support in Visual Studio and extend their format to include .NET code style settings.

    EditorConfig with .NET Code Style

    In VS2017 RC, developers could globally configure their personal preferences for code style in Visual Studio via Tools>Options. In the update, you can now configure your coding conventions in an EditorConfig file and have any rule violations get caught live in the editor as you type. This means that now, no matter what side you’re on in The Code Style Debate, you can choose what conventions you feel are best for any portion of your codebase—whether it be a whole solution or just a legacy section that you don’t want to change the conventions for—and enforce your conventions live in the editor. To demonstrate the ins-and-outs of this feature, let’s walk through how we updated the Roslyn repo to use EditorConfig.

    Getting Started

    The Roslyn repo by-and-large uses the style outlined in the .NET Foundation Coding Guidelines. Configuring these rules inside an EditorConfig file will allow developers to catch their coding convention violations as they type rather than in the code review process.

    To define code style and formatting settings for an entire repo, simply add an .editorconfig file in your top-level directory. To establish these rules as the “root” settings, add the following to your .editorconfig (you can do this in your editor/IDE of choice):
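A minimal sketch of that top-level marker (root is a standard EditorConfig property):

```ini
# Top-level .editorconfig: stop searching in parent directories.
root = true
```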

    EditorConfig settings are applied from the top-down with overrides, meaning you describe a broad policy at the top and override it further down in your directory tree as needed. In the Roslyn repo, the files in the Compilers directory do not use var, so we can just create another EditorConfig file that contains different settings for the var preferences, and these rules will only be enforced on the files in that directory. Note that when we create this EditorConfig file in the Compilers directory, we do not want to add root = true (this allows us to inherit the rules from a parent directory, or in this case, the top-level Roslyn directory).
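As an illustrative sketch (assuming the var rule names used in the Roslyn .editorconfig), the override file in src/Compilers might contain just:

```ini
# src/Compilers/.editorconfig - no "root = true" here, so parent settings still apply.
[*.cs]
csharp_style_var_for_built_in_types = false:none
csharp_style_var_when_type_is_apparent = false:none
csharp_style_var_elsewhere = false:none
```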

    EditorConfig File Hierarchy

    Figure 1. Rules defined in the top-most EditorConfig file will apply to all projects in the “src” directory except for the rules that are overridden by the EditorConfig file in “src/Compilers”.

    Code Formatting Rules

    Now that we have our EditorConfig files in our directories, we can start to define some rules. There are seven formatting rules that are commonly supported via EditorConfig in editors and IDEs: indent_style, indent_size, tab_width, end_of_line, charset, trim_trailing_whitespace, and insert_final_newline. As of VS2017 RC, only the first five formatting rules are supported. To add a formatting rule, specify the type(s) of files you want the rule to apply to and then define your rules, for example:
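As a sketch, a section applying the five supported formatting rules to C# and Visual Basic files might look like this (the values are illustrative, not a recommendation):

```ini
# Apply to all C# and VB files.
[*.{cs,vb}]
indent_style = space
indent_size = 4
tab_width = 4
end_of_line = crlf
charset = utf-8
```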

    Code Style Rules

    After reaching out to the EditorConfig community, we’ve extended the file format to support .NET code style. We have also expanded the set of coding conventions that can be configured and enforced to include rules such as preferring collection initializers, expression-bodied members, C# 7 pattern matching over cast and null checks, and many more!

    Let’s walk through an example of how coding conventions can be defined:
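For instance (the severity here is chosen arbitrarily; the rule itself is discussed next):

```ini
[*.cs]
csharp_style_var_for_built_in_types = true:suggestion
```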

    The left side is the name of the rule, in this case “csharp_style_var_for_built_in_types”. The right side indicates the rule settings: preference and enforcement level, respectively.

    • A preference setting can be either true (meaning, “prefer this rule”) or false (meaning, “do not prefer this rule”).
    • The enforcement level is the same for all Roslyn-based code analysis and can be, from least severe to most severe: none, suggestion, warning, or error.

    Ultimately, your build will break if you violate a rule that is enforced at the error severity level (however, this is not yet supported in the RC). To see all code style rules available in the VS2017 RC update and the final Roslyn code style rules, see the Roslyn .editorconfig or check out our documentation.

    If you need a refresher on the different severity levels and what they do, see below:

    Table of code analysis severity levels

    Pro-tip: The gray dots that indicate a suggestion are rather drab. To spice up your life, try changing them to a pleasant pink. To do so, go to Tools>Options>Environment>Fonts and Colors>Suggestion ellipses (…) and give the setting the following custom color (R:255, G:136, B:196):


    Experience in Visual Studio

    When you add an EditorConfig file to an existing repo or project, the files are not automatically cleaned up to conform to your conventions. You must also close and reopen any open files you have when you add or edit the EditorConfig file to have the new settings apply. To make an entire document adhere to code formatting rules defined in your settings, you can use Format Document (Ctrl+K,D). This one-click cleanup does not exist yet for code style, but you can use the Quick Actions menu (Ctrl+.) to apply a code style fix to all occurrences in your document/project/solution.

    Fix all violations of a code style rule

    Figure 2. Rules set in EditorConfig files apply to generated code and code fixes can be applied to all occurrences in the document, project, or solution.

    Pro Tip: To verify that your document is using spaces vs tabs, enable Edit>Advanced>View White Space.

    How do you know if an EditorConfig file is applied to your document? You should be able to look at the bottom status bar of Visual Studio and see this message:

    Visual Studio status bar

    Note that this means EditorConfig files override any code style settings you have configured in Tools>Options.

    Conclusion

    Visual Studio 2017 RC is just a stepping stone in the coding convention configuration and enforcement experience. To read more about EditorConfig support in Visual Studio 2017, check out our documentation. Download the VS2017 RC with the update to test out .NET code style in EditorConfig and let us know what you think!

    Over ‘n’ out,

    Kasey Uhlenhuth, Program Manager, .NET Managed Languages

    Known Issues

    • Code style configuration and enforcement only works inside the Visual Studio 2017 RC update at this time. Once we make all the code style rules into a separate NuGet package, you will be able to enforce these rules in your CI systems as well as have rules that are enforced as errors break your build if violated.
    • You must close and reopen any open files to have EditorConfig settings apply once it is added or edited.
    • Only indent_style, indent_size, tab_width, end_of_line, and charset are supported code formatting rules in Visual Studio 2017 RC.
    • IntelliSense and syntax highlighting are “in-progress” for EditorConfig files in Visual Studio right now. In the meantime, you can use MadsK’s VS extension for this support.
    • Visual Basic-specific rules are not currently supported in EditorConfig beyond the ones that are covered by the dotnet_style_* group.
    • Custom naming convention support is not yet supported with EditorConfig, but you can still use the rules available in Tools>Options>Text Editor>C#>Code Style>Naming. View our progress on this feature on the Roslyn repo.
    • There is no way to make a document adhere to all code style rules with a one-click cleanup (yet!).

    GameAnalytics SDK for Microsoft UWP Released


    We’re excited to announce our partnership with GameAnalytics, a powerful tool that helps developers understand player behavior so they can improve engagement, reduce churn and increase monetization.

    The tool gives game developers a central platform that consolidates player data from various channels to help visualize their core gaming KPIs in one convenient view. It also enables team members to collaborate with reporting and benchmark their game to see how it compares with more than 10,000 similar titles.

    You can set up GameAnalytics in a few minutes and it’s totally free of charge, without any caps on usage or premium subscription tiers. If you’d rather see the platform in action before making any technical changes, just sign up to view the demo game and data.

    GameAnalytics is used by more than 30,000 game developers worldwide and handles over five billion unique events every day across 1.7 billion devices.

    “I believe the single most valuable asset for any game developer in today’s market is knowledge,” said GameAnalytics Founder and Chairman, Morten E Wulff. “Since I started GameAnalytics back in 2012, I’ve met with hundreds of game studios from all over the world, and every single one is struggling with increasing user acquisition costs and falling retention rates.”

    “When they do strike gold, they don’t always know why. GameAnalytics is here to change that. To be successful, game studios will have to combine creative excellence with a data-driven approach to development and monetization. We are here to bridge this gap and make it available to everyone for free,” he added.

    GameAnalytics provides SDKs for every major game engine. The following guide will outline how to install the SDK and setup GameAnalytics to start tracking player behavior in four steps.

    1.  Create a free GameAnalytics account

    To get started, sign up for a free GameAnalytics account and add your first game. When you’ve created your game, you’ll find the integration keys in the settings menu (the gear icon), under “Game information.” You’ll need to copy your Game Key and Secret Key for the following steps.

    2.  Download the standalone SDK for Microsoft UWP

    Next, download the GameAnalytics SDK for Microsoft UWP. Once downloaded, you can begin the installation process.

    3.  Install the native UWP SDK

    To install the GameAnalytics SDK for Microsoft UWP, simply add the GameAnalytics.UWP.SDK package from the NuGet package manager. For manual installation, use the following instructions:

    Manual installation

    • Open GA-SDK-UWP.sln and compile the GA_SDK_UWP project
    • Create a NuGet package: nuget pack GA_SDK_UWP/GA_SDK_UWP.nuspec
    • Copy the resulting GameAnalytics.UWP.SDK.[VERSION].nupkg (where [VERSION] is the version specified in the .nuspec file) into, for example, C:\Nuget.Local (the name and location of the folder is up to you)
    • Add C:\Nuget.Local (or whatever you called the folder) to the NuGet package sources (and disable the official NuGet source)
    • Add the GameAnalytics.UWP.SDK package from the NuGet package manager

    4.  Initialize the integration

    Call this method to initialize using the Game Key and Secret Key for your game (copied in step 1):

    
    // Initialize
    GameAnalytics.Initialize("[game key]", "[secret key]");
    
    

    Below is a practical example of code that is called at the beginning of the game to initialize GameAnalytics:

    
    using GameAnalyticsSDK.Net;
    
    namespace MyGame
    {
        public class MyGameClass
        {
            // ... other code from your project ...
            void OnStart()
            {
                GameAnalytics.SetEnabledInfoLog(true);
                GameAnalytics.SetEnabledVerboseLog(true);
                GameAnalytics.ConfigureBuild("0.10");
    
                GameAnalytics.ConfigureAvailableResourceCurrencies("gems", "gold");
                GameAnalytics.ConfigureAvailableResourceItemTypes("boost", "lives");
                GameAnalytics.ConfigureAvailableCustomDimensions01("ninja", "samurai");
            GameAnalytics.ConfigureAvailableCustomDimensions02("whale", "dolphin");
                GameAnalytics.ConfigureAvailableCustomDimensions03("horde", "alliance");
                GameAnalytics.Initialize("[game key]", "[secret key]");
            }
        }
    }
    
    

    5.  Build to your game engine

    GameAnalytics has provided full documentation for each game engine and platform. You can view and download all files via their Github page, or follow the steps below. They currently support building to the following game engines with Microsoft UWP:

    You can also connect to the service using their Rest API.

    Viewing your game data

    Once implemented, GameAnalytics provides insight into more than 50 of the top gaming KPIs, straight out of the box. Many of these metrics are viewable on a real-time dashboard to get a quick overview into the health of your game throughout the day.

    The real-time dashboard gives you visual insight into your number of concurrent users, incoming events, new users, returning users, transactions, total revenue, first time revenue and error logs.

    Creating custom events

    You can create your own custom events with unique IDs, which allow you to track actions specific to your game experience and measure these findings within the GameAnalytics interface. Event IDs are fully customizable and should fall within one of the following event types:

    • Business: In-App Purchases supporting receipt validation on GA servers.
    • Resource: Managing the flow of virtual currencies – like gems or lives.
    • Progression: Level attempts with Start, Fail & Complete events.
    • Error: Submit exception stack traces or custom error messages.
    • Design: Submit custom event IDs. Useful for tracking metrics specifically needed for your game.

    For more information about planning and implementing each of these event types to suit your game, visit the game analytics data and events page.
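For illustration, here is roughly what sending one event of each type looks like in C#. The method names follow the GameAnalytics .NET SDK samples, but treat the exact signatures and parameter values as assumptions and check the SDK documentation:

```csharp
using GameAnalyticsSDK.Net;

// Business: a purchase of 99 cents of USD for a "boost" item, bought from the shop
GameAnalytics.AddBusinessEvent("USD", 99, "boost", "mega_boost", "shop");

// Resource: the player spent ("Sink") 100 gems on a boost
GameAnalytics.AddResourceEvent(EGAResourceFlowType.Sink, "gems", 100, "boost", "mega_boost");

// Progression: the player started level 1 of world 1
GameAnalytics.AddProgressionEvent(EGAProgressionStatus.Start, "world01", "level01");

// Error: report a custom error message with a severity
GameAnalytics.AddErrorEvent(EGAErrorSeverity.Error, "Failed to load save file");

// Design: a fully custom event ID, colon-separated by convention
GameAnalytics.AddDesignEvent("Tutorial:Step03:Completed");
```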

    GameAnalytics Dashboards

    Developers using GameAnalytics can track their events in a selection of dashboards tailored specifically to games. The dashboards are powerful, yet totally flexible to suit any use case.

    Overview Dashboard

    With this dashboard you will see a quick snapshot of your core game KPIs.

    Acquisition Dashboard

    This dashboard provides insight into your player acquisition costs and best marketing sources.

    Engagement

    This dashboard helps to measure how engaged your players are over time.

    Monetization

    This dashboard visualizes all of the monetization metrics relating to your game.

    Progression

    This dashboard helps you understand where players grind or drop off in your game.

    Resources

    This dashboard helps you balance the flow of “sink” and “gain” resources in your game economy.

    You can find a more detailed overview for each dashboard on the GameAnalytics documentation portal.

    The post GameAnalytics SDK for Microsoft UWP Released appeared first on Building Apps for Windows.

    Bing Location Control helps devs add location to the conversation


    Bots often need the user to input a location to complete a task. And normally bot developers need to use a combination of location or place APIs, and have their bots engage in a multi-turn dialog with users to get their desired location and subsequently validate it. The development steps are usually complicated and error-prone.

    As announced on the Bot Framework blog, the open source Bing Location Control for Bot Framework allows bot developers to easily and reliably get the user’s desired location within a conversation. The control is available in C# and Node.js and works consistently across all messaging channels supported by Bot Framework. All this with a few lines of code. Read the full post on the Bing Developer blog.

    UWP Experiences – App Samples


    The UWP App Experiences are beautiful, cross-device, feature-rich and functional app samples built to demonstrate realistic app scenarios on the UWP platform across PC, Tablet, Xbox and more. Besides being open source on GitHub, each sample is accompanied by at least one blog post and short overview video, and will be published on the Windows Store in the upcoming month to provide easier access for developers.

    The News Experience

    ( source | blog post | video )

    Fourth Coffee is a news app that works across desktop, phone, and Xbox One, and offers a premium experience that takes advantage of each device’s strengths including tailored UI for each input modality such as controller on Xbox, touch on tablet and mouse on Desktop.

    The Weather Experience

    ( source | blog post | video )

    Atmosphere is a weather app that showcases the use of the popular Unity Engine to build beautiful UWP apps. In addition, the app implements UWP app extensions to enable other developers to extend certain areas of the app, as well as exposes an app service that enables other apps to use that weather information, as illustrated by Fourth Coffee.

    The Music Experience

    ( source | blog post | video )

    Backdrop is a cross-platform music app sharing code between UWP and other platforms using Xamarin. It supports background audio on UWP devices and cross-platform device collaboration using SignalR.

    The Video Experience

    ( source | blog post | video )

    South Ridge Video is a hosted web application built with React.js and hosted on a web server. The app can easily be converted to a UWP application that takes advantage of native platform capabilities, and can be distributed through the Windows Store as with any other UWP app.

    The IoT Experience

    ( source | blog post | video )

    Best For You is a fitness UWP app focused on collecting data from an IoT device using Windows IoT Core, Azure IoT Hub, Azure Event Hub and Azure Stream Analytics for processing.

    The Social Experience

    ( source )

    Adventure Works is a cross-device UWP application for sharing adventures and experiences with fictional friends. It is separated into three parts:

    About the samples

    These samples have been built and designed for multiple UWP devices and scenarios in mind from the start and are meant to showcase end to end solutions. Any developer can take advantage of these samples regardless of the device type or features they are targeting, and we are looking forward to hearing about your experience on the official GitHub repository.

    Happy coding!

    The post UWP Experiences – App Samples appeared first on Building Apps for Windows.

    Bing 2016 End Of Year





    Bing users made billions of searches in 2016. Those searches often reveal the year’s most powerful moments, telling stories of heartbreak, LOLs, and of trends that seized the world’s imagination.
     
    The data clearly shows us one thing: people spent much of the year searching topics they love. And that’s what the Bing team wants to recognize as 2016 comes to an end–the bright snapshots in time, such as the year’s top searched video games or the top searched celebrities, while also pointing toward an exciting tomorrow, including top anticipated movies and top searched cars of 2017.
     
    No matter what the new year brings, we look forward to helping you on your quest for knowledge, news, and information. Discover more here.
     
    Happy New Year from your friends at Bing.

    -The Bing Team
     

    ICYMI – Cortana Skills Kit, Adobe XD, Surface Dial and the GameAnalytics SDK


    No time for an intro, we want to jump right into this.

    The New Cortana Skills Kit

    Excited doesn’t even begin to describe the glass case of emotion containing our feelings toward the new Cortana Skills Kit announced this week. In addition to preparing developers to reach millions of new users, the Cortana Skills Kit will also help developers leverage bots and personalize apps to specific users.

    Adobe XD

    Many creatives use tools like Photoshop, Premiere and the entire Adobe Creative Cloud on a daily basis to mock up and prototype a wide range of assets. Now, UWP developers can tap into the prototyping power of the Adobe Creative Cloud with Adobe Experience Design (Adobe XD).

    Surface Dial for Devs

    The Surface Dial is an amazing new input method, and in our latest Surface Dial blog post, we show how developers can either add it to their toolkit or develop apps with the Surface Dial in mind.

    GameAnalytics SDK for UWP

    Get all of the information you need and see how your game is performing in one easy-to-digest dashboard. Even better, GameAnalytics is free. Click below to read more and get started.

    TL;DR: Go check out the Cortana Skills Kit, dial in the dev side of the Surface Dial, prototype your UWP app idea with Adobe XD, and check your game’s performance with the GameAnalytics SDK.

    Download Visual Studio to get started.

    The Windows team would love to hear your feedback.  Please keep the feedback coming using our Windows Developer UserVoice site. If you have a direct bug, please use the Windows Feedback tool built directly into Windows 10.

    The post ICYMI – Cortana Skills Kit, Adobe XD, Surface Dial and the GameAnalytics SDK appeared first on Building Apps for Windows.


    Free Intermediate ASP.NET Core 1.0 Training on Microsoft Virtual Academy


    At the end of October I announced that Maria from my team and I published a Microsoft Virtual Academy on ASP.NET Core. This Free ASP.NET Core 1.0 Training is up on Microsoft Virtual Academy now for you to watch and enjoy! I hope you like it, we worked very hard to bring it to you.

    Again, start with Introduction to ASP.NET Core 1.0, and explore this new technology even further in Intermediate ASP.NET Core 1.0.

    Intermediate ASP.NET Core 1.0

    Intermediate ASP.NET Core 1.0

    We've just launched Day 2, the Intermediate day. If the first is 100 level, this is 200 level. In this day, Jeff Fritz and I build on what we learned with Maria in Day 1. We're also joined by Rowan Miller and Maria later in the day as we explore topics like:

    • Tag Helpers
    • Authentication
    • Custom Middleware
    • Dependency Injection
    • Web APIs
    • Single Page Apps
    • Entity Framework Core and Database Access
    • Publishing and Deployment

    In a few weeks (maybe sooner) we'll publish Day 3 which we'll do exclusively on Macs and Linux machines. We'll talk about Cross-Platform concerns and Containers.

    NOTE: There's a LOT of quality free courseware for learning .NET Core and ASP.NET Core. We've put the best at http://asp.net/free-courses and I encourage you to check them out!

    Also, please help me out by adding a few stars there under Ratings. We're new. ;)


    Sponsor: Do you deploy the same application multiple times for each of your end customers? The team at Octopus have taken the pain out of multi-tenant deployments. Check out their latest 3.4 release


    © 2016 Scott Hanselman. All rights reserved.
         

    Connecting my Particle Photon Internet of Things device to the Azure IoT Hub


    Particle Photon connected to the cloud

    My vacation continues. Yesterday I had shoulder surgery (adhesive capsulitis release) so today I'm messing around with Azure IoT Hub. I had some devices on my desk - some of which I had never really gotten around to exploring - and I thought I'd see if I could accomplish something.

    I've got a Particle Photon here, as well as a Tessel 2, a LattePanda, Funduino, and Onion Omega. A few days ago I was able to get the Onion Omega to show my blood sugar on a small OLED screen, which was cool. Tonight I'm going to try to hook the Particle Photon up to the Azure IoT Hub for monitoring.

    The Photon is a tiny little device with Wi-Fi built-in. It's super easy to setup and it has a cloud-based IDE with tons of examples written in C and Node.js for you to use. Particle Photon also has a node.js based command line. From there you can list out your Photons, see their available functions, and even call functions over the internet! A hacker's delight, to be sure.

    Here's a standard "blink an LED" Hello world on a Photon. This one creates a cloud function called "led" and binds it to the "ledToggle" method. Those cloud methods take a string, so there's no enum for the on/off command.

    int led1 = D0;
    int led2 = D7;

    void setup() {
        pinMode(led1, OUTPUT);
        pinMode(led2, OUTPUT);
        Spark.function("led", ledToggle);
        digitalWrite(led1, LOW);
        digitalWrite(led2, LOW);
    }

    void loop() {
    }

    int ledToggle(String command) {
        if (command == "on") {
            digitalWrite(led1, HIGH);
            digitalWrite(led2, HIGH);
            return 1;
        }
        else if (command == "off") {
            digitalWrite(led1, LOW);
            digitalWrite(led2, LOW);
            return 0;
        }
        else {
            return -1;
        }
    }

    From the command line I can use the Particle command line interface (CLI) to enumerate my devices:

    C:\Users\scott>particle list
    hansel_photon [390039000647xxxxxxxxxxx] (Photon) is online
    Functions:
    int led(String args)

    See how it doesn't just enumerate devices, but also cloud methods that hang off devices? LOVE THIS.

    I can get a secret API Key from the Particle Photon's cloud based Console. Then using my Device ID and auth token I can call the method...with an HTTP request! How much easier could this be?

    C:\Users\scott\>curl https://api.particle.io/v1/devices/390039000647xxxxxxxxx/led -d access_token=31fa2e6f --insecure -d arg="on"
    {
    "id": "390039000647xxxxxxxxx",
    "last_app": "",
    "connected": true,
    "return_value": 1
    }

    At this moment the LED on the Particle Photon turns on. I'm going to change the code a little and add some telemetry using the Particle's online code editor.

    Editing Particle Photon Code online

    They've got a great online code editor, but I could also edit and compile the code locally:

    C:\Users\scott\Desktop>particle compile photon webconnected.ino

    Compiling code for photon

    Including:
    webconnected.ino
    attempting to compile firmware
    downloading binary from: /v1/binaries/5858b74667ddf87fb2a2df8f
    saving to: photon_firmware_1482209089877.bin
    Memory use:
    text data bss dec hex filename
    6156 12 1488 7656 1de8
    Compile succeeded.
    Saved firmware to: C:\Users\scott\Desktop\photon_firmware_1482209089877.bin

    I'll change the code to announce an "Event" when I turn on the LED.

    if (command == "on") {
        digitalWrite(led1, HIGH);
        digitalWrite(led2, HIGH);

        String data = "Amazing! Some Data would be here! The light is on.";
        Particle.publish("ledBlinked", data);

        return 1;
    }

    I can head back over to http://console.particle.io and see these events live on the web:

    Particle Photon's have great online charts

    Particle also supports integration with Google Cloud and Azure IoT Hub. Azure IoT Hub allows you to manage billions of devices and all their many billions of events. I just have a few, but we all have to start somewhere. ;)

    I created a free Azure IoT Hub in my Azure Account...

    Azure IoT Hub has charts and graphs built in

    And made a shared access policy for my Particle Devices.

    Be sure to set all the Access Policy Permissions you need

    Then I told Particle about Azure in their Integrations system.

    Particle has Azure IoT Hub integration built in

    The Azure IoT SDKS on GitHub at https://github.com/Azure/azure-iot-sdks/releases have both a Windows-based Azure IoT Explorer and a command-line one called IoT Hub Explorer.

    I logged in to the IoT Hub Explorer using the connection string from the Azure Portal:

    iothub-explorer login "HostName=HanselIoT.azure-devices.net;SharedAccessKeyName=particle-iot-hub;SharedAccessKey=rdWUVMXs="

    Then I'll run "iothub-explorer monitor-events" passing in the device ID and the connection string for the shared access policy. Monitor-events is cool because it'll hang and just output the events as they're flowing through the whole system.
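The shape of that command looks like this; the device ID is the one from `particle list` above, and the connection string placeholders stand in for the real shared access policy values:

```shell
# Stream device-to-cloud events as they arrive; Ctrl+C to stop
iothub-explorer monitor-events myPhotonDevice --login "HostName=HanselIoT.azure-devices.net;SharedAccessKeyName=particle-iot-hub;SharedAccessKey=<key>"
```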

    IoTHub-Explorer monitor-events command line

    So I'm able to call methods on the Particle using their cloud, and monitor events from within Azure IoT Hub. I can explore diagnostics data and query huge amounts of device-to-cloud data that would potentially flow in from my hardware devices.

    The IoT Hub Limits are very generous for free/hobbyist users as we learn to develop. I haven't paid anything yet. However, it can scale to thousands of messages a second per unit! That means millions of messages a second if you need it.

    I can definitely see how the value of an IoT Hub solution like this would add up quickly after you've got more than one device. Text files don't really scale. Even if I just IoT'ed up my house, it would be nice to have all that data flowing into a single hub I could manage and query securely.


    Sponsor: Big thanks to Telerik! They recently published a comprehensive whitepaper on The State of C#, discussing the history of C#, what’s new in C# 7 and whether C# is the top tech to know. Check it out!



    © 2016 Scott Hanselman. All rights reserved.
         

    Happy holidays 2016


    Today is my last day in the office for the year.  I just want to take a moment to say thank you to everyone who reads my blog and engages on TFS and VS Team Services.  It’s great working with all of you.

    Overall, it’s been a good year.  We shipped TFS 2015 Update 2 and 3 and TFS 2017.  We continued our tradition of substantial Team Services updates every sprint.  Overall service stability improved over the year (knock on wood), though we certainly had a few rough patches and learned how to improve from them.  As I reflect over it, I’m proud of what we have accomplished and I hope you all like it.

    Some of my favorite highlights from this year include:

    • The marketplace became real and now has hundreds of really useful extensions – for both Team Services and TFS.  New extensions are published almost every day.
    • Package management shipped with both NuGet and npm support and Maven on the way.
    • Code search shipped, making it easy to search your entire code base with semantic understanding for common languages.
    • The new release management service went from promising approach to really powerful release orchestration across almost any app type and platform.
    • The pull request experience is now really awesome!  Improved UX, policies, iterations and more.
    • Our Java support really became first class with improvements across the board – IDE integration, build and release tasks, test integration, Jenkins integration, etc.
    • Great improvements in our build reports – integrating test results, code coverage and more.
    • The web based test experience became a viable alternative to the rich client.
    • Good progress on the new work item customization experience.
    • Good progress on a UX refresh to improve consistency, modernize the look and reduce clicks and navigation overhead, along with lots of performance improvements.
    • A new import service to bring TFS project collections into Team Services and our first set of really large enterprise customers (other than ourselves) on the service and happy.

    And this is just a small sampling.  Check out our release timeline to see more.  I’m very much looking forward to 2017 being another great year.

    I wish you all happy holidays and a great beginning to the new year.  I look forward to chatting with you again in January.  I’ll monitor blog comments on and off over the next 10 days but expect I’ll be a bit slower than usual.

    Happy Holidays!

    Brian

    The week in .NET – .NET Core triage on On .NET, ShareX


    To read last week’s post, see The Year in .NET – Visual Studio 2017 RC and .NET Core updated, On .NET with Stephen Cleary and Luis Valencia, Ulterius, Inferno, Bastion, LoGeek Night.

    I might not post next week, for reasons you’ll surely understand, or I might do so one or two days late… Happy holidays!

    On .NET

    Last week, we had Karel Zikmund, Wes Haggard, and Immo Landwerth on the show to talk about .NET Core triage and project management:

    Because it’s the holiday season, we won’t have a live show until next year, but I will publish more of the short interviews that we recorded at the MVP Summit, so stay tuned for new videos on Channel 9 and YouTube.

    App of the week: ShareX

    ShareX is an astonishingly complete screen capture tool that supports both still and video capture, and upload to popular file sharing services. The best thing about it however is that it’s free, open source, and written in C#.


    .NET

    ASP.NET

    F#

    Check out the F# Advent Calendar for loads of great F# blog posts for the month of December.

    Check out F# Weekly for more great content from the F# community.

    Xamarin

    Azure

    Data

    And this is it for this week!

    Contribute to the week in .NET

    As always, this weekly post couldn’t exist without community contributions, and I’d like to thank all those who sent links and tips. The F# section is provided by Phillip Carter, the gaming section by Stacey Haffner, and the Xamarin section by Dan Rigby.

    You can participate too. Did you write a great blog post, or just read one? Do you want everyone to know about an amazing new contribution or a useful library? Did you make or play a great game built on .NET? We’d love to hear from you, and feature your contributions on future posts:

    This week’s post (and future posts) also contains news I first read on The ASP.NET Community Standup, on Weekly Xamarin, on F# weekly, and on Chris Alcock’s The Morning Brew.

    Create great looking, fast, mobile apps using JavaScript, Angular 2, and Ionic 2


     

    In a recent Visual Studio Toolbox episode, I highlighted some new Ionic 2 templates for use with the Visual Studio 2015 Tools for Apache Cordova (TACO). These new Ionic 2 RC templates are now available for you to try out and in this post I’ll talk you through what’s new.

    Download Ionic 2 templates on the Visual Studio Extension Gallery.

    What is Ionic?

    Ionic is a JavaScript framework that helps you build apps for Android, iOS, and Windows 10 devices using the design patterns of the native operating system. Ionic builds on top of Apache Cordova, an open source framework that helps you build mobile apps using HTML, JavaScript, and CSS. These mobile apps have access to native device capabilities, like the camera or accelerometer, and can be published through public or private app stores just like any native mobile app.

    Ionic provides HTML-based controls that go beyond the primitive UI of HTML; giving you a set of ready-made UI controls that are optimized for mobile devices. If you need to work with tabs, lists, navigation bars – Ionic has those for you. If you want micro-interactions, animations and buttery smooth scrolling – Ionic has those for you, too.


    Examples of how the Ionic UI controls adapt to each device. Taken from the Ionic Azure Conference app demo application.

    Ionic has been around since 2012 and over 3,000,000 apps have been built using it. Ionic 1 was built using Angular 1 and released in 2014. The Ionic team is now nearing the first release of Ionic 2, built on top of Angular 2.

    How do I build apps with Ionic?

    When you build an app with Ionic 2, you’re building an app using Angular 2. This means that you’ll use TypeScript as your programming language of choice, which brings several benefits. You’ll be able to:

    • Use future JavaScript features (like classes and modules) today instead of waiting for your target mobile platforms to support them. TypeScript compiles to simple, human-readable, JavaScript.
    • Commit code changes with higher confidence, thanks to type checking and refactoring tools that make sure your code is working the way you expect before and after a change.
    • Investigate bugs, or new code bases, more quickly thanks to high-confidence navigation tools like Find References or Peek.

    You’ll also work with Sass, a language to define styling for your apps. Sass provides valuable features for CSS development, like support for variables, and it compiles down to simple CSS files.

    When building your Ionic 2 application, you focus on building self-contained components which combine HTML & CSS templates for presentation with JavaScript/TypeScript for component logic. These components are then combined into pages – i.e. the individual screens of your application.

    For example, if you look at this default page for the Ionic 2 Blank template:

    [Image: ionic2blanktemplate]

    The following TypeScript defines the component that creates this page:

    import { Component } from '@angular/core';
    import { NavController } from 'ionic-angular';

    @Component({
      selector: 'page-home',
      templateUrl: 'home.html'
    })
    export class HomePage {
      constructor(public navCtrl: NavController) {
      }

      onLink(url: string) {
        window.open(url);
      }
    }

    The @Component decorator is used for the HomePage class to identify this as an Angular component and the templateUrl argument specifies the URL of the HTML for this component, which looks like:

    <ion-header>
      <ion-navbar>
        <ion-title>
          Ionic Blank
        </ion-title>
      </ion-navbar>
    </ion-header>

    <ion-content class="home" padding>
      <h2 class="center">Ionic 2 - Blank Starter</h2>

      <div class="center">
        <p class="ionic-logo"></p>
      </div>

      <ion-card>
        <ion-card-header>
          Docs are here to help you
        </ion-card-header>
        <ion-list>
          <button ion-item
                  (click)="onLink('http://go.microsoft.com/fwlink/?LinkID=820516')">
            <ion-icon name="jet" item-left></ion-icon>
            Getting Started
          </button>
        </ion-list>
      </ion-card>
    </ion-content>

    Finally, here’s a snippet of the Sass that defines the styling of this page:

    page-home {
      .center {
        text-align: center;
      }

      .ionic-logo {
        display: inline-flex;
        position: relative;
        width: 87px;
        height: 87px;
        border: 3.5px solid #5E86C4;
        border-radius: 100%;
        -moz-border-radius: 100%;
        -webkit-border-radius: 100%;
        -moz-animation: spin 2s infinite linear;
        -webkit-animation: spin 2s infinite linear;
      }

      /* ... */
    }

    When you build your project, Ionic takes care of compiling the TypeScript (.ts) files to JavaScript (.js) files, and Sass (.scss) files to CSS. It also places all the compiled files in the right places for Cordova to build your application and run it on a mobile device.

    Try out the Ionic 2 tutorial on the TACO documentation site to experience it for yourself.

    What’s new for Ionic 2 & TACO?

    Our Tools for Apache Cordova (TACO) team has been working closely with the Ionic team for years to make sure that Visual Studio developers have a first-class experience when using the Ionic framework and the TypeScript language. We previously released templates for beta versions of Ionic 2, and now we’ve refreshed these for the latest Ionic 2 RC 4 release.

    The new templates mirror the standard starter templates included in Ionic:

    • Blank
    • SideMenu
    • Tabs

    Each of these templates adapts to match the look and feel of the device you’re using, as you can see in this following comparison of the SideMenu template:

    [Image: ionicstartertemplateplatformcomparison]

    Getting started with Ionic 2 RC templates

    You can use these templates with Visual Studio 2015; Visual Studio 2017 RC isn’t supported yet, but we’re working on it. To use these with Visual Studio 2015, there are a few additional extensions you need to install:

    • TypeScript 2.0.6 editor – Adds support for version 2 of the TypeScript language, used by Ionic.
    • Microsoft ASP.NET and Web Tools – Among other changes for ASP.NET & web development, this adds support for new versions of NPM and the task runner explorer, used by the Ionic templates.
    • NPM Task Runner – Adds NPM script support to the Visual Studio task runner explorer, used to perform builds in Ionic.

    Finally, install the new Ionic 2 RC template extension, and then follow along with the Ionic 2 tutorial in the TACO documentation.

    Join our Insiders and share your thoughts

    After you try out the Ionic 2 RC templates, please let us know what you think! For early access to future Ionic & Cordova related updates, join our TACO insiders.

    You can also share feedback via the Report a Problem option in the upper right corner of the Visual Studio IDE itself and track your feedback on the developer community portal. If you have a question about Ionic development, check out the Ionic Forum.

    Jordan Matthiesen (@JMatthiesen)
    Program Manager, JavaScript mobile developer tools

    Jordan works at Microsoft on JavaScript tooling for web and mobile application developers. He’s been a developer for over 19 years, and currently focuses on talking to as many awesome mobile developers as possible. When not working on dev tools, you’ll find him enjoying quality time with his wife, 4 kids, dog, cat, and a cup of coffee.