
ALM Rangers Branching and Merging guide for Visual Studio 2012 is available

As mentioned in Cowabunga! Visual Studio ALM Rangers Readiness “GIG” goes RTM! the RTM version of the Branching and Merging Guide for Visual Studio 2012  was not included in the sim-ship wave. But we have just received the following signal from the product owner Matthew Mitrik and the project lead Micheal Learned:

Well done team!

The team has released the following bits, with more to come:

Guidance

  • Branching and Merging Guide

Cheatsheets

  • Advanced Plan
  • Basic Branch Plan
  • Code Promotion Plan
  • Feature Branch Plan
  • Picking a Strategy
  • Standard Branch Plan
  • Terminology
Hands-on Labs

  • Local Workspaces, Diff and Merge HOL
  • Shared Resources HOL

Illustrations

  • Reusable Illustrations PPTX collection

Download the latest bits today… and share your candid feedback!


NuGet Feed Performance Update

As you might know, NuGet has been having some performance (and timeout) related issues recently. Earlier this week, we completed a deployment that helped, but it didn't address everything.

Many users are still seeing slow responses or even timeouts when trying to use the ‘Manage NuGet Packages’ dialog in Visual Studio.

Ongoing Investigation

The deployment earlier this week greatly improved the packages page on the gallery, but it didn't address the Visual Studio dialog performance as much as we had hoped. Since that deployment, we’ve been focusing on the queries behind the Visual Studio dialog.

We have found that the SQL queries executed from the ‘Manage NuGet Packages’ dialog are inefficient. While the execution plans look okay, the memory grants for the queries are huge, much bigger than they need to be. Because of these huge memory grants, the queries stack up behind each other waiting for memory to be granted, which leads to poor performance and timeouts.

We are working to get a different approach for these queries in place. We will let you know when a fix is ready. If you’re curious about what the queries look like, you can see the current and tentative rewritten queries here. The rewritten queries are using about 1/100th of the memory of the current queries.

To stay in touch with us, follow @nuget or the #nuget tag on Twitter for live updates. You can also check in on JabbR's #nuget room.

Source: http://blog.nuget.org/20120824/nuget-feed-performance-update.html

Memory Profiling: The Instances View

[Performance is a key concern for all but the most trivial of apps, and memory usage helps determine how well your app runs. In this series, Pratap Lakshman shows how the Windows Phone Performance Analysis Tool can help you diagnose and fix memory issues and improve app performance. –ed.]

In previous installments I explained how the Heap Summary view of the Windows Phone Performance Analysis Tool provided a categorized demographic representation of heap activity, and that the Types view presented a convenient grouping of the participating types.

Today I’ll focus on the Instances view, which takes things a step further by presenting an account of the lifetime of every allocated instance of each of the participating types. Tracking the lifetime of each allocated instance on the heap calls for diligent bookkeeping from the profiler; each instance has to be uniquely identified and its progression through the object lifecycle accounted for.

A sample
Consider the execution semantics of the following code snippet from the sample app we have been working with in this series:

[Code snippet: allocate a Person into ‘p’, induce a full GC with GC.Collect, allocate a second Person into p.m_p, then set p.m_p and p to null.]

As discussed in an earlier post, the GC mediates all allocation requests from your code, and operates on a heap that it has partitioned into two regions (generations), with allocations happening in the ephemeral “Gen0” region and objects surviving a GC run possibly promoted to an older “Gen1” region. The new instance of Person is thus allocated in the Gen0 region.

Thereafter, we explicitly induce a full GC run with a call to GC.Collect. It is seldom a good idea to call GC.Collect directly; we do so here only to keep the example brief and to focus attention on how object instances migrate between generations and how the profiler tracks and reports that. (The alternative, allocating thousands of objects until a GC is provoked, would have worked just as well.) In the present case, the GC cannot reclaim the memory allocated to the Person instance because it is still referenced by ‘p’; the instance survives the run and, as an implementation detail, is promoted to the “Gen1” region.

We then allocate another instance of a Person. Remembering that all new allocations happen in Gen0, we now have a case where an instance in Gen1 ('p') is referencing an instance in Gen0 (p.m_p).

Finally, we explicitly set both p.m_p and p to null; since their memory is no longer referenced, it becomes eligible for collection. However, since there are no further induced GCs in the code, and no further allocation pressure to provoke one during the rest of the application's execution, the next GC runs only when the application exits (in the context of the AppDomain getting unloaded). The memory for these instances remains uncollected until then.
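As a toy model of the behavior just described (plain C++, in no way the CLR's actual collector; all names here are illustrative), the promotion rule can be sketched as two regions where a collection promotes still-referenced Gen0 objects into Gen1 and reclaims everything else:

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Toy two-generation heap: objects are just string IDs, and "roots" stand
// in for live references. This models only the promotion rule above.
struct ToyHeap {
    std::vector<std::string> gen0;  // ephemeral region: all new allocations
    std::vector<std::string> gen1;  // survivors of a previous collection

    void allocate(const std::string& id) { gen0.push_back(id); }

    // A full collection: objects still referenced by a root survive; Gen0
    // survivors are promoted to Gen1, and unreferenced objects (in either
    // generation) are reclaimed.
    void collect(const std::vector<std::string>& roots) {
        auto referenced = [&](const std::string& id) {
            return std::find(roots.begin(), roots.end(), id) != roots.end();
        };
        gen1.erase(std::remove_if(gen1.begin(), gen1.end(),
                                  [&](const std::string& id) { return !referenced(id); }),
                   gen1.end());
        for (const auto& id : gen0)
            if (referenced(id)) gen1.push_back(id);  // promotion to Gen1
        gen0.clear();
    }
};
```

Replaying the sample against this model: allocate "p", collect with "p" as a root (it survives and moves to Gen1), then allocate "p.m_p" (it lands in Gen0), mirroring the cross-generation reference discussed above.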

To see how the profiler reports the above semantics, launch the sample through the Memory Profiler (as in the previous post, you can leave the Allocation Depth set to 12).

Navigating to the Types view via the New Allocations category shows the following table:

[Screenshot: Types view table, New Allocations category]

As expected there are 2 allocated instances of Person.

The Instances view
Navigating to the Instances view from here shows the following table:

[Screenshot: Instances view table, New Allocations category]

Notice the following:

  • There are 2 instances of Person, identified by unique IDs 4333 and 4334.
  • The instances can be dated by their Create Time timestamps: 4333 (elder), 4334 (younger).
  • There was a full GC run between the allocations of the two instances, as can be seen by comparing the timestamp on the GC marker's tooltip with the Create Time of each instance.
  • Both instances were allocated in the Gen0 region.
  • The provenances are the same as those seen in the Types view.

Navigating to the Types view via the Collected Allocations category shows the following table:

[Screenshot: Types view table, Collected Allocations category]

As expected there are 2 collected instances of Person. Navigating to the Instances view from here shows the following table:

[Screenshot: Instances view table, Collected Allocations category]

Notice the following:

  • The same instances are now reported as collected (note the same unique IDs).
  • The older instance resides in “Gen1”, whereas the younger instance resides in “Gen0”.
  • Both instances remain alive almost throughout, and are collected 6.003s into the execution of the application (“Destroy Time”). There is no prescribed immediacy between when an object becomes dead and when the GC collects it. Our sample was intentionally simple, but consider an object that becomes dead with no subsequent GC during the rest of the application's execution, because the application kept its allocation rate below the threshold that would trigger one. In such a case, the dead object is not collected until the GC eventually runs. Games routinely keep their allocation rate below this threshold during game play to avoid triggering a GC.
  • Looking at the GC markers, we see that this Destroy Time corresponds to the eventual full GC run when the AppDomain itself is unloaded.
  • The older instance had a marginally longer lifetime than the younger one.
  • The provenances are the same as those seen in the Types view.

Together, these map exactly to the interpretation and commentary of the source code above! In an upcoming post, I shall introduce the Methods view and show how it enables tracing the execution path leading up to each allocation.

This series shows you how to take advantage of the memory profiling feature of the Windows Phone Performance Analysis Tool.

Enable Remote Audio Capture

To record audio as part of a feedback or testing session on a remote machine that is running the Microsoft Feedback Client, Test Runner, or the Exploratory Testing window, you must configure audio redirection settings on the remote machine. You must also enable “Record from this computer” when you connect to the remote machine. You can perform this procedure only if your local computer is running Windows 7 or Windows Server 2008 R2.

Of course, to record audio, you must have an audio recording device configured on your computer, or on a remote machine if you access Microsoft Feedback Client, Test Runner, or Exploratory Testing from a remote device.


Note: We recommend that you open Microsoft Feedback Client on your local computer. Redirection will decrease audio quality. Microsoft Feedback Client does not require you to record audio as part of the feedback session.


To set a default microphone on your local computer

To record audio from your local computer, you must set a microphone device as the default device.

  1. Choose Start, Control Panel.
  2. In the Control Panel window, choose the Hardware and Sound link, and then choose Sound.
  3. In the Sound window, choose the Recording tab.
  4. Select a recording device from the list provided and then choose Set Default.

To enable remote audio capture

 

  1. Choose Start, choose Run, enter regedit, choose the OK button, and then set the value of the following registry key to 0.

    HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp\fDisableAudioCapture


    Important:
    If you edit the registry incorrectly, you might severely damage your system. Before you make any changes to the registry, you should back up any valued data on the computer.


    For more information about how to set values of registry keys, see the following page on the Microsoft website: How to add, modify, or delete registry subkeys and values by using a registration entries (.reg) file.
  2. To connect to the remote machine, choose Start, choose Run, enter mstsc, and then choose the OK button.

    The Remote Desktop Connection dialog box opens.
  3. In the Computer list, choose or enter the name of the computer to which you want to connect, and, in the User name box, enter your user name.
  4. On the Local Resources tab, choose the Settings button.
  5. Under Remote audio recording, choose the Record from this computer option button, and then choose the OK button.
  6. Choose the Connect button.
  7. Exit and restart whichever client tool you are using, for example the Microsoft Feedback Client, Test Runner, or the Exploratory Testing window.

    You must restart your client in order for it to register the audio settings.
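For reference, the registry change in step 1 can equivalently be applied by importing a registration entries (.reg) file such as the following (the file name is up to you; this is the standard .reg format):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp]
"fDisableAudioCapture"=dword:00000000
```

Double-clicking the file (or running regedit /s on it) sets fDisableAudioCapture to 0; the same backup caution from the Important note above applies.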

 


Connecting C++ and XAML

Hi, I’m Andy Rich, a tester on the C++ frontend compiler and one of the primary testers of the C++/CX language extensions.  If you’re like me, making use of a technology without understanding how it works can be confusing and frustrating.  This blog post will help explain how XAML and C++ work together in the build system to make a Windows Store application that still respects the C++ language build model and syntax.  (Note: this blog post is targeted towards Windows Store app developers.)

The Build Process

From a user-facing standpoint, Pages and other custom controls are really a trio of user-editable files.  For example, the definition of the class MainPage is comprised of three files: MainPage.xaml, MainPage.xaml.h, and MainPage.xaml.cpp.  Both mainpage.xaml and mainpage.xaml.h contribute to the actual definition of the MainPage class, while MainPage.xaml.cpp provides the method implementations for those methods defined in MainPage.xaml.h.  However, how this actually works in practice is far more complex.


 

[Diagram: the C++/XAML build process, showing user-edited files, generated files, and the compile and link steps]

This drawing is very complex, so please bear with me while I break it down into its constituent pieces.

Every box in the diagram represents a file.  The light-blue files on the left side of the diagram are the files which the user edits.  These are the only files that typically show up in the Solution Explorer.  I’ll speak specifically about MainPage.xaml and its associated files, but this same process occurs for all xaml/h/cpp trios in the project.

The first step in the build is XAML compilation, which will actually occur in several steps.  First, the user-edited MainPage.xaml file is processed to generate MainPage.g.h.  This file is special in that it is processed at design-time (that is, you do not need to invoke a build in order to have this file be updated).  The reason for this is that edits you make to MainPage.xaml can change the contents of the MainPage class, and you want those changes to be reflected in your Intellisense without requiring a rebuild.  Except for this step, all of the other steps only occur when a user invokes a Build.

Partial Classes

You may note that the build process introduces a problem: the class MainPage actually has two definitions, one that comes from MainPage.g.h:

 partial ref class MainPage : public ::Windows::UI::Xaml::Controls::Page,
      public ::Windows::UI::Xaml::Markup::IComponentConnector
{
public:
  void InitializeComponent();
  virtual void Connect(int connectionId, ::Platform::Object^ target);

private:
  bool _contentLoaded;

};

And one that comes from MainPage.xaml.h:

public ref class MainPage sealed
{
public:
  MainPage();

protected:
  virtual void OnNavigatedTo(Windows::UI::Xaml::Navigation::NavigationEventArgs^ e) override;
};

This issue is reconciled via a new language extension: Partial Classes.

The compiler parsing of partial classes is actually fairly straightforward.  First, all partial definitions for a class must be within one translation unit.  Second, all class definitions must be marked with the keyword partial except for the very last definition (sometimes referred to as the ‘final’ definition).  During parsing, the partial definitions are deferred by the compiler until the final definition is seen, at which point all of the partial definitions (along with the final definition) are combined together and parsed as one definition.  This feature is what enables both the XAML-compiler-generated file MainPage.g.h and the user-editable file MainPage.xaml.h to contribute to the definition of the MainPage class.
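As a toy model of the parsing rule just described (plain C++, not the actual compiler; names are illustrative), each fragment can be represented as a member list plus a flag for the partial keyword, with partial fragments deferred and combined once the final, non-partial fragment is seen:

```cpp
#include <cstddef>
#include <stdexcept>
#include <string>
#include <vector>

// Toy model of partial-class merging. A fragment carries its member names
// and whether it was marked 'partial'.
struct ClassFragment {
    bool is_partial;
    std::vector<std::string> members;
};

// All fragments but the last must be 'partial'; the last is the final
// definition. Deferred fragments are combined in order of appearance.
std::vector<std::string> merge_fragments(const std::vector<ClassFragment>& fragments) {
    if (fragments.empty() || fragments.back().is_partial)
        throw std::runtime_error("the last fragment must be the final (non-partial) definition");
    std::vector<std::string> merged;
    for (std::size_t i = 0; i + 1 < fragments.size(); ++i) {
        if (!fragments[i].is_partial)
            throw std::runtime_error("only the last fragment may omit 'partial'");
        merged.insert(merged.end(), fragments[i].members.begin(), fragments[i].members.end());
    }
    merged.insert(merged.end(), fragments.back().members.begin(), fragments.back().members.end());
    return merged;
}
```

In MainPage's terms, the generated MainPage.g.h fragment (InitializeComponent, Connect, and so on) and the user-edited MainPage.xaml.h final definition would merge into one class description.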

Compilation

For compilation, MainPage.g.h is included in MainPage.xaml.h, which is in turn included in MainPage.xaml.cpp.  These files are compiled by the C++ compiler to produce MainPage.obj.  (This compilation is represented by the red lines in the above diagram.)  MainPage.obj, along with the other .obj files available at this stage, is passed through the linker with the switch /WINMD:ONLY to generate the Windows Metadata (WinMD) file for the project.  This process is denoted in the diagram by the orange line.  At this stage we are not linking the final executable, only producing the WinMD file, because MainPage.obj still contains some unresolved externals for the MainPage class, namely any functions which are defined in MainPage.g.h (typically the InitializeComponent and Connect functions).  These definitions were generated by the XAML compiler and placed into MainPage.g.hpp, which will be compiled at a later stage.

MainPage.g.hpp, along with the *.g.hpp files for the other XAML files in the project, will be included in a file called XamlTypeInfo.g.cpp.  This is for build performance optimization: these various .hpp files do not need to be compiled separately but can be built as one translation unit along with XamlTypeInfo.g.cpp, reducing the number of compiler invocations required to build the project.

Data Binding and XamlTypeInfo

Data binding is a key feature of XAML architecture, and enables advanced design patterns such as MVVM.  C++ fully supports data binding; however, in order for the XAML architecture to perform data binding, it needs to be able to take the string representation of a field (such as “FullName”) and turn that into a property getter call against an object.  In the managed world, this can be accomplished with reflection, but native C++ does not have a built-in reflection model.

Instead, the XAML compiler (which is itself a .NET application) loads the WinMD file for the project, reflects upon it, and generates C++ source that ends up in the XamlTypeInfo.g.cpp file.  It will generate the necessary data binding source for any public class marked with the Bindable attribute.

It may be instructive to look at the definition of a data-bindable class and see what source is generated that enables the data binding to succeed.  Here is a simple bindable class definition:

[Windows::UI::Xaml::Data::Bindable]
public ref class SampleBindableClass sealed {
public:
  property Platform::String^ FullName;
};

When this is compiled, because the class definition is public, it will end up in the WinMD file:

[Screenshot: SampleBindableClass as it appears in the WinMD file]
This WinMD is processed by the XAML compiler and adds source to two important functions within XamlTypeInfo.g.cpp: CreateXamlType and CreateXamlMember.

The source added to CreateXamlType generates basic type information for the SampleBindableClass type, provides an Activator (a function that can create an instance of the class) and enumerates the members of the class:

if (typeName == L"BlogDemoApp.SampleBindableClass")
{
  XamlUserType^ userType = ref new XamlUserType(this, typeName, GetXamlTypeByName(L"Object"));
  userType->KindOfType = ::Windows::UI::Xaml::Interop::TypeKind::Custom;
  userType->Activator =
    []() -> Platform::Object^
    {
      return ref new ::BlogDemoApp::SampleBindableClass();
    };
  userType->AddMemberName(L"FullName");
  userType->SetIsBindable();
  return userType;
}

Note how a lambda is used to adapt the call to ref new (which will return a SampleBindableClass^) into the Activator function (which always returns an Object^).

From String to Function Call

As I mentioned previously, the fundamental issue with data binding is transforming the text name of a property (in our example, “FullName”) into the getter and setter function calls for this property.  This translation magic is implemented by the XamlMember class.

XamlMember stores two function pointers: Getter and Setter.  These function pointers are defined against the base type Object^ (which all WinRT and fundamental types can convert to/from).  A XamlUserType stores a map of these members; when data binding requires a getter or setter to be called, the appropriate XamlMember is looked up in the map and its Getter or Setter function pointer is invoked.
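In plain C++ terms, the arrangement described above amounts to a per-type map from member name to a pair of type-erased accessors. The following is a sketch under that analogy only (std::any stands in for Object^; Member, Person, and make_person_members are hypothetical names, not the real XAML types):

```cpp
#include <any>
#include <functional>
#include <map>
#include <string>

// Type-erased accessors for one member, analogous to XamlMember's
// Getter and Setter function pointers defined against Object^.
struct Member {
    std::function<std::any(const std::any&)> getter;   // instance -> value
    std::function<void(std::any&, std::any)> setter;   // instance, new value
};

struct Person { std::string full_name; };

// The per-type map: this is what lets the string "FullName" be turned
// into real get/set calls against an instance.
std::map<std::string, Member> make_person_members() {
    std::map<std::string, Member> members;
    members["FullName"] = Member{
        [](const std::any& instance) -> std::any {
            return std::any_cast<const Person&>(instance).full_name;
        },
        [](std::any& instance, std::any value) {
            std::any_cast<Person&>(instance).full_name =
                std::any_cast<std::string>(std::move(value));
        }
    };
    return members;
}
```

Given only the string "FullName" and a type-erased instance, a lookup in this map yields callable accessors, which is the essence of the data-binding dispatch.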

The source added to CreateXamlMember initializes these Getter and Setter function pointers for each property.  These function pointers always have a parameter of type Object^ (the instance of the class to get from or set to) and either a return parameter of type Object^ (in the case of a getter) or have a second parameter of type Object^ (for setters).

if (longMemberName == L"BlogDemoApp.SampleBindableClass.FullName")
{
  XamlMember^ xamlMember = ref new XamlMember(this, L"FullName", L"String");
  xamlMember->Getter =
    [](Object^ instance) -> Object^
    {
      auto that = (::BlogDemoApp::SampleBindableClass^)instance;
      return that->FullName;
    };

  xamlMember->Setter =
    [](Object^ instance, Object^ value) -> void
    {
      auto that = (::BlogDemoApp::SampleBindableClass^)instance;
      that->FullName = (::Platform::String^)value;
    };
  return xamlMember;
}

The two lambdas defined use the lambda ‘decay to pointer’ functionality to bind to Getter and Setter methods.  These function pointers can then be called by the data binding infrastructure, passing in an object instance, in order to set or get a property based on only its name.  Within the lambdas, the generated code adds the proper type casts in order to marshal to/from the actual types.
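The same 'decay to pointer' rule exists in standard C++: a capture-less lambda converts implicitly to an ordinary function pointer (a capturing lambda does not, which is why the generated accessors capture nothing and instead receive the instance as an explicit parameter). A minimal standalone illustration, with hypothetical names:

```cpp
// A captureless lambda decays to a plain function pointer, so it can be
// stored and invoked through an ordinary function-pointer type.
using Getter = int (*)(int);

int apply_getter(Getter g, int value) { return g(value); }

int lambda_decay_demo() {
    Getter doubler = [](int x) { return x * 2; };  // implicit conversion to pointer
    return apply_getter(doubler, 21);
}
```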

Final Linking and Final Thoughts

After compiling the XamlTypeInfo.g.cpp file into XamlTypeInfo.g.obj, we can link this object file along with the other object files to generate the final executable for the program.  This executable, along with the previously generated WinMD file and your XAML files, is packaged up into the app package that makes up your Windows Store application.

A note: the Bindable attribute described in this post is one way to enable data binding in WinRT, but it is not the only way.  Data binding can also be enabled on a class by implementing either the ICustomPropertyProvider interface or IMap.  These other implementations would be useful if the Bindable attribute cannot be used, particularly if you want a non-public class to be data-bindable.

For additional info, I recommend looking at this walkthrough, which will guide you through building a fully-featured Windows Store Application in C++/XAML from the ground up.  The Microsoft Patterns and Practices team has also developed a large application which demonstrates some best practices when developing Windows Store Applications in C++: project Hilo.  The sources and documentation for this project can be found at http://hilo.codeplex.com/.  (Note that this project may not yet be updated for the RTM release of Visual Studio.)

I hope this post has given you some insight into how user-edited files and generated files are compiled together to produce a functional (and valid) C++ program, and given you some insight into how the XAML data binding infrastructure actually works behind the scenes.

Customizing the login UI when using OAuth/OpenID

In the last post I showed how you can plug in your own OAuth/OpenID provider. This post shows how you can pass in extra data about the provider, such as a display name or an image, and use this information when building the UI for the login screen.

 

The default login screen in the ASP.NET templates simply lists the registered providers by name. Let's see how we can customize this UI to show an icon for each provider.

Web Forms

  • Pass in extra data in App_Start\AuthConfig.cs when registering the provider, as follows:

    Dictionary<string, object> FacebooksocialData = new Dictionary<string, object>();
    FacebooksocialData.Add("Icon", "~/Images/facebook.png");
    OpenAuth.AuthenticationClients.AddFacebook(
        appId: "your Facebook app id",
        appSecret: "your Facebook app secret",
        extraData: FacebooksocialData);

  • Access this data in the view (in the Web Forms Internet template it is Account\OpenAuthProviders.ascx):

    <img src="<%# Item.ExtraData["Icon"] %>" alt="Alternate Text" />

 

MVC

  • Pass in extra data in App_Start\AuthConfig.cs when registering the provider, as follows:

    Dictionary<string, object> FacebooksocialData = new Dictionary<string, object>();
    FacebooksocialData.Add("Icon", "~/Images/facebook.png");
    OAuthWebSecurity.RegisterFacebookClient(
        appId: "someid",
        appSecret: "somesecret",
        displayName: "Facebook",
        extraData: FacebooksocialData);

  • Access this data in the view (in the MVC Internet template it is Views\Account\_ExternalLoginsListPartial):

    @foreach (AuthenticationClientData p in Model)
    {
        <img src="@p.ExtraData["Icon"]" alt="Icon for @p.DisplayName" />
    }

WebPages

  • Pass in extra data in _AppStart.cshtml when registering the provider, as follows:

    Dictionary<string, object> FacebooksocialData = new Dictionary<string, object>();
    FacebooksocialData.Add("Icon", "~/Images/facebook.png");
    OAuthWebSecurity.RegisterFacebookClient(
        appId: "empty",
        appSecret: "empty",
        displayName: "Facebook",
        extraData: FacebooksocialData);

  • Access this data in the view (in the Web Pages template it is Account\_ExternalLoginsListPartial):

    @foreach (AuthenticationClientData p in Model)
    {
        <img src="@p.ExtraData["Icon"]" alt="Icon for @p.DisplayName" />
    }

Cross posted to http://blogs.msdn.com/b/pranav_rastogi/archive/2012/08/24/customizing-the-login-ui-when-using-oauth-openid.aspx

Pranav Rastogi | @rustd

Evolving the Reflection API

As many developers have noticed, the reflection APIs changed in the .NET API set for Windows Store apps. Much of .NET’s ability to offer a consistent programming model to so many platforms over the last ten years has been the result of great architectural thinking. The changes to reflection are here to prepare for the challenges of the next decade, again with a focus on great architecture. Richard Lander and Mircea Trofin, program managers from the Common Language Runtime and .NET Core Framework teams respectively, wrote this post. -- Brandon

In this post, we will look at changes to the reflection API for the .NET Framework 4.5 and the .NET APIs for Windows Store apps (available in Windows 8). These changes adopt current API patterns, provide a basis for future innovation, and retain compatibility for existing .NET Framework profiles.

Reflecting on a popular .NET Framework API

The .NET Framework is one of the most extensive and productive development platforms available. As .NET developers, you create a broad variety of apps, including Windows Store apps, ASP.NET websites, and desktop apps. A big part of what enables you to be so productive across Microsoft app platforms is the consistency of the developer experience. For many developers, the language – be it C# or Visual Basic – is this common thread. For others, it is the Base Class Libraries (BCL). When you look more closely, you'll see that reflection is the basis for many language, BCL, and Visual Studio features.

Many .NET Framework tools and APIs rely on CLR metadata, accessed via reflection, to operate. Examples include static analysis tools, application containers, extensibility frameworks (like MEF), and serialization engines. The ability to query, invoke, and even mutate types and objects is a focal point for .NET developers in many scenarios.

In the .NET Framework 4, we support a rich but narrow set of views and operations over both static metadata and live objects. As we looked at expanding reflection scenarios, we realized that we first needed to evolve the reflection API to enable further innovation.

We saw the design effort to create .NET APIs for Windows Store apps as the avenue to establish the changes that we needed in reflection. We defined a set of updates to reflection that would enable more flexibility for future innovation. We were able to apply that design to the .NET APIs for Windows Store apps and also to the Portable Class Library. At the same time, we were able to compatibly enable that design in the .NET Framework 4.5. As a result, we were able to evolve the reflection API consistently and compatibly across .NET Framework API subsets.

We took an in-depth look at the .NET APIs for Windows Store apps in an earlier .NET Team blog post. There, we saw that this new API subset is considerably smaller than the .NET Framework 4.5. We have documented the differences on MSDN. One notable difference is Reflection.Emit, which is not available to Windows Store apps.

Reflection scenarios

Reflection can be described in terms of three big scenarios:

  • Runtime reflection
    • Primary scenario: You can request metadata information about loaded assemblies, types, and objects, instantiate types, and invoke methods
    • Status: Supported
  • Reflection-only loading for static analysis
    • Primary scenario: You can request metadata information about types and objects that are described in CLR assemblies, without execution side-effects
    • Status: Limited support
    • Alternatives today: CCI
  • Extensibility of reflection
    • Primary scenario: You can augment metadata in either of the two scenarios above
    • Status: Supported, but complicated

Today, we have reasonable support for runtime reflection, but not for the other two scenarios. Given that reflection is such a key API, richer support across all three scenarios would likely enable new classes of tools, component frameworks, and other technologies. Full support is inhibited by some limitations in the reflection API in the .NET Framework 4. Primarily, reflection innovation is constrained by a lack of separation of concepts within the API, particularly as it relates to types. The System.Type class is oriented (per its original design) around the runtime reflection scenario. This is problematic because System.Type is used to represent types across all scenarios. Instead, we would benefit from a broader representation of types, designed to support all three scenarios.

Splitting System.Type into two concepts

System.Type is the primary abstraction and entry point into the reflection model. It is used to describe two related but different concepts, reference and definition, and enables operations across both. This lack of separation of concepts is the primary motivation for changing the reflection API. For example, the following scenarios are either difficult or unsupported with the existing model:

  • Reading CLR metadata without execution side-effects
  • Loading types from alternate sources other than CLR metadata
  • Augmenting type representation (for example, changing shape, adding attributes)

In other parts of the product, we have first-class concepts of reference and definition. At a high level, a reference is a shallow representation of something, whereas a definition provides a rich representation. One needs to look no further than assemblies, a higher-level part of reflection, to see this. The System.Reflection.Assembly class represents assembly definitions, whereas the System.Reflection.AssemblyName class represents assembly references. The former exposes rich functionality, and the latter is just data that helps you get the definition should you want it. That’s exactly the model that we wanted to adopt for System.Type.

In order to achieve a similar split for the System.Type concept and class, we created a new System.Reflection.TypeInfo class and shrunk the meaning of the System.Type class. The TypeInfo class represents type definitions and the Type class represents type references. Given a Type object, you can get the name of the type as a string, without any requirement to load anything more. Alternatively, if you need rich information about a type, you can get a TypeInfo object from a Type object. Given a TypeInfo object, you can perform all the rich operations that you expect with a type definition, such as getting lists of members, implemented interfaces, or the base type.
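By way of analogy only (plain C++ with hypothetical names, not the .NET API): the reference/definition split amounts to a cheap reference type that carries just a name, plus an explicit, fallible resolve step standing in for GetTypeInfo:

```cpp
#include <map>
#include <optional>
#include <string>
#include <vector>

// Sketch of the reference/definition split. TypeDef plays the role of
// TypeInfo: the rich representation, available only when its defining
// "assembly" (here, a simple registry) is loaded.
struct TypeDef {
    std::string name;
    std::vector<std::string> members;
};

// TypeRef plays the role of the slimmed-down Type: always constructible
// from a name alone, with no definition required.
struct TypeRef {
    std::string name;

    // Resolution is explicit and under the caller's control, mirroring
    // GetTypeInfo(): it fails gracefully when the definition is absent.
    std::optional<TypeDef> resolve(const std::map<std::string, TypeDef>& loaded) const {
        auto it = loaded.find(name);
        if (it == loaded.end()) return std::nullopt;
        return it->second;
    }
};
```

The key property mirrors the UIControl example below: a TypeRef for a base class can be created and inspected by name even on a machine where the defining registry entry does not exist; only an explicit resolve requires the definition.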

      The value of Type and TypeInfo

      The API changes in the .NET Framework 4.5 were made such that we could evolve the reflection API to deliver new scenarios and value. While we changed the shape of the API, we haven’t yet added the additional features that would deliver the value. This section provides a preview of what that value would look like in practice.

      Suppose that we are using a static analysis tool that is implemented with the new reflection model. We are looking for all types in an app that derive from the UIControl class. We want to be able to run this tool on workstations on which the UIControl assembly (which contains the UIControl class) does not exist. In this example, let’s assume that we open an assembly that contains a class that derives from the UIControl class:

      class MyClass : UIControl

      In the .NET Framework 4 reflection model, the Type object (incorporating both reference and definition) that represents MyClass would create a Type object for the base class, which is UIControl. On machines that don't have the UIControl assembly, the request to construct the UIControl Type object would fail, and so too would the request to create a Type object for MyClass, as a result.

      Here you see what the reference/definition split achieves. In the new model, MyClass is a TypeInfo (definition); however, BaseType is a Type (reference), and will contain only the information about UIControl that the (MyClass) assembly contains, without requiring finding its actual definition.

      Type baseType = myClassTypeInfo.BaseType;

      In other scenarios, you may need to obtain the definition of UIControl. In that case, you can use the GetTypeInfo extension method on the Type class to get a TypeInfo for UIControl:

      TypeInfo baseType = myClassTypeInfo.BaseType.GetTypeInfo();

      Of course, in this case, the UIControl assembly would need to be available. In this new model, your code (not the reflection API) controls the assembly loading policy.

      Once again, in the .NET Framework 4.5, the reflection API still eagerly loads the type definition for the base class. The reflection API implementation is largely oriented around the runtime reflection scenario, which has a bias towards loading base type definitions eagerly. At the point that we build support for the static analysis scenario described earlier, we will be able to deliver the full value of the reference/definition split, made possible by Type and TypeInfo.

      Applying the System.Type split to the Base Class Libraries

      Changing the meaning of Type and adding TypeInfo made it necessary to ensure consistency in the .NET Framework BCL. The .NET Framework has many APIs that return the Type class. For each API, we needed to decide whether a Type (reference) or a TypeInfo (definition) was appropriate. In practice, these choices were easy, since the API inherently either returned a reference or a definition. You either have access to rich data or you don't. We’ll look at a few examples that demonstrate the trend.

      • The Assembly.DefinedTypes property returns TypeInfo.
        • This API gets the types defined in that assembly.
      • The Type.BaseType property returns a Type.
        • This API returns a statement of what the base type is, not its shape.
        • The base type could be defined in another assembly, which would require an assembly load.
      • The Object.GetType method returns a Type.
        • This API returns a Type, since you only need a representation of a type, not its shape.
        • The type could be defined in another assembly, which would require an assembly load.
        • By returning a Type and not a TypeInfo, we also removed a dependency on the reflection subsystem from the core of the .NET Framework.
      • Language keywords, like C# typeof, return a Type.
        • Same rationale and behavior as Object.GetType.
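
      These choices are easy to see in code. A minimal sketch (targeting the .NET Framework 4.5):

      using System;
      using System.Reflection;

      class Program
      {
          static void Main()
          {
              // typeof and Object.GetType return Type -- a lightweight reference.
              Type type = typeof(Program);
              Console.WriteLine(type.FullName);

              // Assembly.DefinedTypes returns TypeInfo objects -- full definitions
              // of every type declared in the assembly.
              foreach (TypeInfo ti in type.GetTypeInfo().Assembly.DefinedTypes)
                  Console.WriteLine(ti.Name);
          }
      }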

      Deeper dive into the reflection model update

      So far, we’ve been looking at better abstraction in the reflection API, which is the reference/definition split that we made with Type and TypeInfo. We also made other changes, some of which contributed to the reference/definition split and others that satisfied other goals. Let's dive a little deeper into those changes.

      Replacing runtime reflection-oriented APIs

      In the .NET Framework 4.5 (and earlier releases), you can call Type.GetMethods() to get a list of methods that are exposed on a given type. Such a list of methods will include inherited methods. Our implementation of the GetMethods method has a particular policy for how it traverses the inheritance chain to get the complete list of methods, including loading assemblies for base types that are located in other assemblies. This approach can sometimes be problematic, since loading assemblies can have side-effects that change the execution of your program. The GetMethods method is an example of the heavy bias that the reflection API has to satisfying runtime reflection scenarios, and therefore, is not appropriate for reflection-only loading scenarios.

      For the new model, we introduced the DeclaredMethods property that reports the members that are declared (as opposed to members that are available via inheritance) on a given type. There are several other properties, such as DeclaredMembers and DeclaredEvents that follow the same pattern.

      The following example illustrates the difference in the behavior between Type/TypeInfo.GetMethods and TypeInfo.DeclaredMethods, using the .NET Framework 4.5.

      class MyClass
      {
          public void SomeMethod() { }
      }

      class Program
      {
          static void Main(string[] args)
          {
              var t = typeof(MyClass).GetTypeInfo();
              Console.WriteLine("---all methods---");
              foreach (MethodInfo m in t.GetMethods())
                  Console.WriteLine(m.Name);
              Console.WriteLine("=======================");
              Console.WriteLine("---declared methods only---");
              foreach (MethodInfo m in t.DeclaredMethods)
                  Console.WriteLine(m.Name);
              Console.ReadKey();
          }
      }

      The output is:

      ---all methods---
      SomeMethod
      ToString
      Equals
      GetHashCode
      GetType
      =======================
      ---declared methods only---
      SomeMethod

      You will notice that the GetMethods method retrieves all the public methods accessible on MyClass – including the ones defined on System.Object, like ToString, Equals, GetType and GetHashCode. DeclaredMethods returns all the declared methods (in this case, one method) on a given type, regardless of visibility, and including static methods.

      Adopting current API patterns -- IEnumerable

      In the era of the async programming model, the reflection APIs stand out because many of them return arrays (for example, MemberInfo[]). As you likely know, arrays need to be fully populated before they are returned from an API. This characteristic is bad for both working set and responsiveness. In the .NET APIs for Windows Store apps, we have replaced all the array return types with IEnumerable<T> collections. Most of you will appreciate working with this friendlier API pattern, which will likely blend in better with the rest of your code.

      We have not yet fully taken advantage of this model. In our internal implementation of these APIs, we are still using the arrays that were formerly part of the public API contract. In a later version of the product, we can change the implementation to lazy evaluation without needing an associated change to the public API.
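
      Regardless of the internal representation, the IEnumerable-based surface already composes naturally with LINQ. A small sketch:

      using System;
      using System.Linq;
      using System.Reflection;

      class Program
      {
          static void Main()
          {
              // DeclaredMethods is IEnumerable<MethodInfo>, so it can be
              // filtered and projected lazily -- no intermediate arrays are
              // required in your code.
              var names = typeof(Program).GetTypeInfo()
                  .DeclaredMethods
                  .Where(m => m.IsStatic)
                  .Select(m => m.Name);

              // Prints the static methods declared on Program.
              foreach (var name in names)
                  Console.WriteLine(name);
          }
      }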

      Compatibility across .NET target frameworks

      We designed the reflection API updates with a goal of compatibility with existing code. In particular, we wanted developers to be able to share code between the .NET Framework 4.5 and .NET APIs for Windows Store apps.

      In .NET APIs for Windows Store apps, TypeInfo inherits from MemberInfo, while Type inherits from Object. Type definitions must inherit from MemberInfo to allow for nested types – types that are members of other types – in the same way that methods, properties, events, fields, or constructors are members of a type. You can see that this inheritance approach makes sense, particularly now that Type is very light-weight.

      In the .NET Framework 4.5, TypeInfo inherits from Type, while Type is still a MemberInfo. In order to maintain compatibility with the .NET Framework 4, we could not change the base type of Type. We expect that future .NET Framework releases will maintain this same factoring (that is, Type will continue to be a MemberInfo) for backward compatibility.

      However, if you are writing code that targets the .NET Framework 4.5, and you want to use the new reflection model, we encourage you to write that code as a Portable Class Library. Portable Class Library projects that target the .NET Framework 4.5 and .NET APIs for Windows Store apps follow the new model, as described above.

      See the figure below for a visual illustration of the reflection type hierarchy in the .NET Framework 4.5 and .NET APIs for Windows Store apps.

      Reflection type hierarchy in the .NET Framework 4.5 and .NET APIs for Windows Store apps

      Figure: Reflection type hierarchy in the .NET Framework 4.5 and .NET APIs for Windows Store apps

      Updating your code to use the new reflection model

      Now that you have a fundamental understanding of the new model, let's look at the mechanics of the APIs. Basically, you need to know three things:

      1. The Type class exposes basic data about a type.
      2. The TypeInfo class exposes all the functionality for a type. It is also a proper superset of Type.
      3. The GetTypeInfo extension method enables you to get a TypeInfo object from a Type object.

      The following sample code demonstrates the basic mechanics of Type and TypeInfo. It also provides examples of the data that you can get from Type and TypeInfo.

      class Class1
      {
          public void Type_TypeInfo_Demo()
          {
              //Get a Type
              Type type = typeof(Class1);
              //Gets the name of the type
              String typeName = type.FullName;
              //Gets the assembly-qualified type name
              String aqtn = type.AssemblyQualifiedName;
      
              //Get TypeInfo via the type
              //Note that .GetTypeInfo is an extension method
              TypeInfo typeInfo = type.GetTypeInfo();
              //Get the list of members
              IEnumerable<MemberInfo> members = typeInfo.DeclaredMembers;
              //You can do many other things with a TypeInfo
          }
      }
      

      We have received feedback that this change inserts another step – calling the GetTypeInfo extension method – and that it represents a migration hurdle for developers. This change is opt-in for the .NET Framework 4.5, for compatibility reasons. You do not have to use the GetTypeInfo method or the TypeInfo class if you are targeting the .NET Framework 4.5.

      With .NET APIs for Windows Store apps, we had the opportunity to create a fully consistent API, which is why we chose to create a clean split between Type and TypeInfo. As a result, code that targets .NET APIs for Windows Store apps will need to use appropriate combinations of Type and TypeInfo classes and the GetTypeInfo extension method. The same is true for Portable Class Library code that targets both .NET APIs for Windows Store apps and the .NET Framework 4.5.

      Writing code for the new reflection API – Windows Store and Portable Class Library

      As we discussed above, you'll need to adopt the new reflection model if your code targets .NET APIs for Windows Store apps or you're creating a Portable Class Library project that targets both .NET APIs for Windows Store apps and the .NET Framework 4.5.

      For example, you will notice that the Get* methods (for example, GetMethods) described earlier are not available; they are replaced by the Declared* properties (for example, DeclaredMethods). Because the Get* methods are not present, the reflection binding constraints (BindingFlags options) are not available either. If you're writing new code, you'll need to follow the new model, and if you're porting code from another project, you'll need to update your code to the same model. We understand that these changes may result in non-trivial migration efforts in some cases; however, we hope that you can see the value that can be achieved with the Type/TypeInfo split.

      While the APIs have changed, you may still need to access inherited APIs and filter results. There are a couple of patterns that you can use to accommodate those changes. We’ll look at those now.

      We recommend that you write the code that provides the reflection objects that you need. You’ll actually get a clearer view of what the reflection sub-system does by seeing the code in your source file. Your implementation may also be more efficient than our implementation in the .NET Framework, since we accommodate several uncommon cases.

      You'll need some code that is a proxy for GetMethods, but that is implemented in terms of the new reflection API. You might need a replacement for another Get* method, such as GetInterfaces; however, you should find that the GetMethods example equally applies. The most straightforward implementation for GetMethods follows. It walks the inheritance chain of a type, and requests the set of declared methods on each class in that chain.

      public static IEnumerable<MethodInfo> GetMethods(this Type someType)
      {
          var t = someType;
          while (t != null)
          {
              var ti = t.GetTypeInfo();
              foreach (var m in ti.DeclaredMethods)
                  yield return m;
              t = ti.BaseType;
          }
      }

      Since binding flags are not provided in the new reflection API, you do not immediately have an obvious way to filter results to public, private, or static members, or to apply any of the other options offered by the BindingFlags enum. To accommodate this change, you can write simple LINQ queries that filter the results of the Declared* APIs, as you see in the following example:

      IEnumerable<MethodInfo> methods = typeInfo.DeclaredMethods.Where(m => m.IsPublic);
        

      We do offer another pattern as an option for porting code more efficiently. We created the GetRuntimeMethods extension method as a convenience API that provides the same semantics as the existing GetMethods API. Related extension methods, such as GetRuntimeProperties, have been created as options for the other Get* methods as well. As the API names suggest, these are runtime reflection APIs, which will load all base types, even if they are located in other assemblies. These new extension methods do not support the BindingFlags enum, so the filtering approach suggested with LINQ above also applies.
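
      For example, a rough equivalent of the old GetMethods() call, using GetRuntimeMethods with a LINQ filter in place of BindingFlags:

      using System;
      using System.Linq;
      using System.Reflection;   // GetRuntimeMethods lives in RuntimeReflectionExtensions

      class Program
      {
          static void Main()
          {
              // All public instance methods, including those inherited from
              // base types (which may trigger assembly loads, as noted above).
              var methods = typeof(Uri).GetRuntimeMethods()
                  .Where(m => m.IsPublic && !m.IsStatic);

              foreach (MethodInfo m in methods)
                  Console.WriteLine(m.Name);
          }
      }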

      Both of these suggested patterns are good choices for adopting the new reflection model. Note that if reflection support expands in the future to include reflection-only scenarios for static analysis, GetRuntime* methods would no longer be appropriate, should you want to take advantage of those new scenarios.

      Writing code for the new reflection API – .NET Framework 4.5

      If your code targets the .NET Framework 4.5, you can opt to use the new model, but you do not have to. The .NET Framework 4.5 API is a superset of old and new reflection models, so all the APIs that you’ve used before are available, plus the new ones.

      Portable Class Library projects that target the .NET Framework 4, Silverlight, or Windows Phone 7.5 expose only the old model. In these cases, the new reflection APIs are not available.

      Conclusion

      In this post, we’ve discussed the improvements that we made to reflection APIs in .NET APIs for Windows Store apps, the .NET Framework 4.5, and Portable Class Library projects. These changes are intended to provide a solid basis for future innovation in reflection, while enabling compatibility for existing code.

      For more information, porting guides, and utility extension methods, please see .NET for Windows Store apps overview in the Windows Dev Center.

      --Rich and Mircea

      Follow or talk to us on twitter -- http://twitter.com/dotnet.

      Add cloud to your app with Windows Azure Mobile Services


      Great Windows Store apps are connected. They use live tiles, authenticate users with single sign-on and share data between devices and users. To get all these great benefits of being connected, your app needs to use services in the cloud.

      Building cloud services is hard. Most cloud platforms offer general purpose capabilities to store data and execute code, but you have to author reams of infrastructure code to glue these capabilities together. I’m sure you are up for the challenge, but I bet backend infrastructure code is not your first priority. You want to focus on realizing your awesome app idea.

      Addressing this difficulty, earlier this week we announced a preview of the new service in Windows Azure: Mobile Services. Let me show you how you can add the cloud services you need to your app in minutes, using Mobile Services.

      To get started, sign up for the free trial of Windows Azure. You’ll get 10 Mobile Services for free. Let’s use one of them to build a simple todo list app.

      Create a new Mobile Service

      1. After you’ve signed up, go to http://manage.windows.azure.com and log in using your Microsoft account. Create a new Mobile Service by clicking the +NEW button at the bottom of the navigation pane.

        new_button

      2. Select Mobile Service and click Create. You will see a screen like this:

        mobile_service

        Figure 1. Create drawer in the management portal.

      3. In the New Mobile Service wizard, type the name of your app. This forms part of the URL for your new service.

        mobile_service_wizard

        Figure 2. Create Mobile Service wizard, first screen.

      4. When you create a Windows Azure Mobile Service, we automatically associate it with a SQL database inside Windows Azure. The Windows Azure Mobile Service backend then provides built-in support for enabling remote apps to securely store and retrieve data from it, without you having to write or deploy any custom server code. Type the name of the new database, and enter a login name and password for your new SQL Server. Remember these credentials if you want to reuse this database server for other Mobile Services. If you signed up for the 90-day free trial, you are entitled to one 1 GB database for free.

        mobile_service_wizard2

      Figure 3. Create Mobile Service wizard, second screen.

      Click the tick button to complete the process. In just a few seconds you’ll have a Mobile Service – a backend that you can use to store data, send push notifications and authenticate users. Let’s try it out from an app.

      Create a new Windows Store app

      1. Click the name of your newly created mobile service.

        store_apps

        Figure 4. Your newly created mobile service.

      2. You now have two choices: to create a new app, or to connect an existing app to your Mobile Service. Let’s pick the first option and create a simple todo list that stores todo items in your SQL database. Follow the steps on the screen:

        todo

        Figure 5. Creating a Mobile Services app.

      3. Install Visual Studio 2012 and the Mobile Services SDK, if you haven't already done so.
      4. To store todo items, you need to create a table. You don’t need to predefine the schema for your table; Mobile Services will automatically add columns as needed to store your data.
      5. Next, select your favorite language, C# or JavaScript, and click Download. This downloads a personalized project that has been pre-configured to connect to your new Mobile Service. Save the compressed project file to your local computer.
      6. Browse to the location where you saved the compressed project files, expand the files on your computer, and open the solution in Visual Studio 2012 Express for Windows 8.
      7. Press F5 to launch the app.
        In the app, type a todo item in the textbox on the left and click Save. This sends an HTTP request to the new Mobile Service hosted in Windows Azure. This data is then safely stored in your TodoItem table. You receive an acknowledgement from the Mobile Service and your data is displayed in the list on the right.

      todolist

      Figure 6. Completed app.

      Let us take a look at the code inside the app that saves your data. Stop the todo list app and double-click on App.xaml.cs. Notice the lines:

      public static MobileServiceClient MobileService = new MobileServiceClient(
          "https://todolist.azure-mobile.net/",
          "xPwJLJqYTMsAiBsHBHDhDEamZdtUGw75"
      );

      This is the only code you need to connect your app to your Mobile Service. If you are connecting an existing app to your Mobile Service, you can copy this code from the quick start “Connect your existing app” option. Now, open MainPage.xaml.cs and take a look at the next code that inserts data into the Mobile Service:

      private IMobileServiceTable<TodoItem> todoTable = App.MobileService.GetTable<TodoItem>();

      private async void InsertTodoItem(TodoItem todoItem)
      {
          await todoTable.InsertAsync(todoItem);
          items.Add(todoItem);
      }
      This is all that’s needed to store data in your cloud backend. Here is the equivalent code in JavaScript:

      var client = new Microsoft.WindowsAzure.MobileServices.MobileServiceClient(
          "https://todolist.azure-mobile.net/",
          "xPwJLJqYTMsAiBsHBHDhDEamZdtUGw75"
      );
      var todoTable = client.getTable('TodoItem');

      var insertTodoItem = function (todoItem) {
          todoTable.insert(todoItem).done(function (item) {
              todoItems.push(item);
          });
      };
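
      The table object supports more than inserts. As a sketch in C#, using the UpdateAsync and DeleteAsync methods that the managed SDK exposes alongside InsertAsync (the Complete property here is an assumed column on TodoItem, not part of the generated project):

      // Mark an item as done and push the change to the service.
      private async void CompleteTodoItem(TodoItem todoItem)
      {
          todoItem.Complete = true;            // assumed boolean column
          await todoTable.UpdateAsync(todoItem);
          items.Remove(todoItem);
      }

      // Remove an item from the table entirely.
      private async void DeleteTodoItem(TodoItem todoItem)
      {
          await todoTable.DeleteAsync(todoItem);
          items.Remove(todoItem);
      }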

      Manage and monitor your Mobile Service

      1. Go back to the Windows Azure Management Portal, click the Dashboard tab to see real-time monitoring and usage info for your new Mobile Service.

        todolist2

        Figure 7. Mobile Services dashboard.

      2. Click the Data tab and then click the TodoItem table.

        data_tab

        Figure 8. Data tab.

      3. From here you can browse the data that the app inserted into the table.

        browse

        Figure 9. Browse data.

      Conclusion

      In this brief overview we’ve looked at how easy it is to add the power of Windows Azure to your app without the drag that comes with authoring, managing and deploying a complex backend project. We’ve only scratched the surface of what you can do with Mobile Services; features like push notifications and user authentication are also ready for you to explore.

      Learn more about Mobile Services at http://www.windowsazure.com/mobile

      --Kirill Gavrylyuk, Lead Program Manager, Windows Azure


      Optical Zooming in Legacy Document Modes


      Internet Explorer 9 introduced sub-pixel font positioning as part of its hardware-accelerated rendering of HTML5 content as described in this IEBlog post. That was an important step into the future as it enabled zoom-independent text metrics—an important characteristic when pinch-zoom is part of the browsing experience as it is in IE10 on Windows 8 touch-enabled devices.

      As noted in that post 18 months ago, IE9’s legacy compatibility modes use whole-pixel text metrics. This compatibility-driven decision continues in IE10 with IE5 quirks, IE7 standards, and IE8 standards modes all running with whole-pixel font metrics; IE10 document modes Standards, Quirks, and IE9 Standards all use sub-pixel text metrics.

      As a result, the text in sites running in legacy document modes 5, 7, and 8 does not scale smoothly when the page is zoomed by pinch-zoom, double-tap zoom, or when the page is auto-zoomed for display in Windows 8’s snap and fill views.

      Zoom Example: Legacy vs. Standards Modes

      Below are side-by-side comparisons showing text from a popular news site in 8 and 10 document modes at 100% and 150%. Note the especially poor letter spacing between some letters in the 150% 8 mode example (upper right).

      Mode   Default size (100%)                Optically zoomed to 150%
      8      Sample text in IE8 mode at 100%    Sample text in IE8 mode using GDI-compatible font metrics at optical zoom 150%
      10     Sample text in IE10 mode at 100%   Sample text in IE10 mode using sub-pixel font metrics at optical zoom 150%

      Move to Standards Today

      The best fix for this behavior is to move your pages to IE9 or IE10 Standards mode. IE10 Compat Inspector is a valuable tool to help you migrate to IE9 or IE10 mode. Compat Inspector identifies potential issues and offers steps you can take to resolve them. In general, the HTML, CSS, and JavaScript markup and code you use with other browsers will work great in IE10 once any browser detection is replaced with feature detection and vendor-specific CSS prefixes are updated to include -ms- or unprefixed versions. Modernizr is a JavaScript library that can help with these issues.

      Specifying Sub-pixel Metrics in Legacy Modes

      If moving to standards-based markup is out of scope for your site at this time, you may enable sub-pixel text metrics in legacy document modes using an HTTP header or a meta tag. Based on our testing, most sites will work fine with natural text metrics.

      The format of the HTTP header is:

      X-UA-TextLayoutMetrics: Natural

      The syntax of the meta tag is:

      <meta http-equiv="X-UA-TextLayoutMetrics" content="natural" />

      Support for this HTTP header and tag is new in the final release version of IE10 on Windows 8.
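
      For example, if your site runs on ASP.NET, a minimal sketch of emitting the header from server code (configuring the header in web.config or at the web server works just as well):

      protected void Page_Load(object sender, EventArgs e)
      {
          // Opt this page into sub-pixel (natural) text metrics in
          // legacy document modes.
          Response.AppendHeader("X-UA-TextLayoutMetrics", "Natural");
      }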

      To improve the Windows 8 out-of-box experience for touch-enabled devices, we’ve added a section to the IE10 Compatibility View List that enables natural metrics for approximately 570 popular sites that currently run in legacy document modes. If your site is included on the list but you would prefer it not be, email iepo@microsoft.com. Include your name, company, title, and contact information along with the domain you want removed.

      Be Ready for IE10

      Move your legacy document mode site to IE9’s default standards mode today and be ready for IE10 tomorrow. Visitors to your site using IE10 on Windows 8 will thank you.

      —Ted Johnson, Program Manager Lead for Web Graphics

      OData 101: Using the [NotMapped] attribute to exclude Enum properties


      TL;DR: OData does not currently support enum types, and WCF Data Services throws an unclear exception when using the EF provider with a model that has an enum property. Mark the property with a [NotMapped] attribute to get around this limitation.

      In today’s OData 101, we’ll take a look at a problem you might run into if you have an Entity Framework provider and are using Entity Framework 5.0+.

      The Problem

      The problem lies in the fact that Entity Framework and WCF Data Services use a common format to describe the data model: the Entity Data Model (EDM). In Entity Framework 5.0, the EF team made some modifications to MC-CSDL, the specification that codifies CSDL, the best-known serialization format for an EDM. Among other changes, the EnumType element was added to the specification. An EF 5.0 model in a project that targets .NET 4.5 will allow developers to add enum properties. However, WCF Data Services hasn’t yet done the work to implement support for enum properties. (And the OData protocol doesn’t have a mature understanding of how enums should be implemented yet; this is something we’re working through with OASIS.)

      If you are trying to use an EF model that includes an enum property with WCF Data Services, you’ll get the following error:

      The server encountered an error processing the request. The exception message is 'Value cannot be null. Parameter name: propertyResourceType'. See server logs for more details. The exception stack trace is:

      at System.Data.Services.WebUtil.CheckArgumentNull[T](T value, String parameterName)
      at System.Data.Services.Providers.ResourceProperty..ctor(String name, ResourcePropertyKind kind, ResourceType propertyResourceType)
      at System.Data.Services.Providers.ObjectContextServiceProvider.PopulateMemberMetadata(ResourceType resourceType, IProviderMetadata workspace, IDictionary`2 knownTypes, PrimitiveResourceTypeMap primitiveResourceTypeMap)
      at System.Data.Services.Providers.ObjectContextServiceProvider.PopulateMetadata(IDictionary`2 knownTypes, IDictionary`2 childTypes, IDictionary`2 entitySets)
      at System.Data.Services.Providers.BaseServiceProvider.PopulateMetadata()
      at System.Data.Services.Providers.BaseServiceProvider.LoadMetadata()
      at System.Data.Services.DataService`1.CreateMetadataAndQueryProviders(IDataServiceMetadataProvider& metadataProviderInstance, IDataServiceQueryProvider& queryProviderInstance, BaseServiceProvider& builtInProvider, Object& dataSourceInstance)
      at System.Data.Services.DataService`1.CreateProvider()
      at System.Data.Services.DataService`1.HandleRequest()
      at System.Data.Services.DataService`1.ProcessRequestForMessage(Stream messageBody)
      at SyncInvokeProcessRequestForMessage(Object , Object[] , Object[] )
      at System.ServiceModel.Dispatcher.SyncMethodInvoker.Invoke(Object instance, Object[] inputs, Object[]& outputs)
      at System.ServiceModel.Dispatcher.DispatchOperationRuntime.InvokeBegin(MessageRpc& rpc)
      at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage5(MessageRpc& rpc)
      at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage41(MessageRpc& rpc)
      at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage4(MessageRpc& rpc)
      at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage31(MessageRpc& rpc)
      at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage3(MessageRpc& rpc)
      at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage2(MessageRpc& rpc)
      at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage11(MessageRpc& rpc)
      at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage1(MessageRpc& rpc)
      at System.ServiceModel.Dispatcher.MessageRpc.Process(Boolean isOperationContextSet)

      Fortunately this error is reasonably easy to workaround.

      Workaround

      The first and most obvious workaround is to consider removing the enum property from the EF model until WCF Data Services provides proper support for enums.

      If that doesn’t work because you need the enum in business logic, you can use the [NotMapped] attribute from System.ComponentModel.DataAnnotations.Schema to tell EF not to expose the property in EDM.

      For example, consider the following code:

      using System;
      using System.ComponentModel.DataAnnotations.Schema;
      
      namespace WcfDataServices101.UsingTheNotMappedAttribute
      {
          public class AccessControlEntry
          {
              public int Id { get; set; }
      
              // An Enum property cannot be mapped to an OData feed, so it must be
              // explicitly excluded with the [NotMapped] attribute.
              [NotMapped]
              public FileRights FileRights { get; set; }
      
              // This property provides a means to serialize the value of the FileRights
              // property in an OData-compatible way.
              public string Rights
              {
                  get { return FileRights.ToString(); }
                  set { FileRights = (FileRights)Enum.Parse(typeof(FileRights), value); }
              }
          }
      
          [Flags]
          public enum FileRights
          {
              Read = 1,
              Write = 2,
              Create = 4,
              Delete = 8
          }
      }

      In this example, we’ve marked the enum property with the [NotMapped] attribute and have created a manual serialization property which will help to synchronize an OData-compatible value with the value of the enum property.

      Implications

      There are a few implications of this workaround:

      1. The [NotMapped] attribute also controls whether EF will try to create the value in the underlying data store. In this case, our table in SQL Server would have an nvarchar column called Rights that stores the value of the Rights property, but not an integer column that stores the value of FileRights.
      2. Since the FileRights column is not stored in the database, LINQ-to-Entities queries that use that value will not return the expected results. This is a non-issue if your EF model is only queried through the WCF Data Service.

      Result

      The result of this workaround is an enumeration property that can be used in business logic in the service code, for instance, in a QueryInterceptor. Since the [NotMapped] attribute prevents the enum value from serializing, your OData service will continue to operate as expected and you will be able to benefit from enumerated values.

      In the example above, there is an additional benefit of constraining the possible values for Rights to some combination of the values of the FileRights enum. An invalid value (e.g., Modify) would be rejected by the server since Enum.Parse will throw an ArgumentException.
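
      A small sketch of that validation behavior (note that Enum.Parse will still accept numeric strings, so only undefined names are rejected):

      var entry = new AccessControlEntry();

      // Parses to FileRights.Read | FileRights.Write.
      entry.Rights = "Read, Write";

      try
      {
          // "Modify" is not a defined FileRights name, so the
          // Rights setter throws.
          entry.Rights = "Modify";
      }
      catch (ArgumentException)
      {
          // Rejected by Enum.Parse.
      }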

      Visual Studio 2012 Application Lifecycle Management Virtual Machine and Hands-on-Labs / Demo Scripts

      OData 101: Bin deploying WCF Data Services


      TL;DR: If you’re bin-deploying WCF Data Services you need to make sure that the .svc file contains the right version string (or no version string).

      The idea for this post originated from an email I received yesterday. The author of the email, George Tsiokos (@gtsiokos), complained of a bug where updating from WCF Data Services 5.0 to 5.0.1 broke his WCF Data Service. Upon investigation it became clear that there was an easy fix for the short-term, and perhaps something we can do in 5.1.0 to make your life easier in the long-term.

      What’s Bin Deploy?

      We’ve been blogging about our goals for bin deploying applications for a while now. In a nutshell, bin deploy is the term for copying and pasting a folder to a remote machine and expecting it to work without any other prerequisites. In our case, we clearly still have the IIS prerequisite, but you shouldn’t have to run an installer on the server that GACs the Microsoft.Data.Services DLL.

      Bin deploy is frequently necessary when dealing with hosted servers, where you may not have rights to install assemblies to the GAC. Bin deploy is also extremely useful in a number of other environments as it decreases the barriers to hosting a WCF Data Service. (Imagine not having to get your ops team to install something in order to try out WCF Data Services!)

      Replicating the Problem

      First, let’s walk through what’s actually happening. I’m going to do this in Visual Studio 2012, but this would happen similarly in Visual Studio 2010.

      First, create an Empty ASP.NET Web Application. Then right-click the project and choose Add > New Item. Select WCF Data Service, assign the service a name, and click Add. (Remember that in Visual Studio 2012, the item template actually adds the reference to the NuGet package on your behalf. If you’re in Visual Studio 2010, you should add the NuGet package now.)

      Stub enough code into the service to make it load properly. In our case, this is actually enough to load the service (though admittedly it’s not very useful):

      using System.Data.Services;
      
      namespace OData101.UpdateTheSvcFileWhenBinDeploying
      {
    public class BinDeploy : DataService<DummyContext> { }
      
          public class DummyContext { }
      }

      Press F5 to debug the project. While the debugger is attached, open the Modules window (Ctrl+D,M). Notice that Microsoft.Data.Services 5.0.0.50627 is loaded:

      [screenshot: Modules window showing Microsoft.Data.Services 5.0.0.50627 loaded]

      Now update your NuGet package to 5.0.1 or some subsequent version. (In this example I updated to 5.0.2.) Debug again, and look at the difference in the Modules window:

      [screenshot: Modules window showing both 5.0.2 and 5.0.0.50627 loaded]

      In this case we have two versions of Microsoft.Data.Services loaded. We pulled 5.0.2 from the bin deploy folder and we still have 5.0.0.50627 loaded from the GAC. Were you to bin deploy this application as-is, it would fail on the server with an error similar to the following:

      Could not load file or assembly 'Microsoft.Data.Services, Version=5.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040)

      So why is the service loading both assemblies? If you look at your .svc file (you need to right-click it and choose View Markup), you’ll see something like the following line in it:

      <%@ ServiceHost Language="C#" Factory="System.Data.Services.DataServiceHostFactory, Microsoft.Data.Services, Version=5.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" Service="OData101.UpdateTheSvcFileWhenBinDeploying.BinDeploy" %>

      That 5.0.0.0 string is what is causing the GACed version of Microsoft.Data.Services to be loaded.

      Resolving the Problem

      For the short term, you have two options:

      1. Change the version number to the right number. The first three digits (major/minor/patch) are the significant digits; assembly resolution will ignore the final digit.
      2. Remove the version number entirely. (You should remove the entire name/value pair and the trailing comma.)

      Either of these changes will allow you to successfully bin deploy the service.
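      For illustration, here is what each option might look like applied to the directive above (assuming the assembly version shipped in the 5.0.2 package is 5.0.2.0 - check the actual version of the assembly in your bin folder):

      ```aspx
      <%@ ServiceHost Language="C#" Factory="System.Data.Services.DataServiceHostFactory, Microsoft.Data.Services, Version=5.0.2.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" Service="OData101.UpdateTheSvcFileWhenBinDeploying.BinDeploy" %>
      ```

      Or, with the version name/value pair removed entirely:

      ```aspx
      <%@ ServiceHost Language="C#" Factory="System.Data.Services.DataServiceHostFactory, Microsoft.Data.Services, Culture=neutral, PublicKeyToken=31bf3856ad364e35" Service="OData101.UpdateTheSvcFileWhenBinDeploying.BinDeploy" %>
      ```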

      A Permanent Fix?

      We’re looking at what we can do to provide a better experience here. We believe that we will be able to leverage the PowerShell script functionality in NuGet to at least advise, if not outright change, that value.

      So what do you think? Would you feel comfortable with a package update making a change to your .svc file? Would you rather just have a text file pop up with a message that says you need to go update that value manually? Can you think of a better solution? We’d love to hear your thoughts and ideas in the comments below.

      WCF Data Service 5.0.2 Released


      We’re happy to announce the release of WCF Data Services 5.0.2.

      What’s in this release

      This release contains a number of bug fixes:

      • Fixes NuGet packages to have explicit version dependencies
      • Fixes a bug where WCF Data Services client did not send the correct DataServiceVersion header when following a nextlink
      • Fixes a bug where projections involving more than eight columns would fail if the EF Oracle provider was being used
      • Fixes a bug where a DateTimeOffset could not be materialized from a v2 JSON Verbose value
      • Fixes a bug where the message quotas set in the client and server were not being propagated to ODataLib
      • Fixes a bug where WCF Data Services client binaries did not work correctly on Silverlight hosted in Chrome
      • Allows "True" and "False" to be recognized as Boolean values in ATOM (note that this is more relaxed than the OData spec, but there were known cases where servers were serializing "True" and "False")
      • Fixes a bug where nullable action parameters were still required to be in the payload
      • Fixes a bug where EdmLib would fail validation for attribute names that are not SimpleIdentifiers
      • Fixes a bug where the FeedAtomMetadata annotation wasn't being attached to the feed even when EnableAtomMetadataReading was set to true
      • Fixes a race condition in the WCF Data Services server bits when using reflection to get property values
      • Fixes an error message that wasn't getting localized correctly

      Getting the release

      The release is only available on NuGet. To install the NuGet package, you can use the Package Manager GUI or one of the following commands from the Package Manager Console:

      • Install-Package
      • Update-Package

      Our NuGet package ids are:

      Call to action

      If you have experienced one of the bugs mentioned above, we encourage you to try out these bits. As always, we’d love to hear any feedback you have!

      Bin deploy guidance

      If you are bin deploying WCF Data Services, please read through this blog post.

      Clearing the credentials for connecting to a Team Foundation Server


      Ran into a situation a couple of days ago where I needed to log into a Team Foundation Server as a different user - and since Visual Studio/Team Explorer "remembers" your last credentials, you don't get a chance to re-enter them. This information is cached even after removing and re-adding the server in Team Explorer. Which raises the question - where is it cached?

      [screenshot]

      Turns out we are relying on Windows to do this for us. To dump this cache, all you need to do is go to Control Panel > User Accounts > Manage Your Network Passwords, select the Team Foundation Server entry, and choose Remove - voilà! Next time you go into Team Explorer you will be prompted for a new set of credentials.

      [screenshots: Manage Your Network Passwords dialog]

      Changing fields on the Visual Studio Extension Gallery page


      Had a community member ask:

      I want to change some fields on my Visual Studio Extension Gallery page. These fields do not appear to be editable:

      • “Supported Versions”
      • “Supported Visual Studio 2010 Editions”

       

      Luckily one of our long-time ALM MVPs, Terje Sandstrom, had the answer:

      Make sure to check out his posts on architecture and code analysis:

      How to fix the CA0053 error in Code Analysis in Visual Studio 2012

      Issues with mixed C++ and C# projects in Visual Studio 2012 running Code Analysis

      Video on Architecture and Code Quality using Visual Studio 2012–interview with Marcel de Vries and Terje Sandstrom by Adam Cogan

       

      *******************************************************

       

      This is controlled through the VSIX manifest file:

      In VS 2010, the editor looks like this:

      [screenshot: VSIX manifest editor in VS 2010]

      From your list, you CAN change

      1- Icon

      2- Preview image (screenshot)

      3- You can change the editions of Visual Studio you target.  Note: you only need to check the lowest SKU it applies to; it will automatically apply to every higher SKU

      [screenshot: supported editions in the manifest editor]

      4- And all the other fields you didn’t mention :)

      When uploading this to the gallery, the gallery extracts the information from these fields.

      Then, note that this is how the editor looks in VS 2010, and it doesn’t handle multiple VS versions. But that is just the editor... so open the manifest file again, as an XML file.

      Jakob showed the fields to add, but you need to add those in addition to the VS 2010 fields.  So, it could look like this:

      <SupportedProducts>
        <VisualStudio Version="10.0">
          <Edition>Pro</Edition>
          <Edition>IntegratedShell</Edition>
        </VisualStudio>
        <VisualStudio Version="11.0">
          <Edition>Pro</Edition>
          <Edition>IntegratedShell</Edition>
        </VisualStudio>
      </SupportedProducts>

      Running this gives you this install dialog:

      [screenshot: VSIX install dialog]

      I am not quite sure what you mean by Live ID profile; as far as I know you can’t transfer the person (Live ID) for a gallery entry.

      Another thing: for Visual Studio 2012, the VSIX format has changed.  There is a new VSIX editor too, and in that editor you can add multiple “targets”.

      However, VS 2012 STILL honours and can build the old VSIX format.  That’s very good!

      /terje


      C++/CX Part 0 of [n]: An Introduction


      Hello; I'm James McNellis, and I've recently joined the Visual C++ team as a libraries developer. My first encounter with the C++/CX language extensions was early last year, while implementing some code generation features for the Visual Studio 2012 XAML designer. I started off by hunting for some example code, and suffice it to say that I was a bit surprised with what I first saw. My initial reaction was along the lines of:

      "What the heck are these hats doing in this C++ code?"

      Actually, I was quite worried; because I thought it was C++/CLI—managed code. Not that managed code is bad, per se, but I'm a C++ programmer, and I had been promised native code.

      Thankfully, my initial impression was uninformed and wrong: while C++/CX is syntactically similar to C++/CLI and thus looks almost the same in many ways, it is semantically quite different. C++/CX code is native code, no CLR required. Programming in C++/CLI can be very challenging, as one must deftly juggle two very different object models at the same time: the C++ object model with its deterministic object lifetimes, and the garbage-collected CLI object model. C++/CX is much simpler to work with, because the Windows Runtime, which is based on COM, maps very well to the C++ programming language.

      Windows Runtime defines a relatively simple, low-level Application Binary Interface (ABI), and mandates that components define their types using a common metadata format. C++/CX is not strictly required to write a native Windows Runtime component: it is quite possible to write Windows Runtime components using C++ without using the C++/CX language extensions, and Visual C++ 2012 includes a library, the Windows Runtime C++ Template Library (WRL), to help make this easier. Many of the Windows Runtime components that ship as part of Windows (in the Windows namespace) are written using WRL. There's no magic in C++/CX: it just makes writing Windows Runtime components in C++ much, much simpler and helps to cut the amount of repetitive and verbose code that you would have to write when using a library-based solution like WRL.

      The intent of this series of articles is to discuss the Windows Runtime ABI and to explain what really happens under the hood when you use the C++/CX language constructs, by demonstrating equivalent Windows Runtime components written in C++ both with and without C++/CX, and by showing how the C++ compiler actually transforms C++/CX code for compilation.

      Recommended Resources

      There are already quite a few great sources of information about C++/CX, and I certainly don't intend for a simple series of blog articles to replace them, so before we begin digging into C++/CX, I wanted to start with a roundup of those resources.

      First, if you're interested in the rationale behind why the C++/CX language extensions were developed and how the C++/CLI syntax ended up being selected for reuse, I'd recommend Jim Springfield's post on this blog from last year, "Inside the C++/CX Design". Also of note is episode 3 of GoingNative, in which Marian Luparu discusses C++/CX.

      If you're new to C++/CX (or Windows Store app and Windows Runtime component development in general), and are looking for an introduction to building software with C++/CX, or if you're building something using C++/CX and are trying to figure out how to accomplish a particular task, I'd recommend the following resources as starting points:

      • Visual C++ Language Reference (C++/CX): The language reference includes a lot of useful information, including a C++/CX syntax reference with many short examples demonstrating its use. There's also a useful walkthrough of how to build a Windows Store app using C++/CX and XAML. If you're just starting out, this would be a great place to start.

      • C++ Metro style app samples: Most of the C++ sample applications and components make use of C++/CX and many demonstrate interoperation with XAML.

      • Component Extensions for Runtime Platforms: This used to be the documentation for C++/CLI, but it has since been updated to include documentation for C++/CX, with comparisons of what each syntactic feature does in each set of language extensions.

      • Hilo is an example application, written using C++, C++/CX, and XAML, and is a great resource from which to observe good coding practices—both for modern C++ and for mixing ordinary C++ code with C++/CX.

      • Building Metro style apps with C++ on MSDN Forums is a great place to ask questions if you are stuck.

      Tools for Exploration

      Often, the best way to learn about how the compiler handles code is to take a look at what the compiler outputs. For C++/CX, there are two outputs that are useful to look at: the metadata for the component, and the generated C++ transformation of the C++/CX code.

      Metadata: As noted above, Windows Runtime requires each component to include metadata containing information about any public types defined by the component and any public or protected members of those types. This metadata is stored in a Windows Metadata (WinMD) file with a .winmd extension. When you build a Windows Runtime component using C++/CX, the WinMD file is generated by the C++ compiler; when you build a component using C++ (without C++/CX), the WinMD file is generated from IDL. WinMD files use the same metadata format as .NET assemblies.

      If you want to know what types have been fabricated by the C++ compiler to support your C++/CX code, or how different C++/CX language constructs appear in metadata, it is useful to start by inspecting the generated WinMD file. Because WinMD files use the .NET metadata format, you can use the ildasm tool from the .NET Framework SDK to view the contents of a WinMD file. This tool doesn't do much interpretation of the data, so it can take some getting used to how it presents data, but it's very helpful nonetheless.

      Generated Code: When compiling C++/CX code, the Visual C++ compiler transforms most C++/CX constructs into equivalent C++ code. If you're curious about what a particular snippet of C++/CX code really does, it's useful to take a look at this transformation.

      There is a top-secret compiler option, /d1ZWtokens, which causes the compiler to print the generated C++ code that it generated from your C++/CX source. (Ok, this compiler option isn't really top secret: Deon Brewis mentioned it in his excellent //BUILD/ 2011 presentation, "Under the covers with C++ for Metro style apps." However, do note that this option is undocumented, and thus it is unsupported and its behavior may change at any time.)

      The output is intended for diagnostic purposes only, so you won't be able to just copy and paste the output and expect it to be compilable as-is, but it's good enough to demonstrate how the compiler treats C++/CX code during compilation, and that makes this option invaluable. The output is quite verbose, so it is best to use this option with as small a source file as possible. The output includes any generated headers, including the implicitly included <vccorlib.h>. I find it's often best to use types and members with distinctive names so you can easily search for the parts that correspond to your code.

      There are two other useful compiler options, also mentioned in Deon's presentation, which can be useful if you want to figure out how class hierarchies and virtual function tables (vtables) are laid out. The first is /d1ReportAllClassLayout, which will cause the compiler to print out the class and vtable layouts for all classes and functions in the translation unit. The other is /d1ReportSingleClassLayoutClyde which will cause the compiler to print out the class and vtable layouts for any class whose name contains "Clyde" (substitute "Clyde" for your own type name). These options are also undocumented and unsupported, and they too should only be used for diagnostic purposes.

      Next Up...

      In our next article (which will be the first "real" article), we'll introduce a simple C++/CX class and discuss how it maps to the Windows Runtime ABI.

      Exploring Device Orientation and Motion


      Today, we released a prototype implementation of the W3C DeviceOrientation Event Specification draft on HTML5Labs.com. This specification defines new DOM events that provide information about the physical orientation and motion of a device. Such APIs will let Web developers easily deliver advanced Web user experiences leveraging modern devices' sensors.

      How This Helps Developers

      With the Device Orientation API, developers can explore new input mechanisms for games, new gestures for apps (such as “shake to clear the screen” or “tilt to zoom”) or even augmented reality experiences. The prototype’s installation includes a sample game to get you started in understanding the API.


      Video showing the concepts explained in this post in action

      How This Works

      The Device Orientation API exposes two different types of sensor data: orientation and motion.

      When the physical orientation of the device is changed (e.g. the user tilts or rotates it), the deviceorientation event is fired at the window and supplies the alpha, beta, and gamma angles of rotation (expressed in degrees):

      Diagram showing the alpha, beta, and gamma angles of rotation returned in the deviceorientation event related to 3D X, Y, and Z axes: alpha = rotate around the Z axis, beta = X axis, and gamma = Y axis.

      <div id="directions"></div>

      <script>
          window.addEventListener("deviceorientation", findNorth);

          function findNorth(evt) {
              var directions = document.getElementById("directions");
              if (evt.alpha < 5 || evt.alpha > 355) {
                  directions.innerHTML = "North!";
              } else if (evt.alpha < 180) {
                  directions.innerHTML = "Turn Left";
              } else {
                  directions.innerHTML = "Turn Right";
              }
          }
      </script>
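      One detail worth noting when comparing angles like this: alpha is reported in degrees in the [0, 360) range, so if your own calculations produce negative or larger values (say, when applying a heading offset), normalize them before comparing. A small hypothetical helper:

      ```javascript
      // Hypothetical helper (not part of the API): wrap an angle in degrees
      // into the [0, 360) range used by the deviceorientation alpha value.
      function normalizeDegrees(deg) {
          return ((deg % 360) + 360) % 360;
      }

      // e.g. normalizeDegrees(-90) === 270, normalizeDegrees(365) === 5
      ```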

      When a device is being moved or rotated (more accurately, accelerated), the devicemotion event is fired at the window and provides acceleration (both with and without the effects of gravitational acceleration on the device, expressed in m/s²) along the x, y, and z axes, as well as the rate of change of the alpha, beta, and gamma rotation angles (expressed in deg/s):

      Diagram illustrating the gravitational acceleration on the device returned by the devicemotion event in the x, y, and z axis.

      <div id="status"></div>

      <script>
          window.addEventListener("devicemotion", detectShake);

          function detectShake(evt) {
              var status = document.getElementById("status");
              var accl = evt.acceleration;
              if (accl.x > 1.5 || accl.y > 1.5 || accl.z > 1.5) {
                  status.innerHTML = "EARTHQUAKE!!!";
              } else {
                  status.innerHTML = "All systems go!";
              }
          }
      </script>
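      The per-axis comparison above can miss a shake whose force is spread across axes. One hedged alternative (plain helper functions, not part of the spec) is to threshold the overall magnitude of the acceleration vector instead:

      ```javascript
      // Compute the magnitude (in m/s^2) of the acceleration vector reported
      // in a devicemotion event's acceleration object.
      function accelerationMagnitude(accl) {
          return Math.sqrt(accl.x * accl.x + accl.y * accl.y + accl.z * accl.z);
      }

      // Treat anything above the threshold as a shake.
      function isShake(accl, threshold) {
          return accelerationMagnitude(accl) > threshold;
      }
      ```

      For example, an acceleration of 1 m/s² on each axis never trips the per-axis 1.5 m/s² check, but its magnitude (about 1.73 m/s²) would register as a shake here.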

      Trying Out The Prototype

      You can download the prototype at HTML5Labs. This prototype requires Internet Explorer 10 running on devices with accelerometer sensors supported by Windows 8. The prototype works as an extension to Internet Explorer on the desktop, where developers can get a first-hand look at the APIs. To get started building your own pages with the prototype, all you need to do is install the prototype and then include a reference to the DeviceOrientation.js script file (copied to the desktop after installing the prototype):

      <script type="text/javascript" src="DeviceOrientation.js"></script>

      We Want Your Feedback

      We want to hear from developers on this prototype implementation of the W3C Device Orientation Event Specification, so please let us know what you think by commenting on this post or sending us a message.

      —Abu Obeida Bakhach, Program Manager, Microsoft Open Technologies Inc.
      Jacob Rossi, Program Manager, Internet Explorer

      Getting Started: XAML Authoring with Blend for VS 2012


      Now that you have your copy of Visual Studio 2012 including Blend, you must be raring to go and explore the world of Windows 8 development. We hope this post helps all the C#/VB & C++ junkies out there get started pronto on creating amazing apps for the Windows Store.

      Who doesn’t like food? Trying to find the perfect recipe to cook for dinner is a decision we battle every day, so one of our sister teams at Microsoft created this awesome Contoso Cookbook sample app. For the purposes of this post, we will take this much-loved sample application, already oozing with design goodness, and add a bit more styling and flair to it.

      Download the Contoso sample app uploaded to accompany this tutorial.

      Contoso Cookbook Sample

      Contoso cookbook app

      Contoso after a bit of styling with the XAML design tools

      All we did to style this page was 4 things:

      1. Add a more vibrant background.
      2. Edit the style of Items in the list of recipes (a.k.a ItemTemplate of the GridView).
      3. Edit the style of the Group (a.k.a GroupStyle of the GridView).
      4. Add a snazzy Windows 8 animation to the background image.

      Changing the background

      The below video demonstrates some aspects of designing layout in Blend and working with a wide variety of brushes.

      Tips & Tricks

      • Did you know Blend enables you to do Brush Transforms using the Brush Transform tool?
      • You can even select colors from a snapped Windows Store app using Blend’s eyedropper tool. Try it out.
      • Visual Studio now has a good picture editor built in. Double click on any image to open it in the picture editor.
      • Check out Ctrl+0 - it’s the single best designer shortcut you can learn.

      Learn how to work with brushes, resources, layout & the eyedropper tool.

      Editing an Item Template

      The below video demonstrates some aspects of working with Item Templates in XAML tools.

      Tips & Tricks

      • Changing the panel type in an ItemsPanelTemplate can be hard because the platform sometimes throws an exception if you delete the panel and try adding a new one. A better way is to use the Change Layout command in the context menu of the panel to change the panel type to the one you want.
      • Did you know you can use the data binding dialog in both Blend & VS even if you set the DataContext in the code-behind? (Hint: set the d:DataContext property, which is fully supported by IntelliSense; the data binding dialog will then treat it as the data context of the scope at design time while leaving the runtime behavior unchanged.)
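      For instance, a page root might declare a design-time context like this (the ContosoCookbook namespace and the DesignTimeRecipes type are purely hypothetical placeholders for your own design data):

      ```xml
      <Page
          x:Class="ContosoCookbook.ItemsPage"
          xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
          xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
          xmlns:local="using:ContosoCookbook"
          xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
          xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
          mc:Ignorable="d"
          d:DataContext="{d:DesignInstance local:DesignTimeRecipes}">
          <!-- page content -->
      </Page>
      ```

      Because the d: namespace is marked mc:Ignorable, this context only exists on the design surface and is stripped at runtime.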

      Learn about designing templates, resource scoping & management.

      Editing a GroupStyle

      The below video demonstrates some aspects of working with states & GroupStyles in XAML tools.

      Tips & Tricks

      In case you have multiple GroupStyles and you would like to design them all in the designer, use the Collection Editor of the GroupStyle property to move the desired GroupStyle to index 0, then use the Edit GroupStyle command to design that style to your heart’s content.
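      For reference, a minimal GroupStyle on a GridView looks something like this in XAML (the Title binding and the GroupHeaderTextStyle resource are assumptions about your data source and resource dictionary):

      ```xml
      <GridView.GroupStyle>
          <GroupStyle>
              <GroupStyle.HeaderTemplate>
                  <DataTemplate>
                      <TextBlock Text="{Binding Title}" Style="{StaticResource GroupHeaderTextStyle}" />
                  </DataTemplate>
              </GroupStyle.HeaderTemplate>
          </GroupStyle>
      </GridView.GroupStyle>
      ```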

      Learn about designing GroupStyles.

      Working with the Windows 8 animation library

      Tips & Tricks

      • Did you know that you can copy one state to another? Try out the CopyState command in the States pane, which you can access by bringing up the context menu on a state - this is a huge productivity booster.
      • Blend enables you to create in-state animations, e.g. pulsing a button when selected, using the Objects & Timeline pane. While in a state, select the timeline icon in the Objects & Timeline pane to start creating in-state animations.

      Learn about ThemeAnimations & ThemeTransitions.

      Designing View States

      The below video demonstrates some aspects of working with view states in Blend.

      Tips & Tricks

      • Did you know that the BasicPage item template in the Add New Item dialog has built-in view state awareness?

      Learn more about working with  Application View States.

      Thanks,
      Harikrishna Menon, Blend Program Manager

      Getting Started: Authoring HTML for Windows 8 Apps using Blend for VS 2012


      Almost two years ago, we set out on a path towards building a flavor of Blend that allowed you to seamlessly craft HTML, CSS, and JavaScript into Metro style applications. We are thrilled to announce that the final, released version of Blend is available for you to try out. As in previous releases, there is no more need to download Blend separately. It is now part of every VS download that targets any of the supported platforms. You can learn more from our launch blog post.

      Download Visual Studio 2012 with Blend and Windows 8.  MSDN Subscriber Downloads has several versions available, and there are free downloads available for non-subscribers.

      Getting Blend installed and running is only one part of it. Using Blend in conjunction with Visual Studio to build great Windows applications is the other part, and that is where this post comes in. In this post, you will get an overview of Blend and the great features it provides for allowing you to build great Windows applications.

      Hello World

      What better place to start than with the classic hello world example? Learn the basics of the Blend UI and how to start working with the content creation and styling features you have available:

      Building a Windows Application

      Let’s go a bit further now and look at how to work with controls, data, templates, layout, and more – all of which will play a crucial role as you build your own Windows applications.

      Designing for Devices

      Because your applications will be running on devices other than just a laptop or a desktop, let’s see how Blend can be used to help you design for multiple orientations, screen sizes, and view states:

      Further Reading

      Hopefully the videos helped get you excited about using Blend and Visual Studio to design and develop great Windows applications. To help you on your journey, be sure to check out the following links as well:

      1. UX Guidelines for Windows 8 app Development
      2. Guidance and Best Practices for building your Windows 8 UI
      3. Browse and Download HTML/CSS/JavaScript Samples
      4. Give and receive help on the forums

      As always, we are looking forward to hearing from you. Please let us know what you think by commenting below or posting in our forums.

      Cheers,

      Kirupa Chinnathambi, Program Manager

       

      Getting Symbols and Source with ASP.NET Nightly NuGet Packages


      You can now get full symbols and source along with the nightly NuGet packages making it possible to debug the latest MVC, Web API, and Web Pages bits by tracing directly through the source. This is enabled by SymbolSource, which hosts the symbols and source for the nightly NuGet packages, and MyGet which hosts the nightly NuGet feed. Great services!

      If you want to use the nightly NuGet packages then please see Using Nightly ASP.NET Web Stack NuGet Packages for getting started. Please remember that the nightly NuGet packages are “raw” and come with no guarantees.

      Configuring Visual Studio

      The instructions apply to both Visual Studio 2010 and 2012 and work in both full and Express editions.

      First open the Debug | Options and Settings menu, go to the General tab, and do the following:

      1. Uncheck Enable Just My Code
      2. Check Enable source server support
      3. Uncheck Require source files to exactly match the original version

      It should look something like this:

      [screenshot: the Debugging / General options page]

       

      Now go to the Symbols tab and add http://srv.symbolsource.org/pdb/MyGet to the list of symbol locations. Also, make sure you have a short path for the symbols cache, as the file names can otherwise get too long, resulting in the symbols not loading properly. A suggestion is to use C:\SymbolCache. It should look something like this:

      [screenshot: the Debugging / Symbols settings]

      For more details, please check out these instructions from symbolsource.org.

      Trying it Out

      Let’s try this out on the Validation Sample project, which is one of the ASP.NET Web API samples. After updating the NuGet packages to use the nightly feed, start the debugger. First we set a breakpoint in the sample’s PostValidCustomer method, which is hit as expected:

      [screenshot: breakpoint hit in PostValidCustomer]

      Now we hit F11 to step into the next statement and it takes us to the JsonMediaTypeFormatter class in ASP.NET Web API:

      [screenshot: stepped into JsonMediaTypeFormatter]

      If you have issues or questions then please follow up on the Symbols for nightly builds available discussion thread.

      Have fun!

      Henrik
