Dec 17, 2013
 

A while back, I blogged on the topic of implementing a custom error page in an MVC application.  Considering the amount of traffic this post gets each month, it seems like it’s been a pretty useful resource for a number of folks.  Recently, a comment was left on this post asking how it addresses the issue of errors encountered in an AJAX request.  The answer to that question is very simple; it doesn’t address AJAX requests.  The code I included in that post would return a view for any error encountered in the application.  In this day and age of JavaScript and AJAX heavy web applications, this simply isn’t an acceptable solution.  Any approach for handling custom errors really needs to handle AJAX requests in a meaningful way.  With that in mind, I’ve made a few changes to the code that will handle errors in AJAX requests in a much nicer way.

Let’s start by taking a look at the ErrorController class first:
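A sketch of what that controller might look like, based on the description that follows (the view name and the exact shape of the JSON payload are illustrative choices):

```csharp
using System;
using System.Web.Mvc;

public class ErrorController : Controller
{
    // isAjaxRequest tells us whether the request that blew up was an AJAX call
    public ActionResult Index(Exception exception, int statusCode, bool isAjaxRequest)
    {
        Response.StatusCode = statusCode;

        if (!isAjaxRequest)
        {
            // Non-AJAX requests get the error view, exactly as before
            return View("Error");
        }

        // AJAX requests get the exception message back as JSON instead of HTML
        var errorInfo = new { message = exception.Message };
        return Json(errorInfo, JsonRequestBehavior.AllowGet);
    }
}
```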

The first change I’ve made is to add a bool parameter to the Index method named isAjaxRequest.  As you can probably guess, this indicates whether the request that caused the error was an AJAX request or not.  If it wasn’t an AJAX request then we just return the error view as we were doing before.  However, if the request was an AJAX request, we create an instance of an anonymous type that contains the message from the exception.  We then return this object as JSON, making sure we set the JsonRequestBehavior to AllowGet.  I’m opting to return the exception message as JSON simply for demo purposes; this may or may not be appropriate in your application, so you’ll have to make that call.

Next let’s take a look at Application_Error in Global.asax:
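A sketch of that method, assuming the ErrorController signature discussed above (IsAjaxRequest() is the extension method from System.Web.Mvc; the 500 fallback for non-HTTP exceptions is an assumption):

```csharp
protected void Application_Error(object sender, EventArgs e)
{
    var exception = Server.GetLastError();
    var httpException = exception as HttpException;
    Server.ClearError();

    // Build route data pointing at ErrorController.Index
    var routeData = new RouteData();
    routeData.Values["controller"] = "Error";
    routeData.Values["action"] = "Index";
    routeData.Values["statusCode"] = httpException != null ? httpException.GetHttpCode() : 500;
    routeData.Values["exception"] = exception;
    // IsAjaxRequest() is an extension method defined in System.Web.Mvc
    routeData.Values["isAjaxRequest"] = new HttpContextWrapper(Context).Request.IsAjaxRequest();

    // Hand the request off to the ErrorController
    IController controller = new ErrorController();
    controller.Execute(new RequestContext(new HttpContextWrapper(Context), routeData));
}
```

This lives in Global.asax.cs and needs the System.Web, System.Web.Mvc and System.Web.Routing namespaces in scope.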

Application_Error is the method that’s going to be called anytime there’s an unhandled exception in your application.  Essentially, what we want to do in Application_Error is route the request to our ErrorController.  To that end, we set up a RouteData object and add items to the Values collection for the controller name, the action name, the status code we want to return, the exception object and the new isAjaxRequest parameter we added to ErrorController.Index.

The last item to look at is how we deal with the error information returned from an AJAX request:
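One way to wire this up on the view might look like the following; the element ids (throw-error, error-message) and the controller action that raises the exception are assumptions:

```javascript
// Pull the error message out of the XMLHttpRequest's responseText
function parseErrorMessage(xhr) {
    var errorInfo = JSON.parse(xhr.responseText);
    return errorInfo.message;
}

// Wire up the click handler when jQuery is available
if (typeof window !== 'undefined' && window.jQuery) {
    $(function () {
        $('#throw-error').click(function () {
            // This controller action is assumed to throw an exception server-side
            $.get('/Home/ThrowError')
                .error(function (xhr) {
                    // Display the message field from the JSON the ErrorController returned
                    $('#error-message').text(parseErrorMessage(xhr));
                });
        });
    });
}
```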

In the Index view, I’ve added a click handler that attempts to do an AJAX GET from a controller action that raises an exception.  To this GET call, I’ve chained the jQuery error handler, which will be invoked if the GET request returns a non-success HTTP status code.  The error handler function gets the XMLHttpRequest object passed in as a parameter.  One of the fields on this object, responseText, contains the JSON our ErrorController returned.  All we need to do is parse that string as JSON.  In this case I’m appending the text in the JSON object’s message field to a div for display purposes.

I’ve updated the original GitHub repo with these changes.  Hopefully adding support for handling AJAX requests will make this even more useful!

Dec 13, 2013
 

At long last, I’m finally able to wrap up the last post in my series on using Windows Azure Mobile Services for authentication with Xamarin.iOS.  As I mentioned in my post the other day, I never imagined it would be over six months before I got around to wrapping up the series, so again, apologies to those who have been waiting patiently on this.  So, without further delay, let’s dive into the nuts and bolts of custom registration and authentication of users via Azure Mobile Services.

If you’ve worked with Azure Mobile Services or read my previous post, you’ll know that AMS supports identity via a provider model.  This model supports authenticating users via a set of social providers that currently includes Microsoft’s Live, Google, Twitter and Facebook.  Personally, I think this is a great approach for handling identity.  First, managing users and authentication is not trivial.  It’s definitely not something you want to get wrong in this age of data theft.  Second, if you’re building a mobile app, I’d say there’s a better than good chance that your users will have an account with at least one of the currently supported identity providers.  I’m guessing most users would rather use an existing credential than create yet another account that they’ll have to remember.  With all this in mind, it’s still possible that you may end up building an app where using one of the social providers isn’t a viable option.  So, if you find yourself in such a situation, how can you still authenticate users and ensure that only authenticated users have access to your data?  Like all problems that we as developers tackle, I’m sure there are at least a few ways this problem can be approached.  Fortunately, Chris Risner, a technical evangelist at Microsoft, has come up with one approach.  Unfortunately, his solution has way more square brackets (Objectionable-C, er, Objective-C) than I personally care for :)  So with that in mind, I’m going to leverage the fine work that Chris did on the backend and implement the app in Xamarin.

Setting up the Service

The first thing you’ll need to do if you don’t already have an existing service will be to create a new Azure Mobile Service.  If you’re unsure of how to do that, Microsoft has a nice guide that you can follow.

Once we have a service created, we need to add two tables to it.  The first table we’ll create should be named Account.  This table, not surprisingly, will store account data (usernames, hashed/salted passwords, etc).  The second table should be named TestData.  We’ll use this table to demonstrate that only authenticated users can read data.  Once you’ve completed this step, you should see something like the following under the Data tab for your service in the Azure portal:

Azure Mobile Services tables

 

The next thing we need to do is add a script to the Account table.  Mobile Services tables allow you to specify scripts for CRUD operations.  This allows you to execute JavaScript code anytime a create/read/update/delete operation is performed against your table.  In this case, we’ll be adding a script for the Insert operation on the Account table.  The script looks like this:
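A sketch of the insert script, consistent with the walkthrough that follows; hash, createSalt and createToken are assumed helper functions standing in for the crypto/JWT code in Chris Risner’s original, and the length checks are illustrative:

```javascript
function insert(item, user, request) {
    var accounts = tables.getTable('Account');

    if (request.parameters.login === 'true') {
        // Login: find an existing account by username
        accounts.where({ username: item.username }).read({
            success: function (results) {
                if (results.length === 0) {
                    request.respond(statusCodes.UNAUTHORIZED, 'Incorrect username or password');
                    return;
                }
                var account = results[0];
                // Hash the submitted password with the stored salt and compare
                if (hash(item.password, account.salt) === account.password) {
                    request.respond(statusCodes.OK, { userId: account.id, token: createToken(account.id) });
                } else {
                    request.respond(statusCodes.UNAUTHORIZED, 'Incorrect username or password');
                }
            }
        });
    } else {
        // Registration: validate the credentials first
        if (!item.username || item.username.length < 4 || !item.password || item.password.length < 8) {
            request.respond(statusCodes.BAD_REQUEST, 'Invalid username or password');
            return;
        }
        accounts.where({ username: item.username }).read({
            success: function (results) {
                if (results.length > 0) {
                    request.respond(statusCodes.BAD_REQUEST, 'That username is already taken');
                    return;
                }
                // Salt and hash the password before storing the new account
                item.salt = createSalt();
                item.password = hash(item.password, item.salt);
                request.execute({
                    success: function () {
                        request.respond(statusCodes.CREATED, { userId: item.id, token: createToken(item.id) });
                    }
                });
            }
        });
    }
}
```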

There’s a lot going on in this script, so let’s take a quick spin through it.  The first thing we do is check whether the parameters object has a field named login that is equal to “true”.  The parameters object is made up of query string name/value pairs that were included in the URL that triggered the script execution.  If the login field is present and equal to “true” then we proceed with attempting to log in an existing user:

  • We attempt to find a match in the Account table based on the username field.  If we don’t find a match then we return a 401 (unauthorized).
  • If we do find a match then we hash the password that was sent as an input parameter to the script along with the salt from the row in the table and see if that matches the hash we have stored in the table.
  • If the hashes match then we create a new authentication token and return a 200 response, along with the token.  If the hashes don’t match then we return a 401.

If the login field wasn’t present in the parameters arg then we are creating a new user:

  • We first check to see if the username and password match requirements and return 400 (bad request) if they don’t.
  • Assuming the username/password are good, we proceed to check whether the username already exists or not.  If it does, we return 400.
  • If not, we create a salt, hash the password, store the data in the table and return 201 (created), along with an authentication token.

Before we proceed with the client, we need to do one more thing on the service side.  To ensure that only authenticated users can read data from our TestData table, we need to ensure the permissions are locked down.  We want to set the permissions for all operations on this table to Only Authenticated Users.

TestData table permissions

 

This will ensure that only users that have received an authentication token via the insert script we looked at previously will be able to perform any operation on this table.

The Client

With our tables and scripts set up on the service side, we are ready to move on to the client.  For the purposes of this post, I put together a very bare-bones Xamarin iPhone app.  The app has the following functionality:

  • The ability to register a new user with username/password/email address
  • The ability to login with credentials created on the registration screen
  • The ability to create and retrieve data from our restricted TestData table

Rather than going through all of the app, I’m just going to focus in on the pieces that are specific to interacting with the Account table we created for handling registration/authentication and retrieving data from the protected TestData table.

If we look back at the script we created on the Account table, you can see that registering a new user is handled when an insert is performed on the table without passing the login parameter in the query string.  In general, doing an insert with the Azure Mobile Services SDK looks something like this:
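A typical insert with the SDK, sketched with a placeholder service URL and application key and an assumed Account entity class:

```csharp
using Microsoft.WindowsAzure.MobileServices;

var client = new MobileServiceClient(
    "https://yourservice.azure-mobile.net/",   // placeholder service URL
    "your-application-key");                   // placeholder application key

var account = new Account
{
    Username = "someuser",
    Password = "secret",
    Email = "user@example.com"
};

// POSTs the item to the Account table, which triggers the insert script
await client.GetTable<Account>().InsertAsync(account);
```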

The problem here is that InsertAsync doesn’t hand back the raw response, yet we need to get at the value of the authentication token that the service is returning.  If InsertAsync isn’t returning anything we can use, how can we get the token value?  It turns out that the Mobile Services SDK has an extension point in the form of System.Net.Http.HttpMessageHandler.  Subclassing HttpMessageHandler allows us to inspect and modify HttpRequestMessages before they are sent to our Azure Mobile Service endpoint and inspect/modify HttpResponseMessages after they are returned from our Mobile Service.  When we create instances of MobileServiceClient from the SDK, there are several different constructor overloads, some of which allow for passing in an array of HttpMessageHandler instances that will be used when sending/receiving the HTTP messages to/from the Mobile Service endpoint.

The approach I came up with for getting the authentication token relies on creating an HttpMessageHandler subclass that inspects messages received from the service, retrieving the auth token from the response message and then storing that token in a static variable which can be used later when sending messages to the Mobile Service endpoint for protected tables.

The implementation of this message handler is quite simple:
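A sketch of the handler, built as a DelegatingHandler (the HttpMessageHandler subclass that lets us call base.SendAsync); the “token” field name matches the script’s response, and AccountService.AuthenticationToken is the static property mentioned below:

```csharp
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using Newtonsoft.Json.Linq;

public class AuthenticationHandler : DelegatingHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // Let the message continue through the pipeline
        var response = await base.SendAsync(request, cancellationToken);

        if (response.IsSuccessStatusCode)
        {
            // Parse the response body and stash the auth token for later use
            var content = await response.Content.ReadAsStringAsync();
            var json = JObject.Parse(content);
            var token = json["token"];
            if (token != null)
            {
                AccountService.AuthenticationToken = (string)token;
            }
        }

        return response;
    }
}
```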

Looking through the code, we simply pass the request message on to the base SendAsync method which allows the message to proceed through the pipeline.  When we get the response, we parse the contents of the body into a JObject, retrieve the value of the “token” field and store that in a static property on a class called AccountService.

Looking at the code for the AccountService class, you can see how we use AuthenticationHandler:
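A sketch of the service; the IMobileServiceClientFactory abstraction, the Account entity and the InsertAsync overload taking query string parameters are assumptions based on the description:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

public class AccountService
{
    // Token captured by AuthenticationHandler after a successful insert
    public static string AuthenticationToken { get; set; }

    private readonly IMobileServiceClientFactory _clientFactory;

    public AccountService(IMobileServiceClientFactory clientFactory)
    {
        _clientFactory = clientFactory;
    }

    // Registration: a plain insert, no login parameter
    public Task RegisterAsync(Account account)
    {
        return DoInsert(account, login: false);
    }

    // Login: the same insert, but with login=true on the query string
    public Task LoginAsync(Account account)
    {
        return DoInsert(account, login: true);
    }

    private Task DoInsert(Account account, bool login)
    {
        // The factory passes our handler into the MobileServiceClient constructor
        var client = _clientFactory.Create(new AuthenticationHandler());
        var parameters = login
            ? new Dictionary<string, string> { { "login", "true" } }
            : null;
        return client.GetTable<Account>().InsertAsync(account, parameters);
    }
}
```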

In the DoInsert method, we create a new instance of AuthenticationHandler and pass that into the constructor of MobileServiceClient via the factory class I’m using to create MobileServiceClient instances.  This ensures that the AuthenticationHandler instance will be invoked for any operation sent to the Mobile Service endpoint.

Now that we have our authentication token, we need a way to use it when we are performing operations against protected tables.  It turns out that Azure Mobile Services uses this token by sending it from the client to the service endpoint as an HTTP header named X-ZUMO-AUTH, Zumo being the Microsoft codename for Azure Mobile Services (for aZUre MObile, perhaps?).  So, how do we go about setting an HTTP header when using the Azure Mobile Services SDK?  Once again, HttpMessageHandler is our friend.  For this I created a new HttpMessageHandler that would take care of setting the header value before sending the message to the service endpoint.  The code for this handler looks like so:
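A sketch of the header-setting handler; AccountService.AuthenticationToken is the static property where the login token was stashed:

```csharp
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

public class ZumoAuthHeaderHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // Attach the token captured at login/registration to the outgoing request
        request.Headers.Add("X-ZUMO-AUTH", AccountService.AuthenticationToken);
        return base.SendAsync(request, cancellationToken);
    }
}
```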

This handler is even simpler: we just add a header name and value to the Headers collection before we send the message on through the pipeline.

Using this handler looks like so:
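A sketch along the same lines as the AccountService, again with the assumed factory abstraction and with TestData standing in for the entity class backing the protected table:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

public class TestDataService
{
    private readonly IMobileServiceClientFactory _clientFactory;

    public TestDataService(IMobileServiceClientFactory clientFactory)
    {
        _clientFactory = clientFactory;
    }

    public Task<List<TestData>> GetTestDataAsync()
    {
        // ZumoAuthHeaderHandler adds the X-ZUMO-AUTH header to every request,
        // so reads against the protected table succeed for authenticated users
        var client = _clientFactory.Create(new ZumoAuthHeaderHandler());
        return client.GetTable<TestData>().ToListAsync();
    }
}
```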

Similar to how we used the handler in AccountService, we just instantiate the ZumoAuthHeaderHandler class and pass it into the MobileServiceClient constructor via the factory class.

At this point, with all of this plumbing in place, you should be able to register new users, authenticate those registered users and restrict table access to only authenticated users.

Code for the solution I put together for this post can be found here:  https://github.com/13daysaweek/AzureCustomAuth.git

Dec 11, 2013
 

I know there are at least a couple of folks that have been waiting patiently for this third part so I wanted to put a quick note here to say that it is indeed in progress.  Way back in April when I started working on this series of posts, I never intended to have such a long delay before I got around to finishing the series.  Unfortunately, as is so often the case, life got in the way of things.  It’s been an incredibly busy year for me on a personal level.  I’ll spare you the details but will say that it’s all been positive stuff: moving in with my then-fiancée, moving again, this time with her, to a new house and finally, getting married to her back in September.  With all of this going on, personal projects have taken a backseat.  Now that things have settled down, I’m excited to have some time to work on stuff like blogging about WAMS authentication.

So, here’s where things stand.  I have a (very quick and dirty) Xamarin.iOS project that uses WAMS to register custom users as well as authenticate those users.  Additionally, I’m able to retrieve an authentication token from WAMS and use that on subsequent API calls so that I can restrict table access to authenticated users.  There are still a few things I need to do but things are looking good so I’m hoping to have this wrapped up at some point this week.

Thanks again for the patience as I’ve left this third part hanging!

Nov 05, 2013
 

This morning I ran into a problem that left me stumped for the better part of an hour.  After I got to the bottom of it, I figured I’m probably not the only person to have run into this issue so I thought I’d write up a little something on it in hopes of saving someone else some time or at the very least saving myself some time when this comes up again.

A little background:  The project I’m currently working on involves retrieving some data from a third party API and then processing that data.  The data returned by this API is fairly complex.  What the data is isn’t really important to this post, however the data returned can be classified into one of six different entities depending on the structure of the JSON returned by the API.  These six different entities have some similarities in their structure but also some significant differences.  In the end, I was able to create an object model to encapsulate the common properties in a base class.  This base class is generic.  The type parameter of this generic base refers to the specific type of entity.  The base class then exposes a property that is typed based on the generic type parameter.  This is simplifying things somewhat but again, the specifics aren’t really important to this post.

Some of the entities returned by this API aren’t able to be processed immediately.  Those objects are serialized to XML and stored in a SQL table in a varchar(max) column, along with another varchar field that stores the CLR type that is contained in the serialized XML.  A value in this type field might look something like “MyProject.Domain.SomeEntity, MyProject.Domain”.  The idea is that the serialized XML and type string can be retrieved from the database later and then, using the type string, I can deserialize the XML into an object instance.  Before we proceed, let me just clear one thing up:  Yes, I know it’s weird retrieving JSON from an API and then storing it in a database as XML in a varchar field.  That being said, again for reasons that aren’t important to this post, there is a good reason for storing this in the database as XML.

This was all working quite well for me up until this morning.  Unfortunately, as I was driving in to work, I was thinking about a choice I made yesterday while writing this deserialization code and decided I wasn’t happy with it.  Yesterday when I wrote the code to deserialize these XML strings, I wasn’t sure where to put it so I stuck it in a static method in my base domain class.  There were a few reasons why I didn’t like this but the one that was really getting at me was the lack of testability.  Stuffing this code into a static on my domain class meant I had no way of mocking calls to this function.  I decided since I already had a class for parsing the JSON returned by the API, it wasn’t unreasonable to add a method to this class to handle the deserialization from XML.  So, with that in mind, I went ahead and moved the deserialization code from my base entity in my domain project to my parser class in my core services project.  I then proceeded to move my tests for the parsing code to their new home and watched as all my previously green tests for deserializing XML to object instances went red.

I looked at the stack trace for my exception and found that I was getting an ArgumentNullException in my deserialization code.  The class was storing a static Dictionary<Type, DataContractSerializer> and the exception was being raised when I was calling ContainsKey(Type t) on the Dictionary.  This seemed like a very odd place for the code to blow up.  It had been working prior to my refactoring so I wasn’t sure why the Type I was passing into ContainsKey would be null.  Tracing back in my code, I was creating the instance of the Type variable by parsing the Type string I’d stored in the database with a call to Type.GetType(string s).  I thought I must have screwed something up in my code when I moved it from the base entity class to the parser class so I stepped through it in the debugger.  Unfortunately, everything looked like I thought it should.  The string I was passing into Type.GetType(string s) looked fine.

After trying a number of different things, I finally tracked down my problem.  The issue was that the value I was storing in the database for my type name was coming from Type.FullName.  After looking at MSDN, things were starting to make sense.  According to MSDN, Type.FullName returns a string that includes the type and the namespace but not the assembly.  Ok, that’s great, but why isn’t that enough to load a type?  Well, it is enough to load a type, if  that type is in the same assembly as the code that is trying to load the type.  Recall that before my refactor, the code to deserialize the XML was in the same base entity type that was being deserialized.  If you look at the documentation for Type.GetType, “If typeName includes the namespace but not the assembly name, this method searches only the calling object’s assembly and Mscorlib.dll, in that order. If typeName is fully qualified with the partial or complete assembly name, this method searches in the specified assembly. If the assembly has a strong name, a complete assembly name is required.”  This was starting to make a bit more sense but I still thought my type string included the assembly name.  Well, in my case, since I was dealing with a generic type, Type.FullName assembly qualifies any generic type parameters but not the generic type itself.  To get the assembly qualified name, whether you’re using a generic type or not, you need to use the aptly named Type.AssemblyQualifiedName property.

Let’s take a look at some code to demonstrate all of this.  Consider this very simple set of domain classes I put together:
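A minimal pair of classes along these lines (the names are illustrative); EntityBase<TEntity> lives in the domain assembly:

```csharp
namespace MyProject.Domain
{
    // Generic base; the type parameter identifies the specific entity type
    public class EntityBase<TEntity>
    {
        public TEntity Entity { get; set; }
    }

    // One of the concrete entity types
    public class SomeEntity
    {
        public string Name { get; set; }
    }
}
```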

And now, some tests to demonstrate parsing types based on Type.FullName and Type.AssemblyQualifiedName:
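A sketch of the tests, assuming NUnit and, crucially, that the test assembly is separate from the assembly containing the domain classes:

```csharp
using System;
using MyProject.Domain;
using NUnit.Framework;

[TestFixture]
public class TypeNameTests
{
    [Test]
    public void GetType_UsingFullName_ReturnsNullForTypeInAnotherAssembly()
    {
        // FullName assembly-qualifies the generic argument, but not the generic type itself
        string typeName = typeof(EntityBase<SomeEntity>).FullName;

        // Only the calling assembly and mscorlib are searched, so nothing is found
        Type loaded = Type.GetType(typeName);

        Assert.IsNull(loaded);
    }

    [Test]
    public void GetType_UsingAssemblyQualifiedName_ReturnsType()
    {
        // AssemblyQualifiedName names the assembly that should be searched
        string typeName = typeof(EntityBase<SomeEntity>).AssemblyQualifiedName;

        Type loaded = Type.GetType(typeName);

        Assert.IsNotNull(loaded);
    }
}
```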

As you can see, the first test uses Type.FullName for a type that is located in a separate assembly.  It then attempts to load a Type using that string.  Type.GetType returns null because it is only searching the current assembly and mscorlib.

The second test uses AssemblyQualifiedName.  Type.GetType returns a Type instance in this case because the Type string contains the name of the assembly that should be searched.

Hopefully this will save someone some aggravation in the future!

Sep 20, 2013
 

This week we pushed my latest project to production. As always, there was a certain sense of accomplishment in seeing something I worked on move to production. While there were great automated processes in place to build the code for the project, build an MSI and execute that MSI to perform the installation, we ran into a few issues around our deployment environment. Unfortunately, as I quickly found out, the environment we used for testing had a few significant differences from the production environment we were deploying to. This meant that while there had been rigorous testing of the application’s functionality, we still hadn’t fully tested the application we were deploying.

The first problem we ran into came immediately upon trying to login to the application.  We saw an error message like this:

Error decrypting antiforgery token

The application uses ASP.Net forms authentication and, following best practices for MVC security, I used the AntiForgeryToken HTML helper on my login form and the ValidateAntiForgeryTokenAttribute on the controller action that handles authenticating users. If you’re not familiar with these two items, they help protect your web application from Cross-site Request Forgery.  The HTML helper method outputs an encrypted token into a hidden form field on your view.  It also sets the same encrypted value in an HTTP cookie.  The ValidateAntiForgeryTokenAttribute checks to ensure that the form field value is present, the cookie is present and the two values are equal.  This all works great until you move your web site from a single server to a web farm.  In our case, it turns out we did our testing on a single server and our prod environment consisted of two servers.  Fortunately, the exception that’s thrown in this situation provides a very detailed message, including instructions on how to resolve the problem.  As the message says, all sites running the web application need to use the same machine key.  Setting your machine key in IIS 7 is pretty straightforward; however, the following TechNet article covers the topic in great detail:  Configuring Machine Keys in IIS 7

Once we got our machine keys sorted out, we were able to login and resume validating functionality.  Things looked good at first until we got to a particular view that seemed to be having a JavaScript issue.  This particular view contains a Kendo UI treeview that contains checkboxes for each node in the treeview.  When a user selects a checkbox in this treeview, there’s some JavaScript that runs to perform a REST request to get a list of items that are then displayed on the view when the REST request returns.  The REST resource is actually a Web API controller that had a single operation on it called Get which took an array of integers, corresponding to the Ids of the selected nodes in the treeview.  As you might expect, selecting a parent node in the treeview would cause all child nodes to be selected.  In a case like that, the REST request would append each Id to the query string so the resource URL might look something like this:

http://some.server/api/controller?id=1&id=2&id=3

All of this of course worked flawlessly in our test environment.  In production, when selecting nodes in the treeview, nothing happened.  I reached into my dev toolbox and fired up Fiddler to verify the REST request was being made and was returning.  From what I could see in Fiddler, the request was being issued but was promptly returning with a 404 status code.  This seemed less than ideal and also somewhat odd, as other calls to Web API controllers were succeeding.  I copy/pasted the URL that was resulting in the 404 from Fiddler to Chrome and verified that it was indeed returning a 404.  It definitely was: Chrome showed a 404.  I then tried stripping the query string off the URL, just performing a GET against the controller with no parameters.  This also returned a 404, which in hindsight made sense as this particular controller had no operation defined named Get that took no parameters.  Finally, I tried executing the same URL with a single Id rather than the long string of Ids that the view was using.  This yielded results: the controller returned a single model corresponding to the data for the Id I had used.  It seemed that there was a problem with the length of the query string that was being used.  I took to Google and quickly found some references to IIS Request Filtering which, among other things, can be used to restrict the length of the URL or query string that IIS will accept.  I contacted the data center folks and they confirmed that they were indeed restricting the max. allowed length for both query strings and URLs.  They had both options set to their default values, 4096 bytes for max. URL length and 2048 bytes for max. query string length.  To test out whether this was indeed causing our problems or not, we added the following section to the app’s web.config:
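The section in question goes under system.webServer and raises the IIS Request Filtering limits; these values are measured in bytes, so 32768 is 32 KB:

```xml
<system.webServer>
  <security>
    <requestFiltering>
      <!-- Values are in bytes: 32768 bytes = 32 KB -->
      <requestLimits maxUrl="32768" maxQueryString="32768" />
    </requestFiltering>
  </security>
</system.webServer>
```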

This change set the max. URL and max. query string lengths to 32 KB.

With this change in place, we tried again, but once again, no joy.  The REST request was issued but this time, instead of returning with a 404, it was returning with a status code of 500.  This in my mind was actually kind of good news.  It means that we were at least getting to ASP.Net.  A bit more Googling and I was reminded that ASP.Net also has its own settings for max. URL length and max. query string length.  ASP.Net uses a default of 260 characters for max. URL length and 2048 characters for max. query string length.  So, we made one more edit to web.config:
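This edit goes on the httpRuntime element under system.web; unlike the IIS limits, these attributes are measured in characters:

```xml
<system.web>
  <!-- These values are measured in characters, not bytes -->
  <httpRuntime maxUrlLength="32768" maxQueryStringLength="32768" />
</system.web>
```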

Finally, after making that change, we tested the app and this time it worked just as well as it had in our test environment!

So, after all of this, I’ve learned a couple of valuable lessons.  First, when testing, make sure at least one of your test iterations is in an environment that is as close to your production environment as possible.  This means that if you’re using clustering, load balancing, locked-down security, etc. in your prod environment, have a test environment set up that mimics all of this.  Basically, try to eliminate as many variables between test and production as you can.  Next, when you’re doing functional testing of your application, make sure you test with a dataset that resembles the size of data you’ll be contending with in production.  In our case, we had never tested with larger accounts in our test environment, so even if IIS Request Filtering had been configured in test, we likely never would have run into these issues.  As a developer, this is definitely something I’m going to keep in mind for future projects.  Finally, to get around all this hassle with long URLs and long query strings, I already have a work item in progress to refactor that GET operation into a POST.  Rather than having a massive query string, the controller will get a blob of JSON posted to it that contains an array of Ids.

Jul 12, 2013
 

I recently wrapped up work at my previous client and started working with a new client.  As much as I’m going to miss the awesome group of people I was working with and that I was getting to work on my favorite mobile platform, Xamarin, I’m excited about the new engagement.  It sounds like I’m going to be working on a few different things during my time here but my initial project is a new MVC 4 web app.  This is great for me because as much as I’ve enjoyed the past year in the mobile space, I’ve been wanting to get back into web development and ASP.Net MVC is really the only way I want to build web apps in .Net.

I’m lucky enough to be starting this MVC project from scratch.  This means that I can start things out correctly, ensuring that things like DI, IoC and unit testing are factored in from the ground up.  I’ve done MVC projects in the past and compared to say, web forms, designing for testability is a breeze.  That being said, there are certain aspects of MVC that are not so straightforward when it comes to testing.  One example, which I’ve written about previously, is ActionFilter attributes.  Another example, which I ran into pretty quickly with my new project, is HTTP modules.  If you’re not familiar with HTTP modules, these are simply classes that implement the IHttpModule interface and are then registered with IIS.  A class that implements IHttpModule has the opportunity to intercept various stages of the HTTP request pipeline, allowing you to do some neat things.  In my case, we’re using a custom HTTP module to set a custom principal for authenticated users.  This module has a dependency on a repository, so of course my first thought was: how would we handle unit testing a class with this dependency?

The challenge you run into with HTTP modules is that as they are registered and executed by IIS, how and when they get instantiated is outside of our control.  This is similar to the problem we have with ActionFilter attributes; there isn’t an obvious extension point in MVC where we can control how these objects are created like there is with controllers and their dependencies.  A quick Google search yielded an article from a few years ago by Phil Haack.  In this article he demonstrates an approach for handling dependency injection with Ninject.  Essentially he creates an HTTP module that is responsible for initializing all implementations of IHttpModule you have registered with Ninject.  This seemed like exactly what I wanted since Ninject is my client’s IoC container of choice.  I dove in and created an HTTP module similar to what Phil described and then proceeded with creating the HTTP module that would handle setting the custom principal.  I was able to mock the dependency on the repository in my tests for the module and life was looking good.  I tried debugging the web app and things were looking good there as well; my module was executing and the custom principal was being set.  Unfortunately, as I looked a bit closer, it seemed like the module was being executed twice for every request.  This didn’t exactly cause a problem but it seemed less than ideal and it’s certainly possible it might cause a problem in the future.  With that in mind, I dug in to try and figure out what was going on.  After seeing nothing obvious in my project that would cause the module to execute twice, I decided to create a new project to reproduce the problem.  Sure enough, the problem was happening there as well.  I’ll spare you all the steps I took to figure out the resolution and all the cursing under my breath that went on, but it turns out the solution was quite simple.  The “container” HTTP module demonstrated in Phil’s article is no longer needed with Ninject.  Ninject has its own HTTP module that handles registering with IIS any IHttpModules you’ve registered with Ninject.  If you’re using the Ninject.MVC3 NuGet package, you’ll have a file named NinjectWebCommon.cs in your App_Start folder.  Inside this file you’ll find a method named Start(), which is invoked when your web app is starting:
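The generated Start method from the Ninject.MVC3 / Ninject.Web.Common package looks roughly like this:

```csharp
public static void Start()
{
    DynamicModuleUtility.RegisterModule(typeof(OnePerRequestHttpModule));
    DynamicModuleUtility.RegisterModule(typeof(NinjectHttpModule));
    bootstrapper.Initialize(CreateKernel);
}
```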

That second line of code is registering an HTTP module named NinjectHttpModule.  If you look at the source code for NinjectHttpModule, you’ll see that it does exactly what Phil’s “container” HTTP module does.

One other thing I want to call out with regards to unit testing HTTP modules is their dependency on HttpContext.  If you’ve been doing unit testing with ASP.Net for any length of time, you’ll know that HttpContext is not at all unit test friendly.  Fortunately, rather than exposing HttpContext, MVC exposes HttpContextBase in places like controllers.  This makes unit testing under MVC a much less frustrating experience.  Unfortunately, the story for unit testing HTTP modules isn’t so rosy.  The general pattern for implementing features in HTTP modules is to subscribe, in the module’s Init method, to whatever events you’re interested in.  This ends up looking something like this:
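For example, a module handling AuthenticateRequest typically looks something like this (the module name is illustrative):

```csharp
using System;
using System.Web;

public class CustomPrincipalModule : IHttpModule
{
    public void Init(HttpApplication context)
    {
        // Subscribe to the pipeline events we care about
        context.AuthenticateRequest += OnAuthenticateRequest;
    }

    private void OnAuthenticateRequest(object sender, EventArgs e)
    {
        // The sender is the HttpApplication; from it we get the raw HttpContext,
        // which is exactly what makes this pattern hard to unit test
        HttpContext httpContext = ((HttpApplication)sender).Context;

        // ... look up the user and set httpContext.User to the custom principal ...
    }

    public void Dispose() { }
}
```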

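Here’s a sketch of that pattern; the module name and the repository details are hypothetical, but the shape of Init and its event handler is the important part:

```csharp
using System;
using System.Web;

// Hypothetical module that sets a custom principal on each request
public class CustomPrincipalHttpModule : IHttpModule
{
    public void Init(HttpApplication context)
    {
        // Subscribe to the events we're interested in
        context.AuthenticateRequest += OnAuthenticateRequest;
    }

    private void OnAuthenticateRequest(object sender, EventArgs e)
    {
        // The sender is the HttpApplication, which gives us the HttpContext
        var application = (HttpApplication)sender;
        HttpContext httpContext = application.Context;

        // Working against the concrete HttpContext here is what makes
        // this module hard to unit test
        httpContext.User = BuildPrincipal(httpContext);
    }

    private System.Security.Principal.IPrincipal BuildPrincipal(HttpContext context)
    {
        // Load user details from a repository and build the principal here
        throw new NotImplementedException();
    }

    public void Dispose()
    {
    }
}
```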
As you can see, we’re casting the sender of the event to HttpApplication in order to get at the current HttpContext.  Fortunately, Google came to the rescue once again.  I found another blog post that had an incredibly simple but extremely useful pattern for making HTTP modules testable.  The idea is that you create an abstract base for your HTTP modules.  You then create a virtual method for every event you want to handle.  These methods take just one parameter, an HttpContextBase.  In your base module’s Init method, you wire up the events to be handled by your module to their respective virtual methods, passing in a new HttpContextWrapper initialized with the HttpContext obtained from the event’s sender.  Your base HTTP module ends up looking like this:

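Here’s a minimal version of that base class, shown with a single event wired up (the class name is my own):

```csharp
using System;
using System.Web;

// Abstract base that exposes the mockable HttpContextBase to derived
// modules instead of the test-unfriendly concrete HttpContext
public abstract class TestableHttpModule : IHttpModule
{
    public virtual void Init(HttpApplication context)
    {
        // Wire the event to its virtual method, wrapping the concrete
        // HttpContext in an HttpContextWrapper (an HttpContextBase)
        context.AuthenticateRequest += (sender, e) =>
            OnAuthenticateRequest(
                new HttpContextWrapper(((HttpApplication)sender).Context));
    }

    // Derived modules override this and receive an HttpContextBase,
    // which is easy to mock in a unit test
    public virtual void OnAuthenticateRequest(HttpContextBase context)
    {
    }

    public virtual void Dispose()
    {
    }
}
```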
All you need to do once you have your base HTTP module is subclass it and override the necessary methods.  In my example I’ve only set up one event but the pattern is easy to repeat for as many events as you need to handle.

I put together a small project that demonstrates all of this, along with mocking HttpContextBase in a unit test:  MvcHttpModuleDI.

May 13 2013
 

This week I had my first go at working with MapKit and CoreLocation in Xamarin.iOS.  I was writing some spike code to test MapKit as a possible solution for an upcoming project.  I had two main goals for what I wanted to do with MapKit.  First, I wanted to be able to draw shaded polygons over US states.  That is, I wanted to shade Minnesota one color, Wisconsin another color and so on.  Second, I wanted to be able to respond to a tap gesture on one of these polygons and know which state had been tapped.

The first thing I decided to tackle was drawing the state polygons on an MKMapView.  Having not previously worked with MapKit, I was a bit concerned as to how I was going to translate the geographic boundaries of the various US states into points on the view.  As it turns out, MapKit does all of the heavy lifting for you.  The MapKit API allows you to use an array of geographic coordinates as an overlay to the map.  MapKit then takes care of making sure that overlay is in the correct location on screen regardless of orientation or zoom.  Pretty slick!  The other challenge I had with this item was actually figuring out the boundaries of US states.  Fortunately I found an XML file that contained just what I needed, all the states as well as all the coordinates that make up their boundaries (sorry, forgot where I found it so I can’t give proper credit to the creator).

Once I figured out how I would draw my state polygons, I was ready to start coding.  I created a simple UI, just a UIToolbar at the top of the screen and an MKMapView below that.  I then added a UIBarButtonItem to the toolbar that would display a UIPickerView which would contain a list of states.  The idea was that when the selection in the UIPickerView changed, I’d draw and shade a polygon over the selected state, removing the polygon from the previously selected state.  In my UIPickerViewModel class, I created an event which I subscribed to in my view controller so I’d know when the state selection changed.  The following code shows how I’m handling the change in state selection and changing the polygon:

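Here’s roughly what that handler looks like.  The Coordinate, State and StateSelectedEventArgs types are my own models built from the XML file (their names are assumptions); the MapKit calls are the interesting part:

```csharp
using System.Collections.Generic;
using System.Linq;
using MonoTouch.CoreLocation;
using MonoTouch.MapKit;
using MonoTouch.UIKit;

// Hypothetical model types parsed from the state boundary XML file
public class Coordinate
{
    public double Latitude { get; set; }
    public double Longitude { get; set; }
}

public class State
{
    public string Name { get; set; }
    public List<Coordinate> Boundary { get; set; }
}

public class StateSelectedEventArgs : System.EventArgs
{
    public string StateName { get; set; }
}

public partial class MapViewController : UIViewController
{
    private MKMapView _mapView;        // the map view below the toolbar
    private MKPolygon _currentPolygon; // polygon currently displayed, if any
    private List<State> _states;       // populated by parsing the XML file

    private void OnStateSelectionChanged(object sender, StateSelectedEventArgs e)
    {
        // If we're already displaying a polygon, remove it from the map
        if (_currentPolygon != null)
        {
            _mapView.RemoveOverlay(_currentPolygon);
        }

        // Find the selected state using the name carried in the event args
        State selectedState = _states.Single(s => s.Name == e.StateName);

        // Project the boundary into the struct MapKit expects
        CLLocationCoordinate2D[] coordinates = selectedState.Boundary
            .Select(c => new CLLocationCoordinate2D(c.Latitude, c.Longitude))
            .ToArray();

        // Create the polygon, give it a title for later use and add it to the map
        _currentPolygon = MKPolygon.FromCoordinates(coordinates);
        _currentPolygon.Title = selectedState.Name;
        _mapView.AddOverlay(_currentPolygon);
    }
}
```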
The first thing I do is check a class level field that stores the current polygon.  If it’s not null that means we’re already displaying a polygon and we need to remove it from the map.  Next, we have another class level field that is a collection of State objects, each of which contains the name of a state and a collection of coordinates that make up the state’s boundary.  This field was previously set via a method that parses the previously mentioned XML file.  From this field, we use LINQ to get the selected state, using the state name contained in the event args.  Now we need an array of CLLocationCoordinate2D objects to tell MapKit where we want our polygon to be drawn.  CLLocationCoordinate2D is just a struct that contains latitude and longitude, so again using LINQ we pull the coordinates out of the selected State object.  Using our CLLocationCoordinate2D array, we create an MKPolygon, set its title and add it to the map.  The Title property isn’t displayed on the map but we will use it later.

If you only add an overlay to your map as I’ve shown above and were to run your app, you’d probably be a bit disappointed because you wouldn’t see anything on your map.  Don’t panic, this isn’t a problem with your code; you just need to configure the view for your polygon.  As you saw previously, we added an MKPolygon to the map.  That MKPolygon has an associated MKPolygonView that we need to configure.  To configure the view for an overlay we need to provide our MKMapView with an MKMapViewDelegate.  As you’re probably already guessing, to do that we need to create a class that derives from MKMapViewDelegate.  In that class, we need to override the GetViewForOverlay method.  Below is what the delegate I’m using looks like:

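A sketch of that delegate; the class name and the fill color are my own choices:

```csharp
using MonoTouch.Foundation;
using MonoTouch.MapKit;
using MonoTouch.UIKit;

public class StateMapViewDelegate : MKMapViewDelegate
{
    public override MKOverlayView GetViewForOverlay(MKMapView mapView, NSObject overlay)
    {
        // We only ever add MKPolygon overlays, so the cast is safe here
        var polygon = (MKPolygon)overlay;

        // Without a fill color the polygon renders invisibly
        return new MKPolygonView(polygon)
        {
            FillColor = UIColor.Blue.ColorWithAlpha(0.4f)
        };
    }
}
```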
GetViewForOverlay provides us with an MKMapView and an overlay as an NSObject.  In my case, since I’m only dealing with MKPolygon overlays, I can just cast to that type and create a new MKPolygonView.  On the view I set the color so that it will now have some visibility on the map.

At this point, with what we’ve done so far, the app will display a polygon over the state selected in the UIPickerView.  As I mentioned at the beginning of this post, in addition to shading individual states, the other thing I wanted to accomplish was being able to respond to touch events on these polygons and know which state was being selected.  My first thought was to look at either the MKPolygon or the MKPolygonView, but neither exposes any events that would allow me to accomplish this.  My next thought was to use a UITapGestureRecognizer.  MKPolygon doesn’t support gesture recognizers, however MKPolygonView does, so I was optimistic that would be a workable solution.  I put together some quick and dirty code to attach a UITapGestureRecognizer to my MKPolygonView and just output a message to the console so I could verify this was a workable solution.  When I ran the app, however, the results weren’t good.  I didn’t see an error, a crash or anything like that.  Rather, what I saw was nothing; my gesture recognizer seemed to be ignored.

After double checking my code, I took to Google to see if others were experiencing this problem and what solutions were being proposed.  Indeed, it seemed that I was not alone.  I saw a few different solutions, but the one that looked the least unpleasant involved attaching the gesture recognizer directly to the MKMapView.  Of course this introduces a new challenge: if the gesture recognizer is attached to the MKMapView, how do we know whether the tap handled by the gesture recognizer lies within a state polygon?  It turns out that we can get the location of the tap being handled by the gesture recognizer and can access the overlays in the map to determine whether or not the tap is in the overlay for the state.  The code we need looks like this:

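Here’s roughly what that looks like, with the recognizer wired up in ViewDidLoad and the hit testing in the action (field names are assumptions from my earlier code):

```csharp
using System;
using System.Drawing;
using MonoTouch.CoreLocation;
using MonoTouch.MapKit;
using MonoTouch.UIKit;

public partial class MapViewController : UIViewController
{
    private MKMapView _mapView;

    public override void ViewDidLoad()
    {
        base.ViewDidLoad();

        // Attach the recognizer to the map view itself, since recognizers
        // attached directly to an MKPolygonView were being ignored
        _mapView.AddGestureRecognizer(new UITapGestureRecognizer(HandleMapTap));
    }

    private void HandleMapTap(UITapGestureRecognizer recognizer)
    {
        // Where did the tap land, and what geographic coordinate is that?
        PointF tapPoint = recognizer.LocationInView(_mapView);
        CLLocationCoordinate2D tapCoordinate = _mapView.ConvertPoint(tapPoint, _mapView);

        foreach (var overlay in _mapView.Overlays)
        {
            var polygon = overlay as MKPolygon;
            if (polygon == null)
                continue;

            var polygonView = _mapView.ViewForOverlay(polygon) as MKPolygonView;
            if (polygonView == null)
                continue;

            // Translate the tapped coordinate into the polygon view's
            // coordinate space and test whether it falls inside the path
            PointF pointInView = polygonView.PointForMapPoint(
                MKMapPoint.FromCoordinate(tapCoordinate));

            if (polygonView.Path.ContainsPoint(pointInView, false))
            {
                Console.WriteLine("Tapped state: {0}", polygon.Title);
            }
        }
    }
}
```

This is where the Title we set on the MKPolygon earlier pays off: it’s how we know which state was tapped.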
Obviously, the first thing we need to do is add a UITapGestureRecognizer to our MKMapView.  The interesting part, of course, is what we do in the action for that gesture recognizer.  The action we provide for our UITapGestureRecognizer receives the recognizer itself as a parameter.  We first use that parameter to get the location in our MKMapView where the tap occurred.  Next we convert that location, a PointF, into something MapKit can understand, a CLLocationCoordinate2D.  Now we loop through all the overlays on the map.  Strictly speaking we only have one overlay on our map, but in a real-world app there’s a good chance you’ll have multiple overlays, so I think it’s worth demonstrating how to handle that scenario even though it complicates the code a bit.  Once we find an overlay that’s an MKPolygon, we get the MKOverlayView for that overlay.  If that MKOverlayView is an MKPolygonView, we get the point for the tap event, check to see if it lies within the MKOverlayView we’re dealing with and act upon it appropriately.

As it turns out, for what I was looking to do, MapKit isn’t an ideal solution but it was definitely an interesting experience working with it and trying to figure out how to handle tap events inside MKOverlays.

Full code for this post is available here:  https://github.com/13daysaweek/MKOverlayView.git

May 03 2013
 

I’ve found myself in a position at work where I’ve inherited a rather sizable codebase.  I’ve actually been working off and on in this codebase for about a year now, however my primary focus was on another application so I wasn’t as familiar with it as I would have liked when I inherited it.  As with any codebase you inherit, as I get to know it, there are plenty of things I like about it and a few things that I would have done differently.

This particular application displays charts and graphs for users.  Users are presented with inputs that allow them to select various parameters to filter their charts and graphs, as well as other inputs to select the specific report they want to view.  The different types of reports and the options for how they are displayed are modeled as a couple of different enums.  Based on the selections the user makes and the resulting values of these enums, along with a graph or chart, we display a title for what the user is viewing.  The title is constructed by evaluating the enum values for the currently selected display and report types as well as the parameters used to filter the data.  Back when this app was first conceived, there were a limited number of display options and report types.  At the time, the code to construct the title consisted of a switch statement and some string concatenation.  Unfortunately, over time, we added more report types, more display types and more places in the codebase where we needed to construct a title.

I was actually in the process of adding a new report type when I came across this code.  I did a bit of analysis to figure out what I would need to do to add this new report.  The first thing I came across was the enumeration that represented the types of reports we currently support.  It was obvious I’d need to add a new value there.  I made that change and ran my code to see how things looked, and indeed, my new report did render as expected but the title didn’t display properly.  Digging back in, I found the code where we construct the title.  In that code we had a Dictionary<ReportType, string> that mapped our report type enum to a string to include in the title when that type of report was rendered.  Ok, no big deal, so I went ahead and added my new enum value and a title to that dictionary.  I ran the app again and indeed, this time the report rendered and the title displayed correctly.  However, something else was amiss.
There are actually two places where we display titles: one title on the chart or graph, and another, different title on the app’s navigation bar.  The title on the navigation bar was still incorrect.  I tracked down the code that was setting the title in the nav bar and found that once again, we were using a Dictionary<ReportType, string> to set this title.  Unfortunately, this was a different dictionary because the titles we use in the report and in the nav bar are slightly different from one another.

This approach we were using to set our titles was feeling less than ideal to me.  It actually got worse as I dug into things more and found we had a whole bunch of similar logic scattered across our app that used the value of various enums to return different strings that were sort of similar to the name of a particular enum.  I think what aggravated me the most was that here I was, setting up a new report, it was easy enough to discover that I had to add a new enum, but how was I supposed to know about all this convoluted logic about setting titles and mapping enums to dictionaries?  Certainly there had to be a better way.  I didn’t mind the enums themselves, using those to represent our different reports and display options seemed fine to me.  I considered wrapping those inside a class that would also have properties for things like graph title and nav bar title but that seemed like a lot of refactoring.

If only there were some other way I could associate some extra data with those enum values, some way to declaratively set data that would ride along with those values and not require me to change how our code uses those enums.  Fortunately, the .Net framework has just such a thing, in the form of attributes.  Maybe it’s just me, but in my experience, it seems like as developers, we routinely make use of attributes that the .Net framework and other frameworks provide for us, but we don’t always remember that custom attributes are another very powerful tool in our toolbox.  I know for myself at least, I don’t consider creating a custom attribute to solve a problem as often as I probably should.  Fortunately for me, when I looked at this mess of enums and dictionaries mapping enums to strings, a custom attribute was one of the first things that popped into my mind.

I’ve put together a brief sample project that demonstrates roughly the approach I took.  To keep things simple, this sample application is a console application that contains an enum, ReportType and a ReportTitle attribute.  The ReportTitle attribute has two properties, one for the title that should be displayed for a given report and the second for the color of font that should be used to display the title.

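Here’s a sketch of that attribute.  I’m representing the font color as a plain string since this is a console sample; that choice, like the property names, is an assumption on my part:

```csharp
using System;

// Valid only on fields (an enum's values are fields under the hood);
// a single instance per field, not inherited
[AttributeUsage(AttributeTargets.Field, AllowMultiple = false, Inherited = false)]
public class ReportTitleAttribute : Attribute
{
    // The title to display for the report
    public string Title { get; set; }

    // The color of the font used to display the title
    public string FontColor { get; set; }
}
```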
There’s not too much of interest going on in the attribute class but I will call out a couple of things.  First, note the class is decorated with the AttributeUsage attribute.  This guy tells the framework where our attribute can be used, whether multiple instances of our attribute can be applied and whether or not it’s inherited by derived classes.  In my case I’m not allowing multiple instances and opted not to allow inheritance.  As for where the attribute is valid, I used AttributeTargets.Field.  This might seem a bit odd since we’re applying it to enum values and indeed, it seemed odd to me at first as well.  I had assumed and initially tried to use AttributeTargets.Enum, however compilation failed.  This is because AttributeTargets.Enum is valid for an enum type, not the actual fields within the type.  To apply the attribute to enum fields you need to use AttributeTargets.Field.  Finally, I opted to set the properties in my attribute directly rather than via the constructor.  I went this way only because I feel that it’s more explicit as to which values are being initialized.  That being said, I guess I could have used an overloaded constructor and named parameters to the same effect.

Below is how we use the attribute with our enum, which shouldn’t be any surprise:

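With some hypothetical report names (the specific values are mine, not from the real app), the decorated enum looks something like this:

```csharp
public enum ReportType
{
    [ReportTitle(Title = "Sales by Region", FontColor = "Blue")]
    SalesByRegion,

    [ReportTitle(Title = "Quarterly Revenue", FontColor = "Green")]
    QuarterlyRevenue,

    [ReportTitle(Title = "Customer Churn", FontColor = "Red")]
    CustomerChurn
}
```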
Finally, we need to be able to use our attribute.  Getting custom attributes from a type or member requires a bit of reflection.  To help simplify that, I created a small extension method:

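The extension method looks roughly like this (the method name is my own):

```csharp
using System;
using System.Reflection;

public static class EnumExtensions
{
    // Retrieves a custom attribute applied to an individual enum value.
    // Returns null if the attribute isn't present on that value.
    public static TAttribute GetAttribute<TAttribute>(this Enum value)
        where TAttribute : Attribute
    {
        // Each enum value is a static field on the enum type
        FieldInfo field = value.GetType().GetField(value.ToString());
        return (TAttribute)Attribute.GetCustomAttribute(field, typeof(TAttribute));
    }
}
```

With that in place, getting the title for any ReportType value is a one-liner along the lines of `someReportType.GetAttribute<ReportTitleAttribute>().Title`.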
So, there you go.  It’s definitely not an exciting solution but since I hadn’t dealt with custom attributes in a while and feel like they don’t get enough attention in .Net development, I thought I’d put together a post on the subject.

Sample project for this post is available here:  https://github.com/13daysaweek/TitleAttribute.git

Apr 30 2013
 

In my previous post, we went through the very tedious process of configuring Windows Azure Mobile Services for authentication against Facebook, Twitter, Microsoft and Google.  As you’ll recall, that process amounted to a lot of setting up developer accounts with the various social networks, setting up applications and copying IDs and keys from the various developer sites into the Azure management portal.  Personally, when I think of developing software, this sort of busy work doesn’t really excite me too much.  Fortunately, in this post we’ll be looking at code and will be keeping the busy work to a minimum.  Specifically we’ll be taking a look at what it takes to build a simple Xamarin.iOS application that authenticates against the four social providers we configured in the last post.

Let’s start by firing up Xamarin Studio and creating a Single View application for the iPhone.  I’m going to assume if you’re reading this post that you already know how to do that so we won’t walk through that.  I promised that we’d be spending time on code in this post but before we get into that, we do need to head over to the Azure management portal to get a couple of values.  The first value we’ll get is the application key for our mobile service.  This can be found by clicking the Manage Keys button at the bottom of the screen on your mobile service dashboard:

AzuerManagementKeys

You’ll want to copy the value from the first field, Application Key.  Once you have this, the other item we need to get from the portal is the URL for our mobile service.  We used this extensively in the previous post when we configured our authentication providers so I’ll assume you remember where to find this guy.

Once you’ve gotten the values for your application key and URL, let’s go ahead and create constants for these two guys in our ViewController like so:

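Something along these lines, with placeholder values in place of my real key and URL (the class name is an assumption):

```csharp
using MonoTouch.UIKit;

public partial class WAMSAuthDemoViewController : UIViewController
{
    // Placeholders; substitute the application key and URL for your own
    // mobile service from the Azure management portal
    private const string ApplicationKey = "YOUR-APPLICATION-KEY";
    private const string ApplicationUrl = "https://your-service.azure-mobile.net/";
}
```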
Exciting stuff, I know :)  Certainly there are a number of different ways you could handle storing these values in your application.  I’m not saying that stuffing them into constants in a ViewController is the best way, or even the right way; I’m just choosing to do it here to keep things simple and to avoid muddling this code up with a bunch of stuff that isn’t related to what we’re trying to demo in this post.

Now that we have the values for our key and URL in our code, we need to create an instance of MobileServiceClient.  This class is part of the Azure Mobile Services SDK and is how we interact with WAMS for authentication and data operations.  Constructing an instance is simple enough, we just need to pass in our key and URL to the constructor.  I’m going to again opt for the easy approach here and just create my MobileServiceClient instance directly in my ViewController.  My updated ViewController now has the following class level declarations, including the constants we created previously:

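Roughly like so, again with placeholder values (the class name is an assumption):

```csharp
using Microsoft.WindowsAzure.MobileServices;
using MonoTouch.UIKit;

public partial class WAMSAuthDemoViewController : UIViewController
{
    private const string ApplicationKey = "YOUR-APPLICATION-KEY";
    private const string ApplicationUrl = "https://your-service.azure-mobile.net/";

    // The client we use to talk to our mobile service for authentication
    private readonly MobileServiceClient _client =
        new MobileServiceClient(ApplicationUrl, ApplicationKey);
}
```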
Next, let’s go ahead and create our UI.  Again, we’ll keep things simple here.  We’ll just create a button for each of our authentication providers and wire each of those up to an outlet like so:

WAMSAuthDemo Xib

 

Now we need to respond to the TouchUpInside event for each of these buttons.  We’ll have each one call a method called DoLogin that takes a parameter indicating the authentication provider associated with the button that was tapped.  Again, keeping things simple, let’s just wire up these event handlers in the ViewDidLoad method for our ViewController like so:

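Something along these lines; the outlet names come from my xib, so yours may differ:

```csharp
public override void ViewDidLoad()
{
    base.ViewDidLoad();

    // Each button kicks off a login against a different provider
    twitterButton.TouchUpInside += (s, e) =>
        DoLogin(MobileServiceAuthenticationProvider.Twitter);
    facebookButton.TouchUpInside += (s, e) =>
        DoLogin(MobileServiceAuthenticationProvider.Facebook);
    googleButton.TouchUpInside += (s, e) =>
        DoLogin(MobileServiceAuthenticationProvider.Google);
    microsoftButton.TouchUpInside += (s, e) =>
        DoLogin(MobileServiceAuthenticationProvider.MicrosoftAccount);
}
```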
The last thing we need to do is add our DoLogin method.  This is the guy that will do the heavy lifting of authenticating against the selected social provider.  Fortunately, Azure Mobile Services makes that heavy lifting remarkably simple:

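Here’s roughly what my DoLogin method looks like:

```csharp
private void DoLogin(MobileServiceAuthenticationProvider provider)
{
    // LoginAsync presents the provider's login UI over this view controller
    // and completes with the authenticated user
    _client.LoginAsync(this, provider).ContinueWith(task =>
    {
        MobileServiceUser user = task.Result;

        // Hop back onto the UI thread before touching UIKit
        InvokeOnMainThread(() =>
        {
            // UserId is a provider-specific identifier, not a username
            var alert = new UIAlertView("Welcome",
                string.Format("You are now logged in - {0}", user.UserId),
                null, "OK", null);
            alert.Show();
        });
    });
}
```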
The MobileServiceClient class has a method on it called LoginAsync which we use to authenticate against our chosen provider.  We specify that provider by passing in an enum of type MobileServiceAuthenticationProvider, which is a type in the Mobile Services SDK.  We pass that provider into our call to LoginAsync, along with a reference to our ViewController.  Since the login is handled asynchronously, we need to use a continuation task to handle any action we want to take when the login is completed.  The Result property of the task handed to our continuation is a MobileServiceUser, another type from the Mobile Services SDK.  Again, keeping things simple, we’re just displaying a UIAlertView that shows a welcome message, along with the UserId property from the MobileServiceUser returned from the login operation.  Note that this UserId is an identifier from the login provider, not the actual username of the authenticated user.

When we run the app, tapping on any of the buttons will bring up a dialog with a UIWebView that displays the login page for the selected provider.  Below is what I see when I try to auth with Twitter:

Azure Mobile Services Twitter Authentication Screen

 

And after I provide my credentials and login, I see the following dialog in the app:

Authenticated by Twitter

 

That’s all there is to it.  Personally, I think that it’s very impressive that we’re able to authenticate against four different social providers using so little code.  That being said, authentication is only one piece of the puzzle when it comes to integrating with social networks.  This doesn’t address things like accessing a user’s profile, displaying their profile picture, accessing their friends list, etc.  From what I understand, those things are possible with Azure Mobile Services but I’ll save those for another day.  In the meantime, in part III of this series, we’ll take a look at doing custom user registration and authentication with Azure Mobile Services.  Hopefully I’ll get that posted in the next week or so.

Apr 25 2013
 

As I mentioned in my initial post about Project X, one of the things I need to contend with in this app is backend data storage.  I’m not expecting to have a very complex or large data model but based on some of the social features I’m planning on implementing, user data will need to be stored in some sort of backend data store.  I have plenty of experience modeling data and building SOAP and RESTful services on top of said data model for CRUD operations.  For Project X, I could certainly take the approach of building out my own data model, coding RESTful services for CRUD operations and hosting these services somewhere in the cloud.  That being said, when I think of all the work that is going to go into Project X before it ships, building out these services just doesn’t sound that interesting to me.  It seems like in this day and age, there should be a better option than a bunch of repetitive coding of services that do little more than persist objects in some backend store.  As it turns out, there is a better option and this option is known as backend-as-a-service.

Briefly, backend-as-a-service is a cloud based model for easily linking applications, generally mobile applications, to services such as data storage, identity management, push notifications and social network integration.  While I haven’t done a ton of research of backend-as-a-service offerings, I have looked at a number of different providers and one thing they all seem to have in common is that they require you to write little to no code on the backend to achieve data persistence from your client application.  Instead of creating database tables, writing data access code and service code on top of that, with backend-as-a-service you simply define the name of an entity you wish to store on the backend.  On the client you then model that entity with whatever attributes you wish to store and, using the SDK for your backend-as-a-service provider, you are able to persist and retrieve these entities from the provider’s cloud based data storage.

Windows Azure Mobile Services is Microsoft’s backend-as-a-service offering.  It’s currently a preview offering as part of the Azure suite of services but looks to be quite full featured.  Features include:

  • Data storage:  This is built on top of SQL Azure however it does allow you to have dynamic schemas so you don’t model your entities on the backend.
  • Scheduled jobs
  • Push notifications:  This includes support for push to iOS, Android and Windows clients.
  • Social network identity integration:  This includes support for logging into your apps with Twitter, Facebook, Google and Microsoft accounts
  • Scripting:  While you don’t need to write code on the backend, WAMS is built on top of node.js so your scheduled jobs will be written with JavaScript.  You can also intercept data storage operations (read/insert/update/delete) and handle these with custom JavaScript as well.
  • Broad client support:  Microsoft has SDKs available for download from the Azure portal for Windows Store apps, Windows Phone 8, iOS (Objective-C), Android (Java) and HTML/JavaScript.  The Xamarin Component Store has a component available for integrating Xamarin apps with WAMS.

Similar to Windows Azure Websites, Mobile Services currently offers two options.  You can deploy your services in a shared compute environment for free or pay for the ability to scale up to 10 dedicated compute instances.  In both the free and dedicated pricing models you still end up paying for data storage at whatever your plan rate is.

I’ve been looking at using WAMS from a Xamarin app for a couple of weeks now and I have to say, it is indeed very easy to persist and retrieve data from the cloud.  I also had the opportunity to try out authentication with the Twitter provider as well as push notifications while I was at the Xamarin Evolve conference.  Getting the Twitter auth set up was ridiculously easy.  The hardest part was registering my app with Twitter and trust me, that wasn’t hard at all.  Push notifications were a bit more frustrating to get going, but that’s more because push notifications in general are difficult to set up due to Apple’s certificate based security model.  Seeing how easy it is to work with data in WAMS and social network authentication has me strongly considering WAMS as my backend service for Project X.

That being said, before I commit to using WAMS, I want to spend a bit more time working with the authentication functionality.  Specifically I’d like to take a look at working with the other social authentication providers they support.  I’d also like to explore the possibility of doing custom authentication for those two or three people out there who don’t already have a login with Facebook/Twitter/Google/Microsoft and don’t want to sign up with one of these.  Fortunately I’ve found a couple of really good blog posts that go through setting up the various providers, integrating with them and even setting up custom authentication.  Unfortunately, these posts were all built around clients written in Objective-C or Java, so I’ve decided to put together a series of posts on doing this with a Xamarin.iOS app.  Specifically I’m going to break this down into three posts:

  • Part I:  Configuring social authentication providers (this is what you’re currently reading)
  • Part II:  Building an app and integrating it with Twitter, Facebook, Google and Microsoft authentication
  • Part III:  Custom user registration and authentication

Getting Started

This probably goes without saying, but the first thing you’re going to need is a Windows Azure account.  If you don’t already have one, head over to the Windows Azure website and sign up.  You can get a free 90 day trial, however you do need to provide a credit card that will be charged in the event you go over the amount of service you’re allocated for your trial.  Once you have your account, login to the Azure management portal and create a new mobile service.  I’m not going to walk through how to do this as there are plenty of good references that will do a better job of this than I could.

Configuring Microsoft Authentication

Once you’ve created your service, head on over to Microsoft’s Live Connect Developer Center.  This is where you’ll register your Xamarin app with Microsoft.  Here you’ll click the Create Application link under the My Applications section:

LiveConnect_Create_Application

 

On the next screen you just need to enter a name for your application and click the I Accept button:

Enter Application Name

 

After accepting, the next screen will present you with your Client ID and Client Secret which we’ll need to enter into the Azure Portal.  Before we do that however, we need to enter a value for Redirect Domain:

Redirect Domain

 

For Redirect Domain, we just enter the Mobile Service URL found in the Dashboard screen in the Azure management portal for the mobile service we created earlier.  You’ll want to keep this URL handy as we’ll be using it to configure our other authentication providers.  Note that in the image above, I’ve blurred out both Client ID and Client Secret because, well, these should be kept secret.  Once you’ve entered your redirect domain, copy your Client ID and Client Secret from this screen and head back over to the Azure management portal and find the identity settings for the mobile service you created.  Here you’ll see fields to enter your Client ID and Client Secret that you saved, so go ahead and do that:

Microsoft Account Settings

 

Once you’ve entered your Client ID and Client Secret, click the Save button at the bottom of the portal window.  That’s it for your Microsoft account.  Next we’ll take a look at Facebook.

Configuring Facebook Authentication

Now that we’re on to setting up our second provider, you’ll start to see a common pattern:  Create your app on the provider’s developer portal, provide a callback URL and copy/paste some values from the provider’s portal to the Azure portal.  Indeed, that’s what we’ll be doing with Facebook to get it setup.  First, head over to the Facebook Developer site.  If you haven’t already registered as a Facebook developer, you’ll have to do that first but it’s quite painless.  After you’ve registered and logged into the developer site, click the Apps menu at the top of the screen and then click the Create New App button:

Create Facebook App

 

After you click Create New App, you’ll get a dialog where you’ll need to enter a name for your app.  You can skip the app namespace and since we’re talking mobile apps, you definitely don’t need Heroku:

Facebook App Name

 

After clicking Continue you’ll need to enter a CAPTCHA.  Head back to the Azure management portal and copy the URL for your mobile service again then scroll down the page for your app on the Facebook site and click the checkbox next to Website with Facebook Login and enter the URL for your mobile service into the textbox that appears:

Facebook Login URL

 

Click the Save Changes button at the bottom of the screen then head to the top of the screen and copy both the App ID and App Secret values for your app.  You’ve probably already guessed the next step but if not, head back to the Azure management portal and find the Facebook section on the Identity tab for your app.  There you’ll need to enter the App ID and App Secret values you just copied:

Azure Facebook Settings

Click the Save button at the bottom of the screen and you’re now ready to start accepting logins from Facebook users.  Next we’ll move on to Twitter.

Configuring Twitter Authentication

To get started with Twitter, we need to navigate over to the Twitter developer site.  After you login, find your Twitter avatar in the upper right hand corner of the screen, mouse over it and click the My Applications item on the menu that appears:

Twitter Menu

After you click My Applications, click the Create New Application button.  On the following screen you’ll need to fill in some information about your app:

Twitter App Details

Provide a name and a description for your app.  For both Website and Callback URL, provide the URL of your mobile service, which as with the other providers, you can find on the Azure management portal.  Scroll down, agree to all the legal terms, enter the CAPTCHA and click the Create New Twitter Application button.  After we click the Create button, we need to head over to the Settings tab.  Scroll down and find the checkbox labeled Allow this application to be used to Sign in with Twitter:

Twitter Allow Login

Check the checkbox then scroll down and click the Update button.  Head back to the Details tab and then look for the OAuth Settings section.  This is where you’ll find the Consumer Key and Consumer Secret values that you’ll need to copy over to the Azure management portal:

Twitter OAuth Settings

On the Azure management portal, navigate back to the Identity tab, scroll down to the Twitter section and enter your Consumer Key and Consumer Secret from the Twitter site:

Azure Twitter Settings

Once again, we click Save at the bottom of the screen.  We’re now done with Twitter and ready to move on to our last authentication provider, Google.

Configuring Google Authentication

To get started configuring Google, head over to the Google API Console.  After you log in, click the dropdown on the upper left hand side of the screen and then click the Create item on the menu:

Create Google App

After clicking Create, we’re next prompted to name our project:

Google Project Name

In the menu on the left hand side of the screen, click the API Access item:

Google API Access

On the API Access screen, your one and only option is a rather large button labeled Create an OAuth 2.0 Client ID so go ahead and click that.  On the next screen you’ll be presented with a few fields to fill in.  All we need to provide is a Product Name:

Google Create Client ID

Go ahead and click Next.  If you don’t already have it memorized, head back to the Azure management portal and copy your mobile service URL.  On this screen we’ll need to enter the host name for our mobile service.  Leave the Application Type set to Web Application.  To the end of the host name, add the text /login/google.  So if our mobile service URL is https://mobileservice.azure-mobile.net, on this screen we would enter mobileservice.azure-mobile.net/login/google.  Note that after you enter your host name + /login/google, the /login/google part will be automatically removed from the field you typed it into, but it will still be reflected in the Redirect URI field.
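If that host-name-versus-URL distinction is confusing, here’s a quick sketch of what’s going on, using a hypothetical service name.  Google’s console keeps only the bare host name in the field you type into, while the full redirect URI it builds behind the scenes includes the /login/google path:

```python
from urllib.parse import urlparse

# Hypothetical mobile service URL -- substitute your own.
service_url = "https://mobileservice.azure-mobile.net"

# What you type into Google's console is just the host name...
host = urlparse(service_url).netloc

# ...but the Redirect URI Google generates keeps the /login/google path.
redirect_uri = "https://{0}/login/google".format(host)

print(host)          # the value left in the console field
print(redirect_uri)  # the value shown in the Redirect URI field
```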

Google Client ID Settings

Go ahead and click the Create client ID button.  You should now see some information including your Client ID and Client Secret.  Go ahead and copy these values because, yes, you guessed it, we’re going to head back to the Azure management portal, click the Identity tab for our mobile service and scroll down to the settings for Google:

Azure Google Settings

After you’ve entered your Google Client ID and Client Secret, click Save at the bottom of your screen.  And with that, we’re done.  Assuming we got all of our copying and pasting of values into the correct fields for our providers, we should now be ready to accept logins from Twitter, Facebook, Google and Microsoft account users.  In the next post we’ll go through how to build a Xamarin.iOS app that uses the Azure Mobile Services SDK to authenticate via our four providers.  Fortunately, that code is a lot less hassle than all of the copy/paste action that was required to setup the providers!
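All of this copying and pasting boils down to wiring up one OAuth login endpoint per provider on the mobile service itself; the client SDK just sends the user to the right one.  A rough sketch of the endpoint pattern, where the service name and provider slugs are assumptions for illustration:

```python
# Hypothetical mobile service URL -- substitute your own.
service_url = "https://mobileservice.azure-mobile.net"

# Each configured provider gets a login endpoint at /login/<provider>.
providers = ["facebook", "twitter", "google", "microsoftaccount"]
login_endpoints = {p: "{0}/login/{1}".format(service_url, p) for p in providers}

for provider, url in sorted(login_endpoints.items()):
    print(provider, url)
```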

Finally, I’d like to give a big thanks to Chris Risner.  His blog post on configuring the various authentication providers for Windows Azure Mobile Services saved me a ton of time and aggravation!