Monthly Archives: April 2010

Apple’s Fruitless Security

For 20+ years I have relied on Jeff for friendship and technical advice.  He is a solid nerd, rooted in things Oracle/Linux with more knowledge of hardware and operating systems than any ten people I know.  In other words, he is no moron.  Recently, his iTunes account was hacked and upwards of $300 worth of apps/movies purchased.  Apple’s response? Paraphrasing: “You need to cancel your credit card and take up disputing the charges with them.  As for Apple, we’ll be keeping that money.  Oh, and once you get a new card, please remember to add it back to your disabled iTunes account.”  In other words, you just got ramrodded while we watched, did nothing, and profited by it.

Now, I am an Apple fan.  I have an iPhone; I have a G5 (PowerPC-based, old school baby) that I use every day.  I love Mac OS X, finding it superior to the Windows operating systems (although Windows 7 is pretty damn good), and have been on the cusp of buying a MacBook for a few months now.  I bought my wife an iMac, and she loves it.  In other words, I am not a Windows nerd bashing Apple.  I was once a blind fanboy, encouraging everyone I knew to buy a Mac and get an iPhone.  I would passionately debate why Apple products were superior to all comers, sometimes without the benefit of rational thought.

I am an Apple user, and this may be the last straw.

Once the honeymoon of my relationship with Apple products faded into history, I started noticing what Apple gives me as a proponent of their products.  They don’t trust me to change a battery or add storage.  They force me to use a single application (iTunes) to activate and update my phone.  Their products are outrageously more expensive than the competition’s.  A few times a year they hold a media circus to unveil crazy-expensive new hardware while their king talks down to me, expecting me to embrace whatever floats to the top of his mock turtleneck, even when it’s underwhelming (copy/paste).  Apple walls off their systems, keeping a who’s-who list of frameworks and products that are allowed inside the velvet ropes (i.e., the striking omission of Flash on the iPhone).  They allow me to pay $99 for the right to develop for their mobile platform, but only if I use a language whose base feature set would have been laughed out of most late-1970s development shops.  Oh, and I can pay another $99/year for a closed-off online e-mail/contacts/photo/file offering whose initial shininess fades rapidly under the light of actual use.

All this, and now an approach to online fraud protection that only an evil dictator could appreciate.  Apple’s software was hacked, my friend was affected, and they basically asked him to suck on it and come back for more.  I have heard many a user/nerd pontificate on why Apple’s user base pays a premium to be treated like dirt.  I have wondered aloud why the governments of the UK and the US will drop the “Monopoly” moniker on Microsoft, but allow Apple to dominate and control the mobile market without a peep.  You have to hand it to Apple: they have created the perfect spot for themselves.

I want to have faith in the masses.  I want to hope for the day that users revolt and demand that Apple stop gouging our wallets and closing off their systems.  I just don’t see it coming.  When I talk to other die-hard Apple users, they say that Apple should be allowed to control what runs on their devices and operating systems.  These are the same people who would have held sit-ins to force Microsoft to allow more than one browser in Windows.  The double standards are obvious and ubiquitous.

I’ve been told that I don’t have to buy Apple products.  I don’t have to subject myself to the whims of black turtlenecks.  This is true.  My hope is now shifting to Microsoft and Google, two other behemoths that want my money.  Here’s hoping that they realize that my business is their privilege, that my information is worth protecting, and that my choice is still mine.


Unit Testing Objects Dependent on ArcGIS Server Javascript API

Recently, I’ve created a custom Dojo dijit that contains an esri.Map from the ArcGIS Server Javascript API.  The dijit, right now, creates the map on the fly, so it calls new esri.Map(…) and map.addLayer(new esri.layers.ArcGISTiledMapServiceLayer(url)), for example.  This can cause heartburn when trying to unit test the operations that my custom dijit is performing.  I am going to run through the current (somewhat hackish) way I am performing unit tests without having to create a full-blown HTML representation of the dijit.
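For context, here is a rough sketch of how such a dijit might build its map on the fly.  This is not the actual dijit source; the node id and service URL are placeholders, but the esri.Map and ArcGISTiledMapServiceLayer calls are the ones I mentioned above.

 dojo.provide("esi.dijits.VersionDiff");
 dojo.require("dijit._Widget");

 dojo.declare("esi.dijits.VersionDiff", [dijit._Widget], {
   childMap: null,

   startup: function(){
     this.inherited(arguments);
     //Create the map against a node in the widget's markup
     //(the node id and URL below are illustrative placeholders).
     this.childMap = new esri.Map(this.id + "_map");
     this.childMap.addLayer(
       new esri.layers.ArcGISTiledMapServiceLayer("http://server/ArcGIS/rest/services/Base/MapServer"));
   }
 });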

I’ll be using JSpec for this example, so you may want to swing over to that site and brush up on the syntax, which is pretty easy to grok, especially if you’ve done any BDD/spec type unit testing before.

The contrived, but relatively common, scenario for this post is:

  1. My custom dijit makes a call to the server to get information about some feature.
  2. The service returns JSON with the extent of the object in question.
  3. I want my map control to go to that extent.

Following the Arrange-Act-Assert pattern for our unit tests, the vast majority of the work here will be in the Arrange part.  I don’t want to pull in the entire AGS Javascript API for my unit tests.  It’s big and heavy, and I am not testing it, so I don’t want it.  Also, I don’t want to invoke the call to the server in this test.  Again, it’s not really what I am testing, and it slows the tests down.  I want to test that, if my operation gets the right JSON, it sets the extent on the map properly.

The Whole Test

Here is the whole JSpec file:

describe 'VersionDiff'

 before_each
   vd = new esi.dijits.VersionDiff();
 end
 describe 'updateImages()'
   it 'should change the extent of the child map'
     //Arrange
     esri={
       Map:function(){
         this.setExtent=function(obj){
           this.extent=obj;
         };
       },
       geometry:{
         Extent:function(xmin,ymin,xmax,ymax){
           var obj={};
           obj.xmin=xmin;
           obj.ymin=ymin;
           obj.xmax=xmax;
           obj.ymax=ymax;
           return obj;
         }
       }
     };
     vd.childMap = new esri.Map();
     var text = fixture("getFeatureVersionGraphics.txt")
     text = dojo.fromJson(text)
     //Act
     vd.updateMapToFeatureExtent(text)
     //Assert
     vd.childMap.extent.xmin.should.be 7660976.8567093275
     vd.childMap.extent.xmax.should.be 7661002.6869258471
     vd.childMap.extent.ymin.should.be 704520.0600393787
     vd.childMap.extent.ymax.should.be 704553.8080708608
   end
 end
end

//Arrange

The Arrange portion of the test stubs out the methods that will be called in the esri namespace.  Since the goal of the test is to make sure my dijit changes the extent of the map, all I need to do is stub out the setExtent method on the Map.  setExtent takes an Extent object as an argument, so I create that in my local, tiny esri namespace.  Now I can set the property on my dijit using my stubbed-out map.  Thanks to closures (ahem, global variables), the esri namespace I just created will be available inside my function under test.  Closures are sexy, and I only know enough about them to be dangerous.  Yay!  I don’t have to suck in all the API code for this little test.

That fixture function is provided by JSpec, and basically pulls in a text file that has the JSON I want to use for my test.  I created the fixture by saving the output of a call to my service, so now I don’t have to invoke the service inside the unit test.
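For illustration, a stripped-down fixture might look something like the snippet below.  The real service response isn’t shown in this post, so the field names here are an assumption on my part; only the extent values (the ones asserted on later) come from the test itself.

 {
   "extent": {
     "xmin": 7660976.8567093275,
     "ymin": 704520.0600393787,
     "xmax": 7661002.6869258471,
     "ymax": 704553.8080708608
   }
 }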

//Act

This is the easy part.   Call the function under test, passing in our fixture.

//Assert

How do I know the extent was changed?  When I created my tiny esri namespace, my esri.geometry.Extent() function returns an object that has the same xmin/ymin/xmax/ymax properties as an esri.geometry.Extent object.  The setExtent() function on the stubbed map stores this object in an extent property.  All I have to do is make sure the extent values match what was in my fixture.

I didn’t include the source to the operation being tested, because I don’t think it adds much to the point.  Suffice it to say that it calls setExtent() on the childMap property.
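That said, a minimal sketch of what the operation could look like is below.  This is not the real source; it assumes the parsed JSON exposes an extent object shaped like the hypothetical fixture above, and it uses the real API’s esri.geometry.Extent constructor and the map’s setExtent().

 //A rough sketch of the operation under test (not the actual source).
 //Assumes the parsed JSON carries an extent with xmin/ymin/xmax/ymax.
 updateMapToFeatureExtent: function(json){
   var ext = json.extent;
   //In the unit test this hits the stubbed Extent/setExtent; in the real
   //dijit it zooms the live esri.Map to the feature's extent.
   this.childMap.setExtent(
     new esri.geometry.Extent(ext.xmin, ext.ymin, ext.xmax, ext.ymax));
 }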

So What?

I realize this may not be the greatest or cleanest approach, but it is serving my needs nicely.  I am sure in my next refactor of the unit tests that I’ll find a new approach that makes me hate myself for this blog post.  As always, leave a comment if you have any insight or opinion.  Oh, and regardless of this test, you should really look into JSpec for javascript unit testing.  What I show here is barely the tip of what it offers.


Robotlegs and Cairngorm 3: Initial Impressions

Smackdown? (Well, not really...)

I have had Robotlegs on my radar screen for months now, just didn’t have the time/brains to really check it out.  I cut my Flex framework teeth on Cairngorm 2, which seems to be the framework non grata of all the current well-known Flex frameworks (Swiz, PureMVC, Robotlegs, etc.).  However, at Joel Hooks’ suggestion, I took a look at Cairngorm 3 (henceforth referred to as CG3), enough to do a presentation on it for a recent nerd conference.  The presentation includes a demo app that I wrote using Cairngorm 3, which (currently) uses Parsley as its IoC container.  That’s right, Cairngorm 3 presumes an IoC container, but does not specify one.  This means that you can adapt the CG3 guidelines to your IoC of choice.  This is only one of the many ways CG3 is different from CG2, with others being:

  • No more locators, model or otherwise.  Your model can be injected by the IoC or handled via events.  For you Singleton haters, this is a biggie.
  • A Presentation Model approach is prescribed, meaning your MXML components are ignorant of everything except an injected presentation model.  The view’s events (clicks, drags, etc.) call functions on the presentation model inline.  The presentation model then raises the business events.  This allows simple unit testing of the view logic in the presentation models.
  • The Command Pattern is still in use in CG3, but commands are not mapped the same way.  For CG3, your commands are mapped to events by your IoC.  Parsley has a command pattern approach in the latest version that actually came from the CG3 beta.  This approach uses metadata (like [Command] and [CommandResult]) to tell Parsley which event to map.  Again, this results in highly testable command logic.
  • CG3 includes a few very nice peripheral libraries that handle common needs.  Things like popups, navigation, validation, and the observer pattern are all covered in libraries that are not dependent on CG3 or anything else, really.  Even if you don’t intend to use CG3, it may be worth your while just to check out these SWCs.

All in all, CG3 is a breath of fresh air for someone who has been using CG2.  Cairngorm 2 is fine, and it’s certainly better than not using any framework, but the singletons always made me uneasy and, in some of our lazier moments, we ended up with hacks to get the job done.  I feel that CG3 really supports a test-driven approach and Parsley is very nice and well-documented.   It’s worth saying that I only know enough about Parsley to get it to work in the CG3-prescribed manner, and there seems to be much more to the framework.

Once I had a basic handle on CG3, Joel said he was interested in how CG3 could work with Robotlegs (henceforth referred to as RL).  Also, after my presentation, a couple of folks wandered up and mentioned RL as well.  So, when I got home, I ported the demo app to RL (it’s a branch of the github repo I link to above) so I could finally check it off my Nerd Bucket List.

First of all, Robotlegs is, like CG3, prescriptive in nature.  It is there to help you wire up your dependencies and maintain loose coupling to create highly testable apps (right, all buzzwords out of the way in one sentence.  Excellent.)  Like CG3, it presumes an IoC container, but does not specify a particular one.  The “default” implementation uses SwiftSuspenders (link?), but it allows anyone to use another IoC if they feel the need.  I have heard rumblings of a Parsley implementation of RL, which I’d like to see.

Also, the default implementation is more MVCS-ish than the default CG3 implementation.  What the hell does that mean?  Well, MVC is a broad brush and can be applied to many architectures.  In my opinion, the CG3-Parsley approach uses the presentation model as the view, the Parsley IoC/messaging framework as the controller, and injected value objects as the model.  The RL approach uses view mediators and views for the view, which reverses the dependency from CG3.  The RL controller is very similar, but commands are mapped explicitly in the context to their events, rather than using metadata.

The model is also value objects, but it’s handled differently.  In CG3, the model is injected into commands and presentation models, then bound to the view.  So, if a command changes a model object, the change is reflected in the presentation model, and the view is bound to that attribute on the presentation model.  In RL, the command would raise an event to signify model changes, passing the changes as the event payload.  The view mediator listens for the event and uses the payload to update the view, again through data binding.  (NOTE: You can handle the model this way in Parsley as well, using [MessageHandler] metadata tags, FYI.)

It’s worth mentioning that when I did the RL port, I added a twist by using the very impressive as3-signals library from Robert Penner.  Signals is an alternative to event handling in Flex, and I really like it.  Check it out.  Anyway, RL and Signals play very well together, but it means I wasn’t necessarily comparing apples-to-apples.  Signals is not a requirement of using RL, at all, but the Twittersphere was raving about it and I can see why.  The biggest con to using Signals with CG3 might be some of the peripheral CG3 libraries.  For example, I think you’d end up writing more code to adapt things like the Navigation Library to Signals.  The Navigation Library uses NavigationEvent to navigate between views, which would need to be adapted to Signals.  Of course, I am of the opinion that, if you are going to use something like Signals, you should use it for ALL events and not mix the two eventing approaches.  This is a philosophical issue that hasn’t had the chance (in my case) to be tested by pragmatism.

So, which framework do I like better?  After a single demo application, it’s hard to make a firm choice.  I really like the CG3 approach, and it’s only in beta.  However, I also really like the Signals and RL integration, which I think makes eventing in Flex much easier to code and test.  I am not that big a fan of the SwiftSuspenders IoC context, as there doesn’t seem to be any way to change parameters at runtime, which is something I use IoC to handle.  An example is a service URL that would be different for test and production.  I’d like to be able to change that in an XML file without having to rebuild my SWF.  I asked about this on the Robotlegs forum, and was told that it’s a roll-your-own answer.  On the other hand, Parsley offers the ability to create the context in MXML, ActionScript, XML, or any mixture of the three.  I like that.  I think the winner could be a Parsley-RL-Signals implementation pulling in the peripheral CG3 libraries, which mixes everything I like into one delicious approach.  MMMMMM.

Anyone have questions about the two frameworks that I didn’t cover?  Hit the comments.  Also, if anything I have said/presumed/implied here is flat out wrong, please correct me.  The last thing I want to do is lead people (including myself) down the wrong path.


