Sunday, September 27, 2009

Kickstart your BDD training

Rob Conery posted an excellent quick-start for getting up and running with MSpec titled "Make BDD your BFF". For those who still haven't taken the plunge, MSpec is a framework for writing tests in the BDD style of development. Overall, the literature on MSpec on the internet is lacking, so take advantage of this and help get the word out!

Also, just so I'm not repeating too much, Apostolis Bekiaris just posted an entry similar to my own except with more links to resources.

Friday, September 25, 2009

I don't mock much anymore

I still use Rhino Mocks. I've never tried any other libraries. I probably should, just so I can be more worldly, but it fits my needs and I have a laundry list of ToDos a mile long.

That being said, I've done a lot of unmentored exploration into the whole unit test and TDD countryside. At the beginning of my journey I did a metric ton of mock creation. I'm speaking to the different types of test doubles. Go check out Fowler's ever-expanding documentation on the topic.

What I found was the obvious (though not at the time): mocks are part of the test assertions. A mock gives me intimate details about how an object is doing its work. As such, mocks should only be used to verify the test at hand. If it's not truly verifying the expected behavior then it's not a mock. It's a stub, it's a fake, it's something that's allowing the process to emulate its behavior in some integrated environment.

The tests get brittle when you mock things that have no business being mocked. What I started doing was creating everything as a stub and then using the Rhino Mocks AssertWasCalled()/AssertWasNotCalled() extension methods. Things end up being much cleaner and far less prone to unexpected test failures.

But I ramble. We're coders. Let's look at some code. Here's an example of a test I wrote using Context/Specification (a style of testing that supports BDD) and the MSpec library. Everything is a stub and I use AssertWasCalled()/AssertWasNotCalled() to assert that certain collaborations occurred the way I expected them to.

Forgive the user_registration_scenario class, as I use it as the base class for a number of scenarios not shown here. It's the collection of inputs/collaborators used for the test and, per Context/Specification, the Act of Arrange/Act/Assert happens at the class level, not at the method level.


[Concern(typeof(UserAccountCreationService))]
public class When_a_user_registers_successfully : user_registration_scenario
{
    Establish context = () =>
    {
        var no_errors = new List<ErrorInfo>();

        _modelValidator
            .Expect(v => v.Validate(_account))
            .Return(no_errors);

        _userRepository
            .Expect(r => r.UsernameExists(_account.Username))
            .Return(false);
    };

    Because of = () => _service.Create(_account);

    It should_send_confirmation_email = () =>
        _emailService.AssertWasCalled(es => es.SendConfirmationEmail(_account));

    It should_save_user_to_persistent_store = () =>
        _userRepository.AssertWasCalled(r => r.CreateUserAccount(_account));
}

public class user_registration_scenario
{
    protected static IEmailService _emailService;
    protected static IUserRepository _userRepository;
    protected static IModelValidator _modelValidator;
    protected static UserAccountCreationService _service;
    protected static UserAccount _account;

    Establish context = () =>
    {
        _account = new UserAccount { Username = "kdog" };
        _modelValidator = MockRepository.GenerateStub<IModelValidator>();
        _emailService = MockRepository.GenerateStub<IEmailService>();
        _userRepository = MockRepository.GenerateStub<IUserRepository>();
        _service = new UserAccountCreationService(_userRepository, _emailService, _modelValidator);
    };
}


I might remark that I'm quite enamored with Context/Specification. I've never liked the look of my tests this much. At work, it's agonizing because we're still at a very basic level of unit testing. I'm thinking of giving a presentation on BDD using MSpec...

Sunday, April 12, 2009

Constructor injection FTW

I prefer using constructor injection with DI for a few simple reasons.
  1. It's self-documenting code. It's explicitly telling clients or maintainers of the component how it's put together.
  2. It keeps my components honest. In most cases, an excess of constructor arguments is a code smell.
I also don't like setter injection for a few simple reasons.
  1. It adds noise to the component's API. I wouldn't want to see public properties hanging off a component that serve no purpose for me as a client.
  2. It feels like it breaks encapsulation. How the component does its work internally shouldn't be part of its public API.
  3. It obfuscates how the component is put together.
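
To make the contrast concrete, here's a minimal sketch with hypothetical types (OrderService, IOrderRepository and IEmailSender aren't from any real project): the constructor-injected version spells out its collaborators, while the setter-injected version lets a client build a half-configured object.

// Hypothetical types, just to illustrate the two styles.
public interface IOrderRepository { void Save(Order order); }
public interface IEmailSender { void Send(string to, string body); }
public class Order { public string CustomerEmail; }

// Constructor injection: you can't construct this without handing it everything it needs.
public class OrderService
{
    private readonly IOrderRepository _orders;
    private readonly IEmailSender _email;

    public OrderService(IOrderRepository orders, IEmailSender email)
    {
        _orders = orders;
        _email = email;
    }

    public void Place(Order order)
    {
        _orders.Save(order);
        _email.Send(order.CustomerEmail, "Thanks for your order.");
    }
}

// Setter injection: the same class, but now Orders and Email are public noise on the API
// and nothing stops a client from calling Place() before they've been assigned.
public class OrderServiceWithSetters
{
    public IOrderRepository Orders { get; set; }
    public IEmailSender Email { get; set; }

    public void Place(Order order)
    {
        Orders.Save(order);   // NullReferenceException waiting to happen
        Email.Send(order.CustomerEmail, "Thanks for your order.");
    }
}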

Thursday, April 9, 2009

Finally: Ubuntu installs...no problems!

This has been a WIP for years and years now. I install the latest version of Ubuntu two or three times a year, mostly because I'm curious and partially because I'm a masochist. I generally want to get into using an open source OS and just be aware of how things work on the other side of the fence (being a Windows user and .NET developer). That's also why I chose Ubuntu; it's the entry-level Linux distro.

Things usually work out this way: I install Ubuntu and there are issues. Wireless has never worked out-of-the-box. The graphics are 50-50; sometimes they work and other times they don't. I've done the whole ndiswrapper thing and iwconfig and messed around on the command line more than any Windows user should ever have to. Things never end up pretty and are generally unstable.

For instance, my last installation was Ubuntu 8.04. Wireless didn't work. Graphics were fine. I sacrificed the customary chicken and goat and eventually got the wireless to work. But then after I did a restart it stopped working again. Then it intermittently worked until finally it stopped working altogether. Completely random.

I tethered my laptop to the wired network and decided to upgrade Ubuntu to 8.10. After I did, the bootup had the funniest behavior. It would stop booting unless I pressed keys. Nothing in particular; just pressing keys would make it advance. Once I booted into the desktop it said there were new drivers for the nVidia graphics card. I let the update take place (the drivers were recommended by Ubuntu). When I restarted, I couldn't get to the desktop anymore. One of the startup steps would fail and it would die. I'm not a Linux ninja so it was as good as dead.

I burned down the now defunct Ubuntu 8.10 install. I downloaded and installed Kubuntu 8.10 out of sheer desperation. It too demonstrated the "press some goddam keys to boot" behavior, so this wasn't an isolated incident. Once I got to the desktop, it didn't recognize that I had a wireless card at all, and the graphics chip wasn't detected either. I was worse off than when I was just in Ubuntu 8.10. Fail.

That was last night. Today, I see that there's an experimental Ubuntu 9.04 release. I installed it, it detected my wireless, and it only took a minor spiritual channeling to get the graphics to work. Needless to say, I am happy that things just worked out-of-the-box. This blog post has come from my linux desktop. Joyous day!

Monday, April 6, 2009

Hot feature storm and digging into ASP.NET MVC

Hot feature - A feature that is released as a hotfix usually due to an overly complex or bloated release process that makes normal releases a burden on the entire organization. Those wishing to circumvent the release process will hide features in hotfixes and have them pushed out without any of the normal safety nets used for an official release.

Hi and welcome to my life for the last month. Hot features suck. As a developer, I've never felt so dirty or violated. The things I do for money...

The story: we built a releasable product but never got an official release for it due to a lack of QA resources and the overall brutal release process we have now (4 days for the automated QA tests to run! Bullshit!). The solution that was worked out was that I had to tear apart the product and release each feature as a hotfix (henceforth, a hot feature). Decomposing a product into its features in a non-chronological order was a nightmare in and of itself given how code evolves. This was only compounded by the fact that there was a perfect storm of daily issues that would block or bump these hot features. There was one day where I had to tear apart the product 3 times, each time adding or removing a feature until, ultimately, we didn't release anything at all. I will never partake in such a thing ever again.

Tomorrow we push out the last of the hot features and I'll officially be switching teams. My new role is much like my existing role. I'm generally involved with modeling the solution domain and implementing the application layer. One of the first things my team lead has indicated to me is that I'll be working on some new web pages and the platform of choice is going to be ASP.NET MVC. This is thrilling news for me as I've never cut my teeth on any MVC platform. I've been reading about it, I bought a Ruby book that just arrived today so that I could try out rails and I'm hearing stuff all the time from Mr. O'Hara about his MVP framework.

As the .NET web dev world knows, ASP.NET MVC 1.0 was RTM'ed a few weeks back. I've kept it on my radar but hadn't invested any time into it. Not because I don't care but that I've been involved with a brutal release schedule at work (as outlined above) and a newborn child. This last weekend I finally had a chance to download it and take it for a spin.

I've been developing with web forms since time out of mind and I haven't really known anything else. They've never felt quite right and given my gravitation towards TDD, it's become apparent, more than ever, that it's just not the right solution for me. I can't test the way I want to and I have to struggle against web forms to create logic that fits the web forms domain, not my product's domain. I've got the page lifecycle down pat and it was a great introductory solution for me to get acquainted with web app development but it just creates tightly coupled code by default.

My first impression of MVC: I'm smitten. It just flows the way my mental model of the application layer should behave. As far as ASP.NET MVC goes, I'm glad I hopped on at 1.0 since they've improved it considerably since the betas. What it allows me to do is nearly forget the fact that I'm working on the web (outside of rendering the views). Controllers create the workflow of the app, with truly meaningful names and actions that speak directly to what you're modeling. It's liberating after my years of web forms oppression.

I really want to articulate for all 3 readers in my audience how this has liberated me and one example ran through my mind that captured it perfectly. Let's take the example of some portion of a simple banking application where we want to save a deposit. Use your imagination as to how it should behave.

In web forms, I load an account on Page_Load and I tie into the button click event of a web control to submit a deposit from a user.
public partial class AccountPage : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Load the account
    }

    protected void btnSaveDeposit_Click(object sender, EventArgs e)
    {
        // Save the deposit
    }
}

In MVC, I speak to the AccountController and I tell it to load an account or save a deposit.
public class AccountController : Controller
{
    public ActionResult LoadAccount(int accountId)
    {
        // Load an account and show it to the user.
        return View();
    }

    public ActionResult Save(Deposit deposit)
    {
        // Save the deposit
        return View();
    }
}

Do you see the difference in the way you think about this code? Web forms have the unintentional side effect of obfuscating the workflow of your application. It's noise that doesn't speak to anything. What the hell is Page_Load and why do I care about intercepting events to a button? Why do I need event arguments that I'm not going to use? What does this all have to do with a bank account or saving a deposit?

Absolutely nothing. You get mired in the details of the page lifecycle and all the black magic you have to employ to do what should normally be simple activities. I can never forget lessons like "Don't forget to load dynamic controls at page init or else their viewstate is lost!" or "Always load your data on prerender as events are fired after page load and you may end up displaying stale data!". I just have a hard time caring for web forms, the page request life cycle, and all the bloat that comes with their attempt at creating a stateful application on a stateless medium.

With MVC, it makes perfect sense. My workflow is clearly illustrated, my purpose is outlined with meaningfully named actions, and we get strongly typed arguments. There is no ambiguity, there is no unexpected behavior, and despite its learning curve, I can show this to anyone and they will get an idea of how the application flows.

So, this is my first impression and what I've witnessed. As a 1.0 product it's no doubt rough around the edges and it seems like there's a lot of work to be done to ease the pain of creating the views as the web controls built for web forms can't (for the most part) be dropped into an MVC view. The tutorials on the main website are also light on details. They aren't tutorials so much as they are quickstarts. I had to munge around to find out how some things worked. Other than that, it looks to be a fun platform and I'll report back when I have more dirty details about my travels in ASP.NET MVC land.

Thursday, March 26, 2009

Holy smokes, do I have a lot to learn

It's been a loooooooooong time since I really invested in my technical expertise as a developer. Sure, I've grown a ton over the last year and I've really "come into my own" but as far as the latest and greatest talks, toys and technology platforms, my head is still somewhere in late 2007. Luckily, podcasts and prolific blog writers are making my education far easier than I could have imagined. At the same time I'm absorbing massive quantities of information that I hope I can retain at least 50% of. Here's a shortlist of the craziness swimming in my mind.

I made a small quest into truly learning what Javascript is all about and I'm happy to report that it really is an awesome tool. It's been getting a bad rap for a number of years but it's really turned out to be a simple but effective language. I've been getting interested in dynamic languages overall and I used Javascript as an opportunity to check out something I've already meddled with AND I can program with nothing more than Notepad and a browser (how effing sweet is that?). Seriously, I'm sick of compilation. I waste an hour of my day every day compiling projects. But that's beside the point. Javascript is great. Go check it out, understand closure, understand that everything is a module (function, whatevs) and dynamic languages are not as scary as you've been programmed to believe.

I've begun to study domain driven design simply so I can see what all the hoopla is about. At first, you'll be patting yourself on the back since it's talking about things that any seasoned programmer is familiar with. You have your solution domain where you model your customers or orders or whatever it is that your software is trying to solve. But it's more than that. It's creating discipline and drawing lines and boundaries where you may not have considered them before. It's trying to promote clarity not purity.

Case in point, I was involved with a recent code review where I came across some code that had a distinct smell. Upon discussing the issue with the developer submitting the review it was revealed that the code existed due to a hack from an external system. Basically, the external system didn't have a way to model some financial transaction so it used another financial transaction to "hack" it into the system. For example, it's like recording a negative deposit to some banking software because the software doesn't have a function to withdraw money. Some of the DDD literature I've been reading recently points to the simple fact that we're no longer modeling the domain. We're modeling the hack of another system. Frankly, you don't need DDD to see this as a problem but it certainly brings it into perspective. It brings confusion to the codebase, to the domain, to any developers coming on board, the code that evolves from this will be as bent and unsightly as its ancestors and so forth. I may have not given this as much scrutiny had I not had so much DDD dancing through my mind.

I read Ken Schwaber and Mike Beedle's Agile Software Development with Scrum since the scrum tsunami is about to crash on my development organization. I'm drinking the Kool Aid so I'm not so concerned about whether or not it's a good methodology. The only thing that worries me is if the management cast can stomach it when I actually speak up and collaborate in ways I had never done before. If what I read is true then I should be telling them when what we're doing is ill advised. I continuously read that I am now "empowered". We'll have to see how that ultimately shakes out...

I started digging into .NET 3.5 and some of the niceties that have come to the framework. Nothing too serious but I hope to be writing a new web app with it soon. I'm particularly interested in ASP.NET MVC since that's now in 1.0. I've downloaded everything, now I just need to do something with it.

Other stuff that's on my radar but I haven't played with yet is Fluent NHibernate, Structure Map, Ruby, and DSLs. I think I may have a post or two coming to discuss some thoughts that those topics have generated for me.

In any case, my brain is running dry and it's 2am.

Wednesday, March 25, 2009

XML sucks

Friggin' namespaces. Such a pain in my ass. JUST PARSE THE GODDAM VALUE!
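
For anyone who hasn't hit this, here's a tiny sketch (a made-up document, plain System.Xml) of the kind of thing that sets me off: the value is sitting right there, but the "obvious" XPath comes back null until you register the namespace.

using System;
using System.Xml;

public class NamespacePain
{
    public static void Main()
    {
        var doc = new XmlDocument();
        doc.LoadXml("<Account xmlns='http://example.com/bank'><Balance>42</Balance></Account>");

        // Without registering the default namespace, the obvious XPath finds nothing.
        Console.WriteLine(doc.SelectSingleNode("//Balance") == null);   // True

        // You have to map a prefix to the default namespace just to read a value.
        var ns = new XmlNamespaceManager(doc.NameTable);
        ns.AddNamespace("b", "http://example.com/bank");
        Console.WriteLine(doc.SelectSingleNode("//b:Balance", ns).InnerText);   // 42
    }
}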

Tuesday, March 17, 2009

Of C# Interfaces and dynamic typing

Adopting the design kung fu of TDD was one of the better practices I've picked up as a developer in the short time that I've been one. I build out the behavior of my system with stable, solid code and maximum reward. Due to its method of design (unit testing) I've also learned other important skills in creating loosely coupled code.

The one style of programming that goes nearly hand-in-hand with TDD is programming against interfaces. This is a decades-old approach. What I enjoy most about interfaces is that they decouple the client code from the implementing code. You're trying to loosen up your coupling. I used to think it was insulting to create an interface with only one implementing class. I was missing the point. I forget where I read it but I saw someone who summarized it wonderfully: "Creating interfaces is not to the benefit of the implementing code but rather to the client." Points to anyone who can help me find the original source.

TDD becomes a natural litmus test for discovering when I should introduce an interface. You get what's commonly called "test friction". Friction is simply noise in your tests that has nothing to do with the component you're testing. It makes tests brittle, prone to breaking for the wrong reasons, and hard to write in the first place. Anything that turns your tests into massive scripts or breaks them often can be considered test friction. Abstracting the friction out into interfaces is key to keeping the tests focused and your classes loosely coupled.
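
To make "friction" concrete, here's a minimal sketch assuming NUnit and Rhino Mocks (the ReportService and INotificationSender types are hypothetical): the SMTP details would be pure noise in the test, so they sit behind an interface and the test only verifies that the collaboration happened.

using NUnit.Framework;
using Rhino.Mocks;

// The friction (SMTP servers, credentials, retries) hides behind this interface.
public interface INotificationSender
{
    void Send(string to, string body);
}

public class ReportService
{
    private readonly INotificationSender _sender;

    public ReportService(INotificationSender sender)
    {
        _sender = sender;
    }

    public void Publish(string recipient)
    {
        // ...build the report here...
        _sender.Send(recipient, "Your report is ready.");
    }
}

[TestFixture]
public class ReportServiceTests
{
    [Test]
    public void Publishing_notifies_the_recipient()
    {
        var sender = MockRepository.GenerateStub<INotificationSender>();
        var service = new ReportService(sender);

        service.Publish("kdog@example.com");

        sender.AssertWasCalled(s => s.Send("kdog@example.com", "Your report is ready."));
    }
}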

At first you'll be surprised at how many interfaces you end up creating but then it eventually feels right. The interfaces were always there between your objects, you're just formalizing it. What's wrong with that?

Well, actually, maybe a couple of things. I do get annoyed with the ceremony of interfaces. First, I need to make the compiler happy. I may have a class that perfectly conforms to the expected method signatures of an interface, but I can't use it in place of that interface; it's not the right type.

So then I need to create an interface every time I need to abstract something (which is often enough). Being an obsessive, compulsive coder, I create a new file for it (although, from time to time, I may embed an interface with its default implementation). I'm introducing noise into the project by having more files and more types to wade through in the auto-complete. Not ideal.

Recently I've looked at dynamically typed languages and felt a secret jealousy. They don't have interfaces. It's just good old fashioned duck typing. It's just messages being passed between objects. Why can't I do that? I don't need an interface to pass messages between objects. The object either accepts those messages or it doesn't. No need to paint myself into a corner. Testing becomes easier. I create objects on the fly, mock/stub/fake them up with hardly the effort it takes in C#. No extra interfaces necessary.

I guess the root of this may be that I'm finding the world of statically typed languages confining. It's been a feeling I've had for at least the last half year or so. I've had a hard time articulating what it is that bothers me, but I'm seeing it as all these things that I'm being forced into obeying by the compiler. Statically typed languages will certainly help you avoid simple issues with your types, but they can't guarantee you won't run into runtime issues anyway. Who can say they've never encountered an InvalidCastException? A NullReferenceException? Statically typed languages are still exposed to the same problems that dynamically typed languages are. Compilation is providing compile-time debugging of your types. But if you're already writing unit tests, which undoubtedly test the interfaces of your objects, does that negate the advantage of compile-time debugging?
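
Just to put a face on that claim, here's a throwaway snippet: it compiles without complaint and the type checker signs off, yet it's guaranteed to blow up the moment it runs. A unit test exercising this path catches it just as surely as the compiler catches a typo.

using System;

public class CompileTimeIsNotEnough
{
    public static void Main()
    {
        // The compiler is perfectly happy with all of this...
        object boxed = "not a number";

        // ...and it still dies at runtime with an InvalidCastException.
        int value = (int)boxed;
        Console.WriteLine(value);
    }
}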

Most accounts that I've read of developers sharing a sentiment like my own usually end up embracing dynamically typed languages. Some even switch specialization and jump ship. I don't know what to do yet. I guess I have to go and find out for myself.

There's an awesome alt.net podcast that really speaks to where my head is at. It's an interview with Scott Bellware, so if you dig his contributions to the community then you'll certainly enjoy this. I strongly recommend you give it a listen.

OOP in Ruby with Scott Bellware

Monday, March 16, 2009

javascript unit testing/mocking framework?

I've looked around the web and I see a ton. Can anyone give a recommendation on frameworks they've worked with? I'm being lazy and avoiding testing every one that I find.

Tuesday, March 10, 2009

Why we do it: Unit Testing

What started me on this blog post about such an elementary concept? I could list a couple of reasons. There are those that don't unit test. To them, unit testing is an annoyance. They're experienced, so they know exactly what the design and subsequent code will be. Unit testing just slows them down. I might comment that their defect rate is much higher than my own or that their code is far harder to modify or work with. These things don't matter. What matters is that when I face this resistance I need to be ready to respond with all the facts that negate any ill-informed opinions.

The other reason I'm on this is that I've been listening to too many podcasts. Notably some from Bob Martin (with his craftsmanship movement) and Scott Bellware's less-than-complimentary view on recent alt.net activity. They reasonably argue that developers latch onto the idea of using unit tests or practicing SOLID without ever knowing what benefit it is to them. They can't quickly tell you why they use it. In effect they're stagnating an otherwise progressive movement like alt.net by no longer questioning the practices that gave them the awesome methodologies and tools that they're using.

As a final note, a lot of the benefits of unit testing are sometimes attributed to TDD. Recognize that TDD's primary goal is design and that it is a beneficiary of unit testing. From what I've read, BDD was born from this fundamental misunderstanding. This writing also assumes that we understand what good test writing is. When you change code it shouldn't make 10% of your tests break. That's a code smell. If you find yourself in such a situation then I recommend reading the xUnit Test Patterns book.

Without further ado...

Unit tests are documentation

They document your API and demonstrate its usage. Test assertions tell clients exactly how to specify the inputs to get the desired outputs. Invalid and erroneous use cases are outlined. Best practices for your code are emphasized. For these reasons, you may see developers (like myself) that will forgive violating DRY in order to make the test more story-like and readable. You want anyone to read the script and "get it".
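
A quick sketch of what I mean, assuming NUnit and an imaginary BankAccount class: each test repeats its own setup instead of hiding it behind a helper, so the whole story of "how do I use this API?" reads top to bottom in one place.

using System;
using NUnit.Framework;

[TestFixture]
public class BankAccountDocumentationTests
{
    [Test]
    public void Depositing_a_positive_amount_increases_the_balance()
    {
        var account = new BankAccount(100m);   // opening balance

        account.Deposit(25m);

        Assert.AreEqual(125m, account.Balance);
    }

    [Test]
    public void Depositing_a_negative_amount_is_rejected_and_the_balance_is_untouched()
    {
        var account = new BankAccount(100m);   // same setup, repeated on purpose for readability

        Assert.Throws<ArgumentOutOfRangeException>(() => account.Deposit(-5m));
        Assert.AreEqual(100m, account.Balance);
    }
}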

Unit tests are better than documentation. Code comments, UML diagrams, design docs, etc. go stale within minutes of the ink drying. They aren't reliable for anything but the most static systems. That's not to say that they don't have their place but, for the sake of development, they're best expressed in whiteboard sessions and reflected in the code.

Unit tests are security

Good unit tests execute in fractions of a second and running the whole test suite should take a few seconds at most. If your unit testing suite executes on the order of minutes then they aren't unit tests.

They are your safety net for regression. Introducing change to the system will raise red flags anywhere that has been impacted. This immediate feedback loop allows us to fix problems before they go out the door as defects. It also illustrates defects in design. Making a simple change that implodes half of your test suite indicates that the recipient of the change is a dependency attractor.

I'll add the obligatory mention of ease of refactoring. The added security of unit tests allows you to confidently refactor your code with little fear of breaking the system.

Unit tests preclude debugging

At one point in my life I thought it was the best thing ever to be able to walk through my code, examine values, change them, simulate branching logic and so forth. No code could be a mystery to me because I could interrogate it in the watch window and make it tell its secrets. I'd be lying if I said that it didn't save my ass on more than one occasion.

Flash forward to today. I hate debugging. Debugging is a hassle. Someone has a problem in code, then I have to go do a source control update, get the latest dependencies, do a build, work through peripheral issues in getting my integration environment up to snuff, and somehow ham-bone data in the DB to recreate the issue BEFORE I can even begin to execute the debugger. Then, after I run through the debugger once, I have to do a DB restore or some other voodoo magic to restore my integration environment so that I can try again.

Imagine my shock and surprise when the problem that warranted the debugging session turned out to be as pathetic and minor as most defects I've worked through. In fact, I can't remember the last time I found a defect that was anything more than an uninitialized value or an unexpected branch. It's almost heartbreaking to make this discovery, as you'd hope it would be some mythical code beast that needs slaying so that at least traveling minstrels and bards will sing songs of your valor and courage (Monty Python just sprang to mind).

Unit testing isolates the units of the system (duh). Recreating and driving out defects can be done without firing up any debuggers. Some TDD practitioners will profess that they never have to use a debugger, ever. The tests themselves provide the debugging. I can't make such a boast. The point is that you will be less involved with debugging because you will have appropriately addressed the conditions that create defects and you can easily hop into and test the code that is throwing the error. Debugging becomes a "break glass in case of emergency" type utility. You only need it in extreme circumstances.

One thing that I can boast is that integration or functional tests seem to be a formality now. I can almost always guarantee that my code works right out of the box before I deploy it to an integrated environment. It's a very satisfying thing to me and to clients of my code.

AUTHOR'S NOTE: I kind of got bored with this post halfway through it so if you're not feeling the love then neither was I. It's not you, baby, it's me. I'm just getting this out there since I've spent enough time looking at it in my posting queue.

Saturday, March 7, 2009

Testing code highlighter

This should look snazzy.


public interface IComicBookRepository
{
    ComicBookCollection GetComicsByGenre(Genre genre);
}

Decorator pattern: alternative to using AOP?

I've wanted to check out some of the Aspect Oriented Programming frameworks out there just to get a feel for how they behave in the code. I understand the paradigm and the vocabulary but I haven't really played around with it. AOP is one of those things I keep promising myself to check out but I always put it off. Now I'm finally sitting down to check out what's around and how it would fit in with my day-to-day.

If you're an AOP framework, you're generally implemented in one of two ways: either you're compile-time and the code is "weaved" into the class of choice, or you're runtime and a proxy is generated for the class in question. Basically, you're looking to attach some behavior to a class that will happen before a method call, after a method call, or when an exception is thrown.

AOP being a tool in my six-demon bag, I have to see how it can improve any of my current projects. Right now, I only have the ubiquitous logging issue that is commonly mentioned in the same breath as AOP. The usual problem is thus: you have some component whose main concern is not logging wonderfully verbose diagnostic information but rather that it builds a list of accounting records for export to a general ledger, or it's approximating how late an MBTA train can be before the commuting populace riots. To keep the gods of Truth and Beauty satisfied you must separate these concerns.

My only problem is that I may have done so already. I hadn't seen it done this way before, but I was using the decorator pattern to effectively perform the same work that an AOP framework would. I had fallen into using the decorator pattern recently to eliminate inappropriate inheritance (just about any inheritance is inappropriate to me, but that's a topic for another post). Think of it this way: the AOP framework is providing a mechanism to intercept method calls and do some magic before or after a method is executed. The decorator pattern is doing just that. I'll break down the pros and cons of using an AOP framework versus the decorator pattern in a moment, but here's a code snippet to demonstrate the thought.

NOTE: For the sake of brevity I'm hoping you can make logical leaps about the types and classes I'm using. They're purely fictional and they're not what's important.

public interface IComicBookRepository
{
    ComicBookCollection GetComicsBy(SearchParameters searchParams);
}

An IComicBookRepository is simply a repo for comic books. In this case it has a method for retrieving comic books by a set of parameters. The concrete class for the repo looks like the following.

public class ComicBookRepository : IComicBookRepository
{
    public ComicBookCollection GetComicsBy(SearchParameters searchParams)
    {
        return new ComicBookCollection();
    }
}

This class obviously does nothing of import but that's ok. We can fill this in with whatever we please when we know what that implementation will look like.

I can create a class that will "decorate" any instance of IComicBookRepository with error logging. Hence the LoggedComicBookRepository.

public class LoggedComicBookRepository : IComicBookRepository
{
    private ILog _log;
    private IComicBookRepository _wrappedRepo;

    public LoggedComicBookRepository(ILog log, IComicBookRepository wrappedRepo)
    {
        _log = log;
        _wrappedRepo = wrappedRepo;
    }

    public ComicBookCollection GetComicsBy(SearchParameters searchParams)
    {
        try
        {
            return _wrappedRepo.GetComicsBy(searchParams);
        }
        catch (ArgumentException ex)
        {
            _log.Error(
                string.Format("Parameters: Genre {0}, Year {1}", searchParams.Genre, searchParams.Year), ex);
            throw;
        }
    }
}

Notice that it contains an instance of an IComicBookRepository. The LoggedComicBookRepository has a constructor that is configured with an ILog and an IComicBookRepository. The ILog will capture logged messages and the IComicBookRepository is what is being wrapped (decorated, per the pattern). Whenever the wrapped instance of the IComicBookRepository throws an error, it will be captured and the appropriate error logging will occur. Calling GetComicsBy(searchParams) is now completely logged and our concrete ComicBookRepository is none the wiser. Likewise, if we wanted to do more work before or after the method is called for any other reason then we may do so and capture the essence of AOP.

Here's some setup code that will perform comic book retrieval that will be logged.

public class SomeClass
{
    public void DoSomething()
    {
        IComicBookRepository repo =
            new LoggedComicBookRepository(
                log4net.LogManager.GetLogger(typeof(LoggedComicBookRepository)),
                new ComicBookRepository());

        SearchParameters searchParams = new SearchParameters(Genre.Action, 1995);
        ComicBookCollection collection = repo.GetComicsBy(searchParams);
    }
}

That setup/configuration code can easily be hidden behind a DI container or something similar but I'm just illustrating how it would generally work.

The advantage to this is that I don't need to bring in a new library with XML configuration, code setup and an API that may not be familiar to other developers. It's also swatting a fly with a bazooka. In the scenario presented above we really don't need to involve a professional AOP solution.

Where AOP makes sense is the more generic scenario. With the code I've been working with, I want to emit important information about specific method calls and their inputs. If I didn't care about that then an AOP framework would be a far better choice. For example, if I wanted to include code that profiled method execution, then I could have a stopwatch that started on method entry and stopped on method exit using an AOP framework. The only thing it may capture about the method is its name. If I tried to hack an AOP framework into my previous example, I would have to generate additional code to provide a mapping between method names, their overloads, and the class that was responsible for logging each.
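
To make that generic case concrete, here's a rough sketch assuming Castle DynamicProxy as the runtime-proxy flavor of AOP (the TimingInterceptor and the wiring below are illustrative, not something from my project): a single interceptor times every call on any interface and only ever looks at the method's name.

using System;
using System.Diagnostics;
using Castle.DynamicProxy;

// One reusable interceptor: it knows nothing about comic books, only that a
// method is starting and finishing.
public class TimingInterceptor : IInterceptor
{
    public void Intercept(IInvocation invocation)
    {
        var stopwatch = Stopwatch.StartNew();
        try
        {
            invocation.Proceed();   // call through to the real implementation
        }
        finally
        {
            stopwatch.Stop();
            Console.WriteLine("{0} took {1} ms",
                invocation.Method.Name, stopwatch.ElapsedMilliseconds);
        }
    }
}

public class TimingExample
{
    public void Run()
    {
        var generator = new ProxyGenerator();

        // Any interface picks up the timing behavior without a hand-written decorator per type.
        IComicBookRepository timed =
            generator.CreateInterfaceProxyWithTarget<IComicBookRepository>(
                new ComicBookRepository(), new TimingInterceptor());

        timed.GetComicsBy(new SearchParameters(Genre.Action, 1995));
    }
}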

So, once again, I've avoided digging into an AOP framework. I know what I need to know and when I find the square hole I'll put the square block of AOP into it.

Friday, March 6, 2009

I've made up my mind

I want to write my own developer blog. I really do. I have two major enemies at the moment: a lack of casual time and a rabid curiosity. The lack of time is self-explanatory. I have a 4-year-old, a 2-month-old baby, and a demanding job.

The "rabid curiosity" part is my laundry list of methodologies, technologies, practices and tools and I want to preview them all at the same time. For example, I want to check out what the premier dependency injection framework is so I'll go read some blogs and documentation. Someone will make a reference to BDD so I have to see what that is. It's behavior driven design which is apparently the logical evolution of TDD. So then I'll fire up another tab and start looking at that. Someone mentions something about TypeMock so now I have to go see what that's all about. Then I'll start reading the blog war around TypeMock and how it supposedly promotes writing untestable code.

You can see where this is going. At the end of it I can't say I have a demonstrable understanding of what it was I had initially set out to explore. I picked up a handful of nuggets and have a general understanding, but it's nothing I can make any good use of.

I've recognized this and I've forced myself to give only a single topic of interest the bandwidth it needs. This has worked out well. I've been interested in scrum as it's coming to my company so I read Schwaber and Beedle's book on it. I wanted to understand the wackiness of javascript so I spent a week and read around and played a bit. Now I "get it".

I'm going to continue on this path and start blogging a bit more on the experience. It's the best way to learn and then demonstrate that I have a grasp on the subject.

I also need to improve my writing. There was a time when I was never at a loss for words or had trouble communicating my thoughts. I let that skill atrophy, so I have to build it up again. Bear with me in the meantime.