
Monday, August 25, 2008

"SmartInstance" in StructureMap 2.5

From the feedback on the StructureMap Google group, Chad and I have hashed out new Fluent Interface expressions for explicitly defining constructor arguments and setter values (I still think Setter Injection is a design smell, but I've been vociferously voted down).  The problem in the current version is that SetProperty() is overloaded to mean either setter or constructor arguments.  The underlying mechanism in StructureMap stores the information the same way, but the API was causing real confusion.  So, to alleviate that confusion and also to utilize some of the new .Net 3.5 goodness, I present the new language:

Defining primitive constructor arguments -- WithCtorArg("name").EqualTo("value")  or  WithCtorArg("name").EqualToAppSetting("key")

        [Test]

        public void DeepInstanceTest_with_SmartInstance()

        {

            assertThingMatches(registry =>

            {

                registry.ForRequestedType<Thing>().TheDefault.Is.OfConcreteType<Thing>()

                    .WithCtorArg("name").EqualTo("Jeremy")

                    .WithCtorArg("count").EqualTo(4)

                    .WithCtorArg("average").EqualTo(.333)

                    .SetterDependency<Rule>().Is(x =>

                    {

                        x.OfConcreteType<WidgetRule>().SetterDependency<IWidget>().Is(

                            c => c.OfConcreteType<ColorWidget>().WithCtorArg("color").EqualTo("yellow"));

                    });

            });

        }
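For the EqualToAppSetting("key") variant mentioned above, here is a minimal sketch (my own example, not from the original post; the "defaultName" appSettings key is hypothetical):

    // Hypothetical sketch: pull the "name" ctor argument from the <appSettings>
    // entry with key "defaultName" instead of hard-coding the value.
    registry.ForRequestedType<Thing>().TheDefault.Is.OfConcreteType<Thing>()
        .WithCtorArg("name").EqualToAppSetting("defaultName")
        .WithCtorArg("count").EqualTo(4)
        .WithCtorArg("average").EqualTo(.333);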

 

Defining primitive setter properties - just use a Lambda expression that will be applied to the object as soon as it's built.  Intellisense and compiler safety are good things, so you might as well use them.  StructureMap now supports optional setter injection, meaning that you no longer need to put the [Setter] attributes on the concrete classes.  If you specify the value of a setter, StructureMap will use that value regardless of whether or not the [Setter] attribute exists.  The same rule applies to non-primitive setter dependencies.

instance.SetProperty(x => x.Name = "Jeremy")
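In context, that call chains off the instance expression; a minimal sketch (assuming a Thing class with a writable Name property and no [Setter] attribute):

    // Sketch: the lambda runs against the Thing right after StructureMap builds it.
    registry.ForRequestedType<Thing>().TheDefault.Is.OfConcreteType<Thing>()
        .SetProperty(x => x.Name = "Jeremy");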

 

Overriding Constructor Dependencies - Only when you want to override the auto wiring behavior.  I've chosen to use a Nested Closure for defining the child instance of the IWidget constructor argument below.  You could also replace x.Object() with x.OfConcreteType<T> or x.ConstructedBy(Func<T>) or other options inside the Is() method.  I chose this solution because I thought it would make it easier to guide the user to the possible options.

instance.CtorDependency<IWidget>("widget").Is(x => x.Object(widget))
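The alternatives mentioned above would look roughly like the sketch below (the widget variable and the assumption that ColorWidget has a constructor taking a color string are carried over from the earlier example):

    // Hand StructureMap a pre-built object
    instance.CtorDependency<IWidget>("widget").Is(x => x.Object(widget));

    // Or let StructureMap build a specific concrete type
    instance.CtorDependency<IWidget>("widget").Is(
        x => x.OfConcreteType<ColorWidget>().WithCtorArg("color").EqualTo("yellow"));

    // Or supply your own factory lambda
    instance.CtorDependency<IWidget>("widget").Is(x => x.ConstructedBy(() => new ColorWidget("red")));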

 

Overriding Setter Dependencies - Setter dependencies (non-primitive types) are specified much like constructor arguments.  Your options are to say:  .SetterDependency<T>().Is(whatever) or .SetterDependency<T>(x => x.Setter).Is(whatever).

                registry.ForRequestedType<Thing>().TheDefault.Is.OfConcreteType<Thing>()

                    .WithCtorArg("name").EqualTo("Jeremy")

                    .WithCtorArg("count").EqualTo(4)

                    .WithCtorArg("average").EqualTo(.333)

                    .SetterDependency<Rule>().Is(x =>

                    {

                        x.OfConcreteType<WidgetRule>().SetterDependency<IWidget>().Is(

                            c => c.OfConcreteType<ColorWidget>().WithCtorArg("color").EqualTo("yellow"));

                    });

 

Explicitly defining an array of dependencies - I get into this scenario with configuring business rules.  Let's say you have a class that depends on an array of some other type of service.  That syntax looks like:

                registry.ForRequestedType<Processor>().TheDefault.Is.OfConcreteType<Processor>()

                    .WithCtorArg("name").EqualTo("Jeremy")

                    .TheArrayOf<IHandler>().Contains(x =>

                    {

                        x.References("Two");

                        x.References("One");

                    });

 

or

 

            IContainer container = new Container(r =>

            {

                r.ForRequestedType<Processor>().TheDefault.Is.OfConcreteType<Processor>()

                    .WithCtorArg("name").EqualTo("Jeremy")

                    .TheArrayOf<IHandler>().Contains(x =>

                    {

                        x.OfConcreteType<Handler1>();

                        x.OfConcreteType<Handler2>();

                        x.OfConcreteType<Handler3>();

                    });

            });

 

In this case, I was lazy and made no distinction between constructor arguments and setter arguments.
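A minimal usage sketch for the container configured above (my own example):

    // Resolve a Processor that was built with the "Jeremy" name argument
    // and the three handlers registered above.
    Processor processor = container.GetInstance<Processor>();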

 

 

Adding additional instances of a given type - Sometimes you're adding more than one instance of a given service type to StructureMap.  You may be fetching by ObjectFactory.GetNamedInstance<T>(name) or by ObjectFactory.GetAllInstances<T>().  Either way, this syntax will work.

                registry.ForRequestedType<IService>().AddInstances(x =>

                {

                    x.OfConcreteType<ColorService>().WithName("Red")

                        .WithCtorArg("color").EqualTo("Red");

 

                    x.Object(new ColorService("Yellow")).WithName("Yellow");

 

                    x.ConstructedBy(() => new ColorService("Purple")).WithName("Purple");

 

                    x.OfConcreteType<ColorService>().WithName("Decorated").WithCtorArg("color").EqualTo(

                        "Orange");

                });
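Fetching those instances afterwards uses the ObjectFactory calls mentioned above; a quick sketch (variable names are mine):

    // Fetch a single named instance
    IService red = ObjectFactory.GetNamedInstance<IService>("Red");

    // Or fetch every configured IService at once
    var allServices = ObjectFactory.GetAllInstances<IService>();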

 

 

Thoughts?  Comments?  I should say here that StructureMap 2.5 will be about 95% backwards compatible with the existing FI, so no worries about converting.

 

 

 

It's funny, but I generally think that, with few exceptions, Constructor Injection is the preferable approach, yet I continuously read that the Java guys are exactly the opposite.  To each his or her own, I guess.



Source Click Here.

Source Files Rebasing

 
I would like to detail here a recent featurette we added to NDepend v2.9.1: Source files rebasing. Source files rebasing enables a common CI scenario where the code is compiled in one location and analyzed in another one.

 

NDepend needs to access source files in several scenarios:

During source file parsing at analysis time, to fetch comment and cyclomatic complexity metric values and to store the declaration locations of code elements.

 

 

When clicking Open my declaration in source code of a code element.

 

 

When clicking Compare older and newer versions of source file of a code element.

 

 

 

The magic of PDB files

 

To understand the need for source files rebasing, let's review a few details about how the .NET debugger works with source code.

 

In the .NET world, when one wants to bind IL code with source code, one relies on PDB files (Program Database files). PDB files are built at compile time by the C#/VB.NET or C++ compiler. There is one PDB file per .NET assembly. PDB files contain binary information about where concrete methods (i.e., methods that contain code, as opposed to abstract methods) are declared in source files and which piece of IL code corresponds to which sequence point. A sequence point is a contiguous excerpt of a source file that is considered a unit of execution from the debugger's point of view. Sequence points can thus be thought of as debugger breakpoints because they are meant to be units of execution.

 

As a side note, NDepend uses sequence points to compute the number of lines of code of a method. Doing so comes with several major benefits explained here: How do you count your number of Lines Of Code (LOC)?

 

The source file paths contained in the PDB represent essential information for the debugger. Indeed, the debugger relies on this info to bind source code with the currently executed IL code. If you develop on several machines and thus move your source code from one machine to another, you might have stumbled upon the following Visual Studio message informing you that a source file can't be found based on the absolute file paths extracted from the PDB. You can then tell the debugger where the source file is, and it will be smart enough to rebase further absolute source file paths.

 

 

A variant is when a source file has been changed but the PDB file hasn't been updated since (internally, a hash code is used to make the synchronization check between source file content and the PDB file's sequence points possible).

 

Btw, like many other build process flaws, PDB files being out of sync or source files not being found from the PDB's absolute source file paths are reported by the NDepend analysis. This feature is part of what we call Build Process health.

 

 

Source files rebasing within NDepend

 

Basically, the underlying problem is the same with NDepend: source code might have been compiled in folder A, and the analysis may occur on a different machine where the source files live in folder B. The absolute source file paths extracted from the PDB are no longer relevant, and NDepend needs to rebase them from A to B. This is possible thanks to the following analysis option:

 

For maximum flexibility, source file rebasing can also be configured after analysis results have been loaded inside NDepend. This way, analysis results can be produced on any machine and users can still go back to source file declarations at any time, no matter where the source files hierarchy is stored:


 

As in Visual Studio, VisualNDepend will be smart enough to ask the user if she wants to rebase a non-found source file and will apply the rebasing delta when opening further source file declarations.

 

Another useful scenario is comparing two builds. Usually, the analyses are run the same way, and thus the source file paths are the same in both the older and newer analysis results. In that situation, when trying to compare two versions of a source file, NDepend reports that it can't compare them because the two paths are identical.

 

A solution to this problem is to fetch the older and newer source file hierarchies from the source code repository and rebase both the older and newer application in VisualNDepend. This way, two versions of a source file can be compared properly:


 

All the NDepend path handling code relies on the NDepend.Helpers.FileDirectoryPath library. Each time I look at the extremely poor path handling support in the .NET Fx, I am surprised that

  1. this library is not more popular
  2. there is no other equivalent library proposed (as far as I know)
  3. MS doesn't seem to consider this a major area for future improvement.

Don't other developers need to handle complex path scenarios, or does everybody re-invent the wheel?

 

I think all tools for .NET developers work more or less this way when it comes to source files rebasing, as you can read here for the excellent JetBrains dotTrace profiler.

 



Source Click Here.

And we're off...

Yesterday was my last day at Fuel. It was a hard decision, but one I felt was necessary in order to grow.

I'm heading back into more enterprise-type work. I'll essentially be building management systems and services for emergency-room medical equipment. Our team's portion of the work isn't as critical as the medical devices themselves, but there's still an extremely high need for quality.

From my observations, our field suffers badly from high turnover. Part of that is the economic flux in software development - I think most companies are still grappling to figure out how much a software developer is actually worth to them with respect to an open market. A major problem is how deceptively simple it is to write software - but how difficult it is to write good software. A young company can start off hiring a group of junior developers at $40K (the going rate in Ottawa as far as I can tell), but as soon as the jobs start to get bigger and the expectations greater, they seem unable to make that jump to $80K. We can get 2 programmers for that price - they reason. Of course, as has been shown numerous times before (don't managers read these things?!), top programmers are unbelievably cheap with respect to productivity.

We generally learn from our failures and successes, and by being exposed to a variety of programs and fellow programmers. Stay at one place for too long, and you get really good at doing one thing, but opportunities to learn become few and far between. I think good developers start to thirst for failure - or at least the risk of failure. I'm not talking about the failure that comes from an unreasonable deadline either, but from true problem solving.

Which of course leads to another big reason I see developers leaving their jobs - they want to do things differently. The most drastic example is the move to Agile methodologies. It's easy to get caught up in the Agile hype - in large part because a lot of it is common sense - but the reality is that most companies aren't ready for, or capable of, such a huge transition. It's far easier to change companies than to try to change a company.

I think all of these problems can be summed up by companies not understanding software development or developers. Some companies don't even realize they are in the software business...yikes!

People often say the grass is greener on the other side. But sometimes it really is greener on the other side. I guess we'll soon find out.



Source Click Here.

Valid date-time values in sql server. SqlDateTime vs DateTime

You cannot store every date in sql server. The valid range of dates is from 1/1/1753 (1-1-1753) to 12/31/9999 (31-12-9999). The range of the .NET DateTime type is far larger. So before storing a datetime in a sql server database you have to perform a check. This should be (and is) not too difficult in .NET. But as the documentation of SqlDateTime and other Google results are confusing, here is a quick summary.

The .NET framework has two types, DateTime and SqlDateTime. The SqlDateTime type has an implicit operator which converts a regular DateTime into a SqlDateTime. Thanks to this implicit type conversion you can mix both date types in an expression. At first sight the following code looks like a good check.

DateTime bla = DateTime.MinValue;
if ((bla >= SqlDateTime.MinValue) && (bla <= SqlDateTime.MaxValue))
{
    // bla is a valid sql datetime
}

To my initial surprise it throws a sql exception: "System.Data.SqlTypes.SqlTypeException: SqlDateTime overflow. Must be between 1/1/1753 12:00:00 AM and 12/31/9999 11:59:59 PM." What happens is that before comparing the dates, the variable bla is (due to the implicit operator) cast to SqlDateTime. Doing that, it hits the SqlTypeException. The rule is that in an expression with two different types, they are converted to the narrower of the two. So what does work is to explicitly cast the SqlDateTime to a DateTime. Like this

DateTime bla = DateTime.MinValue;
if ((bla >= (DateTime)SqlDateTime.MinValue) && (bla <= (DateTime)SqlDateTime.MaxValue))
{
    // bla is a valid sql datetime
}

This behavior will not show up until the test meets an invalid sql date at runtime. The good thing is that this same kind of implicit conversion can also prevent code from compiling.
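Since the original screenshot isn't reproduced here, this is a sketch (my reconstruction) of the kind of code that refuses to compile when the casts are left out:

static bool isValidSqlDate(DateTime date)
{
    // Does not compile: date is implicitly converted to SqlDateTime, the
    // comparisons therefore yield SqlBoolean, and SqlBoolean cannot be
    // implicitly converted to the declared bool return type.
    return ((date >= SqlDateTime.MinValue) && (date <= SqlDateTime.MaxValue));
}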

  

 

The message is enigmatic until you realize that the implicit conversion of the dates leads to a different type for the result of the expression: a SqlBoolean instead of a .NET bool.

This function builds and runs well.

static bool isValidSqlDate(DateTime date)
{
    return ((date >= (DateTime)SqlDateTime.MinValue) && (date <= (DateTime)SqlDateTime.MaxValue));
}

Far easier than the parsing and testing frenzy I found while Googling for this.



Source Click Here.

Firing generic events with EventAggregator

It's been a while since I posted anything on Prism. When I left p&p, I said you would see more posts on Prism on my blog. I've been pretty immersed in MEF since leaving and haven't done any posts on Prism yet. Today, I got inspired (albeit late in the evening) by a post on the forums to actually do that, so here I am :-). The post was from TSChaena about using EventAggregator to fire generic events similar to the way we did things with EventBroker in CAB. Using EventBroker allows you to dynamically define events in your application that are identified by a topic name rather than needing to define a strongly typed class as you do with the EventAggregator. There are several advantages to not using the EB approach, which I have identified in this post. However, there are times when you want a dynamic eventing system. The good news is that there actually is a solution for doing this with our EventAggregator, though it is not exactly the same as the way we did it in EB.

CompositeWPFEvent

Before we look at the solution I came up with, let's talk quickly about CompositeWPFEvent. CompositeWPFEvent is a generic class with one type parameter, TPayload. TPayload defines the payload that will be passed into the event when it is fired. The subscriber also uses the payload to provide filters for the subscription. For example, if TPayload is a FundOrder as in the EventAggregator QuickStart, then you can supply a lambda such as fundOrder => fundOrder.CustomerId == customerID to filter on only events that are received for a specific customer. The common pattern you will see for defining such events is to create a class that inherits from CompositeWPFEvent for each event, typed to its specific payload. For example, below is the definition for the FundOrderAdded event.

public class FundOrderAdded : CompositeWpfEvent<FundOrder> {}

This event is then retrieved from the EventAggregator by calling the GetEvent method, passing FundOrderAdded as the event type. Now, although this is the common pattern, there is nothing about the EventAggregator that requires you to create a new event class for each event. CompositeWPFEvent is not an abstract class, so you can simply "use" it as is, even in a generic fashion. For example you can do the following.

public class ThrowsEvents {
  public ThrowsEvents(IEventAggregator eventAgg) {
    eventAgg.GetEvent<CompositeWpfEvent<string>>().Publish("SomethingEvent");
    eventAgg.GetEvent<CompositeWpfEvent<string>>().Publish("SomethingElseEvent");
  }
}
 
public class HandlesEvents {
  public HandlesEvents(IEventAggregator eventAgg) {
    CompositeWpfEvent<string> genericEvent = eventAgg.GetEvent<CompositeWpfEvent<string>>();
    genericEvent.Subscribe(e => Console.WriteLine("SomethingEvent fired"), ThreadOption.UIThread,
      false, e => e == "SomethingEvent");
    genericEvent.Subscribe(e => Console.WriteLine("SomethingElseEvent fired"), ThreadOption.UIThread,
      false, e => e == "SomethingElseEvent");
  }
}

If you look at the above code, you'll notice that we are using the CompositeWPFEvent event directly, rather than creating a specific inheritor. When we ask the aggregator for the event, we pass in a type parameter of string, which represents the EventName / Topic. I am then using our event subscription mechanism to subscribe two different handlers to the same "event" by using the event name as the filter. So here we have the basics of generic event publication and subscription. However, we are missing something important... that payload :). To handle this, you could instead create your own custom class that carries two parameters, EventName and Value. With that approach, you can pass both the event name and the value, still filter on the event name, and pass a value along. For example, the above code passing a value would look like the following.

public class SomeEventParams {
  public SomeEventParams(string eventName, object value) {
    EventName = eventName;
    Value = value;
  }
 
  public string EventName { get; private set; }
  public object Value { get; private set; }
}
 
public class ThrowsEvents {
  public ThrowsEvents(IEventAggregator eventAgg) {
    eventAgg.GetEvent<CompositeWpfEvent<SomeEventParams>>().Publish(new SomeEventParams("SomethingEvent", "SomeValue"));
    eventAgg.GetEvent<CompositeWpfEvent<SomeEventParams>>().Publish(new SomeEventParams("SomethingElseEvent", "SomeOtherValue"));
  }
}
 
public class HandlesEvents {
  public HandlesEvents(IEventAggregator eventAgg) {
    CompositeWpfEvent<SomeEventParams> genericEvent = eventAgg.GetEvent<CompositeWpfEvent<SomeEventParams>>();
    genericEvent.Subscribe(action => Console.WriteLine("SomethingEvent fired " + action.Value), ThreadOption.UIThread,
      false, e => e.EventName == "SomethingEvent");
    genericEvent.Subscribe(action => Console.WriteLine("SomethingElseEvent fired " + action.Value), ThreadOption.UIThread,
      false, e => e.EventName == "SomethingElseEvent");
  }
}

That's OK, except now the parameters are simply object. That means we are losing the type safety that the EventAggregator was built for in the first place! You can further refactor and make SomeEventParams a generic type that accepts a type parameter for the value. The only downside of this is that the code gets much more verbose and harder to read. For example, retrieving the event to publish will now look like...

eventAgg.GetEvent<CompositeWpfEvent<SomeEventParams<string>>>().Publish...

Suboptimal. I bet you're thinking you could refactor this a bit more... yes, you can. This is what led me to a GenericEvent.

GenericEvent

If we keep refactoring, we can get rid of a lot of the duplication by creating an inheritor of CompositeWPFEvent: GenericEvent. The event and its associated parameters class look like this

public class EventParameters<TValue>
{
  public string Topic { get; private set; }
  public TValue Value { get; private set; }
 
 
  public EventParameters(string topic, TValue value)
  {
    Topic = topic;
    Value = value;
  }
}
 
public class GenericEvent<TValue> : CompositeWpfEvent<EventParameters<TValue>> {}

Subscribing and publishing are now easier as well. The previous GetEvent code now looks like

eventAgg.GetEvent<GenericEvent<string>>().Publish...

Because I have strongly typed my Value, I now have my strongly typed filters and delegates back.
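For instance, a minimal sketch of publishing and subscribing with a string payload (the topic names and handler body are placeholders of mine):

// Publish: a topic plus a strongly typed string value
eventAgg.GetEvent<GenericEvent<string>>()
    .Publish(new EventParameters<string>("SomethingEvent", "SomeValue"));

// Subscribe: the filter and handler both see EventParameters<string>, so no casts from object
eventAgg.GetEvent<GenericEvent<string>>()
    .Subscribe(parms => Console.WriteLine(parms.Value), ThreadOption.UIThread, false,
               parms => parms.Topic == "SomethingEvent");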

Putting the rubber to the road with the EventAggregation QuickStart.

In order to test this out, I took a copy of the EventAggregation QuickStart that is included with the Prism bits and modified it to use the new GenericEvent. I also added a Remove button to the QuickStart in order to demonstrate using more than one event. The new QuickStart looks like the following.

[screenshot of the modified EventAggregation QuickStart]

In the new version of the Quickstart, the FundOrderAddedEvent is removed. Instead, I have added two constants to define the different events.

public class Events
{
  public const string FundAdded = "FundAdded";
  public const string FundRemoved = "FundRemoved";
}

I added a RemoveFund method to the AddFundPresenter as well as refactored the AddFund method as follows.

void RemoveFund(object sender, EventArgs e)
{
    FundOrder fundOrder = new FundOrder();
    fundOrder.CustomerId = View.Customer;
    fundOrder.TickerSymbol = View.Fund;
 
    if (!string.IsNullOrEmpty(fundOrder.CustomerId) && !string.IsNullOrEmpty(fundOrder.TickerSymbol))
        eventAggregator.GetEvent<GenericEvent<FundOrder>>().
          Publish(new EventParameters<FundOrder>(Events.FundRemoved, fundOrder));
    
}
 
void AddFund(object sender, EventArgs e)
{
    FundOrder fundOrder = new FundOrder();
    fundOrder.CustomerId = View.Customer;
    fundOrder.TickerSymbol = View.Fund;
 
    if (!string.IsNullOrEmpty(fundOrder.CustomerId) && !string.IsNullOrEmpty(fundOrder.TickerSymbol))
        eventAggregator.GetEvent<GenericEvent<FundOrder>>().
          Publish(new EventParameters<FundOrder>(Events.FundAdded, fundOrder));
}

Finally, I refactored the ActivityPresenter in a similar fashion

public string CustomerId
{
    get { return _customerId; }
    set
    {
        _customerId = value;
 
        GenericEvent<FundOrder> fundOrderEvent = eventAggregator.GetEvent<GenericEvent<FundOrder>>();
 
        if (fundAddedSubscriptionToken != null)
        {
            fundOrderEvent.Unsubscribe(fundAddedSubscriptionToken);
            fundOrderEvent.Unsubscribe(fundRemovedSubscriptionToken);
            
        }
 
        fundAddedSubscriptionToken = fundOrderEvent.Subscribe(FundAddedEventHandler, ThreadOption.UIThread, false,
                                                     parms => parms.Topic == Events.FundAdded && parms.Value.CustomerId == _customerId);
 
        fundRemovedSubscriptionToken = fundOrderEvent.Subscribe(FundRemovedEventHandler, ThreadOption.UIThread, false,
                                                     parms => parms.Topic == Events.FundRemoved && parms.Value.CustomerId == _customerId);
 
        View.Title = string.Format(CultureInfo.CurrentCulture, Resources.ActivityTitle, CustomerId);
    }
}

Notice how in the subscription I am now filtering on the event Topic in addition to the value. This is the result of moving to a generic event.

Wrapping Up

Using the approach shown in this post, we've seen how you can utilize the existing EventAggregator infrastructure to do generic eventing similar to the way EventBroker in CAB functions.

Personally, I think using strongly typed, specific events is more maintainable. The reasoning is that the event payload type is intrinsically tied to the event, whereas in this model it is not. For example, with generic events I might have an event that publishes a customer, but on the receiving side I have defined it as a string. This event will never get handled, because the subscriber and publisher don't match. If you use strongly typed events that is not the case, as the type is the match ;) However, there are scenarios where it may make sense to have something more dynamic, for example if you have a metadata-driven system that needs to do dynamic wiring.

Attached, you'll find the code for my modified version of the QuickStart. Let me know if this works for you. Now time to get some sleep :)



Source Click Here.

Introducing Kanban at Xclaim

We've rolled out Kanban at my company, Xclaim Software. Prior to this we were following a more-or-less XP process, evolved and tweaked over some two years.

Xclaim Kanban 1.0

Even though our team has been doing the Agile thing with good results, there are times when the process seems a little opaque and wasteful. I've noticed that it's hard to surface where we're encountering bottlenecks or impediments. Planning and maintaining a large inventory of backlog creates waste; planning can take several hours for large batches of new stories and, while I think there's good value in preparing an as-full-as-possible backlog at the beginning of a project, I see very little value in maintaining a backlog of more than three months at a maximum. There's simply too much risk of redundancy and re-work in a large backlog.

For a while now I've seen iterations as an arbitrary nuisance. We all know velocity is a yardstick measure that's imprecise and best used for rough planning. We can also take points delivered between any two points in time and compare this number to previous durations to develop a trendline. Over a longer history we can use these numbers as a measure of throughput and improvement. Why then do we need to reset them every Wednesday? If we're using similarly-sized items -- which we are -- it seems that feature cycle time (time from 'activation' to 'in production') is equally useful and much more understandable by both customer and developer.

A big source of waste, waste due to over-processing, is the planning, retrospective, and customer demo ceremonies. It's easy to burn a half-day or more in these meetings, and the fact of the matter is that a lot of these things can just be JIT'd on an as-needed basis with the right people, getting us much closer to the lean concept of pull.

Is it too much of a stretch to say project determines process? Every project we work on and environment we work in will come with requirements that drive a customized process. Of course we can't get there from day one. We need to set up a good baseline with the practices we know have broad applicability, acceptance, and tolerance: TDD, rolling wave planning, etc. Good Agile teams, however, continually adjust their process to fit their product and the needs of their customers. In a sense, we're designing our process as we go, and this is something I see Kanban encouraging.

There are a few things to say about this Kanban thing we've got going on, and I'd like to tackle this as a mini-series to make the posts digestible. I'll continue with five installments to start:

1. Why bother? Pull, flow, throughput and constraints.
2. Anatomy: queues, buffers, work-in-process, standard work and order points.
3. Developing and introducing a Kanban in your team.
4. A tour of our initial Kanban pipeline.
5. Handling rework and the zero defect mindset.
6. TBD

If you're interested in Kanban, I highly recommend subscribing to the Yahoo! Group. You may also want to check out Corey Ladas' blog; he writes about the practice with some regularity, sharing valuable insight and experience.



Source Click Here.

A Train of Thought - August 24th, 2008 Edition

Thank God I no longer have my old three hours a day of commuting into Manhattan, but I needed to spit out some little blog postlets in more than 140 characters at a time, so I present to you the latest Train of Thought.  Opinionated blathering ahead; comments are always open at the bottom.

 

Don't Overreach in Your Designs

One of my older posts on The Last Responsible Moment and another post on evolutionary design were recently linked on some of the DZone/DotNetKicks sites and got some traffic.  I got some comments and emails to the effect of "we did evolutionary design and it bit us in the ass with all that refactoring and rewriting."  Maybe, but let's talk about how to do evolutionary design in a way that minimizes outright rework.  From my experience, the worst rework results from choosing elaborate abstractions upfront that turn out to be harmful.  The analogy that I like is trying to walk on slippery ground.  Anybody who's walked across an icy patch or a muddy field knows that the way to do it is to keep your feet as close to your center of gravity as possible by taking short steps.  If you take a big step you're much more likely to slip and fall.  Design is the same way.  Bad things happen when you allow your design thinking and abstractions to get ahead of your development and requirements.

We're doing evolutionary design, and yes, we have had to rewrite some functionality when we've found shortcomings in the design or simply found a better way to do it.  I would attribute the worst example of avoidable rework on our project to overreaching with some infrastructure outside of user stories.  We're using the new ASP.Net MVC framework, and we didn't like the way that it handles (or really doesn't handle) the "M" part of the MVC triad.  We had one of those conversations that starts with "wouldn't it be cool if..." and ended with one or both of us spending days of architectural spiking on an approach for screen synchronization - before we created our first web page.  We created the idea of a "ViewModel" that would represent screen state and help us to move data between the web page form elements and our Domain Model objects.  We wrote a very elaborate code generation scheme to automate a lot of the grunt coding.  As soon as we started to work on our first couple web pages we quickly realized that much of our ViewModel infrastructure was unnecessary or just plain wrong.  We effectively rewrote the ViewModel code generation in a simpler way and got on with the project.  Since then, we've extended the ViewModel code generation to add new behaviors on an as-needed basis, but we haven't had to rewrite any of it.

Just to head off the comments, I didn't know about the BindingHelperExtensions in the MVC at the time (shame on me).  I don't regret rolling our own infrastructure at all because I don't think that BindingHelperExtensions is adequate, but I wish we'd played it a little smarter and put off the ViewModel code generation until we had a couple working pages to point out the real patterns.

What I'm trying to say here is to avoid speculative abstractions and fancy patterns outside of feedback from the real features and needs of the system.  It's relatively painless to extend simple code for more elaborate usages than its original intentions, but it hurts to throw out or change elaborate code.  You can hedge your design bets by (almost) always starting simple.

 

Enabling Evolutionary Design

So, how do you do evolutionary design without incurring a lot of rework?  Here's my recipe:

  • Worry a lot about cohesion and coupling as you work. 
  • Follow the Open/Closed Principle
  • Follow the Single Responsibility Principle
  • Use TDD or BDD for low level design.  First because it does more to ensure good cohesion and coupling on a class-by-class basis than any other technique, but also because the automated unit test coverage left behind makes changing the code cheaper in many cases.  The cost and risk of regression testing when changing code is a considerable roadblock to making design improvements midstream.  If you reduce that cost and risk, evolutionary approaches are a lot more attractive.  That test coverage is one of the ways that TDD/BDD is more valuable as a practice than merely applying some unit tests after the fact to strategic areas of the code.

From an old post:

One way to think about TDD is an analogy to Lego blocks. The Lego sets I had as a child were the very basic block shapes. Using a lot of little Lego pieces, you can build almost anything your imagination can create. If you buy a fancy Lego set that has a single large piece shaped like a pirate's ship, all you can make is a variation of a pirate ship.

In that context I was talking about TDD, but I feel like the analogy holds very true for doing evolutionary design effectively.  Composing your system of small Lego pieces that can be rearranged is much better than using big monolithic pieces of code that are more likely to be modified later.
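As a toy illustration of that "small pieces" idea (entirely my own sketch, not from the original post): small, single-responsibility classes behind an interface can be rearranged or extended without editing the code that composes them.

using System.Collections.Generic;

// Each piece does one small thing and can be swapped or rearranged freely.
public interface IPriceRule
{
    decimal Apply(decimal price);
}

public class SalesTaxRule : IPriceRule
{
    public decimal Apply(decimal price) { return price * 1.05m; }
}

public class PromotionRule : IPriceRule
{
    public decimal Apply(decimal price) { return price - 2m; }
}

// The pipeline is closed for modification: new behavior arrives as new pieces,
// not as edits to this class.
public class PricingPipeline
{
    private readonly IEnumerable<IPriceRule> _rules;

    public PricingPipeline(IEnumerable<IPriceRule> rules) { _rules = rules; }

    public decimal Calculate(decimal basePrice)
    {
        decimal price = basePrice;
        foreach (IPriceRule rule in _rules) price = rule.Apply(price);
        return price;
    }
}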

In the end, it really amounts to just design things well, all the time.  Unsurprisingly, I think that teams with strong software design skills are best equipped to do evolutionary design.

 

On Software Factories

Last week I was at an Open Spaces event in Colorado with a very diverse group of folks in a session that rambled around until "Software Factories" came up.  I stated, and not for the first time, that Software Factories are often Big Design Upfront dressed up in sexier new clothes.  I definitely think the software factory idea can work (with Ruby on Rails as exhibit A), but I think the activity of defining elaborate project and class templates upfront is risky, or at least suboptimal.  A project team is going to have much less willingness to reconsider designs if design changes require changing the software factory templates.  To me, software factory techniques will succeed if and only if it's easy to modify the factory automation guidance as the team works and learns more about their system.

My other point with software factories was that I think micro code generation (live templates, file templates, ReSharper tricks, scaffolding, etc.) where the developer is in complete control has a much better chance of succeeding than the elaborate factories that try to generate most of the application for you. 

 

Opinionated Software

Ruby on Rails introduced "Opinionated Software" into the common lexicon, but it's been around for a while.  I think that my team is gaining some advantages from our design's "opinions," but what if you don't like the opinions of your chosen framework?  Take CSLA.Net as an example.  I want absolutely nothing to do with CSLA.Net because I think its basic model is severely flawed, but I bet that it's providing a lot of value for many teams.  That value is largely because CSLA.Net has firmly engrained "opinions" about how almost every common development task should be done.  I can't use CSLA.Net, and a lot of the Microsoft tooling for that matter, because I don't agree with the "opinions" baked into that tooling.  I'll happily build my own infrastructure to support the way that *I* feel software should be created, or go around a piece of infrastructure I don't agree with.  Heck, the MVC framework isn't even released and we've already considerably diverged from its opinions.  Other developers will simply go with the flow of whatever tooling they're using, invest time into learning the idioms of that particular tool, and not waste time questioning that tool.

I think this comes down to a question of "go with the flow or change the course of the river."  I'm a "change the course of the river" to the optimal path kind of guy, but I frequently wonder if it would be better to just give up and go with the flow.

 

TypeMock is only a Bullet of Ordinary Composition

I was out of pocket last week at a little open spaces event, so I missed most of the latest Twitter and blogging flareup of the TypeMock question.  I'll repeat my opinion that there's nothing inherently wrong with TypeMock itself, but I think that the rhetoric from TypeMock proponents is often harmful to the greater discussion of software design and practices.

TypeMock might be a better mocking framework than Rhino Mocks or Moq, but it does NOT change the fundamental rules of mock object usage.  Just because you can use TypeMock to mock a dependency doesn't mean that it's the right thing to do.  Let's remember some of my rules of mock object usage:

  • Don't ever try to mock chatty interfaces like ADO.Net or anything related to HttpContext because the effort to reward ratio is all wrong and you can never read those tests anyway. 
  • Be extremely cautious of mocking interfaces that you do not understand. 

The only thing that TypeMock changes is *how* the mock object is introduced into the code being tested.  If you really think that having separate interface definitions plus Dependency Injection is hard, then yeah, use TypeMock (an assertion that I would obviously dispute in this age of auto wiring, auto mocking containers, ReSharper, and convention driven configuration of IoC containers).  Just remember a couple things please:

  • Mocking in general isn't going to be an effective technique with classes that aren't cohesive or have a lot of semantic coupling with their dependencies.  In other words, interaction testing with any mock object is going to be painful with badly written code.  TypeMock simply doesn't change that equation.  I've heard TypeMock put forward as a solution for unit testing legacy code.  In theory yes, but the reality that I've found is that interaction testing inside Legacy Code is an exercise in pain.  Most legacy code (and I'm using the Feathers definition of legacy code here) has very poor internal structure and poor separation of concerns.  Exactly the kind of code that you shouldn't bother using interaction testing on.  I'd instead recommend surrounding Legacy Code with more coarse grained integration tests to preserve behavior first, then trying to modify the internal code to a better structure before writing fine grained unit tests.  Yes, it is possible to use TypeMock to "unit test" typical legacy code, but those tests would almost automatically be the type of overspecified unit tests that cause more harm than good.  The problem with legacy code is often the structure of the code more than the fact that it doesn't have any unit tests.
  • Yes, you can unit test the class in question that news up its own dependencies and calls static methods, but you still have a very tight runtime coupling to those dependencies and the static method calls.  Regardless of your ability to unit test the class in question, that tight coupling can often be a problem.  Your ability to reuse those classes is compromised by the tight dependencies.  Your ability to practice evolutionary design is compromised because of the tight coupling.  Remember that Dependency Inversion and Inversion of Control have other benefits than just unit testing.

I think the TypeMock proponents are too focused on unit testing in a way.  I firmly believe that code that can't be efficiently unit tested is almost automatically bad code (to me, testability == productivity).  However, code that can be unit tested isn't necessarily good.

To recap, I don't think there's anything wrong with TypeMock per se, but I think that much of the TypeMock proponent's rhetoric is irresponsible.  Just because TypeMock *can* do something, doesn't mean that doing that something is a good idea. 

 

 

In Tribute to George Carlin

I couldn't think of 7, and it's a couple months late for a Carlin tribute, but here's my list of the words or phrases that are henceforth banned from appearing in my blog or presence (starting right now).  Almost no conversation is going to be useful if it includes one of these words:

  • Mort - Apparently Microsoft is now referring to the developer formerly known as "Mort" as "Pragmatic Developers."  Puh-leeze.  Everybody in the world thinks that they're pragmatic, but yet we disagree on many significant directions in the best way to build software.  I was dead set against ALT.NET getting renamed "Pragmatic.Net" for the same reasons.  I gotta say though, "Pragmatic Developer" is much less a pejorative than "Mort" became and the typical "Joe Schmoe Developer who builds LOB apps at General Motors" line you hear from Microsoft employees.
  • Entity Framework - At least until there's something new to say.  I'm liking that my attention lately has been on the advance of Fluent NHibernate instead of worrying about a tool that I'm very unlikely to use in the next 2-3 years.
  • Stored Procedures - I've seen nothing to change my opinion about sprocs for several years (good for edge cases and utility database scripts, bad everywhere else, i.e. 95%+ of the time I think sprocs are unnecessary)
  • TypeMock
     
  • "Vietnam of Software Development" - Most  overblown and misused analogy this side of Software as Construction.
  • "Software as Construction" - I worked on the engineering side of construction projects measured in the 100's of million dollars and even billion dollar+ projects (and this was in the pre-W days when the USD was more than paper money), plus I worked for my father building houses as well.  I feel perfectly qualified to say that the "Software as Construction" analogy is an extremely poor fit.  Software as Manufacturing is better, but I bet that somebody will write a rant about that comparison in the next couple years.
  • Foo Considered Evil - It's a cliche now
  • "Cargo Cult" - used as a magic talisman to win any argument, regardless of whether the use of the phrase is applicable or not.
  • "Your Emperor has no Clothes" - see above
  • "Jumped the Shark" - see above
  • "You should just use whatever is best for your project" - The intellectual equivalent of empty calories
  • "You're just being dogmatic!" - Lamest way to try to win an argument.  Basically, this is code for "I'm pissed that you don't agree with me so I'm just going to call you names and declare victory, so there!"
  • "You can just Refactor it later" - You can write simplistic code upfront and say you'll refactor it later to eliminate duplication or handle more complicated cases as those cases arise, but you don't write bad code on purpose.  You certainly don't use Refactoring as an excuse to just not think about design.
  • "We're refactoring" when the team really means "we're rewriting that code altogether."  There's no such thing as a big refactoring.

 

 

Okay, I'm done.  Your turn:



Source Click Here.
