
Sunday, November 30, 2008

Ray Ozzie at Mix08

Just heard that Ray Ozzie will also be talking at Mix this year.  If you don't know, Ray Ozzie is Microsoft's Chief Architect - taking over after Bill Gates. I've followed Ray's career for a very long time - since the days of Lotus Notes (which he created) and Groove (which is still ahead of its time... even though Microsoft has done absolutely nothing with the product since they purchased it - which is sad).

I have to admit, Ray Ozzie isn't the most exciting presenter in the lineup - however, I think that, along with Guthrie, he'll be the most profound.  With the speakers they have lined up, I truly hope Microsoft drops some huge news in our laps... something that will really blow us away and keep us excited.

 




Source Click Here.

FolderShare finally getting integrated into Live.com

Nice....

https://www.foldershare.com/





Requirements Management with Team System White Paper

Requirements Management is something very near to my heart. So is Team System.  Wouldn't you know that there is now a white paper out from Microsoft that talks about both!

 




Behaviour Driven Development Video

Good watch...

 




Live.com changed...

I noticed today that live.com has changed.  My first experience - silly fast!  I was impressed.  My second experience was impressive as well - I performed a vanity video search, a few results popped up, and when I hovered my mouse over a video thumbnail, it started playing.  That feature actually startled me, since I wasn't expecting it and all of a sudden my computer was talking to me.




Free Webcast on Process Template Customization...and more...

Over the next few weeks, Imaginet will be hosting a series of public Webcasts on various topics.  The first of this series is Steve Porter on process template customization for Team System.

Here are a few more Webcasts we're putting on:

http://www.imaginets.com/news--events/spring-2008-webcast-series.aspx


Windows Communication Service Gateways with C#3.0 and Linq-To-SQL
An overview of a simple Gateway data accessing pattern exposed via WCF, using LinqToSql for data access. Also includes a brief look into handy uses of new C# 3.0 and .NET 3.5 features including Lambda expressions, LinqToEntities, and more.
May 28 - 2008
https://www119.livemeeting.com/lrs/8000153370/Registration.aspx?pageName=0ggc1605gzxfjkkf&FromPublicUrl=1

NHibernate: An Entry-Level Primer
A look into getting started creating Data Access Layers with NHibernate.
June 25 2008
https://www.livemeeting.com/lrs/8000153370/Registration.aspx?pageName=cjqnm4lgfrcz3wr9

Customizing Team System Process Templates
Learn how to customize Team System's process templates to allow you to align your organization's unique processes to those processes managed by the Team Foundation Server.
May 14 - 2008
https://www119.livemeeting.com/lrs/8000153370/Registration.aspx?pageName=73fdz3q11vj4s4hl&FromPublicUrl=1

Dependency Injection with StructureMap
Learn how to decouple your application and drive it towards a cleaner and more testable design by using dependency injection.
June 18 - 2008
https://www.livemeeting.com/lrs/8000153370/Registration.aspx?pageName=g22s2bs6wzh759g7

Creating Real world applications with CSLA 3.5
I'll do a quick walkthrough of creating a Point of Sale system using Parent-Child-Grandchild relationship using the new features of CSLA 3.5. These include Linq-to-SQL, better property management and persistence management through the chain.
July 2 - 2008
https://www.livemeeting.com/lrs/8000153370/Registration.aspx?pageName=l6p0t81q6lm23pml

Introduction to Software Factories
This web cast will introduce developers to Software Factories.  We'll focus specifically on those published by Microsoft, including the Web Client Software Factory, the Smart Client Software Factory, and the Services Software Factory.  We'll demo how the factories work and examine what they produce.
May 21 - 2008
https://www119.livemeeting.com/lrs/8000153370/Registration.aspx?pageName=8gprzl6j2tz2skv5&FromPublicUrl=1

Customizing Software Factories
This Webcast will walk developers through what is involved in customizing the software factories published by Microsoft. We'll discuss the Guidance Automation Toolkit (GAT) and the Guidance Automation Extensions (GAX), and will discuss the dos, don'ts, and pain points involved in working with these technologies.
Familiarity with Software Factories is recommended.
June 11 - 2008
https://www.livemeeting.com/lrs/8000153370/Registration.aspx?pageName=699ps56b6hdjqx90

MVC vs MVP smackdown
This Webcast will compare and contrast two presentation layer design patterns: the Model View Controller (MVC) design pattern being baked into the ASP.NET MVC Framework, and the Model View Presenter (MVP) design pattern that is currently baked into the Microsoft Software Factories.  What's the difference? Which is better?  Find out!
July 9 - 2008
https://www.livemeeting.com/lrs/8000153370/Registration.aspx?pageName=dnr4zf5th6xp8cnl

Testing out the MVC: Routing
One of the most difficult parts of an application to test is the User Interface.  With the impending release of the ASP.NET MVC framework, this will become a lot easier for web based applications.  This webcast will take a Test Driven approach to exploring the new MVC Framework focusing on the URL Routing.
June 4 - 2008
https://www119.livemeeting.com/lrs/8000153370/Registration.aspx?pageName=j8l7d1rddtq7sjm3&FromPublicUrl=1

Creating a Web Video Player Using Silverlight
A brief introduction to creating Silverlight applications for websites, and a demo of creating a video player.
July 16 - 2008
https://www.livemeeting.com/lrs/8000153370/Registration.aspx?pageName=tb930khj51nfpgp2




Tech Behind Live Earth

Some of the 3D views in Live Earth are stunning (especially if you live in a major US city).  Mark Brown talks about the camera technology that sits behind it.  Very cool.

http://blogs.msdn.com/markbrown/archive/2008/05/23/cnet-article-on-google-capturing-3d-data.aspx

A bit about the camera:

216 mega-pixels with a panchromatic image size of 14,430 x 9,420 pixels, capturing data at over 3 GBits/sec; 13 CCDs (7 pan and 4 color: RGB + Near IR) and 14 CPUs to process the raw images and data in real time. The data units for the camera hold 1.7 TB, enough for about 4,700 images.

How the heck can I get me one of those 216 Mega-pixel cameras?  I'm sure in 15 years Canon will come out with a consumer model ;-)




Saturday, November 29, 2008

Visual Studio Team System 2010 is Coming

I normally let the product groups blog about their announcements - they do a much better job at blogging anyway... however, I wanted to make sure that everyone is going to check out Channel9 this week, as there is some great content coming.

http://channel9.msdn.com/posts/VisualStudio/Visual-Studio-Team-System-2010-Week-on-Channel-9/

Stay tuned to http://channel9.msdn.com/VisualStudio/ for all of the action!




Dynamic Data and the Associated Metadata Class

If you've used Dynamic Data or watched some demos, you may have been puzzled by the way metadata is associated with database fields.  Instead of being placed directly on the partial Entity class (e.g. Product), it needs to go on a different class which gets associated with the real class via a MetadataTypeAttribute.

Here is an example where we're adding a display name on the UnitsInStock column:

[MetadataType(typeof(Product_Metadata))]
public partial class Product {
}

public class Product_Metadata {
    [DisplayName("The Units In Stock")]
    public object UnitsInStock { get; set; }
}

Here are the key pieces at play:

  • First you have the Product partial class, which is 'partial' with the Product class that was generated by the Linq To Sql or Entity Framework designer.  The generated class contains the 'real' UnitsInStock property, which is an integer (not shown above, but you'll find it in the generated Northwind.designer.cs file).
  • This partial Product class has a MetadataType attribute pointing to a different class called Product_Metadata (the name is arbitrary but appending _Metadata is a good convention to indicate a MetaData buddy class).  This is the metadata class, sometimes referred to as the associated class or the 'buddy' class.  Note that this class only exists to hold metadata, and is never instantiated.
  • On the metadata class, there is a UnitsInStock property.  This is sort of a 'dummy' property that is used as a place to put CLR attributes on.  We typically give it a return type of 'object', though in reality it is ignored and could be any type.
  • And finally there is the metadata attribute itself, in this case DisplayName (and obviously you could have more than one attribute).

Why are we doing this?

So I have described what we do, but not really why we do it this way, instead of putting attributes directly on the real property.  Simply stated, the reason is that it's not possible, due to a C# language limitation (VB.NET has the same issue).  The limitation is that when one half of a partial class defines a property (in this case it's in the generated class), you cannot add attributes to that property from the other half of the partial class (in this case in the user code).
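To make the limitation concrete, here is a sketch (types simplified, the DisplayName usage is just illustrative) of why the attribute can't go on the user half of the partial class:

```csharp
// Generated half (e.g. Northwind.designer.cs) - defines the real property:
public partial class Product {
    public int UnitsInStock { get; set; }
}

// User half - there is simply no C# syntax that lets you attach an
// attribute to UnitsInStock from here, because the property is not
// re-declared in this half:
public partial class Product {
    // [DisplayName("The Units In Stock")]  // <-- nothing to put this on
}
```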

Of course, you could easily add the attributes directly on the generated class where the property is defined, but that would be a very bad idea, because you would lose it all as soon as you make a model change that requires the file to be regenerated.  In general, you should never make any changes to generated files, as that is asking for trouble!

What if you don't want to use CLR attributes for your metadata?

Some users don't like using CLR attributes on their entity class to define their metadata.  To see an alternative technique that lets you keep your metadata in arbitrary places, check out this really cool post by Marcin (one of the developers on Dynamic Data).




ASP.NET Dynamic Data on CodePlex

Today, ASP.NET Dynamic Data moved from its current home on Code Gallery to its new home on CodePlex.  Note that this is part of a general ASP.NET CodePlex project, so you'll see other features mentioned in there, like MVC and AJAX.  This doesn't mean that those features are tied to Dynamic Data any more than they were before; they're just sharing a home on the same CodePlex project.

There are a couple different things related to Dynamic Data that are available there, and since there is room for confusion I'll try to help make sense of it all.

The 'core' Dynamic Data

The first piece is Dynamic Data itself.  This will ship later this summer as part of Framework 3.5 SP1, but until that happens you need to install preview bits.  In order to get the latest preview, you first need to install VS 2008 SP1 Beta.  This comes with an outdated build of Dynamic Data, so you then need to install the update from CodePlex.  This is the same update that we had on Code Gallery, so if you already installed that, you don't need to do it again.

Once Framework 3.5 SP1 officially ships, there will no longer be a need for this.  Note that for all practical purposes, this update is identical to what the final release will be in SP1.  We are now completely locked down, and have not made a change in 3 weeks.  It just takes a little more time for the whole VS SP1 stack to finish up and be out the door.

Note that even though this is on CodePlex, we are not at this time making the sources of the 'core' Dynamic Data available.  I certainly hope that we will be able to in the future, but that has to go through some lawyers and other fun things.

The Dynamic Data Futures

The second important piece that's available on CodePlex is the Dynamic Data Futures (previously called 'Dynamic Data Extensions').  At this time, the Futures are basically a big VS solution that contains both a library of new features (Microsoft.Web.DynamicData.dll) and a couple of web sites that show them in action.

Unlike the Dynamic Data 'core', this 'Futures' piece is fully available in source form.  The solution first builds Microsoft.Web.DynamicData.dll, and then builds the various samples that rely on it.

Note that those Futures do not replace the core Dynamic Data, but instead run on top of it.  So it shouldn't be looked at as being the 'next version' of Dynamic Data, but rather as being some extra functionality that will be available on top of the Framework 3.5 SP1 release.

This project serves a few different purposes:

  • It gives us a place to experiment with new features that we didn't have time to put in SP1.  Some of those will eventually make their way into the next version of Dynamic Data, while others are likely to remain samples.  Obviously, we'd love to get feedback on those (the forum is the right place for that).
  • It lets us address some issues that are in the SP1 release and couldn't be fixed in time.
  • It has some great samples for Dynamic Data users to look at and learn how to achieve various results.  Note that a few of the samples that were previously separate are now included in there (e.g. Scott Hunter's DbImage sample).

What happens next

The 'Futures' project is very much a work in progress, and we are actively adding things to it.  So you should expect to see fairly frequent updates on CodePlex.

Please give this a try and let us know how it goes on the forum!




Using ASP.NET Dynamic Data with ObjectDataSource

Support for LINQ based O/R mappers

Out of the box, ASP.NET Dynamic Data has support for both Linq To Sql and Entity Framework.  In addition, it has a provider model which allows additional O/R mappers to be supported.  For instance, we have a sample provider that supports ADO.NET Data Services (aka Astoria).  There is also a provider for LLBLGen.

Note that all those scenarios have some things in common:

  • They are based on some form of data context: it's called various things in the different O/R mappers (DataContext, ObjectContext, DataServiceContext), but essentially it's a class that has a property for each Entity Set.
  • They support LINQ: the Entity Set properties on the data context class implement IQueryable<T>, which allows LINQ to be used on them.
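As a minimal sketch of that common shape (class and entity names are made up for illustration, not a real provider), a LINQ-capable data context is essentially this:

```csharp
using System.Linq;

// Hypothetical entity class.
public class Product {
    public string Name { get; set; }
}

// Hypothetical data context: one IQueryable<T> property per entity set,
// which is all that's needed for LINQ queries to compose over it.
// (In-memory data here stands in for whatever the O/R mapper does.)
public class SampleDataContext {
    public IQueryable<Product> Products {
        get { return new[] { new Product { Name = "Chai" } }.AsQueryable(); }
    }
}
```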

 

Support for scenarios that use ObjectDataSource

While having this extensibility is great, it doesn't cover a whole range of scenarios where users have their own data layer, with no LINQ and no data context in sight.  For a simple example of such a scenario, see the sample attached to this MSDN page.  The key aspects are:

  • Entity class: you have a strongly typed entity class (NewData in the MSDN sample).  It just defines the fields you care about, with no real logic attached.
  • Business layer class: you have a class that has various methods to perform your CRUD operations (the class is named AggregateData in that sample).
  • ObjectDataSource: your page uses an ObjectDataSource to have ASP.NET interact with your entity and business layer classes.  You tell the data source exactly which method to call for each CRUD operation.

Quite a few people use this technique, as it is very flexible and can work with a fairly arbitrary business layer.

So how does this all work when you want to use Dynamic Data in this type of scenario?  It actually integrates very nicely, using some simple functionality that's available as part of the Dynamic Data Futures on CodePlex.  If you download that solution, it includes a fully functional sample based on the MSDN sample above.

Here are more details on what it all looks like.

The business layer class

This remains pretty unchanged, except that you have the ability to throw an exception when you see invalid data come in, e.g.

public class AggregateData {
    // Unchanged pieces omitted

    public DataTable Select() {
        return (table == null) ? CreateData() : table;
    }

    public int Insert(NewData newRecord) {
        if (newRecord.Name.Length < 3) {
            throw new ValidationException("The name must have at least 3 characters");
        }

        table.Rows.Add(new object[] { newRecord.Name, newRecord.Number, newRecord.Date });
        return 1;
    }
}

Here, it's saying that the name must have at least 3 characters.  What's really nice is that the exception you throw here gets shown exactly where you expect it: in the ASP.NET validation summary.

 

The Entity class

Let's now look at the entity class.  Again, it's mostly unchanged from the original, except that we now have the ability to annotate it using Dynamic Data attributes:

public class NewData {
    [RegularExpression(@"[A-Z].*", ErrorMessage = "The name must start with an upper case character")]
    [Required]
    [DisplayName("The name")]
    public string Name { get; set; }

    [Range(0, 1000)]
    [UIHint("IntegerSlider")]
    [DefaultValue(345)]
    public int Number { get; set; }

    [DataType(DataType.Date)]
    [UIHint("DateAjaxCalendar")]
    [Required]
    public DateTime Date { get; set; }
}

I won't go over the details of what each attribute means, as they are the standard Dynamic Data attributes and fairly self-explanatory.  The key point is that everything works exactly the same way as it would when you use Dynamic Data with Linq To Sql: the rendering goes through the field templates, the UIHint is used to select the template, the validation attributes become ASP.NET validators, and so on.

 

The aspx page

Finally, let's look at the ASP.NET page.  Again, it's not all that different from the original sample:

<asp:ValidationSummary ID="ValidationSummary1" runat="server" EnableClientScript="true"
    HeaderText="List of validation errors" />
<asp:DynamicValidator runat="server" ID="DetailsViewValidator" ControlToValidate="DetailsView1" Display="None" />

<asp:DetailsView ID="DetailsView1" runat="server" AllowPaging="True" AutoGenerateInsertButton="True"
    DataSourceID="ObjectDataSource1" EnableModelValidation="true">
</asp:DetailsView>

<asp:DynamicObjectDataSource ID="ObjectDataSource1" runat="server" DataObjectTypeName="DynamicDataFuturesSample.NewData"
    InsertMethod="Insert" SelectMethod="Select" TypeName="DynamicDataFuturesSample.AggregateData">
</asp:DynamicObjectDataSource>

The most notable change is that ObjectDataSource became a DynamicObjectDataSource, and that we added a DynamicValidator.

 

So let's see it run!

 

Here is what it looks like with the above metadata:

[Screenshot: the DetailsView insert form rendered with the above metadata]

 

Some notable things:

  • The header for the name says 'the name' thanks to the DisplayName on the Name property
  • It complains that the name doesn't start with an uppercase character, per the RegularExpression attribute
  • The Number's UI shows up as an AJAX slider, thanks to the UIHint
  • Likewise, the Date UI uses an AJAX calendar

 

Conclusion

Even though it wasn't a scenario that we originally intended to cover, it turns out that using Dynamic Data with ObjectDataSource scenarios is quite easy and works very well.  It really opens up a whole new range of scenarios that don't force users to switch to a different O/R mapper.

It does have a notable limitation: it is not meant to support the full site scaffolding that you get with Linq To Sql or Entity Framework.  This comes from the fact that without a data context, there is no easy way to know what the set of relevant tables is, and what relationships they have with each other.  Of course, you still get the scaffolding aspect at the page level, in so far as the UI for your DetailsView/GridView is completely model driven, which is the main appeal of Dynamic Data.

One of our users Dan Meineck just blogged about this as well, so check out what he had to say!




Dynamic Data Filtering project up on CodePlex

Josh Heyse has just released a very cool extension to ASP.NET Dynamic Data which supports much fancier filtering than is available in the core Dynamic Data.  The core idea is that filters can contribute arbitrary LINQ expression trees to the Select query performed by the data source.  So for instance you can have a Range filter which checks that a column is between two values, or a Search filter which performs sub-string matching (e.g. StartsWith, EndsWith or Contains).  All that is made available with very little effort from the user.
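To give an idea of the underlying mechanism (this is just an illustrative sketch, not Josh's actual API), a Range filter boils down to building an expression tree and composing it into the query:

```csharp
using System;
using System.Linq;
using System.Linq.Expressions;

public static class RangeFilter {
    // Builds "e => e.Column >= min && e.Column <= max" dynamically and
    // applies it to the query - the same idea as a filter contributing
    // an expression tree to the data source's Select.
    public static IQueryable<T> ApplyRange<T>(IQueryable<T> source,
                                              string column, int min, int max) {
        var e = Expression.Parameter(typeof(T), "e");
        var prop = Expression.Property(e, column);
        var body = Expression.AndAlso(
            Expression.GreaterThanOrEqual(prop, Expression.Constant(min)),
            Expression.LessThanOrEqual(prop, Expression.Constant(max)));
        return source.Where(Expression.Lambda<Func<T, bool>>(body, e));
    }
}
```

Against a LINQ provider like Linq To Sql, the provider translates the resulting expression tree into SQL, so the filtering happens in the database rather than in memory.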

It's all available on CodePlex, so check it out!  Start with the included AdventureWorks sample to get an idea what it's all about.  Great job Josh!




Why posting questions to the forums beats private emails

While most of the Q&A about ASP.NET happens in the forums, I also receive a number of direct questions from users (and so do other people on the team).  In most cases, I try to convince the sender to post to the forums instead, and there are a couple of good reasons for it.  And no, the reason is not to just get rid of the user! :-)

The first reason is that posting to the forum gives you a much larger audience of people who may be able to answer the question.  The answer may come from one of the Microsoft folks who are active there, or from one of the many experienced users from the community.  As a result, you're much more likely to get a quick answer than by sending a private email to one person.

A second very important reason is that posting to the forums builds up the general knowledge base.  Your question and the answers that it receives will be searchable by others for years to come.  So asking your question there can effectively save others from asking the same thing.  Or maybe their question won't be exactly the same, but it will be similar enough that they will post a new question that 'builds' on the previous answers.

Of course, there is no guarantee that every question you post will instantly get answered in a way that fully solves it, and it may take a few iterations to get there.  But it is your best bet!




A 'Many To Many' field template for Dynamic Data

Unlike Linq To Sql, Entity Framework directly supports Many to Many relationships.  I'll first describe what this support means.

In the Northwind sample database, you have Employees, Territories and EmployeeTerritories tables.  EmployeeTerritories is a 'junction' table which has only two columns: an EmployeeID and a TerritoryID, which creates a Many to Many relationship between Employees and Territories.

When using Linq To Sql, all three tables get mapped in your model, and you need to manually deal with the EmployeeTerritories junction table.  But when using Entity Framework, the EmployeeTerritories junction table is not part of your model.  Instead, your Employee entity class has a 'Territories' navigation property, and conversely your Territory entity class has an 'Employees' navigation property.  What Entity Framework does under the covers to make all this work is pretty amazing!
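As a rough sketch of the resulting shape (plain collections here stand in for EF's generated EntityCollection<T>; this is a simplification, not the actual generated code):

```csharp
using System.Collections.Generic;

// Each side of the Many to Many relationship gets a collection
// navigation property; the junction table is nowhere in sight.
public class Employee {
    public string Name { get; set; }
    public ICollection<Territory> Territories { get; private set; }
    public Employee() { Territories = new List<Territory>(); }
}

public class Territory {
    public string Description { get; set; }
    public ICollection<Employee> Employees { get; private set; }
    public Territory() { Employees = new List<Employee>(); }
}
```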

Unfortunately, in ASP.NET Dynamic Data's initial release (as part of Framework 3.5 SP1), we didn't have time to add proper support for such Many to Many relationships, and in fact it behaves in a pretty broken way when it encounters them.

The good news is that it is possible to write a field template that adds great support for this, and that is exactly what this blog post is about.  I should note that a couple of our users have written such field templates before (in particular, see this post).  In fact, that's what got me going to write one! :)

One difference is that I set out to make mine completely generic, in the sense that it doesn't assume any specific database or table.  e.g. it works for Northwind's Employees/Territories (which my sample includes), but works just as well for any other database that uses Many to Many relationships.

In read-only mode, it shows you a list of links to the related entities.  e.g. when looking at a Territory, you'll see links to each of the Employees that work there.

Details mode

In edit and insert mode, it gets more interesting: it displays a list of checkboxes, one for each Employee in the database.  Then, whether the Employee works in this territory is determined by whether the checkbox is checked.  Pretty much what you'd expect!

Edit mode

I won't go into great details about how it works here, but if you are interested, I encourage you to download the sample and look at the code, which I commented pretty well.  The key things to look at are:

  • The field templates ManyToMany.ascx (used for read-only) and ManyToMany_Edit.ascx (used for Edit and Insert).
  • The AutoFieldGenerator, which automatically uses those field templates for Many to Many relationships.

Enjoy, and let me know if you have feedback on this.




ProcessGeneratedCode: A hidden gem for Control Builder writers

If you've ever written any non-trivial ASP.NET control, you're probably familiar with the concept of a Control Builder.  Basically, it's a class that you associate with your control and that affects the way your control gets processed at parse time.  While ControlBuilder has been around since the ASP.NET 1.0 days, a very powerful new feature was added to it in 3.5 (i.e. VS 2008).  Unfortunately, we never had a chance to tell people about it, and a web search reveals that essentially no one knows about it!  Pretty unfortunate, and obviously, the point of this post is to change that. :-)

So what is this super cool feature?  Simply put, it lets the ControlBuilder party on the CodeDom tree used for code generation of the page.  That means a ControlBuilder can inspect what's being generated, and make arbitrary changes to it.

Warning: this post assumes some basic knowledge of CodeDom.  If you are not familiar with it, you may want to get a basic introduction to it on MSDN or elsewhere before continuing.

 

How do you use this?

To use this feature, all you have to do is override the new ProcessGeneratedCode() method on ControlBuilder.  Here is what this method looks like:

//
// Summary:
//     Enables custom control builders to access the generated Code Document Object
//     Model (CodeDom) and insert and modify code during the process of parsing
//     and building controls.
//
// Parameters:
//   codeCompileUnit:
//     The root container of a CodeDOM graph of the control that is being built.
//
//   baseType:
//     The base type of the page or user control that contains the control that
//     is being built.
//
//   derivedType:
//     The derived type of the page or user control that contains the control that
//     is being built.
//
//   buildMethod:
//     The code that is used to build the control.
//
//   dataBindingMethod:
//     The code that is used to build the data-binding method of the control.
public virtual void ProcessGeneratedCode(
    CodeCompileUnit codeCompileUnit,
    CodeTypeDeclaration baseType,
    CodeTypeDeclaration derivedType,
    CodeMemberMethod buildMethod,
    CodeMemberMethod dataBindingMethod);

 

So basically you get passed a bunch of CodeDom objects and you get to party on them.  It may seem a bit confusing at first to get passed so many different things, but they all make sense in various scenarios.

  • The CodeCompileUnit is the top level construct, which you would use for instance if you wanted to generate new classes.
  • Then you have the two CodeTypeDeclarations, which represent the classes that are generated for this page.  The reason there are two has to do with how the page is generated.  First, there is a partial base class that just has control declarations, and gets merged with the code-behind class the user writes (in Web Sites; Web Applications are a bit different).  Then we derive another class from it, which has the bulk of the code that makes the page run.  I know this seems confusing, so let me give you a rule of thumb: when in doubt, use baseType rather than derivedType.
  • Finally, we have the two CodeMemberMethods, which represent methods that relate to the particular control that's being built.  One is the method that builds the control (e.g. assigns its properties from the markup), and the other one deals with data binding.

Tip to make more sense of all that stuff: a great way to learn more about the code ASP.NET generates is simply to look at it!  To do this, add debug="true" on your page, add a compilation error in there (e.g. <% BAD %>) and request the page.  In the browser error page, you'll be able to look at all the generated code.

 

How about a little sample to demonstrate?

Let's take a look at a trivial sample that uses this.  It doesn't do anything super useful but does demonstrate the feature.  First, let's write a little control that uses a ControlBuilder:

[ControlBuilder(typeof(MyGeneratingControlBuilder))]
public class MyGeneratingControl : Control {
    // Control doesn't do anything other than generate code via its ControlBuilder
}

Now in the ControlBuilder, let's implement ProcessGeneratedCode so that it spits out a little test property:

// Spit out a property that looks like:
//protected virtual string CtrlID_SomeCoolProp {
//    get {
//        return "Hello!";
//    }
//}
var prop = new CodeMemberProperty() {
    Attributes = MemberAttributes.Family,
    Name = ID + "_SomeCoolProp",
    Type = new CodeTypeReference(typeof(string))
};

prop.GetStatements.Add(new CodeMethodReturnStatement(new CodePrimitiveExpression("Hello!")));

baseType.Members.Add(prop);

 

So it just generates a string property with a name derived from the control ID.  Now let's look at the page:

    <test:MyGeneratingControl runat="server" ID="Foo" />  

 

And finally, let's use the generated property in code.  The simple presence of this tag allows me to write:

Label1.Text = Foo_SomeCoolProp;

 

And the really cool thing is that Visual Studio picks this up, giving you full IntelliSense on the code generated by your ControlBuilder.  How cool is that! :)

 

Full runnable sample is attached to this post.

 

What about security?

At first glance, it may seem like this feature gives too much power to ControlBuilders, letting them inject arbitrary code into the page that's about to run.  The reality is that it doesn't let an evil control do anything that it couldn't have done before.  Consider these two cases:

  • Full trust: if the Control and ControlBuilder are running in full trust, then they can directly do anything that they want.  They gain nothing more from generating code that does bad things.
  • Partial trust: if the Control and ControlBuilder are running in partial trust, then they can't do much damage themselves directly, and can attempt to do additional damage indirectly via code they generate.  However, if the app is running in partial trust (as is typical in hosted environments), then the dynamically generated page code also runs in partial trust.  So effectively, it can't do any more than the control could do directly.

 

Conclusion

ProcessGeneratedCode is a pretty powerful feature, giving your ControlBuilders full control over the code generation.  It's also a pretty advanced feature, and you can certainly shoot yourself in the foot with it if you're not careful.  So be careful!




Creating a ControlBuilder for the page itself

One user on my previous post on ProcessGeneratedCode asked how he could associate a ControlBuilder not with a control, but with the page itself.  There is in fact a way to do it, and it's another one of those things that have never really been advertised.  The trick is that instead of associating the ControlBuilder using the standard ControlBuilderAttribute, you need to use a FileLevelControlBuilderAttribute.  Let's walk through a little example.

First, we create a custom page type with that attribute:

[FileLevelControlBuilder(typeof(MyPageControlBuilder))]
public class MyPage : Page { }

We could do all kinds of non-default things in the derived Page, but I'll keep it simple.  Now let's write the ControlBuilder for it:

// Note that it must extend FileLevelPageControlBuilder, not ControlBuilder!
public class MyPageControlBuilder : FileLevelPageControlBuilder {
    public override Type GetChildControlType(string tagName, IDictionary attribs) {
        // If it's a Label, change it into our derived Label
        Type t = base.GetChildControlType(tagName, attribs);
        if (t == typeof(Label)) {
            t = typeof(MyLabel);
        }

        return t;
    }
}

The most important thing to note here is that it extends FileLevelPageControlBuilder, not ControlBuilder.  If you extend ControlBuilder, it may seem to work on simple pages, but some things will break (e.g. with master pages).

As for what we do in that class, it's just some random thing to demonstrate that it is in fact getting called.  Here, we change the type of all Labels to a derived type which slightly modifies the output:

public class MyLabel : Label {
    public override string Text {
        set { base.Text = "[[" + value + "]]"; }
    }
}

And then of course, we need to tell our aspx page to use our base class:

<%@ Page Language="C#" Inherits="MyPage" %>  

The full runnable sample is attached below.




Using User Controls as Page Templates in Dynamic Data

Dynamic Data has the concept of Page Templates, which are pages that live under ~/DynamicData/PageTemplates and are used by default for all tables.  Recently a user on the forum asked whether they could use User Controls instead of Pages for those templates.

To me, this means potentially two distinct scenarios, and I tried to address both in this post:

  1. Using routing: in this scenario, you still want all your requests to go through the routing engine, but have the user control templates somehow get used.  The URLs here would still look identical to what they are in a default Dynamic Data app, e.g. /app/Products/List.aspx (or whatever you set them to be in your routes).
  2. No routing: in this scenario, you want the URL to go directly to a specific .aspx page, and then have that page do the right thing to use Dynamic Data through the User Control templates.

Creating the User Controls

First, let's look at what will be the same for both cases.  Under ~/DynamicData/PageTemplates, I deleted all the aspx files (to prove that they're not used), and instead created matching ascx files (i.e. User Controls).  Those user controls contain essentially the same things that were in the pages, minus all the 'outer' stuff (e.g. things that relate to the master page).

Making it work using routing

Now let's look specifically at case #1 above, where we want to use routing.  First, we need to make a small change to the route to use a custom route handler:

routes.Add(new DynamicDataRoute("{table}/{action}.aspx") {
    Constraints = new RouteValueDictionary(new { action = "List|Details|Edit|Insert" }),
    Model = model,
    RouteHandler = new CustomDynamicDataRouteHandler()
});

Note how we set RouteHandler to our own custom handler type.  Now let's see what this type looks like:

public class CustomDynamicDataRouteHandler : DynamicDataRouteHandler {
    public override IHttpHandler CreateHandler(DynamicDataRoute route, MetaTable table, string action) {
        // Always instantiate the same page.  The page itself has the logic to load the right user control
        return (IHttpHandler)BuildManager.CreateInstanceFromVirtualPath("~/RoutedTestPage.aspx", typeof(Page));
    }
}

It's basically a trivial handler which always instantiates the same page!  That may look strange, but the idea is that all the relevant information is carried by the route data, which that page can then make use of.  The page itself is pretty trivial, with just a bit of logic in its Page_Init:

protected void Page_Init(object sender, EventArgs e) {
    // Get table and action from the route data
    MetaTable table = DynamicDataRouteHandler.GetRequestMetaTable(Context);
    var requestContext = DynamicDataRouteHandler.GetRequestContext(Context);
    string action = requestContext.RouteData.GetRequiredString("action");

    // Load the proper user control for the table/action
    string ucVirtualPath = table.GetScaffoldPageVirtualPath(action);
    ph.Controls.Add(LoadControl(ucVirtualPath));
}

The first 3 lines show you what it takes to retrieve the MetaTable and action from the route data.  The next couple of lines then load the right user control for the situation.  GetScaffoldPageVirtualPath is a simple helper method which has logic to locate the proper ascx:

public static string GetScaffoldPageVirtualPath(this MetaTable table, string viewName) {
    string pathPattern = "{0}PageTemplates/{1}.ascx";
    return String.Format(pathPattern, table.Model.DynamicDataFolderVirtualPath, viewName);
}

And that's basically it for the routed case!

Making it work without routing

Now let's look at the non-routed case.  Here, the routes are not involved, so the URL goes directly to a page.  That page is similar to the one above, but with some key differences.  Here is what it does in its Page_Init:

protected void Page_Init(object sender, EventArgs e) {
    // Get table and action from query string
    string tableName = Request.QueryString["table"];
    string action = Request.QueryString["action"];

    // Get the MetaTable and set it in the dynamic data route handler
    MetaTable table = MetaModel.Default.GetTable(tableName);
    DynamicDataRouteHandler.SetRequestMetaTable(Context, table);

    // Load the proper user control for the table/action
    string ucVirtualPath = table.GetScaffoldPageVirtualPath(action);
    ph.Controls.Add(LoadControl(ucVirtualPath));
}

The key difference is that it can't rely on route data being available, so it needs to get the table and action information from somewhere else.  You can come up with arbitrary logic for this, but the most obvious way is to just use the query string (e.g. ?table=Products&action=List).

Then it does sort of the reverse of the case above and sets the MetaTable into the route handler.  Even though routing is not used, Dynamic Data tries to get the MetaTable from DynamicDataRouteHandler, so having it there allows many things to just work.

And finally, it loads the user control using the exact same steps as above.  And that's that!

The code is attached below.




Fun with T4 templates and Dynamic Data

T4 templates have been a pretty popular topic lately.  If you have no idea what they are, don't feel bad, I didn't either until a couple of weeks ago!  In a nutshell, T4 is a simple template processor that's built into VS and allows for all kinds of cool code generation scenarios.  For a bunch of information about them, check out the following two blogs:

The basic idea is that you can drop a hello.tt file into a project, and it will generate a hello.cs (or hello.anything) file via its template.  The template syntax of .tt files is very similar to ASP.NET's <% . %> blocks, except that they use <# . #> instead.

For a cool example of what you can do with T4, check out this post from Danny Simmons, which shares a .tt that generates all the Entity Framework classes automatically from an edmx file.  What's really nice is that you can easily customize the template to generate exactly the code that you want, instead of being locked into what the designer normally generates.

So what does this have to do with Dynamic Data?

One weakness we currently have with Dynamic Data is that we don't have a great way to get started with a custom page.  Suppose you start out with a web app in full scaffold mode, and find the need to customize the Details page for Products.  The 'standard' procedure to do this is:

  • Create a Products folder under ~/DynamicData/CustomPages
  • Copy ~/DynamicData/PageTemplates/Details.aspx into  it
  • Modify the copied file to do the customization you need

While this generally works, it has one big weakness: you need to start your customization from a 'generic' file which knows nothing about your specific table.  For example, you start out with an <asp:DetailsView> control that uses AutoGenerateRows to generate all the fields.  So if you want to do any kind of real UI customization, you'll likely have to do a bunch of repetitive steps, e.g.

  • Get rid of the DetailsView and replace it with a FormView (or ListView) so you can get full control over the layout
  • In the FormView's item template, you'll then need to add one <asp:DynamicControl> for each field that you want to display, along with formatting markup between them (e.g. <tr>/<td>).

A T4 template to the rescue

In a few hours, I was able to put together a quick T4 template that replaces most of those repetitive steps.  The steps using this template become simply:

  • Create a Products folder under ~/DynamicData/CustomPages (as above)
  • Drop Details.tt into it, and instantly it generates the aspx file, all expanded out with a FormView and all the DynamicControls! This .tt file is attached at the end of this post.

While this is cool, there is one thing that is a bit strange about it: you're left with a .tt file you don't want in your project:

[screenshot: the generated .tt file sitting in the project]

And since the details.tt file generates the details.aspx file, you can't really change the aspx until you delete the .tt file.  Unfortunately, VS makes this more difficult than you would think, because the two files are linked to each other.  The best steps I found to delete it are:

  • Right click the .tt file and choose Exclude From Project
  • Right click the Products folder and open it in Windows Explorer
  • Delete the .tt file in the explorer
  • Back in VS, click Show All Files in the solution explorer.  You should see the aspx
  • Now right click it and Include it in the project

I know, that's painful!!  Ideally, what we really want is to use the .tt to do a one-time transform, but never actually have it be part of the project.  At this point, I'm not yet sure how to do this, but I'm hopeful that there is a way.  Suggestions are welcome! :)

How does the template work?

In order to perform its task, the template needs to figure out what all the entity type's fields are for the custom page.  Doing this is a bit non-trivial, and what I do here is not as clean as it could be, as this is just a quick prototype.  Here is a brief description of what it does:

  • Based on the folder name, it knows what the Entity Set name is (e.g. Products)
  • It then locates the app's assembly in bin (caveat: this only works in Web Apps, not Web Sites!)
  • It copies it to a temp location and loads it from there, to avoid locking the bin assembly
  • In the assembly, it finds the DataContext/ObjectContext (it handles both Linq To Sql and Entity Framework)
  • From the context and the entity set name, it can figure out what the Entity Type is
  • Once it has that, it knows the fields and can generate the page!

If you look at the .tt file, you'll see all this logic at the end.  That part is certainly quick and dirty, and could use some rewriting.  But what comes before it is a pretty clean template that's easy to tweak to your needs, e.g.

<table class="detailstable">
<# foreach (var prop in data.Properties) { #>
    <tr>
        <th>
            <#= prop.Name #>
        </th>
        <td>
            <asp:DynamicControl DataField="<#= prop.Name #>" runat="server" />
        </td>
    </tr>
<# } #>
</table>

As you can see, it looks very much like inline code in an aspx file, except that instead of executing at runtime on the web server, it executes in Visual Studio to generate a file that becomes part of the project.

What about all the other views?

So far I've only talked about Details.tt/Details.aspx.  What about the other Dynamic Data views: List, Edit, Insert?  Well, those would all work in pretty much the same way; I simply haven't had a chance to get to them.  For now, I just wanted to get this first T4 file out as a proof of concept that shows the kind of things that can be done.  Contributions are welcome! :)

Conclusion

Clearly, a lot of what I describe here is pretty raw, and it's only a very early prototype.  But I think it should be enough to convey the potential power of design-time file generation via T4 templates.  Later, I'd like to get better integration with VS.  For example, instead of dropping the .tt file in the project and later having to delete it, it'd be nice to get a custom action by right-clicking the CustomPages folder that would let you generate the aspx file.  It would still use T4 under the covers, but you wouldn't have to see it unless you wanted to.

Hopefully, I'll  be writing more about this in the future if there is interest.




Introducing SemanticEngine.NET

For a long time I've wanted to find a way to make it easy for .NET developers to start using the semantic web, or Web 3.0.  The semantic web is relatively unknown to most people, and some of the technologies are very complex to understand.

Technologies or formats such as FOAF, APML, SIOC, XFN tags and microformats are some of the building blocks of the social aspects of the semantic web. They are used to create cross-site profiles of people and also represent relationships between them. XFN and microformats are somewhat easy to start using, but FOAF, APML and SIOC are a different story.

SemanticEngine.NET

That’s why I’ve just released the initial code for a class library called SemanticEngine.NET on CodePlex. There is still a lot of code to be written, but it already supports several formats, ready to use.

The idea is that you build the various formats using an easy-to-understand object model and then call a method that writes the document to the response stream or to disk as XML. You can also consume these formats by pointing to a document somewhere on the web; the library will parse it and return an object graph.

APML

APML describes interests and is used to customize experiences online. This blog supports it natively, which makes it possible to filter all my blog posts based on the interests specified in your APML document. Check it out in the top right corner of this page.

The APML support in SemanticEngine.NET is done. You can easily generate your own documents or you can parse APML documents found on the web.

FOAF

Right now you can only generate FOAF documents, but very soon I’ll add a FOAF parser as well. This is more difficult, because in order to do that, I actually have to use or write a full-fledged RDF parser first.

SIOC

Coming soon

XFN

The XFN support in SemanticEngine.NET is almost done. At the moment you can give the XFN parser a website URL and it will return a list of links that have the rel="me" attribute. The next version will return all the XFN attribute values such as friend, met, co-worker etc. That way you can scan a person’s social graph.

Microformats

The support for microformats is yet to come and it will be in the form of a parser. My idea is to let the parser take a website URL and then parse the HTML document for all the popular microformats such as hCard, hCalendar, hReview etc.

License

The library is free to use and free to modify in any way. No crediting is needed. This is because my goal is to get people to use this everywhere. The more people use semantic formats, the more useful they become, and the more cool experiences will be possible to create online.

If you are interested in joining the development of SemanticEngine.NET, then please let me know. I sure could use some help, since my time is a bit limited. If you want to know more about the semantic web, then you can read my Guide to the semantic web post. Did I mention that all BlogEngine.NET 1.4+ users already have APML, FOAF, SIOC, XFN and microformats supported out-of-the-box?




BlogEngine.NET meeting this Friday

It’s about 1½ years since BlogEngine.NET saw the light of day, and one person has been on the team almost from the very beginning. Of course, I’m talking about Al Nyveldt. We’ve been planning, designing and developing BlogEngine.NET all this time, but never actually met, since I live in Denmark and he in Pennsylvania.

Well, that’s about to change, because I leave for New York City in a few hours and Al has agreed to meet up with me there. If you want to have a talk about the past, present and future of BlogEngine.NET, then meet us at noon at the Hilton hotel near Central Park – it’s this Friday, October 3rd.




Thursday's Silverlight presentation

This Thursday, October 23rd, the Copenhagen .NET user group is throwing one kick-ass presentation for anyone interested. The theme is Silverlight 2.0 experiences for .NET developers and it’s going to blow you away. It’s not drag ‘n drop demos or a tutorial, but about the possibilities the new Silverlight version brings to ordinary .NET developers, presented by one of the most experienced Silverlight guys in Denmark.

The reason I’m looking forward to this talk so much is that I’ve never really played much with Silverlight, and I want to see if the new version can be used for something cool in BlogEngine.NET and at ZYB. I’ve only seen one presentation of Silverlight back in the day, and that was a bit boring with all the dragging and dropping of controls in Visual Studio. This will not be a presentation like that.

So, if you’re interested in coming along for the ride, be at Microsoft in Hellerup this Thursday at 4pm. It’s free and arranged by the Copenhagen .NET user group and people from the .NET community are going to be there. Check out www.cnug.dk for more info and remember to join the Facebook group.




My 10 common mistakes in ASP.NET

At work I’ve come to realize that I don’t always produce code that just works. I make mistakes and often forget to test properly before I sign off a task. So I talked to our test lead and asked him to compile a list of the most common issues he finds when testing my work. I then printed out the list and hung it on the wall next to my desk as a checklist to go through whenever I think a task is done.

He could actually only come up with 7 common issues, but I added three more that relate to code quality. Since I’m an ASP.NET front-end developer, the list focuses on UI issues seen through the lens of a tester. The list is not prioritized.

I hereby declare that all features, tasks and tweaks…

Works correctly in IE6, IE7 and Firefox

The test lead found that cross-browser issues surfaced from time to time. Most often they were really minor issues like positioning of elements or general IE6 quirks. The reason this rule isn’t called “Works correctly across browsers” but instead focuses on IE and Firefox is simple. First, if you can get your page to work in IE6, IE7 and Firefox, it almost always works in Safari and Opera as well. Second, those browsers cover about 98% of our visitors. Third, if it was called “Works correctly across browsers” it might be easier to ignore when going through the list.

Is XSS secure

Make sure that all input fields are handled correctly so no HTML or JavaScript entered by a user can mess up the page.
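The standard defense here is to encode user input before rendering it (in ASP.NET, Server.HtmlEncode or HttpUtility.HtmlEncode does this). The idea, sketched in Python purely for illustration:

```python
import html

def render_comment(user_input: str) -> str:
    # Encode special characters so user-supplied markup is displayed as text,
    # not executed by the browser
    return "<p>" + html.escape(user_input) + "</p>"

print(render_comment("<script>alert('xss')</script>"))
```

Anything the user typed comes out as `&lt;script&gt;…` and renders harmlessly.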

Long text doesn't break design

My limited brain tells me that a name is never longer than approx. 50 characters. I still believe it, but what happens if you put in a 300 character name? In many cases this would break the UI. Either limit the length of text a user can submit or tweak the design to take long text into account.
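For the "limit the length" option, one cheap safety net is to truncate on the server before rendering. A minimal sketch (the 50-character limit is just the example figure from above):

```python
def truncate(text: str, max_len: int = 50) -> str:
    # Cap strings at max_len so a 300-character name can't blow up the layout
    if len(text) <= max_len:
        return text
    return text[:max_len - 3] + "..."

print(truncate("A" * 300))  # 50 characters, ending in "..."
```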

Localize everything

When I start working on a new feature for the website, I hard code all text because it might change during development. When the feature is approved by the product team, I localize the text. Because I hard code first, I sometimes forget to localize ALL the text. I will still hard code text during development, but this checklist will help me remember to localize before signing off.

The enter-key works as expected

When using ASP.NET webforms, a form element is probably located on the master page, so all pages reside inside that form. That often leads to strange behaviours of the enter-key. Remember to set default buttons, either from code-behind or on the Panel webcontrol.

The code is reusable when possible

This is a general coding law and in ASP.NET the same rules apply. Separate elements in user- and server controls and make them small and specialized so they can be used elsewhere. Read more about reusable ASP.NET.

Unit tests written when possible

If you’re not using the ASP.NET MVC framework, unit testing your website is not easy. However, if you pull out as much of the code-behind logic into components that you can put in a library, then you can unit test. Instead of using .ashx files for HttpHandlers, put them in a separate library. Use server controls instead of user controls when you can.

The code is well commented

This goes without saying. Comment and document your code so other developers can easily pick up where you left off.

Is peer verified before going into testing

Before a feature is signed off and sent to test, we do peer verification. Peer verification is when one of your colleagues performs sanity checking on your feature or just tries to break it. This lets you find issues early on and makes life easier for the testers. When we are really busy, I often forget to ask for peer verification, and it shows.

Is signed off by product owner

At work we have a product team that always has ownership of a feature. We developers also have ownership of features, but at an implementation level. During a busy day when I do several different tasks, I sometimes forget to get the product owner to sign off my work. If it isn’t signed off, then you’re simply not done yet, and I sometimes have to finish a task several days after I was “done”. This is annoying and makes it harder to meet deadlines. I must say that this is probably the single most important issue, but it is also the one I almost never forget, for that very reason.

Time will tell if this list will raise the quality of my work. I’m sure that as long as I don’t ignore the list, it will. Are any issues missing?




Track your visitors using an HttpModule

I’ve been thinking about how to solve a very simple problem on a website: visitor behaviour tracking. In a sense it is what Google Analytics does, but there are problems with conventional JavaScript based trackers.

They are good at tracking page views, but very bad at tracking actions or behaviour around a website. Some products like Headlight are actually pretty good at tracking actions such as button clicks etc., but at the level I’m interested in, I would have to add JavaScript all over my page. I don’t want to do that.

Also, there are some very good server-side logging products like log4net out there. The problem is that they are not really meant for tracking website behaviour with URLs, user agents and other important metadata.

What I want is a combination of the traditional JavaScript- and server-side methods. So, I’ve played around with a custom HttpModule that logs all page views and custom actions. You add the custom actions yourself by calling VisitorLog.AddAction("message", "type"). That way you have page views and actions in chronologically correct order.

A neat thing is that all page views and actions are kept in session and only when the session expires does it write to the database. That way it can do a batch insert which is much faster than hitting the database constantly. Another neat thing is that the HttpModule is only 100 lines of code.

The code

Basically, three things are going on. First, the session starts and we add a Visit object to it.

void session_Start(object sender, EventArgs e)
{
    HttpContext context = HttpContext.Current;
    Visit visit = new Visit();
    visit.UserAgent = context.Request.UserAgent;
    visit.IpAddress = context.Request.UserHostAddress;
    context.Session.Add("visit", visit);
}

Then every page view is registered after an .aspx page is served.

void context_PostRequestHandlerExecute(object sender, EventArgs e)
{
    HttpContext context = ((HttpApplication)sender).Context;

    if (context.CurrentHandler is Page)
    {
        Visit visit = context.Session["visit"] as Visit;
        if (visit != null)
        {
            Action action = new Action();
            action.Url = context.Request.Url;
            action.Type = "pageview";
            visit.Action.Add(action);
        }
    }
}

Then the session ends and we need to store the visitor log.

void session_End(object sender, EventArgs e)
{
    HttpContext context = HttpContext.Current;
    Visit visit = context.Session["visit"] as Visit;
    if (visit != null)
    {
        // Log the Visit object to a database
    }
}

Implementation

When you have registered the HttpModule in the web.config, it starts collecting page views in the session. To store them in a database, you must add your own code to the session_End method of the module. You are also able to store actions just by calling a static method on the VisitorLog module:

VisitorLog.AddAction("Profile picture deleted", "deletion");
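For reference, the web.config registration mentioned above would look roughly like this for the classic <system.web> pipeline. The namespace and assembly names here are placeholders (the post doesn't show them), so match them to wherever the VisitorLog class actually lives:

```xml
<configuration>
  <system.web>
    <httpModules>
      <!-- type="Namespace.ClassName, AssemblyName" - these names are assumed -->
      <add name="VisitorLog" type="MyApp.VisitorLog, MyApp" />
    </httpModules>
  </system.web>
</configuration>
```

IIS 7 integrated mode uses the <system.webServer>/<modules> section instead, with the same add element.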

Keep in mind that this code is just me playing around in my sandbox. It has never been in a production environment.

Download

VisitorLog.zip (1,11 kb)




Search for people using SHA1 hashing

About 10 years ago, it was actually possible to look people up by their e-mail address online. You could also find a person’s e-mail by searching for his or her name. Back then there were a lot of e-mail directories that acted like the yellow pages, but for e-mail addresses. Very handy, but when spam became a problem, no one was willing to publicise their e-mail address, and the e-mail search quickly died out.

Years passed and nobody thinks seriously about searching for people by their e-mail address anymore. It was tossed out of our toolbox – abandoned and forgotten.

Then a few years ago, something wonderful started to happen on the web. Community sites, forums, blog platforms etc. started publishing FOAF and SIOC documents. Both document types contain e-mail addresses of people, but not in the traditional sense. They publish SHA1-hashed e-mail addresses.

You can hash an e-mail using the SHA1 algorithm but you can never reverse it. That means the hashed e-mail addresses are secured from spam bots, but they are also left public for all of us to search for. All you need to do is to hash an e-mail address and do a Google search with the hashed value. Try searching for my hashed e-mail address on Google or go hash your own e-mail.

Here is a quick way of using SHA1 algorithm to hash any string value in C#.

public static string CalculateSHA1(string value)
{
    value = value.ToLowerInvariant().Trim();
    return FormsAuthentication.HashPasswordForStoringInConfigFile(value, "sha1").ToLowerInvariant();
}
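If you want to sanity-check a hash outside .NET, the same normalize-then-hash step is easy to reproduce. A quick Python sketch of the equivalent logic (assuming the input is hashed as UTF-8 bytes):

```python
import hashlib

def sha1_email(email: str) -> str:
    # Mirror the C# helper: lowercase, trim, then SHA1 as lowercase hex
    normalized = email.strip().lower()
    return hashlib.sha1(normalized.encode("utf-8")).hexdigest()

print(sha1_email("  Joe@Example.com  "))  # 40 lowercase hex characters
```

The normalization matters: without the lowercase-and-trim step, "Joe@Example.com" and "joe@example.com" would hash to completely different values and the Google search would miss.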

The limitation of the SHA1 e-mail search is that you can only find people who have an online profile or blog, participate in online discussions or comment on blogs. The number of searchable people will rise as more and more sites start supporting FOAF and SIOC.




Simple JavaScript event model

At work we use a lot of JavaScript for all of our user controls and other ASP.NET components. I’m guessing that so do you. Our solution is to add a .js file per user control and then load them on the page dynamically as explained here. That is a great way to componentize your web application.

Bad behavior

The problem is when an action in one user control’s JavaScript affects elements in other user controls. An example could be the page header where it says Signed in as Joe. When you are on the update-profile page and change your name from Joe to Johnny using AJAX, the header element doesn’t get updated. If you do update it, you probably reference the DOM element directly from your profile-update script. This is bad, since the header and profile are located in two different user controls and thus haven’t got a clue about the existence of each other. Why should the JavaScript treat them differently?

Good behavior

What you really want to do is to use an event model. Then it works like so:

The user changes his name to Johnny, and the AJAX function in JavaScript triggers an event called profileNameUpdated, passing the new name along as a parameter. The page header has already told the event model to subscribe to the profileNameUpdated event, so it now catches the event triggered by the profile. It reads the new name and updates its own DOM element.

The code

Amazingly, the JavaScript code for this is less than 1KB and works on all websites, platforms and browsers. This is what it looks like.

// The constructor of the eventFramework
function eventFramework()
{
  this.handlers = [];
}

// Triggers the event specified by the name and passes the eventArgs to listeners
eventFramework.prototype.trigger = function(name, eventArgs)
{
  for (var i = 0; i < this.handlers.length; i++)
  {
    if (this.handlers[i].eventName == name)
      this.handlers[i].eventHandler.call(this, eventArgs);
  }
}

// Adds a listener/subscriber to the event specified by the 'name' parameter
eventFramework.prototype.addListener = function(name, handler)
{
  if (typeof(name) != 'string' || typeof(handler) != 'function')
  {
    throw new SyntaxError("Invalid parameters when creating listener with the following arguments: 'Name': " + name + ", 'Handler': " + handler);
  }

  this.handlers.push({ "eventName": name, "eventHandler": handler });
}

// Initializes the eventFramework and makes it available globally
var events = new eventFramework();

Implementation

Download the .js file below and add it to your site. It should be the first JavaScript to be included in the <head> element of your pages.

When the script is included, it is possible to start triggering and listening to events.  To trigger an event, simply write this:

events.trigger('profileNameUpdated', 'Johnny');

You can also pass objects or JSON as a parameter like so:

events.trigger('profileNameUpdated', {'name':'Johnny', 'oldName':'Joe'});

To subscribe or listen to these events, you simply add the following to any user control or JavaScript file:

events.addListener('profileNameUpdated', eventListener);

function eventListener(eventArgs)
{
  alert(eventArgs);
}

The eventArgs parameter will contain whatever was passed along by the trigger.

Download

eventmodel.js (966,00 bytes)




Reverse GEO lookup in C#

Google’s Maps API now supports reverse geo lookup, which allows you to find an address based on geo coordinates.  All you need is a latitude, a longitude and this handy method:

private const string endPoint = "http://maps.google.com/maps/geo?q={0},{1}&output=xml&sensor=true&key=YOURKEY";

private static string GetAddress(double latitude, double longitude)
{
    string lat = latitude.ToString(CultureInfo.InvariantCulture);
    string lon = longitude.ToString(CultureInfo.InvariantCulture);
    string url = string.Format(endPoint, lat, lon);

    using (WebClient client = new WebClient())
    {
        string xml = client.DownloadString(url);
        XmlDocument doc = new XmlDocument();
        doc.LoadXml(xml);

        // Note: this walks a fixed node path in Google's response and will throw
        // if the service returns an error or no match for the coordinates
        XmlNode node = doc.ChildNodes[1].FirstChild.ChildNodes[2].ChildNodes[0];
        return node.InnerText;
    }
}

It returns the address as a string.



