For the preconference, I decided to attend Paul Sheriff's talk on Architecting ASP.NET Applications. Fortunately, I was able to get to the room about an hour early and find one of the few seats situated close to a power outlet. I really didn't want to have to transfer my notes from paper to my laptop later; I'd rather just type them out directly!
N-Tier Development Benefits
The workshop began with a discussion of n-tier development: the benefits of logically separating services into different libraries and isolating functionality into discrete sections. The advantage of this approach is that maintaining and extending the application become vastly simpler. Refactoring common methods into reusable libraries minimizes the work involved in fixing bugs, because buggy code needs to be fixed in only one location. Adapting to large-scale architectural changes is also easier under a modular development model: swapping out small components to accommodate a technology shift is far easier than making global code changes, which means large-scale changes can be handled cheaply and quickly. The majority of the application's libraries do not need to be run through a quality assurance cycle, because they were not touched when the maintenance was performed. Only the new library needs to be tested, in addition to a full integration test, which is still far cheaper than re-testing the entire system. The following table details the suggested areas of isolation for an ASP.NET application:
| Business Rules | User Interface | Configuration Management |
| --- | --- | --- |
| Data Access Service | Exception Management | Security Management |
| Data Layer | User Tracking | Log Management |
Rather than calling System.Configuration.ConfigurationManager.AppSettings["MyConfigItem"] directly wherever a setting is needed, a custom AppConfigManager class can be built that isolates access to configuration settings behind static properties on that class. The argument for this architecture is that if the access mechanism for configuration settings changes in a future version of .NET, the changes will be confined to the AppConfigManager class. This is a valid argument, because the mechanism did change from .NET 1.1 to .NET 2.0: in .NET 1.1 we used System.Configuration.ConfigurationSettings, whereas in .NET 2.0 we are encouraged to use System.Configuration.ConfigurationManager instead. Encapsulating application configuration settings is an architecture I've considered several times before, but something has always made me balk at the idea of a class full of statics. The difference here, which I like, is that a sound justification exists and the statics aren't just returning constants; each one returns a value read through the AppSettings mechanism. The net effect is that the final method of configuration management is isolated from the rest of the application: should the access mechanism change again, only the AppConfigManager wrapper class needs to be updated.
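A minimal sketch of what such a wrapper might look like — the AppConfigManager name comes from the talk, but the specific setting keys ("MainDB", "PageSize") and the fallback behavior are my own invented examples:

```csharp
using System.Configuration;

// Sketch of the wrapper pattern described above. The setting keys
// are hypothetical; only this class knows *how* settings are read.
public static class AppConfigManager
{
    public static string MainConnectionString
    {
        get { return ConfigurationManager.AppSettings["MainDB"]; }
    }

    public static int PageSize
    {
        get
        {
            int size;
            // Fall back to a default when the key is missing or malformed.
            return int.TryParse(ConfigurationManager.AppSettings["PageSize"], out size)
                ? size : 25;
        }
    }
}

// Callers never touch ConfigurationManager directly:
// string conn = AppConfigManager.MainConnectionString;
```

If the access mechanism changes again in a later framework version, only the property bodies above would need to be rewritten.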
I could be wrong, but I think Paul just compared using IntelliSense to "using the Force," à la Star Wars: <sigh>. At least the humor hasn't changed since the last conference I attended. Actually, the guy has been pretty funny so far, so I'll give him a break.
Custom Collection Classes
The major philosophical question that has arisen so far is whether or not to use custom collection classes when passing data between the data layer and the business tier or user interface layer. The code samples presented use DataTable instances. The argument presented is that a DataTable is effectively analogous to a custom collection class, since its column names and column data types correspond to the properties of the business object being modeled. The only concession offered in the workshop is that this opens a potential window for failure should Microsoft ever drop support for the DataTable class in favor of a superior one.
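To make the comparison concrete, here is roughly what the two options look like for a hypothetical Customer object (the names are mine, not from the workshop samples):

```csharp
using System.Collections.Generic;
using System.Data;

// A hypothetical business object for illustration.
public class Customer
{
    public int Id;
    public string Name;
}

public class CustomerSource
{
    // Option 1: a DataTable whose columns mirror the business object.
    // Callers index rows by column name; nothing is checked at compile time.
    public static DataTable CustomersAsTable()
    {
        DataTable table = new DataTable("Customer");
        table.Columns.Add("Id", typeof(int));
        table.Columns.Add("Name", typeof(string));
        table.Rows.Add(1, "Acme");
        return table;
    }

    // Option 2: a typed collection. Compile-time checked, but one more
    // hand-written (or generated) class to maintain per business object.
    public static List<Customer> CustomersAsCollection()
    {
        Customer c = new Customer();
        c.Id = 1;
        c.Name = "Acme";
        List<Customer> list = new List<Customer>();
        list.Add(c);
        return list;
    }
}
```

The DataTable version carries the same shape of data, which is the heart of the "it's already a custom collection" argument; what it gives up is type safety on access.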
For a while now I have separated data adapter classes from object builder classes. The data adapter classes are responsible for executing SQL commands against a database and either writing or reading common data collection classes, such as IDataReader. The builder classes are responsible for digesting those data collections and building concrete objects from them. This allows a clean separation between data interaction and data digestion. However, I've always written both classes by hand; they do not represent a large chunk of work, and I have usually disregarded the cost of coding such small functional classes as negligible. Code generation has been suggested for these classes in the past, but I have often found that building the required code generation framework can be just as expensive as writing the classes themselves. The argument presented in this workshop, though, is that if the builder-level classes and stored procedures are generated, then schema-level changes are reflected more quickly. That is an interesting advantage to note in the argument for using code generation for parts of the data layer.
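The adapter/builder split I'm describing looks roughly like this — a simplified sketch, with the Order entity, class names, and SQL all invented for illustration:

```csharp
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

// Simple business object used by the builder below (invented for this sketch).
public class Order
{
    public int Id;
    public decimal Total;
}

// The adapter knows only how to talk to the database...
public class OrderDataAdapter
{
    private readonly string _connectionString;

    public OrderDataAdapter(string connectionString)
    {
        _connectionString = connectionString;
    }

    public IDataReader GetAllOrders()
    {
        SqlConnection conn = new SqlConnection(_connectionString);
        SqlCommand cmd = new SqlCommand("SELECT Id, Total FROM [Order]", conn);
        conn.Open();
        // CloseConnection ties the connection's lifetime to the reader,
        // so the builder can clean up both by disposing the reader.
        return cmd.ExecuteReader(CommandBehavior.CloseConnection);
    }
}

// ...while the builder knows only how to turn rows into objects.
public class OrderBuilder
{
    public List<Order> Build(IDataReader reader)
    {
        List<Order> orders = new List<Order>();
        using (reader)
        {
            while (reader.Read())
            {
                Order o = new Order();
                o.Id = reader.GetInt32(reader.GetOrdinal("Id"));
                o.Total = reader.GetDecimal(reader.GetOrdinal("Total"));
                orders.Add(o);
            }
        }
        return orders;
    }
}
```

The builder is exactly the kind of mechanical, schema-shaped code that the workshop's argument says is worth generating: when a column is added, regenerating it is faster than editing it by hand.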
Deriving Business Rule Objects
The next part of this discussion addresses a second common problem with code generation: applying custom rules. A code generator can easily intuit the basic rules that need to be enforced for objects built from a schema, for example that required fields are populated. However, a code generator cannot easily infer that a cost field needs to fall between certain threshold values. To address this problem, Paul suggests deriving a custom business rule object from the generated data class. The data class implements validation as virtual methods that can be overridden in the custom business class, and the overridden method then applies the additional validation that enforces proprietary business rules.
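The override hook might look something like this — a sketch of the pattern, not Paul's generated code; the Product names and the cost thresholds are my own examples:

```csharp
// What the generator would emit: schema-level validation as a virtual method.
public class ProductData
{
    public string Name;
    public decimal Cost;

    public virtual bool Validate(out string error)
    {
        // The generator can infer this rule from a NOT NULL column.
        if (string.IsNullOrEmpty(Name))
        {
            error = "Name is required.";
            return false;
        }
        error = null;
        return true;
    }
}

// Hand-written business class layers proprietary rules on top.
public class Product : ProductData
{
    public override bool Validate(out string error)
    {
        if (!base.Validate(out error))
            return false;
        if (Cost < 1m || Cost > 500m) // hypothetical threshold rule no schema expresses
        {
            error = "Cost must be between 1 and 500.";
            return false;
        }
        return true;
    }
}
```

Because the custom rules live only in the derived class, the generated base class can be regenerated after a schema change without clobbering any hand-written validation.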