CodeMash Keynote

I just finished with Neal Ford’s keynote, and am waiting for Jeff Blankenburg to start his Silverlight talk, filling in for Jesse Liberty.  I have to say I disagree with Neal about his choice of metaphors for software engineering.  Having worked in a RUP shop before, I’d liken the blueprints of traditional engineering to the UML diagrams popular in the RUP methodology.  Neal was saying that the actual code we write is the equivalent of a blueprint, and that the compiler is the manufacturing phase, but he had to admit that the metaphor doesn’t fit exactly, because in traditional engineering the manufacturing part is the most expensive and time-consuming phase.  Well, obviously, writing the actual code is the most time-consuming, expensive part of software engineering.  The compiler is more like the tools that the laborers use to DO the manufacturing.

Agile practices eliminate the UML dependency of traditional RUP practices, and so the metaphor starts to break down from that side as well, but there’s not much we can do about that.  I suppose in our agile development practices, it’s closer to having the engineers do the actual construction, sometimes with the assistance and guidance of a "senior" engineer with whom they must discuss all design decisions before committing them to concrete.

I totally agree with most of the rest of the talk, though.  Except maybe that last part about the type safety… I loves me some type safety… it’s my thing.  Sorry.

Posted in Computers and Internet

CodeMash schedule

Well, I think I’ve decided on what sessions I’ll be attending:
http://www.codemash.org/SessionScheduler/?1=2&2=30&3=33&4=7&5=3&6=9&7=42&8=32

I’m still a little up-in-the-air about the "Introduction to Workflow Foundation" vs. Brian’s "Applied SOA" session.  Brian’s a great speaker, and I’d want to get in at least one of his sessions, but I’m pretty sure I’ve heard this one before.  On the other hand, an intro to WF is probably going to cover stuff I already know as well.  Maybe I’ll get into the true CodeMash spirit, and go do a session about some completely different technology.  I could do the "LinqTo&lt;T&gt;" session.  I’m not sure I’ll need it anytime soon, but it’s definitely something I don’t already know.

Decisions, decisions.

Posted in Uncategorized

Almost CodeMash time

Only a week to go until CodeMash 2008.  My kids are pretty excited because they’ll get to spend two days in a water park.  I get to spend two days in classes, and maybe a few hours in the evening in the water park.  We’ll see how the schedule pans out, but this was a blast last year, so I’m looking forward to next week.
Posted in Computers and Internet

Gearhead in Times Square

As promised, here are another couple of photos of me with my CodeMash shirt in interesting places.  In this case, Times Square, NY.

[Photos: 2007-10-27--19-41-44, 2007-10-27--19-42-09]

Geeks across America!
Posted in Computers and Internet

Custom Security Rule storage with Enterprise Library Security

The Enterprise Library’s security block is very handy, and pretty easy to use, but the complaint I see about it most often is that people don’t want all of their security rules living in the app.config file, usually due to security concerns.  This can be easily solved by encrypting the config, but that will only really keep out casual hackers.  You could also implement the SqlConfigurationSource, which is provided as a sample with the EL source, and store the EL configuration in the database, but by default the EL is going to want to store ALL of its settings in the database, not just the ones you wanted to hide.

Initially, the client I’m working for just wanted a simple interface to map primitive rights to roles via a CheckBoxList or something.  When I explained that I could get more flexibility in less time by using the EL security block, the choice was clear, and I got the go-ahead to investigate whether I could make it work.  The difficulty here was that while security rules aren’t normally subject to much change after deployment, in this case there was an actual requirement to edit the rules from within the application itself.  We didn’t want to use the app.config, and the SqlConfigurationSource route would have required us to use the EL’s configuration tool to maintain the rules in the database.

Unlike my previous experience with trying to bend the EL to my will, the security block is actually much easier to use in non-standard ways.  Instantiating the AuthorizationRuleProvider class requires you to pass in an IDictionary containing the rules that the instance will enforce.  This means I could load and save my rules from wherever I want.  In this case we’ve used a Linq entity class called SecurityRule, and a static class called AuthorizationManager.  The constructor for AuthorizationManager loads the rules, adds them to a dictionary, and then instantiates the AuthorizationRuleProvider class as follows:

        private static Dictionary<string, IAuthorizationRule> _securityRules = new Dictionary<string, IAuthorizationRule>();
        private static AuthorizationRuleProvider _ruleProvider;

        static AuthorizationManager()
        {
            // Get the security rules from the database
            SecurityRuleLogic logic = new SecurityRuleLogic();
            foreach (SecurityRule rule in logic.GetActive())
            {
                _securityRules.Add(rule.Name, new AuthorizationRuleData(rule.Name, rule.Expression));
            }

            _ruleProvider = new AuthorizationRuleProvider(_securityRules);
        }

After this, it’s just a matter of asking the AuthorizationManager whether a user is or isn’t allowed to perform a certain action by calling the AuthorizeCurrentPrincipal method:

        public static bool AuthorizeCurrentPrincipal(string ruleName)
        {
            if (_securityRules.ContainsKey(ruleName))
            {
                IPrincipal principal = Thread.CurrentPrincipal;
                return _ruleProvider.Authorize(principal, ruleName);
            }
            else
            {
                return false;
            }
        }

Simple, right?  In the real world it might be a little more complex.  I’ve omitted things like caching of the current principal, etc.  The point is that all the EL blocks should be this flexible.  The designers have hit the 90% that most people are going to need, but there’s always that 10% case where you have to do something weird.  This time I was lucky.

Posted in Uncategorized

Handicapped? (Originally published 11/11/07)

Hey, look what I found in the parking lot at Veteran’s Memorial today!

[Photo: Image002(3)]

Now to be fair, there was a handicapped placard hanging on the rear-view mirror of this monster.  It does make you wonder, though.  If you can climb in and out of an H2, how handicapped can you BE?  You’ve practically gotta climb up a freakin’ ladder to get into one of these things, but somehow you need the closest spot to the building wherever you go?

Posted in Uncategorized

Wildcard searches with Linq (Originally published 10/31/07)

How do you do a wildcard search with Linq? 

It took me quite a while to find the answer to this question, so I thought I’d blog it and maybe the search engines will pick it up for the next poor slob who searches the web for an answer.  I don’t know when you’re reading this, but when I did a Google search for "Linq wildcard" I came up empty, even in the groups search.  Seemingly no-one in the world was asking the question I wanted answered.  I’m sure once we’re all using Linq on a daily basis this question will seem very basic, and will be covered in every Linq book on the market.  Right now, though, all the authors are holding on to their books waiting for the RTM before they start selling.  It makes this kind of basic research a bit painful.

So, how do you perform a wildcard search in Linq?  It’s easy enough to write a lambda expression to say that a field must start with, end with, or contain a sub-string, but what if you don’t know which situation you’re going to be facing?  If your users are allowed to type arbitrary, wildcarded values into the UI at runtime, how can you possibly know where they’re going to put the percent signs?  You can’t know this ahead of time, so you’re kind of stuck unless you already know the answer.  That answer is the System.Data.Linq.SqlClient.SqlMethods class, and the Like method specifically.  All you need to do to perform a Like search is something like the following:

    if (!string.IsNullOrEmpty(lastName))
        list = list.Where(c => SqlMethods.Like(c.LastName, lastName));

Linq will correctly turn this into a LIKE clause in the resulting SQL, and if the user didn’t happen to put any wildcard characters in the text box, then the results will be the same as a direct comparison search would have returned.  I’m not saying that the execution plan or performance will be the same (I’m not a DBA), but the results of the search will be.
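If your UI lets users type the friendlier * and ? wildcards instead of raw SQL syntax, you can translate the input before handing it to SqlMethods.Like.  Here’s a hypothetical helper (the WildcardHelper name is my own, and the bracket-escaping assumes SQL Server’s LIKE rules):

```csharp
using System.Text;

// Hypothetical helper: translates user-friendly * and ? wildcards into
// SQL LIKE's % and _ before the pattern is handed to SqlMethods.Like.
// Literal %, _ and [ characters typed by the user are escaped with
// brackets so they aren't treated as wildcards (SQL Server syntax).
public static class WildcardHelper
{
    public static string ToSqlLikePattern(string userInput)
    {
        if (string.IsNullOrEmpty(userInput))
            return userInput;

        StringBuilder pattern = new StringBuilder(userInput.Length);
        foreach (char c in userInput)
        {
            switch (c)
            {
                case '*': pattern.Append('%'); break;   // any run of characters
                case '?': pattern.Append('_'); break;   // any single character
                case '%':
                case '_':
                case '[': pattern.Append('[').Append(c).Append(']'); break; // escape literals
                default:  pattern.Append(c); break;
            }
        }
        return pattern.ToString();
    }
}
```

Then the query from above becomes something like `list = list.Where(c => SqlMethods.Like(c.LastName, WildcardHelper.ToSqlLikePattern(lastName)));`.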

Posted in Uncategorized

Validation Block Abuse (Originally published 10/30/07)

Well, there’s nothing like a good round of standards abuse to get the blood flowing.  Recently I’ve found myself in a situation where I can’t use part of the enterprise library as it was originally intended to be used.  The Enterprise Library’s Validation Application Block is designed to be used in a few different ways.  First, you can decorate your business entities with attributes that specify what rules should be applied to which properties, and the block will discover them through reflection at runtime and apply the rules.  Second, you can describe your rules in an external configuration file which the VAB will read and match up to the objects at runtime (once again via reflection).  Finally, you can decorate your class with the "HasSelfValidation" attribute, and the VAB will call any methods on that class decorated with the "SelfValidation" attribute.

The config file has the advantage of letting you change or modify rules without having to recompile the application.  You could expand the length of a field in the database, change the rule in the config file, and everything will continue to work.  Unfortunately, you lose quite a bit of type safety.  For instance, renaming a property in your application won’t result in an error.  Instead, that rule will simply stop being applied, which is definitely a deal-breaker for my current project.  If the VAB had a way to tell us when there’s anything in the config that doesn’t match reality, then this is definitely the way I’d want to go.

The attribute-based approach guarantees that the rules will continue to operate even if you rename or re-namespace the entities in question, but requires you to recompile the application in order to make any changes to the validation rules.  It also won’t work if you’re dealing with generated partial classes (i.e., Linq entities) where you can’t get to the properties to decorate them, or at least not without having to re-do it each time you touch the designer surface.  If you could specify arbitrary attributes through the designer, then this would be my second choice.

The self-validation approach allows you to implement custom rules that can’t be expressed using the built-in validators, and aren’t worth writing a whole custom validator class for.  The trouble is that they require you to do all the work yourself.  You test your rule, and if it’s broken, then you create a ValidationResult object and add it to your ValidationResults instance.

So we have three approaches, none of which are ideal.  Refactoring support and not losing rules due to renaming changes are a must for this project, but we’re using linq-generated entities, and can’t decorate them with the required attributes.  Writing everything as custom self-validation methods means giving up the advantages of using the validation block in the first place, so what can we do?  What we need is a code-based approach that can still leverage the built-in VAB validators to do the actual work.

Well, as it turns out, you actually can use the VAB validators from code, but the results are a bit less than ideal.  For instance, to check whether a number is within a certain range, let’s say 1 to 10 for this example, you need to write the following code:

    ValidationResults results = new ValidationResults();
    RangeValidator validator = new RangeValidator(1, 10);
    validator.Validate(this.Size, results);

If the validation fails, then results will contain a new ValidationResult instance detailing the problem.  There are a couple of problems with this approach, though.  First, this is a lot of code to write for each and every little property you want to validate.  A simple, 20-line method has now become at least 40 lines plus some overhead.  Since the built-in validators don’t have any kind of static methods you can call when you just want to perform a simple property validation from code, you have to instantiate a different validator for each and every unique combination of limits, bounds, or patterns you might need to test.  Second, and more importantly, the resulting ValidationResult entities won’t have their Key or Tag properties filled in, and their Target properties will be pointing to the actual values that were tested rather than the objects which contained them.  You can’t just fill them in after the fact, either, because these properties are all read-only, and can only be set from the constructor.

During "normal" operation of the VAB, the values for these properties are known to the VAB because it’s either been reflecting over the assembly looking for properties with validator attributes on them, or it’s been reflecting over the assembly looking for properties that match the descriptions stored in the config file.  Either way, the "Key" property of the ValidationResult gets set to match the name of the property, and that is figured out via reflection.  In order to fill these properties in manually, we’re going to have to iterate over our results, and make new ones that look just like them, but better:

    ValidationResults betterResults = new ValidationResults();
    foreach (ValidationResult result in results)
    {
        betterResults.AddResult(new ValidationResult(result.Message, target, key, tag, result.Validator));
    }

This is getting kind of messy by this point, but don’t give up yet; things are about to get better.  What we need is a good old-fashioned helper class, or perhaps a new class derived from ValidationResults which could encapsulate some of this grunt-work for us.  The ValidationResults class isn’t sealed, so there’s nothing to stop us from deriving a new class from that, adding our helper methods, and going on about our merry way, but since this is .NET 3.5, we’re going to do something far more sinister, and implement our helpers as extension methods instead.  Why?  Because it’s there, that’s why.

I have, in the past, followed a general pattern when it comes to validation coding.  I try to make it look as much like a unit test as I can.  It’s already familiar to most developers, it’s easy to understand, and I think it encapsulates things rather nicely.  I have done this by creating multiple "AssertXyz" methods on a business rule collection of some sort.  If the rule is broken, then the collection adds a new BusinessRule object to itself.  When you’re done asserting rules, you just check the number of rules that ended up in the collection, and if it’s zero, then the object is valid.  It seems simple enough, and I’ve had a lot of success doing it this way.  We’ll try to make something similar, but instead of writing all my own comparison logic as I’ve done in the past, I’m going to try to reuse the existing VAB validators to do the work for me.  This will ensure consistent results whether the rules have been specified in attributes, configuration or code.
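For context, a stripped-down sketch of that unit-test-style pattern might look like this (BrokenRule and BrokenRules are hypothetical names, and there’s no VAB involved at all; the real thing below reuses the VAB validators instead of this hand-rolled comparison):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch of the assert-style validation pattern: each
// AssertXyz call records a BrokenRule if the check fails, and the
// object is valid when the collection is empty.
public class BrokenRule
{
    public string Key;
    public string Message;
}

public class BrokenRules : List<BrokenRule>
{
    // Records a broken rule if value falls outside [lowerBound, upperBound].
    public void AssertRange(string key, IComparable value, IComparable lowerBound, IComparable upperBound)
    {
        if (value.CompareTo(lowerBound) < 0 || value.CompareTo(upperBound) > 0)
        {
            Add(new BrokenRule
            {
                Key = key,
                Message = string.Format("{0} must be between {1} and {2}.", key, lowerBound, upperBound)
            });
        }
    }

    public bool IsValid
    {
        get { return Count == 0; }
    }
}
```

Typical usage would be `rules.AssertRange("Size", this.Size, 1, 10);` followed by a check of `rules.IsValid` once all the asserts have run.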

I won’t include the entire class here because it would be terribly long, but you’ll get the idea:

    public static void AssertRange(this ValidationResults results, object target, IComparable value, 
        string key, string tag, IComparable lowerBound, RangeBoundaryType lowerBoundType, 
        IComparable upperBound, RangeBoundaryType upperBoundType, string messageTemplate, bool negated)
    {
        if ((lowerBoundType != RangeBoundaryType.Ignore) || (value != null))
        {
            // Compose a basic error message.
            // The default message generated by the VAB does not include the name of the field.
            if (string.IsNullOrEmpty(messageTemplate) && !string.IsNullOrEmpty(key))
            {
                messageTemplate = string.Format("{0} must {1}be between {2} ({3}) and {4} ({5}).", 
                    key, negated ? "not " : string.Empty, lowerBound, lowerBoundType, upperBound, upperBoundType);
            }

            ValidationResults tempResults = new ValidationResults();
            RangeValidator validator = new RangeValidator(lowerBound, lowerBoundType, upperBound, upperBoundType, 
                messageTemplate, negated);
            validator.Validate(value, tempResults);
            results.AddAllResults(tempResults, target, key, tag);
        }
    }

This method extends the ValidationResults class to test whether a given value falls within the specified range, and add a new ValidationResult to itself if not.  You can see that I’ve also created an appropriate messageTemplate in code, since there’s no attribute or config file to define it in, and the default message generated by the VAB won’t include any of the key identifying information to let you know which property has been found in violation.  A validator is then instantiated and used to validate the value argument.  Finally, the results are added to the real ValidationResults instance via another couple of extension methods that fill in the Target, Key and Tag properties on the way:

    public static void AddAllResults(this ValidationResults results,
        IEnumerable<ValidationResult> sourceValidationResults, object target, string key, string tag)
    {
        foreach (ValidationResult result in sourceValidationResults)
        {
            results.AddResult(result.Message, target, key, tag, result.Validator);
        }
    }

    public static void AddResult(this ValidationResults results, string message, object target, string key,
        string tag, Validator validator)
    {
        results.AddResult(new ValidationResult(message, target, key, tag, validator));
    }

All that’s left is to create multiple AssertRange overloads that take different sets of parameters and do more and more of the work for you:

    public static void AssertRange(this ValidationResults results, object target, IComparable value, string key,
        IComparable upperBound)
    {
        AssertRange(results, target, value, key, null, null, RangeBoundaryType.Ignore, upperBound,
            RangeBoundaryType.Inclusive, null, false);
    }

    public static void AssertRange(this ValidationResults results, object target, IComparable value, string key,
        IComparable upperBound, bool negated)
    {
        AssertRange(results, target, value, key, null, null, RangeBoundaryType.Ignore, upperBound,
            RangeBoundaryType.Inclusive, null, negated);
    }

    public static void AssertRange(this ValidationResults results, object target, IComparable value, string key,
        IComparable lowerBound, IComparable upperBound)
    {
        AssertRange(results, target, value, key, null, lowerBound, RangeBoundaryType.Inclusive, upperBound,
            RangeBoundaryType.Inclusive, null, false);
    }

    public static void AssertRange(this ValidationResults results, object target, IComparable value, string key,
        IComparable lowerBound, IComparable upperBound, bool negated)
    {
        AssertRange(results, target, value, key, null, lowerBound, RangeBoundaryType.Inclusive, upperBound,
            RangeBoundaryType.Inclusive, null, negated);
    }

    public static void AssertRange(this ValidationResults results, object target, IComparable value, string key,
        IComparable lowerBound, RangeBoundaryType lowerBoundType,
        IComparable upperBound, RangeBoundaryType upperBoundType)
    {
        AssertRange(results, target, value, key, null, lowerBound, lowerBoundType, upperBound, upperBoundType,
            null, false);
    }

    public static void AssertRange(this ValidationResults results, object target, IComparable value, string key,
        IComparable lowerBound, RangeBoundaryType lowerBoundType,
        IComparable upperBound, RangeBoundaryType upperBoundType, bool negated)
    {
        AssertRange(results, target, value, key, null, lowerBound, lowerBoundType, upperBound, upperBoundType,
            null, negated);
    }

    public static void AssertRange(this ValidationResults results, object target, IComparable value, string key,
        IComparable lowerBound, RangeBoundaryType lowerBoundType,
        IComparable upperBound, RangeBoundaryType upperBoundType, string messageTemplate)
    {
        AssertRange(results, target, value, key, null, lowerBound, lowerBoundType, upperBound, upperBoundType,
            messageTemplate, false);
    }

I tried to create one Assert method for each overloaded constructor on the validator class in question so that the extension methods expose all the different ways the original validators could be used.  Lather, rinse, and repeat with the other validator types you need, and you’re all set.

Posted in Uncategorized

Performance or Maintainability (Originally published 10/10/07)

In a recent client interview, I was posed some interesting "puzzle" questions.  These weren’t of the "How would you move Mount Fuji" variety (by the way, I would never dream of doing such a thing; surely some detailed requirements gathering would show that there’s an alternative to moving an actual mountain).  These questions were of the computer science variety, but from 20 years ago.  These were questions such as "How would you detect a circular reference in a linked list using only a couple of pointers?"  Huh?  Linked list?  I mean yeah, I know what a linked list is.  I had to write my own memory allocation and list structure stuff in the past (the distant, academic past), but when’s the last time you had to do something like that, seriously?  Fortunately, there are no hand-rolled linked lists on this project, but it started me thinking.

There will always be developers who want to eke out every possible performance gain they can from a system, but the code they leave behind is often impossible for their successors to understand and maintain.  Even if I can prove that my algorithm for solving a problem is truly superior in its performance, in all likelihood that gain will be completely negated by the time wasted by some future developer shaking his head and trying to work out what I’ve done.

The "correct" solution to the linked list problem is very clever, uses almost no memory, and runs in nearly O(n) time, but if even one future developer has to stare at it for a few minutes trying to figure out how it works, then whatever benefits it had have likely been rendered moot.  You know what I’d do?  I’d replace the linked list with a List&lt;Entity&gt; or a dictionary keyed on some unique property of the entities to eliminate the possibility of duplicates in the first place, and move on.  If I couldn’t do that, then I might walk that list, adding references to a second List&lt;Entity&gt; as I went, checking for duplicates on the way.  Then, if and only if I determined that this piece of code caused a noticeable bottleneck or delay, would I go back and look for a better way.  These days we’re swapping "clever" for "robust" with the help of the .NET Framework, and I’ve yet to see a system grind to a halt because of a call to List.Contains().  I see systems grind to a halt because of nine layers of "clever" indirection in the UI, and "clever" homegrown data access layers.
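To make the "robust" option concrete, here’s a sketch of the walk-and-remember approach (Node is a stand-in for a hypothetical hand-rolled list node; HashSet&lt;T&gt; arrived with .NET 3.5 and keeps the lookups cheap):

```csharp
using System.Collections.Generic;

// A hypothetical hand-rolled linked-list node.
public class Node
{
    public Node Next;
}

public static class ListChecker
{
    // Walks the list, remembering every node seen so far.  If we ever
    // revisit a node, the list contains a cycle.  HashSet<T> gives O(1)
    // lookups, so the whole walk stays O(n) -- no two-pointer cleverness.
    public static bool HasCycle(Node head)
    {
        HashSet<Node> seen = new HashSet<Node>();
        for (Node current = head; current != null; current = current.Next)
        {
            if (!seen.Add(current))   // Add returns false if already present
                return true;
        }
        return false;
    }
}
```

It uses O(n) extra memory where the clever version uses none, but any developer can read it at a glance, which is the whole point of the argument above.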

Consulting and development isn’t about being smarter than the client, it’s about helping the client succeed, and that never means leaving behind something no-one can understand without detailed analysis or a 1980’s computer science education.

Posted in Uncategorized

Stupid LINQ tricks (Originally published 10/5/07)

This won’t be news to any of the Microsoft gurus or MVPs, but it struck me as pretty friggin’ cool, so I thought I’d share it.  If you’ve read any of Scott Guthrie’s blogs, then you’ve seen some cool stuff like the new object initializer syntax.  It’s what lets you write stuff like this:

Person person = new Person { FirstName = "Joe", LastName = "Bob", Age = 32 };

Instead of this:

Person person = new Person();
person.FirstName = "Joe";
person.LastName = "Bob";
person.Age = 32;

It’s kind of like a pseudo-constructor.  But did you know that you can use it with regular constructors as well?

public class Person
{
    public string FirstName { get; set; }
    public string LastName  { get; set; }       
    public int Age { get; set; }

    public Person(string firstName, string lastName)
    {
        this.FirstName = firstName;
        this.LastName = lastName;
    }
}

private static void InitializerTest()
{
    Person person = new Person("Joe", "Bob") { Age = 32 };
}

Yeah, no-one mentions that, but it works just fine.  After all, it’s just syntactic sugar, it ends up meaning the exact same thing on the back-end.  I just decided to try it to see what it’d do, and it worked just fine.  RAWK!

So what good is that?  It’s really good, because it lets us initialize and fill in NON-anonymous classes when performing LINQ queries.  Everyone’s so busy writing articles showing how to LINQ query into anonymous types that no-one’s talking about querying into named types, and especially into existing named types that already define constructors.  It’s just gotten passed over. 

Let’s suppose I have the canonical "AdventureWorks" sporting goods site, and I have a web page that should display a list of products in a certain category.  Let’s further suppose that my web server is sitting on the other side of a service boundary from the back-end server, perhaps using WCF or a WebService call.  I don’t want to retrieve the entirety of the products from the database and clog up the network transferring them to the web server because I’m just displaying a list, and it doesn’t need each and every little column.  I need to transfer a lighter weight "ProductSummary" object instead of the heavy "Product".

I can do a LINQ query into an anonymous type on the server-side, but how am I supposed to write a translator to turn it into my DTO?  Well, using the new type inference I could just refer to the contents of the collection as "var" and do it that way, but frankly I don’t like the look of that, and how do I type the parameter being fed in to be translated?  What if I actually NEED the ProductSummary for something on the server-side too, and would actually LIKE it to have a name?  Or what if I need to use a pre-existing ProductSummary in an existing solution?  What if that pre-existing summary object has no parameterless constructor?

Check this out:

private static void SelectIntoSummary()
{
    AdventureWorksDataContext db = new AdventureWorksDataContext();
    IEnumerable<ProductSummary> productSummaries =
        from p in db.Products
        where (p.ProductSubcategoryID == 5)
        select new ProductSummary(p.ProductID, p.Name)
        {
            CategoryName = p.ProductSubcategory.ProductCategory.Name,
            SubCategoryName = p.ProductSubcategory.Name
        };
}

I was able to select into an existing object that doesn’t match the database table at all, the ProductID and Name values are passed into the constructor like normal, and I can still fill in the CategoryName and SubCategoryName properties using the new initializer syntax.  I also avoided loading extra fields from the database into the full-fledged Product object just to create a ProductSummary.
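For reference, the ProductSummary assumed by that query might look something like this (a hypothetical sketch; the real DTO would live wherever your service contract does):

```csharp
// Hypothetical ProductSummary DTO: the ID and name come in through the
// constructor, while the category names are left to the object
// initializer -- exactly the split the LINQ query above relies on.
public class ProductSummary
{
    public int ProductID { get; private set; }
    public string Name { get; private set; }
    public string CategoryName { get; set; }
    public string SubCategoryName { get; set; }

    public ProductSummary(int productId, string name)
    {
        this.ProductID = productId;
        this.Name = name;
    }
}
```

Note that there’s no parameterless constructor here at all, which is exactly the situation where the constructor-plus-initializer syntax earns its keep.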

This is not news.  I have not discovered some new technique here, but I HAVE just found what I thought was a pretty interesting dark corner where none of the articles I’ve read so far have gone exploring yet.  It’s not glamorous or groundbreaking, but it is pretty cool.  And the coolest part about this is that it just worked like you’d expect it to.  LINQ is good.

I think I’m going to like the next few years.

Posted in Uncategorized