My earlier article on this topic has proven to be very popular (and was recently mentioned on the XP mailing list). The discussion there has prompted me to write an update article. What I would like to do here is show a more concrete example based on the ideas from the original article, and my experiences since writing it.
A common concern I hear about testing around boundaries is that it makes the code more complicated. This may be true. To some extent it's a function of the boundary itself -- the libraries we often have to use as-is (e.g. ADO.NET) make testing at the boundary harder than it might otherwise be. But in my opinion, the ability to test my code at the boundary outweighs the additional complexity of the extra layer of indirection.
Furthermore, the actual boundary access code (e.g. the code that contains the actual ADO.NET calls) becomes very simple boilerplate that can be reused for all access to that boundary. The cost is a higher-level class containing the business logic specific to each domain object or domain operation that accesses the boundary -- but that class is testable.
What follows is a more or less complete example based on several recent projects. It shows an implementation of data access code in C#. In each case, the code started from the pattern described in the earlier article and was then extended and refactored to meet the specific needs of the project (the code here has been aggregated from multiple projects and scrubbed to remove any client-specific implementation).
A simplified view of the object model looks something like this:
The key class is FooDomainObjectData -- a class that knows how to send and retrieve data for Foo domain objects across the boundary (in this case a relational database). Note that RealDatabase is not unit tested -- but it also has very little code in it, and that code will still be exercised by the story/acceptance/functional/end-to-end/wrapper tests.
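The pivot of that model is a small IDatabase interface that both the real database and the fake database implement. The interface declaration itself does not appear in the listing below, but its shape can be inferred directly from the two implementations; a minimal sketch would be (DataReader here is a thin hand-rolled wrapper over SqlDataReader, not an ADO.NET type -- see the sketch after the listing):

public interface IDatabase
{
    // sends a command across the boundary and holds on to the resulting reader
    void GetDataReader(SqlCommand command);

    // executes a non-query command, returning the number of rows affected
    int Update(SqlCommand command);

    // hands back the reader produced by the last GetDataReader call
    DataReader GetResults();
}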
Here is the code for the critical classes (and two sample tests):
// (These listings assume: using System.Data; using System.Data.SqlClient;
//  and NUnit.Framework for the [Test] methods.)

public class FooDomainObjectData
{
    private IDatabase _db;
    public FooDomainObjectCollection FooDomainObjects;

    public FooDomainObjectData(IDatabase db)
    {
        _db = db;
        FooDomainObjects = new FooDomainObjectCollection();
    }

    // called by whatever driver program is responsible for creating the "real" boundary objects
    public void ParseResults()
    {
        DataReader reader = _db.GetResults();
        while (reader.Read())
        {
            // pull fields off the reader to create the domain object(s)
            // and populate the public collection with them
            // (fieldvalue1 and fieldvalue2 stand in for values read off the reader)
            FooDomainObjects.Add(new FooDomainObject(fieldvalue1, fieldvalue2));
            // etc.
        }
        reader.Close();
    }

    // called by the driver program -- yet testable
    public void GetSomeData(FooDomainObject foo)
    {
        SqlCommand command = CreateGetTransactionDataCommand(foo);
        _db.GetDataReader(command);
    }

    // also testable -- and allows for inspecting out parameters
    private SqlCommand CreateGetTransactionDataCommand(FooDomainObject foo)
    {
        SqlCommand command = new SqlCommand(NAME_OF_STORED_PROCEDURE);
        command.CommandType = CommandType.StoredProcedure;

        // add parameters (the SqlDbType values are illustrative --
        // use whatever types the stored procedure actually expects)
        command.Parameters.Add("@SomeField", SqlDbType.Int).Value = foo.SomeField;
        command.Parameters.Add("@SomeOtherField", SqlDbType.VarChar).Value = foo.SomeOtherField;

        return command;
    }
}

// an instance will be created by driver code in the "real" application
public class RealDatabase : IDatabase
{
    private DataReader _results = null;

    public void GetDataReader(SqlCommand command)
    {
        // GetConnection() creates a SqlConnection based on information
        // passed to the constructor (neither method is shown)
        command.Connection = GetConnection();
        command.Connection.Open();
        _results = new DataReader(command.ExecuteReader());
    }

    public int Update(SqlCommand command)
    {
        command.Connection = GetConnection();
        command.Connection.Open();
        try
        {
            return command.ExecuteNonQuery();
        }
        finally
        {
            command.Connection.Close();
        }
    }

    public DataReader GetResults()
    {
        return _results;
    }
}

// created and used by tests
public class FakeDatabase : IDatabase
{
    public DataReader Results = null;
    public SqlCommand LastCommand = null;

    public FakeDatabase() {}

    public FakeDatabase(DataReader reader)
    {
        Results = reader;
    }

    public DataReader GetResults()
    {
        return Results;
    }

    public void GetDataReader(SqlCommand command)
    {
        LastCommand = command;
    }

    public int Update(SqlCommand command)
    {
        LastCommand = command;
        return 1;
    }
}

// (the tests below would live inside an NUnit test fixture class;
//  BuildRow() is a test helper that is not shown)

[Test]
public void GetSomeDataCreatesStoredProcedureCommand()
{
    FooDomainObject foo = new FooDomainObject();
    foo.SomeField = 42;
    foo.SomeOtherField = "fortytwo";

    FakeDatabase db = new FakeDatabase();
    FooDomainObjectData data = new FooDomainObjectData(db);
    data.GetSomeData(foo);

    Assert.IsNotNull(db.LastCommand);
    Assert.AreEqual(CommandType.StoredProcedure, db.LastCommand.CommandType);
    // the expected name must match NAME_OF_STORED_PROCEDURE above
    Assert.AreEqual("Our_stored_procedure_name", db.LastCommand.CommandText);
    Assert.AreEqual(42, db.LastCommand.Parameters["@SomeField"].Value);
    Assert.AreEqual("fortytwo", db.LastCommand.Parameters["@SomeOtherField"].Value);
}

[Test]
public void CanCreateFooDomainObjectsFromResults()
{
    FakeReader reader = new FakeReader();
    reader.AddRow(BuildRow()); // creates a row in the "reader" that simulates our returning dataset

    FooDomainObjectData data = new FooDomainObjectData(new FakeDatabase(reader));
    data.ParseResults();

    FooDomainObject foo = data.FooDomainObjects[0];
    Assert.AreEqual(42, foo.SomeField);
    Assert.AreEqual("fortytwo", foo.SomeOtherField);
    // and so forth
}
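Two supporting classes are referenced above but not shown: the DataReader wrapper that RealDatabase builds around the SqlDataReader, and the FakeReader that the second test feeds to FakeDatabase. The real implementations will vary; a minimal sketch, assuming the wrapper only needs Read(), Close(), and an indexer, and that a fake row is just a Hashtable keyed by column name (both of those details are my assumptions, not the original code), could look like this:

// Sketch only. Requires: using System.Collections; using System.Data;
// The row representation (Hashtable) and AddRow are illustrative assumptions.
public class DataReader
{
    private readonly IDataReader _inner;

    protected DataReader() {}                        // lets FakeReader subclass without a real reader
    public DataReader(IDataReader inner) { _inner = inner; }

    public virtual bool Read() { return _inner.Read(); }
    public virtual void Close() { _inner.Close(); }
    public virtual object this[string name] { get { return _inner[name]; } }
}

public class FakeReader : DataReader
{
    private readonly ArrayList _rows = new ArrayList();
    private int _current = -1;

    public void AddRow(Hashtable row) { _rows.Add(row); }

    public override bool Read() { return ++_current < _rows.Count; }
    public override void Close() {}
    public override object this[string name]
    {
        get { return ((Hashtable) _rows[_current])[name]; }
    }
}

Under this sketch, ParseResults would pull its field values off the reader through the indexer (e.g. reader["SomeField"]), and the BuildRow() helper in the test would simply return a Hashtable with "SomeField" set to 42 and "SomeOtherField" set to "fortytwo".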
Note that in this example the injection is done via the constructor. Also note the impact the boundary library has on the design: ADO.NET DataReaders are harder to work with than DataSets, but they are easier to test with, so the solution uses data readers in the IDatabase implementation.
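To make the wiring concrete: in the running application some driver object creates the RealDatabase and hands it to FooDomainObjectData, while the tests hand in a FakeDatabase instead. A minimal sketch of the production side (FooDriver and the connection-string constructor on RealDatabase are my assumptions for illustration; the real driver code is not shown) might be:

public class FooDriver
{
    // The only place a RealDatabase gets created; everything above it
    // takes an IDatabase and is therefore testable with FakeDatabase.
    public FooDomainObjectCollection LoadFooData(FooDomainObject foo, string connectionString)
    {
        IDatabase db = new RealDatabase(connectionString);   // assumed constructor
        FooDomainObjectData data = new FooDomainObjectData(db);

        data.GetSomeData(foo);    // sends the stored-procedure command across the boundary
        data.ParseResults();      // turns the returned reader into domain objects

        return data.FooDomainObjects;
    }
}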
In short, after two years of using this pattern, I still find that the benefits of testability at the boundary outweigh the additional complexity of the code. Further, the pattern holds for any boundary (i.e. not just databases). The resulting class structure quickly becomes familiar to the team (and so is easily grokked). Once in place, it extends reasonably well: you don't have to write much housekeeping code to add and test functionality that accesses the boundary. Finally, it enables ongoing refactoring of all the code above the RealDatabase class (which itself rarely changes). That means our investment in the extra code pays off as soon as our data access needs change or, as on my current project, the level of abstraction changes -- multiple FooData classes were merged into a higher-level abstraction (reducing database round-trips) using basic refactoring techniques, without breaking existing functionality.