Don’t be tempted to break away from your TDD principles…

OK, you know how it is.  A story comes up to do something similar to code that’s already in the system.  You know there’s high risk around altering this piece of code, so you say [in a very knee-jerk way] “oh, we could just cut-n-paste the old code…only two (story) points.  Done…”   ….Big mistake.

You then open up the original code and GOOD GRIEF, what was the programmer doing?  Anything, it seems, other than writing production-quality code.  You know it works, but can you bear to propagate it around?  As it turns out, I can’t.  Let me give you the context.

The original code has to interface with the accounting system, which, by the way, is about to change.  The old code is a Quartz.NET CRON job, generally run from a Windows service application.  Your brief is that the two jobs will need to exist in tandem for a short time.  Oh, and eventually you won’t need to do this at all, as they will develop a process to replace yours (yeah, right)…

You’re running a team that has TDD at its core.  What do you do?  Where do you start (after you’ve said that infamous line above)?  First thing [mandatory]: do a quick U-turn to the product owner and the scrum master (in front of the team).  Of course the (default) reason is that “it’s not testable” – which of course it isn’t.  There is an “integration test” around the old job but this is almost irrelevant because it’s so complicated you can’t work out what’s being tested.

OK, this is how I approached it:

  1. Write a test to exercise the new job class via its only entry point.  In this case its signature is
    public void Execute(IJobContext jobContext);
  2. Write the actual class that implements this, copying and pasting the original code in and creating the instance from the dependency container.  Your code should now compile.
  3. OK, a CRON job is a CRON job.  It should be delegating most of the work to another class, otherwise you’ll be violating SRP.  Create a new unit test to prove that you are calling this new class through sensible entry points, using mocks.
  4. Write the new class to implement the mocked interface (for simplicity we’ll call this IJobWorker).  Pull code out of the job and put it into this new class.  Your code should again compile at this point.  Once you are green again, move on.
  5. OK, scan the pulled-out code for any dependencies of JobWorker.  These will typically manifest through the use of the “new” keyword.
  6. Write a set of tests where you mock in the identified dependencies.
  7. Write the code to accept these dependencies (in our case we use property injection).  Your code should now compile.
  8. Write a set of tests that call the required dependencies under each condition.  Don’t forget to write the negative tests too.  Moq is fantastic for this, using the “Times” class and the “It” class; It.Is&lt;T&gt;() is particularly useful here (see the sketch after this list).
  9. Write the code to implement these calls.  Rinse and repeat until you’re completely happy.
  10. Compare to the original code.  What have you missed?
  11. Inject some logging statements.  You know this is critical; you will be thanked many times over when you can show good logging to support what happened.
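
To make the steps concrete, here’s a minimal sketch of roughly where this lands.  Every name in it (IJobContext’s ScheduledFireTime, IJobWorker, AccountingExportJob) is a made-up stand-in – it shows the shape of steps 1–9, not the real code:

using System;
using Moq;
using NUnit.Framework;

// Stand-in for the real job context passed to Execute().
public interface IJobContext
{
    DateTime ScheduledFireTime { get; }
}

// Steps 3/4: the real work lives behind this interface, not in the job.
public interface IJobWorker
{
    void ProcessPendingTransactions(DateTime runDate);
}

public class AccountingExportJob
{
    // Step 7: property injection – the container sets this after construction.
    public IJobWorker JobWorker { get; set; }

    public void Execute(IJobContext jobContext)
    {
        // The job is a thin shell (SRP): it only translates context into a call.
        JobWorker.ProcessPendingTransactions(jobContext.ScheduledFireTime);
    }
}

[TestFixture]
public class AccountingExportJobTests
{
    [Test]
    public void Execute_DelegatesToWorker_WithTheScheduledDate()
    {
        var worker = new Mock<IJobWorker>();
        var context = new Mock<IJobContext>();
        var runDate = new DateTime(2015, 10, 18);
        context.Setup(c => c.ScheduledFireTime).Returns(runDate);

        var job = new AccountingExportJob { JobWorker = worker.Object };
        job.Execute(context.Object);

        // Step 8: Times and It.Is<T> pin down both the call count and the argument.
        worker.Verify(
            w => w.ProcessPendingTransactions(It.Is<DateTime>(d => d == runDate)),
            Times.Once());
    }
}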

OK, some final rules of engagement.

  1. As tempting as it is, the old job should not be touched.  You know that this code will disappear as soon as the accounting system is replaced.  AND, it is a critical piece of functionality.
  2. You need to end-to-end test this.  As much as I hate writing them, full integration tests are mandatory here.  As we can trigger this through the API, saving the integration tests in Postman is something the testers can understand and build upon.  Just make sure all of the inputs can be passed in via the job context (for each environment).
  3. You go through a formal code review (this is part of our SOP in any case).  In this review, the reviewer is also looking for instances where the functionality may break or the tests haven’t covered off the scenarios.
  4. Revisit this code right through each environment.  Amazing how you think of something when it’s at the back of your mind.  If anything should change, write the test first!

October 18, 2015 · 8:25 am

Duplex WCF via Windows Service for File Processing from WPF

I wrote yesterday about my first smart client application.  The problem was that it relied so heavily on feedback from the server component, via events, that it made for a very thick application.  Not smart at all.  It happens to be written in WPF too, which needs lots of background threads to keep the UI nice and responsive.

The answer to this was to create a WCF service in the middle.  The service uses binding="wsDualHttpBinding" and most of the interface is in defining the callbacks (similar concept to events, but not so smart).  Operations need to be marked with IsOneWay = true too.  The interface looks like this:

[ServiceContract(Namespace = "http://www.datamark.co.nz",
    SessionMode = SessionMode.Required,
    CallbackContract = typeof(IStatementFileProcessingDuplexCallback))]
public interface IProcessStatementFileService
{
    // One-way: the client does not wait for processing to finish.
    [OperationContract(IsOneWay = true)]
    Task ProcessFile(RemoteFileInfo request);
}

// Implemented by the client; the service calls these to report progress.
public interface IStatementFileProcessingDuplexCallback
{
    [OperationContract(IsOneWay = true)]
    void OnError(string message);

    [OperationContract(IsOneWay = true)]
    void OnMessage(string message);

    [OperationContract(IsOneWay = true)]
    void OnPercentageProcessed(int percentage);

    [OperationContract(IsOneWay = true)]
    void OnFinishedImporting();
}

[MessageContract]
public class RemoteFileInfo : IDisposable
{
    // Everything except the stream travels in message headers.
    [MessageHeader(MustUnderstand = true)] public string FileName;

    // The stream is the message body.
    [MessageBodyMember(Order = 1)] public Stream FileByteStream;

    public void Dispose()
    {
        if (FileByteStream != null)
        {
            FileByteStream.Close();
            FileByteStream = null;
        }
    }
}

The service essentially has to take in a file and process it line-by-line against the database.  Because of the large size of this file, and the need to get more than just the Stream object across, I needed to do two things:

  1. Implement a named binding configuration that allows the necessary message-size tweaks to come down the wire, then add the bindingConfiguration attribute to the endpoint element.  (The original XML snippet was lost in publishing; a representative sketch follows after this list.)
  2. Create a MessageContract where the body of the message is the Stream and all other required parameters are passed in as MessageHeader elements in the MessageContract. The code is posted above
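
Since the original snippet didn’t survive, here’s a hedged reconstruction of the kind of config item 1 describes – the binding name, service/contract names and exact quota values are assumed:

<system.serviceModel>
  <bindings>
    <wsDualHttpBinding>
      <!-- Named binding with the quotas raised so a large file fits in one message -->
      <binding name="LargeFileDuplexBinding" maxReceivedMessageSize="2147483647">
        <readerQuotas maxArrayLength="2147483647" maxStringContentLength="2147483647" />
      </binding>
    </wsDualHttpBinding>
  </bindings>
  <services>
    <service name="MyApp.ProcessStatementFileService">
      <!-- bindingConfiguration is the attribute added to the endpoint element -->
      <endpoint address=""
                binding="wsDualHttpBinding"
                bindingConfiguration="LargeFileDuplexBinding"
                contract="MyApp.IProcessStatementFileService" />
    </service>
  </services>
</system.serviceModel>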

Now the really interesting stuff: whilst I was expecting ProcessFile() to be awaitable, it turns out that the

[OperationContract(IsOneWay = true)]

attribute means the call only waits for the message to be sent; the caller therefore does not await the WCF method actually completing.  So the design of the client application is to allow complete control of the logic to come only from the CallbackContract.

In my mind this works out to be a better design – less logic in the client; it’s all done in the client implementation of IStatementFileProcessingDuplexCallback.  See below for the OnError logic:

public void OnError(string message)
{
    Application.Current.Dispatcher.Invoke(
        (ThreadStart)delegate
        {
            foreach (Window window in Application.Current.Windows)
            {
                if (window.GetType() == typeof(MainWindow))
                {
                    ((MainWindow)window).Errors.Add(new Message { Content = message, LineColour = Colors.Black });
                    ((MainWindow)window).ProcessingProgress.Foreground = new SolidColorBrush(Colors.Red);
                }
            }
        });
}

Note: as this is a WPF application and the callbacks arrive on background threads, I have to use Dispatcher.Invoke() because the controls live on the main UI thread.

The other important thing is that the method handler only has access to the form (called “MainWindow”) through the Application.Current.Windows collection, hence the iteration.
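
For completeness, here’s a sketch (with assumed names) of how the client side gets wired up – the callback object travels in via an InstanceContext so the service has something to call back on:

using System.IO;
using System.ServiceModel;
using System.Threading.Tasks;

public class StatementFileUploader
{
    public async Task Upload(string path)
    {
        // Hypothetical class implementing IStatementFileProcessingDuplexCallback
        // (containing the OnError handler shown above).
        var callback = new StatementFileCallbackHandler();

        var factory = new DuplexChannelFactory<IProcessStatementFileService>(
            new InstanceContext(callback),
            "ProcessStatementFileEndpoint"); // endpoint name from config (assumed)

        IProcessStatementFileService proxy = factory.CreateChannel();

        // IsOneWay: this returns once the message is sent, not when processing
        // finishes – progress and completion arrive via the callback methods.
        await proxy.ProcessFile(new RemoteFileInfo
        {
            FileName = Path.GetFileName(path),
            FileByteStream = File.OpenRead(path)
        });
    }
}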

Important…just found out that this binding will not work across the wire, because the duplex callback needs a second, server-to-client HTTP connection, which NAT and firewalls typically block: http://www.dotnetconsult.co.uk/weblog2/PermaLink,guid,b891610a-6b78-4b54-b9a6-4ec81c82b7c0.aspx. If I had time, perhaps netTcpBinding would be the answer.
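
A quick sketch of what that might look like (hosting details assumed; the point is that net.tcp multiplexes the callbacks over the client’s own connection, so no inbound connection to the client is needed):

using System;
using System.ServiceModel;

class Host
{
    static void Main()
    {
        // ProcessStatementFileService is the assumed implementation of the contract.
        using (var host = new ServiceHost(typeof(ProcessStatementFileService)))
        {
            host.AddServiceEndpoint(
                typeof(IProcessStatementFileService),
                new NetTcpBinding(SecurityMode.None), // security off for brevity only
                "net.tcp://localhost:8523/ProcessStatementFile");

            host.Open();
            Console.WriteLine("Listening; press Enter to stop.");
            Console.ReadLine();
        }
    }
}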

Filed under WCF

Freewheeling through SE Asia

Less than 20 days to go before flying out to Bangkok for an epic trip with Exodus through three countries.  I’ve chosen to take my own bike, having borrowed a cycle box and tested this out.  Exodus seem happy with this (and, importantly, refund some money).  I will only know if this was a good idea once I’m back.  There’s a risk here too: insurance only covers $750 of a bike (less excess).  I guess I’m not that risk averse.

It will be interesting coming back, too.  The project I’ve been working on has come to a natural end and the company wants to abandon .NET altogether.  That leaves me and my other team mate just hanging.  Retrain in Java + Spring, or…?  Good thing Wellington is still a vibrant scene for .NET.

I wrote my first smart client last week (believe it or not).  It’s not really that smart though, as there’s no service to write to (so deploy all the things!).  Currently I’m looking at introducing a service (WCF seems the natural choice) but it really needs a way to implement feedback from the service to the client.  Currently the server throws out a lot of events; a callback model seems feasible in WCF, but alas pure events do not.  Remoting may work, but this is 2015 and that technology seems so far out of date.  Time to start researching…

Filed under Cycling

DevOps using CodeDeploy

I had a really positive response last night from the Wellington DotNet community talking about using AWS CodeDeploy to get application code onto AWS servers.

The presentation focused on how to use CodeDeploy and the application hooks that you can take advantage of.  The hooks are used to “inject” PowerShell scripts into various stages of the deployment life cycle.  Of course, you could use Bash scripts and Linux servers instead, but this was not my audience.
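
To make the hook model concrete, here’s a minimal sketch of an appspec.yml for a Windows deployment – all file names, paths and timeouts are made up, but each hook entry points at a PowerShell script that CodeDeploy runs at that stage of the life cycle:

version: 0.0
os: windows
files:
  - source: \app
    destination: C:\inetpub\MyApp
hooks:
  BeforeInstall:
    - location: \scripts\stop-site.ps1       # e.g. stop IIS before files change
      timeout: 300
  AfterInstall:
    - location: \scripts\configure.ps1       # transform configs per environment
      timeout: 300
  ApplicationStart:
    - location: \scripts\start-site.ps1
      timeout: 300
  ValidateService:
    - location: \scripts\smoke-test.ps1      # non-zero exit fails the deployment
      timeout: 300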

The other cool thing about CodeDeploy is that there are two models for deploying: “in-place” or “blue-green”.  In-place targets existing servers; blue-green targets a new stack and moves traffic over to it.  A great question came from the audience about what to do if you want two applications to target a new stack.  A good solution would be to blue-green deploy the first application and then do an in-place deployment of the second.  In this example, the first application was public facing (so we do not want outages), but the second was not.

I found that in-place deployments could result in the load balancer serving up content from a server that is healthy according to the load balancer, but not actually ready to serve content because it is mid-deployment.  This could be mitigated by scripting the server out of the load balancer during the deployment process, but that means yet more scripting.  It depends on your goals…

Anyway, the slide deck is attached.  At the end there is a link to a Udemy course that I’d recommend, and also some more code examples.

Filed under Agile, AWS, CodeDeploy, DevOps

Unit testing SQL Server

Finally had an Aha! moment today implementing SQL Server unit tests to start off the next change… Continue reading

Filed under TDD

TDD payback

One of the warm fuzzy moments for me (well, in a professional capacity) is when I hear something I’ve said come back to me.  In this case it was a statement about my own code… ”well, it just doesn’t smell right”.  Here’s the story… Continue reading

Filed under TDD

What’s in a test?

I had an interesting discussion with a colleague today about “unit” tests vs. “integration” tests.

For this team, we have TDD at the core, so it got me thinking: why do we have the distinction?  What led me to this thought was that I needed to write a so-called integration test to prove that a class could be created from our dependency injection container, and another suite of tests around calling dependencies and such like – unit tests.

The problem came when I went to run the tests: for a brief moment, I couldn’t find the integration test I had just written.  Why was it separate to the others?  Surely we just need “tests”, or perhaps “tddtests” for a clear distinction, and run the entire fixture every time…

It turns out that many of the tests in the integration test project are very slow.  As a consequence, they don’t get run as often as they should.  Aha – another problem with this setup.  My colleague was telling me that unit tests are unit tests because they use mocking.  Well, so do some of the integration tests.  I also heard that integration tests aren’t true integration tests if they use mocking.  This is a moot point.  I think they are, because it depends on what you’re trying to develop at the time (remember, we’re doing TDD here).

All this aside, of course we need complete integration tests.  But these are more after the fact and they are usually slow, especially with all of the setup and tear-down of data, not to mention the interaction with other systems.  Perhaps another tool is better (e.g. Postman for the API calls), but there is no chance of the integration tests going away soon; it is a case of natural attrition…

October 19, 2015 · 10:43 am