When Good is Enough (and when it isn’t)

It’s a debate we have on a daily basis: is the work we’ve completed good enough, or is it important enough to spend an extra half day, or even days, perfecting?

Oftentimes we’re forced to make sacrifices, but when is the right time to push back, and when should we accept defeat?

Obviously what follows here are my own opinions, but I’m a firm believer that a well-executed plan that is only 80% complete is worth leaps and bounds more than a flawed plan that is 100% complete.

 

A few questions I commonly ask myself:

  • Is what I’m working on tied to a software release? We release monthly, in two 2-week iterations. If I want something to make it into a release, it needs to be done sufficiently ahead of time to ensure adequate testing. This may mean cutting scope. Cutting scope is fine, in my opinion; just do so in a way that avoids silly shortcuts that create headaches for yourself when you inevitably have to come back and add functionality.
  • Who is the consumer? Is it a paying customer (i.e. they have already paid)? Is it for a demo? Is it for another developer? If it’s a customer, what are they actually trying to accomplish? Oftentimes we get caught up in our own expectations of how something should look or behave, and lose sight of what the customer is actually trying to do. If the customer is currently using pen and paper to track something, a simple (ugly?) web form may be perfectly acceptable. Don’t overcomplicate. With good feedback, you can always come back and do a better job. Focus on the present; don’t get distracted by what you might need to do in the future.
  • How often is the functionality used? If the feature or function being worked on is used by a handful of backend administrators, you may not need to go over the top on day one. If they come back and tell you they absolutely cannot use the system without A, B, or C, make sure you understand what A, B, and C are and get cracking. Quick turnaround builds loyalty and trust, and you need both as early in the customer relationship as possible. If the feature is public facing and used by a large percentage of the audience, by all means, perfect it.
  • Do I need to modify the database? From a technical perspective, this has always been a killer. Make a flub manipulating the database and you’ll spend countless hours trying to correct customer datasets. Get representative datasets, migrate them automatically, and test, test, test! Speaking from experience, do not cut corners testing or working with your database migrations.
  • Have I been burnt in this area before? Maybe the work you’re doing is in direct response to a filed bug. Either way, if you’re working in a particularly risky area of the codebase, it’s worth going the extra step to ensure you’ve got solid code coverage (unit, regression, Selenium tests, etc.). Fool me once… fool me twice…

 

There are plenty more, but in general I err on the side of getting releases out the door. If we spend 20 working days on a release (4 work weeks), up to 6 of those will be spent either performing non-automated regression tests (don’t worry, we have automated tests as well) or fixing post-code-complete deficiencies.

Test cases are a given. If you’re working in Java or any other programming language with tooling like JUnit or TestNG, I really see no excuse not to use it. If nothing else, it’s a good safety net if you ever need to refactor a particular area of the code base.
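As a sketch of that safety net (the helper class and method names here are hypothetical, purely for illustration), even a tiny JUnit test over a small pure method pays for itself the first time you refactor:

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical helper under test: turns a title into a URL slug.
class SlugBuilder {
    static String slugify(String title) {
        return title.trim()
                    .toLowerCase()
                    .replaceAll("[^a-z0-9]+", "-")  // collapse runs of punctuation/spaces
                    .replaceAll("(^-|-$)", "");     // strip leading/trailing dashes
    }
}

public class SlugBuilderTest {
    @Test
    public void collapsesPunctuationToDashes() {
        assertEquals("good-is-enough", SlugBuilder.slugify("Good is Enough!"));
    }
}
```

If the slug rules ever change, the test tells you immediately whether you broke existing behaviour.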

 

If I had to pick a guiding principle, it would be:  Get it in, Make it right, Make it fast. 


Get it in, because you need the feedback.

Make it right, because that’s what we’re paid to do and stakeholders expect it.

Make it fast, because it’s no fun staying up until 2:00am trying to debug performance problems on a customer’s live system.

Using Interfaces with JAXB

I set about the other day to use JAXB-annotated classes to generate some JSON as part of some web services work.

The trivial case worked.

@XmlRootElement
public class ExtMessage
{
    private String owner;

    @XmlElement
    private ExtConcreteBody body;

    // getters and setters omitted
}

What I set about doing next caused some immediate grief. My intention was for ‘body’ to be one of many possible JSON entities. My first attempt was to introduce a JSONBody interface and use it in place of ExtConcreteBody.

@XmlRootElement
public class ExtMessage
{
    private String owner;

    @XmlElement
    private JSONBody body;

}

Of course that didn’t work.  At marshalling time, the JAXB provider complained about not supporting interfaces.

A quick Google search showed I wasn’t the only person who had run into this problem.

The best resource I found was the JAXB User Guide. It has some funny rendering and seems slightly out of date, but it led me down the correct path.

Essentially, you need to make use of an XmlJavaTypeAdapter. JAXB ships a default one (AnyTypeAdapter) that will generate ‘type="xs:anyType"’ in your XML schema. If you want specific type support, you can implement your own adapter.
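For illustration, here’s a minimal sketch of what such a custom adapter might look like (the JSONBodyAdapter class is hypothetical, and the inlined JSONBody stub stands in for the interface from this post; it assumes the standard javax.xml.bind.annotation.adapters API):

```java
import javax.xml.bind.annotation.adapters.XmlAdapter;

// Stand-in for the JSONBody interface discussed in this post.
interface JSONBody { }

// Hypothetical adapter: at marshalling time it simply hands JAXB the
// concrete instance, so JAXB resolves the real class rather than
// rejecting the interface outright.
public class JSONBodyAdapter extends XmlAdapter<Object, JSONBody> {
    @Override
    public Object marshal(JSONBody body) {
        return body;
    }

    @Override
    public JSONBody unmarshal(Object value) {
        return (JSONBody) value;
    }
}
```

Annotating the interface with @XmlJavaTypeAdapter(JSONBodyAdapter.class) is then the moral equivalent of using the stock AnyTypeAdapter.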

After adding the class-level @XmlJavaTypeAdapter(AnyTypeAdapter.class) to JSONBody, I thought I had it licked.

Then the JAXB provider complained that “Marshalling Error: class […] nor any of its super class is known to this context”.

WTF?

Fortunately, that led me to a comment on a random blog post about Collections and JAX-RS mentioning that you should use @XmlSeeAlso(…) if you want to avoid that error.

Finally it works.

@XmlRootElement
@XmlSeeAlso(ExtConcreteBody.class)
public class ExtMessage
{
    private String owner;

    @XmlElement
    private JSONBody body;

}

@XmlJavaTypeAdapter(AnyTypeAdapter.class)
public interface JSONBody
{
}

@XmlRootElement
public class ExtConcreteBody implements JSONBody
{
    private int id;

}

The JSON looked like:

{"extMessage":{"body":{"@type":"extConcreteBody","id":"0"},"owner":"dummy.user"}}

Which seemed reasonable to me.
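For completeness, a minimal marshalling sketch (the Ping class below is a hypothetical stand-in for ExtMessage; note that plain JAXB emits XML, while the JSON above comes from the JAX-RS provider's JSON layer):

```java
import javax.xml.bind.JAXBContext;
import javax.xml.bind.annotation.XmlRootElement;
import java.io.StringWriter;

// Hypothetical stand-in for ExtMessage, reduced to one public field
// so default JAXB field binding picks it up.
@XmlRootElement
class Ping {
    public String owner = "dummy.user";
}

public class MarshalDemo {
    public static void main(String[] args) throws Exception {
        JAXBContext ctx = JAXBContext.newInstance(Ping.class);
        StringWriter out = new StringWriter();
        // Plain JAXB output is XML, e.g. <ping><owner>dummy.user</owner></ping>
        ctx.createMarshaller().marshal(new Ping(), out);
        System.out.println(out);
    }
}
```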

Hopefully this helps anyone else out there that is having grief trying to do the same thing!

You’re a big company, now act like a little one.

There’s a very good article over on Smart Bear about small companies behaving like small companies when it comes to interacting with customers, and not pretending to be something they aren’t.

 

I won’t claim to work for a big company; it’s still very much the opposite. But over the past 5 or 6 years, I have seen a transition in how customers are marketed to and interacted with, from sale to solution implementation and beyond. We’ve gone from a group of 2 to a company of more than 70, complete with departments, budgets, vice presidents, a board of directors, etc.

I’m not a marketer, so I’m not going to comment on our web presence or our quarterly e-mail campaigns. They seem to be working, as that mythical funnel remains full. Needless to say, our sales process was very rudimentary back in the day; there certainly was no funnel. We develop enterprise solutions and have always made $100k+ deals.

 

All things aside, when you start considering customers as a number or a percentage point in some quarterly metric you’re trying to hit, you’ve lost an edge. The startup isn’t even going to have the metric in the first place; they’re going to concentrate on getting the customer’s attention, getting them product, iterating on said product, and maintaining the relationship. The product may suck (our first few releases definitely did), but that’s addressable over time. The relationship is far more important and can be lost very quickly.

The other thing that changes as a company gets larger is focus. No longer do you have that laser-guided focus on getting product to a single customer; you’re now dealing with existing and new customers simultaneously. Your decision making slows as you’re forced to consider the implications of change for all customers. It’s one of those annoying but necessary evils that comes with maturity and customer success. :)

I shouldn’t say it’s impossible to maintain that focus; it just becomes more difficult. Your ability to focus on a particular customer is, in my mind, a key indicator of your ability to be successful. It’s one of the reasons we track a customer satisfaction metric. I won’t get into how it’s calculated, but essentially it’s a gauge of how often we’re communicating with a customer, whether the relationship is positive, and whether they’re happy with their particular solution. Green is good; red/yellow is bad.

The other key indicator, in my mind, is not making the same mistake twice, but that’s a topic for another post. It’s a significant competitive advantage that the slightly larger company has over the startup.