
Uncommon Sense (for Software)

This blog has been moved to www.UncommonSenseForSoftware.com.

Tuesday, September 27, 2005

Building it Front-to-Back

Wow. I wish I'd tried this sooner. Conventional wisdom in software development says that first you design your architecture, then you lay the foundation, build the back-end, and somewhere near the end of your project, features start to bubble up through the user interface (UI). Maybe it's because we keep comparing software development to construction, where first the foundation goes in, then the framing, and the color of the paint and countertops doesn't even need to be decided until the very end. I'm not sure.

How many times have you experienced this: a gaggle of developers working feverishly for months while stakeholders keep asking, "Can we see it yet?" "Nope," comes the reply, "we're still building the system that the UI is going to sit on top of." Then, one day: "Tada! Here's the UI!" "But that's not what we wanted!" Uh oh. (Or, best case, the UI is functionally sound but looks like ass because you're too pressed for time to polish it up and make it esthetically appealing and ergonomically friendly.)

I never really thought to question it before. Not until I had the complete freedom to try it a different way, without having to fight to convince anyone that doing things in reverse might actually be a better way. I'm building a large system front-to-back right now (user interface to data storage), and am thrilled with all of the benefits this approach is offering.

Most people in development know that the last things on the project schedule are the ones most at risk of being cut in a time crunch. Most development projects see the bulk of the user interface being developed after all the system work has been done. But the user interface is the only thing the customer sees - it's what they buy. See the problem? You're leaving the most important thing until last. That's way too risky, for you and for your customer.

The idea that you can reduce an application design to a bunch of nicely written Word documents is appealing, but not realistic. Try as you might, that approach will never match having a living, breathing example of the UI in front of you while you build all of the architecture-level things that nobody but the developers ever gets to see.

Not that you shouldn't spend a sizable chunk of time designing and building your architecture (see my last post on architecture). Rather, the architecture should be derived from a living mockup, or UI design: not just screen caps (not enough), and not necessarily a full prototype (too much, and too likely to get thrown into production), but something in the middle.

Here are the steps I followed with my current project (a web-based modelling tool, built to be hosted ASP-style for large-scale deployments):

  1. Started in Excel: 'cause it's a data-modelling type of tool, and I wanted to lay out the main tables and views that people would have available to them


  2. Moved to Photoshop: to put some lipstick on the pig and make it look pretty (don't forget, esthetics sell)


  3. Got feedback on the concept - Was the concept right? How does it look? Good. Ready to proceed.


  4. Moved to HTML/CSS/Javascript: This step might be unique to me, but I really wanted the richness of the desktop in my web-based application (drag/drop, inline editing, etc.), so I built a bunch of re-usable, cross-browser Javascript controls and demonstrated them on a few sample pages of the application (a small sketch of one such control follows this list)


  5. Got feedback on the interactivity - how does this behave compared to regular web-apps? What do you think? Good. Ready to proceed.


  6. Moved back to PowerPoint: I wanted to mock up every single screen in the application so that I could get feedback and make any changes quickly, without having to rip out months' worth of coding. PowerPoint is great for UI design because you can throw in image pieces and move them around with text and rectangles quickly. And at one slide per screen, you can still fit 60 screens in one file, annotated, printer-friendly, and sharable.


  7. Got feedback - How does the functionality set look? Anything missing? Anything wrong? Good. Ready to proceed.


  8. Implemented each screen in HTML/CSS/Javascript: At this point, I took the time to do some "real" development. Not hacking, not prototyping, but the real take-the-time-to-do-it-right design and development - though ONLY for the UI. The goal was to have every screen built to do nothing but show dummy data and link to the other screens.


  9. Got feedback - This was the single most important feedback stage. It got me the kind of detailed comments that would otherwise only surface at the end of a development cycle (when it's too late), because from the user's perspective the application "appeared" to be complete - it was just running on dummy data. At this point, instead of looking at screen caps, they could actually click around and "use" it.


  10. Business Logic and Back-end: From here on in, things look like a normal development cycle. Do your architectural design, but this time with a living specification staring you in the face (one that the customer has actually approved), and proceed to implement it.
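
As an aside on step 4: here's a minimal sketch of the kind of re-usable inline-editing control that step describes. The function and element names are mine, invented for illustration (my project's actual controls aren't shown), and old-style DOM handlers are used to stay cross-browser:

    // Turn any element into an edit-in-place control: double-click swaps in
    // a text input, and leaving the input commits the value back to the page.
    function makeEditable(el, onCommit) {
      el.ondblclick = function () {
        var input = document.createElement('input');
        input.value = el.innerHTML;
        el.innerHTML = '';
        el.appendChild(input);
        input.focus();
        input.onblur = function () {
          var value = input.value;
          el.removeChild(input);
          el.innerHTML = value;           // commit the edit back into the cell
          if (onCommit) onCommit(value);  // hook for the (eventual) back-end
        };
      };
    }

    // Usage: make a mocked-up table cell editable; nothing gets saved anywhere.
    // Assumes an element with id="cellRevenue" exists in the mockup page.
    makeEditable(document.getElementById('cellRevenue'), function (value) {
      if (window.console) console.log('edited: ' + value);
    });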


With this approach, I've pretty well eliminated the risk that the product won't match customer expectations because they've actually been able to see it and play with it before I go do the lion's share of the development. Not only that, but now that I'm building the back-end, while I still want to build a somewhat future-proof architecture and be fairly generic with it, I have an extremely specific, living example of the application I ultimately want to build with it.

Some might call this rapid prototyping, but there are a couple important differences:

  • Only the UI was mocked up: do we really need to demonstrate that we can read and write from a database? No. Skip that stuff. Just mock up the screens and connect them so the user can get a feel for the navigation, and populate the screens with dummy data (see the sketch after this list).


  • There was nothing "rapid" about it: while the PowerPoint mockups were done quickly, they could also be changed quickly. When it came time to move the screens into HTML/CSS/Javascript, it was solid development practices the whole way. Not prototyping, not throw-away, but solid design and re-usability. This was production UI code.
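
To make "dummy data, real navigation" concrete, here's a minimal sketch of what those screens amount to. The screen names and data are hypothetical, invented for illustration, and it assumes a page with an empty <div id="app">:

    // Each "screen" is just a title and some canned rows; no database anywhere.
    var screens = {
      modelList:   { title: 'My Models',    next: 'modelDetail', rows: ['Q3 Forecast', 'Budget 2006'] },
      modelDetail: { title: 'Model Detail', next: 'modelList',   rows: ['Revenue: 1,200', 'Expenses: 900'] }
    };

    function showScreen(name) {
      var s = screens[name], html = '<h2>' + s.title + '</h2><ul>';
      for (var i = 0; i < s.rows.length; i++) {
        html += '<li>' + s.rows[i] + '</li>';
      }
      // a link to the next screen gives the user real navigation over fake data
      html += '</ul><a href="#" onclick="showScreen(\'' + s.next + '\'); return false;">Next screen</a>';
      document.getElementById('app').innerHTML = html;
    }

    showScreen('modelList');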


While the goal of this approach is similar to one of the goals of Extreme Programming (get the customer in early), I think this approach is stronger: you get all the benefits of involving the user early to validate what you're doing, without sacrificing the benefits of solid up-front design for each tier (UI, business logic, data storage) by making it up as you go along and trying to convince yourself you can re-factor quickly.

I can't tell you how useful this front-to-back approach has been. Virtually all of the requirements and design unknowns have been squashed, which makes building the back-end a dream. Decreased risk, increased predictability, increased quality.

This approach definitely took more time upfront than a "document only" approach to design, but I believe the time saved later, and the risks mitigated, make it an exceptional investment. I would highly recommend it to anyone.
 

Thursday, September 22, 2005

How to Explain Good Architecture to the Non-Techie

How many times have you tried to explain the importance of good architecture to one of the many stakeholders in one of your software projects? Could be someone from sales, or marketing, or even your boss, depending on your company. If you have, you've probably seen their nose wrinkle, or their eyes glaze over. After an honest attempt to make them see the light, you probably got frustrated and concluded that they just don't care, or just don't get it - and never will. The frustration you both feel creates a kind of tension, and over time this inability to relate to each other makes it harder to shoot for the same goals.

Wise people say that being a good communicator isn't about any particular mastery of the language, or about demonstrating how incredibly smart you are through overuse of industry jargon. It's about being able to put yourself in your listener's shoes and hear what they hear - simply, about relating.

Although I've been managing software teams for many years now, it finally occurred to me, while walking the dogs at lunch, what good architecture is all about, and how to get the point across to your non-technical peers. Like many, I intuitively knew what it was, but could never quite nail the explanation when describing it to a non-techie.

I'm a firm believer that good architecture is actually a pillar of a successful product company. It's just that it's not for reasons you would necessarily think.

It'll sound a bit funny at first, but stick with me: Good architecture has nothing to do with the quality of the product you're building. You can have a product that customers absolutely adore, that is about to fall over dead tomorrow if one more tiny little feature is added to it because nobody can see the house of cards it's built on, except you. Customers don't buy architecture (unless your product is a platform, then they do). Even if your product is extendable through some API, they still don't care too much about what goes on under the covers. Customers buy a solution to a problem. Even if the problem is simply that they're bored and they're buying a video game to amuse them (problem solved). They buy features, benefits, and capabilities, but they don't buy architecture.

Good architecture is not about the product. It is about operational effectiveness. Ahhh, now your boss is listening. It's about being able to react more quickly to customers and the market, even the competition. Ahhhh, now Sales and Marketing are listening.

Understanding good architecture comes from understanding the symptoms of bad architecture, and knowing that good architecture prevents those symptoms. So. Bad architecture: can't scale the system up; can't increase performance; can't make it easier to install; can't add any more features because of the unforeseen conflicts they create with existing features; takes too long to fix bugs and add what new features are possible. Most of these things are about the company's ability to innovate, to react quickly to threats, to support their customers, to solve problems, to add new features, and to pump out new products.

A good architecture can get you faster turnaround on fixes and new feature development, more flexibility in supporting customers, a stronger ability to compete in the marketplace, etc.

The difference between a good architecture and a bad one can often affect each of those symptoms by a factor of 2 or 3. It makes the product team more effective at doing their jobs, even multiplying their productivity in many cases. Good for the team, good for the company, good for all the stakeholders.

Those are things that your stakeholders care about. And hopefully, this is a way to help get the point across.
 

Monday, September 19, 2005

An isolated occurrence might be manageable, but a trend can kill you.

Suppose you're running a software project and a task that was originally estimated to take 5 days actually took 8 days (60% longer). What impact might that slip have on the schedule?

The intuitive response is to think that the person who owned that task is now 3 days behind schedule. The intuitive response would be wrong. If managing software projects were that intuitive, the global failure rate on I.T. projects wouldn't be 70%.

Worse, it might be tempting to think that there is no impact, because that person can make up the 3 days later in the schedule. (If you ever convince yourself of this in a software project, go stand in the corner and think about what you've done.)

Here's the not-so-intuitive "real" story on that 3 day slip:

Firstly: We can make up those 3 days later. Really? You mean there are 3 full days in the schedule where this person is supposed to be sitting on their butt doing nothing, and we'll just use those 3 days to make up for this slip? No way. 9 times out of 10, a slip like that is not recoverable just by "willing" it to be so. If you actually do have buffer time to account for slips like these, pat yourself on the back. (But hope you have enough buffer for this sort of thing to repeat itself.)

Secondly: Working overtime probably won't help. Do you know what it takes to make up 3 full days working overtime? Best case, suppose someone could work an extra half-day each day (4 extra hours). That's 6 days, more than a week, of working day-and-a-half shifts. If they can only sustain 10 hours per day (2 extra hours per day), then it's 12 days (over 2 weeks) of overtime to make up for a measly 3-day slip. Overtime makes up for schedule slips at a surprisingly unhelpful rate, and burns people out at the same time. Not to mention the poor-quality code they produce while seriously demotivated and tired - poor-quality code that they'll have to come back and fix later with, what... more overtime? And while working overtime to correct for one slip, chances are that something else has slipped that will require even more overtime after that. Eventually the whole schedule just blows up.
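
That arithmetic generalizes to a one-liner. A quick sketch (the function name is mine, not a standard formula):

    // Days of overtime needed to recover a slip: the total hours to make up,
    // divided by the extra hours someone can sustain each day.
    function daysOfOvertime(slipDays, hoursPerWorkDay, extraHoursPerDay) {
      return (slipDays * hoursPerWorkDay) / extraHoursPerDay;
    }

    daysOfOvertime(3, 8, 4); // => 6: six 12-hour days to recover a 3-day slip
    daysOfOvertime(3, 8, 2); // => 12: over two weeks of 10-hour days for the same slip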

Thirdly: Even if you re-jig the schedule so that the 3-day slip doesn't seem to hurt the product at all, what will you do with the next slip, and the one after that? Because unless this is your first software project, this isn't the first slip, and it won't be the last.

Finally: Don't you dare go reduce the amount of time scheduled for some future task unless you actually reduced the requirements for that task. Otherwise, you've just swept the problem under the rug for now, and it'll come back to bite you twice as hard later.

Crap. So, now what? Stop looking at a slip as an isolated occurrence. Whatever caused the slip, however isolated and unique it might appear, it is in some way part of a larger trend. Whether it was an error in the original time estimate, or the fact that this person got pulled off the project to go do something else for a while (a distraction), these things tend to repeat themselves if not managed as a trend.

The differences between an occurrence and a trend are their perceived impact and the way you deal with them. The impact of a 3-day slip might seem small and recoverable, but if it is part of a trend (say, a 3-day slip for every 10 days' worth of work), its impact is much larger. An isolated occurrence might be manageable, but a trend can kill you. 3 days is nothing, but 30% on a 6-month schedule is almost 2 months.

If someone on the team was off on a time estimate by 30%, what do you think the chances are that every other estimate this person has given will be 100% accurate? Not very good. By the time someone has completed a few tasks, there's enough data to spot a trend - one that can have far more impact on your schedule than an isolated occurrence, but that is thankfully also easier to manage, because a trend is predictable.

The same goes for any time this person may have been taken away from the project to do other work. Was this the first time? Do you think it could ever possibly happen again? Probably. Once it happens a few times, again, there's enough data to form a trend.

So, if on average, tasks are slipping by 3 days for every 10 days' worth of work, the pace of the project is off by 30%. Thinking about this as a trend instead of an isolated occurrence changes your approach to solving the problem. For an isolated occurrence, you may be tempted to crack the whip and just try to get people to work harder or longer to make up for the 3-day slip. But as soon as you recognize it as a trend, you know that will never work - it's not sustainable. Your next option should be to re-cast the schedule, but this time you've got some real data (say, average Time Estimation Error or Distraction Rates) to pad the schedule with, thus negating the effects of the trend. Each trend that you spot and measure gets some padding. Problem solved - well, immediate problem solved. You may still want to attack the particular trends that are tripping you up in the first place, like improving your Time Estimation Error rates or reducing your Distraction Rates, but at least your project will be insulated from them while you figure out how to improve the trends themselves.
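
As a sketch of that re-casting step, here's the padding arithmetic in code. The rate names are this post's own terms; the function itself is illustrative:

    // Pad a raw estimate by the measured trends: if estimates run 30% low and
    // distractions eat another 10%, a "10-day" task should be planned as ~14.3 days.
    function paddedEstimate(rawDays, timeEstimationError, distractionRate) {
      return rawDays * (1 + timeEstimationError) * (1 + distractionRate);
    }

    paddedEstimate(10, 0.30, 0.10); // ~14.3 planned days for a 10-day raw estimate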

Better yet, if you tracked the trends from your last project, then you've already built them into this project's schedule, right? If not, you should have enough data after the first month of a 6-month project to extend the trend over the balance of the project and figure out what its impact could be.
 

Wednesday, September 14, 2005

The software wasn't late, it was supposed to take that long

Over time, new words keep getting introduced into the English language. It may take tens or hundreds of years of people using a particular term, but if it’s used enough, eventually it gets put in the dictionary. My spell-checker keeps telling me that verticalize isn’t a word, but enough business folks use it that I’m pretty sure some day it will be. Another term I think has a shot is: thesoftwareislateagain.

For the past year and a half, I’ve been working on a solution to that very problem. Unfortunately I can’t tell you about it yet because my company is still in stealth mode. But it is a multi-faceted solution, with one significant piece of the equation being time estimation.

People keep calling the software “late”. Of all the times we perceive a software project to be late, I submit to you that only half the time it actually is late. A software project can really only be late if it took longer than it should have. That “should have” part implies that we have some divine control over the duration - that we decide how long something should take the way we decide what color it should be. Hate to burst the bubble, but we don’t. If a software project’s requirements (what it needs to do) and designs (how it’s going to be accomplished) stay reasonably fixed during the life of the project, and the team and platform are truly qualified for the job, then the duration is what it is. Our job as project managers (and developers) is to figure out that duration ahead of time - accurately. We estimate how long it should take - we don’t decide it.

If it didn’t ship on time and it wasn’t late, then what was it? It was estimated wrong. It’s that simple. If all other variables about a task were reasonably fixed, and it wasn’t done when you thought it would be, then it was estimated wrong. As a rule, we tend to underestimate things a lot. Software development in particular is difficult and complex (at least, producing good software is - banging out crap is easy). That makes time estimation hard. Too hard to do with hallway conversations and incomplete requirements.

Most people think that if a task was scheduled to take 10 days but really took 15, it was 5 days late. In most cases, it wasn’t late; it was supposed to take 15 days. It was just underestimated by 5 days. Assuming everyone was working at capacity, and even if there were unforeseen technical difficulties along the way, the original estimate of 10 days was an illusion. The real duration of the task was 15 days, not 10. This is an important distinction because it determines your choices for action. If something was late, it’s tempting to start looking for ways to “make people work harder”, which often isn’t the solution. The only bad thing about the task taking 15 days was that you thought it would take 10, so you planned around that. Then other parts of the organization got screwed up because you missed some deliverable to them. If you had known ahead of time that it was really going to take 15 days, there likely wouldn’t have been a problem. By shifting your thinking to time estimation instead, you can correct the problem at the source and insulate the rest of the organization from these costly inaccuracies.
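
Put as a number you can track, the reframing looks like this (an illustrative calculation, not from any standard):

    // Estimation error: how far off the estimate was, relative to itself.
    // A "10-day" task that took 15 days wasn't 5 days late; it was 50% underestimated.
    function estimationError(estimatedDays, actualDays) {
      return (actualDays - estimatedDays) / estimatedDays;
    }

    estimationError(10, 15); // => 0.5, i.e. underestimated by 50%
    estimationError(5, 8);   // => 0.6, the 60% example from the earlier post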

People all over software-land would be better served by doing more of the homework upfront, so that their time estimates are more accurate. We've all heard the stories of big shaggy software projects never getting off the drawing-board because of too much design up-front. Does that really apply anymore? Seems to me that today, the trend has reversed. People rush ahead without knowing what they're trying to build. How the heck do you estimate the time required for something you haven't even written down in detail?
 

Tuesday, September 13, 2005

The power of suggestion

Big fat books have been written on the subject of delivering software on time. Expensive courses are offered. You can actually go get your Masters Degree in Project Management. And the success rate of software projects still stinks, sitting somewhere around the 30% mark.

I don't know of any other kind of business project that fails as often as software projects do. The strange thing is, people in the biz seem to know why projects get off track; they just haven't figured out what to do about it! Ask anyone in the software biz why projects fall off track, and they'll tell you:

- Changing requirements
- Bad time estimates
- People getting pulled off the project to go do other work
- Etc.

If so many people know the text-book reasons that software projects run off the tracks, then why the unbelievable failure rate?

I think it's the power of suggestion. People want to believe they can get it done faster than they really can. They want to believe that they can keep playing with requirements after the schedule is set and the developers are off to the races, and they want to believe that the software keeps getting built even while the whole team is taken off to go help the Sales department figure out their new CRM system. So they keep telling themselves, "I think I can, I think I can...", until they actually become convinced of it.

So countless software project managers run off trying to come up with a Gantt chart that, while structurally sound, has no chance of ever being implemented on time. They've fallen victim to the trap of drawing out what they want to believe will happen, rather than what statistics and experience should be telling them will actually happen.

The tools don't help either. They say that people who indulge the addictions of others are called "enablers". Like, if you take your alcoholic friend to a bar, then you're an "enabler" - supposedly partly to blame for your friend falling off the wagon (or is it jumping back on the wagon again? I can't remember). If that's the case, then today's project management tools are enablers of bad schedules and failed projects. They show you what you want to see, not what will actually happen. Tsk. Tsk. Those pesky tools should know better. Countless failed schedules have been typed into those bloody tools, and a few good ones too. You'd think by now they'd be able to stop you from making the same mistakes over and over again, by saying, "Ahem. Did you think about this?", and, "Um, I can tell you from your history that this schedule is just not going to work. Try this, that and the other."

What we really need are virtual project management Nannies that slap us when we forget to build in vacations, sick time, and holidays (the basics), and coach us with their vast historical references by telling us that, on average, Suzy Queue is off her time estimates by 30%, and Little Boy Blue always gets called off the project for at least 5 days each month to go help lead customers - so build that into your schedule! (Something like the sketch below.) Reading a book once and subsequently forgetting it doesn't work. Neither does going to that annual Project Management conference in Las Vegas. It's got to be in the tool, in your face, all the time.
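
That nagging could be as simple as this sketch. The names and rates are this post's own examples; the check itself is hypothetical:

    // Per-person history the tool would accumulate over past projects.
    var teamHistory = {
      'Suzy Queue':      { estimationError: 0.30, distractionDaysPerMonth: 0 },
      'Little Boy Blue': { estimationError: 0.00, distractionDaysPerMonth: 5 }
    };

    // Nag whenever a task's raw estimate ignores its owner's track record.
    function nannyCheck(owner, estimateDays) {
      var h = teamHistory[owner];
      var expected = estimateDays * (1 + h.estimationError) +
                     (estimateDays / 20) * h.distractionDaysPerMonth; // ~20 work days/month
      if (expected > estimateDays) {
        alert('Ahem. History says ' + owner + ' will need about ' +
              expected.toFixed(1) + ' days, not ' + estimateDays + '.');
      }
    }

    nannyCheck('Suzy Queue', 10); // warns: ~13 days once her 30% error is applied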

We want to believe so badly that we suppress our experience (and in some cases, actual historical data from our own environment) just to get the Gantt chart that makes us all feel good. Unless it's staring us in the face every time we look at that Gantt chart, we conveniently overlook the things that kill us every time.

If only there were such a tool. Wouldn't that be nice.