Earlier in the week I mentioned that I thought the REAL Software beta process is broken. I’m a passionate user and I use RB all day, every day, so I have some strong opinions. I also happen to know a thing or two about testing a commercial software product.
Earlier in my software development career I was the lead tester for a printer utility (for a very large printer company). I was in charge of the test scripts (imagine running the same test on 30 printer models for each beta release – yeah, it was mind-numbingly dull) that each tester used to report bugs. We’d find a bug in Test 3.13b, explain how to reproduce it and what the error was, and send it to the developer in charge via the bug tracking system.
The developer would fix it and then the build developer would put together the list of changes for that build and the system would then send the bug back to the original tester for verification. If it passed our test (with the next build) we told the system that the fix was verified. If not, it got sent back to the developer.
Then, when the developer made a public release, the fixed and verified bugs were put into the change list. Depending upon how late in the process we were, bugs that hadn’t yet been fixed were listed as known bugs so the public testers didn’t report the same bug a billion times.
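The loop above is really a small state machine: a bug isn’t done when a developer marks it fixed, only when the original tester verifies the fix against a later build. Here’s a minimal sketch of that lifecycle in Python – the state names and fields are hypothetical, not anything from a real tracker:

```python
class Bug:
    # A fix isn't "done" until the original tester re-verifies it.
    OPEN, FIXED, VERIFIED = "open", "fixed", "verified"

    def __init__(self, test_id, description, tester):
        self.test_id = test_id          # e.g. "3.13b"
        self.description = description
        self.tester = tester            # who must re-run the test
        self.state = Bug.OPEN
        self.fixed_in = None

    def mark_fixed(self, build):
        # Developer claims a fix; the tracker routes the bug
        # back to the original tester for verification.
        self.fixed_in = build
        self.state = Bug.FIXED

    def verify(self, passed):
        # Tester re-runs the same script against the new build.
        # A failed verification sends the bug back to the developer.
        self.state = Bug.VERIFIED if passed else Bug.OPEN


def change_list(bugs):
    """Only fixed-AND-verified bugs make the public change list."""
    return [b.test_id for b in bugs if b.state == Bug.VERIFIED]
```

The point of the sketch is the `verify()` step: if a tracker lets a bug jump straight from “fixed” to the change list, nothing catches the case where the fix didn’t actually work.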
It’s a tedious process but it’s the only way to really do a beta program. If you assume that you really want a quality (good enough?) product you need to slow it down and be tedious about it.
So why do I think that the RS beta program is broken? First, in this release, several bugs were listed as fixed and were clearly not. This, to me, says that there is no verification process on fixed bugs, or if there is, it’s not a very stringent one. I understand how this happens because on small teams everyone is maxed out.
It could also be that the bug, as described, was fixed but the fix didn’t solve the overall problem. I could easily see this happening. The developer has a long list of things to do, looks at the bug report, fixes it, verifies it to his or her satisfaction and marks it as fixed – all without a deeper look at why the bug is occurring. I understand because it’s happened to me.
I would also posit that the beta program, as it exists, doesn’t work in the way that benefits REALbasic (and us end users) the most. Bugs are getting introduced into the product. Bugs aren’t getting fixed. New features don’t get tested properly and take several releases to get working properly.
The beta program asks members to test each build against their projects. Here’s the ugly truth: when you ask me to test ‘everything’, it’s like asking me to test ‘nothing’. There are a couple dozen controls that can be used in millions of different ways. There are hundreds of REALbasic classes that can be used in an infinite number of ways. Telling me to test my project against the new version only catches in-your-face, or easily noticeable, bugs.
Yes, there is a change list for each beta version, but RS never tells the beta list what changed in each new beta build. Several developers take the time to parse through the list and then publish what’s changed, but why are the testers doing this and not RS? RS knows what has changed in each release and should publish a per-build change list, not just an overall one.
ARBP did a survey late last year asking about the beta program. Most developers said they did it for early access to the next release. This is akin to saying, “We are part of the program to make sure it doesn’t muck with my project.” Sure, they test, but is it what RS really needs?
My recommendations, in no particular order (and some are mutually exclusive):
1) Since the beta program isn’t producing the feedback RS needs/wants early enough, scale back on bug fixes and new features and do more internal testing for each release.
2) Provide guidance on each beta build as to what to focus testing on. If the listbox received a lot of work, then say that. As a beta tester I should focus on the listbox. With Cocoa receiving a ton of changes for each build, it would be helpful to know what the developer wants tested.
3) Scrap the program entirely and rebuild it by invitation only. This ensures quality testers and a good mix of hardware, operating systems, project types, time commitment, etc. Perhaps even group people to focus on different aspects of the product in each build: Group A does controls in this release while Group B focuses on a framework or whatever, and in the next release you reverse it. That ensures there are always different people looking at different things. The key here is having as many eyes looking at as many things as possible. It also gets rid of the tire kickers who provide no valuable feedback.
4) Have a single person in charge of the beta and build process. Give that person the authority to delay a public release if beta feedback is too negative or bugs are found at the last minute. Don’t push the product out the door “just because”. If there is a legal commitment (for whatever reason) to release on a certain date, then there must be proper time for the testers to vet problems.
5) Enforce a proper feedback loop. Proper discussion needs to take place, both internally and externally, before major things get worked on. We, the users, have a certain set of expectations about features, and when no one gets our expectations on record we end up overly disappointed in a feature we can’t or won’t use. We, the users, are the biggest marketing arm of REAL Software. Keeping us happy makes for happy reviews and comments in public forums.
6) Don’t ignore beta feedback by saying we’re not the typical RB user. Um…yeah, we are. We care enough about the product to give you our time – for free. Yes, we’re in it for our own interests but don’t dismiss our thoughts as not being representative.
Thoughts? What would you change about the beta program, if anything?