I never understood the point of manual test scripts. They annoy me. I view them as nothing more than candidates for automation. I have never come across a manual script that wouldn't be better used as an automation script, which of course defeats the inherent nature of their being manual test scripts. The only value in a manual test script is to give it to clients, so that they can run through the new app you just created for them, feel comfortable about the application, and learn the app as they work through the script.
Jonathan Kohl presents the perfect argument for why manual test cases should be extinct. Everyone should read it: developers, clients, testers, and, most definitely, project managers.
Most bugs will never be found by a manual script. Scripts only illustrate the "conventional" click-path for completing a task, and the developer should already have gone through that path during his own testing; there is a high probability that it already works. End-users are never going to follow this path, anyway; they will do something that you entirely don't expect. They will hit the 'Back' button when you didn't plan for it, or double-click the 'Submit' button when you didn't handle it, or bookmark the third step in a five-step wizard. Scenarios like these will never be covered by a manual script, but they will be tested by any tester worth his salt, and they could be tested far more widely if so much of the industry weren't convinced that scripts are the holy grail.
CruiseControl.Net 1.0 has been released. download | release notes
This is a must-upgrade for anyone running v0.9 or earlier. There are many updates that I am excited about, most notably the overhaul of CCTray (the client-side build monitoring tool that sits in your system tray). Our developers have had to use Firefox's CC.Net monitor extension to monitor multiple builds simultaneously. No more.
We will be upgrading within the next week.
MSIExec error code 1605 has been a thorn in my side for quite a while. When an MSI was command-line deployed by one user (manually deployed by me in the middle of the day), it couldn't be uninstalled by another (automation during the nightly) due to the "Just Me" default. If I installed it through the UI and selected "Everyone", then the nightly would build just fine. I needed a way to run an "Everyone" install from the command line, but Google wasn't helping me out. Unfortunately, Microsoft does not seem to have much documentation on this functionality, either.
It frustrated me further this morning when my nightlies were failing again, but only on one server. Of course, I had manually deployed the package to this same server a few days ago. I tried Google again, and this time hit pay dirt: executing the MSI with ALLUSERS=2 on the command line makes it available to everyone. Apparently, it forces an "Everyone" install for the UI, too.
Finally I can pull the thorn out.
MSIExec /i mypackage.msi … ALLUSERS=2
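For a fully unattended nightly deployment, the same switch combines with the standard quiet and logging options (the package and log file names here are hypothetical):

MSIExec /i mypackage.msi /qn /l*v install.log ALLUSERS=2
MSIExec /x mypackage.msi /qn

With ALLUSERS=2 on the install, the nightly's uninstall no longer cares which account ran the original install.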
“It compiles! Ship it!”
Microsoft has sent Visual Studio 2005 to the printers. That brings .Net 2.0 to the table in all of its glory. The official release date is still November 7, and while it is available now to all of us MSDN subscribers (though the site is too flooded to ping, let alone download), there is still some question as to whether the media will be ready in time to go in all of the pretty little VS05 boxes at your local Microsoft store.
Outside of the QA world (and unfortunately, sometimes in the QA world), I've heard people toss around 'Performance Testing', 'Load Testing', 'Scalability Testing', and 'Stress Testing', yet always mean the same thing. My clients do this. My project managers do this. My fellow developers do this. It doesn't bother me–I'm not some QA psycho who harasses anyone who doesn't use exactly the correct term–but I do smirk on the inside whenever one of these offenses occurs.
Performance testing is not load testing is not scalability testing is not stress testing. They are not the same thing. They closely relate, but they are not the same thing.
- Load testing is testing that involves applying a load to the system.
- Performance testing evaluates how well the system performs.
- Stress testing looks at how the system behaves under a heavy load.
- Scalability testing investigates how well the system scales as the load and/or resources are increased.
Alexander Podelko, Load Testing in a Diverse Environment, Software Test & Performance, October 2005.
Performance Testing
Any type of testing–and I mean any type–that measures the performance (essentially, speed) of the system in question. Measuring the speed at which your database cluster switches from the primary to secondary database server when the primary is unplugged is a performance test and has nothing to do with the load on the system.
Load Testing
Any type of test that is dependent upon load or a specific load being placed on the system. Load testing is not always a performance test. When 25 transactions per second (tps) are placed on a web site, and the load balancer is monitored to ensure that traffic is being properly distributed to the farm, you are load testing without a care for performance.
Stress Testing
Here is where I disagree with Alexander: stress testing places some sort of unexpected stress on the system, but that stress does not have to be a heavy load. Stress testing could include testing a web server where one of its two processors has failed, a load-balanced farm with some of its servers dropped from the cluster, a wireless system with a weak signal or increased signal noise, or a laptop outside in below-freezing temperatures.
Scalability Testing
Scalability testing is not tied to any one load level or hardware configuration; it measures how well the system responds as the load or the resources change. Does a system produce timeout errors when you increase the load from 20tps to 40tps? At 40tps, does the system produce fewer timeout errors as the number of web servers in the farm is increased from 2 servers to 4? Or when the Dell PowerEdge 2300s are replaced with PE2500s?
Testing categories in QA are vague. This includes the countless types of functional testing, reliability testing, performance testing, and so on. Often, a single test can fit into a handful of testing categories. Testing how fast the login page loads after three days of 20tps traffic can be a load test, a performance test, and a reliability test. The category a test should fall under depends upon what you are trying to do or achieve. In this example, it is a performance test, since the goal is to measure 'how fast'. If you change the question to 'is it slower after three days', then it is a reliability test. The point is that no matter where the test fits in your "Venn Diagram of QA," the true identity of a test is based on what you are trying to get out of it. The rest is just a means to an end.
I know. I haven't posted in a while. But I've been crazy busy. Twelve-hour days are my norm right now. But enough complaining; let's get to the good stuff.
By now you know my love for PsExec. I discovered it when trying to find a way to add assemblies to a remote GAC [post]. I’ve found more love for it. Now, I can remotely execute my performance tests!
Execute a LoadRunner test using NAnt via PsExec:
<exec basedir="${P1}"
      program="psexec"
      failonerror="false"
      commandline='\\${P2} /u ${P3} /p ${P4} /i /w "${P5}" cmd /c wlrun -Run
          -InvokeAnalysis -TestPath "${P6}" -ResultLocation "${P7}"
          -ResultCleanName "${P8}"' />
(I’ve created generic parameter names so that you can read it a little better.)
P1: Local directory for PsExec
P2: LoadRunner Controller Server name
P3: LoadRunner Controller Server user username. I use an Admin-level ID here, since this ID also needs rights to capture Windows PerfMon metrics on my app servers.
P4: LoadRunner Controller Server user password
P5: Working directory on P2 for 'wlrun.exe', such as C:\Program Files\Mercury\Mercury LoadRunner\bin
P6: Path on P2 to the LoadRunner scenario file
P7: Directory on P2 that contains all results from every test
P8: Result Set name for this test run
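With real values plugged in, the command that PsExec ends up running looks roughly like this (the server name, credentials, and paths below are hypothetical):

psexec \\LRCONTROLLER /u MYDOMAIN\lradmin /p secret /i /w "C:\Program Files\Mercury\Mercury LoadRunner\bin" cmd /c wlrun -Run -InvokeAnalysis -TestPath "C:\PerfTests\Nightly.lrs" -ResultLocation "C:\PerfResults" -ResultCleanName "Nightly_20051115"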
'-InvokeAnalysis' will automatically execute LoadRunner Analysis at test completion. If you properly configure your Analysis default template, Analysis will automatically generate the result set you want, save the Analysis session information, and create an HTML report of the results. Now put IIS on your Controller machine, create a virtual directory pointing to the main results directory in P7, and you will have access to the HTML report within minutes after your test completes.
Other ideas:
- You can also hook it up to CruiseControl and have your CC.Net report include a link to the LR report.
- Create a nightly build in CC.Net that will compile your code, deploy it to your performance testing environment, and execute the performance test (a sketch of such a project follows below). When you get to work in the morning, you have a link to your full performance test report waiting in your inbox.
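For that nightly, a minimal ccnet.config project might look something like the following. The project name, file paths, and schedule are my own assumptions; scheduleTrigger and the nant task are standard CC.Net configuration elements.

<project name="NightlyPerfTest">
  <triggers>
    <!-- kick off the performance run at 1:00 AM every night -->
    <scheduleTrigger time="01:00" buildCondition="ForceBuild" />
  </triggers>
  <tasks>
    <!-- hypothetical NAnt build file that compiles the code, deploys it,
         and calls the psexec/wlrun target shown above -->
    <nant>
      <executable>C:\nant\bin\nant.exe</executable>
      <buildFile>C:\Builds\MyApp\perftest.build</buildFile>
      <targetList>
        <target>RunPerfTest</target>
      </targetList>
    </nant>
  </tasks>
</project>

Pair it with CC.Net's email publisher and the morning-inbox scenario above takes care of itself.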
The catch for all of this: you need a session logged in to the LoadRunner controller box at all times. The '/i' in the PsExec command means that it interacts with the desktop.
Sidenote
PsExec is my favorite tool right now. I can do so many cool things with it. I admit, as a domain administrator, I also get a little malicious sometimes. The other day I used PsExec to start up Solitaire on a co-worker's box, then razzed him for playing games on the clock.
I remember a day in my past when my project manager approached me, relaying a client request. This client always received a copy of the test cases we used when testing the application, and their request involved modifying our practices regarding case creation. Through this request—and you know how client ‘requests’ go—the client was convinced that we would be more efficient and better testers.
Fortunately I was able to convince my project manager that it was not a good idea, or at least “not a good idea right now.”
We relayed that we appreciated any suggestions to improve our process, but “would not be implementing this suggestion at this time.”
I am constantly looking for ways to improve my craft, and have received many quality suggestions from clients in a form similar to "Our testing department does [this]. You should take a look at it, and see if you can benefit from it." Suggestions carry the mood of "If you implement it, great. If you don't, that's great, too." However, be wary of 'missions from God' to change your practices. The client's plan may be driven by budget, promoting inferior methods that will save a few dollars. It may be based on the client's own practices, which may be less refined or mature than your own, also resulting in inferior methods. Finally, changing your practices mid-stream in a project—as many adopted "client requests" manifest—will disrupt flow, reducing overall quality.
Your client is in the business of making whozigadgets. You trust that they know what they are doing, and know far better than you how to do it. You are in the business of testing. Likewise, your client should trust that you are the subject matter expert in your field.
I'm not saying that clients know nothing about what you do, or that everything they say about your craft should be blown off. All qualifying* suggestions should be thoroughly considered and evaluated; that's good business. Perhaps there is a place in your organization for the process change, and it may well make you more efficient at what you do. However, I am advocating that you should not take a gung-ho attitude to please the client in any way possible and implement every process change they utter; that's suicide. Your testing team will turn into a confused, ad-hoc organization. Your quality—and with it, your reputation—will crumble.
* Qualifying Suggestion: Any suggestion that is reasonable, intelligent, and well thought out. "Abandon all QA to save costs, and rely on the client's internal testing to find all bugs" does not qualify.
With our new nightly database restore, we now want to automatically run all of the change scripts associated with a project. We've found a way: I created a NAnt script that will parse the Visual Studio Database Project (or "DBP") and execute all of the change scripts in it. Here's how we got there.
Problem 1: Visual Studio Command Files are worthless
Our first idea was to have everyone update a command file in the DBP, and have NAnt run it every night. Visual Studio command files are great and all, but we have discovered a problem with them: they do not keep the files in order. We have named all of our folders (01 DDL, 02 DML, etc.) and our change scripts (0001 Create MyTable.sql, 0002 AddInfoColumn to MyTable.sql) accordingly so that they will run in order. We have found that the command file feature of VS.Net 2003 does not keep them in order, but rather seems to sort them first by extension and then by some similar oddness. Obviously, if I try to add InfoColumn to MyTable before MyTable exists, I'm going to have a problem. So, the command file idea was axed.
Problem 2: Visual SourceSafe contents can’t be trusted
Our second idea was to VSSGET the DBP directory in VSS and execute every script in it. However, the VSS store cannot be trusted. If a developer creates a script in VS.Net called ‘0001 Crate MyTable.sql’ and checks it in to the project, then proceeds to correct the spelling error in VS.Net to ‘0001 Create MyTable.sql’, VS does not rename the old file in VSS. Instead, it removes the old file from the project, renames it locally, then adds the new name to the project and to VSS. It also never deletes the old file name from the VSS store. Now, both files (’0001 Crate MyTable.sql’ and ‘0001 Create MyTable.sql’) exist in VSS. Performing a VSSGET and executing all scripts will run both scripts, which could lead to more troubles.
So, we can’t use a command file, because it won’t maintain the order. We can’t trust VSS, since it can have obsolete files. We can only trust the project, but how do we get a list of files, ourselves?
Fortunately, DBP files are just text in a weird XML-wannabe format. The NAnt script will open the file and run through it looking for every ‘SCRIPT’ entry in the file. If it finds a ‘BEGIN something’ entry, it assumes that ’something’ is a folder name, and appends it to the working path until it finds ‘END’, at which time it returns to the parent directory.
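For reference, the relevant entries in a DBP file look roughly like this (abridged; the folder and script names follow our numbering convention from above):

Begin Folder = "01 DDL"
Script = "0001 Create MyTable.sql"
Script = "0002 AddInfoColumn to MyTable.sql"
End
Begin Folder = "02 DML"
Script = "0001 Populate MyTable.sql"
End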
It's not perfect. It still runs into some problems, but here it is in v0.1 form.
<project name="RunDBPScripts" default="RunScripts">
<!--
Execute all scripts in a VS.Net DBP
Author: Jay Harris, http://www.cptloadtest.com, (c) 2005 Jason Harris
License: This work is licensed under a
Creative Commons Attribution 3.0 United States License.
http://creativecommons.org/licenses/by/3.0/us/
This script is offered as-is.
I am not responsible for any misfortunes that may arise from its use.
Use at your own risk.
-->
<!-- Project: The path of the DBP file -->
<property name="project" value="Scripts.dbp" overwrite="false" />
<!-- Server: The machine name of the Database Server -->
<property name="server" value="localhost" overwrite="false" />
<!-- Database: The database that the scripts will be run against -->
<property name="database" value="Northwind" overwrite="false" />
<target name="RunScripts">
<property name="currentpath"
value="${directory::get-parent-directory(project)}" />
<foreach item="Line" property="ProjectLineItem" in="${project}">
  <if test="${string::contains(ProjectLineItem, 'Begin Folder = ')}">
    <regex pattern="Folder = &quot;(?'ProjectFolder'.*)&quot;$"
           input="${string::trim(ProjectLineItem)}" />
    <property name="currentpath"
              value="${path::combine(currentpath, ProjectFolder)}" />
  </if>
  <if test="${string::contains(ProjectLineItem, 'Script = ')}">
    <regex pattern="Script = &quot;(?'ScriptName'.*)&quot;$"
           input="${string::trim(ProjectLineItem)}" />
    <echo message="Executing Change Script (${server + '\' + database}): ${path::combine(currentpath, ScriptName)}" />
    <exec workingdir="${currentpath}" program="osql"
          basedir="C:\Program Files\Microsoft SQL Server\80\Tools\Binn"
          commandline='-S ${server} -d ${database} -i "${ScriptName}" -n -E -b' />
  </if>
  <if test="${string::trim(ProjectLineItem) == 'End'}">
    <property name="currentpath"
              value="${directory::get-parent-directory(currentpath)}" />
  </if>
</foreach>
</target>
</project>
I used an <EXEC> NAnt task rather than <SQL>. I found that a lot of the scripts would not execute in the SQL task because of their design. VS Command Files use OSQL, so that’s what I used. I guess those command files were worth something after all.
If you know of a better way, or have any suggestions or comments, please let me know.
With all that we stuff into the database on the QA environment, we need to perform a regular database restore. This way, we also get a fresh DB without any of the corruption from the previous day’s QA attacks.
I created a NAnt script to automate the process, including restoring security access when we restore from a backup created on a different machine. Centering around the NAnt code below, my script disconnects all current connections to the database in question (we cannot restore the DB without dropping it, and we cannot drop it while connections are open), drops and restores the database, refreshes security, and performs a few other tasks, such as setting all email addresses to internal addresses to prevent spamming the client, and truncating the log, since our server is a little short on disk space.
if exists (Select * from master.dbo.sysdatabases where name = '${database}')
Begin
DROP DATABASE [${database}]
End
RESTORE DATABASE [${database}]
FROM DISK = N'${backupfile}'
WITH FILE = 1,
NOUNLOAD,
STATS = 10,
RECOVERY,
-- changes file locations from what was in the backup
MOVE '${dataname}' TO '${path::combine(datadirectory,database+'.mdf')}',
MOVE '${logname}' TO '${path::combine(logdirectory,database+'_Log.ldf')}'
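The disconnect step that runs just before the drop is worth showing, too. A minimal sketch in standard T-SQL: setting the database to single-user with ROLLBACK IMMEDIATE bumps every other connection and rolls back their open transactions, so the DROP DATABASE is not blocked.

-- boot all other connections so the drop can proceed
if exists (Select * from master.dbo.sysdatabases where name = '${database}')
Begin
    ALTER DATABASE [${database}] SET SINGLE_USER WITH ROLLBACK IMMEDIATE
End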
As Lead QA, I have the fun responsibility of screening resumes and conducting phone interviews. I weed out the hackers from the script kiddies before we bring them in to face the firing squad. It never fails to amaze me how people embellish their resumes beyond reasonable limits. I am particularly fond of people who list skills they cannot define, and of people who don't proofread their resume when applying for a detail-oriented position.
As I ran through my stack of paper, I came across one unfortunate soul who did both. I was quite amused, in a genuinely entertained sense. He proclaimed his proficiency in 'Quick Teat Professional 8.0', presumably an application through which you can automate cow milking, complete with data drivers and checkpoints. "OK. So he missed the 's' and didn't catch it. So what?" Well, he also bolded the misspelling, perhaps to point out his attentiveness. This came only slightly before he listed its usage in 2003 for a former employer whose name he also misspelled. (Note: QTP v8.0 was not available until the summer of 2004.)
However, and forgivably, my recruiter is not aware of such things and had already scheduled a phone interview between me and my entertaining candidate; I honored the call, giving the prospect a chance at redemption.
He failed.
Question number two asks the candidate to list the types of testing with which s/he has experience. His reply included integration testing (also stated in his resume, correctly spelled). My follow-up asked him to define integration testing, a common ploy to make sure I'm not just being fed buzzwords. It was a definition he could not supply, or even attempt.
A candidate should be able to define every 'word' he claims experience with. If you cannot define it, you obviously do not have enough experience with it for it to be applicable. If you cannot define 'integration testing', I will not hold it against you, provided you do not list experience in it. Similarly, if you do not list it and I ask you what you know about it, be straight with me; tell me up front that you cannot define it. You will rate higher in my book than someone who stumbles through an obviously concocted and blatantly incorrect response.
BTW, if you are looking for a position as a quality analyst, and can work in the Brighton, Michigan area, drop me a line and a resume. I would be happy to hear from you. Ability to define ‘integration testing’ a plus.