Testing (with JUnit) and Using Assertions

(Also Logging, Monitoring, and Management of Services)

[ Adapted in part from “expert one-on-one J2EE Design and Development”, Ch. 3, by Rod Johnson, Wrox pub.  (c)2003 by Wiley Pub. ]

Failure isn’t an option.  It’s mandatory.

Testing is one of the most important areas in software development, yet one which students (and unfortunately others) ignore completely or at best pay lip-service to.  Modern development methods such as Extreme Programming (XP) and Agile Software Development advocate a test first method of programming:  First you write the tests from the specifications, and then you write the actual code.  Testing can do a lot more than find bugs in your code!

Recall that software is developed in phases.  There is lots of research and general agreement that finding bugs in the early phases of a software project is much cheaper than finding them later.  At least one study found the cost of finding and fixing bugs goes up 10 times with each passing phase of a project!  A bug spotted in the requirements phase can often be corrected with a 5 minute phone call to the client, but that same bug spotted during acceptance testing may require a complete re-design and re-implementation of the project!

The bottom-line is, even if it doubles the time it would otherwise take during coding to develop tests, it will still save time and money overall.

In addition to finding bugs, tests can be used to specify the required behavior of methods and classes.  A method signature may tell you something about how a method can be called, but it doesn’t say everything about how it should be used:

  public Color pickColor ( float r, float g, float b )

This doesn’t really tell you what the range of expected values for the three arguments is, or what happens if negative values are passed.  The full specification of a method needs to include the semantic information too.  Commonly this is done with comments (Java docs), with formal specifications of pre-conditions, post-conditions, and invariants, with a set of test cases, or a combination.
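As a sketch of making such semantics explicit (the [0.0, 1.0] component range and the RGB packing here are assumptions chosen for illustration, not part of the original signature), the contract can be captured with documentation plus argument checks:

```java
/**
 * Hypothetical fuller contract for a color-picking method: each
 * component must lie in [0.0, 1.0]; anything else is rejected.
 */
public class ColorPicker {
    /** Returns the color packed as 0xRRGGBB. */
    public static int pickColor(float r, float g, float b) {
        checkComponent(r);  checkComponent(g);  checkComponent(b);
        return (Math.round(r * 255) << 16)
             | (Math.round(g * 255) << 8)
             |  Math.round(b * 255);
    }

    private static void checkComponent(float c) {
        if (c < 0.0f || c > 1.0f)   // the documented pre-condition, enforced
            throw new IllegalArgumentException("component out of [0,1]: " + c);
    }
}
```

With the contract written down, a test suite can check both the happy path and the documented failure mode.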

Ad-hoc testing is a commonly used term for software testing performed without planning and documentation.  These tests are intended to be run only once, unless a defect is discovered.  Such an approach is better than nothing, but is only useful for small (“toy”) problems.  A better approach with JUnit is discussed below.

Ad hoc testing is sometimes considered exploratory testing.  Ad hoc testing has been criticized because it isn’t structured, but this is also the main strength of exploratory testing: important things can be found quickly.  It is performed with improvisation; the tester seeks to find bugs by any means that seem appropriate.  (This takes experience, talent, and luck.)

Rubber duckie testing is an informal term that refers to a method of debugging code, not testing.  Also known as rubber ducking or rubber duck debugging.  The idea is that you can debug your code by forcing yourself to explain it, line-by-line, to an inanimate object such as a rubber duck.  The expectation is that upon reaching a piece of incorrect code and trying to explain it, a programmer will notice the error.

It is not possible to completely test any non-trivial program, to the point where you can prove no bugs remain.  (Show “Test case self-assessment” web resource.)  That doesn’t mean you shouldn’t use testing!  Rather, initially you will need to “cherry-pick” a few dozen tests that you have the time and budget for.  As bugs are discovered, or new features proposed, you add more test cases to the initial set.

Test code must be simple and clear.  If too complex, you can’t be certain the test isn’t flawed.  Even simple, short test methods should be “tested”.  I usually manage that by inserting a debugging line of code into the method, altering the result of the code under test.  In this way, I can force the test method to fail, so I can examine the resulting messages.  Then I comment-out or remove debugging lines.

Types of Testing

All testing can be defined as one of two main types:

·       Black-Box Testing — This means to test to the public interface (or specifications).  Such tests should be built from the design documents and not by inspecting the code.  In essence these tests make sure the code does what it is supposed to do, without testing the implementation details.

·       White-Box Testing — Also called glass-box testing or structural testing, such tests are designed to thoroughly cover all the code.  This requires you to inspect the code.  For example: make sure all if statements are tested in both branches, every method is called, every loop body is executed at least once, etc.

There are many different types of tests done.  Each has a different goal and is often done differently:

·       Unit Testing — These are tests that cover the functionality of a single unit of code: usually a class but also applies to individual methods or to whole packages.  Unit tests are black-box tests and don’t test the implementation; assertions (see below) can be used for implementation testing.

Unit tests are based on the design documents, not the implementation or the requirements/specifications.  The point is to ensure the module implements the design correctly.

Unit tests often use both legal and illegal sample data.  It is important to pick test data that doesn’t have special properties that might mask errors.  Often you duplicate tests with a few different sets of values.  Each separate item tested has its own test case.  The set of test cases for some unit comprises the unit test and is called a test suite.  A test case using only legal input values, one that shouldn’t cause any exceptions or errors, is called happy path testing.  You should make sure your test suite includes one (or a few) such tests, in addition to (numerous) tests for failure conditions.

Testing with illegal input data, even random data, is known as fuzz testing, or simply “fuzzing”.
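A minimal fuzzing loop can be sketched in a few lines.  Here parseCount is a hypothetical unit under test, and the only claim checked is that random input never produces anything worse than the documented exception:

```java
import java.util.Random;

public class FuzzSketch {
    /** Hypothetical method under test: parses a non-negative decimal count. */
    static int parseCount(String s) {
        int n = Integer.parseInt(s);   // may throw NumberFormatException
        if (n < 0) throw new NumberFormatException("negative: " + s);
        return n;
    }

    /** Feed random printable garbage; only NumberFormatException is acceptable. */
    public static boolean fuzz(int iterations, long seed) {
        Random rnd = new Random(seed);   // fixed seed: failures are reproducible
        for (int i = 0; i < iterations; i++) {
            char[] buf = new char[rnd.nextInt(10)];
            for (int j = 0; j < buf.length; j++)
                buf[j] = (char) (32 + rnd.nextInt(95));   // printable ASCII
            try {
                parseCount(new String(buf));
            } catch (NumberFormatException expected) {
                // documented failure mode: fine
            } catch (RuntimeException unexpected) {
                return false;   // the fuzzer caught a genuine bug
            }
        }
        return true;
    }
}
```

Note the fixed seed: real fuzzers log their seeds so any crash found can be replayed.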

·       Boundary-value Testing — These unit tests cover what happens when you use extreme but legal data.  Examples include: passing in negative, zero, or huge values for primitive types; Strings, arrays, and collections of zero length (or very large), etc.  Often simple tests using normal values won’t catch the “off by one” errors in loops, or the division by zero.  This is part of unit testing (and/or implementation testing with assertions).
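As a sketch, the hypothetical average method below shows the kind of bug boundary values expose: a naive int accumulator would overflow on two huge (but legal) values, while ordinary test data would never notice:

```java
public class BoundarySketch {
    /** Sums in a long so that extreme (but legal) int values don't overflow. */
    static int average(int[] a) {
        if (a.length == 0)
            throw new IllegalArgumentException("empty array");
        long sum = 0;              // an int here would fail the boundary test
        for (int v : a) sum += v;
        return (int) (sum / a.length);
    }
}
```

The boundary test cases are exactly the ones listed above: empty input, a single element, and values at the edges of the type's range.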

·       Smoke testing — Also known as a build verification testing, this is testing performed on a system prior to being accepted for further testing.  This is a “shallow and wide” black-box test that looks for answers to questions such as “Does it crash when launched?”, “Does the UI show?”, or “Do the buttons on the windows do anything?”  If you get a “no” answer to basic questions like these then the application is so badly broken there’s no point in more detailed testing.  When automated tools are used the tests are often initiated by the same process that generates the build itself.

In software engineering a smoke test generally consists of a collection of tests that can be applied to a newly created or updated computer program.  In this sense a smoke test is the process of validating code changes before the changes are checked into the larger product’s official source code collection.  Such tests are often done automatically by the build software or (source code) check-in software.

The term smoke testing comes from the plumbing trade.  It refers to the first tests made after repairs or first assembly to provide some assurance that the system under test will not catastrophically fail.  After a smoke test proves that the gas pipes will not leak or the circuit will not burn, the assembly is ready for more stressful testing.

·       Regression Testing — These are tests (both unit and whole system) designed to make sure that what was working before is still working now.  After adding new features or bug fixes to code you need to run regression tests to make sure your changes haven’t broken anything.  If you saved your unit tests and they are of the black-box type, they can be used for regression tests.

As bugs are found a good practice is to write a new test case to reproduce that bug and add that test case to your regression test suite.  Only then go and fix the bug.

·       Acceptance Testing — Sometimes called functional tests, these test that the system as a whole lives up to expectations.  Unlike other black-box tests, these test the code against the requirements and specifications, not against the design.  Often a contract will specify payment only after successful acceptance testing.  Such tests usually involve the customer playing the role of a user, and putting the system through various use cases or scenarios.  When such tests use only legal input values, that should work without errors, they are called happy path tests.  Such tests are more common with acceptance testing than unit testing.

·       Load and Stress Testing — These test the whole system (or major part, such as the web server application) under increasing load and for long periods of time.  When you only have loads in the range required it is called load testing.  Such testing can show stability and possibly uncover concurrency (multi-threading and multi-processing) problems.  When the load is increased beyond the design point it is called stress testing.  Such tests can often uncover weak spots in the design or implementation, and should also show if the application degrades gracefully (that is, it doesn’t crash and/or corrupt the valuable data).

·       Quality of Service (QoS) Testing — This is similar to load testing.  It is sometimes referred to as Quality Assurance Testing.  You need to make sure the application loads quickly and is responsive when running, even when network connections go down or bandwidth is limited.

·       Testing the interactions of multiple threads in an application can be important too, but is very hard to arrange.  Testing Java EE (multiple clients accessing a cluster of servers) is very difficult and requires special tools and techniques (mentioned later).

·       Testing GUI user interfaces is very difficult (but not impossible) to automate.  Mostly a human tester interacts with the GUI.  Automated testers use a script to simulate a human.  Some such tools also have a “monkey test” mode, where the tester automatically clicks and types at the GUI until it crashes.

·       Coverage — Choose test cases carefully!  Not every possible set of inputs and conditions can be tested.  You want the smallest number of tests that as fully as possible test (“cover”) every feature.  For example, if one input is a filename, you should test the code with very long names, short names, null names, and names with strange and even illegal characters.

You can use random generators to pick values such as numbers or text strings, in a given range (except for boundary-value testing).  This removes a tester’s bias which might mask errors.

However even if you have a test for every possible condition, you still haven’t thoroughly tested the system.  This is because a large number of errors may only appear when certain conditions occur together, for example very long filenames containing weird characters.  It is quite possible that testing each condition separately will not reveal such errors.

It is in general not possible to test all combinations of all features.  However a technique known as pairwise testing can generate a small number of test cases that will test every possible pair of features.  (Usually commercial software is used to generate the tests from a list of features and the number of values/conditions of each.)  (3-way and n-way testing is also possible.)  This type of testing is also known as combinatorial interaction testing (CIT).

For example suppose I needed to test software with 3 features, each of which contains two types of test values.  Then testing each feature by itself takes six test cases, and all combinations would take eight test cases.  But just four pairwise tests will test each feature in combination with each other feature.  Since an error caused by a single feature is likely (but not guaranteed!) to manifest when tested in combination with some other feature(s), just the four test cases will provide proper coverage.  For example:

Test Case      OS Type      Feature 2      Feature 3

    1          Windows        on             on

    2          Windows        off            off

    3          Linux          on             off

    4          Linux          off            on
Usually there are many more than three features, and each may have several different values or conditions to test.  For just 10 binary features, only 13 test cases are needed for all three-way feature interactions (as opposed to 2^10 = 1,024 exhaustive test cases).  For real-world software with dozens of features with several possible values each, the benefits are enormous!

Pairwise coverage won’t generally find errors that only occur when 3 or more features have specific values (conditions), but such errors are very rare anyway.
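The pairwise-coverage claim above is easy to check mechanically.  This sketch encodes four test cases over three two-valued features (the 0/1 encodings are arbitrary stand-ins for values such as Windows/Linux) and verifies that every feature-pair/value-pair combination appears in some case:

```java
public class PairwiseCheck {
    /** The four test cases: rows are cases, columns are three binary features. */
    static final int[][] CASES = {
        {0, 0, 0},
        {0, 1, 1},
        {1, 0, 1},
        {1, 1, 0},
    };

    /** True if every (feature-pair, value-pair) combination appears in some row. */
    public static boolean coversAllPairs(int[][] cases) {
        for (int f1 = 0; f1 < 3; f1++)              // first feature of the pair
            for (int f2 = f1 + 1; f2 < 3; f2++)     // second feature of the pair
                for (int v1 = 0; v1 < 2; v1++)
                    for (int v2 = 0; v2 < 2; v2++) {
                        boolean found = false;
                        for (int[] c : cases)
                            if (c[f1] == v1 && c[f2] == v2) found = true;
                        if (!found) return false;   // some pair never tested
                    }
        return true;
    }
}
```

Dropping any one of the four rows leaves some pair uncovered, which is why four is the minimum here.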

See pairwise.org for a list of tools, many free, which can generate the coverage table.  Note, you still need to turn that into JUnit or other tests to run.  One popular tool is the freely available PICT tool from Microsoft.  (Demo.)

There are other considerations to using CIT.  If your system has configuration settings included, you need to run all the unit tests for each combination of those.  If a test fails, you can’t be certain without additional testing (or code examination) which factor or pair of factors caused the failure.  Certain systems require slightly different approaches to CIT, such as event-driven systems where you need to test different sequences of events.  Coming up with the system model (the list of parameters and their possible values) is still a manual process, so if you miss something, it won’t be tested.  On the other hand, if you include too much, testing gets expensive and you may not be able to complete all scheduled tests.  For example, the popular DBMS “MySQL” was tested using CIT; it took a cluster of computers months to complete the test suite.

CIT testing is an active area of computer science research.

Another meaning of coverage in testing is the percentage of code that was exercised during the tests.  Such coverage tools can tell you what parts of your code were not included in any test.  An example tool is jcoverage.

Unit Testing with JUnit

JUnit is a very powerful unit testing framework and includes many features we won’t discuss here.  It is ubiquitous today, although other testing frameworks exist.  Download JUnit from junit.org and put the junit.jar file someplace.  Note, an additional jar file includes the Java docs for JUnit, which can be handy to have and book-mark.  For examples of using JUnit, see the old cookbook at the JUnit home, or look at the examples at the JUnit Wiki site.

Update CLASSPATH to include the junit-version.jar file.  Don’t forget to also add “.” to CLASSPATH too.  (JUnit uses CLASSPATH to locate all your classes and your test code, and not the extension directories!  So if testing your own jar files, make sure to list them on CLASSPATH.)

For my preferred setup, I install the JDK someplace without version numbers, C:\Java (or C:\Program Files\JDK, if you prefer).  That directory should be defined as the environment variable JAVA_HOME.  Next, I create a folder to contain all my extensions that I download as jar files: C:\Java\MyExtensions.  I can place any jar file in there, such as junit.jar.  Finally, I use the CLASSPATH wildcard mechanism:  CLASSPATH=.;C:\Java\MyExtensions\*.

Then I can update the JDK (to the same location) and not have to change any environment variables.  (The JDK installer doesn’t remove previous versions; do that manually before installing a new version of the JDK.)

Create a class to hold your test suite (set of unit tests, which become regression tests; note you can have multiple test suites if you wish):

import org.junit.*;  // The various annotations
import static org.junit.Assert.*; //AssertX methods
import org.junit.runner.JUnitCore; // Test runner

public class MyAppTestSuite {
    // test methods go here
}

Since unit tests shouldn’t need access to private members of your classes, they don’t need to go into the same package.  Nonetheless, it is common to do so, since you can use the JUnit testing framework for more than simple black-box unit tests.

Now all you do is add test cases.  Each test case is a public void method that takes no arguments and that is annotated with “@Test”.  The test cases can be in any order.  (They don’t necessarily run in the order you list them.)

A good naming scheme for such methods is testMethodDescriptionOfTest.  (You don’t get extra points for short names!)  (JUnit 3 required method names to start with “test”, but not v4.)  These tests have this pattern of code: setup - invoke a method to test it - examine result.  Here’s an example:

@Test
public void testPadLegalInputs ()  // test pad legal-inputs
 { final int num = 512, minWidth = 6;
   final String result = pad( num, minWidth );
   assertFalse( "null String returned from: pad("
   + num + ", " + minWidth + ").", result == null );
 }

The test case methods contain any code you need, and one or more “assertXyz” statements (See JUnit Java docs).  For example: assertTrue(“message”, cond), assertFalse(“message”, cond), assertSame(“message”, objRef1, objRef2), or assertEquals(“message”, obj1, obj2).  There are many others too (show JUnit Java docs for class Assert).

It is not uncommon to have one assert per method.  Thus, if testing legal inputs with three different sets of data, you would have three different (but very similar) methods.  The advantage is that when a test fails, no further code in that test method is run.  So if you had additional asserts in the same method, the additional tests won’t be run.

However, I think it is okay to group multiple test data sets into a single test method, as long as the method is testing just one aspect.  This approach has benefits too: a data-driven test method (using a loop) is simple and clear, and easy to add many data sets to.  And if one data set fails, it is usually pointless to run the others until that problem has been fixed.  For example, suppose a boundary value test fails on the first of ten data sets; the other tests for the same boundary value are not needed since you already know what the problem is.
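A data-driven version of the earlier pad test might look like the following plain-Java sketch (pad itself is a guessed implementation, and a real suite would report the failure through a JUnit assert rather than a return value):

```java
public class DataDrivenSketch {
    /** Hypothetical method under test: left-pads a number with spaces. */
    static String pad(int num, int minWidth) {
        StringBuilder sb = new StringBuilder(Integer.toString(num));
        while (sb.length() < minWidth) sb.insert(0, ' ');
        return sb.toString();
    }

    /** One aspect (legal inputs), several data sets; stops at the first failure. */
    public static String firstFailure() {
        int[][]  inputs   = { {512, 6}, {0, 1}, {7, 3} };
        String[] expected = { "   512", "0", "  7" };
        for (int i = 0; i < inputs.length; i++) {
            String actual = pad(inputs[i][0], inputs[i][1]);
            if (!actual.equals(expected[i]))
                return "pad(" + inputs[i][0] + ", " + inputs[i][1]
                     + ") returned \"" + actual + "\"";   // diagnostic message
        }
        return null;   // all data sets passed
    }
}
```

Adding a new data set is one line in each array, which is the main attraction of this style.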

Generally, I like to use assertFalse which causes the test to fail (and displays the message) if cond evaluates to true.  (I’ve always found that confusing!)  For example this tests that num <= max:

   assertFalse("num greater than max!", num > max);

(With assertFalse, the expression matches the message.)

What if a test should throw an Exception, or time out?  You can pass optional parameters to the @Test annotation.  This test should throw an Exception:

 @Test(expected=IndexOutOfBoundsException.class)
 public void testFooIndexOutOfBoundsException() {
   List<Object> emptyList = new ArrayList<>();
   Object o = emptyList.get(0);  // this in foo
 }

Note that uncaught exceptions (other than the expected one) cause a test to fail, but without any helpful error message.

The optional parameter timeout causes a test to fail if it takes longer than a specified amount of clock time (in milliseconds). The following test fails:

 @Test(timeout=1000)  // milliseconds
 public void testFooTimeOut() {
   for(;;) ;  // This code in method foo
 }

Generally, unit tests cover public interfaces of classes or packages, so you can simply import your packages and classes under test into your test suite.  You can create a new package foo_test, for the tests for package foo.

Rarely, you may wish to unit test some package-private or private methods, or use the JUnit test framework for other sorts of tests.  Your test suite class(es) will need access to your code to use any non-public methods or fields.

There are a number of techniques for this situation.  The simplest way is to put the testing classes into the same package as the code being tested.  The best approaches use reflection to change the accessibility when testing.  You can do that yourself, or use JUnit-Addons.  Another possibility is to nest the class you wish to test inside your unit test class.  That gives your test code special access but can cause deployment issues.  A good discussion of this is found on stackoverflow.

Test Fixtures

You can add “helper” methods if you want to the test suite class (or even additional classes).  Such methods are handy when you need to do the same steps in several different test cases.  For example in complex situations you may need to create a bunch of objects, create files, and/or setup database tables.  The environment that your tests require is called the test fixture.

Besides manually invoking helper methods from within each test case method, JUnit allows you to create test fixtures and work with them easily.  Before each and every test case is run, JUnit will invoke all the methods annotated with @Before and afterward invoke all methods annotated with @After.  You can have any number of these.

JUnit4 also provides @BeforeClass and @AfterClass annotations for a pair of methods that run once per test cycle, to do any one-time set up and tear down (example: initialize a test database).
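The per-test fixture lifecycle can be sketched without the framework itself: for every test case, JUnit calls the @Before methods, then the test, then the @After methods.  This plain-Java mock-up (method and field names are made up) records that order:

```java
import java.util.ArrayList;
import java.util.List;

public class FixtureSketch {
    final List<String> log = new ArrayList<>();

    void setUp()    { log.add("before"); }  // would be annotated @Before
    void tearDown() { log.add("after");  }  // would be annotated @After
    void testOne()  { log.add("test1");  }
    void testTwo()  { log.add("test2");  }

    /** What JUnit does for each @Test: a fresh set-up/tear-down around every case. */
    public List<String> runAll() {
        setUp(); testOne(); tearDown();
        setUp(); testTwo(); tearDown();
        return log;
    }
}
```

Because set-up runs before every case, no test can depend on state left behind by another.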

In some cases you may need to create a number of realistic objects to run your tests on some class.  For example, suppose you want to unit test a mail server (MTA).  To test it you will need some MUA objects, but for this test you don’t want to wait until a fully functional MUA class is completed and tested.  You need MUA objects that behave just realistically enough so you can test your MTA.  Such objects are called mock objects.  Rather than create these objects manually there are ways to simplify their creation; all you need is the interface for the object and a mock object that implements it can be used.  See jmock.org for more information.

To test a method that produces console output, use something like this:

PrintStream savedOutputStream = System.out;
ByteArrayOutputStream baos = new ByteArrayOutputStream();
PrintStream pw = new PrintStream(baos);
System.setOut(pw);
    // do something that writes to System.out
System.setOut(savedOutputStream);  pw.flush();
String actualOutput = baos.toString();

This sort of thing illustrates a somewhat subtle advantage of test driven design.  There are often small differences in an interface design that make all the difference in how easy it is to test.  Test driven design naturally leads to a preference for designs that can be tested over designs that are harder to test.  For example, many output methods could be written to take a PrintWriter or PrintStream parameter.  Methods written that way are easy to test by giving them a PrintWriter based on a StringWriter or a PrintStream based on a ByteArrayOutputStream.
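A sketch of that design choice (names hypothetical): the method writes to whatever PrintStream it is handed, so the test captures output in memory with no System.setOut juggling at all:

```java
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;

public class ReportSketch {
    /** Writes to the stream it is handed, instead of System.out directly. */
    static void printGreeting(PrintStream out, String name) {
        out.println("Hello, " + name + "!");
    }

    /** Test helper: capture the output in an in-memory buffer. */
    public static String capture(String name) {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        printGreeting(new PrintStream(baos), name);
        return baos.toString();
    }
}
```

Production code simply calls printGreeting(System.out, ...), so nothing is lost by the extra parameter.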

Running Tests with JUnit

Compile the test suite class and pass its name as an argument to the JUnit test runner:  java org.junit.runner.JUnitCore MyAppTestSuite ...

A simpler way is to add a main method to your MyTestSuite class that invokes the runner; then you just use: java MyAppTestSuite.  Here’s how:

    public static void main ( String [] args )
    {   JUnitCore.main( "MyAppTestSuite" );   }

(Show TextKitTestSuite.)  The output will show one dot per successful test, one ‘E’ per failure, and one ‘I’ per ignored test.  Here’s a sample output:

JUnit version 4.10
....
Time: 0.032

OK (4 tests)

When a test fails, JUnit will abort that test case by throwing an exception.  This will print a message about the failure and print a stack trace; one line in that will show the line number of your test case that failed.  Note the other tests still get run.

JUnit4 no longer includes a GUI runner, but there are stand-alone ones available, GUI ones built into Eclipse and NetBeans, and an ant task with XML output.  (There’s a nice NetBeans plugin “Code Coverage” that highlights code run during testing, so you can see what code was and wasn’t tested.)

Additional Notes:

·       Eclipse and NetBeans have JUnit 4 support.  Just right-click the class (or its src file) in the package explorer, then select new... --> JUnit Test Case.  Repeat for each test case.  To run all the test cases, right-click on the package name, select Run As...-->Junit Test.  (JUnit 3 is also supported.)  You may wish to move the test cases to a single class; Eclipse seems to put each test case into its own class (but all such classes do run).

·       Keep test data files with the test suite classes, and access them with Class.getResource.  When code is moved, the pathnames won’t break.

·       Avoid side effects in your test cases.  If you must have these make sure you use setUp (@Before) and tearDown (@After) methods to restore the system state for each test case.

·       Test to (public) interfaces for unit tests.  Leave implementation (or “white box”) testing for assertions.

·       Some code depends on other code.  In those cases you can create “stub” methods that simulate the correct behavior of that other code, at least well enough for testing purposes.  If this isn’t easy to do, consider duplicating the production environment for testing.  (Include any frameworks and databases.)

·       If you have one class extending another, you should mirror that by having one test suite extend another (rather than extend TestCase directly).  All the superclass tests will be run automatically!

·       Don’t put your test code in the same directory tree as your application code.  It will be hard to separate out later, and you don’t want to ship that code (or those .class files).  Make sure CLASSPATH lists where your code is.

·       Don’t waste effort by testing every single method.  Getters and Setters (a.k.a. accessors and mutators) are usually too simple to bother testing.

·       Adding more test cases will eventually reach the point of diminished returns, where further testing is no longer cost effective.  Exactly when you’ve reached this point is unknowable!  However most students and new (and self-taught) developers write far fewer test cases than they should.

·       Automating Testing — Use scripts, make, or (as many today prefer) “ant” (from apache.org).

The TestNG testing framework is a competitor to JUnit.  All you do to define a test method is precede it with @Test annotation.  When you later run the TestNG program, giving it the class containing your tests, it finds the test methods and runs them using this annotation.  Output is plain HTML that can be viewed in any browser (Eclipse has a plug-in for this).  Visit TestNG.org for more information.

Assertions — Implementation Testing

Useful for testing the implementation details, assertions were added to Java in version 1.4.  You embed the assertion right in your code.  The assertion contains a Boolean expression that gets evaluated.  If false, an AssertionError is thrown.

By keeping the test with the code it becomes easy to change the tests when you change the code.  Assertions are complementary to unit tests: one tests black-box (against the specification), the other tests the implementation details: private methods, properties, and private nested classes that are not part of the public (or protected) interface.

For example, if a public method takes an int num as an argument that should be greater than zero, you should use an if statement to test this and throw an IllegalArgumentException if num is not positive.  However, for a private method you don’t need to worry about this.  You should test it but it shouldn’t be tested every time in the production environment.  Assertions are perfect for this:

    assert num > 0;

The beauty of assertions is that they can be turned on or off at runtime.  If turned off there is no runtime overhead for them!  You can selectively enable assertions too, to limit them to small parts of your application that you are currently testing.

An assertion looks like this:

   assert booleanExpressionThatShouldBeTrue;

If assertions are enabled (as when testing) and the expression is false, an AssertionError is thrown (Errors are not Exceptions but they are Throwables), containing information about where the problem was located.

You can also add a second argument to assert:

    assert booleanExpression : expression2;

Which will evaluate expression2 and pass it to the AssertionError constructor.  This will in turn convert it into a String and display it when the error is thrown.  Expression2 is often an informational String concatenated with the values of various variables (or method return values) that would aid in debugging.

Beware of invoking methods with side effects in assertions!  (Methods with side effects should be avoided in any case.)  Consider:

    assert methodWithSideEffect() > 0 : "Whoops";

Such programs behave differently when assertions are on versus off.

One mistake to avoid is to use assertions when you should be using if statements and throwing exceptions (e.g., checking parameters to public methods, a.k.a. pre‑conditions in a public method).  When something is part of the contract or implementation of your code, don’t use assertions.  Remember that these can (and usually are) turned off!  That could break your program.

On the other hand, any comments that state something “should never get here” or “At this point we know num must be zero” can and should be converted into assertions.  Use assertions for proving: pre- and post- conditions, and invariants (e.g., the accounts balance).  Use them for switch statements with no default case.  (See assertion usage notes and/or programming with assertions for more information on using assertions for pre- and post- conditions, and invariants.)
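As a sketch (the month-to-season mapping is invented for illustration), here is a switch whose impossible default case is converted from a “should never get here” comment into a thrown AssertionError, with an assert guarding the pre-condition of this non-public helper:

```java
public class SeasonSketch {
    /** Returns a rough season name for a month number 1..12. */
    static String band(int month) {
        assert month >= 1 && month <= 12 : "bad month: " + month;  // non-public: assert, not an if
        switch ((month % 12) / 3) {
            case 0: return "winter";   // Dec, Jan, Feb
            case 1: return "spring";
            case 2: return "summer";
            case 3: return "fall";
            default:
                // was the comment "should never get here"
                throw new AssertionError("unreachable: month=" + month);
        }
    }
}
```

The explicit throw in the default case documents the reasoning and fails loudly (even with -ea off) if a future edit breaks the invariant.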

Assertions are disabled by default.  To enable them requires the use of the “-ea” (or the longer “-enableassertions”) command line switch to the “java” command (if not using Sun’s JVM you must read the docs to see how this works with your JVM).  There is also a “-da” switch to disable assertions.  These can be combined to selectively enable assertions just where you want:

-ea                                 enable all assertions (except for system classes)

-ea:package...       enable in just this package and any sub-packages

-ea:...                        enable in the default (unnamed) package

-ea:class                   enable in just this class

For example:

java -ea:com.wpollock... -da:com.wpollock.Foo com.wpollock.MyApp

To enable assertions for system classes and packages, use the “-esa” and “-dsa” switches, which take no arguments.  There is rarely if ever any need to do this.

Posted in comp.lang.java.programmer on 11/24/07 by Stefan Ram (with comment by Patricia Shanahan:  “I was just writing a small and simple program to calculate a random number from the set {0,1}:

   import java.security.SecureRandom;
   final SecureRandom rnd = new SecureRandom();
   final int r = rnd.nextInt();
   final int result = ( r / 69313 )% 2;

“Just for the heck of it, I added:

   assert result >= 0 && result < 2;

“As I wrote this, I thought: ‘This assertion is completely unnecessary, because it is obvious that the result of »% 2« is always in the set {0,1}.  So what am I doing here?  Just stating the obvious to needlessly enlarge the source code?’  The next thing happening then, of course, was:

   Exception in thread "main" java.lang.AssertionError

“[PS:] The moral, perhaps, is that the more deeply you are assuming something, the more valuable it is to assert it.  For those not used to the ‘assert’ keyword yet, the key concept is “invariants”.  The presumed invariant wanted in Stefan’s example was “that the result ... is always in the set {0,1}”; it just happened that ... »% 2« was not enough to ensure it.  Math.abs() would have helped ensure it [since nextInt can return negative numbers -WP].  This is an excellent example of how assertions help make a program bug free.”
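To see why the assertion fired: Java’s % operator keeps the sign of its left operand, so (r / 69313) % 2 is -1 for roughly half of the values nextInt can return.  A sketch of the bug and one possible fix (Math.floorMod here; Math.abs on r, as suggested above, would also work):

```java
public class RandomBitSketch {
    /** The original expression: yields -1 whenever r / 69313 is negative and odd. */
    public static int buggy(int r) { return (r / 69313) % 2; }

    /** Math.floorMod always returns a non-negative result for a positive modulus. */
    public static int fixed(int r) { return Math.floorMod(r / 69313, 2); }
}
```

Rerunning the program with the fixed expression, the assert result >= 0 && result < 2 invariant holds for every possible int.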

Enterprise Application Testing

Testing EJBs, web services, and other enterprise application code is much harder than simple unit testing, in part because such code depends on services provided by the application server (EJB container).  Entity EJBs don’t usually contain much logic (they just represent persistent data) so unit testing them is usually sufficient.  However session EJBs are managed by the container, which instantiates them as needed, passes them around, invokes their methods, hooks them up to database connection pools and other resources, etc.  (An analogy is testing a complex AWT or swing application without using a JVM that provides the event handling.)

One way is to test using only remote interfaces:  You set up the code on a test server and run it, and then the tests connect to the server just as a client would.  Such testing can be quite thorough for EJBs that use RMI and thus tend to have extensive remote interfaces.

When the web server (presentation tier) runs on the same physical host as the application server (business logic tier), the two usually use local interfaces to communicate rather than remote interfaces (i.e., simple method calls or communication via sockets).  In this case you may need to deploy your tests as another application in the same application server, and have the test EJBs invoke the application’s EJBs.  This is much harder to set up and control, and the testing code must be removed before deployment.

If the application is simple it may be possible to create “stubs” for the application server objects.  The app. talks to your stubs rather than a real application server.

One tool that can be used is Cactus (from jakarta.apache.org/cactus).  This tool requires JUnit and builds on it to allow you to test EJBs, servlets, and JSPs.  Cactus resides in the application server.  You write regular JUnit tests that get packaged into a WAR and deployed (copied) to the application server.  When you run the test suite on a client machine, the client talks to Cactus on the server.  Cactus “redirects” the connections to the EJBs under test, runs the (duplicate) tests on the server itself, and sends the results back to the client.

Both JUnit and Cactus testing can be automated.  A simpler way to test EJBs is with JUnitEE (from www.JUnitEE.org), but these tests are run from a web browser and are thus not easily automated.

To test the EIS-tier (database interaction) usually requires access to a test database with sufficient data to appear realistic.  (A sanitized copy of a production database would be ideal.)  This approach still requires you to create a set of SQL scripts to restore the DB to an initial state for each run of your test suite, and a set of queries to examine the state of the DB after the code executes to see if the test succeeds or fails.  DB code is central to many applications and can be complex: caching, transactions, and rollbacks in the event of errors (or terminated sessions) all must be tested.

To test the Web-tier is difficult too; you must deal with some browser- and user-specific issues: re-submission, submission from multiple windows, timeouts, implications of the back button, browser compatibility, caching proxy servers, and resistance to denial of service attacks are all important elements.  JSP pages have additional difficulties, as they don’t even exist as classes until deployed (the Java code in them is collected and compiled into servlets by the server when invoked).  Cactus can be used for this testing too, as well as for acceptance testing of the web tier.

Two additional tools that can help are Marathon, used to test AWT or Swing GUIs, and Selenium, used to automate web browsers.  (Both are open-source.)

Marathon runs GUI based acceptance tests against Java/Swing applications.  It provides an integrated environment for test script creation and execution.  Currently, Marathon supports Jython and JRuby script models for recording the test scripts.  You can download it from sourceforge.net/projects/marathonman.

SeleniumHQ is used to automate web browsers from a script.  While this can be used for many things, mostly it is useful to test web applications.  Selenium can be integrated with JUnit as well.  Selenium can query the state of anything in the HTML page’s DOM or JavaScript environment.  It also has the ability to inject data from a database into the test session, click on buttons, and programmatically confirm that the generated report matches the data from the database.

To load and stress test web interfaces, about the easiest tool is Microsoft’s free Web Capacity Analysis Tool (WCAT), formerly called the Web Application Stress Tool.  Other good tools are slamd (from www.slamd.com) and Apache JMeter.  Selenium could be used as well.


A comment about the search engine project:

If you’ve had difficulties editing each other’s code, you now know why you should first implement the public interfaces, with comments.  Then and only then, do you each work on your portion of the code.  Other group members never need your code, just the agreed interface (and some stub methods, for testing).

Also, it pays to use the MVC design pattern when you can.  For this project, the various (two) user interfaces are the view.  The index data (the Map), and the list of indexed files, are both models.  Everything else is the business logic: the file operations, and the actual searching.  Even this part can be split into the persistence (long term data storage) and the business logic.

Such a design is easier to work on, in groups.  There is very little coupling between the different parts, so everyone can work on their part of the project independently.  In addition, you can change parts of the application easily, say to use XML files or a database, or to change a stand-alone application to a Java-EE web-based one.

Logging and Tracing

Logging [partly adapted from Apache.org/log4j documentation]

Inserting log (print) statements into your code is a low-tech method for debugging it.  However, this is often the only way to debug enterprise (distributed) applications.  Adding print statements at various points in your code allows the developer to see the state of an object as it changes, and to catch any unexpected values when they occur.

Any added log statements increase the size of the code and reduce its speed, even when logging is turned off.  A preprocessor such as the one used with C++ can strip out debugging/logging statements when building for production.  However, any change can affect the program’s behavior, especially in a multi-threaded program running on a multi-core system, so just turning logging on (or off) can cause a problem to stop manifesting.  In Java a preprocessor is not generally available anyway; instead, Java logging statements are designed to have a small, fixed overhead when disabled.

Deciding where to insert log statements into the application code requires some planning.  In some applications about 4 percent of the code is dedicated to logging!  A medium or large sized application may contain thousands of log statements.

Logging is related to tracing.  The difference is that tracing is often at a lower level, and doesn’t require statements to be added to the code.  Tracing output has less detail, can be used to trace hardware, firmware, kernel, or applications, and can generate much more data.  For example, if some application fails due to some permission error, but the developer didn’t include enough information to pinpoint the problem, you can trace the application to see what it tries and where it fails.  (Show trace output; point out that the output can be filtered to reduce the clutter.)

A popular logging tool used today is Apache.org’s log4j.  The log4j package is designed so that logging statements can remain in shipped code without incurring a significant performance cost when disabled.  Logging behavior can be controlled at runtime by editing a configuration file, without any modification to the application binary.  (This is similar to how assertions work.)

No matter which logging framework you use, the way a programmer uses it is the same.  You create a logger object, and use its methods to generate log messages.  The framework provides ways to control which log statements to ignore, where to send the output, and the format of log messages (e.g., plain text, syslog format, XML).

Logging statements are put into the code at key points (any spot in your code at which you might want to generate a message).  Each log statement is assigned a log level: one of TRACE, DEBUG, INFO, WARN, ERROR, or FATAL.  (Note: the available log levels depend on which logging framework you use, but they are all similar.)

Logging output is controlled by selecting a log level and a part of the application (a whole package, a single class, or just part of a class).  If you set the logger’s level to, say, INFO, then only log statements of level INFO or higher will produce any log messages; other log statements won’t do anything.

The raw log information needs to include a timestamp, the location in the code where the log message was generated, and the log message itself.  When you configure logging, you must specify the log output format as well as a destination for the log messages (not as easy on a Java EE application running on multiple hosts!)

Logging provides developers with details about application failures.  On the other hand testing provides quality assurance and confidence in the application.  Logging and testing should not be confused.  They are complementary.

Be cautious when log messages include data from external sources.  Users can create fake log entries that way.  For example:

Jun  5 22:29:27 YborStudent sshd[21183]: session opened for user auser
Jun  5 22:29:29 YborStudent [21184]: user auser attempted to access file foo
Jun  5 22:29:31 YborStudent sshd[21195]: session opened for user fake

Jun  5 22:30:14 YborStudent sshd[21183]: session closed for user auser

Here the “filename” supplied by auser was “foo” followed by an embedded newline and the entire forged “session opened for user fake” line; everything after “access file ” in the second entry came from the user.  To prevent this, you should sanitize any external data before using it in a log message.
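A simple defense is to strip line breaks from any user-supplied value before it reaches the logger.  A minimal sketch (the sanitize method name and message text are invented for this example):

```java
public class LogSanitizeDemo {
    // Replace CR/LF runs with a space so attacker-supplied text
    // can't start a new (forged) log line.
    static String sanitize(String s) {
        return s.replaceAll("[\\r\\n]+", " ");
    }

    public static void main(String[] args) {
        String filename = "foo\nJun  5 22:29:31 YborStudent sshd[21195]: "
                        + "session opened for user fake";
        String entry = "user auser attempted to access file " + sanitize(filename);
        System.out.println(entry);   // one line; the forged entry never appears
    }
}
```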

Using Log4J

Although log4j is the most common, Java now includes a built-in logging facility.  See the java.util.logging API.

The log4j environment can be configured programmatically.  However, it is more flexible (and common) to configure log4j using configuration files.  Currently (version 1.2) configuration files can be written in either XML or in Java properties (key=value) format.
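For example, a minimal log4j 1.2 properties-format configuration file that sends DEBUG and above to the console might look like this (the package name com.example.dao is a placeholder):

```properties
# Root logger: level DEBUG, output to the appender named "stdout"
log4j.rootLogger=DEBUG, stdout

log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
# elapsed ms, [thread], level, logger name, message
log4j.appender.stdout.layout.ConversionPattern=%-4r [%t] %-5p %c - %m%n

# Quiet one chatty package without touching the code:
log4j.logger.com.example.dao=WARN
```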

Here is a trivial but complete example, stolen from the log4j docs (download log4j jar file, put in ext directory):

 // File MyApp.java:
 import org.apache.log4j.*;

 public class MyApp {
    // Define a static logger var that references the
    // Logger instance named "MyApp":
    static Logger logger = Logger.getLogger(MyApp.class);

    public static void main ( String [] args ) {
       // Use a simple conf. that logs on the console:
       BasicConfigurator.configure();

       logger.info("Entering application.");
       Bar bar = new Bar();
       bar.doIt();
       logger.info("Exiting application.");
    }
 }

 // File Bar.java:
 import org.apache.log4j.*;

 class Bar {
    static Logger logger = Logger.getLogger(Bar.class);

    public void doIt() {
       logger.debug("Did it again!");
    }
 }

The output (on the console) will look like this:

  0  [main] INFO MyApp - Entering application.
  36 [main] DEBUG Bar - Did it again!
  51 [main] INFO MyApp - Exiting application.

(The number is elapsed milliseconds since the logger was initialized.  This is only the default layout; you can include lots of different information, in any format you can imagine — or that is required by your project requirements).

Using java.util.logging

After log4j became popular, Sun added logging facilities to the JDK in Java 1.4 (the java.util.logging package).  There is a cool companion project of log4j called Chainsaw, a GUI log-viewer tool that works with both Java logs and log4j logs, and can even show live events.

The concepts of logging are the same for Java’s logging as for log4j, but the exact details differ.  (Show the logger demo Web resource.)  One difference is, with Java’s logging, you create a hierarchy of loggers.  You can have a root logger for the whole application, child loggers for each package, grandchild loggers for each class, and even lower-level loggers for parts of a class.  The logger names can be anything, but using the fully-qualified class name (for example, “Foo.class.getName()”) is common and provides a natural hierarchy.  By changing the configuration settings, you can enable/disable loggers without changing the code.

Each logger has a log level set on it.  Any messages with a lower level are simply and quickly ignored.  Otherwise, the logger creates a log record and sends that to any output handlers defined for that logger.  It also sends the record to the parent logger, which works the same.  It is the handler that produces the output.  By default, the root logger uses a console handler, and loggers you create have no handlers.  (So by default, your log messages travel from your code, to your logger, to its parent logger, ..., to the root logger, and then to the console handler.)  There are a few standard handlers, and you can create new ones.

Note, every logger and handler has its own log level.  By default, the root logger’s level is INFO.  If you don’t set the level of your own loggers, it defaults to “null”, which means it uses the same log level as its parent.  For a log message to appear, every logger and handler it passes through must have a level set that would permit the message.  It is common to only have a handler set on the root logger.
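These level-inheritance and handler-propagation rules can be seen in a small runnable sketch.  The logger names and messages are made up, and a custom handler is used instead of the console so the output is easy to capture:

```java
import java.util.logging.*;

public class JulDemo {
    public static void main(String[] args) {
        Logger app = Logger.getLogger("com.example.app");      // parent
        Logger dao = Logger.getLogger("com.example.app.dao");  // child; its level is
                                                               // null, so it inherits
        app.setLevel(Level.FINE);

        // Collect output in a custom handler instead of the console:
        final StringBuilder seen = new StringBuilder();
        Handler h = new Handler() {
            @Override public void publish(LogRecord r) {
                seen.append(r.getLevel()).append(": ").append(r.getMessage()).append('\n');
            }
            @Override public void flush() { }
            @Override public void close() { }
        };
        app.addHandler(h);                // records from dao propagate up to app...
        app.setUseParentHandlers(false);  // ...but stop before the root's console handler

        dao.fine("opening connection");   // FINE >= inherited FINE: logged
        dao.finest("buffer state");       // FINEST < FINE: dropped
        app.info("startup complete");     // INFO passes too

        System.out.print(seen);
    }
}
```

Note that dao never had a level or a handler set on it; its messages pass or fail based on the level inherited from app, and its records are printed by the handler attached to app.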

If setting the log level at any point isn’t enough control for you, every logger and handler can have a filter attached.  The filter takes each log record and returns a boolean indicating whether the record should be published (a false return drops it).
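Since java.util.logging.Filter has a single isLoggable method, a filter can be written as a lambda.  Here is a sketch that drops any record mentioning a password (the logger name and messages are invented):

```java
import java.util.logging.*;

public class FilterDemo {
    public static void main(String[] args) {
        Logger log = Logger.getLogger("com.example.filtered");
        log.setUseParentHandlers(false);  // keep the root console handler quiet

        final StringBuilder seen = new StringBuilder();
        log.addHandler(new Handler() {
            @Override public void publish(LogRecord r) {
                seen.append(r.getMessage()).append('\n');
            }
            @Override public void flush() { }
            @Override public void close() { }
        });

        // Drop any record whose message contains "password":
        log.setFilter(r -> !r.getMessage().contains("password"));

        log.info("user logged in");       // passes the filter
        log.info("password=hunter2");     // dropped by the filter

        System.out.print(seen);           // only "user logged in" appears
    }
}
```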

Management Tools: JMX and MBeans

Management of a large application can be a problem.  With an EE application running on a cluster, it can be very hard to see what parts (classes) of the application are heavily used, which versions are deployed, and to turn on/off parts of the application functionality.  If some part of an application running on a cluster fails, or is just over-burdened and slow, how will you know?  Neither logging nor testing can help you here.

Code that supports management tools is often referred to as instrumented.  Managed applications can be updated, reconfigured, and have parts of the application enabled (deployed) and disabled (undeployed).  Managed applications provide statistics about application performance (in whole and for various parts).  This data is used by managers to decide on configurations and hardware upgrades.  Finally managed applications can tell a system administrator that something bad has happened (rather than wait for customer complaints).

Every feature of an enterprise application should have a “switch” to enable/disable just that feature.

JMX (Java Management Extensions) technology provides a simple, standard way of managing resources such as applications, devices, and services.  It can be used to monitor and manage resources as they are created, installed and implemented, and to monitor and manage the Java Virtual Machine.

Using JMX technology, a given resource is instrumented by one or more Java objects known as Managed Beans or MBeans.  These MBeans are registered in an MBean server that acts as a management agent and can run on most devices enabled for the Java programming language. 

An MBean is a Java object that represents a resource to be managed.  It has a management interface consisting of named attributes that can be read and written, named operations that can be invoked, and notifications that can be emitted by the MBean.  Note the JVM (and possibly the OS itself) can be managed the same way; such system-resource management beans are called MXBeans.
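Here is a minimal, self-contained sketch of a “standard MBean” (all names are invented for the demo).  By convention the management interface must be public and named after the implementing class with an MBean suffix; only the members declared in that interface are visible to management clients:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class JmxDemo {
    // Management interface: exposed as the attribute "Count" and operation "reset".
    public interface CounterMBean {
        int getCount();
        void reset();
    }

    // The resource being managed; hit() is NOT in the interface, so not exposed.
    public static class Counter implements CounterMBean {
        private int count = 0;
        public void hit()     { count++; }
        public int getCount() { return count; }
        public void reset()   { count = 0; }
    }

    public static void main(String[] args) throws Exception {
        Counter c = new Counter();
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("com.example:type=Counter");
        mbs.registerMBean(c, name);

        c.hit();
        c.hit();
        // jconsole would read the attribute remotely; here we use the local server:
        System.out.println(mbs.getAttribute(name, "Count"));   // prints 2
    }
}
```

Once registered, the Counter shows up in jconsole under the “MBeans” tab, where you can read Count and invoke reset by hand.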

JBoss (and other Java EE servers) have a management console that uses JMX to discover the MBeans and MXBeans, and provides an interface where you can get information (statistics) about them, deploy/undeploy them, and view notifications.  This sort of technology is very valuable for managing Java EE applications, especially in a cluster.  See the Javadoc guide for management, the tool docs for jconsole and the newer jvisualvm, and the package API docs (javax.management.*).  (Compile using the “‑g” option to include debugging info.)  Try this (all one command):

  java -Dcom.sun.management.jmxremote ^
    -Dcom.sun.management.jmxremote.port=1090 ^
    -Dcom.sun.management.jmxremote.ssl=false ^
    -Dcom.sun.management.jmxremote.authenticate=false ^
    -jar "%JAVA_HOME%\demo\jfc\Java2D\Java2Demo.jar"

Then from a different window (or even host) try:

java -jar "%JAVA_HOME%\demo\management\JTop\JTop.jar" localhost:1090

And from yet another window, try:   jconsole localhost:1090 

A simpler demo:

java -jar "%JAVA_HOME%/demo/jfc/Java2D/Java2Demo.jar"

Then use task manager (or ps on Unix) to determine the process ID (pid) of that.  Then from another command line prompt, run:

jconsole pid

Then try:

jvisualvm (Show some features, e.g., a heap dump, then show String objects.  Or show profiling by CPU time or memory use.  Can add plug-ins, too; then show Visual GC.)

(Note!  The logging and management tools may not work if your code isn’t in a package!  MBeans have trouble with classes in the default, nameless package.)