Archive for the ‘Java/J2ee’ Category

In an object-based application, most objects are passive. A passive object just sits there waiting for one of its methods to be invoked. A passive object’s private member variables can only be changed by the code in its own methods, so its state remains constant until one of its methods is invoked. In a multithreaded environment like Java, threads can run within objects to make the objects active. Objects that are active make autonomous changes to themselves.

Sometimes in modeling a system, it becomes apparent that if some of the objects were active, the model would be simplified. Earlier in this book, classes that implemented Runnable were instantiated, passed to one of the constructors of Thread, and then start() was invoked. This style required a user of a class to know that a thread needed to be started to run within it, creating a burden on the user of the class. In addition, because the user of the class created the Thread object for it, a reference to the Thread was available for misuse. The user of the class could erroneously set the priority of the thread, suspend it at a bad time, or outright stop the thread while the object it was running in was in an inconsistent state. Having to activate objects externally is both inconvenient and potentially hazardous. In this chapter, I’ll show you how to have an active object transparently create and start up its own internal thread.

Simple Self-Running Class

The class SelfRun, shown in the following listing, demonstrates a simple example of an active object. During construction, it automatically starts an internal thread running.

Code Listing SelfRun.java—A Simple Self-Running Class
public class SelfRun extends Object implements Runnable {
    private Thread internalThread;
    private volatile boolean noStopRequested;

    public SelfRun() {
        // other constructor stuff should appear here first ...
        System.out.println("in constructor - initializing...");

        // Just before returning, the thread should be
        // created and started.
        noStopRequested = true;
        internalThread = new Thread(this);
        internalThread.start();
    }

    public void run() {
        // Check that no one has erroneously invoked
        // this public method.
        if ( Thread.currentThread() != internalThread ) {
            throw new RuntimeException("only the internal " +
                "thread is allowed to invoke run()");
        }

        while ( noStopRequested ) {
            System.out.println("in run() - still going...");

            try {
                Thread.sleep(700);
            } catch ( InterruptedException x ) {
                // Any caught interrupts should be habitually
                // reasserted for any blocking statements
                // which follow.
                Thread.currentThread().interrupt();
            }
        }
    }

    public void stopRequest() {
        noStopRequested = false;
        internalThread.interrupt();
    }

    public boolean isAlive() {
        return internalThread.isAlive();
    }
}
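
To see SelfRun in action, a small driver class like the following can be used. This is just a sketch of my own (the SelfRunMain name and the timings are not from the listing): construction alone starts the internal thread, and stopRequest() shuts it down cleanly without ever exposing the Thread reference.

public class SelfRunMain {
    public static void main(String[] args) throws InterruptedException {
        // Construction alone starts the internal thread running.
        SelfRun sr = new SelfRun();

        // Let the internal thread print a few messages.
        Thread.sleep(3000);

        // Politely request a shutdown; no Thread reference is exposed.
        sr.stopRequest();

        while ( sr.isAlive() ) {
            Thread.sleep(100);
        }
        System.out.println("in main() - internal thread has stopped");
    }
}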

One of the SOA Principles:

1. LOOSE COUPLING

Loose coupling is a principle by which the consumer and service are insulated from changes in underlying technology and behavior. In essence, the loose coupling principle describes a logical separation of concerns: the consumers in our SOA are intentionally separated from any direct or physical connection to services. The intent is to protect the individual integrity of each SOA consumer and SOA service, and to avoid physical dependencies between them (see Figure 2.1).

When this principle is applied to the design of our services and the service interface, we can mitigate the impact of change. The most common example is an SOA service that has previously been implemented and deployed onto the ESB. In this example, a number of consuming applications are successfully using the service. As new consumers find the service, they may also require additional functionality and data not defined in this service. If the service were tightly coupled with each of the previous consumers, the ability to add new functionality to the service would be significantly limited: those previously existing consumers would likely be affected by changes to the service. Alternatively, when consumers and services are loosely coupled, there are techniques that can be applied to help mitigate the impact of change on existing consumers of a service. Previous consumers then remain unaware of, and generally unaffected by, new additions to the service functionality.

To comply with the loose coupling principle, consumers and services do not communicate directly. As we learned in Chapter 1, consumers and services communicate via messaging. Using a message to exchange requests and replies avoids any direct technical connections between consumers and services. In addition to messaging, there are other service interface design techniques that can be applied to further limit the degree of coupling between consumers and the service.

Messages exchanged between consumers and services are realizations of the service interface. In the case of a Web service, the service interface is defined by the combination of a Web Service Definition Language artifact (WSDL) and an XML Schema definition (XSD) as its referenced metadata. These two types of interface artifacts (WSDL and XML Schemas) are the foundation of any Web service. The design and development of these two interface artifacts are a focus of the loose coupling principle. Other types of services can use other interface definitions and artifacts, but the same principle of loose coupling can still be applied.

The role of the ESB in SOA

In order to implement an SOA, both applications and infrastructure must support the SOA principles. Enabling applications involves the creation of service interfaces to existing or new functions, either directly or through the use of adapters. Enabling the infrastructure at the most basic level involves the provision of capability to route and transport service requests to the correct service provider. The role of the Enterprise Service Bus is, in part, simply to enable the infrastructure in this way.

The true value of the Enterprise Service Bus concept, however, is to enable the infrastructure for SOA in a way that reflects the needs of today’s enterprise: to provide suitable service levels and manageability, and to operate and integrate in a heterogeneous environment. The implications of these requirements go beyond basic routing and transport capability, and they are described in The Enterprise Service Bus Capability Model in 4.3, “A capability model for the Enterprise Service Bus” on page 82.

The ESB should enable the substitution of one service implementation by another with no effect on the clients of that service. This requires both the service interfaces that SOA prescribes and an ESB that allows client code to invoke services in a manner that is independent of the service location and communication protocol involved.
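
The same idea can be sketched in plain Java (the QuoteService names below are hypothetical, not from any ESB product): a client coded only against an interface is untouched when one implementation is substituted for another.

public interface QuoteService {
    double getQuote(String symbol);
}

// One implementation, e.g. backed by a local cache.
class CachedQuoteService implements QuoteService {
    public double getQuote(String symbol) { return 42.0; }
}

// A substituted implementation, e.g. calling a remote provider.
class RemoteQuoteService implements QuoteService {
    public double getQuote(String symbol) { return 43.0; }
}

// The client depends only on the interface, so either
// implementation can be plugged in without changing it.
class QuoteClient {
    private final QuoteService service;

    QuoteClient(QuoteService service) { this.service = service; }

    double priceOf(String symbol) { return service.getQuote(symbol); }
}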

The ESB supports multiple integration paradigms

In order to fully support the variety of interaction patterns that are required in a comprehensive SOA (for example, request / response, publish / subscribe, events), the Enterprise Service Bus must support in one infrastructure the three major styles of Enterprise Integration:

  •  Service-oriented architectures in which applications communicate through reusable services with well-defined, explicit interfaces. Service-oriented interactions leverage underlying messaging and event communication models.
  •  Message-driven architectures in which applications send messages through the ESB to receiving applications.
  •  Event-driven architectures in which applications generate and consume messages independently of one another.

The ESB does this while providing additional capabilities to mediate or transform service messages and interactions, enabling a wide variety of behaviors and supporting the various models of coupling interaction that are described in 3.2.1, “Coupling and decoupling of aspects of service interactions.”

[Figure: a high-level view of the Enterprise Service Bus]


A clustered application or application component is one that is available on multiple WebLogic Server instances in a cluster. If an object is clustered, failover and load balancing for that object is available. Deploy objects homogeneously—to every server instance in your cluster—to simplify cluster administration, maintenance, and troubleshooting.

Web applications can consist of different types of objects, including Enterprise Java Beans (EJBs), servlets, and Java Server Pages (JSPs). Each object type has a unique set of behaviors related to control, invocation, and how it functions within an application. For this reason, the methods that WebLogic Server uses to support clustering—and hence to provide load balancing and failover—can vary for different types of objects. The following types of objects can be clustered in a WebLogic Server deployment:

  • Servlets
  • JSPs
  • EJBs
  • Remote Method Invocation (RMI) objects
  • Java Messaging Service (JMS) destinations
  • Java Database Connectivity (JDBC) connections

Different object types can have certain behaviors in common. When this is the case, the clustering support and implementation considerations for those similar object types may be the same. In the sections that follow, explanations and instructions for the following types of objects are generally combined:

  • Servlets and JSPs
  • EJBs and RMI objects

The sections that follow briefly describe the clustering, failover, and load balancing support that WebLogic Server provides for different types of objects.

IOC (Inversion of Control) containers offer several benefits:

  • Minimises the amount of code in your application. With IOC containers you do not need to care about how services are created or how you obtain references to the ones you need. You can also easily add additional services by adding a new constructor or a setter method, with little or no extra configuration.

  • Makes your application more testable by not requiring any singletons or JNDI lookup mechanisms in your unit test cases. IOC containers make unit testing and switching implementations very easy by allowing you to manually inject your own objects into the object under test (see the sketch after this list).

  • Promotes loose coupling with minimal effort and the least intrusive mechanism. The factory design pattern is more intrusive because components or services need to be requested explicitly, whereas with IOC the dependency is injected into the requesting piece of code. Also, some containers promote the “design to interfaces, not to implementations” concept by encouraging managed objects to implement a well-defined service interface of your own.

  • Supports eager instantiation and lazy loading of services. Containers also provide support for instantiation of managed objects, cyclical dependencies, life cycle management, and dependency resolution between managed objects.
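
A minimal, framework-free sketch of constructor injection follows (the OrderService and OrderRepository names are my own illustration, not from any particular container): the dependency is handed to the object rather than looked up, so a unit test can inject a stub.

interface OrderRepository {
    void save(String order);
}

class OrderService {
    private final OrderRepository repository;

    // The dependency is injected; OrderService never looks it up itself.
    OrderService(OrderRepository repository) { this.repository = repository; }

    void placeOrder(String order) { repository.save(order); }
}

class InMemoryOrderRepository implements OrderRepository {
    public void save(String order) { System.out.println("saved: " + order); }
}

public class WiringExample {
    public static void main(String[] args) {
        // In a real application an IOC container performs this wiring.
        OrderService service = new OrderService(new InMemoryOrderRepository());
        service.placeOrder("order-1");
    }
}
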
Many programmers will never need to implement their own Collections classes. You can go pretty far using the implementations described in the preceding sections of this chapter. However, someday you might want to write your own implementation. It is fairly easy to do this with the aid of the abstract implementations provided by the Java platform. Before we discuss how to write an implementation, let’s discuss why you might want to write one.

Reasons to Write an Implementation

The following list illustrates the sort of custom Collections you might want to implement. It is not intended to be exhaustive:
  • Persistent: All of the built-in Collection implementations reside in main memory and vanish when the program exits. If you want a collection that will still be present the next time the program starts, you can implement it by building a veneer over an external database. Such a collection might be concurrently accessible by multiple programs.
  • Application-specific: This is a very broad category. One example is an unmodifiable Map containing real-time telemetry data. The keys could represent locations, and the values could be read from sensors at these locations in response to the get operation.
  • High-performance, special-purpose: Many data structures take advantage of restricted usage to offer better performance than is possible with general-purpose implementations. For instance, consider a List containing long runs of identical element values. Such lists, which occur frequently in text processing, can be run-length encoded — runs can be represented as a single object containing the repeated element and the number of consecutive repetitions. This example is interesting because it trades off two aspects of performance: It requires less space but more time than an ArrayList.
  • High-performance, general-purpose: The Java Collections Framework’s designers tried to provide the best general-purpose implementations for each interface, but many, many data structures could have been used, and new ones are invented every day. Maybe you can come up with something faster!
  • Enhanced functionality: Suppose you need an efficient bag implementation (also known as a multiset): a Collection that offers constant-time containment checks while allowing duplicate elements. It’s reasonably straightforward to implement such a collection atop a HashMap (see the sketch after this list).
  • Convenience: You may want additional implementations that offer conveniences beyond those offered by the Java platform. For instance, you may frequently need List instances representing a contiguous range of Integers.
  • Adapter: Suppose you are using a legacy API that has its own ad hoc collections API. You can write an adapter implementation that permits these collections to operate in the Java Collections Framework. An adapter implementation is a thin veneer that wraps objects of one type and makes them behave like objects of another type by translating operations on the latter type into operations on the former.
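
As a sketch of the bag mentioned in the “Enhanced functionality” item above (the HashBag name and its small API are my own illustration):

import java.util.HashMap;
import java.util.Map;

public class HashBag<E> {
    private final Map<E, Integer> counts = new HashMap<E, Integer>();
    private int size = 0;

    public void add(E e) {
        Integer c = counts.get(e);
        counts.put(e, c == null ? 1 : c + 1);
        size++;
    }

    // Constant-time containment check; duplicates are allowed.
    public boolean contains(E e) { return counts.containsKey(e); }

    // How many copies of e the bag holds.
    public int count(E e) {
        Integer c = counts.get(e);
        return c == null ? 0 : c;
    }

    public int size() { return size; }
}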


How to Write a Custom Implementation

Writing a custom implementation is surprisingly easy. The Java Collections Framework provides abstract implementations designed expressly to facilitate custom implementations. We’ll start with the following example of an implementation of Arrays.asList.
public static <T> List<T> asList(T[] a) {
    return new MyArrayList<T>(a);
}

private static class MyArrayList<T> extends AbstractList<T> {

    private final T[] a;

    MyArrayList(T[] array) {
        a = array;
    }

    public T get(int index) {
        return a[index];
    }

    public T set(int index, T element) {
        T oldValue = a[index];
        a[index] = element;
        return oldValue;
    }

    public int size() {
        return a.length;
    }
}
Believe it or not, this is very close to the implementation that is contained in java.util.Arrays. It’s that simple! You provide a constructor and the get, set, and size methods, and AbstractList does all the rest. You get the ListIterator, bulk operations, search operations, hash code computation, comparison, and string representation for free.
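For instance, a quick usage sketch (the AsListDemo class is my own and assumes the asList method above is in scope):

import java.util.List;

public class AsListDemo {
    public static void main(String[] args) {
        String[] data = { "a", "b", "c" };
        List<String> list = asList(data);
        list.set(0, "z");                 // writes through to the backing array
        System.out.println(list);         // [z, b, c]
        System.out.println(data[0]);      // z
    }
}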
Suppose you want to make the implementation a bit faster. The API documentation for abstract implementations describes precisely how each method is implemented, so you’ll know which methods to override to get the performance you want. The preceding implementation’s performance is fine, but it can be improved a bit. In particular, the toArray method iterates over the List, copying one element at a time. Given the internal representation, it’s a lot faster and more sensible just to clone the array.
public Object[] toArray() {
    return (Object[]) a.clone();
}
With the addition of this override and a few more like it, this implementation is exactly the one found in java.util.Arrays. In the interest of full disclosure, it’s a bit tougher to use the other abstract implementations because you will have to write your own iterator, but it’s still not that difficult.
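As a sketch of what that involves (the ArrayCollection name is my own; this is not the java.util implementation), here is a read-only Collection built on AbstractCollection, supplying only iterator and size:

import java.util.AbstractCollection;
import java.util.Iterator;
import java.util.NoSuchElementException;

public class ArrayCollection<T> extends AbstractCollection<T> {
    private final T[] a;

    public ArrayCollection(T[] a) { this.a = a; }

    // AbstractCollection derives everything else from these two methods.
    public Iterator<T> iterator() {
        return new Iterator<T>() {
            private int i = 0;
            public boolean hasNext() { return i < a.length; }
            public T next() {
                if ( !hasNext() ) throw new NoSuchElementException();
                return a[i++];
            }
            public void remove() { throw new UnsupportedOperationException(); }
        };
    }

    public int size() { return a.length; }
}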
The following list summarizes the abstract implementations:
  • AbstractCollection — a Collection that is neither a Set nor a List. At a minimum, you must provide the iterator and the size methods.
  • AbstractSet — a Set; use is identical to AbstractCollection.
  • AbstractList — a List backed up by a random-access data store, such as an array. At a minimum, you must provide the positional access methods (get and, optionally, set, remove, and add) and the size method. The abstract class takes care of listIterator (and iterator).
  • AbstractSequentialList — a List backed up by a sequential-access data store, such as a linked list. At a minimum, you must provide the listIterator and size methods. The abstract class takes care of the positional access methods. (This is the opposite of AbstractList.)
  • AbstractQueue — at a minimum, you must provide the offer, peek, poll, and size methods and an iterator supporting remove.
  • AbstractMap — a Map. At a minimum you must provide the entrySet view. This is typically implemented with the AbstractSet class. If the Map is modifiable, you must also provide the put method.
The process of writing a custom implementation follows:
  1. Choose the appropriate abstract implementation class from the preceding list.
  2. Provide implementations for all the class’s abstract methods. If your custom collection is to be modifiable, you’ll have to override one or more of the concrete methods as well. The API documentation for the abstract implementation class will tell you which methods to override.
  3. Test and, if necessary, debug the implementation. You now have a working custom collection implementation.
  4. If you’re concerned about performance, read the abstract implementation class’s API documentation for all the methods whose implementations you’re inheriting. If any seem too slow, override them. If you override any methods, be sure to measure the performance of the method before and after the override. How much effort you put into tweaking performance should be a function of how much use the implementation will get and how critical to performance its use is. (Often this step is best omitted.)
Using the Java API, you can convert a Base64-encoded string into an image file and store it on your disk.
You need to put the axis.jar file on the classpath before running this program. You can also use Sun’s proprietary classes or the Apache Commons Codec library.

// Make sure axis.jar is in the classpath
import java.io.FileOutputStream;
import org.apache.axis.encoding.Base64;

public class ConvertintoPDFImg {
    public static void main(String[] args) {
        try {
            // Your image or PDF as a Base64-encoded string
            String s = "R0lGODlhha7qq87qre7qrw7rsS7rsw6sAQEAOw==";
            byte[] imageBytes = Base64.decode(s);
            FileOutputStream file = new FileOutputStream("C:\\Img.gif"); // or a PDF file
            file.write(imageBytes);
            file.close();
        } catch (Exception e) {
            System.out.println("Error :::" + e);
            e.printStackTrace();
        }
    }
}
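
If you prefer the Apache Commons Codec library mentioned above, the decoding step looks roughly like this (a sketch assuming commons-codec 1.4 or later is on the classpath):

import java.io.FileOutputStream;
import org.apache.commons.codec.binary.Base64;

public class ConvertWithCodec {
    public static void main(String[] args) throws Exception {
        String s = "R0lGODlhha7qq87qre7qrw7rsS7rsw6sAQEAOw==";
        byte[] imageBytes = Base64.decodeBase64(s); // static helper in Commons Codec
        FileOutputStream file = new FileOutputStream("C:\\Img.gif");
        file.write(imageBytes);
        file.close();
    }
}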

    Any problem in pure Java code throws a Java exception or error. Java exceptions or errors will not cause a core dump (on UNIX systems) or a Dr. Watson error (on Win32 systems). A serious Java memory problem will instead result in an OutOfMemoryError thrown by the JVM, along with a stack trace, and consequently the JVM will exit. These Java stack traces are very useful for identifying the cause of an abnormal exit of the JVM. So, is there a way to know that an OutOfMemoryError is about to occur? JDK 1.5 has a package called java.lang.management which has useful JMX beans that we can use to manage the JVM. One of these beans is the MemoryMXBean, as sketched below.
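
    As a quick sketch of how you might watch heap usage with this bean (the HeapMonitor class name and the 80% threshold are my own; the java.lang.management calls are standard JDK 1.5 API):

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import java.lang.management.MemoryUsage;

    public class HeapMonitor {
        public static void main(String[] args) {
            MemoryMXBean memoryBean = ManagementFactory.getMemoryMXBean();
            MemoryUsage heap = memoryBean.getHeapMemoryUsage();

            long used = heap.getUsed();
            long max = heap.getMax();
            System.out.println("Heap used: " + used + " of " + max + " bytes");

            // A crude early warning: flag when usage crosses 80% of the maximum.
            if ( max > 0 && used > 0.8 * max ) {
                System.out.println("Warning: heap usage above 80%");
            }
        }
    }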

    An OutOfMemoryError can be thrown due to one of the following 4 reasons:

    The JVM may have a memory leak due to a bug in its internal heap management implementation, but this is highly unlikely because JVMs are well tested for this. More commonly, the application may not have enough heap memory allocated to run. You can allocate a larger JVM heap (with the -Xmx parameter to the JVM) or decrease the amount of memory your application takes to overcome this. To increase the heap space:

     java -Xms1024M -Xmx1024M

    Care should be taken not to make the -Xmx value too large, because it can slow down your application. The secret is to make the maximum heap size just the right size. Another, less prevalent, cause is running out of a memory area called the “perm” area, which sits next to the heap. All the binary code of currently running classes is archived in the “perm” area. The “perm” area is important if your application, or any of the third-party jar files you use, dynamically generates classes.

    For example, “perm” space is consumed when XSLT templates are dynamically compiled into classes, when J2EE application servers, JasperReports, JAXB, etc. use Java reflection to dynamically generate classes, or when there is a large number of classes in your application. To increase the perm space:

     java -XX:PermSize=256M -XX:MaxPermSize=256M

    The fourth and most common reason is that you may have a memory leak in your application, as discussed in Q64 in the Java section. [Good read/reference: “Know Your Worst Friend, the Garbage Collector,” http://java.syscon.com/read/84695.htm by Romain Guy]

     So why does the JVM crash with a core dump or Dr. Watson error?

    Both the core dump on UNIX operating systems and the Dr. Watson error on Win32 systems mean the same thing: the JVM is a process like any other, and when a process crashes, a core dump is created. A core dump is a memory map of a running process. This can happen due to one of the following reasons:

    Using JNI (Java Native Interface) code that has a fatal bug in its native code. Example: using Oracle OCI drivers, which are written partially in native code, or JDBC-ODBC bridge drivers, which are written in non-Java code. Using 100% pure Java drivers (which communicate directly with the database instead of through client software utilizing the JNI) instead of native drivers can solve this problem. We can use the Oracle thin driver, which is a 100% pure Java driver.

    The operating system on which your JVM is running might require a patch or a service pack.

    The JVM implementation you are using may have a bug in translating system resources like threads, file handles, sockets, etc. from the platform-neutral Java byte code into platform-specific operations. If this JVM’s translated native code performs an illegal operation, the operating system will instantly kill the process and usually generate a core dump file, which is a hexadecimal file indicating the program’s state in memory at the time of error. Core dump files are generated by the operating system in response to certain signals. Operating system signals are responsible for notifying certain events to threads and processes.

    The JVM can also intercept certain signals, like SIGQUIT (kill -3 <process id>) from the operating system, and it responds to this signal by printing out a Java stack trace and then continuing to run. The JVM continues to run because it has a special built-in debug routine that traps signal -3. On the other hand, signals like SIGSTOP (kill -23 <process id>) and SIGKILL (kill -9 <process id>) will cause the JVM process to stop or die. The following JVM argument will tell the JVM not to pause on a SIGQUIT signal from the operating system.

     java -Xsqnopause

    Memory allocation in Java


    Each time an object is created in Java, it goes into the area of memory known as the heap. Primitive variables like int and double are allocated on the stack if they are local method variables, and on the heap if they are member variables (i.e. fields of a class). In Java, a method's local variables are pushed onto the stack when the method is invoked, and the stack pointer is decremented when the method call is completed. In a multi-threaded application, each thread has its own stack but shares the same heap. This is why care should be taken in your code to avoid any concurrent access issues in the heap space. The stack is threadsafe (each thread has its own stack), but the heap is not threadsafe unless guarded with synchronisation through your code.
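
    A small sketch of the distinction (the class and variable names are my own illustration):

    public class MemoryDemo {
        private int memberCount;    // member variable: lives on the heap with the object

        public void work() {
            int localCount = 0;     // local variable: lives on this thread's stack
            // The 'other' reference is on the stack; the object it
            // points to is allocated on the heap.
            MemoryDemo other = new MemoryDemo();
            localCount++;
            other.memberCount = localCount;
        }
    }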

    A method whose state lives on the stack is re-entrant, allowing multiple concurrent invocations that do not interfere with each other. A method is recursive if it calls itself. Given enough stack space, recursive method calls are perfectly valid in Java, though they are tough to debug. Recursive methods are useful in removing iterations from many sorts of algorithms. All recursive methods are re-entrant, but not all re-entrant methods are recursive. Idempotent methods are methods that are written in such a way that repeated calls to the same method with the same arguments yield the same results. For example, clustered EJBs that are written with idempotent methods can automatically recover from a server failure, as long as another server can be reached.
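
    For instance, a classic recursive method (a factorial example of my own), where each invocation gets its own stack frame and so concurrent calls do not interfere:

    public class Factorial {
        // Recursive and re-entrant: each call's 'n' lives in its own stack frame.
        public static long factorial(long n) {
            if ( n <= 1 ) return 1;
            return n * factorial(n - 1);
        }

        public static void main(String[] args) {
            System.out.println(factorial(5)); // prints 120
        }
    }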

    The default behaviour of an object’s clone() method yields a shallow copy, so to achieve a deep copy the classes must be edited or adjusted.

    Shallow copy: If a shallow copy is performed on obj-1, then obj-1 itself is copied but the objects contained within it are not. The contained objects are shared between the original and the clone, so changes made to them through the clone are visible through the original. Java supports shallow cloning of objects by default when a class implements the java.lang.Cloneable interface.

    Deep copy: If a deep copy is performed on obj-1, then not only is obj-1 copied, but the objects contained within it are copied as well. Serialization can be used to achieve deep cloning. Deep cloning through serialization is faster to develop and easier to maintain, but carries a performance overhead.

    For example, invoking the clone() method on a HashMap returns a shallow copy of the HashMap instance, which means the keys and values themselves are not cloned. If you want a deep copy, then a simple method is to serialize the HashMap to a ByteArrayOutputStream and then deserialize it, as sketched below. This creates a deep copy, but it does require that all keys and values in the HashMap are Serializable. Its primary advantage is that it will deep copy any arbitrary object graph.
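
    A minimal sketch of that serialization-based deep copy (the DeepCopyUtil name is my own):

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.ObjectInputStream;
    import java.io.ObjectOutputStream;
    import java.util.HashMap;

    public class DeepCopyUtil {
        // All keys and values must implement java.io.Serializable.
        @SuppressWarnings("unchecked")
        public static <K, V> HashMap<K, V> deepCopy(HashMap<K, V> original) throws Exception {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            ObjectOutputStream oos = new ObjectOutputStream(bos);
            oos.writeObject(original);
            oos.close();

            ObjectInputStream ois = new ObjectInputStream(
                    new ByteArrayInputStream(bos.toByteArray()));
            return (HashMap<K, V>) ois.readObject();
        }
    }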

    This is an implementation of the priority queue abstract data type based on a min-heap, which uses an array for structuring its internal nodes. As a result, the add() and removeMin() operations have a logarithmic runtime complexity, O(log n), whereas the min() operation has a constant runtime complexity, O(1), as it does not remove the minimum element from the heap but just returns it. The implementation follows (note that it only works with non-negative integers):

    /** Priority Queue implementation based on Min-Heap */
    public class PriorityQueue {
    
        private final int MAX_SIZE;
        private static final int DEFAULT_SIZE = 1024;
        private static final int ROOT = 1;
        private static final int NULL = -1;
        private int[ ] array;
        private int lastIndex = ROOT;
    
        /** Constructs a priority queue with the specified size */
        public PriorityQueue( int size ) {
            MAX_SIZE = size;
            array = new int[ MAX_SIZE ];
            java.util.Arrays.fill( array, NULL );
        }
    
        /** Constructs a priority queue with the default size */
        public PriorityQueue() {
            this( DEFAULT_SIZE );
        }
    
        /** Adds a new element to the array while maintaining the min-heap property */
        public void add( int element ) {
            if( element < 0 || lastIndex == MAX_SIZE ) {
                return;
            }
    
            int elementIndex = lastIndex++;
            array[ elementIndex ] = element;
    
            while( array[ elementIndex ] < array[ parent( elementIndex ) ] ) {
                swap( elementIndex, parent( elementIndex ) );
                elementIndex = parent( elementIndex );
            }
        }
    
        /** Returns the parent index */
        private int parent( int index ) {
            return index / 2;
        }
    
        /** Returns the left child index */
        private int leftChild( int index ) {
            return 2 * index;
        }
    
        /** Returns the right child index */
        private int rightChild( int index ) {
            return 2 * index + 1;
        }
    
        /** Returns the smallest child index */
        private int minChild( int index ) {
    
            int leftChildIndex = leftChild( index );
            int rightChildIndex = rightChild( index );
    
            if( leftChildIndex >= MAX_SIZE && rightChildIndex >= MAX_SIZE ) return NULL;
            else if( rightChildIndex >= MAX_SIZE ) return leftChildIndex;
    
            if( array[ leftChildIndex ] == NULL && array[ rightChildIndex ] == NULL ) return NULL;
            else if( array[ rightChildIndex ] == NULL ) return leftChildIndex;
    
            return array[ leftChildIndex ] <= array[ rightChildIndex ] ? leftChildIndex : rightChildIndex;	
        }
    
        /** Returns the minimum element from the heap */
        public int min() {
            return array[ ROOT ];
        }
    
        /** Returns and removes the minimum element from the heap */
        public int removeMin() {
    
            int rootElement = array[ ROOT ];
            int elementIndex = --lastIndex;
            int element = array[ elementIndex ];
    
            array[ ROOT ] = element;
            array[ elementIndex ] = NULL;
            elementIndex = ROOT;
    
            for( int minChildIndex; ( ( minChildIndex = minChild( elementIndex ) ) != NULL ) 
                && ( array[ elementIndex ] > array[ minChildIndex ] ); ) {
    
                swap( elementIndex, minChildIndex );
                elementIndex = minChildIndex;			
            }
    
            return rootElement;
        }
    
        /** Checks if the heap is empty */
        public boolean isEmpty( ) {
            return lastIndex == ROOT;
        }
    
        /** Helper method for swapping elements in the array */
        private void swap( int a, int b ) {
            int temp = array[ a ];
            array[ a ] = array[ b ];
            array[ b ] = temp;
        }
    
        @Override
        public String toString( ) {
            StringBuffer sb = new StringBuffer( "[" );
            for( int i = 1; i < lastIndex; i++ ) {
                sb.append( array[ i ] );
                if( i + 1 < lastIndex ) sb.append( ", " );
            }
            sb.append( "]" );
            return sb.toString( );
        }
    }

    And for your testing needs I was so nice as to supply you with the following JUnit test (I’ll try not to make a habit out of it :P):

    import static org.junit.Assert.*;
    import org.junit.Test;
    import java.util.Random;
    import java.util.Date;
    
    public class PriorityQueueTest {
    
        @Test
        public void BasicTest( ) {
    
            PriorityQueue instance = new PriorityQueue( );
    
            for( int i = 0; i != 10; i++ ) {
                instance.add( i );
            }
    
            for( int i = 0; i != 10; i++ ) {
                assertEquals( i, instance.removeMin( ), 0.0 );
            }
        }
    
        @Test
        public void StressTest( ) {
    
            java.util.PriorityQueue<Integer> queue = new java.util.PriorityQueue<Integer>( );
            PriorityQueue myQueue = new PriorityQueue( );
            Random random = new Random( new Date( ).getTime( ) );
    
            for( int i = 0; i < 1000; i++ ) {
    
                int item = random.nextInt( 1000 );
                queue.offer( item );
                myQueue.add( item );
            }
    
            for( int i = 0; i < 1000; i++ ) {
    
                if( i % 2 == 0  ) {
                    assertEquals( queue.poll( ), myQueue.removeMin( ), 0.0 );
                } else {
                    assertEquals( queue.peek( ), myQueue.min( ), 0.0 );
                }
            }
        }
    
        @Test
        public void testIsEmpty( ) {
    
            PriorityQueue instance = new PriorityQueue( );
            boolean expResult = true;
            boolean result = instance.isEmpty( );
            assertEquals( expResult, result );
        }
    
        @Test
        public void testGetMin( ) {
    
            PriorityQueue instance = new PriorityQueue( );
            float expResult = -1;
            float result = instance.min( );
            assertEquals( expResult, result, 0.0 );
        }
    
        @Test
        public void testRemoveMin( ) {
    
            PriorityQueue instance = new PriorityQueue( );
            float expResult = -1;
            float result = instance.removeMin( );
            assertEquals( expResult, result, 0.0 );
        }
    }