Tuesday, September 26, 2006

Insurance on Property - 1.5

An insurance broker receives an inquiry from a client regarding by-law insurance. Give reasons why such an inquiry might have been made. What losses would such insurance cover?

Answer:

One of the exclusions in the Basic Fire Policy states that by-law expenses are not covered. It excludes loss arising in any way from the enforcement of any by-law or other law regulating zoning or the demolition, repair or construction of buildings, which makes it impossible to repair or reinstate the property as it was just before the loss.

Expenses imposed by by-laws are independent of any insured peril. Insureds would be subject to by-laws even if a loss were caused by an uninsured peril, or even if there were no insurance at all. Therefore, insurers are not obliged to assume such expenses.

If insureds wish to be indemnified for by-law expenses, they need to add coverage to their policies by endorsement: insurers receive an additional premium, and insureds receive broadened financial protection that allows them to recover the additional costs that by-laws impose on them.

Insurance on Property - 1.4

Give an example of circumstances in which the heat process exclusion in the Basic Fire Policy does not apply to part of a loss.

Answer:

Loss or damage to goods undergoing a process involving the application of heat is excluded if it results from the process. However, we cannot exclude such goods automatically from the policy. A fire starting elsewhere in an insured's premises eventually may destroy goods undergoing a heat process. The exclusion would not operate in such circumstances.

Only the goods actually undergoing the process are excluded. The exclusion would not apply to other property burned when a fire spreads beyond the excluded property.

For example, suppose a computer centre suffers a fire whose root cause is a single terminal undergoing a process involving the application of heat. Loss to that terminal would be excluded. The other terminals and computers, even though they were also generating heat, would not be excluded, and loss to that equipment would be covered.

Insurance on Property - 1.3

Give an example of circumstances in which a business suffers both direct and indirect loss. Explain the difference between the two. (An example taken from the course text is not acceptable.)

Answer:

Fire insurance policies indemnify the insured against direct loss or damage to property insured, but do not cover indirect loss. A direct loss is the loss of economic value that occurs when property is damaged or destroyed. An indirect loss is the economic loss that arises from the direct loss or damage to property.

Suppose a hotel is destroyed by fire. The hotel owner loses the value of the building; this is a direct loss. The hotel also cannot operate while its business is interrupted; this is an indirect loss. A photographer may have left his equipment in the hotel. The loss of that equipment is a direct loss. Because the photographer cannot get his equipment in time, he cannot report on an event; this is an indirect loss.

Insurance on Property - 1.2

What obligations are placed upon an insurer that receives a written application for fire insurance? (Your answer should include provincial differences, if any.)

Answer:

When an insurer receives a written application for fire insurance, any policy the insurer issues must reflect that application, unless the insurer points out in writing how it differs from the application.

In the common law provinces, the insured may reject the policy within two weeks of receiving the notification. If the insured does nothing within that time, the insurer can assume the insured has accepted that policy.

In Quebec, the insurer is required to include a copy of the application with the policy. However, there is no specified time by which an insured must reject a policy.

Insurance on Property - 1.1

In addition to damage or destruction by an insured peril of property insured under a Basic Fire Policy, the insured may incur expenses for the removal of debris left by the loss.

Describe the coverage provided by the Basic Fire Policy for such expense.

Answer:

The Basic Fire Policy indemnifies the insured for the value of insured property damaged or destroyed. But the loss may also leave behind debris that must be removed, especially if the property is to be rebuilt or repaired. Without a special provision in the policy, that expense would not be covered.

This expense is considered to be covered by the policy if it is included in the amount of insurance chosen by the insured. In this case, the total amount of insurance is the total of both loss of or damage to the insured's property and the expense to remove any debris left by the loss. However, if the policy contains a coinsurance clause, this expense will not be included.

Friday, September 22, 2006

What is OOAD

To facilitate software development, the practice of object-oriented analysis and design (OOAD) has developed in tandem with object-oriented technology.

OOAD focuses on analyzing the requirements of a system and designing a model of the system before any code is developed.

OOAD is done to ensure that the purpose and requirements of a system are thoroughly captured and documented before the system is built.

Although it sounds obvious, it is very important that the proposed system will be useful to its intended users and will fulfill their requirements.

OOAD allows a detailed model of the system to be developed, based on the documentation of the users' requirements.

The model provides abstraction from the underlying complexity of the system and allows the system to be viewed as a whole.

It also provides a way for users, analysts, designers, and implementers to study different, but compatible, aspects of the system.

As OOAD takes place, different views of a system should be abstracted to form a model of the system as a whole.

Once this is done, it is easier for developers to see how the components of the system should interact and users can verify that their requirements are met.

Then detail can be added to transform the model into one that can be used for implementation.

Frequently during early analysis and design, different solutions to the problem are modeled and the results are compared to find the best system.

OOAD divides into two phases - object-oriented analysis (OOA) and object-oriented design (OOD).

OOA involves creating a model of a system based on what the users require from that system.

OOD adds detail and design decisions to the model.

The analysis phase takes a "black box" approach to the system, ignoring its inner workings.

The design phase takes a "white box" approach and makes decisions on how the model will be implemented in code.

So analysis takes place from a user's perspective and design takes place from a developer's perspective.

During analysis, a concise, accurate model of what the desired system should do is created.

This model should not consider how the system will perform its functions.

Analysis focuses on abstracting from the problem domain - that is, the real world that the system will function in.

This is done to discover the primary classes and objects in the system.

The objects in the analysis model should be problem domain concepts only - they should not be implementation concepts.

During analysis, the environment the system will be implemented in should not be considered.

However, at the design stage it must be ascertained whether the analysis model will work in the intended implementation environment.

During design, the analysis model is expanded into a technical blueprint for implementing a system.

New classes are added to provide the mechanisms that enable the system to work - for example, mechanisms to handle persistence or interprocess communication.

And the classes discovered during analysis are fleshed out to take account of the implementation environment.

In practice, many portions of the analysis model may be implemented without change, so there is often overlap between analysis and design.

Analysis is sometimes refined during design, or after key design decisions are made.

Quite often, analysis and design are parallel activities on a large project, so it is difficult to draw a line between where analysis ends and design starts.

At the end of the OOAD process, the analysis and design models combine to provide an overview of the system.

This overview can be used throughout the development process.

During the OOAD process, it is important to be aware of the goals of the software system being developed.

The main goal of a successful software system is that the user should be able to use it effectively.

Other goals of successful systems are that
• it should be easily maintainable

• it should be scalable

• it should be portable between platforms

• its code should be reusable

It is important for the success of a project that its result is delivered in time and within budget.

There are several tactics that facilitate the development of a successful software system that can be taken into account during analysis and design.

The system should be split into modules based on logical functionality.

The modules should be separately compilable so that if changes need to be made to one of them, the whole system won't be affected.

The modules should communicate with each other through small, well-defined interfaces, which act as wrappers to hide the implementation.

And there should be as few of these interfaces as possible.

The use of interfaces allows modules to be very portable, and standard interfaces such as COM interfaces can be used to further increase portability.
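As a rough Java illustration (the interface and class names are invented for this sketch, not taken from any particular system), a module might expose one small interface and hide its implementation behind it:

public interface BillingService {
    // The only thing other modules see: one small, well-defined operation.
    double invoiceTotal(double amount, double taxRate);
}

class StandardBillingService implements BillingService {
    // The implementation can be recompiled or replaced without touching callers.
    public double invoiceTotal(double amount, double taxRate) {
        return amount * (1 + taxRate);
    }
}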

Models are usually represented visually by some type of notation.

The notation often takes the form of graphical symbols and connections.

A graphical notation facilitates portraying the structure of a complex system.

It also provides consistency throughout the development process, as models produced using it are standard and are retained for the lifetime of the project.

This means they can be read and understood by everyone involved with the project.

A good notation should allow an accurate description of the system it represents.

It should be as simple as possible, without being oversimplified.

And it should be easy to update and to communicate to others.

The Unified Modeling Language (UML) is a robust notation that you can use to build OOAD models.

It consists of a series of diagrams that represent the different views of a software system in analysis and design.

For example, there are diagrams to chart the interaction between a user and the application.

When you attempt to develop a system, it is not enough to have a notation for modeling - you also need to know how to use the notation.

This means you need a process to guide you through a software development project and through using the notation.

The combination of a notation and a process is known as a method.

Analysis of all but the smallest systems will generate a large number of diagrams, so it is important to record these accurately to ensure consistency in the model.

In order to record the diagrams that you create using a process and notation, you need a tool.

The Rational Software Corporation has devised the Rational Objectory Process to guide developers through a software project.

And it has created a tool called Rational Rose for creating UML diagrams.

The UML has its origins in several competing OOAD methods that were developed separately in the late 1980s and early 1990s in response to the software crisis.

One of the main methods was the Object Modeling Technique (OMT), which was devised by James Rumbaugh and others at General Electric.

It consists of a series of models - use case, object, dynamic, and functional - that combine to give a full view of a system.

The Booch method was devised by Grady Booch and developed the practice of analyzing a system as a series of views.

It emphasizes analyzing the system from both a macro development view and micro development view.

And it was accompanied by a very detailed notation.

The Object-Oriented Software Engineering (OOSE) method was devised by Ivar Jacobson and focused on the analysis of system behavior.

It advocated that at each stage of the process there should be a check to see that the requirements of the user were being met.

Each of these methods had their strong points and their weak points.

Each had their own notation and their own tool.

This made it very difficult for developers to choose the method and notation that suited them and to use it successfully.

This period is often referred to as the time of the "method wars".

New versions of some of the methods were created, each drawing on strengths of the others to augment their weaker aspects.

This led to a growing similarity between the methods.

In 1994 Rumbaugh joined Booch at Rational Software Corporation in order to create a new method.

It was called the Unified Method and its aim was to unite the Booch and OMT methods.

In 1995 Booch and Rumbaugh were joined by Jacobson and the emphasis on the project changed.

It became clear that the focus of their work was on creating a single, standard notation rather than a method, so they renamed their work the Unified Modeling Language.

In January 1997 version 1.0 of the UML was released.

And in September 1997 the Object Management Group (OMG) accepted the notation as a formal standard.

Although main parts of the UML are based on the Booch, OMT, and OOSE methods, the UML also includes elements from other methods.

For example, statecharts, devised by David Harel, have been incorporated into UML state diagrams, and the work of Erich Gamma and his colleagues on patterns has influenced the UML.

The UML is an attempt to standardize the notation used for analysis and design.

The standardization includes diagrams, syntactic notation, and semantic models.

The aims of the UML that its designers have set are
• to model systems using object-oriented concepts

• to accurately describe conceptual and executable artifacts

• to support both small-scale and large-scale analysis and design

• to provide a notation that can be used by both people and machines

The UML is an object-oriented modeling language and its diagrams and semantics are based on object-oriented concepts.

It is designed to describe many types of system in object-oriented terms.

However, the UML is not limited to modeling object-oriented systems.

An example of the type of system the UML can model is an information system based on large databases.

These databases store data in complex relationships and make large amounts of data available to users.

The UML can model both relational and object databases.

Object-oriented technology facilitates the implementation of embedded and real-time systems.

And it facilitates the development of systems that are distributed across a number of machines and require synchronized communication mechanisms.

The UML can be used to model both of these types of system.

The UML can be used to model system software such as operating systems and networking systems.

And it can be used to model technical systems that control equipment such as industrial machines, military hardware, or telecommunications systems.

In addition to software systems, the UML can be used to model business processes, such as the flow of work within and between departments in a company.

The UML is composed of three different parts:
• model elements

• diagrams

• views

The model elements represent basic object-oriented concepts such as classes, objects, and relationships.

Each model element has a corresponding graphical symbol to represent it in the diagrams.

For example, the symbol for a class is a rectangle divided into compartments for the class name, its attributes, and its operations.

Model elements are defined semantically in formal statements describing what they are or what they represent.

Each model element can be used in several different diagrams but it always retains the same symbol and meaning.

And there are rules governing the diagrams that each model element can appear in.

Diagrams portray different combinations of model elements.

For example, the class diagram represents a group of classes and the relationships, such as association and inheritance, between them.
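For instance (a minimal Java sketch with invented class names), the inheritance and association relationships that a class diagram shows correspond directly to declarations in code:

class Account {
    protected double balance;
}

class SavingsAccount extends Account {    // inheritance: a SavingsAccount is-a Account
    private double interestRate;
}

class Customer {                          // association: a Customer holds Accounts
    private java.util.List<Account> accounts = new java.util.ArrayList<Account>();
}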

The UML provides nine types of diagram - use case, class, object, state, sequence, collaboration, activity, component, and deployment.

Views provide the highest level of abstraction for analyzing the system.

Each view is an aspect of the system that is abstracted to a number of related UML diagrams.

Taken together, the views of a system provide a picture of the system in its entirety.

In the UML, the five main views of the system are
• use case

• logical

• component

• concurrency

• deployment

It is necessary to break the model of a system down into several views with related diagrams.

This is because it would be impossible for one diagram to represent an entire system accurately and clearly.

In addition to model elements, diagrams, and views, the UML provides mechanisms for adding comments, information, or semantics to diagrams.

And it provides mechanisms to adapt or extend itself to a particular method, software system, or organization.

Wednesday, September 20, 2006

Java Garbage Collector

The Java garbage collector is a daemon thread – a thread that runs for the benefit of other threads. It is a mark-sweep facility that scans Java's dynamic memory areas for objects, marking those that are referenced. When Java determines that there are no longer any references to an object, it marks the object for eventual garbage collection. The "mark and sweep" garbage collection algorithm is not suitable for all applications. The new version of Java has been expanded to contain several new garbage collection algorithms. These include the "copying collector", the "parallel copying collector" and the "parallel scavenge collector". All these new algorithms stop all application threads until the garbage collection is complete.

Java's automatic garbage collector eliminates many of the memory leaks that can occur in C and C++. It runs as a low-priority thread, waiting for higher-priority threads to relinquish the processor. You can use a finalize method in a class to help return resources to the system. The finalize method runs automatically when the system runs out of memory or when runtime ends, but you can also invoke it at other times using the System.runFinalization method.
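A minimal, hedged sketch (the class name is invented) of a finalize method; note that garbage collection and finalization are only ever suggested, never guaranteed, by the calls mentioned above:

class TempFileHolder {
    // Invoked by the garbage collector before this object's memory is reclaimed.
    protected void finalize() throws Throwable {
        try {
            System.out.println("Releasing temporary resources");
        } finally {
            super.finalize();   // always chain to the superclass finalizer
        }
    }
}

// Elsewhere, the program can only *suggest* collection and finalization:
// System.gc();
// System.runFinalization();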

Final Methods and Performance

When compiling code, each method call to a final method can be replaced by the actual method code. This is known as inlining. Inlining can potentially speed up program execution as final methods are inlined at compile time. A final method can be optimized in this way because the compiler knows it will not have to look up the correct version of the method at runtime, as polymorphism does not exist for final methods.

The latest virtual machines, however, can detect whether or not a non-final method is overridden. So declaring methods final to improve performance is no longer as valid as it once was.
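For example (a sketch with invented names), a small accessor declared final is a typical candidate for inlining, because no subclass can ever override it:

class Circle {
    private double radius;

    Circle(double radius) {
        this.radius = radius;
    }

    // final: the compiler or JVM knows exactly which code will run, so calls to
    // this method can potentially be replaced with the method body (inlined).
    public final double getRadius() {
        return radius;
    }
}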

Final but not Static Variables

Declaring a variable as final but not static, and not assigning it any initial value, allows a different constant value to be assigned to each instance of the class in its constructor.

A final variable that refers to an object always refers to the same object. However, values within the object to which it points may change. Similarly, a final variable that refers to an array always refers to the same array, even though elements in that array can change.
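A minimal sketch (names invented): each instance carries its own constant, assigned exactly once in the constructor:

class Ticket {
    private final int serialNumber;         // blank final: no value assigned here

    Ticket(int serialNumber) {
        this.serialNumber = serialNumber;   // must be assigned exactly once, in the constructor
    }

    int getSerialNumber() {
        return serialNumber;                // the same value for the life of this instance
    }
}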

Parameters as Final

You can declare parameters as final. Declaring parameters as final does not affect method overriding.

For example, method1(final int x, final int y) means the parameters x and y are constant throughout the method.

Final parameters have no impact on variables passed to methods. Because arguments are passed by value, changing them in a method would not have affected the original values in any case. But final arguments cannot be changed within the method.
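To illustrate (a hypothetical method fragment), reassigning a final parameter is rejected at compile time, though the caller's variable would have been unaffected either way because Java passes arguments by value:

void scale(final int factor) {
    // factor = factor * 2;            // would not compile: cannot assign to a final parameter
    System.out.println(factor * 2);    // reading and using the value is fine
}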

Static Initialization Block

Class (static) variables should not be initialized in a constructor, because a constructor runs once per instance. A useful way of initializing class variables is to use a static initialization block. It also saves memory, because only one copy of the static initialization block is stored for all instances of the class.

A static initialization block begins with the static keyword and is enclosed in braces. Here is an example (it assumes the class declares a static int field named charsInFile and imports java.io.FileReader):

static {
    try {
        charsInFile = 0;
        // Count every character in the source file when the class is first loaded.
        FileReader in = new FileReader("TestStaticBlock.java");
        while (in.read() != -1) {
            charsInFile++;
        }
        in.close();
    } catch (Exception e) {
        e.printStackTrace();   // report the problem instead of silently swallowing it
    }
    System.out.println("Finished static initialization block: " + charsInFile);
}

Methods in Interfaces

You do not have to declare methods in interfaces with an abstract keyword, since interface methods are implicitly abstract.

Methods declared in an interface body are all implicitly public, and they can be accessed outside the package in which the interface is declared as long as the interface itself is accessible. An interface can be declared private or protected only when it is nested within a class.
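A small sketch (the interface name is invented): the two method declarations below are equivalent, because interface methods are implicitly public and abstract:

public interface Printable {
    void print();                                  // implicitly public and abstract
    public abstract void printTo(String device);   // the same thing, written out in full
}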

Implications of Inheritance

In a subclass, you can create a new variable with the same name as an inherited variable; it does not have to be of the same type or class. The subclass then uses this variable instead of the inherited variable. This is known as hiding a variable.

Generally, you only need to hide superclass variables in rare situations where the generic class was not well defined, such as when maintaining someone else's code. Variable hiding is not recommended, as it can lead to confusing and ambiguous code.

Java makes a copy of each inherited superclass variable available to each object in a subclass, even if the variable is hidden. To access a specific data member within an inheritance hierarchy, you sometimes need to perform an explicit cast.

Overloading a method is often confused with overriding a method. Overloading means creating methods with the same name as an existing method but with different parameter lists. Overloading can happen within the same class or between a class and its subclasses.

Constructors are not automatically inherited, so subclasses do not automatically receive any constructor implementation code from their superclass, such as initialization code for variables. Java ensures that an object is constructed in the correct order, from its base to its subclass. This is called constructor chaining. If you do not explicitly call a constructor of a superclass, Java automatically calls the no-argument superclass constructor. If the superclass has no such constructor, the class will not compile.
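A hedged sketch of constructor chaining (class names invented):

class Vehicle {
    Vehicle() {
        System.out.println("Vehicle()");
    }
    Vehicle(String name) {
        System.out.println("Vehicle(" + name + ")");
    }
}

class Car extends Vehicle {
    Car() {
        // No explicit super(...) call, so Java inserts super() here:
        // constructing a Car prints "Vehicle()" and then "Car()".
        System.out.println("Car()");
    }
    Car(String name) {
        super(name);   // explicit chaining to Vehicle(String)
        System.out.println("Car(" + name + ")");
    }
}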

Late Binding vs Early Binding

Often, an object type cannot be determined at compile time and must be dynamically resolved at runtime. This runtime resolution of object types is called late binding of instance methods and makes polymorphism possible in Java.

Late binding enables Java to choose the right version of a called method at runtime, depending on the type of object that is created.

The opposite of late binding is early binding. In early binding, variables and methods are resolved at compile time. Early binding doesn't support polymorphism, and is used in non-OO languages.
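A minimal sketch (classes invented): the variable's declared type is Shape, but the method that actually runs is chosen at runtime from the object's real type:

class Shape {
    void draw() { System.out.println("drawing a generic shape"); }
}

class Square extends Shape {
    void draw() { System.out.println("drawing a square"); }
}

class LateBindingDemo {
    public static void main(String[] args) {
        Shape s = new Square();   // declared type Shape, runtime type Square
        s.draw();                 // resolved at runtime: prints "drawing a square"
    }
}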

UML - Introduction

UML stands for Unified Modeling Language, a family of graphical notations, backed by a single meta-model, that helps in describing and designing software systems, particularly software systems built in the object-oriented style.

A picture is worth a thousand words; that is why the UML has a place in OOAD. Among graphical notations, the UML's importance comes from its wide use and standardization within the OO development community. Graphical modeling languages have been around in the software industry for a long time. The fundamental driver behind them all is that programming languages are not at a high enough level of abstraction to facilitate discussions about design. Despite the fact that graphical modeling languages have been around for a long time, there is an enormous amount of dispute in the software industry about their role, and those disputes play directly into how people perceive the role of the UML itself.

The Object Management Group (OMG), also known for the CORBA (Common Object Request Broker Architecture) standards, manages the UML standard. The earliest UML was version 0.8, released in October 1995. Versions 0.9 and 0.91 followed in 1996, when the UML officially received its name. Version 1.0 was submitted in January 1997 and version 1.1 in September 1997, but it was only adopted by the OMG toward the end of 1997. UML 1.2 appeared in 1998, 1.3 in 1999, 1.4 in 2001, and 1.5 in 2002.

As the UML 1 series continued, the developers of the UML set their sights on a major revision with UML 2. The first request for proposals was issued in 2000, but UML 2 didn't start to stabilize until 2003. In October 2004, UML 2.0 was adopted. Now, version 2.1 is the most current.

There are total 13 types of diagrams in UML 2,
  • Activity - procedural and parallel behavior
  • Class - Class, features, and relationships
  • Communication - Interaction between objects, emphasis on links
  • Component - structure and connections of components
  • Composite structure - runtime decomposition of a class (since UML 2)
  • Deployment - deployment of artifacts to nodes
  • Interaction overview - mix of sequence and activity (since UML 2)
  • Object - example configurations of instances (unofficially in UML 1)
  • Package - compile-time hierarchic structure (unofficially in UML 1)
  • Sequence - Interaction between objects, emphasis on sequence
  • State machine - how events change an object over its life
  • Timing - interaction between objects, emphasis on timing (since UML 2)
  • Use case - how users interact with a system
Nobody knows all of these diagrams in depth, not even their creators. Among these 13 diagrams, use case, class, sequence, activity, and deployment are the most used. If time is a limited resource for you, master these five diagrams first.

On the other hand, although the UML provides quite a considerable body of diagrams to help define an application, it is by no means a complete list of all the useful diagrams you might want to use. In many places different diagrams can be useful, and you shouldn't hesitate to use a non-UML diagram if no UML diagram suits your purpose.

OOAD - Introduction

OOAD, which stands for Object Oriented Analysis and Design, is just one link in the chain of software/application development. Every piece of software or application has to respond to users' requirements. Usually, architects analyze these requirements and design a solution, developers implement the solution in a programming language, quality assurance staff test the implementation, and finally training staff turn the key over to end users.

Object Oriented (OO) is just one of the methodologies that architects can choose. It is popular now, but before it there were many other methodologies, e.g. function-oriented ones, which were successful and are still useful, especially where an OO programming language is not available or not suitable. However, OO does have some advantages over the others; code reuse is one of the major ones. For details, please google "Object Oriented" and "advantage".

Analysis is the interface between architects and end users, customers, and business analysts (collectively, users). A description of the problem or requirements must be created before anything else can happen. Requirements are really the responsibility of users; however, since most users are not trained in software engineering, this is where architects need to take some leadership (see, there IS a reason why you need to be a leader to be hired as an architect ^_^), gather stories from users, and create documents that describe and codify all requirements. That defines what the problem is about and what the software or application must do. Analysis emphasizes an investigation of the problem rather than how a solution is defined. For example, if a card game is desired in which players combine 4 cards to reach a result of 24 using +, -, x, and /, what are the business (or game) processes related to its use?

To develop an application, it is also necessary to have high-level and detailed descriptions of the logical solution and how it fulfills the requirements and constraints. Design emphasizes a logical solution: how the system fulfills the requirements. To illustrate, how exactly will a library system capture and record overdue loans?

In summary, the investigation of business processes is called requirements analysis, the investigation of user roles is called domain analysis, and the investigation of responsibilities and interactions is called responsibility and interaction design. In addition, OOAD emphasizes decomposing a problem space by objects rather than by functions, systems, and sub-systems.

Monday, September 18, 2006

Abstract Class or Interface

Developers are often confused by the choice between an abstract class and an interface. In design patterns such as Factory Method and Abstract Factory, abstract classes are used sometimes and interfaces at other times.

What exactly is the difference between an abstract class and an interface? One simple idea is the relationship each expresses. An abstract class suggests an "is-a" relationship, while an interface suggests an "is" (capability) relationship. For example, a Dog is an animal, and a Dog is priceable, which means it can have a price and participate in a market.
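Following that example, a minimal sketch (the Priceable interface is invented here for illustration):

abstract class Animal {                    // "is-a": a Dog is an Animal
    abstract void makeSound();
}

interface Priceable {                      // "is": a Dog is priceable
    double getPrice();
}

class Dog extends Animal implements Priceable {
    void makeSound() {
        System.out.println("Woof");
    }
    public double getPrice() {
        return 300.0;
    }
}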
Most introductory Java texts take an implementation-centric stab at how to use interfaces and abstract classes. However, few provide a clear design distinction for choosing between these two similar object-oriented constructs. This article investigates such a distinction through a discussion of a simple, yet common, example. The resulting design uses both interfaces and abstract classes to maximize flexibility and extendibility.
Please read the article Maximize flexibility with interfaces and abstract classes for a deeper understanding of their differences.

Polymorphism in Java

In simple terms, polymorphism lets you treat objects of a derived class just like objects of their parent class.

In more precise terms, polymorphism (object-oriented programming theory) is the ability of objects belonging to different types to respond to method calls of methods of the same name, each one according to an appropriate type-specific behaviour. The programmer (and the program) does not have to know the exact type of the object in advance, so this behavior can be implemented at run time (this is called late binding or dynamic binding).
Java developers all too often associate the term polymorphism with an object's ability to magically execute correct method behavior at appropriate points in a program. That behavior is usually associated with overriding inherited class method implementations. However, a careful examination of polymorphism demystifies the magic and reveals that polymorphic behavior is best understood in terms of type, rather than as dependent on overriding implementation inheritance. That understanding allows developers to fully take advantage of polymorphism.
Please read the article Reveal the magic behind subtype polymorphism for a deeper understanding of polymorphism.

Disable Directory Listing in Tomcat 5

For fresh Tomcat installations, directory listing is enabled by default. This can be a very useful debugging tool, and if, like me, you sometimes forget what servlets are deployed in a certain web application, you can get a complete listing by simply keying in the web application's URL.

But for production deployments, you may want to turn it off. If nothing else, it discourages users from poking around where they should not. There are basically two methods of "turning off" this option:

1. Create an index.html file and place it in the web application's directory
2. Edit the global web.xml file to turn off the option.

The first option is fairly simple, so we shall only examine the second option.

Open the file web.xml which is located inside $CATALINA_HOME/conf/. This is the global web.xml file, which means that any changes here will affect ALL web applications deployed by that Tomcat instance. If you want more granular control, like turning it off for certain applications but not for others, you will need to go with the first option of creating index.html files.

Change the param-value of the listings init-param (under the default servlet definition) from true to false and you turn off directory listing. It is that simple.

Context Descriptor in Tomcat 5

The context descriptor file, according to the Tomcat official documentation, is "used to define Tomcat specific configuration options, such as loggers, data sources, session manager configuration and more".

The file follows an XML syntax, and the name of the file is always the name of the web application, with a .xml extension. So, for this application, called MyFirst, the name of the context descriptor is MyFirst.xml.

Create a file called MyFirst.xml with the following contents:

<Context path="/MyFirst" docBase="MyFirst" debug="0" reloadable="true"/>


Save the file into $CATALINA_HOME/conf/Catalina/localhost/ directory.

With much older versions of Tomcat, such as the early Tomcat 3.x series or 4.0.x series, you had to add the context definitions inside server.xml. With Tomcat 5.x, the context descriptor provides a cleaner separation of web application configuration and the main Tomcat server configuration. An added benefit is that web applications deployed in this way do not require a stop and restart of the Tomcat server process. Tomcat should automatically pick it up while it is still running.

One of the things I really enjoy about Open Source software is that you can sometimes get useful insights from people smarter and more experienced than yourself. I had an interesting discussion with Josh Rehman on the relative merits of deploying web applications using the server.xml method or using the context descriptor method.

Josh's position is that the context descriptor method should become the canonical method for web application deployment for many reasons: the unreliability of server.xml edits propagating through the server, and the difficulty of removing those contexts that are already deployed.

I had not considered that position before, probably because I do not run Tomcat in a high volume, mission-critical environment. Things are different in the little corner of Asia where I stay and work. The traffic is much lower and you can pretty much reboot the server anytime you wish. So bringing down the Tomcat server process to add, modify or delete a context is feasible.

If, however, you have responsibilities for a large deployment of Tomcat servers, or just a Tomcat server running in a high volume environment, the game changes fundamentally. You will need something that allows for "on-the-fly" changes, and more importantly, you need a clear separation between server configuration parameters shared by all applications, and configurations for each individual web application. Although there are merits in keeping all configuration in one place, when you are pressed for time, you don't want to wade through an ultra-long configuration file to get at the parts you want to change or delete. I learned that painful lesson when adding a CD-RW drive to a running web server recently.

Saturday, September 16, 2006

Enable Apache Portable Runtime in Tomcat 5

If the Apache Portable Runtime is not enabled in Tomcat 5, Tomcat still runs, but it logs the following messages.

On startup you will see:

INFO main org.apache.catalina.core.AprLifecycleListener - The Apache Portable Runtime which allows optimal performance in production environments was not found on the java.library.path:


And then on shutdown:

INFO main org.apache.catalina.core.AprLifecycleListener - Failed shutdown of Apache Portable Runtime

Do the following to enable APR.
  • Install apr packages using #yum -y install apr apr-util apr-devel apr-util-devel.
  • Download tomcat5 in tar.gz format and install tomcat5.
  • > cd /bin
    > gunzip tomcat-native.tar.gz
    > tar -xvf tomcat-native.tar
    > cd tomcat-native-/jni/native
  • >./configure --with-java-home= --with-apr=/usr/bin/apr-1-config
    >make
    >make install (switch to root)
  • > cd /jre/lib/i386
    > ln -s /usr/local/apr/lib/libtcnative-1.a libtcnative-1.a
    > ln -s /usr/local/apr/lib/libtcnative-1.so libtcnative-1.so
    > ln -s /usr/local/apr/lib/libtcnative-1.so.2 libtcnative-1.so.2
    > ln -s /usr/local/apr/lib/pkgconfig/ pkgconfig
And now Tomcat doesn't complain anymore. The Tomcat log file also states something new:

INFO main org.apache.coyote.http11.Http11AprProtocol - Starting Coyote HTTP/1.1 on http-8180

Friday, September 15, 2006

Handy Reference of Java Operators

Simple Assignment Operator
= Simple assignment operator

Arithmetic Operators
+ Additive operator (also used for String concatenation)
- Subtraction operator
* Multiplication operator
/ Division operator
% Remainder operator

Unary Operators
+ Unary plus operator; indicates positive value (numbers are positive without this, however)
- Unary minus operator; negates an expression
++ Increment operator; increments a value by 1
-- Decrement operator; decrements a value by 1
! Logical complement operator; inverts the value of a boolean

Equality and Relational Operators
== Equal to
!= Not equal to
> Greater than
>= Greater than or equal to
< Less than
<= Less than or equal to

Conditional Operators
&& Conditional-AND
|| Conditional-OR
?: Shorthand for an if-then-else statement (the ternary operator)

Type Comparison Operator
instanceof Compares an object to a specified type

Bitwise and Bit Shift Operators
~ Unary bitwise complement
<< Signed left shift
>> Signed right shift
>>> Unsigned right shift
& Bitwise AND
^ Bitwise exclusive OR
| Bitwise inclusive OR
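A few of the less obvious operators in action (a small sketch; the statements belong inside a method and the variable names are arbitrary):

Object o = "hello";
boolean isString = (o instanceof String);   // type comparison: true
int larger = (3 > 5) ? 3 : 5;               // conditional ?: chooses 5
int shifted = -8 >>> 1;                     // unsigned right shift: a large positive number
int masked = 0xF0 & 0x3C;                   // bitwise AND: 0x30
System.out.println(isString + " " + larger + " " + shifted + " " + masked);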

Thursday, September 14, 2006

Install Struts

yum install tomcat5-admin-webapps

Chinese Input on Linux with SCIM

Try #yum install scim-pinyin to enable Simplified Chinese input on Linux.

Then run #scim-setup to configure SCIM.

Under Panel > GTK, select ToolBar.

Tuesday, September 12, 2006

How can I install Flash in my web browser?

Create a file macromedia-mplug.repo, insert the following into the file and put it under /etc/yum.repos.d.
[macromedia]
name=Macromedia for i386 Linux
baseurl=http://macromedia.rediris.es/rpm/
enabled=1
gpgcheck=1
gpgkey=http://macromedia.mplug.org/FEDORA-GPG-KEY
Run as root, # yum install flash-plugin.

Restart Firefox browser. Done.

Monday, September 11, 2006

How to install Accounting Application (GnuCash)

  • Read #General Notes
  • Read #How to add extra repositories
  • yum -y install gnucash
  • rm -fr /usr/share/gnome/apps/Applications/
  • gedit /usr/share/applications/GnuCash.desktop
  • Insert the following lines into the new file
[Desktop Entry]
Name=GnuCash
Comment=GnuCash Personal Finance
Exec=gnucash
Icon=/usr/share/pixmaps/gnucash/gnucash-icon.png
Terminal=false
Type=Application
Categories=Application;Office;
  • Save the edited file
  • Applications -> Office -> GnuCash

Enable Java on Firefox on Linux

On Linux, Firefox requires JRE 1.4.2 or later.

Firefox is compiled with gcc 3.2.3, so a compatible version of the Java plugin must be used. JRE 1.4.2 contains a compatible plugin.

If you installed the JRE 1.4.2_01 RPM, this plugin is /usr/java/j2re1.4.2_01/plugin/i386/ns610-gcc32/libjavaplugin_oji.so and to install it for Firefox, do the following:

1. Open a terminal
2. Change to your Firefox plugins directory
3. Issue the following command: ln -s /usr/java/j2re1.4.2_01/plugin/i386/ns610-gcc32/libjavaplugin_oji.so

If you are using an older Linux distribution, you may need to install the gcc3 support libraries, as the gcc 3.2 version of the Java plugin requires libgcc_s.so.1 to operate. You may be able to find packages using Google.

If you are using an old or unofficial build of Firefox, you can check which compiler was used by entering about:buildconfig in the location bar and pressing enter. You will see a line such as gcc version 3.3 20030226 (prerelease) (SuSE Linux), which will show the compiler that was used. If gcc2.9x was used, you need to use the ns610 plugin, not the ns610-gcc32 plugin.

###### How I did it. ######

I downloaded the JRE 1.5 RPM package from the Sun web site and installed it using rpm -ivh. The JRE is installed under /usr/java/jre1.5...

I found Firefox installed under /usr/lib/firefox-... I changed to Firefox's plugins directory and made the symbolic link:

ln -s /usr/java/jre1.5.0_06/plugin/i386/ns7/libjavaplugin_oji.so

Restart Firefox. Applets then work in the browser.

Note: if an applet brings down the browser, the wrong .so has been linked. Remove the symbolic link and Firefox will go back to normal.

pgAdmin3

Try "yum install -y pgadmin3" to install pgAdminIII.

The installation has completed successfully. Now I need to learn how to use it.

Friday, September 08, 2006

Run Memtest86+

I installed memtest86+ on my Fedora Core 5 box yesterday.

# yum install -y memtest86+.i386

After that, I ran memtest-setup to insert it into the GRUB menu.

Then I rebooted the box and selected memtest86+. It found 2500+ errors in the 512MB PC133 RAM.

Too bad; I decided to remove that RAM from the box.

Thursday, September 07, 2006

FYI - Punctuation Marks Confusing to Chinese

{} - braces or curly brackets.
[] - brackets or square brackets.
<> - angle brackets.
() - parentheses (singular: parenthesis) or curved brackets.
/ - slash.
\ - back slash.
& - ampersand.
` - backtick or grave accent.
! - exclamation mark or exclamation point.
- - dash or hyphen.
* - asterisk.
^ - caret.
~ - tilde.
| - vertical bar or pipe.

How constructors differ from methods

To learn Java, you must understand constructors. Because constructors share some characteristics with methods, it is easy for the Java beginner to confuse them. However, constructors and methods have important differences. If you are at all uncertain about this fundamental Java point, you should read this introductory-level tutorial.
By Robert Nielsen
http://www.ccnyddm.com/AdvJava/java_constructor_tutorial.htm

Wednesday, September 06, 2006

null in Java

In Java, null is defined as a reference literal, a reserved word. It looks like a keyword, but technically it is not one.

The reason is actually simple. Let's get back to programming fundamentals. A keyword is defined in a programming language so that it is special to the compiler. To the Java compiler, null is just a literal, and thus null is not a keyword. The same goes for true and false: they are boolean literals rather than keywords.

The following quote comes from Sun documents. It will help you understand more.
There is also a special null type, the type of the expression null, which has no name. Because the null type has no name, it is impossible to declare a variable of the null type or to cast to the null type. The null reference is the only possible value of an expression of null type. The null reference can always be cast to any reference type. In practice, the programmer can ignore the null type and just pretend that null is merely a special literal that can be of any reference type.
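A small sketch of that behavior (the statements belong inside a method):

String s = null;                     // null can be assigned to any reference type
Object o = (Object) null;            // it can also be cast to any reference type
// int n = null;                     // would not compile: null is not a primitive value
System.out.println(s == null);       // prints "true"
System.out.println(o == null);       // prints "true"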

Work Around with Bad RAM

Wed Feb 2 07:12:38 2005 Badram, Badmem, and Memtest86.bin Posted by Drag
Search Keys: BadRAM BadMEM Linux kernel memtest memtest86
Referencing: http://rick.vanrein.org/linux/badram/

Well, while messing around on my computer I noticed that every once in a while a program would just up and die all of a sudden for seemingly no reason. Then I noticed that when compiling big jobs my GCC compiler was segfaulting an awful lot.

So I know that when you get almost-random stuff going wrong like that, and you know you're using what should be a fairly stable OS, the likely culprit is going to be flaky hardware. And of all flaky hardware, the thing I hate the most is bad RAM modules, so that's what was most likely wrong.

So since it was debian I downloaded and installed memtest86 by typing into the console: apt-get install memtest86

Memtest86 is a very nice memory testing program for x86 machines. If you have a problem with memory hardware then this guy will find it. It'll check the L1 and L2 cache, it will check your memory modules and anything else that ends up as 'RAM' in your system.

How it works is that it boots up your computer, finds all the available memory, and then uses different patterns of bits and copies them from memory address to memory address in different fashions. These are 'tests', and it performs several of them on your computer. It takes a while to complete the entire battery of tests, and once it's finished it simply starts over again at test one. It's best to let it run for a few hours because memory problems can be very intermittent.

If it finds any errors then it will tell you what memory range the test failed at.

Memtest86 is GREAT if you're building a new computer and need to test the RAM. This is especially important with AMD64 machines and their touchy on-CPU-die built-in memory controllers. Sometimes reseating the RAM can fix problems, some memory sticks work in some motherboards and not others, sometimes simply moving the sticks to different slots will fix problems, or other times you need to underclock the machine to get it stable. Often you just get bad RAM and it needs to be replaced.

Well when apt-get installed memtest86 it copied it to my /boot directory and called it "/boot/memtest86.bin", then it modified my /boot/grub/menu.lst grub configuration file and added this entry:

title Debian GNU/Linux, kernel memtest86
root (hd0,0)
kernel /memtest86.bin
savedefault
boot

That way I could simply reboot, select the memtest86 entry in my grub boot-time menu and then the program would run.

However this won't work for all machines. There are several ways to run memtest86. For windows machines you can use a floppy image and make a bootable floppy with dd or rawrite. They also have cdrom ISO images you can use to make bootable cdroms from.

All this and very good documentation can be obtained from the memtest86 homepage

So I rebooted the desktop, selected memtest86 entry and let that go for a couple hours.

As it turns out the main node had a clearly bad section of RAM! Now this sucked because I had a gig of ram in that machine and to fix it normally I would have to toss away (since any warranty on them is long-gone) a 512meg memory module and that's pretty expensive for me to do.

(That'll teach me to be sure to use a anti-static grounding bracelet in the future when I assemble machines.)

Normally this would be my only choice, but with Linux there are a couple tricks you can do to get a perfectly stable machine with a RAM modules that has clearly one bad section, and that's it. I wouldn't do it with a production server, but with my little home desktop so it doesn't make much of a difference. Plus it was just a small section that was bad and no other issues as far as I could tell.

A couple of the tricks revolve around kernel patches called BadRAM and BadMEM. Out of the two, BadMEM provides a lot of features and such, but BadRAM seemed simple and 'good enough'. (BadMEM was originally based off of BadRAM).

Basically, how they work is that they take the bad section of RAM and make it part of protected kernel memory space. This makes sure that no programs will accidentally access it, so that particular section of the RAM module might as well not exist. It's a surprisingly effective and safe fix, and it only adds a couple dozen lines of code to the kernel.

The downside is that if the bad section of RAM is naturally occupied by the kernel at boot time, then you're probably out of luck, because it will corrupt the kernel and probably make your system unbootable. Sometimes you can work around it by moving the memory modules around, or by building a very small kernel with lots of modules instead of built-ins.

So I rebooted back into Linux, downloaded the patch for my specific kernel, built it (took a couple of tries) and rebooted into memtest86. My particular version (not sure if it's part of all memtest86 versions) has the ability to change its error output from simply stating the affected memory space to printing it out in a form that I can easily use as BadRAM-patched kernel parameters.

After about 5 minutes of running memtest86 spit out: badram=0x13495568,0xfffffffc
then
badram=0x13495568,0xfffffff,0x13495568,0xfffffffc
then
badram=0x13495568,0xfffffff,0x13495568,0xfffffffc,0x13495568,0xfffffffc
and so on and so forth. I let it run for another 45 minutes or so, but it didn't report any other bad sections so I rebooted.

In grub I hit "e" to edit my menu entry, selected the kernel line, hit "e" again, then modified my kernel entry from this:
kernel /vmlinuz-2.4.22-1.2199.nptl-ssi-686-smp devfs=mount hdb=ide-scsi hdc=ide-scsi root=/dev/hda2 ro
to look like this:
kernel /vmlinuz-2.4.22-1.2199.nptl-ssi-686-smp devfs=mount badram=0x13495568,0xfffffffc hdb=ide-scsi hdc=ide-scsi root=/dev/hda2 ro

I hit 'return' and then 'b' to boot. Once it booted up I made the change permanent by editing my boot config at /boot/grub/menu.lst and now I have a perfectly stable machine once again.

I figure that this would be especially useful for older machines that you may use for a firewall, a simple e-mail server, or something like that that may have become unstable due to memory errors. Or maybe if you have a Intel Pentium III (or was it 4?) that has the RAMBUS style ram that is incompatible with the much more common (and cheap) sdram or ddr sdram types.

February 2, 2005

Please note that there are two sites featuring a memory test utility known as memtest: the one you mentioned, http://www.memtest86.com and its cooperative competitor, memtest86+ at http://www.memtest.org . The latter is based on the former, with improvements and bug fixes that are released more frequently. The current version of memtest86+ is 1.50, released in Jan '05. Both memtest86 and memtest86+ are O/S independent. You should mention that the BIOS memory test offered by most PCs is virtually worthless at detecting bad memory, unless it's completely absent. Also note that other Unixes have their own method of mapping available memory at boot time. SCO Openserver allows you to use the mem= option at the boot: prompt to include or exclude ranges of memory for use by the system. And finally, many quality brands of memory come with lifetime warranties. The definition of "lifetime" varies by vendor, with Kingston in my opinion having the most liberal policy of them all: if it fails, they'll replace it, even if the product is obsolete. No proof of purchase required. Amazingly enough, I've never had to make use of their warranty after all these years. On the other hand, every single piece of no-name SDRAM we purchased 4 years ago failed within 18 months.

Bob

February 2, 2005

How stable have things been, since you have mapped the badram? The Kingston memory policy of replacing, no questions asked, is very nice. I will certainly consider Kingston when I purchase my next bunch of RAM, which is coming up shortly, as I piece together another server for home.

BruceGarlock

February 3, 2005

Things are pretty stable. I haven't noticed any more issues with things crashing randomly, and I've been playing around with scripts and stuff that do a good job of thrashing the CPU and file system. Also I ran a memtest utility that was installed as part of Debian's sysutils package and it ran for a couple of hours without finding any problems. I don't think it does as good a job as memtest86, but it runs on the OS so I can test the memory with the badram mapping working. If I had the chance of changing out the memory or sending it back to the manufacturer, I would much prefer that. But I don't remember even where I bought the stuff from. I never tried anything like mapping around errors before, but it seems to work pretty well so far. At http://cr.yp.to/hardware/ecc.html D. J. Bernstein's homepage, he talks quite a bit about the advantages of using ECC RAM to protect against errors, and I think that I agree with him for the most part. Especially with these AMD64 machines, as I seem to have more issues with them, for some reason (could just be me). With memory sizes going up past 1-2 gigs, that's a lot that can go wrong. Also as a side note he has good advice on building a high quality, but inexpensive workstation http://cr.yp.to/hardware/advice.html.

Drag

February 4, 2005

Hmmm. Interesting links. Although this is off-topic, I am hopefully going to build a SCSI HD enclosure this weekend, with parts from http://www.scsisource.com/scsi_enclosure_cables/ - I plan on writing an article. I am using an old tower to house the SCSI drives, and it is basically stripped of everything but the power supply and motherboard (without CPU). I have a bunch of drives that I would retire, after upgrading to a larger drive. Most of them are 9.1GB U160 drives, so I would like to put them together on a software RAID-0 array, and use it as a staging area for video editing, and images. After that project, I plan on building another tower, based on SCSI drives, and an AMD CPU. It looks like I will try to purchase Kingston memory after reading about their warranty, and no hassle policy above. I am currently searching for the perfect motherboard. I would like to find something that has plenty of PCI slots, and I would prefer at least one slot to be 64-bit, since my Adaptec card supports 64-bit PCI slots (backwards compatible with regular 32-bit slots). I know SCSI is still expensive, but the performance is stellar, and the fact that you can add so many drives to one controller is a plus. I have not decided on a Linux distro yet. I may try Xandros, which is Debian based, or Fedora Core 3. I plan on running some commercial software on it, like BackupEdge, MainActor (video editing, although Kino is really coming along nicely), and possibly a few other commercial apps. I have yet to get my feet wet with Debian. I have heard great things about it. I wonder what distro Linus runs? Maybe it's all homegrown for him. I can see him running something like Gentoo. Good thing your system is now stable. My current tower is in need of a new CPU fan. I would wake up every morning to the CPU alarm, from being overheated. Good thing it is winter. I have the tower in my basement, where it is about 15-20 degrees cooler than the rest of the house. Currently, I have the cover open, and the tower at a 45 degree angle, because for some reason, the fan will only operate at that angle :-) Looks pretty bad, but it works. I don't know why I just don't replace the fan - they are not expensive :-) Maybe this weekend, if I need to run out to get any spare parts for my other project :-)

BruceGarlock

February 4, 2005

You mean what distro Linus runs? I think that last time I heard he was using Fedora. But then somebody said that he was using a PowerPC dual-G5 setup, so Fedora's PPC port isn't that hot. I would guess Fedora if push comes to shove. Doesn't really matter. Personally I LOVE Debian. It's the best, in my opinion. But I am open to other ones. Also FreeBSD is nice.. actually the documentation is VERY nice compared to most Linux setups. I use Debian Testing (also called Debian Sarge) on both my main machines. I have an iBook that runs the PPC version of Debian and my main desktop is now a cluster. :P It's an OpenSSI-based cluster based on Debian Testing with a heavily patched Fedora Core 1 kernel. It's made up of three machines, but I only really use 2 right now. The main node, called "spock", has the 1 gig of RAM that I talked about in the report above. It has a 2400+ AMD processor, 1 gig of DDR RAM, an 80 gig hard drive and 2 NIC cards. The motherboard is a Biostar (bleh, I don't like that brand so much, next time I'm getting Asus) Via KT600 chipset based setup. The secondary node is called "alabama" and has the same CPU and motherboard with 256 megs of RAM. The hard drive setup is kinda unique. This is all very experimental for me, so I am learning as I go. It has 3 120 gig 7200rpm hard drives. 1 is an older WD drive with 8 meg cache that goes through the onboard ATA100 (or ATA133?) controller. The other two attach through a SATA (ATA150, I believe) SiS PCI to IDE adapter. They are set up in a software RAID 5 array, and on that I run an LVM volume group. Before setting up the LVM stuff I ran the array as one big ext3 formatted file system to see what performance advantage it would have over a single drive. Since the computer was running on a shared ROOT partition on the other computer I was free to format and try different setups with different block sizes and ext3 raid optimizations.

Basically the read/write performance advantage is non-existent over a single drive, unfortunately. It may actually be slightly less. Although one hard drive has a very different interface (PATA vs SATA) from the other two, they are still your basic 8 meg cache, 7200rpm, 120 gig drives and individually perform very close to one another. (The WD is actually in between the two Matrox drives. They were both bought at the same time, from the same place, and are the same model, so it goes to show you the variations in production.) I think there is a potential performance increase, but it's negated by the overhead from parity... As for RAID 0, I didn't try that. But I've read at storagereview.com and anandtech.com that there is very, very little, if any, performance advantage for just 2 drives. Now this is all generic IDE drives running on cheesy software-driven proprietary drivers with onboard controllers that Windows 'overclocker' types use, so I don't know about a software-based RAID 0 setup with just two drives. I'm definitely no expert, and this is just my personal experience (and very limited at that), but I think that going with a RAID 0 array may be a mistake. I'd try it out and benchmark it and such, but I don't think that it's going to be very nice.

Now I'd understand that if you're working with large files you'd want all that disk space as scratch space, but you could get the same thing with Linux's LVM stuff. It's fairly risky because you're not going to get any redundancy, and if one drive blows out then the information on both drives is probably worthless after that, but it's the same thing with RAID 0. The advantage as I see it of using LVM over RAID 0 is that it would be very easy to add and subtract extra drive space. You can resize partitions and such, and make them span across multiple drives and RAID arrays. So if you're working on something that needs more disk space than you have, you can just run down to the store, buy an extra 250 gig drive, and slap it in. Then you modify the volume group to include the new drive, or a partition off the drive, and then expand your logical volume (your 'partition') to fill up the newly added space. This isn't safe, of course, from one of your drives failing, but it's perfectly fine for situations where maximum disk space is preferable over redundancy and such.

You've mentioned Kino and MainActor. There is an application that I've been meaning to play around with, but I haven't been able to because I don't have a DV camera and whatnot (need to borrow one from my brother). It's called Cinelerra. It's designed to be a "professional" level non-linear video editor. It seems very nice, but it's very new-user unfriendly. They tell you point blank on the website that if you're looking for an application for editing your home movies, you're looking in the wrong place. It's at: http://heroinewarrior.com/cinelerra.php3 From what I can tell it's very effective though. They have some demo movies you can download; they are in a special QuickTime format that they've developed specifically for this application, so you need to download their special video player and codec, but they have a motorcycle one that is very nice looking... Well, at least to me. (I have no experience in these matters.) It's a full 60fps and they have some very slick editing things going on. All very nice looking with no visual artifacts or blemishes from what I can tell. Smooth. They also have some other applications to go along with it that may be useful for things other than just video editing. They have a thing called 'firehose' that combines multiple network interfaces into a special-purpose data pipe. You can't really network over it, but it's just for moving data from one computer to another very fast. They also have the ability to support special-purpose render nodes to help take the load off the main computer when doing effects and such. They say that it's capable of running real-time effects on HD-sized video, which I think seems pretty fantastic. This company sells workstations and clusters and such that run Cinelerra and has the ability to support many different applications. It's at: http://www.lmahd.com/cinelerra.html VERY expensive. But I think it's worth looking at the hardware they are using. Mostly IBM stuff, but a person that can build their own computer can probably put together something similar. Of course this is all for state-of-the-art stuff. I don't expect normal TV resolutions to need so much grunt work. Now remember I am pretty ignorant about video editing and such, and I don't even own a camera, but this is something I would really like to play around with. With Debian they have pre-compiled binaries that work, and they have some for other distributions. If you want to use MainActor, you could probably make it work in any distro with some massaging, but they have specific versions for Mandrake and SuSE, I think. But most any distro will work fine; it's just up to your personal preference as far as I can tell (which isn't very far). Also there are many other Linux applications for this purpose. Apple has "Apple Shake", which has a Linux version; however, the Linux version is 5000 dollars vs the OS X version which is 3000 dollars. Then there is "Smoke", which is some absolutely horrifically expensive setup that includes a special IBM/Linux machine. And there are 3-4 others that are much more reasonably priced, which I can't remember right now. MainActor and Kino look more like my speed right now though. ;) (MainActor has a demo version for Linux you can try out, btw. But personally I prefer Free applications whenever possible.)

Drag