Process Model – Oracle iPlanet Webserver – Connection Handling

Connection-Handling Overview

Acceptor threads accept connections and place them in the connection queue. Request-processing threads (workers) pick up connections from the connection queue and service the requests.

This differs from the more common design in which an acceptor thread either assigns the request directly to a worker thread or waits until a worker becomes available, i.e., the connection is bound to a thread.

One of the daemon session threads (also called worker threads) pulls the request from the connection queue and starts processing it. After the request is served, if the client has asked for the connection to be kept alive, the connection is handed to the keep-alive poll manager. Keep-alive threads poll all keep-alive connections; whenever a new request arrives on such a connection, a keep-alive thread puts the connection back into the connection queue. If no further request arrives on the connection within a certain period (the keep-alive timeout), the keep-alive threads close the connection.
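The hand-off described above can be sketched with a small conceptual model (this is not iPlanet source code; the queue size and thread counts are illustrative):

```python
import queue
import threading

# Bounded queue, analogous to the server's connection queue.
connection_queue = queue.Queue(maxsize=128)
results = []

def acceptor(conn_source):
    # The real acceptor blocks in accept(); here we just iterate over a list.
    for conn in conn_source:
        connection_queue.put(conn)   # hand off: the connection is not bound to a thread

def worker():
    while True:
        conn = connection_queue.get()
        if conn is None:             # shutdown sentinel
            break
        results.append(f"served {conn}")  # stand-in for request processing
        connection_queue.task_done()

workers = [threading.Thread(target=worker) for _ in range(4)]
for w in workers:
    w.start()
acceptor([f"conn-{i}" for i in range(8)])
connection_queue.join()              # block until every connection is serviced
for _ in workers:
    connection_queue.put(None)
for w in workers:
    w.join()
print(len(results))                  # -> 8
```

The key property mirrored here is that any free worker can pick up any queued connection; no connection is pinned to a particular thread.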

 
 
Acceptor Threads
Acceptor threads are responsible for accepting connection requests from clients. All new incoming HTTP connections are handled first by acceptor threads; once accepted, the connections are placed in the connection queue.

Since accepting a request is not resource intensive, only a few threads are needed to handle incoming connections. They are usually configured to match the number of processors.

 
Connection Queue
The connection queue is the intermediate structure used to pass connections from acceptor threads to worker threads. Because multiple threads access it concurrently, it is a thread-safe data structure.

When all worker threads are busy processing requests, subsequent requests wait in the connection queue. The queue is typically sized at many times the number of worker threads.

 
Worker Threads
Worker threads are the workhorses of the Web Server: they are the request-processing threads. A worker thread runs the request through the various request-processing stages and sends the response back to the client.

After serving a request on a keep-alive connection, the worker thread checks whether another request is ready to be processed on the same connection. This avoids the latency of passing the connection to the keep-alive subsystem only to have it passed back to a worker thread.

If no request is readily available on a persistent connection, the worker thread deems the connection idle and passes it to the keep-alive threads for monitoring.

Keep-Alive Threads
Keep-alive threads manage idle HTTP connections. An HTTP connection is deemed “idle” if the HTTP request data isn’t readily available to be read by the server. Keep-alive threads periodically monitor all the idle connections for data.

When data is detected on a connection, the keep-alive thread passes the connection onto a worker thread, which then processes the request.
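This monitor-and-requeue behavior can be illustrated with the selectors module (a conceptual sketch, not the server's implementation):

```python
import selectors
import socket

sel = selectors.DefaultSelector()
ready_for_worker = []                 # stands in for the connection queue

# socketpair() gives a connected pair: `conn` plays the idle keep-alive
# connection being monitored, `peer` plays the client on the other end.
conn, peer = socket.socketpair()
sel.register(conn, selectors.EVENT_READ)

peer.sendall(b"GET / HTTP/1.1\r\n")          # a new request arrives on the idle connection
for key, _ in sel.select(timeout=1):         # the keep-alive thread polls for data
    sel.unregister(key.fileobj)
    ready_for_worker.append(key.fileobj)     # hand the connection back to a worker

print(len(ready_for_worker))                 # -> 1
conn.close(); peer.close(); sel.close()
```

A single polling thread can watch many idle connections this way, which is why idle connections are cheap compared to ones pinned to a worker.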

 
 

The Native Thread Pool

The native pool contains a pool of threads that the Web Server creates and uses for running NSAPI functions that require a native thread for execution.

Web Server uses Netscape Portable Runtime (NSPR), which is an underlying portability layer that provides a platform-neutral API for accessing operating system facilities and services such as threads, synchronization, time, I/O, network addresses, and memory management.

This layer provides thread abstractions that are not always the same as the OS-provided threads. These non-native threads have lower scheduling overhead, so their use improves performance. However, they are sensitive to blocking calls to the OS, such as I/O calls: an NSAPI function that makes a blocking system call should not execute on a non-native thread, because doing so can prevent all the other non-native threads from executing.

To make it easier to write NSAPI extensions that can make use of blocking calls, the server keeps a pool of threads that safely support blocking calls. These threads are usually native OS threads. During request processing, any NSAPI function that is not marked as being safe for execution on a non-native thread is scheduled for execution on one of the threads in the native thread pool.
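The scheduling idea can be sketched as follows: work that may block is submitted to a dedicated pool of OS threads, and the submitting thread collects the result (illustrative only; the server does this internally in C):

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Stands in for the native thread pool: real OS threads that may safely block.
native_pool = ThreadPoolExecutor(max_workers=2)

def blocking_saf():
    time.sleep(0.05)        # stand-in for a blocking system call (e.g. file I/O)
    return "done"

# A function not marked safe for non-native threads is scheduled on the pool;
# the caller picks up the result when the blocking call completes.
future = native_pool.submit(blocking_saf)
print(future.result())      # -> done
native_pool.shutdown()
```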

By default, SAFs in a plug-in are scheduled to execute on a native thread. The NativeThread parameter to the load-modules function in magnus.conf is used to specify the threading model to use when executing the SAFs in a plug-in.

If you have written your own NSAPI plug-ins, such as NameTrans, Service, or PathCheck functions, these execute by default on a thread from the native thread pool. If your plug-in uses the NSAPI functions for I/O exclusively, or does not use the NSAPI I/O functions at all, it can execute on a non-native thread. For this to happen, the function must be loaded with the NativeThread="no" option, indicating that it does not require a native thread.

For example, add the following to the load-modules Init line in the magnus.conf file:

Init fn="load-modules" shlib="/plugins/custom/myplugin.so" funcs="test_function,my_function" NativeThread="no"

The NativeThread flag affects all functions in the funcs list, so if you have more than one function in a library, but only some of them use native threads, use separate Init lines. If you set NativeThread to yes, the thread maps directly to an OS thread.
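For instance, a library containing both kinds of functions might be loaded with two separate Init lines (the function names here are hypothetical):

```
Init fn="load-modules" shlib="/plugins/custom/myplugin.so" funcs="blocking_fn" NativeThread="yes"
Init fn="load-modules" shlib="/plugins/custom/myplugin.so" funcs="nonblocking_fn" NativeThread="no"
```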

 
 

Custom Thread Pools

By default, the connection queue sends requests to the default thread pool. Custom thread pools can also be created with a thread-pool Init function in the magnus.conf file. These custom thread pools are used for executing NSAPI Server Application Functions (SAFs), not entire requests.
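The Init function in question is thread-pool-init; a minimal magnus.conf entry might look like the following (the pool name and sizes are illustrative):

```
Init fn="thread-pool-init" name="my-custom-pool" maxthreads="5" minthreads="1" queueSize="200"
```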

If a SAF requires a custom thread pool, the current request-processing thread queues the request, waits until a thread from the custom pool completes the SAF, and then completes the rest of the request itself.

For example, the obj.conf file contains the following:

NameTrans fn="assign-name" from="/myapp" name="myapp" pool="my-custom-pool"
...
<Object name="myapp">
         ObjectType fn="force-type" type="magnus-internal/myapp"
         Service fn="wl_proxy" WebLogicCluster="MS1:8010,MS2:8010,MS3:8020" pool="my-custom-pool2"
</Object>

In this example, the request is processed as follows: 

  1. A thread from the default pool (T1) picks up the request and executes the request-processing steps up to the NameTrans directive.
  2. If the URI starts with /myapp, T1 queues the request to my-custom-pool and goes into a wait state.
  3. A thread in my-custom-pool (T2) picks up the request queued by T1, completes the execution of the NameTrans directive, and returns to its wait state.
  4. T1 wakes up and continues processing the request. It executes the ObjectType directive and moves on to the Service function.
  5. Since a custom pool is configured for the Service function, T1 queues the request to my-custom-pool2.
  6. A thread from my-custom-pool2 (T3) picks up the queued request, completes the Service directive, and goes back to its wait state.
  7. T1 wakes up and finishes processing the request.

Three threads (T1, T2, and T3) work together to complete the processing of the request.

This behavior is somewhat similar to an event-based process model, where a request is handled by different threads depending on the stage of the request-processing lifecycle.
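The queue-and-wait hand-off between the request thread and a pool thread can be sketched as follows (a conceptual model; names are illustrative):

```python
import queue
import threading

pool_queue = queue.Queue()   # stands in for the custom pool's work queue

def pool_thread():
    # T2: wait for queued work, execute the SAF, then wake the requester.
    while True:
        work = pool_queue.get()
        if work is None:      # shutdown sentinel
            break
        fn, done, out = work
        out.append(fn())      # execute the directive on the pool thread
        done.set()            # wake the waiting request thread

t2 = threading.Thread(target=pool_thread)
t2.start()

def run_on_pool(fn):
    # T1: queue the work, sleep until the pool thread completes it, resume.
    done, out = threading.Event(), []
    pool_queue.put((fn, done, out))
    done.wait()
    return out[0]

result = run_on_pool(lambda: "NameTrans complete")
pool_queue.put(None)
t2.join()
print(result)                 # -> NameTrans complete
```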

 
 

Process Modes

iPlanet web server can run in one of the following modes:
            Single-Process Mode 
            Multi-Process Mode

Single-Process Mode
In single-process mode, the server receives requests in a single process, and all threads run in that process. Acceptor threads wait for new requests to arrive; when a request arrives, an acceptor thread accepts the connection and puts it into the connection queue. A request-processing thread then picks the request up from the connection queue and handles it.

Since all the threads run in a Single process, NSAPI extensions written must be thread-safe. Use of shared resources must be properly synchronized.
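For example, a plug-in that maintains a shared counter must guard it with a lock, since every request thread shares the same address space (Python is used here purely to illustrate the idea):

```python
import threading

hit_count = 0
lock = threading.Lock()

def service_request():
    global hit_count
    with lock:            # without the lock, concurrent increments can be lost
        hit_count += 1

threads = [threading.Thread(target=service_request) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(hit_count)          # -> 100
```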

Multi-Process Mode
The Web Server can be configured to handle requests using multiple processes, with each process having multiple threads.

In multi-process mode, the server spawns multiple server processes at startup. Depending on the configuration, each process contains one or more threads that receive incoming requests. Since each process is completely independent, each one has its own copies of global variables, caches, and other resources. If an application uses shared state, it has to synchronize that state across the processes itself.

Do not use the MaxProcs directive when the Web Server performs session management for Java applications. Multi-process mode is deprecated for Java technology-enabled servers.

When you specify a MaxProcs value greater than 1, the server relies on the operating system to distribute connections among multiple server processes.

By setting the following directive in the magnus.conf file, we can observe the process information shown below during startup.

MaxProcs 2

After a successful startup, we can see the two worker processes for the test instance, in addition to the primordial and watchdog processes.

 

Now, if I kill one worker process (kill -9 18644), the primordial process starts a new worker process, as shown below: