
Java Thread-per-Connection vs Thread-per-Request

This article explores how Java uses threads to manage client connections and process incoming requests. We will compare the thread-per-connection and thread-per-request models and show how to create small Java programs that implement these ideas.

1. Understanding Connection vs Request

Before exploring thread-per-connection and thread-per-request models, it’s crucial to understand the core concepts: connection and request.

What is a Connection?

A connection is a persistent link between a client and a server established over a socket (TCP). This channel is used to send and receive data. In the case of HTTP/1.1, a connection may remain open (persistent connection) to handle multiple requests from the same client.

In the context of a Java socket server, a connection is established when a client successfully connects to the server’s socket. It remains open until either the client or the server closes it, and data can flow in both directions while it is active.

Key characteristics of a connection include its persistence until explicitly closed, its typical association with a single client, and its ability to handle multiple requests during its lifetime, as seen in protocols like HTTP/1.1 with keep-alive or WebSockets.
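
For example, a minimal client sketch (illustrative only, not part of this article’s servers; the host and port are placeholders for whatever server you have listening) shows that a connection is simply an open socket that persists until one side closes it:

import java.io.IOException;
import java.net.Socket;

public class ConnectionSketch {
    public static void main(String[] args) throws IOException {
        // Opening the socket establishes the connection; it stays open until
        // one side closes it (here, when the try-with-resources block exits).
        try (Socket connection = new Socket("localhost", 8080)) {
            System.out.println("Connected to " + connection.getRemoteSocketAddress());
        } // the connection is closed here
    }
}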

What is a Request?

A request is a unit of communication from a client to the server that asks the server to perform an action or provide data. In HTTP, for example, each GET or POST message from the client is a separate request.

In Java socket programming or web applications, a request typically involves the client sending a command (such as GET /data) or a query (such as a POST with user information) over an established connection. A single connection can handle multiple sequential or concurrent requests, which are usually short-lived and expect a response from the server.

Key characteristics of a request include being short-lived and often stateless, typically resulting in one response per request, and the ability to be sent multiple times over the same connection.
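
As a rough illustration (assuming a line-based echo server, such as the one built in section 2, is listening on localhost:8080), several short-lived requests can be sent one after another over the same open connection, each receiving its own response:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

public class RequestSketch {
    public static void main(String[] args) throws IOException {
        try (Socket socket = new Socket("localhost", 8080);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()))) {
            // Two requests sent over the same connection, one response each
            out.println("first request");
            System.out.println(in.readLine());
            out.println("second request");
            System.out.println(in.readLine());
        }
    }
}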

2. Thread-per-Connection Model

In this model, the server spawns a new thread for each client connection. That thread manages all I/O (read/write) for that client for the entire life of the connection. This model is easy to build and understand, which makes it a good fit for protocols that rely on long-lived connections. However, it doesn’t scale well with a large number of clients due to the overhead of maintaining a dedicated thread per connection and becomes inefficient when many connections remain idle.

2.1 Example: Thread-per-Connection Server

Let’s implement a basic socket-based echo server where each client connection gets its own thread.

package com.jcg.example;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.logging.Level;
import java.util.logging.Logger;

public class ThreadPerConnectionServer {

    private static final int PORT = 8080;
    private static final Logger logger = Logger.getLogger(ThreadPerConnectionServer.class.getName());

    public static void main(String[] args) {
        logger.log(Level.INFO, "Starting Thread-Per-Connection Server on port {0}", PORT);

        try (ServerSocket serverSocket = new ServerSocket(PORT)) {
            while (true) {
                Socket clientSocket = serverSocket.accept();
                logger.log(Level.INFO, "Accepted connection from {0}", clientSocket.getRemoteSocketAddress());

                // Hand the connection off to a dedicated thread for its entire lifetime
                new Thread(() -> handleClient(clientSocket)).start();
            }
        } catch (IOException e) {
            logger.log(Level.SEVERE, "Server error: ", e);
        }
    }

    private static void handleClient(Socket clientSocket) {
        try (
            Socket socket = clientSocket; 
            BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream())); 
            PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
            String inputLine;
            while ((inputLine = in.readLine()) != null) {
                logger.info("Received: " + inputLine);
                out.println("Echo: " + inputLine);
            }
        } catch (IOException e) {
            logger.log(Level.WARNING, "Client connection error: ", e);
        }
    }
}

This server implements the thread-per-connection model by creating a new thread for each client connection. It starts by listening on port 8080 using a ServerSocket. Each time a client connects, the server accepts the connection and immediately spawns a new thread to handle it.

Inside the handleClient method, the client’s Socket is processed using a try-with-resources block to ensure all resources are properly closed. The server reads input lines from the client and sends back a response prefixed with "Echo:". Communication continues until the client closes the connection.

Compile and Run the Server
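
Assuming the source file sits under a com/jcg/example directory matching its package declaration, compiling and starting the server might look like this (adjust the paths to your own project layout):

javac com/jcg/example/ThreadPerConnectionServer.java
java com.jcg.example.ThreadPerConnectionServer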

After successfully compiling and running the server, you should see output similar to the following:

May 16, 2025 2:36:47 P.M. com.jcg.example.ThreadPerConnectionServer main
INFO: Starting Thread-Per-Connection Server on port 8,080

The server is now running and listening on port 8080.

Connect to the Server (Client Interaction)

You can test the server using a simple terminal tool.

Using telnet:

telnet localhost 8080

Or using netcat (nc):

nc localhost 8080

Once connected, type a message like:

Hello server

You should immediately see:

Echo: Hello server
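
If you prefer a programmatic test, a small client sketch (hypothetical, not included in the article’s source) can open several connections at once to confirm that each one is served by its own thread:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

public class MultiClientTest {
    public static void main(String[] args) {
        // Start three clients in parallel; each opens its own connection,
        // so the server should serve them from three separate threads.
        for (int i = 1; i <= 3; i++) {
            final int id = i;
            new Thread(() -> {
                try (Socket socket = new Socket("localhost", 8080);
                     PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
                     BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()))) {
                    out.println("Hello from client " + id);
                    System.out.println(in.readLine());
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }).start();
        }
    }
}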

Server Log Output:

Meanwhile, the server will log events such as:

May 16, 2025 2:47:41 P.M. com.jcg.example.ThreadPerConnectionServer main
INFO: Accepted connection from /[0:0:0:0:0:0:0:1]:49657
May 16, 2025 2:47:47 P.M. com.jcg.example.ThreadPerConnectionServer handleClient
INFO: Received: Hello server

When a client connects, the server logs the client’s remote address and port, then assigns a dedicated thread to handle that connection. For every message the client sends, the server logs the content and replies with a response prefixed by "Echo:". Since each connection runs in its own thread, multiple clients can interact with the server simultaneously and independently. This behavior demonstrates the thread-per-connection model, where each client maintains a persistent interaction through a dedicated thread until the connection is closed.

3. Thread-per-Request Model

Now we implement a version where a new thread is created for each individual request, regardless of which connection it arrives on. This model keeps each request separate, which makes the processing easier to reason about. It’s a good choice when many requests arrive in quick succession or when each request requires substantial processing.

3.1 Example: Thread-per-Request Server

package com.jcg.example;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.logging.Level;
import java.util.logging.Logger;

public class ThreadPerRequestServer {
    
    private static final Logger logger = Logger.getLogger(ThreadPerRequestServer.class.getName());
    private static final int PORT = 8081;
    private static final ExecutorService executor = Executors.newCachedThreadPool();

    public static void main(String[] args) {
        try (ServerSocket serverSocket = new ServerSocket(PORT)) {
            logger.log(Level.INFO, "Starting Thread-Per-Request Server on port {0}", PORT);
            while (true) {
                Socket clientSocket = serverSocket.accept();
                executor.submit(() -> handleClient(clientSocket));
            }
        } catch (IOException e) {
            logger.log(Level.SEVERE, "Server exception", e);
        } finally {
            executor.shutdown();
        }
    }

    private static void handleClient(Socket clientSocket) {
        logger.log(Level.INFO, "Accepted connection from {0}", clientSocket.getRemoteSocketAddress());

        try (
            Socket socket = clientSocket;
            BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
            PrintWriter out = new PrintWriter(socket.getOutputStream(), true)
        ) {
            String inputLine;
            while ((inputLine = in.readLine()) != null) {
                final String message = inputLine;
                // Each received line (request) is submitted to the pool as its own task,
                // so the response is written back asynchronously by a worker thread.
                executor.submit(() -> {
                    logger.log(Level.INFO, "Processing request: {0}", message);
                    out.println("Processed: " + message);
                });
            }
        } catch (IOException e) {
            logger.log(Level.WARNING, "Client handling error", e);
        }
    }
}

This implementation showcases the thread-per-request model, where each incoming client message (i.e., request) is processed in its own thread. The server begins by opening a ServerSocket on port 8081 and listens indefinitely for incoming client connections. When a client connects, the server accepts the socket and passes it to the handleClient method.

Inside handleClient, resources like the socket, input reader, and output writer are managed using a try-with-resources block, ensuring they’re properly closed after use. The method reads lines of input from the client in a loop. Rather than assigning a single thread to manage the entire connection, each individual message received from the client is submitted as a new task to an ExecutorService. This executor uses a cached thread pool, which allows efficient reuse of threads for multiple requests.

Unlike the thread-per-connection model, this approach scales better by isolating each request in its own thread, allowing concurrent handling of multiple independent messages from one or more clients.
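
One caveat is that Executors.newCachedThreadPool() creates as many threads as there are concurrent tasks, so it can grow without bound under heavy load. A possible refinement (not part of the example above) is to cap the pool by swapping the executor field for a fixed-size pool; the size 50 below is purely illustrative:

// Hypothetical variant of the executor field: bound the number of worker threads
private static final ExecutorService executor = Executors.newFixedThreadPool(50);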

4. Performance and Use Case Comparison

Choosing between thread-per-connection and thread-per-request depends on your application’s workload and scalability needs. The table below compares key aspects of both models.

Feature             | Thread-per-Connection            | Thread-per-Request
Thread Count        | 1 per connection                 | 1 per request
Overhead            | High for many connections        | High for many requests
Suitability         | Chat servers, FTP servers        | HTTP with short, stateless requests
Throughput          | Limited by number of connections | Can support more concurrency if requests are lightweight
Connection Lifetime | Long-lived                       | Shared among requests

5. Conclusion

This article compared the thread-per-connection vs thread-per-request models in Java, explained how connections and requests differ, and built working socket-based servers to demonstrate both approaches. While thread-per-connection is simpler and suitable for low to moderate traffic, thread-per-request offers better scalability for high-throughput applications with many independent requests.

6. Download the Source Code

This was a guide to understanding thread-per-connection vs thread-per-request models in Java.

You can download the full source code of this example here: java thread per connection vs per request

Omozegie Aziegbe

Omos Aziegbe is a technical writer and web/application developer with a BSc in Computer Science and Software Engineering from the University of Bedfordshire. Specializing in Java enterprise applications with the Jakarta EE framework, Omos also works with HTML5, CSS, and JavaScript for web development. As a freelance web developer, Omos combines technical expertise with research and writing on topics such as software engineering, programming, web application development, computer science, and technology.