Netty Optimization: Parameter Optimization

      • 1.1 Parameter tuning
        • 1) CONNECT_TIMEOUT_MILLIS
        • 2) SO_BACKLOG
        • 3) ulimit -n
        • 4) TCP_NODELAY
        • 5) SO_SNDBUF & SO_RCVBUF
        • 6) ALLOCATOR
        • 7) RCVBUF_ALLOCATOR

1.1 Parameter tuning

Parameter configuration (see the sketch after this list):

  • Server:

    • new ServerBootstrap().option() // configures the parameters of the ServerSocketChannel
    • new ServerBootstrap().childOption() // configures the parameters of each accepted SocketChannel
  • Client:

    • new Bootstrap().option() // configures the parameters of the SocketChannel
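
A minimal sketch of where each call sits (the port, the handlers, and the specific options chosen here are placeholders):

import io.netty.bootstrap.Bootstrap;
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelOption;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;
import io.netty.handler.logging.LoggingHandler;

public class OptionPlacement {
    public static void main(String[] args) {
        // Server: option() targets the ServerSocketChannel (the listening socket),
        // childOption() targets each accepted SocketChannel.
        new ServerBootstrap()
                .group(new NioEventLoopGroup())
                .channel(NioServerSocketChannel.class)
                .option(ChannelOption.SO_BACKLOG, 1024)       // ServerSocketChannel parameter
                .childOption(ChannelOption.TCP_NODELAY, true) // SocketChannel parameter
                .childHandler(new LoggingHandler())
                .bind(8080);

        // Client: option() targets the client's own SocketChannel.
        new Bootstrap()
                .group(new NioEventLoopGroup())
                .channel(NioSocketChannel.class)
                .option(ChannelOption.CONNECT_TIMEOUT_MILLIS, 300)
                .handler(new LoggingHandler())
                .connect("127.0.0.1", 8080);
    }
}
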
1) CONNECT_TIMEOUT_MILLIS
  • A SocketChannel parameter

  • Used when the client establishes a connection: if the connection cannot be established within the specified number of milliseconds, a timeout exception is thrown.

  • SO_TIMEOUT, by contrast, is mainly used in blocking IO, where accept, read, etc. block indefinitely; set it if you don’t want to wait forever (see the sketch below).
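
For contrast, a minimal blocking-IO sketch (plain java.net; the port and the 3-second timeout are placeholders) showing how SO_TIMEOUT bounds an otherwise indefinite accept():

import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class SoTimeoutDemo {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(8080)) {
            // Without this, accept() would block indefinitely.
            server.setSoTimeout(3000); // SO_TIMEOUT: give up after 3 seconds
            try {
                Socket client = server.accept(); // blocks at most 3 s
            } catch (SocketTimeoutException e) {
                System.out.println("no client connected within 3 s");
            }
        }
    }
}

The Netty client example below sets CONNECT_TIMEOUT_MILLIS to 300 ms; if the connection cannot be established in time, the exception surfaces in main and is caught: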

import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelOption;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioSocketChannel;
import io.netty.handler.logging.LoggingHandler;
import lombok.extern.slf4j.Slf4j;

@Slf4j
public class TestConnectionTimeout {
    public static void main(String[] args) {
        NioEventLoopGroup group = new NioEventLoopGroup();
        try {
            Bootstrap bootstrap = new Bootstrap()
                    .group(group)
                    .option(ChannelOption.CONNECT_TIMEOUT_MILLIS, 300)
                    .channel(NioSocketChannel.class)
                    .handler(new LoggingHandler());
            ChannelFuture future = bootstrap.connect("127.0.0.1", 8080);
            future.sync().channel().closeFuture().sync(); // Breakpoint 1
        } catch (Exception e) {
            e.printStackTrace();
            log.debug("timeout");
        } finally {
            group.shutdownGracefully();
        }
    }
}

The relevant source code is in io.netty.channel.nio.AbstractNioChannel.AbstractNioUnsafe#connect:

@Override
public final void connect(
        final SocketAddress remoteAddress, final SocketAddress localAddress, final ChannelPromise promise) {
    // ...
    // Schedule connect timeout.
    int connectTimeoutMillis = config().getConnectTimeoutMillis();
    if (connectTimeoutMillis > 0) {
        connectTimeoutFuture = eventLoop().schedule(new Runnable() {
            @Override
            public void run() {
                ChannelPromise connectPromise = AbstractNioChannel.this.connectPromise;
                ConnectTimeoutException cause =
                    new ConnectTimeoutException("connection timed out: " + remoteAddress); // Breakpoint 2
                if (connectPromise != null && connectPromise.tryFailure(cause)) {
                    close(voidPromise());
                }
            }
        }, connectTimeoutMillis, TimeUnit.MILLISECONDS);
    }
    // ...
}

The essence: once the timeout is configured, the event loop schedules a task. If the connection has still not been established when the timeout expires, the task runs and fails the promise via connectPromise.tryFailure(cause); the exception thus travels from the NIO thread to the main thread, where future.sync() rethrows it and the catch block handles it. The sketch below isolates this mechanism.
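
A minimal sketch (not Netty's actual code path; class and variable names are mine) showing how a task on the event loop fails a promise and sync() in the main thread rethrows the exception:

import io.netty.channel.ConnectTimeoutException;
import io.netty.channel.EventLoop;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.util.concurrent.DefaultPromise;
import io.netty.util.concurrent.Promise;

import java.util.concurrent.TimeUnit;

public class PromiseFailureDemo {
    public static void main(String[] args) {
        NioEventLoopGroup group = new NioEventLoopGroup();
        EventLoop loop = group.next();
        // The promise is the bridge between the NIO thread and the main thread.
        Promise<Void> promise = new DefaultPromise<>(loop);
        // Simulate the scheduled connect-timeout task on the event loop.
        loop.schedule(() -> promise.tryFailure(
                new ConnectTimeoutException("simulated timeout")), 300, TimeUnit.MILLISECONDS);
        try {
            promise.sync(); // blocks, then rethrows the failure in this thread
        } catch (Exception e) {
            e.printStackTrace(); // the ConnectTimeoutException surfaces here
        } finally {
            group.shutdownGracefully();
        }
    }
}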

2) SO_BACKLOG
  • A ServerSocketChannel parameter

  1. First handshake: the client sends SYN to the server and its state changes to SYN_SENT. The server receives it, changes its state to SYN_RCVD, and puts the request into the sync queue.
  2. Second handshake: the server replies SYN + ACK to the client. When the client receives it, its state changes to ESTABLISHED and it sends an ACK back to the server.
  3. Third handshake: the server receives the ACK, changes its state to ESTABLISHED, and moves the request from the sync queue into the accept queue.

Regarding the two queues:

  • Before Linux 2.2, the backlog covered the combined size of the two queues. Since 2.2, they are controlled separately by the following two parameters.

  • sync queue – the half-connection queue

    • Its size is specified via /proc/sys/net/ipv4/tcp_max_syn_backlog. When syncookies are enabled, there is logically no maximum limit and this setting is ignored.
  • accept queue – the full-connection queue

    • Its size is specified via /proc/sys/net/core/somaxconn. When listen() is called, the kernel takes the smaller of the backlog argument passed in and this system parameter.
    • If the accept queue is full, the server sends a connection-refused error to the client.

In Netty:

The size can be set via option(ChannelOption.SO_BACKLOG, value), as in the sketch below.
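
A minimal server-side sketch (the port, the handler, and the deliberately small backlog of 2 are placeholders for experimentation):

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelOption;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.handler.logging.LoggingHandler;

public class BacklogServer {
    public static void main(String[] args) {
        new ServerBootstrap()
                .group(new NioEventLoopGroup())
                .channel(NioServerSocketChannel.class)
                // SO_BACKLOG is a ServerSocketChannel parameter, so use option(), not childOption()
                .option(ChannelOption.SO_BACKLOG, 2)
                .childHandler(new LoggingHandler())
                .bind(8080);
    }
}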

Debugging:
The key breakpoint is io.netty.channel.nio.NioEventLoop#processSelectedKey. With the server suspended there, each new client connection is put into the accept queue but never taken out. Connect more clients than the configured option(ChannelOption.SO_BACKLOG, value) allows and observe the effect. A plain-NIO sketch of the same experiment follows.
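
A sketch of that experiment without Netty (assuming nothing else is on port 8080): bind with a backlog of 2 and never call accept(), so completed connections pile up in the accept queue until further connects fail. How many connects succeed and whether the failure is a timeout or a refusal are OS-dependent.

import java.net.InetSocketAddress;
import java.net.Socket;
import java.nio.channels.ServerSocketChannel;

public class BacklogDemo {
    public static void main(String[] args) throws Exception {
        // Bind with backlog = 2 and never accept(), so the accept queue fills up.
        ServerSocketChannel ssc = ServerSocketChannel.open();
        ssc.bind(new InetSocketAddress(8080), 2);

        for (int i = 1; i <= 5; i++) {
            Socket s = new Socket();
            // Once the queue is full, this either times out or is refused.
            s.connect(new InetSocketAddress("127.0.0.1", 8080), 1000);
            System.out.println("client " + i + " connected");
        }
    }
}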

Where the default size comes from in the source
The backlog is used in NIO’s bind method. Go to ServerSocketChannel’s bind method, find its usages, and locate javaChannel().bind(localAddress, config.getBacklog()); then follow the config layer by layer until you reach the code below. The default is 200 on Windows and 128 on Linux.

SOMAXCONN = AccessController.doPrivileged(new PrivilegedAction<Integer>() {
    @Override
    public Integer run() {
        // Determine the default somaxconn (server socket backlog) value of the platform.
        // The known defaults:
        // - Windows NT Server 4.0+: 200
        // - Linux and Mac OS X: 128
        int somaxconn = PlatformDependent.isWindows() ? 200 : 128;
        File file = new File("/proc/sys/net/core/somaxconn");
        BufferedReader in = null;
        try {
            // file.exists() may throw a SecurityException if a SecurityManager is used, so execute it in the
            // try / catch block.
            // See https://github.com/netty/netty/issues/4936
            if (file.exists()) {
                in = new BufferedReader(new FileReader(file));
                somaxconn = Integer.parseInt(in.readLine());
                if (logger.isDebugEnabled()) {
                    logger.debug("{}: {}", file, somaxconn);
                }
            } else {
                // Try to get from sysctl
                Integer tmp = null;
                if (SystemPropertyUtil.getBoolean("io.netty.net.somaxconn.trySysctl", false)) {
                    tmp = sysctlGetInt("kern.ipc.somaxconn");
                    if (tmp == null) {
                        tmp = sysctlGetInt("kern.ipc.soacceptqueue");
                        if (tmp != null) {
                            somaxconn = tmp;
                        }
                    } else {
                        somaxconn = tmp;
                    }
                }

                if (tmp == null) {
                    logger.debug("Failed to get SOMAXCONN from sysctl and file {}. Default: {}", file,
                                 somaxconn);
                }
            }
        } catch (Exception e) {
            logger.debug("Failed to get SOMAXCONN from sysctl and file {}. Default: {}", file, somaxconn, e);
        } finally {
            if (in != null) {
                try {
                    in.close();
                } catch (Exception e) {
                    // Ignored.
                }
            }
        }
        return somaxconn;
    }
});
3) ulimit -n

Usually used to set the operating system’s file descriptor limit (handle limit), i.e., the number of files or network connections that a process can open simultaneously.

ulimit -n is used to view the file descriptor limit of the current user process. This limit controls the number of files that a process can open at the same time. In highly concurrent network applications, such as servers written using Netty, this limit may need to be adjusted to a higher value to support more concurrent connections.

To set file descriptor limits, you can use a command similar to the following:

ulimit -n 65536

This sets the file descriptor limit to 65536, allowing a process to open up to that many files simultaneously, which is useful for server applications that handle a large number of concurrent connections. Note that raising the limit may require superuser privileges, and it only affects the current shell session unless made persistent.
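
A small sketch to verify the limit the JVM process actually sees. It relies on the JDK-specific com.sun.management.UnixOperatingSystemMXBean, which is available on HotSpot-based JVMs on Unix-like systems:

import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;
import com.sun.management.UnixOperatingSystemMXBean;

public class FdLimitCheck {
    public static void main(String[] args) {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        if (os instanceof UnixOperatingSystemMXBean) {
            UnixOperatingSystemMXBean unix = (UnixOperatingSystemMXBean) os;
            // The max count reflects the ulimit -n in effect when the JVM started.
            System.out.println("open file descriptors: " + unix.getOpenFileDescriptorCount());
            System.out.println("max file descriptors:  " + unix.getMaxFileDescriptorCount());
        }
    }
}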

4) TCP_NODELAY

TCP_NODELAY is a TCP socket option usually used to control the latency and performance of TCP data transmission.

In TCP communication, outgoing data is normally buffered for a short time so that several small packets can be merged into one larger packet (Nagle’s algorithm), reducing network overhead. This buffering improves network utilization but introduces latency while data waits to form larger packets.

This buffering can be disabled by enabling the TCP_NODELAY option, allowing small packets to be transmitted immediately. This is very important for certain applications, especially those that require low latency, such as real-time audio and video communication, online games, etc. When TCP_NODELAY is enabled, data is sent immediately without waiting for other data in the buffer.

Netty leaves it off by default. It is a server-side SocketChannel parameter; enable it with new ServerBootstrap().childOption(ChannelOption.TCP_NODELAY, true), as in the sketch below.
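
A minimal sketch (the port and the handler are placeholders):

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelOption;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.handler.logging.LoggingHandler;

public class NoDelayServer {
    public static void main(String[] args) {
        new ServerBootstrap()
                .group(new NioEventLoopGroup())
                .channel(NioServerSocketChannel.class)
                // TCP_NODELAY is a SocketChannel parameter, so use childOption()
                .childOption(ChannelOption.TCP_NODELAY, true) // disable Nagle's algorithm
                .childHandler(new LoggingHandler())
                .bind(8080);
    }
}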

5) SO_SNDBUF & SO_RCVBUF

Send buffer and receive buffer; the operating system adjusts their sizes automatically.

  • SO_SNDBUF is a SocketChannel parameter
  • SO_RCVBUF can be used as both a SocketChannel parameter and a ServerSocketChannel parameter (setting it on the ServerSocketChannel is recommended); see the sketch after this list
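
A minimal sketch (the port, the handler, and the 10 KiB sizes are placeholders; normally these options are left unset so the OS can auto-tune):

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelOption;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.handler.logging.LoggingHandler;

public class BufferSizeServer {
    public static void main(String[] args) {
        new ServerBootstrap()
                .group(new NioEventLoopGroup())
                .channel(NioServerSocketChannel.class)
                // SO_RCVBUF on the ServerSocketChannel: accepted sockets inherit it
                .option(ChannelOption.SO_RCVBUF, 10 * 1024)
                // SO_SNDBUF is set per accepted SocketChannel
                .childOption(ChannelOption.SO_SNDBUF, 10 * 1024)
                .childHandler(new LoggingHandler())
                .bind(8080);
    }
}
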
6) ALLOCATOR
  • A SocketChannel parameter

What the allocator does:

Memory allocation: Allocator is responsible for allocating memory blocks from heap memory or direct memory (off-heap) to store data. These memory blocks are usually divided into smaller blocks, each of which can be used to create a ByteBuf.

Memory Management: Allocator keeps track of allocated memory blocks and is responsible for freeing them when they are no longer needed so that the memory can be reclaimed. This helps prevent memory leaks and ensure efficient use of memory.

Memory pool: the allocator usually uses a memory pool to reuse memory blocks. This means that when a ByteBuf is no longer needed, its memory block is not released immediately but is put back into the pool for future reuse. This improves performance and reduces allocation and deallocation overhead.

Allocation strategy: the allocator can adopt different strategies, such as pooled and non-pooled allocation. The pooled strategy reuses blocks of memory from a pool, whereas the non-pooled strategy allocates a new block each time.

In short, Allocator is a key component in Netty for managing memory allocation and release. It helps optimize the performance of network applications, ensure efficient use of memory, and reduce the complexity of memory management.

To configure pooled vs. non-pooled allocation and direct vs. heap memory, look for the default configuration item, which lives in ChannelConfig. A sketch of explicit configuration follows.
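
A minimal sketch of explicit configuration (the pooled/direct choice here is illustrative; by default Netty uses ByteBufAllocator.DEFAULT, which can be switched between pooled and unpooled via the io.netty.allocator.type system property):

import io.netty.bootstrap.ServerBootstrap;
import io.netty.buffer.PooledByteBufAllocator;
import io.netty.channel.ChannelOption;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.handler.logging.LoggingHandler;

public class AllocatorServer {
    public static void main(String[] args) {
        new ServerBootstrap()
                .group(new NioEventLoopGroup())
                .channel(NioServerSocketChannel.class)
                // pooled allocator, preferring direct (off-heap) memory
                .childOption(ChannelOption.ALLOCATOR, new PooledByteBufAllocator(true))
                // alternative: new io.netty.buffer.UnpooledByteBufAllocator(false) for unpooled heap memory
                .childHandler(new LoggingHandler())
                .bind(8080);
    }
}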

7) RCVBUF_ALLOCATOR
  • A SocketChannel parameter
  • Controls the size of Netty’s receive buffer
  • Responsible for allocating inbound buffers and determining their size (which it can adjust dynamically). It always uses direct memory; whether the buffer is pooled or non-pooled is decided by the ALLOCATOR.
  • Works together with ALLOCATOR to allocate the ByteBuf; see the sketch below
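
A minimal sketch using AdaptiveRecvByteBufAllocator, whose three constructor arguments are the minimum, initial, and maximum buffer sizes in bytes (the values below are placeholders):

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.AdaptiveRecvByteBufAllocator;
import io.netty.channel.ChannelOption;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.handler.logging.LoggingHandler;

public class RcvBufAllocatorServer {
    public static void main(String[] args) {
        new ServerBootstrap()
                .group(new NioEventLoopGroup())
                .channel(NioServerSocketChannel.class)
                // inbound buffers start at 1 KiB and adapt between 64 B and 64 KiB
                .childOption(ChannelOption.RCVBUF_ALLOCATOR,
                        new AdaptiveRecvByteBufAllocator(64, 1024, 65536))
                .childHandler(new LoggingHandler())
                .bind(8080);
    }
}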