# IP stack config

# Copyright (c) 2016 Intel Corporation.
# SPDX-License-Identifier: Apache-2.0

menu "IP stack"

config NET_NATIVE
	bool "Enable native IP stack"
	default y
	help
	  Enables Zephyr native IP stack. If you disable this, then
	  you need to enable the offloading support if you want to
	  have IP connectivity.

# Hidden options for enabling native IPv6/IPv4. Using these options
# avoids having "defined(CONFIG_NET_IPV6) && defined(CONFIG_NET_NATIVE)"
# in the code as we can have "defined(CONFIG_NET_NATIVE_IPV6)" instead.
config NET_NATIVE_IPV6
	bool
	depends on NET_NATIVE
	default y if NET_IPV6

config NET_NATIVE_IPV4
	bool
	depends on NET_NATIVE
	default y if NET_IPV4

config NET_NATIVE_TCP
	bool
	depends on NET_NATIVE
	default y if NET_TCP

config NET_NATIVE_UDP
	bool
	depends on NET_NATIVE
	default y if NET_UDP

config NET_OFFLOAD
	bool "Offload IP stack [EXPERIMENTAL]"
	help
	  Enables the TCP/IP stack to be offloaded to a co-processor.

if NET_OFFLOAD
module = NET_OFFLOAD
module-dep = NET_LOG
module-str = Log level for offload layer
module-help = Enables offload layer to output debug messages.
source "subsys/net/Kconfig.template.log_config.net"
endif # NET_OFFLOAD

config NET_RAW_MODE
	bool
	help
	  This is a very specific option used to build only the very minimal
	  part of the net stack in order to get network drivers working
	  without any net stack above: core, L2, etc. Basically this will
	  build only the net_pkt part. It is currently used only by
	  IEEE 802.15.4 drivers, though any type of net driver could use it.

if !NET_RAW_MODE

choice
	prompt "Qemu networking"
	default NET_QEMU_PPP if NET_PPP
	default NET_QEMU_SLIP
	depends on QEMU_TARGET
	help
	  Can be used to select how the network connectivity is established
	  from inside qemu to the host system. This can be done either via a
	  serial connection (SLIP) or via the Qemu ethernet driver.

config NET_QEMU_SLIP
	bool "SLIP"
	help
	  Connect to the host or to another Qemu via SLIP.

config NET_QEMU_PPP
	bool "PPP"
	help
	  Connect to the host via PPP.

config NET_QEMU_ETHERNET
	bool "Ethernet"
	help
	  Connect to the host system via Qemu ethernet driver support. One
	  such driver that Zephyr supports is the Intel e1000 ethernet driver.

endchoice

config NET_INIT_PRIO
	int
	default 90
	help
	  Network initialization priority level. This number tells how early
	  in the boot the network stack is initialized.

source "subsys/net/ip/Kconfig.ipv6"

source "subsys/net/ip/Kconfig.ipv4"

config NET_SHELL
	bool "Enable network shell utilities"
	select SHELL
	help
	  Activate the shell module that provides network commands like
	  ping to the console.

config NET_SHELL_DYN_CMD_COMPLETION
	bool "Enable network shell dynamic command completion"
	depends on NET_SHELL
	default y
	help
	  Enable various net-shell commands to support dynamic command
	  completion. This means that for example the nbr command can
	  automatically complete the neighboring IPv6 address and the user
	  does not need to type it manually. Please note that this uses more
	  memory in order to save the dynamic command strings. For example,
	  for the nbr command the increase is 320 bytes (8 neighbors *
	  40 bytes for IPv6 address length) by default. Other dynamic
	  completion commands in net-shell also require some smaller amount
	  of memory.

config NET_TC_TX_COUNT
	int "How many Tx traffic classes to have for each network device"
	default 1
	range 1 8
	help
	  Define how many Tx traffic classes (queues) the system should have
	  when sending a network packet. The network packet priority can then
	  be mapped to this traffic class so that higher prioritized packets
	  can be processed before lower prioritized ones. Each queue is
	  handled by a separate thread which will need RAM for stack space.
	  Only increase the value from 1 if you really need this feature.
	  The default value is 1, which means that all the network traffic is
	  handled equally. In this implementation, the higher traffic class
	  value corresponds to lower thread priority.

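# Example (illustrative): an application that wants two Tx and two Rx
# traffic classes together with the SR class A/B mapping defined below
# could add a prj.conf fragment along these lines. The counts shown here
# are only an example, not a recommendation:
#
#   CONFIG_NET_TC_TX_COUNT=2
#   CONFIG_NET_TC_RX_COUNT=2
#   CONFIG_NET_TC_MAPPING_SR_CLASS_A_AND_B=y
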
config NET_TC_RX_COUNT
	int "How many Rx traffic classes to have for each network device"
	default 1
	range 1 8
	help
	  Define how many Rx traffic classes (queues) the system should have
	  when receiving a network packet. The network packet priority can
	  then be mapped to this traffic class so that higher prioritized
	  packets can be processed before lower prioritized ones. Each queue
	  is handled by a separate thread which will need RAM for stack space.
	  Only increase the value from 1 if you really need this feature.
	  The default value is 1, which means that all the network traffic is
	  handled equally. In this implementation, the higher traffic class
	  value corresponds to lower thread priority.

choice
	prompt "Priority to traffic class mapping"
	help
	  Select the mapping to use to map network packet priorities to
	  traffic classes.

config NET_TC_MAPPING_STRICT
	bool "Strict priority mapping"
	help
	  This is the recommended default priority to traffic class mapping.
	  Use it for implementations that do not support the credit-based
	  shaper transmission selection algorithm. See 802.1Q, chapter 8.6.6
	  for more information.

config NET_TC_MAPPING_SR_CLASS_A_AND_B
	bool "SR class A and class B mapping"
	depends on NET_TC_TX_COUNT >= 2
	depends on NET_TC_RX_COUNT >= 2
	help
	  This is the recommended priority to traffic class mapping for a
	  system that supports SR (Stream Reservation) class A and SR
	  class B. See 802.1Q, chapter 34.5 for more information.

config NET_TC_MAPPING_SR_CLASS_B_ONLY
	bool "SR class B only mapping"
	depends on NET_TC_TX_COUNT >= 2
	depends on NET_TC_RX_COUNT >= 2
	help
	  This is the recommended priority to traffic class mapping for a
	  system that supports SR (Stream Reservation) class B only.
	  See 802.1Q, chapter 34.5 for more information.

endchoice

config NET_TX_DEFAULT_PRIORITY
	int "Default network TX packet priority if none have been set"
	default 1
	range 0 7
	help
	  The default network packet priority if the user has not specified
	  one. The value 0 means the lowest priority and 7 is the highest.

config NET_RX_DEFAULT_PRIORITY
	int "Default network RX packet priority if none have been set"
	default 0
	range 0 7
	help
	  The default network RX packet priority if the user has not set one.
	  The value 0 means the lowest priority and 7 is the highest.

config NET_IP_ADDR_CHECK
	bool "Check IP address validity before sending IP packet"
	default y
	help
	  Check that either the source or destination address is correct
	  before sending either an IPv4 or IPv6 network packet.

config NET_MAX_ROUTERS
	int "How many routers are supported"
	default 2 if NET_IPV4 && NET_IPV6
	default 1 if NET_IPV4 && !NET_IPV6
	default 1 if !NET_IPV4 && NET_IPV6
	range 1 254
	help
	  The value depends on your network needs.

# Normally the route support is enabled by RPL or similar technology
# that needs to use the routing infrastructure.
config NET_ROUTE
	bool
	depends on NET_IPV6_NBR_CACHE
	default y if NET_IPV6_NBR_CACHE

# Temporarily hide the routing option as we do not have RPL in the system
# that used to populate the routing table.
config NET_ROUTING
	bool
	depends on NET_ROUTE
	help
	  Allow IPv6 routing between different network interfaces and
	  technologies. Currently this has limited use as some entity would
	  need to populate the routing table.
	  RPL used to do that earlier, but currently there is no RPL support
	  in Zephyr.

config NET_MAX_ROUTES
	int "Max number of routing entries stored."
	default NET_IPV6_MAX_NEIGHBORS
	depends on NET_ROUTE
	help
	  This determines how many entries can be stored in the routing table.

config NET_MAX_NEXTHOPS
	int "Max number of next hop entries stored."
	default NET_MAX_ROUTES
	depends on NET_ROUTE
	help
	  This determines how many entries can be stored in the nexthop table.

config NET_ROUTE_MCAST
	bool
	depends on NET_ROUTE

config NET_MAX_MCAST_ROUTES
	int "Max number of multicast routing entries stored."
	default 1
	depends on NET_ROUTE_MCAST
	help
	  This determines how many entries can be stored in the multicast
	  routing table.

config NET_TCP
	bool "Enable TCP"
	help
	  Enable TCP support. The value depends on your network needs.

config NET_TCP_CHECKSUM
	bool "Check TCP checksum"
	default y
	depends on NET_TCP
	help
	  Enables the TCP handler to check the TCP checksum. If the checksum
	  is invalid, then the packet is discarded.

if NET_TCP
module = NET_TCP
module-dep = NET_LOG
module-str = Log level for TCP
module-help = Enables the TCP handler to output debug messages
source "subsys/net/Kconfig.template.log_config.net"
endif # NET_TCP

config NET_TCP_BACKLOG_SIZE
	int "Number of simultaneous incoming TCP connections"
	depends on NET_TCP
	default 1
	range 1 128
	help
	  The number of simultaneous TCP connection attempts, i.e. outstanding
	  TCP connections waiting for the initial ACK.

config NET_TCP_AUTO_ACCEPT
	bool "Auto accept incoming TCP data"
	depends on NET_TCP
	help
	  Automatically accept incoming TCP data packets for a valid
	  connection even if the application has not yet called accept().
	  This speeds up incoming data processing and is done like in Linux.
	  The drawback is that we allocate data for the incoming packets even
	  if the application has not yet accepted the connection. If the peer
	  sends a lot of packets, we might run out of memory in this case.

config NET_TCP_TIME_WAIT_DELAY
	int "How long to wait in TIME_WAIT state (in milliseconds)"
	depends on NET_TCP
	default 250
	help
	  To avoid a (low-probability) issue where delayed packets from a
	  previous connection get delivered to the next connection reusing
	  the same local/remote ports, RFC 793 (TCP) suggests keeping an old,
	  closed connection in a special "TIME_WAIT" state for the duration
	  of 2*MSL (Maximum Segment Lifetime). The RFC suggests using an MSL
	  of 2 minutes, but notes "This is an engineering choice, and may be
	  changed if experience indicates it is desirable to do so."
	  For low-resource systems, a large MSL may lead to quick resource
	  exhaustion (and related DoS attacks). At the same time, the issue
	  of packet misdelivery is largely alleviated in modern TCP stacks by
	  using random, non-repeating port numbers and initial sequence
	  numbers. Due to this, Zephyr uses a much lower value of 250 ms by
	  default. A value of 0 disables the TIME_WAIT state completely.

config NET_TCP_ACK_TIMEOUT
	int "How long to wait for ACK (in milliseconds)"
	depends on NET_TCP
	default 1000
	range 1 2147483647
	help
	  This value affects the timeout when waiting for an ACK to arrive in
	  various TCP states. The value is in milliseconds. Note that having
	  a very low value here could prevent connectivity.

config NET_TCP_INIT_RETRANSMISSION_TIMEOUT
	int "Initial value of Retransmission Timeout (RTO) (in milliseconds)"
	depends on NET_TCP
	default 200
	range 100 60000
	help
	  This value sets the initial retransmission timeout (RTO) of TCP
	  data packets. The value is in milliseconds.

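# Example (illustrative): a prj.conf fragment an application could use to
# trade robustness for memory on a constrained device, e.g. disabling the
# TIME_WAIT state and shortening the ACK timeout. The values shown are
# assumptions for the example only, not recommendations:
#
#   CONFIG_NET_TCP=y
#   CONFIG_NET_TCP_TIME_WAIT_DELAY=0
#   CONFIG_NET_TCP_ACK_TIMEOUT=500
#   CONFIG_NET_TCP_INIT_RETRANSMISSION_TIMEOUT=200
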
config NET_TCP_RETRY_COUNT
	int "Maximum number of TCP segment retransmissions"
	depends on NET_TCP
	default 9
	help
	  The following formula can be used to determine the time (in ms)
	  that a segment will be buffered awaiting retransmission:

	  n=NET_TCP_RETRY_COUNT
	  Sum((1<<n) * NET_TCP_INIT_RETRANSMISSION_TIMEOUT)
	  n=0
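
# Worked example (assuming the defaults above, i.e. an initial RTO of 200 ms
# and a retry count of 9): the sum evaluates to
#   (2^10 - 1) * 200 ms = 1023 * 200 ms = 204600 ms,
# so a segment is buffered for roughly 3.4 minutes before the stack gives up
# retransmitting it.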