Make sure the multicast MAC address checker also checks IPv4 multicast
MAC addresses and accepts them.
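For reference, a minimal standalone sketch of the kind of check this
implies (the helper name is made up, not the actual driver code); IPv4
multicast frames use destination MACs with the 01:00:5e prefix, just as
IPv6 multicast uses 33:33:

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical helper: accept both IPv6 (33:33:xx:xx:xx:xx) and
     * IPv4 (01:00:5e:xx:xx:xx) multicast MAC addresses.
     */
    static bool mac_is_multicast(const uint8_t addr[6])
    {
        /* IPv6 multicast: 33:33:... */
        if (addr[0] == 0x33 && addr[1] == 0x33) {
            return true;
        }

        /* IPv4 multicast: 01:00:5e with the top bit of the 4th byte clear */
        if (addr[0] == 0x01 && addr[1] == 0x00 && addr[2] == 0x5e &&
            (addr[3] & 0x80) == 0x00) {
            return true;
        }

        return false;
    }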
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
Some doxygen directives were missing from the dns_pack.h file.
Also make the function header documentation look better.
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
This application does not do anything by itself; it just waits for
mDNS queries and responds to them.
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
This creates an mDNS responder that serves the configured IP addresses
to callers wanting to resolve .local addresses.
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
The user can configure the hostname of the device in Kconfig. This can
be used by the mDNS responder to answer <hostname>.local queries.
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
These headers provide efficient, inline implementations of singly-
and doubly-linked lists, and are thus not threadsafe. They are intended
to be used as internal kernel APIs (and are currently, for example, not
documented at https://www.zephyrproject.org/doc/). However, to avoid
issues when doing kernel programming (e.g. #4350), it makes sense
to explicitly, even verbosely, document these functions as not
threadsafe.
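As a reminder of the required usage pattern, a minimal sketch (the list
and item names are made up; the include path follows the 1.x layout and
may differ elsewhere):

    #include <kernel.h>
    #include <misc/slist.h>

    struct item {
        sys_snode_t node;
        int value;
    };

    static sys_slist_t my_list;

    void my_list_init(void)
    {
        sys_slist_init(&my_list);
    }

    void my_list_add(struct item *it)
    {
        /* sys_slist_append() is not threadsafe: the caller must provide
         * locking, e.g. by masking interrupts around the call.
         */
        unsigned int key = irq_lock();

        sys_slist_append(&my_list, &it->node);

        irq_unlock(key);
    }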
Signed-off-by: Paul Sokolovsky <paul.sokolovsky@linaro.org>
net_context_sendto() returns an error if the destination address is NULL.
If a destination address is available, net_context_sendto() should be
used; otherwise, net_context_send() should be used.
Fixes #4347
Signed-off-by: Aska Wu <aska.wu@linaro.org>
With the introduction of CoAP and other protocols, URL parsing is
needed even when HTTP_PARSER is not. Let's split the existing URL
parsing functionality out into its own CONFIG and let HTTP_PARSER
use it by automatically selecting HTTP_PARSER_URL when HTTP_PARSER
is enabled.
Signed-off-by: Michael Scott <michael.scott@linaro.org>
Add a skeleton for HCI vendor extensions and convert the nRF5x-specific
static address setting to use the HCI VS commands instead.
Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
When a connection is disconnected with outstanding unacked packets, the
Host has no way to signal or acknowledge their processing to the
Controller, since it is illegal to send a Host Number of Completed
Packets command when the connection is not up. Instead, consider the
outstanding packets as acked so that flow control is not affected.
Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
The feature bits for Proxy and Friend were missing from the composition
data and heartbeat messages.
Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
None of the sys_slist_*() functions are threadsafe, so calls to them
must be protected with irq_lock. This is usually done in a wider
caller context, but k_queue_poll() is called with irq_lock already
relinquished, and is thus subject to hard-to-detect and hard-to-explain
race conditions, as e.g. was tracked in #4022.
Signed-off-by: Paul Sokolovsky <paul.sokolovsky@linaro.org>
When CONFIG_STDOUT_CONSOLE is not selected, there is no printk()
function. An alternative (printf) must be used.
This fix was taken from tests/crypto/mbedtls/src/mbedtls.c
Signed-off-by: Michael Scott <michael.scott@linaro.org>
Both count and period must be non-zero for message publication
Stop publication when count becomes zero
Add count to debug message in hb_publish
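For illustration, the intended publication logic is roughly the
following (structure and helper names are hypothetical, not the mesh
stack's actual code; a count of 0xffff means "publish indefinitely"):

    #include <stdint.h>

    struct hb_pub {
        uint16_t count;    /* remaining messages; 0xffff = indefinitely */
        uint8_t  period;   /* publication period; 0 = disabled */
    };

    void send_heartbeat(void);              /* hypothetical */
    void schedule_next(uint8_t period);     /* hypothetical */

    static void hb_publish(struct hb_pub *pub)
    {
        /* Both count and period must be non-zero for publication. */
        if (pub->count == 0 || pub->period == 0) {
            return;
        }

        send_heartbeat();

        /* Stop publication once a finite count reaches zero. */
        if (pub->count != 0xffff && --pub->count == 0) {
            return;
        }

        schedule_next(pub->period);
    }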
Signed-off-by: Steve Brown <sbrown@cortland.com>
User threads can only create other nonessential user threads
of equal or lower priority and must have access to the entire
stack area.
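A sketch of the kind of check this implies (the helper is illustrative,
not the actual validation code); note that in Zephyr a larger numeric
priority value means a lower priority:

    #include <stdbool.h>
    #include <stdint.h>
    #include <kernel.h>    /* for K_ESSENTIAL */

    /* Illustrative only: a user thread may only create nonessential
     * user threads of equal or lower priority than itself.
     */
    static bool user_create_allowed(int requested_prio, int caller_prio,
                                    uint32_t options)
    {
        if (options & K_ESSENTIAL) {
            return false;    /* user threads may not create essential ones */
        }

        /* Larger numeric value == lower priority, so "equal or lower
         * priority" translates to requested_prio >= caller_prio.
         */
        return requested_prio >= caller_prio;
    }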
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
We need to track permission on stack memory regions like we do
with other kernel objects. We want stacks to live in a memory
area that is outside the scope of memory domain permission
management. We need to be able to track what stacks are in use,
and what stacks may be used by user threads trying to call
k_thread_create().
Some special handling is needed because thread stacks appear as
variously-sized arrays of struct _k_thread_stack_element, which is
just a char. We need the entire array to be considered an object,
but we must also properly handle arrays of stacks.
Validation of stacks also requires that the bounds of the stack
are not exceeded. Various approaches were considered. Storing
the size in some header region of the stack itself would not allow
the stack to live in 'noinit'. Having a stack object be a data
structure that points to the stack buffer would confound our
current APIs for declaring stacks as arrays or struct members.
In the end, the struct _k_object was extended to store this size.
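For context, the per-object validation metadata ends up looking roughly
like this (field names and layout are an approximation from memory, not
authoritative):

    /* Approximate shape of the kernel object metadata; the extra data
     * word is what now carries the stack size for stack objects.
     */
    struct _k_object {
        char *name;                           /* object address */
        u8_t perms[CONFIG_MAX_THREAD_BYTES];  /* per-thread permission bits */
        u8_t type;                            /* K_OBJ_* type */
        u8_t flags;                           /* initialized/public flags */
        u32_t data;                           /* for stacks: the stack size */
    };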
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
We also need macros to assert that an object must be in an
uninitialized state. This will be used for validating the thread
and stack objects passed to k_thread_create(), which must not
already be in use.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
We need to enforce that, if the implementation function is inlined
and we are using a syscall declaration macro where a runtime check
is performed, all memory accesses in the inlined implementation
function happen after the user context check.
Fixes bad memory access issues observed due to the compiler fetching
member data from a kernel object when the calling context was in
user mode.
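One generic way to enforce this ordering (shown only as an illustration,
not necessarily the exact mechanism used by the syscall macros) is a
compiler memory barrier between the check and the inlined body:

    #include <stdbool.h>

    struct my_obj;                                    /* hypothetical */
    bool calling_context_is_user(void);               /* hypothetical */
    int syscall_trap_object_op(struct my_obj *obj);   /* hypothetical */
    int do_object_op_inline(struct my_obj *obj);      /* hypothetical */

    static inline int object_op(struct my_obj *obj)
    {
        if (calling_context_is_user()) {
            return syscall_trap_object_op(obj);
        }

        /* Keep the compiler from hoisting loads from 'obj' above the
         * user-context check performed just before this point.
         */
        __asm__ __volatile__("" ::: "memory");

        return do_object_op_inline(obj);
    }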
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This is too powerful for user mode; the other access APIs
require explicit permissions on the threads that are being
granted access.
The API is no longer exposed as a system call and hence will
only be usable by supervisor threads.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
It's currently too easy to run out of thread IDs as they
are never re-used on thread exit.
Now the kernel maintains a bitfield of in-use thread IDs,
updated on thread creation and termination. When a thread
exits, the permission bitfield for all kernel objects is
updated to revoke access for that retired thread ID, so that
a new thread re-using that ID will not gain access to objects
that it should not have.
Because of these runtime updates, setting the permission
bitmap for an object to all ones for a "public" object doesn't
work properly any more; a flag is now set for this instead.
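The index-recycling part boils down to a small bitfield allocator along
these lines (names are illustrative, not the kernel's internal symbols):

    #include <stdint.h>

    #define MAX_THREAD_IDS 64    /* illustrative limit */

    static uint8_t in_use[MAX_THREAD_IDS / 8];

    /* Allocate the lowest free thread index; returns -1 if none is free.
     * Real code would also need locking around the bitfield updates.
     */
    static int thread_index_alloc(void)
    {
        for (int i = 0; i < MAX_THREAD_IDS; i++) {
            if (!(in_use[i / 8] & (1 << (i % 8)))) {
                in_use[i / 8] |= 1 << (i % 8);
                return i;
            }
        }
        return -1;
    }

    /* Release an index on thread exit; the caller also clears this
     * index's bit in every kernel object's permission bitfield so a new
     * thread re-using the index inherits no access.
     */
    static void thread_index_free(int i)
    {
        in_use[i / 8] &= ~(1 << (i % 8));
    }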
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
We got rid of letting uninitialized objects be a free-for-all;
permission to operate on an object is now granted explicitly.
If a user thread is initializing an object, they will already have
permission on it.
If a supervisor thread is initializing an object, that supervisor
thread may or may not want that object added to its set of object
permissions for purposes of permission inheritance or dropping to
user mode.
Resetting all permissions on initialization makes objects much
harder to share and re-use; for example other threads will lose
access if some thread re-inits a shared semaphore.
For all these reasons, just keep the permissions as they are when
an object is initialized.
We will need some policy for permission reset when objects are
requested and released from pools, but the pool implementation
should take care of that.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This will allow these thread objects to be re-used.
_mark_thread_as_dead() removed, it was only being called in one
place.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
API to assist with re-using objects, such as terminated threads or
kernel objects returned to a pool.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
At very low optimization levels, the call to
K_THREAD_STACK_BUFFER doesn't get inlined, overflowing the
tiny stack.
Replace with _ARCH_THREAD_STACK_BUFFER() which on x86 is
just a macro.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Fixes issues where these were getting sign-extended when
dumped out, resulting in (for example) "ffffffff" being
printed when it ought to be "ff".
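This is the classic char sign-extension pitfall; a minimal standalone
illustration:

    #include <stdio.h>

    int main(void)
    {
        char c = 0xff;    /* plain char may be signed on a given toolchain */

        /* If char is signed, c is promoted to int -1 here, so this can
         * print "ffffffff".
         */
        printf("%x\n", c);

        /* Mask (or use an unsigned type) to get the intended "ff". */
        printf("%x\n", c & 0xff);

        return 0;
    }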
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Use some preprocessor trickery to automatically deduce the number
of arguments for the various _SYSCALL_HANDLERn() macros. Makes the grunt
work of converting a bunch of kernel APIs to system calls slightly
easier.
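The trick is the usual variadic argument-counting idiom, sketched here
generically (FOO and GET_ARG_COUNT are placeholders; the real macro
names in the syscall headers differ):

    #include <stdio.h>

    /* The fixed trailing numbers shift depending on how many arguments
     * were passed, so N lands on the argument count (1 to 3 here).
     */
    #define GET_ARG_COUNT(...) GET_ARG_COUNT_(__VA_ARGS__, 3, 2, 1)
    #define GET_ARG_COUNT_(_1, _2, _3, N, ...) N

    /* Dispatch FOO(...) to FOO_1(), FOO_2() or FOO_3() based on the count. */
    #define CONCAT_(a, b) a##b
    #define CONCAT(a, b) CONCAT_(a, b)
    #define FOO(...) CONCAT(FOO_, GET_ARG_COUNT(__VA_ARGS__))(__VA_ARGS__)

    #define FOO_1(a)       printf("1 arg: %d\n", a)
    #define FOO_2(a, b)    printf("2 args: %d %d\n", a, b)
    #define FOO_3(a, b, c) printf("3 args: %d %d %d\n", a, b, c)

    int main(void)
    {
        FOO(1);          /* expands to FOO_1(1) */
        FOO(1, 2);       /* expands to FOO_2(1, 2) */
        FOO(1, 2, 3);    /* expands to FOO_3(1, 2, 3) */
        return 0;
    }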
Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
Previously, there was boolean CONFIG_SLIP_DEBUG, which effectively
switched between "logging off" and "debug-level logging". Instead,
switch to CONFIG_SYS_LOG_SLIP_LEVEL (the naming of the option follows
existing conventions), which allows selecting any of the five standard
logging levels.
Signed-off-by: Paul Sokolovsky <paul.sokolovsky@linaro.org>
The current prescaler calculation incorrectly fails to configure the
desired frequency when it is possible to match it exactly. Fix this.
Without this patch, if the user requests frequency N Hz, and there is
a SPI prescaler that can match this frequency exactly, the actual
frequency chosen by spi_stm32_configure() will be N/2 Hz.
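For illustration, the intended selection logic is roughly the following
(a standalone sketch, not the actual driver code; STM32 SPI prescalers
divide the peripheral clock by 2, 4, ..., 256):

    #include <stdint.h>

    /* Pick the smallest prescaler setting (divider 2^(br + 1), br = 0..7)
     * whose resulting frequency does not exceed the requested one. An
     * exact match, e.g. clock_rate / 2 == desired, must select that
     * divider rather than the next one (which would give desired / 2).
     */
    static int spi_prescaler_for(uint32_t clock_rate, uint32_t desired,
                                 uint32_t *actual)
    {
        for (int br = 0; br < 8; br++) {
            uint32_t freq = clock_rate >> (br + 1);

            if (freq <= desired) {
                *actual = freq;
                return br;
            }
        }

        return -1;    /* even a /256 divider is still too fast */
    }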
Signed-off-by: Marti Bolivar <marti.bolivar@linaro.org>
Clean up & rework JTAG documentation for ESP-32 boards to provide full
commands and clarify gotchas.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Explicitly call net_pkt_ref()/net_pkt_unref() to avoid the packet being
freed after calling net_context_sendto() in retransmit_request().
Also, do not return when net_context_sendto() returns an error; instead,
keep retrying.
Signed-off-by: Robert Chou <robert.ch.chou@acer.com>
The original coap_client implementation does not set up the "appdatalen"
of the net_pkt correctly and does not strip the IP + UDP headers when
retransmitting. This results in a malformed CoAP packet. Fix it by
adding a strip_headers() function that sets appdatalen and removes the
IP + UDP headers.
Signed-off-by: Robert Chou <robert.ch.chou@acer.com>
This helps to debug issues with mass connection handling (e.g. when
issues happen at ~500th connection).
Signed-off-by: Paul Sokolovsky <paul.sokolovsky@linaro.org>
It's possible to get the number of free pkts/buffers with just
CONFIG_NET_BUF_POOL_USAGE, whereas CONFIG_NET_DEBUG_NET_PKT
depends on CONFIG_NET_LOG and adds quite a bit of other
overhead. Also, give a hint that this option should be enabled
to get the number of free buffers.
Additionally, use the unambiguous wording "Total" to represent the
maximum capacity of the data structures, instead of the previous
"Count". "Count" (or at least a counter) intuitively suggests
something that can change, so with no other numbers shown it is easy
to assume that it is actually the number of free buffers (because that
is often the information a user is interested in).
Signed-off-by: Paul Sokolovsky <paul.sokolovsky@linaro.org>
There are 3 cases of possible allocation failure, and only 1 of them
was logged. Now, all the cases are logged: 1) failure to allocate a
net_pkt; 2) failure to allocate the very first net_buf for it;
3) failure to allocate an additional net_buf for it (previously, only
the last case was logged).
Signed-off-by: Paul Sokolovsky <paul.sokolovsky@linaro.org>
There have been situations where remote stacks cannot respond
within a second, so increase the timeout to 2 seconds. The timeout has
to be relatively short as the channel cannot be reused while
disconnecting.
Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
MPS shall never be bigger than MTU + 2 as the remaining bytes cannot
be used since the SDU is limited to length + MTU.
Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
Add implementation to support Coded PHY update procedure
with packet transmit time restrictions.
Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>