Moving the net_buf_pool objects to a dedicated area lets us access
them by array offset into this area instead of directly by pointer.
This helps reduce the size of net_buf objects by 4 bytes.
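A minimal sketch of the idea, assuming names along the lines of the net_buf code (the exact symbols may differ):

    /* All pools live in one dedicated array, so a buffer can refer to
     * its pool with a small index instead of a pointer:
     */
    extern struct net_buf_pool _net_buf_pool_list[];

    static inline struct net_buf_pool *net_buf_pool_get(int id)
    {
        return &_net_buf_pool_list[id];
    }

    /* ...and struct net_buf then carries "u8_t pool_id" in place of a
     * "struct net_buf_pool *pool" member.
     */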
Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
The net_pkt_split() function was incorrectly checking the fragA
pointer even before it was allocated.
The unit test is fixed and converted to ztest.
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
llvm complains about char* -> u8_t* type conversion.
tests/net/ipv6_fragment/src/main.c:645:52: warning:
passing 'char [11]' to parameter of type 'const u8_t *' (aka
'const unsigned char *') converts between pointers to integer
types with different sign [-Wpointer-sign]
bool written = net_pkt_append_all(pkt, data_len, data,
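One way to silence it (a sketch of the kind of fix, reusing the variables from the quoted test code and assuming a K_FOREVER timeout) is to declare the payload with the type the API expects:

    static const u8_t data[] = "0123456789";   /* was char[11] */

    bool written = net_pkt_append_all(pkt, data_len, data,
                                      K_FOREVER);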
Jira: ZEP-2274
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
1. Changed _tsc_read() to k_cycles_get_32(), so that reading the
time stamp is agnostic of the architecture used.
2. Changed the variable names from *_tsc to *_time_stamp.
JIRA: ZEP-1426
Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
Update the arm.conf to enable CONFIG_FLOAT for the float test and use a
filter on that so we only run/build the test on SoCs/boards that support
floating point hardware.
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
This adds bt_gatt_register_service, which takes a bt_gatt_service
containing the attribute array that is then added to the database,
saving a pointer in each and every attribute declared.
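Roughly, registration then looks like the sketch below (the field names and the service contents are assumptions for illustration, not a copy of the new header):

    static struct bt_gatt_attr attrs[] = {
        BT_GATT_PRIMARY_SERVICE(BT_UUID_BAS),
        /* ... characteristic and descriptor attributes ... */
    };

    static struct bt_gatt_service svc = {
        .attrs = attrs,
        .attr_count = ARRAY_SIZE(attrs),
    };

    int err = bt_gatt_register_service(&svc);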
Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
GAP is a mandatory service and, now that the db can only be built
dynamically, there is no reason to keep the applications registering it.
Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
Removes CONFIG_BLUETOOTH_GATT_DYNAMIC_DB in preparation for the
introduction of bt_gatt_unregister.
Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
The main thread was doing nothing but spawning another thread to perform
the test. Delete the alternate thread, and just do the test on the main
thread, adjusting stack size and priority as necessary.
Issue: ZEP-2236
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
On some devices, when k_cpu_idle() was called we were getting
interrupts that were not the timer interrupt. On bbc_microbit
a power clock control driver interrupt was happening instead
and k_cpu_idle() was returning without the system tick advancing,
failing the test.
The clock control interrupts seem to only happen early in device
boot; moving the idle test much later lets the test pass on this
board (and likely all other NRF5 based boards).
Issue: ZEP-2257
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Add tests for the newly-added JSON_OBJ_DESCR_OBJ_ARRAY. These pass.
Note that this also adds test coverage for decoding an array of
maximum length, to avoid regressing the recently-introduced fix for
this edge case.
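For reference, a descriptor using the new macro looks roughly like this (struct and field names are invented for the sketch):

    struct elem {
        int int1;
    };

    struct obj {
        struct elem elements[4];
        size_t num_elements;
    };

    static const struct json_obj_descr elem_descr[] = {
        JSON_OBJ_DESCR_PRIM(struct elem, int1, JSON_TOK_NUMBER),
    };

    static const struct json_obj_descr obj_descr[] = {
        JSON_OBJ_DESCR_OBJ_ARRAY(struct obj, elements, 4, num_elements,
                                 elem_descr, ARRAY_SIZE(elem_descr)),
    };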
Signed-off-by: Marti Bolivar <marti.bolivar@linaro.org>
- _SysFatalErrorHandler is supposed to be user-overridable.
The test case now installs its own handler to show that this
has happened properly (see the sketch after this list).
- Use the TC_PRINT()/TC_ERROR() macros.
- Since we have our own _SysFatalErrorHandler, show that
k_panic() works.
- Show that _SysFatalErrorHandler gets invoked with the expected
reason code for some of the scenarios.
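A rough sketch of the test-local override (the exact signature, the NANO_ESF type and the reason codes are architecture/tree specific, and expected_reason is a hypothetical test variable):

    void _SysFatalErrorHandler(unsigned int reason, const NANO_ESF *esf)
    {
        ARG_UNUSED(esf);

        TC_PRINT("Caught system error -- reason %d\n", reason);

        if (reason != expected_reason) {
            TC_ERROR("unexpected reason code %d\n", reason);
            /* record the failure */
        }
        /* otherwise record the pass and let the test continue */
    }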
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
The hard-coded value of 10ms doesn't take into account the system's
configured number of ticks per second, nor does it account for an
unlucky tick advance, which causes the test to fail very
intermittently in QEMU.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
When the src and dst addresses are compressed based on context
information, the uncompression method should verify the CID bit,
the SAC and DAC bits and the context IDs. But it missed some
cases, which resulted in an invalid uncompressed IPv6 header.
E.g. CID is set, SAC is 0, DAC is 1 and context IDs are provided.
The uncompression method assumed that the src address is compressed
based on context information, but it is not.
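In other words, the intended rule is roughly the following (the flag names are illustrative, not the exact defines used in the 6lo code):

    /* SAC/DAC alone decide whether the src/dst address uses context
     * (stateful) compression; CID only means that an extra context ID
     * byte follows and must not by itself imply context compression.
     */
    bool src_ctx_based = sac_bit_is_set;   /* SAC == 1 */
    bool dst_ctx_based = dac_bit_is_set;   /* DAC == 1 */
    bool has_cid_byte  = cid_bit_is_set;   /* CID == 1 */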
Signed-off-by: Ravi kumar Veeramally <ravikumar.veeramally@linux.intel.com>
Increase to 1024 to get more tests and samples running on this
device, which has only 8K of SRAM.
Change the thread stack size in the mslab test to make it fit on
this board.
Jira: ZEP-2079
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Add tests for new macro helpers that allow JSON field names to differ
from their corresponding C struct field names. These pass.
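A rough example of what the new helpers allow (names invented for the sketch):

    struct config {
        const char *dev_name;   /* appears as "device-name" in the JSON */
        int interval;
    };

    static const struct json_obj_descr config_descr[] = {
        JSON_OBJ_DESCR_PRIM_NAMED(struct config, "device-name",
                                  dev_name, JSON_TOK_STRING),
        JSON_OBJ_DESCR_PRIM(struct config, interval, JSON_TOK_NUMBER),
    };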
Signed-off-by: Marti Bolivar <marti.bolivar@linaro.org>
The error was generated by a piece of code that is
not currently being used. This piece of code was kept to measure
the overhead caused by the benchmarking code on x86.
JIRA: ZEP-2160
Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
64-bit types were not being handled properly and, depending on the
calling convention, could result in garbage values being printed.
We still truncate these to 32-bit values; the predominant use-case
is printing timestamp delta values, which generally fit in a 32-bit
value. However, we are no longer printing random stuff.
The test case for printk() is updated appropriately to catch this
regression.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
These tests make sure that the IPv6 fragments are built correctly
when a large IPv6 packet is being sent.
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
Remove existing commands already supported by CONFIG_BLUETOOTH_SHELL
and make sure all .conf files select it.
Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
Static code analysis reported that some kernel APIs were used without
reading the return value. Since the benchmark doesn't need the error
conditions, a simple read of the value followed by ARG_UNUSED()
is used to silence the static code analysis errors.
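The pattern is essentially the following (the semaphore is just a hypothetical example of such an API):

    int ret;

    ret = k_sem_take(&bench_sem, K_FOREVER);
    /* The benchmark does not handle errors; consume the return value
     * so static analysis stops flagging it.
     */
    ARG_UNUSED(ret);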
JIRA: ZEP-2134
Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
As this is a test, it's a minor issue, but let's keep the Coverity
report clean.
Coverity-CID: 169303
Signed-off-by: Paul Sokolovsky <paul.sokolovsky@linaro.org>
These files are no longer used and can be removed.
Change-Id: I781256ab93b5364346d99cd4aac488762c437151
Signed-off-by: Inaky Perez-Gonzalez <inaky.perez-gonzalez@intel.com>
This test enables all networking features and requires lots of RAM, so
increase the available RAM to avoid failures due to insufficient RAM
when building.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Change the reference voltage for the analog comparator to the internal
reference voltage.
Currently AIO is using the external reference voltage (Reference A),
which is set at 3.3V. In this use case both the reference voltage and
the AIO IP are set at 3.3V, so the rising edge interrupt behaviour
will be unpredictable.
By changing to the internal reference voltage (set at 1.09V), an
interrupt will be generated as soon as the voltage on the input
exceeds it.
Jira: ZEP-1927
Signed-off-by: Youvedeep Singh <youvedeep.singh@intel.com>
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
AIO should be supported on more platforms. Adapt
this case to make it run on more platforms.
Also keep the reference voltage for the comparator as internal.
Signed-off-by: Qiu Peiyang <peiyangx.qiu@intel.com>
Fixes sparse warning:
<snip>/zephyr/zephyr/misc/printk.c:50:5: warning: symbol '_char_out' was not declared. Should it be static?
Change-Id: I5af0860e9f8f827002ae9a142b5924d3de8d51b6
Signed-off-by: Maciek Borzecki <maciek.borzecki@gmail.com>
We did not check that the UDP or TCP checksum is valid after receiving
a packet. Fix this so that the checksum is validated when a packet is
received in the connection handler. As the checksum validation can be
resource intensive, do it only after we have verified that there is a
connection handler for this connection.
The checksum calculation can be turned OFF if needed, but it is
ON by default.
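In pseudo-structure (function and variable names are hypothetical), the ordering becomes:

    conn = connection_lookup(pkt);
    if (!conn) {
        return NET_DROP;            /* no handler, no further work */
    }

    if (checksum_enabled && !checksum_is_valid(pkt)) {
        return NET_DROP;            /* bad checksum, drop the packet */
    }

    return conn->cb(conn, pkt, conn->user_data);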
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
A non-tickless system with 10ms granularity was occasionally
taking up to 70ms for the cancellation to be propagated back.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
It is possible to remove the forward declaration of l2cap_alloc_buf,
as the recv pointer can be compared directly with the chan pointer,
avoiding the use of l2cap_ops directly.
Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
For all arches except ARC, enable stack sentinel and test that
some common stack violations trigger exceptions.
For ARC, use the hardware stack checking feature.
Additional testcase.ini blocks may be added to do stack bounds checking
for MMU/MPU-based stack protection schemes.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This patch amounts to a mostly complete rewrite of the k_mem_pool
allocator, which had been the source of historical complaints vs. the
one easily available in newlib. The basic design of the allocator is
unchanged (it's still a 4-way buddy allocator), but the implementation
has made different choices throughout. Major changes:
Space efficiency: The old implementation required ~2.66 bytes per
"smallest block" in overhead, plus 16 bytes per log4 "level" of the
allocation tree, plus a global tracking struct of 32 bytes and a very
surprising 12 byte overhead (in struct k_mem_block) per active
allocation on top of the returned data pointer. This new allocator
uses a simple bit array as the only per-block storage and places the
free list into the freed blocks themselves, requiring only ~1.33 bits
per smallest block, 12 bytes per level, 32 bytes globally and only 4
bytes of per-allocation bookkeeping. And it puts more of the generated
tree into BSS, slightly reducing binary sizes for non-trivial pool
sizes (even as the code size itself has increased a tiny bit).
IRQ safe: atomic operations on the store have been cut down to be at
most "4 bit sets and dlist operations" (i.e. a few dozen
instructions), reducing latency significantly and allowing us to lock
against interrupts cleanly from all APIs. Allocations and frees can
be done from ISRs now without limitation (well, obviously you can't
sleep, so "timeout" must be K_NO_WAIT).
Deterministic performance: there is no more "defragmentation" step
that must be manually managed. Block coalescing is done synchronously
at free time and takes constant time (strictly log4(num_levels)), as
the detection of four free "partner bits" is just a simple shift and
mask operation.
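Conceptually, the check amounts to no more than this (an illustrative sketch, not the actual k_mem_pool internals):

    /* One "free" bit per block, blocks grouped in fours: a block can be
     * coalesced into its parent when all four bits of its group are set.
     */
    static inline bool partners_free(const u32_t *bits, int block)
    {
        int group = block & ~3;                 /* first block of the group */
        u32_t mask = 0xfu << (group % 32);      /* the four partner bits */

        return (bits[group / 32] & mask) == mask;
    }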
Cleaner behavior with odd sizes. The old code assumed that the
specified maximum size would be a power of four multiple of the
minimum size, making use of non-standard buffer sizes problematic.
This implementation re-aligns the sub-blocks at each level and can
handle situations where alignment restrictions mean fewer than 4x will
be available. If you want precise layout control, you can still
specify the sizes rigorously. It just doesn't break if you don't.
More portable: the original implementation made use of GNU assembler
macros embedded inline within C __asm__ statements. Not all
toolchains are actually backed by a GNU assembler even when they
support the GNU assembly syntax. This is pure C, albeit with some
hairy macros to expand the compile-time-computed values.
Related changes that had to be rolled into this patch for bisectability:
* The new allocator has a firm minimum block size of 8 bytes (to store
the dlist_node_t). It will "work" with smaller requested min_size
values, but obviously makes no firm promises about layout or how
many will be available. Unfortunately many of the tests were
written with very small 4-byte minimum sizes and to assume exactly
how many they could allocate. Bump the sizes to match the allocator
minimum.
* The mbox and pipes API made use of the internals of k_mem_block and
had to be ported to the new scheme. Blocks no longer store a
backpointer to the pool that allocated them (it's an integer ID in a
bitfield), so if you want to "nullify" them you have to use the
data pointer.
* test_mbox_api had a bug where it was prematurely freeing k_mem_blocks
that it sent through the mailbox. This worked in the old allocator
because the memory wouldn't be touched when freed, but now we stuff
list pointers in there and the bug was exposed.
* Remove test_mpool_options: the options (related to defragmentation
behavior) tested no longer exist.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Updated tests conf file with coverage for new feature Kconfig
options in the Controller. This is required to catch compile/build
regressions.
Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
Added because previously Zephyr used an API incompatible with Newlib
for errno handling. Even with the Newlib compatibility changes, we
override the function which is defined in the Zephyr SDK libc.a, so it
makes sense to ensure this works as expected.
Signed-off-by: Paul Sokolovsky <paul.sokolovsky@linaro.org>
Currently, a queue/fifo getter chooses how long to wait for an
element. But there are scenarios where the putter would know better;
there should be a way to expire the getter's timeout to make it run
again. The k_queue_cancel_wait() and k_fifo_cancel_wait() functions
do just that. They cause the corresponding *_get() functions to return
with a NULL value, as if the timeout had expired on the getter's side
(even with K_FOREVER).
This can be used to signal out of band conditions from putter to
getter, e.g. end of processing, error, configuration change, etc.
A specific event would be communicated to getter by other means
(e.g. using existing shared context structures).
Without this call, achieving the same effect would require e.g.
calling k_fifo_put() with a pointer to a special sentinel memory
structure - such a structure would need to be allocated somewhere
and somehow, and the getter would need to distinguish it from a
normal data item. Having the cancel_wait() functions offers an
elegant alternative. From this perspective, these calls can be seen
as an equivalent to e.g. k_fifo_put(fifo, NULL), except that such a
call won't work in practice.
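For illustration, the getter/putter pattern looks roughly like this (fifo and item names invented):

    /* Getter thread */
    struct data_item *item = k_fifo_get(&my_fifo, K_FOREVER);
    if (!item) {
        /* Woken by k_fifo_cancel_wait(): no data; check the shared
         * context for the out-of-band condition (shutdown, error, ...).
         */
    }

    /* Putter thread, signalling the out-of-band condition */
    k_fifo_cancel_wait(&my_fifo);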
Change-Id: I47b7f690dc325a80943082bcf5345c41649e7024
Signed-off-by: Paul Sokolovsky <paul.sokolovsky@linaro.org>
Implement the framework for the LE Scan Request Received Event.
The feature is available under the Controller's advanced
features and will be selected implicitly when the Bluetooth v5.0
LE Advertising Extensions feature is implemented.
Jira: ZEP-2073
Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
Since Controller to Host flow control is a feature that affects both
sides equally, move it to the top-level Kconfig file and consolidate its
use in both Controller and Host.
Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
The TX and RX pools need to be split, otherwise the TX code path may
consume all buffers, causing the RX thread to deadlock, which may in
turn deadlock the TX thread as well in case it needs more credits.
Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
The Bluetooth Specification allows for optional Controller to Host flow
control based on the same credit-based mechanism as the Host to
Controller one. This is particularly useful in 2-chip solutions where
the Host and the Controller are connected via a physical link (UART, SPI
or similar) where the Host is sometimes required to ask the Controller
to throttle its data traffic while still making sure that relevant
events get through the line.
This implementation is based on a simple queue of pending events and
data that is populated whenever the Controller detects that the Host is
out of buffers and then emptied whenever the Host notifies the
Controller that it is ready to receive data again. Events relevant to the
connections are also queued to preserve the order of arrival.
At this point the Controller ignores the connection handle sent by the
Host and treats all connections equally, and it also queues events even
for connections that have no data pending in the queue. Both of these
items can be improved if the necessity arises.
Note that Number of Completed Packets will still flow freely from the
Controller to the Host regardless of the pending ACL data packets, which
might lead to inconsistencies in the sequential order of certain
operations that include bi-directional data transfer.
Jira: ZEP-1735
Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
The files for the Arduino Due needed to be updated to use the new
configuration when the SoC moved from the atmel_sam3 directory to
the atmel_sam/sam3x directory.
Jira: ZEP-2067
Signed-off-by: Justin Watson <jwatson5@gmail.com>
This test generates a fault as part of its execution, hence make the
test suite aware of that by tagging it.
Signed-off-by: Rishi Khare <rishi.khare@intel.com>
There are (at least) two UART TX interrupt causes: "tx fifo has more
room" and "transmission of tx fifo complete". The Zephyr API has only
one function to enable TX interrupts, uart_irq_tx_enable(), so it's
fair to assume it enables the interrupt for both conditions. But then,
immediately after enabling the TX IRQ, it will fire with the "tx fifo
has more room" cause. If the ISR doesn't do anything to fill the FIFO,
on some architectures it will fire again immediately after return from
the ISR (with no instruction progress in the main application thread).
That's exactly the situation with this test, and on ARM it leads to an
infinite IRQ loop.
So, instead, move the call to uart_fifo_fill() inside the ISR, and be
sure to disable the TX IRQ after we have transmitted enough characters.
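The resulting ISR shape is roughly the following (sketch only; tx_buf, tx_pos and tx_remaining are hypothetical test state):

    static void uart_isr(struct device *dev)
    {
        uart_irq_update(dev);

        if (uart_irq_tx_ready(dev)) {
            int n = uart_fifo_fill(dev, &tx_buf[tx_pos], tx_remaining);

            tx_pos += n;
            tx_remaining -= n;

            if (tx_remaining == 0) {
                /* Everything queued: stop TX interrupts so we do not
                 * spin forever on "tx fifo has more room".
                 */
                uart_irq_tx_disable(dev);
            }
        }
    }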
Change-Id: Ibbd05acf96b01fdb128f08054819fd104d6dfae8
Signed-off-by: Paul Sokolovsky <paul.sokolovsky@linaro.org>