Commit Graph

7 Commits

Author SHA1 Message Date
Kumar Gala a1b77fd589 zephyr: replace zephyr integer types with C99 types
	git grep -l 'u\(8\|16\|32\|64\)_t' | \
		xargs sed -i "s/u\(8\|16\|32\|64\)_t/uint\1_t/g"
	git grep -l 's\(8\|16\|32\|64\)_t' | \
		xargs sed -i "s/s\(8\|16\|32\|64\)_t/int\1_t/g"

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2020-06-08 08:23:57 -05:00
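
For illustration, a minimal before/after sketch of what that mechanical conversion does to declarations; the variable names here are hypothetical, not taken from the Zephyr tree.

	#include <stdint.h>

	/* Effect of the sed rewrite above on a typical declaration block.
	 *
	 * Before (Zephyr-specific types):  u8_t flags;  u32_t count;  s64_t offset;
	 * After (equivalent C99 types from <stdint.h>):
	 */
	uint8_t flags;
	uint32_t count;
	int64_t offset;
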
Nicolas Pitre dc246fb1ca SYS_MEM_POOL_DEFINE(): move BUILD_ASSERT() at the end
As in commit 223a2b950f ("mempool: move BUILD_ASSERT to the
end of K_MEM_POOL_DEFINE"), move BUILD_ASSERT() to the end for
consistency.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2019-10-01 10:22:18 -07:00
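
To illustrate the pattern only (not the actual Zephyr macro), here is a minimal sketch of a definition macro with its compile-time check placed last; every name in it is a hypothetical stand-in, with C11 _Static_assert standing in for Zephyr's BUILD_ASSERT().

	#include <stddef.h>

	/* Hypothetical stand-ins for illustration; Zephyr's real
	 * BUILD_ASSERT() and SYS_MEM_POOL_DEFINE() differ in detail. */
	#define BUILD_ASSERT(cond, msg) _Static_assert(cond, msg)

	struct foo {
		char *buf;
		size_t size;
	};

	/* The compile-time check goes last, mirroring the ordering used by
	 * K_MEM_POOL_DEFINE() after commit 223a2b950f. */
	#define FOO_DEFINE(name, bufsize)                          \
		static char name##_buf[bufsize];                   \
		static struct foo name = {                         \
			.buf = name##_buf,                         \
			.size = (bufsize),                         \
		};                                                 \
		BUILD_ASSERT((bufsize) > 0, "buffer size must be positive")

	FOO_DEFINE(my_foo, 64);
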
Nicolas Pitre 2129937d3d realloc(): move mempool internal knowledge out of generic lib code
The realloc function was a bit too intimate with the mempool accounting.
Abstract that knowledge away and move it where it belongs.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2019-09-30 10:57:24 -07:00
Nicolas Pitre ace11bbefd mempool: make sure max block size isn't smaller than minimum allowed
If maxsize is smaller than _MPOOL_MINBLK, then Z_MPOOL_LVLS() will be 0.
That means the loop in z_sys_mem_pool_base_init() that initializes the
block free list for the nonexistent level 0 will corrupt whatever memory
happens to be at the location of the zero-sized struct sys_mem_pool_lvl
array. And the corruption is done through a perfectly legitimate memory
pool block address, which makes for really nasty bugs to track down.

This is more likely on 64-bit systems, where _MPOOL_MINBLK is twice as
large as on 32-bit systems.

Let's prevent that with a build-time assertion on maxsize when defining
a memory pool, and adjust the affected test accordingly.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2019-07-02 19:41:20 -07:00
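
A minimal sketch of the kind of build-time check described above, using C11 _Static_assert as a stand-in for Zephyr's BUILD_ASSERT(); the macro names, the _MPOOL_MINBLK value, and the exact threshold are placeholders, not the literal Zephyr code.

	#include <stddef.h>

	/* Hypothetical values and names; the real _MPOOL_MINBLK and the
	 * pool definition macro in Zephyr differ in detail. */
	#define _MPOOL_MINBLK (2 * sizeof(void *))

	#define SYS_MEM_POOL_SIZE_CHECK(maxsize)                           \
		_Static_assert((maxsize) >= _MPOOL_MINBLK,                 \
			       "memory pool max block size is too small")

	/* Rejected at build time if maxsize would yield zero pool levels,
	 * instead of corrupting memory at run time. */
	SYS_MEM_POOL_SIZE_CHECK(64);
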
Nicolas Pitre cf974371fb mempool: make alignment/rounding 64-bit compatible
Minimum alignment and rounding must be done on a word boundary. Let's
replace _ALIGN4() with WB_UP(), which is equivalent on 32-bit targets
and 64-bit aware.

Also enforce a minimum alignment on the memory pool. This makes a
difference mostly on 64-bit targets, where the widely used 4-byte
alignment is not sufficient.

The _ALIGN4() macro has no users left, so it is removed.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2019-07-02 19:41:20 -07:00
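
As a rough illustration of the word-boundary rounding involved, here is a self-contained sketch; the macro bodies below are assumptions for demonstration, not the exact Zephyr definitions.

	#include <stdio.h>
	#include <stddef.h>

	/* Illustrative definitions; Zephyr's ROUND_UP()/WB_UP() live in its
	 * utility headers and may differ in detail. */
	#define ROUND_UP(x, align) ((((x) + (align) - 1) / (align)) * (align))
	#define WB_UP(x)           ROUND_UP(x, sizeof(void *))

	int main(void)
	{
		/* On a 64-bit target this prints 16: 10 rounds up to the
		 * next 8-byte word boundary, whereas the old _ALIGN4()
		 * would have produced 12, which is not word aligned there. */
		printf("%zu\n", (size_t)WB_UP(10));
		return 0;
	}
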
Anas Nashif 0c9e280547 cleanup: include/: move misc/mutex.h to sys/mutex.h
move misc/mutex.h to sys/mutex.h and
create a shim for backward-compatibility.

No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.

Related to #16539

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-06-27 22:55:49 -04:00
Anas Nashif 08ee8b09ba cleanup: include/: move misc/mempool.h to sys/mempool.h
move misc/mempool.h to sys/mempool.h and
create a shim for backward-compatibility.

No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.

Related to #16539

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-06-27 22:55:49 -04:00
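
For the two header moves above, a minimal sketch of what a backward-compatibility shim left at the old misc/ path typically looks like; the guard name and warning text are assumptions, and the same pattern applies to the misc/mempool.h shim.

	/* Hypothetical shim standing in for include/misc/mutex.h; the exact
	 * contents in the Zephyr tree may differ. */
	#ifndef ZEPHYR_INCLUDE_MISC_MUTEX_H_
	#define ZEPHYR_INCLUDE_MISC_MUTEX_H_

	/* Warn on use of the deprecated path unless compatibility includes
	 * are explicitly allowed via Kconfig. */
	#ifndef CONFIG_COMPAT_INCLUDES
	#warning "This header has moved, include <sys/mutex.h> instead."
	#endif

	#include <sys/mutex.h>

	#endif /* ZEPHYR_INCLUDE_MISC_MUTEX_H_ */
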