.. _os_services:

OS Services
###########

.. toctree::
   :maxdepth: 1

   binary_descriptors/index.rst
   console.rst
   crypto/index
   debugging/index.rst
   device_mgmt/index
   dsp/index.rst
   file_system/index.rst
   formatted_output.rst
   input/index.rst
   ipc/index.rst
   llext/index.rst
   logging/index.rst
   tracing/index.rst
   resource_management/index.rst
   mem_mgmt/index.rst
   modbus/index.rst
   modem/index.rst
   notify.rst
   pm/index.rst
   portability/index.rst
   poweroff.rst
   shell/index.rst
   settings/index.rst
   smf/index.rst
   storage/index.rst
   sensing/index.rst
   task_wdt/index.rst
   tfm/index
   virtualization/index.rst
   retention/index.rst
   rtio/index.rst
   zbus/index.rst
   misc.rst