sof/zephyr/Kconfig

if SOF
rsource "../Kconfig.sof"
config SOF_ZEPHYR_HEAP_CACHED
	bool "Cached Zephyr heap for SOF memory non-shared zones"
	default y if CAVS || ACE
	default n
	help
	  Enable a cached heap by mapping cached SOF memory zones to
	  separate Zephyr sys_heap objects and enabling caching for
	  non-shared zones.
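
# A minimal sketch (hypothetical application overlay, e.g. prj.conf) that
# overrides the cached-heap default on CAVS/ACE platforms, for instance
# when chasing suspected cache-coherence problems:
#   CONFIG_SOF_ZEPHYR_HEAP_CACHED=n
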
config ZEPHYR_NATIVE_DRIVERS
	bool "Use Zephyr native drivers"
	default n
	help
	  Enable Zephyr native API drivers for the host and DAI audio
	  components. The host-zephyr and dai-zephyr implementations
	  will be used instead of the legacy XTOS versions.
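
# A minimal sketch (hypothetical prj.conf overlay) selecting the native
# drivers, which swaps in host-zephyr and dai-zephyr at build time:
#   CONFIG_ZEPHYR_NATIVE_DRIVERS=y
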
config SOF_ZEPHYR_STRICT_HEADERS
	bool "Experimental: Force build with Zephyr RTOS headers only"
	default n
	help
	  This is a transitional option that allows developers to test
	  builds using only the Zephyr RTOS headers. This will eventually
	  become the default header configuration when native Zephyr
	  support is ready, and this menu choice will be removed.
	  If unsure, say n.
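
# A minimal sketch (hypothetical prj.conf overlay) for a strict-header
# test build:
#   CONFIG_SOF_ZEPHYR_STRICT_HEADERS=y
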
config DMA_DOMAIN
	bool "Enable the usage of the DMA domain"
	default y if IMX8M
	help
	  This enables the usage of the DMA domain in scheduling.
config DMA_DOMAIN_SEM_LIMIT
	int "Number of resources Zephyr's DMA domain semaphore can accumulate"
	depends on DMA_DOMAIN
	default 10
	help
	  Set this value according to the load of the system. Make sure
	  that SEM_LIMIT covers the maximum number of tasks your system
	  may be executing at some point (worst case).
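
# Sizing example (from the commit introducing this option): with two
# non-registrable pipeline tasks and one registrable pipeline task on the
# same scheduling component, the semaphore can be given at most three
# resources during task cancellation, so SEM_LIMIT should be at least 3.
# A hypothetical prj.conf overlay for that case:
#   CONFIG_DMA_DOMAIN=y
#   CONFIG_DMA_DOMAIN_SEM_LIMIT=3
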
config PIPELINE_2_0
	bool "Enable pipeline 2.0 changes"
	depends on IPC_MAJOR_4
	default y if ACE
	help
	  This flag enables the new pipeline structure, known as
	  pipeline2_0. It is required for certain new features, such as
	  DP_SCHEDULER.
config ZEPHYR_DP_SCHEDULER
	bool "Use Zephyr thread-based DP scheduler"
	default y if ACE
	default n
	depends on IPC_MAJOR_4
	depends on ZEPHYR_SOF_MODULE
	depends on ACE
	depends on PIPELINE_2_0
	help
	  Enable the Data Processing preemptive scheduler based on
	  Zephyr preemptive threads. DP modules can be located on
	  different cores than LL pipeline modules and may have a
	  different tick (e.g. 300 ms for speech recognition).
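
# A minimal sketch (hypothetical prj.conf overlay) enabling the DP
# scheduler together with the options it depends on; platform-provided
# symbols such as ACE and ZEPHYR_SOF_MODULE must already be set:
#   CONFIG_IPC_MAJOR_4=y
#   CONFIG_PIPELINE_2_0=y
#   CONFIG_ZEPHYR_DP_SCHEDULER=y
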
config CROSS_CORE_STREAM
	bool "Enable cross-core connected pipelines"
	default y if IPC_MAJOR_4
	help
	  Enables support for connecting pipelines from different cores
	  together, so a stream can travel from one core to another.
	  Note that this is different from "multicore" support. In SOF,
	  "multicore" support means different streams can be processed
	  on different cores; however, each stream is processed entirely
	  on a single core.
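
# A minimal sketch (hypothetical prj.conf overlay) opting out of
# cross-core pipelines on an IPC4 build where every stream stays on a
# single core:
#   CONFIG_CROSS_CORE_STREAM=n
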
config SOF_BOOT_TEST
	bool "Enable SOF run-time testing"
	depends on ZTEST
	help
	  Run tests during boot. This enables an SOF boot-time self-test.
	  When enabled, the resulting image will run a number of self-tests
	  when the first global IPC command is received, i.e. when SOF is
	  completely initialized. After that, SOF will continue running and
	  be usable as usual.
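
# A minimal sketch (hypothetical prj.conf overlay) for a self-testing
# image; ZTEST is a Zephyr-provided symbol required by the dependency:
#   CONFIG_ZTEST=y
#   CONFIG_SOF_BOOT_TEST=y
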
endif