if SOF

rsource "../Kconfig.sof"

config SOF_ZEPHYR_HEAP_CACHED
	bool "Cached Zephyr heap for SOF non-shared memory zones"
	default y if CAVS || ACE
	default n
	help
	  Enable a cached heap by mapping cached SOF memory zones to separate
	  Zephyr sys_heap objects and enabling caching for the non-shared
	  zones.
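
# The mechanism above can be sketched with plain Zephyr APIs. The actual
# zone-to-heap mapping is internal to SOF; the snippet below (with a
# hypothetical cached_heap_buf backing store) only illustrates pairing a
# dedicated sys_heap with explicit cache maintenance:
#
#   #include <zephyr/kernel.h>
#   #include <zephyr/sys/sys_heap.h>
#   #include <zephyr/cache.h>
#
#   static struct sys_heap cached_heap;    /* heap serving a cached zone */
#   static uint8_t cached_heap_buf[4096];  /* hypothetical backing store */
#
#   void cached_heap_init(void)
#   {
#           sys_heap_init(&cached_heap, cached_heap_buf,
#                         sizeof(cached_heap_buf));
#   }
#
#   void *cached_alloc(size_t bytes)
#   {
#           void *p = sys_heap_alloc(&cached_heap, bytes);
#
#           /* writeback + invalidate so other agents observe coherent data */
#           if (p)
#                   sys_cache_data_flush_and_invd_range(p, bytes);
#           return p;
#   }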

config ZEPHYR_NATIVE_DRIVERS
	bool "Use Zephyr native drivers"
	default n
	help
	  Enable Zephyr native API drivers for the host and DAI audio
	  components. The host-zephyr and dai-zephyr implementations will be
	  used instead of the legacy xtos versions.
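
# Typically selected by a platform defconfig; for a local build it can be
# set directly in the application configuration, e.g. in prj.conf:
#
#   CONFIG_ZEPHYR_NATIVE_DRIVERS=y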

config SOF_ZEPHYR_STRICT_HEADERS
	bool "Experimental: Force build with Zephyr RTOS headers only"
	default n
	help
	  This is a transitional option that allows developers to test builds
	  using only the Zephyr RTOS headers. This will eventually become the
	  default header configuration once native Zephyr support is ready,
	  at which point this menu choice will be removed.

	  If unsure, say n.

config DMA_DOMAIN
	bool "Enable the usage of the DMA domain"
	default y if IMX
	help
	  This enables the usage of the DMA domain in scheduling: the
	  low-latency scheduler is driven by DMA interrupts rather than by a
	  timer.

# Design note (from the change that introduced DMA_DOMAIN_SEM_LIMIT):
#
# domain_task_cancel() no longer receives the number of tasks but the task
# being cancelled, and zephyr_dma_domain_task_cancel() calls k_sem_give()
# whenever the scheduling component associated with that task is not in
# COMP_STATE_ACTIVE. A task count is not a reliable way to tell whether the
# DMA IRQs have been cut off. Consider a mixer with one non-registrable and
# one registrable pipeline task. Upon TRIGGER_STOP the flow on i.MX boards
# is:
#
#   a) SAI_STOP => the DMA IRQs get cut off.
#   b) Cancel the non-registrable pipeline task.
#   c) Cancel the registrable pipeline task.
#
# During b) and c), domain_task_cancel() would be called with a task count
# of 1 both times, because the non-registrable task is not dequeued before
# c). So even though the DMA IRQs were already cut off in a), no semaphore
# resource would be given, zephyr_ll_run() would no longer execute, and the
# pipeline tasks would stay queued.
#
# Since the semaphore can accumulate more than one resource at a time, the
# fixed SEM_LIMIT became the configurable CONFIG_DMA_DOMAIN_SEM_LIMIT,
# sized after the system load: with two non-registrable pipeline tasks and
# one registrable pipeline task (same scheduling component), an appropriate
# value is 3, the maximum number of resources the semaphore can be given
# during task cancellation. Sizing for this worst case ensures cancelled
# tasks are always dequeued properly.
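
# A minimal sketch of the give/take pattern described above, assuming
# simplified stand-ins for the SOF task and component-state types:
#
#   #include <zephyr/kernel.h>
#
#   enum comp_state { COMP_STATE_ACTIVE, COMP_STATE_PREPARE };  /* stand-in */
#   struct task { enum comp_state comp_state; };                /* stand-in */
#
#   static struct k_sem dma_sem;
#
#   void dma_domain_init(void)
#   {
#           /* start empty; allow worst-case accumulation during cancel */
#           k_sem_init(&dma_sem, 0, CONFIG_DMA_DOMAIN_SEM_LIMIT);
#   }
#
#   void dma_domain_task_cancel(struct task *task)
#   {
#           /* the DMA IRQs may already be cut off: give a resource so the
#            * scheduler loop still runs and can dequeue the cancelled task
#            */
#           if (task->comp_state != COMP_STATE_ACTIVE)
#                   k_sem_give(&dma_sem);
#   }
#
#   void ll_scheduler_loop(void)
#   {
#           for (;;) {
#                   /* resources come from the DMA ISR or from cancellation */
#                   k_sem_take(&dma_sem, K_FOREVER);
#                   /* run and dequeue queued tasks here */
#           }
#   }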

config DMA_DOMAIN_SEM_LIMIT
	int "Number of resources Zephyr's DMA domain can accumulate"
	depends on DMA_DOMAIN
	default 10
	help
	  Set this value according to the load of the system. Please make sure
	  that SEM_LIMIT covers the maximum number of tasks your system will be
	  executing at some point (worst case).
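
# Worked example from the design note above: with two non-registrable
# pipeline tasks and one registrable pipeline task on the same scheduling
# component, at most three resources are given during cancellation, so:
#
#   CONFIG_DMA_DOMAIN=y
#   CONFIG_DMA_DOMAIN_SEM_LIMIT=3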

config ZEPHYR_DP_SCHEDULER
	bool "Use Zephyr thread-based DP scheduler"
	default y if ACE
	default n
	depends on IPC_MAJOR_4
	depends on ZEPHYR_SOF_MODULE
	depends on ACE
	help
	  Enable the Data Processing preemptive scheduler based on Zephyr
	  preemptive threads. DP modules can be located on different cores
	  than the LL pipeline modules and may have a different tick (e.g.
	  300 ms for speech recognition).
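
# A minimal sketch of the mechanism, assuming a hypothetical dp_process()
# work function: a preemptible Zephyr thread (positive priority) running
# on its own tick, independent of the LL scheduler:
#
#   #include <zephyr/kernel.h>
#
#   #define DP_STACK_SIZE 4096
#   #define DP_PRIORITY   5           /* positive priority = preemptible */
#
#   void dp_process(void);            /* hypothetical module entry point */
#
#   static void dp_thread_fn(void *p1, void *p2, void *p3)
#   {
#           while (1) {
#                   dp_process();             /* do the module's work */
#                   k_sleep(K_MSEC(300));     /* module-specific tick */
#           }
#   }
#
#   K_THREAD_DEFINE(dp_thread, DP_STACK_SIZE, dp_thread_fn,
#                   NULL, NULL, NULL, DP_PRIORITY, 0, 0);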

config CROSS_CORE_STREAM
	bool "Enable cross-core connected pipelines"
	default y if IPC_MAJOR_4
	help
	  Enables support for connecting pipelines from different cores, so
	  that a stream can travel from one core to another. Note that this is
	  different from "multicore" support: in SOF, "multicore" means that
	  different streams can be processed on different cores, while each
	  stream is still processed entirely on a single core.

config SOF_BOOT_TEST
	bool "Enable SOF run-time testing"
	depends on ZTEST
	help
	  Run tests during boot. This enables an SOF boot-time self-test. When
	  enabled, the resulting image will run a number of self-tests when
	  the first global IPC command is received, i.e. when SOF is
	  completely initialized. After that, SOF will continue running and be
	  usable as usual.
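
# The self-tests build on Zephyr's ztest framework (hence the ZTEST
# dependency). A minimal sketch of such a suite, with illustrative names
# rather than the ones SOF actually registers:
#
#   #include <zephyr/kernel.h>
#   #include <zephyr/ztest.h>
#
#   ZTEST_SUITE(sof_boot, NULL, NULL, NULL, NULL, NULL);
#
#   ZTEST(sof_boot, test_heap_alive)
#   {
#           void *p = k_malloc(64);
#
#           zassert_not_null(p, "allocation failed after init");
#           k_free(p);
#   }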

endif