This places all SectionWidget.TONE* topology widgets, created by
the W_TONE() macro, on the same core on which the respective pipeline
is scheduled.
Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
This places all SectionWidget.MUXDEMUX* topology widgets, created by
the W_MUXDEMUX() macro, on the same core on which the respective
pipeline is scheduled.
Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
This places all SectionWidget.MIXER* topology widgets, created by
the W_MIXER() macro, on the same core on which the respective
pipeline is scheduled.
Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
This places all SectionWidget.KPBM* topology widgets, created by
the W_KPBM() macro, on the same core on which the respective pipeline
is scheduled.
Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
This places all SectionWidget.DETECT* topology widgets, created by
the W_DETECT() macro, on the same core on which the respective
pipeline is scheduled.
Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
This places all SectionWidget.DCBLOCK* topology widgets, created by
the W_DCBLOCK() macro, on the same core on which the respective
pipeline is scheduled.
Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
This places all SectionWidget.SELECTOR* topology widgets, created by
the W_SELECTOR() macro, on the same core on which the respective
pipeline is scheduled.
Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
This places all SectionWidget.EQFIR* topology widgets, created by
the W_EQ_FIR() macro, on the same core on which the respective pipeline
is scheduled.
Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
This places all SectionWidget.<DAI type><DAI index>.IN topology
widgets, created by the W_DAI_IN() macro, on the same core on which
the respective pipeline is scheduled.
Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
This places all SectionWidget.EQIIR* topology widgets, created by
the W_EQ_IIR() macro, on the same core on which the respective pipeline
is scheduled.
Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
This places all SectionWidget.PCM*C topology widgets, created by
the W_PCM_CAPTURE macro, on the same core on which the respective
pipeline is scheduled.
Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
This places all SectionWidget.<DAI type><DAI index>.OUT topology
widgets, created by the W_DAI_OUT() macro, on the same core on which
the respective pipeline is scheduled.
Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
This adds an optional "core" parameter to the W_BUFFER() macro, which
then allows buffers to be placed on the same core as the rest of the
pipeline. So far we only modify pipe-volume-playback.m4 to do that;
other pipelines can be modified later as needed.
Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
This places all SectionWidget.PGA* topology widgets, created by
the W_PGA() macro, on the same core on which the respective pipeline
is scheduled.
Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
This places all SectionWidget.PCM*P topology widgets, created by
the W_PCM_PLAYBACK macro, on the same core on which the respective
pipeline is scheduled.
Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
We need a way to explicitly place components on specific DSP cores.
Add a SOF_TKN_COMP_CORE_ID word token for that.
Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
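A minimal sketch of how such a word token could be consumed when building a component: the token value is copied into a per-component core field. The token ID value, the config struct, and the parser function here are illustrative assumptions, not the actual SOF API; the real token ID lives in the SOF UAPI token header.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical token ID; the real value is defined in the SOF UAPI
 * token header. */
#define SOF_TKN_COMP_CORE_ID 404

/* Illustrative per-component configuration. */
struct comp_config {
	uint32_t core;	/* DSP core the component is pinned to */
};

/* Sketch of word-token handling: copy the token value into the
 * component config when the core-ID token is seen. */
static int comp_parse_word_token(struct comp_config *cfg,
				 uint32_t token, uint32_t value)
{
	switch (token) {
	case SOF_TKN_COMP_CORE_ID:
		cfg->core = value;
		return 0;
	default:
		return -1;	/* unknown token */
	}
}
```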
Typical use of the existing comp_get_copy_limits() requires locking
the buffers for the operation to ensure consistent readings.
Grouping the operations in a function simplifies the component
code. It also guarantees a correct double lock with dedicated flags
and helps avoid a common bug in client code where a single flags
variable is used for both locks. When both buffers are shared
between cores and a single variable is used, the rsil in the second
lock returns interrupt level 5 set by the first lock. Later, during
unlocking, the original level returned by the first lock is not
restored and the DSP stays at level 5.
Signed-off-by: Marcin Maka <marcin.maka@linux.intel.com>
Original description rephrased to provide better information
about a function commonly used by component developers.
Signed-off-by: Marcin Maka <marcin.maka@linux.intel.com>
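The single-flags double-lock bug described above can be sketched with stand-in lock primitives. fake_lock()/fake_unlock() and the level-5 value merely model irq-save lock behaviour (the lock returns the previous interrupt level, unlock restores it); none of these names are the actual SOF API.

```c
#include <assert.h>
#include <stdint.h>

/* Models the current interrupt level of the DSP. */
static uint32_t current_level;

/* Stand-in for an irq-save lock: returns the level that was active
 * before interrupts were masked at level 5. */
static uint32_t fake_lock(void)
{
	uint32_t prev = current_level;

	current_level = 5;
	return prev;
}

/* Stand-in for unlock: restores the saved interrupt level. */
static void fake_unlock(uint32_t restore)
{
	current_level = restore;
}

/* Buggy pattern: one flags variable reused for both buffer locks.
 * The second lock overwrites the level saved by the first, so after
 * unlocking the DSP stays at level 5 instead of the original level. */
static uint32_t double_lock_single_flags(void)
{
	uint32_t flags;

	flags = fake_lock();	/* saves the original level */
	flags = fake_lock();	/* overwrites it with 5 */
	fake_unlock(flags);
	fake_unlock(flags);	/* restores 5, not the original level */
	return current_level;
}

/* Correct pattern: dedicated flags per lock, unlocked in reverse order. */
static uint32_t double_lock_dedicated_flags(void)
{
	uint32_t flags1, flags2;

	flags1 = fake_lock();
	flags2 = fake_lock();
	fake_unlock(flags2);
	fake_unlock(flags1);
	return current_level;	/* original level is restored */
}
```

This is why grouping both locks inside one helper with its own dedicated flags removes the opportunity for the bug in client code.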
This reverts and replaces commit 2765b22049 ("probe-app: assume probe
packet aligned"). That commit applied a __builtin_assume_aligned(4) to
every pointer assignment of the following type:
uint32_t *ptr = (uint32_t *)packet; /* packet is a struct probe_data_packet * */
Instead of using __builtin_assume_aligned(4) every time a
probe_data_packet is used like this, the probe_data_packet type
definition is now 32-bit aligned in only one place thanks to (packed,
aligned(4)).
struct probe_data_packet is made entirely of uint32_t members. It
is (packed) to avoid a 64-bit compiler hypothetically padding and
aligning these 32-bit members to 64 bits. However, (packed) alone is
roughly equivalent to aligned(1) (neither is part of the C standard),
which causes some gcc versions to warn about this:
sof/tools/probes/probes_main.c:228:5: error: converting a packed
‘struct probe_data_packet’ pointer (alignment 1) to a ‘uint32_t’
{aka ‘unsigned int’} pointer (alignment 4) may result in an
unaligned pointer value [-Werror=address-of-packed-member]
228 | w_ptr = (uint32_t *)packet;
| ^~~~~
Unlike "assuming" with __builtin_assume_aligned, (packed, aligned(4))
makes sure probe_data_packet is _actually_ 4-byte aligned. As a
demonstration, it stops the following _Static_assert() from failing:
modified src/probe/probe.c
@@ -64,9 +64,12 @@ struct probe_pdata {
struct probe_dma_ext ext_dma;
struct probe_dma_ext inject_dma[CONFIG_PROBE_DMA_MAX];
struct probe_point probe_points[CONFIG_PROBE_POINTS_MAX];
+ uint8_t dummy;
struct probe_data_packet header; // aligned(4)?
struct task dmap_work;
};
+_Static_assert(offsetof(struct probe_pdata, header) % 4 == 0,
+ "probe_data_packet is not 4 aligned");
In addition to gcc versions 8 and 9, I tested and confirmed with
_Static_assert and pahole that the same attributes behave the same with
clang 9.0.1-2.fc31.
Signed-off-by: Marc Herbert <marc.herbert@intel.com>
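The attribute combination described above can be sketched as follows. The field names are illustrative, not the real struct probe_data_packet layout; the point is only that (packed, aligned(4)) on a struct of uint32_t members yields a type that is guaranteed 4-byte aligned, so a uint32_t view of it is safe.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-in for struct probe_data_packet: all-uint32_t
 * members, packed to forbid padding, aligned(4) to guarantee that a
 * (uint32_t *) cast is valid and gcc's -Waddress-of-packed-member
 * warning no longer applies. */
struct probe_data_packet {
	uint32_t start_word;
	uint32_t buffer_id;
	uint32_t format;
	uint32_t timestamp_low;
	uint32_t timestamp_high;
	uint32_t checksum;
} __attribute__((packed, aligned(4)));

/* Compile-time proof: the type is actually 4-byte aligned. */
_Static_assert(_Alignof(struct probe_data_packet) == 4,
	       "probe_data_packet is not 4-byte aligned");
```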
For headset playback or capture, the default pipe is volume. If a
high-pass filter is required to prevent HDA codec headset glitches, it
can be selected with HSMICPROC=eq-iir-volume. It uses a 40 Hz high-pass
filter with +0 dB boost. If there is a DMIC, this sets the default pipe
to eq-iir, which uses the same 40 Hz high-pass filter. For DMIC16K, the
16 kHz version of the 40 Hz high-pass filter is used.
The default coefficients for FIR/IIR are changed from coef flat to coef
pass. The pass configuration has the advantage of lower system load.
eq_fir_coef_flat.m4 to eq_fir_coef_pass.m4
eq_iir_coef_flat.m4 to eq_iir_coef_pass.m4
The pipe files below are renamed to follow the naming convention:
pipe-eq-volume-playback.m4 -> pipe-eq-iir-eq-fir-volume-playback.m4
pipe-eq-capture-16khz.m4 -> pipe-eq-iir-volume-capture-16khz.m4
pipe-eq-capture.m4 -> pipe-eq-iir-volume-capture.m4
Signed-off-by: Fred Oh <fred.oh@linux.intel.com>
This patch proposes how to propagate pipeline filter coefficient
data definitions at the topology top level. It is a useful way to
avoid creating different new pipeline macros for different filters.
Filter coefficient data are defined with the macros DEF_EQFIR_PRIV
and DEF_EQFIR_COEF, which are given a unique name by appending the
pipeline number. For the IIR filter, the equivalent macros are
DEF_EQIIR_PRIV and DEF_EQIIR_COEF.
The Makefile defines the per-pipeline processing variables HSEARPROC,
HSMICPROC, SPKPROC, DMICPROC, etc. Each variable should support a
default value to minimize changes to current topologies. When a new
filter blob is required, define PIPELINE_FILTERx in the Makefile. This
method provides a more flexible way to support a variable number of
filters in the pipeline.
Other cleanups of the topology macros include definitions for unique
volume tokens and unique filter coefficient definitions.
Signed-off-by: Fred Oh <fred.oh@linux.intel.com>
Add a macro for binary blob generation for muxdemux. With this macro the
stream routing matrix will be easier to visualize and manipulate.
Signed-off-by: Jaska Uimonen <jaska.uimonen@intel.com>
Bit and byte manipulation macros are needed for building binary blobs,
so add them. Also add a macro to generate sof_abi_version.
Signed-off-by: Jaska Uimonen <jaska.uimonen@intel.com>
This updates the minimal host buffer size to the
minimum value which enables safe draining for all
PCM parameters.
Signed-off-by: Marcin Rajwa <marcin.rajwa@linux.intel.com>
This patch changes the order of members in the comp_data
struct to improve memory access for related data.
Data that are commonly used together, like *state*
and *state_log*, are now placed next to each other so
fewer cache reads are needed.
Signed-off-by: Marcin Rajwa <marcin.rajwa@linux.intel.com>
This patch moves the update of buffered data during
draining from the buffering function to its caller.
Signed-off-by: Marcin Rajwa <marcin.rajwa@linux.intel.com>
This patch changes the message logged when there are
no bytes to be buffered from an error to a warning,
as this is not a critical error.
Signed-off-by: Marcin Rajwa <marcin.rajwa@linux.intel.com>
This patch reworks the calculation of buffered data.
NOTE! We only keep a record of buffered data up to
the size of the history buffer, as there is no use case
beyond that.
Signed-off-by: Marcin Rajwa <marcin.rajwa@linux.intel.com>
This patch adds a condition: if we are in the draining
or init-draining state, pipeline_copy() occurred before
we actually started the draining, and the amount of new
data could overwrite the data staged for draining, then
we postpone the buffering procedure to the next period.
By then we assume the draining task will have already
drained some samples, making space for the new ones.
Signed-off-by: Marcin Rajwa <marcin.rajwa@linux.intel.com>
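The postpone decision described above can be sketched as a small predicate. The state names, the drain_started flag, and the byte counts are illustrative assumptions, not the actual firmware identifiers:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative draining states. */
enum drain_state {
	STATE_IDLE,
	STATE_INIT_DRAINING,
	STATE_DRAINING,
};

/* Sketch: defer buffering to the next period when draining has been
 * requested but not yet started and the incoming data would overwrite
 * the bytes staged for draining. */
static bool should_postpone_buffering(enum drain_state state,
				      bool drain_started,
				      uint32_t incoming_bytes,
				      uint32_t free_bytes)
{
	bool draining = state == STATE_DRAINING ||
			state == STATE_INIT_DRAINING;

	return draining && !drain_started && incoming_bytes > free_bytes;
}
```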
This additional variable says how much data can be
written to the history buffer without overwriting the
data staged for draining. It is important to realize
that the history buffer is circular and its data gets
overwritten all the time, so *free* in this case does
not mean "the number of bytes before the buffer is
full"; the history buffer is *full* 99% of the time.
However, before the draining starts we "freeze" the
history buffer data, so new samples can be written only if:
HISTORY_BUFFER_SIZE - HISTORY_DEPTH > 0
The result of the above subtraction is what we call "free".
Signed-off-by: Marcin Rajwa <marcin.rajwa@linux.intel.com>
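The "free" computation above can be sketched as follows; the function name and parameters are illustrative, not the firmware's actual helper:

```c
#include <assert.h>
#include <stdint.h>

/* "Free" here means bytes that can be written into the circular
 * history buffer without overwriting the frozen region staged for
 * draining (the history depth), i.e. HISTORY_BUFFER_SIZE -
 * HISTORY_DEPTH, clamped at zero. */
static uint32_t hb_free_bytes(uint32_t buffer_size, uint32_t history_depth)
{
	return buffer_size > history_depth ? buffer_size - history_depth : 0;
}
```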
This patch adds a new struct called "history_data",
which contains all variables related to the history
buffer, namely its size, the amount of free and
available data, and the address of the current write
buffer (the history buffer is a collection of smaller
buffers).
Signed-off-by: Marcin Rajwa <marcin.rajwa@linux.intel.com>
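The grouping described above might look roughly like this; the member names and types are illustrative assumptions, not the exact firmware definition:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative sketch of a struct grouping all history-buffer state. */
struct history_data {
	uint32_t size;		/* total size of the history buffer */
	uint32_t free;		/* bytes writable without overwriting
				 * data staged for draining */
	uint32_t avail;		/* bytes of buffered data available */
	uint8_t *write_addr;	/* current write position (the history
				 * buffer is a chain of smaller buffers) */
};
```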
This patch renames the "dd" structure to the more
meaningful "draining_data", making the code more
readable.
Signed-off-by: Marcin Rajwa <marcin.rajwa@linux.intel.com>
This patch renames the "hb" struct to the more
meaningful "history_buffer", making the code more
readable.
Signed-off-by: Marcin Rajwa <marcin.rajwa@linux.intel.com>
This short helper not only groups and hides some infrastructure-level
initialization steps but also guarantees that none of them is
missed in the component implementation.
It is easier to group the steps in an internal function rather
than explain them and remember to add them to every new component
(especially since the initialized comp_dev members are used internally).
Signed-off-by: Marcin Maka <marcin.maka@linux.intel.com>
MCU2SHP means processor to peripheral, that is, memory to device.
SHP2MCU means peripheral to processor, that is, device to memory.
The initial patch confused the two; fix this now to
use the correct load addresses.
Signed-off-by: Daniel Baluta <daniel.baluta@nxp.com>