clear-pkgs-linux-iot-lts2018/0243-Soundwire-squashed-com...

From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
From: Sanyog Kale <sanyog.r.kale@intel.com>
Date: Sun, 20 Nov 2016 20:25:37 +0530
Subject: [PATCH] Soundwire squashed commits [2]
SoundWire: Optimizations in BW calculation and runtime ops
Includes:
- Removed multiple ifdefs from code.
- Clock frequency divider changes.
- Clock divider added for clock scaling.
- Cleanup in row/column definition.
- Block offset adjusted for loopback test.
Change-Id: I6e9344d7c44681d696ffc5baa61f33ae5f3ac436
Signed-off-by: Sanyog Kale <sanyog.r.kale@intel.com>
SoundWire: Underrun/Overrun fix for SDW playback and capture
Includes:
- Fix for playback & capture not working after
underrun/overrun and pause/resume scenario.
- Optimization in Master/Slave configuration.
Change-Id: Id8e6ebe0083e0b5d8bf255128b43d245bb177bc9
Signed-off-by: Sanyog Kale <sanyog.r.kale@intel.com>
SoundWire: Optimization in BW calculation and runtime operations.
Includes:
- Optimizations in APIs
- Renaming of APIs
- Split of bankswitch function.
- Split of calc_bw & calc_bw_dis functions.
- Individual APIs for prepare, enable, disable
and unprepare operations.
Change-Id: I5c72bc451d943ced60d1f40b15ae816a048796a6
Signed-off-by: Sanyog Kale <sanyog.r.kale@intel.com>
SoundWire: Fix for assigning port capabilities for all the
master ports while registering
Also adds checks for Master and Slave capabilities
SoundWire: Fix for assigning port capabilities for all the
master ports while registering (2)
Also adds checks for Master and Slave capabilities
SoundWire: Port configuration changes for Multiple port support
This patch adds support for multiple port configurations for a given
stream.
Signed-off-by: Ashish Panwar <ashish.panwar@intel.com>
Signed-off-by: Sanyog Kale <sanyog.r.kale@intel.com>
SoundWire: Multiple port support for Master and Slave ports
Includes:
- Computes transport parameters for all Master and
Slave Ports.
- SV codec driver changes to support multi port PCM capture.
- Machine driver changes to support multi port PCM capture.
- Free up resources for port runtime handle.
Change-Id: I18d7247f44a9aff400bc709bd35f968ecfc66eea
Signed-off-by: Sanyog Kale <sanyog.r.kale@intel.com>
Soundwire: Add Interrupt Status SCP Registers
Change-Id: I7a037c74861bfcce5b263fac54a07f58cac078e0
Signed-off-by: Guneshwor Singh <guneshwor.o.singh@intel.com>
SoundWire: Add support for getting bus params.
Some Slaves may need to know the bus params at probe time to program
their registers. Provide an API to get the current bus params.
Signed-off-by: Sanyog Kale <sanyog.r.kale@intel.com>
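A rough usage sketch of the sdw_slave_get_bus_params() API added here (the
probe wrapper and the debug print are illustrative only; the API signature
and the sdw_bus_params fields match the bus implementation further below):

/* Illustrative Slave-side use of the new API */
static int my_codec_sdw_probe(struct sdw_slv *slave)
{
	struct sdw_bus_params params;
	int ret;

	ret = sdw_slave_get_bus_params(slave, &params);
	if (ret)
		return ret;

	/* Program codec registers that depend on frame shape and clock */
	dev_dbg(&slave->mstr->dev, "rows=%d cols=%d clk=%d bank=%d\n",
		params.num_rows, params.num_cols,
		params.bus_clk_freq, params.bank);

	return 0;
}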
SoundWire: Fix the Slave alert handling.
1. Return the proper interrupt status to the slave.
2. Allow specifying the scp_interrupt mask register.
3. Re-enable interrupts when slave changes from unattached to attached.
4. Ack only the handled interrupts.
Change-Id: If1732460e0c4ca286b8d09f5e212b4834e53b533
Signed-off-by: Sanyog Kale <sanyog.r.kale@intel.com>
SoundWire: Add deprepare after clock resume.
1. According to the SoundWire spec, deprepare is required after resuming
from clock stop mode 0. Add this functionality.
2. According to the SoundWire spec, deprepare is optionally required after
resuming from clock stop mode 1. Add this functionality.
3. Add Slave callbacks to call the pre and post clock stop prepare before
doing the actual clock stop.
Signed-off-by: Sanyog Kale <sanyog.r.kale@intel.com>
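A minimal Slave-driver sketch of the new callbacks (the callback signatures
are inferred from the bus-side calls in this patch; the driver name, the
designated-initializer layout and the callback bodies are placeholders):

static int my_codec_pre_clk_stop_prep(struct sdw_slv *slave,
			enum sdw_clk_stop_mode mode, bool prep)
{
	/* Quiesce codec activity before clock stop (de)prepare */
	return 0;
}

static int my_codec_post_clk_stop_prep(struct sdw_slv *slave,
			enum sdw_clk_stop_mode mode, bool prep)
{
	/* Restore codec state after clock stop (de)prepare */
	return 0;
}

static struct sdw_slave_driver my_codec_sdw_driver = {
	/* probe, remove, id table etc. omitted */
	.pre_clk_stop_prep	= my_codec_pre_clk_stop_prep,
	.post_clk_stop_prep	= my_codec_post_clk_stop_prep,
};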
SDW: Remove hardcoding to enable normal capture
Remove hardcoding for loopback and enable normal capture/playback.
Change-Id: If0d16c8d0d0e6409ffe5372002e2bc18b8ba0588
Signed-off-by: Shreyas NC <shreyas.nc@intel.com>
Soundwire: Change clockstop exit sequence for losing ctx
When CAVS runtime PM is enabled, the clock stop exit sequence is
also changed.
Change-Id: I834aa87c65aa97172f477dced11c3610e412edc2
Signed-off-by: Guneshwor Singh <guneshwor.o.singh@intel.com>
Soundwire: Hard bus reset is not required in resume
According to the MIPI spec, a bus reset is not required during
clock stop exit, so remove the bus reset from the clock stop
exit sequence.
Change-Id: Iea7b3a8030cb683caa97d9648ac873f8000ca072
Signed-off-by: Guneshwor Singh <guneshwor.o.singh@intel.com>
[CNL FPGA] Soundwire: Add #if for frameshape change in CNL RVP
This is added to support both RVP and FPGA setups.
The frame shape is different in the two cases, so add an #if to
distinguish them.
Change-Id: Ib8ca64c4b0e138e8260392adeba6e27524c438aa
Signed-off-by: Guneshwor Singh <guneshwor.o.singh@intel.com>
SoundWire: Bus header file changes for BRA feature
This patch includes:
- Bus API for supporting BRA feature.
- BRA defines as per MIPI 1.1 spec.
Signed-off-by: Sanyog Kale <sanyog.r.kale@intel.com>
SoundWire: Bus implementation for BRA feature
This patch includes:
- Implementation of bus API
sdw_slave_xfer_bra_block used for BRA transfers
by SoundWire Slave(s).
- Bandwidth allocation for BRA.
- Data port 0 prepare/enable/de-prepare/disable ops.
Signed-off-by: Sanyog Kale <sanyog.r.kale@intel.com>
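A minimal codec-side call sketch for the new bus API (the caller is assumed
to have already filled a struct sdw_bra_block, whose fields are defined in
the bus header added by this series and are not shown here):

/* Illustrative only: push one pre-filled BRA block to the Slave */
static int my_codec_xfer_fw_block(struct sdw_slv *slave,
			struct sdw_bra_block *block)
{
	int ret;

	/* The bus returns -EBUSY while any active stream runs on this Master */
	ret = sdw_slave_xfer_bra_block(slave->mstr, block);
	if (ret)
		dev_err(&slave->mstr->dev,
			"BRA block transfer failed: %d\n", ret);

	return ret;
}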
SoundWire: Bus header file changes for CRC8 helper function
This patch adds bus CRC8 helper function used in BRA feature to
compute CRC8.
Signed-off-by: Sanyog Kale <sanyog.r.kale@intel.com>
SoundWire: Bus CRC8 helper function implementation
This patch implements helper function for calculating
CRC8 values.
Signed-off-by: Sanyog Kale <sanyog.r.kale@intel.com>
Signed-off-by: Guneshwor Singh <guneshwor.o.singh@intel.com>
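The helper builds on the kernel CRC8 library, hence the new "depends on
CRC8" in Kconfig. A minimal sketch of what such a helper can look like; the
polynomial and seed values below are placeholders, the real ones come from
the MIPI SoundWire 1.1 BRA definition used in sdw_utils.c:

#include <linux/crc8.h>

#define SDW_CRC8_POLY	0x4D	/* placeholder, see MIPI SoundWire 1.1 */
#define SDW_CRC8_SEED	0xFF	/* placeholder seed value */

static u8 sdw_crc8_table[CRC8_TABLE_SIZE];

/* Build the CRC8 lookup table once, e.g. at module init */
static void sdw_crc8_init(void)
{
	crc8_populate_msb(sdw_crc8_table, SDW_CRC8_POLY);
}

/* Compute the CRC8 over a BRA payload buffer */
static u8 sdw_compute_crc8(u8 *buf, size_t len)
{
	return crc8(sdw_crc8_table, buf, len, SDW_CRC8_SEED);
}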
SoundWire: Master driver header file changes for BRA feature
This patch includes:
- Data structure required for BRA operations.
- BRA ops definition.
- Defines used by Master driver for BRA operations.
Signed-off-by: Sanyog Kale <sanyog.r.kale@intel.com>
SoundWire: Master driver implementation for BRA feature
This patch includes:
- Implementation for Master API for BRA.
- Preparation of TX and RX PDI buffer.
- Preparation of BRA packets.
- Verification of RX packets.
- PDI configuration for BRA.
Signed-off-by: Sanyog Kale <sanyog.r.kale@intel.com>
Soundwire: Fix build regression when PM is disabled
A build regression is observed when CONFIG_PM and CONFIG_PM_SLEEP are disabled.
To fix this, #ifdefs are added around the PM-specific functions.
Change-Id: Ia975415cafad536832d3383ed3e8c4314bf0d305
Signed-off-by: Anamika Lal <anamikax.lal@intel.com>
Reviewed-on:
Reviewed-by: Diwakar, Praveen <praveen.diwakar@intel.com>
Reviewed-by: Singh, Guneshwor O <guneshwor.o.singh@intel.com>
Reviewed-by: Koul, Vinod <vinod.koul@intel.com>
Tested-by: Sm, Bhadur A <bhadur.a.sm@intel.com>
Soundwire: Add return value to avoid warning
Since sdw_transfer_trace_reg() has a non-void return type, return zero
to avoid a compiler warning.
Signed-off-by: Guneshwor Singh <guneshwor.o.singh@intel.com>
SoundWire: TX and RX Host DMA & Pipeline creation support for BRA
Signed-off-by: Sanyog Kale <sanyog.r.kale@intel.com>
SoundWire: Creates single module for SoundWire bus framework
This patch makes the SoundWire bus framework a single module.
Change-Id: I966d42e57a9899d82ad99ec75f879a0b627afa7f
Signed-off-by: Sanyog Kale <sanyog.r.kale@intel.com>
Reviewed-on:
Reviewed-by: Diwakar, Praveen <praveen.diwakar@intel.com>
Reviewed-by: Koul, Vinod <vinod.koul@intel.com>
Reviewed-by: Singh, Guneshwor O <guneshwor.o.singh@intel.com>
Reviewed-by: Nemallapudi, JaikrishnaX <jaikrishnax.nemallapudi@intel.com>
Reviewed-by: Kp, Jeeja <jeeja.kp@intel.com>
Tested-by: Avati, Santosh Kumar <santosh.kumar.avati@intel.com>
SoundWire: Add export symbol for SoundWire Bus BRA API
This patch adds an export symbol for the sdw_slave_xfer_bra_block
SoundWire bus BRA API.
Change-Id: I8bb8d6b1595c46077bc0914b9e8f3b9d89bcd686
Signed-off-by: Sanyog Kale <sanyog.r.kale@intel.com>
Reviewed-on:
Reviewed-by: Diwakar, Praveen <praveen.diwakar@intel.com>
Reviewed-by: Koul, Vinod <vinod.koul@intel.com>
Reviewed-by: Singh, Guneshwor O <guneshwor.o.singh@intel.com>
Reviewed-by: Kp, Jeeja <jeeja.kp@intel.com>
Tested-by: Avati, Santosh Kumar <santosh.kumar.avati@intel.com>
Reviewed-by: Nemallapudi, JaikrishnaX <jaikrishnax.nemallapudi@intel.com>
SoundWire: Mask gSync pulse to avoid gSync and frame misalignment
Cadence Master IP supports Multi-Master Mode where the IP can be configured
such that its generated Frame boundary is synchronized to the periodically
occurring gSync pulses.
Certain versions of the IP implementation have a bug whereby if a gSync
pulse collides with the register configuration update that brings up the IP
into Normal operation (where the IP begins Frame tracking), then the
resulting Frame boundary will misalign with the periodic gSync pulses.
This patch adds gSync masking logic: the gSync pulse is masked before
the register configuration is performed and un-masked after the
Mcp_ConfigUpdate bit is set. As a result, the initialization-pending
Master IP's SoundWire bus clock starts up synchronized to gSync, leading
to bus reset entry, subsequent exit, and first Frame generation aligned
to gSync.
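The resulting bring-up sequence, sketched below with hypothetical helper
names (the real register writes live in sdw_cnl.c and use the Cadence/Intel
shim registers):

static void cnl_sdw_bring_up_ip(struct cnl_sdw *sdw)
{
	/* 1. Mask gSync so it cannot collide with the config update */
	cnl_sdw_mask_gsync(sdw, true);

	/* 2. Program frame shape, dividers, SSP interval, ... */
	cnl_sdw_write_config(sdw);

	/* 3. Set Mcp_ConfigUpdate to move the IP into normal operation */
	cnl_sdw_set_mcp_config_update(sdw);

	/* 4. Un-mask gSync; the bus clock starts up synchronized to it */
	cnl_sdw_mask_gsync(sdw, false);
}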
Change-Id: I8e3620244de3f0c0636520db017df4296c7ae5e5
Signed-off-by: Sanyog Kale <sanyog.r.kale@intel.com>
Reviewed-on:
Reviewed-by: Singh, Guneshwor O <guneshwor.o.singh@intel.com>
Reviewed-by: Avati, Santosh Kumar <santosh.kumar.avati@intel.com>
Tested-by: Avati, Santosh Kumar <santosh.kumar.avati@intel.com>
Reviewed-by: Koul, Vinod <vinod.koul@intel.com>
SoundWire: Remove Maxim FPGA support from SoundWire bus
The Maxim codec FPGA is no longer used for SoundWire use-case
verification, so remove the related code.
Change-Id: I7584e7f81922df3f3d168d41ef7192a6449ff044
Signed-off-by: Sanyog Kale <sanyog.r.kale@intel.com>
Reviewed-on:
Reviewed-by: Koul, Vinod <vinod.koul@intel.com>
Reviewed-by: Singh, Guneshwor O <guneshwor.o.singh@intel.com>
Reviewed-by: Nc, Shreyas <shreyas.nc@intel.com>
Reviewed-by: Diwakar, Praveen <praveen.diwakar@intel.com>
Tested-by: Avati, Santosh Kumar <santosh.kumar.avati@intel.com>
SoundWire: Remove hardcoding for SSP Interval
This patch removes the hardcoded SSP Interval setting
and sets the default value based on the platform configuration.
Change-Id: I4cd14a9a2ddda28e4b9d2a8cee931ac5eec88e03
Signed-off-by: Sanyog Kale <sanyog.r.kale@intel.com>
Reviewed-on:
Reviewed-by: Nc, Shreyas <shreyas.nc@intel.com>
Reviewed-by: Singh, Guneshwor O <guneshwor.o.singh@intel.com>
Reviewed-by: Diwakar, Praveen <praveen.diwakar@intel.com>
Tested-by: Avati, Santosh Kumar <santosh.kumar.avati@intel.com>
---
drivers/sdw/Kconfig | 1 +
drivers/sdw/Makefile | 2 +-
drivers/sdw/sdw.c | 1600 +++++++++++---
drivers/sdw/sdw_bwcalc.c | 2858 +++++++++++++------------
drivers/sdw/sdw_cnl.c | 943 +++++++-
drivers/sdw/sdw_cnl_priv.h | 40 +
drivers/sdw/sdw_priv.h | 50 +-
drivers/sdw/sdw_utils.c | 49 +
include/linux/sdw/sdw_cnl.h | 29 +
include/linux/sdw/sdw_registers.h | 7 +-
include/linux/sdw_bus.h | 207 +-
sound/soc/codecs/svfpga-sdw.c | 2 +-
sound/soc/intel/boards/cnl_svfpga.c | 2 +-
sound/soc/intel/skylake/cnl-sst.c | 16 +-
sound/soc/intel/skylake/skl-sdw-pcm.c | 77 +-
15 files changed, 4151 insertions(+), 1732 deletions(-)
create mode 100644 drivers/sdw/sdw_utils.c
diff --git a/drivers/sdw/Kconfig b/drivers/sdw/Kconfig
index 90e954c392e0..1b7e2cc2ebc3 100644
--- a/drivers/sdw/Kconfig
+++ b/drivers/sdw/Kconfig
@@ -1,5 +1,6 @@
menuconfig SDW
tristate "SoundWire bus support"
+ depends on CRC8
help
SoundWire interface is typically used for transporting data
related to audio functions.
diff --git a/drivers/sdw/Makefile b/drivers/sdw/Makefile
index 184682a88a1a..e2ba440f4ef2 100644
--- a/drivers/sdw/Makefile
+++ b/drivers/sdw/Makefile
@@ -1,4 +1,4 @@
-sdw_bus-objs := sdw.o sdw_bwcalc.o
+sdw_bus-objs := sdw.o sdw_bwcalc.o sdw_utils.o
obj-$(CONFIG_SDW) += sdw_bus.o
obj-$(CONFIG_SDW_CNL) += sdw_cnl.o
diff --git a/drivers/sdw/sdw.c b/drivers/sdw/sdw.c
index 78c8cfd32d4c..aefd25d4e393 100644
--- a/drivers/sdw/sdw.c
+++ b/drivers/sdw/sdw.c
@@ -210,6 +210,29 @@ static int sdw_slv_probe(struct device *dev)
return ret;
}
+
+int sdw_slave_get_bus_params(struct sdw_slv *sdw_slv,
+ struct sdw_bus_params *params)
+{
+ struct sdw_bus *bus;
+ struct sdw_master *mstr = sdw_slv->mstr;
+
+ list_for_each_entry(bus, &sdw_core.bus_list, bus_node) {
+ if (bus->mstr == mstr)
+ break;
+ }
+ if (!bus)
+ return -EFAULT;
+
+ params->num_rows = bus->row;
+ params->num_cols = bus->col;
+ params->bus_clk_freq = bus->clk_freq >> 1;
+ params->bank = bus->active_bank;
+
+ return 0;
+}
+EXPORT_SYMBOL(sdw_slave_get_bus_params);
+
static int sdw_mstr_remove(struct device *dev)
{
const struct sdw_mstr_driver *sdrv = to_sdw_mstr_driver(dev->driver);
@@ -373,17 +396,19 @@ static int sdw_pm_resume(struct device *dev)
return sdw_legacy_resume(dev);
}
+#else
+#define sdw_pm_suspend NULL
+#define sdw_pm_resume NULL
+#endif /* CONFIG_PM_SLEEP */
+
static const struct dev_pm_ops soundwire_pm = {
.suspend = sdw_pm_suspend,
.resume = sdw_pm_resume,
+#ifdef CONFIG_PM
.runtime_suspend = pm_generic_runtime_suspend,
.runtime_resume = pm_generic_runtime_resume,
-};
-
-#else
-#define sdw_pm_suspend NULL
-#define sdw_pm_resume NULL
#endif
+};
struct bus_type sdwint_bus_type = {
.name = "soundwire",
@@ -404,6 +429,8 @@ static struct static_key sdw_trace_msg = STATIC_KEY_INIT_FALSE;
int sdw_transfer_trace_reg(void)
{
static_key_slow_inc(&sdw_trace_msg);
+
+ return 0;
}
void sdw_transfer_trace_unreg(void)
@@ -835,7 +862,7 @@ int sdw_slave_transfer(struct sdw_master *mstr, struct sdw_msg *msg, int num)
EXPORT_SYMBOL_GPL(sdw_slave_transfer);
static int sdw_handle_dp0_interrupts(struct sdw_master *mstr,
- struct sdw_slave *sdw_slv)
+ struct sdw_slv *sdw_slv, u8 *status)
{
int ret = 0;
struct sdw_msg rd_msg, wr_msg;
@@ -886,8 +913,8 @@ static int sdw_handle_dp0_interrupts(struct sdw_master *mstr,
SDW_DP0_INTSTAT_IMPDEF2_MASK |
SDW_DP0_INTSTAT_IMPDEF3_MASK;
if (rd_msg.buf[0] & impl_def_mask) {
- /* TODO: Handle implementation defined mask ready */
wr_msg.buf[0] |= impl_def_mask;
+ *status = wr_msg.buf[0];
}
ret = sdw_slave_transfer(mstr, &wr_msg, 1);
if (ret != 1) {
@@ -901,15 +928,20 @@ static int sdw_handle_dp0_interrupts(struct sdw_master *mstr,
}
static int sdw_handle_port_interrupt(struct sdw_master *mstr,
- struct sdw_slave *sdw_slv, int port_num)
+ struct sdw_slv *sdw_slv, int port_num,
+ u8 *status)
{
int ret = 0;
struct sdw_msg rd_msg, wr_msg;
u8 rbuf[1], wbuf[1];
int impl_def_mask = 0;
- if (port_num == 0)
- ret = sdw_handle_dp0_interrupts(mstr, sdw_slv);
+/*
+ * Handle the Data port0 interrupt separately since the interrupt
+ * mask and stat registers are different from the other DPn registers
+ */
+ if (port_num == 0 && sdw_slv->sdw_slv_cap.sdw_dp0_supported)
+ return sdw_handle_dp0_interrupts(mstr, sdw_slv, status);
/* Create message for reading the port interrupts */
wr_msg.ssp_tag = 0;
@@ -953,6 +985,7 @@ static int sdw_handle_port_interrupt(struct sdw_master *mstr,
if (rd_msg.buf[0] & impl_def_mask) {
/* TODO: Handle implementation defined mask ready */
wr_msg.buf[0] |= impl_def_mask;
+ *status = wr_msg.buf[0];
}
/* Clear and Ack the interrupt */
ret = sdw_slave_transfer(mstr, &wr_msg, 1);
@@ -972,6 +1005,10 @@ static int sdw_handle_slave_alerts(struct sdw_master *mstr,
u8 rbuf[3], wbuf[1];
int i, ret = 0;
int cs_port_mask, cs_port_register, cs_port_start, cs_ports;
+ struct sdw_impl_def_intr_stat *intr_status;
+ struct sdw_portn_intr_stat *portn_stat;
+ u8 port_status[15] = {0};
+ u8 control_port_stat = 0;
/* Read Instat 1, Instat 2 and Instat 3 registers */
@@ -1027,214 +1064,711 @@ static int sdw_handle_slave_alerts(struct sdw_master *mstr,
dev_err(&mstr->dev, "Bus clash error detected\n");
wr_msg.buf[0] |= SDW_SCP_INTCLEAR1_BUS_CLASH_MASK;
}
- /* Handle Port interrupts from Instat_1 registers */
+ /* Handle implementation defined mask */
+ if (rd_msg[0].buf[0] & SDW_SCP_INTSTAT1_IMPL_DEF_MASK) {
+ wr_msg.buf[0] |= SDW_SCP_INTCLEAR1_IMPL_DEF_MASK;
+ control_port_stat = (rd_msg[0].buf[0] &
+ SDW_SCP_INTSTAT1_IMPL_DEF_MASK);
+ }
+
+ /* Handle Cascaded Port interrupts from Instat_1 registers */
+
+ /* Number of port status bits in this register */
cs_ports = 4;
+ /* Port number this register starts at */
cs_port_start = 0;
+ /* Bit mask for the starting port intr status */
cs_port_mask = 0x08;
+ /* Index of the Int_stat register holding these status bits */
cs_port_register = 0;
- for (i = cs_port_start; i < cs_port_start + cs_ports; i++) {
+
+ /* Look for cascaded port interrupts, if found handle port
+ * interrupts. Do this for all the Int_stat registers.
+ */
+ for (i = cs_port_start; i < cs_port_start + cs_ports &&
+ i <= sdw_slv->sdw_slv_cap.num_of_sdw_ports; i++) {
if (rd_msg[cs_port_register].buf[0] & cs_port_mask) {
ret += sdw_handle_port_interrupt(mstr,
- sdw_slv, cs_port_start + i);
+ sdw_slv, i, &port_status[i]);
}
cs_port_mask = cs_port_mask << 1;
}
- /* Handle interrupts from instat_2 register */
+
+ /*
+ * Handle cascaded interrupts from instat_2 register,
+ * if no cascaded interrupt from SCP2 cascade move to SCP3
+ */
if (!(rd_msg[0].buf[0] & SDW_SCP_INTSTAT1_SCP2_CASCADE_MASK))
goto handle_instat_3_register;
+
+
cs_ports = 7;
cs_port_start = 4;
cs_port_mask = 0x1;
cs_port_register = 1;
- for (i = cs_port_start; i < cs_port_start + cs_ports; i++) {
+ for (i = cs_port_start; i < cs_port_start + cs_ports &&
+ i <= sdw_slv->sdw_slv_cap.num_of_sdw_ports; i++) {
+
if (rd_msg[cs_port_register].buf[0] & cs_port_mask) {
+
ret += sdw_handle_port_interrupt(mstr,
- sdw_slv, cs_port_start + i);
+ sdw_slv, i, &port_status[i]);
}
cs_port_mask = cs_port_mask << 1;
}
-handle_instat_3_register:
+ /*
+ * Handle cascaded interrupts from the instat_3 register;
+ * if there is no cascaded interrupt from the SCP3 cascade, move to impl_def intrs
+ */
+handle_instat_3_register:
if (!(rd_msg[1].buf[0] & SDW_SCP_INTSTAT2_SCP3_CASCADE_MASK))
- goto handle_instat_3_register;
+ goto handle_impl_def_interrupts;
+
cs_ports = 4;
cs_port_start = 11;
cs_port_mask = 0x1;
cs_port_register = 2;
- for (i = cs_port_start; i < cs_port_start + cs_ports; i++) {
+
+ for (i = cs_port_start; i < cs_port_start + cs_ports &&
+ i <= sdw_slv->sdw_slv_cap.num_of_sdw_ports; i++) {
+
if (rd_msg[cs_port_register].buf[0] & cs_port_mask) {
+
ret += sdw_handle_port_interrupt(mstr,
- sdw_slv, cs_port_start + i);
+ sdw_slv, i, &port_status[i]);
}
cs_port_mask = cs_port_mask << 1;
}
- /* Ack the IntStat 1 interrupts */
+
+handle_impl_def_interrupts:
+
+ /*
+ * If slave has not registered for implementation defined
+ * interrupts, don't read them.
+ */
+ if (!sdw_slv->driver->handle_impl_def_interrupts)
+ goto ack_interrupts;
+
+ intr_status = kzalloc(sizeof(*intr_status), GFP_KERNEL);
+ if (!intr_status)
+ return -ENOMEM;
+
+ portn_stat = kzalloc((sizeof(*portn_stat)) *
+ sdw_slv->sdw_slv_cap.num_of_sdw_ports,
+ GFP_KERNEL);
+ if (!portn_stat)
+ return -ENOMEM;
+
+ intr_status->portn_stat = portn_stat;
+ intr_status->control_port_stat = control_port_stat;
+
+ /* Update the implementation defined status to Slave */
+ for (i = 1; i < sdw_slv->sdw_slv_cap.num_of_sdw_ports; i++) {
+
+ intr_status->portn_stat[i].status = port_status[i];
+ intr_status->portn_stat[i].num = i;
+ }
+
+ intr_status->port0_stat = port_status[0];
+ intr_status->control_port_stat = wr_msg.buf[0];
+
+ ret = sdw_slv->driver->handle_impl_def_interrupts(sdw_slv,
+ intr_status);
+ if (ret)
+ dev_err(&mstr->dev, "Implementation defined interrupt handling failed\n");
+
+ kfree(portn_stat);
+ kfree(intr_status);
+
+ack_interrupts:
+ /* Ack the interrupts */
ret = sdw_slave_transfer(mstr, &wr_msg, 1);
if (ret != 1) {
ret = -EINVAL;
dev_err(&mstr->dev, "Register transfer failed\n");
- goto out;
}
out:
- return ret;
+ return 0;
}
-static void handle_slave_status(struct kthread_work *work)
+int sdw_en_intr(struct sdw_slv *sdw_slv, int port_num, int mask)
{
- int i, ret = 0;
- struct sdw_slv_status *status, *__status__;
- struct sdw_bus *bus =
- container_of(work, struct sdw_bus, kwork);
- struct sdw_master *mstr = bus->mstr;
- unsigned long flags;
- /* Handle the new attached slaves to the bus. Register new slave
- * to the bus.
- */
- list_for_each_entry_safe(status, __status__, &bus->status_list, node) {
- if (status->status[0] == SDW_SLAVE_STAT_ATTACHED_OK) {
- ret += sdw_register_slave(mstr);
- if (ret)
- /* Even if adding new slave fails, we will
- * continue.
- */
- dev_err(&mstr->dev, "Registering new slave failed\n");
- }
- for (i = 1; i <= SOUNDWIRE_MAX_DEVICES; i++) {
- if (status->status[i] == SDW_SLAVE_STAT_NOT_PRESENT &&
- mstr->sdw_addr[i].assigned == true)
- /* Logical address was assigned to slave, but
- * now its down, so mark it as not present
- */
- mstr->sdw_addr[i].status =
- SDW_SLAVE_STAT_NOT_PRESENT;
+ struct sdw_msg rd_msg, wr_msg;
+ u8 buf;
+ int ret;
+ struct sdw_master *mstr = sdw_slv->mstr;
- else if (status->status[i] == SDW_SLAVE_STAT_ALERT &&
- mstr->sdw_addr[i].assigned == true) {
- /* Handle slave alerts */
- mstr->sdw_addr[i].status = SDW_SLAVE_STAT_ALERT;
- ret = sdw_handle_slave_alerts(mstr,
- mstr->sdw_addr[i].slave);
- if (ret)
- dev_err(&mstr->dev, "Handle slave alert failed for Slave %d\n", i);
+ rd_msg.addr = wr_msg.addr = SDW_DPN_INTMASK +
+ (SDW_NUM_DATA_PORT_REGISTERS * port_num);
+ /* Create message for enabling the interrupts */
+ wr_msg.ssp_tag = 0;
+ wr_msg.flag = SDW_MSG_FLAG_WRITE;
+ wr_msg.len = 1;
+ wr_msg.buf = &buf;
+ wr_msg.slave_addr = sdw_slv->slv_number;
+ wr_msg.addr_page1 = 0x0;
+ wr_msg.addr_page2 = 0x0;
+ /* Create message for reading the interrupts for DP0 interrupts*/
+ rd_msg.ssp_tag = 0;
+ rd_msg.flag = SDW_MSG_FLAG_READ;
+ rd_msg.len = 1;
+ rd_msg.buf = &buf;
+ rd_msg.slave_addr = sdw_slv->slv_number;
+ rd_msg.addr_page1 = 0x0;
+ rd_msg.addr_page2 = 0x0;
+ ret = sdw_slave_transfer(mstr, &rd_msg, 1);
+ if (ret != 1) {
+ dev_err(&mstr->dev, "DPn Intr mask read failed for slave %x\n",
+ sdw_slv->slv_number);
+ return -EINVAL;
+ }
- } else if (status->status[i] ==
- SDW_SLAVE_STAT_ATTACHED_OK &&
- mstr->sdw_addr[i].assigned == true)
- mstr->sdw_addr[i].status =
- SDW_SLAVE_STAT_ATTACHED_OK;
- }
- spin_lock_irqsave(&bus->spinlock, flags);
- list_del(&status->node);
- spin_unlock_irqrestore(&bus->spinlock, flags);
- kfree(status);
+ buf |= mask;
+
+ /* Set the port ready and Test fail interrupt mask as well */
+ buf |= SDW_DPN_INTSTAT_TEST_FAIL_MASK;
+ buf |= SDW_DPN_INTSTAT_PORT_READY_MASK;
+ ret = sdw_slave_transfer(mstr, &wr_msg, 1);
+ if (ret != 1) {
+ dev_err(&mstr->dev, "DPn Intr mask write failed for slave %x\n",
+ sdw_slv->slv_number);
+ return -EINVAL;
}
+ return 0;
}
-static int sdw_register_master(struct sdw_master *mstr)
+static int sdw_en_scp_intr(struct sdw_slv *sdw_slv, int mask)
{
- int ret = 0;
- int i;
- struct sdw_bus *sdw_bus;
+ struct sdw_msg rd_msg, wr_msg;
+ u8 buf = 0;
+ int ret;
+ struct sdw_master *mstr = sdw_slv->mstr;
+ u16 reg_addr;
- /* Can't register until after driver model init */
- if (unlikely(WARN_ON(!sdw_bus_type.p))) {
- ret = -EAGAIN;
- goto bus_init_not_done;
- }
- /* Sanity checks */
- if (unlikely(mstr->name[0] == '\0')) {
- pr_err("sdw-core: Attempt to register an master with no name!\n");
- ret = -EINVAL;
- goto mstr_no_name;
- }
- for (i = 0; i <= SOUNDWIRE_MAX_DEVICES; i++)
- mstr->sdw_addr[i].slv_number = i;
+ reg_addr = SDW_SCP_INTMASK1;
- rt_mutex_init(&mstr->bus_lock);
- INIT_LIST_HEAD(&mstr->slv_list);
- INIT_LIST_HEAD(&mstr->mstr_rt_list);
+ rd_msg.addr = wr_msg.addr = reg_addr;
- sdw_bus = kzalloc(sizeof(struct sdw_bus), GFP_KERNEL);
- if (!sdw_bus)
- goto bus_alloc_failed;
- sdw_bus->mstr = mstr;
- init_completion(&sdw_bus->async_data.xfer_complete);
+ /* Create message for reading the interrupt mask */
+ rd_msg.ssp_tag = 0;
+ rd_msg.flag = SDW_MSG_FLAG_READ;
+ rd_msg.len = 1;
+ rd_msg.buf = &buf;
+ rd_msg.slave_addr = sdw_slv->slv_number;
+ rd_msg.addr_page1 = 0x0;
+ rd_msg.addr_page2 = 0x0;
+ ret = sdw_slave_transfer(mstr, &rd_msg, 1);
+ if (ret != 1) {
+ dev_err(&mstr->dev, "SCP Intr mask read failed for slave %x\n",
+ sdw_slv->slv_number);
+ return -EINVAL;
+ }
- mutex_lock(&sdw_core.core_lock);
- list_add_tail(&sdw_bus->bus_node, &sdw_core.bus_list);
- mutex_unlock(&sdw_core.core_lock);
+ /* Enable the Slave defined interrupts. */
+ buf |= mask;
- dev_set_name(&mstr->dev, "sdw-%d", mstr->nr);
- mstr->dev.bus = &sdw_bus_type;
- mstr->dev.type = &sdw_mstr_type;
+ /* Set the port ready and Test fail interrupt mask as well */
+ buf |= SDW_SCP_INTMASK1_BUS_CLASH_MASK;
+ buf |= SDW_SCP_INTMASK1_PARITY_MASK;
- ret = device_register(&mstr->dev);
- if (ret)
- goto out_list;
- kthread_init_worker(&sdw_bus->kworker);
- sdw_bus->status_thread = kthread_run(kthread_worker_fn,
- &sdw_bus->kworker, "%s",
- dev_name(&mstr->dev));
- if (IS_ERR(sdw_bus->status_thread)) {
- dev_err(&mstr->dev, "error: failed to create status message task\n");
- ret = PTR_ERR(sdw_bus->status_thread);
- goto task_failed;
- }
- kthread_init_work(&sdw_bus->kwork, handle_slave_status);
- INIT_LIST_HEAD(&sdw_bus->status_list);
- spin_lock_init(&sdw_bus->spinlock);
- ret = sdw_mstr_bw_init(sdw_bus);
- if (ret) {
- dev_err(&mstr->dev, "error: Failed to init mstr bw\n");
- goto mstr_bw_init_failed;
+ /* Create message for enabling the interrupts */
+ wr_msg.ssp_tag = 0;
+ wr_msg.flag = SDW_MSG_FLAG_WRITE;
+ wr_msg.len = 1;
+ wr_msg.buf = &buf;
+ wr_msg.slave_addr = sdw_slv->slv_number;
+ wr_msg.addr_page1 = 0x0;
+ wr_msg.addr_page2 = 0x0;
+ ret = sdw_slave_transfer(mstr, &wr_msg, 1);
+ if (ret != 1) {
+ dev_err(&mstr->dev, "SCP Intr mask write failed for slave %x\n",
+ sdw_slv->slv_number);
+ return -EINVAL;
}
- dev_dbg(&mstr->dev, "master [%s] registered\n", mstr->name);
- return 0;
+ /* Return if DP0 is not present */
+ if (!sdw_slv->sdw_slv_cap.sdw_dp0_supported)
+ return 0;
-mstr_bw_init_failed:
-task_failed:
- device_unregister(&mstr->dev);
-out_list:
- mutex_lock(&sdw_core.core_lock);
- list_del(&sdw_bus->bus_node);
- mutex_unlock(&sdw_core.core_lock);
- kfree(sdw_bus);
-bus_alloc_failed:
-mstr_no_name:
-bus_init_not_done:
- mutex_lock(&sdw_core.core_lock);
- idr_remove(&sdw_core.idr, mstr->nr);
- mutex_unlock(&sdw_core.core_lock);
- return ret;
-}
-/**
- * sdw_master_update_slv_status: Report the status of slave to the bus driver.
- * master calls this function based on the
- * interrupt it gets once the slave changes its
- * state.
- * @mstr: Master handle for which status is reported.
- * @status: Array of status of each slave.
- */
-int sdw_master_update_slv_status(struct sdw_master *mstr,
- struct sdw_status *status)
-{
- struct sdw_bus *bus = NULL;
- struct sdw_slv_status *slv_status;
- unsigned long flags;
+ reg_addr = SDW_DP0_INTMASK;
+ rd_msg.addr = wr_msg.addr = reg_addr;
+ mask = sdw_slv->sdw_slv_cap.sdw_dp0_cap->imp_def_intr_mask;
+ buf = 0;
- list_for_each_entry(bus, &sdw_core.bus_list, bus_node) {
- if (bus->mstr == mstr)
- break;
- }
- /* This is master is not registered with bus driver */
- if (!bus) {
- dev_info(&mstr->dev, "Master not registered with bus\n");
- return 0;
+ /* Create message for reading the interrupt mask */
+ /* Create message for reading the interrupt mask */
+ rd_msg.ssp_tag = 0;
+ rd_msg.flag = SDW_MSG_FLAG_READ;
+ rd_msg.len = 1;
+ rd_msg.buf = &buf;
+ rd_msg.slave_addr = sdw_slv->slv_number;
+ rd_msg.addr_page1 = 0x0;
+ rd_msg.addr_page2 = 0x0;
+ ret = sdw_slave_transfer(mstr, &rd_msg, 1);
+ if (ret != 1) {
+ dev_err(&mstr->dev, "DP0 Intr mask read failed for slave %x\n",
+ sdw_slv->slv_number);
+ return -EINVAL;
+ }
+
+ /* Enable the Slave defined interrupts. */
+ buf |= mask;
+
+ /* Set the port ready and Test fail interrupt mask as well */
+ buf |= SDW_DP0_INTSTAT_TEST_FAIL_MASK;
+ buf |= SDW_DP0_INTSTAT_PORT_READY_MASK;
+ buf |= SDW_DP0_INTSTAT_BRA_FAILURE_MASK;
+
+ wr_msg.ssp_tag = 0;
+ wr_msg.flag = SDW_MSG_FLAG_WRITE;
+ wr_msg.len = 1;
+ wr_msg.buf = &buf;
+ wr_msg.slave_addr = sdw_slv->slv_number;
+ wr_msg.addr_page1 = 0x0;
+ wr_msg.addr_page2 = 0x0;
+
+ ret = sdw_slave_transfer(mstr, &wr_msg, 1);
+ if (ret != 1) {
+ dev_err(&mstr->dev, "DP0 Intr mask write failed for slave %x\n",
+ sdw_slv->slv_number);
+ return -EINVAL;
+ }
+ return 0;
+}
+
+static int sdw_prog_slv(struct sdw_slv *sdw_slv)
+{
+
+ struct sdw_slv_capabilities *cap;
+ int ret, i;
+ struct sdw_slv_dpn_capabilities *dpn_cap;
+ struct sdw_master *mstr = sdw_slv->mstr;
+
+ if (!sdw_slv->slave_cap_updated)
+ return 0;
+ cap = &sdw_slv->sdw_slv_cap;
+
+ /* Enable DP0 and SCP interrupts */
+ ret = sdw_en_scp_intr(sdw_slv, cap->scp_impl_def_intr_mask);
+
+ /* Failure should never happen, even if it happens we continue */
+ if (ret)
+ dev_err(&mstr->dev, "SCP program failed\n");
+
+ for (i = 0; i < cap->num_of_sdw_ports; i++) {
+ dpn_cap = &cap->sdw_dpn_cap[i];
+ ret = sdw_en_intr(sdw_slv, (i + 1),
+ dpn_cap->imp_def_intr_mask);
+
+ if (ret)
+ break;
+ }
+ return ret;
+}
+
+
+static void sdw_send_slave_status(struct sdw_slv *slave,
+ enum sdw_slave_status *status)
+{
+ struct sdw_slave_driver *slv_drv = slave->driver;
+
+ if (slv_drv && slv_drv->update_slv_status)
+ slv_drv->update_slv_status(slave, status);
+}
+
+static int sdw_wait_for_deprepare(struct sdw_slv *slave)
+{
+ int ret;
+ struct sdw_msg msg;
+ u8 buf[1] = {0};
+ int timeout = 0;
+ struct sdw_master *mstr = slave->mstr;
+
+ /* Create message to read clock stop status, its broadcast message. */
+ buf[0] = 0xFF;
+
+ msg.ssp_tag = 0;
+ msg.flag = SDW_MSG_FLAG_READ;
+ msg.len = 1;
+ msg.buf = &buf[0];
+ msg.slave_addr = slave->slv_number;
+ msg.addr_page1 = 0x0;
+ msg.addr_page2 = 0x0;
+ msg.addr = SDW_SCP_STAT;
+ /*
+ * Read the ClockStopNotFinished bit from the SCP_Stat register
+ * of particular Slave to make sure that clock stop prepare is done
+ */
+ do {
+ /*
+ * Ideally this should not fail, but even if it fails
+ * in exceptional situation, we go ahead for clock stop
+ */
+ ret = sdw_slave_transfer_nopm(mstr, &msg, 1);
+
+ if (ret != 1) {
+ WARN_ONCE(1, "Clock stop status read failed\n");
+ break;
+ }
+
+ if (!(buf[0] & SDW_SCP_STAT_CLK_STP_NF_MASK))
+ break;
+
+ /*
+ * TODO: Find the exact requirement in the spec.
+ * Since we are in suspend we should not sleep for long;
+ * ideally a Slave should be ready to stop the clock in
+ * less than a few ms.
+ * So sleep less and increase the loop count. This is not
+ * harmful, since the loop terminates once the Slave is ready.
+ *
+ */
+ msleep(2);
+ timeout++;
+
+ } while (timeout != 500);
+
+ if (!(buf[0] & SDW_SCP_STAT_CLK_STP_NF_MASK)) {
+ dev_info(&mstr->dev, "Clock stop de-prepare done\n");
+ return 0;
+ }
+
+ WARN_ONCE(1, "Clk stp deprepare failed for slave %d\n",
+ slave->slv_number);
+ return -EINVAL;
+}
+
+static void sdw_prep_slave_for_clk_stp(struct sdw_master *mstr,
+ struct sdw_slv *slave,
+ enum sdw_clk_stop_mode clock_stop_mode,
+ bool prep)
+{
+ bool wake_en;
+ struct sdw_slv_capabilities *cap;
+ u8 buf[1] = {0};
+ struct sdw_msg msg;
+ int ret;
+
+ cap = &slave->sdw_slv_cap;
+
+ /* Set the wakeup enable based on Slave capability */
+ wake_en = !cap->wake_up_unavailable;
+
+ if (prep) {
+ /* Even if its simplified clock stop prepare,
+ * setting prepare bit wont harm
+ */
+ buf[0] |= (1 << SDW_SCP_SYSTEMCTRL_CLK_STP_PREP_SHIFT);
+ buf[0] |= clock_stop_mode <<
+ SDW_SCP_SYSTEMCTRL_CLK_STP_MODE_SHIFT;
+ buf[0] |= wake_en << SDW_SCP_SYSTEMCTRL_WAKE_UP_EN_SHIFT;
+ } else
+ buf[0] = 0;
+
+ msg.ssp_tag = 0;
+ msg.flag = SDW_MSG_FLAG_WRITE;
+ msg.len = 1;
+ msg.buf = &buf[0];
+ msg.slave_addr = slave->slv_number;
+ msg.addr_page1 = 0x0;
+ msg.addr_page2 = 0x0;
+ msg.addr = SDW_SCP_SYSTEMCTRL;
+
+ /*
+ * We are calling NOPM version of the transfer API, because
+ * Master controllers calls this from the suspend handler,
+ * so if we call the normal transfer API, it tries to resume
+ * controller, which result in deadlock
+ */
+
+ ret = sdw_slave_transfer_nopm(mstr, &msg, 1);
+ /* We should continue even if it fails for some Slave */
+ if (ret != 1)
+ WARN_ONCE(1, "Clock Stop prepare failed for slave %d\n",
+ slave->slv_number);
+}
+
+static int sdw_check_for_prep_bit(struct sdw_slv *slave)
+{
+ u8 buf[1] = {0};
+ struct sdw_msg msg;
+ int ret;
+ struct sdw_master *mstr = slave->mstr;
+
+ msg.ssp_tag = 0;
+ msg.flag = SDW_MSG_FLAG_READ;
+ msg.len = 1;
+ msg.buf = &buf[0];
+ msg.slave_addr = slave->slv_number;
+ msg.addr_page1 = 0x0;
+ msg.addr_page2 = 0x0;
+ msg.addr = SDW_SCP_SYSTEMCTRL;
+
+ ret = sdw_slave_transfer_nopm(mstr, &msg, 1);
+ /* We should continue even if it fails for some Slave */
+ if (ret != 1) {
+ dev_err(&mstr->dev, "SCP_SystemCtrl read failed for Slave %d\n",
+ slave->slv_number);
+ return -EINVAL;
+
+ }
+ return (buf[0] & SDW_SCP_SYSTEMCTRL_CLK_STP_PREP_MASK);
+
+}
+
+static int sdw_slv_deprepare_clk_stp1(struct sdw_slv *slave)
+{
+ struct sdw_slv_capabilities *cap;
+ int ret;
+ struct sdw_master *mstr = slave->mstr;
+
+ cap = &slave->sdw_slv_cap;
+
+ /*
+ * Slave might have enumerated 1st time or from clock stop mode 1
+ * return if Slave doesn't require deprepare
+ */
+ if (!cap->clk_stp1_deprep_required)
+ return 0;
+
+ /*
+ * If Slave requires de-prepare after exiting from Clock Stop
+ * mode 1, then check the ClockStopPrepare bit in the SystemCtrl register;
+ * if it is 1, de-prepare the Slave from clock stop prepare, else
+ * return
+ */
+ ret = sdw_check_for_prep_bit(slave);
+ /* If prepare bit is not set, return without error */
+ if (!ret)
+ return 0;
+
+ /* If error in reading register, return with error */
+ if (ret < 0)
+ return ret;
+
+ /*
+ * Call the pre clock stop prepare, if Slave requires.
+ */
+ if (slave->driver && slave->driver->pre_clk_stop_prep) {
+ ret = slave->driver->pre_clk_stop_prep(slave,
+ cap->clock_stop1_mode_supported, false);
+ if (ret) {
+ dev_warn(&mstr->dev, "Pre de-prepare failed for Slave %d\n",
+ slave->slv_number);
+ return ret;
+ }
+ }
+
+ sdw_prep_slave_for_clk_stp(slave->mstr, slave,
+ cap->clock_stop1_mode_supported, false);
+
+ /* Make sure NF = 0 for deprepare to complete */
+ ret = sdw_wait_for_deprepare(slave);
+
+ /* Return if de-prepare was unsuccessful */
+ if (ret)
+ return ret;
+
+ if (slave->driver && slave->driver->post_clk_stop_prep) {
+ ret = slave->driver->post_clk_stop_prep(slave,
+ cap->clock_stop1_mode_supported, false);
+
+ if (ret)
+ dev_err(&mstr->dev, "Post de-prepare failed for Slave %d\n",
+ slave->slv_number);
+ }
+
+ return ret;
+}
+
+static void handle_slave_status(struct kthread_work *work)
+{
+ int i, ret = 0;
+ struct sdw_slv_status *status, *__status__;
+ struct sdw_bus *bus =
+ container_of(work, struct sdw_bus, kwork);
+ struct sdw_master *mstr = bus->mstr;
+ unsigned long flags;
+ bool slave_present = 0;
+
+ /* Handle the new attached slaves to the bus. Register new slave
+ * to the bus.
+ */
+ list_for_each_entry_safe(status, __status__, &bus->status_list, node) {
+ if (status->status[0] == SDW_SLAVE_STAT_ATTACHED_OK) {
+ ret += sdw_register_slave(mstr);
+ if (ret)
+ /* Even if adding new slave fails, we will
+ * continue.
+ */
+ dev_err(&mstr->dev, "Registering new slave failed\n");
+ }
+ for (i = 1; i <= SOUNDWIRE_MAX_DEVICES; i++) {
+ slave_present = false;
+ if (status->status[i] == SDW_SLAVE_STAT_NOT_PRESENT &&
+ mstr->sdw_addr[i].assigned == true) {
+ /* Logical address was assigned to slave, but
+ * now its down, so mark it as not present
+ */
+ mstr->sdw_addr[i].status =
+ SDW_SLAVE_STAT_NOT_PRESENT;
+ slave_present = true;
+ }
+
+ else if (status->status[i] == SDW_SLAVE_STAT_ALERT &&
+ mstr->sdw_addr[i].assigned == true) {
+ ret = 0;
+ /* Handle slave alerts */
+ mstr->sdw_addr[i].status = SDW_SLAVE_STAT_ALERT;
+ ret = sdw_handle_slave_alerts(mstr,
+ mstr->sdw_addr[i].slave);
+ if (ret)
+ dev_err(&mstr->dev, "Handle slave alert failed for Slave %d\n", i);
+
+ slave_present = true;
+
+
+ } else if (status->status[i] ==
+ SDW_SLAVE_STAT_ATTACHED_OK &&
+ mstr->sdw_addr[i].assigned == true) {
+
+ sdw_prog_slv(mstr->sdw_addr[i].slave);
+
+ mstr->sdw_addr[i].status =
+ SDW_SLAVE_STAT_ATTACHED_OK;
+ ret = sdw_slv_deprepare_clk_stp1(
+ mstr->sdw_addr[i].slave);
+
+ /*
+ * If depreparing Slave fails, no need to
+ * reprogram Slave, this should never happen
+ * in ideal case.
+ */
+ if (ret)
+ continue;
+ slave_present = true;
+ }
+
+ if (!slave_present)
+ continue;
+
+ sdw_send_slave_status(mstr->sdw_addr[i].slave,
+ &mstr->sdw_addr[i].status);
+ }
+ spin_lock_irqsave(&bus->spinlock, flags);
+ list_del(&status->node);
+ spin_unlock_irqrestore(&bus->spinlock, flags);
+ kfree(status);
+ }
+}
+
+static int sdw_register_master(struct sdw_master *mstr)
+{
+ int ret = 0;
+ int i;
+ struct sdw_bus *sdw_bus;
+
+ /* Can't register until after driver model init */
+ if (unlikely(WARN_ON(!sdwint_bus_type.p))) {
+ ret = -EAGAIN;
+ goto bus_init_not_done;
+ }
+ /* Sanity checks */
+ if (unlikely(mstr->name[0] == '\0')) {
+ pr_err("sdw-core: Attempt to register a master with no name!\n");
+ ret = -EINVAL;
+ goto mstr_no_name;
+ }
+ for (i = 0; i <= SOUNDWIRE_MAX_DEVICES; i++)
+ mstr->sdw_addr[i].slv_number = i;
+
+ rt_mutex_init(&mstr->bus_lock);
+ INIT_LIST_HEAD(&mstr->slv_list);
+ INIT_LIST_HEAD(&mstr->mstr_rt_list);
+
+ sdw_bus = kzalloc(sizeof(struct sdw_bus), GFP_KERNEL);
+ if (!sdw_bus)
+ goto bus_alloc_failed;
+ sdw_bus->mstr = mstr;
+ init_completion(&sdw_bus->async_data.xfer_complete);
+
+ mutex_lock(&sdw_core.core_lock);
+ list_add_tail(&sdw_bus->bus_node, &sdw_core.bus_list);
+ mutex_unlock(&sdw_core.core_lock);
+
+ dev_set_name(&mstr->dev, "sdw-%d", mstr->nr);
+ mstr->dev.bus = &sdwint_bus_type;
+ mstr->dev.type = &sdw_mstr_type;
+
+ ret = device_register(&mstr->dev);
+ if (ret)
+ goto out_list;
+ kthread_init_worker(&sdw_bus->kworker);
+ sdw_bus->status_thread = kthread_run(kthread_worker_fn,
+ &sdw_bus->kworker, "%s",
+ dev_name(&mstr->dev));
+ if (IS_ERR(sdw_bus->status_thread)) {
+ dev_err(&mstr->dev, "error: failed to create status message task\n");
+ ret = PTR_ERR(sdw_bus->status_thread);
+ goto task_failed;
+ }
+ kthread_init_work(&sdw_bus->kwork, handle_slave_status);
+ INIT_LIST_HEAD(&sdw_bus->status_list);
+ spin_lock_init(&sdw_bus->spinlock);
+ ret = sdw_mstr_bw_init(sdw_bus);
+ if (ret) {
+ dev_err(&mstr->dev, "error: Failed to init mstr bw\n");
+ goto mstr_bw_init_failed;
+ }
+ dev_dbg(&mstr->dev, "master [%s] registered\n", mstr->name);
+
+ return 0;
+
+mstr_bw_init_failed:
+task_failed:
+ device_unregister(&mstr->dev);
+out_list:
+ mutex_lock(&sdw_core.core_lock);
+ list_del(&sdw_bus->bus_node);
+ mutex_unlock(&sdw_core.core_lock);
+ kfree(sdw_bus);
+bus_alloc_failed:
+mstr_no_name:
+bus_init_not_done:
+ mutex_lock(&sdw_core.core_lock);
+ idr_remove(&sdw_core.idr, mstr->nr);
+ mutex_unlock(&sdw_core.core_lock);
+ return ret;
+}
+
+/**
+ * sdw_master_update_slv_status: Report the status of slave to the bus driver.
+ * master calls this function based on the
+ * interrupt it gets once the slave changes its
+ * state.
+ * @mstr: Master handle for which status is reported.
+ * @status: Array of status of each slave.
+ */
+int sdw_master_update_slv_status(struct sdw_master *mstr,
+ struct sdw_status *status)
+{
+ struct sdw_bus *bus = NULL;
+ struct sdw_slv_status *slv_status;
+ unsigned long flags;
+
+ list_for_each_entry(bus, &sdw_core.bus_list, bus_node) {
+ if (bus->mstr == mstr)
+ break;
+ }
+ /* This master is not registered with the bus driver */
+ if (!bus) {
+ dev_info(&mstr->dev, "Master not registered with bus\n");
+ return 0;
}
slv_status = kzalloc(sizeof(struct sdw_slv_status), GFP_ATOMIC);
memcpy(slv_status->status, status, sizeof(struct sdw_status));
@@ -1355,6 +1889,106 @@ void sdw_del_master_controller(struct sdw_master *mstr)
}
EXPORT_SYMBOL_GPL(sdw_del_master_controller);
+/**
+ * sdw_slave_xfer_bra_block: Transfer the data block using the BTP/BRA
+ * protocol.
+ * @mstr: SoundWire Master handle
+ * @block: Data block to be transferred.
+ */
+int sdw_slave_xfer_bra_block(struct sdw_master *mstr,
+ struct sdw_bra_block *block)
+{
+ struct sdw_bus *sdw_mstr_bs = NULL;
+ struct sdw_mstr_driver *ops = NULL;
+ int ret;
+
+ /*
+ * This API will be called by slave/codec
+ * when it needs to xfer firmware to
+ * its memory or perform bulk read/writes of registers.
+ */
+
+ /*
+ * Acquire core lock
+ * TODO: Acquire Master lock inside core lock
+ * similar way done in upstream. currently
+ * keeping it as core lock
+ */
+ mutex_lock(&sdw_core.core_lock);
+
+ /* Get master data structure */
+ list_for_each_entry(sdw_mstr_bs, &sdw_core.bus_list, bus_node) {
+ /* Match master structure pointer */
+ if (sdw_mstr_bs->mstr != mstr)
+ continue;
+
+ break;
+ }
+
+ /*
+ * Here assumption is made that complete SDW bandwidth is used
+ * by BRA. So bus will return -EBUSY if any active stream
+ * is running on given master.
+ * TODO: In final implementation extra bandwidth will be always
+ * allocated for BRA. In that case all the computation of clock,
+ * frame shape, transport parameters for DP0 will be done
+ * considering BRA feature.
+ */
+ if (!list_empty(&mstr->mstr_rt_list)) {
+
+ /*
+ * Currently not allowing BRA when any
+ * active stream on master, returning -EBUSY
+ */
+
+ /* Release lock */
+ mutex_unlock(&sdw_core.core_lock);
+ return -EBUSY;
+ }
+
+ /* Get master driver ops */
+ ops = sdw_mstr_bs->mstr->driver;
+
+ /*
+ * Check whether Master is supporting bulk transfer. If not, then
+ * bus will use alternate method of performing BRA request using
+ * normal register read/write API.
+ * TODO: Currently if Master is not supporting BRA transfers, bus
+ * returns error. Bus driver to extend support for normal register
+ * read/write as alternate method.
+ */
+ if (!ops->mstr_ops->xfer_bulk)
+ return -EINVAL;
+
+ /* Data port Programming (ON) */
+ ret = sdw_bus_bra_xport_config(sdw_mstr_bs, block, true);
+ if (ret < 0) {
+ dev_err(&mstr->dev, "BRA: Xport parameter config failed ret=%d\n", ret);
+ goto error;
+ }
+
+ /* Bulk Setup */
+ ret = ops->mstr_ops->xfer_bulk(mstr, block);
+ if (ret < 0) {
+ dev_err(&mstr->dev, "BRA: Transfer failed ret=%d\n", ret);
+ goto error;
+ }
+
+ /* Data port Programming (OFF) */
+ ret = sdw_bus_bra_xport_config(sdw_mstr_bs, block, false);
+ if (ret < 0) {
+ dev_err(&mstr->dev, "BRA: Xport parameter de-config failed ret=%d\n", ret);
+ goto error;
+ }
+
+error:
+ /* Release lock */
+ mutex_unlock(&sdw_core.core_lock);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(sdw_slave_xfer_bra_block);
+
/*
* An sdw_driver is used with one or more sdw_slv (slave) nodes to access
* sdw slave chips, on a bus instance associated with some sdw_master.
@@ -1432,6 +2066,7 @@ int sdw_register_slave_capabilities(struct sdw_slv *sdw,
struct sdw_slv_dpn_capabilities *slv_dpn_cap, *dpn_cap;
struct port_audio_mode_properties *prop, *slv_prop;
int i, j;
+ int ret = 0;
slv_cap = &sdw->sdw_slv_cap;
@@ -1441,6 +2076,8 @@ int sdw_register_slave_capabilities(struct sdw_slv *sdw,
slv_cap->clock_stop1_mode_supported = cap->clock_stop1_mode_supported;
slv_cap->simplified_clock_stop_prepare =
cap->simplified_clock_stop_prepare;
+ slv_cap->scp_impl_def_intr_mask = cap->scp_impl_def_intr_mask;
+
slv_cap->highphy_capable = cap->highphy_capable;
slv_cap->paging_supported = cap->paging_supported;
slv_cap->bank_delay_support = cap->bank_delay_support;
@@ -1560,6 +2197,9 @@ int sdw_register_slave_capabilities(struct sdw_slv *sdw,
prop->ch_prepare_behavior;
}
}
+ ret = sdw_prog_slv(sdw);
+ if (ret)
+ return ret;
sdw->slave_cap_updated = true;
return 0;
}
@@ -1723,10 +2363,23 @@ static void sdw_release_mstr_stream(struct sdw_master *mstr,
struct sdw_runtime *sdw_rt)
{
struct sdw_mstr_runtime *mstr_rt, *__mstr_rt;
+ struct sdw_port_runtime *port_rt, *__port_rt, *first_port_rt = NULL;
list_for_each_entry_safe(mstr_rt, __mstr_rt, &sdw_rt->mstr_rt_list,
mstr_sdw_node) {
if (mstr_rt->mstr == mstr) {
+
+ /* Get first runtime node from port list */
+ first_port_rt = list_first_entry(&mstr_rt->port_rt_list,
+ struct sdw_port_runtime,
+ port_node);
+
+ /* Release Master port resources */
+ list_for_each_entry_safe(port_rt, __port_rt,
+ &mstr_rt->port_rt_list, port_node)
+ list_del(&port_rt->port_node);
+
+ kfree(first_port_rt);
list_del(&mstr_rt->mstr_sdw_node);
if (mstr_rt->direction == SDW_DATA_DIR_OUT)
sdw_rt->tx_ref_count--;
@@ -1744,10 +2397,23 @@ static void sdw_release_slave_stream(struct sdw_slv *slave,
struct sdw_runtime *sdw_rt)
{
struct sdw_slave_runtime *slv_rt, *__slv_rt;
+ struct sdw_port_runtime *port_rt, *__port_rt, *first_port_rt = NULL;
list_for_each_entry_safe(slv_rt, __slv_rt, &sdw_rt->slv_rt_list,
slave_sdw_node) {
if (slv_rt->slave == slave) {
+
+ /* Get first runtime node from port list */
+ first_port_rt = list_first_entry(&slv_rt->port_rt_list,
+ struct sdw_port_runtime,
+ port_node);
+
+ /* Release Slave port resources */
+ list_for_each_entry_safe(port_rt, __port_rt,
+ &slv_rt->port_rt_list, port_node)
+ list_del(&port_rt->port_node);
+
+ kfree(first_port_rt);
list_del(&slv_rt->slave_sdw_node);
if (slv_rt->direction == SDW_DATA_DIR_OUT)
sdw_rt->tx_ref_count--;
@@ -1917,7 +2583,6 @@ int sdw_config_stream(struct sdw_master *mstr,
} else
sdw_rt->rx_ref_count++;
- /* SRK: check with hardik */
sdw_rt->type = stream_config->type;
sdw_rt->stream_state = SDW_STATE_CONFIG_STREAM;
@@ -1947,6 +2612,142 @@ int sdw_config_stream(struct sdw_master *mstr,
}
EXPORT_SYMBOL_GPL(sdw_config_stream);
+/**
+ * sdw_chk_slv_dpn_caps - Check Slave port capabilities.
+ * Returns 0 on success, -EINVAL in case of error.
+ *
+ * This function checks all slave port capabilities
+ * for given stream parameters. If any of parameters
+ * is not supported in port capabilities, it returns
+ * error.
+ */
+int sdw_chk_slv_dpn_caps(struct sdw_slv_dpn_capabilities *dpn_cap,
+ struct sdw_stream_params *strm_prms)
+{
+ struct port_audio_mode_properties *mode_prop =
+ dpn_cap->mode_properties;
+ int ret = 0, i, value;
+
+ /* Check Sampling frequency */
+ if (mode_prop->num_sampling_freq_configs) {
+ for (i = 0; i < mode_prop->num_sampling_freq_configs; i++) {
+
+ value = mode_prop->sampling_freq_config[i];
+ if (strm_prms->rate == value)
+ break;
+ }
+
+ if (i == mode_prop->num_sampling_freq_configs)
+ return -EINVAL;
+
+ } else {
+
+ if ((strm_prms->rate < mode_prop->min_sampling_frequency)
+ || (strm_prms->rate >
+ mode_prop->max_sampling_frequency))
+ return -EINVAL;
+ }
+
+ /* check for bit rate */
+ if (dpn_cap->num_word_length) {
+ for (i = 0; i < dpn_cap->num_word_length; i++) {
+
+ value = dpn_cap->word_length_buffer[i];
+ if (strm_prms->bps == value)
+ break;
+ }
+
+ if (i == dpn_cap->num_word_length)
+ return -EINVAL;
+
+ } else {
+
+ if ((strm_prms->bps < dpn_cap->min_word_length)
+ || (strm_prms->bps > dpn_cap->max_word_length))
+ return -EINVAL;
+ }
+
+ /* check for number of channels */
+ if (dpn_cap->num_ch_supported) {
+ for (i = 0; i < dpn_cap->num_ch_supported; i++) {
+
+ value = dpn_cap->ch_supported[i];
+ if (strm_prms->channel_count == value)
+ break;
+ }
+
+ if (i == dpn_cap->num_ch_supported)
+ return -EINVAL;
+
+ } else {
+
+ if ((strm_prms->channel_count < dpn_cap->min_ch_num)
+ || (strm_prms->channel_count > dpn_cap->max_ch_num))
+ return -EINVAL;
+ }
+
+ return ret;
+}
+
+/**
+ * sdw_chk_mstr_dpn_caps - Check Master port capabilities.
+ * Returns 0 on success, -EINVAL in case of error.
+ *
+ * This function checks all master port capabilities
+ * for given stream parameters. If any of parameters
+ * is not supported in port capabilities, it returns
+ * error.
+ */
+int sdw_chk_mstr_dpn_caps(struct sdw_mstr_dpn_capabilities *dpn_cap,
+ struct sdw_stream_params *strm_prms)
+{
+
+ int ret = 0, i, value;
+
+ /* check for bit rate */
+ if (dpn_cap->num_word_length) {
+ for (i = 0; i < dpn_cap->num_word_length; i++) {
+
+ value = dpn_cap->word_length_buffer[i];
+ if (strm_prms->bps == value)
+ break;
+ }
+
+ if (i == dpn_cap->num_word_length)
+ return -EINVAL;
+
+ } else {
+
+ if ((strm_prms->bps < dpn_cap->min_word_length)
+ || (strm_prms->bps > dpn_cap->max_word_length)) {
+ return -EINVAL;
+ }
+
+
+ }
+
+ /* check for number of channels */
+ if (dpn_cap->num_ch_supported) {
+ for (i = 0; i < dpn_cap->num_ch_supported; i++) {
+
+ value = dpn_cap->ch_supported[i];
+ if (strm_prms->channel_count == value)
+ break;
+ }
+
+ if (i == dpn_cap->num_ch_supported)
+ return -EINVAL;
+
+ } else {
+
+ if ((strm_prms->channel_count < dpn_cap->min_ch_num)
+ || (strm_prms->channel_count > dpn_cap->max_ch_num))
+ return -EINVAL;
+ }
+
+ return ret;
+}
+
static int sdw_mstr_port_configuration(struct sdw_master *mstr,
struct sdw_runtime *sdw_rt,
struct sdw_port_config *port_config)
@@ -1955,6 +2756,9 @@ static int sdw_mstr_port_configuration(struct sdw_master *mstr,
struct sdw_port_runtime *port_rt;
int found = 0;
int i;
+ int ret = 0, pn = 0;
+ struct sdw_mstr_dpn_capabilities *dpn_cap =
+ mstr->mstr_capabilities.sdw_dpn_cap;
list_for_each_entry(mstr_rt, &sdw_rt->mstr_rt_list, mstr_sdw_node) {
if (mstr_rt->mstr == mstr) {
@@ -1966,16 +2770,35 @@ static int sdw_mstr_port_configuration(struct sdw_master *mstr,
dev_err(&mstr->dev, "Master not found for this port\n");
return -EINVAL;
}
+
port_rt = kzalloc((sizeof(struct sdw_port_runtime)) *
port_config->num_ports, GFP_KERNEL);
if (!port_rt)
return -EINVAL;
+
+ if (!dpn_cap)
+ return -EINVAL;
+ /*
+ * Note: the assumption here is that no configuration is
+ * received for the 0th port.
+ */
for (i = 0; i < port_config->num_ports; i++) {
port_rt[i].channel_mask = port_config->port_cfg[i].ch_mask;
- port_rt[i].port_num = port_config->port_cfg[i].port_num;
+ port_rt[i].port_num = pn = port_config->port_cfg[i].port_num;
+
+ /* Perform capability check for master port */
+ ret = sdw_chk_mstr_dpn_caps(&dpn_cap[pn],
+ &mstr_rt->stream_params);
+ if (ret < 0) {
+ dev_err(&mstr->dev,
+ "Master capabilities check failed\n");
+ return -EINVAL;
+ }
+
list_add_tail(&port_rt[i].port_node, &mstr_rt->port_rt_list);
}
- return 0;
+
+ return ret;
}
static int sdw_slv_port_configuration(struct sdw_slv *slave,
@@ -1984,8 +2807,10 @@ static int sdw_slv_port_configuration(struct sdw_slv *slave,
{
struct sdw_slave_runtime *slv_rt;
struct sdw_port_runtime *port_rt;
- int found = 0;
- int i;
+ struct sdw_slv_dpn_capabilities *dpn_cap =
+ slave->sdw_slv_cap.sdw_dpn_cap;
+ int found = 0, ret = 0;
+ int i, pn;
list_for_each_entry(slv_rt, &sdw_rt->slv_rt_list, slave_sdw_node) {
if (slv_rt->slave == slave) {
@@ -1997,6 +2822,12 @@ static int sdw_slv_port_configuration(struct sdw_slv *slave,
dev_err(&slave->mstr->dev, "Slave not found for this port\n");
return -EINVAL;
}
+
+ if (!slave->slave_cap_updated) {
+ dev_err(&slave->mstr->dev, "Slave capabilities not updated\n");
+ return -EINVAL;
+ }
+
port_rt = kzalloc((sizeof(struct sdw_port_runtime)) *
port_config->num_ports, GFP_KERNEL);
if (!port_rt)
@@ -2004,10 +2835,21 @@ static int sdw_slv_port_configuration(struct sdw_slv *slave,
for (i = 0; i < port_config->num_ports; i++) {
port_rt[i].channel_mask = port_config->port_cfg[i].ch_mask;
- port_rt[i].port_num = port_config->port_cfg[i].port_num;
+ port_rt[i].port_num = pn = port_config->port_cfg[i].port_num;
+
+ /* Perform capability check for slave port */
+ ret = sdw_chk_slv_dpn_caps(&dpn_cap[pn],
+ &slv_rt->stream_params);
+ if (ret < 0) {
+ dev_err(&slave->mstr->dev,
+ "Slave capabilities check failed\n");
+ return -EINVAL;
+ }
+
list_add_tail(&port_rt[i].port_node, &slv_rt->port_rt_list);
}
- return 0;
+
+ return ret;
}
/**
@@ -2040,7 +2882,6 @@ int sdw_config_port(struct sdw_master *mstr,
struct sdw_runtime *sdw_rt = NULL;
struct sdw_stream_tag *stream = NULL;
-
for (i = 0; i < SDW_NUM_STREAM_TAGS; i++) {
if (stream_tags[i].stream_tag == stream_tag) {
sdw_rt = stream_tags[i].sdw_rt;
@@ -2048,10 +2889,12 @@ int sdw_config_port(struct sdw_master *mstr,
break;
}
}
+
if (!sdw_rt) {
dev_err(&mstr->dev, "Invalid stream tag\n");
return -EINVAL;
}
+
if (static_key_false(&sdw_trace_msg)) {
int i;
@@ -2060,13 +2903,16 @@ int sdw_config_port(struct sdw_master *mstr,
&port_config->port_cfg[i], stream_tag);
}
}
+
mutex_lock(&stream->stream_lock);
+
if (!slave)
ret = sdw_mstr_port_configuration(mstr, sdw_rt, port_config);
else
ret = sdw_slv_port_configuration(slave, sdw_rt, port_config);
mutex_unlock(&stream->stream_lock);
+
return ret;
}
EXPORT_SYMBOL_GPL(sdw_config_port);
@@ -2078,9 +2924,6 @@ int sdw_prepare_and_enable(int stream_tag, bool enable)
struct sdw_stream_tag *stream_tags = sdw_core.stream_tags;
struct sdw_stream_tag *stream = NULL;
- /* TBD: SRK, Check with hardik whether both locks needed
- * stream and core??
- */
mutex_lock(&sdw_core.core_lock);
for (i = 0; i < SDW_NUM_STREAM_TAGS; i++) {
@@ -2200,126 +3043,357 @@ int sdw_wait_for_slave_enumeration(struct sdw_master *mstr,
}
EXPORT_SYMBOL_GPL(sdw_wait_for_slave_enumeration);
-int sdw_prepare_for_clock_change(struct sdw_master *mstr, bool stop,
- enum sdw_clk_stop_mode *clck_stop_mode)
+static enum sdw_clk_stop_mode sdw_get_clk_stp_mode(struct sdw_slv *slave)
{
- int i;
+ enum sdw_clk_stop_mode clock_stop_mode = SDW_CLOCK_STOP_MODE_0;
+ struct sdw_slv_capabilities *cap = &slave->sdw_slv_cap;
+
+ if (!slave->driver)
+ return clock_stop_mode;
+ /*
+ * Get the dynamic value of clock stop from Slave driver
+ * if supported, else use the static value from
+ * capabilities register. Update the capabilities also
+ * if we have new dynamic value.
+ */
+ if (slave->driver->get_dyn_clk_stp_mod) {
+ clock_stop_mode = slave->driver->get_dyn_clk_stp_mod(slave);
+
+ if (clock_stop_mode == SDW_CLOCK_STOP_MODE_1)
+ cap->clock_stop1_mode_supported = true;
+ else
+ cap->clock_stop1_mode_supported = false;
+ } else
+ clock_stop_mode = cap->clock_stop1_mode_supported;
+
+ return clock_stop_mode;
+}
+
+/**
+ * sdw_master_stop_clock: Stop the clock. This function broadcasts the SCP_CTRL
+ * register with clock_stop_now bit set.
+ *
+ * @mstr: Master handle for which clock has to be stopped.
+ *
+ * Returns 0 on success, appropriate error code on failure.
+ */
+int sdw_master_stop_clock(struct sdw_master *mstr)
+{
+ int ret = 0, i;
struct sdw_msg msg;
u8 buf[1] = {0};
- struct sdw_slave *slave;
- enum sdw_clk_stop_mode clock_stop_mode;
- int timeout = 0;
- int ret = 0;
- int slave_dev_present = 0;
+ enum sdw_clk_stop_mode mode;
- /* Find if all slave support clock stop mode1 if all slaves support
- * clock stop mode1 use mode1 else use mode0
- */
- for (i = 1; i <= SOUNDWIRE_MAX_DEVICES; i++) {
- if (mstr->sdw_addr[i].assigned &&
- mstr->sdw_addr[i].status != SDW_SLAVE_STAT_NOT_PRESENT) {
- slave_dev_present = 1;
- slave = mstr->sdw_addr[i].slave;
- clock_stop_mode &=
- slave->sdw_slv_cap.clock_stop1_mode_supported;
- if (!clock_stop_mode)
- break;
- }
- }
- if (stop) {
- *clck_stop_mode = clock_stop_mode;
- dev_info(&mstr->dev, "Entering Clock stop mode %x\n",
- clock_stop_mode);
- }
- /* Slaves might have removed power during its suspend
- * in that case no need to do clock stop prepare
- * and return from here
+ /* Send Broadcast message to the SCP_ctrl register with
+ * clock stop now. If none of the Slaves are attached, then there
+ * may not be an ACK; flag the error about the ACK not being received,
+ * but the clock will still be stopped.
*/
- if (!slave_dev_present)
- return 0;
- /* Prepare for the clock stop mode. For simplified clock stop
- * prepare only mode is to be set, For others set the ClockStop
- * Prepare bit in SCP_SystemCtrl register. For all the other slaves
- * set the clock stop prepare bit. For all slave set the clock
- * stop mode based on what we got in earlier loop
+ msg.ssp_tag = 0;
+ msg.flag = SDW_MSG_FLAG_WRITE;
+ msg.len = 1;
+ msg.buf = &buf[0];
+ msg.slave_addr = SDW_SLAVE_BDCAST_ADDR;
+ msg.addr_page1 = 0x0;
+ msg.addr_page2 = 0x0;
+ msg.addr = SDW_SCP_CTRL;
+ buf[0] |= 0x1 << SDW_SCP_CTRL_CLK_STP_NOW_SHIFT;
+ ret = sdw_slave_transfer_nopm(mstr, &msg, 1);
+
+ /* Even if broadcast fails, we stop the clock and flag error */
+ if (ret != 1)
+ dev_err(&mstr->dev, "ClockStopNow Broadcast message failed\n");
+
+ /*
+ * Mark all Slaves as un-attached which are entering clock stop
+ * mode1
*/
for (i = 1; i <= SOUNDWIRE_MAX_DEVICES; i++) {
+
+ if (!mstr->sdw_addr[i].assigned)
+ continue;
+
+ /* Get clock stop mode for all Slaves */
+ mode = sdw_get_clk_stp_mode(mstr->sdw_addr[i].slave);
+ if (mode == SDW_CLOCK_STOP_MODE_0)
+ continue;
+
+ /* If clock stop mode 1, mark Slave as not present */
+ mstr->sdw_addr[i].status = SDW_SLAVE_STAT_NOT_PRESENT;
+ }
+ return 0;
+}
+EXPORT_SYMBOL_GPL(sdw_master_stop_clock);
+
+static struct sdw_slv *get_slave_for_prep_deprep(struct sdw_master *mstr,
+ int *slave_index)
+{
+ int i;
+
+ for (i = *slave_index; i <= SOUNDWIRE_MAX_DEVICES; i++) {
if (mstr->sdw_addr[i].assigned != true)
continue;
+
if (mstr->sdw_addr[i].status == SDW_SLAVE_STAT_NOT_PRESENT)
continue;
- slave = mstr->sdw_addr[i].slave;
- msg.ssp_tag = 0;
- slave = mstr->sdw_addr[i].slave;
- if (stop) {
- /* Even if its simplified clock stop prepare
- * setting prepare bit wont harm
- */
- buf[0] |= (1 << SDW_SCP_SYSTEMCTRL_CLK_STP_PREP_SHIFT);
- buf[0] |= clock_stop_mode <<
- SDW_SCP_SYSTEMCTRL_CLK_STP_MODE_SHIFT;
- }
- msg.flag = SDW_MSG_FLAG_WRITE;
- msg.addr = SDW_SCP_SYSTEMCTRL;
- msg.len = 1;
- msg.buf = buf;
- msg.slave_addr = i;
- msg.addr_page1 = 0x0;
- msg.addr_page2 = 0x0;
- ret = sdw_slave_transfer_nopm(mstr, &msg, 1);
- if (ret != 1) {
- dev_err(&mstr->dev, "Clock Stop prepare failed\n");
- return -EBUSY;
- }
+ *slave_index = i + 1;
+ return mstr->sdw_addr[i].slave;
}
+ return NULL;
+}
+
+/*
+ * Wait till clock stop prepare/de-prepare is finished. Prepare is done
+ * for all modes; de-prepare only for Slaves resuming from clock stop mode 0.
+ */
+static void sdw_wait_for_clk_prep(struct sdw_master *mstr)
+{
+ int ret;
+ struct sdw_msg msg;
+ u8 buf[1] = {0};
+ int timeout = 0;
+
+ /* Create message to read clock stop status, its broadcast message. */
+ msg.ssp_tag = 0;
+ msg.flag = SDW_MSG_FLAG_READ;
+ msg.len = 1;
+ msg.buf = &buf[0];
+ msg.slave_addr = SDW_SLAVE_BDCAST_ADDR;
+ msg.addr_page1 = 0x0;
+ msg.addr_page2 = 0x0;
+ msg.addr = SDW_SCP_STAT;
+ buf[0] = 0xFF;
/*
- * Once clock stop prepare bit is set, broadcast the message to read
- * ClockStop_NotFinished bit from SCP_Stat, till we read it as 11
- * we dont exit loop. We wait for definite time before retrying
- * if its simple clock stop it will be always 1, while for other
- * they will driver 0 on bus so we wont get 1. In total we are
- * waiting 1 sec before we timeout.
+	 * Once all the Slaves are written with the prepare bit, broadcast
+	 * a read of the SCP_STAT register and poll the ClockStopNotFinished
+	 * bit until it reads as 0. We currently time out after 1 sec; even
+	 * if the bit is not read as 0 by then, the controller can still
+	 * stop the clock after a warning.
*/
do {
- buf[0] = 0xFF;
- msg.ssp_tag = 0;
- msg.flag = SDW_MSG_FLAG_READ;
- msg.addr = SDW_SCP_STAT;
- msg.len = 1;
- msg.buf = buf;
- msg.slave_addr = 15;
- msg.addr_page1 = 0x0;
- msg.addr_page2 = 0x0;
+ /*
+			 * Ideally this should not fail, but even if it fails
+			 * in an exceptional situation we go ahead with clock stop
+ */
ret = sdw_slave_transfer_nopm(mstr, &msg, 1);
- if (ret != 1)
- goto prepare_failed;
+
+ if (ret != 1) {
+ WARN_ONCE(1, "Clock stop status read failed\n");
+ break;
+ }
if (!(buf[0] & SDW_SCP_STAT_CLK_STP_NF_MASK))
- break;
- msleep(100);
+ break;
+
+ /*
+			 * TODO: Find the exact requirement from the spec.
+			 * Since we are in suspend we should not sleep for long;
+			 * ideally a Slave should be ready to stop the clock
+			 * within a few ms. So sleep for a short time and use
+			 * more loop iterations. This is not harmful: if the
+			 * Slave is ready, the loop terminates early.
+ */
+ msleep(2);
timeout++;
- } while (timeout != 11);
- /* If we are trying to stop and prepare failed its not ok
- */
- if (!(buf[0] & SDW_SCP_STAT_CLK_STP_NF_MASK)) {
+
+ } while (timeout != 500);
+
+	if (!(buf[0] & SDW_SCP_STAT_CLK_STP_NF_MASK))
dev_info(&mstr->dev, "Clock stop prepare done\n");
- return 0;
- /* If we are trying to resume and un-prepare failes its ok
- * since codec might be down during suspned and will
- * start afresh after resuming
+ else
+		WARN_ONCE(1, "Some Slaves prepare unsuccessful\n");
+}
+
+/**
+ * sdw_master_prep_for_clk_stop: Prepare all the Slaves for clock stop.
+ *			Iterate through each of the enumerated Slaves.
+ *			Prepare each Slave according to the clock stop
+ *			mode supported by the Slave. Use the dynamic value
+ *			from the Slave callback if registered, else the
+ *			static values from the registered Slave capabilities.
+ * 1. Get clock stop mode for each Slave.
+ * 2. Call pre_prepare callback of each Slave if
+ * registered.
+ * 3. Prepare each Slave for clock stop
+ * 4. Broadcast the Read message to make sure
+ * all Slaves are prepared for clock stop.
+ * 5. Call post_prepare callback of each Slave if
+ * registered.
+ *
+ * @mstr: Master handle for which clock state has to be changed.
+ *
+ * Returns 0
+ */
+int sdw_master_prep_for_clk_stop(struct sdw_master *mstr)
+{
+ struct sdw_slv_capabilities *cap;
+ enum sdw_clk_stop_mode clock_stop_mode;
+ int ret = 0;
+ struct sdw_slv *slave = NULL;
+ int slv_index = 1;
+
+ /*
+ * Get all the Slaves registered to the master driver for preparing
+ * for clock stop. Start from Slave with logical address as 1.
*/
- } else if (!stop) {
- dev_info(&mstr->dev, "Some Slaves un-prepare un-successful\n");
- return 0;
+ while ((slave = get_slave_for_prep_deprep(mstr, &slv_index)) != NULL) {
+
+ cap = &slave->sdw_slv_cap;
+
+ clock_stop_mode = sdw_get_clk_stp_mode(slave);
+
+ /*
+ * Call the pre clock stop prepare, if Slave requires.
+ */
+ if (slave->driver && slave->driver->pre_clk_stop_prep) {
+ ret = slave->driver->pre_clk_stop_prep(slave,
+ clock_stop_mode, true);
+
+ /* If it fails we still continue */
+ if (ret)
+ dev_warn(&mstr->dev, "Pre prepare failed for Slave %d\n",
+ slave->slv_number);
+ }
+
+ sdw_prep_slave_for_clk_stp(mstr, slave, clock_stop_mode, true);
+ }
+
+ /* Wait till prepare for all Slaves is finished */
+ /*
+ * We should continue even if the prepare fails. Clock stop
+ * prepare failure on Slaves, should not impact the broadcasting
+ * of ClockStopNow.
+ */
+ sdw_wait_for_clk_prep(mstr);
+
+ slv_index = 1;
+ while ((slave = get_slave_for_prep_deprep(mstr, &slv_index)) != NULL) {
+
+ cap = &slave->sdw_slv_cap;
+
+ clock_stop_mode = sdw_get_clk_stp_mode(slave);
+
+ if (slave->driver && slave->driver->post_clk_stop_prep) {
+ ret = slave->driver->post_clk_stop_prep(slave,
+ clock_stop_mode,
+ true);
+ /*
+ * Even if Slave fails we continue with other
+ * Slaves. This should never happen ideally.
+ */
+ if (ret)
+ dev_err(&mstr->dev, "Post prepare failed for Slave %d\n",
+ slave->slv_number);
+ }
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(sdw_master_prep_for_clk_stop);
+
+/**
+ * sdw_mstr_deprep_after_clk_start: De-prepare all the Slaves
+ *		exiting clock stop mode 0 after the clock resumes. The clock
+ *		is already resumed before this is called. De-prepare is done
+ *		here only for Slaves that were in ClockStop mode0; for Slaves
+ *		that were in ClockStop mode1 it is done after they re-enumerate,
+ *		not here as part of the Master resume.
+ * 1. Get clock stop mode for each Slave its exiting from
+ * 2. Call pre_prepare callback of each Slave exiting from
+ * clock stop mode 0.
+ * 3. De-Prepare each Slave exiting from Clock Stop mode0
+ * 4. Broadcast the Read message to make sure
+ * all Slaves are de-prepared for clock stop.
+ * 5. Call post_prepare callback of each Slave exiting from
+ * clock stop mode0
+ *
+ *
+ * @mstr: Master handle
+ *
+ * Returns 0
+ */
+int sdw_mstr_deprep_after_clk_start(struct sdw_master *mstr)
+{
+ struct sdw_slv_capabilities *cap;
+ enum sdw_clk_stop_mode clock_stop_mode;
+ int ret = 0;
+ struct sdw_slv *slave = NULL;
+	/* De-preparing after clock restart, so not preparing for stop */
+	bool stop = false;
+ int slv_index = 1;
+
+ while ((slave = get_slave_for_prep_deprep(mstr, &slv_index)) != NULL) {
+
+ cap = &slave->sdw_slv_cap;
+
+ /* Get the clock stop mode from which Slave is exiting */
+ clock_stop_mode = sdw_get_clk_stp_mode(slave);
+
+ /*
+ * Slave is exiting from Clock stop mode 1, De-prepare
+ * is optional based on capability, and it has to be done
+ * after Slave is enumerated. So nothing to be done
+ * here.
+ */
+ if (clock_stop_mode == SDW_CLOCK_STOP_MODE_1)
+ continue;
+ /*
+ * Call the pre clock stop prepare, if Slave requires.
+ */
+ if (slave->driver && slave->driver->pre_clk_stop_prep)
+ ret = slave->driver->pre_clk_stop_prep(slave,
+ clock_stop_mode, false);
+
+ /* If it fails we still continue */
+ if (ret)
+ dev_warn(&mstr->dev, "Pre de-prepare failed for Slave %d\n",
+ slave->slv_number);
+
+ sdw_prep_slave_for_clk_stp(mstr, slave, clock_stop_mode, false);
}
-prepare_failed:
- dev_err(&mstr->dev, "Clock Stop prepare failed\n");
- return -EBUSY;
+ /*
+ * Wait till prepare is finished for all the Slaves.
+ */
+ sdw_wait_for_clk_prep(mstr);
+
+ slv_index = 1;
+ while ((slave = get_slave_for_prep_deprep(mstr, &slv_index)) != NULL) {
+
+ cap = &slave->sdw_slv_cap;
+
+ clock_stop_mode = sdw_get_clk_stp_mode(slave);
+
+ /*
+ * Slave is exiting from Clock stop mode 1, De-prepare
+ * is optional based on capability, and it has to be done
+ * after Slave is enumerated.
+ */
+ if (clock_stop_mode == SDW_CLOCK_STOP_MODE_1)
+ continue;
+ if (slave->driver && slave->driver->post_clk_stop_prep) {
+ ret = slave->driver->post_clk_stop_prep(slave,
+ clock_stop_mode,
+ stop);
+ /*
+ * Even if Slave fails we continue with other
+ * Slaves. This should never happen ideally.
+ */
+ if (ret)
+ dev_err(&mstr->dev, "Post de-prepare failed for Slave %d\n",
+ slave->slv_number);
+ }
+ }
+ return 0;
}
-EXPORT_SYMBOL_GPL(sdw_prepare_for_clock_change);
+EXPORT_SYMBOL_GPL(sdw_mstr_deprep_after_clk_start);
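+
+/*
+ * Illustrative call order (a sketch, assuming a Master controller driver
+ * holding a handle 'mstr'): the clock stop helpers above would typically
+ * be used from the controller PM paths roughly as
+ *
+ *	suspend:
+ *		sdw_master_prep_for_clk_stop(mstr);
+ *		sdw_master_stop_clock(mstr);
+ *	resume, after the controller restarts the clock:
+ *		sdw_mstr_deprep_after_clk_start(mstr);
+ */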
+
struct sdw_master *sdw_get_master(int nr)
{
diff --git a/drivers/sdw/sdw_bwcalc.c b/drivers/sdw/sdw_bwcalc.c
index cafaccbeea3a..7ebb26756f59 100644
--- a/drivers/sdw/sdw_bwcalc.c
+++ b/drivers/sdw/sdw_bwcalc.c
@@ -25,37 +25,38 @@
#include <linux/sdw/sdw_registers.h>
-#define MAXCLOCKFREQ 6
-#ifdef CONFIG_SND_SOC_SVFPGA
-/* For PDM Capture, frameshape used is 50x10 */
-int rows[MAX_NUM_ROWS] = {50, 100, 48, 60, 64, 72, 75, 80, 90,
- 96, 125, 144, 147, 120, 128, 150,
+#ifndef CONFIG_SND_SOC_SVFPGA /* Original */
+#if IS_ENABLED(CONFIG_SND_SOC_INTEL_CNL_FPGA)
+int rows[MAX_NUM_ROWS] = {48, 50, 60, 64, 72, 75, 80, 90,
+ 96, 125, 144, 147, 100, 120, 128, 150,
160, 180, 192, 200, 240, 250, 256};
+#define SDW_DEFAULT_SSP 50
+#else
+int rows[MAX_NUM_ROWS] = {125, 64, 48, 50, 60, 72, 75, 80, 90,
+ 96, 144, 147, 100, 120, 128, 150,
+ 160, 180, 192, 200, 240, 250, 256};
+#define SDW_DEFAULT_SSP 24
+#endif /* IS_ENABLED(CONFIG_SND_SOC_INTEL_CNL_FPGA) */
-int cols[MAX_NUM_COLS] = {10, 2, 4, 6, 8, 12, 14, 16};
-
-int clock_freq[MAXCLOCKFREQ] = {19200000, 19200000,
- 19200000, 19200000,
- 19200000, 19200000};
+int cols[MAX_NUM_COLS] = {2, 4, 6, 8, 10, 12, 14, 16};
#else
-/* TBD: Currently we are using 100x2 as frame shape. to be removed later */
-int rows[MAX_NUM_ROWS] = {100, 48, 50, 60, 64, 72, 75, 80, 90,
+/* For PDM Capture, frameshape used is 50x10 */
+int rows[MAX_NUM_ROWS] = {50, 100, 48, 60, 64, 72, 75, 80, 90,
96, 125, 144, 147, 120, 128, 150,
160, 180, 192, 200, 240, 250, 256};
-int cols[MAX_NUM_COLS] = {2, 4, 6, 8, 10, 12, 14, 16};
+int cols[MAX_NUM_COLS] = {10, 2, 4, 6, 8, 12, 14, 16};
+#define SDW_DEFAULT_SSP 50
+#endif
/*
* TBD: Get supported clock frequency from ACPI and store
* it in master data structure.
*/
-/* Currently only 9.6MHz clock frequency used */
-int clock_freq[MAXCLOCKFREQ] = {9600000, 9600000,
- 9600000, 9600000,
- 9600000, 9600000};
-#endif
+#define MAXCLOCKDIVS 1
+int clock_div[MAXCLOCKDIVS] = {1};
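+
+/*
+ * Note (illustrative): the candidate bus clocks evaluated during bandwidth
+ * allocation are (base_clk_freq * 2) / clock_div[i]; e.g. with a 9.6 MHz
+ * base clock and the single divider of 1 above, this gives a 19.2 MHz
+ * bus clock.
+ */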
struct sdw_num_to_col sdw_num_col_mapping[MAX_NUM_COLS] = {
{0, 2}, {1, 4}, {2, 6}, {3, 8}, {4, 10}, {5, 12}, {6, 14}, {7, 16},
@@ -117,19 +118,8 @@ int sdw_mstr_bw_init(struct sdw_bus *sdw_bs)
sdw_bs->frame_freq = 0;
sdw_bs->clk_state = SDW_CLK_STATE_ON;
sdw_mstr_cap = &sdw_bs->mstr->mstr_capabilities;
-#ifdef CONFIG_SND_SOC_SVFPGA
- /* TBD: For PDM capture to be removed later */
- sdw_bs->clk_freq = 9.6 * 1000 * 1000 * 2;
- sdw_mstr_cap->base_clk_freq = 9.6 * 1000 * 1000 * 2;
-#else
- /* TBD: Base Clock frequency should be read from
- * master capabilities
- * Currenly hardcoding to 9.6MHz
- */
- sdw_bs->clk_freq = 9.6 * 1000 * 1000;
- sdw_mstr_cap->base_clk_freq = 9.6 * 1000 * 1000;
+ sdw_bs->clk_freq = (sdw_mstr_cap->base_clk_freq * 2);
-#endif
return 0;
}
EXPORT_SYMBOL_GPL(sdw_mstr_bw_init);
@@ -203,9 +193,8 @@ int sdw_lcm(int num1, int num2)
* transport and port parameters.
*/
int sdw_cfg_slv_params(struct sdw_bus *mstr_bs,
- struct sdw_slave_runtime *slv_rt,
struct sdw_transport_params *t_slv_params,
- struct sdw_port_params *p_slv_params)
+ struct sdw_port_params *p_slv_params, int slv_number)
{
struct sdw_msg wr_msg, wr_msg1, rd_msg;
int ret = 0;
@@ -241,7 +230,7 @@ int sdw_cfg_slv_params(struct sdw_bus *mstr_bs,
wbuf[2] = ((t_slv_params->sample_interval - 1) >> 8) &
SDW_DPN_SAMPLECTRL1_LOW_MASK; /* DPN_SampleCtrl2 */
wbuf[3] = t_slv_params->offset1; /* DPN_OffsetCtrl1 */
- wbuf[4] = t_slv_params->offset2; /* DPN_OffsetCtrl1 */
+ wbuf[4] = t_slv_params->offset2; /* DPN_OffsetCtrl2 */
/* DPN_HCtrl */
wbuf[5] = (t_slv_params->hstop | (t_slv_params->hstart << 4));
wbuf[6] = t_slv_params->blockpackingmode; /* DPN_BlockCtrl3 */
@@ -256,21 +245,18 @@ int sdw_cfg_slv_params(struct sdw_bus *mstr_bs,
rd_msg.ssp_tag = 0x0;
rd_msg.flag = SDW_MSG_FLAG_READ;
rd_msg.len = 1;
- rd_msg.slave_addr = slv_rt->slave->slv_number;
+ rd_msg.slave_addr = slv_number;
+
rd_msg.buf = rbuf;
rd_msg.addr_page1 = 0x0;
rd_msg.addr_page2 = 0x0;
-/* Dont program slave params for the Aggregation.
- * Its with master loop back
- */
-#ifndef CONFIG_SND_SOC_MXFPGA
+
ret = sdw_slave_transfer(mstr_bs->mstr, &rd_msg, 1);
if (ret != 1) {
ret = -EINVAL;
dev_err(&mstr_bs->mstr->dev, "Register transfer failed\n");
goto out;
}
-#endif
wbuf1[0] = (p_slv_params->port_flow_mode |
(p_slv_params->port_data_mode <<
@@ -295,7 +281,8 @@ int sdw_cfg_slv_params(struct sdw_bus *mstr_bs,
#else
wr_msg.len = (7 + (1 * (t_slv_params->blockgroupcontrol_valid)));
#endif
- wr_msg.slave_addr = slv_rt->slave->slv_number;
+
+ wr_msg.slave_addr = slv_number;
wr_msg.buf = &wbuf[0 + (1 * (!t_slv_params->blockgroupcontrol_valid))];
wr_msg.addr_page1 = 0x0;
wr_msg.addr_page2 = 0x0;
@@ -303,14 +290,12 @@ int sdw_cfg_slv_params(struct sdw_bus *mstr_bs,
wr_msg1.ssp_tag = 0x0;
wr_msg1.flag = SDW_MSG_FLAG_WRITE;
wr_msg1.len = 2;
- wr_msg1.slave_addr = slv_rt->slave->slv_number;
+
+ wr_msg1.slave_addr = slv_number;
wr_msg1.buf = &wbuf1[0];
wr_msg1.addr_page1 = 0x0;
wr_msg1.addr_page2 = 0x0;
-/* Dont program slave params for the Aggregation.
- * Its with master loop back
- */
-#ifndef CONFIG_SND_SOC_MXFPGA
+
ret = sdw_slave_transfer(mstr_bs->mstr, &wr_msg, 1);
if (ret != 1) {
ret = -EINVAL;
@@ -326,7 +311,6 @@ int sdw_cfg_slv_params(struct sdw_bus *mstr_bs,
goto out;
}
out:
-#endif
return ret;
}
@@ -370,119 +354,16 @@ int sdw_cfg_mstr_params(struct sdw_bus *mstr_bs,
return 0;
}
-
-/*
- * sdw_cfg_mstr_slv - returns Success
- * -EINVAL - In case of error.
- *
- *
- * This function call master/slave transport/port
- * params configuration API's, called from sdw_bus_calc_bw
- * & sdw_bus_calc_bw_dis API's.
- */
-int sdw_cfg_mstr_slv(struct sdw_bus *sdw_mstr_bs,
- struct sdw_mstr_runtime *sdw_mstr_bs_rt,
- bool is_master)
-{
- struct sdw_transport_params *t_params, *t_slv_params;
- struct sdw_port_params *p_params, *p_slv_params;
- struct sdw_slave_runtime *slv_rt = NULL;
- struct sdw_port_runtime *port_rt, *port_slv_rt;
- int ret = 0;
-
- if (is_master) {
- /* should not compute any transport params */
- if (sdw_mstr_bs_rt->rt_state == SDW_STATE_UNPREPARE_RT)
- return 0;
-
- list_for_each_entry(port_rt,
- &sdw_mstr_bs_rt->port_rt_list, port_node) {
-
- /* Transport and port parameters */
- t_params = &port_rt->transport_params;
- p_params = &port_rt->port_params;
-
- p_params->num = port_rt->port_num;
- p_params->word_length =
- sdw_mstr_bs_rt->stream_params.bps;
- p_params->port_flow_mode = 0x0; /* Isochronous Mode */
- p_params->port_data_mode = 0x0; /* Normal Mode */
-
- /* Configure xport params and port params for master */
- ret = sdw_cfg_mstr_params(sdw_mstr_bs,
- t_params, p_params);
- if (ret < 0)
- return ret;
-
- /* Since one port per master runtime,
- * breaking port_list loop
- * TBD: to be extended for multiple port support
- */
-
- break;
- }
-
- } else {
-
-
- list_for_each_entry(slv_rt,
- &sdw_mstr_bs_rt->slv_rt_list, slave_node) {
-
- if (slv_rt->slave == NULL)
- break;
-
- /* should not compute any transport params */
- if (slv_rt->rt_state == SDW_STATE_UNPREPARE_RT)
- continue;
-
- list_for_each_entry(port_slv_rt,
- &slv_rt->port_rt_list, port_node) {
-
- /* Fill in port params here */
- port_slv_rt->port_params.num =
- port_slv_rt->port_num;
- port_slv_rt->port_params.word_length =
- slv_rt->stream_params.bps;
- /* Isochronous Mode */
- port_slv_rt->port_params.port_flow_mode = 0x0;
- /* Normal Mode */
- port_slv_rt->port_params.port_data_mode = 0x0;
- t_slv_params = &port_slv_rt->transport_params;
- p_slv_params = &port_slv_rt->port_params;
-
- /* Configure xport & port params for slave */
- ret = sdw_cfg_slv_params(sdw_mstr_bs,
- slv_rt, t_slv_params, p_slv_params);
- if (ret < 0)
- return ret;
-
- /* Since one port per slave runtime,
- * breaking port_list loop
- * TBD: to be extended for multiple
- * port support
- */
-
- break;
- }
- }
-
- }
-
- return 0;
-}
-
-
/*
- * sdw_cpy_params_mstr_slv - returns Success
- * -EINVAL - In case of error.
- *
+ * sdw_cfg_params_mstr_slv - returns Success
*
* This function copies/configure master/slave transport &
- * port params to alternate bank.
+ * port params.
*
*/
-int sdw_cpy_params_mstr_slv(struct sdw_bus *sdw_mstr_bs,
- struct sdw_mstr_runtime *sdw_mstr_bs_rt)
+int sdw_cfg_params_mstr_slv(struct sdw_bus *sdw_mstr_bs,
+ struct sdw_mstr_runtime *sdw_mstr_bs_rt,
+ bool state_check)
{
struct sdw_slave_runtime *slv_rt = NULL;
struct sdw_port_runtime *port_rt, *port_slv_rt;
@@ -496,6 +377,11 @@ int sdw_cpy_params_mstr_slv(struct sdw_bus *sdw_mstr_bs,
if (slv_rt->slave == NULL)
break;
+ /* configure transport params based on state */
+ if ((state_check) &&
+ (slv_rt->rt_state == SDW_STATE_UNPREPARE_RT))
+ continue;
+
list_for_each_entry(port_slv_rt,
&slv_rt->port_rt_list, port_node) {
@@ -511,20 +397,17 @@ int sdw_cpy_params_mstr_slv(struct sdw_bus *sdw_mstr_bs,
p_slv_params = &port_slv_rt->port_params;
/* Configure xport & port params for slave */
- ret = sdw_cfg_slv_params(sdw_mstr_bs,
- slv_rt, t_slv_params, p_slv_params);
+ ret = sdw_cfg_slv_params(sdw_mstr_bs, t_slv_params,
+ p_slv_params, slv_rt->slave->slv_number);
if (ret < 0)
return ret;
- /*
- * Since one port per slave runtime,
- * breaking port_list loop
- * TBD: to be extended for multiple port support
- */
- break;
}
}
+ if ((state_check) &&
+ (sdw_mstr_bs_rt->rt_state == SDW_STATE_UNPREPARE_RT))
+ return 0;
list_for_each_entry(port_rt,
&sdw_mstr_bs_rt->port_rt_list, port_node) {
@@ -544,10 +427,6 @@ int sdw_cpy_params_mstr_slv(struct sdw_bus *sdw_mstr_bs,
if (ret < 0)
return ret;
- /* Since one port per slave runtime, breaking port_list loop
- * TBD: to be extended for multiple port support
- */
- break;
}
return 0;
@@ -608,10 +487,6 @@ int sdw_cfg_slv_enable_disable(struct sdw_bus *mstr_bs,
*/
/* 2. slave port enable */
-/* Dont program slave params for the Aggregation.
- * Its with master loop back
- */
-#ifndef CONFIG_SND_SOC_MXFPGA
ret = sdw_slave_transfer(mstr_bs->mstr, &rd_msg, 1);
if (ret != 1) {
ret = -EINVAL;
@@ -638,7 +513,6 @@ int sdw_cfg_slv_enable_disable(struct sdw_bus *mstr_bs,
"Register transfer failed\n");
goto out;
}
-#endif
/*
* 3. slave port enable post pre
* --> callback
@@ -653,10 +527,6 @@ int sdw_cfg_slv_enable_disable(struct sdw_bus *mstr_bs,
* --> callback
* --> no callback available
*/
-/* Dont program slave params for the Aggregation.
- * Its with master loop back
- */
-#ifndef CONFIG_SND_SOC_MXFPGA
/* 2. slave port disable */
ret = sdw_slave_transfer(mstr_bs->mstr, &rd_msg, 1);
@@ -685,7 +555,7 @@ int sdw_cfg_slv_enable_disable(struct sdw_bus *mstr_bs,
"Register transfer failed\n");
goto out;
}
-#endif
+
/*
* 3. slave port enable post unpre
* --> callback
@@ -695,9 +565,7 @@ int sdw_cfg_slv_enable_disable(struct sdw_bus *mstr_bs,
slv_rt_strm->rt_state = SDW_STATE_DISABLE_RT;
}
-#ifndef CONFIG_SND_SOC_MXFPGA
out:
-#endif
return ret;
}
@@ -783,13 +651,6 @@ int sdw_en_dis_mstr_slv(struct sdw_bus *sdw_mstr_bs,
if (ret < 0)
return ret;
- /*
- * Since one port per slave runtime,
- * breaking port_list loop
- * TBD: to be extended for multiple port support
- */
- break;
-
}
break;
@@ -811,13 +672,6 @@ int sdw_en_dis_mstr_slv(struct sdw_bus *sdw_mstr_bs,
if (ret < 0)
return ret;
- /*
- * Since one port per master runtime,
- * breaking port_list loop
- * TBD: to be extended for multiple port support
- */
- break;
-
}
}
@@ -858,13 +712,6 @@ int sdw_en_dis_mstr_slv_state(struct sdw_bus *sdw_mstr_bs,
if (ret < 0)
return ret;
- /*
- * Since one port per slave runtime,
- * breaking port_list loop
- * TBD: to be extended for multiple
- * port support
- */
- break;
}
}
}
@@ -879,13 +726,6 @@ int sdw_en_dis_mstr_slv_state(struct sdw_bus *sdw_mstr_bs,
if (ret < 0)
return ret;
- /*
- * Since one port per master runtime,
- * breaking port_list loop
- * TBD: to be extended for multiple port support
- */
-
- break;
}
}
@@ -902,41 +742,109 @@ int sdw_en_dis_mstr_slv_state(struct sdw_bus *sdw_mstr_bs,
* clock frequency.
*/
int sdw_get_clock_frmshp(struct sdw_bus *sdw_mstr_bs, int *frame_int,
- int *col, int *row)
+ struct sdw_mstr_runtime *sdw_mstr_rt)
{
- int i, rc, clock_reqd = 0, frame_interval = 0, frame_frequency = 0;
- int sel_row = 0, sel_col = 0;
+ struct sdw_master_capabilities *sdw_mstr_cap = NULL;
+ struct sdw_slv_dpn_capabilities *sdw_slv_dpn_cap = NULL;
+ struct port_audio_mode_properties *mode_prop = NULL;
+ struct sdw_slave_runtime *slv_rt = NULL;
+ struct sdw_port_runtime *port_slv_rt = NULL;
+ int i, j, rc;
+ int clock_reqd = 0, frame_interval = 0, frame_frequency = 0;
+ int sel_row = 0, sel_col = 0, pn = 0;
+ int value;
bool clock_ok = false;
+ sdw_mstr_cap = &sdw_mstr_bs->mstr->mstr_capabilities;
+
/*
* Find nearest clock frequency needed by master for
* given bandwidth
*/
-
- /*
- * TBD: Need to run efficient algorithm to make sure we have
- * only 1 to 10 percent of control bandwidth usage
- */
- for (i = 0; i < MAXCLOCKFREQ; i++) {
+ for (i = 0; i < MAXCLOCKDIVS; i++) {
/* TBD: Check why 3000 */
- if ((clock_freq[i] <= sdw_mstr_bs->bandwidth) ||
- ((clock_freq[i] % 3000) != 0))
+ if ((((sdw_mstr_cap->base_clk_freq * 2) / clock_div[i]) <=
+ sdw_mstr_bs->bandwidth) ||
+ ((((sdw_mstr_cap->base_clk_freq * 2) / clock_div[i])
+ % 3000) != 0))
continue;
- clock_reqd = clock_freq[i];
+
+ clock_reqd = ((sdw_mstr_cap->base_clk_freq * 2) / clock_div[i]);
/*
- * TBD: Check all the slave device capabilities
+ * Check all the slave device capabilities
* here and find whether given frequency is
* supported by all slaves
*/
+ list_for_each_entry(slv_rt, &sdw_mstr_rt->slv_rt_list,
+ slave_node) {
+
+ /* check for valid slave */
+ if (slv_rt->slave == NULL)
+ break;
+
+ /* check clock req for each port */
+ list_for_each_entry(port_slv_rt,
+ &slv_rt->port_rt_list, port_node) {
+
+ pn = port_slv_rt->port_num;
+
+
+ sdw_slv_dpn_cap =
+ &slv_rt->slave->sdw_slv_cap.sdw_dpn_cap[pn];
+ mode_prop = sdw_slv_dpn_cap->mode_properties;
+
+ /*
+ * TBD: Indentation to be fixed,
+ * code refactoring to be considered.
+ */
+				if (mode_prop->num_freq_configs) {
+					/* Unsupported until a match is found */
+					clock_ok = false;
+					for (j = 0; j <
+					mode_prop->num_freq_configs; j++) {
+						value =
+						mode_prop->freq_supported[j];
+						if (clock_reqd == value) {
+							clock_ok = true;
+							break;
+						}
+					}
+
+ } else {
+ if ((clock_reqd <
+ mode_prop->min_frequency) ||
+ (clock_reqd >
+ mode_prop->max_frequency)) {
+ clock_ok = false;
+ } else
+ clock_ok = true;
+ }
+
+ /* Go for next clock frequency */
+ if (!clock_ok)
+ break;
+ }
+
+ /*
+			 * Don't check the next Slave; go for the next clock
+			 * frequency
+ */
+ if (!clock_ok)
+ break;
+ }
+
+ /* check for next clock divider */
+ if (!clock_ok)
+ continue;
/* Find frame shape based on bandwidth per controller */
- /*
- * TBD: Need to run efficient algorithm to make sure we have
- * only 1 to 10 percent of control bandwidth usage
- */
- for (rc = 0; rc <= MAX_NUM_ROW_COLS; rc++) {
+ for (rc = 0; rc < MAX_NUM_ROW_COLS; rc++) {
frame_interval =
sdw_core.rowcolcomb[rc].row *
sdw_core.rowcolcomb[rc].col;
@@ -952,21 +860,27 @@ int sdw_get_clock_frmshp(struct sdw_bus *sdw_mstr_bs, int *frame_int,
break;
}
+ /* Valid frameshape not found, check for next clock freq */
+ if (rc == MAX_NUM_ROW_COLS)
+ continue;
+
sel_row = sdw_core.rowcolcomb[rc].row;
sel_col = sdw_core.rowcolcomb[rc].col;
sdw_mstr_bs->frame_freq = frame_frequency;
sdw_mstr_bs->clk_freq = clock_reqd;
+ sdw_mstr_bs->clk_div = clock_div[i];
clock_ok = false;
*frame_int = frame_interval;
- *col = sel_col;
- *row = sel_row;
sdw_mstr_bs->col = sel_col;
sdw_mstr_bs->row = sel_row;
- break;
-
+ return 0;
}
+	/* None of the clock frequencies matched, return error */
+ if (i == MAXCLOCKDIVS)
+ return -EINVAL;
+
return 0;
}
@@ -982,74 +896,95 @@ int sdw_compute_sys_interval(struct sdw_bus *sdw_mstr_bs,
int frame_interval)
{
struct sdw_master *sdw_mstr = sdw_mstr_bs->mstr;
- struct sdw_mstr_runtime *sdw_mstr_bs_rt;
- struct sdw_transport_params *t_params;
- struct sdw_port_runtime *port_rt;
+ struct sdw_mstr_runtime *sdw_mstr_rt = NULL;
+ struct sdw_slave_runtime *slv_rt = NULL;
+ struct sdw_transport_params *t_params = NULL, *t_slv_params = NULL;
+ struct sdw_port_runtime *port_rt, *port_slv_rt;
int lcmnum1 = 0, lcmnum2 = 0, div = 0, lcm = 0;
+ int sample_interval;
/*
* once you got bandwidth frame shape for bus,
* run a loop for all the active streams running
- * on bus and compute sample_interval & other transport parameters.
+ * on bus and compute stream interval & sample_interval.
*/
- list_for_each_entry(sdw_mstr_bs_rt,
+ list_for_each_entry(sdw_mstr_rt,
&sdw_mstr->mstr_rt_list, mstr_node) {
- if (sdw_mstr_bs_rt->mstr == NULL)
+ if (sdw_mstr_rt->mstr == NULL)
break;
- /* should not compute any transport params */
- if (sdw_mstr_bs_rt->rt_state == SDW_STATE_UNPREPARE_RT)
- continue;
+ /*
+ * Calculate sample interval for stream
+ * running on given master.
+ */
+ if (sdw_mstr_rt->stream_params.rate)
+ sample_interval = (sdw_mstr_bs->clk_freq/
+ sdw_mstr_rt->stream_params.rate);
+ else
+ return -EINVAL;
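+
+		/*
+		 * Worked example (illustrative): with a 19.2 MHz bus clock
+		 * and a 48 kHz stream, sample_interval = 19200000 / 48000
+		 * = 400 clock cycles per sample.
+		 */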
+ /* Run port loop to assign sample interval per port */
list_for_each_entry(port_rt,
- &sdw_mstr_bs_rt->port_rt_list, port_node) {
+ &sdw_mstr_rt->port_rt_list, port_node) {
t_params = &port_rt->transport_params;
/*
- * Current Assumption:
- * One port per bus runtime structure
+			 * Assign the sample interval to each port's transport
+			 * parameters. The assumption is that the sample interval
+			 * is the same for every port of a given master.
*/
- /* Calculate sample interval */
-#ifdef CONFIG_SND_SOC_SVFPGA
- t_params->sample_interval =
- ((sdw_mstr_bs->clk_freq/
- sdw_mstr_bs_rt->stream_params.rate));
-#else
- t_params->sample_interval =
- ((sdw_mstr_bs->clk_freq/
- sdw_mstr_bs_rt->stream_params.rate) * 2);
+ t_params->sample_interval = sample_interval;
+ }
-#endif
- /* Only BlockPerPort supported */
- t_params->blockpackingmode = 0;
- t_params->lanecontrol = 0;
+ /* Calculate LCM */
+ lcmnum2 = sample_interval;
+ if (!lcmnum1)
+ lcmnum1 = sdw_lcm(lcmnum2, lcmnum2);
+ else
+ lcmnum1 = sdw_lcm(lcmnum1, lcmnum2);
- /* Calculate LCM */
- lcmnum2 = t_params->sample_interval;
- if (!lcmnum1)
- lcmnum1 = sdw_lcm(lcmnum2, lcmnum2);
- else
- lcmnum1 = sdw_lcm(lcmnum1, lcmnum2);
+ /* Run loop for slave per master runtime */
+ list_for_each_entry(slv_rt,
+ &sdw_mstr_rt->slv_rt_list, slave_node) {
- /*
- * Since one port per bus runtime, breaking
- * port_list loop
- * TBD: to be extended for multiple port support
- */
- break;
+ if (slv_rt->slave == NULL)
+ break;
+
+ /* Assign sample interval for each port of slave */
+ list_for_each_entry(port_slv_rt,
+ &slv_rt->port_rt_list, port_node) {
+
+ t_slv_params = &port_slv_rt->transport_params;
+ /* Assign sample interval each port */
+ t_slv_params->sample_interval = sample_interval;
+ }
}
}
+ /*
+	 * If the system interval is already calculated (pause/resume or
+	 * underrun scenario), nothing more to do.
+ */
+ if (sdw_mstr_bs->system_interval)
+ return 0;
+
+ /* Assign frame stream interval */
+ sdw_mstr_bs->stream_interval = lcmnum1;
/* 6. compute system_interval */
if ((sdw_mstr_cap) && (sdw_mstr_bs->clk_freq)) {
div = ((sdw_mstr_cap->base_clk_freq * 2) /
sdw_mstr_bs->clk_freq);
- lcm = sdw_lcm(lcmnum1, frame_interval);
+
+ if ((lcmnum1) && (frame_interval))
+ lcm = sdw_lcm(lcmnum1, frame_interval);
+ else
+ return -EINVAL;
+
sdw_mstr_bs->system_interval = (div * lcm);
}
@@ -1065,6 +1000,25 @@ int sdw_compute_sys_interval(struct sdw_bus *sdw_mstr_bs,
return 0;
}
+/**
+ * sdw_chk_first_node - returns true or false
+ *
+ * This function returns true if the given master runtime is the first
+ * node in the master's runtime list, false otherwise.
+ */
+bool sdw_chk_first_node(struct sdw_mstr_runtime *sdw_mstr_rt,
+ struct sdw_master *sdw_mstr)
+{
+ struct sdw_mstr_runtime *first_rt = NULL;
+
+ first_rt = list_first_entry(&sdw_mstr->mstr_rt_list,
+ struct sdw_mstr_runtime, mstr_node);
+ if (sdw_mstr_rt == first_rt)
+ return true;
+ else
+ return false;
+
+}
/*
* sdw_compute_hstart_hstop - returns Success
@@ -1074,273 +1028,199 @@ int sdw_compute_sys_interval(struct sdw_bus *sdw_mstr_bs,
* This function computes hstart and hstop for running
* streams per master & slaves.
*/
-int sdw_compute_hstart_hstop(struct sdw_bus *sdw_mstr_bs, int sel_col)
+int sdw_compute_hstart_hstop(struct sdw_bus *sdw_mstr_bs)
{
struct sdw_master *sdw_mstr = sdw_mstr_bs->mstr;
- struct sdw_mstr_runtime *sdw_mstr_bs_rt;
+ struct sdw_mstr_runtime *sdw_mstr_rt;
struct sdw_transport_params *t_params = NULL, *t_slv_params = NULL;
struct sdw_slave_runtime *slv_rt = NULL;
struct sdw_port_runtime *port_rt, *port_slv_rt;
- int hstop = 0, hwidth = 0;
- int payload_bw = 0, full_bw = 0, column_needed = 0;
- bool hstop_flag = false;
-
- /* Calculate hwidth, hstart and hstop */
- list_for_each_entry(sdw_mstr_bs_rt,
+ int hstart = 0, hstop = 0;
+ int column_needed = 0;
+ int sel_col = sdw_mstr_bs->col;
+ int group_count = 0, no_of_channels = 0;
+ struct temp_elements *temp, *element;
+ int rates[10];
+ int num, ch_mask, block_offset, i, port_block_offset;
+
+ /* Run loop for all master runtimes for given master */
+ list_for_each_entry(sdw_mstr_rt,
&sdw_mstr->mstr_rt_list, mstr_node) {
- if (sdw_mstr_bs_rt->mstr == NULL)
+ if (sdw_mstr_rt->mstr == NULL)
break;
/* should not compute any transport params */
- if (sdw_mstr_bs_rt->rt_state == SDW_STATE_UNPREPARE_RT)
+ if (sdw_mstr_rt->rt_state == SDW_STATE_UNPREPARE_RT)
continue;
- list_for_each_entry(port_rt,
- &sdw_mstr_bs_rt->port_rt_list, port_node) {
-
- t_params = &port_rt->transport_params;
- t_params->num = port_rt->port_num;
+ /* Perform grouping of streams based on stream rate */
+ if (sdw_mstr_rt == list_first_entry(&sdw_mstr->mstr_rt_list,
+ struct sdw_mstr_runtime, mstr_node))
+ rates[group_count++] = sdw_mstr_rt->stream_params.rate;
+ else {
+ num = group_count;
+ for (i = 0; i < num; i++) {
+ if (sdw_mstr_rt->stream_params.rate == rates[i])
+ break;
- /*
- * 1. find full_bw and payload_bw per stream
- * 2. find h_width per stream
- * 3. find hstart, hstop, block_offset,sub_block_offset
- * Note: full_bw is nothing but sampling interval
- * of stream.
- * payload_bw is serving size no.
- * of channels * bps per stream
- */
- full_bw = sdw_mstr_bs->clk_freq/
- sdw_mstr_bs_rt->stream_params.rate;
- payload_bw =
- sdw_mstr_bs_rt->stream_params.bps *
- sdw_mstr_bs_rt->stream_params.channel_count;
+			}
+
+			/* Rate not seen before, start a new group */
+			if (i == num)
+				rates[group_count++] =
+					sdw_mstr_rt->stream_params.rate;
+ }
+ }
- hwidth = (sel_col * payload_bw + full_bw - 1)/full_bw;
- column_needed += hwidth;
+	/* No stream groups formed, nothing to compute */
+ if (group_count == 0)
+ return 0;
- /*
- * These needs to be done only for
- * 1st entry in link list
- */
- if (!hstop_flag) {
- hstop = sel_col - 1;
- hstop_flag = true;
- }
+ /* Allocate temporary memory holding temp variables */
+ temp = kzalloc((sizeof(struct temp_elements) * group_count),
+ GFP_KERNEL);
+ if (!temp)
+ return -ENOMEM;
- /* Assumption: Only block per port is supported
- * For blockperport:
- * offset1 value = LSB 8 bits of block_offset value
- * offset2 value = MSB 8 bits of block_offset value
- * For blockperchannel:
- * offset1 = LSB 8 bit of block_offset value
- * offset2 = MSB 8 bit of sub_block_offload value
- * if hstart and hstop of different streams in
- * master are different, then block_offset is zero.
- * if not then block_offset value for 2nd stream
- * is block_offset += payload_bw
- */
+ /* Calculate full bandwidth per group */
+ for (i = 0; i < group_count; i++) {
+ element = &temp[i];
+ element->rate = rates[i];
+ element->full_bw = sdw_mstr_bs->clk_freq/element->rate;
+ }
- t_params->hstop = hstop;
-#ifdef CONFIG_SND_SOC_SVFPGA
- /* For PDM capture, 0th col is also used */
- t_params->hstart = 0;
-#else
- t_params->hstart = hstop - hwidth + 1;
-#endif
+ /* Calculate payload bandwidth per group */
+ list_for_each_entry(sdw_mstr_rt,
+ &sdw_mstr->mstr_rt_list, mstr_node) {
- /*
- * TBD: perform this when you have 2 ports
- * and accordingly configure hstart hstop for slave
- * removing for now
- */
-#if 0
- hstop = hstop - hwidth;
-#endif
- /* Since one port per bus runtime,
- * breaking port_list loop
- * TBD: to be extended for multiple port support
- */
+ if (sdw_mstr_rt->mstr == NULL)
break;
- }
- /*
- * Run loop for slave_rt_list for given master_list
- * to compute hstart hstop for slave
- */
- list_for_each_entry(slv_rt,
- &sdw_mstr_bs_rt->slv_rt_list, slave_node) {
+ /* should not compute any transport params */
+ if (sdw_mstr_rt->rt_state == SDW_STATE_UNPREPARE_RT)
+ continue;
- if (slv_rt->slave == NULL)
- break;
+ for (i = 0; i < group_count; i++) {
+ element = &temp[i];
+			if (sdw_mstr_rt->stream_params.rate == element->rate) {
+				element->payload_bw +=
+				sdw_mstr_rt->stream_params.bps *
+				sdw_mstr_rt->stream_params.channel_count;
+				break;
+			}
-		if (slv_rt->rt_state == SDW_STATE_UNPREPARE_RT)
-			continue;
+		}
+
+		/* The stream rate must match one of the groups */
+		if (i == group_count) {
+			kfree(temp);
+			return -EINVAL;
+		}
+ }
- list_for_each_entry(port_slv_rt,
- &slv_rt->port_rt_list, port_node) {
+ /* Calculate hwidth per group and total column needed per master */
+ for (i = 0; i < group_count; i++) {
+ element = &temp[i];
+ element->hwidth =
+ (sel_col * element->payload_bw +
+ element->full_bw - 1)/element->full_bw;
+ column_needed += element->hwidth;
+ }
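+
+	/*
+	 * Worked example (illustrative): a 16-bit stereo 48 kHz group on a
+	 * 9.6 MHz clock with 8 columns selected has payload_bw = 32 and
+	 * full_bw = 200, so hwidth = (8 * 32 + 199) / 200 = 2 columns.
+	 */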
- t_slv_params = &port_slv_rt->transport_params;
- t_slv_params->num = port_slv_rt->port_num;
-
- /*
- * TBD: Needs to be verifid for
- * multiple combination
- * 1. 1 master port, 1 slave rt,
- * 1 port per slave rt -->
- * In this case, use hstart hstop same as master
- * for 1 slave rt
- * 2. 1 master port, 2 slave rt,
- * 1 port per slave rt -->
- * In this case, use hstart hstop same as master
- * for 2 slave rt
- * only offset will change for 2nd slave rt
- * Current assumption is one port per rt,
- * hence no multiple port combination
- * considered.
- */
- t_slv_params->hstop = hstop;
- t_slv_params->hstart = hstop - hwidth + 1;
-
- /* Only BlockPerPort supported */
- t_slv_params->blockpackingmode = 0;
- t_slv_params->lanecontrol = 0;
-
- /*
- * below copy needs to be changed when
- * more than one port is supported
- */
- if (t_params)
- t_slv_params->sample_interval =
- t_params->sample_interval;
-
- /* Since one port per slave runtime,
- * breaking port_list loop
- * TBD: to be extended for multiple
- * port support
- */
- break;
- }
-
- }
- }
-
-#if 0
- /* TBD: To be verified */
+	/* Columns required must not exceed the selected columns */
if (column_needed > sel_col - 1)
- return -EINVAL; /* Error case, check what has gone wrong */
-#endif
+ return -EINVAL;
- return 0;
-}
+ /* Compute hstop */
+ hstop = sel_col - 1;
+ /* Run loop for all groups to compute transport parameters */
+ for (i = 0; i < group_count; i++) {
+ port_block_offset = block_offset = 1;
+ element = &temp[i];
-/*
- * sdw_compute_blk_subblk_offset - returns Success
- *
- *
- * This function computes block offset and sub block
- * offset for running streams per master & slaves.
- */
-int sdw_compute_blk_subblk_offset(struct sdw_bus *sdw_mstr_bs)
-{
- struct sdw_master *sdw_mstr = sdw_mstr_bs->mstr;
- struct sdw_mstr_runtime *sdw_mstr_bs_rt;
- struct sdw_transport_params *t_params, *t_slv_params;
- struct sdw_slave_runtime *slv_rt = NULL;
- struct sdw_port_runtime *port_rt, *port_slv_rt;
- int hstart1 = 0, hstop1 = 0, hstart2 = 0, hstop2 = 0;
- int block_offset = 1;
+ /* Find streams associated with each group */
+ list_for_each_entry(sdw_mstr_rt,
+ &sdw_mstr->mstr_rt_list, mstr_node) {
+ if (sdw_mstr_rt->mstr == NULL)
+ break;
- /* Calculate block_offset and subblock_offset */
- list_for_each_entry(sdw_mstr_bs_rt,
- &sdw_mstr->mstr_rt_list, mstr_node) {
+ /* should not compute any transport params */
+ if (sdw_mstr_rt->rt_state == SDW_STATE_UNPREPARE_RT)
+ continue;
- if (sdw_mstr_bs_rt->mstr == NULL)
- break;
+ if (sdw_mstr_rt->stream_params.rate != element->rate)
+ continue;
- /* should not compute any transport params */
- if (sdw_mstr_bs_rt->rt_state == SDW_STATE_UNPREPARE_RT)
- continue;
+ /* Compute hstart */
+ sdw_mstr_rt->hstart = hstart =
+ hstop - element->hwidth + 1;
+ sdw_mstr_rt->hstop = hstop;
- list_for_each_entry(port_rt,
- &sdw_mstr_bs_rt->port_rt_list, port_node) {
+ /* Assign hstart, hstop, block offset for each port */
+ list_for_each_entry(port_rt,
+ &sdw_mstr_rt->port_rt_list, port_node) {
- t_params = &port_rt->transport_params;
+ t_params = &port_rt->transport_params;
+ t_params->num = port_rt->port_num;
+ t_params->hstart = hstart;
+ t_params->hstop = hstop;
+ t_params->offset1 = port_block_offset;
+ t_params->offset2 = port_block_offset >> 8;
+ /* Only BlockPerPort supported */
+ t_params->blockgroupcontrol_valid = true;
+ t_params->blockgroupcontrol = 0x0;
+ t_params->lanecontrol = 0x0;
+ /* Copy parameters if first node */
+ if (port_rt == list_first_entry
+ (&sdw_mstr_rt->port_rt_list,
+ struct sdw_port_runtime, port_node)) {
- if ((!hstart2) && (!hstop2)) {
- hstart1 = hstart2 = t_params->hstart;
- hstop1 = hstop2 = t_params->hstop;
- /* TBD: Verify this condition */
-#ifdef CONFIG_SND_SOC_SVFPGA
- block_offset = 1;
-#else
- block_offset = 0;
-#endif
- } else {
+ sdw_mstr_rt->hstart = hstart;
+ sdw_mstr_rt->hstop = hstop;
- hstart1 = t_params->hstart;
- hstop1 = t_params->hstop;
+ sdw_mstr_rt->block_offset =
+ port_block_offset;
-#ifndef CONFIG_SND_SOC_SVFPGA
- /* hstart/stop not same */
- if ((hstart1 != hstart2) &&
- (hstop1 != hstop2)) {
- /* TBD: Harcoding to 0, to be removed*/
- block_offset = 0;
- } else {
- /* TBD: Harcoding to 0, to be removed*/
- block_offset = 0;
- }
-#else
- if ((hstart1 != hstart2) &&
- (hstop1 != hstop2)) {
- block_offset = 1;
- } else {
-/* We are doing loopback for the Aggregation so block offset should
- * always remain same. This is not a requirement. This we are doing
- * to test aggregation without codec.
- */
-#ifdef CONFIG_SND_SOC_MXFPGA
- block_offset = 1;
-#else
- block_offset +=
- (sdw_mstr_bs_rt->stream_params.
- bps
- *
- sdw_mstr_bs_rt->stream_params.
- channel_count);
-#endif
}
-#endif
- }
+ /* Get no. of channels running on curr. port */
+ ch_mask = port_rt->channel_mask;
+ no_of_channels = (((ch_mask >> 3) & 1) +
+ ((ch_mask >> 2) & 1) +
+ ((ch_mask >> 1) & 1) +
+ (ch_mask & 1));
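+				/*
+				 * This is effectively a popcount of the 4-bit
+				 * channel mask, e.g. a mask of 0x3 means two
+				 * active channels.
+				 */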
- /*
- * TBD: Hardcding block control group as true,
- * to be changed later
- */
- t_params->blockgroupcontrol_valid = true;
- t_params->blockgroupcontrol = 0x0; /* Hardcoding to 0 */
+ port_block_offset +=
+ sdw_mstr_rt->stream_params.bps *
+ no_of_channels;
+ }
+
+ /* Compute block offset */
+ block_offset += sdw_mstr_rt->stream_params.bps *
+ sdw_mstr_rt->stream_params.channel_count;
/*
- * Since one port per bus runtime,
- * breaking port_list loop
- * TBD: to be extended for multiple port support
+ * Re-assign port_block_offset for next stream
+ * under same group
*/
- break;
+ port_block_offset = block_offset;
}
- /*
- * Run loop for slave_rt_list for given master_list
- * to compute block and sub block offset for slave
- */
+ /* Compute hstop for next group */
+ hstop = hstop - element->hwidth;
+ }
+
+ /* Compute transport params for slave */
+
+ /* Run loop for master runtime streams running on master */
+ list_for_each_entry(sdw_mstr_rt,
+ &sdw_mstr->mstr_rt_list, mstr_node) {
+
+ /* Get block offset from master runtime */
+ port_block_offset = sdw_mstr_rt->block_offset;
+
+ /* Run loop for slave per master runtime */
list_for_each_entry(slv_rt,
- &sdw_mstr_bs_rt->slv_rt_list, slave_node) {
+ &sdw_mstr_rt->slv_rt_list, slave_node) {
if (slv_rt->slave == NULL)
break;
@@ -1348,139 +1228,72 @@ int sdw_compute_blk_subblk_offset(struct sdw_bus *sdw_mstr_bs)
if (slv_rt->rt_state == SDW_STATE_UNPREPARE_RT)
continue;
+ /* Run loop for each port of slave */
list_for_each_entry(port_slv_rt,
&slv_rt->port_rt_list, port_node) {
t_slv_params = &port_slv_rt->transport_params;
+ t_slv_params->num = port_slv_rt->port_num;
- /*
- * TBD: Needs to be verifid for
- * multiple combination
- * 1. 1 master port, 1 slave rt,
- * 1 port per slave rt -->
- * In this case, use block_offset same as
- * master for 1 slave rt
- * 2. 1 master port, 2 slave rt,
- * 1 port per slave rt -->
- * In this case, use block_offset same as
- * master for 1st slave rt and compute for 2nd.
- */
-
- /*
- * Current assumption is one port per rt,
- * hence no multiple port combination.
- * TBD: block offset to be computed for
- * more than 1 slave_rt list.
- */
- t_slv_params->offset1 = block_offset;
- t_slv_params->offset2 = block_offset >> 8;
-
+ /* Assign transport parameters */
+ t_slv_params->hstart = sdw_mstr_rt->hstart;
+ t_slv_params->hstop = sdw_mstr_rt->hstop;
+ t_slv_params->offset1 = port_block_offset;
+ t_slv_params->offset2 = port_block_offset >> 8;
- /*
- * TBD: Hardcding block control group as true,
- * to be changed later
- */
+ /* Only BlockPerPort supported */
t_slv_params->blockgroupcontrol_valid = true;
- /* Hardcoding to 0 */
t_slv_params->blockgroupcontrol = 0x0;
- /* Since one port per slave runtime,
- * breaking port_list loop
- * TBD:to be extended for multiple port support
- */
- break;
+ t_slv_params->lanecontrol = 0x0;
+
+ /* Get no. of channels running on curr. port */
+ ch_mask = port_slv_rt->channel_mask;
+ no_of_channels = (((ch_mask >> 3) & 1) +
+ ((ch_mask >> 2) & 1) +
+ ((ch_mask >> 1) & 1) +
+ (ch_mask & 1));
+
+ /* Increment block offset for next port/slave */
+ port_block_offset += slv_rt->stream_params.bps *
+ no_of_channels;
}
}
}
- return 0;
-}
-
-
-/*
- * sdw_configure_frmshp_bnkswtch - returns Success
- * -EINVAL - In case of error.
- *
- *
- * This function broadcast frameshape on framectrl
- * register and performs bank switch.
- */
-int sdw_configure_frmshp_bnkswtch(struct sdw_bus *mstr_bs, int col, int row)
-{
- struct sdw_msg wr_msg;
- int ret = 0;
- int banktouse, numcol, numrow;
- u8 wbuf[1] = {0};
-
- numcol = sdw_get_col_to_num(col);
- numrow = sdw_get_row_to_num(row);
-
- wbuf[0] = numcol | (numrow << 3);
- /* Get current bank in use from bus structure*/
- banktouse = mstr_bs->active_bank;
- banktouse = !banktouse;
-
- if (banktouse) {
- wr_msg.addr = (SDW_SCP_FRAMECTRL + SDW_BANK1_REGISTER_OFFSET) +
- (SDW_NUM_DATA_PORT_REGISTERS * 0); /* Data port 0 */
- } else {
-
- wr_msg.addr = SDW_SCP_FRAMECTRL +
- (SDW_NUM_DATA_PORT_REGISTERS * 0); /* Data port 0 */
- }
-
- wr_msg.ssp_tag = 0x1;
- wr_msg.flag = SDW_MSG_FLAG_WRITE;
- wr_msg.len = 1;
- wr_msg.slave_addr = 0xF; /* Broadcast address*/
- wr_msg.buf = wbuf;
- wr_msg.addr_page1 = 0x0;
- wr_msg.addr_page2 = 0x0;
-
-
- ret = sdw_slave_transfer(mstr_bs->mstr, &wr_msg, 1);
- if (ret != 1) {
- ret = -EINVAL;
- dev_err(&mstr_bs->mstr->dev, "Register transfer failed\n");
- goto out;
- }
-
- msleep(100); /* TBD: Remove this */
-
- /*
- * TBD: check whether we need to poll on
- * mcp active bank bit to switch bank
- */
- mstr_bs->active_bank = banktouse;
-
-out:
+ kfree(temp);
- return ret;
+ return 0;
}
/*
- * sdw_configure_frmshp_bnkswtch - returns Success
+ * sdw_cfg_frmshp_bnkswtch - returns Success
* -EINVAL - In case of error.
+ * -ENOMEM - In case of memory alloc failure.
+ * -EAGAIN - In case of activity ongoing.
*
*
* This function broadcast frameshape on framectrl
* register and performs bank switch.
*/
-int sdw_configure_frmshp_bnkswtch_mm(struct sdw_bus *mstr_bs, int col, int row)
+int sdw_cfg_frmshp_bnkswtch(struct sdw_bus *mstr_bs, bool is_wait)
{
+ struct sdw_msg *wr_msg;
int ret = 0;
int banktouse, numcol, numrow;
u8 *wbuf;
- struct sdw_msg *wr_msg;
wr_msg = kzalloc(sizeof(struct sdw_msg), GFP_KERNEL);
- mstr_bs->async_data.msg = wr_msg;
if (!wr_msg)
return -ENOMEM;
+
+ mstr_bs->async_data.msg = wr_msg;
+
wbuf = kzalloc(sizeof(*wbuf), GFP_KERNEL);
- if (!wbuf)
- return -ENOMEM;
- numcol = sdw_get_col_to_num(col);
- numrow = sdw_get_row_to_num(row);
+	if (!wbuf) {
+		kfree(wr_msg);
+		return -ENOMEM;
+	}
+
+ numcol = sdw_get_col_to_num(mstr_bs->col);
+ numrow = sdw_get_row_to_num(mstr_bs->row);
wbuf[0] = numcol | (numrow << 3);
/* Get current bank in use from bus structure*/
@@ -1504,23 +1317,34 @@ int sdw_configure_frmshp_bnkswtch_mm(struct sdw_bus *mstr_bs, int col, int row)
wr_msg->addr_page1 = 0x0;
wr_msg->addr_page2 = 0x0;
- if (in_atomic() || irqs_disabled()) {
- ret = sdw_trylock_mstr(mstr_bs->mstr);
- if (!ret) {
- /* SDW activity is ongoing. */
- ret = -EAGAIN;
+ if (is_wait) {
+
+ if (in_atomic() || irqs_disabled()) {
+ ret = sdw_trylock_mstr(mstr_bs->mstr);
+ if (!ret) {
+ /* SDW activity is ongoing. */
+ ret = -EAGAIN;
+ goto out;
+ }
+ } else
+ sdw_lock_mstr(mstr_bs->mstr);
+
+ ret = sdw_slave_transfer_async(mstr_bs->mstr, wr_msg,
+ 1, &mstr_bs->async_data);
+ if (ret != 1) {
+ ret = -EINVAL;
+ dev_err(&mstr_bs->mstr->dev, "Register transfer failed\n");
goto out;
}
+
} else {
- sdw_lock_mstr(mstr_bs->mstr);
- }
+ ret = sdw_slave_transfer(mstr_bs->mstr, wr_msg, 1);
+ if (ret != 1) {
+ ret = -EINVAL;
+ dev_err(&mstr_bs->mstr->dev, "Register transfer failed\n");
+ goto out;
+ }
- ret = sdw_slave_transfer_async(mstr_bs->mstr, wr_msg,
- 1, &mstr_bs->async_data);
- if (ret != 1) {
- ret = -EINVAL;
- dev_err(&mstr_bs->mstr->dev, "Register transfer failed\n");
- goto out;
}
msleep(100); /* TBD: Remove this */
@@ -1531,12 +1355,25 @@ int sdw_configure_frmshp_bnkswtch_mm(struct sdw_bus *mstr_bs, int col, int row)
*/
mstr_bs->active_bank = banktouse;
+ if (!is_wait) {
+ kfree(mstr_bs->async_data.msg->buf);
+ kfree(mstr_bs->async_data.msg);
+ }
+
+
out:
return ret;
}
-int sdw_configure_frmshp_bnkswtch_mm_wait(struct sdw_bus *mstr_bs)
+/*
+ * sdw_cfg_frmshp_bnkswtch_wait - returns Success
+ * -ETIMEDOUT - In case of timeout
+ *
+ * This function waits for completion of the
+ * bank switch.
+ */
+int sdw_cfg_frmshp_bnkswtch_wait(struct sdw_bus *mstr_bs)
{
unsigned long time_left;
struct sdw_master *mstr = mstr_bs->mstr;
@@ -1556,7 +1393,7 @@ int sdw_configure_frmshp_bnkswtch_mm_wait(struct sdw_bus *mstr_bs)
}
/*
- * sdw_cfg_bs_params - returns Success
+ * sdw_config_bs_prms - returns Success
* -EINVAL - In case of error.
*
*
@@ -1566,61 +1403,31 @@ int sdw_configure_frmshp_bnkswtch_mm_wait(struct sdw_bus *mstr_bs)
* from sdw_bus_calc_bw & sdw_bus_calc_bw_dis API.
*
*/
-int sdw_cfg_bs_params(struct sdw_bus *sdw_mstr_bs,
- struct sdw_mstr_runtime *sdw_mstr_bs_rt,
- bool is_strm_cpy)
+int sdw_config_bs_prms(struct sdw_bus *sdw_mstr_bs, bool state_check)
{
struct port_chn_en_state chn_en;
struct sdw_master *sdw_mstr = sdw_mstr_bs->mstr;
+ struct sdw_mstr_runtime *sdw_mstr_bs_rt = NULL;
struct sdw_mstr_driver *ops;
int banktouse, ret = 0;
list_for_each_entry(sdw_mstr_bs_rt,
- &sdw_mstr->mstr_rt_list, mstr_node) {
+ &sdw_mstr->mstr_rt_list, mstr_node) {
if (sdw_mstr_bs_rt->mstr == NULL)
continue;
- if (is_strm_cpy) {
- /*
- * Configure and enable all slave
- * transport params first
- */
- ret = sdw_cfg_mstr_slv(sdw_mstr_bs,
- sdw_mstr_bs_rt, false);
- if (ret < 0) {
- /* TBD: Undo all the computation */
- dev_err(&sdw_mstr_bs->mstr->dev,
- "slave config params failed\n");
- return ret;
- }
-
- /* Configure and enable all master params */
- ret = sdw_cfg_mstr_slv(sdw_mstr_bs,
- sdw_mstr_bs_rt, true);
- if (ret < 0) {
- /* TBD: Undo all the computation */
- dev_err(&sdw_mstr_bs->mstr->dev,
- "master config params failed\n");
- return ret;
- }
-
- } else {
-
- /*
- * 7.1 Copy all slave transport and port params
- * to alternate bank
- * 7.2 copy all master transport and port params
- * to alternate bank
- */
- ret = sdw_cpy_params_mstr_slv(sdw_mstr_bs,
- sdw_mstr_bs_rt);
- if (ret < 0) {
- /* TBD: Undo all the computation */
- dev_err(&sdw_mstr_bs->mstr->dev,
- "slave/master copy params failed\n");
- return ret;
- }
+ /*
+ * Configure transport and port params
+ * for master and slave ports.
+ */
+ ret = sdw_cfg_params_mstr_slv(sdw_mstr_bs,
+ sdw_mstr_bs_rt, state_check);
+ if (ret < 0) {
+ /* TBD: Undo all the computation */
+ dev_err(&sdw_mstr_bs->mstr->dev,
+ "slave/master config params failed\n");
+ return ret;
}
/* Get master driver ops */
@@ -1631,14 +1438,14 @@ int sdw_cfg_bs_params(struct sdw_bus *sdw_mstr_bs,
banktouse = !banktouse;
/*
- * TBD: Currently harcoded SSP interval to 50,
+	 * TBD: Currently hardcoded SSP interval,
* computed value to be taken from system_interval in
* bus data structure.
* Add error check.
*/
if (ops->mstr_ops->set_ssp_interval)
ops->mstr_ops->set_ssp_interval(sdw_mstr_bs->mstr,
- 50, banktouse);
+ SDW_DEFAULT_SSP, banktouse);
/*
* Configure Clock
@@ -1646,7 +1453,7 @@ int sdw_cfg_bs_params(struct sdw_bus *sdw_mstr_bs,
*/
if (ops->mstr_ops->set_clock_freq)
ops->mstr_ops->set_clock_freq(sdw_mstr_bs->mstr,
- sdw_mstr_bs->clk_freq, banktouse);
+ sdw_mstr_bs->clk_div, banktouse);
/* Enable channel on alternate bank for running streams */
chn_en.is_activate = true;
@@ -1676,10 +1483,10 @@ int sdw_cfg_bs_params(struct sdw_bus *sdw_mstr_bs,
* bank is enabled.
*
*/
-int sdw_dis_chan(struct sdw_bus *sdw_mstr_bs,
- struct sdw_mstr_runtime *sdw_mstr_bs_rt)
+int sdw_dis_chan(struct sdw_bus *sdw_mstr_bs)
{
struct sdw_master *sdw_mstr = sdw_mstr_bs->mstr;
+ struct sdw_mstr_runtime *sdw_mstr_bs_rt = NULL;
struct port_chn_en_state chn_en;
int ret = 0;
@@ -1762,9 +1569,6 @@ int sdw_cfg_slv_prep_unprep(struct sdw_bus *mstr_bs,
wr_msg.addr_page1 = 0x0;
wr_msg.addr_page2 = 0x0;
-#ifdef CONFIG_SND_SOC_MXFPGA
- sdw_slv_dpn_cap->prepare_ch = 0;
-#endif
if (prep) { /* PREPARE */
/*
@@ -1998,15 +1802,8 @@ int sdw_prep_unprep_mstr_slv(struct sdw_bus *sdw_mstr_bs,
slv_rt_strm, port_slv_strm, is_prep);
if (ret < 0)
return ret;
+ }
- /* Since one port per slave runtime,
- * breaking port_list loop
- * TBD: to be extended for multiple port support
- */
- break;
- }
-
- break;
}
list_for_each_entry(mstr_rt_strm,
@@ -2022,12 +1819,6 @@ int sdw_prep_unprep_mstr_slv(struct sdw_bus *sdw_mstr_bs,
mstr_rt_strm, port_mstr_strm, is_prep);
if (ret < 0)
return ret;
-
- /* Since one port per master runtime,
- * breaking port_list loop
- * TBD: to be extended for multiple port support
- */
- break;
}
}
@@ -2050,852 +1841,1257 @@ struct sdw_bus *master_to_bus(struct sdw_master *mstr)
return NULL;
}
-/**
- * sdw_bus_calc_bw - returns Success
+/*
+ * sdw_chk_strm_prms - returns Success
* -EINVAL - In case of error.
*
*
- * This function is called from sdw_prepare_and_enable
- * whenever new stream is processed. The function based
- * on the stream associated with controller calculates
- * required bandwidth, clock, frameshape, computes
- * all transport params for a given port, enable channel
- * & perform bankswitch.
+ * This function performs all the required
+ * checks such as isochronous mode support,
+ * stream rates etc. This API is called
+ * from the sdw_bus_calc_bw API.
+ *
*/
-int sdw_bus_calc_bw(struct sdw_stream_tag *stream_tag, bool enable)
+int sdw_chk_strm_prms(struct sdw_master_capabilities *sdw_mstr_cap,
+ struct sdw_stream_params *mstr_params,
+ struct sdw_stream_params *stream_params)
+{
+ /* Asynchronous mode not supported, return Error */
+ if (((sdw_mstr_cap->base_clk_freq * 2) % mstr_params->rate) != 0)
+ return -EINVAL;
+
+ /* Check for sampling frequency */
+ if (stream_params->rate != mstr_params->rate)
+ return -EINVAL;
+
+ return 0;
+}
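+
+/*
+ * Illustrative numbers (assuming a 9.6 MHz base clock, i.e. a 19.2 MHz
+ * bus rate): a 48 kHz stream passes the first check since
+ * 19200000 % 48000 == 0, whereas a 44.1 kHz stream would be rejected
+ * because 19200000 % 44100 != 0.
+ */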
+
+/*
+ * sdw_compute_bs_prms - returns Success
+ * -EINVAL - In case of error.
+ *
+ *
+ * This function performs master/slave transport
+ * params computation. This API is called
+ * from sdw_bus_calc_bw & sdw_bus_calc_bw_dis API.
+ *
+ */
+int sdw_compute_bs_prms(struct sdw_bus *sdw_mstr_bs,
+ struct sdw_mstr_runtime *sdw_mstr_rt)
{
- struct sdw_runtime *sdw_rt = stream_tag->sdw_rt;
- struct sdw_stream_params *stream_params = &sdw_rt->stream_params;
- struct sdw_mstr_runtime *sdw_mstr_rt = NULL, *sdw_mstr_bs_rt = NULL;
- struct sdw_mstr_runtime *mstr_rt_act = NULL, *last_rt = NULL;
- struct sdw_bus *sdw_mstr_bs = NULL, *mstr_bs_act = NULL;
- struct sdw_master *sdw_mstr = NULL;
struct sdw_master_capabilities *sdw_mstr_cap = NULL;
- struct sdw_stream_params *mstr_params;
- int stream_frame_size;
- int frame_interval = 0, sel_row = 0, sel_col = 0;
- int ret = 0;
- bool last_node = false;
- struct sdw_master_port_ops *ops;
+ struct sdw_master *sdw_mstr = sdw_mstr_bs->mstr;
+ int ret = 0, frame_interval = 0;
+
+ sdw_mstr_cap = &sdw_mstr->mstr_capabilities;
- /* TBD: Add PCM/PDM flag in sdw_config_stream */
+ ret = sdw_get_clock_frmshp(sdw_mstr_bs, &frame_interval,
+ sdw_mstr_rt);
+ if (ret < 0) {
+ /* TBD: Undo all the computation */
+ dev_err(&sdw_mstr->dev, "clock/frameshape config failed\n");
+ return ret;
+ }
/*
- * TBD: check for mstr_rt is in configured state or not
- * If yes, then configure masters as well
- * If no, then do not configure/enable master related parameters
+ * TBD: find right place to run sorting on
+ * master rt_list. Below sorting is done based on
+ * bps from low to high, that means PDM streams
+ * will be placed before PCM.
*/
- /* BW calulation for active master controller for given stream tag */
- list_for_each_entry(sdw_mstr_rt, &sdw_rt->mstr_rt_list,
- mstr_sdw_node) {
-
- if (sdw_mstr_rt->mstr == NULL)
- break;
- last_rt = list_last_entry(&sdw_rt->mstr_rt_list,
- struct sdw_mstr_runtime, mstr_sdw_node);
- if (sdw_mstr_rt == last_rt)
- last_node = true;
- else
- last_node = false;
+ /*
+ * TBD Should we also perform sorting based on rate
+ * for PCM stream check. if yes then how??
+ * creating two different list.
+ */
- /* Get bus structure for master */
- sdw_mstr_bs = master_to_bus(sdw_mstr_rt->mstr);
- sdw_mstr = sdw_mstr_bs->mstr;
+ /* Compute system interval */
+ ret = sdw_compute_sys_interval(sdw_mstr_bs, sdw_mstr_cap,
+ frame_interval);
+ if (ret < 0) {
+ /* TBD: Undo all the computation */
+ dev_err(&sdw_mstr->dev, "compute system interval failed\n");
+ return ret;
+ }
- /*
- * All data structures required available,
- * lets calculate BW for master controller
- */
+ /* Compute hstart/hstop */
+ ret = sdw_compute_hstart_hstop(sdw_mstr_bs);
+ if (ret < 0) {
+ /* TBD: Undo all the computation */
+ dev_err(&sdw_mstr->dev, "compute hstart/hstop failed\n");
+ return ret;
+ }
- /* Check for isochronous mode plus other checks if required */
- sdw_mstr_cap = &sdw_mstr_bs->mstr->mstr_capabilities;
- mstr_params = &sdw_mstr_rt->stream_params;
+ return 0;
+}
- if ((sdw_rt->stream_state != SDW_STATE_CONFIG_STREAM) &&
- (sdw_rt->stream_state != SDW_STATE_UNPREPARE_STREAM))
- goto enable_stream;
+/*
+ * sdw_bs_pre_bnkswtch_post - returns Success
+ * -EINVAL or ret value - In case of error.
+ *
+ * This API performs one of the following operations
+ * based on bs_state value:
+ * pre-activate port
+ * bank switch operation
+ * post-activate port
+ * bankswitch wait operation
+ * disable channel operation
+ */
+int sdw_bs_pre_bnkswtch_post(struct sdw_runtime *sdw_rt, int bs_state)
+{
+ struct sdw_mstr_runtime *mstr_rt_act = NULL;
+ struct sdw_bus *mstr_bs_act = NULL;
+ struct sdw_master_port_ops *ops;
+ int ret = 0;
- /* we do not support asynchronous mode Return Error */
- if ((sdw_mstr_cap->base_clk_freq % mstr_params->rate) != 0) {
- /* TBD: Undo all the computation */
- dev_err(&sdw_mstr->dev, "Async mode not supported\n");
- return -EINVAL;
- }
+ list_for_each_entry(mstr_rt_act, &sdw_rt->mstr_rt_list,
+ mstr_sdw_node) {
- /* Check for sampling frequency */
- if (stream_params->rate != mstr_params->rate) {
- /* TBD: Undo all the computation */
- dev_err(&sdw_mstr->dev, "Sample frequency mismatch\n");
+ if (mstr_rt_act->mstr == NULL)
+ break;
+
+ /* Get bus structure for master */
+ mstr_bs_act = master_to_bus(mstr_rt_act->mstr);
+ if (!mstr_bs_act)
return -EINVAL;
- }
+
+ ops = mstr_bs_act->mstr->driver->mstr_port_ops;
/*
- * Calculate stream bandwidth, frame size and
- * total BW required for master controller
+		 * Note that currently all the operations
+		 * of pre->bankswitch->post->wait->disable
+		 * are performed sequentially. The switch case
+		 * is kept so the code can scale to cases where
+		 * pre->bankswitch->post->wait->disable are
+		 * not sequential and are called from different
+		 * instances.
*/
- sdw_mstr_rt->stream_bw = mstr_params->rate *
- mstr_params->channel_count * mstr_params->bps;
- stream_frame_size = mstr_params->channel_count *
- mstr_params->bps;
+ switch (bs_state) {
- sdw_mstr_bs->bandwidth += sdw_mstr_rt->stream_bw;
-
- ret = sdw_get_clock_frmshp(sdw_mstr_bs,
- &frame_interval, &sel_col, &sel_row);
- if (ret < 0) {
- /* TBD: Undo all the computation */
- dev_err(&sdw_mstr->dev, "clock/frameshape config failed\n");
- return ret;
+ case SDW_UPDATE_BS_PRE:
+ /* Pre activate ports */
+ if (ops->dpn_port_activate_ch_pre) {
+ ret = ops->dpn_port_activate_ch_pre
+ (mstr_bs_act->mstr, NULL, 0);
+ if (ret < 0)
+ return ret;
+ }
+ break;
+ case SDW_UPDATE_BS_BNKSWTCH:
+ /* Configure Frame Shape/Switch Bank */
+ ret = sdw_cfg_frmshp_bnkswtch(mstr_bs_act, true);
+ if (ret < 0)
+ return ret;
+ break;
+ case SDW_UPDATE_BS_POST:
+ /* Post activate ports */
+ if (ops->dpn_port_activate_ch_post) {
+ ret = ops->dpn_port_activate_ch_post
+ (mstr_bs_act->mstr, NULL, 0);
+ if (ret < 0)
+ return ret;
+ }
+ break;
+ case SDW_UPDATE_BS_BNKSWTCH_WAIT:
+ /* Post Bankswitch wait operation */
+ ret = sdw_cfg_frmshp_bnkswtch_wait(mstr_bs_act);
+ if (ret < 0)
+ return ret;
+ break;
+ case SDW_UPDATE_BS_DIS_CHN:
+ /* Disable channel on previous bank */
+ ret = sdw_dis_chan(mstr_bs_act);
+ if (ret < 0)
+ return ret;
+ break;
+ default:
+ return -EINVAL;
+ break;
}
+ }
- /*
- * TBD: find right place to run sorting on
- * master rt_list. Below sorting is done based on
- * bps from low to high, that means PDM streams
- * will be placed before PCM.
- */
+ return ret;
- /*
- * TBD Should we also perform sorting based on rate
- * for PCM stream check. if yes then how??
- * creating two different list.
- */
+}
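+
+/*
+ * Usage sketch (illustrative only; it simply mirrors the aggregated-mode
+ * path of sdw_update_bs_prms() below): the helper above is driven once
+ * per bs_state value, in this order:
+ *
+ *	sdw_bs_pre_bnkswtch_post(sdw_rt, SDW_UPDATE_BS_PRE);
+ *	sdw_bs_pre_bnkswtch_post(sdw_rt, SDW_UPDATE_BS_BNKSWTCH);
+ *	sdw_bs_pre_bnkswtch_post(sdw_rt, SDW_UPDATE_BS_POST);
+ *	sdw_bs_pre_bnkswtch_post(sdw_rt, SDW_UPDATE_BS_BNKSWTCH_WAIT);
+ *	sdw_bs_pre_bnkswtch_post(sdw_rt, SDW_UPDATE_BS_DIS_CHN);
+ */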
+
+/*
+ * sdw_update_bs_prms - returns Success
+ * -EINVAL - In case of error.
+ *
+ * Once all the port parameters are configured,
+ * this function performs a bankswitch so that the
+ * newly configured parameters take effect. It is
+ * called from the sdw_bus_calc_bw and
+ * sdw_bus_calc_bw_dis APIs. After the bankswitch
+ * it also disables all the channels enabled on
+ * the previous bank.
+ */
+int sdw_update_bs_prms(struct sdw_bus *sdw_mstr_bs,
+ struct sdw_runtime *sdw_rt,
+ int last_node)
+{
+
+ struct sdw_master *sdw_mstr = sdw_mstr_bs->mstr;
+ int ret = 0;
+
+ /*
+ * Optimization scope.
+	 * Check whether we can assign a function pointer
+	 * when the link sync value is 1, and call that
+	 * function if it is not NULL.
+ */
+ if ((last_node) && (sdw_mstr->link_sync_mask)) {
- /* Compute system interval */
- ret = sdw_compute_sys_interval(sdw_mstr_bs, sdw_mstr_cap,
- frame_interval);
+ /* Perform pre-activate ports */
+ ret = sdw_bs_pre_bnkswtch_post(sdw_rt, SDW_UPDATE_BS_PRE);
if (ret < 0) {
- /* TBD: Undo all the computation */
- dev_err(&sdw_mstr->dev, "compute system interval failed\n");
+ dev_err(&sdw_mstr->dev, "Pre-activate port failed\n");
return ret;
}
- /* Compute hstart/hstop */
- ret = sdw_compute_hstart_hstop(sdw_mstr_bs, sel_col);
+ /* Perform bankswitch operation*/
+ ret = sdw_bs_pre_bnkswtch_post(sdw_rt, SDW_UPDATE_BS_BNKSWTCH);
if (ret < 0) {
- /* TBD: Undo all the computation */
- dev_err(&sdw_mstr->dev, "compute hstart/hstop failed\n");
+ dev_err(&sdw_mstr->dev, "Bank Switch operation failed\n");
return ret;
}
- /* Compute block offset */
- ret = sdw_compute_blk_subblk_offset(sdw_mstr_bs);
+ /* Perform post-activate ports */
+ ret = sdw_bs_pre_bnkswtch_post(sdw_rt, SDW_UPDATE_BS_POST);
if (ret < 0) {
- /* TBD: Undo all the computation */
- dev_err(&sdw_mstr->dev, "compute block offset failed\n");
+			dev_err(&sdw_mstr->dev, "Post-activate port failed\n");
return ret;
}
- /* Change Stream State */
- if (last_node)
- sdw_rt->stream_state = SDW_STATE_COMPUTE_STREAM;
-
- /* Configure bus parameters */
- ret = sdw_cfg_bs_params(sdw_mstr_bs, sdw_mstr_bs_rt, true);
+		/* Perform bankswitch post wait operation */
+ ret = sdw_bs_pre_bnkswtch_post(sdw_rt,
+ SDW_UPDATE_BS_BNKSWTCH_WAIT);
if (ret < 0) {
- /* TBD: Undo all the computation */
- dev_err(&sdw_mstr->dev, "xport param config failed\n");
+ dev_err(&sdw_mstr->dev, "BnkSwtch wait op failed\n");
return ret;
}
- sel_col = sdw_mstr_bs->col;
- sel_row = sdw_mstr_bs->row;
-
- if ((last_node) && (sdw_mstr->link_sync_mask)) {
-
- list_for_each_entry(mstr_rt_act, &sdw_rt->mstr_rt_list,
- mstr_sdw_node) {
-
- if (mstr_rt_act->mstr == NULL)
- break;
-
- /* Get bus structure for master */
- mstr_bs_act = master_to_bus(mstr_rt_act->mstr);
- ops = mstr_bs_act->mstr->driver->mstr_port_ops;
-
- /* Run for all mstr_list and
- * pre_activate ports
- */
- if (ops->dpn_port_activate_ch_pre) {
- ret = ops->dpn_port_activate_ch_pre
- (mstr_bs_act->mstr, NULL, 0);
- if (ret < 0)
- return ret;
- }
- }
-
- list_for_each_entry(mstr_rt_act, &sdw_rt->mstr_rt_list,
- mstr_sdw_node) {
- if (mstr_rt_act->mstr == NULL)
- break;
-
- /* Get bus structure for master */
- mstr_bs_act = master_to_bus(mstr_rt_act->mstr);
-
- /* Configure Frame Shape/Switch Bank */
- ret = sdw_configure_frmshp_bnkswtch_mm(
- mstr_bs_act, sel_col, sel_row);
- if (ret < 0) {
- /* TBD: Undo all the computation */
- dev_err(&sdw_mstr->dev, "bank switch failed\n");
- return ret;
- }
- }
-
- list_for_each_entry(mstr_rt_act, &sdw_rt->mstr_rt_list,
- mstr_sdw_node) {
-
- if (mstr_rt_act->mstr == NULL)
- break;
-
- /* Get bus structure for master */
- mstr_bs_act = master_to_bus(mstr_rt_act->mstr);
-
- ops = mstr_bs_act->mstr->driver->mstr_port_ops;
-
- /* Run for all mstr_list and
- * post_activate ports
- */
- if (ops->dpn_port_activate_ch_post) {
- ret = ops->dpn_port_activate_ch_post
- (mstr_bs_act->mstr, NULL, 0);
- if (ret < 0)
- return ret;
- }
- }
-
- list_for_each_entry(mstr_rt_act,
- &sdw_rt->mstr_rt_list, mstr_sdw_node) {
-
- if (mstr_rt_act->mstr == NULL)
- break;
-
- mstr_bs_act = master_to_bus(
- mstr_rt_act->mstr);
- ret = sdw_configure_frmshp_bnkswtch_mm_wait(
- mstr_bs_act);
- }
-
- list_for_each_entry(mstr_rt_act, &sdw_rt->mstr_rt_list,
- mstr_sdw_node) {
-
- if (mstr_rt_act->mstr == NULL)
- break;
-
- /* Get bus structure for master */
- mstr_bs_act = master_to_bus(mstr_rt_act->mstr);
-
- /* Disable all channels
- * enabled on previous bank
- */
- ret = sdw_dis_chan(mstr_bs_act, sdw_mstr_bs_rt);
- if (ret < 0) {
- /* TBD: Undo all the computation */
- dev_err(&sdw_mstr->dev, "Channel disabled faile\n");
- return ret;
- }
- }
- }
- if (!sdw_mstr->link_sync_mask) {
-
- /* Configure Frame Shape/Switch Bank */
- ret = sdw_configure_frmshp_bnkswtch(sdw_mstr_bs,
- sel_col, sel_row);
- if (ret < 0) {
- /* TBD: Undo all the computation */
- dev_err(&sdw_mstr->dev, "bank switch failed\n");
- return ret;
- }
- /* Disable all channels enabled on previous bank */
- ret = sdw_dis_chan(sdw_mstr_bs, sdw_mstr_bs_rt);
- if (ret < 0) {
- /* TBD: Undo all the computation */
- dev_err(&sdw_mstr->dev, "Channel disabled failed\n");
- return ret;
- }
- }
- /* Prepare new port for master and slave */
- ret = sdw_prep_unprep_mstr_slv(sdw_mstr_bs, sdw_rt, true);
+ /* Disable channels on previous bank */
+ ret = sdw_bs_pre_bnkswtch_post(sdw_rt, SDW_UPDATE_BS_DIS_CHN);
if (ret < 0) {
- /* TBD: Undo all the computation */
- dev_err(&sdw_mstr->dev, "Channel prepare failed\n");
+			dev_err(&sdw_mstr->dev, "Channel disable failed\n");
return ret;
}
- /* change stream state to prepare */
- if (last_node)
- sdw_rt->stream_state = SDW_STATE_PREPARE_STREAM;
}
-enable_stream:
- list_for_each_entry(sdw_mstr_rt, &sdw_rt->mstr_rt_list, mstr_sdw_node) {
-
-
- if (sdw_mstr_rt->mstr == NULL)
- break;
- last_rt = list_last_entry(&sdw_rt->mstr_rt_list,
- struct sdw_mstr_runtime, mstr_sdw_node);
- if (sdw_mstr_rt == last_rt)
- last_node = true;
- else
- last_node = false;
-
- /* Get bus structure for master */
- sdw_mstr_bs = master_to_bus(sdw_mstr_rt->mstr);
- sdw_mstr = sdw_mstr_bs->mstr;
-
- sdw_mstr_cap = &sdw_mstr_bs->mstr->mstr_capabilities;
- mstr_params = &sdw_mstr_rt->stream_params;
- if ((!enable) ||
- (sdw_rt->stream_state != SDW_STATE_PREPARE_STREAM))
- return 0;
+ if (!sdw_mstr->link_sync_mask) {
- ret = sdw_cfg_bs_params(sdw_mstr_bs, sdw_mstr_bs_rt, false);
+ /* Configure Frame Shape/Switch Bank */
+ ret = sdw_cfg_frmshp_bnkswtch(sdw_mstr_bs, false);
if (ret < 0) {
/* TBD: Undo all the computation */
- dev_err(&sdw_mstr->dev, "xport params config failed\n");
+ dev_err(&sdw_mstr->dev, "bank switch failed\n");
return ret;
}
- /* Enable new port for master and slave */
- ret = sdw_en_dis_mstr_slv(sdw_mstr_bs, sdw_rt, true);
+ /* Disable all channels enabled on previous bank */
+ ret = sdw_dis_chan(sdw_mstr_bs);
if (ret < 0) {
/* TBD: Undo all the computation */
- dev_err(&sdw_mstr->dev, "Channel enable failed\n");
+			dev_err(&sdw_mstr->dev, "Channel disable failed\n");
return ret;
}
+ }
- /* change stream state to enable */
- if (last_node)
- sdw_rt->stream_state = SDW_STATE_ENABLE_STREAM;
+ return ret;
+}
- sel_col = sdw_mstr_bs->col;
- sel_row = sdw_mstr_bs->row;
+/**
+ * sdw_chk_last_node - returns true or false
+ *
+ * This function returns true if the given master runtime
+ * is the last node in the stream's master runtime list,
+ * false otherwise.
+ */
+bool sdw_chk_last_node(struct sdw_mstr_runtime *sdw_mstr_rt,
+ struct sdw_runtime *sdw_rt)
+{
+ struct sdw_mstr_runtime *last_rt = NULL;
- if ((last_node) && (sdw_mstr->link_sync_mask)) {
+ last_rt = list_last_entry(&sdw_rt->mstr_rt_list,
+ struct sdw_mstr_runtime, mstr_sdw_node);
+ if (sdw_mstr_rt == last_rt)
+ return true;
+ else
+ return false;
+}
- list_for_each_entry(mstr_rt_act, &sdw_rt->mstr_rt_list,
- mstr_sdw_node) {
+/**
+ * sdw_unprepare_op - returns Success
+ * -EINVAL - In case of error.
+ *
+ * This function performs all operations required
+ * to unprepare ports and recomputes the
+ * bus parameters.
+ */
+int sdw_unprepare_op(struct sdw_bus *sdw_mstr_bs,
+ struct sdw_mstr_runtime *sdw_mstr_rt,
+ struct sdw_runtime *sdw_rt)
+{
+ struct sdw_master *sdw_mstr = sdw_mstr_bs->mstr;
+ struct sdw_stream_params *mstr_params;
+ bool last_node = false;
+ int ret = 0;
- if (mstr_rt_act->mstr == NULL)
- break;
+ last_node = sdw_chk_last_node(sdw_mstr_rt, sdw_rt);
+ mstr_params = &sdw_mstr_rt->stream_params;
- /* Get bus structure for master */
- mstr_bs_act = master_to_bus(mstr_rt_act->mstr);
+ /* 1. Un-prepare master and slave port */
+ ret = sdw_prep_unprep_mstr_slv(sdw_mstr_bs,
+ sdw_rt, false);
+ if (ret < 0) {
+ /* TBD: Undo all the computation */
+ dev_err(&sdw_mstr->dev, "Ch unprep failed\n");
+ return ret;
+ }
- ops = mstr_bs_act->mstr->driver->mstr_port_ops;
+ /* change stream state to unprepare */
+ if (last_node)
+ sdw_rt->stream_state =
+ SDW_STATE_UNPREPARE_STREAM;
- /* Run for all mstr_list and
- * pre_activate ports
- */
- if (ops->dpn_port_activate_ch_pre) {
- ret = ops->dpn_port_activate_ch_pre
- (mstr_bs_act->mstr, NULL, 0);
- if (ret < 0)
- return ret;
- }
- }
+ /*
+	 * Calculate the stream bandwidth being released and
+	 * update the total BW required for the master controller
+ */
+ sdw_mstr_rt->stream_bw = mstr_params->rate *
+ mstr_params->channel_count * mstr_params->bps;
+ sdw_mstr_bs->bandwidth -= sdw_mstr_rt->stream_bw;
- list_for_each_entry(mstr_rt_act,
- &sdw_rt->mstr_rt_list, mstr_sdw_node) {
+	/* Something went wrong in bandwidth calculation */
+ if (sdw_mstr_bs->bandwidth < 0) {
+ dev_err(&sdw_mstr->dev, "BW calculation failed\n");
+ return -EINVAL;
+ }
+ if (!sdw_mstr_bs->bandwidth) {
+ /*
+ * Last stream on master should
+ * return successfully
+ */
+ sdw_mstr_bs->system_interval = 0;
+ sdw_mstr_bs->stream_interval = 0;
+ sdw_mstr_bs->frame_freq = 0;
+ sdw_mstr_bs->row = 0;
+ sdw_mstr_bs->col = 0;
+ return 0;
+ }
- if (mstr_rt_act->mstr == NULL)
- break;
+ /* Compute transport params */
+ ret = sdw_compute_bs_prms(sdw_mstr_bs, sdw_mstr_rt);
+ if (ret < 0) {
+ /* TBD: Undo all the computation */
+ dev_err(&sdw_mstr->dev, "Params computation failed\n");
+ return -EINVAL;
+ }
- /* Get bus structure for master */
- mstr_bs_act = master_to_bus(mstr_rt_act->mstr);
+ /* Configure bus params */
+ ret = sdw_config_bs_prms(sdw_mstr_bs, true);
+ if (ret < 0) {
+ /* TBD: Undo all the computation */
+ dev_err(&sdw_mstr->dev, "xport params config failed\n");
+ return ret;
+ }
- /* Configure Frame Shape/Switch Bank */
- ret = sdw_configure_frmshp_bnkswtch_mm(
- mstr_bs_act,
- sel_col, sel_row);
- if (ret < 0) {
- /* TBD: Undo all the computation */
- dev_err(&sdw_mstr->dev, "bank switch failed\n");
- return ret;
- }
- }
+ /*
+ * Perform SDW bus update
+ * For Aggregation flow:
+	 * Pre -> Bankswitch -> Post -> Bankswitch wait -> Disable channel
+ * For normal flow:
+ * Bankswitch -> Disable channel
+ */
+ ret = sdw_update_bs_prms(sdw_mstr_bs, sdw_rt, last_node);
- list_for_each_entry(mstr_rt_act,
- &sdw_rt->mstr_rt_list, mstr_sdw_node) {
+ return ret;
+}
- if (mstr_rt_act->mstr == NULL)
- break;
+/**
+ * sdw_disable_op - returns Success
+ * -EINVAL - In case of error.
+ *
+ * This function performs all operations required
+ * to disable ports.
+ */
+int sdw_disable_op(struct sdw_bus *sdw_mstr_bs,
+ struct sdw_mstr_runtime *sdw_mstr_rt,
+ struct sdw_runtime *sdw_rt)
+{
- /* Get bus structure for master */
- mstr_bs_act = master_to_bus(mstr_rt_act->mstr);
+ struct sdw_master *sdw_mstr = sdw_mstr_bs->mstr;
+ struct sdw_master_capabilities *sdw_mstr_cap = NULL;
+ struct sdw_stream_params *mstr_params;
+ bool last_node = false;
+ int ret = 0;
- ops = mstr_bs_act->mstr->driver->mstr_port_ops;
- /* Run for all mstr_list and
- * post_activate ports
- */
- if (ops->dpn_port_activate_ch_post) {
- ret = ops->dpn_port_activate_ch_post
- (mstr_bs_act->mstr, NULL, 0);
- if (ret < 0)
- return ret;
- }
- }
+ last_node = sdw_chk_last_node(sdw_mstr_rt, sdw_rt);
+ sdw_mstr_cap = &sdw_mstr_bs->mstr->mstr_capabilities;
+ mstr_params = &sdw_mstr_rt->stream_params;
- list_for_each_entry(mstr_rt_act,
- &sdw_rt->mstr_rt_list, mstr_sdw_node) {
+	/* Disable the ports for the stream being freed */
+ ret = sdw_en_dis_mstr_slv(sdw_mstr_bs, sdw_rt, false);
+ if (ret < 0) {
+ /* TBD: Undo all the computation */
+ dev_err(&sdw_mstr->dev, "Ch dis failed\n");
+ return ret;
+ }
- if (mstr_rt_act->mstr == NULL)
- break;
+ /* Change stream state to disable */
+ if (last_node)
+ sdw_rt->stream_state = SDW_STATE_DISABLE_STREAM;
- /* Get bus structure for master */
- mstr_bs_act = master_to_bus(mstr_rt_act->mstr);
- ret = sdw_configure_frmshp_bnkswtch_mm_wait(
- mstr_bs_act);
- }
- list_for_each_entry(mstr_rt_act,
- &sdw_rt->mstr_rt_list, mstr_sdw_node) {
+ ret = sdw_config_bs_prms(sdw_mstr_bs, false);
+ if (ret < 0) {
+ /* TBD: Undo all the computation */
+ dev_err(&sdw_mstr->dev, "xport params config failed\n");
+ return ret;
+ }
- if (mstr_rt_act->mstr == NULL)
- break;
+ /*
+ * Perform SDW bus update
+ * For Aggregation flow:
+	 * Pre -> Bankswitch -> Post -> Bankswitch wait -> Disable channel
+ * For normal flow:
+ * Bankswitch -> Disable channel
+ */
+ ret = sdw_update_bs_prms(sdw_mstr_bs, sdw_rt, last_node);
- /* Get bus structure for master */
- mstr_bs_act = master_to_bus(mstr_rt_act->mstr);
+ return ret;
+}
- /* Disable all channels
- * enabled on previous bank
- */
- ret = sdw_dis_chan(mstr_bs_act,
- sdw_mstr_bs_rt);
- if (ret < 0) {
- /* TBD: Undo all the computation */
- dev_err(&sdw_mstr->dev,
- "Channel disabled faile\n");
- return ret;
- }
- }
- }
- if (!sdw_mstr->link_sync_mask) {
- /* Configure Frame Shape/Switch Bank */
- ret = sdw_configure_frmshp_bnkswtch(
- sdw_mstr_bs,
- sel_col, sel_row);
- if (ret < 0) {
- /* TBD: Undo all the computation */
- dev_err(&sdw_mstr->dev, "bank switch failed\n");
- return ret;
- }
- /* Disable all channels enabled on previous bank */
- ret = sdw_dis_chan(sdw_mstr_bs, sdw_mstr_bs_rt);
- if (ret < 0) {
- /* TBD: Undo all the computation */
- dev_err(&sdw_mstr->dev, "Ch disabled failed\n");
- return ret;
- }
- }
+/**
+ * sdw_enable_op - returns Success
+ * -EINVAL - In case of error.
+ *
+ * This function performs all operations required
+ * to enable ports.
+ */
+int sdw_enable_op(struct sdw_bus *sdw_mstr_bs,
+ struct sdw_mstr_runtime *sdw_mstr_rt,
+ struct sdw_runtime *sdw_rt)
+{
+
+ struct sdw_master *sdw_mstr = sdw_mstr_bs->mstr;
+ bool last_node = false;
+ int ret = 0;
+
+ last_node = sdw_chk_last_node(sdw_mstr_rt, sdw_rt);
+
+ ret = sdw_config_bs_prms(sdw_mstr_bs, false);
+ if (ret < 0) {
+ /* TBD: Undo all the computation */
+ dev_err(&sdw_mstr->dev, "xport params config failed\n");
+ return ret;
}
- return 0;
+ /* Enable new port for master and slave */
+ ret = sdw_en_dis_mstr_slv(sdw_mstr_bs, sdw_rt, true);
+ if (ret < 0) {
+ /* TBD: Undo all the computation */
+ dev_err(&sdw_mstr->dev, "Channel enable failed\n");
+ return ret;
+ }
+
+ /* change stream state to enable */
+ if (last_node)
+ sdw_rt->stream_state = SDW_STATE_ENABLE_STREAM;
+ /*
+ * Perform SDW bus update
+ * For Aggregation flow:
+	 * Pre -> Bankswitch -> Post -> Bankswitch wait -> Disable channel
+ * For normal flow:
+ * Bankswitch -> Disable channel
+ */
+ ret = sdw_update_bs_prms(sdw_mstr_bs, sdw_rt, last_node);
+
+ return ret;
}
-EXPORT_SYMBOL_GPL(sdw_bus_calc_bw);
/**
- * sdw_bus_calc_bw_dis - returns Success
+ * sdw_prepare_op - returns Success
* -EINVAL - In case of error.
*
- *
- * This function is called from sdw_disable_and_unprepare
- * whenever stream is ended. The function based disables/
- * unprepare port/channel of associated stream and computes
- * required bandwidth, clock, frameshape, computes
- * all transport params for a given port, enable channel
- * & perform bankswitch for remaining streams on given
- * controller.
+ * This function performs all operations required
+ * to prepare ports and computes the
+ * bus parameters.
*/
-int sdw_bus_calc_bw_dis(struct sdw_stream_tag *stream_tag, bool unprepare)
+int sdw_prepare_op(struct sdw_bus *sdw_mstr_bs,
+ struct sdw_mstr_runtime *sdw_mstr_rt,
+ struct sdw_runtime *sdw_rt)
{
- struct sdw_runtime *sdw_rt = stream_tag->sdw_rt;
- struct sdw_mstr_runtime *sdw_mstr_rt = NULL, *sdw_mstr_bs_rt = NULL;
- struct sdw_mstr_runtime *mstr_rt_act = NULL, *last_rt = NULL;
- struct sdw_bus *sdw_mstr_bs = NULL, *mstr_bs_act = NULL;
- struct sdw_master *sdw_mstr = NULL;
+ struct sdw_stream_params *stream_params = &sdw_rt->stream_params;
+ struct sdw_master *sdw_mstr = sdw_mstr_bs->mstr;
struct sdw_master_capabilities *sdw_mstr_cap = NULL;
struct sdw_stream_params *mstr_params;
- int stream_frame_size;
- int frame_interval = 0, sel_row = 0, sel_col = 0;
- int ret = 0;
+
bool last_node = false;
- struct sdw_master_port_ops *ops;
+ int ret = 0;
- /* BW calulation for active master controller for given stream tag */
- list_for_each_entry(sdw_mstr_rt,
- &sdw_rt->mstr_rt_list, mstr_sdw_node) {
+ last_node = sdw_chk_last_node(sdw_mstr_rt, sdw_rt);
+ sdw_mstr_cap = &sdw_mstr_bs->mstr->mstr_capabilities;
+ mstr_params = &sdw_mstr_rt->stream_params;
+ /*
+	 * Check all the received stream parameters:
+	 * isochronous mode, sample rate, etc.
+ */
+ ret = sdw_chk_strm_prms(sdw_mstr_cap, mstr_params,
+ stream_params);
+ if (ret < 0) {
+ dev_err(&sdw_mstr->dev, "Stream param check failed\n");
+ return -EINVAL;
+ }
+
+ /*
+	 * Calculate the stream bandwidth and update the
+	 * total BW required for the master controller
+ */
+ sdw_mstr_rt->stream_bw = mstr_params->rate *
+ mstr_params->channel_count * mstr_params->bps;
+ sdw_mstr_bs->bandwidth += sdw_mstr_rt->stream_bw;
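+	/*
+	 * Worked example (illustrative only, values not taken from this
+	 * patch): a 48 kHz, 2-channel, 24-bit stream contributes
+	 * 48000 * 2 * 24 = 2304000 bits/s to the total bus bandwidth
+	 * accumulated above.
+	 */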
+
+ /* Compute transport params */
+ ret = sdw_compute_bs_prms(sdw_mstr_bs, sdw_mstr_rt);
+ if (ret < 0) {
+ /* TBD: Undo all the computation */
+ dev_err(&sdw_mstr->dev, "Params computation failed\n");
+ return -EINVAL;
+ }
+
+ /* Configure bus parameters */
+ ret = sdw_config_bs_prms(sdw_mstr_bs, true);
+ if (ret < 0) {
+ /* TBD: Undo all the computation */
+ dev_err(&sdw_mstr->dev, "xport param config failed\n");
+ return ret;
+ }
+
+ /*
+ * Perform SDW bus update
+ * For Aggregation flow:
+	 * Pre -> Bankswitch -> Post -> Bankswitch wait -> Disable channel
+ * For normal flow:
+ * Bankswitch -> Disable channel
+ */
+ ret = sdw_update_bs_prms(sdw_mstr_bs, sdw_rt, last_node);
+
+ /* Prepare new port for master and slave */
+ ret = sdw_prep_unprep_mstr_slv(sdw_mstr_bs, sdw_rt, true);
+ if (ret < 0) {
+ /* TBD: Undo all the computation */
+ dev_err(&sdw_mstr->dev, "Channel prepare failed\n");
+ return ret;
+ }
+
+ /* change stream state to prepare */
+ if (last_node)
+ sdw_rt->stream_state = SDW_STATE_PREPARE_STREAM;
+
+
+ return ret;
+}
+
+/**
+ * sdw_pre_en_dis_unprep_op - returns Success
+ * -EINVAL - In case of error.
+ *
+ * This function is called by sdw_bus_calc_bw
+ * and sdw_bus_calc_bw_dis to prepare, enable,
+ * unprepare and disable ports. Based on the state
+ * value, the corresponding individual API is called.
+ */
+int sdw_pre_en_dis_unprep_op(struct sdw_mstr_runtime *sdw_mstr_rt,
+ struct sdw_runtime *sdw_rt, int state)
+{
+ struct sdw_master *sdw_mstr = NULL;
+ struct sdw_bus *sdw_mstr_bs = NULL;
+ int ret = 0;
+
+ /* Get bus structure for master */
+ sdw_mstr_bs = master_to_bus(sdw_mstr_rt->mstr);
+ if (!sdw_mstr_bs)
+ return -EINVAL;
+
+ sdw_mstr = sdw_mstr_bs->mstr;
+
+ /*
+	 * All the required data structures are available;
+	 * calculate BW for the master controller
+ */
+
+ switch (state) {
+
+ case SDW_STATE_PREPARE_STREAM: /* Prepare */
+ ret = sdw_prepare_op(sdw_mstr_bs, sdw_mstr_rt, sdw_rt);
+ break;
+ case SDW_STATE_ENABLE_STREAM: /* Enable */
+ ret = sdw_enable_op(sdw_mstr_bs, sdw_mstr_rt, sdw_rt);
+ break;
+ case SDW_STATE_DISABLE_STREAM: /* Disable */
+ ret = sdw_disable_op(sdw_mstr_bs, sdw_mstr_rt, sdw_rt);
+ break;
+ case SDW_STATE_UNPREPARE_STREAM: /* UnPrepare */
+ ret = sdw_unprepare_op(sdw_mstr_bs, sdw_mstr_rt, sdw_rt);
+ break;
+ default:
+ ret = -EINVAL;
+ break;
+
+ }
+
+ return ret;
+}
+
+/**
+ * sdw_bus_calc_bw - returns Success
+ * -EINVAL - In case of error.
+ *
+ *
+ * This function is called from sdw_prepare_and_enable
+ * whenever a new stream is processed. Based on the
+ * stream associated with the controller, it calculates
+ * the required bandwidth, clock and frameshape, computes
+ * all transport params for a given port, enables the
+ * channels & performs the bankswitch.
+ */
+int sdw_bus_calc_bw(struct sdw_stream_tag *stream_tag, bool enable)
+{
+
+ struct sdw_runtime *sdw_rt = stream_tag->sdw_rt;
+ struct sdw_mstr_runtime *sdw_mstr_rt = NULL;
+ struct sdw_bus *sdw_mstr_bs = NULL;
+ struct sdw_master *sdw_mstr = NULL;
+ int ret = 0;
+
+
+ /*
+ * TBD: check for mstr_rt is in configured state or not
+ * If yes, then configure masters as well
+ * If no, then do not configure/enable master related parameters
+ */
+
+	/* BW calculation for active master controller for given stream tag */
+ list_for_each_entry(sdw_mstr_rt, &sdw_rt->mstr_rt_list,
+ mstr_sdw_node) {
if (sdw_mstr_rt->mstr == NULL)
break;
- last_rt = list_last_entry(&sdw_rt->mstr_rt_list,
- struct sdw_mstr_runtime, mstr_sdw_node);
- if (sdw_mstr_rt == last_rt)
- last_node = true;
- else
- last_node = false;
+ if ((sdw_rt->stream_state != SDW_STATE_CONFIG_STREAM) &&
+ (sdw_rt->stream_state != SDW_STATE_UNPREPARE_STREAM))
+ goto enable_stream;
/* Get bus structure for master */
sdw_mstr_bs = master_to_bus(sdw_mstr_rt->mstr);
- sdw_mstr = sdw_mstr_bs->mstr;
+ if (!sdw_mstr_bs)
+ return -EINVAL;
+ sdw_mstr = sdw_mstr_bs->mstr;
+ ret = sdw_pre_en_dis_unprep_op(sdw_mstr_rt, sdw_rt,
+ SDW_STATE_PREPARE_STREAM);
+ if (ret < 0) {
+ dev_err(&sdw_mstr->dev, "Prepare Operation failed\n");
+ return -EINVAL;
+ }
+ }
- sdw_mstr_cap = &sdw_mstr_bs->mstr->mstr_capabilities;
- mstr_params = &sdw_mstr_rt->stream_params;
+enable_stream:
- if (sdw_rt->stream_state != SDW_STATE_ENABLE_STREAM)
- goto unprepare_stream;
+ list_for_each_entry(sdw_mstr_rt, &sdw_rt->mstr_rt_list, mstr_sdw_node) {
- /* Lets do disabling of port for stream to be freed */
- list_for_each_entry(sdw_mstr_bs_rt,
- &sdw_mstr->mstr_rt_list, mstr_node) {
- if (sdw_mstr_bs_rt->mstr == NULL)
- continue;
+ if (sdw_mstr_rt->mstr == NULL)
+ break;
- /*
- * Disable channel for slave and
- * master on current bank
- */
- ret = sdw_en_dis_mstr_slv(sdw_mstr_bs, sdw_rt, false);
- if (ret < 0) {
- /* TBD: Undo all the computation */
- dev_err(&sdw_mstr->dev, "Ch dis failed\n");
- return ret;
- }
+ if ((!enable) ||
+ (sdw_rt->stream_state != SDW_STATE_PREPARE_STREAM))
+ return 0;
+ sdw_mstr_bs = master_to_bus(sdw_mstr_rt->mstr);
+ if (!sdw_mstr_bs)
+ return -EINVAL;
- /* Change stream state to disable */
- if (last_node)
- sdw_rt->stream_state = SDW_STATE_DISABLE_STREAM;
- }
+ sdw_mstr = sdw_mstr_bs->mstr;
- ret = sdw_cfg_bs_params(sdw_mstr_bs, sdw_mstr_bs_rt, false);
+ ret = sdw_pre_en_dis_unprep_op(sdw_mstr_rt, sdw_rt,
+ SDW_STATE_ENABLE_STREAM);
if (ret < 0) {
- /* TBD: Undo all the computation */
- dev_err(&sdw_mstr->dev, "xport params config failed\n");
- return ret;
+ dev_err(&sdw_mstr->dev, "Enable Operation failed\n");
+ return -EINVAL;
}
+ }
- sel_col = sdw_mstr_bs->col;
- sel_row = sdw_mstr_bs->row;
-
- if ((last_node) && (sdw_mstr->link_sync_mask)) {
-
- list_for_each_entry(mstr_rt_act, &sdw_rt->mstr_rt_list,
- mstr_sdw_node) {
- if (mstr_rt_act->mstr == NULL)
- break;
- /* Get bus structure for master */
- mstr_bs_act = master_to_bus(mstr_rt_act->mstr);
- ops = mstr_bs_act->mstr->driver->mstr_port_ops;
- /* Run for all mstr_list and
- * pre_activate ports
- */
- if (ops->dpn_port_activate_ch_pre) {
- ret = ops->dpn_port_activate_ch_pre
- (mstr_bs_act->mstr, NULL, 0);
- if (ret < 0)
- return ret;
- }
- }
- list_for_each_entry(mstr_rt_act,
- &sdw_rt->mstr_rt_list, mstr_sdw_node) {
- if (mstr_rt_act->mstr == NULL)
- break;
+ return 0;
+}
+EXPORT_SYMBOL_GPL(sdw_bus_calc_bw);
- /* Get bus structure for master */
- mstr_bs_act = master_to_bus(mstr_rt_act->mstr);
- /* Configure Frame Shape/Switch Bank */
- ret = sdw_configure_frmshp_bnkswtch_mm(
- mstr_bs_act,
- sel_col, sel_row);
- if (ret < 0) {
- /* TBD: Undo all the computation */
- dev_err(&sdw_mstr->dev, "bank switch failed\n");
- return ret;
- }
- }
+/**
+ * sdw_bus_calc_bw_dis - returns Success
+ * -EINVAL - In case of error.
+ *
+ *
+ * This function is called from sdw_disable_and_unprepare
+ * whenever a stream is ended. It disables/unprepares the
+ * port/channel of the associated stream, recalculates the
+ * required bandwidth, clock and frameshape, computes
+ * all transport params for a given port, enables the
+ * channels & performs the bankswitch for the remaining
+ * streams on the given controller.
+ */
+int sdw_bus_calc_bw_dis(struct sdw_stream_tag *stream_tag, bool unprepare)
+{
+ struct sdw_runtime *sdw_rt = stream_tag->sdw_rt;
+ struct sdw_mstr_runtime *sdw_mstr_rt = NULL;
+ struct sdw_bus *sdw_mstr_bs = NULL;
+ struct sdw_master *sdw_mstr = NULL;
+ int ret = 0;
- list_for_each_entry(mstr_rt_act, &sdw_rt->mstr_rt_list,
- mstr_sdw_node) {
- if (mstr_rt_act->mstr == NULL)
- break;
+	/* BW calculation for active master controller for given stream tag */
+ list_for_each_entry(sdw_mstr_rt,
+ &sdw_rt->mstr_rt_list, mstr_sdw_node) {
- /* Get bus structure for master */
- mstr_bs_act = master_to_bus(mstr_rt_act->mstr);
- ops = mstr_bs_act->mstr->driver->mstr_port_ops;
- /* Run for all mstr_list and
- * post_activate ports
- */
- if (ops->dpn_port_activate_ch_post) {
- ret = ops->dpn_port_activate_ch_post
- (mstr_bs_act->mstr, NULL, 0);
- if (ret < 0)
- return ret;
- }
+ if (sdw_mstr_rt->mstr == NULL)
+ break;
- }
- list_for_each_entry(mstr_rt_act, &sdw_rt->mstr_rt_list,
- mstr_sdw_node) {
- if (mstr_rt_act->mstr == NULL)
- break;
+ if (sdw_rt->stream_state != SDW_STATE_ENABLE_STREAM)
+ goto unprepare_stream;
- /* Get bus structure for master */
- mstr_bs_act = master_to_bus(mstr_rt_act->mstr);
- ret = sdw_configure_frmshp_bnkswtch_mm_wait(
- mstr_bs_act);
- }
- }
- if (!sdw_mstr->link_sync_mask) {
+ /* Get bus structure for master */
+ sdw_mstr_bs = master_to_bus(sdw_mstr_rt->mstr);
+ if (!sdw_mstr_bs)
+ return -EINVAL;
- /* Configure Frame Shape/Switch Bank */
- ret = sdw_configure_frmshp_bnkswtch(sdw_mstr_bs,
- sel_col, sel_row);
- if (ret < 0) {
- /* TBD: Undo all the computation */
- dev_err(&sdw_mstr->dev, "bank switch failed\n");
- return ret;
- }
- }
- /* Disable all channels enabled on previous bank */
- ret = sdw_dis_chan(sdw_mstr_bs, sdw_mstr_bs_rt);
+ sdw_mstr = sdw_mstr_bs->mstr;
+ ret = sdw_pre_en_dis_unprep_op(sdw_mstr_rt, sdw_rt,
+ SDW_STATE_DISABLE_STREAM);
if (ret < 0) {
- /* TBD: Undo all the computation */
- dev_err(&sdw_mstr->dev, "Channel disabled failed\n");
- return ret;
+ dev_err(&sdw_mstr->dev, "Disable Operation failed\n");
+ return -EINVAL;
}
}
+
unprepare_stream:
list_for_each_entry(sdw_mstr_rt,
&sdw_rt->mstr_rt_list, mstr_sdw_node) {
if (sdw_mstr_rt->mstr == NULL)
break;
+ if ((!unprepare) ||
+ (sdw_rt->stream_state != SDW_STATE_DISABLE_STREAM))
+ return 0;
- last_rt = list_last_entry(&sdw_rt->mstr_rt_list,
- struct sdw_mstr_runtime, mstr_sdw_node);
- if (sdw_mstr_rt == last_rt)
- last_node = true;
- else
- last_node = false;
-
- /* Get bus structure for master */
sdw_mstr_bs = master_to_bus(sdw_mstr_rt->mstr);
+ if (!sdw_mstr_bs)
+ return -EINVAL;
+
sdw_mstr = sdw_mstr_bs->mstr;
+ ret = sdw_pre_en_dis_unprep_op(sdw_mstr_rt, sdw_rt,
+ SDW_STATE_UNPREPARE_STREAM);
+ if (ret < 0) {
+ dev_err(&sdw_mstr->dev, "Unprepare Operation failed\n");
+ return -EINVAL;
+ }
+ }
+ return 0;
+}
+EXPORT_SYMBOL_GPL(sdw_bus_calc_bw_dis);
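+
+/*
+ * Usage sketch (illustrative only; the caller names are the ones quoted
+ * in the kernel-doc comments above and stream_tag is assumed valid):
+ *
+ *	ret = sdw_bus_calc_bw(stream_tag, true);      prepare + enable path
+ *	ret = sdw_bus_calc_bw_dis(stream_tag, true);  disable + unprepare path
+ */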
- sdw_mstr_cap = &sdw_mstr_bs->mstr->mstr_capabilities;
- mstr_params = &sdw_mstr_rt->stream_params;
+/*
+ * sdw_slv_dp0_en_dis - returns Success
+ * -EINVAL - In case of error.
+ *
+ *
+ * This function enables/disables Slave DP0 channels.
+ */
+int sdw_slv_dp0_en_dis(struct sdw_bus *mstr_bs,
+ bool is_enable, u8 slv_number)
+{
+ struct sdw_msg wr_msg, rd_msg;
+ int ret = 0;
+ int banktouse;
+ u8 wbuf[1] = {0};
+ u8 rbuf[1] = {0};
- if ((!unprepare) ||
- (sdw_rt->stream_state != SDW_STATE_DISABLE_STREAM))
- return 0;
+ /* Get current bank in use from bus structure*/
+ banktouse = mstr_bs->active_bank;
+ banktouse = !banktouse;
- /* 1. Un-prepare master and slave port */
- list_for_each_entry(sdw_mstr_bs_rt, &sdw_mstr->mstr_rt_list,
- mstr_node) {
- if (sdw_mstr_bs_rt->mstr == NULL)
- continue;
- ret = sdw_prep_unprep_mstr_slv(sdw_mstr_bs,
- sdw_rt, false);
- if (ret < 0) {
- /* TBD: Undo all the computation */
- dev_err(&sdw_mstr->dev, "Ch unprep failed\n");
- return ret;
- }
+ rd_msg.addr = wr_msg.addr = ((SDW_DPN_CHANNELEN +
+ (SDW_BANK1_REGISTER_OFFSET * banktouse)) +
+ (SDW_NUM_DATA_PORT_REGISTERS *
+ 0x0));
+ rd_msg.ssp_tag = 0x0;
+ rd_msg.flag = SDW_MSG_FLAG_READ;
+ rd_msg.len = 1;
+ rd_msg.slave_addr = slv_number;
+ rd_msg.buf = rbuf;
+ rd_msg.addr_page1 = 0x0;
+ rd_msg.addr_page2 = 0x0;
- /* change stream state to unprepare */
- if (last_node)
- sdw_rt->stream_state =
- SDW_STATE_UNPREPARE_STREAM;
- }
+ wr_msg.ssp_tag = 0x0;
+ wr_msg.flag = SDW_MSG_FLAG_WRITE;
+ wr_msg.len = 1;
+ wr_msg.slave_addr = slv_number;
+ wr_msg.buf = wbuf;
+ wr_msg.addr_page1 = 0x0;
+ wr_msg.addr_page2 = 0x0;
- /*
- * Calculate new bandwidth, frame size
- * and total BW required for master controller
- */
- sdw_mstr_rt->stream_bw = mstr_params->rate *
- mstr_params->channel_count * mstr_params->bps;
- stream_frame_size = mstr_params->channel_count *
- mstr_params->bps;
+ ret = sdw_slave_transfer(mstr_bs->mstr, &rd_msg, 1);
+ if (ret != 1) {
+ ret = -EINVAL;
+ dev_err(&mstr_bs->mstr->dev,
+ "Register transfer failed\n");
+ goto out;
+ }
- sdw_mstr_bs->bandwidth -= sdw_mstr_rt->stream_bw;
+ if (is_enable)
+ wbuf[0] = (rbuf[0] | 0x1);
+ else
+ wbuf[0] = (rbuf[0] & ~(0x1));
- /* Something went wrong in bandwidth calulation */
- if (sdw_mstr_bs->bandwidth < 0) {
- dev_err(&sdw_mstr->dev, "BW calculation failed\n");
- return -EINVAL;
- }
+ ret = sdw_slave_transfer(mstr_bs->mstr, &wr_msg, 1);
+ if (ret != 1) {
+ ret = -EINVAL;
+ dev_err(&mstr_bs->mstr->dev,
+ "Register transfer failed\n");
+ goto out;
+ }
- if (!sdw_mstr_bs->bandwidth) {
- /*
- * Last stream on master should
- * return successfully
- */
- if (last_node)
- sdw_rt->stream_state =
- SDW_STATE_UNCOMPUTE_STREAM;
- continue;
- }
+ rbuf[0] = 0;
+ /* This is just status read, can be removed later */
+ ret = sdw_slave_transfer(mstr_bs->mstr, &rd_msg, 1);
+ if (ret != 1) {
+ ret = -EINVAL;
+ dev_err(&mstr_bs->mstr->dev,
+ "Register transfer failed\n");
+ goto out;
+ }
+out:
+ return ret;
- ret = sdw_get_clock_frmshp(sdw_mstr_bs, &frame_interval,
- &sel_col, &sel_row);
- if (ret < 0) {
- /* TBD: Undo all the computation */
- dev_err(&sdw_mstr->dev, "clock/frameshape failed\n");
+}
+
+
+/*
+ * sdw_mstr_dp0_act_dis - returns Success
+ * -EINVAL - In case of error.
+ *
+ *
+ * This function enables/disables Master DP0 channels.
+ */
+int sdw_mstr_dp0_act_dis(struct sdw_bus *mstr_bs, bool is_enable)
+{
+ struct sdw_mstr_driver *ops = mstr_bs->mstr->driver;
+ struct sdw_activate_ch activate_ch;
+ int banktouse, ret = 0;
+
+ activate_ch.num = 0;
+ activate_ch.ch_mask = 0x1;
+ activate_ch.activate = is_enable; /* Enable/Disable */
+
+ /* Get current bank in use from bus structure*/
+ banktouse = mstr_bs->active_bank;
+ banktouse = !banktouse;
+
+ /* 1. Master port enable_ch_pre */
+ if (ops->mstr_port_ops->dpn_port_activate_ch_pre) {
+ ret = ops->mstr_port_ops->dpn_port_activate_ch_pre
+ (mstr_bs->mstr, &activate_ch, banktouse);
+ if (ret < 0)
return ret;
- }
+ }
- /* Compute new transport params for running streams */
- /* No sorting required here */
+ /* 2. Master port enable */
+ if (ops->mstr_port_ops->dpn_port_activate_ch) {
+ ret = ops->mstr_port_ops->dpn_port_activate_ch(mstr_bs->mstr,
+ &activate_ch, banktouse);
+ if (ret < 0)
+ return ret;
+ }
- /* Compute system interval */
- ret = sdw_compute_sys_interval(sdw_mstr_bs, sdw_mstr_cap,
- frame_interval);
- if (ret < 0) {
- /* TBD: Undo all the computation */
- dev_err(&sdw_mstr->dev, "compute SI failed\n");
+ /* 3. Master port enable_ch_post */
+ if (ops->mstr_port_ops->dpn_port_activate_ch_post) {
+ ret = ops->mstr_port_ops->dpn_port_activate_ch_post
+ (mstr_bs->mstr, &activate_ch, banktouse);
+ if (ret < 0)
return ret;
- }
+ }
- /* Compute hstart/hstop */
- ret = sdw_compute_hstart_hstop(sdw_mstr_bs, sel_col);
- if (ret < 0) {
- /* TBD: Undo all the computation */
- dev_err(&sdw_mstr->dev, "compute hstart/hstop fail\n");
+ return 0;
+}
+
+/*
+ * sdw_slv_dp0_prep_unprep - returns Success
+ * -EINVAL - In case of error.
+ *
+ *
+ * This function prepares/unprepares Slave DP0.
+ */
+int sdw_slv_dp0_prep_unprep(struct sdw_bus *mstr_bs,
+ u8 slv_number, bool prepare)
+{
+ struct sdw_msg wr_msg, rd_msg;
+ int ret = 0;
+ int banktouse;
+ u8 wbuf[1] = {0};
+ u8 rbuf[1] = {0};
+
+ /* Get current bank in use from bus structure*/
+ banktouse = mstr_bs->active_bank;
+ banktouse = !banktouse;
+
+ /* Read SDW_DPN_PREPARECTRL register */
+ rd_msg.addr = wr_msg.addr = SDW_DPN_PREPARECTRL +
+ (SDW_NUM_DATA_PORT_REGISTERS * 0x0);
+ rd_msg.ssp_tag = 0x0;
+ rd_msg.flag = SDW_MSG_FLAG_READ;
+ rd_msg.len = 1;
+ rd_msg.slave_addr = slv_number;
+ rd_msg.buf = rbuf;
+ rd_msg.addr_page1 = 0x0;
+ rd_msg.addr_page2 = 0x0;
+
+ wr_msg.ssp_tag = 0x0;
+ wr_msg.flag = SDW_MSG_FLAG_WRITE;
+ wr_msg.len = 1;
+ wr_msg.slave_addr = slv_number;
+ wr_msg.buf = wbuf;
+ wr_msg.addr_page1 = 0x0;
+ wr_msg.addr_page2 = 0x0;
+
+ ret = sdw_slave_transfer(mstr_bs->mstr, &rd_msg, 1);
+ if (ret != 1) {
+ ret = -EINVAL;
+ dev_err(&mstr_bs->mstr->dev,
+ "Register transfer failed\n");
+ goto out;
+ }
+
+ if (prepare)
+ wbuf[0] = (rbuf[0] | 0x1);
+ else
+ wbuf[0] = (rbuf[0] & ~(0x1));
+
+ /*
+	 * TBD: 1. Poll for the prepare interrupt bit
+	 * before calling post_prepare.
+	 * 2. Check capabilities; if simplified
+	 * CM, no need to prepare.
+ */
+ ret = sdw_slave_transfer(mstr_bs->mstr, &wr_msg, 1);
+ if (ret != 1) {
+ ret = -EINVAL;
+ dev_err(&mstr_bs->mstr->dev,
+ "Register transfer failed\n");
+ goto out;
+ }
+
+ /*
+ * Sleep for 100ms.
+	 * TODO: check on prepare status for port_ready
+ */
+ msleep(100);
+
+out:
+ return ret;
+
+}
+
+/*
+ * sdw_mstr_dp0_prep_unprep - returns Success
+ * -EINVAL - In case of error.
+ *
+ *
+ * This function prepares/unprepares Master DP0.
+ */
+int sdw_mstr_dp0_prep_unprep(struct sdw_bus *mstr_bs,
+ bool prep)
+{
+ struct sdw_mstr_driver *ops = mstr_bs->mstr->driver;
+ struct sdw_prepare_ch prep_ch;
+ int ret = 0;
+
+ prep_ch.num = 0x0;
+ prep_ch.ch_mask = 0x1;
+ prep_ch.prepare = prep; /* Prepare/Unprepare */
+
+ /* 1. Master port prepare_ch_pre */
+ if (ops->mstr_port_ops->dpn_port_prepare_ch_pre) {
+ ret = ops->mstr_port_ops->dpn_port_prepare_ch_pre
+ (mstr_bs->mstr, &prep_ch);
+ if (ret < 0)
return ret;
- }
+ }
+
+ /* 2. Master port prepare */
+ if (ops->mstr_port_ops->dpn_port_prepare_ch) {
+ ret = ops->mstr_port_ops->dpn_port_prepare_ch
+ (mstr_bs->mstr, &prep_ch);
+ if (ret < 0)
+ return ret;
+ }
+
+ /* 3. Master port prepare_ch_post */
+ if (ops->mstr_port_ops->dpn_port_prepare_ch_post) {
+ ret = ops->mstr_port_ops->dpn_port_prepare_ch_post
+ (mstr_bs->mstr, &prep_ch);
+ if (ret < 0)
+ return ret;
+ }
+
+ return 0;
+}
+
+static int sdw_bra_config_ops(struct sdw_bus *sdw_mstr_bs,
+ struct sdw_bra_block *block,
+ struct sdw_transport_params *t_params,
+ struct sdw_port_params *p_params)
+{
+ struct sdw_mstr_driver *ops;
+ int ret, banktouse;
+
+ /* configure Master transport params */
+ ret = sdw_cfg_mstr_params(sdw_mstr_bs, t_params, p_params);
+ if (ret < 0) {
+ dev_err(&sdw_mstr_bs->mstr->dev, "BRA: Master xport params config failed\n");
+ return ret;
+ }
+
+ /* configure Slave transport params */
+ ret = sdw_cfg_slv_params(sdw_mstr_bs, t_params,
+ p_params, block->slave_addr);
+ if (ret < 0) {
+ dev_err(&sdw_mstr_bs->mstr->dev, "BRA: Slave xport params config failed\n");
+ return ret;
+ }
+
+ /* Get master driver ops */
+ ops = sdw_mstr_bs->mstr->driver;
+
+ /* Configure SSP */
+ banktouse = sdw_mstr_bs->active_bank;
+ banktouse = !banktouse;
- /* Compute block offset */
- ret = sdw_compute_blk_subblk_offset(sdw_mstr_bs);
+ if (ops->mstr_ops->set_ssp_interval) {
+ ret = ops->mstr_ops->set_ssp_interval(sdw_mstr_bs->mstr,
+ 24, banktouse);
if (ret < 0) {
- /* TBD: Undo all the computation */
- dev_err(&sdw_mstr->dev, "compute block offset failed\n");
+ dev_err(&sdw_mstr_bs->mstr->dev, "BRA: SSP interval config failed\n");
return ret;
}
+ }
- /* Configure bus params */
- ret = sdw_cfg_bs_params(sdw_mstr_bs, sdw_mstr_bs_rt, true);
+ /* Configure Clock */
+ if (ops->mstr_ops->set_clock_freq) {
+ ret = ops->mstr_ops->set_clock_freq(sdw_mstr_bs->mstr,
+ sdw_mstr_bs->clk_div, banktouse);
if (ret < 0) {
- /* TBD: Undo all the computation */
- dev_err(&sdw_mstr->dev, "xport params config failed\n");
+ dev_err(&sdw_mstr_bs->mstr->dev, "BRA: Clock config failed\n");
return ret;
}
- if ((last_node) && (sdw_mstr->link_sync_mask)) {
- list_for_each_entry(mstr_rt_act, &sdw_rt->mstr_rt_list,
- mstr_sdw_node) {
+ }
- if (mstr_rt_act->mstr == NULL)
- break;
+ return 0;
+}
- /* Get bus structure for master */
- mstr_bs_act = master_to_bus(mstr_rt_act->mstr);
+static int sdw_bra_xport_config_enable(struct sdw_bus *sdw_mstr_bs,
+ struct sdw_bra_block *block,
+ struct sdw_transport_params *t_params,
+ struct sdw_port_params *p_params)
+{
+ int ret;
- ops = mstr_bs_act->mstr->driver->mstr_port_ops;
+ /* Prepare sequence */
+ ret = sdw_bra_config_ops(sdw_mstr_bs, block, t_params, p_params);
+ if (ret < 0) {
+ dev_err(&sdw_mstr_bs->mstr->dev, "BRA: config operation failed\n");
+ return ret;
+ }
- /*
- * Run for all mstr_list and
- * pre_activate ports
- */
- if (ops->dpn_port_activate_ch_pre) {
- ret = ops->dpn_port_activate_ch_pre
- (mstr_bs_act->mstr, NULL, 0);
- if (ret < 0)
- return ret;
- }
- }
- list_for_each_entry(mstr_rt_act, &sdw_rt->mstr_rt_list,
- mstr_sdw_node) {
+ /* Bank Switch */
+ ret = sdw_cfg_frmshp_bnkswtch(sdw_mstr_bs, false);
+ if (ret < 0) {
+ dev_err(&sdw_mstr_bs->mstr->dev, "BRA: bank switch failed\n");
+ return ret;
+ }
- if (mstr_rt_act->mstr == NULL)
- break;
+ /*
+	 * TODO: There may be some Slaves which don't support
+	 * prepare for DP0. We have two options here:
+	 * 1. Just call prepare and ignore errors from those
+	 * codecs that don't support prepare for DP0.
+	 * 2. Get Slave capabilities and, based on DP0 prepare
+	 * support, program the Slave prepare register.
+	 * Currently going with approach 1, not checking the
+	 * return value.
+	 * 3. Try to use the existing prep_unprep API for both
+	 * Master and Slave.
+ */
+ sdw_slv_dp0_prep_unprep(sdw_mstr_bs, block->slave_addr, true);
- /* Get bus structure for master */
- mstr_bs_act = master_to_bus(
- mstr_rt_act->mstr);
-
- /* Configure Frame Shape/Switch Bank */
- ret = sdw_configure_frmshp_bnkswtch_mm(
- mstr_bs_act,
- sel_col, sel_row);
- if (ret < 0) {
- /* TBD: Undo all the computation */
- dev_err(&sdw_mstr->dev,
- "bank switch failed\n");
- return ret;
- }
- }
+ /* Prepare Master port */
+ ret = sdw_mstr_dp0_prep_unprep(sdw_mstr_bs, true);
+ if (ret < 0) {
+ dev_err(&sdw_mstr_bs->mstr->dev, "BRA: Master prepare failed\n");
+ return ret;
+ }
- list_for_each_entry(mstr_rt_act, &sdw_rt->mstr_rt_list,
- mstr_sdw_node) {
+ /* Enable sequence */
+ ret = sdw_bra_config_ops(sdw_mstr_bs, block, t_params, p_params);
+ if (ret < 0) {
+ dev_err(&sdw_mstr_bs->mstr->dev, "BRA: config operation failed\n");
+ return ret;
+ }
+ /* Enable DP0 channel (Slave) */
+ ret = sdw_slv_dp0_en_dis(sdw_mstr_bs, true, block->slave_addr);
+ if (ret < 0) {
+ dev_err(&sdw_mstr_bs->mstr->dev, "BRA: Slave DP0 enable failed\n");
+ return ret;
+ }
- if (mstr_rt_act->mstr == NULL)
- break;
+ /* Enable DP0 channel (Master) */
+ ret = sdw_mstr_dp0_act_dis(sdw_mstr_bs, true);
+ if (ret < 0) {
+ dev_err(&sdw_mstr_bs->mstr->dev, "BRA: Master DP0 enable failed\n");
+ return ret;
+ }
- /* Get bus structure for master */
- mstr_bs_act = master_to_bus(mstr_rt_act->mstr);
+ /* Bank Switch */
+ ret = sdw_cfg_frmshp_bnkswtch(sdw_mstr_bs, false);
+ if (ret < 0) {
+ dev_err(&sdw_mstr_bs->mstr->dev, "BRA: bank switch failed\n");
+ return ret;
+ }
- ops = mstr_bs_act->mstr->driver->mstr_port_ops;
+ return 0;
+}
- /* Run for all mstr_list and
- * post_activate ports
- */
- if (ops->dpn_port_activate_ch_post) {
- ret = ops->dpn_port_activate_ch_post
- (mstr_bs_act->mstr, NULL, 0);
- if (ret < 0)
- return ret;
- }
- }
- list_for_each_entry(mstr_rt_act, &sdw_rt->mstr_rt_list,
- mstr_sdw_node) {
+static int sdw_bra_xport_config_disable(struct sdw_bus *sdw_mstr_bs,
+ struct sdw_bra_block *block)
+{
+ int ret;
- if (mstr_rt_act->mstr == NULL)
- break;
+ /* Disable DP0 channel (Slave) */
+ ret = sdw_slv_dp0_en_dis(sdw_mstr_bs, false, block->slave_addr);
+ if (ret < 0) {
+ dev_err(&sdw_mstr_bs->mstr->dev, "BRA: Slave DP0 disable failed\n");
+ return ret;
+ }
- /* Get bus structure for master */
- mstr_bs_act = master_to_bus(mstr_rt_act->mstr);
- ret = sdw_configure_frmshp_bnkswtch_mm_wait(
- mstr_bs_act);
- }
- }
- if (!sdw_mstr->link_sync_mask) {
- /* Configure Frame Shape/Switch Bank */
- ret = sdw_configure_frmshp_bnkswtch(sdw_mstr_bs,
- sel_col, sel_row);
- if (ret < 0) {
- /* TBD: Undo all the computation */
- dev_err(&sdw_mstr->dev, "bank switch failed\n");
- return ret;
- }
+ /* Disable DP0 channel (Master) */
+ ret = sdw_mstr_dp0_act_dis(sdw_mstr_bs, false);
+ if (ret < 0) {
+ dev_err(&sdw_mstr_bs->mstr->dev, "BRA: Master DP0 disable failed\n");
+ return ret;
+ }
+
+ /* Bank Switch */
+ ret = sdw_cfg_frmshp_bnkswtch(sdw_mstr_bs, false);
+ if (ret < 0) {
+ dev_err(&sdw_mstr_bs->mstr->dev, "BRA: bank switch failed\n");
+ return ret;
+ }
+ /*
+	 * TODO: There may be some Slaves which don't support
+	 * de-prepare for DP0. We have two options here:
+	 * 1. Just call de-prepare and ignore errors from those
+	 * codecs that don't support de-prepare for DP0.
+	 * 2. Get Slave capabilities and, based on DP0 prepare
+	 * support, program the Slave prepare register.
+	 * Currently going with approach 1, not checking the
+	 * return value.
+ */
+ sdw_slv_dp0_prep_unprep(sdw_mstr_bs, block->slave_addr, false);
+
+ /* De-prepare Master port */
+ ret = sdw_mstr_dp0_prep_unprep(sdw_mstr_bs, false);
+ if (ret < 0) {
+ dev_err(&sdw_mstr_bs->mstr->dev, "BRA: Master de-prepare failed\n");
+ return ret;
+ }
+
+ return 0;
+}
+
+int sdw_bus_bra_xport_config(struct sdw_bus *sdw_mstr_bs,
+ struct sdw_bra_block *block, bool enable)
+{
+ struct sdw_transport_params t_params;
+ struct sdw_port_params p_params;
+ int ret;
+
+ /* TODO:
+	 * Compute transport parameters based on the current clock and
+	 * frameshape. Need to check how the algorithm should be designed
+	 * for BRA to compute clock, frameshape, SSP and transport params.
+ */
+
+ /* Transport Parameters */
+ t_params.num = 0x0; /* DP 0 */
+ t_params.blockpackingmode = 0x0;
+ t_params.blockgroupcontrol_valid = false;
+ t_params.blockgroupcontrol = 0x0;
+ t_params.lanecontrol = 0;
+ t_params.sample_interval = 10;
+
+ t_params.hstart = 7;
+ t_params.hstop = 9;
+ t_params.offset1 = 0;
+ t_params.offset2 = 0;
+
+ /* Port Parameters */
+ p_params.num = 0x0; /* DP 0 */
+
+ /* Isochronous Mode */
+ p_params.port_flow_mode = 0x0;
+
+ /* Normal Mode */
+ p_params.port_data_mode = 0x0;
+
+ /* Word length */
+ p_params.word_length = 3;
+
+ /* Frameshape and clock params */
+ sdw_mstr_bs->clk_div = 1;
+ sdw_mstr_bs->col = 10;
+ sdw_mstr_bs->row = 80;
+
+#if IS_ENABLED(CONFIG_SND_SOC_INTEL_CNL_FPGA)
+ sdw_mstr_bs->bandwidth = 9.6 * 1000 * 1000;
+#else
+ sdw_mstr_bs->bandwidth = 12 * 1000 * 1000;
+#endif
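+
+	/*
+	 * Sketch of the resulting numbers (assuming "bandwidth" here is the
+	 * raw bus bit rate): an 80-row x 10-column frame carries 800 bits,
+	 * so 12000000 / 800 = 15000 frames/s, and the 9.6 MHz FPGA value
+	 * gives 9600000 / 800 = 12000 frames/s.
+	 */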
+
+ if (enable) {
+ ret = sdw_bra_xport_config_enable(sdw_mstr_bs, block,
+ &t_params, &p_params);
+ if (ret < 0) {
+ dev_err(&sdw_mstr_bs->mstr->dev, "BRA: Xport params config failed\n");
+ return ret;
}
- /* Change stream state to uncompute */
- if (last_node)
- sdw_rt->stream_state = SDW_STATE_UNCOMPUTE_STREAM;
- /* Disable all channels enabled on previous bank */
- ret = sdw_dis_chan(sdw_mstr_bs, sdw_mstr_bs_rt);
+ } else {
+ ret = sdw_bra_xport_config_disable(sdw_mstr_bs, block);
if (ret < 0) {
- /* TBD: Undo all the computation */
- dev_err(&sdw_mstr_bs->mstr->dev,
- "Channel disabled failed\n");
+ dev_err(&sdw_mstr_bs->mstr->dev, "BRA: Xport params de-config failed\n");
return ret;
}
}
return 0;
}
-EXPORT_SYMBOL_GPL(sdw_bus_calc_bw_dis);
diff --git a/drivers/sdw/sdw_cnl.c b/drivers/sdw/sdw_cnl.c
index 274966572499..9f8e77c20699 100644
--- a/drivers/sdw/sdw_cnl.c
+++ b/drivers/sdw/sdw_cnl.c
@@ -260,8 +260,9 @@ static int sdw_config_update(struct cnl_sdw *sdw)
{
struct cnl_sdw_data *data = &sdw->data;
struct sdw_master *mstr = sdw->mstr;
-
+ int sync_reg, syncgo_mask;
volatile int config_update = 0;
+ volatile int sync_update = 0;
/* Try 10 times before giving up on configuration update */
int timeout = 10;
int config_updated = 0;
@@ -271,6 +272,44 @@ static int sdw_config_update(struct cnl_sdw *sdw)
/* Bit is self-cleared when configuration gets updated. */
cnl_sdw_reg_writel(data->sdw_regs, SDW_CNL_MCP_CONFIGUPDATE,
config_update);
+
+ /*
+ * Set SYNCGO bit for Master(s) running in aggregated mode
+ * (MMModeEN = 1). This action causes all gSyncs of all Master IPs
+ * to be unmasked and asserted at the currently active gSync rate.
+ * The initialization-pending Master IP SoundWire bus clock will
+ * start up synchronizing to gSync, leading to bus reset entry,
+ * subsequent exit, and 1st Frame generation aligning to gSync.
+	 * Note that this is done in order to overcome a hardware bug
+	 * related to mis-alignment of gSync and frame.
+ */
+ if (mstr->link_sync_mask) {
+ sync_reg = cnl_sdw_reg_readl(data->sdw_shim, SDW_CNL_SYNC);
+ sync_reg |= (CNL_SYNC_SYNCGO_MASK << CNL_SYNC_SYNCGO_SHIFT);
+ cnl_sdw_reg_writel(data->sdw_shim, SDW_CNL_SYNC, sync_reg);
+ syncgo_mask = (CNL_SYNC_SYNCGO_MASK << CNL_SYNC_SYNCGO_SHIFT);
+
+ do {
+ sync_update = cnl_sdw_reg_readl(data->sdw_shim,
+ SDW_CNL_SYNC);
+ if ((sync_update & syncgo_mask) == 0)
+ break;
+
+ msleep(20);
+ timeout--;
+
+ } while (timeout);
+
+ if ((sync_update & syncgo_mask) != 0) {
+ dev_err(&mstr->dev, "Failed to set sync go\n");
+ return -EIO;
+ }
+
+ /* Reset timeout */
+ timeout = 10;
+ }
+
+	/* Wait for config update bit to be self-cleared */
do {
config_update = cnl_sdw_reg_readl(data->sdw_regs,
SDW_CNL_MCP_CONFIGUPDATE);
@@ -369,12 +408,10 @@ static int sdw_pdm_pdi_init(struct cnl_sdw *sdw)
int pdm_cap, pdm_ch_count, total_pdm_streams;
int pdm_cap_offset = SDW_CNL_PDMSCAP +
(data->inst_id * SDW_CNL_PDMSCAP_REG_OFFSET);
-
- pdm_cap = cnl_sdw_reg_readw(data->sdw_regs, pdm_cap_offset);
+ pdm_cap = cnl_sdw_reg_readw(data->sdw_shim, pdm_cap_offset);
sdw->num_pdm_streams = (pdm_cap >> CNL_PDMSCAP_BSS_SHIFT) &
CNL_PDMSCAP_BSS_MASK;
- /* Zero based value in register */
- sdw->num_pdm_streams++;
+
sdw->pdm_streams = devm_kzalloc(&mstr->dev,
sdw->num_pdm_streams * sizeof(struct cnl_sdw_pdi_stream),
GFP_KERNEL);
@@ -383,8 +420,7 @@ static int sdw_pdm_pdi_init(struct cnl_sdw *sdw)
sdw->num_in_pdm_streams = (pdm_cap >> CNL_PDMSCAP_ISS_SHIFT) &
CNL_PDMSCAP_ISS_MASK;
- /* Zero based value in register */
- sdw->num_in_pdm_streams++;
+
sdw->in_pdm_streams = devm_kzalloc(&mstr->dev,
sdw->num_in_pdm_streams * sizeof(struct cnl_sdw_pdi_stream),
GFP_KERNEL);
@@ -395,7 +431,6 @@ static int sdw_pdm_pdi_init(struct cnl_sdw *sdw)
sdw->num_out_pdm_streams = (pdm_cap >> CNL_PDMSCAP_OSS_SHIFT) &
CNL_PDMSCAP_OSS_MASK;
/* Zero based value in register */
- sdw->num_out_pdm_streams++;
sdw->out_pdm_streams = devm_kzalloc(&mstr->dev,
sdw->num_out_pdm_streams * sizeof(struct cnl_sdw_pdi_stream),
GFP_KERNEL);
@@ -443,15 +478,13 @@ static int sdw_port_pdi_init(struct cnl_sdw *sdw)
return ret;
}
-static int sdw_init(struct cnl_sdw *sdw)
+static int sdw_init(struct cnl_sdw *sdw, bool is_first_init)
{
struct sdw_master *mstr = sdw->mstr;
struct cnl_sdw_data *data = &sdw->data;
- int mcp_config, mcp_control, sync_reg;
-
+ int mcp_config, mcp_control, sync_reg, mcp_clockctrl;
volatile int sync_update = 0;
- /* Try 10 times before timing out */
- int timeout = 10;
+ int timeout = 10; /* Try 10 times before timing out */
int ret = 0;
/* Power up the link controller */
@@ -465,9 +498,11 @@ static int sdw_init(struct cnl_sdw *sdw)
/* Switch the ownership to Master IP from glue logic */
sdw_switch_to_mip(sdw);
- /* Set the Sync period to default */
+ /* Set SyncPRD period */
sync_reg = cnl_sdw_reg_readl(data->sdw_shim, SDW_CNL_SYNC);
sync_reg |= (SDW_CNL_DEFAULT_SYNC_PERIOD << CNL_SYNC_SYNCPRD_SHIFT);
+
+ /* Set SyncPU bit */
sync_reg |= (0x1 << CNL_SYNC_SYNCCPU_SHIFT);
cnl_sdw_reg_writel(data->sdw_shim, SDW_CNL_SYNC, sync_reg);
@@ -484,6 +519,39 @@ static int sdw_init(struct cnl_sdw *sdw)
return -EINVAL;
}
+ /*
+ * Set CMDSYNC bit based on Master ID
+ * Note that this bit is set only for the Master which will be
+	 * running in aggregated mode (MMModeEN = 1). By doing
+	 * this, the gSync to the Master IP is masked inactive.
+	 * Note that this is done in order to overcome a hardware bug
+	 * related to mis-alignment of gSync and frame.
+ */
+ if (mstr->link_sync_mask) {
+
+ sync_reg = cnl_sdw_reg_readl(data->sdw_shim, SDW_CNL_SYNC);
+ sync_reg |= (1 << (data->inst_id + CNL_SYNC_CMDSYNC_SHIFT));
+ cnl_sdw_reg_writel(data->sdw_shim, SDW_CNL_SYNC, sync_reg);
+ }
+
+ /* Set clock divider to default value in default bank */
+ mcp_clockctrl = cnl_sdw_reg_readl(data->sdw_regs,
+ SDW_CNL_MCP_CLOCKCTRL0);
+ mcp_clockctrl |= SDW_CNL_DEFAULT_CLK_DIVIDER;
+ cnl_sdw_reg_writel(data->sdw_regs, SDW_CNL_MCP_CLOCKCTRL0,
+ mcp_clockctrl);
+
+ /* Set the Frame shape init to default value */
+ cnl_sdw_reg_writel(data->sdw_regs, SDW_CNL_MCP_FRAMESHAPEINIT,
+ SDW_CNL_DEFAULT_FRAME_SHAPE);
+
+
+ /* Set the SSP interval to default value for both banks */
+ cnl_sdw_reg_writel(data->sdw_regs, SDW_CNL_MCP_SSPCTRL0,
+ SDW_CNL_DEFAULT_SSP_INTERVAL);
+ cnl_sdw_reg_writel(data->sdw_regs, SDW_CNL_MCP_SSPCTRL1,
+ SDW_CNL_DEFAULT_SSP_INTERVAL);
+
/* Set command acceptance mode. This is required because when
* Master broadcasts the clock_stop command to slaves, slaves
* might be already suspended, so this return NO ACK, in that
@@ -495,7 +563,6 @@ static int sdw_init(struct cnl_sdw *sdw)
MCP_CONTROL_CMDACCEPTMODE_SHIFT);
cnl_sdw_reg_writel(data->sdw_regs, SDW_CNL_MCP_CONTROL, mcp_control);
- cnl_sdw_reg_writel(data->sdw_regs, SDW_CNL_MCP_FRAMESHAPEINIT, 0x48);
mcp_config = cnl_sdw_reg_readl(data->sdw_regs, SDW_CNL_MCP_CONFIG);
/* Set Max cmd retry to 15 times */
@@ -541,22 +608,19 @@ static int sdw_init(struct cnl_sdw *sdw)
MCP_CONFIG_OPERATIONMODE_SHIFT);
cnl_sdw_reg_writel(data->sdw_regs, SDW_CNL_MCP_CONFIG, mcp_config);
- /* Set the SSP interval to 32 for both banks */
- cnl_sdw_reg_writel(data->sdw_regs, SDW_CNL_MCP_SSPCTRL0,
- SDW_CNL_DEFAULT_SSP_INTERVAL);
- cnl_sdw_reg_writel(data->sdw_regs, SDW_CNL_MCP_SSPCTRL1,
- SDW_CNL_DEFAULT_SSP_INTERVAL);
/* Initialize the phy control registers. */
sdw_init_phyctrl(sdw);
- /* Initlaize the ports */
- ret = sdw_port_pdi_init(sdw);
- if (ret) {
- dev_err(&mstr->dev, "SoundWire controller init failed %d\n",
+ if (is_first_init) {
+		/* Initialize the ports */
+ ret = sdw_port_pdi_init(sdw);
+ if (ret) {
+ dev_err(&mstr->dev, "SoundWire controller init failed %d\n",
data->inst_id);
- sdw_power_down_link(sdw);
- return ret;
+ sdw_power_down_link(sdw);
+ return ret;
+ }
}
/* Lastly enable interrupts */
@@ -604,7 +668,7 @@ static int sdw_alloc_pcm_stream(struct cnl_sdw *sdw,
pdi_stream->h_ch_num = ch_cnt - 1;
ch_map_offset = SDW_CNL_PCMSCHM +
(SDW_CNL_PCMSCHM_REG_OFFSET * mstr->nr) +
- (0x2 * pdi_stream->pdi_num);
+ (SDW_PCM_STRM_START_INDEX * pdi_stream->pdi_num);
if (port->direction == SDW_DATA_DIR_IN)
pdi_ch_map |= (CNL_PCMSYCM_DIR_MASK << CNL_PCMSYCM_DIR_SHIFT);
else
@@ -1134,10 +1198,758 @@ static enum sdw_command_response cnl_sdw_xfer_msg(struct sdw_master *mstr,
return ret;
}
+static void cnl_sdw_bra_prep_crc(u8 *txdata_buf,
+ struct sdw_bra_block *block, int data_offset, int addr_offset)
+{
+
+ int addr = addr_offset;
+
+ txdata_buf[addr++] = sdw_bus_compute_crc8((block->values + data_offset),
+ block->num_bytes);
+ txdata_buf[addr++] = 0x0;
+ txdata_buf[addr++] = 0x0;
+ txdata_buf[addr] |= ((0x2 & SDW_BRA_SOP_EOP_PDI_MASK)
+ << SDW_BRA_SOP_EOP_PDI_SHIFT);
+}
+
+static void cnl_sdw_bra_prep_data(u8 *txdata_buf,
+ struct sdw_bra_block *block, int data_offset, int addr_offset)
+{
+
+ int i;
+ int addr = addr_offset;
+
+ for (i = 0; i < block->num_bytes; i += 2) {
+
+ txdata_buf[addr++] = block->values[i + data_offset];
+ if ((block->num_bytes - 1) - i)
+ txdata_buf[addr++] = block->values[i + data_offset + 1];
+ else
+ txdata_buf[addr++] = 0;
+
+ txdata_buf[addr++] = 0;
+ txdata_buf[addr++] = 0;
+ }
+}
+
+static void cnl_sdw_bra_prep_hdr(u8 *txdata_buf,
+ struct sdw_bra_block *block, int rolling_id, int offset)
+{
+
+ u8 tmp_hdr[6] = {0, 0, 0, 0, 0, 0};
+ u8 temp = 0x0;
+
+ /*
+ * 6 bytes header
+ * 1st byte: b11001010
+ * b11: Header is active
+ * b0010: Device number 2 is selected
+ * b1: Write operation
+ * b0: MSB of BRA_NumBytes is 0
+ * 2nd byte: LSB of number of bytes
+ * 3rd byte to 6th byte: Slave register offset
+ */
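+	/*
+	 * Worked example (illustrative only, derived from the layout
+	 * described above): a 16-byte write to device 2 at register
+	 * offset 0x00004000 gives the six header bytes
+	 * 0xCA 0x10 0x00 0x00 0x40 0x00, plus a CRC byte computed
+	 * over those six bytes.
+	 */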
+ temp |= (SDW_BRA_HDR_ACTIVE & SDW_BRA_HDR_ACTIVE_MASK) <<
+ SDW_BRA_HDR_ACTIVE_SHIFT;
+ temp |= (block->slave_addr & SDW_BRA_HDR_SLV_ADDR_MASK) <<
+ SDW_BRA_HDR_SLV_ADDR_SHIFT;
+ temp |= (block->cmd & SDW_BRA_HDR_RD_WR_MASK) <<
+ SDW_BRA_HDR_RD_WR_SHIFT;
+
+ if (block->num_bytes > SDW_BRA_HDR_MSB_BYTE_CHK)
+ temp |= (SDW_BRA_HDR_MSB_BYTE_SET & SDW_BRA_HDR_MSB_BYTE_MASK);
+ else
+ temp |= (SDW_BRA_HDR_MSB_BYTE_UNSET &
+ SDW_BRA_HDR_MSB_BYTE_MASK);
+
+ txdata_buf[offset + 0] = tmp_hdr[0] = temp;
+ txdata_buf[offset + 1] = tmp_hdr[1] = block->num_bytes;
+ txdata_buf[offset + 3] |= ((SDW_BRA_SOP_EOP_PDI_STRT_VALUE &
+ SDW_BRA_SOP_EOP_PDI_MASK) <<
+ SDW_BRA_SOP_EOP_PDI_SHIFT);
+
+ txdata_buf[offset + 3] |= ((rolling_id & SDW_BRA_ROLLINGID_PDI_MASK)
+ << SDW_BRA_ROLLINGID_PDI_SHIFT);
+
+ txdata_buf[offset + 4] = tmp_hdr[2] = ((block->reg_offset &
+ SDW_BRA_HDR_SLV_REG_OFF_MASK24)
+ >> SDW_BRA_HDR_SLV_REG_OFF_SHIFT24);
+
+ txdata_buf[offset + 5] = tmp_hdr[3] = ((block->reg_offset &
+ SDW_BRA_HDR_SLV_REG_OFF_MASK16)
+ >> SDW_BRA_HDR_SLV_REG_OFF_SHIFT16);
+
+ txdata_buf[offset + 8] = tmp_hdr[4] = ((block->reg_offset &
+ SDW_BRA_HDR_SLV_REG_OFF_MASK8)
+ >> SDW_BRA_HDR_SLV_REG_OFF_SHIFT8);
+
+ txdata_buf[offset + 9] = tmp_hdr[5] = (block->reg_offset &
+ SDW_BRA_HDR_SLV_REG_OFF_MASK0);
+
+ /* CRC check */
+ txdata_buf[offset + 0xc] = sdw_bus_compute_crc8(tmp_hdr,
+ SDW_BRA_HEADER_SIZE);
+
+ if (!block->cmd)
+ txdata_buf[offset + 0xf] = ((SDW_BRA_SOP_EOP_PDI_END_VALUE &
+ SDW_BRA_SOP_EOP_PDI_MASK) <<
+ SDW_BRA_SOP_EOP_PDI_SHIFT);
+}
+
+static void cnl_sdw_bra_pdi_tx_config(struct sdw_master *mstr,
+ struct cnl_sdw *sdw, bool enable)
+{
+ struct cnl_sdw_pdi_stream tx_pdi_stream;
+ unsigned int tx_ch_map_offset, port_ctrl_offset, tx_pdi_config_offset;
+ unsigned int port_ctrl = 0, tx_pdi_config = 0, tx_stream_config;
+ int tx_pdi_ch_map = 0;
+
+ if (enable) {
+ /* DP0 PORT CTRL REG */
+ port_ctrl_offset = SDW_CNL_PORTCTRL + (SDW_BRA_PORT_ID *
+ SDW_CNL_PORT_REG_OFFSET);
+
+ port_ctrl &= ~(PORTCTRL_PORT_DIRECTION_MASK <<
+ PORTCTRL_PORT_DIRECTION_SHIFT);
+
+ port_ctrl |= ((SDW_BRA_BULK_ENABLE & SDW_BRA_BLK_EN_MASK) <<
+ SDW_BRA_BLK_EN_SHIFT);
+
+ port_ctrl |= ((SDW_BRA_BPT_PAYLOAD_TYPE &
+ SDW_BRA_BPT_PYLD_TY_MASK) <<
+ SDW_BRA_BPT_PYLD_TY_SHIFT);
+
+ cnl_sdw_reg_writel(sdw->data.sdw_regs, port_ctrl_offset,
+ port_ctrl);
+
+ /* PDI0 Programming */
+ tx_pdi_stream.l_ch_num = 0;
+ tx_pdi_stream.h_ch_num = 0xF;
+ tx_pdi_stream.pdi_num = SDW_BRA_PDI_TX_ID;
+ /* TODO: Remove hardcoding */
+ tx_pdi_stream.sdw_pdi_num = mstr->nr * 16 +
+ tx_pdi_stream.pdi_num + 3;
+
+ /* SNDWxPCMS2CM SHIM REG */
+ tx_ch_map_offset = SDW_CNL_CTLS2CM +
+ (SDW_CNL_PCMSCHM_REG_OFFSET * mstr->nr);
+
+ tx_pdi_ch_map |= (tx_pdi_stream.sdw_pdi_num &
+ CNL_PCMSYCM_STREAM_MASK) <<
+ CNL_PCMSYCM_STREAM_SHIFT;
+
+ tx_pdi_ch_map |= (tx_pdi_stream.l_ch_num &
+ CNL_PCMSYCM_LCHAN_MASK) <<
+ CNL_PCMSYCM_LCHAN_SHIFT;
+
+ tx_pdi_ch_map |= (tx_pdi_stream.h_ch_num &
+ CNL_PCMSYCM_HCHAN_MASK) <<
+ CNL_PCMSYCM_HCHAN_SHIFT;
+
+ cnl_sdw_reg_writew(sdw->data.sdw_shim, tx_ch_map_offset,
+ tx_pdi_ch_map);
+
+ /* TX PDI0 CONFIG REG BANK 0 */
+ tx_pdi_config_offset = (SDW_CNL_PDINCONFIG0 +
+ (tx_pdi_stream.pdi_num * 16));
+
+ tx_pdi_config |= ((SDW_BRA_PORT_ID &
+ PDINCONFIG_PORT_NUMBER_MASK) <<
+ PDINCONFIG_PORT_NUMBER_SHIFT);
+
+ tx_pdi_config |= (SDW_BRA_CHN_MASK <<
+ PDINCONFIG_CHANNEL_MASK_SHIFT);
+
+ tx_pdi_config |= (SDW_BRA_SOFT_RESET <<
+ PDINCONFIG_PORT_SOFT_RESET_SHIFT);
+
+ cnl_sdw_reg_writel(sdw->data.sdw_regs,
+ tx_pdi_config_offset, tx_pdi_config);
+
+ /* ALH STRMzCFG REG */
+ tx_stream_config = cnl_sdw_reg_readl(sdw->data.alh_base,
+ (tx_pdi_stream.sdw_pdi_num *
+ ALH_CNL_STRMZCFG_OFFSET));
+
+ tx_stream_config |= (CNL_STRMZCFG_DMAT_VAL &
+ CNL_STRMZCFG_DMAT_MASK) <<
+ CNL_STRMZCFG_DMAT_SHIFT;
+
+ tx_stream_config |= (0x0 & CNL_STRMZCFG_CHAN_MASK) <<
+ CNL_STRMZCFG_CHAN_SHIFT;
+
+ cnl_sdw_reg_writel(sdw->data.alh_base,
+ (tx_pdi_stream.sdw_pdi_num *
+ ALH_CNL_STRMZCFG_OFFSET),
+ tx_stream_config);
+
+
+ } else {
+
+ /*
+ * TODO: There is official workaround which needs to be
+ * performed for PDI config register. The workaround
+ * is to perform SoftRst twice in order to clear
+ * PDI fifo contents.
+ */
+
+ }
+}
+
+static void cnl_sdw_bra_pdi_rx_config(struct sdw_master *mstr,
+ struct cnl_sdw *sdw, bool enable)
+{
+
+ struct cnl_sdw_pdi_stream rx_pdi_stream;
+ unsigned int rx_ch_map_offset, rx_pdi_config_offset, rx_stream_config;
+ unsigned int rx_pdi_config = 0;
+ int rx_pdi_ch_map = 0;
+
+ if (enable) {
+
+ /* RX PDI1 Configuration */
+ rx_pdi_stream.l_ch_num = 0;
+ rx_pdi_stream.h_ch_num = 0xF;
+ rx_pdi_stream.pdi_num = SDW_BRA_PDI_RX_ID;
+ rx_pdi_stream.sdw_pdi_num = mstr->nr * 16 +
+ rx_pdi_stream.pdi_num + 3;
+
+ /* SNDWxPCMS3CM SHIM REG */
+ rx_ch_map_offset = SDW_CNL_CTLS3CM +
+ (SDW_CNL_PCMSCHM_REG_OFFSET * mstr->nr);
+
+ rx_pdi_ch_map |= (rx_pdi_stream.sdw_pdi_num &
+ CNL_PCMSYCM_STREAM_MASK) <<
+ CNL_PCMSYCM_STREAM_SHIFT;
+
+ rx_pdi_ch_map |= (rx_pdi_stream.l_ch_num &
+ CNL_PCMSYCM_LCHAN_MASK) <<
+ CNL_PCMSYCM_LCHAN_SHIFT;
+
+ rx_pdi_ch_map |= (rx_pdi_stream.h_ch_num &
+ CNL_PCMSYCM_HCHAN_MASK) <<
+ CNL_PCMSYCM_HCHAN_SHIFT;
+
+ cnl_sdw_reg_writew(sdw->data.sdw_shim, rx_ch_map_offset,
+ rx_pdi_ch_map);
+
+ /* RX PDI1 CONFIG REG */
+ rx_pdi_config_offset = (SDW_CNL_PDINCONFIG0 +
+ (rx_pdi_stream.pdi_num * 16));
+
+ rx_pdi_config |= ((SDW_BRA_PORT_ID &
+ PDINCONFIG_PORT_NUMBER_MASK) <<
+ PDINCONFIG_PORT_NUMBER_SHIFT);
+
+ rx_pdi_config |= (SDW_BRA_CHN_MASK <<
+ PDINCONFIG_CHANNEL_MASK_SHIFT);
+
+ rx_pdi_config |= (SDW_BRA_SOFT_RESET <<
+ PDINCONFIG_PORT_SOFT_RESET_SHIFT);
+
+ cnl_sdw_reg_writel(sdw->data.sdw_regs,
+ rx_pdi_config_offset, rx_pdi_config);
+
+
+ /* ALH STRMzCFG REG */
+ rx_stream_config = cnl_sdw_reg_readl(sdw->data.alh_base,
+ (rx_pdi_stream.sdw_pdi_num *
+ ALH_CNL_STRMZCFG_OFFSET));
+
+ rx_stream_config |= (CNL_STRMZCFG_DMAT_VAL &
+ CNL_STRMZCFG_DMAT_MASK) <<
+ CNL_STRMZCFG_DMAT_SHIFT;
+
+ rx_stream_config |= (0 & CNL_STRMZCFG_CHAN_MASK) <<
+ CNL_STRMZCFG_CHAN_SHIFT;
+
+ cnl_sdw_reg_writel(sdw->data.alh_base,
+ (rx_pdi_stream.sdw_pdi_num *
+ ALH_CNL_STRMZCFG_OFFSET),
+ rx_stream_config);
+
+ } else {
+
+ /*
+ * TODO: There is official workaround which needs to be
+ * performed for PDI config register. The workaround
+ * is to perform SoftRst twice in order to clear
+ * PDI fifo contents.
+ */
+
+ }
+}
+
+static void cnl_sdw_bra_pdi_config(struct sdw_master *mstr, bool enable)
+{
+ struct cnl_sdw *sdw;
+
+ /* Get driver data for master */
+ sdw = sdw_master_get_drvdata(mstr);
+
+ /* PDI0 configuration */
+ cnl_sdw_bra_pdi_tx_config(mstr, sdw, enable);
+
+ /* PDI1 configuration */
+ cnl_sdw_bra_pdi_rx_config(mstr, sdw, enable);
+}
+
+static int cnl_sdw_bra_verify_footer(u8 *rx_buf, int offset)
+{
+ int ret = 0;
+ u8 ftr_response;
+ u8 ack_nack = 0;
+ u8 ftr_result = 0;
+
+ ftr_response = rx_buf[offset];
+
+ /*
+ * ACK/NACK check
+ * NACK+ACK value from target:
+ * 00 -> Ignored
+ * 01 -> OK
+ * 10 -> Failed (Header CRC check failed)
+ * 11 -> Reserved
+ * NACK+ACK values at Target or initiator
+ * 00 -> Ignored
+ * 01 -> OK
+ * 10 -> Abort (Header cannot be trusted)
+ * 11 -> Abort (Header cannot be trusted)
+ */
+ ack_nack = ((ftr_response >> SDW_BRA_FTR_RESP_ACK_SHIFT) &
+ SDW_BRA_FTR_RESP_ACK_MASK);
+	if (ack_nack == SDW_BRA_ACK_NAK_IGNORED) {
+		pr_info("BRA: Packet Ignored\n");
+		ret = -EINVAL;
+	} else if (ack_nack == SDW_BRA_ACK_NAK_OK) {
+		pr_info("BRA: Packet OK\n");
+	} else if (ack_nack == SDW_BRA_ACK_NAK_FAILED_ABORT) {
+		pr_info("BRA: Packet Failed/Abort\n");
+		return -EINVAL;
+	} else if (ack_nack == SDW_BRA_ACK_NAK_RSVD_ABORT) {
+		pr_info("BRA: Packet Reserved/Abort\n");
+		return -EINVAL;
+	}
+
+ /*
+ * BRA footer result check
+ * Writes:
+ * 0 -> Good. Target accepted write payload
+ * 1 -> Bad. Target did not accept write payload
+ * Reads:
+ * 0 -> Good. Target completed read operation successfully
+ * 1 -> Bad. Target failed to complete read operation successfully
+ */
+ ftr_result = (ftr_response >> SDW_BRA_FTR_RESP_RES_SHIFT) &
+ SDW_BRA_FTR_RESP_RES_MASK;
+ if (ftr_result == SDW_BRA_FTR_RESULT_BAD) {
+ pr_info("BRA: Read/Write operation failed on target side\n");
+ /* Error scenario */
+ return -EINVAL;
+ }
+
+ pr_info("BRA: Read/Write operation complete on target side\n");
+
+ return ret;
+}
+
+static int cnl_sdw_bra_verify_hdr(u8 *rx_buf, int offset, bool *chk_footer,
+ int roll_id)
+{
+ int ret = 0;
+ u8 hdr_response, rolling_id;
+ u8 ack_nack = 0;
+ u8 not_ready = 0;
+
+ /* Match rolling ID */
+ hdr_response = rx_buf[offset];
+ rolling_id = rx_buf[offset + SDW_BRA_ROLLINGID_PDI_INDX];
+
+ rolling_id = (rolling_id & SDW_BRA_ROLLINGID_PDI_MASK);
+ if (roll_id != rolling_id) {
+ pr_info("BRA: Rolling ID doesn't match, returning error\n");
+ return -EINVAL;
+ }
+
+ /*
+ * ACK/NACK check
+ * NACK+ACK value from target:
+ * 00 -> Ignored
+ * 01 -> OK
+ * 10 -> Failed (Header CRC check failed)
+ * 11 -> Reserved
+ * NACK+ACK values at Target or initiator
+ * 00 -> Ignored
+ * 01 -> OK
+ * 10 -> Abort (Header cannot be trusted)
+ * 11 -> Abort (Header cannot be trusted)
+ */
+ ack_nack = ((hdr_response >> SDW_BRA_HDR_RESP_ACK_SHIFT) &
+ SDW_BRA_HDR_RESP_ACK_MASK);
+ if (ack_nack == SDW_BRA_ACK_NAK_IGNORED) {
+ pr_info("BRA: Packet Ignored rolling_id:%d\n", rolling_id);
+ ret = -EINVAL;
+	} else if (ack_nack == SDW_BRA_ACK_NAK_OK) {
+		pr_info("BRA: Packet OK rolling_id:%d\n", rolling_id);
+	} else if (ack_nack == SDW_BRA_ACK_NAK_FAILED_ABORT) {
+ pr_info("BRA: Packet Failed/Abort rolling_id:%d\n", rolling_id);
+ return -EINVAL;
+ } else if (ack_nack == SDW_BRA_ACK_NAK_RSVD_ABORT) {
+ pr_info("BRA: Packet Reserved/Abort rolling_id:%d\n", rolling_id);
+ return -EINVAL;
+ }
+
+ /* BRA not ready check */
+ not_ready = (hdr_response >> SDW_BRA_HDR_RESP_NRDY_SHIFT) &
+ SDW_BRA_HDR_RESP_NRDY_MASK;
+ if (not_ready == SDW_BRA_TARGET_NOT_READY) {
+ pr_info("BRA: Target not ready for read/write operation rolling_id:%d\n",
+ rolling_id);
+		*chk_footer = false;
+ return -EBUSY;
+ }
+
+ pr_info("BRA: Target ready for read/write operation rolling_id:%d\n", rolling_id);
+ return ret;
+}
+
+static void cnl_sdw_bra_remove_data_padding(u8 *src_buf, u8 *dst_buf,
+		u8 size)
+{
+ int i;
+
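+	/*
+	 * Each 4-byte PDI word holds two valid data bytes followed by two
+	 * padding bytes: copy the data and skip the padding.
+	 */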
+ for (i = 0; i < size/2; i++) {
+
+ *dst_buf++ = *src_buf++;
+ *dst_buf++ = *src_buf++;
+ src_buf++;
+ src_buf++;
+ }
+}
+
+
+static int cnl_sdw_bra_check_data(struct sdw_master *mstr,
+		struct sdw_bra_block *block, struct bra_info *info)
+{
+ int offset = 0, rolling_id = 0, tmp_offset = 0;
+ int rx_crc_comp = 0, rx_crc_rvd = 0;
+ int i, ret;
+ bool chk_footer = true;
+ int rx_buf_size = info->rx_block_size;
+ u8 *rx_buf = info->rx_ptr;
+ u8 *tmp_buf = NULL;
+
+ /* TODO: Remove below hex dump print */
+ print_hex_dump(KERN_DEBUG, "BRA RX DATA:", DUMP_PREFIX_OFFSET, 8, 4,
+ rx_buf, rx_buf_size, false);
+
+ /* Allocate temporary buffer in case of read request */
+ if (!block->cmd) {
+ tmp_buf = kzalloc(block->num_bytes, GFP_KERNEL);
+ if (!tmp_buf) {
+ ret = -ENOMEM;
+ goto error;
+ }
+ }
+
+ /*
+ * TODO: From the response header and footer there is no mention of
+ * read or write packet so controller needs to keep transmit packet
+ * information in order to verify rx packet. Also the current
+ * approach used for error mechanism is any of the packet response
+ * is not success, just report the whole transfer failed to Slave.
+ */
+
+ /*
+ * Verification of response packet for one known
+ * hardcoded configuration. This needs to be extended
+ * once we have dynamic algorithm integrated.
+ */
+
+ /* 2 valid read response */
+ for (i = 0; i < info->valid_packets; i++) {
+
+
+ pr_info("BRA: Verifying packet number:%d with rolling id:%d\n",
+ info->packet_info[i].packet_num,
+ rolling_id);
+ chk_footer = true;
+ ret = cnl_sdw_bra_verify_hdr(rx_buf, offset, &chk_footer,
+ rolling_id);
+ if (ret < 0) {
+ dev_err(&mstr->dev, "BRA: Header verification failed for packet number:%d\n",
+ info->packet_info[i].packet_num);
+ goto error;
+ }
+
+ /* Increment offset for header response */
+ offset = offset + SDW_BRA_HEADER_RESP_SIZE_PDI;
+
+ if (!block->cmd) {
+
+ /* Remove PDI padding for data */
+ cnl_sdw_bra_remove_data_padding(&rx_buf[offset],
+ &tmp_buf[tmp_offset],
+ info->packet_info[i].num_data_bytes);
+
+ /* Increment offset for consumed data */
+ offset = offset +
+ (info->packet_info[i].num_data_bytes * 2);
+
+ rx_crc_comp = sdw_bus_compute_crc8(&tmp_buf[tmp_offset],
+ info->packet_info[i].num_data_bytes);
+
+ /* Match Data CRC */
+ rx_crc_rvd = rx_buf[offset];
+ if (rx_crc_comp != rx_crc_rvd) {
+ ret = -EINVAL;
+ dev_err(&mstr->dev, "BRA: Data CRC doesn't match for packet number:%d\n",
+ info->packet_info[i].packet_num);
+ goto error;
+ }
+
+ /* Increment destination buffer with copied data */
+ tmp_offset = tmp_offset +
+ info->packet_info[i].num_data_bytes;
+
+ /* Increment offset for CRC */
+ offset = offset + SDW_BRA_DATA_CRC_SIZE_PDI;
+ }
+
+ if (chk_footer) {
+ ret = cnl_sdw_bra_verify_footer(rx_buf, offset);
+ if (ret < 0) {
+ ret = -EINVAL;
+ dev_err(&mstr->dev, "BRA: Footer verification failed for packet number:%d\n",
+ info->packet_info[i].packet_num);
+ goto error;
+ }
+
+ }
+
+ /* Increment offset for footer response */
+ offset = offset + SDW_BRA_HEADER_RESP_SIZE_PDI;
+
+ /* Increment rolling id for next packet */
+ rolling_id++;
+ if (rolling_id > 0xF)
+ rolling_id = 0;
+ }
+
+ /*
+ * No need to check for dummy responses from codec
+ * Assumption made here is that dummy packets are
+ * added in 1ms buffer only after valid packets.
+ */
+
+ /* Copy data to codec buffer in case of read request */
+ if (!block->cmd)
+ memcpy(block->values, tmp_buf, block->num_bytes);
+
+error:
+ /* Free up temp buffer allocated in case of read request */
+ if (!block->cmd)
+ kfree(tmp_buf);
+
+ /* Free up buffer allocated in cnl_sdw_bra_data_ops */
+ kfree(info->tx_ptr);
+ kfree(info->rx_ptr);
+ kfree(info->packet_info);
+
+ return ret;
+}
+
+static int cnl_sdw_bra_data_ops(struct sdw_master *mstr,
+ struct sdw_bra_block *block, struct bra_info *info)
+{
+
+ struct sdw_bra_block tmp_block;
+ int i;
+ int tx_buf_size = 384, rx_buf_size = 1152;
+ u8 *tx_buf = NULL, *rx_buf = NULL;
+ int rolling_id = 0, total_bytes = 0, offset = 0, reg_offset = 0;
+ int dummy_read = 0x0000;
+ int ret;
+
+ /*
+ * TODO: Run an algorithm here to identify the buffer size
+ * for TX and RX buffers + number of dummy packets (read
+ * or write) to be added for to align buffers.
+ */
+
+ info->tx_block_size = tx_buf_size;
+ info->tx_ptr = tx_buf = kzalloc(tx_buf_size, GFP_KERNEL);
+ if (!tx_buf) {
+ ret = -ENOMEM;
+ goto error;
+ }
+
+ info->rx_block_size = rx_buf_size;
+ info->rx_ptr = rx_buf = kzalloc(rx_buf_size, GFP_KERNEL);
+ if (!rx_buf) {
+ ret = -ENOMEM;
+ goto error;
+ }
+
+ /* Fill valid packets transferred per millisecond buffer */
+ info->valid_packets = 2;
+ info->packet_info = kcalloc(info->valid_packets,
+ sizeof(*info->packet_info),
+ GFP_KERNEL);
+ if (!info->packet_info) {
+ ret = -ENOMEM;
+ goto error;
+ }
+
+ /*
+ * Below code performs packet preparation for one known
+ * configuration.
+ * 1. 2 Valid Read request with 18 bytes each.
+ * 2. 22 dummy read packets with 18 bytes each.
+ */
+ for (i = 0; i < info->valid_packets; i++) {
+ tmp_block.slave_addr = block->slave_addr;
+ tmp_block.cmd = block->cmd; /* Read Request */
+ tmp_block.num_bytes = 18;
+ tmp_block.reg_offset = block->reg_offset + reg_offset;
+ tmp_block.values = NULL;
+ reg_offset += tmp_block.num_bytes;
+
+ cnl_sdw_bra_prep_hdr(tx_buf, &tmp_block, rolling_id, offset);
+ /* Total Header size: Header + Header CRC size on PDI */
+ offset += SDW_BRA_HEADER_TOTAL_SZ_PDI;
+
+ if (block->cmd) {
+ /*
+ * PDI data preparation in case of write request
+ * Assumption made here is data size from codec will
+ * be always an even number.
+ */
+ cnl_sdw_bra_prep_data(tx_buf, &tmp_block,
+ total_bytes, offset);
+ offset += tmp_block.num_bytes * 2;
+
+ /* Data CRC */
+ cnl_sdw_bra_prep_crc(tx_buf, &tmp_block,
+ total_bytes, offset);
+ offset += SDW_BRA_DATA_CRC_SIZE_PDI;
+ }
+
+ total_bytes += tmp_block.num_bytes;
+ rolling_id++;
+
+ /* Fill packet info data structure */
+ info->packet_info[i].packet_num = i + 1;
+ info->packet_info[i].num_data_bytes = tmp_block.num_bytes;
+ }
+
+ /* Prepare dummy packets */
+ for (i = 0; i < 22; i++) {
+ tmp_block.slave_addr = block->slave_addr;
+ tmp_block.cmd = 0; /* Read request */
+ tmp_block.num_bytes = 18;
+ tmp_block.reg_offset = dummy_read++;
+ tmp_block.values = NULL;
+
+ cnl_sdw_bra_prep_hdr(tx_buf, &tmp_block, rolling_id, offset);
+
+ /* Total Header size: RD header + RD header CRC size on PDI */
+ offset += SDW_BRA_HEADER_TOTAL_SZ_PDI;
+
+ total_bytes += tmp_block.num_bytes;
+ rolling_id++;
+ }
+
+ /* TODO: Remove below hex dump print */
+ print_hex_dump(KERN_DEBUG, "BRA PDI VALID TX DATA:",
+ DUMP_PREFIX_OFFSET, 8, 4, tx_buf, tx_buf_size, false);
+
+ return 0;
+
+error:
+ kfree(info->tx_ptr);
+ kfree(info->rx_ptr);
+ kfree(info->packet_info);
+
+ return ret;
+}
+
static int cnl_sdw_xfer_bulk(struct sdw_master *mstr,
struct sdw_bra_block *block)
{
- return 0;
+ struct cnl_sdw *sdw = sdw_master_get_platdata(mstr);
+ struct cnl_sdw_data *data = &sdw->data;
+ struct cnl_bra_operation *ops = data->bra_data->bra_ops;
+ struct bra_info info;
+ int ret;
+
+ /*
+ * 1. PDI Configuration
+ * 2. Prepare BRA packets including CRC calculation.
+ * 3. Configure TX and RX DMA in one shot mode.
+ * 4. Configure TX and RX Pipeline.
+ * 5. Run TX and RX DMA.
+ * 6. Run TX and RX pipelines.
+ * 7. Wait on completion for RX buffer.
+ * 8. Match TX and RX buffer packets and check for errors.
+ */
+
+ /* Memset bra_info data structure */
+ memset(&info, 0x0, sizeof(info));
+
+ /* Fill master number in bra info data structure */
+ info.mstr_num = mstr->nr;
+
+ /* PDI Configuration (ON) */
+ cnl_sdw_bra_pdi_config(mstr, true);
+
+ /* Prepare TX buffer */
+ ret = cnl_sdw_bra_data_ops(mstr, block, &info);
+ if (ret < 0) {
+ dev_err(&mstr->dev, "BRA: Request packet(s) creation failed\n");
+ goto out;
+ }
+
+ /* Pipeline Setup (ON) */
+ ret = ops->bra_platform_setup(data->bra_data->drv_data, true, &info);
+ if (ret < 0) {
+ dev_err(&mstr->dev, "BRA: Pipeline setup failed\n");
+ goto out;
+ }
+
+ /* Trigger START host DMA and pipeline */
+ ret = ops->bra_platform_xfer(data->bra_data->drv_data, true, &info);
+ if (ret < 0) {
+ dev_err(&mstr->dev, "BRA: Pipeline start failed\n");
+ goto out;
+ }
+
+ /* Trigger STOP host DMA and pipeline */
+ ret = ops->bra_platform_xfer(data->bra_data->drv_data, false, &info);
+ if (ret < 0) {
+ dev_err(&mstr->dev, "BRA: Pipeline stop failed\n");
+ goto out;
+ }
+
+ /* Pipeline Setup (OFF) */
+ ret = ops->bra_platform_setup(data->bra_data->drv_data, false, &info);
+ if (ret < 0) {
+ dev_err(&mstr->dev, "BRA: Pipeline de-setup failed\n");
+ goto out;
+ }
+
+ /* Verify RX buffer */
+ ret = cnl_sdw_bra_check_data(mstr, block, &info);
+ if (ret < 0) {
+ dev_err(&mstr->dev, "BRA: Response packet(s) incorrect\n");
+ goto out;
+ }
+
+ /* PDI Configuration (OFF) */
+ cnl_sdw_bra_pdi_config(mstr, false);
+
+out:
+ return ret;
}
static int cnl_sdw_mon_handover(struct sdw_master *mstr,
@@ -1179,7 +1991,7 @@ static int cnl_sdw_set_ssp_interval(struct sdw_master *mstr,
}
static int cnl_sdw_set_clock_freq(struct sdw_master *mstr,
- int cur_clk_freq, int bank)
+ int cur_clk_div, int bank)
{
struct cnl_sdw *sdw = sdw_master_get_drvdata(mstr);
struct cnl_sdw_data *data = &sdw->data;
@@ -1189,11 +2001,7 @@ static int cnl_sdw_set_clock_freq(struct sdw_master *mstr,
/* TODO: Retrieve divider value or get value directly from calling
* function
*/
-#ifdef CONFIG_SND_SOC_SVFPGA
- int divider = ((9600000 * 2/cur_clk_freq) - 1);
-#else
- int divider = ((9600000/cur_clk_freq) - 1);
-#endif
+ int divider = (cur_clk_div - 1);
if (bank) {
mcp_clockctrl_offset = SDW_CNL_MCP_CLOCKCTRL1;
@@ -1419,7 +2227,7 @@ static int cnl_sdw_probe(struct sdw_master *mstr,
sdw_master_set_drvdata(mstr, sdw);
init_completion(&sdw->tx_complete);
mutex_init(&sdw->stream_lock);
- ret = sdw_init(sdw);
+ ret = sdw_init(sdw, true);
if (ret) {
dev_err(&mstr->dev, "SoundWire controller init failed %d\n",
data->inst_id);
@@ -1466,8 +2274,6 @@ static int cnl_sdw_remove(struct sdw_master *mstr)
#ifdef CONFIG_PM
static int cnl_sdw_runtime_suspend(struct device *dev)
{
- enum sdw_clk_stop_mode clock_stop_mode;
-
int volatile mcp_stat;
int mcp_control;
int timeout = 0;
@@ -1493,12 +2299,12 @@ static int cnl_sdw_runtime_suspend(struct device *dev)
cnl_sdw_reg_writel(data->sdw_regs, SDW_CNL_MCP_CONTROL, mcp_control);
/* Prepare all the slaves for clock stop */
- ret = sdw_prepare_for_clock_change(sdw->mstr, 1, &clock_stop_mode);
+ ret = sdw_master_prep_for_clk_stop(sdw->mstr);
if (ret)
return ret;
/* Call bus function to broadcast the clock stop now */
- ret = sdw_stop_clock(sdw->mstr, clock_stop_mode);
+ ret = sdw_master_stop_clock(sdw->mstr);
if (ret)
return ret;
/* Wait for clock to be stopped, we are waiting at max 1sec now */
@@ -1534,12 +2340,9 @@ static int cnl_sdw_runtime_suspend(struct device *dev)
static int cnl_sdw_clock_stop_exit(struct cnl_sdw *sdw)
{
- u16 wake_en, wake_sts, ioctl;
- int volatile mcp_control;
- int timeout = 0;
+ u16 wake_en, wake_sts;
+ int ret;
struct cnl_sdw_data *data = &sdw->data;
- int ioctl_offset = SDW_CNL_IOCTL + (data->inst_id *
- SDW_CNL_IOCTL_REG_OFFSET);
/* Disable the wake up interrupt */
wake_en = cnl_sdw_reg_readw(data->sdw_shim,
@@ -1557,41 +2360,10 @@ static int cnl_sdw_clock_stop_exit(struct cnl_sdw *sdw)
wake_sts |= (0x1 << data->inst_id);
cnl_sdw_reg_writew(data->sdw_shim, SDW_CNL_SNDWWAKESTS_REG_OFFSET,
wake_sts);
-
- ioctl = cnl_sdw_reg_readw(data->sdw_shim, ioctl_offset);
- ioctl |= CNL_IOCTL_DO_MASK << CNL_IOCTL_DO_SHIFT;
- cnl_sdw_reg_writew(data->sdw_shim, ioctl_offset, ioctl);
- ioctl |= CNL_IOCTL_DOE_MASK << CNL_IOCTL_DOE_SHIFT;
- cnl_sdw_reg_writew(data->sdw_shim, ioctl_offset, ioctl);
- /* Switch control back to master */
- sdw_switch_to_mip(sdw);
-
- mcp_control = cnl_sdw_reg_readl(data->sdw_regs,
- SDW_CNL_MCP_CONTROL);
- mcp_control &= ~(MCP_CONTROL_BLOCKWAKEUP_MASK <<
- MCP_CONTROL_BLOCKWAKEUP_SHIFT);
- mcp_control |= (MCP_CONTROL_CLOCKSTOPCLEAR_MASK <<
- MCP_CONTROL_CLOCKSTOPCLEAR_SHIFT);
- cnl_sdw_reg_writel(data->sdw_regs, SDW_CNL_MCP_CONTROL, mcp_control);
- /*
- * Wait for timeout to be clear to successful enabling of the clock
- * We will wait for 1sec before giving up
- */
- while (timeout != 10) {
- mcp_control = cnl_sdw_reg_readl(data->sdw_regs,
- SDW_CNL_MCP_CONTROL);
- if ((mcp_control & (MCP_CONTROL_CLOCKSTOPCLEAR_MASK <<
- MCP_CONTROL_CLOCKSTOPCLEAR_SHIFT)) == 0)
- break;
- msleep(1000);
- timeout++;
- }
- mcp_control = cnl_sdw_reg_readl(data->sdw_regs,
- SDW_CNL_MCP_CONTROL);
- if ((mcp_control & (MCP_CONTROL_CLOCKSTOPCLEAR_MASK <<
- MCP_CONTROL_CLOCKSTOPCLEAR_SHIFT)) != 0) {
- dev_err(&sdw->mstr->dev, "Clop Stop Exit failed\n");
- return -EBUSY;
+ ret = sdw_init(sdw, false);
+ if (ret < 0) {
+ pr_err("sdw_init fail: %d\n", ret);
+ return ret;
}
dev_info(&sdw->mstr->dev, "Exit from clock stop successful\n");
@@ -1627,13 +2399,14 @@ static int cnl_sdw_runtime_resume(struct device *dev)
dev_info(&mstr->dev, "Exit from clock stop successful\n");
/* Prepare all the slaves to comeout of clock stop */
- ret = sdw_prepare_for_clock_change(sdw->mstr, 0, NULL);
+ ret = sdw_mstr_deprep_after_clk_start(sdw->mstr);
if (ret)
return ret;
return 0;
}
+#ifdef CONFIG_PM_SLEEP
static int cnl_sdw_sleep_resume(struct device *dev)
{
return cnl_sdw_runtime_resume(dev);
@@ -1642,7 +2415,15 @@ static int cnl_sdw_sleep_suspend(struct device *dev)
{
return cnl_sdw_runtime_suspend(dev);
}
-#endif
+#else
+#define cnl_sdw_sleep_suspend NULL
+#define cnl_sdw_sleep_resume NULL
+#endif /* CONFIG_PM_SLEEP */
+#else
+#define cnl_sdw_runtime_suspend NULL
+#define cnl_sdw_runtime_resume NULL
+#endif /* CONFIG_PM */
+
static const struct dev_pm_ops cnl_sdw_pm_ops = {
SET_SYSTEM_SLEEP_PM_OPS(cnl_sdw_sleep_suspend, cnl_sdw_sleep_resume)
diff --git a/drivers/sdw/sdw_cnl_priv.h b/drivers/sdw/sdw_cnl_priv.h
index 8e9d68c2bc2c..504df88d681a 100644
--- a/drivers/sdw/sdw_cnl_priv.h
+++ b/drivers/sdw/sdw_cnl_priv.h
@@ -27,7 +27,14 @@
#define SDW_CNL_SLAVE_STATUS_BITS 4
#define SDW_CNL_CMD_WORD_LEN 4
#define SDW_CNL_DEFAULT_SSP_INTERVAL 0x18
+#define SDW_CNL_DEFAULT_CLK_DIVIDER 0
+#define SDW_CNL_DEFAULT_FRAME_SHAPE 0x30
+
+#if IS_ENABLED(CONFIG_SND_SOC_INTEL_CNL_FPGA)
#define SDW_CNL_DEFAULT_SYNC_PERIOD 0x257F
+#else
+#define SDW_CNL_DEFAULT_SYNC_PERIOD 0x176F
+#endif
#define SDW_CNL_PORT_REG_OFFSET 0x80
#define CNL_SDW_SCP_ADDR_REGS 0x2
@@ -70,6 +77,7 @@
#define MCP_CONTROL_CMDRST_MASK 0x1
#define MCP_CONTROL_SOFTRST_SHIFT 0x6
#define MCP_CONTROL_SOFTCTRLBUSRST_SHIFT 0x5
+#define MCP_CONTROL_HARDCTRLBUSRST_MASK 0x1
#define MCP_CONTROL_HARDCTRLBUSRST_SHIFT 0x4
#define MCP_CONTROL_CLOCKPAUSEREQ_SHIFT 0x3
#define MCP_CONTROL_CLOCKSTOPCLEAR_SHIFT 0x2
@@ -220,6 +228,8 @@
#define PDINCONFIG_CHANNEL_MASK_MASK 0xFF
#define PDINCONFIG_PORT_NUMBER_SHIFT 0x0
#define PDINCONFIG_PORT_NUMBER_MASK 0x1F
+#define PDINCONFIG_PORT_SOFT_RESET_SHIFT 0x18
+#define PDINCONFIG_PORT_SOFT_RESET 0x1F
#define DPN_CONFIG_WL_SHIFT 0x8
#define DPN_CONFIG_WL_MASK 0x1F
@@ -341,4 +351,34 @@
#define CNL_STRMZCFG_CHAN_SHIFT 16
#define CNL_STRMZCFG_CHAN_MASK 0xF
+#define SDW_BRA_HEADER_SIZE_PDI 12 /* In bytes */
+#define SDW_BRA_HEADER_CRC_SIZE_PDI 4 /* In bytes */
+#define SDW_BRA_DATA_CRC_SIZE_PDI 4 /* In bytes */
+#define SDW_BRA_HEADER_RESP_SIZE_PDI 4 /* In bytes */
+#define SDW_BRA_FOOTER_RESP_SIZE_PDI 4 /* In bytes */
+#define SDW_BRA_PADDING_SZ_PDI 4 /* In bytes */
+#define SDW_BRA_HEADER_TOTAL_SZ_PDI 16 /* In bytes */
+
+#define SDW_BRA_SOP_EOP_PDI_STRT_VALUE 0x4
+#define SDW_BRA_SOP_EOP_PDI_END_VALUE 0x2
+#define SDW_BRA_SOP_EOP_PDI_MASK 0x1F
+#define SDW_BRA_SOP_EOP_PDI_SHIFT 5
+
+#define SDW_BRA_STRM_ID_BLK_OUT 3
+#define SDW_BRA_STRM_ID_BLK_IN 4
+
+#define SDW_BRA_PDI_TX_ID 0
+#define SDW_BRA_PDI_RX_ID 1
+
+#define SDW_BRA_SOFT_RESET 0x1
+#define SDW_BRA_BULK_ENABLE 1
+#define SDW_BRA_BLK_EN_MASK 0xFFFEFFFF
+#define SDW_BRA_BLK_EN_SHIFT 16
+
+#define SDW_BRA_ROLLINGID_PDI_INDX 3
+#define SDW_BRA_ROLLINGID_PDI_MASK 0xF
+#define SDW_BRA_ROLLINGID_PDI_SHIFT 0
+
+#define SDW_PCM_STRM_START_INDEX 0x2
+
#endif /* _LINUX_SDW_CNL_H */
diff --git a/drivers/sdw/sdw_priv.h b/drivers/sdw/sdw_priv.h
index 42e948440481..fd060bfa74c4 100644
--- a/drivers/sdw/sdw_priv.h
+++ b/drivers/sdw/sdw_priv.h
@@ -34,16 +34,14 @@
#define SDW_STATE_INIT_STREAM_TAG 0x1
#define SDW_STATE_ALLOC_STREAM 0x2
#define SDW_STATE_CONFIG_STREAM 0x3
-#define SDW_STATE_COMPUTE_STREAM 0x4
-#define SDW_STATE_PREPARE_STREAM 0x5
-#define SDW_STATE_ENABLE_STREAM 0x6
-#define SDW_STATE_DISABLE_STREAM 0x7
-#define SDW_STATE_UNPREPARE_STREAM 0x8
-#define SDW_STATE_UNCOMPUTE_STREAM 0x9
-#define SDW_STATE_RELEASE_STREAM 0xa
-#define SDW_STATE_FREE_STREAM 0xb
-#define SDW_STATE_FREE_STREAM_TAG 0xc
-#define SDW_STATE_ONLY_XPORT_STREAM 0xd
+#define SDW_STATE_PREPARE_STREAM 0x4
+#define SDW_STATE_ENABLE_STREAM 0x5
+#define SDW_STATE_DISABLE_STREAM 0x6
+#define SDW_STATE_UNPREPARE_STREAM 0x7
+#define SDW_STATE_RELEASE_STREAM 0x8
+#define SDW_STATE_FREE_STREAM 0x9
+#define SDW_STATE_FREE_STREAM_TAG 0xA
+#define SDW_STATE_ONLY_XPORT_STREAM 0xB
#define SDW_STATE_INIT_RT 0x1
#define SDW_STATE_CONFIG_RT 0x2
@@ -53,6 +51,8 @@
#define SDW_STATE_UNPREPARE_RT 0x6
#define SDW_STATE_RELEASE_RT 0x7
+#define SDW_SLAVE_BDCAST_ADDR 15
+
struct sdw_runtime;
/* Defined in sdw.c, used by multiple files of module */
extern struct sdw_core sdw_core;
@@ -76,11 +76,33 @@ enum sdw_clk_state {
SDW_CLK_STATE_ON = 1,
};
+enum sdw_update_bs_state {
+ SDW_UPDATE_BS_PRE,
+ SDW_UPDATE_BS_BNKSWTCH,
+ SDW_UPDATE_BS_POST,
+ SDW_UPDATE_BS_BNKSWTCH_WAIT,
+ SDW_UPDATE_BS_DIS_CHN,
+};
+
+enum sdw_port_en_state {
+ SDW_PORT_STATE_PREPARE,
+ SDW_PORT_STATE_ENABLE,
+ SDW_PORT_STATE_DISABLE,
+ SDW_PORT_STATE_UNPREPARE,
+};
+
struct port_chn_en_state {
bool is_activate;
bool is_bank_sw;
};
+struct temp_elements {
+ int rate;
+ int full_bw;
+ int payload_bw;
+ int hwidth;
+};
+
struct sdw_stream_tag {
int stream_tag;
struct mutex stream_lock;
@@ -153,6 +175,10 @@ struct sdw_mstr_runtime {
unsigned int stream_bw;
/* State of runtime structure */
int rt_state;
+ int hstart;
+ int hstop;
+ int block_offset;
+ int sub_block_offset;
};
struct sdw_runtime {
@@ -185,8 +211,10 @@ struct sdw_bus {
unsigned int clk_state;
unsigned int active_bank;
unsigned int clk_freq;
+ unsigned int clk_div;
/* Bus total Bandwidth. Initialize and reset to zero */
unsigned int bandwidth;
+ unsigned int stream_interval; /* Stream Interval */
unsigned int system_interval; /* Bus System Interval */
unsigned int frame_freq;
unsigned int col;
@@ -239,6 +267,8 @@ int sdw_bus_bw_init(void);
int sdw_mstr_bw_init(struct sdw_bus *sdw_bs);
int sdw_bus_calc_bw(struct sdw_stream_tag *stream_tag, bool enable);
int sdw_bus_calc_bw_dis(struct sdw_stream_tag *stream_tag, bool unprepare);
+int sdw_bus_bra_xport_config(struct sdw_bus *sdw_mstr_bs,
+ struct sdw_bra_block *block, bool enable);
int sdw_chn_enable(void);
void sdw_unlock_mstr(struct sdw_master *mstr);
int sdw_trylock_mstr(struct sdw_master *mstr);
diff --git a/drivers/sdw/sdw_utils.c b/drivers/sdw/sdw_utils.c
new file mode 100644
index 000000000000..724323d01993
--- /dev/null
+++ b/drivers/sdw/sdw_utils.c
@@ -0,0 +1,49 @@
+/*
+ * sdw_utils.c - SoundWire bus utility functions
+ *
+ * Copyright (C) 2015-2016 Intel Corp
+ * Author: Sanyog Kale <sanyog.r.kale@intel.com>
+ *
+ * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; version 2 of the License.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ *
+ * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ */
+
+#include <linux/slab.h>
+#include <linux/kernel.h>
+#include <linux/sdw_bus.h>
+#include <linux/crc8.h>
+
+
+
+/**
+ * sdw_bus_compute_crc8: SoundWire bus helper function to compute crc8.
+ * This API uses crc8 helper functions internally.
+ *
+ * @values: Data buffer.
+ * @num_bytes: Number of bytes.
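+ *
+ * Example (as used for the BRA packet header elsewhere in this patch):
+ *	crc = sdw_bus_compute_crc8(tmp_hdr, SDW_BRA_HEADER_SIZE);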
+ */
+u8 sdw_bus_compute_crc8(u8 *values, u8 num_bytes)
+{
+ u8 table[256];
+ u8 poly = 0x4D; /* polynomial = x^8 + x^6 + x^3 + x^2 + 1 */
+ u8 crc = CRC8_INIT_VALUE; /* Initialize 8 bit to 11111111 */
+
+ /* Populate MSB */
+ crc8_populate_msb(table, poly);
+
+ /* CRC computation */
+ crc = crc8(table, values, num_bytes, crc);
+
+ return crc;
+}
+EXPORT_SYMBOL(sdw_bus_compute_crc8);
diff --git a/include/linux/sdw/sdw_cnl.h b/include/linux/sdw/sdw_cnl.h
index acf223cba595..6a9281c458b7 100644
--- a/include/linux/sdw/sdw_cnl.h
+++ b/include/linux/sdw/sdw_cnl.h
@@ -73,6 +73,33 @@ struct cnl_sdw_port {
struct cnl_sdw_pdi_stream *pdi_stream;
};
+struct bra_packet_info {
+ u8 packet_num;
+ u8 num_data_bytes;
+};
+
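+/*
+ * Describes one BRA transfer: the TX/RX PDI buffers, their sizes and the
+ * valid (non-dummy) packets they carry.
+ */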
+struct bra_info {
+ unsigned int mstr_num;
+ u8 *tx_ptr;
+ u8 *rx_ptr;
+ unsigned int tx_block_size;
+ unsigned int rx_block_size;
+ u8 valid_packets;
+ struct bra_packet_info *packet_info;
+};
+
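+/*
+ * Callbacks implemented by the platform (DSP) driver and used by the
+ * Master driver to set up and start/stop the BRA pipelines and host DMA
+ * for a transfer described by struct bra_info.
+ */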
+struct cnl_bra_operation {
+ int (*bra_platform_setup)(void *context, bool is_enable,
+ struct bra_info *info);
+ int (*bra_platform_xfer)(void *context, bool is_enable,
+ struct bra_info *info);
+};
+
+struct cnl_sdw_bra_cfg {
+ void *drv_data;
+ struct cnl_bra_operation *bra_ops;
+};
+
struct cnl_sdw_data {
/* SoundWire IP registers per instance */
void __iomem *sdw_regs;
@@ -84,6 +111,8 @@ struct cnl_sdw_data {
int irq;
/* Instance id */
int inst_id;
+ /* BRA data pointer */
+ struct cnl_sdw_bra_cfg *bra_data;
};
struct cnl_sdw_port *cnl_sdw_alloc_port(struct sdw_master *mstr, int ch_count,
diff --git a/include/linux/sdw/sdw_registers.h b/include/linux/sdw/sdw_registers.h
index 1abdf4bf863a..2e831c0e93cf 100644
--- a/include/linux/sdw/sdw_registers.h
+++ b/include/linux/sdw/sdw_registers.h
@@ -68,13 +68,17 @@
#define SDW_SCP_INTSTAT_1 0x40
#define SDW_SCP_INTSTAT1_PARITY_MASK 0x1
#define SDW_SCP_INTSTAT1_BUS_CLASH_MASK 0x2
+#define SDW_SCP_INTSTAT1_IMPL_DEF_MASK 0x4
#define SDW_SCP_INTSTAT1_SCP2_CASCADE_MASK 0x80
#define SDW_SCP_INTCLEAR1 0x40
#define SDW_SCP_INTCLEAR1_PARITY_MASK 0x1
#define SDW_SCP_INTCLEAR1_BUS_CLASH_MASK 0x2
+#define SDW_SCP_INTCLEAR1_IMPL_DEF_MASK 0x4
#define SDW_SCP_INTCLEAR1_SCP2_CASCADE_MASK 0x80
-#define SDW_SCP_INTMASK1
+#define SDW_SCP_INTMASK1 0x41
+#define SDW_SCP_INTMASK1_PARITY_MASK 0x1
+#define SDW_SCP_INTMASK1_BUS_CLASH_MASK 0x2
#define SDW_SCP_INTSTAT2 0x42
#define SDW_SCP_INTSTAT2_SCP3_CASCADE_MASK 0x80
#define SDW_SCP_INTSTAT3 0x43
@@ -84,6 +88,7 @@
#define SDW_SCP_STAT 0x44
#define SDW_SCP_STAT_CLK_STP_NF_MASK 0x1
#define SDW_SCP_SYSTEMCTRL 0x45
+#define SDW_SCP_SYSTEMCTRL_CLK_STP_PREP_MASK 0x1
#define SDW_SCP_SYSTEMCTRL_CLK_STP_PREP_SHIFT 0x0
#define SDW_SCP_SYSTEMCTRL_CLK_STP_MODE_SHIFT 0x1
#define SDW_SCP_SYSTEMCTRL_WAKE_UP_EN_SHIFT 0x2
diff --git a/include/linux/sdw_bus.h b/include/linux/sdw_bus.h
index d16579b35f8a..ed075fbd9a99 100644
--- a/include/linux/sdw_bus.h
+++ b/include/linux/sdw_bus.h
@@ -60,6 +60,66 @@
#define SDW_PORT_ENCODING_TYPE_SIGN_MAGNITUDE 0x2
#define SDW_PORT_ENCODING_TYPE_IEEE_32_FLOAT 0x4
+#define SDW_BRA_PORT_ID 0
+#define SDW_BRA_CHN_MASK 0x1
+
+#define SDW_BRA_HEADER_SIZE 6 /* In bytes */
+#define SDW_BRA_HEADER_CRC_SIZE 1 /* In bytes */
+#define SDW_BRA_DATA_CRC_SIZE 1 /* In bytes */
+#define SDW_BRA_HEADER_RESP_SIZE 1 /* In bytes */
+#define SDW_BRA_FOOTER_RESP_SIZE 1 /* In bytes */
+#define SDW_BRA_PADDING_SZ 1 /* In bytes */
+#define SDW_BRA_HEADER_TOTAL_SZ 8 /* In bytes */
+
+#define SDW_BRA_BPT_PAYLOAD_TYPE 0x0
+#define SDW_BRA_BPT_PYLD_TY_MASK 0xFF3FFFFF
+#define SDW_BRA_BPT_PYLD_TY_SHIFT 22
+
+#define SDW_BRA_HDR_ACTIVE 0x3
+#define SDW_BRA_HDR_ACTIVE_SHIFT 6
+#define SDW_BRA_HDR_ACTIVE_MASK 0x3F
+
+#define SDW_BRA_HDR_SLV_ADDR_SHIFT 2
+#define SDW_BRA_HDR_SLV_ADDR_MASK 0xC3
+
+#define SDW_BRA_HDR_RD_WR_SHIFT 1
+#define SDW_BRA_HDR_RD_WR_MASK 0xFD
+
+#define SDW_BRA_HDR_MSB_BYTE_SET 1
+#define SDW_BRA_HDR_MSB_BYTE_UNSET 0
+#define SDW_BRA_HDR_MSB_BYTE_CHK 255
+#define SDW_BRA_HDR_MSB_BYTE_MASK 0xFE
+#define SDW_BRA_HDR_MSB_BYTE_SHIFT 0
+
+#define SDW_BRA_HDR_SLV_REG_OFF_SHIFT0 0
+#define SDW_BRA_HDR_SLV_REG_OFF_MASK0 0xFF
+#define SDW_BRA_HDR_SLV_REG_OFF_SHIFT8 8
+#define SDW_BRA_HDR_SLV_REG_OFF_MASK8 0xFF00
+#define SDW_BRA_HDR_SLV_REG_OFF_SHIFT16 16
+#define SDW_BRA_HDR_SLV_REG_OFF_MASK16 0xFF0000
+#define SDW_BRA_HDR_SLV_REG_OFF_SHIFT24 24
+#define SDW_BRA_HDR_SLV_REG_OFF_MASK24 0xFF000000
+
+#define SDW_BRA_HDR_RESP_ACK_SHIFT 3
+#define SDW_BRA_HDR_RESP_NRDY_SHIFT 5
+#define SDW_BRA_FTR_RESP_ACK_SHIFT 3
+#define SDW_BRA_FTR_RESP_RES_SHIFT 5
+#define SDW_BRA_HDR_RESP_ACK_MASK 0x3
+#define SDW_BRA_HDR_RESP_NRDY_MASK 0x1
+#define SDW_BRA_FTR_RESP_ACK_MASK 0x3
+#define SDW_BRA_FTR_RESP_RES_MASK 0x1
+
+#define SDW_BRA_TARGET_READY 0
+#define SDW_BRA_TARGET_NOT_READY 1
+
+#define SDW_BRA_ACK_NAK_IGNORED 0
+#define SDW_BRA_ACK_NAK_OK 1
+#define SDW_BRA_ACK_NAK_FAILED_ABORT 2
+#define SDW_BRA_ACK_NAK_RSVD_ABORT 3
+
+#define SDW_BRA_FTR_RESULT_GOOD 0
+#define SDW_BRA_FTR_RESULT_BAD 1
+
/* enum sdw_driver_type: There are different driver callbacks for slave and
* master. This is to differentiate between slave driver
* and master driver. Bus driver binds master driver to
@@ -420,6 +480,14 @@ struct sdw_slv_dp0_capabilities {
* the Port15 alias
* 0: Command_Ignored
* 1: Command_OK, Data is OR of all registers
+ * @scp_impl_def_intr_mask: Implementation defined interrupt mask for the
+ *			Slave control port.
+ * @clk_stp1_deprep_required: De-prepare is required after exiting clock
+ *			stop mode 1. Normally exit from clock stop mode 1 is
+ *			like a hard reset, so de-prepare shouldn't be required,
+ *			but some Slaves require de-prepare after exiting from
+ *			clock stop mode 1. Mark as true if the Slave requires
+ *			de-prepare after exiting from clock stop mode 1.
* @sdw_dp0_supported: DP0 is supported by Slave.
* @sdw_dp0_cap: Data Port 0 Capabilities of the Slave.
* @num_of_sdw_ports: Number of SoundWire Data ports present. The representation
@@ -436,6 +504,8 @@ struct sdw_slv_capabilities {
bool paging_supported;
bool bank_delay_support;
unsigned int port_15_read_behavior;
+ u8 scp_impl_def_intr_mask;
+ bool clk_stp1_deprep_required;
bool sdw_dp0_supported;
struct sdw_slv_dp0_capabilities *sdw_dp0_cap;
int num_of_sdw_ports;
@@ -494,6 +564,40 @@ struct sdw_bus_params {
int bank;
};
+/** struct sdw_portn_intr_stat: Implementation defined interrupt
+ * status for slave ports other than port 0
+ *
+ * @num: Port number for which the status is reported.
+ * @status: Status of the implementation defined interrupts.
+ */
+struct sdw_portn_intr_stat {
+ int num;
+ u8 status;
+};
+
+/** struct sdw_impl_def_intr_stat: Implementation defined interrupt
+ *			status for a Slave.
+ *
+ * @control_port_stat: Implementation defined interrupt status mask
+ *			for the control port. Mask bits are exactly the
+ *			same as defined in MIPI spec 1.0.
+ * @port0_stat: Implementation defined interrupt status mask
+ *			for port 0. Mask bits are exactly the same as
+ *			defined in MIPI spec 1.0.
+ * @num_ports: Number of ports in the slave other than port 0.
+ * @portn_stat: Implementation defined status for slave ports
+ *			other than port 0. Mask bits are exactly the same
+ *			as defined in MIPI spec 1.0. Array size is the
+ *			same as the number of ports in the Slave.
+ */
+struct sdw_impl_def_intr_stat {
+ u8 control_port_stat;
+ u8 port0_stat;
+ int num_ports;
+ struct sdw_portn_intr_stat *portn_stat;
+};
+
+
/**
* struct sdw_slave_driver: Manage SoundWire generic/Slave device driver
* @driver_type: To distinguish between master and slave driver. Set and
@@ -580,6 +684,13 @@ struct sdw_slave_driver {
int port, int ch_mask, int bank);
int (*handle_post_port_unprepare)(struct sdw_slv *swdev,
int port, int ch_mask, int bank);
+ int (*pre_clk_stop_prep)(struct sdw_slv *sdwdev,
+ enum sdw_clk_stop_mode mode, bool stop);
+ int (*post_clk_stop_prep)(struct sdw_slv *sdwdev,
+ enum sdw_clk_stop_mode mode, bool stop);
+ enum sdw_clk_stop_mode (*get_dyn_clk_stp_mod)(struct sdw_slv *swdev);
+ void (*update_slv_status)(struct sdw_slv *swdev,
+ enum sdw_slave_status *status);
const struct sdw_slv_id *id_table;
};
#define to_sdw_slave_driver(d) container_of(d, struct sdw_slave_driver, driver)
@@ -1298,6 +1409,15 @@ struct sdw_master *sdw_get_master(int nr);
*/
void sdw_put_master(struct sdw_master *mstr);
+/**
+ * sdw_slave_xfer_bra_block: Transfer the data block using the BTP/BRA
+ * protocol.
+ * @mstr: SoundWire Master handle.
+ * @block: Data block to be transferred.
+ */
+int sdw_slave_xfer_bra_block(struct sdw_master *mstr,
+ struct sdw_bra_block *block);
+
/**
* module_sdw_slave_driver() - Helper macro for registering a sdw Slave driver
@@ -1311,19 +1431,50 @@ void sdw_put_master(struct sdw_master *mstr);
module_driver(__sdw_slave_driver, sdw_slave_driver_register, \
sdw_slave_driver_unregister)
/**
- * sdw_prepare_for_clock_change: Prepare all the Slaves for clock stop or
- * clock start. Prepares Slaves based on what they support
- * simplified clock stop or normal clock stop based on
- * their capabilities registered to slave driver.
+ * sdw_master_prep_for_clk_stop: Prepare all the Slaves for clock stop.
+ *			Iterates through each of the enumerated Slaves and
+ *			prepares each Slave according to the clock stop mode
+ *			it supports. Uses the dynamic value from the Slave
+ *			callback if registered, else the static value from
+ *			the registered Slave capabilities.
+ * 1. Get clock stop mode for each Slave.
+ * 2. Call pre_prepare callback of each Slave if
+ * registered.
+ * 3. Prepare each Slave for clock stop
+ * 4. Broadcast the Read message to make sure
+ * all Slaves are prepared for clock stop.
+ * 5. Call post_prepare callback of each Slave if
+ * registered.
+ *
* @mstr: Master handle for which clock state has to be changed.
- * @start: Prepare for starting or stopping the clock
- * @clk_stop_mode: Bus used which clock mode, if bus finds all the Slaves
- * on the bus to be supported clock stop mode1 it prepares
- * all the Slaves for mode1 else it will prepare all the
- * Slaves for mode0.
+ *
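+ * Typical usage from a Master driver runtime suspend path:
+ *	ret = sdw_master_prep_for_clk_stop(mstr);
+ *	if (!ret)
+ *		ret = sdw_master_stop_clock(mstr);
+ *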
+ * Returns 0 on success, appropriate error code on failure.
+ */
+int sdw_master_prep_for_clk_stop(struct sdw_master *mstr);
+
+/**
+ * sdw_mstr_deprep_after_clk_start: De-prepare all the Slaves exiting
+ *			clock stop mode 0 after the clock resumes. The clock
+ *			is already resumed before this is called. Only Slaves
+ *			that were in clock stop mode 0 are de-prepared here;
+ *			Slaves that were in clock stop mode 1 are de-prepared
+ *			after they are enumerated back, not as part of the
+ *			Master resume.
+ *	1. Get the clock stop mode each Slave is exiting from.
+ * 2. Call pre_prepare callback of each Slave exiting from
+ * clock stop mode 0.
+ * 3. De-Prepare each Slave exiting from Clock Stop mode0
+ * 4. Broadcast the Read message to make sure
+ * all Slaves are de-prepared for clock stop.
+ * 5. Call post_prepare callback of each Slave exiting from
+ * clock stop mode0
+ *
+ *
+ * @mstr: Master handle
+ *
+ * Returns 0 on success, appropriate error code on failure.
*/
-int sdw_prepare_for_clock_change(struct sdw_master *mstr, bool start,
- enum sdw_clk_stop_mode *clck_stop_mode);
+int sdw_mstr_deprep_after_clk_start(struct sdw_master *mstr);
/**
* sdw_wait_for_slave_enumeration: Wait till all the slaves are enumerated.
@@ -1341,13 +1492,14 @@ int sdw_wait_for_slave_enumeration(struct sdw_master *mstr,
struct sdw_slv *slave);
/**
- * sdw_stop_clock: Stop the clock. This function broadcasts the SCP_CTRL
+ * sdw_master_stop_clock: Stop the clock. This function broadcasts the SCP_CTRL
* register with clock_stop_now bit set.
+ *
* @mstr: Master handle for which clock has to be stopped.
- * @clk_stop_mode: Bus used which clock mode.
+ *
+ * Returns 0 on success, appropriate error code on failure.
*/
-
-int sdw_stop_clock(struct sdw_master *mstr, enum sdw_clk_stop_mode mode);
+int sdw_master_stop_clock(struct sdw_master *mstr);
/* Return the adapter number for a specific adapter */
static inline int sdw_master_id(struct sdw_master *mstr)
@@ -1377,4 +1529,29 @@ static inline void sdw_slave_set_drvdata(struct sdw_slv *slv,
dev_set_drvdata(&slv->dev, data);
}
+static inline void *sdw_master_get_platdata(const struct sdw_master *mstr)
+{
+ return dev_get_platdata(&mstr->dev);
+}
+
+/**
+ * sdw_slave_get_bus_params: Get the current bus params. Some Slaves
+ *			require the bus params at probe time to program
+ *			their registers accordingly. This API provides the
+ *			current bus params.
+ *
+ * @sdw_slv: Slave handle
+ * @params: Bus params
+ */
+int sdw_slave_get_bus_params(struct sdw_slv *sdw_slv,
+ struct sdw_bus_params *params);
+/**
+ * sdw_bus_compute_crc8: SoundWire bus helper function to compute crc8.
+ * This API uses crc8 helper functions internally.
+ *
+ * @values: Data buffer.
+ * @num_bytes: Number of bytes.
+ */
+u8 sdw_bus_compute_crc8(u8 *values, u8 num_bytes);
+
#endif /* _LINUX_SDW_BUS_H */
diff --git a/sound/soc/codecs/svfpga-sdw.c b/sound/soc/codecs/svfpga-sdw.c
index 1fa06ef82ab2..dec412af6d5c 100644
--- a/sound/soc/codecs/svfpga-sdw.c
+++ b/sound/soc/codecs/svfpga-sdw.c
@@ -79,7 +79,7 @@ static int svfpga_register_sdw_capabilties(struct sdw_slv *sdw,
dpn_cap->num_audio_modes), GFP_KERNEL);
for (j = 0; j < dpn_cap->num_audio_modes; j++) {
prop = &dpn_cap->mode_properties[j];
- prop->max_frequency = 16000000;
+ prop->max_frequency = 19200000;
prop->min_frequency = 1000000;
prop->num_freq_configs = 0;
prop->freq_supported = NULL;
diff --git a/sound/soc/intel/boards/cnl_svfpga.c b/sound/soc/intel/boards/cnl_svfpga.c
index 312c0437ac15..23c1f6437cda 100644
--- a/sound/soc/intel/boards/cnl_svfpga.c
+++ b/sound/soc/intel/boards/cnl_svfpga.c
@@ -125,7 +125,7 @@ static int cnl_svfpga_codec_fixup(struct snd_soc_pcm_runtime *rtd,
pr_debug("Invoked %s for dailink %s\n", __func__, rtd->dai_link->name);
slot_width = 24;
rate->min = rate->max = 48000;
- channels->min = channels->max = 1;
+ channels->min = channels->max = 2;
snd_mask_none(hw_param_mask(params, SNDRV_PCM_HW_PARAM_FORMAT));
snd_mask_set(hw_param_mask(params, SNDRV_PCM_HW_PARAM_FORMAT),
SNDRV_PCM_FORMAT_S16_LE);
diff --git a/sound/soc/intel/skylake/cnl-sst.c b/sound/soc/intel/skylake/cnl-sst.c
index 911246d6f5d2..02e4ccc5200a 100644
--- a/sound/soc/intel/skylake/cnl-sst.c
+++ b/sound/soc/intel/skylake/cnl-sst.c
@@ -517,7 +517,7 @@ static int skl_register_sdw_masters(struct device *dev, struct skl_sst *dsp,
struct sdw_mstr_dpn_capabilities *dpn_cap;
struct sdw_master *master;
struct cnl_sdw_data *p_data;
- int ret = 0, i, j;
+ int ret = 0, i, j, k, wl = 0;
/* TODO: This number 4 should come from ACPI */
#if defined(CONFIG_SDW_MAXIM_SLAVE) || defined(CONFIG_SND_SOC_MXFPGA)
dsp->num_sdw_controllers = 3;
@@ -562,14 +562,20 @@ static int skl_register_sdw_masters(struct device *dev, struct skl_sst *dsp,
if (!m_cap->sdw_dpn_cap)
return -ENOMEM;
for (j = 0; j < m_cap->num_data_ports; j++) {
- dpn_cap = &m_cap->sdw_dpn_cap[i];
+ dpn_cap = &m_cap->sdw_dpn_cap[j];
/* Both Tx and Rx */
dpn_cap->port_direction = 0x3;
- dpn_cap->port_number = i;
+ dpn_cap->port_number = j;
dpn_cap->max_word_length = 32;
dpn_cap->min_word_length = 1;
- dpn_cap->num_word_length = 0;
- dpn_cap->word_length_buffer = NULL;
+ dpn_cap->num_word_length = 4;
+
+ dpn_cap->word_length_buffer =
+ kzalloc(((sizeof(unsigned int)) *
+ dpn_cap->num_word_length), GFP_KERNEL);
+ for (k = 0; k < dpn_cap->num_word_length; k++)
+ dpn_cap->word_length_buffer[k] = wl = wl + 8;
+ wl = 0;
dpn_cap->dpn_type = SDW_FULL_DP;
dpn_cap->min_ch_num = 1;
dpn_cap->max_ch_num = 8;
diff --git a/sound/soc/intel/skylake/skl-sdw-pcm.c b/sound/soc/intel/skylake/skl-sdw-pcm.c
index ea9a3e14434a..564602c0ee12 100644
--- a/sound/soc/intel/skylake/skl-sdw-pcm.c
+++ b/sound/soc/intel/skylake/skl-sdw-pcm.c
@@ -39,7 +39,8 @@
struct sdw_dma_data {
int stream_tag;
- struct cnl_sdw_port *port;
+ int nr_ports;
+ struct cnl_sdw_port **port;
struct sdw_master *mstr;
enum cnl_sdw_pdi_stream_type stream_type;
int stream_state;
@@ -133,11 +134,11 @@ int cnl_sdw_hw_params(struct snd_pcm_substream *substream,
enum sdw_data_direction direction;
struct sdw_stream_config stream_config;
struct sdw_port_config port_config;
- struct sdw_port_cfg port_cfg;
+ struct sdw_port_cfg *port_cfg;
int ret = 0;
struct skl_pipe_params p_params = {0};
struct skl_module_cfg *m_cfg;
- int upscale_factor = 16;
+ int i, upscale_factor = 16;
p_params.s_fmt = snd_pcm_format_width(params_format(params));
p_params.ch = params_channels(params);
@@ -155,13 +156,26 @@ int cnl_sdw_hw_params(struct snd_pcm_substream *substream,
direction = SDW_DATA_DIR_IN;
else
direction = SDW_DATA_DIR_OUT;
- /* Dynamically alloc port and PDI streams for this DAI */
- dma->port = cnl_sdw_alloc_port(dma->mstr, channels,
+ if (dma->stream_type == CNL_SDW_PDI_TYPE_PDM)
+ dma->nr_ports = channels;
+ else
+ dma->nr_ports = 1;
+
+	dma->port = kcalloc(dma->nr_ports, sizeof(*dma->port),
+ GFP_KERNEL);
+ if (!dma->port)
+ return -ENOMEM;
+
+ for (i = 0; i < dma->nr_ports; i++) {
+ /* Dynamically alloc port and PDI streams for this DAI */
+ dma->port[i] = cnl_sdw_alloc_port(dma->mstr, channels,
direction, dma->stream_type);
- if (!dma->port) {
- dev_err(dai->dev, "Unable to allocate port\n");
- return -EINVAL;
+ if (!dma->port[i]) {
+ dev_err(dai->dev, "Unable to allocate port\n");
+ return -EINVAL;
+ }
}
+
dma->stream_state = STREAM_STATE_ALLOC_STREAM;
m_cfg = skl_tplg_be_get_cpr_module(dai, substream->stream);
if (!m_cfg) {
@@ -170,10 +184,10 @@ int cnl_sdw_hw_params(struct snd_pcm_substream *substream,
}
if (!m_cfg->sdw_agg_enable)
- m_cfg->sdw_stream_num = dma->port->pdi_stream->sdw_pdi_num;
+ m_cfg->sdw_stream_num = dma->port[0]->pdi_stream->sdw_pdi_num;
else
m_cfg->sdw_agg.agg_data[dma->mstr_nr].alh_stream_num =
- dma->port->pdi_stream->sdw_pdi_num;
+ dma->port[0]->pdi_stream->sdw_pdi_num;
ret = skl_tplg_be_update_params(dai, &p_params);
if (ret)
return ret;
@@ -202,10 +216,23 @@ int cnl_sdw_hw_params(struct snd_pcm_substream *substream,
dev_err(dai->dev, "Unable to configure the stream\n");
return ret;
}
- port_config.num_ports = 1;
- port_config.port_cfg = &port_cfg;
- port_cfg.port_num = dma->port->port_num;
- port_cfg.ch_mask = ((1 << channels) - 1);
+ port_cfg = kcalloc(dma->nr_ports, sizeof(struct sdw_port_cfg),
+ GFP_KERNEL);
+ if (!port_cfg)
+ return -ENOMEM;
+
+ port_config.num_ports = dma->nr_ports;
+ port_config.port_cfg = port_cfg;
+
+ for (i = 0; i < dma->nr_ports; i++) {
+ port_cfg[i].port_num = dma->port[i]->port_num;
+
+ if (dma->stream_type == CNL_SDW_PDI_TYPE_PDM)
+ port_cfg[i].ch_mask = 0x1;
+ else
+ port_cfg[i].ch_mask = ((1 << channels) - 1);
+ }
+
ret = sdw_config_port(dma->mstr, NULL, &port_config, dma->stream_tag);
if (ret) {
dev_err(dai->dev, "Unable to configure port\n");
@@ -219,7 +246,7 @@ int cnl_sdw_hw_free(struct snd_pcm_substream *substream,
struct snd_soc_dai *dai)
{
struct sdw_dma_data *dma;
- int ret = 0;
+ int ret = 0, i;
dma = snd_soc_dai_get_dma_data(dai, substream);
@@ -228,16 +255,20 @@ int cnl_sdw_hw_free(struct snd_pcm_substream *substream,
if (ret)
dev_err(dai->dev, "Unable to release stream\n");
dma->stream_state = STREAM_STATE_RELEASE_STREAM;
- if (dma->port && dma->stream_state ==
+ for (i = 0; i < dma->nr_ports; i++) {
+ if (dma->port[i] && dma->stream_state ==
STREAM_STATE_RELEASE_STREAM) {
- /* Even if release fails, we continue,
- * while winding up we have
- * to continue till last one gets winded up
- */
- cnl_sdw_free_port(dma->mstr, dma->port->port_num);
- dma->stream_state = STREAM_STATE_FREE_STREAM;
- dma->port = NULL;
+ /* Even if release fails, we continue,
+ * while winding up we have
+ * to continue till last one gets winded up
+ */
+ cnl_sdw_free_port(dma->mstr,
+ dma->port[i]->port_num);
+ dma->port[i] = NULL;
+ }
}
+
+ dma->stream_state = STREAM_STATE_FREE_STREAM;
}
return 0;
}
--
https://clearlinux.org