misc/rpmsgblk.c:616:29: warning: implicit declaration of function ‘rpmsg_virtio_get_buffer_size’; did you mean ‘rpmsg_get_rx_buffer_size’? [-Wimplicit-function-declaration]
616 | if (MAX(msglen, rsplen) > rpmsg_virtio_get_buffer_size(priv->ept.rdev))
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
| rpmsg_get_rx_buffer_size
Signed-off-by: Yongrong Wang <wangyongrong@xiaomi.com>
1. Kconfig - Removed USART1 config option from STM32_STM32G47XX. Not necessary for adding LPUART functionality.
2. stm32_lowputc.c - Added an extra check for the STM32G4 because it is the only family member with LPUART functionality.
3. stm32_serial.c - Removed unneeded function (stm32_serial_get_lpuart). Fixed up_putc return bug. Added configuration for DMAMAP_LPUART RX and TX for STM32G4XXX only. The G4 is the only chip in this family with an LPUART and it uses a DMAMUX, unlike the others.
1. Removed 1WIRE LPUART references in Kconfig and stm32_uart.h. There is currently no LPUART support in stm32_1wire.c.
2. Removed references to LPUART under DMA_V2 ifdefs. STM32G4 uses DMA_V1, and none of the DMA_V2 chips (F20, F4) have LPUARTs. AFAIK the only chip in the stm32 folder that has LPUART peripherals is the STM32G4.
Removed unnecessary brackets and empty lines
Added lpuartnsh (LPUART NuttShell) config to the nucleo-g474re board configurations. nsh uses USART3 by default. lpuartnsh uses nsh as a template, changes the serial console to LPUART1, and adds the DMA configs to enable DMA for the LPUART.
Added support for using the lpuart prescaler register. Without prescaling the apbclock, 9600 baud is not supported on the G474RE. By utilizing the prescaler when necessary, we can support nearly any baud rate (300 baud to 30 Mbaud). lowputc defaults to a prescaler of 16 for the lpuart so standard baud rates (9600 to 115200) are supported early in the boot process. Later, in stm32_serial.c, the ideal prescaler and BRR values are determined.
Added ifdef statements for LPUART code sections not compatible with other chips.
Changed LPUART BRR calculation to use 64-bit integers.
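For reference, a minimal sketch of the kind of prescaler/BRR selection described above, assuming the LPUART formula baud = 256 * fck / BRR and the usual PRESC divider table; the helper name and table are hypothetical, not the actual stm32_serial.c code:

  #include <stdint.h>

  /* Pick the smallest prescaler that brings the 20-bit BRR value into its
   * valid range (>= 0x300).  The intermediate product needs 64-bit math
   * because 256 * apbclock overflows 32 bits.
   */

  static const uint16_t presc_table[] =
  {
    1, 2, 4, 6, 8, 10, 12, 16, 32, 64, 128, 256
  };

  static uint32_t lpuart_calc_brr(uint32_t apbclock, uint32_t baud,
                                  uint32_t *presc_out)
  {
    unsigned int i;

    for (i = 0; i < sizeof(presc_table) / sizeof(presc_table[0]); i++)
      {
        uint64_t brr = ((uint64_t)apbclock * 256) /
                       ((uint64_t)presc_table[i] * baud);

        if (brr >= 0x300 && brr <= 0xfffff)
          {
            *presc_out = i;       /* Index programmed into PRESC */
            return (uint32_t)brr; /* Value programmed into BRR */
          }
      }

    return 0; /* No valid prescaler/BRR combination found */
  }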
Feedback from nuttx pull request. Added brackets around single line if/else statements. Reordered lpuartnsh defconfig file.
Fix lpuart brr calculation after attempting to break the calculation into 2 lines.
Removed TAB
reason:
In the kernel, we are planning to remove all occurrences of up_cpu_pause as one of the steps to
simplify the implementation of critical sections. The goal is to enable spin_lock_irqsave to
encapsulate critical sections, thereby facilitating the replacement of critical sections (big lock)
with smaller spin_lock_irqsave (small lock).
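A minimal sketch of the intended replacement pattern (the driver state and lock name are hypothetical; the NuttX spinlock API itself is real):

  #include <nuttx/irq.h>
  #include <nuttx/spinlock.h>

  static spinlock_t g_dev_lock = SP_UNLOCKED;

  static void dev_update(void)
  {
    irqstate_t flags;

    /* Before: flags = enter_critical_section();  (big lock, all CPUs) */

    flags = spin_lock_irqsave(&g_dev_lock);  /* small lock, this data only */

    /* ... touch only the state protected by g_dev_lock ... */

    spin_unlock_irqrestore(&g_dev_lock, flags);

    /* Before: leave_critical_section(flags); */
  }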
Signed-off-by: hujun5 <hujun5@xiaomi.com>
1. Use the virtqueue_xxx_lock() API;
2. Add spinlocks for some virtio and vhost drivers that do not
use a spinlock to protect their virtqueues;
Signed-off-by: Yongrong Wang <wangyongrong@xiaomi.com>
The virtqueues should be protected by a spinlock because they are used
both in the worker and in the interrupt handler.
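A hedged sketch of the locking rule (the driver structure and functions below are hypothetical; the virtqueue and spinlock calls are the existing APIs):

  #include <nuttx/spinlock.h>
  #include <nuttx/virtio/virtio.h>

  struct my_virtio_priv_s
  {
    FAR struct virtqueue *vq;
    spinlock_t lock;
  };

  /* Worker context: enqueue a buffer and notify the device */

  static void my_send(FAR struct my_virtio_priv_s *priv,
                      FAR struct virtqueue_buf *vb, FAR void *cookie)
  {
    irqstate_t flags = spin_lock_irqsave(&priv->lock);
    virtqueue_add_buffer(priv->vq, vb, 1, 0, cookie);
    virtqueue_kick(priv->vq);
    spin_unlock_irqrestore(&priv->lock, flags);
  }

  /* Interrupt context: reclaim completed buffers under the same lock */

  static void my_done(FAR struct my_virtio_priv_s *priv)
  {
    FAR void *cookie;
    uint32_t len;
    irqstate_t flags = spin_lock_irqsave(&priv->lock);

    while ((cookie = virtqueue_get_buffer(priv->vq, &len, NULL)) != NULL)
      {
        /* ... handle the completed buffer ... */
      }

    spin_unlock_irqrestore(&priv->lock, flags);
  }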
Signed-off-by: wangyongrong <wangyongrong@xiaomi.com>
For the remoteproc transport layer, the virtio drivers cannot use memory
malloced from the default system heap to communicate with the virtio devices;
the buffers placed in a virtqueue should be in the shared memory region
(malloced with virtio_alloc_buf()).
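A brief sketch of the allocation rule, assuming virtio_alloc_buf() takes the device, the size and an alignment (the helper below is hypothetical):

  #include <nuttx/virtio/virtio.h>

  static FAR void *my_get_vq_buf(FAR struct virtio_device *vdev, size_t len)
  {
    /* Shared-memory allocation instead of kmm_malloc(len); the buffer can
     * then safely be placed into a virtqueue and later released with
     * virtio_free_buf(vdev, buf).  16 is an assumed alignment.
     */

    return virtio_alloc_buf(vdev, len, 16);
  }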
Signed-off-by: Bowen Wang <wangbowen6@xiaomi.com>
Set the config CONFIG_DRIVERS_VIRTIO_SERIAL_NAME to "ttyXX0;ttyXX1;..."
to customize the virtio serial device names.
For example:
If CONFIG_DRIVERS_VIRTIO_SERIAL_NAME="ttyBT;ttyTest0;ttyTest1",
virtio-serial will register three uart devices with names:
"/dev/ttyBT", "/dev/ttyTest0", "/dev/ttyTest1" to the VFS.
nsh> ls dev
/dev:
console
null
telnet
ttyBT
ttyS0
ttyTest0
ttyTest1
zero
Signed-off-by: Bowen Wang <wangbowen6@xiaomi.com>
In secure state, call virtio_register_mmio_device_secure() to
register the secure virtio mmio device;
In non-secure state, call virtio_register_mmio_device() to
register the non-secure virtio mmio device;
The board should ensure that the secure and non-secure mmio devices
are not mixed.
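A board-level sketch of the split (the register base, IRQ number, config guard and exact parameter lists are assumptions, not the real API contract):

  #include <nuttx/virtio/virtio-mmio.h>

  int board_virtio_init(void)
  {
  #ifdef CONFIG_ARCH_TRUSTZONE_SECURE
    /* Secure state: register the secure virtio mmio device */

    return virtio_register_mmio_device_secure((FAR void *)0x0a000000, 48);
  #else
    /* Non-secure state: register the normal virtio mmio device */

    return virtio_register_mmio_device((FAR void *)0x0a000000, 48);
  #endif
  }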
Signed-off-by: Bowen Wang <wangbowen6@xiaomi.com>
Summary:
Fixed the problem of releasing the bucket prematurely in multi-threaded flock scenarios.
Thread A: setlk
Thread B: setlk_wait
Thread A releases the lock but fails to check nwaiter, which causes the bucket to be released prematurely.
When thread B is posted, it crashes due to a heap use-after-free.
https://github.com/apache/nuttx/issues/13821
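An illustrative sketch of the missing check (the structure and helper below are hypothetical, not the exact fs_lock.c code):

  #include <nuttx/list.h>
  #include <stdbool.h>
  #include <stddef.h>

  struct flock_bucket_s
  {
    struct list_node list;   /* Locks currently held */
    size_t nwaiter;          /* Threads still blocked in setlk_wait */
  };

  /* The bucket may only be freed when no lock is held AND nobody is still
   * waiting on it; skipping the nwaiter check is what led to the
   * use-after-free.
   */

  static bool flock_bucket_can_free(FAR struct flock_bucket_s *bucket)
  {
    return list_is_empty(&bucket->list) && bucket->nwaiter == 0;
  }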
Signed-off-by: chenrun1 <chenrun1@xiaomi.com>
The upper layer expects -ENOTTY to be returned if the ioctl command is not
handled by the file system specific implementation.
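A small sketch of the expected convention in a file system ioctl handler (myfs_ioctl is a hypothetical example):

  #include <errno.h>
  #include <nuttx/fs/fs.h>
  #include <nuttx/fs/ioctl.h>

  static int myfs_ioctl(FAR struct file *filep, int cmd, unsigned long arg)
  {
    switch (cmd)
      {
        case FIOC_FILEPATH:
          /* ... a command this file system actually implements ... */

          return 0;

        default:
          /* Unknown command: report -ENOTTY so the upper layer can fall
           * back to its generic handling.
           */

          return -ENOTTY;
      }
  }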
Signed-off-by: Michal Lenc <michallenc@seznam.cz>
1. Use '__attribute__((constructor))' to mark the initialize function.
2. Use '__attribute__((destructor))' to mark the uninitialize function.
3. Compile modules with -fvisibility=hidden and use '__attribute__((visibility("default")))'
to mark the symbols that need to be exported, so module_initialize is no longer needed to initialize the exported symbols.
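A minimal sketch of the resulting module pattern (function names are illustrative):

  #include <stdio.h>

  __attribute__((constructor))
  static void my_module_init(void)
  {
    /* Runs automatically when the module is loaded; replaces the
     * module_initialize() entry point.
     */

    printf("module loaded\n");
  }

  __attribute__((destructor))
  static void my_module_uninit(void)
  {
    printf("module unloaded\n");
  }

  /* Built with -fvisibility=hidden, only symbols marked "default" are
   * exported from the module.
   */

  __attribute__((visibility("default")))
  int my_module_entry(int argc, char *argv[])
  {
    return 0;
  }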
Signed-off-by: anjiahao <anjiahao@xiaomi.com>
reason:
In the kernel, we are planning to remove all occurrences of up_cpu_pause as one of the steps to
simplify the implementation of critical sections. The goal is to enable spin_lock_irqsave to
encapsulate critical sections, thereby facilitating the replacement of critical sections (big lock)
with smaller spin_lock_irqsave (small lock).
Configuring NuttX and compile:
$ ./tools/configure.sh -l qemu-armv8a:nsh_smp
$ make
Running with qemu
$ qemu-system-aarch64 -cpu cortex-a53 -smp 4 -nographic \
-machine virt,virtualization=on,gic-version=3 \
-net none -chardev stdio,id=con,mux=on -serial chardev:con \
-mon chardev=con,mode=readline -kernel ./nuttx
Signed-off-by: hujun5 <hujun5@xiaomi.com>
reason:
In the kernel, we are planning to remove all occurrences of up_cpu_pause as one of the steps to
simplify the implementation of critical sections. The goal is to enable spin_lock_irqsave to
encapsulate critical sections, thereby facilitating the replacement of critical sections (big lock)
with smaller spin_lock_irqsave (small lock).
Configuring NuttX and compile:
$ ./tools/configure.sh -l qemu-armv8a:nsh_smp
$ make
Running with qemu
$ qemu-system-aarch64 -cpu cortex-a53 -smp 4 -nographic \
-machine virt,virtualization=on,gic-version=3 \
-net none -chardev stdio,id=con,mux=on -serial chardev:con \
-mon chardev=con,mode=readline -kernel ./nuttx
Signed-off-by: hujun5 <hujun5@xiaomi.com>
reason:
In the kernel, we are planning to remove all occurrences of up_cpu_pause as one of the steps to
simplify the implementation of critical sections. The goal is to enable spin_lock_irqsave to
encapsulate critical sections, thereby facilitating the replacement of critical sections (big lock)
with smaller spin_lock_irqsave (small lock).
Configuring NuttX and compile:
$ ./tools/configure.sh -l qemu-armv8a:nsh_smp
$ make
Running with qemu
$ qemu-system-aarch64 -cpu cortex-a53 -smp 4 -nographic \
-machine virt,virtualization=on,gic-version=3 \
-net none -chardev stdio,id=con,mux=on -serial chardev:con \
-mon chardev=con,mode=readline -kernel ./nuttx
Signed-off-by: hujun5 <hujun5@xiaomi.com>
reason:
1. To improve efficiency, we mimic Linux's behavior where disabling preemption only applies to the current CPU and does not affect other CPUs (see the sketch after this list).
2. In the future we will implement "spinlock+sched_lock" and use it extensively. Under such circumstances, if preemption were still globally disabled, it would seriously impact scheduling efficiency.
3. We have removed g_cpu_lockset and use irqcount instead, in order to eliminate the dependency of sched_lock on critical sections in the future, simplify the logic, and further enhance the performance of sched_lock.
4. We set lockcount to 1 in order to lock scheduling on all CPUs during startup, without the need to provide additional functions to disable scheduling on other CPUs.
5. CPUs 1..n must wait for CPU 0 to enter the idle state before enabling scheduling; this prevents them from competing with CPU 0 for the memory manager mutex, which could cause the CPU 0 idle task to enter a wait state and trigger an assertion.
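A conceptual sketch of point 1 only (not the real sched_lock() implementation):

  #include <nuttx/sched.h>

  int sched_lock_sketch(void)
  {
    FAR struct tcb_s *rtcb = this_task();

    /* Only the task running on this CPU is affected; no IPI and no global
     * g_cpu_lockset bookkeeping, so other CPUs keep scheduling normally.
     */

    rtcb->lockcount++;
    return OK;
  }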
size nuttx
before:
text data bss dec hex filename
265396 51057 63646 380099 5ccc3 nuttx
after:
text data bss dec hex filename
265184 51057 63642 379883 5cbeb nuttx
size -216
Configuring NuttX and compile:
$ ./tools/configure.sh -l qemu-armv8a:nsh_smp
$ make
Running with qemu
$ qemu-system-aarch64 -cpu cortex-a53 -smp 4 -nographic \
-machine virt,virtualization=on,gic-version=3 \
-net none -chardev stdio,id=con,mux=on -serial chardev:con \
-mon chardev=con,mode=readline -kernel ./nuttx
Signed-off-by: hujun5 <hujun5@xiaomi.com>
Unwinding the kernel stack did not work previously due to the way the task
startup logic works via nxtask_start and the up_task_start() system call.
After modifying the logic behind those, the kernel stack is in fact fully
unwound when return_from_exception is executed, so calling the original
hack "riscv_current_ksp" is not necessary anymore.
This removes 2 reserved system calls and replaces them with an ASM snippet.
The result removes an unnecessary ecall from the process startup logic and
ensures the stacks are FULLY unwound when the user process starts.
The logic is ported from ARM64.